
Re: Paper and slides on indefiniteness of CH

Dear Sy,

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that

(1) x is in M and M \vDash ``V = L[t] \text{ for a real } t''

(2) M satisfies (*)(\eta) (that is, (*) but allowing \eta as a parameter),

then M is #-generated.

So, you still have not really addressed the ctm issue at all.

Here is the critical question:

Key Question: Can there exist a ctm M such that M satisfies (*) in the hyper-universe of L(M)[G], where G is L(M)-generic for collapsing all sets to be countable?

Or even:

Lesser Key Question: Suppose that M is a ctm which satisfies (*). Must M be #-generated?

Until one can show the answer is “yes” for the Key Question, there has been no genuine reduction of this version of \textsf{IMH}^\# to V-logic.

If the answer to the Lesser Key Question is “yes” then there is no possible reduction to V-logic.

The theorem stated above strongly suggests the answer to the Lesser Key Question is actually “yes” if one restricts to models satisfying “V = L[Y]\text{ for some set }Y”.

The point of course is that if M is a ctm which satisfies “V = L[Y]\text{ for some set }Y” and M witnesses (*), then M[g] witnesses (*), where g is an M-generic collapse of Y to \omega.

The simple consistency proofs of Original-\textsf{IMH}^\# all easily give models which satisfy “V = L[Y]\text{ for some set }Y”.

The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit cardinal \kappa of cofinality \omega can be large in HOD, and I started asking about weak square. It holds at \kappa in our model.

Assuming the Axiom \textsf{I}0^\# is consistent, one gets a model of ZFC in which, for some singular strong limit \kappa of uncountable cofinality, weak square fails at \kappa and \kappa^+ is not correctly computed by HOD.

So one cannot focus on cofinality \omega (unless Axiom \textsf{I}0^\# is inconsistent).

So born of this thread is the correct version of the problem:

Problem: Suppose \gamma is a singular strong limit cardinal of uncountable cofinality such that \gamma^+ is not correctly computed by HOD. Must weak square hold at \gamma?
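For reference, the principle meant by “weak square holds at \gamma” is Jensen's \square^*_\gamma; the thread does not spell it out, so here is the standard formulation:

```latex
% Jensen's weak square at a cardinal $\gamma$ (standard definition, added for reference):
% $\square^*_\gamma$ asserts that there exists a sequence
\[
\langle \mathcal{C}_\alpha : \alpha < \gamma^+ \text{ a limit ordinal} \rangle
\]
% such that for every limit $\alpha < \gamma^+$:
\begin{itemize}
  \item $1 \le |\mathcal{C}_\alpha| \le \gamma$;
  \item every $C \in \mathcal{C}_\alpha$ is a club subset of $\alpha$ with
        $\operatorname{ot}(C) \le \gamma$;
  \item (coherence) if $C \in \mathcal{C}_\alpha$ and $\beta$ is a limit point of $C$,
        then $C \cap \beta \in \mathcal{C}_\beta$.
\end{itemize}
```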

Aside: \textsf{I}0^\# asserts there is an elementary embedding j:L(V_{\lambda+1}^\#) \to L(V_{\lambda+1}^\#) with critical point below \lambda.

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics.

Of course you do not see any virtue in “back to square one” conjectures. Fine, we have different views (yet again).

Why should your programme be required to make “make or break” conjectures, and what is so attractive about that?

I find it quite interesting if philosophical considerations lead to specific “make or break” conjectures, especially if there is no obvious purely mathematical basis on which to make the conjecture. The HOD Conjecture is a good example. There is no purely mathematical reason (that I know of) to make that conjecture (that the HOD Hypothesis is provable from, say, ZFC + extendible). It is a prediction from the Ultimate L scenario (just as is the provability of the \Omega Conjecture).

Of course there is another reason for identifying such conjectures. They provide test questions for future progress. If one can refute from large cardinals that the \Omega Conjecture holds then one refutes the Ultimate L Conjecture and moreover shows that there is a failure of inner model theory based on sequences of extenders.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds?

Not necessarily. But I certainly would no longer declare it evident that V is not L. The question of V versus L would, for me, reopen.

Your V = Ultimate-L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate-L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.

If the Ultimate L Conjecture is true (provable in ZFC + extendible) then V = Ultimate L becomes a serious possibility which (to me anyway) cannot just be dismissed as is now the possibility that V = L.

For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact. (These are the “other tests which will have to be passed”).

If that does not happen or if the genuine insights come from outer models of V = Ultimate L, or even from something entirely unrelated to Ultimate L, then for me the case for V = Ultimate L will weaken, possibly significantly.

On the other hand, if in the setting of V = Ultimate L, a whole new hierarchy of large cardinals is revealed, otherwise invisible, then things get interesting. Here it might be the Axiom \textsf{I}0, but in the context of V = Ultimate L, which could be key.

You will respond that this is sheer speculation without foundation or solid evidence. It is sheer speculation. We shall see about the evidence.

Maybe it is time to try once again to simply agree that we disagree and wait for future mathematical developments before continuing this debate.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

You were asked to say more about your “conception of V in which PD holds”. What you wrote said little about that and instead went into the stratosphere, with, in my view, an unwarranted fantasy based on very thin evidence. I too could present such a fantasy about a version of my Stable Core with GCH and even fine structure, which is rigid and witnesses large cardinals, over which weak covering holds in the presence of large cardinals and over which V is generic. I could go on about the internal structure of this Refined Stable Core, blah, blah, blah. The point is that I would never advertise such a fantasy, as I can’t back it up with enough solid evidence.

A key question is simply ignored when you write:

Continuing, one is led to degrees of supercompactness (the details here are now based on quite a number of conjectures, but let’s ignore that).

Frankly, this is quite ridiculous. The iterability problem for developing an inner model with a supercompact has been open for many years. It is the main open problem of inner model theory. So the real question for the first line of your story is: what evidence and what understanding do we have of this problem? I already tried to make the point that in inner model theory there is a history of things not going as predicted, so I do not find it credible to build a picture based on the blanket assumption that this will go the way we now expect it to go. How’s this for a conjecture: It is consistent to have a supercompact but none in an inner model of HOD. Do you have more evidence against that conjecture than for its negation?

This is not a question of a possible inconsistency in large cardinal theory! It is a question of whether our understanding of inner model theory at the level of Woodin cardinals has anything to do with inner model theory for a supercompact. Can you or John tell me what evidence you have that the iterability problem will be solved positively to enable the construction of an inner model for a supercompact in the foreseeable future?

Now if we don’t have a solution to this problem then your comment

One is quickly led to the theorem that the existence of the generalization of L to the level of exactly one supercompact cardinal is where the expansion driven by the horizontal maximality principles stops.

is vacuous, as there might not be a “generalization of L to the level of exactly one supercompact cardinal” in the first place! It is hard to appreciate an implication when the hypothesis is so debatable.

On the positive side, I do agree that your work on L(P(\lambda)) as an analogue of L(\mathbb R) is impressive and very suggestive, and should be part of the final picture. But speculation about Ultimate L seems premature to me, once again stacked on top of a pile of unproved conjectures.

What are we to make of all this? You give the feeling that you are appearing at the finish line without running the race. We would all love to see “the final picture” but the rest of us understand that we have to be realistic about what has been established and what is just conjecture. There is solid stuff in your picture: core model theory, universally Baire sets and L(P(\lambda)). But the gaps in the rest are so huge that it is not helpful to be presented with such a picture. It gives the false impression that you have figured everything out, while in fact there is a lot not yet understood even near the beginning of your story. Many of us pay close attention to what you say as you have done such great work; could you please stop rushing ahead and stay closer to the frontiers of what we actually know?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

Ok, I will add some comments to my response. Below is simply how I currently see things. It is obvious based on this account that an inconsistency in PD would render this picture completely vacuous and so I for one would have to start all over in trying to understand V. But that road would be much harder given the lessons of the collapse caused by the inconsistency of PD. How could one (i.e. me) be at all convinced that the intuitions behind ZFC are not similarly flawed?

I want to emphasize that what I describe below is just my admittedly very optimistic view. I am not advocating a program of discovery or anything like that. I am also not arguing for this view here. I am just describing how I see things now. (But that noted, there are rather specific conjectures which if proved, I think would argue strongly for this view. And if these conjectures are false then I will have to alter my view.)

This view is based on a substantial number of key theorems which have been proved (and not just by me) over the last several decades.

Starting with the conception of V as given by the ZFC axioms, there is a natural expansion of the conception along the following lines.

The Jensen Covering Lemma argues for 0^\# and introduces a horizontal maximality notion. This is the first line and gives sharps for all sets. This in turn activates a second line, determinacy principles.

The core model induction now gets under way and one is quickly led to PD and \text{AD}^{L(\mathbb R)}, and reaches the stage where one has iterable inner models with a proper class of Woodin cardinals. This is all driven by the horizontal maximality principle (roughly, if there is no iterable inner model with a proper class of Woodin cardinals then there is a generalization of L relative to which V is close at all large enough cardinals and with no sharp etc.).

Adding the hypothesis that there is a proper class of Woodin cardinals, one can now directly define the maximum extension of the projective sets and develop the basic theory of these sets. This is the collection of universally Baire sets (which has an elementary definition). The important point here is that unlike the definitions of the projective sets, this collection is not defined from below. (There is a much more technical definition one can give without assuming the existence of a proper class of Woodin cardinals).

Continuing, one is led to degrees of supercompactness (the details here are now based on quite a number of conjectures, but let’s ignore that).

Also a third line is activated now. This is the generalization of determinacy from L(\mathbb R) = L(P(\omega)) to the level of L(P(\lambda)) for suitable \lambda > \omega. These \lambda are where the Axiom I0 holds. This axiom is among the strongest large cardinal axioms we currently know of which are relatively consistent with the Axiom of Choice. There are many examples of rather remarkable parallels between L(\mathbb R) in the context that AD holds in L(\mathbb R), and L(P(\lambda)) in the context that the Axiom I0 holds at \lambda.
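For reference, the standard formulation of the Axiom I0 mentioned here (the thread itself does not state it at this point; the identification of L(P(\lambda)) with L(V_{\lambda+1}) is the usual coding, since |V_\lambda| = \lambda for the relevant \lambda):

```latex
% Axiom I0 at $\lambda$ (standard formulation, added for reference):
\[
\text{I0 holds at } \lambda \;\iff\; \text{there is a non-trivial elementary embedding }
j : L(V_{\lambda+1}) \to L(V_{\lambda+1}) \text{ with } \operatorname{crit}(j) < \lambda.
\]
% Here $\lambda$ is necessarily a limit cardinal of cofinality $\omega$ with
% $|V_\lambda| = \lambda$, so subsets of $V_\lambda$ can be coded by subsets of
% $\lambda$ and $L(P(\lambda)) = L(V_{\lambda+1})$.
```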

Now things start accelerating. One is quickly led to the theorem that the existence of the generalization of L to the level of exactly one supercompact cardinal is where the expansion driven by the horizontal maximality principles stops. This inner model cannot have a sharp and is provably close to V (if it exists in the form of a weak extender model for supercompactness). So the line (based on horizontal maximality) necessarily stops (if this inner model exists) and one is left with vertical maximality and the third line (based on I0-like axioms).

One is also led by consideration of the universally Baire sets to the formulation of the axiom that V = Ultimate L and the Ultimate L Conjecture. The latter conjecture if true confirms that the line driven by horizontal maximality principles ceases. Let’s assume the Ultimate L Conjecture is true.

Now comes (really extreme) sheer speculation. The vertical expansion continues, driven by the consequences for Ultimate L of the existence of large cardinals within Ultimate L.

By the universality theorem, there must exist \lambda where the Axiom I0 holds in Ultimate L. Consider for example the least such cardinal in Ultimate L. The corresponding L(P(\lambda)) must have a canonical theory, where of course I am referring to the L(P(\lambda)) of Ultimate L.

It has been known for quite some time that if the Axiom I0 holds at a given \lambda then the detailed structure theory of L(P(\lambda)) = L(V_{\lambda+1}) above \lambda can be severely affected by forcing with partial orders of size less than \lambda. But these extensions must preserve that Axiom I0 holds at \lambda. So there are natural features of L(P(\lambda)) above \lambda which are quite fragile relative to forcing.

Thus unlike the case of L(\mathbb R) where AD gives “complete information”, for L(P(\lambda)) one seems to need two things: First the relevant generalization of AD which arguably is provided by Axiom I0 and second, the correct theory of V_\lambda. The speculation is that V = Ultimate L provides the latter.

The key question will be: Does the global structure theory of L(P(\lambda)), as given in the context of the Axiom I0 and V = Ultimate L, imply that V = Ultimate L must hold in V_\lambda?

If this convergence happens at \lambda and the structure theory is at all “natural” then at least for me this would absolutely confirm that V = Ultimate L.

Aside: This is not an entirely unreasonable possibility. There are quite a number of theorems now which show that \text{AD}^{L(\mathbb R)} follows from its most basic consequences.

For example, it follows from just the statements that all sets are Lebesgue measurable and have the property of Baire, together with uniformization (by functions in L(\mathbb R)) for the sets A \subset \mathbb R \times \mathbb R which are \Sigma_1-definable in L(\mathbb R) from the parameter \mathbb R. This is essentially the maximum amount of uniformization which can hold in L(\mathbb R) without yielding the Axiom of Choice.

Thus for L(\mathbb R), the entire global structure theory, i.e. that given by \text{AD}^{L(\mathbb R)}, is implied by a small number of its fundamental consequences.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

I think it is a bit presumptuous of me to answer these questions in an email thread with such a large cc. I will however answer (1) since that is quite closely related to issues already discussed.

The short answer to (1) is actually no. Let me explain. If the Ultimate L Conjecture is true then I would certainly move supercompact to the safe zone on equal footing with PD. I do not currently place supercompact there.

What about huge cardinals? Ultimate L will be constructed as the inner model for exactly one supercompact cardinal. But unlike all other inner model constructions, this inner model will be universal for all large cardinal axioms we know of (large cardinals such as huge will be there if they occur in the parent universe within which Ultimate L is constructed but they play no role in the actual construction).

Therefore if the Ultimate L Conjecture is true (and the proof is by the current scenarios, of course), inner model theory can no longer provide direct evidence for consistency, since the large cardinals past supercompact play no role in the construction of Ultimate L. This is just as for L: the large cardinals compatible with L play no role in the construction of L.

So if the Ultimate L Conjecture is true then there is a serious challenge: how does one justify the large cardinals beyond supercompact? My guess is that their justification will involve how they affect the structure of Ultimate L. For example, consider the following conjecture.

Conjecture: Suppose V = Ultimate L. Suppose \lambda is an uncountable cardinal such that the Axiom of Choice fails in L(P(\lambda)). Then there is a non-trivial elementary embedding j:V_{\lambda+1} \to V_{\lambda+1}.

If conjectures such as this are true then it seems very likely that for large cardinals beyond supercompact, their true natures are really only revealed within the setting of V = Ultimate L, and it is only in that unveiling that one is able to make the case for consistency. In this scenario, V = Ultimate L is not a limiting axiom at all; it is the axiom by which the true nature of the large cardinal hierarchy is finally uncovered.

But it is important to keep in mind, the Ultimate L Conjecture could be false. Indeed one could reasonably conjecture that it is the conception of a weak extender model which is not correct once one reaches the level of supercompact even though it seems to be correct below that level.

However at present and for me, the Ultimate L Conjecture is the keystone in a very tempting and compelling picture.

Regards,
Hugh