
Re: Paper and slides on indefiniteness of CH

Dear all,

Here is some background for those who are interested. My apologies to those who are not, but delete is one keystroke away.

Jensen’s Covering Theorem states that V is either very close to L or very far from L. This opens the door to the consideration of 0^\# and the current generation of large cardinal axioms.

Details: “close to L” means that L correctly computes the successors of all cardinals which are singular in V. “Far from L” means that every uncountable cardinal of V is inaccessible in L.
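For background, here is the standard covering statement behind the “close” half (it is not quoted above, but may help readers): if 0^\# does not exist, then for every uncountable set X of ordinals there is a set Y \in L such that X \subseteq Y and |Y| = |X|. The “close” half follows, since this covering property implies that every cardinal which is singular in V is singular in L and has its successor correctly computed by L. Conversely, if 0^\# exists then every uncountable cardinal of V is a Silver indiscernible for L, and hence inaccessible (indeed weakly compact and much more) in L.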

The HOD Dichotomy Theorem [proved here] is arguably an abstract generalization of Jensen’s Covering Theorem. It states that if there is an extendible cardinal, then V is either very close to \text{HOD} or very far from \text{HOD}.

Details: Suppose \delta is an extendible cardinal. “Very close to \text{HOD}” means that the successor of every singular cardinal above \delta is correctly computed by \text{HOD}. “Very far from \text{HOD}” means that every regular cardinal above \delta is a measurable cardinal in \text{HOD}, and so \text{HOD} computes no successor cardinals correctly above \delta.
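Schematically, and just restating the details above in displayed form: for \delta extendible, exactly one of the following holds.

  1. (Close) Every singular cardinal \gamma > \delta is singular in \text{HOD} and (\gamma^+)^{\text{HOD}} = \gamma^+.
  2. (Far) Every regular cardinal \gamma > \delta is measurable in \text{HOD}, and so \text{HOD} computes no successor cardinals correctly above \delta.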

Aside: The restriction to cardinals above \delta is necessary by forcing considerations, and the close-versus-far dichotomy is much more extreme than what is indicated above about successor cardinals alone.

The pressing question then is: Is the \text{HOD} Dichotomy Theorem really a “dichotomy” theorem?

The \text{HOD} Conjecture is the conjecture that it is not; i.e., if there is an extendible cardinal, then \text{HOD} is necessarily close to V.

Given set-theoretic history, arguably the more plausible conjecture is that the \text{HOD} Dichotomy Theorem is a genuine dichotomy theorem: just as 0^\# initiated a new generation of large cardinal axioms (axioms which imply that V is not L), there would be yet another generation of large cardinal axioms corresponding to the failure of the \text{HOD} Conjecture.

But now there is tension with the Inner Model Program, which predicts that the \text{HOD} Conjecture is true (for completely unexpected reasons).

My question to Sy was, implicitly: why does he not, based on maximality, reject the \text{HOD} Conjecture? After all, disregarding the evidence from the Inner Model Program, the most natural speculation is that the \text{HOD} Conjecture is false.

The point here is that the analogous conjecture for L is false (since 0^\# exists).

So one could reasonably take the view that the \text{HOD} Conjecture is as misguided now as the conjecture that L is close to V would have been in light of Jensen’s Covering Theorem. (Let’s revise history and pretend that Jensen’s Covering Theorem was proved before measurable cardinals etc. had been defined and analyzed.)

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

pre-PS: Thanks, Sol, for correcting my spelling. My problem with German has plagued me my entire academic life.

Dear Sy,

I think we are getting to the point where we are simply talking past one another. Also, the nesting of messages is making this thread somewhat difficult to follow (perhaps a line-break issue or a platform issue).

You have made an important point for me: a rich structure theory together with Gödelian ‘success’ is insufficient to convince number theorists that ERH (the Extended Riemann Hypothesis) is true, and by analogy these criteria should not suffice to convince set theorists that PD is true.

Unless there is something fundamentally different about LC (large cardinals), which there is.

Many (well, at least two) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only ‘context’ needed for Con PD is the empirical calibration provided by a strict ‘hierarchy’ of consistency strengths. That makes no assumptions about PD.

Such a position seems rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do), and why is the hierarchy evidence for anything?

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say that RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I would actually conjecture that RH is false. (Recall that Kunen’s theorem rules out Reinhardt cardinals in the presence of AC; as I say below, I expect they are inconsistent even without AC.) But by your criteria of evidence you would seem compelled to argue that RH is true.

Next point, within the discussion of strong rank maximality. I wrote:

Question 1: How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

and you responded:

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of ‘strong maximality’. Then any two sentences in T are compatible with each other.

I realized after sending that message that I should have elaborated on what I had in mind regarding the incompatibility issue, so I will do so here. I imagine many of the followers of this thread (if any are left) will want to skip this.

Incompatibility

Let me explain the sort of incompatibility I am concerned with.

Suppose M is strongly rank maximal. One might have a \Pi_2-sentence \phi_1 certified by a rank-preserving extension of M with X-cardinals, and a \Pi_2-sentence \phi_2 certified by a rank-preserving extension with Y-cardinals.

What if X-cardinals and Y-cardinals are mutually incompatible, or worse, what if the existence of X-cardinals implies that \phi_2 cannot hold (or vice versa)? Then how could \phi_1 \wedge \phi_2 be certified? If the certifiable \Pi_2-sentences are not closed under finite conjunction, then there is a problem.
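In symbols, the concern is just this: writing C for the collection of \Pi_2-sentences certified over M, one wants closure under conjunction,

\phi_1 \in C \text{ and } \phi_2 \in C \implies \phi_1 \wedge \phi_2 \in C,

but the conjunction needs a single certifying extension, while the extensions witnessing \phi_1 and \phi_2 separately may carry incompatible large cardinals.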

Let N_X be a rank-preserving extension of M with a proper class of X-cardinals which certifies \phi_1. Let’s call N_X a good witness if \phi_1 holds in all the set-generic extensions of N_X, and all the \Pi_2-sentences which hold in all the set-generic extensions of N_X are deemed certified by N_X (this is arguably reasonable given the persistence of large cardinals under small forcing).

Similarly, let’s suppose that N_Y is a rank-preserving extension of M with a proper class of Y-cardinals which certifies \phi_2 and is a good witness.

Assuming the \Omega Conjecture is provable (and recall that our base theory is ZFC + a proper class of Woodin cardinals), one of the following must hold:

  1. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_X (and so N_X certifies \phi_1\wedge \phi_2).
  2. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_Y (and so N_Y certifies \phi_1\wedge \phi_2).

To me this is a remarkable fact. I see no way to prove it at this level of generality without the \Omega Conjecture.
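For orientation, here is the standard background (not quoted from the discussion itself): in one standard formulation, the \Omega Conjecture asserts that \Omega-validity coincides with \Omega-provability, i.e. for every \Pi_2-sentence \phi,

\emptyset \models_\Omega \phi \iff \emptyset \vdash_\Omega \phi.

Roughly speaking, a \Pi_2-sentence is \Omega-valid exactly when it holds in all set-generic extensions, and \Omega-provability is itself invariant under set forcing; this is why the \Omega Conjecture bears on certification across set-generic extensions.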

You wrote:

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

I completely disagree. Having more models obscures truth; that is my whole point.

Moving on, I want to return to the inner model issue and illustrate an even deeper sense (beyond correctness issues) in which the Inner Model Program is not just about inner models.

Consider the following variation of the inner model program. This is simply the definable version of your “internal consistency” question, which you have explored quite a bit.

Question: Suppose that there exists a proper class of X-cardinals. Must there exist an inner model N with a proper class of X-cardinals such that N \subseteq \text{HOD}?

(Of course, if one is allowed to use more than the existence of a proper class of X-cardinals, then there is a trivial solution; so here it is important that one is only allowed to use the given large cardinals.)

For “small” large cardinals, even at the level of Woodin cardinals, I know of no positive solution that does not use fine-structure theory.

Define a cardinal \delta to be n-hyper-extendible if \delta is extendible relative to the \Sigma_n-truth predicate.
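To be explicit, under one natural reading of this definition (the precise formulation is not spelled out here, so take this as a gloss): writing T_n for the \Sigma_n-truth predicate of V, the cardinal \delta is n-hyper-extendible if for every \alpha > \delta there is an elementary embedding j : (V_\alpha, \in, T_n \cap V_\alpha) \to (V_\beta, \in, T_n \cap V_\beta) with critical point \delta and j(\delta) > \alpha.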

Theorem: Suppose that the HOD Conjecture is true, and that for each n there is an n-hyper-extendible cardinal. Then for each n there is an n-hyper-extendible cardinal in HOD. (This is a scheme, of course.)

The HOD Conjecture could have an elementary proof (given an extendible cardinal), but such a proof would not solve the inner model problem for hyper-extendible cardinals, or even shed any light on it.

Finally you wrote:

The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.

I agree on Reinhardt cardinals, but I obviously disagree on the route to new hierarchies. Certainly HP has yet to show any promise of reaching new levels of consistency strength, since even reaching the level of “ZFC + infinitely many Woodin cardinals” looks like a serious challenge for HP. It would be interesting to see even a conjecture along these lines.

You also wrote:

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to argue even for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this, then how is it ever credibly going to make progress on the issue of truth in set theory?

However one conceives of truth in set theory, one must have answers to:

  1. Is PD true?
  2. Is PD consistent?

You have examples of how HP could lead to answering the first question, but no examples of how HP could ever answer the second. Establishing Con LC for levels past PD looks even more problematic.

There is strong meta-mathematical evidence that the only way to ultimately answer 2 with “yes” is to answer 1 with “yes”. This takes us back to my basic confusion about the basis for your conviction in Con PD.

The fundamental technology (core-model methods) used to establish the “robustness” of the consistency hierarchy which you cite as evidence shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, or the failure of square at singular strong limit cardinals), that proposition implies PD. For these results (PFA, \square, etc.) no other lower-bound proofs are known. There is a higher-level consistency hierarchy here (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite the strictness of the hierarchy as an essential component of the evidence (as you must, in light of the ERH example), and so the lower-bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower-bound results, once one is past the level of Con PD one is actually inferring PD. It seems to me that by your own criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture, this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

I agree that there are interesting models outside this box. But I strongly disagree that V is one of them.

Regards,
Hugh