
Re: Paper and slides on indefiniteness of CH

pre-PS: Thanks Sol for correcting my spelling. My problem with German has plagued me my entire academic life.

Dear Sy,

I think we are getting to the point where we are simply talking past one another. Also the nesting of messages is making this thread somewhat difficult to follow (perhaps a line break issue or a platform issue).

You have made an important point for me: a rich structure theory together with Goedelian ‘success’ is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Unless there is something fundamentally different about large cardinals, which there is.

Many (well, at least 2) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only ‘context’ needed for Con PD is the empirical calibration provided by a strict ‘hierarchy’ of consistency strengths. That makes no assumptions about PD.

Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do), and why is the hierarchy evidence for anything?

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

Next point and within the discussion on strong rank maximality.  I wrote:

Question 1: How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

and you responded:

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of ‘strong maximality’. Then any two sentences in T are compatible with each other.

I realized after sending the message that I should have elaborated on what I had in mind on the incompatibility issue, so I will do so here. I imagine many of the followers of this thread (if any are left) will want to skip this.


Let me explain the sort of incompatibility I am concerned with.

Suppose M is strongly rank maximal. One might have a \Pi_2-sentence \phi_1 certified by a rank preserving extension of M with X-cardinals and a \Pi_2-sentence \phi_2 certified by a rank preserving extension with Y-cardinals.

What if X-cardinals and Y-cardinals are mutually incompatible, or worse, the existence of X-cardinals implies that \phi_2 cannot hold (or vice-versa)? Then how could \phi_1\wedge\phi_2 be certified? If the certifiable \Pi_2-sentences are not closed under finite conjunction then there is a problem.
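In symbols (my notation, not from the correspondence): writing \mathrm{Cert}(M) for the set of \Pi_2-sentences certified by some rank-preserving extension of M with large cardinals, the worry is that

```latex
\[
\phi_1 \in \mathrm{Cert}(M) \ \text{ and } \ \phi_2 \in \mathrm{Cert}(M)
\quad\not\Longrightarrow\quad
\phi_1 \wedge \phi_2 \in \mathrm{Cert}(M),
\]
```

since the two certifying extensions may involve incompatible large cardinal hypotheses.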

Let N_X be a rank-preserving extension of M with a proper class of X-cardinals which certifies \phi_1. Let’s call N_X a good witness if \phi_1 holds in all the set-generic extensions of N_X and all the \Pi_2-sentences which hold in all the set-generic extensions of N_X are deemed certified by N_X (this is arguably reasonable given the persistence of large cardinals under small forcing).

Similarly, let’s suppose that N_Y is a rank-preserving extension of M with a proper class of Y-cardinals which certifies \phi_2 and is a good witness.

Assuming the \Omega Conjecture is provable (and recall our base theory is ZFC + a proper class of Woodin cardinals) then one of the following must hold:

  1. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_X (and so N_X certifies \phi_1\wedge \phi_2).
  2. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_Y (and so N_Y certifies \phi_1\wedge \phi_2).

To me this is a remarkable fact. I see no way to prove it at this level of generality without the \Omega Conjecture.

You wrote:

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

I completely disagree. Having more models obscures truth; that is my whole point.

Moving on, I want to return to the inner model issue and illustrate an even deeper sense (beyond correctness issues) in which the Inner Model Program is not just about inner models.

Consider the following variation of the inner model program. This is simply the definable version of your “internal consistency” question, which you have explored quite a bit.

Question: Suppose that there exists a proper class of X-cardinals. Must there exist an inner model N with a proper class of X-cardinals such that N \subseteq \text{HOD}?

(Of course, if one allows more than the existence of a proper class of X-cardinals then there is a trivial solution, so here it is important that one is only allowed to use the given large cardinals.)

For “small” large cardinals, even at the level of Woodin cardinals, I know of no positive solution that does not use fine-structure theory.

Define a cardinal \delta to be n-hyper-extendible if \delta is extendible relative to the \Sigma_n-truth predicate.
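One standard way to make this precise (an assumption on my part; the intended formulation may differ in detail) is via elementary embeddings of structures expanded by the truth predicate:

```latex
\[
\begin{aligned}
&\text{Let } T_n \text{ denote the } \Sigma_n\text{-truth predicate of } V.
  \text{ Then } \delta \text{ is } n\text{-hyper-extendible iff}\\
&\text{for every } \lambda > \delta \text{ there exist } \mu
  \text{ and an elementary embedding}\\
&\qquad j \colon (V_\lambda, \in, T_n \cap V_\lambda)
  \longrightarrow (V_\mu, \in, T_n \cap V_\mu)\\
&\text{with } \operatorname{crit}(j) = \delta \text{ and } j(\delta) > \lambda.
\end{aligned}
\]
```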

Theorem: Suppose that the HOD Conjecture is true. Suppose that for each n, there is an n-hyper-extendible cardinal. Then for each n there is an n-hyper-extendible cardinal in HOD (this is a scheme, of course).

The HOD Conjecture could have an elementary proof (if there is an extendible cardinal).  This does not solve the inner model problem for hyper-extendible cardinals or even shed any light on the inner model problem.

Finally you wrote:

The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.

I agree on Reinhardt cardinals. But obviously disagree on the route to new hierarchies. Certainly HP has yet to indicate any promise for being able to reach new levels of consistency strength since even reaching the level of “ZFC + infinitely many Woodin cardinals” looks like a serious challenge for HP.  It would be interesting to even see a conjecture along these lines.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?

However one conceives of truth in set theory, one must have answers to:

  1. Is PD true?
  2. Is PD consistent?

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

There is strong meta-mathematical evidence that the only way to ultimately answer 2. with “yes” is to answer 1. with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence, shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, failure of square at singular strong limits, etc), that proposition implies PD.   For these results (PFA, \square etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD. It seems to me that by your own criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

I agree that there are interesting models outside this box. But I strongly disagree that V is one of them.


Re: Paper and slides on indefiniteness of CH

Dear Sy,

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

The notion of ordinal maximality to which I was referring was that in the bulletin paper and that which is used to formulate IMH* there.

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD, or ZFC + \omega many Woodin cardinals) as the basis of its consistency: a tremendous structure theory, implications of theorems which are later proved by other means, etc.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH). Look at the statement of the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there were any consensus that RH is true, this escape clause would not be in the stated rules.

Further the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC, there is no technical requirement of ‘iterability’ involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3-correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing.”

I completely agree this is the basic issue over which we disagree.

The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals. The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e., cease to be recognized as such) if one passes to a class forcing extension.

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal, the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V (beyond those already implied by the existence of an inner model with just one Woodin cardinal), particularly in the context of, for example, IMH. This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all “possible large cardinals” whatever that means).

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

Here is the first point. If there is a proper class of X-cardinals (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension the sentence \phi holds, where

\phi = “Every set A belongs to a set model with an X-cardinal above A.”

\phi is a \Pi_2-sentence and therefore, by the \Omega Conjecture, this \Pi_2-sentence is \Omega-provable. Further, these are arguably exactly the \Pi_2-sentences which generate the \Pi_2 consequences of strong rank maximality.
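Unwinding the quantifiers (one natural reading, recorded here only to display the complexity):

```latex
\[
\phi \;\equiv\; \forall A \,\exists M \,\bigl(\, A \in M
  \ \wedge\ M \text{ is transitive}
  \ \wedge\ (M, \in) \models \text{``there is an $X$-cardinal above the rank of } A\text{''} \,\bigr),
\]
```

a \forall\exists assertion about sets, hence \Pi_2 over V.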

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega-proofs have a notion of (ordinal) length and, in the ordering of the \Omega-provable sentences by proofs of shortest length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same, of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M make perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example, if a \Pi_2-sentence is \Omega-provable in the initial model M, it remains \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

You will disagree and perhaps that is the conclusion of this discussion, we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?