Re: Paper and slides on indefiniteness of CH

Dear Hugh,

OK, let’s go for just one more exchange of comments and then try to bring this to a conclusion by agreeing on a summary of our perspectives. I already started to prepare such a summary but do think that one last exchange of views would be valuable.

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Many (well, at least 2) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do) and why is the hierarchy evidence for anything?

My argument is “proof-theoretic”: the consistency strengths in set theory are organised by the consistency strengths of large cardinal axioms. And we have good evidence for the strictness of this hierarchy. There is nothing semantic here.
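
To be concrete, here is a small and entirely standard sample of this hierarchy, in strictly increasing consistency strength (with Con PD sitting at the level of ZFC + \omega many Woodin cardinals):

\[ \mathrm{Con}(\mathrm{ZFC}) < \mathrm{Con}(\mathrm{ZFC} + \text{an inaccessible}) < \mathrm{Con}(\mathrm{ZFC} + \text{a measurable}) < \mathrm{Con}(\mathrm{ZFC} + \omega\text{ many Woodins}) < \mathrm{Con}(\mathrm{ZFC} + \text{a supercompact}). \]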

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC). Why would you conjecture in this setting that RH is false? I thought that you had evidence for the consistency of statements whose strength lies below that of a Reinhardt cardinal but above that of the large cardinals compatible with AC. With such evidence I would indeed conjecture that RH is true; wouldn’t you?

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable; we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

1. Is PD true?

I don’t know.

2. Is PD consistent?

Yes.

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurable cardinals but does not imply \Pi^1_1-determinacy (which, by Harrington and Martin, is equivalent to the existence of x^\# for every real x). This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.

We have had this exchange several times already. Let’s agree to (strongly) disagree on this point.

The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, the failure of square at singular strong limits, etc.), that proposition implies PD. For these results (PFA, square, etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD. It seems to me that by your very own criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

Again: It is not clear that the HP will give not-PD! It is a question of finding appropriate criteria that will yield PD, perhaps criteria that will yield enough large cardinals.

As far as the strictness of the consistency hierarchy is concerned, we can use quasi-lower bounds; we don’t need the lower bounds coming from core model theory.

And as I have been trying to say, building core model theory into a programme for the investigation of set-theoretic truth like HP is an inappropriate incursion of set-theoretic practice into an intrinsically-based context.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

No, we may be able to stay “within the box” as you put it:

I said that SIMH(large cardinals + \#-generation) might be what we are looking for; the problems are to intrinsically justify large cardinals and to prove the consistency of this criterion. Would you be happy with that solution?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Sun, 3 Aug 2014, W Hugh Woodin wrote:

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

This results from the standard empirical fact that the consistency of a huge variety of statements in set theory is shown assuming the consistency of large cardinal axioms, and there are even “lower bound results” calibrating the strength of the large cardinal axioms required, even at the level of a supercompact. For example, \textsf{PFA}(\mathfrak c^+\text{-linked}) is consistent relative to a degree of supercompactness, yet there are models with slightly less large cardinal strength (subcompact cardinals) with the property that \textsf{PFA}(\mathfrak c^+\text{-linked}) fails in all of their proper (not necessarily generic) extensions. This is strong evidence that the consistency strength of \textsf{PFA}(\mathfrak c^+\text{-linked}) is at the level of a degree of supercompactness. Thus large cardinals provide a “hierarchy” of consistency strengths whose strictness is witnessed by numerous statements of set-theoretic interest. I see this as sufficient justification for the consistency of large cardinals; we don’t need inner model theory or some structure theory which follows from their existence to know that they are consistent.
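
Schematically, and using only the facts just cited, the upper bound is

\[ \mathrm{Con}(\mathrm{ZFC} + \text{a degree of supercompactness}) \Rightarrow \mathrm{Con}(\mathrm{ZFC} + \textsf{PFA}(\mathfrak c^+\text{-linked})), \]

while the subcompact-level models in all of whose proper extensions \textsf{PFA}(\mathfrak c^+\text{-linked}) fails play the role of the (quasi-)lower bound.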

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency: a tremendous structure theory, implications of theorems which are later proved by other means, etc.

Those tests are used (not by me but by a number of set-theorists) in favour of the truth of PD, not just of its consistency! The consistency of PD only needs the consistency of large cardinals, as justified above, and none of these tests.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Your analogy is problematic as there is no clear distinction between consistency and truth with regard to the ERH; this is because there is only one natural model of first-order arithmetic. But there is a clear distinction between consistency and truth for PD as there are many natural models of second-order arithmetic.

Look at the statement of the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there were any consensus that RH is true, this escape clause would not be in the stated rules. Further, the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

The consistency of large cardinals does not require any “rich internal theory”. In contrast, the truth of large large cardinal axioms is much harder to justify, indeed I have yet to see a convincing argument for that.

I completely agree this is the basic issue over which we disagree. The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals.

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

The HP is open to the conclusion that large large cardinals exist, but to achieve this one needs a philosophically well-justified and unbiased criterion for the choice of preferred universes. There may be such a criterion but so far the only ones I have come up with either contradict or tolerate large large cardinal existence; none of them succeed in proving large large cardinal existence.

The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Yes, but once again if you set things up to protect desired generalisations of 0^\# then you are showing a bias towards those generalisations. This is in my view unacceptable without justifying that way of setting things up with a philosophically well-justified and unbiased criterion for the choice of preferred universes.

In summary: The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

It would be ideal if the axioms and objects that you want to see in set theory would arise as consequences of a philosophically well-justified approach to truth, but so far I don’t see how to do that (things are pointing in a different direction). I present this to you as an interesting challenge. It should be clear by now that the derivation of large large cardinal axioms from maximality criteria is very problematic (but perhaps not impossible).

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal, the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V, particularly in the context of, for example, IMH, beyond those already implied by the existence of an inner model with just one Woodin cardinal. This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

See the above. The HP is not biased against large cardinals. Rather, imposing large cardinals at the start of any investigation of truth is where the bias lies.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

OK, for the sake of discussion let us now relativise everything to large cardinal axioms. I.e., the base theory is now ZFC + arbitrary large cardinals. This is not justified but is surely worth investigating.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all ‘possible large cardinals’ whatever that means).

Please see my paper with Honzik about strong reflection. We capture ordinal maximality with the property of “\#-generation”. Roughly speaking, V is \#-generated if it arises from a “sharp” in the same way that L arises from 0^\#. This is in my view the strongest possible form of ordinal maximality. (Aside: It also allows one to argue that reflection is perfectly compatible with a potentialist view of V; no actualism is required.)
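
Very roughly, and suppressing the details of the paper, the definition has the following shape: V is \#-generated if there is an iterable “sharp” (N, U) whose iteration \langle N_\alpha : \alpha \in \mathrm{Ord} \rangle, with critical points \langle \kappa_\alpha : \alpha \in \mathrm{Ord} \rangle, generates V as the union of the lower parts of its iterates:

\[ V = \bigcup_{\alpha \in \mathrm{Ord}} (V_{\kappa_\alpha})^{N_\alpha}. \]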

Then the context you want is \#-generation plus large cardinals. These are your preferred universes. (Note that large cardinals in V automatically generate large cardinals past the ordinals of V via \#-generation.) Below I will use “strong maximality” to mean \#-generated plus large cardinals.

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of “strong maximality”. Then any two sentences in T are compatible with each other.
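
In symbols: T = \{ \sigma : M \models \sigma \text{ for every strongly maximal } M \}. Any two members of T hold simultaneously in any single strongly maximal universe, so (granted that at least one such universe exists) no pairwise incompatibility can arise.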

Maybe you meant to ask if any two strongly maximal universes satisfy the same \Pi_2 sentences? The answer is of course “no” if strongly maximal universes can differ about what large cardinals exist. This will surely be the case unless one requires that each strongly maximal universe satisfies “all large cardinal axioms”. How do you formulate that exactly?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

M. Stanley showed that there are countable transitive models M of ZFC with the property that the set of first-order sentences with parameters from M which hold in some outer model of M is M-definable. (Note that this is immediate for any M if one considers only set-generic outer models of M!) This can also be done for the theory ZFC + large cardinals (fixing a notion of large cardinal axiom). I expect that this can also be done for strong maximality (and indeed holds for all strongly maximal universes), and therefore the answer to your question is “yes”.
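
In symbols, the property in question is that the set

\[ \{ \sigma(\vec a) : \vec a \in M \text{ and } \sigma(\vec a) \text{ holds in some outer model of } M \} \]

is definable over M. (For set-generic outer models this set is always definable via the forcing relation, which is why that case is immediate.)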

Here is the first point. If there is a proper class of X-cardinals (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals, and so in every set-generic extension the sentence \phi holds, where \phi = “Every set A belongs to a set model with an X-cardinal above A.” \phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega-provable.
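
(Schematically, \phi is the \Pi_2-sentence

\[ \forall A\, \exists M\, (A \in M \ \wedge\ M \models \text{“there is an } X\text{-cardinal above } A\text{”}), \]

with both quantifiers ranging over sets.)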

So we know something about what happens in set-generic extensions. But this tells us nothing about what happens in more general extensions, even those which satisfy \phi.

Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Why? You want to claim that if a \Pi_2 sentence \psi holds in all strongly maximal universes then it follows from some \phi as above. Equivalently, if a \Sigma_2 sentence is compatible with all large cardinal axioms then it holds in some strongly maximal universe. But again we have a problem with the concept of “all large cardinal axioms”. If this really could be formulated then we have the problem sentence “For some large cardinal axiom \phi there is no transitive model of \phi”. This is \Sigma_2, compatible with all large cardinal axioms but false in all strongly maximal universes. So you are forced to fix a bound on the large cardinal axioms you consider, but then you lose consequences of strong maximality.
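
Written out, the problem sentence is

\[ \exists \phi\, (\phi \text{ is a large cardinal axiom} \ \wedge\ \neg \exists M\, (M \text{ transitive} \ \wedge\ M \models \phi)), \]

which is \Sigma_2 since “there is a transitive model of \phi” is \Sigma_1 in \phi. Intuitively it is compatible with any fixed large cardinal axiom, as a universe satisfying that axiom can be truncated below the least rank at which some stronger axiom acquires a transitive model.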

It seems better to drop set-genericity and consider the IMH for strongly maximal universes. Honzik and I verified the consistency of the IMH for ordinal maximal (= \#-generated) universes using a variant of Jensen coding; an interesting challenge is to do this for strongly maximal universes (i.e., to add the large cardinals). As you know the mathematics is now quite hard, as so far we lack the inner model theory and Jensen coding theorems that are available for the smaller large cardinals.
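
For reference, the basic \textsf{IMH} asserts: if a first-order sentence holds in an inner model of some outer model of V, then it already holds in an inner model of V. The \textsf{IMH}^\# is, roughly, this criterion with V and the outer models considered required to be \#-generated.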

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega-proofs have a notion of (ordinal) length, and in the ordering of the \Omega-provable sentences by proofs of shortest length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same, of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M make perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example, if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

As I said above, I fail to see how you reach the conclusion that the \Pi_2 consequences of large cardinals are those generated by your sentences \phi. In any case your conclusion is that the \Pi_2 consequences of large cardinals are just the local versions of the very same large cardinal axioms. How does this constitute a clarification?

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

I don’t know what fundamental questions involving the large cardinal hierarchy you refer to. Consider IMH (Strong maximality). A consistency proof for this would be very revealing. Of course any model of this will satisfy PD but this does not mean that PD holds in V. As I said it is likely that the \Gamma-logic of any such universe will be definable in that universe, where \Gamma-logic is the (in my view more natural and better motivated) version of \Omega-logic which refers to arbitrary outer models and not just to set-generic ones.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

No. Consider \textsf{SIMH}^\# (the IMH with absolute parameters for ordinal maximal = \#-generated universes). I conjecture that this is consistent. Just as the \textsf{IMH}^\# is consistent with strong maximality, I expect the same for the \textsf{SIMH}^\#. And the \textsf{SIMH}^\# implies that the continuum is very large.

You will disagree, and perhaps that is the conclusion of this discussion: we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

This does sound interesting but I confess that I don’t quite understand it. The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but will concern new and subtle forms of absoluteness / powerset maximality.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

Many thanks, Hugh, for your stimulating comments,
Sy