Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Many thanks for your comments. It is good that we are finally having this debate (thank you, Sol). Below are some responses to your message.

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?

These are impressive results, but they concern the effect of set forcing on truth and, in my view, avoid the key questions about truth. Considering only the effect of set forcing on truth is misleading: Shoenfield’s absoluteness theorem is an important statement about truth, asserting that \Sigma^1_2 truth is not affected by enlargements of the universe of sets. The analogous absoluteness principle for \Sigma^1_3 truth is inconsistent, but to see this one must consider enlargements not obtainable by set forcing (indeed \Sigma^1_3 absoluteness restricted to set-generic enlargements is consistent).
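
To be explicit about the form of Shoenfield absoluteness I am using here (stated for enlargements in general, not just set-generic ones):

(Shoenfield absoluteness)  If \varphi is a \Sigma^1_2 sentence with real parameters from V, then \varphi is absolute between V and any outer model of V; no restriction to set-generic outer models is needed.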

As I said, if one admits set-forcing as a legitimate concept in discussions of set-theoretic truth then the continuum problem has an easy solution: Extend Levy absoluteness by asserting \Sigma_1 absoluteness with absolute parameters for ccc set-forcing extensions. This is consistent and implies that the continuum is very large. I do not consider this to be a legitimate solution to the continuum problem and the reason is the artificial restriction to ccc set-forcing extensions.
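
For concreteness, one way of rendering this extension (taking \omega_1 as a typical absolute parameter; the exact choice of absolute parameters is not essential to the point):

(ccc \Sigma_1 absoluteness)  If \varphi is a \Sigma_1 formula and \varphi(\omega_1) holds in some ccc set-forcing extension of V, then \varphi(\omega_1) already holds in V.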

An unfortunate misuse of terminology in discussions of absoluteness in the literature is that “absoluteness” and “forcing” are taken to mean “set-generic absoluteness” and “set-forcing”, which gives a misleadingly narrow interpretation of the absoluteness concept. Indeed one of the difficult issues is the relationship between absoluteness and large cardinals: Clearly if absoluteness means set-generic absoluteness then the existence of a proper class of large cardinals is absolute, but this evades the question of how large cardinal existence is affected by enlargements that are not set-generic, such as those arising in Shoenfield’s Theorem.

Also note that by a theorem of Bukovsky, set-genericity is equivalent to a covering property: N is a set-generic extension of M iff for some M-cardinal \kappa, every single-valued function in N between sets in M can be covered by a \kappa-valued function in M. Put this way it is clear that set-genericity is an unwarranted restriction on the notion of extension of a model of set theory, as there is nothing “intrinsic” about such a covering property.
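
Spelled out in symbols (this is just a restatement of the covering property above): N is a set-generic extension of M iff there is an M-cardinal \kappa such that for every function f \in N mapping a set a \in M into M, there is a function g \in M with domain a such that for all x \in a, f(x) \in g(x) and g(x) has size at most \kappa in the sense of M.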

Perhaps our disagreement is deeper than the above and resides in the following misunderstanding: I am looking for intrinsic sources for the truth of new axioms of set theory. The results you mention, as well as many other impressive results of yours, are highly relevant for the practice of set theory, as they tell us what we can and cannot expect from the typical methods that we use. For example, most independence results in set theory are based entirely on set-forcing, and do not need class forcing, hyperclass forcing or other methods for creating new universes of sets. But this is very different from finding justifications for the truth of new axiom candidates. Such justifications must have a deeper source and cannot be phrased in terms of specific set-theoretic methods. (I understand that Cohen agreed with this point.)

Large cardinals and determinacy rank among the most exciting and profound themes of 20th century set theory. Their interconnection is striking and in my view lends very strong evidence to their consistency. But 21st century set theory may have a different focus and my interest is to understand what can be said about set-theoretic truth that does not hinge on the most exciting mathematical developments at a particular time in the history of the subject. I am not a naturalist in the sense of Penelope Maddy. And unlike Sol, I remain optimistic about finding new intrinsic sources of set-theoretic truth that may resolve CH and other prominent questions.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not ‘worthy of much discussion’? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

First a point of clarification: The HP entails the examination of a variety of different mathematical criteria for the choice of preferred universes. These different criteria need not agree on their first-order consequences, although each is motivated by an intrinsic feature of the universe of sets (usually maximality). The long-term question is whether these criteria will “synthesise” to a stable universal criterion with interesting first-order consequences that are not in conflict with set-theoretic practice. The answer to this question is not yet known (the programme is new).

Now in answer to what you say above: The first mathematical criterion in the HP was the IMH. It implies that there are measurable cardinals of arbitrary Mitchell order in inner models. (It also implies that in V there are no measurable cardinals.) The reason for measurables in inner models has nothing to do with their existence in some model; it is a consequence of core model theory (with covering over core models and the fact that \square holds in core models). So I don’t see your point here.
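
For reference, the IMH is, roughly stated, the following maximality criterion: if a first-order sentence holds in an inner model of some outer model of V, then it already holds in an inner model of V.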

The inner models relevant to current practice in Set Theory are correct inner models and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why, in the restricted setting of, for example, V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC; there is no technical requirement of “iterability” involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing”.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

These are wonderful applications of core model theory. They add to the evidence for the consistency of PD. But I don’t see what implications this has for the truth of PD. After all the theory “ZFC + V = L[x] for some real x + There is a supercompact cardinal in some inner model” is a strong theory which implies the consistency of PD but does not even imply \Pi^1_1 determinacy.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

But more fundamentally: Why need the mathematical criteria for preferred universes be absolute in any sense? One of the important features of the programme is the dynamic interplay between the nature of the Hyperuniverse and V. The Hyperuniverse must be defined in a background V, and conversely first-order properties of preferred universes within the Hyperuniverse are candidates for truth assertions about V. So it is to be expected that changing V will lead to changes in the preferred members of the Hyperuniverse.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

There is no question of “rejecting” any criterion for preferred universes. Instead, we are exploring the consequences of the different criteria and the extent to which “syntheses” of different criteria with each other are possible.

The criterion you propose is similar to ones I have discussed with Hannes Leitgeb and Eduardo Rivello. Hannes asked me about replacing the Hyperuniverse with models of 2nd order ZFC. But of course then the only “pictures of V” are the V_\alpha for \alpha inaccessible, and if we apply the HP to such universes we will arrive only at reflection principles and nothing more. Eduardo asked about using the countable transitive models which are elementarily equivalent to V (one could strengthen this further by demanding an elementary embedding into V). The problem now is that this choice of universes “begs the question”: We want to use the hyperuniverse to understand what first-order properties V should have (based on philosophically justified criteria for the choice of preferred universes), but with Eduardo’s choice one has “built in” all first-order properties of V and therefore can learn nothing new. It is analogous to Zermelo’s quasi-categoricity for 2nd order set theory: Yes, you have categoricity modulo the ordinals and therefore arrive at a complete theory, but you have no idea what this theory says.

I am rather convinced that the generous use of all countable transitive models of ZFC is the right notion of Hyperuniverse and valuable criteria for preferred universes cannot “build in” the first-order theory of V.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals) in which case taking the witnesses of hyper-reflection as preferred universes one concludes PD is true in the preferred universes.

??? This is like saying we assume PD in V and take our preferred universes to reflect all first-order truths of V, and therefore PD. But this misses the point: The first-order properties of H, like those of V, are to result from criteria for preferred universes that are based on intrinsic features, such as maximality. How do you arrive at PD that way? I can well imagine that you could arrive at preferred universes that satisfy the existence of inner models with many large cardinals, but this is very far from PD.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

The Hyperuniverse contains ALL countable transitive models of ZFC, so you cannot say that it isn’t rich; it is as rich as possible. I think what you want to do is impose assumptions on V that imply that the Hyperuniverse will contain certain universes that you would like to see there. But this misses the point of the programme: Any such assumption must arise as a consequence of criteria for preferred universes that are intrinsically based. If there is a way of getting PD as a first-order consequence of such criteria I would be happy to see it. But so far things seem to be going in another direction: Only lightface PD and the existence of large cardinals only in inner models, not in V.

More generally, the failure of PD is a higher-level analogue of the inconsistency of PD. Rejecting PD has enormous structural consequences for V, just as rejecting 0^\# does. It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

You have misinterpreted the programme. It does not necessarily lead to the definitive conclusion that PD is false! The first criterion, IMH, gave this consequence, but other criteria, such as \textsf{IMH}^\# (see my paper with Honzik) do not. Briefly put: There are intrinsically-based criteria which imply that PD is false and others which do not decide PD. So far, no such criterion implies that PD is true.

In fact a key feature of the programme is that it avoids any bias with regard to specific set-theoretic statements like large cardinal existence or PD. The programme proceeds using intrinsically-based criteria and explores their consequences. Of course a strict rule is to only employ criteria which are based on an intrinsic feature of the universe of sets; reference to “forcing” or “large cardinals” or “determinacy” is inappropriate in the formulation of such criteria. My aim is for the programme to be “open-minded” and not slip in, intentionally or otherwise, technical baggage from the current practice of set theory. But currently I do think that whatever consequences the programme yields should be tested for “compatibility” with set-theoretic practice.

Best,
Sy
