
Re: Paper and slides on indefiniteness of CH

Dear Claudio,

Pen, in a sense you’re right, the hyperuniverse “lives” within V (I’d rather say that it “originates from” V) and my multiverser surely has a notion of V, as does anybody else working with ZFC.

OK, but now I lose track of the sense in which yours is a multiverse view: there’s V and within V there’s the hyperuniverse (the collection of ctms). Any universer can say as much.

(I confess this is disappointing. I was hoping that a true multiverser would join this discussion.)

I’m not sure that Sy entirely agrees with me on this point, but to me HP implies an irreversible departure from the idea of finding a single, unified body of set-theoretic truths. Even if a convergence of consequences of H-axioms were to manifest itself in a stronger and more tangible way, via, e.g., results of the calibre of those already found by Sy and Radek, I’d be reluctant to accept the idea that this would automatically reinstate our confidence in a universe-view simply by referring such a convergence back to a pristine V.

Now I’m confused again. Here’s the formulation you agreed to:

Or you might say to the universer that her worries are misplaced, that your multiverse view is out to settle on a single preferred theory of sets, it’s just that you don’t think of it as the theory of a single universe; rather, it’s somehow suggested by or extracted from the multiverse.

Though embracing a single universe is the most straightforward way of pursuing unify, I was taking you to be pursuing it in a multiverse context (not to be embracing ‘a pristine V’). Fine with me.

But now that you’ve clarified that you aren’t really a multiverser, that you see all this as taking place within V, why reject unify now? And if you do, what will you say to our algebraist?

Moreover, HP, in my view, constitutes the reversal of the foundational perspective I described above (that is, to find an ultimate universe), by deliberately using V as a mere inspirational concept for formulating new set-theoretic hypotheses rather than as a fixed entity whose properties will come to be known gradually.

So there’s a sense in which you have V and a sense in which you don’t. If V is so indeterminate, how can the collection of ctms within it be a well-defined object open to precise mathematical investigation?

Has this brief summary answered (at least some of) your legitimate concerns?

I very much appreciate your efforts, Claudio, but the picture still isn’t clear to me. A simple, readily understandable intuitive picture can be an immensely fruitful tool, as the iterative conception has amply demonstrated, but this one, the intuitive picture behind the HP, continues to elude me.

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think anyone would be nervous about linking set-theoretic truth to a concept private to one person and (perhaps) a handful of his co-workers,

I am disappointed to hear you say this.

I apologize for disappointing you.  I was going by some of what you’ve written in these exchanges.  First this:

First question:  Is this your personal picture or one you share with others?

I don’t know, but maybe I have persuaded some subset of Carolin Antos, Tatiana Arrigoni, Radek Honzik and Claudio Ternullo (HP collaborators) to have the same picture; we could ask them.

Why do you ask? Unless someone can refute my picture then I’m willing to be the only “weirdo” who has it.

You then softened this with:

You got this wrong. I indeed expect that others have similar pictures in their heads but can’t assume that they have the same picture. There is Sy’s picture but also Carolin’s picture, Tatiana’s picture … Set-theoretic truth is indeed about what is common to these pictures after an exchange of ideas.

I assumed you still meant to limit the range of people whose pictures are relevant to a fairly small group.  Otherwise the collection of things ‘common to these pictures’  would get too sparse.

In any case, these further explanations are most helpful …

2. Mental pictures

Each set-theorist who accepts the axioms of ZFC has at any given time an individual mental picture of the universe of sets.

OK.  Everyone has his own concept of the set-theoretic universe.

3. Intrinsic features of the universe of sets

These are those practice-independent features common to the different individual mental pictures, such as the maximality of the universe of sets. Thus intrinsic features are determined by the set theory community. (Here I might lose people who don’t like maximality, but that still leaves more than a handful.)

OK. Intrinsic features are those common to all the concepts of the set-theoretic universe (or close enough). So far, this seems to be the usual kind of conceptualism: there is a shared concept of the set-theoretic universe (something like the iterative conception); it’s standardly characterized as including ‘maximality’, both in ‘width’ (Sol’s ‘arbitrary subset’) and in ‘height’ (at least small LCs). Also reflection (see below).

4. The Hyperuniverse

This mathematical construct consists of all countable transitive models of ZFC. These provide mathematical proxies for all possible mental pictures of the universe of sets. Not all elements of the Hyperuniverse will serve as useful proxies; for example, they may fail to exhibit intrinsic features such as maximality.

OK.  We stipulate that the hyperuniverse contains all CTMs of ZFC. But some of these (only some? — they’re all countable, after all) fail to exhibit maximality, etc.
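(Just so I have the object straight, I take the hyperuniverse to be, roughly,

H = \{ M : M \text{ is a countable transitive set and } (M, \in) \models \mathrm{ZFC} \},

as computed within a given background V.)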

5. Mathematical criteria

These are mathematical conditions imposed on elements of the Hyperuniverse which are intended to reflect intrinsic features of the universe of sets. They are to be unbiased, i.e. formulated without appeal to set-theoretic practice. A criterion is intrinsically-based if it is judged by the set theory community to faithfully reflect an intrinsic feature of the universe of sets. (There are such criteria, like reflection, which are judged to be intrinsically-based by more than a handful.)

OK.  Now we’re to impose on the elements of the hyperuniverse the conditions implicit in the shared concept of the set-theoretic universe.  These include maximality,  reflection, etc.  (We’re weeding the hyperuniverse, right?)

6. Analysis and synthesis

An intrinsic feature such as maximality can be reflected by many different intrinsically-based mathematical criteria. It is then important to analyse these different criteria for consistency and the possibility of synthesizing them into a common criterion while preserving their original intentions. (I am sure that more than a handful can agree on a suitable synthesis.)

I think this is the key step (or maybe it was (5)), the step where the HP is intended to go beyond the usual efforts to squeeze intrinsic principles out of the familiar concept of the set-theoretic universe.  The key move in this ‘going beyond’ is to focus on the hyperuniverse as a way of formulating new versions of the old intrinsic principles.

Let me stop at this point, because I’m afraid my paraphrase has gone astray.  You once rejected the bit of my attempted summary of your view that said the new hyperuniverse principles ‘build on’ principles from the old concept of the set-theoretic universe, and I seem to have fallen back into that misunderstanding.  The old concept you characterize as ‘just the maximal iterative conception’.  (You don’t include maximizing ‘width’ in this, though I think it is usually included.)  I’m not sure how to describe the new concept, but the new principles implicit in it are different in that ‘they deal with external features of universes and are logical in nature’ (both quotes are from your message of 8/8).

What I’m groping for here is a characterization of where the new intrinsic principles are based.  It has to be something other than the old concept of the set-theoretic universe, the maximal iterative conception.  I keep falling into the idea that the new principles are generated by thinking about the old principles from the point of view of the hyperuniverse, that the new principles are new versions of the old ones and they go beyond the old ones by exploiting ‘the external features of universes’ (revealed by the hyperuniverse perspective) in logical terms.  But this doesn’t seem to be what you want to say.  Is there a different, new concept, with new intrinsic principles?

An aside:  as I understand things, it was the purported new concept that seemed to threaten to be limited to a select group.  If the relevant concept in all this is just the familiar concept of the set-theoretic universe — which does seem to be broadly shared, which conceptualists generally are ready to embrace — and the hyperuniverse is just a new way of extracting information from that familiar concept, then at least one of my worries disappears.

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Many thanks for your comments. It is good that we are finally having this debate (thank you, Sol). Below are some responses to your message.

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?

These are impressive results, but they are about the effect of set forcing on truth, in my view avoiding the key questions about truth. Only considering the effect of set forcing on truth is misleading: Shoenfield’s absoluteness theorem is an important statement about truth, asserting that \Sigma^1_2 truth is not affected by enlargements of the universe of sets. The analogous principle for \Sigma^1_3 truth is inconsistent, but to see this one has to consider enlargements not obtainable by set forcing (indeed \Sigma^1_3 absoluteness is consistent for set-generic enlargements).
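To be explicit about the form of Shoenfield’s theorem I have in mind (stated here only for orientation): if W is any outer model of V, that is, a transitive model of ZFC with the same ordinals in which V is an inner model, set-generic or not, then for every \Sigma^1_2 formula \varphi and every real parameter a \in V,

V \models \varphi(a) \iff W \models \varphi(a).

The enlargement W is arbitrary; no appeal to set forcing is made.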

As I said, if one admits set-forcing as a legitimate concept in discussions of set-theoretic truth then the continuum problem has an easy solution: Extend Levy absoluteness by asserting \Sigma_1 absoluteness with absolute parameters for ccc set-forcing extensions. This is consistent and implies that the continuum is very large. I do not consider this to be a legitimate solution to the continuum problem and the reason is the artificial restriction to ccc set-forcing extensions.
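Schematically, and taking \omega_1 as a typical absolute parameter, the criterion just described reads: for every \Sigma_1 formula \varphi and every ccc forcing notion P,

\text{if } V^P \models \varphi(\omega_1) \text{ then } V \models \varphi(\omega_1).

It is the restriction to ccc set-forcing extensions that makes this artificial, not any defect in the absoluteness idea itself.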

An unfortunate misuse of terminology in discussions of absoluteness in the literature is that “absoluteness” and “forcing” are taken to mean “set-generic absoluteness” and “set-forcing”, which distorts the absoluteness concept. Indeed one of the difficult issues is the relationship between absoluteness and large cardinals: Clearly if absoluteness means set-generic absoluteness then the existence of a proper class of large cardinals is absolute, but this evades the question of how large cardinal existence is affected by enlargements that are not set-generic, such as those in Shoenfield’s Theorem.

Also note that by a theorem of Bukovsky, set-genericity is equivalent to a covering property: N is a set-generic extension of M iff for some M-cardinal \kappa, every single-valued function in N between sets in M can be covered by a \kappa-valued function in M. Put this way it is clear that set-genericity is an unwarranted restriction on the notion of extension of a model of set theory, as there is nothing “intrinsic” about such a covering property.
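In more detail, and in the formulation I recall: for transitive models M \subseteq N of ZFC with the same ordinals, N is a set-generic extension of M iff there is an M-cardinal \kappa such that

\text{for every } f \in N \text{ with } f : a \to M, \ a \in M, \text{ there is } g \in M \text{ with } \mathrm{dom}(g) = a, \ f(x) \in g(x) \text{ and } |g(x)| \le \kappa \text{ for all } x \in a.

Nothing about such a covering property singles it out as an “intrinsic” constraint on extensions of models of set theory.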

Perhaps our disagreement is deeper than the above and resides in the following misunderstanding: I am looking for intrinsic sources for the truth of new axioms of set theory. The results you mention, as well as many other impressive results of yours, are highly relevant for the practice of set theory, as they tell us what we can and cannot expect from the typical methods that we use. For example, most independence results in set theory are based entirely on set-forcing, and do not need class forcing, hyperclass forcing or other methods for creating new universes of sets. But this is very different from finding justifications for the truth of new axiom candidates. Such justifications must have a deeper source and cannot be phrased in terms of specific set-theoretic methods. (I understand that Cohen agreed with this point.)

Large cardinals and determinacy rank among the most exciting and profound themes of 20th century set theory. Their interconnection is striking and in my view lends very strong evidence to their consistency. But 21st century set theory may have a different focus and my interest is to understand what can be said about set-theoretic truth that does not hinge on the most exciting mathematical developments at a particular time in the history of the subject. I am not a naturalist in the sense of Penelope Maddy. And unlike Sol, I remain optimistic about finding new intrinsic sources of set-theoretic truth that may resolve CH and other prominent questions.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not ‘worthy of much discussion’? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

First a point of clarification: The HP entails the examination of a variety of different mathematical criteria for the choice of preferred universes. These different criteria need not agree on their first-order consequences, although each is motivated by an intrinsic feature of the universe of sets (usually maximality). The long-term question is whether these criteria will “synthesise” to a stable universal criterion with interesting first-order consequences that are not in conflict with set-theoretic practice. The answer to this question is not yet known (the programme is new).

Now in answer to what you say above: The first mathematical criterion in the HP was the IMH. It implies that there are measurable cardinals of arbitrary Mitchell order in inner models. (It also implies that in V there are no measurable cardinals.) The reason for measurables in inner models has nothing to do with their existence in some model; it is a consequence of core model theory (with covering over core models and the fact that \square holds in core models). So I don’t see your point here.
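For orientation, the IMH (Inner Model Hypothesis) states, roughly: for any parameter-free first-order sentence \varphi,

\text{if } \varphi \text{ holds in an inner model of some outer model of } V, \text{ then } \varphi \text{ holds in an inner model of } V.

(Here “inner model” is meant in the broad sense discussed below: a transitive class containing all the ordinals and satisfying ZFC; “outer model” means an extension with the same ordinals.)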

The inner models relevant to current practice in Set Theory are correct inner models and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why in the restricted setting of, for example, V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC; there is no technical requirement of “iterability” involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing”.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

These are wonderful applications of core model theory. They add to the evidence for the consistency of PD. But I don’t see what implications this has for the truth of PD. After all, the theory “ZFC + V = L[x] for some real x + There is a supercompact cardinal in some inner model” is a strong theory which implies the consistency of PD but does not even imply \Pi^1_1 determinacy.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

But more fundamentally: Why need the mathematical criteria for preferred universes be absolute in any sense? One of the important features of the programme is the dynamic interplay between the nature of the Hyperuniverse and V. The Hyperuniverse must be defined in a background V, and conversely first-order properties of preferred universes within the Hyperuniverse are candidates for truth assertions about V. So it is to be expected that changing V will lead to changes in the preferred members of the Hyperuniverse.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

There is no question of “rejecting” any criterion for preferred universes. Instead, we are exploring the consequences of the different criteria and the extent to which “syntheses” of different criteria with each other are possible.

The criterion you propose is similar to ones I have discussed with Hannes Leitgeb and Eduardo Rivello. Hannes asked me about replacing the Hyperuniverse with models of 2nd order ZFC. But of course then the only “pictures of V” are the V_\alpha, \alpha inaccessible, and if we apply the HP to such universes we will arrive only at reflection principles and nothing more. Eduardo asked about using countable transitive models which are elementarily equivalent to V (one could strengthen this further by demanding an elementary embedding into V). The problem now is that this choice of universes “begs the question”: We want to use the hyperuniverse to understand what first-order properties V should have (based on philosophically justified criteria for the choice of preferred universes), but with Eduardo’s choice one has “built in” all first-order properties of V and therefore can learn nothing new. It is analogous to Zermelo’s quasi-categoricity for 2nd order set theory: Yes, you have categoricity modulo the ordinals and therefore arrive at a complete theory, but you have no idea what this theory says.

I am rather convinced that the generous use of all countable transitive models of ZFC is the right notion of Hyperuniverse and that valuable criteria for preferred universes cannot “build in” the first-order theory of V.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals), in which case, taking the witnesses of hyper-reflection as preferred universes, one concludes that PD is true in the preferred universes.

??? This is like saying we assume PD in V and take our preferred universes to reflect all first-order truths of V, and therefore PD. But this misses the point: The first-order properties of H, like those of V, are to result from criteria for preferred universes that are based on intrinsic features, such as maximality. How do you arrive at PD that way? I can well imagine that you could arrive at preferred universes that satisfy the existence of inner models with many large cardinals, but this is very far from PD.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of the hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited, since one is using properties of the universes within the hyper-universe in defining those preferred universes.

The Hyperuniverse contains ALL countable transitive models of ZFC, so you cannot say that it isn’t rich; it is as rich as possible. I think what you want to do is impose assumptions on V that imply that the Hyperuniverse will contain certain universes that you would like to see there. But this misses the point of the programme: Any such assumption must arise as a consequence of criteria for preferred universes that are intrinsically based. If there is a way of getting PD as a first-order consequence of such criteria I would be happy to see it. But so far things seem to be going in another direction: Only lightface PD and the existence of large cardinals only in inner models, not in V.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does. It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

You have misinterpreted the programme. It does not necessarily lead to the definitive conclusion that PD is false! The first criterion, IMH, gave this consequence, but other criteria, such as \textsf{IMH}^\# (see my paper with Honzik) do not. Briefly put: There are intrinsically-based criteria which imply that PD is false and others which do not decide PD. So far, no such criterion implies that PD is true.

In fact a key feature of the programme is that it avoids any bias with regard to specific set-theoretic statements like large cardinal existence or PD. The programme proceeds using intrinsically-based criteria and explores their consequences. Of course a strict rule is to only employ criteria which are based on an intrinsic feature of the universe of sets; reference to “forcing” or “large cardinals” or “determinacy” is inappropriate in the formulation of such criteria. My aim is for the programme to be “open-minded” and not slip in, intentionally or otherwise, technical baggage from the current practice of set theory. But currently I do think that whatever consequences the programme yields should be tested for “compatibility” with set-theoretic practice.

Best,
Sy