Tag Archives: Hyper-reflection

Re: Paper and slides on indefiniteness of CH

Dear all,

I’ve been collaborating with Sy on the HP for almost two years now and I believe I should add something to what he’s presented so far which may contribute to clarifying the programme’s goals and features. What follows does not necessarily coincide with Sy’s views or those of the other HP people (Radek and Carolin).


  1. Universe vs multiverse view. One of the points which has sometimes been overlooked by people in this thread is that the HP aims to be a theory of the set-theoretic multiverse. This means that, however robustly or thinly realistic one’s ontological commitments are, the HP takes it as a fact that set theory is not about a single, uniquely identifiable and describable ontological framework. On the standard semantic approach to truth, this cannot but mean that the HP commits itself to pluralism as a default position, although, as Sy has explained many times, the programme has no philosophically motivated prejudice against reducing the amount of truth-variance within the hyperuniverse (H). Some might legitimately ask whether the HP aims to put forward some specific argument in favour of pluralism as a general philosophical conception, but that is exactly what was thought not to be necessary in setting up the programme: the HP was born to make sense of the model-theoretic plurality of frameworks within current set theory and took it as a fact that such a plurality was there from the beginning. Now, there surely are reasons to believe that the universe-view has better prospects within the foundations of set theory. Some of them might be related to a strong commitment to thorough-going realism, some to a milder, practice-based form of realism which looks favourably on confirmation coming from significant strands of practice (the latter is, I believe, what Hugh, whom I take to be a universe-view supporter, sees most favourably), and some to more general concerns about the foundations of mathematics being more safely couched within a single-universe rather than a plural-universe framework. Now, a universe-view supporter has every reason to ask himself what the “unique” solution of an open problem in set theory is and, typically, sees as relevant to his conception only those results which reassure him of its correctness. Fine. But then the HP cannot be of any help to him. The HP may deliver “unique” solutions to set-theoretic problems only as a consequence of what Sy called a sort of “convergence” of H-principles towards the same axioms, that is, towards the same consequences in members of H. But this is not even required of the programme in itself. Therefore, my final point is: the value of the HP should not be measured by whether it provides us with definitive (whatever this may mean) answers to open problems in set theory.
  2. The value of HP. With this clarified, we should move on and see whether the HP is a valuable theory of the set-theoretic multiverse. A theory of the multiverse might be defective in different senses: Peter, for instance, has found reasons for concern in Joel’s radical conception; Hugh’s attempt at providing one (only to discard it as logically impossible) is partly rejected by John (Steel); and so on. Now, we believe the HP has good prospects of being philosophically and foundationally more attractive. Some of the reasons are described below.

    a. Ontological minimality: the HP identifies a single basic model-theoretic construct, the countable transitive model (c.t.m.), as the only constituent of multiverse ontology. Further mathematical and logical reasons for this choice have been explained at length by Sy, but I wish to recall that the main (and, to some extent, remarkable) fact is that we do not lose any information about set-theoretic truth by making this choice. The choice of c.t.m. might look illegitimate to some staunch realist, for whom ontological constructs follow our pre-theoretic grasp of them, but then I’d look forward to seeing this person argue against first-order logic as a natural consequence of this attitude. So long as we agree that the choice of first-order logic is the correct one, I don’t see any strong argument against restricting our attention to c.t.m.
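A rough way to see why nothing first-order is lost in passing to c.t.m. (this is only a sketch of the standard Löwenheim-Skolem argument, nothing specific to the HP): if \phi is a first-order sentence and M is any transitive model of ZFC + \phi, then a countable elementary submodel N \prec M collapses, via the Mostowski collapse, to some \bar{N} \in H, so that

    M \models \mathrm{ZFC} + \phi \text{ with } M \text{ transitive} \ \Longrightarrow \ \exists \bar{N} \in H \ (\bar{N} \models \mathrm{ZFC} + \phi).

In other words, every first-order theory realised by some transitive model is already realised by a member of the hyperuniverse.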

    b. Dualism about real V: this point was addressed by Peter, who thought it could be a potential weakness. In my view, this is a strength. The HP has no actualist commitment to real V, but only to a post-Zermelian approach to V as an endless sequence of models (although Zermelo’s view has a commitment to width actualism, expressed by second-order versions of the axioms). Thus, the link with an actualised real V expressed by this approach is very thin. H-principles express properties of an ideal (not real) universe and then “collapse” the universe to some c.t.m. Consequences of H-principles are seen as local axioms in portions of the hyperuniverse. Thus, our real V works dually: first as a regulatory ideal, as if temporarily actualised for the sake of the study of its general properties (e.g., maximality as expressed by the IMH), and, second, as a c.t.m. within H endowed with specific properties. There is no conflict between these two roles and, in fact, dualism seems to us to fit set-theoretic practice marvellously. Talk of the universe within practice constantly oscillates between these two poles: the universe as an actual construct surpassing any other conceivable construct, and the universe as a model (a c.t.m. for the sake of our constructions, possibly extended to an outer model or reduced to an inner model). The HP aims to give a general conceptual explanation of this existing dualism in our practice.

    c. Axiom-generating methodological approach: the HP supporter is not a truth-value realist, nor does she believe that there are unique solutions to open set-theoretic problems. Accordingly, she thinks that no axiom (or collection thereof) will give us the nicest possible set theory. But this is entirely fine with her: we have different universes, let’s live with them. However, she is not entirely indifferent to issues of truth within the hyperuniverse and, in fact, she believes that the study of properties of an ideal V might bear on the structure and properties of members of H. She might think that the radical view is the only option available: there are different concepts of set and, thus, different universes which instantiate such concepts. But perhaps an alternative way can be pursued. Again, she has a concept of a real V, an ideal actualisation of the universe, and she believes that we have reasons to hold that such an idealisation may play a role in current practice. Therefore, she believes that properties of such an ideal construct may be studied within the basic ontological constructs she has at hand: c.t.m. within the hyperuniverse. The only way to study these properties is to examine their implications. But now she understands that (some of) the statements implied by properties of the universe may be nothing but (new) axioms. At the end of the procedure she realises that the way a new axiom is generated is fully accounted for by her views concerning set-theoretic ontology (minimality above) and her pre-theoretic intuition of an idealised (and temporarily actualised) universe. As a consequence, she comes to see the HP as a tool to generate new knowledge (through generating new axioms) within set theory.

    d. Explanatory (epistemic) strength: even if one sees no reason to regard c. as a legitimate procedure within set theory, the HP may still stand out as a way to explain how we may come to believe new axioms. In other words, the HP may be a theory explaining the role of conjectures leading to new axioms within set theory. Sy has placed a lot of emphasis on the fact that the whole procedure qualifies as an intrinsically justified one. Peter raised relevant concerns about whether Sy’s claims were legitimate. In my view, the ideal V described by such principles as the IMH, \textsf{IMH}^\#, etc. has good prospects of being seen as related to the concept of set. Now, that is the hardest philosophical part, and I acknowledge that it is far from being established with a reasonable degree of certitude. Does the concept of set imply that V (the V I have been discussing, its ideal actualisation) is maximal in the sense asserted by the IMH? Why should maximality even be part of the concept of set? There are arguments available in the literature providing reasons to believe that maximality plays a special role in our intuitions about sets (Gödel, Wang and, in a sense, Dummett (on the basis of Russell’s self-reproductive properties) have all argued in favour of maximality as related to the set concept), but it is far from clear that all of these arguments can be used to defend principles such as the IMH. Therefore, it is fair to require the HP to provide stronger arguments in favour of the intrinsicness of the IMH, \textsf{IMH}^\#, etc. But suppose that the kind of principles the HP is examining have been shown to be legitimately justifiable through appeal to the set concept. Then I believe that the HP might be viewed as a strong foundational approach to truth within a relevant segment of current practice in set theory (at least, the ZFC-based portion), grounded on the study of intrinsically justified criteria. Obviously, this wouldn’t automatically qualify the HP as a theory which yields new self-evident truths (whatever that term may mean)!

  3. Is HP realist (in ontology)? Now, someone might say: “this is just cheating. You’re talking about a concept of set which should be robust enough to imply certain properties rather than others, for instance, maximality as opposed to minimality, uniformity (Gödel) rather than constant alteration (however this might be formalised mathematically), etc. If you want to argue that this is the case, then you have to hold the belief that there is an ultimate universe of sets endowed with such properties. Even if you’re not able to prove that such properties are actually implied by the concept of set, something which should not be taken for granted, by asserting that there is a concept of set and an ultimate universe instantiating it, you automatically subscribe to some form of realism (in ontology).” I address this objection as I think that it is relevant to assessing the overall legitimacy of the HP. The HP does not start with the idea that there is a universe of sets. It holds that there are different universes, built using the currently known procedures (set forcing, class forcing, inner models, etc.), all of which point to a single ontological template, a c.t.m. However, the HP also holds that the properties of such universes depend upon the properties of an ancestral universe (real V) which is nothing but an ideal universe. Such properties are also ideal, in the sense that they do not prescribe the existence of certain sets or certain ordinals, but only that the universe should enjoy some general property x which, in our view, is related to the set concept. The maximal iterative conception has often been set forth as the standard concept of set. As Sy has said many times, the HP does not question this. What the HP wants to do is to expand on this concept. But this expansion should not be construed as the deliverance of self-evident new principles yielding ultimate truths, but rather as the addition of new features to the concept (e.g., if one adopted the IMH, maximal iteration + maximal internal consistency of V). That’s it. Admittedly, there is not even any requirement that new additions give rise to the same truths across the multiverse. Does this mean that there are different concepts of set? It might be so, but this does not imply that there are different real universes. In my view, however, one would rather study alternative expansions of the concept sharing a common initial ground (the iterative notion) than alternative concepts of set. To sum up, the HP may commit itself to some form of objectivity in the way we expand on the concept of set, but not to the idea that we have a pre-theoretic grasp of a universe of sets endowed with certain properties. To use Pen’s terminology, an HP supporter could be a Thin Realist, but with an emphasis on refinements of and additions to the concept of set, rather than on universes or ontological frameworks.
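Since both d. above and this last point lean on the IMH, let me recall its usual informal statement (this is only my transcription; the variants \textsf{IMH}^\# and the SIMH mentioned elsewhere in this thread refine it, but the basic shape is the same):

    (IMH) If a first-order sentence holds in an inner model of some outer model of V, then it already holds in an inner model of V.

Within the HP this is read, as I understand it, as a property of a member of H, with “outer model” ranging over models with the same ordinals of which the given universe is an inner model.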

Sorry for the very long email. I hope it addresses clearly at least some of the general concerns that people in this thread had raised with regard to the plausibility of HP and contributes effectively to the exciting debate going on.

Best regards,

Claudio

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?
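To spell out the contrast being drawn (this is only a restatement of the above, with \mathcal{F} an ad hoc name for the set of \Sigma_2 sentences which can be forced to hold by set forcing):

    proper class of Woodin cardinals + \Omega Conjecture: \mathcal{F} is \Delta_2-definable;
    proper class of Woodin cardinals + Strong \Omega Conjecture: \mathcal{F} is definable in H(\mathfrak c^+);
    V = L: \mathcal{F} is \Sigma_2-complete (so as complicated as a \Sigma_2 set can be).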

In your Section 6 you discuss two programmes, \Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite having been launched by Dodd and Jensen about 40(?) years ago.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not “worthy of much discussion”? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Let me now briefly explain what the HP is about…. The idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is ‘omniscience’). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example, sophisticated variants of Jensen coding are required). The current status is as follows: the programme suggests that small large cardinals exist, that large large cardinals exist in inner models, and that CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis (SIMH), which will lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

I see absolutely no basis for the claim that HP suggests that inner models for (large) large cardinals exist (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason, of course, is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

The inner models relevant to current practice in Set Theory are correct inner models, and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased: the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why, in the restricted setting of, for example, V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

Example: The proof of Steel’s theorem that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact, that AD holds in L(\mathbb R)). No other proof is known. This phenomenon is ubiquitous in Set Theory: combinatorial statements are shown to imply, say, PD as a by-product of establishing lower bounds on their consistency strength.
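Schematically (just recording the shape of this example; the first implication is Steel’s theorem, whose proof goes through the construction of the correct inner models mentioned above):

    \mathrm{PFA} \ \Longrightarrow \ \mathrm{AD}^{L(\mathbb R)} \ \Longrightarrow \ \mathrm{PD}.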

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.
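In symbols (just a transcription of the statement above, with \prec denoting the elementary substructure relation between the structures ((H)^M, \in) and (H, \in)):

    (Hyper-reflection) \ \ \exists M \in H \ \big( (H)^M \prec H \big).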

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

One could take the position that H should satisfy PD (by asserting that, for each n, H verifies the existence of M_n^\#, where M_n is the inner model with n Woodin cardinals), in which case, taking the witnesses of hyper-reflection as preferred universes, one concludes that PD is true in the preferred universes.
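For orientation, the level-by-level correspondence in the background here is, stated roughly, the standard one:

    for each n: \ \boldsymbol{\Pi}^1_{n+1}\text{-determinacy} \ \Longleftrightarrow \ M_n^{\#}(x) \text{ exists (with the appropriate iterability) for every real } x,

so PD corresponds to having all of these sharps, for every real parameter.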

In summary, the entire approach of HP seems to start from a basic premise (a failure of richness of the hyper-universe) that is biased against ever concluding that PD is true in the preferred universes. If the hyper-universe is taken to be limited, then it is not surprising that one arrives at a definition of preferred universes which is similarly limited, since one is using properties of the universes within the hyper-universe in defining those preferred universes.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does.  It seems to me that your entire implementation of HP is just another version of this.
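For comparison, the structural consequences of rejecting 0^\# are made precise by Jensen’s Covering Theorem (recalled here in its standard form):

    If 0^\# does not exist, then every uncountable set of ordinals X is contained in a constructible set Y with |Y| = |X|,

so V is then “close to” L; the suggestion above is that rejecting PD constrains V in an analogously drastic way.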

But this is not an argument against PD.

Regards,
Hugh