
Paper and slides on indefiniteness of CH

Dear all,

Here are two attachments as pdf files.

The first is a paper entitled, “The Continuum Hypothesis is neither a definite mathematical problem nor a definite logical problem”; it is a revision of my 2011 Harvard EFI project lecture.

The second consists of the slides for a recent lecture here, “An outline of Rathjen’s proof that CH is indefinite, given my criteria for definiteness.”

Comments welcome on both.

Sol Feferman

CH is Indefinite
Definiteness, and Rathjen on CH

Re: Paper and slides on indefiniteness of CH

Dear Sol,

Many thanks for the interesting attachments (I have studied only the first one so far). I would like to point out some recent developments, both in the direction of modern set theory and in approaches to problems like CH, that are relevant to the discussion. Overall I agree with much of what you say and can very well understand the conclusion you have reached regarding the intrinsic undecidability of CH, given the assumptions that you make. Perhaps the strongest point of agreement that we have is that I too think that it will never be possible to achieve a resolution of CH based on intrinsic features of the concept of set. But that does not entail intrinsic undecidability, as I will explain below. Another point of agreement is that the two programmes you outline, Woodin’s \Omega-logic and the Inner Model Programme, are inadequate to establish CH as a definite logical problem. Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

Peter Koellner and I recently both presented tutorials on set-theoretic truth at the Chiemsee workshop hosted by Leitgeb-Petrakis-Schuster-Schwichtenberg. A key distinction that we both made was between intrinsic (a priori) and extrinsic (a posteriori) evidence in set theory. My position on such forms of evidence can be summarised as follows:

  1. Evidence based on intrinsic aspects of the set concept can lead no further than reflection principles. Reflection principles are weak: they are consistent with V = L. (Indeed my view is that they lead to any small large cardinal notion, i.e. any large cardinal notion consistent with V = L; see my paper with Honzik on this, on my webpage.) There is no hope of resolving a question like CH using them.
  2. Extrinsic evidence coming from set theory is also limited in its power. In particular, I do not find it sufficient to justify either the existence of large large cardinals or of PD (more on this below).
  3. Extrinsic evidence coming from outside set theory, either from other areas of logic (such as model theory, where non-first-order questions demand more than ZFC) or from other areas of mathematics, has not been sufficiently explored to see what consequences it may have. This is an important gap in the literature that should be filled.
  4. There is a new source of intrinsic evidence in set theory that shows promise for resolving questions like CH, via my Hyperuniverse Programme (see my paper with Arrigoni on this, on my webpage, and the discussion below).

I’ll now fill in the above by referring explicitly to parts of your interesting paper.

Page 2

CH has ceased to exist as a definite problem in the ordinary sense.

I agree, but keep in mind that there has been no systematic study of the axiom candidates beyond ZFC that best suit the needs of mathematics or logic outside set theory. I cannot claim that there will be a consensus among working mathematicians about the advisability or otherwise of adopting CH, but this question has not been systematically explored.

…even its [CH's] status in the logical sense is seriously in question.

I will argue below, via the HP (Hyperuniverse Programme), that there are serious reasons for thinking that CH may yet be a well-defined logical problem.

Since axioms for very large cardinals are taken for granted in current programs that aim to settle CH…

No. Large large cardinals are not taken for granted in my approach (more below).

Page 3

But most importantly, as long as mathematicians think of mathematical problems as questions of truth or falsity, they do not regard problems in the logical sense relevant to their fundamental aims insofar as those are relative to some axioms or models of a formal language.

I mostly agree, but this may be changing. In particular, the remarkable combinatorial power of forcing axioms like PFA or MM which resolve such a wide array of questions (Farah, Moore, Todorcevic, …) may now be persuading mathematicians to use them in their work. This has already happened to some extent with MA.

Page 4

The Continuum Hypothesis is perhaps unique in having originated as a problem in the ordinary sense and evolved into one in the logical sense.

Really? What about the Suslin Problem (of a similar flavour, but still different)? In fact couldn’t this characterisation apply to any problem of mathematics that was studied and later turned out to be independent of the axioms of ZFC?

Regarding Arnold’s collection: I do agree that mathematicians typically regard independent problems as not “real problems of mathematics”. But there is a new development in set theory, which I think is of the greatest importance, perhaps as profound as the discovery of independence: Unclassifiability results in descriptive set theory. The basic question is whether important classes of mathematical structures admit a desirable classification (nice invariants). For example, simple classes of countable structures have invariants which are real numbers, but many interesting classes cannot be classified up to “equivalence” by countable structures of any kind. This is a dramatic development in set theory which has transformed areas of mathematics like operator algebras, where good classifications were pursued but never found, and have now been shown not to exist by descriptive set-theoretic methods. My point is that the next edition of Arnold’s collection may (perhaps should) take into account problems in this area, which have nothing to do with forcing or large cardinals, but are concerned with strictly mathematical issues regarding “Borel reducibility” (first discovered by Harvey, by the way, if I am not mistaken).

But in any case I fully agree that “CH can no longer be considered to be a problem in the ordinary sense as far as the mathematical community at large is concerned”.  My only point is that there are other problems of current interest in set theory that surely can be so regarded.

Page 7

My point here is to emphasize that contrary to some appearances, we are dealing with a logical subject through and through.

If this refers to set theory as a whole, then this claim is not right in light of the recent developments in descriptive set theory. For example, Ben Miller recently showed that the Harrington-Kechris-Louveau generalisation of the Glimm-Effros Dichotomy, which originally used logical methods, can be proved without them. So there are ZFC-provable results of mathematical interest coming out of set theory that make no use of logical methods.

Page 9

Why accept large cardinals? I. The consistency hierarchy.

A distinction must be made between small large cardinals, those consistent with V = L, and large large cardinals, those which are not. The former can be justified using reflection, an intrinsic feature of the maximal iterative concept of set. But in my view there is no convincing argument, based on either intrinsic or extrinsic evidence, for the existence of large large cardinals. What you present in your Section 4 is evidence only for the consistency of large large cardinals, not for their existence. Aside from the more refined question of whether consistency strengths are linearly ordered, the fact remains that large cardinals provide an apparently cofinal “hierarchy” of consistency strengths. This is obviously of great importance for set theory. But it makes no distinction, for example, between the existence of large cardinals in V and their existence in inner models of V. (Indeed the initial indications of the HP are that they exist in inner models but not in V.)

Page 12

You indicate the extrinsic evidence for large cardinal existence based upon their consequences for regularity properties in the projective hierarchy. As I have argued, this is a very weak argument, as it is based on an extrapolation from the 1st projective level to the higher levels; but there are analogous extrapolations that are provably false, such as generalisations of Shoenfield absoluteness to the higher projective levels or even from Borel sets to \Sigma_1^1 sets in the descriptive set theory of generalised Baire Space (\kappa^\kappa instead of \omega^\omega).

Also note (see the Koellner quote you provide on Page 13) that although large cardinals prove PD, in the converse direction you only get that PD gives inner models for large cardinals, not their existence. So again this is an important hint that what is going on is that large cardinals may exist not in V but only in inner models, and PD may fail in V while holding in certain inner models which fail to contain all of the real numbers.

Two other claims have been made to justify large cardinal existence and AD in L(\mathbb R): Woodin has asserted that the only explanation for the consistency of large cardinals is their existence. He points out that this is the case, for example, for the totality of exponentiation on the natural numbers (I agree) and analogously for large cardinal axioms. But I think Woodin is making a simple mistake here: It is indeed hard to imagine a coherent explanation for exponentiation to be consistently total without being actually total. But this is simply because V_\omega, where the question is formulated, has no proper inner models. But V can have many inner models and for this reason the natural explanation for the consistency of large cardinals without their existence is provided by their existence in inner models. No such argument is available for statements of strength regarding the totality of functions on \omega.

Page 14

There can be no question that for the same [mathematical] community, the proposed raising of these large cardinal axioms to the status of ‘truths’ alongside the accepted informal principles of set theory is indeed deeply problematic.

I would only add that this is deeply problematic for some members of the set theory community as well. That is of course not to detract from the beauty and importance of the Martin-Steel Theorem, a fine piece of mathematics that can only be properly judged by researchers with experience with large cardinals. The result should not be diminished in importance by others who lack such experience, simply because it is not clear that large cardinal axioms are true. Indeed the connection between “truth” in set theory and what is central to the mathematical development of the subject is not obvious and is in need of clarification (I plan to prepare a paper on this with my philosophical colleagues).

In your Section 6 you discuss two programmes, \Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite it having been launched by Dodd and Jensen about 40(?) years ago.

\Omega-logic is clearly more developed (despite the unverified \Omega-conjecture), but it suffices to note that this programme achieves nothing more than a sophisticated analysis of set-generic absoluteness. The highlights are Woodin’s absoluteness of the theory of H(\omega_1) for set-generic extensions from large cardinals and Viale’s absoluteness of the theory of H(\omega_2) for stationary-preserving set-generic extensions from large cardinals + MM^{+++}. As Cohen implicitly asked: What does this have to do with the analysis of set-theoretic truth? It is only a beautiful mathematical theory. Indeed, if one wants a solution to CH using the concept of set-genericity, the solution is straightforward: \Sigma_1 absoluteness with absolute parameters for ccc forcing extensions (a parameter is absolute if it is uniformly definable in cardinal-preserving extensions). This principle is consistent and implies that the continuum is very large.

Obviously this is not a genuine solution to the continuum problem; in my view the only thing wrong with it is that it hinges on the purely technical notion of “ccc forcing extension”.

Let me now briefly explain what the HP is about (see the attached Chiemsee slides or my paper with Arrigoni on my website for more details) and why it addresses the concerns you raise. Two of your quotes are relevant:

But one cannot say that it [CH] is a definite logical problem in some absolute sense unless the systems of models in question have been singled out in some canonical way.

(Gödel) Probably there exist other axioms based on hitherto unknown principles … which a more profound understanding of the concepts underlying logic and mathematics would enable us to recognize as implied by these concepts.

Briefly: I do not think that CH can be resolved using intrinsic features of the concept of set. Instead I create a context, the Hyperuniverse = the set of countable transitive models of ZFC, in which one can compare different universes of set theory. This comparison evokes intrinsic features of the set-theoretic universe which could not otherwise be expressed, such as the maximality of a universe in comparison with other universes. These features are formulated as mathematical criteria and the universes which satisfy these criteria are regarded as “preferred universes of set theory”. First-order statements which hold in all preferred universes become candidates for new axioms of set theory, which can be tested against set-theoretic practice. Ideally, if these candidates are in addition supported by extrinsic evidence, then a strong case can be made for their truth.

Thus the idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is “omniscience”). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example sophisticated variants of Jensen coding are required). The current status is as follows: The programme suggests that small large cardinals exist, large large cardinals exist in inner models and CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis which will lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

In summary: My view is that CH will be seen as a definite logical problem if it can be resolved by examining intrinsic features of preferred universes of set theory. In other words, I see a realistic scenario for contradicting the claim you make in the title of your paper. At the same time, I cannot yet claim with confidence that the programme will resolve CH; it may only lead to a small set of possible extensions of ZFC, which I believe will still be of significant interest. It is simply too early to say what will happen, as the choices of motivating philosophical principles for preferred universes, the formulation of mathematical criteria to instantiate these principles, and the theorems required to establish consistency of the desired criteria are all important future challenges to be met. But I remain optimistic about the prospect of resolving ZFC-undecidable propositions like CH via the programme.

With best wishes and many thanks for including me in the discussion,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?

In your Section 6 you discuss two programmes, \Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite it having been launched by Dodd and Jensen about 40(?) years ago.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not “worthy of much discussion”? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Let me now briefly explain what the HP is about….the idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is ‘omniscience’). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example sophisticated variants of Jensen coding are required). The current status is as follows: The programme suggests that small large cardinals exist, large large cardinals exist in inner models and CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis which will lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

The inner models relevant to current practice in Set Theory are correct inner models and their existence (at the level of infinitely many Woodin cardinals)  implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why in the restricted setting of for example V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals) in which case taking the witnesses of hyper-reflection as preferred universes one concludes PD is true in the preferred universes.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does.  It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Many thanks for your comments. It is good that we are finally having this debate (thank you, Sol). Below are some responses to your message.

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?

These are impressive results, but they are about the effect of set forcing on truth, and in my view they avoid the key questions about truth. Considering only the effect of set forcing on truth is misleading: Shoenfield’s absoluteness theorem is an important statement about truth, asserting that \Sigma^1_2 truth is not affected by enlargements of the universe of sets. The analogous assertion for \Sigma^1_3 truth is inconsistent, but to see this one has to consider enlargements not obtainable by set forcing (indeed \Sigma^1_3 absoluteness for set-generic enlargements is consistent).
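
For reference, the absoluteness in question can be stated schematically as follows (this is just the standard formulation, with real parameters from V allowed): for any outer model W of V, i.e. any transitive class model of ZF with the same ordinals as V, and any \Sigma^1_2 or \Pi^1_2 formula \varphi with parameters in V,

V \models \varphi \iff W \models \varphi.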

As I said, if one admits set-forcing as a legitimate concept in discussions of set-theoretic truth then the continuum problem has an easy solution: Extend Levy absoluteness by asserting \Sigma_1 absoluteness with absolute parameters for ccc set-forcing extensions. This is consistent and implies that the continuum is very large. I do not consider this to be a legitimate solution to the continuum problem and the reason is the artificial restriction to ccc set-forcing extensions.
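
Rendered schematically (taking an “absolute” parameter to be one uniformly definable in cardinal-preserving extensions), the principle asserts that for every \Sigma_1 formula \varphi and every absolute parameter p,

\text{if } \varphi(p) \text{ holds in some ccc set-forcing extension of } V, \text{ then } \varphi(p) \text{ holds in } V.

(The other direction is automatic, since \Sigma_1 statements persist from V to its outer models.)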

An unfortunate misuse of terminology in discussions of absoluteness in the literature is that “absoluteness” and “forcing” are taken to mean “set-generic absoluteness” and “set-forcing”, giving a misleading interpretation of the absoluteness concept. Indeed one of the difficult issues is the relationship between absoluteness and large cardinals: Clearly if absoluteness means set-generic absoluteness then the existence of a proper class of large cardinals is absolute, but this evades the question of how large cardinal existence is affected by enlargements that are not set-generic, such as in Shoenfield’s Theorem.

Also note that by a theorem of Bukovsky, set-genericity is equivalent to a covering property: N is a set-generic extension of M iff for some M-cardinal \kappa, every single-valued function in N between sets in M can be covered by a \kappa-valued function in M. Put this way it is clear that set-genericity is an unwarranted restriction on the notion of extension of a model of set theory, as there is nothing “intrinsic” about such a covering property.
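
In one standard formulation (stated here schematically, with the exact cardinality bound depending on the presentation), Bukovsky’s theorem reads: for transitive models M \subseteq N of ZFC with the same ordinals, N is a set-generic extension of M if and only if there is an M-cardinal \kappa such that for every function f \in N with f : A \to M and A \in M, there is g \in M such that f(a) \in g(a) and |g(a)| < \kappa for all a \in A.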

Perhaps our disagreement is deeper than the above and resides in the following misunderstanding: I am looking for intrinsic sources for the truth of new axioms of set theory. The results you mention, as well as many other impressive results of yours, are highly relevant for the practice of set theory, as they tell us what we can and cannot expect from the typical methods that we use. For example, most independence results in set theory are based entirely on set-forcing, and do not need class forcing, hyperclass forcing or other methods for creating new universes of sets. But this is very different from finding justifications for the truth of new axiom candidates. Such justifications must have a deeper source and cannot be phrased in terms of specific set-theoretic methods. (I understand that Cohen agreed with this point.)

Large cardinals and determinacy rank among the most exciting and profound themes of 20th century set theory. Their interconnection is striking and in my view lends very strong evidence to their consistency. But 21st century set theory may have a different focus and my interest is to understand what can be said about set-theoretic truth that does not hinge on the most exciting mathematical developments at a particular time in the history of the subject. I am not a naturalist in the sense of Penelope Maddy. And unlike Sol, I remain optimistic about finding new intrinsic sources of set-theoretic truth that may resolve CH and other prominent questions.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not ‘worthy of much discussion’? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

First a point of clarification: The HP entails the examination of a variety of different mathematical criteria for the choice of preferred universes. These different criteria need not agree on their first-order consequences, although each is motivated by an intrinsic feature of the universe of sets (usually maximality). The long-term question is whether these criteria will “synthesise” to a stable universal criterion with interesting first-order consequences that are not in conflict with set-theoretic practice. The answer to this question is not yet known (the programme is new).

Now in answer to what you say above: The first mathematical criterion in the HP was the IMH. It implies that there are measurable cardinals of arbitrary Mitchell order in inner models. (It also implies that in V there are no measurable cardinals.) The reason for measurables in inner models has nothing to do with their existence in some model; it is a consequence of core model theory (with covering over core models and the fact that \square holds in core models). So I don’t see your point here.

The inner models relevant to current practice in Set Theory are correct inner models and their existence (at the level of infinitely many Woodin cardinals)  implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why in the restricted setting of for example V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC; there is no technical requirement of “iterability” involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing”.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

These are wonderful applications of core model theory. They add to the evidence for the consistency of PD. But I don’t see what implications this has for the truth of PD. After all the theory “ZFC + V = L[x] for some real x + There is a supercompact cardinal in some inner model” is a strong theory which implies the consistency of PD but does not even imply \Pi_1^1 determinacy.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

But more fundamentally: Why need the mathematical criteria for preferred universes be absolute in any sense? One of the important features of the programme is the dynamic interplay between the nature of the Hyperuniverse and V. The Hyperuniverse must be defined in a background V, and conversely first-order properties of preferred universes within the Hyperuniverse are candidates for truth assertions about V. So it is to be expected that changing V will lead to changes in the preferred members of the Hyperuniverse.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

There is no question of “rejecting” any criterion for preferred universes. Instead, we are exploring the consequences of the different criteria and the extent to which “syntheses” of different criteria with each other are possible.

The criterion you propose is similar to ones I have discussed with Hannes Leitgeb and Eduardo Rivello. Hannes asked me about replacing the Hyperuniverse with models of 2nd order ZFC. But of course then the only “pictures of V” are the V_\alpha, \alpha inaccessible, and if we apply the HP to such universes we will arrive only at reflection principles and nothing more. Eduardo asked about using countable transitive models of ZFC which are elementarily equivalent to V (one could strengthen this further by demanding an elementary embedding into V). The problem now is that this choice of universes “begs the question”: We want to use the hyperuniverse to understand what first-order properties V should have (based on philosophically justified criteria for the choice of preferred universes), but with Eduardo’s choice one has “built in” all first-order properties of V and therefore can learn nothing new. It is analogous to Zermelo’s quasi-categoricity for 2nd order set theory: Yes, you have categoricity modulo the ordinals and therefore arrive at a complete theory, but you have no idea what this theory says.

I am rather convinced that the generous use of all countable transitive models of ZFC is the right notion of Hyperuniverse and valuable criteria for preferred universes cannot “build in” the first-order theory of V.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals) in which case taking the witnesses of hyper-reflection as preferred universes one concludes PD is true in the preferred universes.

??? This is like saying we assume PD in V and take our preferred universes to reflect all first-order truths of V, and therefore PD. But this misses the point: The first-order properties of H, like those of V, are to result from criteria for preferred universes that are based on intrinsic features, such as maximality. How do you arrive at PD that way? I can well imagine that you could arrive at preferred universes that satisfy the existence of inner models with many large cardinals, but this is very far from PD.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

The Hyperuniverse contains ALL countable transitive models of ZFC, so you cannot say that it isn’t rich; it is as rich as possible. I think what you want to do is impose assumptions on V that imply that the Hyperuniverse will contain certain universes that you would like to see there. But this misses the point of the programme: Any such assumption must arise as a consequence of criteria for preferred universes that are intrinsically based. If there is a way of getting PD as a first-order consequence of such criteria I would be happy to see it. But so far things seem to be going in another direction: Only lightface PD and the existence of large cardinals only in inner models, not in V.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does. It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

You have misinterpreted the programme. It does not necessarily lead to the definitive conclusion that PD is false! The first criterion, IMH, gave this consequence, but other criteria, such as \textsf{IMH}^\# (see my paper with Honzik) do not. Briefly put: There are intrinsically-based criteria which imply that PD is false and others which do not decide PD. So far, no such criterion implies that PD is true.

In fact a key feature of the programme is that it avoids any bias with regard to specific set-theoretic statements like large cardinal existence or PD. The programme proceeds using intrinsically-based criteria and explores their consequences. Of course a strict rule is to only employ criteria which are based on an intrinsic feature of the universe of sets; reference to “forcing” or “large cardinals” or “determinacy” is inappropriate in the formulation of such criteria. My aim is for the programme to be “open-minded” and not slip in, intentionally or otherwise, technical baggage from the current practice of set theory. But currently I do think that whatever consequences the programme yields should be tested for “compatibility” with set-theoretic practice.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

The notion of ordinal maximality to which I was referring was that in the bulletin paper and that which is used to formulate IMH* there.

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency. Tremendous structure theory, implications of theorems which are later proved by other means, etc.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).  Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there was any consensus that RH is true this escape clause would not be in the stated rules.

Further the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC; there is no technical requirement of ‘iterability’ involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing.”

I completely agree this is the basic issue over which we disagree.

The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals. The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V, particularly in the context of, for example, IMH, beyond those already implied by the existence of an inner model of just 1 Woodin cardinal. This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all “possible large cardinals” whatever that means).

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

Here is the first point. If there is a proper class of X-cardinals  (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence  \phi holds where

\phi = “Every set A belongs to a set model with an X-cardinal above A.”

\phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega provable. Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega proofs have a notion of (ordinal) length and in the ordering of the \Omega-provable sentences by proofs of shortest  length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M makes perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

My concern with HP is this. I do not see a scenario in which HP, even with strong rank maximality, can lead anywhere on the fundamental questions involving the large cardinal hierarchy. The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V, at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V, and this is not definable within the hyper-universe of V.

Granting this, genuine progress on CH is even less plausible: how could a proposed solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH, since it is refuted by large cardinals.

You will disagree and perhaps that is the conclusion of this discussion, we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your helpful comments on my draft, “The Continuum Hypothesis is neither a definite mathematical problem nor a definite logical problem,” and especially for bringing your Hyperuniverse Program (HP) to my attention.  I had seen your 2013 article with Arrigoni on HP back then but had not taken in its point.  I have now read it as well as your Chiemsee slides, and will certainly take it into account in the final version of my paper.

I’m glad that we are in considerable agreement about my fundamental argument that one must distinguish mathematical problems in the ordinary sense from logical problems, and that as of now what I claim in the title is true, even taking HP into consideration.  Is my title misleading since it does not say “as of the time of writing”? The reader will see right away in the abstract and the opening section that what I claim does not exclude the possibility that in the future CH will return as a definite mathematical problem [quite unlikely] or that it will somehow become a definite logical problem.

This is not the place to respond to your many interesting comments on the draft, nor on the substance of the HP and your subsequent exchange with Woodin.  But I would like to make some suggestions regarding your terminology for HP (friendly to my mind).  First of all, it seems to me that “preferred models” is too weak to express what you are after.  How about “premier models” or some such?  (Tapping into the Thesaurus could lead to the best choice.) Secondly, I’m not happy about your use of “intrinsic evidence for set-theoretic truth” both because “intrinsic evidence” is commonly used to refer to the constellation of Gödel’s ideas in that respect (not the line you are taking) as opposed to “extrinsic evidence”, and because “set-theoretic truth” suggests a platonistic view (which you explicitly reject).  I don’t have anything to take its place, but it reminds me of the kinds of methodological maxims that Maddy has promoted, so perhaps a better choice of terminology can be found in her writings in place of that.

Best,
Sol

Re: Paper and slides on indefiniteness of CH

Dear Sol,

On Sun, 3 Aug 2014, Solomon Feferman wrote:

Dear Sy,

Thanks for your helpful comments on my draft, “The Continuum Hypothesis is neither a definite mathematical problem nor a definite logical problem,” and especially for bringing your Hyperuniverse Program (HP) to my attention.  I had seen your 2013 article with Arrigoni on HP back then but had not taken in its point.  I have now read it as well as your Chiemsee slides, and will certainly take it into account in the final version of my paper. I’m glad that we are in considerable agreement about my fundamental argument that one must distinguish mathematical problems in the ordinary sense from logical problems, and that as of now what I claim in the title is true, even taking HP into consideration.  Is my title misleading since it does not say “as of the time of writing”? The reader will see right away in the abstract and the opening section that what I claim does not exclude the possibility that in the future CH will return as a definite mathematical problem [quite unlikely] or that it will somehow become a definite logical problem.

This does appear to constitute a significant retreat in your position. In the quote of yours that I used in my Chiemsee tutorial you refer to CH as being “inherently vague”, in other words as dealing with concepts that render it impossible to ever assign it a truth value. If you now concede the possibility that new ideas such as those hinted at by Gödel in the quote below (and perhaps provided by the hyperuniverse programme) may indeed lead to a solution, then the “inherent vagueness” argument disappears and our positions are quite close. Indeed we may only differ in the degree of optimism we have about the chances of resolving ZFC-undecidable problems in abstract set theory through philosophically-justifiable logical methods.

“(Gödel) Probably there exist other axioms based on hitherto unknown principles … which a more profound understanding of the concepts underlying logic and mathematics would enable us to recognize as implied by these concepts.”

This is not the place to respond to your many interesting comments on the draft, nor on the substance of the HP and your subsequent exchange with Woodin.  But I would like to make some suggestions regarding your terminology for HP (friendly to my mind).  First of all, it seems to me that “preferred models” is too weak to express what you are after.  How about “premier models” or some such?  (Tapping into the Thesaurus could lead to the best choice.)

I do see your point here, because I do want to suggest not simply a “preference” for certain universes over others but rather a “compelling” or “justified” preference. I’ll give the terminology more thought, thanks for the comment.

Secondly, I’m not happy about your use of “intrinsic evidence for set-theoretic truth” both because “intrinsic evidence” is commonly used to refer to the constellation of Gödel’s ideas in that respect (not the line you are taking) as opposed to “extrinsic evidence”, and because “set-theoretic truth” suggests a platonistic view (which you explicitly reject).  I don’t have anything to take its place, but it reminds me of the kinds of methodological maxims that Maddy has promoted, so perhaps a better choice of terminology can be found in her writings in place of that.

I do not think that “set-theoretic truth” entails a platonistic viewpoint (indeed there is a concept of “truth-value determinism” that falls short of Platonism). The goal of the programme is indeed to make progress in our understanding of truth in set theory and a key claim is that there is intrinsic evidence regarding the nature of the set-theoretic universe that transcends the older form of such evidence emanating from the maximal iterative conception. I think that the dichotomy intrinsic (a priori) versus extrinsic (a posteriori) which Peter Koellner has emphasized is a valuable way to clarify the debate. Nevertheless I do appreciate that some have suggested that the distinction is not as sharp as I may have assumed and I would like to hear more about that.

Another very interesting question concerns the relationship between truth and practice. It is perfectly possible to develop the mathematics of set theory without consideration of set-theoretic truth. Indeed Saharon has suggested that ZFC exhausts what we can say regarding truth, but of course that does not force him to work just in ZFC. Conversely, the HP makes it clear that one can investigate truth in set theory quite independently from set-theoretic practice; indeed the IMH arose from such an investigation and some would argue that it conflicts with set-theoretic practice (as it denies the existence of inaccessibles). So what is the relationship between truth and practice? If there are compelling arguments that the continuum is large and measurable cardinals exist only in inner models but not in V, will this or should this have an effect on the development of set theory? Conversely, should the very same compelling arguments be rejected because their consequences appear to be in conflict with current set-theoretic practice?

Best wishes and many thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Thank you for the plug, Sol.  Sy says some interesting things in his BSL paper about ‘true in V':  it doesn’t ‘reflect an ontological state of affairs concerning the universe of all sets as a reality to which existence can be ascribed independently of set-theoretic practice’, but rather ‘a façon de parler that only conveys information about set-theorists’ epistemic attitudes, as a description of the status that certain statements have or are expected to have in set-theorist’s eyes’ (p. 80). There is ‘no “external” constraint … to which one must be faithful’, only ‘justifiable procedures’ (p. 80); V is ‘a product of our own, progressively developing along with the advances of set theory’ (p. 93).  This sounds more or less congenial to my Arealist (a non-platonist):   in the course of doing set theory, when we adopt an axiom or prove a theorem from axioms we accept, we say it’s ‘true in V’, and the Arealist will say this along with the realist; the philosophical debate is about what we say when we’re describing set-theoretic activity itself, and here the Arealist denies (and the realist asserts) that it’s out to discover the truth about some objectively existing abstracta.  (By the way, I don’t think ‘truth-value realism’ is the way to go here.  In its usual form, it avoids abstract entities, but there remains an external fact-of-the-matter quite independent of the practice to which we’re supposed to be faithful.)  Unfortunately the rest of my story of the Arealist as it stands won’t be much help because the non-platonistic grounds given there in favor of embracing various set-theoretic methods or principles are fundamentally extrinsic and Sy is out to find a new kind of intrinsic support.

I’m probably insufficiently attentive, or just plain dim, but I confess to being confused about how this new intrinsic evidence is intended to work.   It isn’t a matter of being part of the concept of set, nor is it given by the clear light of mathematical intuition.  It does involve, quoting from Gödel, ‘a more profound understanding of basic concepts underlying logic and mathematics’, and in particular, in Sy’s words, ‘a logical-mathematical analysis of the hyperuniverse’ (p. 79).  Is it just a matter of switching from the concept of set to the concept of the hyperuniverse?  (My guess is no.)  Our examination of the hyperuniverse is supposed to ‘evoke’ (p. 79) certain general principles (the principles are ‘based on’ general features of the hyperuniverse (p. 87)), which will in turn ‘suggest’ (pp. 79, 87) criteria for singling out the preferred universes — and the items ultimately supported by these considerations are the first-order statements true in all preferred universes.

One such general principle is maximality, but I’d like to understand better how it arises intrinsically out of our contemplation of the hyperuniverse (at the top of p. 88).  On p. 93, the principle (or its more specific versions) is said to be ‘the rigorous expression of what it means for an element of the hyperuniverse, i.e., a countable transitive model of ZFC, to display “maximal properties”‘.  Does this mean that maximality for the hyperuniverse derives from a prior principle of maximality inherent in the concept of set?

With all best wishes,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Sun, 3 Aug 2014, W Hugh Woodin wrote:

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

This results from the standard empirical fact that the consistency of a huge variety of statements in set theory is shown assuming the consistency of large cardinal axioms, and there are even “lower bound results” calibrating the strength of the large cardinal axioms required, even at the level of a supercompact. For example, \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is consistent relative to a degree of supercompactness, yet there are models with slightly less large cardinal strength (subcompact cardinals) with the property that \textsf{PFA}(\mathfrak{c}^+\text{-linked}) fails in all of their proper (not necessarily generic) extensions. This is strong evidence that the consistency strength of \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is at the level of a degree of supercompactness. Thus large cardinals provide a “hierarchy” of consistency strengths whose strictness is witnessed by numerous statements of set-theoretic interest. I see this as sufficient justification for the consistency of large cardinals; we don’t need inner model theory or some structure theory which follows from their existence to know that they are consistent.
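
To give one further calibration of this kind at a lower level (a standard example, not specific to my own work): by results of Solovay and Shelah,

Con(ZFC + “there is an inaccessible cardinal”) \Leftrightarrow Con(ZFC + “all projective sets of reals are Lebesgue measurable”),

so the inaccessible is not merely sufficient but provably necessary. It is exactly this pattern, repeated up and down the large cardinal hierarchy, that I am pointing to.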

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency. Tremendous structure theory, implications of theorems which are later proved by other means, etc.

Those tests are used (not by me but by a number of set-theorists) in favour of the truth of PD, not just of its consistency! The consistency of PD only needs the consistency of large cardinals, as justified above, and none of these tests.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Your analogy is problematic, as there is no clear distinction between consistency and truth with regard to the ERH; this is because there is only one natural model of first-order arithmetic. But there is a clear distinction between consistency and truth for PD, as there are many natural models of second-order arithmetic (for example, the second-order arithmetic of L and that of a universe with a measurable cardinal already disagree about whether all \Sigma^1_2 sets of reals are Lebesgue measurable).

Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there was any consensus that RH is true this escape clause would not be in the stated rules. Further the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

The consistency of large cardinals does not require any “rich internal theory”. In contrast, the truth of large large cardinal axioms is much harder to justify; indeed I have yet to see a convincing argument for that.

I completely agree this is the basic issue over which we disagree. The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals.

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

The HP is open to the conclusion that large large cardinals exist, but to achieve this one needs a philosophically well-justified and unbiased criterion for the choice of preferred universes. There may be such a criterion but so far the only ones I have come up with either contradict or tolerate large large cardinal existence; none of them succeed in proving large large cardinal existence.

The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Yes, but once again if you set things up to protect desired generalisations of 0^\# then you are showing a bias towards those generalisations. This is in my view unacceptable without justifying that way of setting things up with a philosophically well-justified and unbiased criterion for the choice of preferred universes.

In summary: The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

It would be ideal if the axioms and objects that you want to see in set theory arose as a consequence of a philosophically well-justified approach to truth, but so far I don’t see how to do that (things are pointing in a different direction). I present this to you as an interesting challenge. It should be clear by now that the derivation of large large cardinal axioms from maximality criteria is very problematic (but perhaps not impossible).

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal, the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V, particularly in the context of, for example, the IMH, beyond those already implied by the existence of an inner model with just one Woodin cardinal. This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

See the above. The HP is not biased against large cardinals. Rather, imposing large cardinals at the start of any investigation of truth is where the bias lies.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

OK, for the sake of discussion let us now relativise everything to large cardinal axioms. I.e., the base theory is now ZFC + Arbitrary large cardinals. This is not justified but surely worth investigating.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all ‘possible large cardinals’ whatever that means).

Please see my paper with Honzik about strong reflection. We capture ordinal maximality with the property of “\#-generation”. Roughly speaking, V is \#-generated if it arises from a “sharp” in the same way that L arises from 0^\#. This is in my view the strongest possible form of ordinal maximality. (Aside: It also allows one to argue that reflection is perfectly compatible with a potentialist view of V; no actualism is required.)

Then the context you want is \#-generation plus large cardinals. These are your preferred universes. (Note that large cardinals in V automatically generate large cardinals past the ordinals of V via \#-generation.) Below I will use “strong maximality” to mean \#-generation plus large cardinals.

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of “strong maximality”. Then any two sentences in T are compatible with each other.
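
In symbols (and assuming, as the discussion presupposes, that at least one strongly maximal universe exists): let

T = \{\varphi : M \models \varphi \text{ for every strongly maximal universe } M\}.

If \varphi and \psi both belong to T and M_0 is any strongly maximal universe, then M_0 \models \varphi \wedge \psi, so no two sentences in T can be incompatible.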

Maybe you meant to ask whether any two strongly maximal universes satisfy the same \Pi_2 sentences? The answer is of course “no” if strongly maximal universes can differ about what large cardinals exist (for example, “there is no measurable cardinal” is \Pi_2, so two strongly maximal universes which disagree about the existence of a measurable already disagree about a \Pi_2 sentence). This will surely be the case unless one requires that each strongly maximal universe satisfies “all large cardinal axioms”. How do you formulate that exactly?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

M. Stanley showed that there are countable transitive models M of ZFC with the property that the set of first-order sentences with parameters from M which hold in an arbitrary outer model of M is M-definable. (Note that this is immediate for any M if one considers only set-generic outer models of M!) This can also be done for the theory ZFC + Large cardinals (fixing a notion of large cardinal axiom). I expect that this can also be done for strong maximality (and indeed holds for all strongly maximal universes) and therefore the answer to your question is “yes”.
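
To spell out the parenthetical remark (only a sketch): for a fixed countable transitive model M, the set

\{\varphi : \varphi \text{ holds in every set-generic extension of } M\} = \{\varphi : \mathbf{1}_{\mathbb{P}} \Vdash_{\mathbb{P}} \varphi \text{ for every poset } \mathbb{P} \in M\}

is definable in M by the definability of the forcing relation, and similarly with “some” in place of “every”, using \exists \mathbb{P}\, \exists p\, (p \Vdash_{\mathbb{P}} \varphi); parameters from M are handled via their check names. Stanley’s theorem is the non-trivial analogue of this for arbitrary outer models.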

Here is the first point. If there is a proper class of X-cardinals  (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence  \phi holds where \phi = “Every set A belongs to a set model with an X-cardinal above A.” \phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega provable.

So we know something about what happens in set-generic extensions. But this tells us nothing about what happens in more general extensions, even those which satisfy \phi.

Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Why? You want to claim that if a \Pi_2 sentence \psi holds in all strongly maximal universes then it follows from some \phi as above. Equivalently, if a \Sigma_2 sentence is compatible with all large cardinal axioms then it holds in some strongly maximal universe. But again we have a problem with the concept of “all large cardinal axioms”. If this really could be formulated then we have the problem sentence “For some large cardinal axiom \phi there is no transitive model of \phi“. This is \Sigma_2, compatible with all large cardinal axioms but false in all strongly maximal universes. So you are forced to fix a bound on the large cardinal axioms you consider, but then you lose consequences of strong maximality.

It seems better to drop set-genericity and consider the IMH for strongly maximal universes. Honzik and I verified the consistency of the IMH for ordinal maximal (= \#-generated) universes using a variant of Jensen coding, and an interesting challenge is to do this for strongly maximal universes (i.e. add the large cardinals). As you know, the mathematics is now quite hard, as so far we lack the inner model theory and Jensen coding theorems that are available for the smaller large cardinals.

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega proofs have a notion of (ordinal) length and in the ordering of the \Omega-provable sentences by proofs of shortest  length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M make perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example, if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

As I said above, I fail to see how you reach the conclusion that the \Pi_2 consequences of large cardinals are those generated by your sentences \phi. In any case your conclusion is that the \Pi_2 consequences of large cardinals are just the local versions of the very same large cardinal axioms. How does this constitute a clarification?

My concern with HP is this. I do not see a scenario in which HP, even with strong rank maximality, can lead anywhere on the fundamental questions involving the large cardinal hierarchy. The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V, at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V, and this is not definable within the hyper-universe of V.

I don’t know what fundamental questions involving the large cardinal hierarchy you refer to. Consider IMH (Strong maximality). A consistency proof for this would be very revealing. Of course any model of this will satisfy PD but this does not mean that PD holds in V. As I said it is likely that the \Gamma-logic of any such universe will be definable in that universe, where \Gamma-logic is the (in my view more natural and better motivated) version of \Omega-logic which refers to arbitrary outer models and not just to set-generic ones.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

No. Consider \textsf{SIMH}^\# (the IMH with absolute parameters for ordinal maximal, i.e. \#-generated, universes). I conjecture that this is consistent. Just as the \textsf{IMH}^\# is consistent with strong maximality, I expect the same for the \textsf{SIMH}^\#. And the \textsf{SIMH}^\# implies that the continuum is very large.

You will disagree and perhaps that is the conclusion of this discussion, we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

This does sound interesting, but I confess that I don’t quite understand it. The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (the hierarchy appears to be already exhausted as far as consistency is concerned; I think it likely that Reinhardt cardinals are inconsistent even without AC) but will instead concern new and subtle forms of absoluteness / powerset maximality.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

Many thanks, Hugh, for your stimulating comments,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Penny,

Many thanks for your insightful comments. Please see my responses below.

On Tue, 5 Aug 2014, Penelope Maddy wrote:

Thank you for the plug, Sol.  Sy says some interesting things in his BSL paper about ‘true in V':  it doesn’t ‘reflect an ontological state of affairs concerning the universe of all sets as a reality to which existence can be ascribed independently of set-theoretic practice’, but rather ‘a façon de parler that only conveys information about set-theorists’ epistemic attitudes, as a description of the status that certain statements have or are expected to have in set-theorist’s eyes’ (p. 80). There is ‘no “external” constraint … to which one must be faithful’, only ‘justifiable procedures’ (p. 80); V is ‘a product of our own, progressively developing along with the advances of set theory’ (p. 93).  This sounds more or less congenial to my Arealist (a non-platonist):   in the course of doing set theory, when we adopt an axiom or prove a theorem from axioms we accept, we say it’s ‘true in V’, and the Arealist will say this along with the realist; the philosophical debate is about what we say when we’re describing set-theoretic activity itself, and here the Arealist denies (and the realist asserts) that it’s out to discover the truth about some objectively existing abstracta.  (By the way, I don’t think ‘truth-value realism’ is the way to go here.  In its usual form, it avoids abstract entities, but there remains an external fact-of-the-matter quite independent of the practice to which we’re supposed to be faithful.)

My apologies here. In my reply to Sol I only made reference to truth-value realism for the purpose of illustrating that one can ascribe meaning to set-theoretic truth without being a platonist. Indeed my view of truth is very far from that of the truth-value realist; it is entirely epistemic in nature.

Unfortunately the rest of my story of the Arealist as it stands won’t be much help because the non-platonistic grounds given there in favor of embracing various set-theoretic methods or principles are fundamentally extrinsic and Sy is out to find a new kind of intrinsic support.

Yes. I am trying to make the case that there are unexplored intrinsic sources of evidence in set theory. Some have argued that we must rely solely on extrinsic sources, evidence emanating directly from current set-theoretic practice, because intrinsic evidence cannot take us past what is derivable from the maximal iterative conception. I do agree that this conception can lead us no further than reflection principles compatible with V = L.

But in fact my intuition goes further and suggests that no intrinsic first-order property of the universe of sets will enable us to resolve problems like CH. We have to examine features of the universe of sets that are only revealed by comparing it to other possible universes (goodbye Platonism) and infer first-order properties from these “higher-order” intrinsic features of V (a name for the epistemically-conceived universe of sets).

Obviously a direct comparison of V with other universes is not possible (V contains all sets), so we must instead content ourselves with the comparison of pictures of V. Such pictures are provided precisely by the hyperuniverse (also conceived of non-platonistically). And by Löwenheim-Skolem we lose none of the first-order features of V when we model it within the hyperuniverse.
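
In slightly more detail (and only schematically, since officially this must be carried out fragment-by-fragment rather than for full ZFC at once): if \varphi is a first-order sentence true in V, then by Levy reflection there is an ordinal \alpha such that V_\alpha \models \varphi together with any prescribed finite fragment of ZFC; by downward Löwenheim-Skolem there is a countable N \prec V_\alpha; and the Mostowski collapse of N is a countable transitive model in which \varphi holds. So every first-order feature of V is mirrored by members of the hyperuniverse.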

Now consider the effect that this has on the principle of maximality. Whereas the maximal iterative concept allows us to talk about generating sets inside V by iterating powerset “as long as possible”, the hyperuniverse allows us to express the maximality of (a picture of) V in a more powerful way: maximal means “as large as possible in comparison to other universes” and the hyperuniverse gives a precise meaning to this by providing those “other universes”. Maximality is no longer just an internal matter regarding the existence of sets within V, but is also an external matter regarding the largeness of the universe of sets as a whole in comparison to other universes. Thus the move from the concept of set to the concept of set-theoretic universe.
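
To make “as large as possible in comparison to other universes” concrete, recall the rough statement of the IMH (the prototype of such an external maximality criterion), formulated for an element M of the hyperuniverse:

\textsf{IMH}: If a first-order sentence holds in some inner model of some outer model of M, then it already holds in some inner model of M.

Here M is maximal not by containing more sets, but by already realising, within its inner models, anything that could be realised by enlarging it.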

Now comes a crucial point. I assert that maximality is an intrinsic feature of the universe of sets. Certainly I can assert that there is a rich discussion of maximality in the philosophy of set theory literature, with some strong advocates of the principle, including Gödel, Scott and yourself (correct me if I am wrong).

Maximality is not the only philosophical principle regarding the set-theoretic universe that drives the HP but surely it is currently the most important one. Another is omniscience (the definability in V of truth across universes external to V). Maybe there will be more.

I’m probably insufficiently attentive, or just plain dim, but I confess to being confused about how this new intrinsic evidence is intended to work.   It isn’t a matter of being part of the concept of set, nor is it given by the clear light of mathematical intuition.  It does involve, quoting from Gödel, ‘a more profound understanding of basic concepts underlying logic and mathematics’, and in particular, in Sy’s words, ‘a logical-mathematical analysis of the hyperuniverse’ (p. 79).  Is it just a matter of switching from the concept of set to the concept of the hyperuniverse?  (My guess is no.)  Our examination of the hyperuniverse is supposed to ‘evoke’ (p. 79) certain general principles (the principles are ‘based on’ general features of the hyperuniverse (p. 87)), which will in turn ‘suggest’ (pp. 79, 87) criteria for singling out the preferred universes — and the items ultimately supported by these considerations are the first-order statements true in all preferred universes. One such general principle is maximality, but I’d like to understand better how it arises intrinsically out of our contemplation of the hyperuniverse (at the top of p. 88).  On p. 93, the principle (or its more specific versions) is said to be ‘the rigorous expression of what it means for an element of the hyperuniverse, i.e., a countable transitive model of ZFC, to display “maximal properties”‘.  Does this mean that maximality for the hyperuniverse derives from a prior principle of maximality inherent in the concept of set?

You ask pointed questions; I hope that what I say above is persuasive!

Many thanks for your interest, and very best wishes,
Sy