# Re: Paper and slides on indefiniteness of CH

Dear Harvey,

Ok, I will add some comments to my response. Below is simply how I currently see things. It is obvious based on this account that an inconsistency in PD would render this picture completely vacuous and so I for one would have to start all over in trying to understand V. But that road would be much harder given the lessons of the collapse caused by the inconsistency of PD. How could one (i.e. me) be at all convinced that the intuitions behind ZFC are not similarly flawed?

I want to emphasize that what I describe below is just my admittedly very optimistic view. I am not advocating a program of discovery or anything like that. I am also not arguing for this view here. I am just describing how I see things now. (But that noted, there are rather specific conjectures which if proved, I think would argue strongly for this view. And if these conjectures are false then I will have to alter my view.)

This view is based on a substantial number of key theorems which have been proved (and not just by me) over the last several decades.

Starting with the conception of V as given by the ZFC axioms, there is a natural expansion of the conception along the following lines.

The Jensen Covering Lemma argues for $0^\#$ and introduces a horizontal maximality notion. This is the first line and gives sharps for all sets. This in turn activates a second line, determinacy principles.

The core model induction now gets under way and one is quickly led to PD and $\text{AD}^{L(\mathbb R)}$, and reaches the stage where one has iterable inner models with a proper class of Woodin cardinals. This is all driven by the horizontal maximality principle (roughly, if there is no iterable inner model with a proper class of Woodin cardinals then there is a generalization of L to which V is close at all large enough cardinals and which has no sharp, etc.).

Adding the hypothesis that there is a proper class of Woodin cardinals, one can now directly define the maximum extension of the projective sets and develop the basic theory of these sets. This is the collection of universally Baire sets (which has an elementary definition). The important point here is that unlike the definitions of the projective sets, this collection is not defined from below. (There is a much more technical definition one can give without assuming the existence of a proper class of Woodin cardinals).

Continuing, one is led to degrees of supercompactness (the details here are now based on quite a number of conjectures, but let’s ignore that).

Also a third line is activated now. This is the generalization of determinacy from $L(\mathbb R) = L(P(\omega))$ to the level of $L(P(\lambda))$ for suitable $\lambda > \omega$. These $\lambda$ are where the Axiom I0 holds. This axiom is among the strongest large cardinal axioms we currently know of which are relatively consistent with the Axiom of Choice. There are many examples of rather remarkable parallels between $L(\mathbb R)$ in the context that AD holds in $L(\mathbb R)$, and $L(P(\lambda))$ in the context that the Axiom I0 holds at $\lambda$.

Now things start accelerating. One is quickly led to the theorem that the existence of the generalization of L to the level of exactly one supercompact cardinal is where the expansion driven by the horizontal maximality principles stops. This inner model cannot have a sharp and is provably close to V (if it exists in the form of a weak extender model for supercompactness). So the line (based on horizontal maximality) necessarily stops (if this inner model exists) and one is left with vertical maximality and the third line (based on I0-like axioms).

One is also led by consideration of the universally Baire sets to the formulation of the axiom that V = Ultimate L and the Ultimate L Conjecture. The latter conjecture if true confirms that the line driven by horizontal maximality principles ceases. Let’s assume the Ultimate L Conjecture is true.

Now comes (really extreme) sheer speculation. The vertical expansion continues, driven by the consequences for Ultimate L of the existence of large cardinals within Ultimate L.

By the universality theorem, there must exist $\lambda$ where the Axiom I0 holds in Ultimate L. Consider for example the least such cardinal in Ultimate L. The corresponding $L(P(\lambda))$ must have a canonical theory, where of course I am referring to the $L(P(\lambda))$ of Ultimate L.

It has been known for quite some time that if the Axiom I0 holds at a given $\lambda$ then the detailed structure theory of $L(P(\lambda)) = L(V_{\lambda+1})$ above $\lambda$ can be severely affected by forcing with partial orders of size less than $\lambda$. But these extensions must preserve that Axiom I0 holds at $\lambda$. So there are natural features of $L(P(\lambda))$ above $\lambda$ which are quite fragile relative to forcing.

Thus unlike the case of $L(\mathbb R)$ where AD gives “complete information”, for $L(P(\lambda))$ one seems to need two things: First the relevant generalization of AD which arguably is provided by Axiom I0 and second, the correct theory of $V_\lambda$. The speculation is that V = Ultimate L provides the latter.

The key question will be: Does the global structure theory of $L(P(\lambda))$, as given in the context of the Axiom I0 and V = Ultimate L, imply that V = Ultimate L must hold in $V_\lambda$?

If this convergence happens at $\lambda$ and the structure theory is at all “natural” then at least for me this would absolutely confirm that V = Ultimate L.

Aside: This is not an entirely unreasonable possibility. There are quite a number of theorems now which show that $\text{AD}^{L(\mathbb R)}$ follows from its most basic consequences.

For example, it follows from just these consequences: all sets are Lebesgue measurable, all sets have the property of Baire, and uniformization (by functions in $L(\mathbb R)$) holds for the sets $A \subset \mathbb R \times \mathbb R$ which are $\Sigma_1$-definable in $L(\mathbb R)$ from the parameter $\mathbb R$. This is essentially the maximum amount of uniformization which can hold in $L(\mathbb R)$ without yielding the Axiom of Choice.

Thus for $L(\mathbb R)$, the entire global structure theory, i.e. that given by $\text{AD}^{L(\mathbb R)}$, is implied by a small number of its fundamental consequences.

Regards,
Hugh

# Re: Paper and slides on indefiniteness of CH

Dear Sy,

I write with your permission to summarize for the group a brief exchange we had in private. Before that exchange began, you had agreed to these three points:

1. The relevant concept is the familiar iterative conception, which includes a rough idea of maximality in ‘height’ and ‘width’.
2. To give an intrinsic justification or intrinsic evidence for a set-theoretic principle is to show that it is implicit in the concept in (1).
3. The HP is a method for extracting more of the implicit content of the concept in (1) than has heretofore been possible.

We then set about exploring how the process in (3) is supposed to work, beginning with more careful attention to the iterative conception in (1). You summarize it this way:

“Maximal” means “as large as possible”, whether one is talking about

a. Vertical or ordinal-maximality: the ordinal sequence is “as long as possible”, or about

b. Horizontal or powerset-maximality: the powerset of any set is “as large as possible”.

In other words there is implicitly a “comparative” (and “modal”) aspect to “maximality”, as to be “as large as possible” can only mean “as large as possible within the realm of ‘possibilities’”.

Thus to explain ordinal- and powerset-maximality we need to compare different possible mental pictures of the set-theoretic universe. In the case of ordinal-maximality we need to consider the possibility of two mental pictures P and P* where P* “lengthens” P, i.e. the universe described by P is a rank initial segment of the universe described by P*. We can now begin to explain ordinal-maximality. If a picture P of the universe is ordinal-maximal then any “property” of the universe described by P also holds of a rank initial segment of that universe. This is also called “reflection”.

In the case of powerset maximality we need to consider the possibility of two mental pictures P and P* of the universe where P* “thickens” P, i.e. the universe described by P is a proper inner model of the universe described by P*.

There seemed to me to be something off about a universe being ‘maximal in width’, but also having a ‘thickening’. Citing Peter Koellner’s work, you replied that reflection actually involves ‘lengthenings’ (to which the ‘thickenings’ would be analogous), because it appeals to higher-order logics:

Reflection has the appearance of being “internal” to $V$, referring only to $V$ and its rank initial segments. But this is a false impression, as “reflection” is normally taken to mean more than 1st-order reflection. Consider 2nd-order reflection (for simplicity without parameters):

$({*})$ If a 2nd-order sentence holds of $V$ then it holds of some $V_\alpha$.

This is equivalent to:

$({*}{*})$ If a 1st-order sentence holds of $V_{\text{Ord} + 1}$ then it holds of some $V_{\alpha + 1}$,

where $\text{Ord}$ denotes the class of ordinals and $V_{\text{Ord} + 1}$ denotes the (3rd-order) collection of classes. In other words, 2nd-order reflection is just 1st-order reflection from $V_{\text{Ord} + 1}$ to some $V_{\alpha + 1}$. Note that $V_{\text{Ord} + 1}$ is a “lengthening” of $V = V_\text{Ord}$. Analogously, 3rd-order reflection is 1st-order reflection from the lengthening $V_{\text{Ord} + 2}$ to some $V_{\alpha + 2}$. Stronger forms of reflection refer to longer lengthenings of $V$.

1st-order forms of reflection do not require lengthenings of $V$ but are very weak, below one inaccessible cardinal. But higher-order forms yield Mahlo cardinals and much more, and this is what Goedel and others had in mind when they spoke of reflection.

Another way of seeing that lengthenings are implicit in reflection is as follows. In its most general form, reflection says:

$({*}{*}{*})$ If a “property” holds of $V$ then it holds of some $V_\alpha$.

This is equivalent to:

$({*}{*}{*}{*})$ If a “property” holds of each $V_\alpha$ then it holds of $V$.

[$({*}{*}{*})$ for a "property" is logically equivalent to $({*}{*}{*}{*})$ for the negation of that "property".]
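A minimal way to spell out the bracketed equivalence, writing $\varphi$ for the “property” in question:

```latex
\begin{align*}
({*}{*}{*})\ \text{for }\varphi:\quad
  & \varphi(V) \rightarrow \exists\alpha\,\varphi(V_\alpha)\\
\text{contrapositive:}\quad
  & \neg\,\exists\alpha\,\varphi(V_\alpha) \rightarrow \neg\varphi(V)\\
\text{equivalently:}\quad
  & \forall\alpha\,\neg\varphi(V_\alpha) \rightarrow \neg\varphi(V),
\end{align*}
```

and the last line is exactly $({*}{*}{*}{*})$ with $\neg\varphi$ in place of the “property”.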

OK, now apply $({*}{*}{*}{*})$ to the property of having a lengthening that models ZFC. Clearly each $V_\alpha$ has such a lengthening, namely $V$. So by $({*}{*}{*}{*})$, $V$ itself has lengthenings that model ZFC! One can then use this to infer huge amounts of reflection, far past what Goedel was talking about.

I am not assuming that everybody is a “potentialist” about $V$. Even the Platonist can have mental images of the lengthenings demanded for reflection. And without such lengthenings, reflection has been reduced to a principle weaker than one inaccessible cardinal.

Now given that lengthenings are essential to ordinal-maximality isn’t it clear that thickenings are essential to powerset-maximality? We can then begin to explain powerset-maximality as follows: A picture P of the universe is powerset-maximal if any “property” of the universe described by a thickening of P also holds of the universe described by some thinning of P. What I called the weak-IMH is the “follow your nose” mathematical formulation of this notion of powerset-maximality for first-order properties.

(So, what do you think, Peter?)

Finally, you suggested that you might consider retracting (2) above and returning to the proposal of a different conception of set. The challenge there is to do so without returning to the unappealing idea that ‘intrinsic justification’ and ‘set-theoretic truth’ are determined by a conception of the set-theoretic universe that’s special to a select group.

All best,
Pen

# Re: Paper and slides on indefiniteness of CH

Dear Penny (or do you prefer Pen or Penelope?),

Thanks for these clarifications and amendments!  (The only changes of mind that strike me as objectionable are the ones where the person pretends it hasn’t happened.)

I am relieved to hear that.

I’m still keen to formulate a nice tight summary of your approach, and then to raise a couple of questions, so let me take another shot at it. Here’s a revised version of the previous summary:

Many thanks for doing this. But you may lose patience with me, as I am still going to be difficult and fine-tune your description even further! My apologies in advance.

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it).

I think that I agree but am not entirely clear about your use of the term “external truth”. For example, I remain faithful to the extrinsically-confirmed fact that large cardinal axioms are consistent. Is that part of what you mean by “external truth”? With this one exception, my concept of truth is entirely based on intrinsic evidence.

One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations.  The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims.  These are the ones that need to be taken seriously as we evaluate any candidate for a new set-theoretic axiom or principle.  They include ZFC and the consistency of LCs.

Tatiana and I talked about “de facto” truth in our BSL paper, but I no longer think that this is necessary. The only “de facto” claims other than the consistency of large cardinals that must be strictly respected are intrinsic (the axioms of ZFC, for example).

The intrinsic constraints aren’t limited to items that are implicit in the concept of set.  One of the items present in this concept is a notion of maximality.  The new intrinsic considerations arise when we begin to consider, in addition, the concept of the hyperuniverse.

Sorry to be difficult, but the 2nd concept is not “hyperuniverse” but “universe of sets”. The hyperuniverse is just a mathematical construct that facilitates the investigation and clarification of the concept of “universe of sets”. This approach is so thoroughly anti-platonistic and epistemic that all one has is an idea of “V = the universe of sets” that is expressed through a wide spectrum of different “pictures of V”. The hyperuniverse allows one to capture this idea of “picture of V” in a sufficiently precise way to enable the logico-mathematical examination and comparison of the different pictures.

One of the items present in this concept is a new notion of maximality, building on the old, that generates the schema of Logical Maximality and its various instances (and more, set aside for now).

Yes, but I would not say that it “builds on the old” if this refers just to the maximal iterative conception. The new maximality principles are different from that conception not just because they deal with external features of universes and are logical in nature, but also because they typically emphasize not the length of the ordinal sequence (vertical maximality) but the strength of the powerset operation (horizontal maximality). But maybe by the “old notion of maximality” you were referring to more than just the maximal iterative conception. In any case, this is not a major issue as one could just drop the phrase “building on the old”.

At this point, we have the de facto part of practice and various maximality principles.  If the principles conflict with the de facto part, they’re subject to serious question (see below).  They’re further tested by their ability to settle independent questions.  Once we’re settled on a principle, we use it to define ‘preferred universe’ and count as ‘true-in-V’ anything that’s true in all preferred universes.

Yes, but as I said above I’m not too generous about the meaning of “de facto”. Surely many of my colleagues would insist that the existence of large large cardinals is a de facto truth to be respected, but I do not. Second (minor point), the potential conflict with de facto truth is not revealed by the criteria (what you call principles) themselves, but by their first-order consequences, which may take time to extract. Third, notice that there is no static or definitive aspect to this kind of truth. It is a process of dynamic investigation and discovery, because neither the motivating philosophical principles (maximality, omniscience, internal unreachability, …) nor the choice of logico-mathematical criteria instantiating them is fixed; they are subject to enrichment and improvement with the belief that as things progress one is converging towards a coherent and well-justified interpretation of set-theoretic truth.

I hope this has inched a bit closer!  Assuming so, here are the two questions I wanted to pose to you:

Despite all of my complaining above, I do think that your summary is accurate enough that you can now fairly start to attack the programme (!). Just one more thing before I respond to your points below: My collaborator Claudio Ternullo (co-author of “Believing the New Axioms”, under review, on my webpage, the source of much of what I have said in these mails) suggested that I might clarify two more points.

First: The criteria for the choice of preferred universes (Claudio has named them “H-axioms”) should be viewed as expressing “higher-order” features of V. This may be reminiscent of Zermelo’s use of full 2nd-order set theory, but in fact it is quite different, as Zermelo’s very strong criteria lead to a very scanty family of universes, which can reveal nothing about V other than reflection principles (I made this point to Hugh). In fact nearly all of our H-axioms are expressible in a very restricted fragment of 2nd-order set theory (using Barwise’s work in infinitary logic they are first-order over the least universe containing the given universe as an element). So a key feature of the programme is to use intrinsic higher-order features of V to make new discoveries about first-order truth which cannot be seen as intrinsic without the use of higher-order ideas. This fits well with my claim that intrinsic first-order evidence is too limited in its power.

Second: I should clarify that when I refer to “maximality” of universes I have in mind a logical notion of maximality, i.e., we are not maximising the family of sets based on some concept of the “absolute infinite” but rather maximising a family of logical properties: if a logical property occurs externally then it also occurs internally.

Having said that I turn now to your questions.

1.  What is the status of ‘the concept of set’ and ‘the concept of set-theoretic universe’?

This might sound like a nicety of interest only to philosophers, but there’s a real danger of falling into something ‘external’, something too close for comfort to an ontology of abstracta or a form of truth-value realism.

The concept of set is clear enough in the discussion, I have not proposed any change to its usual meaning.

Let me illustrate the concept of universe using maximality. Indeed the word “external” does enter the discussion but only in a very indirect and limited way.

Internal maximality is the usual form, an example is given by the maximal iterative conception. It is an assertion about the freedom to generate new sets through methods of construction and iteration.

External maximality is only conceivable in the context of a non-platonistic conception of V. As there is no fixed ontology, we can easily imagine that there might be a broader interpretation of V, as a universe containing more sets than our initial interpretation does. However we have no clear mechanism for producing such a broadening; there remains only the anti-ontological idea that this should be possible. External maximality asserts that nothing meaningful would be gained by such a broadening. As we have no clear mechanism for producing such broadenings, we can only work indirectly with small pictures of V, elements of the hyperuniverse, where such broadenings are indeed possible and the concept of broadening takes on a precise meaning (a broadening is simply another picture of V which contains the given picture as a subset without changing its ordinals).

A worry is that by substituting a small picture for V we have not been faithful to our conception of V. This is true (for example, V is not countable, but the pictures of V as provided by the hyperuniverse are); however, our pictures of V do retain the first-order properties of V, and those are all we are interested in anyway. Thus we can fairly capture the first-order consequences of the external maximality of V using pictures of V that are genuinely maximal in a context where we really do have a mechanism for broadening to larger (pictures of) universes.

So my point is that there is no need in the HP to embed V itself into a multiverse and no danger of falling victim to a Balaguerian full-blooded Platonism. The only multiverse needed is the hyperuniverse of (countable) pictures of V.

Thus the concept of universe refers to possible interpretations of V and we can study these universes through pictures of them provided by the hyperuniverse. This is not to say however that the hyperuniverse is fixed! It is just as epistemically conceived as is V. Indeed each interpretation of V gives rise to an interpretation of the hyperuniverse H.

2.  The challenge we friends of extrinsic justifications like to put to defenders of intrinsic justifications is this: suppose some candidate principle generates a lot of deep-looking mathematics, but conflicts with intrinsically generated principles; would you really want to say ‘gee, that’s too bad, but we have to jettison that deep-looking mathematics’?  (I’d argue that this isn’t entirely hypothetical.  Choice was initially controversial largely because it conflicted with one strong theme in the contemporary concept of set, namely, the idea that a set is determined by a property.  The mathematics generated by Choice was so irresistible that (much of the) mathematical community switched to the iterative conception. Trying to shut down attractive mathematical avenues has been a fool’s errand in the history of mathematics.)

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

And these remarks to Sol also jumped out:

Another very interesting question concerns the relationship between truth and practice. It is perfectly possible to develop the mathematics of set theory without consideration of set-theoretic truth. Indeed Saharon has suggested that ZFC exhausts what we can say regarding truth, but of course that does not force him to work just in ZFC. Conversely, the HP makes it clear that one can investigate truth in set theory quite independently from set-theoretic practice; indeed the IMH arose from such an investigation, and some would argue that it conflicts with set-theoretic practice (as it denies the existence of inaccessibles). So what is the relationship between truth and practice? If there are compelling arguments that the continuum is large and measurable cardinals exist only in inner models but not in V, will this or should this have an effect on the development of set theory? Conversely, should the very same compelling arguments be rejected because their consequences appear to be in conflict with current set-theoretic practice?

And today, to me, you add:

I see that the HP is the correct source for axiom *candidates* which must then be tested against current set-theoretic practice. There is no naturalist leaning here, as I am in no way allowing set-theoretic practice to influence the choice of axiom-candidates; I am only allowing a certain veto power by the mathematical community. The ideal situation is if an (intrinsically-based) axiom candidate is also evidenced by set-theoretic practice; then a strong case can be made for its truth.

But I am very close to dropping this last “veto power” idea in favour of the following (which I already mentioned to Sol in an earlier mail): Perhaps we should accept the fact that set-theoretic truth and set-theoretic practice are quite independent of each other and not worry when we see conflicts between them. Maybe the existence of measurable cardinals is not “true” but set theory can proceed perfectly well without taking this into consideration.

Let me just make two remarks on all this.  First, if you allow the practice to have ‘veto power’, I don’t see how you aren’t allowing it to influence the choice of principles.  Second, if you don’t allow the practice to have ‘veto power’, but you also don’t demand that the practice conform to truth (as I was imagining in my generic challenge to intrinsic justification given above), then — to put it bluntly — who cares about truth?  I thought the whole project was to gain useful guidance for the future development of set theory.

It seems that you have now isolated the key point in this discussion: what is the point of trying to clarify truth in set theory? I never imagined that it was to guide the future development of set theory! (I’ll say below what I thought the point was.)

But I think that I just fell over the edge and am ready to revoke my generous offer of “veto power” to the working set-theorist. Doing so takes the thrust out of intrinsically based discoveries about truth. You are absolutely right, “veto power” would constrain the necessary freedom one needs in the investigation of intrinsically-based criteria for the choice of preferred universes. The only way to avoid that would be to gather together a group of brilliant young set-theorists whose minds have not yet been influenced (polluted?) by set-theoretic practice, deny them access to the latest results in set theory and set them to work on the HP in isolation. From time to time somebody would have to drop by and provide them with the mathematical methods they need to create preferred universes. Then after a good amount of time we could see what conclusions they reach! LC? PD? CH? What?

Obviously my plan is unrealistic. So forget the “veto power”. Now what? Well, I guess we have a bifurcation:

Penny truth: Truth derived from and intended for the guidance of the development of set theory.

Sy truth: Truth resulting purely from an investigation of intrinsically-based criteria for the choice of preferred pictures of the universe of sets. The only deference paid to set-theoretic practice is to respect the consistency of large cardinals.

There is no a priori reason to think that these two forms of truth will be compatible with each other.

I owe you an answer to the question: Why study Sy truth?

Aside from the obvious appeal on purely philosophical grounds of understanding what is intrinsic about the fundamental and important concept of set, I would like to have a notion of truth in set theory that is immune to the influence of fads, forceful personalities, available grant money, … I really am not confident that what we now consider to be important in the subject will be important in the future; I am more confident about the “stability” of Sy truth. Second, and this may appeal to you more, it is already clear that the new approach taken by the HP has generated new mathematical ideas that never would have been generated through the usual practice. Taking a practice-independent look at set-theoretic truth generates new forms of set-theoretic practice. And I do like the practice of set theory, even though I don’t want it to dictate investigations of truth! It is valuable to develop set theory in new directions.

Now by bifurcating truth into Penny truth and Sy truth does one in fact eliminate some of the conflicts that you have seen arise in my exchanges with Hugh? I imagine that PD is Penny-true but whether it is Sy-true or not is still open.

I end this mail with a question for you: How does what I call Penny-truth work (it’s OK to just tell me to read your books)? And do you think that it has succeeded or will succeed in guiding the development of set theory? Is there a danger that it will guide it away from areas that should have been investigated?

All the best,
Sy