I will start in this first post by introducing the HOD Dichotomy Theorem. This dichotomy leads to a fork in the road, each side of which points to a different future of set theory. The first leads to the prospect of an ultimate inner model—one that is compatible with all (standard) large cardinal axioms—and the Ultimate-$L$ Conjecture is a precise conjecture as to what such an inner model might look like. The second leads to the prospect of a large cardinal hierarchy that transcends the axiom of choice. These two directions will be described in the second and third posts, respectively.

Each possibility is of great foundational significance. In this respect, we are at a decisive phase in the development of the search for new axioms.

**1. The HOD Dichotomy**

As motivation for the HOD Dichotomy, we begin with the $L$ Dichotomy.

**1.1. The $L$ Dichotomy**

The following remarkable result is a combination of results due to Jensen, Kunen, and Silver, the most profound part being due to Jensen.

**Theorem 1 (The $L$ Dichotomy Theorem)** *Exactly one of the following holds:*

- For all singular cardinals $\gamma$,
  - $\gamma$ is singular in $L$, and
  - $(\gamma^+)^L = \gamma^+$.

- Every uncountable cardinal is inaccessible in $L$.

This is a remarkable dichotomy. In the first case (where $L$ is close to $V$) the cardinal structure of $L$ provides us with a good approximation to that of $V$, while in the second case (where $L$ is far from $V$) the cardinal structure of $L$ is a very poor approximation to that of $V$, since every (truly) uncountable cardinal—$\aleph_1$, $\aleph_2$, etc.—is misidentified in $L$ as an inaccessible cardinal (and much more—it is thought to be a Mahlo cardinal, a weakly compact cardinal, an indescribable cardinal and, more generally, it is thought to have any large cardinal property that is consistent with $V = L$).

There is a “switch”—a certain real number, $0^\#$—the existence of which determines which case holds: If $0^\#$ does not exist then we are in the first case, where $L$ is close to $V$, and if $0^\#$ exists then we are in the second case, where $L$ is far from $V$. But, although $0^\#$ is lurking in the background, the dichotomy is stated without reference to it.

A natural question is whether there is a generalization of the $L$ Dichotomy Theorem, where $L$ is replaced by richer inner models. It turns out that there are generalizations where $L$ is replaced by richer “$L$-like” fine-structural models. But the real import of our question is whether there is in some sense an “ultimate” generalization of the dichotomy. To render this question precise let us compare the notion of definability involved in $L$ with richer notions of definability.

**1.2. The Inner Models $L$ and HOD**

Recall that $L$ is defined by letting $L_0 = \emptyset$, $L_{\alpha+1} = \mathrm{Def}(L_\alpha)$, $L_\lambda = \bigcup_{\alpha<\lambda} L_\alpha$ (for limit ordinals $\lambda$) and $L = \bigcup_{\alpha\in\mathrm{Ord}} L_\alpha$, where $\mathrm{Def}(X)$ is the set of all subsets of $X$ which are first-order definable over $X$ with parameters from $X$. Thus, $L$ is obtained by iterating first-order definability (locally) along the stem of the ordinals.

Now, there is a hierarchy of definability that goes far beyond that involved in the above construction. Each level of definability in this hierarchy can be transcended by a “diagonalization”, leading to a richer notion of definability. Gödel asked whether there was a notion of definability that was in some sense “absolute” (or “ultimate”) in that it could not be transcended in this way. He noticed two things. First, he noticed that any notion of definability which does not render all of the ordinals definable can be transcended (as can be seen by considering the least ordinal which is not definable according to the notion); and so any such notion must include OD, the collection of sets that are ordinal definable. Second, he noticed that the notion of ordinal definability cannot be so transcended (since by reflection OD is ordinal definable). It is for this reason that Gödel proposed the notion of ordinal definability as a candidate for an “absolute” (or “ultimate”) notion of definability.
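Gödel’s two observations can be rendered schematically as follows (the notation here is mine, a sketch rather than a precise formalization):

```latex
% (1) Diagonalization: if a definability notion D leaves some ordinal
%     undefinable, then D is transcended, since the ordinal
\alpha_0 \;=\; \min\{\alpha \in \mathrm{Ord} : \alpha \text{ is not } D\text{-definable}\}
%     is defined by reference to D, hence definable once D is adjoined
%     as a predicate -- but, by choice of alpha_0, not D-definable.

% (2) Reflection: ordinal definability is not transcended in this way,
%     since by the Reflection Theorem OD is itself definable from
%     ordinal parameters:
x \in \mathrm{OD} \iff \exists \alpha\, \exists \vec{\beta} \in \alpha\;
  \big( x \in V_\alpha \ \wedge\ x \text{ is definable in } (V_\alpha, \in)
  \text{ from } \vec{\beta} \big)
```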

However, OD is not quite what we are looking for, since OD is not a model of ZFC. But if one lets HOD be the collection of sets that are *hereditarily* OD (that is, $x \in \mathrm{HOD}$ iff $x \in \mathrm{OD}$ and every element of the transitive closure of $x$ is in OD), then HOD is a model of ZFC.

There are many respects in which the inner models $L$ and HOD are at opposite extremes.

- $L$ is built up from below by iterating local, first-order definability along the ordinals, while HOD is built from above in a way that renders it entangled with $V$ (indeed HOD is simply $L[T]$, where $T$ is the $\Sigma_2$-theory of the ordinals).
- If $0^\#$ exists then $V$ is not a class-generic extension of $L$ (in fact, $0^\#$, a real number, is not in any class-generic extension of $L$), while, as a theorem of ZFC (and so under any additional large cardinal axioms) due to Vopěnka, $V$ is a class-generic extension of HOD.
- The statement $V = L$ is not compatible with modest large cardinal assumptions (more precisely, it is not compatible with anything from $0^\#$ and beyond), while the statement $V = \mathrm{HOD}$ is compatible with *all* of the (standard) large cardinal assumptions.

So it is natural to ask whether there is a generalization of the $L$ Dichotomy Theorem where $L$ is replaced by the much richer model HOD. It turns out that there is.

**1.3. The HOD Dichotomy**

To describe the HOD Dichotomy we first have to introduce a couple of notions.

**Definition 2** *A cardinal $\delta$ is an extendible cardinal if for all $\lambda > \delta$ there is an elementary embedding*

$$j : V_{\lambda+1} \to V_{j(\lambda)+1}$$

such that $\mathrm{crit}(j) = \delta$ and $j(\delta) > \lambda$.

**Definition 3** *An uncountable regular cardinal $\kappa$ is $\omega$-strongly measurable in HOD if there exists $\eta < \kappa$ such that*

- $\eta$ is a cardinal in HOD and $(2^\eta)^{\mathrm{HOD}} < \kappa$, and
- there is no partition $\langle S_\alpha : \alpha < \eta \rangle$ in HOD of the set $\{\xi < \kappa : \mathrm{cof}(\xi) = \omega\}$ such that for each $\alpha < \eta$, $S_\alpha$ is a stationary subset of $\kappa$.

**Remark 1** *It is routine to show that if $\kappa$ is an uncountable regular cardinal which is $\omega$-strongly measurable in HOD then $\kappa$ is a measurable cardinal in HOD.*
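For the interested reader, the standard argument behind this remark runs as follows (the details here are my reconstruction, under the definition above):

```latex
% Let S = { xi < kappa : cof(xi) = omega } and let eta < kappa witness that
% kappa is omega-strongly measurable in HOD. If every stationary set in
% P(S) \cap HOD could be split in HOD into two stationary pieces, then
% iterating the splitting (using (2^eta)^HOD < kappa to bound the
% bookkeeping) would produce a partition of S in HOD into eta-many
% stationary sets, contradicting the second clause of the definition.
% Hence there is a stationary T in HOD, T subseteq S, on which the club
% filter F on kappa decides every HOD-set:
\forall A \in \mathcal{P}(T) \cap \mathrm{HOD}\;
  \big( A \in \mathcal{F}\upharpoonright T \ \text{ or } \
        T \setminus A \in \mathcal{F}\upharpoonright T \big).
% In HOD, F restricted to T is then a kappa-complete ultrafilter on
% P(T) \cap HOD, so kappa is a measurable cardinal in HOD.
```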

We can now state the HOD Dichotomy Theorem, due to Woodin.

**Theorem 4 (HOD Dichotomy Theorem)** *Suppose that $\delta$ is an extendible cardinal. Then exactly one of the following holds:*

- For all singular cardinals $\gamma > \delta$,
  - $\gamma$ is singular in HOD, and
  - $(\gamma^+)^{\mathrm{HOD}} = \gamma^+$.

- Every regular cardinal greater than or equal to $\delta$ is $\omega$-strongly measurable in HOD.

In the first case we say that HOD is “close” to $V$ above $\delta$ and in the second case we say that HOD is “far” from $V$.

This is a remarkable dichotomy. In the first case, where HOD is “close” to $V$, the cardinal structure of HOD provides us with a good approximation of that of $V$, at least above the extendible cardinal $\delta$, while in the second case, where HOD is “far” from $V$, every regular cardinal greater than or equal to $\delta$ is thought in HOD to be $\omega$-strongly measurable (and hence inaccessible, Mahlo, weakly compact, indescribable and much more, such as measurable).

There are some key differences between the two dichotomy theorems. First, the HOD Dichotomy Theorem is not proved by cases on a “switch”, an analogue of $0^\#$ for HOD. Second, in the case of the HOD Dichotomy Theorem all of the action takes place above the extendible cardinal; that is, it is above the extendible cardinal that HOD is close to $V$ in the first case and far from $V$ in the second case.

**1.4. Two Futures**

It is natural to ask which side of the $L$ Dichotomy holds and which side of the HOD Dichotomy holds. In the case of the $L$ Dichotomy this is not something that can be settled in ZFC, unless in ZFC one could refute the existence of $0^\#$. And in the case of the HOD Dichotomy, as we shall see, it cannot be settled in ZFC, unless one refutes the existence of a certain kind of large cardinal, a “choiceless cardinal”. But for present purposes the point is that we are asking a foundational question, one which (modulo an outright inconsistency) is tied up with the justification of new axioms, and, as such, any answer is going to have to be somewhat delicate.

Let us first deal with the $L$ Dichotomy. In this case there is good reason to believe that the second side of the dichotomy holds—that $L$ is “far” from $V$—since there is good reason to believe that $0^\#$ exists. And the same is true of the analogous dichotomies for the various fine-structural generalizations of $L$. In these cases the future always fell on the second side. Let us now turn to the HOD Dichotomy.

Assume that there is an extendible cardinal $\delta$. The HOD Dichotomy presents us with a fork in the road leading to two possible futures, one in which HOD is close to $V$ above $\delta$, the other in which HOD is far from $V$ above $\delta$. Which future holds?

Extrapolating from the case of $L$ and its generalizations one might think that for similar reasons the future lies on the second side of the HOD Dichotomy. But given the discussion above about the differences between HOD on the one hand and $L$ and its generalizations on the other, we should pause. For we already know that there are many respects in which HOD is close to $V$—for example, $V$ is a class-generic extension of HOD, and HOD can accommodate all (standard) large cardinals. Perhaps the closeness between HOD and $V$ is even greater, and the first side of the HOD Dichotomy obtains.

Let us introduce some terminology.

**Definition 5 (The HOD Hypothesis)** *There is a proper class of regular cardinals $\kappa$ such that $\kappa$ is not $\omega$-strongly measurable in HOD.*

**Definition 6 (The HOD Conjecture)** *The HOD Hypothesis is provable in ZFC.*

Thus, the HOD Conjecture is the conjecture that (provably in ZFC) we are in the first half of the HOD Dichotomy (where HOD is close to $V$).

The HOD Conjecture is a surprising conjecture. It is not the sort of thing that one would readily conjecture. Indeed, when I was a graduate student it went by a different name—it was called the “Silly Conjecture”!

To see why it is such a surprising thing to conjecture, suppose that $\kappa$ is an uncountable regular cardinal which is not $\omega$-strongly measurable in HOD. Then, by definition, for all $\eta < \kappa$ such that $\eta$ is a cardinal in HOD and $(2^\eta)^{\mathrm{HOD}} < \kappa$, there is a partition $\langle S_\alpha : \alpha < \eta \rangle$ in HOD of the set

$$\{\xi < \kappa : \mathrm{cof}(\xi) = \omega\}$$

such that for each $\alpha < \eta$, $S_\alpha$ is a stationary subset of $\kappa$. In other words, we can effect in a definable fashion (that is, in HOD) a partition of this stationary set (defined in $V$) into stationary (in $V$!) sets. That one might prove in ZFC that such definable partitions into truly stationary sets exist for arbitrarily large cardinals would be really quite astonishing. For this reason, on the face of it, it would be much more plausible to conjecture the opposite, in which case, by the HOD Dichotomy Theorem, one would have the surprising result that HOD is close to $V$ above an extendible cardinal.
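To calibrate how astonishing this is, it may help to recall Solovay’s splitting theorem, a standard ZFC fact (the gloss that follows is mine): in $V$ such partitions always exist in abundance, and the entire content of the conjecture is that they can be found inside HOD.

```latex
% Solovay (ZFC): every stationary subset S of a regular uncountable
% cardinal kappa splits into kappa-many pairwise disjoint stationary
% pieces:
S \;=\; \bigsqcup_{\alpha < \kappa} S_\alpha,
  \qquad \text{each } S_\alpha \text{ stationary in } \kappa.
% The HOD Hypothesis demands something much stronger: the partition
% (into eta-many pieces, eta < kappa) must itself lie in HOD while the
% pieces remain stationary in V -- a definable partition into "truly"
% stationary sets.
```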

Let us now summarize the two futures:

- The first future (in which HOD is close to $V$ above $\delta$) is given by the HOD Conjecture. The reasons for the HOD Conjecture come—as we shall see in the second post—from inner model theory. More precisely, the HOD Conjecture follows from the Ultimate-$L$ Conjecture and there is some inner-model-theoretic evidence for the Ultimate-$L$ Conjecture. In this sense the first future is the future in which “inner model theory wins” and where “pattern prevails”.
- The second future (in which HOD is far from $V$ above $\delta$) is given by the possibility of “choiceless large cardinals”, as we shall see in the third post. This is the future in which “chaos reigns”.

I will describe the first future and the reasons for the HOD Conjecture (aka the “Silly Conjecture”) in the next post.

At this point we should look at all possible variants. That’s because I just realised that a good understanding of height potentialism may be necessary even for the proper formulation of height maximality (#-generation). That was the purpose of my recent mail addressed to Geoffrey and Pen (that I cc-ed you and Peter on).

Thanks,

Sy

Before we close this thread, it would be nice if you could state what the current version of the IMH# is. This would at least leave me with something specific to think about.

Is it:

1) (SDF: Nov 5) M is weakly #-generated and for each phi: if for each countable alpha, phi holds in an outer model of M which is generated by an alpha-iterable presharp, then phi holds in an inner model of M.

2) (SDF: Nov 8) M is weakly #-generated and for all phi: Suppose that whenever G is a generator for M (iterable at least to the height of M), phi holds in an outer model of M with a generator which is at least as iterable as G. Then phi holds in an inner model of M.

or something else? Or perhaps it is now a work in progress?

Regards,

Hugh

My participation in this interesting discussion is now at its end, as almost anything I say at this point would just be a repeat of what I’ve already said. I don’t regret having triggered this Great Debate on July 31, in response to your interesting paper, as I have learned enormously from it. Yet at the same time I wish to offer you an apology for the more than 500 subsequent e-mails, which surely far exceeds what you expected or wanted.

Before signing off I’d like to leave you with an abridged summary of my views and also give appropriate thanks to Pen, Geoffrey, Peter, Hugh, Neil and others for their insightful comments. Of course I am happy to discuss matters further with anyone, and indeed there is one question that I am keen to ask Geoffrey and Pen, but I’ll do that “privately” as I do think that this huge e-mail list is no longer the appropriate forum. My guess is that the vast majority of recipients of these messages are quite bored with the whole discussion but too polite to ask me to remove their name from the list.

All the best, and many thanks, Sy

Sy’s Summary

**1. Regarding Evidence for Set-Theoretic Truth and CH**

I see three sources for such evidence: from set theory as a branch of mathematics (Type 1), from set theory as a foundation for mathematics (Type 2) and from set theory as a study of the set-concept (Type 3). More precisely, Type 1 evidence regards new axioms that best serve the interests of the development of “good set theory”, Type 2 evidence regards new axioms that best resolve independent questions in mathematics outside of set theory and Type 3 evidence regards axioms that are derivable from the maximality of V in height and width. In my view: There are and always will be competing Type 1 axioms, corresponding to the myriad forms of “good set theory”. Type 2 evidence has not yet been systematically explored, but at present the axioms that seem to do the best job are the Forcing Axioms, which have powerful combinatorial consequences of genuine interest to mathematicians outside of set theory. (Note: I am ignoring descriptive set theory, despite its deep and impressive connections with mathematics outside of set theory, because the part of descriptive set theory that is relevant to mathematics outside of set theory can be carried out in ZFC.) Type 3 evidence is only now being systematically explored via my Hyperuniverse Programme (HP).

Specifically regarding CH: I do not expect a resolution based solely on Type 1 evidence, as I expect that there will always be “good set theory” that implies it and “good set theory” that refutes it, due to the lack of consensus, even taking predictions and verifications into account, about a single “good set theory”. However the very preliminary indications from Type 2 evidence are that the continuum has size $\aleph_2$ and from Type 3 evidence that the continuum is very large, of size at least the first weakly inaccessible. This very preliminary evidence should however not be taken too seriously, as the studies of Type 2 and Type 3 evidence are in their infancy.

Finally, I would like to propose the view that the most desirable approach to truth in set theory is to combine all three of the above perspectives. If an axiom well serves the development of set theory as a branch of mathematics, is instrumental in resolving independent questions in mathematics outside of set theory and is compatible with (or better, derivable from) the maximality of V in height and width, then there is a strong case to be made for its truth. Of course meeting these three requirements (Types 1, 2 and 3 evidence all at the same time) is a very tall order and it is far too early to make any claims about what axioms may qualify for truth in this strong sense. But I do feel optimistic about our chances of success with this combined approach, perhaps not with CH but with other hypotheses, like PD or even large cardinals.

**2. Regarding the HP (Hyperuniverse Programme)**

Here I want to readily acknowledge the crucial contributions of participants in this discussion for forcing me to strengthen my own understanding of the HP. I began with a departure from the concept of set to the concept of set-theoretic universe, which after exchanges with Pen I chose to abandon. I also had “intrinsic features of V” in mind beyond just maximality, and thanks to Pen I simplified this to just “maximality of V in height and width”, agreeing to ground this form of maximality on intuitions shared by the set theory community. And in response to messages from Pen and Peter, I clarified that the HP aims to determine what is derivable from maximality in height and width and not what is “self-evident” or “unfoldable” from this feature; indeed this determination will be the result of a lengthy process which reaches a consensus, making heavy use of the mathematical techniques of set theory.

A point that caused confusion concerned the ontological framework. Motivated by a crucial message from Geoffrey, I chose to adopt a “single-multiverse” view enhanced by “height potentialism” (despite my own personal views which favour “radical potentialism”). In other words, there is a single V, but V can be lengthened and not thickened. I then explained how it is that one can nevertheless discuss properties of “thickenings” of V in terms of first-order properties of (mild) lengthenings of V and therefore implement forms of maximality which refer to “thickenings” (such as my original IMH criterion). A nice corollary of this is that one can apply Löwenheim–Skolem to argue that the analysis of the relevant maximality criteria can be undertaken entirely within the Hyperuniverse (the multiverse of countable transitive models of ZFC) without changing the results of that analysis. Although this reduction to the Hyperuniverse is optional, it offers a conceptually simpler way to express maximality criteria; thanks to Peter and Hugh I was forced to clarify the point that these Hyperuniverse criteria are expressed within a very restricted language and not all Hyperuniverse properties are expressible in this way.

That is where the HP stands at the moment. Thanks again mostly to Pen, but also to Geoffrey, Peter and Hugh for their helpful insights. There is a huge amount of work to be done concerning the formulation, analysis and unification of different maximality criteria. This is no easy task, as there are many plausible candidates for criteria which reflect the maximality of V in height and width. Moreover some maximality criteria are seen to be inconsistent only after hard mathematical work. I ask my colleagues not to judge the programme before seeing the maximality criteria that it develops as the result of a careful analysis. You will see that valuable evidence for set-theoretic truth will result.

Thanks for your letter.

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelahian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!

I would like to repeat my request: Could you please give us an account of #-generation, explain how it arises from “length maximality”, and make a convincing case that it captures all (in particular, the Erdős cardinal $\kappa(\omega_1)$) and only the large cardinals that we can expect to follow from “length maximality”?

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand it. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to the projective sets via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for the existence of 0#, necessary for this theory, despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the IMH#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (omega_1, omega_2, large, respectively).

You have not understood what I (or Pen, or Tony, or Charles, or anyone else who has discussed this matter in the literature) mean by “prediction and confirmation”. To understand what we mean you have to read the things we wrote; for example, the slides I sent you in response to precisely this question.

You cite cases of the form: “X was working with theory T. X conjectured P. The conjecture turned out to be true. Ergo: T!”

That is clearly not how “prediction and confirmation” works in making a case for new axioms. Why? Take T to be an arbitrary theory, say (to be specific) “$I\Delta_0$ + Exp is not total.” X conjectures that P follows from T. It turns out that X was right. Does that provide evidence for “Exp is not total”?

Certainly not.

This should be evident by looking at the case of “prediction and confirmation” in the physical sciences. Clearly not *every* verified prediction made on the basis of a theory T provides epistemic support for T. There are multiple (obvious) reasons for this, which I won’t rehearse. But let me mention two that are relevant to the present discussion. First, the theory T could have limited scope — it could pertain to what is thought (for other reasons) to be a fragment of the physical universe; e.g. the verified predictions of macroscopic mechanics do not provide epistemic support for conclusions about how subatomic particles behave. Cf. your V=L example. Second, the predictions must bear on the theory in a way that distinguishes it from other, competing theories.

Fine. But falling short of that ideal one at least would like to see a prediction which, if true, would (according to you) lend credence to your program and, if false, would (according to you) take credence away from your program, however slight the change in credence might be. But you appear to have also renounced these weaker rational constraints.

Fine. The Hyperuniverse Program is a different sort of thing. It isn’t like (an analogue of) astronomy. And you certainly don’t want it to be like (an analogue of) astrology. So there must be some rational constraints. What are they?

Apparently, the fact that a program suggests principles that continue to falter is not a rational constraint. What then are the rational constraints? Is the idea that we are just not there yet but that at the end of inquiry, when the dust settles, we will have convergence and we will arrive at “optimal” principles, and that at *that* stage there will be a rationally convincing case for the new axioms? (If so, then we will just have to wait and see whether you can deliver on this promise.)

3. (So-Called “Changes” to the HP) OK, Peter here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There are two reasons I keep giving a summary of the changes, of how we got to where we are now. First, this thread is quite intricate and it’s useful to give the reader a summary of the state of play. Second, in assessing the prospects and tenability of a program it is useful to keep track of its history, especially when that program is not in the business of making predictions.

There have been exactly 2 changes to the HP-procedure: one on August 21, when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and one on September 24, when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism!

This is not correct. (I wish I didn’t have to document this).

I never attributed height-actualism to you. (I hope that was a typo on your part). I wrote (in the private letter of Oct. 6, which you quoted and responded to in public):

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

I never attributed height actualism. I only very tentatively said that it appeared you have switched to width actualism and said that I didn’t believe that this was your official view.

That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

This is not correct. (Again, I wish I didn’t have to document this.)

You responded to my letter (in public) on Oct. 9, quoting the above passage, writing:

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

I then wrote letters asking you to confirm that you were indeed a radical potentialist. You confirmed this. (For the documentation see the beginning of my letter on K.)

So, I wrote the letter on K, after which you said that you regretted having admitted to radical potentialism.

You didn’t endorse width-actualism until Nov. 3, in response to the story about K. And it is only now that we are starting to see the principles associated with “width-actualism + height potentialism” (New IMH#, etc.)

I am fully aware (and have acknowledged) that you have said that the HP program is compatible with “width-actualism + height potentialism”. The reason I have focused on “radical potentialism” and not “width-actualism + height potentialism” is two-fold. First, you explicitly said that this was your official view. Second, you gave us the principles associated with this view (Old-IMH#, etc.) and have only now started to give us the principles associated with “width-actualism + height potentialism” (New-IMH#, etc.) I wanted to work with your official view and I wanted something definite to work with.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

I certainly agree that it is more likely that one will get an answer on PD than an answer on CH. Of course, I believe that we already have a convincing case for PD. But let me set that aside and focus on your program. And let me also set aside questions about the epistemic force behind the principles you are getting (as “suggested” or “intrinsically motivated”) on the basis of the “‘maximal’ iterative conception of set” and focus on the mathematics behind the actual principles.

(1) You proposed Strong Unreachability (as “compellingly faithful to maximality”) and you have said quite clearly that V does not equal HOD (“Maximality implies that there are sets (even reals) which are not ordinal-definable” (Letter of August 21)). From these two principles Hugh showed (via a core model induction argument) that PD follows. [In fact, in place of the second, one just needs the even more plausible “V does not equal K”.]

(2) Max (on Oct. 14) proposed the following:

In other words, maybe he should require that $\omega_1$ is not equal to the $\omega_1$ of $L[x]$ for any real $x$, and more generally that for no cardinal $\kappa$ is $\kappa^+$ equal to the $\kappa^+$ of $L[A]$ when $A$ is a subset of $\kappa$. In fact maybe he should go even further and require this with $L[A]$ replaced by the much bigger model $\mathrm{HOD}_A$ of sets hereditarily-ordinal definable with the parameter $A$!

Hugh pointed out (on Oct. 15) that the latter violates ZFC. Still, there is a principle in the vicinity that Max could still endorse, namely,

(H) For all uncountable cardinals $\kappa$, $\kappa^+$ is not correctly computed by HOD.

Hugh showed (again by a core model induction argument) that this implies PD.
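For readers skimming the thread, (H) admits a compact formal rendering; the notation below is mine, not Max's or Hugh's. Since HOD is an inner model, $(\kappa^+)^{\mathrm{HOD}} \le \kappa^+$ always holds, so “not correctly computed” amounts to a strict inequality:

```latex
% (H): HOD computes no uncountable successor cardinal correctly.
\forall \kappa \,\Big(\, \kappa \text{ an uncountable cardinal}
  \;\Longrightarrow\; \big(\kappa^{+}\big)^{\mathrm{HOD}} < \kappa^{+} \,\Big)
```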

So you already have two different routes (based on principles “suggested” by the “‘maximal’ iterative conception of set”) leading to PD. So things are looking good!

(3) I expect that things will look even better. For the core model induction machinery is quite versatile. It has been used to show that lots of principles (like PFA, “there is an $\omega_1$-dense ideal on $\omega_1$”, etc.) imply PD. Indeed there is reason to believe (from inner model theory) that every sufficiently strong “natural” theory implies PD. (Of course, here both “sufficiently strong” and “natural” are necessary, the latter because strong statements like “Con(ZFC + there is a supercompact)” and “There is a countable transitive model of ZFC with a supercompact” clearly cannot imply PD.)

Given the “inevitability” of PD — in this sense: that time and again it is shown to follow from sufficiently strong “natural” theories — it is entirely reasonable to expect the same for the principles you generate (assuming they are sufficiently strong). It will follow (as it does in the more general context) out of the core model induction machinery. This has already happened twice in the setting of the HP. I would expect there to be convergence on this front, as a special case of the more general convergence on PD.


Let us focus instead on a productive exchange about your current view of the program, as you now see it.

It would be helpful if you could:

(A) Confirm that the official view is indeed now “width-actualism + height potentialism”.

[If you say the official view is “radical potentialism” (and so are sticking with Old-IMH#, etc.) then [insert story of K.] If you say the official view is “width-actualism + height potentialism” then please give us a clear statement of the principles you now stand behind (New-IMH#, etc.)]

(B) Give us a clear statement of the principles you now stand behind (New-IMH#, etc.), what you know about their consistency, and a summary of what you can currently do with them. In short, it would be helpful if you could respond to Hugh’s last letter on this topic.

Thanks for continuing to help me understand your program.

Best,

Peter

Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory.

I was assuming that any theory capable of ‘swamping’ all others would ‘subsume’ the (Type 1 and Type 2) virtues of the others. It has been argued that a theory with large cardinals can subsume the virtues of V=L by regarding them as virtues of working within L. I can’t speak to forcing axioms, but I think Hugh said something about this at some point in this long discussion.

All best,

Pen

Pen:

I am sorry to have annoyed you with the issue of the TR and Type 2 evidence; indeed you have made it clear many times that the TR does take such evidence into account. I got that! But in your examples, you vigorously hail the virtues of $AD^{L(\mathbb{R})}$ and other developments that have virtually no relevance for math outside of set theory, rather than Forcing Axioms, which provide real Type 2 evidence! As I said, I think you got it very right with your excellent “Defending”, but in your 2nd edition you might want to hail the virtues of Forcing Axioms well above $AD^{L(\mathbb{R})}$, Ultimate L (should it be “ripe” by the time of your 2nd edition) or other math-irrelevant topics, giving FA’s their richly-earned praise for winning evidence of Types both 1 and 2.

I was really hoping for your reaction to the following, but I guess I ain’t gonna get it:

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored.

Let me make this more specific. Look at the following axioms:

- V = L
- V is not L, but is a canonical model of ZFC, generic over L
- Large Cardinal axioms like Supercompact
- Forcing Axioms like PFA
- AD in $L(\mathbb{R})$
- Cardinal Characteristics
- (The famous) “Etcetera”

It seems that each of these has pretty good Type 1 evidence (useful for the development of set theory, with P’s and V’s).

But look! We can discriminate between these examples with evidence of Types 2 and 3! Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory. And of course Type 3 kills V = L. So using all three Types of evidence, we have a clear winner, Forcing Axioms!

I expect that we aren’t going to get any consensus about set-theoretic truth from Type 1 evidence alone, without heavy use of evidence of Types 2 and 3.

Peter:

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelahian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings; width maximality demands “thickenings” in the width-actualist sense. We don’t need full second-order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder”, as you call it, as we can’t possibly capture all small large cardinals without it!

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand it. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to $L(\mathbb{R})$ via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals; he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for 0#, necessary for this theory, despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done, and won’t they continue to do, just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the IMH#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum ($\omega_1$, $\omega_2$, large, respectively).

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, making it hard for you to evaluate. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There have been exactly 2 changes to the HP-procedure: one on August 21, when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and one on September 24, when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism! That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3, you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the IMH#, but because of a better understanding of the way the IMH# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Best, Sy

In an attempt to move things along, I would like to both summarize where we are and sharpen what I was saying in my (first) message of Nov 8. My points were possibly swamped by the technical questions I raised.

1) We began with Original-IMH#

This is the #-generated version. In an attempt to provide a V-logic formulation, you proposed a principle which I called (in my message of Nov 5):

2) New-IMH#

I raised the issue of consistency and you then came back on Nov 8 with the principle:

What this translates to for a countable model V is then this:

V is weakly #-generated and for all $\varphi$: Suppose that whenever $g$ is a generator for V (iterable at least to the height of V), $\varphi$ holds in an outer model M of V with a generator which is at least as iterable as $g$. Then $\varphi$ holds in an inner model of V.

Let’s call this:

3) Revised-New-IMH#

(There are too many principles)

But: Revised-New-IMH# is just the disjunction of Original-IMH# and New-IMH#.

So Revised-New-IMH# is consistent. But is Revised-New-IMH# really what you had in mind?
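Schematically, the bookkeeping of the last two paragraphs (using the principle names of this exchange) is just:

```latex
\text{Revised-New-IMH}^{\#} \;\equiv\;
  \text{Original-IMH}^{\#} \;\vee\; \text{New-IMH}^{\#},
\qquad \text{hence} \qquad
\operatorname{Con}\!\big(\text{Original-IMH}^{\#}\big)
  \;\Longrightarrow\;
\operatorname{Con}\!\big(\text{Revised-New-IMH}^{\#}\big).
```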

(The move from New-IMH# to the disjunction of Original-IMH# and New-IMH# seems a bit problematic to me.)

Assuming Revised-New-IMH# is what you have in mind, I will continue.

Thus, if New-IMH# is inconsistent then Revised-New-IMH# is just Original-IMH#.

So we are back to the consistency of New-IMH#.

The theorem (of my message of Nov 8 but slightly reformulated here)

**Theorem.** *Assume PD. Then there is a countable ordinal $\eta$ and a real $x$ such that if $M$ is a ctm such that
1) $x$ is in $M$ and
2) $M$ satisfies Revised-New-IMH# with parameter $\eta$,
then $M$ is #-generated (and so satisfies Original-IMH#)*

strongly suggests (but does not prove) that New-IMH# is inconsistent if one also requires that $M$ be a model of “$V = L[Y]$ for some set $Y$”.

Thus if New-IMH# is consistent it likely must involve weakly #-generated models which *cannot* be coded by a real in an outer model which is #-generated.

So just as happened with the SIMH, one again comes to an interesting CTM question whose resolution seems essential for further progress.

Here is an extreme version of the question for New-IMH#:

**Question: **Suppose M is weakly #-generated. Must there exist a weakly #-generated outer model of M which contains a set which is *not* set-generic over M?

[This question seems to have a positive solution. But, building weakly #-generated models which cannot be coded by a real in an outer model which is weakly #-generated still seems quite difficult to me. Perhaps Sy has some insight here.]

Regards,

Hugh

Peter is right, Sy. There’s no difference of opinion here between Peter and me about what counts as evidence, whether we call it “good set theory” or “P’s and V’s”.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics?

I don’t mean to be cranky about this, Sy, but I’ve lost track of how many times I’ve repeated that my Thin Realist recognizes evidence of both your Type 1 (from set theory) and Type 2 (from mathematics). I think I’ve mentioned that the foundational goal of set theory in particular plays a central role (especially in Naturalism in Mathematics).

All best,

Pen

**Theorem.** *Assume PD. Then there is a countable ordinal $\eta$ and a real $x$ such that if M is a ctm such that*

(1) *$x$ is in M and*

(2) *M satisfies (this but allowing $\eta$ as a parameter),*

*then M is #-generated.*

So, you still have not really addressed the ctm issue at *all*.

Here is the critical question:

**Key Question**: Can there exist a ctm M such that M satisfies IMH# in the hyper-universe of $V[G]$, where $G$ is $V$-generic for collapsing all sets to be countable?

Or even:

**Lesser Key Question**: Suppose that M is a ctm which satisfies IMH#. *Must* M be #-generated?

Until one can show the answer is “yes” for the Key Question, there has been no genuine reduction of this version of IMH# to V-logic.

If the answer to the Lesser Key Question is “yes” then there is no possible reduction to V-logic.

The theorem stated above strongly suggests the answer to the Lesser Key Question is actually “yes” if one restricts to models satisfying “$V = L[Y]$ for some set $Y$”.

The point of course is that if M is a ctm which satisfies “$V = L[Y]$ for some set $Y$” and M witnesses New-IMH#, then $M[g]$ witnesses New-IMH# as well, where $g$ is an $M$-generic collapse of $Y$ to $\omega$.

The simple consistency proofs of Original-IMH# all easily give models which satisfy “$V = L[Y]$ for some set $Y$”.

The problem

(*) Suppose $\gamma^+$ is not correctly computed by HOD for any infinite cardinal $\gamma$. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit $\kappa$ of cofinality $\omega$ can be large in HOD, and I started asking about weak square. It holds at $\kappa$ in our model.

Assuming the Axiom $I_0$ is consistent, one gets a model of ZFC in which for some singular strong limit $\kappa$ of uncountable cofinality, weak square fails at $\kappa$ and $\kappa^+$ is not correctly computed by HOD.

So one cannot focus on cofinality $\omega$ (unless the Axiom $I_0$ is inconsistent).

So born of this thread is the correct version of the problem:

Problem: Suppose $\gamma$ is a singular strong limit cardinal of *uncountable* cofinality such that $\gamma^+$ is not correctly computed by HOD. Must weak square hold at $\gamma$?

Aside: $I_0$ asserts that there is a non-trivial elementary embedding $j : L(V_{\lambda+1}) \rightarrow L(V_{\lambda+1})$ with critical point below $\lambda$.
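For reference, the weak square principle at $\gamma$ appearing above is Jensen's $\square^{*}_{\gamma}$; the standard definition (included here for the reader's convenience) is:

```latex
% \square^{*}_{\gamma} (weak square at \gamma):
\square^{*}_{\gamma}:\quad \text{there is a sequence }
  \langle \mathcal{C}_{\alpha} \mid \alpha < \gamma^{+},\ \alpha \text{ limit} \rangle
  \text{ such that for each limit } \alpha < \gamma^{+}:
\begin{aligned}
&\text{(i) } \mathcal{C}_{\alpha} \text{ is a nonempty set of clubs in } \alpha
  \text{ with } |\mathcal{C}_{\alpha}| \le \gamma;\\
&\text{(ii) every } C \in \mathcal{C}_{\alpha} \text{ has order type} \le \gamma;\\
&\text{(iii) if } C \in \mathcal{C}_{\alpha} \text{ and } \beta \text{ is a limit point of } C,
  \text{ then } C \cap \beta \in \mathcal{C}_{\beta}.
\end{aligned}
```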

Regards, Hugh
