Dear Sy,

My comments were about the IMH#, not about anything else — I address one thing at a time, not all things at once.

The third theorem is a relic — I neglected to delete it when I added the other two. It should be deleted.

Best,

Peter

Dear Peter,

On Sun, 26 Oct 2014, Koellner, Peter wrote:

Dear Sy,

I have one more comment on choiceless large cardinal axioms that concerns the IMH#.

It is worth pointing out that Hugh’s consistency proof of the IMH# shows a good deal more (as pointed out by Hugh):

Theorem: Assume that every real has a sharp. Then, in the hyperuniverse there exists a real x_0 such that every #-generated M to which x_0 belongs satisfies the IMH#, and in the following very strong sense:

(*) Every sentence which holds in a definable inner model of some #-generated model N, holds in a definable inner model of M.

There is no requirement here that N be an outer model of M. In this sense, the IMH# is not really about outer models. It is much more general.

It follows not only that the IMH# is consistent with all (choice) large cardinal axioms (assuming, of course, that they are consistent) but also that the IMH# is consistent with all choiceless large cardinal axioms (again assuming, of course, that they are consistent).

The point is that the IMH# is powerless to provide us with insight into where inconsistency sets in.

Before you protest let me clarify: I know that you have not claimed otherwise! You take the evidence for consistency of large cardinal axioms to be entirely extrinsic.

I protest for a different reason: the above argument is too special to the IMH#. For example, consider Hugh’s variant which he called . I don’t see how you argue as above with this stronger principle, which is known to be consistent, using my proof with Radek based on #-generated Jensen coding.

My point is simply to observe to everyone that the IMH# makes no predictions on this matter.

So what? How do you know that the IMH# makes no predictions on this matter?

And, more generally, I doubt that you think that the hyperuniverse program has the resources to make predictions on this question since you take evidence for consistency of large cardinal axioms to be extrinsic.

In contrast “V = Ultimate L” *does* make predictions on this question, in the following precise sense:

Theorem (Woodin). Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals.

Theorem (Woodin). Assume the Ultimate L Conjecture. Then there are no Super Reinhardt cardinals and there are no Berkeley cardinals.

Theorem (Woodin). Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals (or Super Reinhardt cardinals or Berkeley Cardinals, etc.)

(Here the Ultimate-L Conjecture is a conjectured theorem of ZFC.)

Interesting. (Did you intend there to be a difference between the first and third theorems above?)

But probably there’s a proof of no Reinhardt cardinals in ZF, even without Ultimate L:

**Conjecture**: In ZF, the Stable Core is rigid.

Note that V is generic over the Stable Core.

Best,

Sy

Dear Sy,

I have one more comment on choiceless large cardinal axioms that concerns the IMH#.

It is worth pointing out that Hugh’s consistency proof of the IMH# shows a good deal more (as pointed out by Hugh):

Theorem: Assume that every real has a sharp. Then, in the hyperuniverse there exists a real x_0 such that *every* #-generated M to which x_0 belongs satisfies the IMH#, and in the following very strong sense:

(*) Every sentence which holds in a definable inner model of some #-generated model N, holds in a definable inner model of M.

There is no requirement here that N be an outer model of M. In this sense, the IMH# is not really about outer models. It is much more general.

It follows not only that the IMH# is consistent with all (choice) large cardinal axioms (assuming, of course, that they are consistent) but also that the IMH# is consistent with all choiceless large cardinal axioms (again assuming, of course, that they are consistent).

The point is that the IMH# is powerless to provide us with insight into where inconsistency sets in.

Before you protest let me clarify: I know that you have not claimed otherwise! You take the evidence for consistency of large cardinal axioms to be entirely extrinsic.

My point is simply to observe to everyone that the IMH# makes no predictions on this matter. And, more generally, I doubt that you think that the hyperuniverse program has the resources to make predictions on this question, since you take evidence for consistency of large cardinal axioms to be extrinsic.

In contrast “V = Ultimate L” *does* make predictions on this question, in the following precise sense:

**Theorem (Woodin).** *Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals.*

**Theorem (Woodin).** *Assume the Ultimate L Conjecture. Then there are no Super Reinhardt cardinals and there are no Berkeley cardinals.*

**Theorem (Woodin).** *Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals (or Super Reinhardt cardinals or Berkeley Cardinals, etc.)*

(Here the Ultimate L Conjecture is a conjectured theorem of ZFC.)

Best, Peter

Claudio Ternullo wrote:

The HP is about the collection of all c.t.m. of ZFC (aka the “hyperuniverse” [H]). A “preferred” member of H is one of these c.t.m. satisfying some H-axiom (e.g., IMH).
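For reference, the quoted description can be put in symbols (a standard rendering of the sentence above; nothing beyond it is assumed):

```latex
% The hyperuniverse H: the collection of all countable transitive
% models (c.t.m.) of ZFC, as in the quoted description.
\[
  H \;=\; \{\, M \;:\; M \text{ is a countable transitive model of } \mathrm{ZFC} \,\}
\]
% A ``preferred'' member of H is then an M in H satisfying some
% chosen H-axiom, e.g. the IMH.
```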

Your coauthor has not explained why the HP doesn’t carry the name CTMP = countable transitive model program. That is my suggestion, and it has been supported by Hugh. Why not?

What does the choice of a countable transitive model have to do with “(intrinsic) maximality in set theory”?

At a fundamental level, what does “(intrinsic) maximality in set theory” mean in the first place?

Which axioms of ZFC are motivated or associated with “(intrinsic) maximality in set theory”? And why? Which ones aren’t and why?

What is your preferred precise formulation of IMH? E.g., is it in terms of countable models?

What do you make of the fact that the IMH is inconsistent with even an inaccessible (if I remember correctly)? If this means that IMH needs to be redesigned, how does this reflect on whether CTMP = HP is really being properly motivated by “(intrinsic) maximality in set theory”?

What is the simplest essence of the ideas surrounding “fixing or redesigning IMH”? Please, in generally understandable terms here, so that people can get to the essence of the matter, and not have it clouded by technicalities.

Overall, it would be particularly useful to avoid quoting complicated technicalities and adhere to generally understandable considerations. After all, CTMP = HP is being offered as some sort of truly foundational program. Legitimate foundational programs lend themselves to generally understandable explanations with overwhelmingly attractive features.

I have not been able to engage your coauthor in this way, so perhaps this is going to fall on you. Sorry about that (smile).

Harvey

Dear Sy,

So what is the SIMH#? You wrote in your message of Sept 29:

The IMH# is compatible with all large cardinals. So is the SIMH#.

A second question. The version of the SIMH# you specified in your next message to me on Sept 29:

The (crude, uncut) SIMH# is the statement that V is #-generated and if a sentence with absolute parameters holds in a cardinal-preserving, #-generated outer model then it holds in an inner model. It implies a strong failure of CH but is not known to be consistent.

does not even obviously imply the IMH#. Perhaps you meant the above together with the IMH#? If not then calling it the SIMH# is rather misleading. Either way it is closer to .

Anyway this explains my confusion, thanks.

Regards,

Hugh

Dear Hugh,

Sorry for my delayed response.

For the purposes of this discussion:

The IMH# is the statement that V is #-generated and if a sentence holds in a #-generated outer model then it holds in an inner model. It is consistent with all large cardinals.

The (crude, uncut) SIMH# is the statement that V is #-generated and if a sentence with absolute parameters holds in a cardinal-preserving, #-generated outer model then it holds in an inner model. It implies a strong failure of CH but is not known to be consistent.

The reason that the IMH# can be shown to be consistent is that Jensen coding works for #-generated models.

The reason that the SIMH# is not known to be consistent is that Jensen coding will collapse cardinals if GCH fails.
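Schematically, the two statements above can be written as follows (a sketch only; the labels IMH# and SIMH# follow their usage elsewhere in this thread, and the quantification over sentences is left informal):

```latex
% IMH#: V is #-generated, and any sentence true in a #-generated
% outer model of V is true in an inner model of V.
\[
  \text{IMH}^{\#}:\quad V \text{ is } \#\text{-generated, and }
  \forall \varphi\, \bigl[\, \varphi \text{ holds in some } \#\text{-generated outer model}
  \;\rightarrow\; \varphi \text{ holds in some inner model} \,\bigr]
\]
% SIMH# (crude, uncut): as above, but phi may carry absolute
% parameters and the outer model must preserve cardinals.
\[
  \text{SIMH}^{\#}:\quad V \text{ is } \#\text{-generated, and }
  \forall \varphi(\vec{p})\, \bigl[\, \varphi(\vec{p}) \text{ holds in some cardinal-preserving, }
  \#\text{-generated outer model} \;\rightarrow\; \varphi(\vec{p}) \text{ holds in some inner model} \,\bigr]
\]
```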

I hope that this clarifies the situation.

Thanks for your interest,

Sy

Dear Hugh,

I have to leave on a short trip now, and will respond in more detail as soon as I can.

You have again misunderstood the IMH#!

Below are some brief responses.

On Mon, 29 Sep 2014, W Hugh Woodin wrote:

Dear Sy,

The disadvantage of your formulation of the IMH# is that it is not even in general a property of M and so it is appealing in more essential ways to the structure of the “hyperuniverse”.

No. It appeals only to the ordinals of “lengthenings”, not to the structure of the Hyperuniverse!

This is why the consistency proof of the IMH# uses substantially more than a Woodin cardinal with an inaccessible above, unlike the case of and .

OK, it seems we will just have to agree that we disagree here.

OK, so you disagree with treating width actualism with “lengthenings”, unlike Pen, Geoffrey and myself. I am missing a coherent explanation for your view.

I think it is worth pointing out to everyone that the SIMH#, and even the weaker SIMH(ω₁) which we know to be consistent, implies that there is a real x such that x# does not exist.

No, that is not true. The IMH# is compatible with all large cardinals. So is the SIMH#. What argument are you thinking of?

(even though x# exists in the parent hyperuniverse, which is a bit odd to say the least in light of the more essential role that the hyperuniverse is playing). The reason of course is that SIMH(ω₁) implies that there is a real x such that L[x] correctly computes ω₁.

No, it does not. What argument do you have in mind?

Sy

Dear Sy,

The disadvantage of your formulation of the IMH# is that it is not even in general a property of M and so it is appealing in more essential ways to the structure of the “hyperuniverse”. This is why the consistency proof of the IMH# uses substantially more than a Woodin cardinal with an inaccessible above, unlike the case of and .

OK, it seems we will just have to agree that we disagree here.

I think it is worth pointing out to everyone that the SIMH#, and even the weaker SIMH(ω₁) which we know to be consistent, implies that there is a real x such that x# does not exist (even though x# exists in the parent hyperuniverse, which is a bit odd to say the least in light of the more essential role that the hyperuniverse is playing). The reason of course is that SIMH(ω₁) implies that there is a real x such that L[x] correctly computes ω₁.

This is a rather high price to pay for getting not-CH.

Thus for me at least, the SIMH# has all the problems of the IMH with regard to isolating candidate truths of V.

Regards,

Hugh

Dear Hugh,

As I said, if you synthesise the IMH with a weak form of reflection then you will contradict large cardinals. For example if you relativise the IMH to models which are only generated by a presharp which is iterable to the height of the model then you will contradict #’s for reals. The only synthesis that is friendly to large cardinals is with the strongest possible form of reflection, given by #-generation. More on this below:

Consider the following extreme version of the IMH#:

Suppose M is a ctm and M ⊨ ZFC. Then M witnesses extreme-IMH# if:

- There is a thickening of M, satisfying ZFC, in which M is a #-generated inner model.

??? This just says that M is #-generated to its own height! It is a weakened form of #-generation.

- M witnesses the IMH# in all thickenings of M, satisfying ZFC, in which M is a #-generated inner model.

This makes no sense to me. The point of the synthesis is to say that IMH holds for models that satisfy reflection. You are only looking at models which satisfy weak reflection, i.e. which are presharp-generated up to their height! How do you motivate this? Even in the basic V-logic you get iterability up to the least admissible past the height. Of course we want our presharps to stay iterable past the height of the model; this is necessary to capture reflection to its fullest.

One advantage of extreme-IMH# is that the formulation does not need to refer to sharps in the hyperuniverse (and so there is a natural variation which can be formulated just using the V-logic of M). This also implies that the property that M witnesses extreme-IMH# is a property of M, as opposed to the IMH#, which is not even in general a property of M.

Yes, your weak version of reflection can be captured in V-logic. But this is not much of an advantage, as it is heavily outweighed by its disadvantages: We don’t want weak reflection (weak #-generation), we want reflection (#-generation), and this is captured by the natural infinitary logics fixing V defined in arbitrary “lengthenings” of V obtained by adding new L-levels (like the “lengthenings” that Pen, Geoffrey and I have discussed, but instead of iterating powerset to get new von Neumann ranks, one iterates *definable* powerset to generate new Gödel ranks). So once again, full #-generation is a property captured by logics associated to “lengthenings” of V, just like the IMH. It simply makes no sense to stop with weak #-generation, as there is no advantage in doing so.
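The contrast drawn here between the two kinds of “lengthenings” can be displayed schematically (standard recursions; only the notation is added):

```latex
% Lengthening by von Neumann ranks: iterate the full powerset.
% Lengthening by Goedel ranks: iterate the definable powerset,
% adding new L-levels over V, as described above (with L_0(V) = V).
\begin{align*}
  V_{\alpha+1}    &= \mathcal{P}(V_\alpha)      &&\text{(new von Neumann ranks)}\\
  L_{\alpha+1}(V) &= \mathrm{Def}(L_\alpha(V))  &&\text{(new G\"odel ranks over } V\text{)}
\end{align*}
```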

Given the motivations you have cited for the IMH# etc., it seems clear that extreme-IMH# is the correct result of synthesizing IMH with reflection unless it is inconsistent.

No! The motivation I cited was to assert the IMH for models that obey reflection and to do this you need to use the correct form of reflection, not what you are suggesting.

Thm: Assume every real has a sharp and that some countable ordinal is a Woodin cardinal in a definable inner model. Then there is a ctm which witnesses that extreme-IMH# holds.

However unlike the IMH#, extreme-IMH# is not consistent with all large cardinals.

Thm: If M satisfies extreme-IMH# then there is a real x in M such that, in M, x# does not exist.

Originally, when I formulated #-generation I had the weaker form that you are suggesting in mind, knowing this result quite well. But later I came to the full form of #-generation and this problem with large cardinal nonexistence disappeared. I wasn’t actually looking for a way to rescue large cardinals but that was a nice consequence of the correct point of view.

Once again: Any form of reflection weaker than (full) #-generation will kill large cardinals when synthesised with the IMH. And even full #-generation is perfectly compatible with width actualism; it just requires consideration of logics in arbitrary “Gödel-lengthenings” of V.

This seems to be a bit of an issue for the motivation of IMH# and the SIMH#. How will you deal with this?

Explained above.

Yours,

Sy

Dear Pen and Geoffrey,

On Wed, 24 Sep 2014, Penelope Maddy wrote:

Thanks, Geoffrey. Of course you’re right. To use Peter’s terminology, if you’re a potentialist about height, but an actualist about width, then CH is determinate in the usual way. I was speaking of Sy’s potentialism, which I think is intended to be potentialist about both height and width.

You both say that if one hangs onto width actualism then “CH is determinate in the usual way”. I have no idea what “the usual way” means; can you tell me?

But nothing you have said suggests that there is any problem at all with determining the CH as a radical potentialist. Again:

… solving the CH via the HP would amount to verifying that the pictures of V which optimally exhibit the Maximality feature of the set concept all satisfy CH or all satisfy its negation. I do consider this to be discovering something about V. But I readily agree that it is not the “ordinary way people think of that project”.

And in more detail:

We have many pictures of V. Through a process of comparison we isolate those pictures which best exhibit the feature of Maximality, the “optimal” pictures. Then we have three possibilities:

a. CH holds in all of the optimal pictures.

b. CH fails in all of the optimal pictures.

c. Otherwise.

In Case a, we have inferred CH from Maximality, in Case b we have inferred not-CH from Maximality, and in Case c we come to no definitive conclusion about CH on the basis of Maximality.

OK, maybe this is not the “usual way” (whatever that is), but don’t you acknowledge that this is a programme that could resolve CH using Maximality?

I also owe Pen an answer to:

… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP! The programme generates different possible mathematical consequences of Maximality, mirroring different aspects of Maximality. For example, the IMH is a way of handling width maximality and #-generation a way of handling height maximality. Each has its interesting mathematical consequences. But they contradict each other! The aim of the programme is to generate the entire spectrum of possible ways of formulating the different aspects of Maximality, analysing them, comparing them, unifying them, … until the picture converges on an optimal Maximality criterion. Then we can talk about what to “endorse”. I conjecture that the negation of CH will be a consequence, but it is too soon to make that claim.

The IMH is significant for many reasons. First, it refutes the claim that “Maximality in width” implies the existence of large cardinals; indeed the IMH is the most natural formulation of “Maximality in width” and it refutes the existence of large cardinals! Second, it illustrates how one can legitimately talk about “arbitrary thickenings” of V in discussions of Maximality, without tying one’s hands to the restrictive notion of forcing extension. Third, as discussed at length in my papers with Tatiana, it inspires a re-think of the role of large cardinals in set theory, explaining this in terms of their existence in inner models as opposed to their existence in V.

But the HP has moved beyond the IMH to other criteria like #-generation, unreachability and (possibly) omniscience, together with different ways of unifying these criteria into new “synthesised” criteria. It is an ongoing study with a lot of math behind it (yes Pen, “good set theory” that people can care about!) but this study is still in its infancy. I can’t come to any definitive conclusions yet, sorry to disappoint. But I’ll keep you posted.

Best,

Sy