# Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Oct 26, 2014, at 7:39 PM, Sy David Friedman wrote:

Dear Peter,

But probably there’s a proof of no Reinhardt cardinals in ZF, even without Ultimate-L:

Conjecture: In ZF, the Stable Core is rigid.

Note that V is generic over the Stable Core.

I took a brief look at your paper on the stable core and did not immediately see anything that genuinely seemed to argue for the conjecture you make above. (Maybe I just did not look at the correct paper).

Are you just really conjecturing that there is no (nontrivial) $j:\text{HOD} \to \text{HOD}$, or more generally that if V is a generic extension of an inner model N (by a class forcing which is amenable to N) then there is no nontrivial $j:N \to N$? Or is there extra information about the Stable Core which motivates the conjecture?

I would think that based on HP etc., you would actually conjecture that there is a nontrivial $j:\text{HOD} \to \text{HOD}$. This would have the added advantage of explaining why $V \neq \text{HOD}$ follows from maximality considerations etc. (This declaration you have made at several points in this thread and which I must confess I have never really understood the reasons for.)

This seems like a perfect opportunity for you to use your conception of HP and boldly make a conjecture (i.e. that the existence of $j:\text{HOD} \to \text{HOD}$ is consistent, because by the HP protocols the class-free version, stated as (1) below, must be true in the preferred ct.’s, and these are #-generated).

The axiom (that there is such a $j$) surely transcends the hierarchy we have now. So this HP insight if well grounded would be a remarkable success.

You could then very naturally go further and modify your unreachability property to:
if $N$ is a proper inner model of $V$ then there is a nontrivial $j:N \to N$.

In fact you could combine everything and go with the following perfect pairing:

1) For all sufficiently large $\kappa$, there is a nontrivial elementary embedding
$j: \text{HOD}\cap V_{\kappa} \to \text{HOD} \cap V_{\kappa}$.

2) If $N$ is a proper inner model of $V$ (definable from parameters) then $N \subset \text{HOD}_A$ for some set $A \subset \text{Ord}$.

In the Ultimate-L approach one faces a similar choice but there one is compelled to reject
such non-rigidity conjectures (since they must be false in that approach).

But why do you? You did after all write to Peter on Oct 19:

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

Regards,
Hugh

# Re: Paper and slides on indefiniteness of CH

Dear Peter,

On Mon, 27 Oct 2014, Koellner, Peter wrote:

Dear Sy,

The reason I didn’t quote that paragraph is that I had no comment on it. But now, upon re-reading it, I do have a comment. Here’s the paragraph:

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

Here’s the comment: This is a splendid endorsement of Hugh’s work on Ultimate L.

??? It is a scenario for (not an endorsement of) an inner model theory of some kind; why Hugh’s version of it?

Let us hope that we don’t have to wait until the middle of the 22nd century.

We appear to disagree on whether $\text{AD}^{L(\mathbb R)}$ is “parasitic” on AD in the way that “I am this [insert Woodin's forcing] class-sized forcing extension of an inner model of L” is parasitic on L, where L is a choiceless large cardinal axiom. At least, I think we disagree. It is hard to tell, since you did not engage with those comments (which addressed the whole point at issue).

I have given up on trying to understand the word “parasitic”.

Let us push the analogy [between AD and choiceless large cardinals].

Shortly after AD was introduced $L(\mathbb R)$ was seen as the natural inner model. And Solovay conjectured that $\text{AD}^{L(\mathbb R)}$ follows from large cardinal axioms, in particular from the existence of a supercompact.

This leads to a fascinating challenge, given the analogy: Fix a choiceless large cardinal axiom C (Reinhardt, Super Reinhardt, Berkeley, etc.) Can you think of a large cardinal axiom L (in the context of ZFC) and an inner model M such that you would conjecture (in parallel with Solovay) that L implies that C holds in M?

You have overstretched the analogy to the point where it doesn’t work any more. $\text{AD}^{L(\mathbb R)}$ is not about large cardinals and we had little reason to believe that it would outstrip LC axioms consistent with AC. Reinhardt cardinals are likely stronger (in consistency strength) than any LC axiom consistent with AC (I think they are just plain inconsistent). So we cannot expect an inner model for Reinhardt’s axiom just from a LC axiom consistent with AC! We need some other way of extending ZFC for that. Maybe the latter is the “fascinating challenge” that you want to formulate? I.e. how can we extend ZFC + LCs to yield an inner model for a Reinhardt cardinal?

Best,
Sy

# Re: Paper and slides on indefiniteness of CH

[This is the second of two letters on choiceless large cardinals.]

Dear Pen,

Thanks for your interest in the work that Bagaria, Woodin, and I have been doing on the choiceless large cardinal hierarchy. I’m sorry for the delay in responding. This is a heavy semester for me.

I should start by clarifying something: Should it turn out that the choiceless large cardinals are consistent, that would not in and of itself pose a serious problem for AC. One would have to look at the details of this hierarchy — whether it is as robust as the hierarchy of large cardinals consistent with choice — and one would have to look at its applications and implications, to see whether it has fruitful consequences. One would have to look at the extrinsic evidence for these axioms and weigh it against the extrinsic evidence for AC. It is hard (for me) to see how any such considerations could overturn AC. But it would be rather awkward. For if one wanted to use large cardinal axioms as a yardstick to measure consistency strength while sticking with AC, then, after passing through the very natural hierarchy of theories of the form “ZFC + L”, where L is a standard large cardinal axiom, we would (very likely) have to adopt the ad hoc measure of switching to theories of the form “I am this [insert a description of Woodin's forcing to force AC] class-size forcing extension of an inner model satisfying L”, where L is a choiceless large cardinal axiom.

It would be much nicer if this whole choiceless large cardinal hierarchy were to just go away! That is what we are trying to show.

But there is another motivation for the investigation: There have been many purported proofs of the inconsistency of large cardinal axioms. For example, quite frequently one finds purported proofs of the inconsistency of measurable cardinals. And, lower down, it has even been claimed — for example, by Edward Nelson — that PA (and even the weak fragment PRA) is inconsistent. These proofs never stand the test of time. The only inconsistency proofs that have stood the test of time are the simple ones, like Kunen’s proof that Reinhardt cardinals are inconsistent (with AC). But this raises an interesting question: Is there a “deep inconsistency”? That is, is there a proof of the inconsistency of a large cardinal axiom which is rather involved? Since the proof of an inconsistency is going to be easier the stronger the assumption, instead of trying to show that PA or measurable cardinals are inconsistent (which we do not believe), it makes sense to start up very high, with some outlandishly strong hypothesis, show that it is inconsistent, and then work one’s way down.
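
For reference, the “simple” inconsistency alluded to is Kunen’s theorem. A sketch of its statement, given for definable embeddings to avoid quantifying over classes (for arbitrary classes one works in NBG with choice):

```latex
% Kunen's inconsistency theorem (1971).
\begin{theorem}[Kunen]
Assume \textsf{ZFC}. There is no nontrivial elementary embedding
$j \colon V \to V$.
\end{theorem}
% Kunen's proof uses AC to construct an $\omega$-J\'onsson function on
% $\lambda = \sup_{n<\omega} j^{n}(\kappa)$, where $\kappa = \mathrm{crit}(j)$,
% and derives a contradiction from it. Without AC the argument breaks
% down: a Reinhardt cardinal is exactly the critical point of such a
% $j$, asserted in ZF alone.
```
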

In a sense then, the goal is to probe the transition point (in the canonical class of “principles of pure strength”, most notably large cardinal axioms) between consistency and inconsistency. I believe that (fine-structural) inner model theory is a way of sharpening our understanding from below. Here we are hoping to gain understanding from above. If one could find an inconsistency in the choiceless large cardinal hierarchy then it is natural to expect (e.g. on the basis of the speed-up phenomenon) that as one pushes the inconsistency downward the proof of the inconsistency gets more involved — it gets “deeper”.

In any case, these are the goals — wipe out the choiceless large cardinal hierarchy, find a “deep inconsistency”, and probe the transition point between consistency and inconsistency.

I am not here to promote this work. I make no claim that it is deep or philosophically profound. I simply find it interesting and fun, an extension of the game we played as children: “What is the biggest number you can think of?”

And instead of getting into the technical details — defining “Berkeley cardinal”, stating the main results, etc. — I will just attach the slides for a talk I gave on the subject, so that anyone who is interested can have a look.

It gets weird up there …

Best, Peter.

Deep_Inconsistency

# Re: Paper and slides on indefiniteness of CH

Dear Sy,

(1) First, let me first recall the context of your first comment: I was outlining a way in which “maximality” (understood in one way) could arguably lead to a violation of AC. The version of “maximality” that I was using was based on maximizing interpretability power. You pointed out that this form of maximality had to do with theories while the general form you considered had to do with sets. That is true. But (and this may not have been clear) I was imagining someone who used “maximality considerations” to parallel what people say about maximizing interpretability power but instead to make the claim that such theories were “intrinsically justified on the “maximal” iterative concept of set” (something that John would not maintain). I was imagining someone like Magidor who thought that Vopenka was so justified (in some sense at least) and went on further to make the claim that Reinhardt cardinals and Super Reinhardt cardinals and Berkeley cardinals and so on … were so justified. I can imagine someone making such a case, especially in light of the vagueness surrounding “maximality”. Anyway, that was the set-up of my rather fanciful scenario.

As you know I don’t even see how to get measurables out of the MIC. One could try to embarrass someone who thinks they can by suggesting that such a claim would lead them into inferring not-AC from Maximality via Reinhardt cardinals. But I don’t think that works, because one has to draw the line when ZFC speaks up: I like the axiom “not every real is generic over HOD” as a natural form of Maximality, but the reality is that ZFC won’t tolerate this. I don’t consider this to be a flaw in the idea that Maximality entails V not-equal HOD, but only a reality check, a mathematical constraint on adopting something that is conceptually appealing (a frequent occurrence in the HP).

Yes, I am aware of that. I was not saying that on your conception of “the “maximal” iterative conception of set” you should be able to get measurables. And I was not endorsing the view that “maximality” leads to Reinhardt cardinals and hence a failure of AC. Remember, I had just said that I don’t buy the arguments that “maximality” implies AC or that “maximality” implies not-AC. Indeed, I had said that I don’t even have a grip on “maximality”. It seems to me to be a useful heuristic and nothing more than that. And given the vagueness of the notion and the different ways different people employ it, with little convergence (except along specific dimensions — as with forcing axioms), it seems that “anything goes”. I then went on to outline a fanciful story about choiceless large cardinals.

I confess: I was sort of trying to change the subject, simply because I find this topic fascinating.

(B) One could keep V = L by appealing to $\Sigma^1_2$ absoluteness and, instead of asserting the large cardinal axiom, assert that there is a countable transitive model of ZFC that satisfies it.

I take both of these to be examples of unnatural theories. In each case the new theory refers to the old theory in such a way that it is not taken at face value. It is a case of “kicking away the ladder after one has climbed it”. Or, to change the metaphor, the new theory is parasitic on the old and if we kill the host then the new theory doesn’t have a chance.

Isn’t $\text{AD}^{L(\mathbb R)}$ parasitic on AD?

No, $\text{AD}^{L(\mathbb R)}$ is not parasitic on AD in the sense of “parasitic” as I was using it. Of course, $\text{AD}^{L(\mathbb R)}$ involves the concept of determinacy. So does the statement “all open sets are determined” but that statement is not parasitic on AD. For every concept C we are almost always interested in figuring out whether C holds of a class of objects that falls short of everything (of which it makes sense to ask “does C hold of X?”). One exception (not a very interesting one) is when C is “is identical to itself”.

In the case of constructibility, when Goedel introduced L he was quite clear in saying that the construction of L (as he was using it) made sense only in the context where one presupposed all of the ordinals, as given by ZFC. If one took the predicativist considerations seriously one would not arrive at L (satisfying ZFC) but rather something like $L_{\Gamma_0}$. And when Jensen showed that various things (like $\square$) hold in L he was simply restricting his attention to a certain inner model and not presupposing that those things (like $\square$) really held.

Moreover, in the case where C is the concept of determinacy (which is at issue here) no one to my knowledge has ever maintained that all sets of reals are determined (any more than when dealing with the concept of Lebesgue measurability anyone has maintained that all sets of reals are Lebesgue measurable).

In the very paper — Mycielski and Steinhaus (1962) — in which AD was introduced the authors wrote:

“Our axiom can be considered as a restriction of the classical notion of a set leading to a smaller universum, say of determined sets, which reflect some physical intuitions which are not fulfilled by the classical sets (e.g. paradoxical decompositions of the sphere are eliminated by [AD]). Our axiom could be considered as an axiom added to the classical set theory claiming the existence of a class of sets satisfying [AD] and the classical axioms without the axiom of choice.”

Shortly thereafter, in 1964, Mycielski proposed that the axiom held in an inner model: “a subclass of the class of all sets with the same membership relation. It would be still more pleasant if such a submodel contains all the real numbers.” Solovay and Takeuti pointed out that the obvious inner model is $L(\mathbb R)$. And, in the late 1960s Solovay conjectured that under suitable large cardinal assumptions — in particular, the existence of a supercompact — AD actually holds in $L(\mathbb R)$.

As Kanamori puts it: “ZF + AD was never widely entertained as a serious alternative to ZFC, and increasingly from the early 1970’s onward consequences of ZF + AD were regarded as what holds in $L(\mathbb R)$ assuming $AD^{L(\mathbb R)}$.”

The case of Reinhardt cardinals is entirely different. This axiom was proposed.

The statement “I am such and such a class-size forcing extension (satisfying ZFC) of an inner model of (say) a Super Reinhardt cardinal” (which is based on a fairly deep theorem) is parasitic on the inner model in a way that “all open sets are determined” (or “all projective sets are determined” or “all sets of reals in $L(\mathbb R)$ are determined”) is NOT parasitic on the statement “all sets of reals are determined”.

(3) You asked about whether (from a mathematical point of view) such a move was even possible in the case of choiceless large cardinal axioms. The situation is delicate.

An old result of Woodin is that if there is a Super Reinhardt cardinal then one can force AC via a class-size forcing. A modification of this result shows that if any choiceless large cardinal axiom is consistent with an extendible cardinal then one can force AC over any model of the choiceless large cardinal and an extendible.

The question then is whether choiceless large cardinals are consistent with an extendible. In general, if the HOD Conjecture holds then choiceless large cardinals are not consistent with an extendible cardinal. For example, this is true of Berkeley cardinals. If the HOD Conjecture fails then a whole new world — a very bizarre world — opens up. That’s what we are investigating.
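
For readers following along, the HOD Conjecture referred to here is, in the formulation I know from Woodin’s papers (the notion of $\omega$-strong measurability is technical and I omit its definition):

```latex
% HOD Conjecture (Woodin): there is a proper class of regular
% cardinals $\lambda$ that are not $\omega$-strongly measurable
% in $\mathrm{HOD}$.
%
% By Woodin's HOD Dichotomy theorem, if $\delta$ is extendible then
% either $\mathrm{HOD}$ is close to $V$ above $\delta$ (correctly
% computing singulars and their successors) or every regular cardinal
% $\geq \delta$ is $\omega$-strongly measurable in $\mathrm{HOD}$.
% The HOD Conjecture rules out the second, "far", alternative.
```
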

As a journal editor I’ve had to deal with papers that analyse some family of new large cardinal notions for their own sake and the question comes up each time: What is the point of this if the new notions are not known to be useful for something else in set theory?

Are these choiceless LC axioms useful for set theory with AC? (Pen asked a related question.) If not, then the most generous interpretation of this study is that it prepares mathematical tools for the day when such applications are found.

Let me first describe (in my letter to Pen) why I am interested in choiceless large cardinal axioms. To anticipate that, let me say here that, yes, I am interested in an “application” of these axioms: the goal is to show that they imply 0 = 1.

Best,
Peter

# Re: Paper and slides on indefiniteness of CH

Dear Peter,

On Sat, 18 Oct 2014, Koellner, Peter wrote:

[This is the first of two letters on “choiceless large cardinal axioms”.]

Dear Sy,

I’m sorry for the delay in responding — I’ve had a hard time keeping up with this thread.

I’m glad we agree on the question of AC and “maximality”.

(1) First, let me first recall the context of your first comment: I was outlining a way in which “maximality” (understood in one way) could arguably lead to a violation of AC. The version of “maximality” that I was using was based on maximizing interpretability power. You pointed out that this form of maximality had to do with theories while the general form you considered had to do with sets. That is true. But (and this may not have been clear) I was imagining someone who used “maximality considerations” to parallel what people say about maximizing interpretability power but instead to make the claim that such theories were “intrinsically justified on the “maximal” iterative concept of set” (something that John would not maintain). I was imagining someone like Magidor who thought that Vopenka was so justified (in some sense at least) and went on further to make the claim that Reinhardt cardinals and Super Reinhardt cardinals and Berkeley cardinals and so on … were so justified. I can imagine someone making such a case, especially in light of the vagueness surrounding “maximality”. Anyway, that was the set-up of my rather fanciful scenario.

As you know I don’t even see how to get measurables out of the MIC. One could try to embarrass someone who thinks they can by suggesting that such a claim would lead them into inferring not-AC from Maximality via Reinhardt cardinals. But I don’t think that works, because one has to draw the line when ZFC speaks up: I like the axiom “not every real is generic over HOD” as a natural form of Maximality, but the reality is that ZFC won’t tolerate this. I don’t consider this to be a flaw in the idea that Maximality entails V not-equal HOD, but only a reality check, a mathematical constraint on adopting something that is conceptually appealing (a frequent occurrence in the HP).

The refutation of Reinhardt cardinals in ZFC is another illustration of my point that one cannot infer LC existence from LC consistency.

(2) In continuing with this scenario I made the remark: “On this picture, AC would be viewed like V = L, as a limiting principle, a principle that holds up to a certain point in the interpretability hierarchy (while one is following a “natural” path) and then gets turned off past a certain stage.” You pointed out (as you did to John) that there are ways of climbing the interpretability hierarchy where one can avoid flipping off this switch (or other switches). That is true. And you give an example (in terms of statements asserting that there is an inner model satisfying a certain statement). I am aware of such examples. That’s why I was shielding myself with that powerful device … the “scare quotes”. (I qualified the statement with “while one is following a “natural” path”.)

Here are some ways of climbing the interpretability hierarchy that I take to be “non-natural”. (The main point is not so much that they are “non-natural” but rather that they are not legitimate unless the host theory is legitimate. They are parasitic and get their legitimacy from that of the host.)

(A) Given a theory $T$ (in the language of set theory and extending ZFC) the theory $\textsf{PA} + \bigcup_{n<\omega} T\restriction n$ (where $T\restriction n$ is the first $n$ sentences of $T$) is mutually interpretable with $T$. In this way one could climb the consistency hierarchy while remaining in number theory.

I guess you meant $\text{Con}(T\restriction n)$ instead of $T\restriction n$.

(B) One could keep V = L by appealing to $\Sigma^1_2$ absoluteness and, instead of asserting the large cardinal axiom, assert that there is a countable transitive model of ZFC that satisfies it.

I take both of these to be examples of unnatural theories. In each case the new theory refers to the old theory in such a way that it is not taken at face value. It is a case of “kicking away the ladder after one has climbed it”. Or, to change the metaphor, the new theory is parasitic on the old and if we kill the host then the new theory doesn’t have a chance.

Isn’t $\text{AD}^{L(\mathbb R)}$ parasitic on AD?

I view your suggestion — of moving from a choiceless large cardinal axiom L to a ZFC axiom of the form “There is an inner model M of ZF+L and I am this class-size forcing extension of M” — in the same light.

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

(3) You asked about whether (from a mathematical point of view) such a move was even possible in the case of choiceless large cardinal axioms. The situation is delicate.

An old result of Woodin is that if there is a Super Reinhardt cardinal then one can force AC via a class-size forcing. A modification of this result shows that if any choiceless large cardinal axiom is consistent with an extendible cardinal then one can force AC over any model of the choiceless large cardinal and an extendible.

The question then is whether choiceless large cardinals are consistent with an extendible. In general, if the HOD Conjecture holds then choiceless large cardinals are _not_ consistent with an extendible cardinal. For example, this is true of Berkeley cardinals. If the HOD Conjecture fails then a whole new world — a very bizarre world — opens up. That’s what we are investigating.

As a journal editor I’ve had to deal with papers that analyse some family of new large cardinal notions for their own sake and the question comes up each time: What is the point of this if the new notions are not known to be useful for something else in set theory?

Are these choiceless LC axioms useful for set theory with AC? (Pen asked a related question.) If not, then the most generous interpretation of this study is that it prepares mathematical tools for the day when such applications are found.

Best, Sy

# Re: Paper and slides on indefiniteness of CH

On Sat, 18 Oct 2014, W Hugh Woodin wrote:

And similarly, as I said before, suppose that any model of Reinhardt’s axiom can be thickened to a model of ZFC. Then once again AC is not threatened, Reinhardt cardinals and beyond will simply exist in inner models of V and choice holds in V. I think that Hugh in fact explained that any model of a Reinhardt can indeed be thickened to a model of AC.

Just so the mathematics does not drift: I need a proper class of supercompact cardinals. So Super Reinhardt is OK, as is Reinhardt + “there is an extendible”.

Do you mean an extendible above the Reinhardt? Aren’t Reinhardt cardinals extendible?

Thanks,
Sy

# Re: Paper and slides on indefiniteness of CH

Dear Pen and Peter,

On Tue, 14 Oct 2014, Penelope Maddy wrote:

Thanks for bringing up the choiceless cardinals, Peter. As a strictly amateur kibitzer, my hunch has been that their consistency alone wouldn’t be enough to unseat AC, that they’d have to generate something mathematically attractive (analogous to ordinary LCs generating the #’s, say).

Even then I don’t see that AC is threatened:

There is a hierarchy of extensions of PD, asserting AD for larger and larger collections of sets of reals within $L(\mathbb R)$. They are all consistent with AC. But then when you go “too far” and assert AD for all sets of reals in $L(\mathbb R)$ you contradict AC. Of course AC was never threatened by this, as we just say that AD holds in the inner model $L(\mathbb R)$ and V is a model of AC which is thicker. (Or do you take the position that AD offers nothing “mathematically attractive” not offered by its fragments?)
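
For reference, the hierarchy Sy describes is calibrated by standard theorems (Gale–Stewart, Martin, Martin–Steel, Woodin); stated here from memory and suppressing some refinements:

```latex
% Determinacy for definable pointclasses, all consistent with AC:
\begin{align*}
  \textsf{ZFC} &\vdash \text{open determinacy and Borel determinacy},\\
  n \text{ Woodin cardinals, measurable above}
    &\implies \boldsymbol{\Pi}^1_{n+1}\text{-determinacy},\\
  \omega \text{ Woodin cardinals, measurable above}
    &\implies \text{AD}^{L(\mathbb R)}.
\end{align*}
% By contrast, full AD refutes AC: from a well-ordering of the reals
% one builds an undetermined game. Hence AD is naturally asserted of
% the inner model $L(\mathbb R)$ rather than of $V$.
```
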

And similarly, as I said before, suppose that any model of Reinhardt’s axiom can be thickened to a model of ZFC. Then once again AC is not threatened, Reinhardt cardinals and beyond will simply exist in inner models of V and choice holds in V. I think that Hugh in fact explained that any model of a Reinhardt can indeed be thickened to a model of AC.

Best,
Sy

PS: I am travelling now so my e-mails this week and next might be rather sporadic.

# Re: Paper and slides on indefiniteness of CH

[This is the first of two letters on "choiceless large cardinal axioms".]

Dear Sy,

I’m sorry for the delay in responding — I’ve had a hard time keeping up with this thread.

I’m glad we agree on the question of AC and “maximality”.

(1) First, let me first recall the context of your first comment: I was outlining a way in which “maximality” (understood in one way) could arguably lead to a violation of AC. The version of “maximality” that I was using was based on maximizing interpretability power. You pointed out that this form of maximality had to do with theories while the general form you considered had to do with sets. That is true. But (and this may not have been clear) I was imagining someone who used “maximality considerations” to parallel what people say about maximizing interpretability power but instead to make the claim that such theories were “intrinsically justified on the “maximal” iterative concept of set” (something that John would not maintain). I was imagining someone like Magidor who thought that Vopenka was so justified (in some sense at least) and went on further to make the claim that Reinhardt cardinals and Super Reinhardt cardinals and Berkeley cardinals and so on … were so justified. I can imagine someone making such a case, especially in light of the vagueness surrounding “maximality”. Anyway, that was the set-up of my rather fanciful scenario.

(2) In continuing with this scenario I made the remark: “On this picture, AC would be viewed like V=L, as a limiting principle, a principle that holds up to a certain point in the interpretability hierarchy (while one is following a “natural” path) and then gets turned off past a certain stage.” You pointed out (as you did to John) that there are ways of climbing the interpretability hierarchy where one can avoid flipping off this switch (or other switches). That is true. And you give an example (in terms of statements asserting that there is an inner model satisfying a certain statement). I am aware of such examples. That’s why I was shielding myself with that powerful device … the “scare quotes”. (I qualified the statement with “while one is following a “natural” path”.)

Here are some ways of climbing the interpretability hierarchy that I take to be “non-natural”. (The main point is not so much that they are “non-natural” but rather that they are not legitimate unless the host theory is legitimate. They are parasitic and get their legitimacy from that of the host.)

(A) Given a theory $T$ (in the language of set theory and extending ZFC) the theory $\textsf{PA} + \bigcup_{n<\omega} T\restriction n$ (where $T\restriction n$ is the first $n$ sentences of T) is mutually interpretable with T. In this way one could climb the consistency hierarchy while remaining in number theory.

(B) One could keep V = L by appealing to $\Sigma^1_2$ absoluteness and, instead of asserting the large cardinal axiom, assert that there is a countable transitive model of ZFC that satisfies it.

I take both of these to be examples of unnatural theories. In each case the new theory refers to the old theory in such a way that it is not taken at face value. It is a case of “kicking away the ladder after one has climbed it”. Or, to change the metaphor, the new theory is parasitic on the old and if we kill the host then the new theory doesn’t have a chance.

I view your suggestion — of moving from a choiceless large cardinal axiom L to a ZFC axiom of the form “There is an inner model M of ZF+L and I am this class-size forcing extension of M” — in the same light.

(3) You asked about whether (from a mathematical point of view) such a move was even possible in the case of choiceless large cardinal axioms. The situation is delicate.

An old result of Woodin is that if there is a Super Reinhardt cardinal then one can force AC via a class-size forcing. A modification of this result shows that if any choiceless large cardinal axiom is consistent with an extendible cardinal then one can force AC over any model of the choiceless large cardinal and an extendible.

The question then is whether choiceless large cardinals are consistent with an extendible. In general, if the HOD Conjecture holds then choiceless large cardinals are not consistent with an extendible cardinal. For example, this is true of Berkeley cardinals. If the HOD Conjecture fails then a whole new world — a very bizarre world — opens up. That’s what we are investigating.

Best,
Peter

# Re: Paper and slides on indefiniteness of CH

Dear Peter,

Suppose it should turn out that the “choiceless” large cardinals are consistent. (This is a hierarchy of large cardinals that extends beyond $\textsf{I0}$. The first major marker is a Reinhardt cardinal. After that one has Super Reinhardt cardinals and then the hierarchy of Berkeley cardinals. This is something that Woodin, Bagaria, and I have recently been investigating.) Suppose that the principles in this hierarchy are consistent. Then if we are to follow the principle of “maximality” — in the sense of maximizing interpretability power — these principles will lead us upward to theories that violate AC. On this picture, AC would be viewed like V = L, as a limiting principle, a principle that holds up to a certain point in the interpretability hierarchy (while one is following a “natural” path) and then gets turned off past a certain stage.

Thanks for bringing up the choiceless cardinals, Peter.  As a strictly amateur kibitzer, my hunch has been that their consistency alone wouldn’t be enough to unseat AC, that they’d have to generate something mathematically attractive (analogous to ordinary LCs generating the #’s, say).  In any case, if you (or one of your co-workers) were willing, I suspect more than a few of us would be very interested to hear about the state of play on these cardinals.

All best,
Pen

# Re: Paper and slides on indefiniteness of CH

Dear All:

Here are some questions and comments on the question of AC and “maximality”.

QUESTIONS:

Sy: In response to the result Hugh mentioned — which bears on Choiceless-HP — you wrote that the existence of supercompacts was “still unclear”. In your letter of August 8 to Pen you wrote: “I remain faithful to the extrinsically-confirmed fact that large cardinal axioms are consistent.” From that I assume that you take it to be extrinsically confirmed that supercompact cardinals are consistent. But here you say that the question of their existence is unclear. I would be very interested to hear what you have to say about what it would take to achieve the leap from consistency to existence. Would it be to show that such principles are intrinsically justified on the basis of the “maximal” iterative conception of set? If so, do you have in mind any candidates for doing that? (I am aware of the fact that while, e.g., $\textsf{IMH}^\#$ is consistent with all large cardinals it does not imply them.)

Harvey: The equivalence you mention between AC and the existence of maximal cliques is intriguing. You said that this topic (of how AC follows from “maximality”) has been well understood for a long time. What other results do you have in mind? I would be interested to hear whether you think that such results make a case for the claim that AC is indeed intrinsically justified on the basis of the “maximal” iterative conception of set. Since, like me, you put “maximality” in scare quotes, I assume that the answer is “no”.

I share these doubts and that is one reason I have a weak grasp on the notion of “being intrinsically justified on the basis of the ‘maximal’ iterative conception of set” and, consequently, cannot put much stock in it.

Considerations of maximality have certainly served as a useful heuristic that has led to some wonderful mathematics, and, in some cases, to a unified program, as in the case of forcing axioms (which can be construed as generalizations of the Baire Category Theorem). But it seems (to me at least) that the notion of “maximality” is a rather vague notion, one that has many dimensions, depending on what exactly it is that one is trying to maximize (for example, whether it is generalizations of the Baire Category Theorem, or the sort of inter-relations between candidate “V”s that Sy is investigating, or interpretability power). Moreover, there are widely conflicting intuitions as to when something follows from “maximality”. For example, some claim (and I believe Magidor is an example) that Vopenka’s Principle is so justified, while Sy would claim that it is not. (I say this with some hesitancy since although I have discussed the matter with both Magidor and Sy, I am not sure that they attach the same significance to the term “intrinsically justified”. (Philosophy is one of those areas that can seem tedious and frustrating at times since instead of proceeding freely in a shared language and simply saying things about the world, one must, at times, turn inward and discuss the language itself and the various senses that are attached to terms.))

Shifting focus to the question of AC more specifically, some have claimed that AC is intrinsically justified on the basis of the “maximal” iterative conception of set. (I believe that Ramsey argued this, and that Tait defends the claim, since he argues for something much stronger, namely, that AC follows from the meaning of higher-order quantifiers.)

One way in which people try to argue for AC on the basis of “maximality” is that if one has a collection of non-empty sets then there must, by “maximality”, be a choice function since otherwise the universe of sets would be impoverished.

There are problems with this.

First, the question of whether AC holds depends on two things — (1) the breadth of the collections of non-empty sets that one has to select from and (2) the breadth of the collection of choice functions. For AC to hold one needs the proper balance between (1) and (2). It is not straightforward that “maximality” implies AC because in addition to giving us lots of choice functions it also makes matters harder by giving us lots of collections of non-empty sets to choose from. What one gets out of “maximality” depends on where one puts the emphasis — on (1) or (2). For example, if one puts the emphasis on (2) then one can make a case for AC but if one puts the emphasis on (1) then one can make a case for things like Reinhardt cardinals (which provide us with so many sets that it is hard to find choice functions for them).
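For reference, here is a standard formulation of AC (not taken from the letters themselves) in which the two dimensions are visible — (1) enters through the family $\{X_i\}_{i \in I}$ quantified over, and (2) through the choice function $f$ asserted to exist:

$$\textsf{AC}:\quad \forall\, \{X_i\}_{i \in I}\ \Big[\big(\forall i \in I\ \, X_i \neq \emptyset\big) \rightarrow \exists f\colon I \to \textstyle\bigcup_{i \in I} X_i\ \ \forall i \in I\ \, f(i) \in X_i\Big].$$

Widening the range of the outer quantifier strengthens the hypothesis side of the principle; widening the supply of functions strengthens the conclusion side.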

Second, (and relatedly), by parity of reasoning one could argue for AD on the grounds that by “maximality” there must be lots of winning strategies. But AD contradicts AC.
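To make the parity explicit (again a standard formulation, not from the letters): AD asserts that for every payoff set $A \subseteq \omega^\omega$, the length-$\omega$ game $G_A$ — in which players I and II alternately choose natural numbers, with I winning iff the resulting sequence lies in $A$ — is determined:

$$\textsf{AD}:\quad \forall A \subseteq \omega^{\omega}\ \big(\text{player I or player II has a winning strategy in } G_A\big).$$

The conflict with AC is classical: a well-ordering of the reals lets one diagonalize against all strategies and construct an undetermined payoff set.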

Let me now focus on one dimension of “maximality”, namely, that of interpretability power, and say something that’s a bit “far out”.

Some have maintained that what we are trying to maximize is interpretability power. Steel maintains this and I believe that Maddy maintains this. I am pretty certain that neither of them would argue for this on the basis of what is “intrinsically justified on the basis of the ‘maximal’ iterative conception of set” but that is because neither of them would put much stock in the notion of “being an intrinsic justification on the basis of the ‘maximal’ iterative conception of set”. But someone who does put stock in this notion, might argue along similar lines. So let us run with this idea.

Suppose it should turn out that the “choiceless” large cardinals are consistent. (This is a hierarchy of large cardinals that extends beyond $\textsf{I}0$. The first major marker is a Reinhardt cardinal. After that one has Super Reinhardt cardinals and then the hierarchy of Berkeley cardinals. (This is something that Woodin, Bagaria, and myself have been recently investigating.)) Suppose that the principles in this hierarchy are consistent. Then if we are to follow the principle of “maximality” — in the sense of maximizing interpretability power — these principles will lead us upward to theories that violate AC. On this picture, AC would be viewed like V = L, as a limiting principle, a principle that holds up to a certain point in the interpretability hierarchy (while one is following a “natural” path) and then gets turned off past a certain stage.
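For readers unfamiliar with this hierarchy, the definitions can be sketched as follows (these are the standard formulations; all are to be read over ZF, since already a Reinhardt cardinal contradicts AC by Kunen’s theorem):

$$\begin{aligned}
&\textit{Reinhardt:}\ \text{there is a nontrivial elementary } j\colon V \to V.\\[2pt]
&\textit{Super Reinhardt:}\ \text{for every ordinal } \lambda \text{ there is a nontrivial elementary } j\colon V \to V \text{ with } j(\mathrm{crit}(j)) > \lambda.\\[2pt]
&\textit{Berkeley:}\ \delta \text{ is Berkeley if for every transitive set } M \ni \delta \text{ and every } \eta < \delta\\
&\qquad\text{there is an elementary } j\colon M \to M \text{ with } \eta < \mathrm{crit}(j) < \delta.
\end{aligned}$$

Here $\mathrm{crit}(j)$ denotes the critical point of $j$, the least ordinal moved.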

I really hope that these “choiceless large cardinals” are not consistent (and that is something we are trying to show). But my point is that if they are and one runs wild with “maximality” considerations then one can put together a case for the negation of AC.

In summary, it seems that there is not enough unity and convergence in this enterprise to inspire confidence in the notion of “being intrinsically justified on the basis of the ‘maximal’ iterative conception of set”. But raising skeptical concerns is all too easy and not very inspiring. So let us wait and see. Perhaps a unified notion will emerge and there will be convergence.

Best,
Peter