Re: Paper and slides on indefiniteness of CH

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly, I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

??? My question has nothing to do with ctm’s! It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway). I was referring to the many different forms of set-theoretic practice which disagree with each other on basic questions like CH. How do you assign a truth value to CH in light of this fact?

For your second question: if the tests are passed, then yes, I do think that V = Ultimate-L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Here come the irrelevant ctm’s again. But you do say that V = Ultimate L will “swamp all the others”, so perhaps that is your answer to my question. Now do you really believe that? You suggested that Forcing Axioms can somehow be “part of the picture” even under V = Ultimate L, but that surely doesn’t mean that Forcing Axioms are false and Ultimate L is true.

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism; it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

If the Ultimate-L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution to CH.

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics. Why should your programme be required to make “make or break” conjectures, and what is so attractive about that? As I understand the way Pen would put it, it all comes down to “good set theory” for your programme, and for that we need only see what comes out of your programme and not subject it to “death-defying” tests.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds? Your V = Ultimate L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Tue, 28 Oct 2014, W Hugh Woodin wrote:

My point is that the non-rigidity of HOD is a natural extrapolation of ZFC large cardinals into a new realm of strength. I only reject it now because of the Ultimate L Conjecture and its implication of the HOD Conjecture. It would be interesting to have an independent line which argues for the non-rigidity of HOD.

This is the only reason I ask.

Please don’t confuse two things: I conjectured the rigidity of the Stable Core for purely mathematical reasons. I don’t see it as part of the HP. Indeed, I don’t see a clear argument that the nonrigidity of inner models follows from some form of maximality.

But I still don’t have an answer to this question:

What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements?

But I did answer your question by stating how I see things developing, what my conception of V would be, and the tests that need to be passed. You were not happy with the answer. I guess I have nothing else to add at this point since I am focused on a rather specific scenario.

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Best,
Sy

PS: With regard to your mail starting with “PS:”: I have worked with people in model theory. When we get an idea we sometimes say “but that would give an easy solution to Vaught’s conjecture” so we start to look for (and find) a mistake. That’s all I meant by my comments: What I was doing would have given a “not difficult solution to the HOD conjecture”; so on this basis I should have doubted the argument and indeed I found a bug.

Re: Paper and slides on indefiniteness of CH

Dear Peter,

I think we should all be grateful to you for this eloquent description of how we gather evidence for new axioms based on the development of set theory. The first two examples (and possibly the third) that you present are beautiful cases of how a body of ideas converges on the formulation of a principle or principles with great explanatory power for topics which lie at the heart of the subject. Surely we have to congratulate those who have facilitated the results on determinacy and forcing axioms (and perhaps in time Hugh for his work on Ultimate L) for making this possible. Further, the examples mentioned meet your high standard for any such programme, which is that it “makes predictions which are later verified”.

I cannot imagine a more powerful statement of how Type 1 evidence for the truth of new axioms works, where again by “Type 1” I refer to set theory’s role as a field of mathematics and therefore by “Type 1 evidence” I mean evidence for the truth of a new axiom based on its importance for generating “good set theory”, in the sense that Pen has repeatedly emphasized.

But I do think that what you present is only part of the picture. Set theory is surely a field of mathematics that has its own key questions and as it evolves new ideas are introduced which clarify those questions. But surely other areas of mathematics share that feature, even if they are free of questions of independence; they can have analogous debates about which developments are most important for the field, just as in set theory. So what you describe could be analogously described in other areas of mathematics, where “predictions” are made about how certain approaches will lead to the solution of central open problems. Briefly put: In your description of programmes for set theory, you treat set theory in the same way as one would treat any field of mathematics.

But set theory is much more than that. Before I discuss this key point, let me interrupt myself with a brief reference to where this whole e-mail thread began, Sol’s comments about the indefiniteness of CH. As I have emphasized, there is no evidence that the pursuit of programmes like the ones you describe will agree on CH. Look at your 3 examples: The first has no opinion on CH, the second denies it and the third confirms it! I see set theory as a rich and developing subject, constantly transforming itself with new ideas, and as a result of that I think it unreasonable based on past and current evidence to think that CH will be decided by the Type 1 evidence that you describe. Pen’s suggestion that perhaps there will be a theory “whose virtues swamp the rest” is wishful thinking. Thus if we take only Type 1 evidence for the truth of new axioms into account (Sol rightly pointed out the misuse of the term “axiom” and Shelah rightly suggested the better term “semi-axiom”), we will not resolve CH and I expect that we won’t resolve much at all. Something more is needed if your goal is to say something about truth in set theory. (Of course it is fine not to have that goal, and only a handful of set-theorists have that goal.)

OK, back to the point that set theory is more than just a branch of mathematics. Set theory also has a role as a foundation for mathematics (Type 2). Can we really assume that Type 1 axioms like the ones you suggest in your three examples are the optimal ones for the role of set theory as a foundation? Do we really have a clear understanding of what axioms are optimal in this sense? I think it is clear that we do not.

The preliminary evidence would suggest that of the three examples you mention, the first and third are quite irrelevant to mathematics outside of set theory and the second (Forcing Axioms) is of great value to mathematics outside of set theory. Should we really ignore this in a discussion of set-theoretic truth? I mean set theory is a great branch of mathematics, rife with ideas, but can we really assert the “truth” of an axiom which serves set theory’s needs when other axioms that contradict it do a better job in providing other areas of mathematics what they need?

There is even more to the picture, beyond set theory as a branch of or a foundation for math. I am referring to its Type 3 role, as a study of the concept of set. There is widespread agreement that this concept entails the maximality of V in height and width. The challenge is to explain this feature in mathematical terms, the goal of the HP. There is no a priori reason whatsoever to assume that the mathematical consequences of maximality in this sense will conform to axioms which best serve the Type 1 or Type 2 needs of set theory (as a branch of or foundation for mathematics). Moreover, to pursue this programme requires a very different approach than what is familiar to the Type 1 set-theorist, perfectly described in your previous e-mail. I am asking you to please be open-minded about this, because the standards you set and the assumptions that you make when pursuing new axioms for “good set theory” do not apply when pursuing consequences of maximality in the HP. The HP is a very different kind of programme.

To illustrate this, let me begin with two quotes which illustrate the difference and set the tone for the HP:

I said to Hugh:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

In other words, my starting point is not what facilitates the “best set theory”, but what one can understand about maximality of V in height and width.

On a recent occasion, Hugh said to me:

[Yet] you propose to deduce the non existence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

This second quote precisely indicates the difference in our points of view. The HP is intended to be an unbiased analysis of the maximality of V in height and width, grounded in our intuitions about this feature and limited by what is possible mathematically. These intuitions are indeed fairly robust, surely more so than our judgments about what is “good set theory”. I know of no persuasive argument that large cardinal existence (beyond what is compatible with V = L) follows from the maximality of V in height and width. Indeed in the literature authors such as Gödel had doubts about this, whereas they have felt that inaccessible cardinals are derivable from maximality in height.

So the only reasonable interpretation of Hugh’s comment is that he feels that LC existence is necessary for “good set theory” and that such Type 1 evidence should override any investigation of the maximality of V in height and width. Pen and I discussed this (in what seems like) ages ago in the terminology of “veto power” and I came to the conclusion that it should not be the intention of the HP to have its choice of criteria dictated by what is good for the practice of set theory as mathematics.

To repeat, the HP works like this: We have an intuition about maximality (of V in height and width) which we can test out with various criteria. It is a lengthy process by which we formulate, investigate and compare different criteria. Sometimes we “unify” or “synthesise” two criteria into one, resulting in a new criterion that based on our intuitions about maximality does a better job of expressing this feature than did the individual criteria which were unified. And sometimes our criteria conflict with reality, namely they are shown to be inconsistent in ZFC. Here are some examples:

Synthesis: The IMH is the most obvious criterion for expressing the maximality of V in width. #-generation is the strongest criterion for expressing the maximality of V in height. If we unify these we get IMH#, which is consistent but behaves differently than either the IMH alone or #-generation alone. Our intuition says that the IMH# better expresses maximality than either the IMH alone or #-generation alone.
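For reference, a rough statement of the IMH (following its published formulation; stated informally here, with details suppressed):

\textsf{IMH}: If a first-order sentence without parameters holds in an inner model of some outer model of V, then it already holds in some inner model of V.

The \textsf{IMH}^\# is then, roughly speaking, the same principle restricted to #-generated models, so that width maximality is tested only against universes which are also maximal in height.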

Inconsistency (examples with HOD): We can consistently assert the maximality principle V \neq \text{HOD}. A natural strengthening is that \alpha^+ of HOD is less than \alpha^+ for all infinite cardinals \alpha. Still consistent. But then we go to the further natural strengthening: \alpha^+ of \text{HOD}_x is less than \alpha^+ for all subsets x of \alpha (for all infinite cardinals \alpha). This is inconsistent. So we back off, asserting the latter only for \alpha of cofinality \omega. Now it is consistent for many such \alpha, but not yet known to be consistent for all such \alpha. We continue to explore the limits of maximality in this way, in light of what is consistent with ZFC. A similar issue arises with the statement that \alpha is inaccessible in HOD for all infinite regular \alpha, which is not yet known to be consistent (my belief is that it is).
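In display form, the ladder just described (writing (\alpha^+)^{\text{HOD}} for “\alpha^+ of HOD”):

1) V \neq \text{HOD}: consistent.

2) (\alpha^+)^{\text{HOD}} < \alpha^+ for all infinite cardinals \alpha: consistent.

3) (\alpha^+)^{\text{HOD}_x} < \alpha^+ for all infinite cardinals \alpha and all x \subseteq \alpha: inconsistent.

4) The statement in 3), restricted to \alpha of cofinality \omega: consistent for many such \alpha, open for all such \alpha simultaneously.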

The process continues. There is a constant interplay between criteria suggested by our maximality intuitions and the mathematics behind these criteria. Obviously we have to modify what we are doing as we learn more of the mathematics. Indeed, as you pointed out in your more recent e-mail, there are maximality criteria which contradict ZFC; this has been obvious for a long time, in light of Vopenka’s theorem.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

Once again, the aim of the programme is to understand the consequences of the maximality of V in height and width. Your criterion of “making predictions” may be fine for your Type 1 programmes, which are grounded by nothing more than “good set theory”, but it is not appropriate for the HP. That is because the HP is grounded by an intrinsic feature of the set-concept, maximality, which will take a long time to understand. I see no basis for your suggestion that the programme is “infinitely revisable”, it simply requires a huge amount of mathematics to carry out. Already the synthesis of the IMH with #-generation is considerable progress, although to get a deeper understanding we’ll definitely have to deal with the \textsf{SIMH}^\# and HOD-maximality.

If you insist on a “prediction” the best I can do is to say that the way things look now, at this very preliminary stage of the programme, I would guess that both not-CH and the nonexistence of supercompacts will come out. But that can’t be more than a guess at this point.

Now I ask you this: Suppose we have two Type 1 axioms, like the ones in your examples. Suppose that one is better than the other for Type 2 reasons, i.e., is more effective for mathematics outside of set theory. Does that tip the balance between those two Type 1 axioms in terms of which is closer to the truth? And I ask the same question for Type 3: Could you imagine joining forces and giving priority to axioms that both serve the needs of set theory as mathematics and are derivable from the maximality of V in height and width?

One additional worry is the vagueness of the idea of the “ ‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

No, the demands you want to make of a programme are appropriate for finding the right axioms for “good set theory” but not for an analysis of the maximality of V in height and width. For the latter it is more than sufficient to analyse the natural candidates for maximality criteria provided by our intuitions and achieve a synthesis. I predict that this will happen with striking consequences, but those consequences cannot be predicted without a lot of hard work.

Thanks,
Sy

PS: The above also addresses your more recent mail: I don’t reject a form of maximality just because it contradicts supercompacts (because I don’t see how supercompact existence is derivable from any form of maximality) and I don’t see any problem with rejecting maximality principles that contradict ZFC, simply because by convention ZFC is taken in the HP as the standard theory.

PPS: A somewhat weird but possibly interesting investigation would indeed be to drop the ZFC convention and examine criteria for the maximality of V in height and width over a weaker theory.

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

The Stability Predicate S is the important thing. V is generic over the Stable Core = (L[S],S). As far as I know, V may not be generic over HOD; but it is generic over (HOD,S).

V is always a symmetric extension of HOD but maybe you have something else in mind.

Let A be a V-generic class of ordinals (so A codes V). Then A is (HOD, P)-generic for a class partial order P which is definable in V. So if T is the \Sigma_2-theory of the ordinals then P is definable in (HOD,T) and A is generic over (HOD,T).

Why are you stating a weaker result than mine? I show that for some A, (V,A) models ZFC and is generic over the Stable Core and hence over (HOD,S) where S is the Stability Predicate. The Stability Predicate is \Delta_2, not \Sigma_2. And a crucial point is that its only reference to truth in V is via the “stability relationships” between V_\alpha’s, a much more absolute property than truth which is much easier to analyse. As I said, the Stability Predicate is the important thing in my conjecture.
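Side by side, the two genericity statements at issue (restating the claims above, nothing new):

Hugh’s: every V-generic class A \subseteq \text{Ord} is generic over (\text{HOD},T), where T is the \Sigma_2-theory of the ordinals.

Mine: for some V-generic A with (V,A) \models \text{ZFC}, A is generic over the Stable Core (L[S],S), and hence over (\text{HOD},S), where S is the \Delta_2 Stability Predicate.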

But you did not answer my question. Are you just really conjecturing that if V is generic over N then there is no nontrivial j:N \to N?

But I did answer your question: The Stability Predicate is the basis for my conjecture, not just some arbitrary predicate that makes V generic over HOD. In fact my conjecture looks stronger than the rigidity of (HOD,S), as the Stable Core (L[S],S) is smaller.

Let me phrase this more precisely.

Suppose A is a V-generic class of ordinals, N is an inner model of V, P is a partial order which is amenable to N and that A is (N,P)-generic.

Are you conjecturing that there is no non-trivial j:N \to N? Or that there is no nontrivial j:(N,P) \to (N,P)? Or nothing along these general lines?

As I said: Nothing along those general lines.

I show (in Morse-Kelley) that the (enriched) Stable Core is rigid for “V-constructible” embeddings. That makes key use of the (enriched) Stability Predicate. I wouldn’t know how to handle a different predicate.

I would think that based on HP etc., you would actually conjecture that there is a nontrivial j:\text{HOD} \to \text{HOD}.

No. This is the “reality check” that Peter and I discussed. Maximality suggests that V is as far from HOD as possible, but we have to acknowledge what is not possible.

So maximality considerations have no predictive content. It is an idea which has to be continually revised in the face of new results.

Finally you are catching on! I have been trying to say this from the beginning, and both you and Peter were strangely trying to “pin me down” on what the “definitive consequences” or the “make-or-break predictions” of the HP are. It is a study of maximality criteria, with the aim of converging towards the optimal such criterion. How can you expect such a programme to make “definitive predictions” in short time? In recursion theory language, the process is \Delta_2 and not \Sigma_1 (changes of direction are permitted when necessary; witness the IMH being replaced by the \textsf{IMH}^\#). And set-theoretic practice is the big daddy: If you investigate a maximality criterion which ZFC proves inconsistent then you have to revise what you are doing (is “all regular cardinals inaccessible in HOD” consistent? I think so, but may be wrong.)
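A standard recursion-theoretic fact (Shoenfield’s limit lemma) makes the analogy precise: the \Delta_2 functions are exactly the limit-computable ones, i.e. those computable by a trial-and-error procedure which may revise its guess at each input finitely often, whereas a \Sigma_1 assertion can never be retracted once made. In symbols:

f \leq_T 0' \iff f(x) = \lim_{s \to \infty} g(x,s) for some computable g.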

Yet you propose to deduce the non existence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

If the goal is to understand maximality then that would be cheating! You may have extrinsic reasons for wanting LCs as opposed to LCs in inner models (important note: for Reinhardt cardinals that would be the only option anyway!) but those reasons have no role in an analysis of maximality of V in height and width.

I guess this is yet another point we just disagree on.

But I still don’t have an answer to this question: “What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements?”

Have you read Pen’s “Defending the Axioms”, and if so, does her Thin Realist describe your views? And if so, do you have an argument that LC existence is necessary for “good set theory”?

PS: With embarrassment and apologies to the group, I have to report that I found a bug in my argument that maximality kills supercompacts. I’ll try to fix it and let you know what happens. I am very sorry for the premature claim.

Suppose that there is an extendible and that the HOD Conjecture fails. Then:

1) Every regular cardinal above the least extendible cardinal is measurable in HOD (so HOD computes no successors correctly above the least extendible cardinal).

2) Suppose \gamma is an inaccessible cardinal which is a limit of extendible cardinals. Then there is a club C \subset \gamma such that every \kappa \in C is a regular cardinal in \text{HOD} (and hence inaccessible in HOD).

So, if you fix the proof, you have proved the HOD Conjecture.

I’ll try not to let that scare me ;)

But I’m also not surprised that there was a bug in my proof!

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

I too find Peter’s above-quoted remarks compelling and I strongly endorse them.

But you are keeping me on too short a leash, Pen! Please acknowledge the difference between a discussion of the motivation, setup and legitimacy of my approach on the one hand, and its implementation on the other. In the implementation of the HP I have to delve into hard-core set theory, including issues involving HOD and indiscernibles which, although surely not as difficult to appreciate as Ultimate-L, are nevertheless demanding of a strong knowledge of set theory. The IMH may be readily understandable to a set theory bonehead, but it is too much to expect that #-generation or the proper treatment of class-genericity be immediately and “generally understandable”. Otherwise put: it is too much to expect that, in order to uncover the real meaning of “maximality in height and width”, we be bound to explain every move to our grandparents. Why should everything in the implementation of the HP lie at the surface of set theory? Maximality is to be viewed as intrinsic, but the tools needed to understand it come from set-theoretic practice, even though there is no “extrinsic justification” involved.

Ciao,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Mon, 27 Oct 2014, W Hugh Woodin wrote:

Dear Sy,

On Oct 26, 2014, at 7:39 PM, Sy David Friedman wrote:

Dear Peter,

But probably there’s a proof of no Reinhardt cardinals in ZF, even without Ultimate L:

Conjecture: In ZF, the Stable Core is rigid.

Note that V is generic over the Stable Core.

I took a brief look at your paper on the stable core and did not immediately see anything that genuinely seemed to argue for the conjecture you make above. (Maybe I just did not look at the correct paper). Are you just really conjecturing that there is no (nontrivial) j:\text{HOD} \to \text{HOD}, or more generally that if V is a generic extension of an inner model N (by a class forcing which is amenable to N) then there is no nontrivial j:N \to N? Or is there extra information about the Stable Core which motivates the conjecture?

The Stability Predicate S is the important thing. V is generic over the Stable Core = (L[S],S). As far as I know, V may not be generic over HOD; but it is generic over (HOD,S).

I would think that based on HP etc., you would actually conjecture that there is a nontrivial j:\text{HOD} \to \text{HOD}.

No. This is the “reality check” that Peter and I discussed. Maximality suggests that V is as far from HOD as possible, but we have to acknowledge what is not possible. The existence of reals not in HOD is possible but the existence of reals which are not set-generic over HOD is impossible. I have no idea if HOD is rigid but the fact that V is generic over the Stable Core is evidence that the Stable Core, and hence (HOD,S), is rigid. In my second paper on the Stable Core (the Enriched Stable Core) I show that (HOD,S*) is indeed rigid for some definable predicate S* (the Enriched Stability Predicate) with respect to “constructible” embeddings.

Thanks, Sy

PS: With embarrassment and apologies to the group, I have to report that I found a bug in my argument that maximality kills supercompacts. I’ll try to fix it and let you know what happens. I am very sorry for the premature claim.

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Hmm… \aleph_{\omega} is an infinite cardinal which is singular in HOD.

Something is missing in the formulation of (**).

You are absolutely right! My apologies for that. It should have been:

(*) \alpha^+ of \text{HOD} is less than \alpha^+ for all infinite cardinals \alpha, and

(**) There is a closed unbounded class of cardinals which are regular in HOD.

Or even more simply:

(***) There is a closed unbounded class of cardinals \alpha such that \alpha is regular in HOD and \alpha^+ of HOD is less than \alpha^+.

(***) is consistent with #-generation (ordinal maximality) and implies that there are no supercompacts.
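Written as a single formula (the same content as (***)):

\exists \text{ club } C \subseteq \text{Ord} \;\forall \alpha \in C\; (\alpha \text{ is regular in HOD and } (\alpha^+)^{\text{HOD}} < \alpha^+).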

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

On Mon, 27 Oct 2014, Harvey Friedman wrote:

“It just occurred to me that we already have a consistent maximality principle which provably contradicts large cardinal existence.

This is not enough evidence to infer the nonexistence of supercompacts from maximality but this is definitely pointing in that direction!”

Well, how about the closely related

“It just occurred to me that we already have a maximality principle which provably contradicts ZFC.

This is not enough evidence to infer that ZFC fails from maximality but this is definitely pointing in that direction!”

Sorry, I thought that it was understood that in the context of the HP, “maximality” only refers to the “maximality of V in height and width”. Do you have a maximality principle in this sense which contradicts ZFC? I would be very interested in hearing about that!

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

This is an addendum to what I wrote to you about large cardinal existence on 25.October.

It just occurred to me that we already have a consistent maximality principle which provably contradicts large cardinal existence.

Consider:

(*) \alpha^+ of HOD is less than \alpha^+ for all infinite cardinals \alpha, and

(**) Every infinite cardinal is regular in HOD.

Cummings, Golshani and I proved the consistency of the conjunction of (*) and (**).

Using work of Cummings-Schimmerling, Gitik and Dzamonja-Shelah we get:

Fact. The conjunction of (*) and (**) implies that there are no supercompacts.

[One can weaken the hypothesis to just: For cofinally many singular strong limit cardinals \alpha of cofinality \omega, \alpha is regular in HOD and \alpha^+ of HOD is less than \alpha^+.]
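Schematically (restating the Fact and the bracketed weakening, with (\alpha^+)^{\text{HOD}} for “\alpha^+ of HOD”):

(*) \wedge (**) \implies \text{there is no supercompact cardinal},

and indeed already: \alpha regular in HOD and (\alpha^+)^{\text{HOD}} < \alpha^+ for cofinally many singular strong limit \alpha of cofinality \omega \implies \text{there is no supercompact cardinal}.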

This is not enough evidence to infer the nonexistence of supercompacts from maximality but this is definitely pointing in that direction!

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter,

On Mon, 27 Oct 2014, Koellner, Peter wrote:

Dear Sy,

The reason I didn’t quote that paragraph is that I had no comment on it. But now, upon re-reading it, I do have a comment. Here’s the paragraph:

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

Here’s the comment: This is a splendid endorsement of Hugh’s work on Ultimate L.

??? It is a scenario (not an endorsement) of an inner model theory of some kind; why Hugh’s version of it?

Let us hope that we don’t have to wait until the middle of the 22nd century.

We appear to disagree on whether \text{AD}^{L(\mathbb R)} is “parasitic” on AD in the way that “I am this [insert Woodin's forcing] class-sized forcing extension of an inner model of L”, where L is a choiceless large cardinal axiom. At least, I think we disagree. It is hard to tell, since you did not engage with those comments (which addressed the whole point at issue).

I have given up on trying to understand the word “parasitic”.

Let us push the analogy [between AD and choiceless large cardinals].

Shortly after AD was introduced L(\mathbb R) was seen as the natural inner model. And Solovay conjectured that \text{AD}^{L(\mathbb R)} follows from large cardinal axioms, in particular from the existence of a supercompact.

This leads to a fascinating challenge, given the analogy: Fix a choiceless large cardinal axiom C (Reinhardt, Super Reinhardt, Berkeley, etc.) Can you think of a large cardinal axiom L (in the context of ZFC) and an inner model M such that you would conjecture (in parallel with Solovay) that L implies that C holds in M?

You have overstretched the analogy to the point where it doesn’t work any more. \text{AD}^{L(\mathbb R)} is not about large cardinals and we had little reason to believe that it would outstrip LC axioms consistent with AC. Reinhardt cardinals are likely stronger (in consistency strength) than any LC axiom consistent with AC (I think they are just plain inconsistent). So we cannot expect an inner model for Reinhardt’s axiom just from a LC axiom consistent with AC! We need some other way of extending ZFC for that. Maybe the latter is the “fascinating challenge” that you want to formulate? I.e. how can we extend ZFC + LCs to yield an inner model for a Reinhardt cardinal?

Best,
Sy