Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your letter.

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelahian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!

I would like to repeat my request: Could you please give us an account of #-generation, explain how it arises from “length maximality”, and make a convincing case that it captures all (in particular, the Erdos cardinal \kappa(\omega)) and only the large cardinals that we can expect to follow from “length maximality”.

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand it. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V= L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L (which is necessary for this theory), despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (\omega_1, \omega_2, large, respectively).

You have not understood what I (or Pen, or Tony, or Charles, or anyone else who has discussed this matter in the literature) mean by “prediction and confirmation”. To understand what we mean you have to read the things we wrote; for example, the slides I sent you in response to precisely this question.

You cite cases of the form: “X was working with theory T. X conjectured P. The conjecture turned out to be true. Ergo: T!”

That is clearly not how “prediction and confirmation” works in making a case for new axioms. Why? Take T to be an arbitrary theory, say (to be specific) “\textsf{I}\Delta_0 + Exp is not total.” X conjectures that P follows from T. It turns out that X was right. Does that provide evidence for “Exp is not total”?

Certainly not.

This should be evident by looking at the case of “prediction and confirmation” in the physical sciences. Clearly not every verified prediction made on the basis of a theory T provides epistemic support for T. There are multiple (obvious) reasons for this, which I won’t rehearse. But let me mention two that are relevant to the present discussion. First, the theory T could have limited scope — it could pertain to what is thought (for other reasons) to be a fragment of the physical universe; e.g. the verified predictions of macroscopic mechanics do not provide epistemic support for conclusions about how subatomic particles behave. Cf. your V = L example. Second, the predictions must bear on the theory in a way that distinguishes it from other, competing theories.

Fine. But falling short of that ideal one at least would like to see a prediction which, if true, would (according to you) lend credence to your program and, if false, would (according to you) take credence away from your program, however slight the change in credence might be. But you appear to have also renounced these weaker rational constraints.

Fine. The Hyperuniverse Program is a different sort of thing. It isn’t like (an analogue of) astronomy. And you certainly don’t want it to be like (an analogue of) astrology. So there must be some rational constraints. What are they?

Apparently, the fact that a program suggests principles that continue to falter is not a rational constraint. What then are the rational constraints? Is the idea that we are just not there yet but that at the end of inquiry, when the dust settles, we will have convergence and we will arrive at “optimal” principles, and that at that stage there will be a rationally convincing case for the new axioms? (If so, then we will just have to wait and see whether you can deliver on this promise.)

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There are two reasons I keep giving a summary of the changes, of how we got to where we are now. First, this thread is quite intricate and it’s useful to give the reader a summary of the state of play. Second, in assessing the prospects and tenability of a program it is useful to keep track of its history, especially when that program is not in the business of making predictions.

There have been exactly 2 changes to the HP-procedure, one on August 21 when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and on September 24 when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it, the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism!

This is not correct. (I wish I didn’t have to document this).

I never attributed height-actualism to you. (I hope that was a typo on your part). I wrote (in the private letter of Oct. 6, which you quoted and responded to in public):

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

I never attributed height actualism. I only very tentatively said that it appeared you had switched to width actualism and said that I didn’t believe that this was your official view.

That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

This is not correct. (Again, I wish I didn’t have to document this.)

You responded to my letter (in public) on Oct. 9, quoting the above passage, writing:

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

I then wrote letters asking you to confirm that you were indeed a radical potentialist. You confirmed this. (For the documentation see the beginning of my letter on K.)

So, I wrote the letter on K, after which you said that you regretted having admitted to radical potentialism.

You didn’t endorse width-actualism until Nov. 3, in response to the story about K. And it is only now that we are starting to see the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.).

I am fully aware (and have acknowledged) that you have said that the HP program is compatible with “width-actualism + height potentialism”. The reason I have focused on “radical potentialism” and not “width-actualism + height potentialism” is two-fold. First, you explicitly said that this was your official view. Second, you gave us the principles associated with this view (Old-\textsf{IMH}^\#, etc.) and have only now started to give us the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.). I wanted to work with your official view and I wanted something definite to work with.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

I certainly agree that it is more likely that one will get an answer on PD than an answer on CH. Of course, I believe that we already have a convincing case for PD. But let me set that aside and focus on your program. And let me also set aside questions about the epistemic force behind the principles you are getting (as “suggested” or “intrinsically motivated”) on the basis of the  “‘maximal’ iterative conception of set” and focus on the mathematics behind the actual principles.

(1) You proposed Strong Unreachability (as “compellingly faithful to maximality”) and you have said quite clearly that V does not equal HOD (“Maximality implies that there are sets (even reals) which are not ordinal-definable” (Letter of August 21)). From these two principles Hugh showed (via a core model induction argument) that PD follows. [In fact, in place of the second one just needs the even more plausible “V does not equal K”.]

(2) Max (on Oct. 14) proposed the following:

In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model \text{HOD}_A of sets hereditarily-ordinal definable with the parameter A!

Hugh pointed out (on Oct. 15) that the latter violates ZFC. Still, there is a principle in the vicinity that Max could still endorse, namely,

(H) For all uncountable cardinals \kappa, \kappa^+ is not correctly computed by HOD.
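
Spelled out schematically (in the obvious notation), (H) is the assertion that

(\kappa^+)^{\text{HOD}} < \kappa^+ for every uncountable cardinal \kappa,

where, since \text{HOD} \subseteq V, “not correctly computed” automatically means “computed too small”.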

Hugh showed (again by a core model induction argument) that this implies PD.

So you already have different routes (based on principles “suggested” by the “‘maximal’ iterative conception of set”) leading to PD. So things are looking good!

(3) I expect that things will look even better. For the core model induction machinery is quite versatile. It has been used to show that lots of principles (like PFA, there is an \omega_1 dense ideal on \omega_1, etc.) imply PD. Indeed there is reason to believe (from inner model theory) that every sufficiently strong “natural” theory implies PD. (Of course, here both “sufficiently strong” and “natural” are necessary, the latter because strong statements like “Con(ZFC + there is a supercompact)” and “There is a countable transitive model of ZFC with a supercompact” clearly cannot imply PD.)

Given the “inevitability” of PD — in this sense: that time and again it is shown to follow from sufficiently strong “natural” theories — it is entirely reasonable to expect the same for the principles you generate (assuming they are sufficiently strong). It will follow (as it does in the more general context) out of the core model induction machinery. This has already happened twice in the setting of HP. I would expect there to be convergence on this front, as a special case of the more general convergence on PD.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3, you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Let us focus on a productive exchange concerning your current view of the program, as you now see it.

It would be helpful if you could:

(A) Confirm that the official view is indeed now “width-actualism + height potentialism”.

[If you say the official view is “radical potentialism” (and so are sticking with Old-\textsf{IMH}^\#, etc.) then [insert story of K.] If you say the official view is “width-actualism + height potentialism” then please give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.)]

(B) Give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.), what you know about their consistency, and a summary of what you can currently do with them. In short, it would be helpful if you could respond to Hugh’s last letter on this topic.

Thanks for continuing to help me understand your program.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

We are “back to square one”, not because a definite program made a definite prediction which if refuted would set it back to square one, but rather because the entire program — its background picture and the principles it has offered on the basis of that picture — has changed.

So we now have to start over and examine the new background picture (width-actualism + height potentialism) and the new principles being produced.

One conclusion one might draw from all of the changes (of a fundamental nature) and the lack of convergence is that the notion that is supposed to be guiding us through the tree of possibilities — the “‘maximal’ iterative conception of set” — is too vague to steer a program down the right path.

But that is not the conclusion you draw. Why? What, in light of the fact that so far this oracle — the “‘maximal’ iterative conception of set” — has led to dead ends, is the basis for your confidence that eventually, through backing up and taking another branch, it will lead us down a branch that will bear fruit, that it will lead to “optimal” criteria that will come to be accepted in a way that is firmer than the approaches based on good, old-fashioned evidence, especially evidence based on prediction and confirmation?

In any case, it is mathematically intriguing and will be of interest to see what can be done with the new principles.

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

To understand what Pen means by the phrase you quoted — “depth, fruitfulness, effectiveness, importance, and so on” — you have to look at the examples she gives. There is a long section in the book where she gives precisely the kind of evidence that John, Tony, Hugh, and I have cited. In this regard, there is no lack of continuity in her work. This part of her view has remained in place since “Believing the Axioms”. You can find it in every one of her books, including the latest, “Defending the Axioms”.

As I said, I do agree that P’s and V’s are of value, they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

It is interesting that you mention Forcing Axioms since it provides a nice contrast.

Forcing Axioms are also based on “maximality” but the sense of maximality at play is more specific and in this case the program has led to a lot of convergence. Moreover, it has implications for other areas of mathematics.

Todorcevic’s paper for the EFI Project has a nice account of this. Here are just a few examples:

Theorem (Baumgartner) Assume \mathfrak{mm}>\omega_1. Then all separable \aleph_1-dense linear orders are isomorphic.

Theorem (Farah) Assume \mathfrak{mm}>\omega_1 (or just OGA). Then all automorphisms of the Calkin algebra are inner.

Theorem (Moore) Assume \mathfrak{mm}>\omega_1. Then the class of uncountable linear orders has a five-element basis.

Theorem (Todorcevic) Assume \mathfrak{mm}>\omega_1. Then every directed set of cardinality at most \aleph_1 is Tukey-equivalent to one of 1, \omega, \omega_1, \omega\times\omega_1, or [\omega_1]^{<\omega}.

The picture under CH is dramatically different. And this difference between the two pictures has been used as part of the case for Forcing Axioms. (See, again, Todorcevic’s paper.) I am not endorsing that case. But it is a case that needs to be reckoned with.

An additional virtue of this program is that it does make predictions. It is a precise program that has made predictions (like the prediction that Moore confirmed). Moreover, the philosophical case for the program has been taken to largely turn on an open problem, namely, whether \textsf{MM}^{++} and (*) are compatible. If they are compatible, advocates of the program would see that as strengthening the case. If they are not compatible, then advocates of the program admit that it would be a problem.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

I hope you don’t think I have been maintaining that it can’t be done! I have just been trying to understand the program — the picture, the principles, etc. I have also been trying to understand the basis of your pessimism about the slow, painstaking approach through accumulation of evidence — an approach that has worked so well in the fifty years of research on \text{AD}^{L(\mathbb R)}. I have been trying to understand the basis of your pessimism about Type 1 (understood properly, in terms of evidence) and the basis of your optimism about Type 3, especially in light of the different track records so far. Is there reason to think that the Type 1 approach will not lead to a resolution of CH? (I am open-minded about that.) Is there reason to think that your approach to Type 3 will? I guess for the latter we will just have to wait and see.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy the existence of inaccessibles, Mahlos, \kappa_\omega, etc., and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem  is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude, they want to know what criteria and consensus comes out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples:
Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. They contain both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria goes, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinal existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter

Chiemsee_1 Chiemsee_2

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think I now have a much better grip on the picture you are working with. This letter is an attempt to sum things up — both mathematical and philosophical — and express my misgivings, in what I hope you will take to be a light-hearted and friendly manner.

Let me start with your radical potentialism. To set the stage let me recap.

In my letter on Oct. 26 I carefully laid out the varieties of potentialism and actualism and asked which version you held. You answered on Oct. 26 and I was pretty sure that I understood. But I wanted to be sure so I asked for confirmation. You confirmed that in your P.S. (of a letter on a different topic) on Oct. 26:

PS: In answer to an earlier question, I am indeed naturally inclined to think in terms of the stronger form of radical potentialism. Indeed I do think that, as with height actualism, there are arguments to suggest that the weaker form of radical potentialism without the stronger form is untenable.

Here “the stronger form of radical potentialism” was the one I explicitly asked for confirmation on. To be clear: You endorse the strong form of radical potentialism according to which for every transitive model of ZFC there is an actual extension (meaning an actual lengthening and/or thickening) in which that model is actually seen to be countable.

So, on this view, everything is ultimately (and actually) countable. Thus, on this view, we actually live in the hyperuniverse, the space of countable transitive models of ZFC. That’s all there is. That’s our world.

This is very close to Skolem’s view. He took it to entail that set theory had evaporated, which is why I used that phrase. But you do not. Why? Because one can still do set theory in this limited world. How?

This brings us back to my original questions about your “dual use of ‘V'”, at times for little-V’s (countable transitive models of ZFC) and at other times for “the real thing”, what I called SUPER-V (to disambiguate the notation). I had originally thought that your view was this: There is SUPER-V. Actualism holds with regard to SUPER-V. There are no actual lengthenings or thickenings of SUPER-V, only virtual ones. Everything takes place in SUPER-V. It is the guide to all our claims about “intrinsic justifications on the basis of the “maximal” iterative conception of set” (subsequently demoted to “intrinsically motivated (heuristically) on the basis of the ‘maximal’ iterative conception of set”). By appealing to the downward Löwenheim-Skolem theorem I thought you were arguing that without loss of generality we could gain insight into SUPER-V by investigating the space of countable transitive models of ZFC.

The virtue of that picture (which I erroneously thought you held) is that you would have something to hang your hat on — SUPER-V — something to cash out the intuitions that you claimed about the “‘maximal’ iterative conception of set”. The drawback was that it was hard to see (for me at least) how we could gain insight into SUPER-V (which had no actual lengthenings or thickenings) by investigating countable transitive models of ZFC (which do!).

But that is all neither here nor there, since that is not your view. Your view is far more radical. There is just the hyperuniverse, the space of countable transitive models of ZFC. There is no need for appeal to the Löwenheim-Skolem theorem, since everything is countable!


I now have a much better grip on the background philosophical picture. This is what I suspected all along; that is why I have been pressing you on these matters.

I want now to examine this world view — to take it seriously and elaborate its consequences. To do that I will follow your lead with Max and tell a story. The story is below, following the main body of this letter.

Best,
Peter


Let me introduce K. He has a history of getting into situations like this.

Let us enter the hyperuniverse…

K awakes. He looks around. He is surrounded by countable transitive models of ZFC. Nothing else.

How did I get here? Why are all these countable transitive models of ZFC kicking around? Why not just countable transitive models of \text{ZFC} - \text{Replacement} + \Sigma_2\text{-Replacement}? Why not anything else?

K takes a stroll in this strange universe, trying to get his bearings. All of the models he encounters are transitive models of ZFC. He encounters some that satisfy V = L, some that satisfy \textsf{PD}, some that satisfy \textsf{MM}^{++}, etc. But for every model he encounters he finds another model in which the previous model is witnessed to be countable.

He thinks: “I must be dreaming. I have fallen through layers of sleep into the world of countable transitive models of ZFC. The reason all of these countable transitive models of V = L, \textsf{PD}, \textsf{PFA}, etc. are kicking around is that these statements are \beta-consistent, something I know from my experience with the outer world. In the outer world, before my fall, I was not wedded to the idea that there was a SUPER-V — I was open minded about that. But I was confident that there was a genuine distinction between the countable and the uncountable. And now, through the fall, I have landed in the world of the countable transitive models of ZFC. The uncountable models are still out there — everything down here derives from what lies up there.”

At this point a voice is heard from the void…

S: No! You are not dreaming — you have not fallen. There is no outer world. This is all that there is. Everything is indeed countable.

K: What? Are you telling me set theory has evaporated?

S: No. Set theory is alive and well.

K: But all of the models around here are countable, as witnessed by other models around here. That violates Cantor’s theorem. So, set theory has evaporated.

S: No! Set theory is alive and well. You must look at set theory in the right way. You must redirect your vision. Attend not to the array of all that you see around you. Attend to what holds inside the various models. After all, they all satisfy ZFC — so Cantor’s Theorem holds. And you must further restrict your attention, not just to any old model but to the optimal ones.

K: Hold on a minute. I see that Cantor’s Theorem holds in each of the models around here. But it doesn’t really hold. After all, everything is countable!

S: No, no, you are confused.

[K closes his eyes...]

S: Hey! What are you doing?

K: I’m trying to wake up.

S: Wait! Just stay a while. Give it a chance. It’s a nice place. You’ll learn to love it. Let me make things easier. Let me introduce Max. He will guide you around.

[Max materializes.]

Max takes K on a tour, to all the great sites — the “optimal” countable transitive models of ZFC. He tries to give K a sense of how to locate these, so that one day he too might become a tour guide. Max tells K that the guide to locating these is “maximality”, with regard to both “thickenings” and “lengthenings”.

K: I see, so like forcing axioms (for “thickenings”) and the resemblance principles of Magidor and Bagaria (for “lengthenings”)?

Max: No, no, not that. The “optimal” models are “maximal” in a different sense. Let me try to convey this sense.

Let’s start with IMH. But bear in mind this is just a first approximation. It will turn out to have problems. The goal is to investigate the various principles that are suggested by “maximality” (as a kind of “intrinsic heuristic”) in the hope that we will achieve convergence and find the true principles of “maximality” that enable us to locate the “optimal” universes.

[Insert description of IMH. Let "CTM-Space" be the space of countable transitive models of ZFC. In short, let "CTM-Space" be the world that K has fallen into.]

K: I see why you said that there would be problems with IMH: If V\in \text{CTM-Space} satisfies IMH, then V contains a real x such that V satisfies “For every transitive model M of ZFC, x is not in M”; in particular, there is no rank initial segment of V that satisfies ZFC and so such a V cannot contain an inaccessible cardinal. In fact, every ordinal of such a V is definable from x. So such a V is “humiliated” in a dramatic fashion by a real x within it.

Max: I know. Like I said, it was just a first approximation. IMH is, as you observe, incompatible with inaccessible cardinals in a dramatic way. I was just trying to illustrate the sense of “width maximality” that we are trying to articulate. Now we have to simultaneously incorporate “height maximality”. We do this in terms of #-generation.

[Insert description of #-generation and \textsf{IMH}^\#]

K: I have a bunch of problems with this. First, I don’t see how you arrive at #-generation. You use the # to generate the model but then you ignore the #.

Second, there is a trivial consistency proof of \textsf{IMH}^\# but it shows even more, namely, this: Assume that for every real x, x^\# exists. Then there is a real x_0 such that for any V\in \text{CTM-Space} containing x_0, V satisfies Extreme-\textsf{IMH}^\# in the following sense: if \varphi holds in any #-generated model N (whether it is an outer extension of V or not) then \varphi holds in an inner model of V. So what is really going on has nothing to do with outer models — it is much more general than that. This gives not just the compatibility of \textsf{IMH}^\# with all standard large cardinals but also with all choiceless large cardinals.

[It should be clear at this point (and more so below) that despite these new and strange circumstances K is still able to access H.]

Third, there is a problem of articulation. The property of being a model V\in \text{CTM-Space} which satisfies \textsf{IMH}^\# is a \Pi_3 property over the entire space \text{CTM-Space}. When I was living in the outer world (where I could see \text{CTM-Space} as a set) I could articulate that property and thereby locate the V’s in \text{CTM-Space} that satisfy \textsf{IMH}^\#. But how can you (we) do that down here? If you really believe that the \Pi_3 property over the space \text{CTM-Space} is a legitimate property then you are granting that the domain \text{CTM-Space} is a determinate domain (to make sense of the determinateness of the alternating quantifiers in the \Pi_3 property). But if you believe that \text{CTM-Space} is a determinate domain then why can’t you just take the union of all the models in \text{CTM-Space} to form a set? Of course, that union will not satisfy ZFC. But my point is that by your lights it should make perfect sense, in which case you transcend this world. In short you can only locate the models V\in \text{CTM-Space} that satisfy \textsf{IMH}^\# by popping outside of \text{CTM-Space}!

Max: Man, are you ever cranky…

K: I’m just a little lost and feeling homesick. What about you? How did you get here? I have the sense that you got here the same way I did and that (in your talk of \textsf{IMH}^\#) you still have one foot in the outer world.

Max: No, I was born here. Bear with me. Like I said, we are just getting started. Let’s move on to \textsf{SIMH}^\#.

K: Wait a second. Before we do that can you tell me something?

Max: Sure.

K: We are standing in  \text{CTM-Space}, right?

Max: Right.

K: And from this standpoint we are proving things about various principles that hold in various V‘s in \text{CTM-Space}, right?

Max: Right.

K: What theory are we allowed to use in proving these results about things in \text{CTM-Space}? We are standing here, in \text{CTM-Space}. Our quantifiers range over \text{CTM-Space}. Surely we should be using a theory of \text{CTM-Space}. But we have been using ZFC and that doesn’t hold in \text{CTM-Space}. Of course, it holds in every V in \text{CTM-Space} but it does not hold in \text{CTM-Space} itself. We should be using a theory that holds in \text{CTM-Space} if we are to prove things about the objects in \text{CTM-Space}. What is that theory?

Max: Hmm … I see the point … Actually! … Maybe we are really in one of the V‘s in \text{CTM-Space}! This \text{CTM-Space} is itself in one of the V‘s of a larger \text{CTM-Space}

K (to himself): This is getting trippy.

Max: … Yes, that way we can invoke ZFC.

K: But it doesn’t make sense. Everything around us is countable and that isn’t true of any V in any \text{CTM-Space}.

Max: Good point. Well maybe we are in the \text{CTM-Space} of a V that is itself in the \text{CTM-Space} of a larger V.

K: But that doesn’t work either. For then we can’t help ourselves to ZFC. Sure, it holds in the V in whose \text{CTM-Space} we are locked. But it doesn’t hold here! You seem to want to have it both ways — you want to help yourself to ZFC while living in a \text{CTM-Space}.

Max: Let me get back to you on that one … Can we move on to \textsf{SIMH}^\#?

K: Sure.

[Insert a description of the two versions of \textsf{SIMH}^\#. Let us call these Strong-\textsf{SIMH}^\# and Weak-\textsf{SIMH}^\#. Strong-\textsf{SIMH}^\# is based on the unification of \textsf{SIMH} (as formulated in the 2006 BSL paper) but where one restricts to #-generated models. (See Hugh's letter of 10/13/14 for details.) Weak-\textsf{SIMH}^\# is the version where one restricts to cardinal-preserving outer models.]

K: Well, Strong-\textsf{SIMH}^\# is not known to be consistent. It does imply \textsf{IMH}^\# (so the `S’ makes sense). But it also strongly denies large cardinals. In fact, it implies that there is a real x such that \omega_1^{L[x]}=\omega_1 and hence that x^\# doesn’t exist! So that’s no good.

Weak-\textsf{SIMH}^\# is not known to be consistent, either. Moreover, it is not known to imply \textsf{IMH}^\# (so why the “S”?). It is true that it implies not-CH (trivially). But we cannot do anything with it since very little is known about building cardinal-preserving outer models over an arbitrary initial model.

Max: Like I said, we are just getting started.

[Max goes on to describe various principles concerning HOD that are supposed to follow from "maximality". K proves to be equally "difficult".]

K: Ok, let’s back up. What is our guide? I’ve lost my compass. I don’t have a grip on this sense of “maximality” that you are trying to convey to me. If you want to teach me how to be a tour guide and locate the “optimal” models I need something to guide me.

Max: You do have something to guide you, namely, the “‘maximal’ iterative conception of set”!

K: Well, I certainly understand the “iterative conception of set” but when we fell into \text{CTM-Space} we gave up on that. After all, every model here is witnessed to be countable in another model. Everything is countable! That flies in the face of the “iterative conception of set”, a conception that was supposed to give us ZFC, which doesn’t hold here in \text{CTM-Space}.

Max: No, no. You are looking at things the wrong way. You are fixated on \text{CTM-Space}. You have to direct your attention to the models within it. You have to think about things differently. You see, in this new way of looking at things to say that a statement \varphi is true (in this new sense) is not to say that it holds in some V in \text{CTM-Space}; and it is not to say that it holds in all V‘s in \text{CTM-Space}; rather it is to say that it holds in all of the “optimal” V‘s in \text{CTM-Space}. This is our new conception of truth: We declare \varphi to be true if and only if it holds in all of the “optimal” V’s in \text{CTM-Space}. For example, if we want to determine whether CH is true (in this new sense) we have to determine whether it holds in all of the “optimal” V‘s in \text{CTM-Space}. If it holds in all of them, it is true (in this new sense), if it fails in all of them it is false (in this new sense), and if it holds in some but not in others then it is neither true nor false (in this new sense). Got it?

K: Yeah, I got it. But you are introducing deviant notions. This is no longer about the “iterative conception of set” in the straightforward sense and it is no longer about truth in the straightforward sense. But let me go along with it, employing these deviant notions and explaining why I think that they are problematic.

It was disconcerting enough falling into \text{CTM-Space}. Now you are asking me to fall once again, into the “optimal” models in \text{CTM-Space}. You are asking me to, as it were, “thread my way” through the “optimal” models, look at what holds across them, and embrace those statements as true (in this new, deviant sense).

I have two problems with this:

First, this whole investigation of principles — like \textsf{IMH}, \textsf{IMH}^\#, Strong-\textsf{SIMH}^\#, Weak-\textsf{SIMH}^\#, etc. — has taken place in \text{CTM-Space}. (We don’t have ZFC here but we are setting that aside. You are going to get back to me on that.) The trouble is that you are asking me to simultaneously view things from inside the “optimal” models (to “thread my way through the ‘optimal’ models”) via principles that make reference to what lies outside of those models (things like actual outer extensions). In order for me to make sense of those principles I have to occupy this external standpoint, standing squarely in \text{CTM-Space}. But if I do that then I can see that none of these “optimal” models are the genuine article. It is fine for you to introduce this deviant notion of truth — truth (in this sense) being what holds across the “optimal” models. But to make sense of it I have to stand right here, in \text{CTM-Space}, and access truth in the straightforward sense — truth in \text{CTM-Space}. And those truths (the ones required to make sense of your principles) undermine the “optimal” models since, e.g., those truths reveal that the “optimal” models are countable!

But let us set that aside. (I have set many things aside already. So why stop here.) There is a second problem. Even if I were to embrace this new conception of truth — as what holds across the “optimal” models in \text{CTM-Space} — I am not sure what it is that I would be embracing. For this new conception of truth makes reference to the notion of an “optimal” model in \text{CTM-Space} and that notion is totally vague. It follows that this new notion of truth is totally vague.

You have referred to a specific sense of “maximality” but I don’t have clear intuitions about the notion you have in mind. And the track record of the principles that you claimed to generate from this notion is, well, pretty bad, and doesn’t encourage me in thinking that there is indeed a clear underlying conception.

Tell me Max: How do you do it? How do you get around? What is your compass? How are you able to locate the “optimal” models? How are you able to get a grip on this specific notion of “maximality”?

Max: That’s easy. I just ask S!

[With those words, K awoke. No one knows what became of Max.]

THE END

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I owe you a response to your other letters (things have been busy) but your letter below presents an opportunity to make some points now.

On Oct 31, 2014, at 12:20 PM, Sy David Friedman wrote:

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH, and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I don’t buy this. Let’s go back to IMH. It violates inaccessibles (in a dramatic fashion). One way to repair it would have been to simply restrict to models that have inaccessibles. That would have been pretty ad hoc. It is not what you did. What you did is even more ad hoc. You restricted to models that are #-generated. So let’s look at that.

We take the presentation of #’s in terms of \omega_1-iterable countable models of the form (M,U). We iterate the measure out to the height of the universe. Then we throw away the # (“kicking away the ladder once we have climbed it”) and imagine we are locked in the universe it generated. We restrict IMH to such universes. This gives \textsf{IMH}^\#.
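
In rough schematic form (suppressing the precise requirements on the pair (M,U)): letting (M_i,U_i), i \in \text{Ord}, be the iteration of (M,U) with critical points \kappa_i, the generated universe is

V = \bigcup_{i \in \text{Ord}} (V_{\kappa_i})^{M_i},

and \textsf{IMH}^\# is IMH relativised to universes of this form.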

It is hardly surprising that the universes contain everything below the # (e.g. below 0^\# in the case of a countable transitive model of V=L) used to generate it and, given the trivial consistency proof of \textsf{IMH}^\#, it is hardly surprising that it is compatible with all large cardinal axioms (even choiceless large cardinal axioms). My point is that the maneuver is even more ad hoc than the maneuver of simply restricting to models with inaccessibles. [I realized that you try to give an "internal" account of all of this, motivating what one gets from the # without grabbing on to it. We could get into it. I will say now: I don't buy it.]

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

We do have “maximality” arguments that give supercompacts and extendibles, namely, the arguments put forth by Magidor and Bagaria. To be clear: I don’t think that such arguments provide us with much in the way of justification. On that we agree. But in my case the reason is that I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification. With such a vague notion “anything goes”. The point here, however, is that you would have to argue that the “maximality” arguments you give concerning HOD (or whatever), which may violate large cardinal axioms, are more compelling than these other “maximality” arguments for large cardinals. I am dubious of the whole enterprise — either for or against — of basing a case on “maximality”. It is a pitting of one set of vague intuitions against another. The real case, in my view, comes from another direction entirely.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I am not entirely sure what Harvey has in mind but here is something that he could have in mind.

In your letters to Pen and Hugh (on Oct. 27) you pointed to principles (after fixing an oversight) based on “maximality” that you claimed implied that there are no supercompacts. Hugh pointed out the problem in that claim. But, supposing it were true, why would you choose the “no supercompacts” route and not simply throw out the principles you derived from “maximality”? After all, “maximality” is vague and has not been a very reliable guide so far, given how many shifts there have been.

For the comparison that Harvey was drawing we have to look to another thing that you proposed followed from “maximality”, namely, the following (from your letter on Max, on Oct. 14):

The set-theorists tell him that maybe his mistake is to start talking about preserving cardinals before maximising the notion of cardinal itself. In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model \text{HOD}_A of sets hereditarily-ordinal definable with the parameter A! [Sy's Maximality Protocol, Part 2]

Hugh pointed out (on Oct. 15) the following:

Suppose that \kappa is a singular strong limit cardinal of uncountable cofinality. Then there exists A \subset \kappa such that \kappa^+ is correctly computed by \text{HOD}_A.

This is by a theorem of Shelah.
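
To put the clash in display form (my paraphrase of the two statements above): the proposed principle asserts

\[
\text{for every cardinal } \kappa \text{ and every } A \subseteq \kappa, \quad (\kappa^+)^{\text{HOD}_A} < \kappa^+,
\]

while Shelah’s theorem provides, for every singular strong limit cardinal \kappa of uncountable cofinality, a set A \subseteq \kappa with

\[
(\kappa^+)^{\text{HOD}_A} = \kappa^+.
\]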

So you were led via “maximality” to a principle that contradicts ZFC (and is consistent with \text{ZFC} - \text{Replacement} + \Sigma_1\text{-Replacement}).

So I think Harvey’s answer to your “Do you have a maximality principle in this sense which contradicts ZFC? I would be very interested in hearing about that!” should be: “No. But you do!”

I take it that Harvey was pointing to the parallel between these two cases, on the assumption that you could actually get no supercompacts from “maximality”. (It is a bit ironic: The math worked out in the case where you got a violation of ZFC but not (yet) in the case where you got a violation of supercompacts.)

[I realize these are just "intrinsic heuristics" that are very much open to revision and that ZFC is in good order. I am just articulating the parallel that Harvey was drawing.]

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

It is a virtue of a program if it generates predictions which are subsequently verified. To the extent that these predictions are verified one obtains extrinsic evidence for the program. To the extent that these predictions are refuted one obtains extrinsic evidence for the problematic nature of the program. It need not be a prediction which would “seal the deal” in the one case and “set it back to square one” in the other (two rather extreme cases). But there should be predictions which would lend support in the one case and take away support in the other.

The programs for new axioms that I am familiar with have had this feature. Here are some examples:

(1) Definable Determinacy.

The descriptive set theorists made many predictions that were subsequently verified and taken as support for axioms of definable determinacy. To mention just a few. There was the prediction that \text{AD}^{L(\mathbb R)} would lift the structure theory of Borel sets of reals (provable in ZFC) to sets of reals in L(\mathbb R). This checked out. There was the prediction that \text{AD}^{L(\mathbb R)} followed from large cardinals. This checked out. The story here is long and impressive and I think that it provides us with a model of a strong case for new axioms. For the details of this story — which is, in my view, a case of prediction and verification and, more generally, a case that parallels what happens when one makes a case in physics — see the Stanford Encyclopedia of Philosophy entry “Large Cardinals and Determinacy”, Tony Martin’s paper “Evidence in Mathematics”, and Pen’s many writings on the topic.

(2) Forcing Axioms

These axioms are based on ideas of “maximality” in a rather special sense. The forcing axioms ranging from \textsf{MA} to \textsf{MM}^{++} are a generalization along one dimension (generalizations of the Baire Category Theorem, as nicely spelled out in Todorcevic’s recent book “Notes on Forcing Axioms”) and the axiom (*) is a generalization along a closely related dimension. As in the case of Definable Determinacy there has been a pretty clear program and a great deal of verification and convergence. And, at the current stage advocates of forcing axioms are able to point to a conjecture which if proved would support their view and if refuted would raise a serious problem (though not necessarily setting it back to square one), namely, the conjecture that \textsf{MM}^{++} and (*) are compatible. That I take to be a virtue of the program. There are test cases. (See Magidor’s contribution to the EFI Project for more on this aspect of the program.)

(3) Ultimate L

Here we have lots of predictions which if proved would support the program and there are propositions which if proved would raise problems for the program. The most notable one is the “Ultimate L Conjecture”. But there are many other things. E.g., that conjecture implies that V = HOD. So, if the ideas of your recent letter work out, and your conjecture (combined with results of “Suitable Extender Models, I”) proves the HOD Conjecture, then this will lend some support to “V = Ultimate L”, in that “V = Ultimate L” predicts a proposition that was subsequently verified in ZFC.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing), now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions: one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

I think it would be nice to revisit all of these topics. Let me say two things about the axiom “V = Ultimate L” and your request that it be presented in “generally understandable terms”.

(1) The development of inner model theory has involved a long march up the large cardinal hierarchy and has generally had the feature that when you build an inner model for one key level of the large cardinal hierarchy — say measurable, strong, or Woodin — you have to start over when you target the next level, building on the old inner model theory while adding a new layer of complexity (from measures to extenders, from linear iterations to non-linear iterations) — because the inner models for one level are not able to accommodate the large cardinals at the next (much as L cannot accommodate a measurable).

Moreover, the definitions of the inner models — especially in their fine-structural variety — are very involved. One essentially has to develop the theory in tandem with the definition. It looked like it would be a long march up the large cardinal hierarchy, with inner models and associated axioms of the form “V = M” of increasing complexity.

One of the main recent surprises is that things change at the level of a supercompact cardinal: If you can develop the inner model theory for a supercompact cardinal then there is a kind of “overflow” — it “goes all the way” — and the model can accommodate much stronger large cardinals. Another surprise is that one can actually write down the axiom — “V = Ultimate L” — for the conjectured inner model in a very crisp and concise fashion.

(2) You will, however, find that the axiom “V = Ultimate L” may not meet your requirement of being explainable in “generally understandable terms”. It is certainly easy to write down. It is just three short lines. But it involves some notions from modern set theory — like the notion of a universally Baire set of reals and the notion of \Theta. These notions are not very advanced but may not meet your demand of being “generally understandable”. Moreover, to appreciate the motivation for the axiom one must have some further background knowledge — for example, one has to have some knowledge of the presentation of HOD, in restricted contexts like L(\mathbb R), as a fine-structural inner model (a “strategic inner model”). Again, I think that one can give a high-level description of this background, but to really appreciate the axiom and its motivation one has to have some knowledge of these parts of inner model theory.
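
For the record, here is the formulation I have in mind (I am stating it from memory, so regard it as a sketch rather than the official wording). “V = Ultimate L” asserts: (i) there is a proper class of Woodin cardinals; and (ii) for each \Sigma_2-sentence \varphi, if \varphi holds in V, then there is a universally Baire set A \subseteq \mathbb{R} such that

\[
\mathrm{HOD}^{L(A,\mathbb{R})} \cap V_{\Theta^{L(A,\mathbb{R})}} \models \varphi,
\]

where \Theta^{L(A,\mathbb{R})} denotes the \Theta of L(A,\mathbb{R}).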

I don’t see any of this as a shortcoming. I see it as the likely (and perhaps inevitable) outcome of what happens when a subject advances. For comparison: Newton could write down his gravitational equation in “generally understandable terms” but Einstein could not meet this demand for his equations. To understand the Einstein Field Equations one must understand the notions of a curvature tensor, a metric tensor, and a stress-energy tensor. There’s no way around that. And I don’t see it as a drawback. It is always good to revisit a subject, to clean it up, to make it more accessible, to strive to present it in as generally understandable terms as possible. But there are limits to how much that can be done, as I think the case of the Einstein Field Equations (now with us for almost 100 years) illustrates.
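
For concreteness, the comparison is between Newton’s law of gravitation,

\[
F = \frac{G\, m_1 m_2}{r^2},
\]

which can be stated using only elementary notions, and the Einstein Field Equations (in their standard form),

\[
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\]

whose very statement presupposes the metric tensor g_{\mu\nu}, the curvature terms R_{\mu\nu} and R derived from it, and the stress-energy tensor T_{\mu\nu}.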

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

The reason I didn’t quote that paragraph is that I had no comment on it. But now, upon re-reading it, I do have a comment. Here’s the paragraph:

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt, the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

Here’s the comment: This is a splendid endorsement of Hugh’s work on Ultimate L. Let us hope that we don’t have to wait until the middle of the 22nd century.

We appear to disagree on whether \text{AD}^{L(\mathbb R)} is “parasitic” on AD in the way that the statement “I am this [insert Woodin’s forcing] class-sized forcing extension of an inner model of L” is parasitic on L, where L is a choiceless large cardinal axiom. At least, I think we disagree. It is hard to tell, since you did not engage with those comments (which addressed the whole point at issue).

But we do agree on the interest of the analogy between determinacy and choiceless large cardinal axioms. So let me elaborate on that analogy and raise an interesting question that it suggests.

To begin with, it is of interest to note that the first “proof” of PD was from choiceless large cardinal axioms.

Theorem (Woodin, 1979). Assume ZF. For each n<\omega, if there is an n-fold strong rank-to-rank embedding sequence then (boldface) \Pi^1_{n+2}-determinacy holds. So PD follows from the assumption that for each n, there is an n-fold strong rank-to-rank embedding sequence. (For the definitions see the Stanford Encyclopedia of Philosophy entry (online) “Large Cardinals and Determinacy”.) The interesting thing about these large cardinal hypotheses is that for n>0 they are inconsistent with AC (by Kunen).

Fortunately (in December of 1983) Woodin both strengthened the conclusion and reduced the large cardinal hypothesis to one that is not known to be inconsistent with AC:

Theorem (Woodin). Assume ZFC + \textsf{I}0. Then AD^{L(\mathbb R)} holds.

In fact, \textsf{I}0 — the statement that there is a non-trivial elementary embedding j: L(V_{\lambda+1})\to L(V_{\lambda+1}), with critical point less than \lambda — was introduced on the basis of the analogy with determinacy. Indeed there is a strong parallel between the structure theory of L(\mathbb R) under \text{AD}^{L(\mathbb R)} and the structure theory of L(V_{\lambda+1}) under \textsf{I}0. E.g. under \text{AD}^{L(\mathbb R)}, \omega_1 is measurable in L(\mathbb R), and under \textsf{I}0, \lambda^+ (the analogue of \omega_1) is measurable in L(V_{\lambda+1}).
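
In rough dictionary form, the analogy runs:

\[
\begin{aligned}
\mathbb{R} \;&\longleftrightarrow\; V_{\lambda+1},\\
L(\mathbb R) \;&\longleftrightarrow\; L(V_{\lambda+1}),\\
\omega_1 \;&\longleftrightarrow\; \lambda^+,\\
\text{AD}^{L(\mathbb R)} \;&\longleftrightarrow\; \textsf{I}0,
\end{aligned}
\]

with results about the left-hand column under \text{AD}^{L(\mathbb R)} mirrored by results about the right-hand column under \textsf{I}0, the measurability facts just mentioned being one instance.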

Moreover, there are much stronger large cardinal axioms (in the context of ZFC) and these too are based on the analogy with definable determinacy. (See “Suitable Extender Models, II”, Woodin).

Let us push the analogy.

Shortly after AD was introduced, L(\mathbb R) was seen as the natural inner model. And Solovay conjectured that \text{AD}^{L(\mathbb R)} follows from large cardinal axioms, in particular from the existence of a supercompact.

This leads to a fascinating challenge, given the analogy: Fix a choiceless large cardinal axiom C (Reinhardt, Super Reinhardt, Berkeley, etc.). Can you think of a large cardinal axiom L (in the context of ZFC) and an inner model M such that you would conjecture (in parallel with Solovay) that L implies that C holds in M?
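
Schematically (my rendering of the challenge just posed, using the letters of the preceding paragraph): Solovay’s conjecture had the form

\[
\text{ZFC} + \text{“there is a supercompact cardinal”} \;\Longrightarrow\; \text{“AD holds in } L(\mathbb R)\text{”},
\]

and the challenge is to find L and M for which one would conjecture, in parallel,

\[
\text{ZFC} + L \;\Longrightarrow\; \text{“}C\text{ holds in } M\text{”}.
\]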

People (e.g. Martin) thought that AD was inconsistent, and they tried to prove it. Solovay thought otherwise and had a clear conjecture. People (e.g. us) think (or rather hope!) that Reinhardt cardinals are inconsistent, and have tried to prove it. The analogy suggests that we are mistaken and that there will be another Solovay-like conjecture…

I would love to see such a conjecture!

Best,
Peter