Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your letter.

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelavian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second-order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder”, as you call it, as we can’t possibly capture all small large cardinals without it!

I would like to repeat my request: Could you please give us an account of #-generation, explain how it arises from “length maximality”, and make a convincing case that it captures all and only the large cardinals that we can expect to follow from “length maximality” (in particular, the Erdős cardinal \kappa(\omega))?

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand them. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals; he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L, which is necessary for this theory, despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done, and won’t they continue to do, just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is about just which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (\omega_1, \omega_2, large, respectively).

You have not understood what I (or Pen, or Tony, or Charles, or anyone else who has discussed this matter in the literature) mean by “prediction and confirmation”. To understand what we mean you have to read the things we wrote; for example, the slides I sent you in response to precisely this question.

You cite cases of the form: “X was working with theory T. X conjectured P. The conjecture turned out to be true. Ergo: T!”

That is clearly not how “prediction and confirmation” works in making a case for new axioms. Why? Take T to be an arbitrary theory, say (to be specific) “\textsf{I}\Delta_0 + Exp is not total”. X conjectures that P follows from T. It turns out that X was right. Does that provide evidence for “Exp is not total”?

Certainly not.
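To be fully explicit: “Exp is total” is the \Pi_2 statement that exponentiation is everywhere defined,

\forall x \, \exists y \; (y = 2^x)

(with y = 2^x expressed by the usual \Delta_0 formula for the graph of exponentiation). So the T above is \textsf{I}\Delta_0 plus the negation of this statement, a theory that, on independent grounds, no one takes to be true of the natural numbers.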

This should be evident by looking at the case of “prediction and confirmation” in the physical sciences. Clearly not every verified prediction made on the basis of a theory T provides epistemic support for T. There are multiple (obvious) reasons for this, which I won’t rehearse. But let me mention two that are relevant to the present discussion. First, the theory T could have limited scope: it could pertain to what is thought (for other reasons) to be a fragment of the physical universe; e.g. the verified predictions of macroscopic mechanics do not provide epistemic support for conclusions about how subatomic particles behave. Cf. your V = L example. Second, the predictions must bear on the theory in a way that distinguishes it from other, competing theories.

Fine. But falling short of that ideal, one would at least like to see a prediction which, if true, would (according to you) lend credence to your program and, if false, would (according to you) take credence away from your program, however slight the change in credence might be. But you appear to have also renounced these weaker rational constraints.

Fine. The Hyperuniverse Program is a different sort of thing. It isn’t like (an analogue of) astronomy. And you certainly don’t want it to be like (an analogue of) astrology. So there must be some rational constraints. What are they?

Apparently, the fact that a program suggests principles that continue to falter is not a rational constraint. What then are the rational constraints? Is the idea that we are just not there yet but that at the end of inquiry, when the dust settles, we will have convergence and we will arrive at “optimal” principles, and that at that stage there will be a rationally convincing case for the new axioms? (If so, then we will just have to wait and see whether you can deliver on this promise.)

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing and that, as a result, it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There are two reasons I keep giving a summary of the changes, of how we got to where we are now. First, this thread is quite intricate and it’s useful to give the reader a summary of the state of play. Second, in assessing the prospects and tenability of a program it is useful to keep track of its history, especially when that program is not in the business of making predictions.

There have been exactly two changes to the HP-procedure: one on August 21, when, after talking to Pen (and you), I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”; and the other on September 24, when, after talking to Geoffrey (and Pen), I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism!

This is not correct. (I wish I didn’t have to document this).

I never attributed height actualism to you. (I hope that was a typo on your part.) I wrote (in the private letter of Oct. 6, which you quoted and responded to in public):

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

I never attributed height actualism to you. I only very tentatively said that it appeared you had switched to width actualism, and I said that I didn’t believe that this was your official view.

That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

This is not correct. (Again, I wish I didn’t have to document this.)

You responded to my letter (in public) on Oct. 9, quoting the above passage, writing:

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

I then wrote letters asking you to confirm that you were indeed a radical potentialist. You confirmed this. (For the documentation see the beginning of my letter on K.)

So, I wrote the letter on K, after which you said that you regretted having admitted to radical potentialism.

You didn’t endorse width actualism until Nov. 3, in response to the story about K. And it is only now that we are starting to see the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.).

I am fully aware (and have acknowledged) that you have said that the HP program is compatible with “width-actualism + height potentialism”. The reason I have focused on “radical potentialism” and not “width-actualism + height potentialism” is two-fold. First, you explicitly said that this was your official view. Second, you gave us the principles associated with this view (Old-\textsf{IMH}^\#, etc.) and have only now started to give us the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.). I wanted to work with your official view and I wanted something definite to work with.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

I certainly agree that it is more likely that one will get an answer on PD than an answer on CH. Of course, I believe that we already have a convincing case for PD. But let me set that aside and focus on your program. And let me also set aside questions about the epistemic force behind the principles you are getting (as “suggested” or “intrinsically motivated”) on the basis of the “‘maximal’ iterative conception of set” and focus on the mathematics behind the actual principles.

(1) You proposed Strong Unreachability (as “compellingly faithful to maximality”) and you have said quite clearly that V does not equal HOD (“Maximality implies that there are sets (even reals) which are not ordinal-definable” (Letter of August 21)). From these two principles Hugh showed (via a core model induction argument) that PD follows. [In fact, in place of the second, one just needs the (even more plausible) “V does not equal K”.]

(2) Max (on Oct. 14) proposed the following:

In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model \text{HOD}_A of sets hereditarily-ordinal definable with the parameter A!
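In symbols (my rendering; for any inner model M we automatically have (\kappa^+)^M \leq \kappa^+, so “not equal” amounts to “strictly less”), the proposal is that for all infinite cardinals \kappa and all A \subseteq \kappa,

(\kappa^+)^{L[A]} < \kappa^+,

with the case \kappa = \omega, A = x a real, giving \omega_1^{L[x]} < \omega_1; the stronger form replaces L[A] by \text{HOD}_A.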

Hugh pointed out (on Oct. 15) that the latter violates ZFC. Still, there is a principle in the vicinity that Max could still endorse, namely,

(H) For all uncountable cardinals \kappa, \kappa^+ is not correctly computed by HOD.
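Since \text{HOD} \subseteq V we always have (\kappa^+)^{\text{HOD}} \leq \kappa^+, so in the notation above (H) reads:

\forall \kappa \geq \aleph_1 \; \big( (\kappa^+)^{\text{HOD}} < \kappa^+ \big).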

Hugh showed (again by a core model induction argument) that this implies PD.

So you already have different routes (based on principles “suggested” by the “‘maximal’ iterative conception of set”) leading to PD. So things are looking good!

(3) I expect that things will look even better. For the core model induction machinery is quite versatile. It has been used to show that lots of principles (like PFA, “there is an \omega_1-dense ideal on \omega_1”, etc.) imply PD. Indeed there is reason to believe (from inner model theory) that every sufficiently strong “natural” theory implies PD. (Of course, here both “sufficiently strong” and “natural” are necessary, the latter because strong statements like “Con(ZFC + there is a supercompact)” and “There is a countable transitive model of ZFC with a supercompact” clearly cannot imply PD.)

Given the “inevitability” of PD, in the sense that time and again it is shown to follow from sufficiently strong “natural” theories, it is entirely reasonable to expect the same for the principles you generate (assuming they are sufficiently strong). It will follow (as it does in the more general context) from the core model induction machinery. This has already happened twice in the setting of the HP. I would expect there to be convergence on this front, as a special case of the more general convergence on PD.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3 you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width-actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Let us focus on a productive exchange about your current view, the program as you now see it.

It would be helpful if you could:

(A) Confirm that the official view is indeed now “width-actualism + height potentialism”.

[If you say the official view is “radical potentialism” (and so are sticking with Old-\textsf{IMH}^\#, etc.) then [insert story of K.] If you say the official view is “width-actualism + height potentialism” then please give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.)]

(B) Give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.), what you know about their consistency, and a summary of what you can currently do with them. In short, it would be helpful if you could respond to Hugh’s last letter on this topic.

Thanks for continuing to help me understand your program.

Best,
Peter
