Re: Paper and slides on indefiniteness of CH

Dear Sy,

We are “back to square one”, not because a definite program made a definite prediction which if refuted would set it back to square one, but rather because the entire program — its background picture and the principles it has offered on the basis of that picture — has changed.

So we now have to start over and examine the new background picture (width-actualism + height potentialism) and the new principles being produced.

One conclusion one might draw from all of the changes (of a fundamental nature) and the lack of convergence is that the notion that is supposed to be guiding us through the tree of possibilities — the “‘maximal’ iterative conception of set” — is too vague to steer a program down the right path.

But that is not the conclusion you draw. Why? What, in light of the fact that so far this oracle — the “‘maximal’ iterative conception of set” — has led to dead ends, is the basis for your confidence that eventually, through backing up and taking another branch, it will lead us down a branch that will bear fruit, that it will lead to ‘optimal’ criteria that will come to be accepted in a way that is firmer than the approaches based on good, old-fashioned evidence, especially evidence based on prediction and confirmation?

In any case, it is mathematically intriguing and will be of interest to see what can be done with the new principles.

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

To understand what Pen means by the phrase you quoted — “depth, fruitfulness, effectiveness, importance, and so on” — you have to look at the examples she gives. There is a long section in the book where she gives precisely the kind of evidence that John, Tony, Hugh, and I have cited. In this regard, there is no lack of continuity in her work. This part of her view has remained in place since “Believing the Axioms”. You can find it in every one of her books, including the latest, “Defending the Axioms”.

As I said, I do agree that P’s and V’s are of value; they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

It is interesting that you mention Forcing Axioms since it provides a nice contrast.

Forcing Axioms are also based on “maximality”, but the sense of maximality at play is more specific, and in this case the program has led to a lot of convergence. Moreover, it has implications for other areas of mathematics.

Todorcevic’s paper for the EFI Project has a nice account of this. Here are just a few examples:

Theorem (Baumgartner) Assume \mathfrak{mm}>\omega_1. Then all separable \aleph_1-dense linear orders are isomorphic.

Theorem (Farah) Assume \mathfrak{mm}>\omega_1 (or just OGA). Then all automorphisms of the Calkin algebra are inner.

Theorem (Moore) Assume \mathfrak{mm}>\omega_1. Then the class of uncountable linear orders has a five-element basis.

Theorem (Todorcevic) Assume \mathfrak{mm}>\omega_1. Then every directed set of cardinality at most \aleph_1 is Tukey-equivalent to one of 1, \omega, \omega_1, \omega\times\omega_1, or [\omega_1]^{<\omega}.
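(A remark on notation, for readers who do not have Todorcevic’s paper at hand: \mathfrak{mm} is his notation for the least cardinal \kappa such that the forcing axiom for stationary-set-preserving posets fails for some family of \kappa dense sets; so, if I recall his conventions correctly, the hypothesis \mathfrak{mm}>\omega_1 in each of the theorems above is simply a restatement of Martin’s Maximum.)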

The picture under CH is dramatically different. And this difference between the two pictures has been used as part of the case for Forcing Axioms. (See, again, Todorcevic’s paper.) I am not endorsing that case. But it is a case that needs to be reckoned with.

An additional virtue of this program is that it does make predictions. It is a precise program that has made predictions (like the prediction that Moore confirmed). Moreover, the philosophical case for the program has been taken to turn largely on an open problem, namely, whether \textsf{MM}^{++} and (*) are compatible. If they are compatible, advocates of the program would see that as strengthening the case. If they are not compatible, then advocates of the program admit that it would be a problem.
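(For reference, (*) here is Woodin’s axiom, which, if I am stating it correctly from memory, asserts that AD holds in L(\mathbb R) and that L(\mathcal P(\omega_1)) is a \mathbb P_{max}-generic extension of L(\mathbb R).)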

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

I hope you don’t think I have been maintaining that it can’t be done! I have just been trying to understand the program — the picture, the principles, etc. I have also been trying to understand the basis of your pessimism about the slow, painstaking approach through accumulation of evidence — an approach that has worked so well in the fifty years of research on \text{AD}^{L(\mathbb R)}. I have been trying to understand the basis of your pessimism about Type 1 (understood properly, in terms of evidence) and the basis of your optimism about Type 3, especially in light of the different track records so far. Is there reason to think that the Type 1 approach will not lead to a resolution of CH? (I am open-minded about that.) Is there reason to think that your approach to Type 3 will? I guess for the latter we will just have to wait and see.

Best,
Peter
