Re: Paper and slides on indefiniteness of CH

Dear Sy,

We are “back to square one”, not because a definite program made a definite prediction which if refuted would set it back to square one, but rather because the entire program — its background picture and the principles it has offered on the basis of that picture — has changed.

So we now have to start over and examine the new background picture (width-actualism + height potentialism) and the new principles being produced.

One conclusion one might draw from all of the changes (of a fundamental nature) and the lack of convergence is that the notion that is supposed to be guiding us through the tree of possibilities — the “‘maximal’ iterative conception of set” — is too vague to steer a program down the right path.

But that is not the conclusion you draw. Why? Given that this oracle — the “‘maximal’ iterative conception of set” — has so far led to dead ends, what is the basis for your confidence that eventually, through backing up and taking another branch, it will lead us down a branch that will bear fruit, to `optimal’ criteria that will come to be accepted in a way that is firmer than the approaches based on good, old-fashioned evidence, especially evidence based on prediction and confirmation?

In any case, it is mathematically intriguing and will be of interest to see what can be done with the new principles.

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there. Why back off now? I want my former Pen back!

To understand what Pen means by the phrase you quoted — “depth, fruitfulness, effectiveness, importance, and so on” — you have to look at the examples she gives. There is a long section in the book where she gives precisely the kind of evidence that John, Tony, Hugh, and I have cited. In this regard, there is no lack of continuity in her work. This part of her view has remained in place since “Believing the Axioms”. You can find it in every one of her books, including the latest, “Defending the Axioms”.

As I said, I do agree that P’s and V’s are of value; they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

It is interesting that you mention Forcing Axioms since it provides a nice contrast.

Forcing Axioms are also based on “maximality” but the sense of maximality at play is more specific and in this case the program has led to a lot of convergence. Moreover, it has implications for other areas of mathematics.

Todorcevic’s paper for the EFI Project has a nice account of this. Here are just a few examples:

Theorem (Baumgartner) Assume \mathfrak{mm}>\omega_1. Then all separable \aleph_1-dense linear orders are isomorphic.

Theorem (Farah) Assume \mathfrak{mm}>\omega_1 (or just OGA). Then all automorphisms of the Calkin algebra are inner.

Theorem (Moore) Assume \mathfrak{mm}>\omega_1. Then the class of uncountable linear orders has a five-element basis.

Theorem (Todorcevic) Assume \mathfrak{mm}>\omega_1. Then every directed set of cardinality at most \aleph_1 is Tukey-equivalent to one of 1, \omega, \omega_1, \omega\times\omega_1, or [\omega_1]^{<\omega}.

The picture under CH is dramatically different. And this difference between the two pictures has been used as part of the case for Forcing Axioms. (See, again, Todorcevic’s paper.) I am not endorsing that case. But it is a case that needs to be reckoned with.

An additional virtue of this program is that it does make predictions. It is a precise program that has made predictions (like the prediction that Moore confirmed). Moreover, the philosophical case for the program has been taken largely to turn on an open problem, namely, whether \textsf{MM}^{++} and (*) are compatible. If they are compatible, advocates of the program would see that as strengthening the case. If they are not compatible, then advocates of the program admit that it would be a problem.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

I hope you don’t think I have been maintaining that it can’t be done! I have just been trying to understand the program — the picture, the principles, etc. I have also been trying to understand the basis of your pessimism about the slow, painstaking approach through accumulation of evidence — an approach that has worked so well in the fifty years of research on \text{AD}^{L(\mathbb R)}. I have been trying to understand the basis of your pessimism about Type 1 (understood properly, in terms of evidence) and the basis of your optimism about Type 3, especially in light of the different track records so far. Is there reason to think that the Type 1 approach will not lead to a resolution of CH? (I am open-minded about that.) Is there reason to think that your approach to Type 3 will? I guess for the latter we will just have to wait and see.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

It is a virtue of a program if it generates predictions which are subsequently verified. To the extent that these predictions are verified one obtains extrinsic evidence for the program. To the extent that these predictions are refuted one obtains extrinsic evidence for the problematic nature of the program. It need not be a prediction which would “seal the deal” in the one case and “set it back to square one” in the other (two rather extreme cases). But there should be predictions which would lend support in the one case and take away support in the other.

The programs for new axioms that I am familiar with have had this feature. Here are some examples:

(1) Definable Determinacy.

The descriptive set theorists made many predictions that were subsequently verified and taken as support for axioms of definable determinacy. To mention just a few: There was the prediction that \text{AD}^{L(\mathbb R)} would lift the structure theory of Borel sets of reals (provable in ZFC) to sets of reals in L(\mathbb R). This checked out. There was the prediction that \text{AD}^{L(\mathbb R)} followed from large cardinals. This checked out. The story here is long and impressive and I think that it provides us with a model of a strong case for new axioms. For the details of this story — which is, in my view, a case of prediction and verification and, more generally, a case that parallels what happens when one makes a case in physics — see the Stanford Encyclopedia of Philosophy entry “Large Cardinals and Determinacy”, Tony Martin’s paper “Evidence in Mathematics”, and Pen’s many writings on the topic.

(2) Forcing Axioms

These axioms are based on ideas of “maximality” in a rather special sense. The forcing axioms ranging from \textsf{MA} to \textsf{MM}^{++} are a generalization along one dimension (generalizations of the Baire Category Theorem, as nicely spelled out in Todorcevic’s recent book “Notes on Forcing Axioms”) and the axiom (*) is a generalization along a closely related dimension. As in the case of Definable Determinacy, there has been a pretty clear program and a great deal of verification and convergence. And, at the current stage, advocates of forcing axioms are able to point to a conjecture which, if proved, would support their view and, if refuted, would raise a serious problem (though not necessarily setting it back to square one), namely, the conjecture that \textsf{MM}^{++} and (*) are compatible. That I take to be a virtue of the program. There are test cases. (See Magidor’s contribution to the EFI Project for more on this aspect of the program.)

(3) Ultimate L

Here we have lots of predictions which, if proved, would support the program, and there are propositions which, if proved, would raise problems for the program. The most notable one is the “Ultimate L Conjecture”. But there are many other things. For example, that conjecture implies that V = HOD. So, if the ideas of your recent letter work out, and your conjecture (combined with results of “Suitable Extender Models, I”) proves the HOD Conjecture, then this will lend some support to “V = Ultimate L”, in that “V = Ultimate L” predicts a proposition that was subsequently verified in ZFC.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing), now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Pen,

Thanks for this! I didn’t know about the Single Truth Convention, and I will respect it from now on. Somehow I thought that, for example, constructivists would claim that there are at least 2 distinct truth notions, but I guess I was wrong about that (maybe you would say that they are working with a different set-concept?).

Of course I knew that the official version of Thin Realism takes both Type 1 (good set theory) and Type 2 (set theory as a foundation) evidence into account; but the fact is that in the present forum with only one exception we’ve been talking exclusively about axioms that are good for set theory, like PD and large cardinals, but not good for other areas of mathematics (the exception is when I brought in forcing axioms). More on this point below.

So there were “terminological” errors in what I said. But correcting those errors leaves the argument unchanged and in fact I will make the argument even stronger in this mail.

There are 3 kinds of evidence for Truth (not 3 kinds of Truth!), emanating from the 3 roles of Set Theory that I indicated: (1) A branch of mathematics, (2) A foundation for mathematics and (3) A study of the set-concept.

Now you will object immediately and say that there is no Type 3 evidence; Type 3 is just an engine (heuristic) for generating new examples of Types 1 and 2. Fine, but it still generates evidence, albeit indirectly! You would simply cleanse that evidence of its source (the maximality of the set-concept) and just regard it as plain good set theory or mathematics. It is hard to imagine (although not to be ruled out) that Type 3 will ever generate anything useful for mathematics outside of set theory, so let’s say that Type 3 provides an indirect way of generating evidence of Type 1. Clearly the way that this evidence is generated is not the usual direct way. In any case even your Thin Realist will take Type 3 generated evidence into account (as long as it entails good set theory).

So up to this point the only difference we have is that I regard Type 3 considerations as more than just an indirect way of generating new Type 1 evidence; I would like to preserve the source of that evidence and say that Type 3 considerations enhance our understanding of the set-concept through the MIC, i.e., they are good for the development of the philosophy of the set-concept. Of course the radical skeptic regards this as pure nonsense, I understand that. But I continue to think that there is more at play here than just good set theory or good math, there is also something valuable in better understanding the concept of set. For now we can just leave that debate aside and regard it just as a polite, collegial disagreement which can safely be ignored.

OK, so we have 3 sources for truth. But there is an important difference between Type 1 vs. Types 2, 3 and this regards the issue of “grounding” for the evidence.

Type 3 evidence (i.e. evidence evoked through Type 3 considerations, as you may prefer to say) is grounded in the maximal iterative conception. The HP is limited to squeezing out consequences of that. Of course there is some wiggle room here, but it is fairly robust to say that some mathematical criterion reflects the Maximality of the universe or synthesises two other such criteria.

Type 2 evidence is grounded in the practice of mathematics outside of set theory. Functional analysts, number theorists, topologists, group theorists, … are not thinking about set theory directly, but axioms of set theory are of course useful for them. You cite AC, which is a perfect example, but in the contemporary setting we can look at the value for example of forcing axioms for the combinatorial power they provide for resolving problems in mathematics outside of set theory. Now just as Type 3 evidence is limited to what grounds it, namely considerations regarding the maximality of the set-concept, so is Type 2 evidence limited by what is valuable for the work of mathematicians who don’t have set theory in their heads, who are not thinking about actualism vs. potentialism, reflection principles, HOD, … To see that this is a nontrivial limitation, just note that two of the most discussed axioms of set theory in this forum, PD and large cardinals, appear to have nothing relevant to say about areas of mathematics outside of set theory. In this sense the Type 2 evidence for forcing axioms is overwhelming in comparison to Type 2 evidence for anything else we have been discussing here, including Ultimate-L or what is generated by the HP.

So there is a big gap in what we know about evidence for set-theoretic truth. We have barely scratched the surface with the issue of what new axioms of set theory are good for the development of mathematics outside of set theory. Your great example, the Axiom of Choice, won the day for its value in the development of mathematics outside of set theory, yet for some reason this important point has been forgotten and the focus has been on what is good for the development of just set theory. (This may be the only point on which Angus MacIntyre and I may agree: To understand the foundations of mathematics one has to take a close look at mathematics and see what it needs, and not just play around with set theory all the time.)

In my view, Type 1 evidence is very poorly grounded, I would even say not grounded at all. It is evidence that says that some axiom of set theory is good for set theory. That could mean 100 different things. One person says that V = L is true because it is such a strong theory, another that forcing axioms are true because they have great combinatorial strength, Ultimate-L is true for reasons I don’t yet understand, …, not to forget Aczel’s AFA, or constructive set theory, … the list is almost endless. With just Type 1 evidence, we allow set-theorists to run rampant, declaring their own brand of set theory to be particularly “good set theory”. I think this is what I meant earlier when I griped about set-theorists promoting their latest discoveries to the level of evidence for truth.

Pen, can you give Type 1 evidence a better grounding? I’m not sure that I grasped the point of Hugh’s latest mail, but maybe he is hinting at a way of doing this:

“Assuming V = Ultimate L one can have inner models containing the reals of say MM. But assuming MM one cannot have an inner model containing the reals which satisfies V = Ultimate L.”

Perhaps (not to put words in Hugh’s mouth) he is saying that Axiom B is better than Axiom A if models of Axiom B produce inner models of Axiom A but not conversely. Is this a start on how to impose a justifiable preference for one kind of good set theory over another? But maybe I missed the point here, because it seems that one could have “MM together with an inner model of Ultimate-L not containing all of the reals”, in which case that would be an even better synthesis of the 2 axioms! (Here we go again, with maximality and synthesis, this time with first-order axioms, rather than with the set-concept and Hyperuniverse-criteria.)

Anyway, in the absence of a better grounding for Type 1 evidence I am strongly inclined to favour what is well-grounded, namely evidence of Types 2 and 3.

You raised the issue of conflict. It is clear that there can be Type 1 conflicts, i.e. conflicts between different forms of Type 1 evidence, and that’s why I’m asking for a better grounding for Type 1. We don’t know yet if there are Type 2 conflicts, because we don’t know much about Type 2 evidence at all. And the hardest part of the HP is dealing with Type 3 conflicts; surely they arise, but my “synthesis method” is meant to resolve them.

But what about conflicts between evidence of different Types (1, 2 or 3)? The Single Truth Convention makes this tough; I can’t weasel out anymore by simply saying that there are just different forms of Truth (too bad). Nor can I accept simply rejecting evidence of a particular Type (as you “almost” seemed to suggest when you hinted that Type 3 should defer to Types 1 and 2). This is a dilemma. To present a wild scenario, suppose we have:

(Type 1) The axioms that give the “best set theory” imply CH.
(Type 2) The axioms that give the best foundation for mathematics outside of set theory imply not-CH.
(Type 3) The axioms that follow from the maximality of the set-concept imply not-CH.

What do we tell Sol at that point? Does the majority win, 2 out of 3 for not-CH? I don’t think that Sol will be very impressed by that.

Sorry that this mail got so long. As always, I look forward to your reply.

All the best,
Sy