
Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.

Peter:

1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdos, (\omega+\omega)-Erdos, …, #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts, … IMH(\omega-Erdos) violates (\omega+\omega)-Erdos, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals, and as an extra bonus, with all large cardinals. It is a rather weak criterion, but becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1-preserving outer models).
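
For reference, here is the basic criterion in the rough form I have been using it in this thread (stated loosely; the parameterized variants above are relativizations of it):

\[
\textsf{IMH}: \text{ if a first-order sentence holds in some outer model of } V, \text{ then it holds in some inner model of } V.
\]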

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded”, as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded”, as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP, and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude; they want to know what criteria and what consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true, and this is a key point: They are expressible in a first-order way over Gödel lengthenings of M. (By a Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!), and the only reason that maximality criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second-order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height-maximal universe M is countable in its Gödel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.
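
To make this special form explicit, here is a rough schema (my notation T_\alpha here is only illustrative; compare the treatment of #-generation in my reply to Hugh below): a maximality criterion for M is expressed by the consistency of theories definable inside the Gödel lengthenings of M, for instance

\[
\text{for each ordinal } \alpha: \quad L_\alpha(M) \models \text{“the theory } T_\alpha \text{ is consistent”},
\]

where the T_\alpha describe lengthenings and “thickenings” of M in a fixed first-order way. Properties of this form transfer to ctm’s via Löwenheim-Skolem; arbitrary second-order properties of M within H do not.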

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP, as you have done. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
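
Schematically, for a ctm M coded by a real, the two notions compare as follows (stated loosely):

\[
\begin{aligned}
\text{#-generation}: \quad & \exists N\,(N \text{ is a fully iterable presharp generating } M) && (\Sigma^1_3)\\
\text{weak #-generation}: \quad & \forall \alpha < \omega_1\ \exists N\,(N \text{ is an } \alpha\text{-iterable presharp generating } M) && (\Pi^1_2)
\end{aligned}
\]

As noted above, it is the drop from \Sigma^1_3 to \Pi^1_2 that allows the LS argument to go through.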

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s: there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter,

I think we should all be grateful to you for this eloquent description of how we gather evidence for new axioms based on the development of set theory. The first two examples (and possibly the third) that you present are beautiful cases of how a body of ideas converges on the formulation of a principle or principles with great explanatory power for topics which lie at the heart of the subject. Surely we have to congratulate those who have facilitated the results on determinacy and forcing axioms (and perhaps in time Hugh for his work on Ultimate L) for making this possible. Further, the examples mentioned meet your high standard for any such programme, which is that it “makes predictions which are later verified”.

I cannot imagine a more powerful statement of how Type 1 evidence for the truth of new axioms works, where again by “Type 1” I refer to set theory’s role as a field of mathematics and therefore by “Type 1 evidence” I mean evidence for the truth of a new axiom based on its importance for generating “good set theory”, in the sense that Pen has repeatedly emphasized.

But I do think that what you present is only part of the picture. Set theory is surely a field of mathematics that has its own key questions, and as it evolves new ideas are introduced which clarify those questions. But surely other areas of mathematics share that feature, even if they are free of questions of independence; they can have analogous debates about which developments are most important for the field, just as in set theory. So what you describe could be analogously described in other areas of mathematics, where “predictions” are made about how certain approaches will lead to the solution of central open problems. Briefly put: In your description of programmes for set theory, you treat set theory in the same way as one would treat any field of mathematics.

But set theory is much more than that. Before I discuss this key point, let me interrupt myself with a brief reference to where this whole e-mail thread began, Sol’s comments about the indefiniteness of CH. As I have emphasized, there is no evidence that the pursuit of programmes like the ones you describe will agree on CH. Look at your three examples: The first has no opinion on CH, the second denies it and the third confirms it! I see set theory as a rich and developing subject, constantly transforming itself with new ideas, and as a result of that I think it unreasonable, based on past and current evidence, to think that CH will be decided by the Type 1 evidence that you describe. Pen’s suggestion that perhaps there will be a theory “whose virtues swamp the rest” is wishful thinking. Thus if we take only Type 1 evidence for the truth of new axioms into account (Sol rightly pointed out the misuse of the term “axiom” and Shelah rightly suggested the better term “semi-axiom”), we will not resolve CH, and I expect that we won’t resolve much at all. Something more is needed if your goal is to say something about truth in set theory. (Of course it is fine not to have that goal, and only a handful of set-theorists have that goal.)

OK, back to the point that set theory is more than just a branch of mathematics. Set theory also has a role as a foundation for mathematics (Type 2). Can we really assume that Type 1 axioms like the ones you suggest in your three examples are the optimal ones for the role of set theory as a foundation? Do we really have a clear understanding of what axioms are optimal in this sense? I think it is clear that we do not.

The preliminary evidence would suggest that of the three examples you mention, the first and third are quite irrelevant to mathematics outside of set theory and the second (Forcing Axioms) is of great value to mathematics outside of set theory. Should we really ignore this in a discussion of set-theoretic truth? I mean set theory is a great branch of mathematics, rife with ideas, but can we really assert the “truth” of an axiom which serves set theory’s needs when other axioms that contradict it do a better job in providing other areas of mathematics what they need?

There is even more to the picture, beyond set theory as a branch of or a foundation for math. I am referring to its Type 3 role, as a study of the concept of set. There is widespread agreement that this concept entails the maximality of V in height and width. The challenge is to explain this feature in mathematical terms, the goal of the HP. There is no a priori reason whatsoever to assume that the mathematical consequences of maximality in this sense will conform to axioms which best serve the Type 1 or Type 2 needs of set theory (as a branch of or foundation for mathematics). Moreover, to pursue this programme requires a very different approach than what is familiar to the Type 1 set-theorist, perfectly described in your previous e-mail. I am asking you to please be open-minded about this, because the standards you set and the assumptions that you make when pursuing new axioms for “good set theory” do not apply when pursuing consequences of maximality in the HP. The HP is a very different kind of programme.

To illustrate this, let me begin with two quotes which illustrate the difference and set the tone for the HP:

I said to Hugh:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

In other words, my starting point is not what facilitates the “best set theory”, but what one can understand about maximality of V in height and width.

On a recent occasion, Hugh said to me:

[Yet] you propose to deduce the non existence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

This second quote precisely indicates the difference in our points of view. The HP is intended to be an unbiased analysis of the maximality of V in height and width, grounded in our intuitions about this feature and limited by what is possible mathematically. These intuitions are indeed fairly robust, surely more so than our judgments about what is “good set theory”. I know of no persuasive argument that large cardinal existence (beyond what is compatible with V = L) follows from the maximality of V in height and width. Indeed, in the literature authors such as Gödel had doubts about this, whereas they have felt that inaccessible cardinals are derivable from maximality in height.

So the only reasonable interpretation of Hugh’s comment is that he feels that LC existence is necessary for “good set theory” and that such Type 1 evidence should override any investigation of the maximality of V in height and width. Pen and I discussed this (in what seems like) ages ago in the terminology of “veto power” and I came to the conclusion that it should not be the intention of the HP to have its choice of criteria dictated by what is good for the practice of set theory as mathematics.

To repeat, the HP works like this: We have an intuition about maximality (of V in height and width) which we can test out with various criteria. It is a lengthy process by which we formulate, investigate and compare different criteria. Sometimes we “unify” or “synthesise” two criteria into one, resulting in a new criterion that based on our intuitions about maximality does a better job of expressing this feature than did the individual criteria which were unified. And sometimes our criteria conflict with reality, namely they are shown to be inconsistent in ZFC. Here are some examples:

Synthesis: The IMH is the most obvious criterion for expressing the maximality of V in width. #-generation is the strongest criterion for expressing the maximality of V in height. If we unify these we get \textsf{IMH}^\#, which is consistent but behaves differently than either the IMH alone or #-generation alone. Our intuition says that \textsf{IMH}^\# better expresses maximality than either the IMH alone or #-generation alone.
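
Roughly stated, the synthesised criterion reads as follows (a loose formulation; the precise version handles “thickenings” via theories, as described to Hugh above):

\[
\textsf{IMH}^\#: V \text{ is #-generated, and any sentence holding in some #-generated outer model of } V \text{ holds in some inner model of } V.
\]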

Inconsistency (examples with HOD): We can consistently assert the maximality principle V \neq \text{HOD}. A natural strengthening is that (\alpha^+)^{\text{HOD}} is less than \alpha^+ for all infinite cardinals \alpha. Still consistent. But then we go to the further natural strengthening: (\alpha^+)^{\text{HOD}_x} is less than \alpha^+ for all subsets x of \alpha (for all infinite cardinals \alpha). This is inconsistent. So we back off to the latter, but only for \alpha of cofinality \omega. Now it is consistent for many such \alpha, not yet known to be consistent for all such \alpha. We continue to explore the limits of maximality in this way, in light of what is consistent with ZFC. A similar issue arises with the statement that \alpha is inaccessible in HOD for all infinite regular \alpha, which is not yet known to be consistent (my belief is that it is).
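
In display form, the chain just described:

\[
\begin{aligned}
&(1)\ \ V \neq \text{HOD} && \text{consistent}\\
&(2)\ \ (\alpha^+)^{\text{HOD}} < \alpha^+ \text{ for all infinite cardinals } \alpha && \text{consistent}\\
&(3)\ \ (\alpha^+)^{\text{HOD}_x} < \alpha^+ \text{ for all infinite cardinals } \alpha \text{ and all } x \subseteq \alpha && \text{inconsistent}\\
&(4)\ \ \text{(3) restricted to } \alpha \text{ of cofinality } \omega && \text{open in general}
\end{aligned}
\]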

The process continues. There is a constant interplay between criteria suggested by our maximality intuitions and the mathematics behind these criteria. Obviously we have to modify what we are doing as we learn more of the mathematics. Indeed, as you pointed out in your more recent e-mail, there are maximality criteria which contradict ZFC; this has been obvious for a long time, in light of Vopěnka’s theorem.

You wrote:

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

Once again, the aim of the programme is to understand the consequences of the maximality of V in height and width. Your criterion of “making predictions” may be fine for your Type 1 programmes, which are grounded by nothing more than “good set theory”, but it is not appropriate for the HP. That is because the HP is grounded by an intrinsic feature of the set-concept, maximality, which will take a long time to understand. I see no basis for your suggestion that the programme is “infinitely revisable”; it simply requires a huge amount of mathematics to carry out. Already the synthesis of the IMH with #-generation is considerable progress, although to get a deeper understanding we’ll definitely have to deal with the \textsf{SIMH}^\# and HOD-maximality.

If you insist on a “prediction” the best I can do is to say that the way things look now, at this very preliminary stage of the programme, I would guess that both not-CH and the nonexistence of supercompacts will come out. But that can’t be more than a guess at this point.

Now I ask you this: Suppose we have two Type 1 axioms, like the ones in your examples. Suppose that one is better than the other for Type 2 reasons, i.e., is more effective for mathematics outside of set theory. Does that tip the balance between those two Type 1 axioms in terms of which is closer to the truth? And I ask the same question for Type 3: Could you imagine joining forces and giving priority to axioms that both serve the needs of set theory as mathematics and are derivable from the maximality of V in height and width?

You also wrote:

One additional worry is the vagueness of the idea of the “ ‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

No, the demands you want to make of a programme are appropriate for finding the right axioms for “good set theory” but not for an analysis of the maximality of V in height and width. For the latter it is more than sufficient to analyse the natural candidates for maximality criteria provided by our intuitions and achieve a synthesis. I predict that this will happen with striking consequences, but those consequences cannot be predicted without a lot of hard work.

Thanks,
Sy

PS: The above also addresses your more recent mail: I don’t reject a form of maximality just because it contradicts supercompacts (because I don’t see how supercompact existence is derivable from any form of maximality), and I don’t see any problem with rejecting maximality principles that contradict ZFC, simply because by convention ZFC is taken in the HP as the standard theory.

PPS: A somewhat weird but possibly interesting investigation would indeed be to drop the ZFC convention and examine criteria for the maximality of V in height and width over a weaker theory.