Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sol,

My participation in this interesting discussion is now at its end, as almost anything I say at this point would just be a repeat of what I’ve already said. I don’t regret having triggered this Great Debate on July 31, in response to your interesting paper, as I have learned enormously from it. Yet at the same time I wish to offer you an apology for the more than 500 subsequent e-mails, which surely far exceed what you expected or wanted.

Before signing off I’d like to leave you with an abridged summary of my views and also give appropriate thanks to Pen, Geoffrey, Peter, Hugh, Neil and others for their insightful comments. Of course I am happy to discuss matters further with anyone, and indeed there is one question that I am keen to ask Geoffrey and Pen, but I’ll do that “privately” as I do think that this huge e-mail list is no longer the appropriate forum. My guess is that the vast majority of recipients of these messages are quite bored with the whole discussion but too polite to ask me to remove their name from the list.

All the best, and many thanks, Sy


Re: Paper and slides on indefiniteness of CH

Dear Pen and Peter,

Pen:

I am sorry to have annoyed you with the issue of TR and Type 2 evidence; indeed you have made it clear many times that the TR does take such evidence into account. I got that! But in your examples, you vigorously hail the virtues of \text{AD}^{L(\mathbb R)} and other developments that have virtually no relevance for math outside of set theory, rather than Forcing Axioms, which provide real Type 2 evidence! As I said, I think you got it very right with your excellent “Defending”, but in your 2nd edition you might want to hail the virtues of Forcing Axioms well above \text{AD}^{L(\mathbb R)}, Ultimate L (should it be “ripe” by the time of your 2nd edition) or other math-irrelevant topics, giving FA’s their richly-earned praise for winning evidence of both Types 1 and 2.

I was really hoping for your reaction to the following, but I guess I ain’t gonna get it:

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored.

Let me make this more specific. Look at the following axioms:

  • V = L
  • V is not L, but is a canonical model of ZFC, generic over L
  • Large Cardinal axioms like Supercompact
  • Forcing Axioms like PFA
  • AD in L(\mathbb R)
  • Cardinal Characteristics like \mathfrak b < \mathfrak a < \mathfrak d
  • (The famous) “Etcetera”

It seems that each of these has pretty good Type 1 evidence (useful for the development of set theory, with P’s and V’s).

But look! We can discriminate between these examples with evidence of Types 2 and 3! Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory. And of course Type 3 kills V = L. So using all three Types of evidence, we have a clear winner, Forcing Axioms!

I expect that without heavy use of Type 2 and Type 3 evidence, i.e. relying on Type 1 evidence alone, we aren’t going to get any consensus about set-theoretic truth.

Peter:

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelavian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!
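In case it helps to have the picture in front of us, here is a rough sketch of #-generation (a schematic picture only, suppressing the precise requirements on the presharp): V is #-generated if there is an iterable presharp (N, U) with critical point \kappa whose iteration N = N_0 \to N_1 \to \cdots, with critical points \kappa = \kappa_0 < \kappa_1 < \cdots, generates V as the union of the lower parts of its iterates:

V = \bigcup_{i \in \mathrm{Ord}} (V_{\kappa_i})^{N_i}

Weak #-generation asks only for an \alpha-iterable presharp for each countable \alpha, a distinction that comes up again in the mails to Hugh below.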

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand them. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L (which is necessary for this theory), despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (\omega_1, \omega_2, large, respectively).

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and that as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There have been exactly 2 changes to the HP-procedure: one on August 21, when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and one on September 24, when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism! That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On 3 September you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Best, Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh and Pen,

Hugh:

1. You proposed:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated

I can’t see why this would be true. One needs \alpha-iterable presharps for each \alpha to witness the weak #-generation of M, and although each of these presharps can be preserved by some real coding M, there is no single real that does this for all \alpha simultaneously.

Instead, I realise that the theory-version of \textsf{IMH}^\# results in a statement for countable models which is a bit weaker than what I said. So I have to change the formulation of \textsf{IMH}^\# again! (Peter, before you go crazy, let me again emphasize that this is how the HP works: We make an investigation of maximality criteria and only through a lot of math and contemplation do we start to understand what is really going on. It requires time and patience.)

OK, the theory version would say: #-generation for V is consistent in V-logic (formulated in any lengthening of V), and for every \phi, the theory in V-logic which says that V is #-generated and that \phi holds in a #-generated outer model M of V proves that \phi holds in an inner model of V.

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

For each \phi the above hypothesis implies that for each countable \alpha, \phi holds in an outer model of V with an \alpha-iterable generator. But if V is in fact fully #-generated then the hypothesis implies that \phi holds in an outer model of V which is also fully #-generated. So now we get consistency just like we did for the original oversimplified form of the \textsf{IMH}^\# for countable models.

2. You said:

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy … This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

Sorry, I still don’t get it. Forcing extensions of L don’t play much of a role in understanding small large cardinals, do they? Yet if 0^\# provably does not exist I don’t see the argument for V = L; in fact I don’t even see the argument for CH. Now why wouldn’t you favour something like “V is a forcing extension of Ultimate L which satisfies MM”?

3. The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about Weak Square. It holds at \kappa in our model.

Pen:

You have caved in to Peter’s P’s and V’s (Predictions and Verifications)!

Peter wrote:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

Then you said:

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here …

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

As I said, I do agree that P’s and V’s are of value, they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

Pen, I really think you’ve made a wrong turn here. You were basing your Thin Realism very sensibly on what set-theorists actually do in their practice, what they think is important, what will lead to exciting new developments. P’s and V’s are a side issue, sometimes of value but surely not central to the practice of “good set theory”.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics? Everyone keeps ignoring this point in this thread, despite my repeated attempts to bring it forward. Does a functional analyst or algebraist care about Ultimate L or the HP? Of course not! They might laugh if they were to hear about the arguments that we have been having, which for them are just esoteric and quite irrelevant to mathematics as a whole. Forcing Axioms can at least lay a claim to being really useful both for set theory and other areas of mathematics; surely they have to be part of any theory of truth. Anyone who makes claims about set-theoretic truth, be it Ultimate L or HP or anything else, and ignores them is missing something important. And won’t it be embarrassing if, 100 years from now, set-theorists announce that they have finally figured out what the “correct axioms for set theory” are and mathematicians from other fields don’t care, as the “new and true axioms” are either quite useless for what they are doing or even conflict with the axioms that they would like to have for their own “good mathematics”?

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored. And we must also recognise that the procedure for uncovering evidence of these three types depends heavily on the type in question. “Defending” (even without P’s and V’s) teaches us how the process works in Type 1. For Type 2 we have to get into the trenches and see what the weapons being used in core mathematics are, and how we can help when independence infiltrates. For Type 3 it has to be what I am doing: an open-minded, sometimes sloppy and constantly changing (at least at the start) “shotgun approach” to investigating maximality criteria with the optimistic and determined aim of seeing a clear picture after a lot of very hard work is accomplished. The math is very challenging and as you have seen it is even hard to get things formulated properly. But I have lost patience with and will now ignore all complaints that “it cannot be done”, complaints based on nothing more than unjustified pessimism.

Yes, there is a lack of consensus regarding “good set theory”. But Peter is plain wrong to say that it has “no place in a foundational enterprise”. It has a very important place, but to reach a consensus about what the “correct” axioms of set theory should be, the evidence from “good set theory” must be augmented, not just by P’s and V’s but also by other forms of evidence coming from math outside of set theory and from the study of the maximality of V in height and width.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

Thanks, Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen and Hugh,

Pen:

Well I said that we covered everything, but I guess I was wrong! A new question for you popped into my head. You said:

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’.

I just realised that I may have misunderstood this.

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

I disagree with the last sentence of this quote (I expect that you do too), but the fact remains that if we don’t require a consensus about “good set theory” then truth does break into (“degenerate into” is inappropriate) “Hugh’s truth”, “Saharon’s truth”, “Stevo’s truth”, “Ronald’s truth” and so on. (Note: I don’t mean to imply that Saharon or Stevo really have opinions about truth; here I only refer to what one reads off from their forms of “good set theory”.) I don’t think that’s bad and see no need for one form of “truth” that “swamps all the others”.

Now when it comes to the HP you insist that there is just one “shared picture”. What do you mean now by “picture”? Is it just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”? If so, then I agree that this is the starting point of the HP and should be shared, independently of how the HP develops.

In my mail to you of 31 October I may have misinterpreted you by assuming that by “picture” you meant something sensitive to new developments in the programme. For example, when I moved from a short fat “picture” based on the IMH to a taller one based on the \textsf{IMH}^\#, I thought you were regarding that as a change in “picture”. Let me now assume that I made a mistake, i.e., that the “shared picture” to which you refer is just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”.

Now I ask you this: Are you going further and insisting that there must be a consensus about what mathematical consequences this “shared picture” has? That will of course be necessary if the HP is to claim “derivable consequences” of the maximality of V in height and width, and that is indeed my aim with the HP. But what if my aim were more modest, simply to generate “evidence” for axioms based on maximality just as TR generates “evidence” for axioms based on “good set theory”; would you then agree that there is no need for a consensus, just as there is in fact no consensus regarding evidence based on “good set theory”?

In this way one could develop a good analogy between Thin Realism and a gentler form of the HP. In TR one investigates different forms of “good set theory” and as a consequence generates evidence for what is true in the resulting “pictures of V”. In the gentler form of the HP one investigates different forms of “maximality in height and width” to generate evidence for what is true in a “shared picture of V”. In neither case is there the presumption of a consensus concerning the evidence generated (in the original HP there is). This gentler HP would still be valuable, just as generating different forms of evidence in TR is valuable. What it generates will not be “intrinsic to the concept of set” as in the original ambitious form of the HP, but only “intrinsically-based evidence”, a form of evidence generated through an examination of the maximality of V in height and width, rather than by “good set theory”.

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp, then \phi holds in an inner model of M.
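So, to have it in one place, the formulation I would now propose reads (just your 1) together with the corrected 2)):

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated, and

2) for every \phi: if for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp, then \phi holds in an inner model of M.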

2. Could you explain a bit more why V = Ultimate L is attractive? You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.” But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = (\alpha^+)^M.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = \text{HOD}.)

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?
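Spelled out in a single statement (just combining GCH with the four properties above): Is there an inner model M such that M \models \text{GCH}, V is a generic extension of M, \alpha^+ = (\alpha^+)^M for a proper class of cardinals \alpha, there is no nontrivial elementary embedding from M to M, and every large cardinal property witnessed in V is witnessed in M?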

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.

Peter:

1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdos, (\omega+\omega)-Erdos, …, #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts … IMH(\omega-Erdos) violates (\omega+\omega)-Erdos, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals, and as an extra bonus, with all large cardinals. It is a rather weak criterion, but becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1-preserving outer models).
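Displayed as a chain (just a restatement of the previous sentences, nothing new):

IMH violates inaccessibles,
IMH(inaccessibles) violates Mahlos,
IMH(Mahlos) violates weak compacts,
…
IMH(\omega-Erdos) violates (\omega+\omega)-Erdos,
…
and the limit of the chain is \textsf{IMH}^\#.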

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory, one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude, they want to know what criteria and consensus comes out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true and this is a key point: They are expressible in a first-order way over Gödel lengthenings of M. (By Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!) and the only reason that Maximality Criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height maximal universe M is countable in its Gödel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
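In symbols, for a real x coding a countable model M (a rough record of the quantifier pattern only, not a careful computation):

“M is #-generated”: there is a fully iterable presharp generating M — an existential quantifier over candidate presharps, giving a \Sigma^1_3 property of x.

“M is weakly #-generated”: for every countable \alpha there is an \alpha-iterable presharp generating M — a universal quantifier over countable \alpha followed by an existential one over presharps, giving a \Pi^1_2 property of x.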

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s: there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and, based on my proof (with co-authors) of its consistency, I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

To repeat: I am not out to kill any particular axiom of set theory! I just want to take an unbiased look at what comes out of Maximality Criteria. It is far too early to conclude from the HP that extendibles don’t exist.

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Thu, 30 Oct 2014, Penelope Maddy wrote:

I’m pretty sure Hugh would disagree with what I’m about to say, which naturally gives me pause. With that understood, I confess that from where I sit as a relatively untutored observer, it looks as if the evidence Hugh is offering is overwhelming of your Type 1 (involving the mathematical virtues of the attendant set theory).

Let me give you a counterexample.

With co-authors I established the consistency of the following

Maximality Criterion. For each infinite cardinal \alpha, (\alpha^+)^{\text{HOD}} is less than \alpha^+.

Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why? I have yet to see an argument that large cardinal existence is needed for “good set theory”, so it does not follow from Type 1 evidence. That is why I think that large cardinal existence is part of Hugh’s personal theory of truth.

My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand.

There is some ready to hand: At present, Type 2 evidence points towards Forcing Axioms, and these contradict CH and therefore contradict Ultimate L.

He has a ‘picture’ of what the set theoretic universe is like, a picture that guides his thinking, but he doesn’t expect the rest of us to share that picture and doesn’t appeal to it as a way of supporting his claims. If the mathematics goes this way rather than that, he’s quite ready to jettison a given picture and look for another. In fact, at times it seems he has several such pictures in play, interrelated by a complex system of implications (if this conjecture goes this way, the universe like this; if it goes that way, it looks like that…) But all this picturing is only heuristic, only an aide to thought — the evidence he cites is mathematical. And, yes, this is more or less how one would expect a good Thin Realist to behave (one more time: the Thin Realist also recognizes Type 2 evidence). (My apologies, Hugh. You must be thinking, with friends like these … )

That’s a lot to put in Hugh’s mouth. Probably we should invite Hugh to confirm what you say above.

The HP works quite differently. There the picture leads the way —

As with your description above, the “picture” as you call it keeps changing, even with the HP. Recall that the programme began solely with the IMH. At that time the “picture” of V was very short and fat: No inaccessibles but lots of inner models for measurable cardinals. Then came #-generation and the \textsf{IMH}^\#; a taller, handsomer universe, still with a substantial waistline. As we learn more about maximality, we refine this “picture”.

the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’. So far, to be honest, I’m still not clear on the HP picture, either in its height potentialist/width actualist form or its full multiverse form. Maybe Peter is doing better than I am on that.

I have offered to work with the height potentialist/width actualist form, and even drop the reduction to ctm’s, to make people happy (this doesn’t affect the mathematical conclusions of the programme). Regarding Peter: Unless he chooses to be more open-minded, what I hear from him is a premature pessimism about the HP based on a claim that there will be “no convergence regarding what can be inferred from the maximal iterative conception”. To be honest, I find it quite odd that (excluding my coworkers Claudio and Radek) I have received the most encouragement from Hugh, who seems open-minded and interested in seeing what comes out of the HP, just as we all want to see what comes out of Ultimate L (my criticisms long ago had nothing to do with the programme itself, only with the way it had been presented).

Pen, I know that you have said that in any event you will encourage the “good set theory” that comes out of the HP. But the persistent criticism (not just from you) of the conceptual approach, aside from the math, while initially of extraordinary value to help me clarify the approach (I am grateful to you for that), is now becoming somewhat tiresome. I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

In light of your HOD Dichotomy I interpreted the HOD Conjecture to say that if there is an extendible cardinal \delta then HOD correctly computes successors of singular cardinals above \delta. All I meant was that if you drop the extendible then this conclusion need not hold. I am guessing (I really don’t know) that if there is an extendible then this conclusion does hold (and hence the HOD Conjecture is true).

Unless you can derive extendibles from some form of maximality the consequence I would draw from the HOD conjecture would be that maximality violates the existence of extendible cardinals.

Best, Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Wed, 29 Oct 2014, W Hugh Woodin wrote:

My question to Sy was implicitly: Why does he not, based on maximality, reject HOD Conjecture since disregarding the evidence from the Inner Model Program, the most natural speculation is that the HOD Conjecture is false.

Two points:

1. The HP is concerned with maximality but does not aim to make “conjectures”; its aim is to put forward maximality criteria and analyse them, converging towards an optimal criterion; that is all. A natural maximality criterion is that V is “far from \text{HOD}” and indeed my work with Cummings and Golshani shows that this is consistent. In fact, I would guess that an even stronger statement that V is “very far from \text{HOD}” is consistent, namely that all regular cardinals are inaccessible in \text{HOD} and more. What you call “the \text{HOD} Conjecture” (why does it get this special name? There are many other conjectures one could make about \text{HOD}!) presumes an extendible cardinal; what is that doing there? I have no idea how to get extendible cardinals from maximality.

2. Sometimes I make conjectures, for example the rigidity of the Stable Core. But this has nothing to do with the HP as I don’t see what non-rigidity of inner models has to do with maximality. I don’t have reason to believe in the rigidity of \text{HOD} (with no predicate) and I don’t see what such a statement has to do with maximality.