The Ultimate-L Conjecture

In this three-part post I would like to motivate and provide a high-level overview of Woodin’s recent work in inner model theory, the goal being to describe the Ultimate-{L} Conjecture, so that we might discuss the mathematics surrounding it and its potential philosophical significance. (All unattributed results below are due to Woodin.)

I will start in this first post by introducing the HOD Dichotomy Theorem. This dichotomy leads to a fork in the road, each side of which points to a different future of set theory. The first leads to the prospect of an ultimate inner model—one that is compatible with all (standard) large cardinal axioms—and the Ultimate-{L} Conjecture is a precise conjecture as to what such an inner model might look like. The second leads to the prospect of a large cardinal hierarchy that transcends the axiom of choice. These two directions will be described in the second and third post, respectively.

Each possibility is of great foundational significance. In this respect, we are at a decisive phase in the development of the search for new axioms.

1. The HOD Dichotomy

As motivation for the HOD Dichotomy, we begin with the {L} Dichotomy.

1.1. The {L} Dichotomy

The following remarkable result is a combination of results due to Jensen, Kunen, and Silver, the most profound part being due to Jensen.

Theorem 1 (The {L} Dichotomy Theorem) Exactly one of the following holds:

  1. For all singular cardinals {\gamma},
    • {\gamma} is singular in {L}, and
    • {\gamma^+=(\gamma^+)^L.}
  2. Every uncountable cardinal is inaccessible in {L}.
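In outline (a standard gloss, not part of the theorem as quoted above), which side of the dichotomy holds is governed by whether {0^{\#}} exists:

  • If {0^{\#}} does not exist, then Jensen’s Covering Theorem applies to {L}, and alternative (1) follows: every singular cardinal {\gamma} remains singular in {L} and {\gamma^+=(\gamma^+)^L}.
  • If {0^{\#}} exists, then every uncountable cardinal is a Silver indiscernible for {L}, and hence inaccessible in {L}, which is alternative (2).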


Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sy,

Before we close this thread, it would be nice if you could state what the current version of \textsf{IMH}^\# is. This would at least leave me with something specific to think about.

Is it:

1) (SDF: Nov 5) M is weakly #-generated and for each \phi, if for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp, then \phi holds in an inner model of M.

2) (SDF: Nov 8) M is weakly #-generated and for all \phi: Suppose that whenever g is a generator for M (iterable at least to the height of M), \phi holds in an outer model of M with a generator which is at least as iterable as g. Then \phi holds in an inner model of M.

or something else? Or perhaps it is now a work in progress?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sol,

My participation in this interesting discussion is now at its end, as almost anything I say at this point would just be a repeat of what I’ve already said. I don’t regret having triggered this Great Debate on July 31, in response to your interesting paper, as I have learned enormously from it. Yet at the same time I wish to offer you an apology for the more than 500 subsequent e-mails, which surely far exceeds what you expected or wanted.

Before signing off I’d like to leave you with an abridged summary of my views and also give appropriate thanks to Pen, Geoffrey, Peter, Hugh, Neil and others for their insightful comments. Of course I am happy to discuss matters further with anyone, and indeed there is one question that I am keen to ask Geoffrey and Pen, but I’ll do that “privately” as I do think that this huge e-mail list is no longer the appropriate forum. My guess is that the vast majority of recipients of these messages are quite bored with the whole discussion but too polite to ask me to remove their name from the list.

All the best, and many thanks, Sy


Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your letter.

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelavian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!

I would like to repeat my request: Could you please give us an account of #-generation, explain how it arises from “length maximality”, and make a convincing case that it captures all (in particular, the Erdős cardinal \kappa(\omega)) and only the large cardinals that we can expect to follow from “length maximality”?

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand it. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V= L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L, which is necessary for this theory, despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (omega_1, omega_2, large, respectively).

You have not understood what I (or Pen, or Tony, or Charles, or anyone else who has discussed this matter in the literature) mean by “prediction and confirmation”. To understand what we mean you have to read the things we wrote; for example, the slides I sent you in response to precisely this question.

You cite cases of the form: “X was working with theory T. X conjectured P. The conjecture turned out to be true. Ergo: T!”

That is clearly not how “prediction and confirmation” works in making a case for new axioms. Why? Take T to be an arbitrary theory, say (to be specific) “\textsf{I}\Delta_0 + Exp is not total.” X conjectures that P follows from T. It turns out that X was right. Does that provide evidence for “Exp is not total”?

Certainly not.

This should be evident by looking at the case of “prediction and confirmation” in the physical sciences. Clearly not every verified prediction made on the basis of a theory T provides epistemic support for T. There are multiple (obvious) reasons for this, which I won’t rehearse. But let me mention two that are relevant to the present discussion. First, the theory T could have limited scope — it could pertain to what is thought (for other reasons) to be a fragment of the physical universe; e.g. the verified predictions of macroscopic mechanics do not provide epistemic support for conclusions about how subatomic particles behave. Cf. your V=L example. Second, the predictions must bear on the theory in a way that distinguishes it from other, competing theories.

Fine. But falling short of that ideal, one would at least like to see a prediction which, if true, would (according to you) lend credence to your program and, if false, would (according to you) take credence away from your program, however slight the change in credence might be. But you appear to have also renounced these weaker rational constraints.

Fine. The Hyperuniverse Program is a different sort of thing. It isn’t like (an analogue of) astronomy. And you certainly don’t want it to be like (an analogue of) astrology. So there must be some rational constraints. What are they?

Apparently, the fact that a program suggests principles that continue to falter is not a rational constraint. What then are the rational constraints? Is the idea that we are just not there yet but that at the end of inquiry, when the dust settles, we will have convergence and we will arrive at “optimal” principles, and that at that stage there will be a rationally convincing case for the new axioms? (If so, then we will just have to wait and see whether you can deliver on this promise.)

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There are two reasons I keep giving a summary of the changes, of how we got to where we are now. First, this thread is quite intricate and it’s useful to give the reader a summary of the state of play. Second, in assessing the prospects and tenability of a program it is useful to keep track of its history, especially when that program is not in the business of making predictions.

There have been exactly 2 changes to the HP-procedure, one on August 21 when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and on September 24 when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it, the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism!

This is not correct. (I wish I didn’t have to document this).

I never attributed height-actualism to you. (I hope that was a typo on your part). I wrote (in the private letter of Oct. 6, which you quoted and responded to in public):

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

I never attributed height actualism to you. I only very tentatively said that it appeared you had switched to width actualism and said that I didn’t believe that this was your official view.

That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

This is not correct. (Again, I wish I didn’t have to document this.)

You responded to my letter (in public) on Oct. 9, quoting the above passage, writing:

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

I then wrote letters asking you to confirm that you were indeed a radical potentialist. You confirmed this. (For the documentation see the beginning of my letter on K.)

So, I wrote the letter on K, after which you said that you regretted having admitted to radical potentialism.

You didn’t endorse width-actualism until Nov. 3, in response to the story about K. And it is only now that we are starting to see the principles associated with “width-actualism + height potentialism” (New IMH#, etc.)

I am fully aware (and have acknowledged) that you have said that the HP program is compatible with “width-actualism + height potentialism”. The reason I have focused on “radical potentialism” and not “width-actualism + height potentialism” is two-fold. First, you explicitly said that this was your official view. Second, you gave us the principles associated with this view (Old-\textsf{IMH}^\#, etc.) and have only now started to give us the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.). I wanted to work with your official view and I wanted something definite to work with.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

I certainly agree that it is more likely that one will get an answer on PD than an answer on CH. Of course, I believe that we already have a convincing case for PD. But let me set that aside and focus on your program. And let me also set aside questions about the epistemic force behind the principles you are getting (as “suggested” or “intrinsically motivated”) on the basis of the  “‘maximal’ iterative conception of set” and focus on the mathematics behind the actual principles.

(1) You proposed Strong Unreachability (as “compellingly faithful to maximality”) and you have said quite clearly that V does not equal HOD (“Maximality implies that there are sets (even reals) which are not ordinal-definable” (Letter of August 21)). From these two principles Hugh showed (via a core model induction argument) that PD follows. [In fact, in place of the second one just needs the (even more plausible) “V does not equal K”.]

(2) Max (on Oct. 14) proposed the following:

In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model \text{HOD}_A of sets hereditarily-ordinal definable with the parameter A!

Hugh pointed out (on Oct. 15) that the latter violates ZFC. Still, there is a principle in the vicinity that Max could still endorse, namely,

(H) For all uncountable cardinals \kappa, \kappa^+ is not correctly computed by HOD.
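Spelled out as a formula (a trivial unpacking of the statement above, since \text{HOD} \subseteq V always gives (\kappa^+)^{\text{HOD}} \le \kappa^+), (H) amounts to:

(\kappa^+)^{\text{HOD}} < \kappa^+ \quad \text{for every uncountable cardinal } \kappa.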

Hugh showed (again by a core model induction argument) that this implies PD.

So you already have different routes (based on principles “suggested” by the “‘maximal’ iterative conception of set”) leading to PD. So things are looking good!

(3) I expect that things will look even better. For the core model induction machinery is quite versatile. It has been used to show that lots of principles (like PFA, there is an \omega_1 dense ideal on \omega_1, etc.) imply PD. Indeed there is reason to believe (from inner model theory) that every sufficiently strong “natural” theory implies PD. (Of course, here both “sufficiently strong” and “natural” are necessary, the latter because strong statements like “Con(ZFC + there is a supercompact)” and “There is a countable transitive model of ZFC with a supercompact” clearly cannot imply PD.)

Given the “inevitability” of PD — in this sense: that time and again it is shown to follow from sufficiently strong “natural” theories — it is entirely reasonable to expect the same for the principles you generate (assuming they are sufficiently strong). It will follow (as it does in the more general context) from the core model induction machinery. This has already happened twice in the setting of the HP. I would expect there to be convergence on this front, as a special case of the more general convergence on PD.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3, you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Let us focus on a productive exchange about your current view of the program, as you now see it.

It would be helpful if you could:

(A) Confirm that the official view is indeed now “width-actualism + height potentialism”.

[If you say the official view is “radical potentialism” (and so are sticking with Old-\textsf{IMH}^\#, etc.) then [insert story of K.] If you say the official view is “width-actualism + height potentialism” then please give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.)]

(B) Give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.), what you know about their consistency, and a summary of what you can currently do with them. In short, it would be helpful if you could respond to Hugh’s last letter on this topic.

Thanks for continuing to help me understand your program.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

 Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory.

I was assuming that any theory capable of ‘swamping’ all others would ‘subsume’ the (Type 1 and Type 2) virtues of the others.  It has been argued that a theory with large cardinals can subsume the virtues of V=L by regarding them as virtues of working within L.  I can’t speak to forcing axioms, but I think Hugh said something about this at some point in this long discussion.

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Pen and Peter,

Pen:

I am sorry to have annoyed you with the issue of TR and Type 2 evidence; indeed, you have made it clear many times that the TR does take such evidence into account. I got that! But in your examples, you vigorously hail the virtues of \text{AD}^{L(\mathbb R)} and other developments that have virtually no relevance for math outside of set theory, rather than Forcing Axioms, which provide real Type 2 evidence! As I said, I think you got it very right with your excellent “Defending”, but in your 2nd edition you might want to hail the virtues of Forcing Axioms well above \text{AD}^{L(\mathbb R)}, Ultimate L (should it be “ripe” by the time of your 2nd edition) or other math-irrelevant topics, giving FA’s their richly-earned praise for winning evidence of both Types 1 and 2.

I was really hoping for your reaction to the following, but I guess I ain’t gonna get it:

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored.

Let me make this more specific. Look at the following axioms:

  • V = L
  • V is not L, but is a canonical model of ZFC, generic over L
  • Large Cardinal axioms like Supercompact
  • Forcing Axioms like PFA
  • AD in L(\mathbb R)
  • Cardinal Characteristics like \mathfrak b < \mathfrak a < \mathfrak d
  • (The famous) “Etcetera”

It seems that each of these has pretty good Type 1 evidence (useful for the development of set theory, with P’s and V’s).

But look! We can discriminate between these examples with evidence of Types 2 and 3! Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory. And of course Type 3 kills V = L. So using all three Types of evidence, we have a clear winner, Forcing Axioms!

I expect that without a heavy use of Type 2 and Type 3 evidence, we aren’t going to get any consensus about set-theoretic truth using only Type 1 evidence.

Peter:

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelavian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand it. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L, which is necessary for this theory, despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (omega_1, omega_2, large, respectively).

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There have been exactly 2 changes to the HP-procedure, one on August 21 when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and on September 24 when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it, the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism! That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3, you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Best, Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

In an attempt to move things along, I would like to both summarize where we are and sharpen what I was saying in my (first) message of Nov 8. My points were possibly swamped by the technical questions I raised.

1) We began with Original-\textsf{IMH}^\#

This is the #-generated version. In an attempt to provide a V-logic formulation you proposed a principle which I called (in my message of Nov 5):

2) New-\textsf{IMH}^\#

I raised the issue of consistency and you then came back on Nov 8 with the principle (*):

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

Let’s call this:

3) Revised-New-\textsf{IMH}^\#

(There are too many (*) principles)

But: Revised-New-\textsf{IMH}^\# is just the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\#.

So Revised-New-\textsf{IMH}^\# is consistent. But is Revised-New-\textsf{IMH}^\# really what you had in mind?

(The move from New-\textsf{IMH}^\# to the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\# seems a bit problematic to me.)

Assuming Revised-New-\textsf{IMH}^\# is what you have in mind, I will continue.

Thus, if New-\textsf{IMH}^\# is inconsistent then Revised-New-\textsf{IMH}^\# is just Original-\textsf{IMH}^\#.

So we are back to the consistency of New-\textsf{IMH}^\#.

The theorem (of my message of Nov 8 but slightly reformulated here)

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that
1) x is in M and M \vDash ``V = L[t]\text{ for a real }t"
2) M satisfies Revised-New-\textsf{IMH}^\# with parameter \eta
then M is #-generated (and so M satisfies Original-\textsf{IMH}^\#)

strongly suggests (but does not prove) that New-\textsf{IMH}^\# is inconsistent if one also requires M be a model of “V = L[Y] for some set Y”.

Thus if New-\textsf{IMH}^\# is consistent it likely must involve weakly #-generated models M which cannot be coded by a real in an outer model which is #-generated.

So just as happened with SIMH, one again comes to an interesting CTM question whose resolution seems essential for further progress.

Here is an extreme version of the question for New-\textsf{IMH}^\#:

Question: Suppose M is weakly #-generated. Must there exist a weakly #-generated outer model of M which contains a set which is not set-generic over M?

[This question seems to have a positive solution. But, building weakly #-generated models which cannot be coded by a real in an outer model which is weakly #-generated still seems quite difficult to me. Perhaps Sy has some insight here.]

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Peter is right, Sy. There’s no difference of opinion here between Peter and me about what counts as evidence, whether we call it ‘good set theory’ or ‘P and Vs’.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics?

I don’t mean to be cranky about this, Sy, but I’ve lost track of how many times I’ve repeated that my Thin Realist recognizes evidence of both your Type 1 (from set theory) and Type 2 (from mathematics). I think I’ve mentioned that the foundational goal of set theory in particular plays a central role (especially in Naturalism in Mathematics).

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that

(1) x is in M and M \vDash ``V = L[t] \text{ for a real }t"

(2) M satisfies (*)(\eta) (this (*) but allowing \eta as a parameter),

then M is #-generated.

So, you still have not really addressed the ctm issue at all.

Here is the critical question:

Key Question: Can there exist a ctm M such that M satisfies (*) in the hyper-universe of L(M)[G] where G is L(M)-generic for collapsing all sets to be countable?

Or even:

Lesser Key Question: Suppose that M is a ctm which satisfies (*). Must M be #-generated?

Until one can show the answer is “yes” for the Key Question, there has been no genuine reduction of this version of \textsf{IMH}^\# to V-logic.

If the answer to the Lesser Key Question is “yes” then there is no possible reduction to V-logic.

The theorem stated above strongly suggests the answer to the Lesser Key Question is actually “yes” if one restricts to models satisfying “V = L[Y]\text{ for some set }Y”.

The point of course is that if M is a ctm which satisfies “V = L[Y]\text{ for some set }Y” and M witnesses (*) then M[g] witnesses (*) where g is an M-generic collapse of Y to \omega.

The simple consistency proofs of Original-\textsf{IMH}^\# all easily give models which satisfy “V = L[Y]\text{ for some set }Y”.

The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about weak square. It holds at \kappa in our model.

Assuming the Axiom \textsf{I}0^\# is consistent one gets a model of ZFC in which for some singular strong limit \kappa of uncountable cofinality, weak square fails at \kappa and \kappa^+ is not correctly computed by HOD.

So one cannot focus on cofinality \omega (unless Axiom \textsf{I}0^\# is inconsistent).

So born of this thread is the correct version of the problem:

Problem: Suppose \gamma is a singular strong limit cardinal of uncountable cofinality such that \gamma^+ is not correctly computed by HOD. Must weak square hold at \gamma?

Aside: \textsf{I}0^\# asserts there is an elementary embedding j:L(V_{\lambda+1}^\#) \to L(V_{\lambda+1}^\#) with critical point below \lambda.
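For reference, the other notion appearing in the problem (this is the standard definition, recorded here rather than quoted from the message): weak square at \gamma, written \square^*_\gamma, asserts that there is a sequence \langle \mathcal{C}_\alpha : \alpha < \gamma^+ \text{ a limit ordinal} \rangle such that each \mathcal{C}_\alpha is a nonempty collection of at most \gamma many clubs in \alpha, each of order type at most \gamma, with C \cap \beta \in \mathcal{C}_\beta whenever C \in \mathcal{C}_\alpha and \beta is a limit point of C. By Jensen’s work, \square^*_\gamma holds in L for every infinite cardinal \gamma.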

Regards, Hugh