Tag Archives: Hyperuniverse Programme

Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sol,

My participation in this interesting discussion is now at its end, as almost anything I say at this point would just be a repeat of what I’ve already said. I don’t regret having triggered this Great Debate on July 31, in response to your interesting paper, as I have learned enormously from it. Yet at the same time I wish to offer you an apology for the more than 500 subsequent e-mails, which surely far exceed what you expected or wanted.

Before signing off I’d like to leave you with an abridged summary of my views and also give appropriate thanks to Pen, Geoffrey, Peter, Hugh, Neil and others for their insightful comments. Of course I am happy to discuss matters further with anyone, and indeed there is one question that I am keen to ask Geoffrey and Pen, but I’ll do that “privately” as I do think that this huge e-mail list is no longer the appropriate forum. My guess is that the vast majority of recipients of these messages are quite bored with the whole discussion but too polite to ask me to remove their name from the list.

All the best, and many thanks, Sy

Continue reading

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your letter.

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelavian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!
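For reference, here is the shape of the definition (a sketch only; the official formulation in the HP papers is given in terms of iterable “presharps” and may differ in detail): V is #-generated if there is an iterable structure N = (N, U), with U a measure on the largest cardinal of N, whose iteration N = N_0 \to N_1 \to \cdots, with critical points \kappa_0 < \kappa_1 < \cdots, satisfies V = \bigcup_{\alpha \in \mathrm{Ord}} V_{\kappa_\alpha}^{N_\alpha}. In other words, V is the union of the lower parts of the iterates of a single “sharp”; the lengthenings demanded by height maximality are then witnessed by the later stages of the iteration.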

I would like to repeat my request: Could you please give us an account of #-generation, explain how it arises from “length maximality”, and make a convincing case that it captures all (in particular, the Erdős cardinal \kappa(\omega)) and only the large cardinals that we can expect to follow from “length maximality”?

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand them. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V= L comes from the “good set theory”, not from the “prediction”.
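(To recall the standard facts for non-experts: Jensen proved that V = L implies the combinatorial principle \Diamond, and that \Diamond yields a Suslin tree, so Suslin’s Hypothesis already fails in L. For the Generalised Suslin Hypothesis one needs \kappa^+-Suslin trees for arbitrary infinite \kappa, and there \Diamond alone does not suffice: Jensen’s fine-structural principle \Box_\kappa, combined with a suitable \Diamond-principle, does the work.)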

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L (which this theory requires), despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done, and won’t they continue to do, just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (\omega_1, \omega_2, large, respectively).

You have not understood what I (or Pen, or Tony, or Charles, or anyone else who has discussed this matter in the literature) mean by “prediction and confirmation”. To understand what we mean you have to read the things we wrote; for example, the slides I sent you in response to precisely this question.

You cite cases of the form: “X was working with theory T. X conjectured P. The conjecture turned out to be true. Ergo: T!”

That is clearly not how “prediction and confirmation” works in making a case for new axioms. Why? Take T to be an arbitrary theory, say (to be specific) “\textsf{I}\Delta_0 + Exp is not total”. X conjectures that P follows from T. It turns out that X was right. Does that provide evidence for “Exp is not total”?

Certainly not.

This should be evident by looking at the case of “prediction and confirmation” in the physical sciences. Clearly not every verified prediction made on the basis of a theory T provides epistemic support for T. There are multiple (obvious) reasons for this, which I won’t rehearse. But let me mention two that are relevant to the present discussion. First, the theory T could have limited scope — it could pertain to what is thought (for other reasons) to be a fragment of the physical universe; e.g. the verified predictions of macroscopic mechanics do not provide epistemic support for conclusions about how subatomic particles behave. Cf. your V=L example. Second, the predictions must bear on the theory in a way that distinguishes it from other, competing theories.
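Put schematically: a verified prediction P lends support to T only if P discriminates among the live alternatives. If a competing theory T' proves P just as well (in the toy example, if \textsf{I}\Delta_0 + “Exp is total” also proves P), then the verification of P cannot favour T over T'. What one wants is a P with T \vdash P and T' \vdash \neg P (or at least T' \nvdash P), so that the outcome of checking P genuinely puts T at risk.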

Fine. But falling short of that ideal, one would at least like to see a prediction which, if true, would (according to you) lend credence to your program and, if false, would (according to you) take credence away from your program, however slight the change in credence might be. But you appear to have renounced these weaker rational constraints as well.

Fine. The Hyperuniverse Program is a different sort of thing. It isn’t like (an analogue of) astronomy. And you certainly don’t want it to be like (an analogue of) astrology. So there must be some rational constraints. What are they?

Apparently, the fact that a program suggests principles that continue to falter is not a rational constraint. What then are the rational constraints? Is the idea that we are just not there yet but that at the end of inquiry, when the dust settles, we will have convergence and we will arrive at “optimal” principles, and that at that stage there will be a rationally convincing case for the new axioms? (If so, then we will just have to wait and see whether you can deliver on this promise.)

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing and that, as a result, it is hard for you to evaluate. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There are two reasons I keep giving a summary of the changes, of how we got to where we are now. First, this thread is quite intricate and it’s useful to give the reader a summary of the state of play. Second, in assessing the prospects and tenability of a program it is useful to keep track of its history, especially when that program is not in the business of making predictions.

There have been exactly two changes to the HP-procedure: one on August 21, when after talking to Pen (and you) I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and one on September 24, when after talking to Geoffrey (and Pen) I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism!

This is not correct. (I wish I didn’t have to document this).

I never attributed height-actualism to you. (I hope that was a typo on your part.) I wrote (in the private letter of Oct. 6, which you quoted and responded to in public):

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

I never attributed height actualism to you; I only very tentatively said that it appeared you had switched to width actualism, and said that I didn’t believe that this was your official view.

That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

This is not correct. (Again, I wish I didn’t have to document this.)

You responded to my letter (in public) on Oct. 9, quoting the above passage, writing:

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

I then wrote letters asking you to confirm that you were indeed a radical potentialist. You confirmed this. (For the documentation see the beginning of my letter on K.)

So, I wrote the letter on K, after which you said that you regretted having admitted to radical potentialism.

You didn’t endorse width-actualism until Nov. 3, in response to the story about K. And it is only now that we are starting to see the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.).

I am fully aware (and have acknowledged) that you have said that the HP program is compatible with “width-actualism + height potentialism”. The reason I have focused on “radical potentialism” and not “width-actualism + height potentialism” is two-fold. First, you explicitly said that this was your official view. Second, you gave us the principles associated with this view (Old-\textsf{IMH}^\#, etc.) and have only now started to give us the principles associated with “width-actualism + height potentialism” (New-\textsf{IMH}^\#, etc.). I wanted to work with your official view and I wanted something definite to work with.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

I certainly agree that it is more likely that one will get an answer on PD than an answer on CH. Of course, I believe that we already have a convincing case for PD. But let me set that aside and focus on your program. And let me also set aside questions about the epistemic force behind the principles you are getting (as “suggested” or “intrinsically motivated”) on the basis of the “‘maximal’ iterative conception of set” and focus on the mathematics behind the actual principles.

(1) You proposed Strong Unreachability (as “compellingly faithful to maximality”) and you have said quite clearly that V does not equal HOD (“Maximality implies that there are sets (even reals) which are not ordinal-definable” (Letter of August 21)). From these two principles Hugh showed (via a core model induction argument) that PD follows. [In fact, in place of the second, one just needs the (even more plausible) “V does not equal K”.]

(2) Max (on Oct. 14) proposed the following:

In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model \text{HOD}_A of sets hereditarily-ordinal definable with the parameter A!

Hugh pointed out (on Oct. 15) that the latter violates ZFC. Still, there is a principle in the vicinity that Max could still endorse, namely,

(H) For all uncountable cardinals \kappa, \kappa^+ is not correctly computed by HOD; that is, (\kappa^+)^{\text{HOD}} < \kappa^+.

Hugh showed (again by a core model induction argument) that this implies PD.

So you already have different routes (based on principles “suggested” by the “‘maximal’ iterative conception of set”) leading to PD. So things are looking good!

(3) I expect that things will look even better. For the core model induction machinery is quite versatile. It has been used to show that lots of principles (like PFA, “there is an \omega_1-dense ideal on \omega_1”, etc.) imply PD. Indeed there is reason to believe (from inner model theory) that every sufficiently strong “natural” theory implies PD. (Of course, here both “sufficiently strong” and “natural” are necessary, the latter because strong statements like “Con(ZFC + there is a supercompact)” and “There is a countable transitive model of ZFC with a supercompact” clearly cannot imply PD.)

Given the “inevitability” of PD — in this sense: that time and again it is shown to follow from sufficiently strong “natural” theories — it is entirely reasonable to expect the same for the principles you generate (assuming they are sufficiently strong). It will follow (as it does in the more general context) out of the core model induction machinery. This has already happened twice in the setting of the HP. I would expect there to be convergence on this front, as a special case of the more general convergence on PD.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On September 3 you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Let us focus on a productive exchange about your current view of the program, as you now see it.

It would be helpful if you could:

(A) Confirm that the official view is indeed now “width-actualism + height potentialism”.

[If you say the official view is “radical potentialism” (and so are sticking with Old-\textsf{IMH}^\#, etc.) then [insert story of K.] If you say the official view is “width-actualism + height potentialism” then please give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.)]

(B) Give us a clear statement of the principles you now stand behind (New-\textsf{IMH}^\#, etc.), what you know about their consistency, and a summary of what you can currently do with them. In short, it would be helpful if you could respond to Hugh’s last letter on this topic.

Thanks for continuing to help me understand your program.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Looks like I have three roles here.

1. Very lately, some real new content that actually investigates some generally understandable aspects of “intrinsic maximality”. This has led rather nicely to legitimate foundational programs of a generally understandable nature, involving new kinds of investigations into decision procedures in set theory.

2. Attempts to direct the discussion into more productive topics. Recall the persistent subject line of this thread! The last time I tried this, I got a detailed response from Peter which I intended to answer, but put 1 above at a higher priority.

3. And finally, some generally understandable commentary on what is both not generally understandable and without tangible outcome.

This is a brief dose of 3.

QUOTE FROM BSL PAPER BY MR. ENERGY (jointly authored):

The approach that we present here shares many features, though not all, of Gödel’s program for new axioms. Let us briefly illustrate it. The Hyperuniverse Program is an attempt to clarify which first-order set-theoretic statements (beyond ZFC and its implications) are to be regarded as true in V, by creating a context in which different pictures of the set-theoretic universe can be compared. This context is the hyperuniverse, defined as the collection of all countable transitive models of ZFC.

DIGRESSION: The above seems to accept ZFC as “true in V”, but later discussions raise issues with this, especially with AxC.

So here we have the idiosyncratic propagandistic slogan “HP” for

*Hyperuniverse Program*

And we have the DEFINITION of the hyperuniverse as

**the collection of all countable transitive models of ZFC**

QUOTE FROM THIS MORNING BY MR. ENERGY:

That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

If it is supposed to be “inappropriate to refer to the HP as the study of ctm’s”, and “no need to consider ctm’s at all”, then why coin the term Hyperuniverse Program and then DEFINE the Hyperuniverse as the collection of all countable transitive models of ZFC???

THE SOLUTION (as I suggested many times)

Stop using HP and instead use CTMP = countable transitive model program. Only AFTER something foundationally convincing arises, AFTER working through all kinds of pitfalls carefully and objectively, consider trying to put forth and defend a foundational program.

In the meantime, go for a “full-blown theory of ctm’s” (language from Mr. Energy) so that you at least have something tangible to show for the effort if and when people reject your foundational program(s).

GENERALLY UNDERSTANDABLE AND VERY DIRECT PITFALLS IN USING INTRINSIC MAXIMALITY

It is “obvious” from intrinsic maximality that the GCH fails at all infinite cardinals because of “width considerations”.

This “refutes” the continuum hypothesis. This also “refutes” the existence of (\omega+2)-extendible cardinals, since they imply that the GCH holds at some infinite cardinals (Solovay).

QED
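(The Solovay result being invoked is of the following flavour: above a strongly compact cardinal \kappa the Singular Cardinal Hypothesis holds, so 2^\lambda = \lambda^+ for every singular strong limit \lambda > \kappa; the (\omega+2)-extendible version cited above is what the “refutation” trades on. So “intrinsic maximality”, naively applied to width, collides with large cardinal axioms that are themselves often defended on maximality grounds.)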

LESSONS TO BE LEARNED

You have to creatively analyze what is wrong with the above use of “intrinsic maximality”, and how it is fundamentally to be distinguished from other uses of “intrinsic maximality” that one is putting forward as legitimate. If this can be done in a suitably creative and convincing way, THEN you have at least the beginnings of a legitimate foundational program. WARNING: if the distinction is drawn too artificially, then you are not creating a legitimate foundational program.

Harvey

Re: Paper and slides on indefiniteness of CH

On Oct 31, 2014, at 12:20 PM, Sy David Friedman wrote:

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles).

But why do you have that impression? That is what I am interested in. You have given no reason and at the same time there seem to be many reasons for you not to have that impression. Why not reveal what you know?

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

Let the Strong HOD Hypothesis be: No successor of a singular strong limit of uncountable cofinality is \omega-strongly measurable in HOD.

(Recall: this is not known to consistently fail without appealing to something like Reinhardt cardinals. The restriction to uncountable cofinality is necessary because of the Axiom I0: Con(ZFC + I0) gives the consistency with ZFC of a singular strong limit cardinal whose successor is \omega-strongly measurable in HOD.)

If the Strong HOD Hypothesis holds in V and if the Maximality Criterion holds in V, then there are no supercompact cardinals; in fact there are no cardinals \kappa which are \omega_1+\omega-extendible, i.e. no \kappa for which there is an elementary j: V_{\kappa+\omega_1+\omega} \to V_{j(\kappa+\omega_1+\omega)}.

If ZFC proves the HOD Hypothesis, it surely proves the Strong HOD Hypothesis.
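(For those not tracking the definitions: the HOD Hypothesis asserts that there is a proper class of regular uncountable cardinals \lambda which are not \omega-strongly measurable in HOD. Roughly, \lambda is \omega-strongly measurable in HOD if on some set in HOD the club filter on the ordinals below \lambda of cofinality \omega behaves, within HOD, like an ultrafilter, so that \lambda is measurable in HOD; the Hypothesis says that HOD is not systematically wrong about stationarity in this way.)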

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates.

I see you making speculations for which I do not yet see any explanation. But fine, take all the time you want. I have no problem with agreeing that the HP is in a (mathematically) embryonic phase and that we have to wait before being able to have a substantive (mathematical) discussion about it.

There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

But if the synthesis of maximality, in the sense of failure of the HOD Hypothesis, together with large cardinals, in the sense of there is an extendible cardinal, yields a greatly enhanced version of maximality, why is this not enough?

That is what I am trying to understand.

Regards.
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion, and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH, and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

To repeat: I am not out to kill any particular axiom of set theory! I just want to take an unbiased look at what comes out of Maximality Criteria. It is far too early to conclude from the HP that extendibles don’t exist.

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Thu, 30 Oct 2014, Penelope Maddy wrote:

I’m pretty sure Hugh would disagree with what I’m about to say, which naturally gives me pause. With that understood, I confess that from where I sit as a relatively untutored observer, it looks as if the evidence Hugh is offering is overwhelmingly of your Type 1 (involving the mathematical virtues of the attendant set theory).

Let me give you a counterexample.

With co-authors I established the consistency of the following

Maximality Criterion. For each infinite cardinal \alpha, (\alpha^+)^{\text{HOD}} < \alpha^+.

Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why? I have yet to see an argument that large cardinal existence is needed for “good set theory”, so it does not follow from Type 1 evidence. That is why I think that large cardinal existence is part of Hugh’s personal theory of truth.

My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand.

There is some ready to hand: At present, Type 2 evidence points towards Forcing Axioms, and these contradict CH and therefore contradict Ultimate L.

He has a ‘picture’ of what the set theoretic universe is like, a picture that guides his thinking, but he doesn’t expect the rest of us to share that picture and doesn’t appeal to it as a way of supporting his claims. If the mathematics goes this way rather than that, he’s quite ready to jettison a given picture and look for another. In fact, at times it seems he has several such pictures in play, interrelated by a complex system of implications (if this conjecture goes this way, the universe looks like this; if it goes that way, it looks like that…) But all this picturing is only heuristic, only an aid to thought — the evidence he cites is mathematical. And, yes, this is more or less how one would expect a good Thin Realist to behave (one more time: the Thin Realist also recognizes Type 2 evidence). (My apologies, Hugh. You must be thinking, with friends like these …)

That’s a lot to put in Hugh’s mouth. Probably we should invite Hugh to confirm what you say above.

The HP works quite differently. There the picture leads the way —

As with your description above, the “picture” as you call it keeps changing, even with the HP. Recall that the programme began solely with the IMH. At that time the “picture” of V was very short and fat: No inaccessibles but lots of inner models for measurable cardinals. Then came #-generation and the \textsf{IMH}^\#; a taller, handsomer universe, still with a substantial waistline. As we learn more about maximality, we refine this “picture”.
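(Recall the statement of the IMH: if a first-order sentence holds in an inner model of some outer model of V, then it already holds in an inner model of V. The “short and fat” picture reflects the theorems of my paper with Welch and Woodin: the IMH implies that there are no inaccessible cardinals, yet it also implies the existence of inner models with measurable cardinals, indeed of large Mitchell order, if I recall the precise statement correctly.)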

the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’. So far, to be honest, I’m still not clear on the HP picture, either in its height potentialist/width actualist form or its full multiverse form. Maybe Peter is doing better than I am on that.

I have offered to work with the height potentialist/width actualist form, and even drop the reduction to ctm’s, to make people happy (this doesn’t affect the mathematical conclusions of the programme). Regarding Peter: Unless he chooses to be more open-minded, what I hear from him is a premature pessimism about the HP based on a claim that there will be “no convergence regarding what can be inferred from the maximal iterative conception”. To be honest, I find it quite odd that (excluding my coworkers Claudio and Radek) I have received the most encouragement from Hugh, who seems open-minded and interested in seeing what comes out of the HP, just as we all want to see what comes out of Ultimate L (my criticisms long ago had nothing to do with the programme itself, only with the way it had been presented).

Pen, I know that you have said that in any event you will encourage the “good set theory” that comes out of the HP. But the persistent criticism (not just from you) of the conceptual approach, aside from the math, while initially of extraordinary value to help me clarify the approach (I am grateful to you for that), is now becoming somewhat tiresome. I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Regarding a few recent quotes from Mr. Energy:

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

I assume that Hugh wants to claim that all natural paths in higher set theory, where “all x^\#, x \subseteq \omega, exist” lies at an early stage of development, lead to PD. Although it would be very nice to have some formalized version of this, it does seem to make sense, and I don’t recall seeing any convincing counterexample to it.

For instance, one can set up a language for natural statements in the projective hierarchy, and try to prove rigorous theorems backing up this statement.

“It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway).”

Any version of HP that I have seen, even for IMH, including any of my own (the only one that is “new” involves Boolean algebras), is awkward compared to using countable transitive models. So “perfectly well” seems like an exaggeration at best. Also I think (have I got this right?) Hugh pointed out a place in the proliferating “fixes” of IMH for which ctms are needed, or perhaps where the awkwardness of not using them becomes really severe? In addition, I think you never answered some of Hugh’s questions about formulating precise and interesting fixes of IMH.

ASIDE: It now seems that any settling of CH via “HP” or CTMP is extremely remote. You did not start the discussion here with this point of view. Recall the subject line of this email.

I think of IMH, with that triple paper, as something not uninteresting. My impression is that you don’t have a comparable second not uninteresting development. IMH has, under standard views at least, a prima facie fatal flaw that calls into doubt the coherence of the very notion you keep talking about – intrinsic maximality of the set theoretic universe. What seems most dubious about “HP” is that it is not robust, and doesn’t have a second not uninteresting success for a wide range of people to really ponder. My back channels indicate to me that the “fixes” artificially layer the idea of IMH on top of large cardinals, which is not a convincing way to proceed.

You should simply rename it CTMP (countable transitive model program), as Hugh and I have said, and then you have a license to pursue practically any grammatically coherent question whatsoever in the realm of ctms as a not uninteresting corner of higher set theory. If something foundational or philosophically coherent comes out of pursuing CTMP then you can try to make something of it foundationally or philosophically. You just don’t have enough success with “HP” to do this with it now. No, you can’t reasonably just invent a branch of set theory called “intrinsic maximality” without more not uninteresting successes. That’s way premature.

Since you spent the bulk of your career on not uninteresting technical work in set theory, it is heroic to try to “get religion” and do something “truly important”, as you are 61. I can see how you got excited with IMH, and got just the right help with the technical complications (Welch, Woodin). But you are trying to dress this up into a foundational/philosophical program under a hopelessly idiosyncratic propagandistic name (HP) way too early, and should instead have started with CTMP and pondered the difficulties with “intrinsic maximality” in an objective and creative way. Incidentally, the way PD is used to prove the consistency of IMH in that triple paper does lend some credence to Hugh’s conjecture that “HP” may well simply be another path leading to PD.

OK, I am skeptical in many dimensions of higher set theory, and probably will be raising issues with Koellner/Woodin. You have done some of that already, sometimes with unexpectedly strong language. But I don’t think that the “HP” is strong enough at this point to be using it to set an example that would undermine Koellner/Woodin. You surely can raise some legitimate issues without holding up “HP” as superior.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth”, and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it “true” and wants us all to believe that. This goes far beyond Thin Realism; it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

I’m pretty sure Hugh would disagree with what I’m about to say, which naturally gives me pause. With that understood, I confess that from where I sit as a relatively untutored observer, it looks as if the evidence Hugh is offering is overwhelmingly of your Type 1 (involving the mathematical virtues of the attendant set theory). My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand. He has a ‘picture’ of what the set theoretic universe is like, a picture that guides his thinking, but he doesn’t expect the rest of us to share that picture and doesn’t appeal to it as a way of supporting his claims. If the mathematics goes this way rather than that, he’s quite ready to jettison a given picture and look for another. In fact, at times it seems he has several such pictures in play, interrelated by a complex system of implications (if this conjecture goes this way, the universe looks like this; if it goes that way, it looks like that…) But all this picturing is only heuristic, only an aid to thought — the evidence he cites is mathematical. And, yes, this is more or less how one would expect a good Thin Realist to behave (one more time: the Thin Realist also recognizes Type 2 evidence). (My apologies, Hugh. You must be thinking, with friends like these…)

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’. So far, to be honest, I’m still not clear on the HP picture, either in its height potentialist/width actualist form or its full multiverse form. Maybe Peter is doing better than I am on that.

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Geoffrey,

I’d have thought that a true “multiverser” would want to replace all talk of “V” – understood as the universe of [absolutely] all ordinals (and sets, etc.) – with some more benign term, such as some very large, (perhaps maximally) fat, transitive model of ZFC + some very large cardinal axioms.

I was imagining the multiverser saying that there isn’t just one universe, there are a bunch — with some account to tell us what universes there are and what they’re like.  None of them is V.

But, for most mathematical purposes outside the higher reaches of set theory itself, I thought that it wouldn’t matter. Several messages back, reference was made to how a group theorist, for instance, might choose. But couldn’t either view accommodate any new axiom that might possibly matter to a group theorist? That seems to be the case with my modal version of the multiverse, in which the possible structures are, up to isomorphism, linearly ordered by “end-extension”.

This is why I asked Claudio if a potentialist (like you) counts as a multiverser. In practice, it doesn’t seem there’s a lot of difference between your potentialist multiverser and a universer who says: there’s a single fixed universe, but we can’t describe it completely; we have to keep adding more large cardinal axioms. If the algebraist comes to the set theorist in his foundational role and asks a question that turns out to hinge on, say, inaccessibles (as apparently in Wiles’ original proof), you’d say, ‘no problem, what you want lives in this end-extension’, and my universer would say, ‘no problem, there are inaccessibles’.

But I was imagining that Claudio’s multiverse would be more varied than that. So I floated a couple of possibilities:

You might say to the algebraist:  there’s a so-and-so if there’s one in one of the universes of the multiverse.  Or you might say to the universer that her worries are misplaced, that your multiverse view is out to settle on a single preferred theory of sets, it’s just that you don’t think of it as the theory of a single universe; rather, it’s somehow suggested by or extracted from the multiverse.

(Claudio seemed to opt for the second, but ultimately rejected it; I’m not sure what he thinks about the first.)

On the first, for our simple example, the multiverser would presumably say pretty much what your potentialist says:  here, work in this universe with inaccessibles.  For this to work, the multiverser would need to give us a theory of what universes there are.  For your simple height potentialist, perhaps we have this, but the more varied multiverser would owe us such an account.  (I would have asked about that if Claudio had gone for this option.)

Matters get a little harder when the algebraist is after something dicier. Suppose he wants a definable (projective) well-ordering of the reals. My universer might say: well, there isn’t such a thing, but if you restrict yourself to thinking inside L, you can have one there; just be sure that all the other apparatus you need is available there, too. Would your potentialist want to say something like that?

All best,

Pen

PS:  To be honest, I have the uneasy feeling that there’s something off in this way of thinking about the foundational goal, but I don’t know what it is.

Re: Paper and slides on indefiniteness of CH

Dear Claudio and Sy,

1. I was trying to figure out whether the HP aims to come up with a single, accepted theory. I asked whether:

your multiverse view is out to settle on a single preferred theory of sets, it’s just that you don’t think of it as the theory of a single universe; rather, it’s somehow suggested by or extracted from the multiverse.

Claudio replied:

HP implies an irreversible departing from the idea of finding a single, unified body of set-theoretic truths.

Sy replied:

“Unify” plays a huge role in the Hyperuniverse analysis. I called it “synthesis” before. It is only with “Unification” that one gets convergence towards a single theory of truth in the HP.

2. I was trying to figure out whether the hyperuniverse is a collection of ctms inside V:

Pen, in a sense you’re right, the hyperuniverse “lives” within V (I’d rather say that it “originates from” V) and my multiverser surely has a notion of V, as does anybody else working with ZFC.

OK, but now I lose track of the sense in which yours is a multiverse view: there’s V and within V there’s the hyperuniverse (the collection of ctms). Any universer can say as much.

Sy replied:

Yes, but what is new is to use a multiverse as a tool to gain knowledge about V.

Claudio replied:

I wasn’t claiming that the whole hyperuniverse is within V. That is simply impossible, insofar as there are members of H which satisfy CH and others which don’t, some which satisfy IMH and some which don’t, and so on. However, it is always possible (and logically necessary) to see any member of H as living in V. Any multiverser may concede that universes, say mutually differing set-generic models, are in V, but this doesn’t commit her to being a universer.

I’m at a loss.

All best,

Pen