Tag Archives: Good set theory

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection]
….of Thin Realism” and my “unhesitating rejection of approaches to
set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples:
Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about.  (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and
especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. It contains both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest: I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter

Chiemsee_1 Chiemsee_2

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You wrote:

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

” Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

What Peter wrote is this:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here — for example, there was evidence for the existence of sets in the successes of Cantor and Dedekind, and more recently for PD, in the results cited by Peter, Hugh, John Steel, and others.  (Using the term ‘good set theory’ allows me to leave open the question of Thin Realism vs. Arealism.  For the Arealist, these same considerations are just reasons to add sets or PD to our mathematics/set theory, but the Thin Realist sees them as evidence for existence and truth.)

This doesn’t preclude people disagreeing about what parts of set theory they believe to be more interesting, important, promising, etc.  (Scientists also disagree on such matters.)

At the present juncture, it’s more difficult to find and assess new evidence, but that’s to be expected.  Peter and Hugh have made it clear, I think, that they regard many of the current options as open (including HP, when it begins to generate definite claims), that more information is needed. If one theory eventually ‘swamps’ the rest (I should have noted that ‘swamping’ often involves ‘subsuming’), then the apparently contrary evidence will have to be faced and explained.  (Einstein had to explain why there was so much evidence in support of Newton.)

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.

Peter:

1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdos, (\omega+\omega)-Erdos, … #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts … IMH(\omega-Erdos) violates (\omega+\omega)-Erdos, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals, and as an extra bonus, with all large cardinals. It is a rather weak criterion, but becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1 preserving outer models).
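Schematically, the chain just described can be displayed as follows (the arrow, read "is succeeded in the hierarchy by", is a rendering for readability rather than notation from the thread):

```latex
% Each IMH-variant in the chain refutes exactly the next small large
% cardinal; IMH^# is the limit of the chain, compatible with all of them.
\textsf{IMH} \longrightarrow \textsf{IMH}(\text{inaccessibles})
\longrightarrow \textsf{IMH}(\text{Mahlos}) \longrightarrow \cdots
\longrightarrow \textsf{IMH}(\omega\text{-Erdos}) \longrightarrow \cdots
\qquad \text{with limit} \qquad \textsf{IMH}^{\#}
```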

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true and this is a key point: They are expressible in a first-order way over Goedel lengthenings of M. (By Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!) and the only reason that Maximality Criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height maximal universe M is countable in its Goedel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.
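The “special form” at issue might be rendered as follows (one possible formalization, offered only as a paraphrase of the passage above, not a formulation from the thread):

```latex
% A maximality criterion for M must be first-order over the Goedel
% lengthenings L_alpha(M) of M, e.g. of the shape
\varphi(M) \;:\iff\; L_\alpha(M) \models \psi
\quad \text{for some ordinal } \alpha \text{ and fixed first-order } \psi,
% whereas an arbitrary second-order property of M over the Hyperuniverse H
% (e.g. "M is countable") need not be of this special shape.
```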

So I was too honest: I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
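The complexity shift described above can be summarized as follows (a paraphrase of the definitions in the text; “presharp” and iterability are as used there):

```latex
% Raw #-generation of a countable M (coded by a real): an existential
% quantifier over a fully iterable presharp -- a Sigma^1_3 property.
M \text{ is } \#\text{-generated} \;:\iff\;
  \exists N \, (N \text{ is a fully iterable presharp generating } M)

% Weak #-generation: only countable iterability is demanded, giving a
% Pi^1_2 property, to which Loewenheim-Skolem can be applied.
M \text{ is weakly } \#\text{-generated} \;:\iff\;
  \forall \alpha < \omega_1 \, \exists N \,
  (N \text{ is an } \alpha\text{-iterable presharp generating } M)
```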

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

To repeat: I am not out to kill any particular axiom of set theory! I just want to take an unbiased look at what comes out of Maximality Criteria. It is far too early to conclude from the HP that extendibles don’t exist.

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

We haven’t discussed Hugh’s Ultimate L program much. There are two big differences between this program and CTMP (aka HP). As I understand it,

1. It offers a proposed preferred set theoretic universe in which it is clear that CH holds – but the question of its existence relative to large cardinals (or relative consistency) is a (or the) major open question in the program.

2. In connection with 1, there are open conjectures (formulated in ZFC) which show how to refute Reinhardt’s axiom (the existence of j:V \to V) within ZF (and more).

So even if one rejects 1, this effort will leave us at least with 2, which is nearly universally regarded as important in the set theory community.

It would be nice for most people on this thread to have a generally understandable account of at least the structure of 1. I know that there has been some formulations already on the thread, but they are a while ago, and relatively technical. So let me ask some leading questions.

Can this Ultimate L proposal be presented in the following generally understandable shape?

Goedel’s constructible sets, going by the name of L, are built up along the ordinals in a very well defined way. This allows all of the usual set theoretic problems like CH to become nice mathematical problems, when formulated within the set theoretic universe of constructible sets, L. Thanks to Goedel, Jensen, and others, all of these problems have been settled as L problems. (L is the original so called inner model).

Dana Scott showed that L cannot accommodate measurable cardinals. There is an incompatibility.

Jack Silver showed that L can be extended to accommodate measurable cardinals. He worked out L[U], where U stands for a suitable measure on a measurable cardinal. The construction is somewhat analogous to the original Goedel’s L. Also all of the usual set theoretic problems like CH are settled in L[U].

This Gödel-Silver program (you don’t usually see that name though) has been lifted to considerably stronger large cardinals, with the same outcome. The name you usually see is “the inner model program”. The program slowed down to a trickle, and is stalled at some medium large cardinals considerably stronger than measurable cardinals, but very much weaker than – well it’s a bit technical and I’ll let others fill in the blanks here.

“Inner model theory for a large cardinal” became a reasonably understood notion at an informal or semiformal level. And some good test questions emerged that seem to be solvable only by finding an appropriate “inner model theory” for some large cardinals.

So I think this sets the stage for a generally understandable or almost generally understandable discussion of what Hugh is aiming to do.

Perhaps Hugh has picked out some important essential features of what properties the inner models so far have had, adds to them some additional desirable features, and either conjectures or proves that there is a largest such inner model – if there is any such inner model at all. I am hoping that this is screwed up only a limited amount, and the accurate story can be given in roughly these terms, black boxing the important details.

There are also a lot of important issues that we have only touched on in this thread that I think we should return to. Here is a partial list.

1. Sol maintains that there is a crucial difference between (\mathbb N,+,\times) and (P(\mathbb N),\mathbb N,\in,+,\times) that drives an enormous difference in the status of first order sentences. Whereas Peter for sure, and probably Pen, Hugh, Geoffrey strongly deny this. I think that Sol’s position is stronger on this, but I am interested in playing both sides of the fence on this. In particular, one enormous difference between the two structures that is mathematically striking is that the first is finitely generated (even 1-generated), whereas the second is not even countably generated. Of course, one can argue both for and against that this indisputable fact does or does not inform us about the status of first order sentences. Peter has written on the thread that he has refuted Sol’s arguments in this connection, and Sol denies that Peter has refuted Sol’s arguments in this connection. Needs to be carefully and interactively discussed, even though there has been published stuff on this.

2. The idea of “good set theory” has been crucial to the entire thread here. Obviously, there is the question of what is good set theory. But even more basic is this: I don’t actually hear or see much done at all in higher set theory other than studies of models of higher set theory. By higher set theory I mean more or less set theory except for DST = descriptive set theory. See, DST operates just like any normal mathematical area. DST does not study models of DST, or models of any set theory. DST basically works with Borel and sometimes analytic sets and functions, and applies these notions to shed light on a variety of situations in more or less core mathematics. E.g., ergodic theory, group actions, and the like. Higher set theory operates quite differently. It’s almost entirely wrapped up in metamathematical considerations. Now maybe there is a point of view that says I am wrong and if you look at it right, higher set theorists are simply pursuing a normal mathematical agenda – the study of sets. I don’t see this, unless the normal mathematical area is supposed to be “the study of models of higher set theory”. Perhaps people might want to interpret working out what can be proved from forcing axioms? Well, I’m not sure this is similar to the situation in a normal area of mathematics like DST. So my point is: judging new axioms for set theory on the basis of “good set theory” or “bad set theory” doesn’t quite match the situation on the ground, as I see it.

3. In fact, the whole enterprise of higher set theory has so many features that are so radically different from the rest of mathematics, that the whole enterprise, to my mind, should come into serious question. Now I want to warn you that I am both a) incredibly enthusiastic about the future of higher set theory, and b) incredibly dismissive about any future of higher set theory whatsoever — all at the same time. This is because a) is based on certain special aspects of higher set theory, whereas b) is based on the remaining aspects of higher set theory. So when you see me talking from both sides of my mouth, you won’t be shocked.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Peter,

My apologies for the actualism/potentialism confusion! The situation is this: We have been throwing around 3 views:

1. Actualism in height and width (Neil Barton?)
2. Actualism only in width (Pen and Geoffrey?)
3. Actualism in neither.

Now the problem with me is that I have endorsed both 2 and 3 at different times! What I have been trying to say is that the choice between 2 and 3 does not matter for the HP, the programme can be presented from either point of view without any change in the mathematics. In 3 the universes to which V is compared actually are there, as part of the background multiverse (an extreme multiverse view) and in 2 you can only talk about them with “quotes”, yet the question of what is true in them is internal to (a mild lengthening of) V.

I have been a chameleon on this: My personal view is 3, but since no one shares that view I have offered to adopt view 2, to avoid a philosophical debate which has no practical relevance for the HP.

It is similar with the use of countable models! Starting with view 2 I argue that the comparisons that are made of V with other “universes” (in quotes) could equally well be done by replacing V by a ctm and removing the quotes. But again, this is not necessary for the programme, as one could simply refuse to do that and awkwardly work with quoted “universes” all of the time. I don’t understand why anyone would want to do such an awkward thing, but I am willing to play along and sadly retitle the programme the MP (Maximality Programme) instead of the Hyperuniverse Programme, as now the countable models play no role anymore. In this way the MP is separated from the study of countable transitive models altogether.

In summary: There is some math going on in the HP which is robust under changes of interpretation of the programme. My favourite interpretation would be View 3 above, but I have settled on View 2 to make people happy, and am even willing to drop the reduction to countable models to make even more people happy.

I am an extreme potentialist who is willing to behave like a width actualist.

The mathematical dust has largely settled (as far as the program as it currently stands is concerned), thanks to Hugh’s contributions.

What? There is plenty of unsettled mathematical dust out there, not just with the future development of the HP but also with the current discussion of it. See my mail of 25 October to Pen, for example. What do we say about the likelihood that maximality of V with respect to HOD contradicts large cardinal existence? Even if the HP leads to the failure of supercompacts to exist, can one at least get PD out of the HP, and if so, how?

More broadly, a lot remains unanswered in this discussion regarding Type 1 evidence (for “good set theory”): If \text{AD}^{L(\mathbb R)} is parasitic on \text{AD}, how does one argue that it is a good choice of theory? When we climb the interpretability hierarchy, should we drop AC in our choice of theories and instead talk about what happens in inner models, as in the case of AD? Similarly, why is large cardinal existence in V preferred over LC existence in inner models? Are Reinhardt cardinals relevant to these questions? And with regard to Ultimate L: What theory of truth is to be used when assessing its merits? Is it just Thin Realism, and if so, what is the argument that it yields “the best set theory” (“whose virtues swamp all the others”, as Pen would say) and if not, is there something analogous to the HP analysis of maximality from which Ultimate L could be derived?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You wrote to Pen:

But to turn to your second comment above: We already know why CH doesn’t have a determinate truth value, it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory? (Confession: I have to credit this e-mail discussion for helping me reach that conclusion; recall that I started by telling Sol that the HP might give a definitive refutation of CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!)

ZF + AD will always generate “good set theory”… Probably also V = L…

This seems like a rather dubious basis for the indeterminateness of a problem.

I guess we have something else to put on our list of items we simply have to agree we disagree about.

So the best one can do with a problem like CH is to say: “Based on a certain Type of evidence, the truth value of CH is such and such.” As said above, Type 1 evidence (the development of set theory as an area of mathematics) will never yield a fixed truth value, we don’t know yet about Type 2 evidence (ST as a foundation) and I still conjecture that Type 3 evidence (based on the Maximality of the universe of sets in height and width) will imply that CH is false.

There will never be such a resolution of CH (for the reasons I gave above). The best one can do is to give a widely persuasive argument that CH (or not-CH) is needed for the foundations of mathematics or that CH (or not-CH) follows from the Maximality of the set-concept. But I would not expect either achievement to draw great acclaim, as nearly all set-theorists care only about the mathematical development of set theory and CH is not a mathematical problem.

This whole discussion about CH is of interest only to philosophers and a handful of philosophically-minded mathematicians. To find the leading open questions in set theory, one has to instead stay closer to what set-theorists are doing. For example: Provably in ZFC, is V generic over an inner model which satisfies GCH?

Why is this last question a leading question? If there is an inner model with a measurable Woodin cardinal then it is true: V is a (class) generic extension of an inner model of GCH.

You must mean something else. Focusing on eliminating the assumption that there is an inner model with a measurable Woodin cardinal seems like a rather technical problem.
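For the reader, the fact Hugh cites can be displayed as an implication; this restates only what is said above, and the precise class forcing witnessing genericity is not spelled out in this thread:

```latex
% Hugh's cited fact: an inner model with a measurable Woodin cardinal
% yields that V is a class-generic extension of an inner model of GCH.
\[
\exists\, M \subseteq V\ \bigl(M \models \mathrm{ZFC} +
  \text{``there is a measurable Woodin cardinal''}\bigr)
\;\Longrightarrow\;
\exists\, N \subseteq V\ \bigl(N \models \mathrm{ZFC} + \mathrm{GCH}
  \text{ and } V \text{ is a class-generic extension of } N\bigr).
\]
```

Sy’s question then asks whether the antecedent can be dropped, i.e. whether the conclusion is provable in ZFC alone.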

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Mon, 20 Oct 2014, Penelope Maddy wrote:

It seems to me disingenuous to suggest that resolving CH, and devising a full account of sets of reals more generally, is not one of the goals of set theory — indeed a contemporary goal with strong roots in the history of the subject.

Good luck selling that to the ST public. This is interesting to you, me and many others in this thread, but very few set-theorists think it’s worth spending much time on it, let’s not deceive ourselves. They are focused on “real set theory”, mathematical developments, and don’t take these philosophical discussions very seriously. …  Resolving CH was certainly never my goal; I got into the HP to better understand large cardinals and internal consistency, with no particular focus on CH. … It would be interesting to ask other set-theorists (not Hugh or me) what the goals of set theory are; I think you might be very surprised by what you hear, and also surprised by your failure to hear “solve CH”.

The goal I mentioned was resolving CH as part of a full theory of sets of reals more generally. I said ‘resolving’ to leave open the possibility that the ‘resolution’ will be an understanding of why CH doesn’t have a determinate truth value after all (e.g., a multiverse resolution).

I’m not sure I understand what you mean by “a full theory of sets of reals”, but I presume you mean a theory with “practical completeness”, meaning that it resolves all of the interesting questions about sets of reals? You seem to imply that we already have such a theory for sets of integers; I am not even convinced of that!

But to turn to your second comment above: We already know why CH doesn’t have a determinate truth value, it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory? (Confession: I have to credit this e-mail discussion for helping me reach that conclusion; recall that I started by telling Sol that the HP might give a definitive refutation of CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!)

So the best one can do with a problem like CH is to say: “Based on a certain Type of evidence, the truth value of CH is such and such.” As said above, Type 1 evidence (the development of set theory as an area of mathematics) will never yield a fixed truth value, we don’t know yet about Type 2 evidence (ST as a foundation) and I still conjecture that Type 3 evidence (based on the Maximality of the universe of sets in height and width) will imply that CH is false.

It’s not a matter of how many people are actively engaged in the project: there might be lots of perfectly good reasons why most set theorists aren’t (because there are other exciting new projects and goals, because CH has been around for a long time and looks extremely hard to crack, etc.). I would ask you this: is CH one of the leading open questions of set theory?

No! The main reason is that, as Sol has pointed out, it is not a mathematical problem but a logical one. The leading open questions of set theory are mathematical.

Is it the sort of thing that would draw great acclaim if someone were to come up with a widely persuasive ‘resolution’?

There will never be such a resolution of CH (for the reasons I gave above). The best one can do is to give a widely persuasive argument that CH (or not-CH) is needed for the foundations of mathematics or that CH (or not-CH) follows from the Maximality of the set-concept. But I would not expect either achievement to draw great acclaim, as nearly all set-theorists care only about the mathematical development of set theory and CH is not a mathematical problem.

This whole discussion about CH is of interest only to philosophers and a handful of philosophically-minded mathematicians. To find the leading open questions in set theory, one has to instead stay closer to what set-theorists are doing. For example: Provably in ZFC, is V generic over an inner model which satisfies GCH?

Best,
Sy