Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.


1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdos, (\omega+\omega)-Erdos, …, #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts, …, IMH(\omega-Erdos) violates (\omega+\omega)-Erdos, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals and, as an extra bonus, with all large cardinals. It is a rather weak criterion, but it becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1-preserving outer models).
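The chain just described can be put schematically as follows (this is only a compact restatement of the claims above, in the notation already used in this thread, not a new result):

```latex
% Each strengthening of the IMH kills the next small large cardinal;
% the limit of the chain is IMH#.
\begin{array}{rcl}
\textsf{IMH} & \Rightarrow & \text{there are no inaccessibles},\\
\textsf{IMH}(\text{inaccessibles}) & \Rightarrow & \text{there are no Mahlos},\\
\textsf{IMH}(\text{Mahlos}) & \Rightarrow & \text{there are no weak compacts},\\
 & \vdots & \\
\textsf{IMH}(\omega\text{-Erd\H{o}s}) & \Rightarrow & \text{there are no } (\omega+\omega)\text{-Erd\H{o}s cardinals},\\
 & \vdots &
\end{array}
```

with \textsf{IMH}^\#, the limit of the chain, compatible with all small large cardinals and indeed with all large cardinals.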

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded”, as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded”, as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP, and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true and this is a key point: They are expressible in a first-order way over Gödel lengthenings of M. (By Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!) and the only reason that Maximality Criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second-order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height maximal universe M is countable in its Gödel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.


1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
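To summarise the preceding paragraph in display form (a restatement of what is said above, not a new claim):

```latex
% Weak #-generation for a countable transitive model M:
M \text{ is weakly } \#\text{-generated} \iff
  \text{for each countable } \alpha,\
  M \text{ is generated by an } \alpha\text{-iterable presharp}.
```

The point is that for a real coding a countable V, full #-generation is \Sigma^1_3, whereas weak #-generation is \Pi^1_2, and it is the latter complexity that allows the Löwenheim-Skolem argument to apply to \textsf{IMH}^\#.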

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s: there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 24 Oct 2014, W Hugh Woodin wrote:

Dear Sy,

You wrote to Pen:

But to turn to your second comment above: We already know why CH doesn’t have a determinate truth value, it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory? (Confession: I have to credit this e-mail discussion for helping me reach that conclusion; recall that I started by telling Sol that the HP might give a definitive refutation of CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!)

ZF + AD will always generate “good set theory”…   Probably also V=L…

This seems like a rather dubious basis for the indeterminateness of a problem.

I guess we have something else to put on our list of items we simply have to agree we disagree about.

What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements? I read “Defending the Axioms” and am convinced by Pen’s Thin Realism when it comes to such evidence coming either from set theory as a branch of mathematics or as a foundation of mathematics. On this basis, CH cannot be established unless a definitive case is made that it is necessary for a “good set theory” or for a “good foundation for mathematics”. It is quite clear that there never will be a case that we need CH (or not-CH) for “good set theory”. I’m less sure about its necessity for a “good foundation”; we haven’t looked at that yet.

We need ZF for good set theory and we need AC for a good foundation. That’s why we can say that the axioms of ZFC are true.

On the other hand if you only regard evidence derived from the maximality of V as worthy of consideration then you should get the negation of CH. But so what? Why should that be the only legitimate relevant evidence regarding the truth value of CH? That’s why I no longer claim that the HP will solve the continuum problem (something I claimed at the start of this thread, my apologies). But nor will anything like Ultimate L, for the reasons above.

I can agree to disagree provided you tell me on what basis you conclude that statements of set theory are true.


Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think it is great that you are getting interested in the philosophy
and foundations of set theory but you really have to do your homework
more carefully.

  1. You take it as a given that the principles of Zermelo set theory
    follow from (that is, are intrinsically justified on the basis of) the
    iterative conception of set. This is highly contested. See, for
    example, the classic paper of Charles Parsons on the topic.
  2. You say that when Pen’s “Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that.”

I think you have misunderstood Pen. Thin Realism is a metaphysical thesis. It has nothing to do at all with whether the justification of an axiom references set theory alone or mathematics more generally. In fact, Pen’s Thin Realist does reference other areas of mathematics!

  3. You go on to talk of three notions of truth in set theory and you
    say that we should just proceed with all three. This is something that has been discussed at length in the literature on pluralism in mathematics. The point I want to make here is that it requires an argument. You cannot just say: “Let’s proceed with all three!” For
    comparison, imagine a similar claim with regard to number theory or physics. One can’t just help oneself to relativism. It requires an argument!

For some time now I have wanted to write more concerning your
program. But I still don’t have a grip on the X where X is your view
and at this stage I can only make claims of the form “If your view is
X then Y follows.” Moreover, as the discussion has proceeded my grip on X has actually weakened. And this applies not just to the
philosophical parts of X but also to the mathematical parts of X.

Let’s start with something where we can expect an absolutely clear and unambiguous answer: A mathematical question, namely, the question Hugh asked. Let me repeat it:

What is \textsf{SIMH}^\#(\omega_1)? You wrote in your message of Sept 29:

The IMH# is compatible with all large cardinals. So is the \textsf{SIMH}^\#(\omega_1).

It would also be useful to have an answer to the second question I
asked. The version of \textsf{SIMH}^\# you specified in your next message to me
on Sept 29:

The (crude, uncut) \textsf{SIMH}^\# is the statement that V is #-generated and
if a sentence with absolute parameters holds in a cardinal-preserving,
#-generated outer model then it holds in an inner model. It implies a
strong failure of CH but is not known to be consistent.

does not even obviously imply \textsf{IMH}^\#. Perhaps you meant, the above
together with \textsf{IMH}^\#? Or something else?


Re: Paper and slides on indefiniteness of CH

Dear Sy,

Now here we come to an important distinction that is ignored in discussions of Thin Realism: The Axiom of Choice didn’t get elected to the club because it is beneficial to the development of Set Theory! It got elected only because of its broader value for the development of mathematics outside of Set Theory, for the way it strengthens Set Theory as a foundation of mathematics. It is much more impressive for a statement of Set Theory to be valuable for the foundations of mathematics than it is for it to be valuable for the foundations of just Set Theory itself!

In other words when a Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that. In the case of AC it was much more than that.

If by ‘thin realism’, you mean the view described by me, then this is incorrect.  My Thin Realist embraces considerations based on benefits to set theory and to mathematics more generally — and would argue for Choice on the basis of its benefits in both areas.

This has a corresponding effect on discussions of set-theoretic truth. Corresponding to the above 3 roles of Set Theory we have three notions of truth:

  1. True in the sense of Pen’s Thin Realist, i.e. a statement is true because of its importance for producing “good Set Theory”.
  2. True in the sense assigned to AC, i.e., a statement is true based on Set Theory’s role as a foundation of mathematics, i.e. because it is important for the development of areas of mathematics outside of Set Theory.
  3. True in the intrinsic sense, i.e., derivable from the maximal iterative conception of set.

Again, my Thin Realist embraces the considerations in (1) and (2). As for (3), she thinks having an intuitive picture of what we’re talking about is extremely valuable, as a guide to thinking, as a source of new avenues for exploration, etc.  Her reservation about considerations of type (3) is just this:  if there were conflict between type (3) and types (1) and (2), she would change her concept to retain the good mathematics, in set theory and in mathematics more broadly.  (This happened in the case of Choice.)

A more subtle point, quite important to us philosophers, is that Thin Realism doesn’t include a different sort of truth.  Truth is truth. Where the Thin Realist differs is in what she thinks set theory is about (the ‘metaphysics’ or ‘ontology’).  Because of this, she differs on what she takes to be evidence for truth.  So what I really meant in the previous paragraph is this:  benefits to set theory and to math are evidence for truth; intrinsic considerations, important as they are, only aid and suggest routes to our accumulation of such evidence.

  1. Pen’s model Thin Realist John Steel will go for Hugh’s Ultimate L axiom, assuming certain hard math gets taken care of.

I don’t know what you intend to be covered by ‘certain hard math’, but I take it a lot has to happen before a Thin Realist thinks we have sufficient evidence to include V=Ultimate L as a new axiom.

As I understand it (I am happy to be corrected), Pen is no fan of Type 3 truth

I hope I’ve now explained my stand on this:  none of these are types of truth; types 1 and 2 are evidence for truth; 3 is of great heuristic value.

I am most pessimistic about Type 1 truth (Thin Realism). To get any useful conclusions here one would not only have to talk about “good Set Theory” but about “the Best Set Theory”, or at least show that all forms of “good Set Theory” reach the same conclusion about something like CH. Can we really expect ever to do that? To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power and many applications, and which imply not-CH. So it seems that if Type 1 truth is ever to have a chance of resolving CH, one would have to either shoot down Ultimate-L, shoot down forcing axioms, or argue that one of these is not “good Set Theory”. Pen, how do you propose to do that? Forcing axioms are here to stay as “good Set Theory”; they can’t be “shot down”. And even if Ultimate-L dies, there will very likely be something to replace it. Why should we expect this replacement for Ultimate-L to come to the same conclusion about CH that forcing axioms reach (i.e. that CH is false)?

I think it’s simply too soon to try to make any of these judgments.

All best,