Re: Paper and slides on indefiniteness of CH

Dear Peter,

You scold Sy:

You take it as a given that the principles of Zermelo set theory
follow from (that is, are intrinsically justified on the basis of) the
iterative conception of set. This is highly contested. See, for
example, the classic paper of Charles Parsons on the topic.

Another, earlier, classic paper on the subject is of course Gödel’s “What is Cantor’s continuum problem?” (1947), in which he certainly expresses the view that ZF and more is implicit in the iterative conception of set. I would say that Bill Reinhardt’s paper on reflection principles, etc. (1974) is another earlier example supporting that view. Gödel doesn’t really present much argument, but it seems to me that Reinhardt’s discussion is strongly persuasive.

In the phrase “iterative concept of set” the sneaky word is “iterative” when we are speaking of transfinite iteration. In the presence of the ordinals, there is no problem: iterate a given operation along the ordinals. But, in the case of foundations of set theory, the ordinals are not given: they are elements of the universe being generated. So what is the engine of iteration? Maybe better: what are the engines of iteration?

A reasonable suggestion was Reinhardt’s: given a transitive class X of ordinals, it should be a set if it has some mark that distinguishes it from all of its elements. This leads to the indescribability of the universe: if V_X satisfies \varphi(A) for some formula \varphi and A \subseteq V_X, and \varphi(A \cap V_\alpha) fails for all \alpha \in X, then X is an ordinal.
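Displayed, with my own gloss that the failures are evaluated in the corresponding rank initial segments (this is how I intend it, though other readings are possible):

\[ \text{If } A \subseteq V_X,\ V_X \models \varphi(A), \text{ and } V_\alpha \not\models \varphi(A \cap V_\alpha) \text{ for every } \alpha \in X, \text{ then } X \text{ is a set, hence an ordinal.} \]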

As long as \varphi is first-order, this seems unobjectionable as an engine of iteration—and that is more than enough to found ZF.  I grant that when \varphi is higher order, there are problems (viz. concerning the meaning of quantification over subclasses of X when X might turn out to be V).

The reflection principle behind Gödel’s examples (Mahlos, hyperMahlos, etc.) is even more modest: an operation on \text{Ord} is total on some \alpha.
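One way to read this (only a rough sketch of the intended meaning; the refinements needed to actually reach Gödel’s examples are left aside):

\[ \text{For every operation } F : \text{Ord} \to \text{Ord} \text{ there is an ordinal } \alpha \text{ such that } F''\alpha \subseteq \alpha, \text{ i.e. } F \restriction \alpha \text{ is a total operation on } \alpha. \]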

I think that somewhere in the vast literature in this thread piling up in my email box is the claim by Sy that the existence of cardinals up to Erdős \kappa_{\omega_1} (or something like that) is implicit in the notion of the iterative hierarchy. That does seem to me to be questionable; at least from the “bottom-up” point of view that what can be intrinsically justified are engines for iterating. Sy, as I recall, believes that axioms are intrinsically justified by reference to the universe V that they describe—a top-down point of view. No slander intended, Sy!

Bill

Re: Paper and slides on indefiniteness of CH

Dear Peter,

Dear Sy,

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

Below is what I wrote to Geoffrey about this on 25.September:

Yes, in a nutshell what I am saying is that there are two “equivalent” ways of handling this:

1. You are a potentialist in height but remain actualist in width.

Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).

2. You let loose and adopt a potentialist view for both length and width.

Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

You said that for the purposes of the program one could either use length potentialism + width potentialism or length potentialism + width actualism. The idea, as I understand it, is that on the latter one has enough height to encode “imaginary width extensions”. I take it that you are then using the hyper universe (where one has actual height and width extensions) to model the actual height extensions and virtual width extensions of “V” (“the real thing”).

Is this right? Would you mind spelling out a bit for me why the two approaches are equivalent? In particular, I do not fully understand how you are treating “imaginary width extensions”.

Below is a copy of my explanation of this to Geoffrey on 24.September; please let me know if you have any questions. As Geoffrey pointed out, it is important to put “thickenings” in quotes and/or to distinguish non-standard from standard interpretations of power sets.


 

Dear Geoffrey,

Thanks for your valuable comments. It is nice to hear that you are happy with “lengthenings”; I’ll now try to convince you that there is no problem with “thickenings”, provided they are interpreted correctly. Indeed, you are right, “lengthenings” and “thickenings” are not fully analogous, there are important differences between these two notions, which I can address in a future mail (I don’t want this mail to be too long).

So as the starting point of this discussion, let’s take the view that V can be “lengthened” to a longer universe but cannot be thickened by adding new sets without adding new ordinals.

We can talk about forcing extensions of V, but we regard these as “non-standard”, not part of V. What other “non-standard extensions” of V are there? Surely they are not all just forcing extensions; what are they?

To answer this question it is very helpful to take a detour through a study of countable transitive models of ZFC. OK, I understand that we have dropped V for now, but please bear with me on this, it is instructive.

So let v denote a countable transitive model of ZFC. What “non-standard extensions” does v have? Of course just like V, v has its forcing extensions. A convenient fact is that forcing extensions of v can actually be realised as new countable transitive models of ZFC; these are genuine thickenings of v that exist in the ambient big universe big-V. This is not surprising, as v is so little. But we don’t even have to restrict ourselves to forcing extensions of v, we can talk about arbitrary thickenings of v, namely countable transitive models of ZFC with the same ordinals as v but more sets.

Alright, so far we have our v together with all of its thickenings. Now we bring in Maximality. We ask the following question: How good a job does v do of exhibiting the feature of Maximality? Of course we immediately say: Terrible! v is only countable and therefore can be enlarged in billions of different ways! But we are not so demanding; we are less interested in the fact that v can be enlarged than we are in the question of whether v can be enlarged in such a way as to reveal new properties, new and interesting internal structures, …, things we cannot find if we stay inside v.

I haven’t forgotten that we are still just playing around with countable models, please bear with me a bit longer. OK, so let’s say that v does a decent job of exhibiting Maximality if any first-order property that holds in some thickening of v already holds in some thinning, i.e. inner model, of v. That seems to be a perfectly reasonable demand to make of v if v is to be admitted to the Maximality Club. Please trust me when I say that there are such v‘s, exhibiting this form of Maximality. Good.
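In symbols (my own rendering, reading “first-order property” as a parameter-free first-order sentence):

\[ v \text{ is in the Maximality Club} \iff \text{for every sentence } \varphi: \text{ if } \varphi \text{ holds in some thickening of } v, \text{ then } \varphi \text{ holds in some inner model of } v. \]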

Now here is the next important observation, implicit in Barwise but more explicit in M. Stanley: Let v^+ be a countable transitive model of ZFC which lengthens v. There are v’s in the Maximality Club which have such lengthenings. (Probably this is not a big deal for you, as you believe that V itself should have such a lengthening.) The interesting thing is this: Whether or not a given first-order property holds in a thickening of v is something definable inside v^+. More exactly, there is a logic called “v-logic” which can be formulated inside v and a theory T in that logic whose models are exactly the isomorphic copies of thickenings of v; moreover, whether a first-order statement is consistent with T is definable inside v^+. In summary, the question of whether a first-order property \varphi holds in a thickening of v, a blatantly semantic question, is reduced to a syntactic question which can be answered definably inside v^+: we just ask if \varphi is consistent with the magic theory T. (Yes, this is a Completeness Theorem in the style of Gödel-Barwise.)
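To give a rough picture of what T looks like (only a sketch; I am suppressing the details, which are as in the Barwise/Stanley literature):

  1. The language of v-logic has \in together with a constant \bar{x} for each x \in v and a constant \bar{v} for v itself.
  2. The axioms of T are: ZFC, the atomic diagram of v (i.e. \bar{x} \in \bar{y} whenever x \in y in v), and the statement that the ordinals of the universe are exactly the ordinals of \bar{v}.
  3. Besides the usual rules, v-logic has the infinitary rule: from \varphi(\bar{x}) for every x \in v, infer \forall y\, (y \in \bar{v} \rightarrow \varphi(y)).

Proofs in v-logic are then well-founded (possibly infinite) trees, and this is what makes “\varphi is consistent with T” definable inside a lengthening v^+ of v.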

Very interesting. So if you allow v to be lengthened, not even thickened, you are able to “see” what first-order properties hold in thickenings of v and thereby determine whether or not v belongs to the Maximality Club. This is great news, because now we can throw away our thickenings! We just talk about which first-order properties are consistent with our magic theory T, and this is fully described in v^+, any lengthening of v to a model of ZFC. We don’t need real thickenings anymore; we can just talk about imaginary “thickenings”, i.e. models of T.

Thanks for your patience, now we go back to V. You are OK with lengthenings, so let V^+ be a lengthening of V to another model of ZFC. Now just as for v, there is a magic theory T described in V^+ whose models are the “thickenings” of V, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of V, only “non-standard extensions” of V in the sense to which you referred. In V^+ we can define what it means for a first-order sentence to hold in a “thickening” of V; we just ask if it is consistent with the magic theory T. And finally, we can say that V belongs to the Maximality Club if any first-order sentence which is consistent with T (i.e. holds in a “thickening” of V) also holds in a thinning (i.e. inner model) of V. We have said all of this without thickening V! All we had to do was “lengthen” V to a longer model of ZFC in order to understand what first-order properties can hold in “thickenings” of V.
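In a displayed form (my own compact rendering of the criterion just stated):

\[ V \text{ belongs to the Maximality Club} \iff \text{for every first-order sentence } \varphi: \text{ if } \varphi \text{ is consistent with } T, \text{ then } \varphi \text{ holds in some inner model of } V. \]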

So I hope that this clarifies how the IMH works. You don’t really need to thicken V, but only “thicken” V, i.e. consider models of theories expressible in V^+. These are the “thicker pictures” that I have been talking about. And the IMH just says that V belongs to the Maximality Club in the above sense.


From a foundational and philosophical point of view the two pictures are quite different. On the first, CH does not have a fixed sense in a specific (upward-open-ended) V; instead one must look at CH across various “candidates for V”. On the second, CH does have a fixed sense in a specific (upward-open-ended) V. And, as far as I understand your implementation of the first approach, for every candidate V there is an extension (in width and height) in which that candidate is countable, as in the case of the hyper universe. Is that right?

Yes. Below is what I said to Pen about this on 23.September:

We have many pictures of V. Through a process of comparison we isolate those pictures which best exhibit the feature of Maximality, the “optimal” pictures. Then we have 3 possibilities:

a. CH holds in all of the optimal pictures.

b. CH fails in all of the optimal pictures.

c. Otherwise.

In Case a we have inferred CH from Maximality, in Case b we have inferred -CH from Maximality, and in Case c we come to no definitive conclusion about CH on the basis of Maximality.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You quoted from my message

Assuming V = Ultimate L one can have inner models containing the reals of say MM. But assuming MM one cannot have an inner model containing the reals which satisfies V = Ultimate-L.

and then wrote in your message to Pen:

Perhaps (not to put words in Hugh’s mouth) he is saying that Axiom B is better than Axiom A if models of Axiom B produce inner models of Axiom A but not conversely. Is this a start on how to impose a justifiable preference for one kind of good set theory over another? But maybe I missed the point here, because it seems that one could have “MM together with an inner model of Ultimate-L not containing all of the reals”, in which case that would be an even better synthesis of the 2 axioms! (Here we go again, with maximality and synthesis, this time with first-order axioms, rather than with the set-concept and Hyperuniverse-criteria.)

I was responding to what you wrote in your previous message:

To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But then at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power, many applications and imply not CH. So it seems that if Type 1 truth will ever have a chance of resolving CH one would have to either shoot down Ultimate L, shoot down forcing axioms, or argue that one of these is not “good Set Theory”.

Let me try again. We accept the Axiom of Choice. This has not suspended or impeded the study of AD. The reason of course is that the study has become the study of certain sets of reals that exist within V; i.e. the study of certain inner models of V which contain the reals.

You have implied above that if we accept V = Ultimate L then that is in conflict with the successes of Forcing Axioms. The point that I was trying to make is: not necessarily. One could view the Forcing Axioms as dealing with a fragment of P(\mathbb R); i.e. those sets of reals which belong to inner models of Forcing Axioms etc. It is important to consider inner models here which contain the reals (versus arbitrary inner models) because many of the structures of interest (the Calkin Algebra etc.) are correctly computed by any such inner model.

This methodology covers essentially all the applications of Forcing Axioms to H_{\mathfrak c^+}.

One cannot reverse the roles here of V = Ultimate L and Forcing Axioms because if Forcing Axioms hold (i.e. CH fails) then there can be no inner model of V = Ultimate L which contains the reals (there can be no inner model containing the reals in which CH holds).

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Pen,

Thanks for this! I didn’t know about the Single Truth Convention, and I will respect it from now on. Somehow I thought that, for example, constructivists would claim that there are at least 2 distinct truth notions, but I guess I was wrong about that (maybe you would say that they are working with a different set-concept?).

Of course I knew that the official version of Thin Realism takes both Type 1 (good set theory) and Type 2 (set theory as a foundation) evidence into account; but the fact is that in the present forum with only one exception we’ve been talking exclusively about axioms that are good for set theory, like PD and large cardinals, but not good for other areas of mathematics (the exception is when I brought in forcing axioms). More on this point below.

So there were “terminological” errors in what I said. But correcting those errors leaves the argument unchanged and in fact I will make the argument even stronger in this mail.

There are 3 kinds of evidence for Truth (not 3 kinds of Truth!), emanating from the 3 roles of Set Theory that I indicated: (1) A branch of mathematics, (2) A foundation for mathematics and (3) A study of the set-concept.

Now you will object immediately and say that there is no Type 3 evidence; Type 3 is just an engine (heuristic) for generating new examples of Types 1 and 2. Fine, but it still generates evidence, albeit indirectly! You would simply cleanse that evidence of its source (the maximality of the set-concept) and just regard it as plain good set theory or mathematics. It is hard to imagine (although not to be ruled out) that Type 3 will ever generate anything useful for mathematics outside of set theory, so let’s say that Type 3 provides an indirect way of generating evidence of Type 1. Clearly the way that this evidence is generated is not the usual direct way. In any case even your Thin Realist will take Type 3 generated evidence into account (as long as it entails good set theory).

So up to this point the only difference we have is that I regard Type 3 considerations as more than just an indirect way of generating new Type 1 evidence; I would like to preserve the source of that evidence and say that Type 3 considerations enhance our understanding of the set-concept through the MIC, i.e., they are good for the development of the philosophy of the set-concept. Of course the radical skeptic regards this as pure nonsense, I understand that. But I continue to think that there is more at play here than just good set theory or good math, there is also something valuable in better understanding the concept of set. For now we can just leave that debate aside and regard it just as a polite, collegial disagreement which can safely be ignored.

OK, so we have 3 sources for truth. But there is an important difference between Type 1 vs. Types 2, 3 and this regards the issue of “grounding” for the evidence.

Type 3 evidence (i.e. evidence evoked through Type 3 considerations, as you may prefer to say) is grounded in the maximal iterative conception. The HP is limited to squeezing out consequences of that. Of course there is some wiggle room here, but it is fairly robust to say that some mathematical criterion reflects the Maximality of the universe or synthesises two other such criteria.

Type 2 evidence is grounded in the practice of mathematics outside of set theory. Functional analysts, number theorists, topologists, group theorists, … are not thinking about set theory directly, but axioms of set theory are of course useful for them. You cite AC, which is a perfect example, but in the contemporary setting we can look, for example, at the value of forcing axioms for the combinatorial power they provide for resolving problems in mathematics outside of set theory. Now just as Type 3 evidence is limited to what grounds it, namely considerations regarding the maximality of the set-concept, so is Type 2 evidence limited by what is valuable for the work of mathematicians who don’t have set theory in their heads, who are not thinking about actualism vs. potentialism, reflection principles, HOD, … To see that this is a nontrivial limitation, just note that two of the most discussed axioms of set theory in this forum, PD and large cardinals, appear to have nothing relevant to say about areas of mathematics outside of set theory. In this sense the Type 2 evidence for forcing axioms is overwhelming in comparison to Type 2 evidence for anything else we have been discussing here, including Ultimate-L or what is generated by the HP.

So there is a big gap in what we know about evidence for set-theoretic truth. We have barely scratched the surface with the issue of what new axioms of set theory are good for the development of mathematics outside of set theory. Your great example, the Axiom of Choice, won the day for its value in the development of mathematics outside of set theory, yet for some reason this important point has been forgotten and the focus has been on what is good for the development of just set theory. (This may be the only point on which Angus MacIntyre and I agree: To understand the foundations of mathematics one has to take a close look at mathematics and see what it needs, and not just play around with set theory all the time.)

In my view, Type 1 evidence is very poorly grounded, I would even say not grounded at all. It is evidence that says that some axiom of set theory is good for set theory. That could mean 100 different things. One person says that V = L is true because it is such a strong theory, another that forcing axioms are true because they have great combinatorial strength, Ultimate-L is true for reasons I don’t yet understand, …, not to forget Aczel’s AFA, or constructive set theory, … the list is almost endless. With just Type 1 evidence, we allow set-theorists to run rampant, declaring their own brand of set theory to be particularly “good set theory”. I think this is what I meant earlier when I griped about set-theorists promoting their latest discoveries to the level of evidence for truth.

Pen, can you give Type 1 evidence a better grounding? I’m not sure that I grasped the point of Hugh’s latest mail, but maybe he is hinting at a way of doing this:

“Assuming V = Ultimate L one can have inner models containing the reals of say MM. But assuming MM one cannot have an inner model containing the reals which satisfies V = Ultimate L.”

Perhaps (not to put words in Hugh’s mouth) he is saying that Axiom B is better than Axiom A if models of Axiom B produce inner models of Axiom A but not conversely. Is this a start on how to impose a justifiable preference for one kind of good set theory over another? But maybe I missed the point here, because it seems that one could have “MM together with an inner model of Ultimate-L not containing all of the reals”, in which case that would be an even better synthesis of the 2 axioms! (Here we go again, with maximality and synthesis, this time with first-order axioms, rather than with the set-concept and Hyperuniverse-criteria.)

Anyway, in the absence of a better grounding for Type 1 evidence I am strongly inclined to favour what is well-grounded, namely evidence of Types 2 and 3.

You raised the issue of conflict. It is clear that there can be Type 1 conflicts, i.e. conflicts between different forms of Type 1 evidence, and that’s why I’m asking for a better grounding for Type 1. We don’t know yet if there are Type 2 conflicts, because we don’t know much about Type 2 evidence at all. And the hardest part of the HP is dealing with Type 3 conflicts; surely they arise, but my “synthesis method” is meant to resolve them.

But what about conflicts between evidence of different Types (1, 2 or 3)? The Single Truth Convention makes this tough, I can’t weasel out anymore by simply saying that there are just different forms of Truth (too bad). Nor can I accept simply rejecting evidence of a particular Type (as you “almost” seemed to suggest when you hinted that Type 3 should defer to Types 1 and 2). This is a dilemma. To present a wild scenario, suppose we have:

(Type 1) The axioms that give the “best set theory” imply CH.
(Type 2) The axioms that give the best foundation for mathematics outside of set theory imply not-CH.
(Type 3) The axioms that follow from the maximality of the set-concept imply not-CH.

What do we tell Sol at that point? Does the majority win, 2 out of 3 for not-CH? I don’t think that Sol will be very impressed by that.

Sorry that this mail got so long. As always, I look forward to your reply.

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Classes have begun here and time is tight, so I’ll be brief.

I didn’t know about the Single Truth Convention and I will respect it from now on. Somehow I thought that, for example, constructivists would claim that there are at least 2 distinct truth notions, but I guess I was wrong about that (maybe you would say that they are working with a different set-concept?).

It isn’t a convention; it’s part of Thin Realism as I describe it. Other philosophers might say other things. Please forgive me if I don’t wander into constructivism.

In any case even your Thin Realist will take Type 3 generated evidence into account (as long as it entails good set theory).

Yes, but you may recall that the necessity for good set theoretic/mathematical consequences was one of the main points of contention between us.

Pen, can you give Type 1 evidence a better grounding? I’m not sure that I grasped the point of Hugh’s latest mail, but maybe he is hinting at a way of doing this:

“Assuming V = Ultimate-L one can have inner models containing the reals of say MM. But assuming MM one cannot have an inner model containing the reals which satisfies V = Ultimate-L.”

Perhaps (not to put words in Hugh’s mouth) he is saying that Axiom B is better than Axiom A if models of Axiom B produce inner models of Axiom A but not conversely. Is this a start on how to impose a justifiable preference for one kind of good set theory over another?

In my simple-minded terms, I take Hugh to be arguing that we can preserve the set-theoretic/mathematical benefits of MM even while assuming V=Ult L because MM holds in an inner model containing the reals.  His recent message gives another example:  we can assume AC and still have the benefits of exploring AD because AD is true in L(\mathbb R).  This kind of thinking goes back to the old purported ‘conflict’ between V = L and large cardinals:  you can have your large cardinals and preserve the benefits of V = L in L.

This is a dilemma. To present a wild scenario, suppose we have:

(Type 1) The axioms that give the “best set theory” imply CH.

(Type 2) The axioms that give the best foundation for mathematics outside of set theory imply not-CH.

(Type 3) The axioms that follow from the maximality of the set-concept imply not-CH.

What do we tell Sol at that point? Does the majority win, 2 out of 3 for not-CH?

I don’t know what we’d do then.  I don’t think we can know that without having the actual theories in front of us.

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think it is great that you are getting interested in the philosophy
and foundations of set theory but you really have to do your homework
more carefully.

  1. You take it as a given that the principles of Zermelo set theory
    follow from (that is, are intrinsically justified on the basis of) the
    iterative conception of set. This is highly contested. See, for
    example, the classic paper of Charles Parsons on the topic.
  2. You say that when Pen’s “Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that.”

I think you have misunderstood Pen. Thin Realism is a metaphysical thesis. It has nothing to do at all with whether the justification of an axiom references set theory alone or mathematics more generally. In fact, Pen’s Thin Realist does reference other areas of mathematics!

  3. You go on to talk of three notions of truth in set theory and you say that we should just proceed with all three. This is something that has been discussed at length in the literature on pluralism in mathematics. The point I want to make here is that it requires an argument. You cannot just say: “Let’s proceed with all three!” For comparison, imagine a similar claim with regard to number theory or physics. One can’t just help oneself to relativism. It requires an argument!

For some time now I have wanted to write more concerning your
program. But I still don’t have a grip on the X where X is your view
and at this stage I can only make claims of the form “If your view is
X then Y follows.” Moreover, as the discussion has proceeded my grip on X has actually weakened. And this applies not just to the
philosophical parts of X but also to the mathematical parts of X.

Let’s start with something where we can expect an absolutely clear and unambiguous answer: A mathematical question, namely, the question Hugh asked. Let me repeat it:

What is \textsf{SIMH}^\#(\omega_1)? You wrote in your message of Sept 29:

The \textsf{IMH}^\# is compatible with all large cardinals. So is the \textsf{SIMH}^\#(\omega_1).

It would also be useful to have an answer to the second question I
asked. The version of \textsf{SIMH}^\# you specified in your next message to me
on Sept 29:

The (crude, uncut) \textsf{SIMH}^\# is the statement that V is #-generated and
if a sentence with absolute parameters holds in a cardinal-preserving,
#-generated outer model then it holds in an inner model. It implies a
strong failure of CH but is not known to be consistent.

does not even obviously imply \textsf{IMH}^\#. Perhaps you meant the above together with \textsf{IMH}^\#? Or something else?

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Now here we come to an important distinction that is ignored in discussions of Thin Realism: The Axiom of Choice didn’t get elected to the club because it is beneficial to the development of Set Theory! It got elected only because of its broader value for the development of mathematics outside of Set Theory, for the way it strengthens Set Theory as a foundation of mathematics. It is much more impressive for a statement of Set Theory to be valuable for the foundations of mathematics than it is for it to be valuable for the foundations of just Set Theory itself!

In other words when a Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that. In the case of AC it was much more than that.

If by ‘thin realism’ you mean the view described by me, then this is incorrect. My Thin Realist embraces considerations based on benefits to set theory and to mathematics more generally — and would argue for Choice on the basis of its benefits in both areas.

This has a corresponding effect on discussions of set-theoretic truth. Corresponding to the above 3 roles of Set Theory we have three notions of truth:

  1. True in the sense of Pen’s Thin Realist, i.e. a statement is true because of its importance for producing “good Set Theory”.
  2. True in the sense assigned to AC, i.e., a statement is true based on Set Theory’s role as a foundation of mathematics, i.e. because it is important for the development of areas of mathematics outside of Set Theory.
  3. True in the intrinsic sense, i.e., derivable from the maximal iterative conception of set.

Again, my Thin Realist embraces the considerations in (1) and (2). As for (3), she thinks having an intuitive picture of what we’re talking about is extremely valuable, as a guide to thinking, as a source of new avenues for exploration, etc.  Her reservation about considerations of type (3) is just this:  if there were conflict between type (3) and types (1) and (2), she would change her concept to retain the good mathematics, in set theory and in mathematics more broadly.  (This happened in the case of Choice.)

A more subtle point, quite important to us philosophers, is that Thin Realism doesn’t include a different sort of truth. Truth is truth. Where the Thin Realist differs is in what she thinks set theory is about (the ‘metaphysics’ or ‘ontology’). Because of this, she differs on what she takes to be evidence for truth. So what I really meant in the previous paragraph is this: benefits to set theory and to math are evidence for truth; intrinsic considerations, important as they are, only aid and suggest routes to our accumulation of such evidence.

  1. Pen’s model Thin Realist John Steel will go for Hugh’s Ultimate L axiom, assuming certain hard math gets taken care of.

I don’t know what you intend to be covered by ‘certain hard math’, but I take it a lot has to happen before a Thin Realist thinks we have sufficient evidence to include V = Ultimate L as a new axiom.

As I understand it (I am happy to be corrected), Pen is no fan of Type 3 truth

I hope I’ve now explained my stand on this:  none of these are types of truth; types 1 and 2 are evidence for truth; 3 is of great heuristic value.

I am most pessimistic about Type 1 truth (Thin Realism). To get any useful conclusions here one would not only have to talk about “good Set Theory” but about “the Best Set Theory”, or at least show that all forms of “good Set Theory” reach the same conclusion about something like CH. Can we really expect to ever do that? To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But then at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power, many applications and imply not CH. So it seems that if Type 1 truth will ever have a chance of resolving CH one would have to either shoot down Ultimate-L, shoot down forcing axioms or argue that one of these is not “good Set Theory”. Pen, how do you propose to do that? Forcing axioms are here to stay as “good Set Theory”, they can’t be “shot down”. And even if Ultimate-L dies, there will very likely be something to replace it. Why should we expect this replacement for Ultimate-L to come to the same conclusion about CH that forcing axioms reach (i.e. that CH is false)?

I think it’s simply too soon to try to make any of these judgments.

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Oct 8, 2014, at 6:48 AM, Sy David Friedman wrote:

I am most pessimistic about Type 1 truth (Thin Realism). To get any useful conclusions here one would not only have to talk about “good Set Theory” but about “the Best Set Theory”, or at least show that all forms of “good Set Theory” reach the same conclusion about something like CH. Can we really expect to ever do that? To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But then at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power, many applications and imply not CH. So it seems that if Type 1 truth will ever have a chance of resolving CH one would have to either shoot down Ultimate-L, shoot down forcing axioms or argue that one of these is not “good Set Theory”. Pen, how do you propose to do that? Forcing axioms are here to stay as “good Set Theory”, they can’t be “shot down”. And even if Ultimate-L dies, there will very likely be something to replace it. Why should we expect this replacement for Ultimate-L to come to the same conclusion about CH that forcing axioms reach (i.e. that CH is false)?

I do not see this at all. In fact, not surprisingly, I completely disagree.

If V = Ultimate L (and there are large enough cardinals) then one will have inner models containing the reals in which the forcing axioms hold (including Martin’s Maximum). Thus the theorems of Martin’s Maximum for, say, H_{\mathfrak c^+} all apply to the objects in such inner models.

For example, consider Farah’s result that all automorphisms of the Calkin Algebra are inner automorphisms assuming MM.  Any inner model containing the reals correctly computes the Calkin Algebra, so Farah’s result applies equally well to automorphisms which belong to such inner models.

One also has inner models containing the reals which satisfy the Pmax-axiom, and inner models containing the reals for all of its variations. These axioms are much more powerful at the level of H_{\mathfrak c^+}.

This is completely analogous to the theory of determinacy which flourishes in the Axiom of Choice universe through the study of inner models of AD which contain the reals.

Finally, note that there is a fundamental asymmetry here. Assuming V = Ultimate L one can have inner models containing the reals of say MM. But assuming MM one cannot have an inner model containing the reals which satisfies V = Ultimate L.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Pen, Sol and others,

It occurs to me that some of my disagreements with Pen and Sol could be resolved just by being clear about how the term “Set Theory” is used.

As I see it, Set Theory is three things:

  1. It is a branch of mathematics.
  2. It is a foundation for mathematics.
  3. It is the study of the concept of set.

Regarding 3: It is plain as pie that there is indeed a “concept of set”, familiar to schoolchildren who are victims of the “new math” (Venn diagrams, essentially). Even kids understand basic set-theoretic operations; probably once they are out of short pants they understand what we mean by powerset.

Now take a look at the “standard” axioms of ZFC. Why are they “standard”? It’s because we all seem to feel that they are “essential to Set Theory”. But there are two distinct sources for believing that:

As Boolos clarified in his paper on the iterative conception (IC), the axioms of Zermelo set theory are derivable from the concept of set as expressed by that conception. Replacement is not derivable from the IC, but it easily follows once we invoke Maximality, i.e. we strengthen the IC to the MIC (maximal iterative conception), also part of the concept of set.

As Pen has clearly expressed, the Axiom of Choice is a different matter: It does not follow from the MIC, but it does follow from the role of Set Theory as a foundation for mathematics. She can say this better than I, but the idea is that mathematics did much better once the old restrictive idea of set given by a rule was liberated through AC.

Now here we come to an important distinction that is ignored in discussions of Thin Realism: The Axiom of Choice didn’t get elected to the club because it is beneficial to the development of Set Theory! It got elected only because of its broader value for the development of mathematics outside of Set Theory, for the way it strengthens Set Theory as a foundation of mathematics. It is much more impressive for a statement of Set Theory to be valuable for the foundations of mathematics than it is for it to be valuable for the foundations of just Set Theory itself!

In other words when a Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that. In the case of AC it was much more than that.

This has a corresponding effect on discussions of set-theoretic truth. Corresponding to the above 3 roles of Set Theory we have three notions of truth:

  1. True in the sense of Pen’s Thin Realist, i.e. a statement is true because of its importance for producing “good Set Theory”.
  2. True in the sense assigned to AC, i.e., a statement is true based on Set Theory’s role as a foundation of mathematics, i.e. because it is important for the development of areas of mathematics outside of Set Theory.
  3. True in the intrinsic sense, i.e., derivable from the maximal iterative conception of set.

Examples:

  1. Pen’s model Thin Realist John Steel will go for Hugh’s Ultimate-L axiom, assuming certain hard math gets taken care of. Will he then regard it as “true” based on its importance for producing “good Set Theory”? I assume so. If not, then maybe Pen will have to look for a new Thin Realist.
  2. Examples here are much harder to find! What have axioms beyond ZFC done for areas of math outside of Set Theory? Surely forcing axioms have had some dramatic combinatorial consequences, but large cardinals haven’t yet had a similar impact. Descriptive Set Theory has had recent and major implications for functional analysis, but the DST being used is just part of good old ZFC. To understand this situation better I think it’s time for set-theorists to stop being so self-centered and to take a close look at independence outside of set theory, with the aim of seeing which axioms beyond ZFC are the most fruitful for resolving those cases of independence (I’m happy to lead the charge!).
  3. Small large cardinals come easily out of the MIC. Precisely what I am doing with the HP is to derive further consequences. Maybe the negation of CH! Work in progress.

Now I see absolutely no argument for rejecting any of these three notions of Truth in Set Theory. Nor do I see an argument that they should reach common conclusions! Maybe you’ll find this to be excessively diplomatic, taking the heat and excitement out of the Great Set Theory Truth Debate, but I’m sure that even if we agree to this proposed Grand Truce, we’ll still find interesting things to argue about.

As I understand it (I am happy to be corrected), Pen is no fan of Type 3 truth and Sol is no fan of Type 1 truth. OK, I have nothing against aesthetic preferences. But to say that an answer to the Continuum Problem based on one of these three takes on Truth is “illegitimate” is going too far. If someone is going to say that CH is true (or false) then she has to say what notion of Truth is being referenced. Indeed, maybe CH is Type 2 true but Type 3 false!

In any case, it is clearly very hard (but in my view possible) to come to conclusions about what is true in any of these senses. As I have emphasized in the HP (Type 3 truth), for me to make a verdict about CH I will have to first produce “optimal” maximality criteria and show that CH is decided in the same way by those criteria. That is very hard work. For Type 2 truth one would similarly have to show that the statements of Set Theory which are most fruitful for the further development of Set Theory as a foundation for mathematics converge on a theory which settles CH. We have barely begun an investigation of the class of such statements!

I am most pessimistic about Type 1 truth (Thin Realism). To get any useful conclusions here one would not only have to talk about “good Set Theory” but about “the Best Set Theory”, or at least show that all forms of “good Set Theory” reach the same conclusion about something like CH. Can we really expect to ever do that? To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But then at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power, many applications and imply not CH. So it seems that if Type 1 truth will ever have a chance of resolving CH one would have to either shoot down Ultimate L, shoot down forcing axioms or argue that one of these is not “good Set Theory”. Pen, how do you propose to do that? Forcing axioms are here to stay as “good Set Theory”, they can’t be “shot down”. And even if Ultimate L dies, there will very likely be something to replace it. Why should we expect this replacement for Ultimate L to come to the same conclusion about CH that forcing axioms reach (i.e. that CH is false)?

Nevertheless, as a stubborn optimist I do still expect that at least one of these forms of truth will generate some useful conclusions. But I have given up on the idea that there is a unique, supreme notion of truth in Set Theory that overrides all others; there are at least three distinct and legitimate forms to be taken seriously (despite my pessimism about Thin Realism). And maybe there is even yet another form of set-theoretic truth that I have overlooked.

Best to all,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen, Geoffrey, Hugh and others,

As discussed, the aim of the HP is to derive consequences of the maximality feature of the set concept via mathematical criteria of maximality which can be formulated in a way which is consistent with width actualism and which can be analysed through a mathematical analysis carried out within the Hyperuniverse.

With this background, the real work of the programme consists of the formulation, analysis and synthesis of such criteria for the selection of those countable transitive models of ZFC which are optimal in terms of their maximality properties.

As a guide to the choice of criteria I’d like to tentatively suggest (yes, this is subject to change as we learn more about Maximality) the following three-step

Maximality Protocol

  1. Impose Height-Maximality via the criterion of #-generation.
  2. Impose Cardinal-Maximality (the class of cardinals should be as thin as possible): A tentative criterion for this is that for any infinite ordinal \alpha and subset x of \alpha, (\alpha^+)^{\text{HOD}_x} < \alpha^+. [\text{OD}_x consists of those sets which are definable with x and ordinals as parameters; a set is in \text{HOD}_x if it and all elements of its transitive closure belong to \text{OD}_x.]
  3. Impose Width-Maximality via the criterion \textsf{SIMH}^\# (Levy absoluteness with absolute parameters for cardinal-preserving, #-generated outer models); see the displayed schema after this list.
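Spelled out (this is just the crude, uncut form quoted earlier in this thread; the official formulation may well change):

\[ \textsf{SIMH}^\#: V \text{ is } \#\text{-generated, and if a sentence with absolute parameters holds in some cardinal-preserving, } \#\text{-generated outer model of } V, \text{ then it holds in some inner model of } V. \]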

I don’t know if Cardinal-Maximality as formulated above is consistent. Cummings, Golshani and I showed that one can consistently get (\alpha^+)^{\text{HOD}} < \alpha^+ for all infinite cardinals \alpha, and this is good evidence for the consistency of Cardinal-Maximality, but it is significantly weaker. And as already said, I don’t know if the \textsf{SIMH}^\# is consistent.

As I see it, there are two options. Either the Maximality Protocol can be successfully implemented (i.e. relative to large cardinals there are #-generated, cardinal-maximal models obeying the \textsf{SIMH}^\#) and this will be strong evidence that the negation of CH is derivable from Maximality. Or there will be new inconsistency arguments which will tell us a lot about the nature of Maximality in set theory and lead to new, compelling and consistent criteria.

As always I welcome your thoughts.

Best,
Sy