Tag Archives: Potential countability

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think we are approaching the point where a summary of this discussion is in order. The mathematical dust has largely settled — as far as the program as it currently stands is concerned — thanks to Hugh’s contributions. But there is one major remaining matter of a more philosophical nature that is still unclear to me — it has to do with my original question of whether you are an actualist or a potentialist, and ultimately with the picture that forms the backdrop of your program. To get clear on this matter I will have to recapitulate a good part of this discussion. Please bear with me.

In response to my original question, on Sept. 14 you wrote:

I am a radical potentialist, indeed you might say a Skolem-worshipper! (Remember what Pen quoted from my article with Tatiana: “V is a product of our own”!) Indeed my view is that there is no real V, but instead a huge wealth of different “pictures of V”. Given any such picture P of V, let V_P denote the universe depicted by P; there are pictures P* of V such that V_P is a rank-initial segment of V_{P^*}, a proper inner model of V_{P^*}, or (here’s the radical Skolem-worshipping) even countable in V_{P^*}! So yes, it is a given for me that you can lengthen or thicken a picture of V, in fact you can make it countable!

Now here is a possible source of confusion: Sometimes one fixes an initial picture P and corresponding universe V_P as a reference picture; in that case one can talk about cardinality, in reference to V_P. But of course V_P itself is countable from the perspective of a bigger V_{P^*}, so there is no absolute notion of “countable”, only a relativised one.

In response to this — with focus on the line “there is no absolute notion of “countable”, only a relativized one” — on Sept. 15 I responded:

I thought: “He can’t mean this! Look. If everything is countable from the perspective of an enlargement and if enlargements always exist then everything is countable. (Of course it can fail to be countable from a local perspective — when one has one’s blinkers on and looks no further — but given the tenet that it is countable from a higher perspective it follows that it is ultimately countable.) But if everything is countable — or if “there is no absolute notion of `countable’, only a relativized one” — then how can he be understanding CH? This whole exchange was sparked by the presentation of a new and promising approach to questions like CH, one that promised to reinvigorate “intrinsic justifications” to the point where they could touch questions like CH. But now it seems that on this approach the straightforward sense of CH has evaporated. Indeed it seems that set theory has evaporated!”

Your view, as described above, is indeed like that of Skolem. But Skolem (rightly) took this view to involve a rejection of set theory. And yet you don’t. You seem to want to have it both ways: reject an absolute notion of countability and say something about CH (beyond that it has no meaning).

This got me greatly confused.

But then in the outline of the HP program that you sent on the same day things changed. For in that outline you speak of mental pictures of “the universe V of all sets” and you write: “But although we can form mental pictures of other universes, the only such universes we can actually produce are wholly contained within V, simply because V by its very definition contains all sets.” So now you appear to be an actualist and not a potentialist at all. (Of course you are a potentialist with regard to the little V’s — the countable transitive models of ZFC — but we are all potentialists with regard to those, trivially.)

So, which is it: Are you a potentialist or an actualist?

On Sept. 15 you responded:

OK, now to radical potentialism: Maybe it would help to talk first about something less radical: Width potentialism. In this any picture of the universe can be thickened, keeping the same ordinals, even to the extent of making ordinals countable. So for any ordinal alpha of V we can imagine how to thicken V to a universe where alpha is countable. So any ordinal is “potentially countable”. But that does not mean that every ordinal is countable! There is a big difference between universes that we can imagine (where our \aleph_1 becomes countable) and universes we can “produce”. So this “potential countability” does not threaten the truth of the powerset axiom in V!

At that point I thought: “OK, I think I am getting a grip on the picture: Sy distinguishes between extensions that “actually” exist and extensions which “potentially” (or “virtually”) exist. When talking about extensions that actually exist (lengthenings and thickenings) he doesn’t use scare quotes, but when talking about extensions that do not actually exist but only potentially (or virtually) exist he uses scare quotes — as, for example, when in the context of width-actualism he speaks of “thickenings”.”

Let’s take stock: (1) In the case of countable transitive models of ZFC we all agree that there are actual lengthenings and thickenings (no scare quotes). And we can agree that there is always such an actual extension in which any given model is seen to be countable. (2) In the context of width-actualism there are actual lengthenings but only virtual thickenings (“thickenings”, with scare quotes — the “imaginary” extensions). And we can agree, via Jensen coding through class forcing, that there is always such a virtual thickening (“thickening”, with scare quotes — an “imaginary” extension) in which the model is “seen” to be countable. (3) But in distinguishing your radical potentialism from width actualism + height potentialism you must endorse actual lengthenings and actual thickenings and, moreover, such actual extensions in which any given transitive model of ZFC (whether countable or not) is (actually) seen to be countable. But then everything is ultimately countable, as I pointed out and you rejected.

This got me greatly confused. It seemed we were back to where we started.

I doubted that, until on Oct. 21 you wrote:

Extreme Multiverse view: We have no single V but a wealth of different possible V’s. This wealth is so wealthy that any particular V can be thickened or lengthened (no quotes!) and shockingly, made countable by going to a larger V. So there is no absolute notion of cardinality, only distinct notions of cardinality within each of the possible V’s. OK, now when talking about maximality of a possible V we simply mean that lengthening or thickening V will not reveal new properties that we couldn’t already see in V. (Note: One could go further and even look at blowups of V which see V as countable, but mathematically this doesn’t seem to add much.) Then when we talk of a first-order statement like not-CH being a consequence of maximality we mean that it holds in all of the possible V’s which are maximal.

Frankly speaking, the Extreme Multiverse View is my own personal view of things and gives the cleanest and clearest approach to studying maximality. That’s because it allows the freedom to make all of the moves that you want to make in comparing a possible V to other possible V’s.

Note that the multiverse described above looks exactly like the Hyperuniverse of a model of ZFC. In other words, the Extreme Multiverse View says that whether or not we realise it, we live in a Hyperuniverse, and we are kidding ourselves when we claim that we have truly uncountable sets: Some bigger universe looks down at us and laughs when she hears this, knowing perfectly well that we are just playing around with countable stuff.

In the second sentence you emphatically indicate that there are no scare quotes — these lengthenings and thickenings actually exist — there is “no absolute notion of cardinality”.

So we are back to where we started, to the view I thought you held all along.

I thus repeat my earlier point:

But now it seems that on this approach the straightforward sense of CH has evaporated. Indeed it seems that set theory has evaporated! Your view, as described above, is indeed like that of Skolem. But Skolem (rightly) took this view to involve a rejection of set theory. And yet you don’t. You seem to want to have it both ways: reject an absolute notion of countability and say something about CH (beyond that it has no meaning).

I hope you don’t repeat your earlier response:

So any ordinal is “potentially countable”. But that does not mean that every ordinal is countable! There is a big difference between universes that we can imagine (where our aleph_1 becomes countable) and universes we can “produce”. So this “potential countability” does not threaten the truth of the powerset axiom in V!

Because if you do we will be caught in a loop…

I suspect your view has changed. Or not changed. In any case, what is your view?

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Thu, 25 Sep 2014, Penelope Maddy wrote:

Dear Sy,

This is tremendously helpful:

Yes, in a nutshell what I am saying is that there are two  “equivalent” ways of handling this:

  1. You are a potentialist in height but remain actualist in width.
    Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).
  2. You let loose and adopt a potentialist view for both length and width. Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

(Thanks for bringing this out, Geoffrey!)

Great!

I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally –

I have been rescued by ‘scare quotes’! I suggest a Nobel prize for whoever invented those.

then I began to wonder again: is he really an actualist in width, after all?  If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).

OK! You see, I am very accommodating: If you want me to care about the “good set theory” that comes out of the HP then I’m willing to do that, and if you want to interpret the HP as a width actualist, then please be my guest. If you want to say that I am doing some nutty philosophy (radical skepticism? Are there really people like that?) rather than good solid epistemology then I am disappointed, but I probably can live with that as long as you agree that the programme is a sensible approach to extracting new mathematical consequences of the Maximality feature of the set-concept.

However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).

It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it.

I appreciate your point: For length potentialism we’ve got the hierarchy of V_\alpha’s to think about. For width potentialism we are missing an analogous hierarchy of strict inner models converging to V to hang our hats on (at least for now, and probably forever). So let’s stick with width actualism for now. Agreed.

But nonetheless I can’t resist the temptation to defend radical potentialism (potentialism in both height and width), despite having signed the width-actualism-treaty! Indeed, I’ll give you a way to think about it, in fact two ways to think about it at no extra charge! (Apologies: This e-mail is again ridiculously long.)

First way:

We start with our good friend V. Now let’s dip our little toe into thickenings, without quotes, by acknowledging what Azriel Lévy taught us: We can form V[G], a generic extension of V which makes the continuum of V countable, giving rise to a new and larger continuum in V[G]. We have performed a “Levy collapse”. Now the Levy collapse extension is not unique, but all Levy collapses look exactly the same: they all satisfy exactly the same first-order sentences (and more). So I’m gambling that you can picture a Levy collapse.
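For the forcing-minded reader, the step just described can be written in standard notation (my gloss, not essential to the picture: \mathrm{Coll}(\omega,\lambda) is the usual collapse forcing, whose conditions are finite partial functions from \omega to \lambda):

```latex
% The step V -> V[G]: collapse the continuum of V to be countable.
\[
\lambda = (2^{\aleph_0})^V, \qquad
G \subseteq \mathrm{Coll}(\omega,\lambda) \text{ $V$-generic}, \qquad
V[G] \models \mathrm{ZFC} + \text{``}\lambda \text{ is countable''}.
\]
```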

So now in the spirit of the maximal ITERATIVE conception of set, we Levy collapse again and again. We build a sequence

V \subset V[G] \subset V[G_1] \subset V[G_2] \subset \cdots

where at a successor stage we make the previous continuum countable, giving rise to a new and bigger continuum. One can even make sense of what to do at limit stages (it’s not the union, but close). So maybe you can visualise this picture of increasing Levy collapses, indexed by the ordinal numbers. Now at any stage before we use up all of the ordinals of V we have a model of ZFC; the continuum is now rather big in comparison to the continuum of V, but we still have a bona fide continuum, indeed the powerset axiom still holds.

Now visualise the limit of this process, after running out of V-ordinals. Panic! The continuum is now as big as the ordinals, indeed as big as the entire thickened universe! We have lost the powerset axiom! But at least we can say that we have a nice picture of this “monster” we have created, V[\text{Fat}], and it will be a model of ZFC minus the powerset axiom.

Now we restore good sense and bring in lengthenings (further thickening will never restore ZFC). I have to beg your indulgence at this point (sorry about this) and talk about a very slightly different kind of lengthening. In the case of the usual lengthenings we ponder the fact that V is the union of its von Neumann levels V_\alpha, \alpha an ordinal, and then we lengthen to a V^* by adding new von Neumann levels. Set-theorists however often prefer a different hierarchy (sorry John) which we call the “H-hierarchy”. H_\kappa is only defined when \kappa is an infinite cardinal number. Now H_{\aleph_0} is the same as V_\omega, nothing fancy there. But H_{\aleph_1} is not a von Neumann level at all, it is the union of all countable transitive sets. So it contains V_{\omega+1} as a subset but it also contains, for example, all countable ordinals and more. But in the set-theorist’s heart, there is no essential difference between V_{\omega+1} and H_{\aleph_1} because any countable transitive set can be “coded” by a subset of V_\omega anyway, so the difference between these two structures is really rather cosmetic. Set-theorists prefer H_{\aleph_1} over V_{\omega+1} because the former is a model of ZFC – Powerset and the latter is not. Similarly, for any infinite cardinal \kappa, H_\kappa is the union of all transitive sets of cardinality less than \kappa. The H-hierarchy is a beautiful hierarchy (better than the V-hierarchy in my opinion) and V is the union of the H_\kappa’s as \kappa ranges over infinite cardinals, just like V is the union of the V_\alpha’s as \alpha ranges over the ordinal numbers.
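For reference, the H-hierarchy just described can be collected into one display (these are the standard definitions; \mathrm{trcl}(x) denotes the transitive closure of x):

```latex
% The H-hierarchy: sets whose transitive closure has size less than kappa.
\[
H_\kappa = \{\, x : |\mathrm{trcl}(x)| < \kappa \,\}
\quad (\kappa \text{ an infinite cardinal}), \qquad
H_{\aleph_0} = V_\omega, \qquad
V = \bigcup_{\kappa} H_\kappa .
\]
```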

OK, back to where I was. Instead of talking about lengthenings in terms of new von Neumann levels (“new V_\alpha‘s”) let’s talk about them in terms of new H-levels (“new H_\kappa‘s”). We had V[\text{Fat}], a thickening of V with an impossibly fat continuum, as large as the entire model V[\text{Fat}] itself. As experienced lengtheners, what do we do? We lengthen V[\text{Fat}] to a longer universe V^* where now V[\text{Fat}] is just the H_{\aleph_1} of V^*, the 2nd H-level of V^*. In V[\text{Fat}], every set is countable so V[\text{Fat}] is the union of its countable transitive elements. This looks just like the H_{\aleph_1} of some taller universe V^*. As Geoffrey quoted Hilary: “Even God couldn’t make a universe for Zermelo set theory that it would be impossible to extend.” The slight difference now is that we are lengthening not a model of Zermelo set theory but a model of ZFC – Power (to a longer model of ZFC).

To get from V to V^* we only had to combine a “canonical iteration” of Levy collapses, which we can clearly picture, with a “lengthening”, which as length potentialists we can also picture. So we are happy campers so far.

Now what happened to our original V? Oddly enough, it now became a transitive set of ordinal height the \omega_1 (= \aleph_1, just a notation thing) of V^*. So we have thlickened (thickened and lengthened) V to a V^* in which V is a set of size \aleph_1 (it is still uncountable). But you can guess what comes next! I’m going to repeat the same process, starting with V^* and the very first move is V^* \subset V^*[G^*] where the continuum of V^* is countable in V^*[G^*]. Now we have made V countable, as it had size the continuum of V^* and now we have made the continuum of V^* countable. At last we have reached a thickening of V through a procedure we can think about in which V becomes countable.
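In one display, the route from V to a universe in which V is countable (just a summary of the two preceding paragraphs, nothing new):

```latex
% V is thlickened: thickened to V[Fat], lengthened to V*, then collapsed.
\[
V \subset V[\mathrm{Fat}] = H_{\aleph_1}^{V^*} \subset V^* \subset V^*[G^*],
\qquad
V^*[G^*] \models \text{``}V \text{ is countable''}.
\]
```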

As a radical potentialist there is no end to this business and any universe can be repeatedly thlickened to universes that make the universes that appear earlier in the thlickening process countable. An actualist view on thlickenings is impossible, because if you were to try to put an end to all this thlickening by taking a union you would end up not with a model of ZFC but with a model of ZFC minus Powerset in which every set is countable!

Sorry that took so long. Next:

Second way:

Recall how we can slightly lengthen V (no thickening!) to make sense out of what first-order properties hold in “thickenings” of V:

… let V^+ be a lengthening of V to another model of ZFC. Now just as for little-V, there is a magic theory T described in V^+ whose models are the “thickenings” of V, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of V, only “non-standard extensions” of V in the sense to which you referred. In V^+ we can define what it means for a first-order sentence to hold in a “thickening” of V; we just ask if it is consistent with the magic theory T … So I hope that this clarifies how the IMH works. You don’t really need to thicken V, but only “thicken” V, i.e. consider models of theories expressible in V^+. These are the “thicker pictures” that I have been talking about.

Well, there is a magic theory T’ for “thlickenings” too! I.e., if we want to know if a first-order property holds in some “thlickening” (recall this means “lengthening and thickening”) of V we can just ask if it is consistent with the theory T’. So again we can make sense out of “thlickenings”, enlargements of V in both height and width, by slightly lengthening V to a universe V^+ which models ZFC (KP is sufficient). As with “thickenings”, which don’t actually exist as objects in V^+, neither do “thlickenings” exist in V^+; it is however the case that when we contemplate what can happen in “thickenings” or “thlickenings” of V, this can be understood inside V^+, even though these objects are not directly available there.
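Schematically (with T’ the magic theory just mentioned, and \varphi any first-order sentence):

```latex
% "Thlickenings" reduced to consistency with the magic theory T',
% as evaluated in the slight lengthening V^+.
\[
\varphi \text{ holds in some ``thlickening'' of } V
\;\Longleftrightarrow\;
V^+ \models \mathrm{Con}(T' + \varphi).
\]
```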

Have I now convinced you that potentialism in both height and width is not so hard to visualise?

If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind:  e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V;

Almost. When we think about what properties hold in V[G], we don’t really mean V[G], we just fiddle around in V^+, a slight lengthening of V.

In my view, this point of view liberates the theory of absoluteness in set theory. If you look at how set-theorists treat the question of absoluteness, i.e., how certain statements do not change truth value when “thickening” V in certain ways, there has been a constant fear about the meaning of “thickening”. Indeed, in all pre-HP discussion of absoluteness, these “thickenings” were always set-generic extensions, because with set-generic extensions one is indeed “just fiddling around with things inside V“, rather than in V^+. If you can tolerate the move from V to V^+, a slight “lengthening” of V, then suddenly you can talk about arbitrary “thickenings” when discussing absoluteness, not just set-generic extensions. This is great, as there is no convincing way to argue that the set-generic extensions are the only ones worth considering.

However I don’t see how to comfortably run the HP with both height and width actualism. The problem is that to handle “thickenings” in the presence of width actualism it seems necessary to allow at least a bit of “lengthening” (to a KP-model). Without this you don’t catch the magic theory needed to talk about what can happen in “thickenings”. Indeed, you are then stuck with pre-HP absoluteness theory, where the only “thickenings” you can make sense out of are given by set-forcing or by carefully chosen examples of class-forcing, a disappointing restriction.

so V[G] is a ‘thickening’, right?

Yes, V[G] is a “thickening”, with quotes, even if G is generic for a class forcing, hyperclass forcing, hyperhyperclass forcing, whatever.

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V.
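In schematic form (where an “outer model of V” is, as just said, a model of ZFC containing V with the same ordinals as V):

```latex
% The original IMH: outer-model truth reflects to inner models.
\[
\textsf{IMH}: \quad
\varphi \text{ holds in some outer model of } V
\;\Longrightarrow\;
\varphi \text{ holds in some inner model of } V.
\]
```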

The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)

“… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?”

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!

Maybe ‘endorse’ is too strong.

Well now that I think about it, I did “endorse” the IMH at first! (Read my paper with Tatiana.) So there’s an example, but as you know I changed my mind and withdrew that “endorsement”!

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course.  Can you point to an HP-generated principle that has that sort of status?

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.

2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.

3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

PS:   You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”.  I can’t speak for Geoffrey, but I took us to be saying something pretty simple:  if we think of V as being fixed in width, then CH is either true or false there.  I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.

Sorry, I misread this! You are not talking about the “usual methods for determining the CH”, but rather that its truth value is “determinate in the usual way”.

“My bad”. (I just heard this expression but understand that it is all the rage in the States now.)

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter,

Before I forget I should mention now that tomorrow I will be off to a conference, so may not be responding to e-mail promptly for the rest of this week.

Thanks a lot for your messages; they are very helpful for sharpening the arguments that I am making. And I apologise if my description of radical potentialism caused so much confusion! Let me try to clarify it better in this mail.

Your second point is about “desirable properties”; let me address that first. The HP is aimed primarily at what is derivable from the intrinsic feature of maximality, i.e. it is concerned with the maximal iterative conception. But I also mentioned “omniscience”, which I do not see as derivable from maximality; at least no one has presented such an argument and I don’t know of one. If omniscience were to be included as intrinsic to the “concept of set” then Pen would have been right to say that we have changed to a different concept! I only used the phrase “desirable feature” informally to suggest that I find omniscience desirable, nothing more. I would very much like to hear suggestions about which practice-independent notions, like omniscience, should be regarded as “desirable”; I have no idea how to formulate that. Actually I am curious to know: Do you see it as a “desirable” feature of the universe of sets? Maybe you don’t want to talk about “desirable features” at all, and I can understand that.

OK, now to radical potentialism: Maybe it would help to talk first about something less radical: Width potentialism. In this any picture of the universe can be thickened, keeping the same ordinals, even to the extent of making ordinals countable. So for any ordinal alpha of V we can imagine how to thicken V to a universe where alpha is countable. So any ordinal is “potentially countable”. But that does not mean that every ordinal *is* countable! There is a big difference between universes that we can imagine (where our aleph_1 becomes countable) and universes we can “produce”. So this “potential countability” does not threaten the truth of the powerset axiom in V!

The standard form of potentialism can be viewed as a process of lengthening as opposed to thickening. Once again, there is no model of ZFC “at the end” because there is no “end”.

Now radical potentialism is in effect a unification of these two forms of potentialism. We allow V to be lengthened and thickened simultaneously. If we were to keep thickening to make every ordinal of V countable then after \text{Ord}(V) steps we are forced to also lengthen to reach a (picture of a) universe that satisfies ZFC. In that universe, the original V looks countable. But then we could repeat the process with this new universe until it is seen to be countable. The potentialist aspect is that we cannot end this process by taking the union of all of our pictures. In fact, whereas in the standard discussion of lengthenings there could be a debate about whether we can arrive at “the end”, if we allow both lengthenings and thickenings, potentialism is the only possibility; actualism is ruled out because the union of our “universes” would not be a model of ZFC and would therefore have to be lengthened further! And again, the “potential countability of V” does not threaten the truth of the axioms of ZFC in V!

Now in powerset and ordinal maximality we are not comparing V to pictures of other universes which see V as countable, even though there are such pictures. We are only looking at lengthenings that have V as a rank-initial segment and thickenings that have the same ordinals as V. From the perspective of a given V, these lengthenings and thickenings are only pictures of course; we are not talking about actual universes of sets, as those would be contained in V. But as I said in my last mail, even a platonist, with his own special V, can imagine lengthenings and thickenings. It seems that I have some platonistically-leaning colleagues who discuss the set-generic multiverse surrounding V, which makes no sense if all universes are contained in V. The relevant set-generic extensions can be “pictured” but not “produced”. There are other constructions which take a countable universe and lengthen it, and doing this to V can also be “pictured” by a platonist.

So set theory has not evaporated, CH is still a good problem. There is a huge wealth of pictures of V and some are “better” than others in the sense that some are better witnesses to maximality than others. The minimal model of ZFC is a terrible witness to maximality. A witness to the IMH does a much better job. In the HP we want to figure out which are the “best” witnesses to maximality. We may conclude that these “best” witnesses to maximality satisfy not CH, or we may conclude otherwise. It is too early to make such a judgment.

Now the next move (please see the outline) is to realise that if the spectrum of pictures is as rich as I describe, allowing V to “look countable” in pictures of larger universes, then in the “maximality test” where V is compared to pictures obtained through thickening and lengthening we might as well carry out this test inside a large picture of V where the original universe looks countable and where the lengthenings and thickenings you need actually exist as transitive models of ZFC. The result is that if you want to know if something first order holds in all universes that pass the “maximality test” you can simply assume that the test is taking place in the Hyperuniverse of some background V. (Of course depending on the choice of that background V, there may or may not exist universes that pass the maximality test.) This is the reduction of the problem to the Hyperuniverse, where these pictures can actually be realised as transitive models of ZFC.

Best,
Sy