On Thu, 25 Sep 2014, Penelope Maddy wrote:
This is tremendously helpful:
Yes, in a nutshell what I am saying is that there are two “equivalent” ways of handling this:
- You are a potentialist in height but remain actualist in width.
Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).
- You let loose and adopt a potentialist view for both length and width. Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.
Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.
(Thanks for bringing this out, Geoffrey!)
I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally –
I have been rescued by ‘scare quotes’! I suggest a Nobel prize for whoever invented those.
then I began to wonder again: is he really an actualist in width, after all? If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).
OK! You see, I am very accommodating: If you want me to care about the “good set theory” that comes out of the HP then I’m willing to do that, and if you want to interpret the HP as a width actualist, then please be my guest. If you want to say that I am doing some nutty philosophy (radical skepticism? Are there really people like that?) rather than good solid epistemology then I am disappointed, but I probably can live with that as long as you agree that the programme is a sensible approach to extracting new mathematical consequences of the Maximality feature of the set-concept.
However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).
It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it.
I appreciate your point: For length potentialism we’ve got the hierarchy of $V_\alpha$’s to think about. For width potentialism we are missing an analogous hierarchy of strict inner models converging to V to hang our hats on (at least not yet, and probably never). So let’s stick with width actualism for now. Agreed.
But nonetheless I can’t resist the temptation to defend radical potentialism (potentialism in both height and width), despite having signed the width-actualism-treaty! Indeed, I’ll give you a way to think about it, in fact two ways to think about it at no extra charge! (Apologies: This e-mail is again ridiculously long.)
We start with our good friend $V$. Now let’s dip our little toe into thickenings, without quotes, by acknowledging what Azriel Lévy taught us: We can form $V[G]$, a generic extension of $V$ which makes the continuum of $V$ countable, giving rise to a new and larger continuum in $V[G]$. We have performed a “Levy collapse”. Now the Levy collapse extension is not unique, but all Levy collapses look exactly the same: they all satisfy exactly the same first-order sentences (and more). So I’m gambling that you can picture a Levy collapse.
So now in the spirit of the maximal ITERATIVE conception of set, we Levy collapse again and again. We build a sequence

$V = W_0 \subseteq W_1 \subseteq W_2 \subseteq \cdots \subseteq W_\alpha \subseteq \cdots$
where at a successor stage we make the previous continuum countable, giving rise to a new and bigger continuum. One can even make sense of what to do at limit stages (it’s not the union, but close). So maybe you can visualise this picture of increasing Levy collapses, indexed by the ordinal numbers. Now at any stage before we use up all of the ordinals of V we have a model of ZFC; the continuum is now rather big in comparison to the continuum of V, but we still have a bona fide continuum, indeed the powerset axiom still holds.
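The iteration just described can be written out schematically (a sketch; the names $W_\alpha$ and $G_\alpha$ are chosen here purely for illustration):

```latex
% Ordinal-indexed iteration of Levy collapses (schematic)
W_0 = V, \qquad
W_{\alpha+1} = W_\alpha[G_\alpha]
  \quad\text{where } G_\alpha \text{ collapses the continuum of } W_\alpha \text{ to } \omega,
\qquad
W_\lambda = \text{a suitable limit of } \langle W_\alpha : \alpha < \lambda \rangle
  \quad (\text{not quite the union}).
```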
Now visualise the limit of this process, after running out of V-ordinals. Panic! The continuum is now as big as the ordinals, indeed as big as the entire thickened universe! We have lost the powerset axiom! But at least we can say that we have a nice picture of this “monster” we have created and it will be a model of ZFC minus the powerset axiom.
Now we restore good sense and bring in lengthenings (further thickening will never restore ZFC). I have to beg your indulgence at this point (sorry about this) and talk about a very slightly different kind of lengthening. In the case of the usual lengthenings we ponder the fact that V is the union of its von Neumann levels $V_\alpha$, $\alpha$ an ordinal, and then we lengthen $V$ to a $V^*$ by adding new von Neumann levels. Set-theorists however often prefer a different hierarchy (sorry John) which we call the “$H$-hierarchy”. $H_\kappa$ is only defined when $\kappa$ is an infinite cardinal number. Now $H_\omega$ is the same as $V_\omega$, nothing fancy there. But $H_{\omega_1}$ is not a von Neumann level at all, it is the union of all countable transitive sets. So it contains $V_{\omega+1}$ as a subset but it also contains, for example, all countable ordinals and more. But in the set-theorist’s heart, there is no essential difference between $V_{\omega+1}$ and $H_{\omega_1}$ because any countable transitive set can be “coded” by a subset of $V_\omega$ anyway, so the difference between these two structures is really rather cosmetic. Set-theorists prefer $H_{\omega_1}$ over $V_{\omega+1}$ because the former is a model of ZFC – Powerset and the latter is not. Similarly, for any infinite cardinal $\kappa$, $H_\kappa$ is the union of all transitive sets of cardinality less than $\kappa$. The H-hierarchy is a beautiful hierarchy (better than the V-hierarchy in my opinion) and V is the union of the $H_\kappa$’s as $\kappa$ ranges over infinite cardinals, just like V is the union of the $V_\alpha$’s as $\alpha$ ranges over the ordinal numbers.
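For reference, the two hierarchies can be displayed side by side (standard definitions; $\mathrm{trcl}(x)$ denotes the transitive closure of $x$):

```latex
% The von Neumann V-hierarchy
V_0 = \emptyset, \qquad V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha, \qquad
V = \bigcup_{\alpha \in \mathrm{Ord}} V_\alpha

% The H-hierarchy (kappa an infinite cardinal)
H_\kappa = \{\, x : |\mathrm{trcl}(x)| < \kappa \,\}, \qquad
V = \bigcup_{\kappa \text{ an infinite cardinal}} H_\kappa
```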
OK, back to where I was. Instead of talking about lengthenings in terms of new von Neumann levels (“new $V_\alpha$’s”) let’s talk about them in terms of new H-levels (“new $H_\kappa$’s”). We had $W$, a thickening of $V$ with an impossibly fat continuum, as large as the entire model itself. As experienced lengtheners, what do we do? We lengthen $W$ to a longer universe $W^*$ where now $W$ is just the $H_{\omega_1}$ of $W^*$, the 2nd H-level of $W^*$. In $W$, every set is countable so $W$ is the union of its countable transitive elements. This looks just like the $H_{\omega_1}$ of some taller universe $W^*$. As Geoffrey quoted Hilary: “Even God couldn’t make a universe for Zermelo set theory that it would be impossible to extend.” The slight difference now is that we are lengthening not a model of Zermelo set theory but a model of ZFC – Power (to a longer model of ZFC).
To get from $V$ to $W^*$ we only had to combine a “canonical iteration” of Levy collapses, which we can clearly picture, with a “lengthening”, which as length potentialists we can also picture. So we are happy campers so far.
Now what happened to our original $V$? Oddly enough, it now became a transitive set of ordinal height the $\omega_1$ (= $\aleph_1$, just a notation thing) of $W^*$. So we have thlickened (thickened and lengthened) $V$ to a $W^*$ in which $V$ is a set of size $\aleph_1$ (it is still uncountable). But you can guess what comes next! I’m going to repeat the same process, starting with $W^*$: the very first move is to a $W^*[G]$ where the continuum of $W^*$ is countable. Now we have made $V$ countable, as it had size at most the continuum of $W^*$ and now we have made the continuum of $W^*$ countable. At last we have reached a thickening of $V$ through a procedure we can think about in which $V$ becomes countable.
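The whole passage can be summarized in one line (writing $W$ for the limit of the Levy collapses and $W^*$ for its lengthening; these names are chosen here just for illustration):

```latex
% One round of "thlickening": thicken, lengthen, collapse again
V \;\subseteq\;
\underbrace{W}_{\substack{\text{limit of Levy collapses:}\\ \text{ZFC minus Powerset,}\\ \text{every set countable}}}
\;\subseteq\;
\underbrace{W^*}_{\substack{\text{lengthening with}\\ W \,=\, H_{\omega_1}^{W^*}}}
\;\subseteq\;
\underbrace{W^*[G]}_{\substack{\text{Levy collapse of } W^*\!:\\ V \text{ now countable}}}
```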
As a radical potentialist there is no end to this business and any universe can be repeatedly thlickened to universes that make the universes that appear earlier in the thlickening process countable. An actualist view on thlickenings is impossible, because if you were to try to put an end to all this thlickening by taking a union you would end up not with a model of ZFC but with a model of
ZFC minus Powerset in which every set is countable!
Sorry that took so long. Next:
Recall how we can slightly lengthen V (no thickening!) to make sense out of what first-order properties hold in “thickenings” of V:
… let $V^*$ be a lengthening of $V$ to another model of ZFC. Now just as for little-$v$, there is a magic theory T described in $V^*$ whose models are the “thickenings” of $V$, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of $V$, only “non-standard extensions” of $V$ in the sense to which you referred. In $V^*$ we can define what it means for a first-order sentence to hold in a “thickening” of $V$; we just ask if it is consistent with the magic theory T … So I hope that this clarifies how the IMH works. You don’t really need to thicken $V$, but only “thicken” $V$, i.e. consider models of theories expressible in $V^*$. These are the “thicker pictures” that I have been talking about.
Well, there is a magic theory T′ for “thlickenings” too! I.e., if we want to know if a first-order property holds in some “thlickening” (recall this means “lengthening and thickening”) of $V$ we can just ask if it is consistent with the theory T′. So again we can make sense out of “thlickenings”, enlargements of $V$ in both height and width, by slightly lengthening $V$ to a universe $V^*$ which models ZFC (KP is sufficient). As with “thickenings”, which don’t actually exist as objects in $V^*$, neither do “thlickenings” exist in $V^*$; it is however the case that when we contemplate what can happen in “thickenings” or “thlickenings” of $V$, this can be understood inside $V^*$, even though these objects are not directly available there.
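The mechanism in both cases is the same, and can be stated as a schema (a sketch, with T and T′ the “magic theories” defined inside the slight lengthening $V^*$):

```latex
% Reducing truth in "thickenings"/"thlickenings" to consistency inside V*
\varphi \text{ holds in some ``thickening'' of } V
  \quad\Longleftrightarrow\quad
  V^* \models \text{``}\,T + \varphi \text{ is consistent''}

\varphi \text{ holds in some ``thlickening'' of } V
  \quad\Longleftrightarrow\quad
  V^* \models \text{``}\,T' + \varphi \text{ is consistent''}
```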
Have I now convinced you that potentialism in both height and width is not so hard to visualise?
If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind: e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V;
Almost. When we think about what properties hold in “V[G]”, we don’t really mean $V[G]$; we just fiddle around in $V^*$, a slight lengthening of $V$.
In my view, this point of view liberates the theory of absoluteness in set theory. If you look at how set-theorists treat the question of absoluteness, i.e., how certain statements do not change truth value when “thickening” in certain ways, there has been a constant fear about the meaning of “thickening”. Indeed, in all pre-HP discussion of absoluteness, these “thickenings” were always set-generic extensions, because with set-generic extensions one is indeed “just fiddling around with things inside $V$”, rather than in $V^*$. If you can tolerate the move from $V$ to $V^*$, a slight “lengthening” of $V$, then suddenly you can talk about arbitrary “thickenings” when discussing absoluteness, not just set-generic extensions. This is great, as there is no convincing way to argue that the set-generic extensions are the only ones worth considering.
However I don’t see how to comfortably run the HP with both height and width actualism. The problem is that to handle “thickenings” in the presence of width actualism it seems necessary to allow at least a bit of “lengthening” (to a KP-model). Without this you don’t catch the magic theory needed to talk about what can happen in “thickenings”. Indeed, you are then stuck with pre-HP absoluteness theory, where the only “thickenings” you can make sense out of are given by set-forcing or by carefully chosen examples of class-forcing, a disappointing restriction.
so V[G] is a ‘thickening’, right?
Yes, $V[G]$ is a “thickening”, with quotes, even if $G$ is generic for a class forcing, hyperclass forcing, hyperhyperclass forcing, whatever.
You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?
The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V.
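In display form, the statement just made reads:

```latex
% The Inner Model Hypothesis (IMH)
\text{For every first-order sentence } \varphi:\qquad
\varphi \text{ holds in some outer model (``thickening'') of } V
\;\Longrightarrow\;
\varphi \text{ holds in some inner model (thinning) of } V.
```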
The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)
“… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?”
As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!
Maybe ‘endorse’ is too strong.
Well now that I think about it, I did “endorse” the IMH at first! (Read my paper with Tatiana.) So there’s an example, but as you know I changed my mind and withdrew that “endorsement”!
Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course. Can you point to an HP-generated principle that has that sort of status?
I can come close. It would be the $\textsf{SIMH}^\#$. But it’s not really analogous to Ultimate L for several reasons:
1. I hesitate to “conjecture” that the $\textsf{SIMH}^\#$ is consistent.
2. The $\textsf{SIMH}^\#$ in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the $\textsf{SIMH}^\#$ will be part of that Optimal criterion; it may have to first be unified with other criteria.
3. The $\textsf{SIMH}^\#$, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the $\textsf{SIMH}^\#$ will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.
PS: You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”. I can’t speak for Geoffrey, but I took us to be saying something pretty simple: if we think of V as being fixed in width, then CH is either true or false there. I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.
Sorry, I misread this! You are not talking about the “usual methods for determining the CH”, but rather that its truth value is “determinate in the usual way”.
“My bad”. (I just heard this expression but understand that it is all the rage in the States now.)
All the best,