Tag Archives: Thickenings

Re: Paper and slides on indefiniteness of CH

Dear Peter,

Dear Sy,

You now appear to have endorsed width actualism. (I doubt that you actually believe it but rather have phrased your program in terms of width actualism since many people accept this.)

No, I have not endorsed width actualism. I only said that the HP can be treated equivalently either with width actualism or with radical potentialism.

Below is what I wrote to Geoffrey about this on 25.September:

Yes, in a nutshell what I am saying is that there are two “equivalent” ways of handling this:

1. You are a potentialist in height but remain actualist in width.

Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).

2. You let loose and adopt a potentialist view for both length and width.

Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

You said that for the purposes of the program one could either use length potentialism + width potentialism or length potentialism + width actualism. The idea, as I understand it, is that on the latter one has enough height to encode “imaginary width extensions”. I take it that you are then using the hyperuniverse (where one has actual height and width extensions) to model the actual height extensions and virtual width extensions of “V” (“the real thing”).

Is this right? Would you mind spelling out a bit for me why the two approaches are equivalent? In particular, I do not fully understand how you are treating “imaginary width extensions”.

Below is a copy of my explanation of this to Geoffrey on 24.September; please let me know if you have any questions. As Geoffrey pointed out, it is important to put “thickenings” in quotes and/or to distinguish non-standard from standard interpretations of power sets.



Dear Geoffrey,

Thanks for your valuable comments. It is nice to hear that you are happy with “lengthenings”; I’ll now try to convince you that there is no problem with “thickenings”, provided they are interpreted correctly. Indeed, you are right: “lengthenings” and “thickenings” are not fully analogous; there are important differences between these two notions, which I can address in a future mail (I don’t want this mail to be too long).

So as the starting point of this discussion, let’s take the view that V can be “lengthened” to a longer universe but cannot be thickened by adding new sets without adding new ordinals.

We can talk about forcing extensions of V, but we regard these as “non-standard”, not part of V. What other “non-standard extensions” of V are there? Surely they are not all just forcing extensions; what are they?

To answer this question it is very helpful to take a detour through a study of countable transitive models of ZFC. OK, I understand that we have dropped V for now, but please bear with me on this, it is instructive.

So let v denote a countable transitive model of ZFC. What “non-standard extensions” does v have? Of course just like V, v has its forcing extensions. A convenient fact is that forcing extensions of v can actually be realised as new countable transitive models of ZFC; these are genuine thickenings of v that exist in the ambient big universe V. This is not surprising, as v is so small. But we don’t even have to restrict ourselves to forcing extensions of v; we can talk about arbitrary thickenings of v, namely countable transitive models of ZFC with the same ordinals as v but more sets.
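
(To fix notation, schematically:

w is a thickening of v  \iff  w is a countable transitive model of ZFC with v \subseteq w and the same ordinals as v.

Forcing extensions v[G] are the familiar examples, but any such w counts.)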

Alright, so far we have our v together with all of its thickenings. Now we bring in Maximality. We ask the following question: How good a job does v do of exhibiting the feature of Maximality? Of course we immediately say: Terrible! v is only countable and therefore can be enlarged in billions of different ways! But we are not so demanding: we are less interested in the fact that v can be enlarged than we are in the question of whether v can be enlarged in such a way as to reveal new properties, new and interesting internal structures, …, things we cannot find if we stay inside v.

I haven’t forgotten that we are still just playing around with countable models, please bear with me a bit longer. OK, so let’s say that v does a decent job of exhibiting Maximality if any first-order property that holds in some thickening of v already holds in some thinning, i.e. inner model, of v. That seems to be a perfectly reasonable demand to make of v if v is to be admitted to the Maximality Club. Please trust me when I say that there are such v’s exhibiting this form of Maximality. Good.
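
(Schematically, writing “inner model of v” for a transitive m \subseteq v satisfying ZFC with the same ordinals as v, the criterion for admission to the Club is:

For every first-order sentence \varphi: if \varphi holds in some thickening of v, then \varphi holds in some inner model of v.

This is just the IMH relativised to the countable model v.)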

Now here is the next important observation, implicit in Barwise but more explicit in M. Stanley: Let v^+ be a countable transitive model of ZFC which lengthens v. There are v’s in the Maximality Club which have such lengthenings. (Probably this is not a big deal for you, as you believe that V itself should have such a lengthening.) The interesting thing is this: Whether or not a given first-order property holds in a thickening of v is something definable inside v^+. More exactly, there is a logic called “v-logic” which can be formulated inside v and a theory T in that logic whose models are exactly the isomorphic copies of thickenings of v; moreover, whether a first-order statement is consistent with T is definable inside v^+. In summary, the question of whether a first-order property \varphi holds in a thickening of v, a blatantly semantic question, is reduced to a syntactic question which can be answered definably inside v^+: we just ask if \varphi is consistent with the magic theory T. (Yes, this is a Completeness Theorem in the style of Gödel-Barwise.)
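
(For the curious, here is a rough sketch of the machinery, suppressing details. v-logic has a constant \bar{a} for each a \in v, together with a constant \bar{v} for v itself. The magic theory T consists of ZFC, the atomic diagram of v, and axioms saying that \bar{v} is transitive and that every ordinal lies in \bar{v}. Besides the usual first-order rules there is one infinitary rule: from \varphi(\bar{a}) for all a \in b, infer (\forall x \in \bar{b})\varphi(x). Proofs are therefore infinite objects, but of a size that fits inside v^+, and that is why “T + \varphi is consistent” becomes definable there.)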

Very interesting. So if you allow v to be lengthened, not even thickened, you are able to “see” what first-order properties hold in thickenings of v and thereby determine whether or not v belongs to the Maximality Club. This is great news, because now we can throw away our thickenings! We just talk about which first-order properties are consistent with our magic theory T, and this is fully described in v^+, any lengthening of v to a model of ZFC. We don’t need real thickenings anymore, we can just talk about imaginary “thickenings”, i.e. models of T.

Thanks for your patience, now we go back to V. You are OK with lengthenings, so let V^+ be a lengthening of V to another model of ZFC. Now just as for v, there is a magic theory T described in V^+ whose models are the “thickenings” of V, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of V, only “non-standard extensions” of V in the sense to which you referred. In V^+ we can define what it means for a first-order sentence to hold in a “thickening” of V; we just ask if it is consistent with the magic theory T. And finally, we can say that V belongs to the Maximality Club if any first-order sentence which is consistent with T (i.e. holds in a “thickening” of V) also holds in a thinning (i.e. inner model) of V. We have said all of this without thickening V! All we had to do was “lengthen” V to a longer model of ZFC in order to understand what first-order properties can hold in “thickenings” of V.
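
(Displayed schematically, the key reduction of semantics to syntax is:

\varphi holds in some “thickening” of V  \iff  T + \varphi is consistent in V-logic,

where the right-hand side is definable inside V^+, since V^+ contains all the relevant infinitary proofs.)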

So I hope that this clarifies how the IMH works. You don’t really need to thicken V, but only “thicken” V, i.e. consider models of theories expressible in V^+. These are the “thicker pictures” that I have been talking about. And the IMH just says that V belongs to the Maximality Club in the above sense.


From a foundational and philosophical point of view the two pictures are quite different. On the first, CH does not have a fixed sense in a specific (upward-open-ended) V; instead one must look at CH across various “candidates for V”. On the second, CH does have a fixed sense in a specific (upward-open-ended) V. And, as far as I understand your implementation of the first approach, for every candidate V there is an extension (in width and height) in which that candidate is countable, as in the case of the hyperuniverse. Is that right?

Yes. Below is what I said to Pen about this on 23.September:

We have many pictures of V. Through a process of comparison we isolate those pictures which best exhibit the feature of Maximality, the “optimal” pictures. Then we have 3 possibilities:

a. Does CH hold in all of the optimal pictures?

b. Does CH fail in all of the optimal pictures?

c. Otherwise

In Case a, we have inferred CH from Maximality, in Case b we have inferred -CH from Maximality, and in Case c we come to no definitive conclusion about CH on the basis of Maximality.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Sat, 27 Sep 2014, Penelope Maddy wrote:

Dear Sy,

I fear that height actualism is not dead; surely there must be at least a few Platonists out there, and for such people (they are not “nuts”!) I’d have to work a lot harder to make sense of the HP. Is the Height Actualism Club large enough to make that worth the effort? It would help a lot to know how the height actualists treat proper classes: are they all first-order definable? And how do they feel about “collections of proper classes”; do they regard that as nonsense?

I have no strong commitment to height actualism, but I did once think about proper classes as something other than what looks like just another few ranks in the hierarchy — something more like extensions of properties, so that they could be self-membered, for example.  My goal was to understand some of Reinhardt’s arguments this way, but it didn’t work for that job, so I left it behind.

So you generated IMH first, then developed the HP from it? Where did IMH come from?

I launched the (strictly mathematical) Internal Consistency Programme. A first-order statement is “internally consistent” if it holds in an inner model (assuming the existence of inner models with large cardinals). To be “internally consistent” is stronger than to be just plain old consistent, so new methods are needed to show that consistent statements are internally consistent (sometimes they are not), and there’s also a new notion of “internal consistency strength” (measured by large cardinals) that can differ from the usual notion of consistency strength. All of this work was of course about what first-order statements can hold in inner models, so it was an obvious question to ask if one could “maximise” what is internally consistent. That is exactly the inner model hypothesis.
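
(In one line:

\varphi is internally consistent  \iff  \varphi holds in some inner model of V.

“Maximising” what is internally consistent then leads naturally to the requirement that any sentence holding in an outer model of V already holds in an inner model of V, which is the IMH.)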

I see.  Thank you.

Can you remind us briefly why you withdrew your endorsement of IMH?

Because it only takes maximality in width into account and fails to consider maximality in height!

Is this the problem of IMH implying there are no inaccessibles?

Yes, exactly!

We’re now out of my depth, though, so I hope we might hear others on this. E.g., it seems the countable models and the literal thickenings (as opposed to imaginary ‘thickenings’) have both dropped out of the picture.  ??

No, otherwise it wouldn’t be the Hyperuniverse Programme! (Recall that the Hyperuniverse is the collection of countable transitive models of ZFC.)

An important step in the HP for facilitating the math is the “Reduction to the Hyperuniverse”. Recall that we have reduced the discussion of “thickenings” of V to a magic theory in a logic called “V-logic”, which lives in a slight “lengthening” V^+ of V, a model of KP with V as an element. In other words, the IMH (for example) is not first-order in V but it becomes first-order in V^+. But now that we’re first-order we can apply Löwenheim-Skolem to V^+! This gives a countable v and v^+ with the same first-order properties as V and V^+. What this means is that if we want to know whether a first-order property follows from the IMH, it suffices to show that it holds in just those countable v’s whose associated v^+’s see that v obeys the IMH. The move from V to v doesn’t change anything, except that our “thickenings” of v (with quotes) become real thickenings of v (without quotes)! So we can discard the v^+’s with their magic theories and just talk boldly and directly about real thickenings of countable transitive models of ZFC. Fantasy has become reality.

In summary the moves are as follows: To handle the “thickenings” needed to make sense of the IMH, we create a slight lengthening V^+ of V to make the IMH first-order, then apply Löwenheim-Skolem to reduce the problem of deriving first-order properties from the IMH to a study of countable transitive models together with their real thickenings. So in the end we get rid of “thickenings” altogether and can do the math with countable transitive models of ZFC, nice clean math inside the Hyperuniverse!
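
(Displayed as steps, schematically:

1. Express “V obeys the IMH” as a first-order property of the pair (V, V^+), using the magic theory T in V-logic.

2. Apply Löwenheim-Skolem to obtain a countable pair (v, v^+) with the same first-order properties.

3. Note that in the ambient universe the “thickenings” of such a v are realised by genuine thickenings: countable transitive models of ZFC containing v with the same ordinals.)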

The above applies not just to the IMH but also to other HP-criteria.

I’m glad you asked this question!

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Thu, 25 Sep 2014, Penelope Maddy wrote:

Dear Sy,

This is tremendously helpful:

Yes, in a nutshell what I am saying is that there are two  “equivalent” ways of handling this:

  1. You are a potentialist in height but remain actualist in width.
    Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).
  2. You let loose and adopt a potentialist view for both length and width. Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

(Thanks for bringing this out, Geoffrey!)

Great!

I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally –

I have been rescued by ‘scare quotes’! I suggest a Nobel prize for whoever invented those.

then I began to wonder again: is he really an actualist in width, after all?  If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).

OK! You see, I am very accommodating: If you want me to care about the “good set theory” that comes out of the HP then I’m willing to do that, and if you want to interpret the HP as a width actualist, then please be my guest. If you want to say that I am doing some nutty philosophy (radical skepticism? Are there really people like that?) rather than good solid epistemology then I am disappointed, but I probably can live with that as long as you agree that the programme is a sensible approach to extracting new mathematical consequences of the Maximality feature of the set-concept.

However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).

It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it.

I appreciate your point: For length potentialism we’ve got the hierarchy of V_\alpha’s to think about. For width potentialism we are missing an analogous hierarchy of strict inner models converging to V to hang our hats on (we don’t have one yet, and probably never will). So let’s stick with width actualism for now. Agreed.

But nonetheless I can’t resist the temptation to defend radical potentialism (potentialism in both height and width), despite having signed the width-actualism-treaty! Indeed, I’ll give you a way to think about it, in fact two ways to think about it at no extra charge! (Apologies: This e-mail is again ridiculously long.)

First way:

We start with our good friend V. Now let’s dip our little toe into thickenings, without quotes, by acknowledging what Azriel Lévy taught us: We can form V[G], a generic extension of V which makes the continuum of V countable, giving rise to a new and larger continuum in V[G]. We have performed a “Levy collapse”. Now the Levy collapse extension is not unique, but all Levy collapses look exactly the same: they all satisfy exactly the same first-order sentences (and more). So I’m gambling that you can picture a Levy collapse.
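
(For definiteness, in standard forcing notation, nothing here being special to the HP: collapsing a cardinal \kappa to \omega uses the partial order

Coll(\omega, \kappa) = the set of finite partial functions p : \omega \to \kappa, ordered by reverse inclusion.

A generic filter G yields a surjection \bigcup G : \omega \to \kappa, so \kappa becomes countable in V[G]; taking \kappa = (2^{\aleph_0})^V makes the old continuum countable.)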

So now in the spirit of the maximal ITERATIVE conception of set, we Levy collapse again and again. We build a sequence

V \subset V[G] \subset V[G_1] \subset V[G_2] \subset \cdots

where at a successor stage we make the previous continuum countable, giving rise to a new and bigger continuum. One can even make sense of what to do at limit stages (it’s not the union, but close). So maybe you can visualise this picture of increasing Levy collapses, indexed by the ordinal numbers. Now at any stage before we use up all of the ordinals of V we have a model of ZFC; the continuum is now rather big in comparison to the continuum of V, but we still have a bona fide continuum, indeed the powerset axiom still holds.

Now visualise the limit of this process, after running out of V-ordinals. Panic! The continuum is now as big as the ordinals, indeed as big as the entire thickened universe! We have lost the powerset axiom! But at least we can say that we have a nice picture of this “monster” we have created, V[\text{Fat}], and it will be a model of ZFC minus the Powerset axiom.

Now we restore good sense and bring in lengthenings (further thickening will never restore ZFC). I have to beg your indulgence at this point (sorry about this) and talk about a very slightly different kind of lengthening. In the case of the usual lengthenings we ponder the fact that V is the union of its von Neumann levels V_\alpha, \alpha an ordinal, and then we lengthen to a V^* by adding new von Neumann levels. Set-theorists however often prefer a different hierarchy (sorry John) which we call the “H-hierarchy”. H_\kappa is only defined when \kappa is an infinite cardinal number. Now H_{\aleph_0} is the same as V_\omega, nothing fancy there. But H_{\aleph_1} is not a von Neumann level at all, it is the union of all countable transitive sets. So it contains V_{\omega+1} as a subset but it also contains, for example, all countable ordinals and more. But in the set-theorist’s heart, there is no essential difference between V_{\omega+1} and H_{\aleph_1}, because any countable transitive set can be “coded” by a subset of V_\omega anyway, so the difference between these two structures is really rather cosmetic. Set-theorists prefer H_{\aleph_1} over V_{\omega+1} because the former is a model of ZFC minus Powerset and the latter is not. Similarly, for any infinite cardinal \kappa, H_\kappa is the union of all transitive sets of cardinality less than \kappa. The H-hierarchy is a beautiful hierarchy (better than the V-hierarchy in my opinion) and V is the union of the H_\kappa’s as \kappa ranges over infinite cardinals, just like V is the union of the V_\alpha’s as \alpha ranges over the ordinal numbers.
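
(For the record, the standard definition is:

H_\kappa = { x : the transitive closure of x has cardinality < \kappa },

and V = \bigcup H_\kappa as \kappa ranges over the infinite cardinals. For regular uncountable \kappa, H_\kappa is a model of ZFC minus Powerset, which is exactly the feature used below.)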

OK, back to where I was. Instead of talking about lengthenings in terms of new von Neumann levels (“new V_\alpha’s”) let’s talk about them in terms of new H-levels (“new H_\kappa’s”). We had V[\text{Fat}], a thickening of V with an impossibly fat continuum, as large as the entire model V[\text{Fat}] itself. As experienced lengtheners, what do we do? We lengthen V[\text{Fat}] to a longer universe V^* where now V[\text{Fat}] is just the H_{\aleph_1} of V^*, the 2nd H-level of V^*. In V[\text{Fat}], every set is countable so V[\text{Fat}] is the union of its countable transitive elements. This looks just like the H_{\aleph_1} of some taller universe V^*. As Geoffrey quoted Hilary: “Even God couldn’t make a universe for Zermelo set theory that it would be impossible to extend.” The slight difference now is that we are lengthening not a model of Zermelo set theory but a model of ZFC minus Powerset (to a longer model of ZFC).

To get from V to V^* we only had to combine a “canonical iteration” of Levy collapses, which we can clearly picture, with a “lengthening”, which as length potentialists we can also picture. So we are happy campers so far.

Now what happened to our original V? Oddly enough, it has now become a transitive set of ordinal height the \omega_1 (= \aleph_1, just a notation thing) of V^*. So we have thlickened (thickened and lengthened) V to a V^* in which V is a set of size \aleph_1 (it is still uncountable). But you can guess what comes next! I’m going to repeat the same process, starting with V^*, and the very first move is V^* \subset V^*[G^*], where the continuum of V^* becomes countable in V^*[G^*]. Now we have made V countable, as it had size at most the continuum of V^* and we have now made that continuum countable. At last we have reached a thickening of V, through a procedure we can think about, in which V becomes countable.

As a radical potentialist there is no end to this business, and any universe can be repeatedly thlickened to universes that make the universes appearing earlier in the thlickening process countable. An actualist view on thlickenings is impossible, because if you were to try to put an end to all this thlickening by taking a union, you would end up not with a model of ZFC but with a model of ZFC minus Powerset in which every set is countable!

Sorry that took so long. Next:

Second way:

Recall how we can slightly lengthen V (no thickening!) to make sense out of what first-order properties hold in “thickenings” of V:

… let V^+ be a lengthening of V to another model of ZFC. Now just as for v, there is a magic theory T described in V^+ whose models are the “thickenings” of V, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of V, only “non-standard extensions” of V in the sense to which you referred. In V^+ we can define what it means for a first-order sentence to hold in a “thickening” of V; we just ask if it is consistent with the magic theory T … So I hope that this clarifies how the IMH works. You don’t really need to thicken V, but only “thicken” V, i.e. consider models of theories expressible in V^+. These are the “thicker pictures” that I have been talking about.

Well, there is a magic theory T’ for “thlickenings” too! I.e., if we want to know whether a first-order property holds in some “thlickening” (recall this means “lengthening and thickening”) of V, we can just ask if it is consistent with the theory T’. So again we can make sense out of “thlickenings”, enlargements of V in both height and width, by slightly lengthening V to a universe V^+ which models ZFC (KP is sufficient). As with “thickenings”, which don’t actually exist as objects in V^+, neither do “thlickenings” exist in V^+; it is however the case that when we contemplate what can happen in “thickenings” or “thlickenings” of V, this can be understood inside V^+, even though these objects are not directly available there.
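
(Schematically, in parallel with the case of T:

\varphi holds in some “thlickening” of V  \iff  T’ + \varphi is consistent in V-logic,

with the right-hand side again definable inside V^+.)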

Have I now convinced you that potentialism in both height and width is not so hard to visualise?

If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind:  e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V;

Almost. When we think about what properties hold in V[G], we don’t really mean V[G], we just fiddle around in V^+, a slight lengthening of V.

In my view, this point of view liberates the theory of absoluteness in set theory. If you look at how set-theorists treat the question of absoluteness, i.e., how certain statements do not change truth value when “thickening” V in certain ways, there has been a constant fear about the meaning of “thickening”. Indeed, in all pre-HP discussion of absoluteness, these “thickenings” were always set-generic extensions, because with set-generic extensions one is indeed “just fiddling around with things inside V”, rather than in V^+. If you can tolerate the move from V to V^+, a slight “lengthening” of V, then suddenly you can talk about arbitrary “thickenings” when discussing absoluteness, not just set-generic extensions. This is great, as there is no convincing way to argue that the set-generic extensions are the only ones worth considering.

However I don’t see how to comfortably run the HP with both height and width actualism. The problem is that to handle “thickenings” in the presence of width actualism it seems necessary to allow at least a bit of “lengthening” (to a KP-model). Without this you don’t catch the magic theory needed to talk about what can happen in “thickenings”. Indeed, you are then stuck with pre-HP absoluteness theory, where the only “thickenings” you can make sense out of are given by set-forcing or by carefully chosen examples of class-forcing, a disappointing restriction.

so V[G] is a ‘thickening’, right?

Yes, V[G] is a “thickening”, with quotes, even if G is generic for a class forcing, hyperclass forcing, hyperhyperclass forcing, whatever.

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V.
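
(Written out in one line, using only this familiar kind of “thickening”:

IMH: For every first-order sentence \varphi, if \varphi holds in some “thickening” of V, i.e. in some model of ZFC containing V with the same ordinals as V, then \varphi holds in some inner model of V.)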

The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)

“… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?”

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!

Maybe ‘endorse’ is too strong.

Well now that I think about it, I did “endorse” the IMH at first! (Read my paper with Tatiana.) So there’s an example, but as you know I changed my mind and withdrew that “endorsement”!

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course.  Can you point to an HP-generated principle that has that sort of status?

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.

2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.

3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

PS:   You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”.  I can’t speak for Geoffrey, but I took us to be saying something pretty simple:  if we think of V as being fixed in width, then CH is either true or false there.  I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.

Sorry, I misread this! You are not talking about the “usual methods for determining the CH”, but rather that its truth value is “determinate in the usual way”.

“My bad”. (I just heard this expression but understand that it is all the rage in the States now.)

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

This is tremendously helpful:

Yes, in a nutshell what I am saying is that there are two “equivalent” ways of handling this:

1. You are a potentialist in height but remain actualist in width.

Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).

2. You let loose and adopt a potentialist view for both length and width.

Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

(Thanks for bringing this out, Geoffrey!)

I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally — then I began to wonder again: is he really an actualist in width, after all? If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height). It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it. If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind: e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V; so V[G] is a ‘thickening’, right?

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!

Maybe ‘endorse’ is too strong. Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course. Can you point to an HP-generated principle that has that sort of status?

All best,
Pen

PS: You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”. I can’t speak for Geoffrey, but I took us to be saying something pretty simple: if we think of V as being fixed in width, then CH is either true or false there. I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.