Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Sat, 27 Sep 2014, Penelope Maddy wrote:

Dear Sy,

I fear that height actualism is not dead; surely there must be even a few Platonists out there, and for such people (they are not “nuts”!) I’d have to work a lot harder to make sense of the HP. Is the Height Actualism Club large enough to make that worth the effort? It would help a lot to know how the height actualists treat proper classes: are they all first-order definable? And how do they feel about “collections of proper classes”; do they regard that as nonsense?

I have no strong commitment to height actualism, but I did once think about proper classes as something other than what looks like just another few ranks in the hierarchy — something more like extensions of properties, so that they could be self-membered, for example.  My goal was to understand some of Reinhardt’s arguments this way, but it didn’t work for that job, so I left it behind.

So you generated IMH first, then developed the HP from it? Where did IMH come from?

I launched the (strictly mathematical) Internal Consistency Programme. A first-order statement is “internally consistent” if it holds in an inner model (assuming the existence of inner models with large cardinals). To be “internally consistent” is stronger than to be just plain old consistent, so new methods are needed to show that consistent statements are internally consistent (sometimes they are not) and there’s also a new notion of “internal consistency strength” (measured by large cardinals) that can differ from the usual notion of consistency strength. All of this work was of course about what first-order statements can hold in inner models so it was an obvious question to ask if one could “maximise” what is internally consistent. That is exactly the inner model hypothesis.
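
In symbols (a rough paraphrase of the definition just given, writing “inner model” for a transitive class model of ZFC containing all the ordinals of V):

\varphi \text{ is internally consistent} \iff M \models \varphi \text{ for some inner model } M \text{ of } V

To “maximise” what is internally consistent is then to require that any first-order sentence which holds in some “thickening” of V already holds in some inner model of V; that requirement is the IMH.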

I see.  Thank you.

Can you remind us briefly why you withdrew your endorsement of IMH?

Because it only takes maximality in width into account and fails to consider maximality in height!

Is this the problem of IMH implying there are no inaccessibles?

Yes, exactly!

We’re now out of my depth, though, so I hope we might hear others on this. E.g., it seems the countable models and the literal thickenings (as opposed to imaginary ‘thickenings’) have both dropped out of the picture.  ??

No, otherwise it wouldn’t be the Hyperuniverse Programme! (Recall that the Hyperuniverse is the collection of countable transitive models of ZFC.)

An important step in the HP for facilitating the math is the “Reduction to the Hyperuniverse”. Recall that we have reduced the discussion of “thickenings” of V to a magic theory in a logic called “V-logic” which lives in a slight “lengthening” V^+ of V, a model of KP with V as an element. In other words, the IMH (for example) is not first-order in V but it becomes first-order in V^+. But now that we’re first-order we can apply Loewenheim-Skolem to V^+! This gives a countable v and v^+ with the same first-order properties as V and V^+. What this means is that if we want to know whether a first-order property follows from the IMH it suffices to show that it holds just in the countable v’s whose associated v^+’s see that v obeys the IMH. The move from V to v doesn’t change anything, except that our “thickenings” of v with quotes are now real thickenings of v without quotes! So we can discard the v^+’s with their magic theories and just talk boldly and directly about real thickenings of countable transitive models of ZFC. Fantasy has become reality.

In summary the moves are as follows: To handle the “thickenings” needed to make sense of the IMH we create a slight lengthening V^+ of V to make the IMH first-order, then apply Loewenheim-Skolem to reduce the problem of deriving first-order properties from the IMH to a study of countable transitive models together with their real thickenings. So in the end we get rid of “thickenings” altogether and can do the math on countable transitive models of ZFC, nice clean math inside the Hyperuniverse!
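
Schematically, the two moves just described are (in my shorthand):

1. Lengthen: pass from V to V^+ \models KP with V \in V^+, so that “a first-order sentence \varphi holds in some ‘thickening’ of V” becomes a first-order statement about (V^+, V), via the magic theory of V-logic.

2. Apply Loewenheim-Skolem: choose a countable pair (v^+, v) satisfying the same first-order sentences as (V^+, V). Since v is countable, its “thickenings” are genuine thickenings, and the first-order consequences of the IMH can be computed from such v’s.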

The above applies not just to the IMH but also to other HP-criteria.

I’m glad you asked this question!

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy and friends,

Firstly, let me just say that I’ve found this thread really interesting, and I’m very keen to keep the discussion together (though the technical subtleties sometimes elude me, it’s great to see the thought process).

You asked:

But I fear that height actualism is not dead; surely there must be even a few Platonists out there, and for such people (they are not “nuts”!) I’d have to work a lot harder to make sense of the HP. Is the Height Actualism Club large enough to make that worth the effort? It would help a lot to know how the height actualists treat proper classes: are they all first-order definable? And how do they feel about “collections of proper classes”; do they regard that as nonsense?

I think this very much depends on who you talk to. For example, lots of height actualists like to render proper class talk using plural reference (given in Boolos and developed in Uzquiano). Again, however, people differ on whether all “pluralities” should be first-order definable.

Another route some have taken is to regard class talk as simply shorthand for talk of first-order satisfaction in V, and associated formalisations of such talk in NBG (without putting words in people’s mouths, this is I believe Peter Koellner’s position, but I’m prepared to be corrected on this). This is obviously all first-order definable.

Still another way is to take Leon Horsten and Philip Welch’s approach, and regard proper classes as mereological fusions of sets that lie outside the scope of our first-order quantifiers. Again, whether or not you think such things are first-order definable depends on taste.

Finally, there’s the view that proper classes should be understood as “properties” or some other similar intensional notion. Pen’s already mentioned her 1983 paper, but it also pops up in some of Øystein Linnebo’s work from his (I believe) pre-potentialist days (e.g. “Sets, Properties, and Unrestricted Quantification”).

Similarly whether or not one should have collections of proper classes is also going to depend on who you talk to. I’d say most actualists reject collections of proper classes, and this maps loosely on to the philosophical views above; properties/mereological fusions are fundamentally different kinds of entities from sets, and you can’t (for reasons that differ between authors) take collections of them. Similarly, in the plural case, this turns on the legitimacy of superplural quantification, a hotly debated topic in the Philosophy of Language, Logic, and Set Theory (see, for example, the different views presented by Hanoch Ben-Yami’s “Higher-Level Plurals versus Articulated Reference, and an Elaboration of Salva Veritate” and Linnebo and Nicolas’s “Superplurals in English”). I’d say most actualists who take the plural route reject superplural quantification (usually by arguing that it is more ontologically committing than plural quantification).

There are some who have embraced “collections” of proper classes, however. In his PhD thesis Hewitt allows for finite-order superplural reference, and some have been tempted to use collections of proper classes in the pursuit of a class-theoretic interpretation of Category Theory (I think Muller has something like this in “Sets, Classes, and Categories”, but it’s a while since I read that – I should check the reference). Of course, the extent to which this is coherent is fiercely contested!

In short, when talking about “Actualism” we face exactly the same problem as when talking about “Potentialism”; there’s a whole gamut of positions referred to by the term. Constructing general arguments for and against these kinds of positions, and analysing the extent to which a view that falls under one of the two labels can account transparently for a piece of set-theoretic discourse is thus rather tricky.

Very Best,
Neil

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I fear that height actualism is not dead; surely there must be even a few Platonists out there, and for such people (they are not “nuts”!) I’d have to work a lot harder to make sense of the HP. Is the Height Actualism Club large enough to make that worth the effort? It would help a lot to know how the height actualists treat proper classes: are they all first-order definable? And how do they feel about “collections of proper classes”; do they regard that as nonsense?

I have no strong commitment to height actualism, but I did once think about proper classes as something other than what looks like just another few ranks in the hierarchy — something more like extensions of properties, so that they could be self-membered, for example. My goal was to understand some of Reinhardt’s arguments this way, but it didn’t work for that job, so I left it behind.

So you generated IMH first, then developed the HP from it? Where did IMH come from?

I launched the (strictly mathematical) Internal Consistency Programme. A first-order statement is “internally consistent” if it holds in an inner model (assuming the existence of inner models with large cardinals). To be “internally consistent” is stronger than to be just plain old consistent, so new methods are needed to show that consistent statements are internally consistent (sometimes they are not) and there’s also a new notion of “internal consistency strength” (measured by large cardinals) that can differ from the usual notion of consistency strength. All of this work was of course about what first-order statements can hold in inner models so it was an obvious question to ask if one could “maximise” what is internally consistent. That is exactly the inner model hypothesis.

I see.  Thank you.

Can you remind us briefly why you withdrew your endorsement of IMH?

Because it only takes maximality in width into account and fails to consider maximality in height!

Is this the problem of IMH implying there are no inaccessibles?

Can \textsf{SIMH}^\# be stated in a simple form like that of IMH?  Can you explain its mathematical attractions?

The \textsf{SIMH}^\# is a “unification” of the \textsf{SIMH} and the \textsf{IMH}^\#. The \textsf{SIMH} is not too hard to explain, but the \textsf{IMH}^\# is much tougher. (I don’t imagine that you found my e-mail to Bob very enlightening!) Let me do the \textsf{SIMH} now, and if you haven’t heard enough I’ll give the \textsf{IMH}^\# a go in my next e-mail.

SIMH

The acronym denotes the Strong Inner Model Hypothesis. For the sake of clarity, however, I’ll give you a simplified version that doesn’t quite imply the original IMH; please forgive that…

The attraction of the \textsf{SIMH}^\# is that it is a natural criterion that mirrors both height and width maximality and solves the continuum problem (negatively).

This is helpful.  Thank you.

We’re now out of my depth, though, so I hope we might hear others on this.  E.g., it seems the countable models and the literal thickenings (as opposed to imaginary ‘thickenings’) have both dropped out of the picture. ??

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Fri, 26 Sep 2014, Penelope Maddy wrote:

Dear Sy,

(radical skepticism? Are there really people like that?)

Well, um, yes, there are — I’m one of them.  Skepticism seems to me a particularly fascinating topic, especially as a tool for analyzing various philosophical methods (‘meta-philosophy’, as it’s called).

Now that is embarrassing! It seems I called you a “nut”! I apologise for that. Of course I know nothing about “radical skepticism” but since it seems to deny any kind of “meaning” I thought it sounded “nutty”. But you are not a “nut” so there must be something really worthwhile about it!

If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).

OK. It seems that my valiant attempt to sell radical potentialism went over like a lead balloon. But I do see the appeal of your and Geoffrey’s attachment to width actualism, so as I can still run the HP using it I won’t beg you to reconsider radical potentialism just yet.

However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).

I didn’t ask for this.  Just actualism on width.

Nice to hear. But I fear that height actualism is not dead; surely there must be even a few Platonists out there, and for such people (they are not “nuts”!) I’d have to work a lot harder to make sense of the HP. Is the Height Actualism Club large enough to make that worth the effort? It would help a lot to know how the height actualists treat proper classes: are they all first-order definable? And how do they feel about “collections of proper classes”; do they regard that as nonsense?

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V. The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)

So you generated IMH first, then developed the HP from it?  Where did IMH come from?

I launched the (strictly mathematical) Internal Consistency Programme. A first-order statement is “internally consistent” if it holds in an inner model (assuming the existence of inner models with large cardinals). To be “internally consistent” is stronger than to be just plain old consistent, so new methods are needed to show that consistent statements are internally consistent (sometimes they are not) and there’s also a new notion of “internal consistency strength” (measured by large cardinals) that can differ from the usual notion of consistency strength. All of this work was of course about what first-order statements can hold in inner models so it was an obvious question to ask if one could “maximise” what is internally consistent. That is exactly the inner model hypothesis.

Can you remind us briefly why you withdrew your endorsement of IMH?

Because it only takes maximality in width into account and fails to consider maximality in height! The latter is expressed by an entirely different criterion called “#-generation”, which contradicts the IMH. So you can’t have “perfect width maximality” (the IMH) and height maximality at the same time; instead you are compelled to “unify” the two criteria into a single criterion that says that the universe is maximal in height and is maximal in width only when compared to other universes that are also maximal in height. That’s the IMH#.

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course.  Can you point to an HP-generated principle that has that sort of status?

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

  1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.
  2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.
  3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

Can \textsf{SIMH}^\# be stated in a simple form like that of IMH?  Can you explain its mathematical attractions?

The \textsf{SIMH}^\# is a “unification” of the \textsf{SIMH} and the \textsf{IMH}^\#. The \textsf{SIMH} is not too hard to explain, but the \textsf{IMH}^\# is much tougher. (I don’t imagine that you found my e-mail to Bob very enlightening!) Let me do the \textsf{SIMH} now, and if you haven’t heard enough I’ll give the \textsf{IMH}^\# a go in my next e-mail.

SIMH

The acronym denotes the Strong Inner Model Hypothesis. For the sake of clarity, however, I’ll give you a simplified version that doesn’t quite imply the original IMH; please forgive that.

A cardinal is “absolute” if it is not only definable but is definable by the same formula in all cardinal-preserving extensions (“thickenings”) of V. For example, \aleph_1 is absolute because it is obviously “the least uncountable cardinal” in all cardinal-preserving extensions. The same applies to \aleph_2, \aleph_3,\dots, \aleph_\omega, \dots for a long way. But notice that the cardinality of the continuum could fail to be absolute, as the size of the continuum could grow in a cardinal-preserving extension (this is what Cohen did when he used forcing to make CH false; Bob Solovay got the ultimate result).

Now recall that the IMH says that if a first-order sentence without parameters holds in an outer model (“thickening”) of V then it holds in an inner model (“thinning”) of V. The SIMH says that if a first-order sentence with absolute parameters holds in a cardinal-preserving outer model of V then it holds in an inner model of V (of course with the same parameters). The SIMH implies that CH is false: By Cohen’s result there is a cardinal-preserving outer model of V in which the continuum has size at least \aleph_2 of V and therefore using the SIMH we conclude that there is an inner model of V in which the continuum has size at least \aleph_2 of V; it follows that also in V, the continuum has size at least \aleph_2, i.e. CH is false. In fact by the same argument, the SIMH implies that the continuum is very, very large, bigger than \aleph_\alpha for any ordinal \alpha which is countable in Gödel’s universe L of constructible sets!
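
Displayed as a chain of steps (my rendering of the argument just given):

1. (Cohen, Solovay) There is a cardinal-preserving outer model W of V with W \models 2^{\aleph_0} \geq \aleph_2^V.

2. \aleph_2^V is an absolute parameter, so the SIMH provides an inner model M of V with M \models 2^{\aleph_0} \geq \aleph_2^V.

3. The reals of M are among the reals of V, so V itself satisfies 2^{\aleph_0} \geq \aleph_2, i.e. the negation of CH.

Running the same argument with other absolute parameters in place of \aleph_2^V yields the stronger lower bounds on the continuum just mentioned.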

The \textsf{SIMH}^\# is the same as the \textsf{SIMH} except we require that V is \#-generated (maximal in height) and instead of considering all cardinal-preserving outer models of V we only consider outer models of V which are \#-generated (maximal in height). It is a “unification” of height maximality with a strong form of width maximality.
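
In one line (my compression, suppressing fine print): V is \#-generated, and any first-order sentence with absolute parameters that holds in a cardinal-preserving, \#-generated outer model of V holds in an inner model of V.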

The attraction of the \textsf{SIMH}^\# is that it is a natural criterion that mirrors both height and width maximality and solves the continuum problem (negatively).

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

(radical skepticism? Are there really people like that?)

Well, um, yes, there are — I’m one of them.  Skepticism seems to me a particularly fascinating topic, especially as a tool for analyzing various philosophical methods (‘meta-philosophy’, as it’s called).

If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).

However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).

I didn’t ask for this.  Just actualism on width.

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V.

The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)

So you generated IMH first, then developed the HP from it?  Where did IMH come from?  Can you remind us briefly why you withdrew your endorsement of IMH?

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V = Ultimate L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course.  Can you point to an HP-generated principle that has that sort of status?

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.

2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.

3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

Can \textsf{SIMH}^\# be stated in a simple form like that of IMH?  Can you explain its mathematical attractions?

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Thu, 25 Sep 2014, Penelope Maddy wrote:

Dear Sy,

This is tremendously helpful:

Yes, in a nutshell what I am saying is that there are two  “equivalent” ways of handling this:

  1. You are a potentialist in height but remain actualist in width.
    Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).
  2. You let loose and adopt a potentialist view for both length and width. Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

(Thanks for bringing this out, Geoffrey!)

Great!

I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally –

I have been rescued by ‘scare quotes’! I suggest a Nobel prize for whoever invented those.

then I began to wonder again: is he really an actualist in width, after all?  If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height).

OK! You see, I am very accommodating: If you want me to care about the “good set theory” that comes out of the HP then I’m willing to do that, and if you want to interpret the HP as a width actualist, then please be my guest. If you want to say that I am doing some nutty philosophy (radical skepticism? Are there really people like that?) rather than good solid epistemology then I am disappointed, but I probably can live with that as long as you agree that the programme is a sensible approach to extracting new mathematical consequences of the Maximality feature of the set-concept.

However, there are limits: It’s hard to run this programme as both a width and height actualist (more on this below).

It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it.

I appreciate your point: For length potentialism we’ve got the hierarchy of V_\alpha’s to think about. For width potentialism we are missing an analogous hierarchy of strict inner models converging to V to hang our hats on (at least not yet, and probably never). So let’s stick with width actualism for now. Agreed.

But nonetheless I can’t resist the temptation to defend radical potentialism (potentialism in both height and width), despite having signed the width-actualism-treaty! Indeed, I’ll give you a way to think about it, in fact two ways to think about it at no extra charge! (Apologies: This e-mail is again ridiculously long.)

First way:

We start with our good friend V. Now let’s dip our little toe into thickenings, without quotes, by acknowledging what Azriel Lévy taught us: We can form V[G], a generic extension of V which makes the continuum of V countable, giving rise to a new and larger continuum in V[G]. We have performed a “Levy collapse”. Now the Levy collapse extension is not unique, but all Levy collapses look exactly the same: they all satisfy exactly the same first-order sentences (and more). So I’m gambling that you can picture a Levy collapse.
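
(In standard forcing notation, one can for instance take G to be generic for the collapse \text{Col}(\omega, (2^{\aleph_0})^V); the homogeneity of this forcing is one way to see that all such extensions satisfy exactly the same first-order sentences.)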

So now in the spirit of the maximal ITERATIVE conception of set, we Levy collapse again and again. We build a sequence

V \subset V[G] \subset V[G_1] \subset V[G_2] \subset \cdots

where at a successor stage we make the previous continuum countable, giving rise to a new and bigger continuum. One can even make sense of what to do at limit stages (it’s not the union, but close). So maybe you can visualise this picture of increasing Levy collapses, indexed by the ordinal numbers. Now at any stage before we use up all of the ordinals of V we have a model of ZFC; the continuum is now rather big in comparison to the continuum of V, but we still have a bona fide continuum, indeed the powerset axiom still holds.

Now visualise the limit of this process, after running out of V-ordinals. Panic! The continuum is now as big as the ordinals, indeed as big as the entire thickened universe! We have lost the powerset axiom! But at least we can say that we have a nice picture of this “monster” we have created, V[\text{Fat}], and it will be a model of ZFC minus the powerset axiom.

Now we restore good sense and bring in lengthenings (further thickening will never restore ZFC). I have to beg your indulgence at this point (sorry about this) and talk about a very slightly different kind of lengthening. In the case of the usual lengthenings we ponder the fact that V is the union of its von Neumann levels V_\alpha, \alpha an ordinal, and then we lengthen to a V^* by adding new von Neumann levels. Set-theorists however often prefer a different hierarchy (sorry John) which we call the “H-hierarchy”. H_\kappa is only defined when \kappa is an infinite cardinal number. Now H_{\aleph_0} is the same as V_\omega, nothing fancy there. But H_{\aleph_1} is not a von Neumann level at all, it is the union of all countable transitive sets. So it contains V_{\omega+1} as a subset but it also contains, for example, all countable ordinals and more. But in the set-theorist’s heart, there is no essential difference between V_{\omega+1} and H_{\aleph_1} because any countable transitive set can be “coded” by a subset of V_\omega anyway, so the difference between these two structures is really rather cosmetic. Set-theorists prefer H_{\aleph_1} over V_{\omega+1} because the former is a model of ZFC – Powerset and the latter is not. Similarly, for any infinite cardinal \kappa, H_\kappa is the union of all transitive sets of cardinality less than \kappa. The H-hierarchy is a beautiful hierarchy (better than the V-hierarchy in my opinion) and V is the union of the H_\kappa’s as \kappa ranges over infinite cardinals, just like V is the union of the V_\alpha’s as \alpha ranges over the ordinal numbers.
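
For reference, the standard definition in symbols: for an infinite cardinal \kappa,

H_\kappa = \{x : \text{the transitive closure of } x \text{ has cardinality less than } \kappa\}

so H_{\aleph_0} = V_\omega, H_{\aleph_1} is the union of all countable transitive sets, and V = \bigcup_\kappa H_\kappa with \kappa ranging over the infinite cardinals.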

OK, back to where I was. Instead of talking about lengthenings in terms of new von Neumann levels (“new V_\alpha‘s”) let’s talk about them in terms of new H-levels (“new H_\kappa‘s”). We had V[\text{Fat}], a thickening of V with an impossibly fat continuum, as large as the entire model V[\text{Fat}] itself. As experienced lengtheners, what do we do? We lengthen V[\text{Fat}] to a longer universe V^* where now V[\text{Fat}] is just the H_{\aleph_1} of V^*, the 2nd H-level of V^*. In V[\text{Fat}], every set is countable so V[\text{Fat}] is the union of its countable transitive elements. This looks just like the H_{\aleph_1} of some taller universe V^*. As Geoffrey quoted Hilary: “Even God couldn’t make a universe for Zermelo set theory that it would be impossible to extend.” The slight difference now is that we are lengthening not a model of Zermelo set theory but a model of ZFC – Power (to a longer model of ZFC).

To get from V to V^* we only had to combine a “canonical iteration” of Levy collapses, which we can clearly picture, with a “lengthening”, which as length potentialists we can also picture. So we are happy campers so far.

Now what happened to our original V? Oddly enough, it has now become a transitive set whose ordinal height is the \omega_1 (= \aleph_1, just a notational difference) of V^*. So we have thlickened (thickened and lengthened) V to a V^* in which V is a set of size \aleph_1 (it is still uncountable). But you can guess what comes next! I’m going to repeat the same process, starting with V^*, and the very first move is V^* \subset V^*[G^*], where the continuum of V^* is countable in V^*[G^*]. Now we have made V countable: it had size at most the continuum of V^*, and now we have made the continuum of V^* countable. At last we have reached a thickening of V, through a procedure we can think about, in which V becomes countable.

As a radical potentialist there is no end to this business and any universe can be repeatedly thlickened to universes that make the universes that appear earlier in the thlickening process countable. An actualist view on thlickenings is impossible, because if you were to try to put an end to all this thlickening by taking a union you would end up not with a model of ZFC but with a model of ZFC minus Powerset in which every set is countable!

Sorry that took so long. Next:

Second way:

Recall how we can slightly lengthen V (no thickening!) to make sense out of what first-order properties hold in “thickenings” of V:

… let V^+ be a lengthening of V to another model of ZFC. Now just as for little-V, there is a magic theory T described in V^+ whose models are the “thickenings” of V, but now it’s “thickenings” in quotes, because these models are, like forcing extensions of V, only “non-standard extensions” of V in the sense to which you referred. In V^+ we can define what it means for a first-order sentence to hold in a “thickening” of V; we just ask if it is consistent with the magic theory T … So I hope that this clarifies how the IMH works. You don’t really need to thicken V, but only “thicken” V, i.e. consider models of theories expressible in V^+. These are the “thicker pictures” that I have been talking about.

Well, there is a magic theory T’ for “thlickenings” too! I.e., if we want to know if a first-order property holds in some “thlickening” (recall this means “lengthening and thickening”) of V we can just ask if it is consistent with the theory T’. So again we can make sense out of “thlickenings”, enlargements of V in both height and width, by slightly lengthening V to a universe V^+ which models ZFC (KP is sufficient). As with “thickenings”, which don’t actually exist as objects in V^+, neither do “thlickenings” exist in V^+; it is however the case that when we contemplate what can happen in “thickenings” or “thlickenings” of V, this can be understood inside V^+, even though these objects are not directly available there.
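
In brief (my paraphrase of both cases): for a first-order sentence \varphi,

\varphi \text{ holds in some “thickening” of } V \iff \varphi \text{ is consistent with } T

\varphi \text{ holds in some “thlickening” of } V \iff \varphi \text{ is consistent with } T'

and both right-hand sides are questions asked and answered inside V^+, not in any actual extension of V.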

Have I now convinced you that potentialism in both height and width is not so hard to visualise?

If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind:  e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V;

Almost. When we think about what properties hold in V[G], we don’t really mean V[G], we just fiddle around in V^+, a slight lengthening of V.

In my view, this perspective liberates the theory of absoluteness in set theory. If you look at how set-theorists treat the question of absoluteness, i.e., how certain statements do not change truth value when “thickening” V in certain ways, there has been a constant fear about the meaning of “thickening”. Indeed, in all pre-HP discussion of absoluteness, these “thickenings” were always set-generic extensions, because with set-generic extensions one is indeed “just fiddling around with things inside V”, rather than in V^+. If you can tolerate the move from V to V^+, a slight “lengthening” of V, then suddenly you can talk about arbitrary “thickenings” when discussing absoluteness, not just set-generic extensions. This is great, as there is no convincing way to argue that the set-generic extensions are the only ones worth considering.

However I don’t see how to comfortably run the HP with both height and width actualism. The problem is that to handle “thickenings” in the presence of width actualism it seems necessary to allow at least a bit of “lengthening” (to a KP-model). Without this you don’t catch the magic theory needed to talk about what can happen in “thickenings”. Indeed, you are then stuck with pre-HP absoluteness theory, where the only “thickenings” you can make sense out of are given by set-forcing or by carefully chosen examples of class-forcing, a disappointing restriction.

so V[G] is a ‘thickening’, right?

Yes, V[G] is a “thickening”, with quotes, even if G is generic for a class forcing, hyperclass forcing, hyperhyperclass forcing, whatever.

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

The original version of the IMH that we have been discussing uses the familiar kind of “thickening”. The original version says that if a first-order sentence holds in some “thickening” of V then it holds in some thinning (inner model) of V. And by “thickening” I just mean a model of ZFC containing V with the same ordinals as V.

The IMH was the very first HP-generated principle. (In the interest of full and honest disclosure I should however confess that the HP did not exist when the IMH was formulated; indeed the HP was triggered by contemplating the intuitions behind the IMH.)

“… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?”

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!

Maybe ‘endorse’ is too strong.

Well now that I think about it, I did “endorse” the IMH at first! (Read my paper with Tatiana.) So there’s an example, but as you know I changed my mind and withdrew that “endorsement”!

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course.  Can you point to an HP-generated principle that has that sort of status?

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.

2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.

3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

PS:   You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”.  I can’t speak for Geoffrey, but I took us to be saying something pretty simple:  if we think of V as being fixed in width, then CH is either true or false there.  I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.

Sorry, I misread this! You are not talking about the “usual methods for determining the CH”, but rather that its truth value is “determinate in the usual way”.

“My bad”. (I just heard this expression but understand that it is all the rage in the States now.)

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

This is tremendously helpful:

Yes, in a nutshell what I am saying is that there are two “equivalent” ways of handling this:

1. You are a potentialist in height but remain actualist in width.

Then you need the quotes in “thickenings” when talking about width maximality. But if you happen to look at width maximality for a countable transitive model, then “thickening” = thickening without quotes (you really can thicken!).

2. You let loose and adopt a potentialist view for both length and width.

Then you don’t need to worry about quotes, because any picture of V is potentially countable, i.e. countable in some large picture V*, where all of the “thickenings” of V that you need are really thickenings of V that exist inside V*.

Mathematically there is no difference, but I do appreciate that philosophically one might feel uncomfortable with width potentialism and therefore wish to work with “thickenings” in quotes and not real thickenings taken from some bigger picture of V.

(Thanks for bringing this out, Geoffrey!)

I’d taken to heart your insistence, Sy, that you’re a potentialist in height and width, but when you started exchanging with Geoffrey about ‘thickenings’ — where those are what we philosophers call ‘scare quotes’, indicating that the notion isn’t intended quite literally — then I began to wonder again: is he really an actualist in width, after all? If the two versions are equivalent, I’d like very much to stick with the actualist reading of width (still potentialist in height). It’s not that I’m so philosophically uncomfortable with width potentialism; it’s just that I don’t know how to think about it. If we can just talk about ‘thickenings’ (not thickenings), I start to hope I might have some idea of what you have in mind: e.g., we don’t really mean V[G], we’re really just fiddling around with things inside V; so V[G] is a ‘thickening’, right?

You’ve said there are other types of ‘thickenings’, but is it possible to state an HP-generated principle (a version of the IMH, I guess) that just uses this familiar kind of ‘thickening’?

… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP!

Maybe ‘endorse’ is too strong. Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate-L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course. Can you point to an HP-generated principle that has that sort of status?

All best,
Pen

PS: You asked what Geoffrey and I mean by saying that for a width actualist, “CH is determinate in the usual way”. I can’t speak for Geoffrey, but I took us to be saying something pretty simple: if we think of V as being fixed in width, then CH is either true or false there. I suspect Geoffrey and I would disagree on the ontology here (in the technical philosophical sense), but that’s another matter.

Re: Paper and slides on indefiniteness of CH

Dear Pen and Geoffrey,

On Wed, 24 Sep 2014, Penelope Maddy wrote:

Thanks, Geoffrey. Of course you’re right. To use Peter’s terminology, if you’re a potentialist about height, but an actualist about width, then CH is determinate in the usual way. I was speaking of Sy’s potentialism, which I think is intended to be potentialist about both height and width.

You both say that if one hangs onto width actualism then “CH is determinate in the usual way”. I have no idea what “the usual way” means; can you tell me?

But nothing you have said suggests that there is any problem at all with determining the CH as a radical potentialist. Again:

… solving the CH via the HP would amount to verifying that the pictures of V which optimally exhibit the Maximality feature of the set concept all satisfy CH or all satisfy its negation. I do consider this to be discovering something about V. But I readily agree that it is not the “ordinary way people think of that project”.

And in more detail:

We have many pictures of V. Through a process of comparison we isolate those pictures which best exhibit the feature of Maximality, the “optimal” pictures. Then we have three possibilities:

a. CH holds in all of the optimal pictures.
b. CH fails in all of the optimal pictures.
c. Otherwise.

In Case a, we have inferred CH from Maximality, in Case b we have inferred -CH from Maximality and in Case c we come to no definitive conclusion about CH on the basis of Maximality.

OK, maybe this is not the “usual way” (whatever that is), but don’t you acknowledge that this is a programme that could resolve CH using Maximality?

I also owe Pen an answer to:

… at some point could you give one explicit HP-generated mathematical principle that you endorse, and explain its significance to us philosophers?

As I tried to explain to Peter, it is too soon to “endorse” any mathematical principle that comes out of the HP! The programme generates different possible mathematical consequences of Maximality, mirroring different aspects of Maximality. For example, the IMH is a way of handling width maximality and \#-generation a way of handling height maximality. Each has its interesting mathematical consequences. But they contradict each other! The aim of the programme is to generate the entire spectrum of possible ways of formulating the different aspects of Maximality, analysing them, comparing them, unifying them, … until the picture converges on an optimal Maximality criterion. Then we can talk about what to “endorse”. I conjecture that the negation of CH will be a consequence, but it is too soon to make that claim.

The IMH is significant for many reasons. First, it refutes the claim that “Maximality in width” implies the existence of large cardinals; indeed the IMH is the most natural formulation of “Maximality in width” and it refutes the existence of large cardinals! Second, it illustrates how one can legitimately talk about “arbitrary thickenings” of V in discussions of Maximality, without tying one’s hands to the restrictive notion of forcing extension. Third, as discussed at length in my papers with Tatiana, it inspires a re-think of the role of large cardinals in set theory, explaining this in terms of their existence in inner models as opposed to their existence in V.

But the HP has moved beyond the IMH to other criteria like \#-generation, unreachability and (possibly) omniscience, together with different ways of unifying these criteria into new “synthesised” criteria. It is an ongoing study with a lot of math behind it (yes Pen, “good set theory” that people can care about!) but this study is still in its infancy. I can’t come to any definitive conclusions yet, sorry to disappoint. But I’ll keep you posted.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Bob,

Let (N,U) be a sharp, i.e., N is a model of ZFC – Power with largest cardinal \kappa, \kappa is inaccessible in N, (N,U) is amenable, U is a normal measure on \kappa in (N,U) and (N,U) is iterable. Then (N,U) generates the inner model M if M is the union of the H_{\kappa_i}^{N_i} where (N_i,U_i) is the i-th iterate of (N,U) with largest cardinal \kappa_i (the image of \kappa under the iteration map).

A model M is \#-generated if it is of the above form for some sharp (N,U). I argue in my paper with Honzik that \#-generation is the optimal form of reflection. The \kappa_i’s are indiscernibles in M and enjoy in M all of the large cardinal properties that the Silver indiscernibles enjoy in L.
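
In symbols (my compression of the definition above):

M \text{ is } \#\text{-generated} \iff M = \bigcup_{i \in \text{Ord}} H_{\kappa_i}^{N_i} \text{ for some sharp } (N,U) = (N_0,U_0) \text{ with iterates } (N_i,U_i) \text{ and largest cardinals } \kappa_i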

Best,
Sy