
Re: Paper and slides on indefiniteness of CH

Dear Neil,

Many thanks for your interest. Some comments below:

On Sat, 25 Oct 2014, Neil Barton wrote:

Dear HP-ers, HP-worriers, and friends, In this thread (which I confess has been moving pretty quickly for me; I’ve read it all but do apologise if I’m revisiting some old ground) we’ve seen that the key claim is that there is a deep relationship between countable transitive models and some V, either real, ideal, or set within a multiverse.

Well, yes, this has been something I have emphasized, the “reduction to the Hyperuniverse” which sets up a connection between the study of the maximality of V and the study of ctm’s. But please note that this is not really essential to the programme! As I said in my mail of 23.October to Pen and Geoffrey:

“Now I can be even more accommodating. Some of you doubters out there may buy the way I propose to treat maximality via a Single-Universe view (via lengthenings and “thickenings”) but hide your money when it comes to the “reduction to the Hyperuniverse” (due to some weird dislike of countable transitive models of ZFC). OK, then I would say the following, something I should have said much earlier: Fine, forget about the reduction to countable transitive models, just stay with the (awkward) way of analysing maximality that I describe above (via lengthenings and “thickenings”) without leaving “the real V”! You don’t need to move the discussion to countable transitive models anyway; it was just what I considered to be a convenience of great clarification-power, nothing more!

Is everybody happy now? You can have your “real V” and you don’t need to talk about countable transitive models of ZFC. What remains is nevertheless a powerful way to discuss and extract consequences from the maximality of V in height and width. Of course you will make me sad if you block the move to ctm’s, because then you strip the programme of the name “Hyperuniverse Programme” and it becomes the “Maximality Programme” or something like that. I guess I’ll get over that disappointment in time, as it’s only a change of name, not a change of approach or content in the programme.”

I have a few general worries on this that, if assuaged, will help me better appreciate the view.

OK, let’s discuss the reduction to ctm’s anyway. I respond to your comments below.

I’m going to speak in “Universey” terms, just because it’s the easiest way for me to speak. Indeed, when I first heard the HP material, it occurred to me that this looked like an epistemological methodology for a Universist; we’re using the collection of all ctms as a structure to find out information (even probabilistic) about V more widely. If substantive issues turn on this way of speaking, let me know and I’ll understand better.

You are starting off just fine, this is a perfectly reasonable way to interpret the programme.

Let’s first note that in the wake of independence, it’s going to be a pretty hard-line Universist (read “nutty Universist”) who asserts that we shouldn’t be studying truth across models in order to understand V better.

Pen Maddy is not a nut! You can simply ground truth in what is good set theory and mathematics, as would a Thin Realist (right, Pen?) and not bother with all of this talk about models.

Indeed, the model theory gives us a fascinating insight into the way sets behave and ways in which V might be. However, it’s then essential to the HPer’s position that it is the “truth across ctms” approach that tells us best about V, rather than “truth across models” more generally.

The reduction to ctm’s asserts that even though looking just at ctm’s appears to be more restrictive, in fact it gives the same results as if you were to consider models more generally. More on this below.

I see at least two ways this might be established:

A. Ctms (and the totality of them) are more easily understood than other kinds of model.

For a simple reason: You can build lengthenings and thickenings of ctm’s, while you can’t do this for utm’s (uncountable transitive models). Consider forcing: For ctm’s, forcing extensions actually exist, but for utm’s they can only be “imagined”!
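For readers who would like this spelled out, here is the standard existence argument in a small LaTeX sketch (nothing HP-specific; the names $D_n$, $p_n$, $G$ are just the customary ones):

Let $M$ be a countable transitive model of ZFC and let $\mathbb{P} \in M$ be a partial order. Since $M$ is countable, only countably many dense subsets of $\mathbb{P}$ belong to $M$; enumerate them as $D_0, D_1, D_2, \dots$ and, using density, choose conditions
\[
  p_0 \geq p_1 \geq p_2 \geq \cdots \quad \text{with } p_n \in D_n \text{ for each } n.
\]
The filter $G$ generated by $\{p_n : n \in \omega\}$ meets every dense set lying in $M$, i.e.\ $G$ is $M$-generic, and $M[G]$ is again a countable transitive model of ZFC. For an uncountable transitive model there may be no such $G$ anywhere in $V$, which is the sense in which its forcing extensions can only be “imagined”.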

B. Ctms are a better guide to (first-order) truth than other kinds of model.

I wouldn’t say that; I would only say that they are just as good as using arbitrary models.

I worry that both A and B are false (something I came to worry about in the context of trying to use the HP for an absolutist).

B might be false (because of the word “better”) but A is true for rather vacuous reasons (due to the ability to bend and twist models which are countable, rather than to just “imagine” doing that).

A.1. It would be good if we could show two things to address the first question:

A.1.1. The Hyperuniverse is in some sense “tractable” in the sense that we can refer to it easily using fairly weak resources.
A.1.2. The Hyperuniverse is in some sense “minimal”; we only have the models we need to study pictures of V. There’s no extraneous subject matter confusing things.

You are demanding! How about just: ctm’s are nice to work with and give accurate information about the maximality of V?

The natural way to assuage A.1.1. for someone who accepts something more than just first-order resources is to provide a categoricity proof for the hyperuniverse from fairly weak resources (we don’t want to go full second-order; it’s the very notion of arbitrary subset we’re trying to understand). I thought about doing this in ancestral logic, but this obviously won’t work; there are uncountably many members of the Hyperuniverse and the downward LST holds for ancestral logic. So, I don’t see how we’re able to refer to the hyperuniverse better than just models in general in studying ways V might be.

(Of course, you might not care about categoricity; but lots of philosophers do, so it’s at least worth a look.)

Probably I miss your point here, but it may be that you have overlooked the “dualism” between the Hyperuniverse and V that Claudio has emphasized. Note that the Hyperuniverse is just as ill-defined as V itself; it depends heavily on V. The suggestion of the “reduction to the Hyperuniverse” is only that by working with the Hyperuniverse (a particular multiverse conception) one can conveniently phrase maximality issues in a way that is faithful to the meaning of maximality for V. There is no presumption that the Hyperuniverse is any more “categorical” or “tractable” than V itself.

Re: A.1.2. The Hyperuniverse is not minimal. For any complete, maximal truth set T of first-order sentences consistent with ZFC, there are many universes in H satisfying that truth set. So really, for studying “first-order pictures of V” there’s lots in there you don’t need.

That is the whole point! You have to start somewhere, and the Hyperuniverse is the natural starting point, as it is the arena in which the mathematics of set theory takes place (recall how forcing works: “Let M be a ctm and P a partial order in M; we then consider P-generic extensions of M and show that they exist, also as ctm’s”). The incompleteness of ZFC is precisely manifested in the fact that the Hyperuniverse is “too big” and must be thinned out to its subcollection consisting of those universes which best exhibit maximality features.

So, I’d like to hear from the HPers the sense in which we can more easily access the elements of H. One often hears set theorists refer to ctms (and indeed Skolem hulls and the like) as “nice”, “manageable”, “tractable”. I confess that in light of the above I don’t really understand what is meant by this (unless it’s something trivial like guaranteeing the existence of generics in V). So, what is meant by this kind of talk? Is there anything philosophically or epistemically deep here?

In my view there is nothing deep here, it is only the observation that the Hyperuniverse is closed under the model-building methods of set theory: No matter what kind of forcing or infinitary logic construction we do to create new models from ctm’s, we end up again with a ctm. The deeper point is that ctm’s do a faithful job of representing what is implied by the maximality of V in height and width. This latter point is not obvious.

By the way, I have heard “nice” and “manageable” but never “tractable” in reference to the Hyperuniverse.

And when one says that the Hyperuniverse is more “accessible” than V one cannot really mean this literally, as it is just as ill-defined as V and thoroughly dependent upon V. Instead it only means that in the Hyperuniverse one can stretch one’s elbows and explore many different pictures of V, something which is awkward to do sticking just with V. Again, think about forcing extensions of V; how does one gain access to those? The only way is to imagine them, as you have no context in which to build them. In the HP you observe that the properties of forcing extensions, indeed of arbitrary outer models, that you want to explore are nevertheless “almost first-order” in V (they are first-order in slight lengthenings of V) and therefore what you conclude about these properties would be the same if you were to replace V with a countable version of itself.
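A rough schematic of how I read this reduction, in LaTeX (only a sketch; the names $V^{+}$, $v^{+}$, $v$ and $\varphi$ are illustrative, not official HP notation):

Suppose the maximality property in question is expressed by a first-order sentence $\varphi(V)$ interpreted in a slight lengthening $V^{+}$ of $V$, i.e.\ a model of enough set theory in which $V$ appears as a rank initial segment. Take a countable elementary submodel of $V^{+}$ containing $V$ as an element and let $v^{+}$ be its transitive collapse, with $v$ the image of $V$. Then
\[
  V^{+} \models \varphi(V) \quad \Longleftrightarrow \quad v^{+} \models \varphi(v),
\]
so what the property says about $V$ (as seen from its slight lengthening) is exactly what it says about the countable transitive model $v$, as seen from $v$'s own slight lengthening.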

On to B. Are ctms a better guide to truth in V than other kinds of model? Certainly on the Universist picture it seems like the answer should be no; various kinds of construction that are completely illegitimate over V are legitimate over ctms; e.g. \alpha-hyperclass forcing (assuming you don’t believe in hyperclasses, which you shouldn’t if you’re a Universist).

Wait a minute, slow down! Re-read the comments of Pen and Geoffrey about this. They have entertained a height potentialism which naturally permits the addition of alpha-many new von Neumann levels on top of V! For them, there is no problem talking about “alpha-hyperclasses” (P&G, please confirm).

So maybe you are talking about a width and height actualist? As I have said, the HP becomes very awkward with such a limitation.

Why should techniques of this kind produce models that look anything like a way V might be when V has no hyperclasses?

I haven’t gotten into this because I have been at least presuming height potentialism. I can say something about an HHP, a Handcuffed version of the HP, if you like, but let’s postpone that a bit.

Now maybe a potentialist has a response here, but I’m unsure how it would go. Sy’s potentialist seems to hold that it’s a kind of epistemic potentialism; we don’t know how high V is so should study pictures on which it has different heights.

Pen and Geoffrey, please help me here! You have talked favourably of height potentialism and Neil thinks there’s something afoul here.

But given this, it still seems that hyperclasses are out; whatever height V turns out to have, there aren’t any hyperclasses.

Huh? If you add one new level to V you get classes, if you add two new levels to V you get hyperclasses, if you add alpha new levels to V you get alpha-hyperclasses. Plain as pie.

Or are you really suggesting that we don’t know what the height of V is, but given any guess at that we have to stop ourselves from thinking that it could have a greater height? Now that is nutty!

If one wants to look at pictures of V, maybe it’s better just to analyse the model theory more generally with standard transitive models and a ban on hyperclass forcing?

Ban on hyperclass forcing? What? Maybe it’s time for Neil Barton to make a confession: You are an actualist, right? For you, V has a fixed height and width and it is nonsense to think about increasing height or width, right? Please come clean here, it’s OK, I will still like you.

But now you have a problem. I assume that you are OK with classes. And you know what it means for a class relation E on V to be wellfounded and extensional. So you know what it means for the structure (V,E) to be a model of ZFC which has the standard (V,epsilon) as a rank initial segment. In other words, whether you like it or not, you have no problem thinking about lengthenings of V in terms of classes. So you are being dragged kicking and screaming into the world of Hyperclasses! Do you really have a coherent explanation for why these class structures which represent lengthenings of V do not exist?
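To make this concrete, here is one way to spell it out in LaTeX (my formalisation, with the usual conventions, not a quotation of any official definition) of what it means for a class relation $E$ to code a lengthening of $V$:

\begin{itemize}
  \item $E \subseteq V \times V$ is a class relation which is wellfounded and extensional;
  \item $(V, E) \models \mathrm{ZFC}$ (or enough of it for the purpose at hand);
  \item there is a class $I$, closed under $E$-predecessors, such that $(I, E \cap (I \times I))$ is isomorphic to $(V, \in)$ and every element of $V \setminus I$ has $E$-rank at least $\mathrm{Ord}$, so that $(V, \in)$ sits inside $(V, E)$ as a rank initial segment.
\end{itemize}

All of this is stated using only classes over $V$; and since classes over such a lengthening play the role that hyperclasses play over $V$, accepting these structures is, in effect, already accepting talk of hyperclasses.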

[A note; like Pen I have worries that one can't make sense of the hybrid-view. The only hybrid I can make sense of is to be epistemically hyperuniversist and ontologically universist. I worry that my inability to see the "real" potentialist picture here is affecting how I characterise the debate.]

The hybrid works perfectly well with height potentialism (“multiversism light”); starting with that, the move to the rich multiverse perspective provided by the Hyperuniverse is OK and dualises nicely (as Claudio said) with the single-universe view of V (augmented by height potentialism). The only hybrid that worries me is one that mixes actualism about V with the HP, as actualism doesn’t allow you to make the moves that are essential to analysing maximality. But in my view actualism in height (as opposed to actualism in width) is not really a coherent view; it seems that at least Hilary and Geoffrey will agree.

But if you want me to take a stab at the HHP (the Handcuffed HP) I will be happy to do so.

Anyway, I’m sympathetic to the idea that I’ve missed a whole bunch of subtleties here. But I’d love to have these set to rights.

I am very grateful for your interesting comments! I have surely come to understand the HP much better as a result of comments like yours.

P.S. I’ve added my good friend Chris Scambler, who was interested in the discussion, to the list. I hope this is okay with everyone here.

P.P.S. If there are responses I’ll try to reply as quickly as I can, but time is tight currently.

I know what you mean. I can’t recall when time was not tight!

All the best, Sy

PS: We will (indeed must) have a Hyperuniverse Project meeting at the KGRC in September 2015. I hope that this can be merged with whatever plans you, Toby or others may have for meetings on the philosophy of mathematics in the second half of 2015. I haven’t forgotten about the London SOTFOM2 in January.

Re: Paper and slides on indefiniteness of CH

Dear HP-ers, HP-worriers, and friends,

In this thread (which I confess has been moving pretty quickly for me; I’ve read it all but do apologise if I’m revisiting some old ground) we’ve seen that the key claim is that there is a deep relationship between countable transitive models and some V, either real, ideal, or set within a multiverse. I have a few general worries on this that, if assuaged, will help me better appreciate the view.

I’m going to speak in “Universey” terms, just because it’s the easiest way for me to speak. Indeed, when I first heard the HP material, it occurred to me that this looked like an epistemological methodology for a Universist; we’re using the collection of all ctms as a structure to find out information (even probabilistic) about V more widely. If substantive issues turn on this way of speaking, let me know and I’ll understand better.

Let’s first note that in the wake of independence, it’s going to be a pretty hard-line Universist (read “nutty Universist”) who asserts that we shouldn’t be studying truth across models in order to understand V better. Indeed, the model theory gives us a fascinating insight into the way sets behave and ways in which V might be. However, it’s then essential to the HPer’s position that it is the “truth across ctms” approach that tells us best about V, rather than “truth across models” more generally. I see at least two ways this might be established:

A. Ctms (and the totality of them) are more easily understood than other kinds of model.

B. Ctms are a better guide to (first-order) truth than other kinds of model.

I worry that both A and B are false (something I came to worry about in the context of trying to use the HP for an absolutist).

A.1. It would be good if we could show two things to address the first question:

A.1.1. The Hyperuniverse is in some sense “tractable” in the sense that we can refer to it easily using fairly weak resources.
A.1.2. The Hyperuniverse is in some sense “minimal”; we only have the models we need to study pictures of V. There’s no extraneous subject matter confusing things.

The natural way to assuage A.1.1. for someone who accepts something more than just first-order resources is to provide a categoricity proof for the hyperuniverse from fairly weak resources (we don’t want to go full second-order; it’s the very notion of arbitrary subset we’re trying to understand). I thought about doing this in ancestral logic, but this obviously won’t work; there are uncountably many members of the Hyperuniverse and the downward LST holds for ancestral logic. So, I don’t see how we’re able to refer to the hyperuniverse better than just models in general in studying ways V might be.

(Of course, you might not care about categoricity; but lots of philosophers do, so it’s at least worth a look.)

Re: A.1.2. The Hyperuniverse is not minimal. For any complete, maximal truth set T of first-order sentences consistent with ZFC, there are many universes in H satisfying that truth set. So really, for studying “first-order pictures of V” there’s lots in there you don’t need.

So, I’d like to hear from the HPers the sense in which we can more easily access the elements of H. One often hears set theorists refer to ctms (and indeed Skolem hulls and the like) as “nice”, “manageable”, “tractable”. I confess that in light of the above I don’t really understand what is meant by this (unless it’s something trivial like guaranteeing the existence of generics in V). So, what is meant by this kind of talk? Is there anything philosophically or epistemically deep here?

On to B. Are ctms a better guide to truth in V than other kinds of model? Certainly on the Universist picture it seems like the answer should be no; various kinds of construction that are completely illegitimate over V are legitimate over ctms; e.g. \alpha-hyperclass forcing (assuming you don’t believe in hyperclasses, which you shouldn’t if you’re a Universist). Why should techniques of this kind produce models that look anything like a way V might be when V has no hyperclasses? Now maybe a potentialist has a response here, but I’m unsure how it would go. Sy’s potentialist seems to hold that it’s a kind of epistemic potentialism; we don’t know how high V is so should study pictures on which it has different heights. But given this, it still seems that hyperclasses are out; whatever height V turns out to have, there aren’t any hyperclasses. If one wants to look at pictures of V, maybe it’s better just to analyse the model theory more generally with standard transitive models and a ban on hyperclass forcing?

[A note; like Pen I have worries that one can’t make sense of the hybrid-view. The only hybrid I can make sense of is to be epistemically hyperuniversist and ontologically universist. I worry that my inability to see the “real” potentialist picture here is affecting how I characterise the debate.]

Anyway, I’m sympathetic to the idea that I’ve missed a whole bunch of subtleties here. But I’d love to have these set to rights.

With Best Wishes,

Neil.

P.S. I’ve added my good friend Chris Scambler, who was interested in the discussion, to the list. I hope this is okay with everyone here.

P.P.S. If there are responses I’ll try to reply as quickly as I can, but time is tight currently.