Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.
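
[For orientation, in rough form and leaving aside the details of the exact formulations: the base \textsf{IMH} asserts that if a first-order sentence \varphi holds in an inner model of some outer model of V, then \varphi already holds in an inner model of V; \textsf{IMH}(\Phi) restricts the universes considered to those satisfying \Phi (Inaccessibles, Mahlos, and so on); and \textsf{IMH}^\# restricts them to the #-generated ones.]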

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It stands a chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem  is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it, Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples: Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, the phrase involves evidence in the straightforward sense that I have been talking about.  (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. They contain both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinal existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter

[Attachments: Chiemsee_1, Chiemsee_2]

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think we are approaching the point where a summary of this discussion is in order. The mathematical dust has largely settled (as far as the program as it currently stands is concerned), thanks to Hugh’s contributions. But one major matter of a more philosophical nature still remains unclear to me — it has to do with my original question of whether you are an actualist or a potentialist and ultimately with the picture that forms the backdrop of your program. To get clear on this matter I will have to recapitulate a good part of this discussion. Please bear with me.

In response to my original question, on Sept. 14 you wrote:

I am a radical potentialist, indeed you might say a Skolem-worshipper! (Remember what Pen quoted from my article with Tatiana: “V is a product of our own”!) Indeed my view is that there is no real V, but instead a huge wealth of different “pictures of V”. Given any such picture P of V, let V_P denote the universe depicted by P; there are pictures P* of V such that V_P is a rank-initial segment of V_{P^*}, a proper inner model of V_{P^*}, or (here’s the radical Skolem-worshipping) even countable in V_{P^*}! So yes, it is a given for me that you can lengthen or thicken a picture of V, in fact you can make it countable!

Now here is a possible source of confusion: Sometimes one fixes an initial picture P and corresponding universe V_P as a reference picture; in that case one can talk about cardinality, in reference to V_P. But of course V_P itself is countable from the perspective of a bigger V_{P^*}, so there is no absolute notion of “countable”, only a relativised one.

In response to this — with focus on the line “there is no absolute notion of “countable”, only a relativized one” — on Sept. 15 I responded:

I thought: “He can’t mean this! Look. If everything is countable from the perspective of an enlargement and if enlargements always exist then everything is countable. (Of course it can be countable from a local perspective — when one has one’s blinkers on and looks no further — but given the tenet that it is countable from a higher perspective it follows that it is ultimately countable.) But if everything is countable — or if “there is no absolute notion of `countable’, only a relativized one” — then how can he be understanding CH? This whole exchange was sparked with the presentation of a new and promising approach to questions like CH, one that promised to reinvigorate “intrinsic justifications” to the point where they could touch questions like CH. But now it seems that on this approach the straightforward sense of CH has evaporated. Indeed it seems that set theory has evaporated!”

Your view, as described above, is indeed like that of Skolem. But Skolem (rightly) took this view to involve a rejection of set theory. And yet you don’t. You seem to want to have it both ways: reject an absolute notion of countability and say something about CH (beyond that it has no meaning).

This got me greatly confused.

But then in the outline of the HP program that you sent on the same day things changed. For in that outline you speak of mental pictures of “the universe V of all sets” and you write: “But although we can form mental pictures of other universes, the only such universes we can actually produce are wholly contained within V, simply because V by its very definition contains all sets.” So now you appear to be an actualist and not a potentialist at all. (Of course you are a potentialist with regard to the little V’s — the countable transitive models of ZFC — but we are all potentialists with regard to those, trivially.)

So, which is it: Are you a potentialist or an actualist?

On Sept. 15 you responded:

OK, now to radical potentialism: Maybe it would help to talk first about something less radical: Width potentialism. In this any picture of the universe can be thickened, keeping the same ordinals, even to the extent of making ordinals countable. So for any ordinal alpha of V we can imagine how to thicken V to a universe where alpha is countable. So any ordinal is “potentially countable”. But that does not mean that every ordinal is countable! There is a big difference between universes that we can imagine (where our \aleph_1 becomes countable) and universes we can “produce”. So this “potential countability” does not threaten the truth of the powerset axiom in V!

At that point I thought: “OK, I think I am getting a grip on the picture: Sy distinguishes between extensions that “actually” exist and extensions which “potentially” (or “virtually”) exist. When talking about extensions that actually exist (lengthenings and thickenings) he doesn’t use scare quotes but when talking about extensions that do not actually exist but only potentially (or virtually) exist he uses scare quotes — as, for example, when in the context of width-actualism he speaks of “thickenings”.”

Let’s take stock: (1) In the case of countable transitive models of ZFC we all agree that there are actual lengthenings and thickenings (no scare quotes). And we can agree that there is always such an actual extension in which any given model is seen to be countable. (2) In the context of width-actualism there are actual lengthenings but only virtual thickenings (“thickenings”, with scare quotes — the “imaginary” extensions). And we can agree, via Jensen coding through class forcing, that there is always such a virtual thickening (“thickening”, with scare quotes — an “imaginary” extension) in which the model is “seen” to be countable. (3) But in distinguishing your radical potentialism from width actualism + height potentialism you must endorse actual lengthenings and actual thickenings and, moreover, such actual extensions in which any given transitive model of ZFC (whether countable or not) is (actually) seen to be countable. But then everything is ultimately countable, as I pointed out and you rejected.

This got me greatly confused. It seemed we were back to where we started.

I doubted that until, on 21 Oct, you wrote:

Extreme Multiverse view: We have no single V but a wealth of different possible V’s. This wealth is so wealthy that any particular V can be thickened or lengthened (no quotes!) and shockingly, made countable by going to a larger V. So there is no absolute notion of cardinality, only distinct notions of cardinality within each of the possible V’s. OK, now when talking about maximality of a possible V we simply mean that lengthening or thickening V will not reveal new properties that we couldn’t already see in V. (Note: One could go further and even look at blowups of V which see V as countable, but mathematically this doesn’t seem to add much.) Then when we talk of a first-order statement like not-CH being a consequence of maximality we mean that it holds in all of the possible V’s which are maximal.

Frankly speaking, the Extreme Multiverse View is my own personal view of things and gives the cleanest and clearest approach to studying maximality. That’s because it allows the freedom to make all of the moves that you want to make in comparing a possible V to other possible V’s.

Note that the multiverse described above looks exactly like the Hyperuniverse of a model of ZFC. In other words, the Extreme Multiverse View says that whether or not we realise it, we live in a Hyperuniverse, and we are kidding ourselves when we claim that we have truly uncountable sets: Some bigger universe looks down at us and laughs when she hears this, knowing perfectly well that we are just playing around with countable stuff.

In the second sentence you emphatically indicate that there are no scare quotes — these lengthenings and thickenings actually exist — there is “no absolute notion of cardinality”.

So we are back to where we started, to the view I thought you held all along.

I thus repeat my earlier point:

But now it seems that on this approach the straightforward sense of CH has evaporated. Indeed it seems that set theory has evaporated! Your view, as described above, is indeed like that of Skolem. But Skolem (rightly) took this view to involve a rejection of set theory. And yet you don’t. You seem to want to have it both ways: reject an absolute notion of countability and say something about CH (beyond that it has no meaning).

I hope you don’t repeat your earlier response:

So any ordinal is “potentially countable”. But that does not mean that every ordinal is countable! There is a big difference between universes that we can imagine (where our aleph_1 becomes countable) and universes we can “produce”. So this “potential countability” does not threaten the truth of the powerset axiom in V!

Because if you do we will be caught in a loop…

I suspect your view has changed. Or not changed. In any case, what is your view?

Best,
Peter

Re: Paper and slides on indefiniteness of CH

This message will try to say where we are in a generally understandable way.

1. Sol Feferman originally put out a request for comments on his paper and slides concerning the indefiniteness of CH. Sol maintains that CH = continuum hypothesis is neither a mathematical NOR a logical problem. I think Sol is pretty definite that CH is never going to become a mathematical problem, and is definitely not currently a logical problem. And that there is some realistic possibility of it becoming a logical problem, provided some theory emerges of sufficiently widespread acceptance or interest in the set theory community, with the question of the status of CH within that theory.

NOTE: I have been wanting to turn to the Woodin program (Ultimate L) to see if this is clear, simple, and coherent enough to turn CH into a logical problem. I am planning to approach this from a high level generally understandable perspective shortly to see what shakes out. We have seen that the HP, at least in its present form, does not meet such standards.

2. My own position has a lot of overlap with Sol’s but differs in detail and emphasis. The bottom line is probably that both Sol and I agree that “the program of “settling CH” is not a relatively promising area of research in the foundations of mathematics”. In my foundational methodology, I never subscribe to philosophical views, but do think in terms of which side has the better arguments. I believe that arguments can always be strengthened, and attacks against arguments can also always be strengthened. I don’t come to a definite conclusion that “CH is not a mathematical problem”, even if prospects look poor right now for it being so. One thing is clear: it is not an important mathematical problem right now given the present research activities in mathematics today. It was at the time of Hilbert’s problem list. An interesting discussion is just why and how this changed, and whether this is simply due to Goedel/Cohen. I believe that the story goes well beyond Goedel/Cohen here, but this message is not about that. (Sol does deal with this, but I don’t think that it is the last word).

3. In particular, right now, the arguments against CH “having a definite truth value” are stronger than the arguments for CH “having a definite truth value”: but the argument against “having a definite truth value” being definite enough for most philosophical purposes is, in my view, stronger than the argument for “having a definite truth value” being definite enough for most philosophical purposes. Incidentally, I was recently on the phone with a well known philosopher who strongly disagrees with me about this, and regards CH as (or equivalent to) a clearly stated problem in higher order logic, which he regards as automatically having a “definite truth value” in a “definite sense”.

4. There was little traffic in reaction to Sol’s original request for comments. Until Sy wrote forcefully and extensively about a “program” called HP = hyperuniverse program. Sy urged Sol to incorporate an account of HP in his paper(s). The headline statement of HP was very simple: study the countable transitive models of ZFC and their relationships, and this will reveal the “correct” or “best” axioms for set theory, including axioms that might well settle CH.

5. There were numerous attempts by Sy to justify the claim that such a study of countable transitive models of ZFC (dubbed the hyperuniverse) would provide the “correct” or “best” axioms for set theory, mostly under some form of “intrinsic maximality of the set theoretic universe”. Sy attempted to make this philosophically and foundationally coherent, but left a lot of objections by Pen and Peter unanswered (at least to the satisfaction of Pen and Peter and many others). One of his coworkers in HP gave Pen answers in direct, fundamental contradiction with those of Sy. No “reconciliation” between these opposing views, a contradiction that Pen pointed out very clearly, has been given.

6. On the philosophical side, Sy attempts to convince us that HP is a foundational program that responds to the “intrinsic maximality of the set theoretic universe”, without getting involved in the badly needed analysis of just what “intrinsic maximality of the set theoretic universe” means, or could mean, or should mean. Because of the lack of such a discussion – let alone creative ideas about it – it does not appear that anyone on this list, except a handful of HP coworkers, is being persuaded that HP is a legitimate foundational program.

7. In this connection, both Hugh and I believe that the HP is better viewed and better named as CTMP = countable transitive model program. There was an attempt by Sy to claim that ctms (countable transitive models) are of fundamental foundational importance for present purposes, based on the downward Löwenheim-Skolem theorem. However, that is merely a technical point that comes after a framework for analyzing “intrinsic maximality of the set theoretic universe” is first accepted. In the absence of a careful and persuasive discussion of that framework, there is no relevant use of the Löwenheim-Skolem theorem. Also, Hugh pointed out that in some of the recent HP proposals, the link between arbitrary sets and countable sets is broken. Another unanswered question of Hugh for Sy.

8. Another even more critical unanswered question of Hugh for Sy simply asks for clarity concerning what happens in CTMP (aka HP) after we see that the IMH contradicts the existence of inaccessible cardinals. IMH = inner model hypothesis is the initial assertion coming out of the CTMP (aka HP). Hugh asked recently (and I think Hugh has been asking for months) for this clarity, and I have been expecting a response from Sy.

9. The issue of what happens in CTMP (aka HP) after IMH is critical. That IMH refutes the existence of an inaccessible cardinal should immediately make a thorough analysis of just what is meant by “intrinsic maximality of the set theoretic universe” urgent. Instead, Sy chose to reject IMH in favor of a number of “fixes”. Hugh has not gotten a satisfactory response as to what these “fixes” are. Furthermore, back channels affirm my suspicions that these “fixes” are ad hoc, taking the large cardinals as given, and then layering a kind of IMH on top of them. If this is the plan, then we really have, prima facie, a serious dose of philosophical incoherence.

10. With Hugh, Pen, Peter, Geoffrey, we see that Sy has to some extent lived up to his professional responsibility for interactive engagement (given that Sy has forcefully pushed the HP) – but only up to a point. There is still a significant degree of non-responsiveness. If Sy would follow the principle of writing in generally understandable ways whenever practical, the non-responsiveness and drawbacks would be apparent, and the discussion would be much more productive. However, with me, there is a complete refusal to engage. If he had engaged in a professional manner, the prima facie emptiness of the HP would have been addressed months ago and either a new idea would have emerged from the interaction, or the HP would have simply morphed, rather pleasantly and uneventfully, into the not uninteresting CTMP.

11. Having been appalled at the utter waste of time for the overwhelming majority of people on this list, at least compared to what it could have been, I took the plunge and started a discussion of “intrinsic maximality in the set theoretic universe”. I was especially motivated by Sy saying that “intrinsic maximality of the set theoretic universe” doesn’t even generate AxC. This was in the context of all sorts of pronouncements about what it does generate, of a comparatively technical nature. Sy’s coworker states unequivocally that “intrinsic maximality of the set theoretic universe” generates all of ZFC.

12. Already this has led to what appears to be a new study of AxC and variants in the absolutely classically fundamental contexts of satisfiability of sentences in first order predicate calculus. But I am hoping to have a lot more to say about “intrinsic maximality” not only in the set theoretic universe, but much more generally. It appears that much of this thread is an important lesson in how NOT to do philosophy and foundations, and in the fruits that come out when philosophy and foundations are done competently.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy (and all),

Thank you for your summary.  I summarized my position on 9/13 and will include that message below for anyone who might be interested.

It seems to me that there are still a number of issues connected to the HP in need of exploration:  e.g., the nature and role of your ‘potentialism’, the rationale for your focus on countable models, the strength and mathematical attractions of the various principles generated by your methods.  Perhaps the best course at this point would be to reconstitute the list on an ‘opt-in’ basis, or perhaps mechanize it, so that people could ‘opt-out’ as they see fit.  What do you think?

All best,
Pen

From 9/13:

Dear Sy,

Thank you for your forthright responses to Peter and me.  I think you and I can at least agree that the discussion between you and me has reached a natural conclusion.

You’ve explained how the landscape now looks to you, so I’ll try to sketch how it now looks to me.  First one last summary of the line of thought we’ve been following:

1.  You began by identifying set-theoretic truth with what’s intrinsic to a concept of the set-theoretic universe that appeared to be idiosyncratic.

2.  You switched to the familiar iterative conception, with vague principles of maximality of height and width, and argued for slightly-less-vague versions of these principles.

3.  Concerns were raised about the claim that these slightly-less-vague principles were intrinsic to the familiar iterative conception.

4.  You switched to investigating what’s intrinsic to a different concept which you call ‘a radical form of potentialism’ (in your message to Peter).

By this route, we’ve returned to something in the rough vicinity of (1), with two big differences:  the idiosyncratic concept has been described in a bit more detail, and you don’t claim that being intrinsic to this new concept is the only form of set-theoretic truth, just that it’s one form.

So we have this:   you aren’t squeezing intrinsic principles out of a concept familiar to all of us, a concept that we all know to have helped generate an impressive mathematical theory (to put it mildly).

And we have this:  you don’t care whether the project of investigating the intrinsic content of this new concept produces any good mathematics.  You think this is irrelevant to its value (as documented in my recent message to Harvey).

So much for summary.  Here’s how the situation now looks to me.  Granted, there’s a sense of ‘true’ as in ‘true to Sy’s new concept’.  But there’s a pretty much unlimited supply of concepts available to us humans, or devisable by us, and some truths intrinsic to each one of them.  Only a very small fraction of these concepts (and their attendant intrinsic truths) are of any mathematical interest whatsoever.  You propose to investigate one of these many concepts, but only for its own sake, not with the goal of finding anything of mathematical value.  I don’t see why anyone has reason to sign onto this project, or to care about it one way or the other, unless it reveals some mathematical interest despite you.

Of course, if you’re willing to use your new concept to generate some set-theoretic principles and put those principles up to the test of their mathematical value, then of course let’s get on with it!

Thanks again for your patience with all my inquiries.

Best wishes,
Pen

Re: Paper and slides on indefiniteness of CH

Dear all,

Sol, and most recently Dana, have requested a summary of the debate. I think this is a good idea, as Pen has suggested that her debate with me has drawn to a close. Hugh and I already summarised our positions in my debate with him; below is a summary of my position in the Sy-Pen debate regarding the HP. I would not dare to try to summarise Pen’s position (earlier I tried to summarise Hugh’s position in the Sy-Hugh debate and got it wrong).

Sy’s position on the HP

  1. The Maximality of the universe of sets is an intrinsic feature of the set-concept. There are other features (like Omniscience) which, although not clearly “intrinsic”, are in a vague sense “desirable” and worthy of a better understanding.
  2. Maximality is legitimately analysed in terms of principles of comparison between pictures of the set-theoretic universe. Included in this comparison are (external) enlargements in which a given depicted universe is “lengthened”, “thickened” or even seen to be countable. The Maximal pictures are those for which enlargement does not reveal first-order properties that are not already present internally.
  3. The potential countability of our depicted universes facilitates a reduction to the Hyperuniverse, the collection of countable transitive models of ZFC. In particular, to derive a first-order sentence from Maximality it suffices to see that it holds in all universes of the Hyperuniverse which survive the Maximality-Test when compared to other universes of the Hyperuniverse. (A sketch of this reduction follows the list below.)
  4. In the Hyperuniverse, the different possible ways to test Maximality become mathematical criteria for the selection of the Optimal (optimally-maximal) universes. This is the dynamic part of the programme, whereby different criteria are formulated, analysed, compared and unified, with the aim of converging on an Optimal Maximality Criterion. The universes which obey the Optimal Criterion are the Optimal universes, and any first-order sentence holding in all Optimal universes can be recognised as a consequence of the intrinsic feature of Maximality, i.e. true on intrinsic grounds.
  5. Finally, I acknowledge that the legitimacy of this approach to discovering new first-order statements based on intrinsic evidence hinges on the way this approach treats Maximality via depicted universes and formulates mathematical criteria for Maximality. If these are regarded as illegitimate then I still maintain that the programme is worthwhile for purely mathematical reasons, as it generates set-theoretic properties that would not have otherwise been explored and which demand the development of new set-theoretic methods as well as new uses for known methods.
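
[In rough form, the reduction in point 3 runs as follows, assuming the depicted universes are structures to which the downward Löwenheim-Skolem theorem applies: any picture V has a countable elementary submodel, and its transitive collapse is a countable transitive model of ZFC satisfying exactly the same first-order sentences as V. So if Maximality transfers to such a collapse, then any first-order sentence holding in all members of the Hyperuniverse that pass the Maximality-Test also holds in every Maximal picture V. The substantive step, of course, is the transfer of Maximality to the countable collapse.]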

I agree with Pen that there doesn’t seem to be a lot more for us to discuss regarding the HP. Although I said above that I wouldn’t dare to try to summarise her position, I think that I can safely say that she regards the way that the HP treats Maximality as illegitimate (although she welcomes any good mathematics that comes out of it). I can only hope that not everyone agrees with her. Time will tell.

Thanks to Pen, Peter, Harvey and Hugh for their comments. Of course I welcome further comments, especially ones sent privately to me, as I imagine it’s a lot easier to express oneself freely without having so many luminaries “listening in”. But as far as I’m concerned it is fine now to put this heavily public e-mail discussion to rest, should others feel the same way.

Best wishes to all,
Sy

Re: Paper and slides on indefiniteness of CH

Dear all,

For the sake of clarification in the discussion, I’d like to restate the main views in my paper regarding definite/indefinite mathematical and logical problems and what I take to be the inherent vagueness of the concept of arbitrary subset of an infinite set, be it the natural numbers, the real numbers, etc.  I find it simplest to quote myself.

At the beginning of the paper, I wrote:

I want to begin by distinguishing mathematical problems in the direct, or ordinary sense from those in the indirect, or logical sense.  This is a rough distinction, of course, but I think a workable one that is easily squared with experience.  Although the Continuum Hypothesis (CH) in any of its usual forms is prima facie a mathematical problem in the ordinary sense, it has become inextricably entwined with questions in the logical (i.e., metamathematical) sense.  I shall argue that for all intents and purposes, CH has ceased to exist as a definite problem in the ordinary sense and that even its status in the logical sense is seriously in question….

Mathematicians at any one (more or less settled) time find themselves working in medias res, proceeding from an accepted set of informal concepts and a constellation of prior results. The attitude is mainly prospective, and open mathematical problems formulated in terms of currently accepted concepts present themselves directly as questions of truth or falsity.  Considered simply as another branch of mathematics, mathematical logic (or metamathematics) is no different in these respects, but it is distinguished by making specific use of the concepts of formal languages and of axiomatic systems and their models relative to such languages.  So we can say that a problem is one in the logical sense if it makes essential use of such concepts.  For example, we ask if such and such a system is consistent, or consistent relative to another system, or if such and such a statement is independent of a given system or whether it has such and such a model, and so on.  A problem is one in the ordinary sense simply if it does not make use of the logical concepts of formal language, formal axiomatic system and models for such.  Rightly or wrongly, it is a fact that the overwhelming majority of mathematicians not only deal with their problems in the ordinary sense, but shun thinking about problems in their logical sense or that turn out to be essentially dependent on such.  Mathematicians for the most part do not concern themselves with the axiomatic foundations of mathematics, and rarely appeal to logical principles or axioms from such frameworks to justify their arguments.  …. But most importantly, as long as mathematicians think of mathematical problems as questions of truth or falsity, they do not regard problems in the logical sense relevant to their fundamental aims insofar as those are relative to some axioms or models of a formal language.

I speak here of mathematics in the ordinary sense and mathematical logic as ongoing enterprises, and the judgment as to whether a problem is of the one sort or the other is to some extent contextual.  The history shows that CH ceased to be a mathematical problem in the ordinary sense in 1904-1908, but it took a long while for people to realize that.  As far as I can tell, this has been accepted by the contributors to the discussion, except possibly for Bob Solovay (see also below).

Now, the further question whether a mathematical problem is definite or indefinite involves personal judgment to some extent.  But I expect when we go down the list of Hilbert’s problems or the Millennium problems, there will be substantial agreement as to whether a mathematical problem is definite (or definite relative to the background state of knowledge and efforts) or not (it might be programmatic, for example). So, from the point of view of 1900, CH is a definite problem, but in our current eyes, it is no longer. This is not a philosophical judgment but simply an assessment of the subject then and now.

The matter is different for logical problems.  In sec. 6 of the paper, I return to the question of the status of CH as a logical problem. I wrote:

Clearly, it can be considered as a definite logical problem relative to any specific axiomatic system or model.  But one cannot say that it is a definite logical problem in some absolute sense unless the systems or models in question have been singled out in some canonical way.

I can see that there could well be differences of opinion as to whether my criterion in terms of canonicity is the right one to take, and even if it is taken, that there can be greater differences in judgment as to whether a logical problem is definite or not (compared to the assessments above of mathematical problems). In the paper, I examined two approaches to CH as a logical problem, the \Omega-logic approach and the inner model program.  My conclusion was that neither of these yet meets the criterion to situate CH as a definite logical problem.  In the discussion, both Hugh and Sy have presented what they claim to be definite logical problems that are relevant to CH as a logical problem, but differ in their assessments of these.  I have not formed a final view on these matters, but am thus far not convinced by either of them. However, I intend to take their arguments into serious consideration in the final version of the paper.  (I have also pointed out earlier that there could well be other proposals for such that ought to be considered.)  Part of the differences between Hugh and Sy concern the weight to be given to “intrinsic” vs. “extrinsic” evidence.  Those terms are no more definite than “definite” and “indefinite”, and also involve matters of judgment.  I have questioned whether Sy’s use of “intrinsic” is a useful extension of Gödel’s and suggest that perhaps another term in its place would be more revealing of his claims.

In the final section 7 of the paper proper, I raised what I call the “duck” problem:

We saw earlier that for all intents and purposes, CH has ceased to be a definite mathematical problem in the ordinary sense. It is understandable that there might be considerable resistance to accepting this, since the general concepts of set and function involved in the statement of CH have in the last hundred years become an accepted part of mathematical practice and have contributed substantially to the further development of mathematics in the ordinary sense.  How can something that appears so definite on the face of it not be?  In more colloquial terms, how can something that walks like a duck, quacks like a duck and swims like a duck not be a duck?

I go on to say that “of course there are those like Gödel and a few others for whom there is no “duck” problem; on their view, CH is definite and we only have to search for new ways to settle it …”  But here I take “definite” in the sense that it “has determinate truth value” in some platonistic sense. Thanks to Bob’s remarks, I’m glad that I can class him among the few others. In view of Geoffrey’s appeal to “full” third order semantics over the natural numbers, I would so classify him too, but he might have reasons to resist.

The “duck” problem is a philosophical problem, not a question of what is definite or not as a mathematical or logical problem in the ongoing development of those subjects.  And as a confirmed anti-platonist, I have had to grapple with it.  In part because of all the circumstantial evidence discussed in the body of the paper concerning the problematic status of CH, my conclusion was as follows.

I have long held that CH in its ordinary reading is essentially indefinite (or “inherently vague”) because the concepts of arbitrary set and function needed for its formulation can’t be sharpened without violating what those concepts are supposed to be about.

Again, here, the question of whether something is “indefinite” is evidently different from its use in the body of the paper in assessing the status of CH as a mathematical and logical problem. I shall have to emphasize that in the final version of the paper.  Also the notions of definiteness and indefiniteness brought up in the appendix are philosophically motivated and have to be distinguished as such.

Finally, some (Harvey?) say that what is “inherently vague” is itself “inherently vague”.  On the contrary, I explain above exactly in what sense I am taking it.  That is why we can agree that sharpening of the concept of arbitrary set to that, e.g., of constructible set, or set constructible over the reals, etc., violates what that concept is supposed to be about.  I can’t prove that no such sharpening is possible, but that is my conviction and I have to leave it as it lies.

Best,
Sol

PS: In my view, the side discussion raised by Harvey and pursued by Geoffrey as to the methodology and the philosophy of the natural sciences–as interesting as that may be in and of itself–is not relevant to the issues here.

Re: Paper and slides on indefiniteness of CH

Dear Sol,

On Thu, 7 Aug 2014, Solomon Feferman wrote:

Dear Sy,

I’m very pleased that my paper has led to such a rich exchange and that it has brought out the importance of clarifying one’s aims in the ongoing development of set theory. Insofar as it might affect my draft, I still have much to absorb in the exchange thus far, and there will clearly be some aspects of it that are beyond my current technical competence. In any case, I agree it would be good to bring the exchange to a conclusion with a summary of positions.

I owe you a summary of my position in my debate with Hugh, much clarified by that debate, and offer it below. After speaking with Hugh it is evident that my attempt to also summarise his position was a failure so I won’t try that again.

Sy’s position in the Sy-Hugh Debate (over our different approaches to truth in set theory), triggered by Sol Feferman’s provocative article on CH

(The two phrases in “quotes” are explained at the end.)

  1. I make a sharp distinction between extrinsic and intrinsic evidence for truth in set theory and base my approach to truth in the Hyperuniverse Programme almost exclusively on intrinsic considerations; the only exception is my acceptance of the consistency of “nearly all” large cardinals, which I take to be extrinsically justified.
  2. I feel that the use of notions from set-theoretic practice (such as set-forcing or iterability) in a discussion of truth is illegitimate unless they can be described in terms of intrinsic features of sets and universes of sets.
  3. I am not persuaded by arguments in favour of the existence of large large cardinals and offer an explanation for large cardinal consistency without large cardinal existence in terms of their existence in inner models but not in V.
  4. I am not persuaded that PD is true as the result of the rich structure theory that it provides for the higher projective levels or by the claim (of Peter Koellner and perhaps others) that all sufficiently strong natural theories yield PD. I find the structure theory unconvincing, as analogous extrapolations from lower to higher projective levels provably fail, and I challenge the claim, as theories asserting the existence of inner models for very large cardinals need not yield PD.
  5. I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinal existence. Thus in response to Sol’s question, I feel that it is possible to resolve CH by logical means.

“nearly all”: If LC is a large cardinal notion and there is evidence that the consistency of some natural statement of set theory (not mentioning LCs, even in a disguised form) does not follow from Con LC (but does follow from the consistency of some stronger LC’) then I take LC to be consistent. So far this has been verified at least up to the level of supercompacts.

“cardinal-absolute parameters”: A parameter (like \aleph_\alpha for any recursive ordinal \alpha) is cardinal-absolute if it has a definition which is valid in all cardinal-preserving extensions.
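
[In rough form, suppressing details of the exact formulation: the \textsf{SIMH} asserts that if a first-order sentence \varphi(p), whose parameters p are cardinal-absolute in the above sense, holds in some cardinal-preserving outer model of V, then \varphi(p) already holds in some inner model of V. The \textsf{IMH} is the special case with no parameters and with arbitrary outer models.]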

Final comment:

A worthwhile project would be to find intrinsic justifications for large cardinal existence and intrinsically-based descriptions of technical notions from set-theoretic practice that suffice for Hugh’s arguments. This would be exciting as it would enable Hugh and me to combine our ideas in a powerful way.