Tag Archives: Reflection

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No. 2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded”, as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded”, as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it, Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples: Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don’t pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. They contain both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls “an alienation of truth from proof.” The view I have defended (in “Truth in Mathematics: The Question of Pluralism”) has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter

Chiemsee_1 Chiemsee_2

Re: Paper and slides on indefiniteness of CH

Dear Bill,

On Thu, 9 Oct 2014, William Tait wrote:

I think that somewhere in the vast literature in this thread piling up in my email box is the claim by Sy that the existence of cardinals up to Erdos \kappa_{\omega_1} (or something like that) is implicit in the notion of the iterative hierarchy.

No, Erdos \kappa_{\omega_1} gives you the existence of 0^\#, and my position is that Height-Maximality is consistent with V = L.

That does seem to me to be questionable; at least from the “bottom-up” point of view that what can be intrinsically justified are engines for iterating. Sy, as I recall, believes that axioms are intrinsically justified by reference to the universe V that they describe—a top-down point of view. No slander intended, Sy!

And no slander committed! I analyse Height Maximality as a height potentialist, comparing V to its lengthenings and shortenings. A close reading of my e-mails reveals an unannounced move on my part to distinguish Reflection from more powerful forms of Height Maximality (I arrive at #-generation, which implies the existence of all large cardinals compatible with V = L but does not imply the existence of 0^\#!).

So I am no longer challenging the limits of Reflection, but instead asserting that Height Maximality is much more than that.

Best,
Sy

PS: Today I fly to California for a week of “real set theory” (I hope we won’t talk about truth there!). My aim is to drop out of this discussion whilst there, as the collaboration will be all-consuming and leave no time for anything else. But maybe I’ll type out another e-mail or two to this group in case I am awake at some weird hour due to jet-lag.

Re: Paper and slides on indefiniteness of CH

Sy wrote:

In other words, we can discuss lengthenings and shortenings of V without declaring ourselves to be multiversers. Similarly we can discuss “thickenings” in quotes. No multiverse yet. But then via a Loewenheim-Skolem argument we realise that it suffices to work with a countable little-V, where it is natural and mathematically extremely useful to regard lengthenings and “thickenings” as additional universes. Thus the reduction of the study of Maximality of V to the study of mathematical criteria for the selection of preferred “pictures of V” inside the Hyperuniverse. The Hyperuniverse is of course entirely dependent on V; if we accept a new axiom about V then this will affect the Hyperuniverse. For example if we accept a little more than first-order reflection then a consequence is that the Hyperuniverse is nonempty.

If Sy would slow down and carefully explain in universally understandable terms just what he is talking about, we would all probably recognize that the use of the “Löwenheim-Skolem argument” is bogus.

I’m not sure that Sy is aware that there are some standards for doing philosophy and foundations of set theory (or anything else). Perhaps Sy believes that with enough energetic offerings of slogans, and enough seeking of soundbites from philosophers (which he has found are not all that easy to get), you can avoid having to come up with real foundational/philosophical ideas that work.

I am not aware of a single person on this email list who is inclined to believe that CTMP (aka HP) constitutes any kind of legitimate foundational program for set theory – at least on the basis of anything offered up here. (CTMP appears to be a not uninteresting technical study, but even as a technical study, it currently suffers from a lack of systemization – at least judging by what is being offered up here).

If there is a single person on this email list who thinks that CTMP (aka HP) constitutes any kind of legitimate foundational program for set theory, I think that we would all very much appreciate that they come forward and say why they think so, and start offering up some clear, deliberate, and generally understandable answers to the questions I raised a short time ago. I copy them below.

Now I am not primarily here to tear down silly propaganda. Enough of this has already been done by me and others. I am making efforts to steer this discussion into productive channels that meet that great standard: being generally understandable to everybody, with no attempt to mask flawed ideas — or seemingly unsound ideas — in a mixture of technicalities, slogans, and propaganda. I invited Sy to engage in a productive discussion that would meet at least minimal standards for how foundations and philosophy can be discussed, and he has refused to engage dozens of times.

So again, if there is anybody here who thinks that CTMP (aka HP) is a legitimate foundational program for set theory, please say so, and engage in the following questions I posted recently: In the meantime, I am finishing up a wholly positive message that I hope you are interested in.

QUESTIONS – lightly edited from the original list

Why doesn’t HP carry the obvious name CTMP = countable transitive model program? That is my suggestion and it has been supported by Hugh.

What does the choice of a countable transitive model have to do with “(intrinsic) maximality in set theory”? Avoid quoting complicated technicalities, meaningless slogans, or idiosyncratic jargon and adhere to generally understandable considerations.

At a fundamental level, what does “(intrinsic) maximality in set theory” mean in the first place?

Which axioms of ZFC are motivated or associated with “(intrinsic) maximality in set theory”? And why? Which ones aren’t and why?

What is your preferred precise formulation of IMH? E.g., is it in terms of countable models?

What do you make of the fact that the IMH is inconsistent with even an inaccessible (if I remember correctly)? If this means that IMH needs to be redesigned, how does this reflect on whether CTMP = HP is really being properly motivated by “(intrinsic) maximality in set theory”?

What is the simplest essence of the ideas surrounding “fixing or redesigning IMH”? Please, in generally understandable terms here, so that people can get to the essence of the matter, and not have it clouded by technicalities.

Overall, it would be particularly useful to avoid quoting complicated technicalities or idiosyncratic jargon and adhere to generally understandable considerations. After all, CTMP = HP is being offered as some sort of truly foundational program. Legitimate foundational programs lend themselves to generally understandable explanations with overwhelmingly attractive features.

Harvey

Re: Paper and slides on indefiniteness of CH

I just sent this posting to the FOM email list:

I am participating in a small email group concerning higher set theory, focusing originally on whether the continuum hypothesis is a “genuine” problem. The discussion has partly gone into issues surrounding the foundations of higher set theory. The ideas in this posting were inspired by the interchange there.

CAUTION: I am not an active expert in this kind of higher set theory, and so what I say may be either known, partly true, or even false.

As a modification of existing ideas concerning “maximality” I offer the following axiom over MK class theory with the global axiom of choice.

DEFINITION 1. Let \varphi be a sentence of set theory. We say that \varphi is set theoretically consistent if and only if ZFC + \varphi is consistent with the usual axioms and rules of infinitary logic. At the minimum, the standard axioms and rules of L_{V,\omega}, with quantifiers ranging over V, with \epsilon, =, but one may consider the much stronger language L_{V,\text{Ord}} where set-length blocks of quantifiers are used. I think that for what I am going to do, it makes no difference, so that there is stability here.

Then we can formulate in class theory,

POSTULATE A. If a sentence is set theoretically consistent then it holds in some transitive model of ZFC containing all ordinals. It follows that it holds in some L[x], x a real.

Now what happens if we relativize this in an obvious way?

DEFINITION 2. Let \alpha be an infinite ordinal. Let \varphi be a sentence of set theory. We say that \varphi is set theoretically consistent relative to P(\alpha) if and only if ZFC + \varphi is consistent with the usual axioms and rules of infinitary logic together with “all subsets of \alpha are among the actual subsets of \alpha”, where the latter is formulated in the obvious way using infinitary logic.

POSTULATE \textbf{B}_\alpha. If a sentence is set theoretically consistent relative to P(\alpha) then it holds in some transitive model of ZFC containing all ordinals.

POSTULATE C. Postulate \textbf{B}_\alpha holds for all infinite \alpha.

To prove consistency, or consistency of this for some \alpha, we seem to need

  1. An extension of Jensen’s coding the universe where we add a subset of \alpha^+ that codes the universe without adding any subsets of \alpha. This has probably been done.
  2. Cone determinacy for subsets of \alpha^+. E.g., using the equivalence relation x \sim y if and only if x, y are interdefinable over V_{\alpha^+}. I don’t know if this has been done.

Turning now to SRM once again from this link, let’s look at:

  1. Linearly ordered integral domain axioms.
  2. Finite interval. [x,y] exists.
  3. Boolean difference. A\setminus B exists.
  4. Set addition. A + B = \{x+y: x \in A\text{ and }y \in B\} exists.
  5. Set multiplication. A\times B = \{xy: x \in A\text{ and }y \in B\} exists.
  6. Least element. Every nonempty set has a least element.
  7. n^0 = 1. m \geq 0 implies n^{m+1} = n^m \times n. n^m is defined only if m \geq 0.
  8. \{n^0,\dots,n^m\} exists.
  9. \{0 + n^0, 1 + n^1,\dots,m + n^m\} exists.

I originally said that 1-8 is a conservative extension of EFA, which is wrong. I corrected this by saying that 1-7,9 is a conservative extension of EFA. There are some subtle points, and it is possible that I might need some standard laws of exponentiation, maybe just n^{m+r} = n^m \cdot n^r, or even less. BUT: much easier is to use the full 1-9. 1-9 is a conservative extension of EFA. I really like the Foundational Traction here, as I unravel the subtleties of the situation properly.

Now to justifications of SRP.

On Sep 13, 2014, at 2:18 AM, Rupert McCallum wrote:

William Tait wrote an essay that appeared in “The Provenance of Pure Reason” called “Constructing Cardinals from Below” which discussed a set of reflection principles that justify SRP. Unfortunately Peter Koellner later observed that some of the reflection principles he considered were inconsistent. I wrote down my own thoughts in a recent Mathematical Logic Quarterly article about how one might find principled grounds for distinguishing the consistent ones from the inconsistent ones.

Bill Tait wrote:

Thanks for the announcement, Rupert; I look forward to reading the paper.

In the interests of immodesty, let me mention that there is a bit of unclarity: I considered reflection principles G^m_n for m, n < \omega and used the G^m_2 to derive the existence of m-ineffable cardinals. I also proved the G^m_2 consistent relative to a measurable. Peter showed that they are consistent relative to \kappa_\omega.

What was unfortunate was certainly not that Peter found the G^m_n inconsistent for n > 2; rather it was my proposing them.

This illustrates a systematic problem with attempts at philosophical foundations of higher set theory. One formulates a reasonable looking idea, makes some associated philosophy, and obtains some partial information. Then perhaps somebody shows there is an inconsistency. Then one adjusts the philosophy to explain why the stuff that seems fine was a good idea, and the stuff that turned sour was, in retrospect, a bad idea.

Of course, we have seen that proposals that survive a lot of attention and work, especially detailed structure (not so clear what this means for proposals compatible with V = L), invariably have not, so far, led to inconsistencies. Reinhardt’s j: V \to V wasn’t around too long (Kunen refuted it), and didn’t have any structure theory. However, I1, I2, I3 have been around a long time; there is not too much structure theory, but still a fairly substantial amount of research. What confidence should we have that they are consistent?

I don’t have any problem with this process as research – there is little or no alternative. BUT, in order to respond to inquiries like “Why should I believe subtle cardinals exist, or that they are consistent with ZFC?” from top core mathematicians, it is not going to be well received in anything like its present form. Probably also my approach with V,V',V'',V''',\dots is still not innovative enough.

Having said this, it still appears that there is a kind of comfort zone with ZFC. But I explain this as follows. ZF is a kind of unique transfer from the finite world, and I have reported on this some time ago on the FOM.

I have just seen an article by Rupert McCallum which FOM readers may find of interest. It also contains a good list of references. A quick glance reminds me of the SRP characterization in terms of countable transitive models with outside elementary embeddings.

My feeling at the moment is that this area of justifying cardinals relatively low down (SRP hierarchy = subtle cardinal hierarchy = ineffable cardinal hierarchy) needs a really striking simple new idea.

Harvey Friedman

Re: Paper and slides on indefiniteness of CH

Dear Pen and Sy,

I’m sorry about the delay. I’ll now focus on the point at issue — the idea of “lengthenings” of V (and the reference to my work).

My comments are addressed to Sy.

A. Preliminary Point: Actualism versus Potentialism

I would like to get a better sense of how you understand “V”. Let me begin with a distinction between two forms of actualism and potentialism; one concerning height, the other concerning width.

The height actualist maintains that in terms of height the universe of sets forms a “completed totality”, while the height potentialist maintains that in terms of height the universe of sets is “open-ended” or “indefinitely extensible”. The notions of width actualism and width potentialism are defined similarly, except with “height” replaced by “width”. The former concerns the ordinals, the latter concerns the powerset operation.

This, to be sure, is a rough distinction. But it has a long history, going back to Aristotle. One way of formally regimenting this informal distinction is by employing intuitionistic logic for domains for which one is a potentialist and reserving classical logic for domains for which one is an actualist. This is the approach Sol takes in his work on semi-intuitionistic systems of set theory. (I won’t presuppose that regimentation in what I go on to say below.)

Some examples: Gödel was (probably) an actualist with regard to both width and height. Zermelo was (probably) an actualist with regard to width and a potentialist with regard to height. Sol is an actualist with regard to width up to V_\omega and a potentialist with regard to width and height beyond that.

Question: Are you an actualist or a potentialist with regard to height? with regard to width?

One reason I ask is that in your account the symbol “V” appears to play a dual role, first for “a surrogate” — one of many countable transitive models of ZFC that provides “an accurate picture of V” — and second for “the real thing” (the thing of which the former models are supposed to provide pictures).

If you answer this question I think it will help me get a better grip on your view. But the main thing I want to address is the dual role of “V” in your account.

B. The Idea of “Lengthening” V.

In your account you consider “lengthenings of V” and “broadenings of V”. The purported acceptability of the former is supposed to lend credence to the latter.

The idea of a “lengthening of V” is certainly one which, upon first appearance, is very strange. For it seems to fly in the face of the whole idea of V. Reinhardt once articulated this strangeness by saying something like this (I paraphrase): “It doesn’t seem to make a whole lot of sense to speak of “lengthening V” since if one takes, e.g., the powerset of V one would seem to have more sets and, being sets, these should be in V, by the very conception of V as the universe of all sets.”

If by “V” we mean “the real thing” then the idea of a “lengthening of V” doesn’t make any sense on an actualist conception. And it is unclear that it even makes sense on the potentialist conception. (The issue is whether on the potentialist conception it even makes sense to speak of “V” in this sense.) But, of course, if by “V” one means not “the real thing” but “a surrogate for the real thing” — like a countable transitive model of ZFC or a rank initial segment of the universe — then one can make sense (on either the actualist or potentialist conception) of a “lengthening of V”.

In the reflection principles paper I start off by presenting a philosophical dilemma, which I will here briefly summarize as follows: The actualist can make sense of V as “the real thing” (and so can motivate reflection) but cannot make sense of higher-order quantification over V. The potentialist, on the other hand, can make sense of higher-order quantification over “V” (by understanding this to be some surrogate, some rank initial segment) but has a hard time motivating reflection since now V (as “the real thing”) has evaporated.

It is in the attempt to have the best of both worlds that people have tried to come up with ideas that resolve this dilemma. I see Ackermann and Reinhardt (in a spirit different from that of the paraphrase above) as doing just this. Reinhardt considered “lengthenings” of V, in his work on (what I call) “extension principles” — the principles leading to extendible cardinals and beyond. In taking this step he introduces (what I call) the “theory of legitimate candidates”. On this conception there are many different legitimate candidates for “the true universe V” — call them V, V', V'', etc. — and the idea is that they should all resemble one another to various degrees that we can articulate. Your work is very much in this spirit, though you expand it to include “broadenings of V”, in addition to “lengthenings of V”.

I should say, before turning to your arguments, that I do not see how you are appealing to my work. My discussion of higher-order reflection principles is entirely critical. I present a philosophical critique of higher-order reflection, saying that I cannot make sense of them (on either the actualist or potentialist conception). But then not wanting to rest too much weight on a philosophical critique I consider a specific proposal, namely, that of Tait. I set aside the question of whether higher-order reflection principles make sense and are justified and I consider the technical question of how far Tait’s principles go, showing that they are either consistent and weak or inconsistent. But at no point do I endorse the idea that higher-order principles make sense and are intrinsically justified (or even justified at all). The discussion is entirely critical.

And regarding “the theory of legitimate candidates”: That is something I discussed briefly in the paper on reflection principles and at length in my dissertation. I tried to make philosophical sense of the idea as a way of reaching strong principles. I considered both height reflection and width reflection. I managed to get principles (which bear resemblance to your principles on sharp-generation) that imply \text{AD}^{L(\mathbb R)} (by an easy core model induction argument). But my conclusion was that (a) I couldn’t make sense of the philosophical basis of this conception (“the theory of legitimate candidates”) and (b) I thought it would be a real stretch to say that such principles were intrinsically justified.

Now, onto your arguments.

C. The Case for “Lengthenings of V”

The first argument:

Reflection has the appearance of being “internal” to V, referring only to V and its rank initial segments. But this is a false impression, as “reflection” is normally taken to mean more than 1st-order reflection. Consider 2nd-order reflection (for simplicity without parameters):

(*) If a 2nd-order sentence holds of V then it holds of some V_\alpha.

This is equivalent to:

({*}{*}) If a 1st-order sentence holds of V_{\text{Ord} + 1} then it holds of some V_{\alpha + 1},

where Ord denotes the class of ordinals and V_{\text{Ord} + 1} denotes the (3rd-order) collection of classes. In other words, 2nd-order reflection is just 1st-order reflection from V_{\text{Ord} + 1} to some V_{\alpha + 1}. Note that V_{\text{Ord} + 1} is a “lengthening” of V = V_\text{Ord}. Analogously, 3rd-order reflection is 1st-order reflection from the lengthening V_{\text{Ord} + 2} to some V_{\alpha + 2}. Stronger forms of reflection refer to longer lengthenings of V.
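The translation behind this equivalence can be made explicit; the following is a sketch in my notation (the map \varphi \mapsto \varphi^t is not in the original text):

```latex
% For a 2nd-order sentence \varphi over (V,\in), let \varphi^t be the
% 1st-order sentence over (V_{Ord+1},\in) obtained by reading the
% 2nd-order quantifiers of \varphi as quantifiers over all elements
% (the classes) and relativizing its 1st-order quantifiers to the sets
% (definable here as those x with x \in y for some y).
\begin{align*}
(V,\in) \models \varphi
  &\iff (V_{\mathrm{Ord}+1},\in) \models \varphi^{t},\\
(V_\alpha,\in) \models \varphi
  &\iff (V_{\alpha+1},\in) \models \varphi^{t}.
\end{align*}
% Hence (*) for the 2nd-order sentence \varphi is precisely (**) for
% the 1st-order sentence \varphi^t.
```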

1st-order forms of reflection do not require lengthenings of V but are very weak, below one inaccessible cardinal. But higher-order forms yield Mahlo cardinals and much more, and this is what Gödel and others had in mind when they spoke of reflection.

The question, of course, is whether this is legitimate, on either the actualist or the potentialist conception. As mentioned above, the challenge for the actualist is to make sense of full-second order set theory (over V). (Some have tried to do this by invoking a plural interpretation of the second-order quantifiers.) And the challenge for the potentialist is to make sense of the idea that there should be rank-initial segments of the universe that satisfy second-order reflection (something that looks like a posit).

The second argument:

Another way of seeing that lengthenings are implicit in reflection is as follows. In its most general form, reflection says:

({*}{*}{*}) If a “property” holds of V then it holds of some V_\alpha.

This is equivalent to:

({*}{*}{*}{*}) If a “property” holds of each V_\alpha then it holds of V.

[({*}{*}{*}) for a "property" is logically equivalent to ({*}{*}{*}{*}) for the negation of that "property".]

OK, now apply ({*}{*}{*}{*}) to the property of having a lengthening that models ZFC. Clearly each V_\alpha has such a lengthening, namely V. So by ({*}{*}{*}{*}), V itself has lengthenings that model ZFC! One can then use this to infer huge amounts of reflection, far past what Gödel was talking about.
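The bracketed equivalence, and the instantiation just given, can be written out as follows (a sketch, with Q my notation):

```latex
% (***) for a property P, and its contrapositive:
\begin{align*}
(\ast\ast\ast)\ \text{for } P:\quad
  & P(V) \;\rightarrow\; \exists\alpha\, P(V_\alpha),\\
\text{contrapositive}:\quad
  & \big(\forall\alpha\, \neg P(V_\alpha)\big) \;\rightarrow\; \neg P(V),
\end{align*}
% and the latter is exactly (****) for the property \neg P.
%
% Instantiation: let Q(x) be "x has a lengthening that models ZFC".
% Every V_\alpha satisfies Q, the lengthening being V itself (granting
% that V models ZFC), so (****) gives Q(V): V has a lengthening that
% models ZFC.
```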

Reinhardt pointed out (at one point, in an actualist spirit) that when one is reflecting one must be careful about what one reflects. For example, it obviously doesn’t make sense to reflect the property of having all and only the sets in V (`the real thing’). For this reason he prohibited reference to “V” or anything involving “V”, and instead observed that one can only reflect “structural properties” (roughly speaking, properties that are “internally characterizable without de re reference to V”). Gödel also followed this course (in his discussions with Wang).

From this perspective the property you are reflecting is not a legitimate candidate for reflection. The point is that V simply has no lengthenings! Part of what we _mean_ by V (on this point of view) is that it contains _all_ of the sets; there are no lengthenings. To speak of sets outside of V involves essential reference to V itself, and this feature, by Reinhardt’s criterion, prohibits it from being a candidate for reflection.

Of course, if one understands by “V” not “the real thing” but “a surrogate that resembles the real thing” then one can make sense of “lengthenings of V”. But if that is the understanding of “V” that you are working with, then you don’t need to give an argument, since you already have the conclusion from the start.

So, either the argument doesn’t work or you do not need to give it since you have presupposed an understanding of “V” in which it is immediate.


To summarize: There appears to be a dual use of “V” in your account. At times it refers to “the real thing”, at times it refers to “a surrogate that resembles the real thing” (like a countable transitive model of ZFC). It is on the latter understanding that you can speak of “lengthenings of V” and “broadenings of V”. And on this understanding you do not have to give an argument that such things exist since it is immediate. But then the challenge is to make a case for why the principles you endorse are intrinsically justified (or intrinsically plausible or even just plausible) on this understanding. I don’t see how such a case goes. When I reflect on the concept of countable transitive model of ZFC I get very little. You must be reflecting on a different concept, one that involves both the surrogates (among the countable transitive models of ZFC) and something else. What?

Best,
Peter

P.S. Thanks for the clarification on your retraction of your earlier claim that IMH is intrinsically justified. The shift — and your comments about the dynamical element of this investigation — make me think that you are really speaking of what is either intrinsically plausible or extrinsically justified.

The questions I raised at the end — (a) and (b) — apply to \textsf{IMH}^\# just as much as they apply to IMH.

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

You are right: to get strength from reflection one needs not only higher-order logic (lengthenings) but also class (2nd-order) parameters. As I said in what I wrote, I chose to ignore parameters to simplify the discussion. In fact allowing more than 2nd-order parameters will lead to inconsistency unless one treats them carefully using embeddings, via what I call “Magidor reflection”. But obviously I wanted to avoid this subtle discussion of parameters to bring out the main point: Lengthenings are required to derive an inaccessible from reflection.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

From the mathematical point of view, the discussion of Reflection in what you wrote (by Sy) seems to be oversimplified (and incorrect). The reflection principles

  1. anything first order true in V is true in some V_\lambda
  2. anything second order true in V is true in some V_\lambda

are both fairly weak. (1) has models (V_\lambda,\in) where \lambda < \mathfrak c^+. So does (2). If you only want to consider such models where \lambda is strongly inaccessible, then (2) has models (V_\lambda,\in) where \lambda is among the first \mathfrak c^+ strongly inaccessible cardinals. So (2) only gives you a handful of strongly inaccessible cardinals in a context like MK or MK + global choice.

  3. anything first order true in V is true in some V_\lambda, with set parameters.
  4. anything second order true in V is true in some V_\lambda, with set parameters.

As is well known, the models (V_\lambda,\in) of (3) are exactly the models (V_\lambda,\in) of ZF. If (4) is formulated as a scheme over NBG then we get a system which is equiconsistent with the normal formulation of second order reflection, (6) below. However, it does not appear that (4) over even MK with global choice will prove the existence of a Mahlo cardinal (I haven’t thought about showing this).
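The well-known fact cited for (3) is the Montague–Lévy reflection theorem together with its converse; a rough sketch (mine, not Harvey’s):

```latex
% Claim: for limit \lambda, (V_\lambda,\in) satisfies (3) iff
% (V_\lambda,\in) models ZF.
%
% "ZF => (3)": the Montague--Levy reflection theorem, applied inside
% V_\lambda, gives for each formula \varphi a closed unbounded class of
% \beta < \lambda with
\[
  (V_\beta,\in) \prec_{\varphi} (V_\lambda,\in),
\]
% so every \varphi with set parameters true in V_\lambda reflects to
% some V_\beta.
%
% "(3) => ZF": the axioms other than Replacement hold in every V_\lambda
% with \lambda a limit > \omega, and Replacement is verified, roughly,
% by reflecting the totality of a given definable function on a given
% set to some V_\beta, whereupon its range is a subset of V_\beta.
```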

  5. anything first order true in any (V,A,\in) is true in some (V_\lambda,A \cap V_\lambda,\in), A arbitrary.
  6. anything second order true in any (V,A,\in) is true in some (V_\lambda,A \cap V_\lambda,\in), A arbitrary.

(5) holds in exactly the (V_\lambda,\in) for which \lambda is strongly inaccessible. (6) is the normal way of formulating second order reflection. As a scheme over NBG, it proves the existence of weakly compact cardinals. A subtle cardinal proves the existence of models (V_{\lambda+1},V_{\lambda},\in).
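Tabulating the six principles and their strengths (this only restates the claims made in this message):

```latex
\begin{tabular}{lll}
\hline
     & principle                        & strength as stated above \\
\hline
(1)  & 1st order, no parameters         & models $(V_\lambda,\in)$ with $\lambda < \mathfrak c^+$ \\
(2)  & 2nd order, no parameters         & likewise; only a handful of inaccessibles \\
(3)  & 1st order, set parameters        & holds in exactly the $(V_\lambda,\in)$ modelling ZF \\
(4)  & 2nd order, set parameters        & over NBG, equiconsistent with (6) \\
(5)  & 1st order, class parameter $A$   & holds iff $\lambda$ is strongly inaccessible \\
(6)  & 2nd order, class parameter $A$   & over NBG, proves weakly compact cardinals \\
\hline
\end{tabular}
```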

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I write with your permission to summarize for the group a brief exchange we had in private. Before that exchange began, you had agreed to these three points:

  1. The relevant concept is the familiar iterative conception, which includes a rough idea of maximality in ‘height’ and ‘width’.
  2. To give an intrinsic justification or intrinsic evidence for a set-theoretic principle is to show that it is implicit in the concept in (1).
  3. The HP is a method for extracting more of the implicit content of the concept in (1) than has heretofore been possible.

We then set about exploring how the process in (3) is supposed to work, beginning with more careful attention to the iterative conception in (1). You summarize it this way:

“Maximal” means “as large as possible”, whether one is talking about

a. Vertical or ordinal-maximality: the ordinal sequence is “as long as possible”, or about

b. Horizontal or powerset-maximality: the powerset of any set is “as large as possible”.

In other words there is implicitly a “comparative” (and “modal”) aspect to “maximality”, as to be “as large as possible” can only mean “as large as possible within the realm of ‘possibilities'”.

Thus to explain ordinal- and powerset-maximality we need to compare different possible mental pictures of the set-theoretic universe. In the case of ordinal-maximality we need to consider the possibility of two mental pictures P and P* where P* “lengthens” P, i.e. the universe described by P is a rank initial segment of the universe described by P*. We can now begin to explain ordinal-maximality. If a picture P of the universe is ordinal-maximal then any “property” of the universe described by P also holds of a rank initial segment of that universe. This is also called “reflection”.

In the case of powerset maximality we need to consider the possibility of two mental pictures P and P* of the universe where P* “thickens” P, i.e. the universe described by P is a proper inner model of the universe described by P*.

There seemed to me to be something off about a universe being ‘maximal in width’, but also having a ‘thickening’. Citing Peter Koellner’s work, you replied that reflection actually involves ‘lengthenings’ (to which the ‘thickenings’ would be analogous), because it appeals to higher-order logics:

Reflection has the appearance of being “internal” to V, referring only to V and its rank initial segments. But this is a false impression, as “reflection” is normally taken to mean more than 1st-order reflection. Consider 2nd-order reflection (for simplicity without parameters):

({*}) If a 2nd-order sentence holds of V then it holds of some V_\alpha.

This is equivalent to:

({*}{*}) If a 1st-order sentence holds of V_{\text{Ord} + 1} then it holds of some V_{\alpha + 1},

where \text{Ord} denotes the class of ordinals and V_{\text{Ord} + 1} denotes the (3rd-order) collection of classes. In other words, 2nd-order reflection is just 1st-order reflection from V_{\text{Ord} + 1} to some V_{\alpha + 1}. Note that V_{\text{Ord} + 1} is a “lengthening” of V = V_\text{Ord}. Analogously, 3rd-order reflection is 1st-order reflection from the lengthening V_{\text{Ord} + 2} to some V_{\alpha + 2}. Stronger forms of reflection refer to longer lengthenings of V.

1st-order forms of reflection do not require lengthenings of V but are very weak, below one inaccessible cardinal. But higher-order forms yield Mahlo cardinals and much more, and this is what Gödel and others had in mind when they spoke of reflection.

Another way of seeing that lengthenings are implicit in reflection is as follows. In its most general form, reflection says:

({*}{*}{*}) If a “property” holds of V then it holds of some V_\alpha.

This is equivalent to:

({*}{*}{*}{*}) If a “property” holds of each V_\alpha then it holds of V.

[({*}{*}{*}) for a "property" is logically equivalent to ({*}{*}{*}{*}) for the negation of that "property".]

OK, now apply ({*}{*}{*}{*}) to the property of having a lengthening that models ZFC. Clearly each V_\alpha has such a lengthening, namely V. So by ({*}{*}{*}{*}), V itself has lengthenings that model ZFC! One can then use this to infer huge amounts of reflection, far past what Gödel was talking about.

I am not assuming that everybody is a “potentialist” about V. Even the Platonist can have mental images of the lengthenings demanded for reflection. And without such lengthenings, reflection has been reduced to a principle weaker than one inaccessible cardinal.

Now given that lengthenings are essential to ordinal-maximality isn’t it clear that thickenings are essential to powerset-maximality? We can then begin to explain powerset-maximality as follows: A picture P of the universe is powerset-maximal if any “property” of the universe described by a thickening of P also holds of the universe described by some thinning of P. What I called the weak-IMH is the “follow your nose” mathematical formulation of this notion of powerset-maximality for first-order properties.
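For reference, the weak-IMH as described here can be stated as follows (my paraphrase of the “follow your nose” formulation, reading ‘thickening’ as outer model and ‘thinning’ as inner model):

```latex
% weak-IMH (sketch): for every first-order sentence \varphi,
\[
  \varphi \text{ holds in some outer model of } V
  \;\Longrightarrow\;
  \varphi \text{ holds in some inner model of } V.
\]
```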

I proposed that we consult with Peter Koellner directly about this business of ‘lengthenings’ in reflection, and this is why we again address the group.

(So, what do you think, Peter?)

Finally, you suggested that you might consider retracting (2) above and returning to the proposal of a different conception of set. The challenge there is to do so without returning to the unappealing idea that ‘intrinsic justification’ and ‘set-theoretic truth’ are determined by a conception of the set-theoretic universe that’s special to a select group.

All best,
Pen