Tag Archives: V = Ultimate L

Re: Paper and slides on indefiniteness of CH

Dear Pen and Hugh,


Well I said that we covered everything, but I guess I was wrong! A new question for you popped into my head. You said:

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’.

I just realised that I may have misunderstood this.

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

I disagree with the last sentence of this quote (I expect that you do too), but the fact remains that if we don’t require a consensus about “good set theory” then truth does break into (“degenerate into” is inappropriate) “Hugh’s truth”, “Saharon’s truth”, “Stevo’s truth”, “Ronald’s truth” and so on. (Note: I don’t mean to imply that Saharon or Stevo really have opinions about truth, here I only refer to what one reads off from their forms of “good set theory”.) I don’t think that’s bad and see no need for one form of “truth” that “swamps all the others”.

Now when it comes to the HP you insist that there is just one “shared picture”. What do you mean now by “picture”? Is it just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”? If so, then I agree that this is the starting point of the HP and should be shared, independently of how the HP develops.

In my mail to you of 31.October I may have misinterpreted you by assuming that by “picture” you meant something sensitive to new developments in the programme. For example, when I moved from a short fat “picture” based on the IMH to a taller one based on the \textsf{IMH}^\#, I thought you were regarding that as a change in “picture”. Let me now assume that I made a mistake, i.e., that the “shared picture” to which you refer is just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”.

Now I ask you this: Are you going further and insisting that there must be a consensus about what mathematical consequences this “shared picture” has? That will of course be necessary if the HP is to claim “derivable consequences” of the maximality of V in height and width, and that is indeed my aim with the HP. But what if my aim were more modest, simply to generate “evidence” for axioms based on maximality just as TR generates “evidence” for axioms based on “good set theory”; would you then agree that there is no need for a consensus, just as there is in fact no consensus regarding evidence based on “good set theory”?

In this way one could develop a good analogy between Thin Realism and a gentler form of the HP. In TR one investigates different forms of “good set theory” and as a consequence generates evidence for what is true in the resulting “pictures of V”. In the gentler form of the HP one investigates different forms of “maximality in height and width” to generate evidence for what is true in a “shared picture of V”. In neither case is there the presumption of a consensus concerning the evidence generated (in the original HP there is). This gentler HP would still be valuable, just as generating different forms of evidence in TR is valuable. What it generates will not be “intrinsic to the concept of set” as in the original ambitious form of the HP, but only “intrinsically-based evidence”, a form of evidence generated through an examination of the maximality of V in height and width, rather than by “good set theory”.


1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp, then \phi holds in an inner model of M.
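For definiteness, the corrected clause 2) can be displayed schematically as follows (my paraphrase of the formulation above, stated for a single sentence \varphi; this is not a formulation taken from the literature):

```latex
% Schematic form of the corrected clause 2) of IMH#, at the level of theories:
\[
\big(\, \forall \alpha \text{ countable} \;\, \exists N \supseteq M
   \text{ outer model, generated by an } \alpha\text{-iterable presharp, with } N \vDash \varphi \,\big)
\;\Longrightarrow\;
\exists N_0 \subseteq M \text{ inner model with } N_0 \vDash \varphi .
\]
```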

2. Could you explain a bit more why V = Ultimate L is attractive? You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.” But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = (\alpha^+)^M.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = \text{HOD}.)
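For reference, the four properties can be stated schematically as follows (my rendering; in particular I read “Genericity” as class-genericity, in line with the parenthetical remark about L above):

```latex
\begin{align*}
\text{Genericity:} \quad & V = M[G] \text{ for some (class) forcing } \mathbb{P}
   \text{ definable over } M \text{ and } G \text{ } \mathbb{P}\text{-generic over } M,\\
\text{Weak Covering:} \quad & \{\alpha : \alpha^{+} = (\alpha^{+})^{M}\} \text{ is a proper class},\\
\text{Rigidity:} \quad & \text{every elementary embedding } j : M \to M \text{ is the identity},\\
\text{LC Witnessing:} \quad & \text{every large cardinal property witnessed in } V
   \text{ is witnessed in } M.
\end{align*}
```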

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?


Re: Paper and slides on indefiniteness of CH

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

??? My question has nothing to do with ctm’s! It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway). I was referring to the many different forms of set-theoretic practice which disagree with each other on basic questions like CH. How do you assign a truth value to CH in light of this fact?

For your second question: if the tests are passed, then yes, I do think that V = Ultimate-L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Here come the irrelevant ctm’s again. But you do say that V = Ultimate L will “swamp all the others”, so perhaps that is your answer to my question. Now do you really believe that? You suggested that Forcing Axioms can somehow be “part of the picture” even under V = Ultimate L, but that surely doesn’t mean that Forcing Axioms are false and Ultimate L is true.

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism, it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

If the Ultimate-L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution of CH.

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics. Why should your programme be required to make “make or break” conjectures, and what is so attractive about that? As I understand the way Pen would put it, it all comes down to “good set theory” for your programme, and for that we need only see what comes out of your programme and not subject it to “death-defying” tests.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds? Your V = Ultimate L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.


Re: Paper and slides on indefiniteness of CH

Dear Harvey,

I think it would be nice to revisit all of these topics. Let me say two things about the axiom “V = Ultimate L” and your request that it be presented in “generally understandable terms”.

(1) The development of inner model theory has involved a long march up the large cardinal hierarchy and has generally had the feature that when you build an inner model for one key level of the large cardinal hierarchy — say measurable, strong, or Woodin — you have to start over when you target the next level, building on the old inner model theory while adding a new layer of complexity (from measures to extenders, from linear iterations to non-linear iterations) — because the inner models for one level are not able to accommodate the large cardinals at the next (much as L cannot accommodate a measurable).

Moreover, the definitions of the inner models — especially in their fine-structural variety — are very involved. One essentially has to develop the theory in tandem with the definition. It looked like it would be a long march up the large cardinal hierarchy, with inner models and associated axioms of the form “V = M” of increasing complexity.

One of the main recent surprises is that things change at the level of a supercompact cardinal: If you can develop the inner model theory for a supercompact cardinal then there is a kind of “overflow” — it “goes all the way” — and the model can accommodate much stronger large cardinals. Another surprise is that one can actually write down the axiom — “V = Ultimate L” — for the conjectured inner model in a very crisp and concise fashion.

(2) You will, however, find that the axiom “V = Ultimate L” may not meet your requirement of being explainable in “generally understandable terms”. It is certainly easy to write down. It is just three short lines. But it involves some notions from modern set theory — like the notion of a Universally Baire set of reals and the notion of \Theta. These notions are not very advanced but may not meet your demand of being “generally understandable”. Moreover, to appreciate the motivation for the axiom one must have some further background knowledge — for example, one has to have some knowledge of the presentation of HOD, in restricted contexts like L(\mathbb R), as a fine-structural inner model (a “strategic inner model”). Again, I think that one can give a high-level description of this background but to really appreciate the axiom and its motivation one has to have some knowledge of these parts of inner model theory.

I don’t see any of this as a shortcoming. I see it as the likely (and perhaps inevitable) outcome of what happens when a subject advances. For comparison: Newton could write down his gravitational equation in “generally understandable terms” but Einstein could not meet this demand for his equations. To understand the Einstein Field Equations one must understand the notions of a curvature tensor, a metric tensor, and a stress-energy tensor. There’s no way around that. And I don’t see it as a drawback. It is always good to revisit a subject, to clean it up, to make it more accessible, to strive to present it in as generally understandable terms as possible. But there are limits to how much that can be done, as I think the case of the Einstein Field Equations (now with us for almost 100 years) illustrates.

Best, Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I have one more comment on choiceless large cardinal axioms that concerns \textsf{IMH}^\#.

It is worth pointing out that Hugh’s consistency proof of \textsf{IMH}^\# shows a good deal more (as pointed out by Hugh):

Theorem: Assume that every real has a sharp. Then, in the hyperuniverse there exists a real x_0 such that every #-generated M to which x_0 belongs satisfies \textsf{IMH}^\#, in the following very strong sense:

(*) Every sentence \phi which holds in a definable inner model of some #-generated model N, holds in a definable inner model of M.

There is no requirement here that N be an outer model of M. In this sense, \textsf{IMH}^\# is not really about outer models. It is much more general.

It follows that not only is \textsf{IMH}^\# consistent with all (choice) large cardinal axioms (assuming, of course, that they are consistent) but also that \textsf{IMH}^\# is consistent with all choiceless large cardinal axioms (assuming, of course, that they are consistent).

The point is that \textsf{IMH}^\# is powerless to provide us with insight into where inconsistency sets in.

Before you protest let me clarify: I know that you have not claimed otherwise! You take the evidence for consistency of large cardinal axioms to be entirely extrinsic.

My point is simply to observe to everyone that \textsf{IMH}^\# makes no predictions on this matter. And, more generally, I doubt that you think that the hyperuniverse program has the resources to make predictions on this question since you take evidence for consistency of large cardinal axioms to be extrinsic.

In contrast “V = Ultimate L” does make predictions on this question, in the following precise sense:

Theorem (Woodin). Assume the Ultimate L Conjecture. Then there are no Super Reinhardt cardinals and there are no Berkeley cardinals.

Theorem (Woodin). Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals (or Super Reinhardt cardinals or Berkeley cardinals, etc.)

(Here the Ultimate L Conjecture is a conjectured theorem of ZFC.)

Best, Peter

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Now here we come to an important distinction that is ignored in discussions of Thin Realism: The Axiom of Choice didn’t get elected to the club because it is beneficial to the development of Set Theory! It got elected only because of its broader value for the development of mathematics outside of Set Theory, for the way it strengthens Set Theory as a foundation of mathematics. It is much more impressive for a statement of Set Theory to be valuable for the foundations of mathematics than it is for it to be valuable for the foundations of just Set Theory itself!

In other words when a Thin Realist talks about some statement being true as a result of its role for producing “good mathematics” she almost surely means just “good Set Theory” and nothing more than that. In the case of AC it was much more than that.

If by ‘thin realism’ you mean the view described by me, then this is incorrect. My Thin Realist embraces considerations based on benefits to set theory and to mathematics more generally — and would argue for Choice on the basis of its benefits in both areas.

This has a corresponding effect on discussions of set-theoretic truth. Corresponding to the above 3 roles of Set Theory we have three notions of truth:

  1. True in the sense of Pen’s Thin Realist, i.e. a statement is true because of its importance for producing “good Set Theory”.
  2. True in the sense assigned to AC, i.e., a statement is true based on Set Theory’s role as a foundation of mathematics, i.e. because it is important for the development of areas of mathematics outside of Set Theory.
  3. True in the intrinsic sense, i.e., derivable from the maximal iterative conception of set.

Again, my Thin Realist embraces the considerations in (1) and (2). As for (3), she thinks having an intuitive picture of what we’re talking about is extremely valuable, as a guide to thinking, as a source of new avenues for exploration, etc.  Her reservation about considerations of type (3) is just this:  if there were conflict between type (3) and types (1) and (2), she would change her concept to retain the good mathematics, in set theory and in mathematics more broadly.  (This happened in the case of Choice.)

A more subtle point, quite important to us philosophers, is that Thin Realism doesn’t include a different sort of truth.  Truth is truth. Where the Thin Realist differs is in what she thinks set theory is about (the ‘metaphysics’ or ‘ontology’).  Because of this, she differs on what she takes to be evidence for truth.  So what I really meant in the previous paragraph is this:  benefits to set theory and to math are evidence for truth; intrinsic considerations, important as they are, only aid and suggest routes to our accumulation of such evidence.

  1. Pen’s model Thin Realist John Steel will go for Hugh’s Ultimate L axiom, assuming certain hard math gets taken care of.

I don’t know what you intend to be covered by ‘certain hard math’, but I take it a lot has to happen before a Thin Realist thinks we have sufficient evidence to include V=Ultimate L as a new axiom.

As I understand it (I am happy to be corrected), Pen is no fan of Type 3 truth

I hope I’ve now explained my stand on this:  none of these are types of truth; types 1 and 2 are evidence for truth; 3 is of great heuristic value.

I am most pessimistic about Type 1 truth (Thin Realism). To get any useful conclusions here one would not only have to talk about “good Set Theory” but about “the Best Set Theory”, or at least show that all forms of “good Set Theory” reach the same conclusion about something like CH. Can we really expect to ever do that? To be specific: We’ve got an axiom proposed by Hugh which, if things work out nicely, implies CH. But then at the same time we have all of the “very good Set Theory” that comes out of forcing axioms, which have enormous combinatorial power, many applications, and imply the negation of CH. So it seems that if Type 1 truth is ever to have a chance of resolving CH one would have to either shoot down Ultimate-L, shoot down forcing axioms, or argue that one of these is not “good Set Theory”. Pen, how do you propose to do that? Forcing axioms are here to stay as “good Set Theory”; they can’t be “shot down”. And even if Ultimate-L dies, there will very likely be something to replace it. Why should we expect this replacement for Ultimate-L to come to the same conclusion about CH that forcing axioms reach (i.e. that CH is false)?

I think it’s simply too soon to try to make any of these judgments.

All best,

Re: Paper and slides on indefiniteness of CH

Dear Bob,

What is the precise definition of “maximality”. Is it evident from that definition, that maximality implies there are reals not in HOD? If not, can you give a cite as to where this is proved?

I do not know of a precise definition of “maximality”. Rather I regard “maximality” as an “intrinsic feature” of the universe of sets via what Pen has referred to as “the usual kind of conceptualism: there is a shared concept of the set-theoretic universe (something like the iterative conception); it’s standardly characterized as including ‘maximality’, both in ‘width’ (Sol’s ‘arbitrary subset’) and in the ‘height’ (at least small LCs).” [Pen: I dropped the bit about "reflection" as I wasn't sure what you meant; but I don't think that its omission will affect this discussion.]

“Maximality” is indeed formulated mathematically in a number of different ways in the HP, but I don’t know if there is an ultimate mathematical formulation which fully captures it and therefore cannot claim that there will be a precise definition of this intrinsic feature.

Nevertheless I do regard the existence of reals not in HOD to be derivable from “maximality” for the following reason, which I expect to be shared by others who share the maximal iterative conception: Part of this conception is that the powerset of omega consists of arbitrary subsets of omega. This is violated by V = L, which insists that the only subsets of omega are those which are predicatively definable relative to the ordinals, and also by V = HOD, which insists that the only subsets of omega are those which are definable relative to ordinals.
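Schematically, the point can be put as follows (my formalization, not from the original text): both hypotheses place a definability ceiling on the reals, whereas maximality demands that the powerset of \omega consist of arbitrary subsets of \omega, including non-definable ones.

```latex
\[
V = L \;\Longrightarrow\; \mathcal{P}(\omega) \subseteq L,
\qquad
V = \mathrm{HOD} \;\Longrightarrow\; \mathcal{P}(\omega) \subseteq \mathrm{OD},
\]
% i.e., under the first hypothesis every real is predicatively definable relative
% to the ordinals, and under the second every real is ordinal-definable --
% against the conception of P(omega) as consisting of arbitrary subsets of omega.
```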

Returning to V = Ultimate L: I would expect that anyone claiming this to be true would also claim V = L to be true had Jack succeeded in showing that 0^\# cannot exist. But V = L is in my view clearly wrong (regardless of whether 0^\# can exist), as well-expressed by Gödel:

“From an axiom in some sense opposite to [V=L], the negation of Cantor’s conjecture could perhaps be derived. I am thinking of an axiom which … would state some maximum property of the system of all sets, whereas [V=L] states a minimum property. Note that only a maximum property would seem to harmonize with the concept of set …”

This argument seems to refute V = Ultimate L.


Re: Paper and slides on indefiniteness of CH

Dear Hugh,

The axiom V = Ultimate L implies V = HOD

So V = Ultimate L cannot be “true” because it violates the maximality of the universe of sets. Recall Sol’s comment about “sharpenings” of the set concept that violate what the set concept is supposed to be about. Maximality implies that there are sets (even reals) which are not ordinal-definable.


PS: This is of course not to say that V = Ultimate L is mathematically uninteresting or cannot play a role in the formulation of some future “true” axiom of set theory.

PPS: Since the Reinhardt fiasco I think it would be best to refer to statements that are not known to be consistent (relative to LCs) as “hypotheses” and not as “axioms”, especially in the context of a discussion over truth in set theory.

Re: Paper and slides on indefiniteness of CH

Quick answer to the question directed to me below. —Hugh

On Aug 20, 2014, at 2:49 AM, Sy David Friedman wrote:

As I understand it (please correct me) your [Solomon Feferman's] valid point is that by for example taking “set” to mean “constructible set” we have violated the intrinsic feature of “maximality”, a feature which the concept of set is meant to exhibit. (Aside: I can then well imagine that on similar grounds you would hesitate to accept an axiom called V = Ultimate-L! But perhaps Hugh will clarify that this need not even imply V = HOD, so it should not be regarded as an anti-maximality statement.)

The axiom V = Ultimate L implies V = HOD and that V is not a set-generic extension of any inner model.

Re: Paper and slides on indefiniteness of CH

If certain technical conjectures are proved, then “V = Ultimate L” is an attractive axiom.

The paper “Gödel’s program”, available on my webpage, gives an argument for that. I would guess the technical conjectures are true; in any case, large cardinals should decide them.

“V = Ultimate L” implies CH.

I don’t think we should care how hard it is to understand the statement of  “V = Ultimate L”. But in fact, one semester of graduate set theory is all you need to understand it. The conjectures are just that it implies GCH, and is consistent with the existence of supercompacts. One graduate semester is enough to understand the conjectures.


Re: Paper and slides on indefiniteness of CH

Thanks, Harvey, for trying to get the discussion focused back to the point of departure, namely my contentions that CH is neither a definite mathematical problem nor a definite logical problem [as of now].  As far as I can tell, no one is contending that it may still be considered to be a definite mathematical problem. As I wrote at the beginning of sec. 6 of my paper: “[CH] can be considered as a definite logical problem relative to any specific axiomatic system or model.  But one cannot say that it is a definite logical problem in some absolute sense unless the systems or models in question have been singled out in some canonical way.”

I agree with what you say in point 1.  One proposal that has been explicitly offered by Hugh is to establish the proposition V = Ultimate-L, though it is not clear what it would mean to establish that.  It sounds like a candidate for a canonical proposition, with the paradigmatic V = L in mind. But what a wealth of difference:  it is certainly a very complex (and sophisticated) statement, I suppose even for set-theorists, and surely, as you say, for logicians generally and philosophers of mathematics.  By contrast (point 2) Sy’s HP program has its appeal when considered in general terms, but like you re point 3, I would like to see something much more definite.  And when we have that, the question will be, what would it mean to establish whatever that is?

In both these cases, if the proposed “solution” fails, CH is left in limbo.

Perhaps there are other proposals for asserting CH as a definite logical problem; unless I’ve missed something, none has been put forward in this discussion.  But in any case, the same kinds of questions would have to be raised about such.

Re 4, first, I hope “blackboxing” does not get accepted as a verb.  Second: “Good mathematics” and “good set theory”: only the experts can judge what constitutes that. And the practice of very good mathematical, logical and set-theoretical work will go on whether or not it has a clear foundational purpose. But there are certain problems where what one is up to cries out for such a purpose, with CH standing first in line, making as it does the fundamental concepts and methods of set theory genuinely problematic.

Perhaps we can also have a meeting of minds about “mental pictures” (point 5).  I’ve written a lot about that; in particular, I have referred to my article “Conceptions of the continuum” (http://math.stanford.edu/~feferman/papers.html, #83). I now look forward to seeing what you have to say about such.