
Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks so much for your patient responses to my elementary questions!  I now see that I was viewing those passages in your BSL paper through the wrong lens, but rather than detailing the sources of my previous errors, I hope you’ll forgive me in advance for making some new ones.  As I now (mis?)understand your picture, it goes roughly like this…

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it).  One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations.  The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims.  These are the ones that ‘due to the role that they play in the practice of set theory and, more generally, of mathematics, should not be contradicted by any further candidate for a set-theoretic statement that may be regarded as ultimate and unrevisable’ (p. 80).  (Is it really essential that these statements be ‘ultimate and unrevisable’?  Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?)  These include ZFC and the consistency of LCs.

The intrinsic constraints aren’t limited to items that are ‘implicit in the concept of set’.  They also include items ‘implicit in the concept of a set-theoretic universe’.  (This sounds reminiscent of Tony’s reading in ‘Gödel’s conceptual realism’.  Do you find this congenial?)  One of the items present in the latter concept is a notion of maximality.  The new intrinsic considerations arise at this point, when we begin to consider, not just V, but a range of different ‘pictures of V’ and their interrelations in the hyperuniverse.  When we do this, we come to see that the vague principle of maximality derived from the concept of a set-theoretic universe can be made more precise — hence the schema of Logical Maximality and its various instances.

At this point, we have the de facto part of practice and various maximality principles (and more, but let’s stick with this example for now).  If the principles conflict with the de facto part, they’re rejected.  The survivors are further tested by their ability to settle independent questions.

Is this at least a bit closer to the story you want to tell?

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

There is no retreat from my view that the concept of the continuum (qua the set of arbitrary subsets of the natural numbers) is an inherently vague or indefinite one, since any attempt to make it definite (e.g. via L or an L-like inner model) runs counter to what it is supposed to be about. I talk here about the concept of the continuum, not the supposed continuum itself, as a confirmed anti-platonist.  Mathematics in my view is about intersubjectively shared (human) conceptions of idealized structures, not any supposed such structures in and of themselves.  See my article “Conceptions of the continuum” (Intellectica 51 (2009), 169-189).

I can’t have claimed that I have established that CH is neither a definite mathematical problem nor a definite logical problem, since one can’t say precisely what such problems are in either case.  Rather, as workers in mathematics and logic, we generally know one when we see one.  So, the Goldbach conjecture and the Riemann Hypothesis (not “Reimann” as has appeared elsewhere in this exchange) are definite mathematical problems.  And the decidability of the first order theory of the reals with exponentiation is a definite logical problem.  (Logical problems make use of the concept of formal language and are relative to models or axioms.) Even though CH has the appearance of a definite mathematical problem, it has ceased to be one for all intents and purposes because it was long recognized that only logical considerations could be brought to bear to settle it, if at all.  So then what would make it a definite logical problem? Something as definite as: CH is true in L.  I can’t exclude that some time in the future, some model or axiom system will be produced that will be as canonical in nature for some concept of set as L is for the concept of hereditarily predicatively definable set.  But I’m not holding my breath either.

I don’t know whether your concept of set-theoretical truth can be assimilated to Maddy’s A-realism, but in either case I see it as trying to have your platonist cake without eating it.  It allows you to accept CH v not-CH, but so what?

Best,
Sol

Re: Paper and slides on indefiniteness of CH

pre-PS: Thanks Sol for correcting my spelling. My problem with German has plagued me my entire academic life.

Dear Sy,

I think we are getting to the point where we are simply talking past one another. Also the nesting of messages is making this thread somewhat difficult to follow (perhaps a line break issue or a platform issue).

You have made an important point for me: a rich structure theory together with Gödelian ‘success’ is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Unless there is something fundamentally different about LC, which there is.

Many (well, at least 2) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only ‘context’ needed for Con PD is the empirical calibration provided by a strict ‘hierarchy’ of consistency strengths. That makes no assumptions about PD.

Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms.  Remove that context (as the IMH and its variants all do), and why then is the hierarchy evidence for anything?

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

Next point, within the discussion of strong rank maximality. I wrote:

Question 1:  How can we even be sure that there is no  pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

and you responded:

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of ‘strong maximality’. Then any two sentences in T are compatible with each other.

I realized after sending the message that I should have elaborated on what I had in mind on the incompatibility issue, and so I will do so here. I imagine many of the followers of this thread (if any are left) will want to skip this.

Incompatibility


Let me explain the sort of incompatibility I am concerned with.

Suppose M is strongly rank maximal. One might have a \Pi_2-sentence \phi_1 certified by a rank preserving extension of M with X-cardinals and a \Pi_2-sentence \phi_2 certified by a rank preserving extension with Y-cardinals.

What if X-cardinals and Y-cardinals are mutually incompatible, or worse, the existence of X-cardinals implies that \phi_2 cannot hold (or vice-versa)? Then how could \phi_1\wedge\phi_2 be certified? If the certifiable \Pi_2-sentences are not closed under finite conjunction then there is a problem.

Let N_X be a rank-preserving extension of M with a proper class of X-cardinals which certifies \phi_1. Let’s call this a good witness if \phi_1 holds in all the set-generic extensions of N_X, and all the \Pi_2-sentences which hold in all the set-generic extensions of N_X are deemed certified by N_X (this is arguably reasonable given the persistence of large cardinals under small forcing).

Similarly, let’s suppose that N_Y is a rank-preserving extension of M with a proper class of Y-cardinals which certifies \phi_2 and is a good witness.

Assuming the \Omega Conjecture is provable (and recall our base theory is ZFC + a proper class of Woodin cardinals) then one of the following must hold:

  1. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_X (and so N_X certifies \phi_1\wedge \phi_2).
  2. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_Y (and so N_Y certifies \phi_1\wedge \phi_2).

To me this is a remarkable fact. I see no way to prove it at this level of generality without the \Omega Conjecture.
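
Put schematically (just restating the dichotomy above, nothing new): let

\Gamma = \{ \phi \in \Pi_2 : \phi \text{ is certified by some good witness over } M \}.

Granting the provability of the \Omega Conjecture, the dichotomy gives

\phi_1 \in \Gamma \text{ and } \phi_2 \in \Gamma \implies \phi_1 \wedge \phi_2 \in \Gamma,

so the certified \Pi_2-sentences are closed under binary, hence finite, conjunction, and the worry raised above does not arise for good witnesses.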

You wrote:

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

I completely disagree. Having more models obscures truth; that is my whole point.

Moving on, I want to return to the inner model issue and illustrate an even deeper sense (beyond correctness issues) in which the Inner Model Program is not just about inner models.

Consider the following variation of the inner model program. This is simply the definable version of your “internal consistency” question, which you have explored quite a bit.

Question: Suppose that there exists a proper class of X-cardinals. Must there exist an inner model N with a proper class of X-cardinals such that N \subseteq \text{HOD}?

(Of course, if one allows more than the existence of a proper class of X-cardinals then there is a trivial solution, so here it is important that one is only allowed to use the given large cardinals.)

For “small” large cardinals, even at the level of Woodin cardinals, I know of no positive solution that does not use fine-structure theory.

Define a cardinal \delta to be n-hyper-extendible if \delta is extendible relative to the \Sigma_n-truth predicate.

Theorem: Suppose that the HOD Conjecture is true. Suppose that for each n, there is an n-hyper-extendible cardinal. Then for each n there is an n-hyper-extendible cardinal in HOD (this is a scheme, of course).

The HOD Conjecture could have an elementary proof (if there is an extendible cardinal).  This does not solve the inner model problem for hyper-extendible cardinals or even shed any light on the inner model problem.

Finally you wrote:

The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.

I agree on Reinhardt cardinals. But I obviously disagree on the route to new hierarchies. Certainly HP has yet to indicate any promise of being able to reach new levels of consistency strength, since even reaching the level of “ZFC + infinitely many Woodin cardinals” looks like a serious challenge for HP. It would be interesting to see even a conjecture along these lines.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?

However one conceives of truth in set theory, one must have answers to:

  1. Is PD true?
  2. Is PD consistent?

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

There is strong meta-mathematical evidence that the only way to ultimately answer 2. with “yes” is to answer 1. with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence, shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, failure of square at singular strong limits, etc), that proposition implies PD.   For these results (PFA, \square etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD.  It seems to me that by your own very criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

I agree that there are interesting models outside this box. But I strongly disagree that V is one of them.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Penny,

On Wed, 6 Aug 2014, Penelope Maddy wrote:

As I now (mis?)understand your picture, it goes roughly like this…

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it).  One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations.  The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims.  These are the ones that ‘due to the role that they play in the practice of set theory and, more generally, of mathematics, should not be contradicted by any further candidate for a set-theoretic statement that may be regarded as ultimate and unrevisable’ (p. 80).  (Is it really essential that these statements be ‘ultimate and unrevisable’?  Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?)  These include ZFC and the consistency of LCs.

The intrinsic constraints aren’t limited to items that are ‘implicit in the concept of set’.  They also include items ‘implicit in the concept of a set-theoretic universe’.  (This sounds reminiscent of Tony’s reading in ‘Gödel’s conceptual realism’.  Do you find this congenial?)  One of the items present in the latter concept is a notion of maximality.  The new intrinsic considerations arise at this point, when we begin to consider, not just V, but a range of different ‘pictures of V’ and their interrelations in the hyperuniverse.  When we do this, we come to see that the vague principle of maximality derived from the concept of a set-theoretic universe can be made more precise — hence the schema of Logical Maximality and its various instances.

At this point, we have the de facto part of practice and various maximality principles (and more, but let’s stick with this example for now).  If the principles conflict with the de facto part, they’re rejected.  The survivors are further tested by their ability to settle independent questions.

Is this at least a bit closer to the story you want to tell?

Yes, but as my views have evolved slightly since Tatiana and I wrote the BSL paper I’d like to take the liberty (see below) of fine-tuning and enhancing the picture you present above. My apologies for these modifications, but I understand that changes in one’s point of view are not prohibited in philosophy?  ;)

As you say, I take “true in V” to be free of any realist ontology: there is no fixed class of objects constituting the elements of the universe of all sets. But this does not prevent us from having a conception of this universe or from making assertions about what is true in it. My notion of set-theoretic truth (truth in V) consists of those conclusions we can draw based upon intrinsic features of the relevant concepts. The relevant concepts include of course the concept of “set”, but also (and this is a special aspect of the HP) the concept of “set-theoretic universe” (“picture of V”).

Intrinsic features of the concept of set include (and in my view are limited to) what one can derive from the maximal iterative concept (together with some other basic features of sets), resulting in the axioms of ZFC together with reflection principles. These are concerned with “internal” features of V.

To understand intrinsic features of the concept of set-theoretic universe we need a context in which we may compare universes and this is provided by the hyperuniverse. The hyperuniverse admits only countable universes (countable pictures of V) but by Löwenheim-Skolem this will suffice, as our aim is to clarify the truth of first-order statements about V. An example of an intrinsic feature of the concept of universe is its “maximality”. This is already expressed by “internal” features of a universe based on the maximal iterative concept. But in the HP it is also expressed by “external” features of a universe based on its relationship with other universes (“maximal” = “as large as possible” and the hyperuniverse provides a meaning to the term “possible universe”).

With this setup we can then instantiate “maximality”, for example, in various ways as a precise mathematical criterion phrased in terms of the “logic of the hyperuniverse”. The IMH (powerset maximality) and \#-generation (ordinal maximality) are examples, but there are others which strengthen these or synthesise two or more criteria together (IMH for \#-generated universes for example). The “preferred universes” for a given criterion are those which obey it and first-order statements that hold in all such preferred universes become candidates for axioms of set theory.
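
For definiteness, here is the shape of the simplest such criterion, with the hyperuniverse taken, as in the BSL paper, to be the collection of countable transitive models of ZFC:

H = \{ M : M \text{ is a countable transitive model of ZFC} \}

\text{IMH: if a first-order sentence holds in an inner model of some outer model of } M \text{, then it holds in an inner model of } M \text{ itself.}

(An outer model of M is a universe with the same ordinals as M.) The preferred universes are then the members of H satisfying the criterion, and the first-order statements holding in all of them are the axiom-candidates referred to below.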

With this procedure we have a way of arriving at axiom-candidates that are based on intrinsic features of the concepts of set and set-theoretic universe. A point worth making is that our notions of V and hyperuniverse are interconnected; neither is burdened by an ontology yet they are inseparable, as the hyperuniverse is defined with reference to V and our understanding of truth in V is influenced by the (intrinsically-based) preferences we impose on elements of the hyperuniverse.

What has changed in my perspective since the BSL paper (I cannot speak for Tatiana) regards the “ultimate” nature of what the programme reveals about truth and the relationship between the programme and set-theoretic practice. Penny, you are perfectly right to ask:

Is it really essential that these statements be ‘ultimate and unrevisable’?  Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?

At the time we wrote the paper we were thinking almost exclusively of the IMH, which contradicts the existence of inaccessible cardinals. This is of course a shocking outcome of a reasoned procedure based on the concept of “maximality”! This caused us to rethink the role of large cardinals in set-theoretic practice and to support the conclusion that in fact the importance of large cardinals in set-theoretic practice derives from their existence in inner models, not in V. Indeed, I still support that conclusion and on that basis Tatiana and I were prepared to declare the first-order consequences of the IMH as being ultimate truths.

But what I came to realise is that the IMH deals only with “powerset maximality” and it is compelling to also introduce “ordinal maximality” into the picture. (I should have come to that conclusion earlier, as indeed the existence of inaccessible cardinals is derivable from the intrinsic maximal iterative concept of set!) There are various ways to formalise ordinal maximality as a mathematical criterion: If we take the line that Peter Koellner has advocated then we arrive at something I’ll call KM (for Koellner maximality), which roughly speaking asserts the existence of \omega-Erdős cardinals. A much stronger form due to Honzik and myself is \#-generation, which roughly speaking asserts the existence of any large cardinal notion compatible with V = L. Now IMH + KM is inconsistent, but we can “synthesise” IMH with KM to create a new criterion IMH(KM), which is consistent. Similarly we can consistently formulate the synthesis IMH(\#-generation) of IMH with \#-generation. Unfortunately IMH(KM) does not change much, as it yields the inconsistency of large cardinals just past \omega-Erdős, and so again we contradict large cardinal existence. But the surprise is that IMH(\#-generation) is a synthesised form of powerset maximality with ordinal maximality which is compatible with all large cardinals (even supercompacts!), and one can argue that \#-generation is the “correct” mathematical formulation of ordinal maximality.

This was an important lesson for me and strongly confirms what you suggested: In the HP (Hyperuniverse Programme) we are not able to declare ultimate and unrevisable truths. Instead it is a dynamic process of exploration of the different ways of instantiating intrinsic features of universes, learning their consequences and synthesising criteria together, with the long-term goal of converging towards a stable notion of “preferred universe”. At each stage in the process, the first-order statements which hold in the preferred universes can be regarded as legitimate axiom candidates, providing an approximation to “ultimate and unrevisable truth” which may be modified as new ideas arise in the formulation of mathematical criteria for preferred universes. Indeed the situation is even more complex, as in the course of the programme we may wish to consider other intrinsic features of universes (I have ignored “omniscience” in this discussion), giving rise to a new set of mathematical criteria to be considered. And it is of course too early to claim that the process really will converge towards a unique notion of “preferred universe” and not to more than one such notion (fortunately there are as yet no signs of such a bifurcation, as “synthesis” appears to be a very powerful and successful way of combining criteria).

Finally: Why do I refer to “axiom candidates” and not to “axioms” when I mention first-order properties shared by preferred universes? This is out of respect for “set-theoretic practice”. As you know my aim is to base truth wholly on intrinsic considerations, independent of what may be the current trends in the mathematics of set theory. In the BSL paper we try to fix a concept of de facto truth and set the ground rule that such truth cannot be violated. My view now is rather different. I see that the HP is the correct source for axiom candidates which must then be tested against current set-theoretic practice. There is no naturalist leaning here, as I am in no way allowing set-theoretic practice to influence the choice of axiom-candidates; I am only allowing a certain veto power by the mathematical community. The ideal situation is if an (intrinsically-based) axiom candidate is also evidenced by set-theoretic practice; then a strong case can be made for its truth.

But I am very close to dropping this last “veto power” idea in favour of the following (which I already mentioned to Sol in an earlier mail): Perhaps we should accept the fact that set-theoretic truth and set-theoretic practice are quite independent of each other and not worry when we see conflicts between them. Maybe the existence of measurable cardinals is not “true” but set theory can proceed perfectly well without taking this into consideration. In the converse direction I simply repeat what I said recently to Hugh:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

The HP is about intrinsic sources of truth and we have no a priori guarantee that the results of the programme will fit well with current set-theoretic practice. What to do about that is however unclear to me at the moment.

All the best, thanks again for your interest,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sol,

On Wed, 6 Aug 2014, Solomon Feferman wrote:

Dear Sy,

There is no retreat from my view that the concept of the continuum (qua the set of arbitrary subsets of the natural numbers) is an inherently vague or indefinite one, since any attempt to make it definite (e.g. via L or an L-like inner model) runs counter to what it is supposed to be about. I talk here about the concept of the continuum, not the supposed continuum itself, as a confirmed anti-platonist.  Mathematics in my view is about intersubjectively shared (human) conceptions of idealized structures, not any supposed such structures in and of themselves.  See my article “Conceptions of the continuum” (Intellectica 51 (2009), 169-189).

I fully agree with all of this. Indeed, the concept of the continuum is inherently vague.

But elsewhere you have gone further by claiming that CH is an inherently vague problem! This is a much stronger claim. Indeed, we have many theorems about inherently vague concepts: Take König’s theorem that the continuum cannot have size \aleph_\omega. Given that we don’t really understand what the continuum is, it is remarkable that we can prove something so nontrivial about its size!
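
(To recall the one-line argument: König’s theorem gives \mathrm{cf}(2^{\aleph_0}) > \aleph_0, whereas \mathrm{cf}(\aleph_\omega) = \omega, and therefore

2^{\aleph_0} \neq \aleph_\omega,

no matter what the continuum “really” is.)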

I can’t have claimed that I have established that CH is neither a definite mathematical problem nor a definite logical problem, since one can’t say precisely what such problems are in either case.  Rather, as workers in mathematics and logic, we generally know one when we see one.  So, the Goldbach conjecture and the Riemann Hypothesis (not “Reimann” as has appeared elsewhere in this exchange) are definite mathematical problems.  And the decidability of the first order theory of the reals with exponentiation is a definite logical problem.  (Logical problems make use of the concept of formal language and are relative to models or axioms.) Even though CH has the appearance of a definite mathematical problem, it has ceased to be one for all intents and purposes because it was long recognized that only logical considerations could be brought to bear to settle it, if at all.  So then what would make it a definite logical problem? Something as definite as: CH is true in L.  I can’t exclude that some time in the future, some model or axiom system will be produced that will be as canonical in nature for some concept of set as L is for the concept of hereditarily predicatively definable set.  But I’m not holding my breath either.

So as I understand it your claim of the inherent vagueness of CH was based solely on the lack of available methods for showing that it is a definite logical problem, and not on a belief that such methods cannot exist. Is that right? If so, then I suppose that your claim was simply intended as a provocative challenge to the set theory community!

Consider the following. The IMH asserts that if we enlarge V to an outer model W (i.e. a universe with the same ordinals) then any sentence true in an inner model of W is also true in an inner model of V. (To fully make sense of this we need the HP, but for the present discussion that can be ignored.) The SIMH is the same statement but where the sentences in question are allowed to include “absolute” parameters. (A parameter is absolute if it is definable by a fixed parameter-free formula in all cardinal-preserving outer models of V.) For the sake of argument, suppose that the SIMH is consistent (I don’t know if it is).

The SIMH implies the negation of CH (by an easy argument). In other words the decidability of CH is reduced to a logical problem of absoluteness within the hyperuniverse (when things are formulated properly). Does this argument (via an axiom system which is canonical for the concept of set-theoretic universe) suggest to you that your claim of the inherent vagueness of CH may be in doubt?
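
(Here, roughly, is one way the easy argument can be run, suppressing the hyperuniverse formalities: the parameter \aleph_2 is absolute, as it is definable as the second uncountable cardinal in every cardinal-preserving outer model. Adding \aleph_2 Cohen reals is ccc and therefore cardinal-preserving, and produces an outer model satisfying

\exists F \, (F : \aleph_2 \to \mathbb{R} \text{ is injective}).

By the SIMH this sentence then holds in an inner model of V, and since it is upward absolute it holds in V itself; thus 2^{\aleph_0} \geq \aleph_2 and CH fails.)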

(Remark for the large cardinal lovers: If the SIMH is consistent then surely it can be consistently modified to incorporate large cardinals together with ordinal-maximality.)

I don’t know whether your concept of set-theoretical truth can be assimilated to Maddy’s A-realism, but in either case I see it as trying to have your platonist cake without eating it. It allows you to accept CH v not-CH, but so what?

I’m not sure that I get your point here, but I do believe that there is no difficulty with a fully epistemological, Platonism-free concept of truth. With no ontology we can still have a mental picture of the universe of sets, just as you have a mental picture of the inherently vague continuum. The concept of “truth in V” that I have in mind evolves as this picture is clarified through the exploration of intrinsic features of the concepts of set and set-theoretic universe. As hinted above with the SIMH it is perfectly possible that not-CH will be a byproduct of this investigation.

Do you see a problem with this approach to truth?

Thanks again and best wishes,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

OK, let’s go for just one more exchange of comments and then try to bring this to a conclusion by agreeing on a summary of our perspectives. I already started to prepare such a summary but do think that one last exchange of views would be valuable.

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Many (well, at least 2) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms.  Remove that context (as the IMH and its variants all do), and why then is the hierarchy evidence for anything?

My argument is “proof-theoretic”: the consistency strengths in set theory are organised by the consistency strengths of large cardinal axioms. And we have good evidence for the strictness of this hierarchy. There is nothing semantic here.

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC). Why would you conjecture in this setting that RH is false? I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC? With such evidence I would indeed conjecture that RH is true; wouldn’t you?

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable; we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

  1. Is PD true?

I don’t know.

  2. Is PD consistent?

Yes.

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurables but does not imply \Pi^1_1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.
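
(The benchmark here is the Martin-Harrington theorem: \Pi^1_1 determinacy is equivalent to the statement

\forall x \in \mathbb{R} \ (x^\# \text{ exists}),

with the lightface version equivalent to the existence of 0^\#. Inner models with measurables carry the consistency strength, but since they need not contain every real, they need not provide x^\# for every real x, and so \Pi^1_1 determinacy itself need not follow.)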

We have had this exchange several times already. Let’s agree to (strongly) disagree on this point.

The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence, shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, failure of square at singular strong limits, etc), that proposition implies PD.   For these results (PFA, square etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD.  It seems to me that by your own very criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

Again: It is not clear that the HP will give not-PD! It is a question of finding appropriate criteria that will yield PD, perhaps criteria that will yield enough large cardinals.

As far as the strictness of the consistency hierarchy is concerned, we can use quasi-lower bounds; we don’t need the lower bounds coming from core model theory.

And as I have been trying to say, building core model theory into a programme for the investigation of set-theoretic truth like HP is an inappropriate incursion of set-theoretic practice into an intrinsically-based context.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

No, we may be able to stay “within the box” as you put it:

I said that SIMH(large cardinals + \#-generation) might be what we are looking for; the problems are to intrinsically justify large cardinals and to prove the consistency of this criterion. Would you be happy with that solution?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I’m very pleased that my paper has led to such a rich exchange and that it has brought out the importance of clarifying one’s aims in the ongoing development of set theory. Insofar as it might affect my draft, I still have much to absorb in the exchange thus far, and there will clearly be some aspects of it that are beyond my current technical competence. In any case,  I agree it would be good to bring the exchange to a conclusion with a summary of positions.

In the meantime, to help me understand better, here is a question about HP: if I understand you properly, if HP is successful, it will show the consistency of the existence of large large cardinals in inner models. Then how would it be possible to establish the success of HP without assuming the consistency of large large cardinals in V? If one must assume it, isn’t the program circular? If one need not, it appears that one would be getting something from nothing.

Best,
Sol

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Ok one more round. This is a short one since you did not raise many new questions etc. in your last response.

On Aug 7, 2014, at 9:32 AM, Sy David Friedman wrote:

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Absolutely not,  given the special nature of LC.

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC).

Of course, I thought that was clear.

Why would you conjecture in this setting that RH is false?

Because I think “Reinhardt without AC” is inconsistent. The Oracle could be malicious after all.

(Aside: I actually think that  “ZF + Reinhardt + extendible” is inconsistent. The situation for “ZF + Reinhardt” is a bit less clear to me at this stage. But this distinction is not really relevant to this discussion, e.g. everything in these exchanges could have been in the context of super-Reinhardt).

I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC?

I am not sure what you are referring to here. The hierarchy of axioms past I0 that I have discussed in the JML papers is AC-based throughout.

With such evidence I would indeed conjecture that RH is true; wouldn’t you?

This seems an odd position. Suppose that the Oracle matched 100 number-theoretic (\Pi^0_1) sentences with the consistency of variations of the notion of Reinhardt cardinals. This increases one’s confidence in these statements?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable; we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

1) Is PD true?

I don’t know.

2) Is PD consistent?

Yes.

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurables but does not imply \Pi^1_1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.

But I have not suggested that to get Con(Definable determinacy) one needs to get Definable determinacy. I have suggested that to get Con PD one needs to get PD. (For me, PD is boldface PD; perhaps you have interpreted PD as lightface PD.)

The local/global issue is not present at the level you indicate. It only occurs past the level of 1 Woodin cardinal; I have said this repeatedly.

Why? If 0^\# exists then it is unique. M_1^\# (the analog of 0^\# at the next projective level) has a far more subtle uniqueness.

(For those unfamiliar with the notation: M_1 is the “minimum” fine-structural inner model with 1 Woodin cardinal and the notion of minimality makes perfect sense for iterable models through elementary embeddings).

The iterable M_1^\# is unique, but the existence of the iterable M_1^\# implies that all sets have sharps. In fact, in the context where all sets have sharps, the existence of M_1^\# is equivalent to the existence of a proper class inner model with a Woodin cardinal.

Without a background of sharps there are examples where there are no definable inner models past the level of 1 Woodin cardinal, no matter what inner models one assumes exist. The example is not contrived: it is L[x] for a Turing cone of x, and it lies at the core of the consistency proof of the IMH.

Some final comments.

The inner model program for me has come down to one main conjecture (the Ultimate-L conjecture) and two secondary conjectures, the \Omega Conjecture and the HOD Conjecture. These are not vague conjectures, they are each precisely stated. None of these conjectures involves any concept of fine-structure or related issues.

The stage is also set for the possibility of an anti-inner model theorem.  A refutation of the \Omega Conjecture would in my view be such an anti-inner model theorem and there are other possibilities.

So the entire program as presently conceived is for me falsifiable.

If the Ultimate-L Conjecture is provable then I think this makes a far more compelling case for LC than anything coming out of HP for denying LC. I would (perhaps unwisely) go much further. If the Ultimate-L Conjecture is provable then there is an absolutely compelling case for CH and in fact for V = Ultimate L. (The precise formulation of V = Ultimate L is already specified, it is again not some vague axiom).

How about this: We each identify a critical conjecture whose proof we think absolutely confirms our position and whose refutation we also admit sends us back to “square one”. For me it is the Ultimate-L Conjecture.

HP is still in its infancy so this may not be a fair request. So maybe we have to wait on this. But you should at least be able to articulate why you think HP even has a chance.

Aside: IMH simply traces back to Turing determinacy, as will IMH^*. For each real x let M_x be the minimum model of ZFC containing x. The theory of M_x is constant on a cone, as is its second-order theory. Obviously this (Turing) stable theory will have a rich structure theory. But this is just one instance of many analogous stable theories (this is the power of PD and beyond) and HP is just borrowing this. It is also a theorem that Turing-PD is equivalent to PD.
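
(To spell out the mechanism, assuming the requisite determinacy: by Martin’s cone theorem every Turing-invariant set of reals either contains a Turing cone or is disjoint from one. For each sentence \sigma the set

A_\sigma = \{ x : M_x \models \sigma \}

is Turing-invariant, since x \equiv_T y implies M_x = M_y; so each A_\sigma contains or omits a cone, and intersecting over the countably many sentences yields a single cone on which the theory of M_x is constant.)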

But why should this have anything to do with V?

Here is a question: Why isn’t the likely scenario simply that HP ends up stair-stepping up to PD, with the ultimate conclusion of the entire enterprise being simply yet another argument for PD?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sol,

On Thu, 7 Aug 2014, Solomon Feferman wrote:

Dear Sy,

I’m very pleased that my paper has led to such a rich exchange and that it has brought out the importance of clarifying one’s aims in the ongoing development of set theory. Insofar as it might affect my draft, I still have much to absorb in the exchange thus far, and there will clearly be some aspects of it that are beyond my current technical competence. In any case, I agree it would be good to bring the exchange to a conclusion with a summary of positions.

Thanks again for triggering the discussion with your interesting paper.

In the meantime, to help me understand better, here is a question about HP: if I understand you properly, if HP is successful, it will show the consistency of the existence of large large cardinals in inner models.

To be clear, the HP does not produce a single criterion for preferred universes, but a family of them, and each must be analysed for its consequences. But many such criteria will indeed produce inner models with at least measurable cardinals and I would conjecture that an inner model with a Woodin cardinal should also come out. However the programme achieves this only via the core model theory and not directly on its own. In particular I see no scenario for it to produce an inner model with a supercompact, as the core model theory seems unable to do that.

On the other hand, all of the criteria seem to be compatible with the existence of arbitrarily large cardinals in inner models, even if they fail to produce such inner models.

However I don’t consider the creation of inner models with large cardinals, or even the confirmation of the consistency of large cardinals, to be a central goal of the programme. The programme will likely have more valuable consequences for understanding problems like CH whose undecidability does not hinge on large cardinal assumptions.

Then how would it be possible to establish the success of HP without assuming the consistency of large large cardinals in V? If one must assume it, isn’t the program circular? If one need not, it appears that one would be getting something from nothing.

The answer is given by core model theory: Without assuming the consistency of large cardinals one can use this theory to show that various set-theoretic properties yield inner models with large cardinals. A nice example is the failure of the singular cardinal hypothesis, which without any further assumptions produces inner models with many measurable cardinals.
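
(If memory serves, the precise calibration is due to Gitik, with the lower bound resting on Mitchell’s core model:

\mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{SCH}) \iff \mathrm{Con}(\mathrm{ZFC} + \exists \kappa \ (o(\kappa) = \kappa^{++})),

where o(\kappa) is the Mitchell order; the left-to-right direction extracts the inner model from the failure of SCH with no large cardinal assumptions whatsoever.)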

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for these clarifications and amendments!  (The only changes of mind that strike me as objectionable are the ones where the person pretends it hasn’t happened.)  I’m still keen to formulate a nice tight summary of your approach, and then to raise a couple of questions, so let me take another shot at it. Here’s a revised version of the previous summary:

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it).  One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations.  The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims.  These are the ones that need to be taken seriously as we evaluate any candidate for a new set-theoretic axiom or principle. They include ZFC and the consistency of LCs.

The intrinsic constraints aren’t limited to items that are implicit in the concept of set.  One of the items present in this concept is a notion of maximality.  The new intrinsic considerations arise when we begin to consider, in addition, the concept of the hyperuniverse. One of the items present in this concept is a new notion of maximality, building on the old, that generates the schema of Logical Maximality and its various instances (and more, set aside for now).

At this point, we have the de facto part of practice and various maximality principles.  If the principles conflict with the de facto part, they’re subject to serious question (see below).  They’re further tested by their ability to settle independent questions.  Once we’re settled on a principle, we use it to define ‘preferred universe’ and count as ‘true-in-V’ anything that’s true in all preferred universes.

I hope this has inched a bit closer!  Assuming so, here are the two questions I wanted to pose to you:

  • What is the status of ‘the concept of set’ and ‘the concept of set-theoretic universe’?

This might sound like a nicety of interest only to philosophers, but there’s a real danger of falling into something ‘external’, something too close for comfort to an ontology of abstracta or a form of truth-value realism.

  • The challenge we friends of extrinsic justifications like to put to defenders of intrinsic justifications is this: suppose some candidate principle generates a lot of deep-looking mathematics, but conflicts with intrinsically generated principles; would you really want to say ‘gee, that’s too bad, but we have to jettison that deep-looking mathematics’?  (I’d argue that this isn’t entirely hypothetical.  Choice was initially controversial largely because it conflicted with one strong theme in the contemporary concept of set, namely, the idea that a set is determined by a property.  The mathematics generated by Choice was so irresistible that (much of the) mathematical community switched to the iterative conception. Trying to shut down attractive mathematical avenues has been a fool’s errand in the history of mathematics.)

You’ve had some pretty interesting things to say about this!  This remark to Hugh, which you repeat, was what made me realize I’d so badly misunderstood you the first time around:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

And these remarks to Sol also jumped out:

Another very interesting question concerns the relationship between truth and practice. It is perfectly possible to develop the mathematics of set theory without consideration of set-theoretic truth. Indeed Saharon has suggested that ZFC exhausts what we can say regarding truth but of course that does not force him to work just in ZFC. Conversely, the HP makes it clear that one can investigate truth in set theory quite independently from set-theoretic practice; indeed the IMH arose from such an investigation and some would argue that it conflicts with set-theoretic practice (as it denies the existence of inaccessibles). So what is the relationship between truth and practice? If there are compelling arguments that the continuum is large and measurable cardinals exist only in inner models but not in V will this or should this have an effect on the development of set theory? Conversely, should the very same compelling arguments be rejected because their consequences appear to be in conflict with current set-theoretic practice?

And today, to me, you add:

I see that the HP is the correct source for axiom *candidates* which must then be tested against current set-theoretic practice. There is no naturalist leaning here, as I am in no way allowing set-theoretic practice to influence the choice of axiom-candidates; I am only allowing a certain veto power by the mathematical community. The ideal situation is if an (intrinsically-based) axiom candidate is also evidenced by set-theoretic practice; then a strong case can be made for its truth.

But I am very close to dropping this last “veto power” idea in favour of the following (which I already mentioned to Sol in an earlier mail): Perhaps we should accept the fact that set-theoretic truth and set-theoretic practice are quite independent of each other and not worry when we see conflicts between them. Maybe the existence of measurable cardinals is not “true” but set theory can proceed perfectly well without taking this into consideration.

Let me just make two remarks on all this.  First, if you allow the practice to have ‘veto power’, I don’t see how you aren’t allowing it to influence the choice of principles.  Second, if you don’t allow the practice to have ‘veto power’, but you also don’t demand that the practice conform to truth (as I was imagining in my generic challenge to intrinsic justification given above), then — to put it bluntly — who cares about truth?  I thought the whole project was to gain useful guidance for the future development of set theory.

All best,
Pen