Re: Paper and slides on indefiniteness of CH

Dear Sy,

In answer to your questions below, it seems to me that my work has philosophical significance in several ways. First, it shows that the reach of Quine’s (and perhaps Putnam’s) indispensability argument is extremely limited (for whatever that’s worth). Secondly, I believe it shows that one can’t sustain the view from Galileo to Tegmark that mathematics (and the continuum in particular) is somehow embedded in nature. Relatedly, it does not sustain the view that the success of analysis in natural science must be due to the independent reality of the real number system.

My results tell us nothing new about physics. And indeed, they do not tell us that physics is somehow conservative over PA. In fact it can’t be: if Michael Beeson is right, quantum mechanics is inconsistent with general relativity; see his article, “Constructivity, computability, and the continuum”, in G. Sica (ed.), Essays on the Foundations of Mathematics and Logic, Volume 2 (2005), pp. 23-25. They just tell us that the mathematics used in the different parts of physics is conservative over PA.
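Schematically, and just to fix what “conservative over PA” means here: for every sentence \varphi of first-order arithmetic,

\[ W \vdash \varphi \quad\Longrightarrow\quad \mathsf{PA} \vdash \varphi, \]

where W is the system in question. So any arithmetical consequence of the mathematics formalized there was already a theorem of PA.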

Finally, to be “quite happy with ZFC” is not the same as saying that there is a good philosophical justification for it.

Best,
Sol

Re: Paper and slides on indefiniteness of CH

Dear Sol,

This message is not specifically about your rebuttal of Hilary’s claim, but about a more general issue which I hope that you can shed light on.

You write:

Q1. Just which mathematical entities are indispensable to current scientific theories? and
Q2. Just what principles concerning those entities are needed for the required mathematics?

My very general question is: What do we hope to gain by showing that something can be “captured by limited means” (in this case regarding what mathematics is needed for physical theory)? Does this tell us something new about what we have “captured”?

I am of course familiar with the advantages of, for example, establishing that some computable function is in fact provably total in PA, as one might then extract useful new information about the growth rate of such a function. In set theory there is something analogous: if you can bring the large cardinal strength down far enough, core model theory kicks in and you have a good chance of achieving a much better understanding. Or if one starts with a philosophical position, like predicativity, it is somehow gratifying to know that one can capture it precisely with formal means.
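To make the growth-rate point concrete, here is the standard calibration (stated only as illustration): if \mathsf{PA} \vdash \forall x\, \exists y\, \varphi(x,y) with \varphi a \Sigma^0_1 formula, and f(x) is the least such y, then

\[ f \text{ is eventually dominated by } F_\alpha \text{ for some } \alpha < \varepsilon_0, \]

where (F_\alpha) is the fast-growing (Wainer) hierarchy. Provable totality in PA thus yields concrete growth-rate information.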

But frankly speaking, too often there is a connotation of “oh, we don’t really need all of that bad set theory to do this”, or, even more outdated: “what a relief, now we know that this is consistent because we captured it in a system conservative over PA!”. Surely in the 21st century we are not going to worry anymore about the consistency of ZFC.

Is the point (as you say at the end of your message) that you think you have to invoke some kind of platonistic ontology if you are not using limited means, and that for some reason this makes you feel uncomfortable (even though I presume you don’t have inconsistency worries)?

It is tempting to think that your result using your system W might tell us something new about physics. Does it? On the other hand, you have not claimed that “physics is conservative over PA” exactly, but only that the math needed to do a certain amount of physics is conservative over PA.

Finally, how is it that you claim that “only a platonistic philosophy of mathematics provides justification” for impredicative 2nd order arithmetic? That just seems wrong, as there are plenty of non-platonists out there (I am one) who are quite happy with ZFC. But maybe I don’t understand how you are using the word “justification”.

Thanks in advance for your clarifications. And please understand, I am not suggesting that it is not valuable to “capture things by limited means”, I just want to have a better understanding of what you feel is gained by doing that.

All the best,
Sy

Re: Paper and slides on indefiniteness of CH

On Fri, Aug 29, 2014 at 2:50 AM, Harvey Friedman wrote:

I developed some alternative conservative extensions of PA and HA called ALPO and B. The former, “analysis with the limited principle of omniscience”, was based on classical logic low down and constructive logic higher up; the latter, named for “Bishop”, was based on constructive logic. If I recall, both systems accommodated extensionality, which demanded additional subtleties in the conservation proofs. Both of these systems, if I recall, had substantially simpler axioms, and of course there is the obvious issue of how important an advantage it is to have extensionality and to have simple axiomatizations.

There are different ways of isolating weak formal systems in which substantial portions of actual mathematics can be formalized; they each have advantages and disadvantages. I am of course familiar with your systems ALPO and B and appreciate your fine conservation results for those as having real metamathematical interest. But the price you pay to ensure extensionality seems to be that they both employ (as you say) partial or full intuitionistic logic, and thus are not as readily useful for representing current mathematics, in which non-constructive arguments are ubiquitous.

And if it is just constructive mathematics that one wants to represent, Bishop has shown that extensionality is a red herring: every “natural” mathematical kind carries an appropriate “equality” relation, and functions of interest on such objects are supposed to preserve those relations. But the objects themselves can be interpreted as being explicitly given and the functions of interest are given by underlying computable operations. That is a way to have one’s constructive cake and eat it too. Incidentally, the model theorist Abraham Robinson also promoted thinking in terms of equality relations instead of extensionality for quite different reasons.
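In display form, and only as a rough sketch of Bishop’s convention (the notation is mine, not his): a mathematical kind is presented as a pair (X, \approx_X) consisting of a domain together with its appropriate equality, and a function between such kinds is an operation required to respect those equalities:

\[ f : (X, \approx_X) \to (Y, \approx_Y) \quad\text{demands}\quad x \approx_X x' \Longrightarrow f(x) \approx_Y f(x'). \]

For example, Bishop’s reals are (regular) Cauchy sequences of rationals, with x \approx y when the sequences converge together; extensionality over an abstract domain of “points” is never invoked.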

Simplicity is not really relevant here. The question was: what mathematical notions, and principles concerning them, are indispensable to (current mathematized) science? The answer provided by my system W is: no more than what can be reduced to PA. Formal systems that are reduced to PA, like your fragments of set theory or fragments of 2nd order arithmetic, may be formally simpler on the face of it, but one has to see what it takes to actually verify which parts of mathematics can be done on that basis. I contend that the simpler the formal system, the less direct is that verification. My system W makes use of a less familiar formalism, but it is more readily used than others I have seen to carry that out. Witness the notes to which I have referred: “How a little bit goes a long way. Predicative foundations of analysis.”

The real issue you want to deal with is the formalization of the actual mathematics.

Well, the only issue I dealt with in response to Hilary was the question of the formalization of scientifically applicable mathematics.

There is a “new” framework to, at least potentially, deal with formalizations of actual mathematics without prejudging the matter with a particular style of formalization. This is the so-called SRM = Strict Reverse Mathematics program. I put “new” in quotes because my writings about this PREDATE my discovery and formulation of the Reverse Mathematics program. SRM at the time was highly premature. I’m just now trying to get SRM off the ground properly.

The basic idea is this. Suppose you want to show that a body of mathematics is “a conservative extension of PA”. (Closely related formulations are that the body of mathematics is “interpretable in PA” or “interpretable in ACA_0”.) You take the body of mathematics itself, as a system of mathematical statements including required definitions, and treat that as an actual formal system. This is quite different from the usual procedures for formalizing actual mathematics in a formal system, where one has, to various extents, to restate the mathematics using coding and various and sundry adjustments. SRM could well be adapted to constructive mathematics as well. Actually, it does appear that there is merit to treating actual abstract mathematics as partially constructive. When I published on ALPO and B, I did not relate it to SRM ideas.
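To fix the terminology in the usual way (nothing here is special to SRM): an interpretation of a theory T in PA is a translation \tau of the language of T into the language of PA, relativizing quantifiers to a definable domain and replacing primitives by definable notions, such that

\[ T \vdash \varphi \quad\Longrightarrow\quad \mathsf{PA} \vdash \varphi^\tau \quad\text{for all sentences } \varphi, \]

whereas conservativity over PA is the condition that the arithmetical theorems of the larger system are already theorems of PA. These are the two senses of “reduction” in play here.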

I remember your talking about the idea of SRM before you turned to RM. That was ages ago (c. 1970?). I am totally skeptical of this, because the only way an actual body of mathematics can be treated as a formal system is for it to be given as a formal system to begin with, and no mathematics of interest has that character (proof-checking systems notwithstanding). Moreover, we understand different formalizations, e.g. of elementary arithmetic, as being essentially the same even though they differ in formal details, because we understand them in terms of their intended meaning.

Consider analysis: say there are 100 textbooks on functional analysis, all covering essentially the same material. Which one is “the” text for your SRM? Ditto for every other part of mathematics.

There is a research program that I have been suggesting but haven’t had the time to get into – namely, to systematically introduce physical notions directly into formalisms from f.o.m.

I don’t foresee any problems with that.

I conjecture that something striking will come out of pursuing this with systematic imagination.

Let’s see.

I am curious as to where your anti-Platonist view kicks in. I understand that you reject \textsf{Z}_2 per se on anti-Platonist grounds. Presumably, you do not expect to be able to interpret \textsf{Z}_2 in a system that you do accept? Perhaps the only candidate for this is Spector’s interpretation? Now what about \Pi^1_1\textsf{-CA}_0? This is almost interpretable in \textsf{ID}_{<\omega} and interpretable just beyond. So you reject \Pi^1_1\textsf{-CA}_0 but accept the target of an interpretation? What about \Pi^1_2\textsf{-CA}_0? How convincing are ordinal notation systems as targets of interpretations — or more traditionally, their use for consistency proofs?

First, as a mathematician (specializing in logic and related topics), my work doesn’t hew in a direct way to my philosophy. I use current everyday set theory (sets, set-theoretical operations, cardinals, ordinals) like most every other mathematician. But one of my aims is to investigate what is really needed for what, and to see whether that has philosophical significance.

Second, proof theory does not make the consistency of this or that formal system any more convincing than what one was reasonably convinced of before. (See my article, “Does proof theory have a viable rationale?”) But the reduction of subsystems of classical analysis to constructive theories of iterated inductive definitions from below is significant for a generalized Hilbert’s program. In any case, I don’t have a red line.

Here is my view. There are philosophies of mathematics roughly corresponding to a lot of the levels of the interpretation hierarchy, ranging from even well below EFA (exponential function arithmetic) to perhaps j:V_{\lambda+1}\to V_{\lambda+1} and j:V \to V without choice, and perhaps beyond. These include your philosophy. Most of these philosophies have their merits and demerits, their advantages and disadvantages, which are apparent according to the skill levels of the philosophers who advocate them. I regard the clarity of the associated notions as “continuously” degrading as you move up, starting with something like a 3×3 chessboard.
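One familiar sampling of that hierarchy, in increasing interpretation power (the placements are standard, with many intermediate systems elided; PA and ACA_0 are mutually interpretable):

\[ \mathsf{EFA} < \mathsf{PRA} < \mathsf{PA} \equiv \mathsf{ACA}_0 < \mathsf{ATR}_0 < \Pi^1_1\textsf{-CA}_0 < \textsf{Z}_2 < \mathsf{ZFC} < \mathsf{ZFC} + \text{measurables} < \cdots < j:V_{\lambda+1}\to V_{\lambda+1} < \cdots \]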

I decided long ago that the establishment of the following Thesis – which has certainly not yet been fully established – is going to be of essential importance in any dialog. Of course, exactly what its implications are for the dialog are unclear, and it may be used for unexpected or competing purposes in various ways by various scholars – just like Gödel’s first and second incompleteness theorems, and the Gödel/Cohen work on AxC and CH.

THESIS. Corresponding to every interesting level in the interpretation hierarchy referred to above, there is a \Pi^0_1 sentence of clear mathematical interest and simplicity, i.e., one which is demonstrably equivalent to the consistency of formal systems corresponding to that level, with the equivalence proved in EFA (or even less). There are corresponding formulations in terms of interpretations and conservative extensions.
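In schematic form: for each such level, with associated formal system T, the Thesis asks for a sentence P with

\[ P \in \Pi^0_1, \quad P \text{ of clear mathematical interest, and} \quad \mathsf{EFA} \vdash P \leftrightarrow \mathrm{Con}(T). \]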

Furthermore, the only way we can expect the wider mathematical community to become engaged in such issues (finitism, predicativity, realism, Platonism, etcetera) is through this Thesis.

I am not out to get mathematicians generally engaged in such issues; it is a rare mathematician who does (Weyl, Brouwer, Hilbert, Bishop). But even if one is out to do that, I don’t think your Thesis (to whatever extent that may be verified) will serve to engage them any more in that respect.

Best,
Sol

Re: Paper and slides on indefiniteness of CH

Dear Harvey,

Ok, I will add some comments to my response. Below is simply how I currently see things. It is obvious, based on this account, that an inconsistency in PD would render this picture completely vacuous, and so I for one would have to start all over in trying to understand V. But that road would be much harder given the lessons of the collapse caused by such an inconsistency. How could one (i.e. me) be at all convinced that the intuitions behind ZFC are not similarly flawed?

I want to emphasize that what I describe below is just my admittedly very optimistic view. I am not advocating a program of discovery or anything like that. I am also not arguing for this view here. I am just describing how I see things now. (But that noted, there are rather specific conjectures which, if proved, would I think argue strongly for this view. And if these conjectures are false then I will have to alter my view.)

This view is based on a substantial number of key theorems which have been proved (and not just by me) over the last several decades.

Starting with the conception of V as given by the ZFC axioms, there is a natural expansion of the conception along the following lines.

The Jensen Covering Lemma argues for 0^\# and introduces a horizontal maximality notion. This is the first line and gives sharps for all sets. This in turn activates a second line, determinacy principles.
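For reference, the standard statement of covering: if 0^\# does not exist, then

\[ \forall X \subseteq \mathrm{Ord}\ \big( |X| \geq \aleph_1 \rightarrow \exists Y \in L\ (X \subseteq Y \wedge |Y| = |X|) \big), \]

so L is close to V; conversely, a failure of covering yields 0^\#.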

The core model induction now gets under way and one is quickly led to PD and \text{AD}^{L(\mathbb R)}, and reaches the stage where one has iterable inner models with a proper class of Woodin cardinals. This is all driven by the horizontal maximality principle (roughly, if there is no iterable inner model with a proper class of Woodin cardinals then there is a generalization of L relative to which V is close at all large enough cardinals and with no sharp etc.).

Adding the hypothesis that there is a proper class of Woodin cardinals, one can now directly define the maximum extension of the projective sets and develop the basic theory of these sets. This is the collection of universally Baire sets (which has an elementary definition). The important point here is that unlike the definitions of the projective sets, this collection is not defined from below. (There is a much more technical definition one can give without assuming the existence of a proper class of Woodin cardinals).
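For reference, the elementary definition alluded to (in the Feng-Magidor-Woodin formulation): a set A \subseteq \mathbb R is universally Baire if

\[ f^{-1}(A) \text{ has the property of Baire in } K \]

for every compact Hausdorff space K and every continuous function f : K \to \mathbb R.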

Continuing, one is led to degrees of supercompactness (the details here are now based on quite a number of conjectures, but let’s ignore that).

Also a third line is activated now. This is the generalization of determinacy from L(\mathbb R) = L(P(\omega)) to the level of L(P(\lambda)) for suitable \lambda > \omega. These \lambda are where the Axiom I0 holds. This axiom is among the strongest large cardinal axioms we currently know of which are relatively consistent with the Axiom of Choice. There are many examples of rather remarkable parallels between L(\mathbb R) in the context that AD holds in L(\mathbb R), and L(P(\lambda)) in the context that the Axiom I0 holds at \lambda.
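For reference, the Axiom I0 at \lambda asserts, in its usual formulation, that there is an elementary embedding

\[ j : L(V_{\lambda+1}) \to L(V_{\lambda+1}) \]

with critical point below \lambda.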

Now things start accelerating. One is quickly led to the theorem that the existence of the generalization of L to the level of exactly one supercompact cardinal is where the expansion driven by the horizontal maximality principles stops. This inner model cannot have a sharp and is provably close to V (if it exists in the form of a weak extender model for supercompactness). So the line (based on horizontal maximality) necessarily stops (if this inner model exists) and one is left with vertical maximality and the third line (based on I0-like axioms).

One is also led by consideration of the universally Baire sets to the formulation of the axiom that V = Ultimate L and the Ultimate L Conjecture. The latter conjecture, if true, confirms that the line driven by horizontal maximality principles ceases. Let’s assume the Ultimate L Conjecture is true.

Now comes (really extreme) sheer speculation. The vertical expansion continues, driven by the consequences for Ultimate L of the existence of large cardinals within Ultimate L.

By the universality theorem, there must exist \lambda where the Axiom I0 holds in Ultimate L. Consider for example the least such cardinal in Ultimate L. The corresponding L(P(\lambda)) must have a canonical theory, where of course I am referring to the L(P(\lambda)) of Ultimate L.

It has been known for quite some time that if the Axiom I0 holds at a given \lambda then the detailed structure theory of L(P(\lambda)) = L(V_{\lambda+1}) above \lambda can be severely affected by forcing with partial orders of size less than \lambda. But these extensions must preserve that Axiom I0 holds at \lambda. So there are natural features of L(P(\lambda)) above \lambda which are quite fragile relative to forcing.

Thus unlike the case of L(\mathbb R), where AD gives “complete information”, for L(P(\lambda)) one seems to need two things: first, the relevant generalization of AD, which arguably is provided by the Axiom I0, and second, the correct theory of V_\lambda. The speculation is that V = Ultimate L provides the latter.

The key question will be: Does the global structure theory of L(P(\lambda)), as given in the context of the Axiom I0 and V = Ultimate L, imply that V = Ultimate L must hold in V_\lambda?

If this convergence happens at \lambda and the structure theory is at all “natural” then at least for me this would absolutely confirm that V = Ultimate L.

Aside: This is not an entirely unreasonable possibility. There are quite a number of theorems now which show that \text{AD}^{L(\mathbb R)} follows from its most basic consequences.

For example, it follows from just the following: all sets are Lebesgue measurable and have the property of Baire, and uniformization holds (by functions in L(\mathbb R)) for the sets A \subset \mathbb R \times \mathbb R which are \Sigma_1-definable in L(\mathbb R) from parameter \mathbb R. This is essentially the maximum amount of uniformization which can hold in L(\mathbb R) without yielding the Axiom of Choice.

Thus for L(\mathbb R), the entire global structure theory, i.e. that given by \text{AD}^{L(\mathbb R)}, is implied by a small number of its fundamental consequences.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Peter,

Thanks for your message, especially the first half with helpful clarifications regarding the proper use of the terms “extrinsic” and “intrinsic”. I especially like your suggestion of considering “degrees of intrinsicness” (“relativised” versions) and this should fit very well with the HP. This is perhaps anticipated by Pen’s comment near the end of her 27.August message, where she reports that I might consider “returning to the proposal of a different conception of set. The challenge there is to do so without returning to the unappealing idea that ‘intrinsic justification’ and ‘set-theoretic truth’ are determined by a conception of the set-theoretic universe that’s special to a select group.” One could perhaps interpret “special to a select group” as “low degree of intrinsicness” in your sense.

But more to the point, Peter, the situation is as follows: Pen and I initiated a process of carefully examining the steps in the HP and got stuck. As Pen said in her 27.August message, one of the key steps is my claim to extract something new from the Maximal Iterative Conception (MIC). Her message includes a description of my claim, which includes the use of “lengthenings” and “thickenings” of mental pictures of V. Pen felt that there was “something off about a universe being ‘maximal in width’, but also having a ‘thickening'” and I replied that “lengthenings” were already implicit in reflection (Pen’s message includes my argument for that). At that point Pen decided it would be best to consult with you directly about the role of “lengthenings” in reflection and explicitly asked for your opinion. What do you think? I’d really appreciate your response because Pen and I have been waiting for it since August 27!

Regarding the second half of your message: I do not claim that the IMH is intrinsically justified! This is partly my fault, since my views have changed since Tatiana and I wrote our paper. In my 7.August message to Pen I say:

… as my views have evolved slightly since Tatiana and I wrote the BSL paper I’d like to take the liberty (see below) of fine-tuning and enhancing the picture you present above. My apologies for these modifications, but I understand that changes in one’s point of view are not prohibited in philosophy? ;)

I go on to say:

… what I came to realise is that the IMH deals only with “powerset maximality” and it is compelling to also introduce “ordinal maximality” into the picture. (I should have come to that conclusion earlier, as indeed the existence of inaccessible cardinals is derivable from the intrinsic maximal iterative concept of set!)

And further on:

This was an important lesson for me and strongly confirms what you suggested: In the HP (Hyperuniverse Programme) we are not able to declare ultimate and unrevisable truths. Instead it is a dynamic process of exploration of the different ways of instantiating intrinsic features of universes, learning their consequences and synthesising criteria together with the long-term goal of converging towards a stable notion ….

I do understand that I said very different things in my original paper with Tatiana, but I tried to correct this in my 7.August e-mail to Pen. And I do realise that it is too much to ask that you sift through all of those e-mails I sent to Pen (there were many!) so I’m happy to repeat things now to sort out misunderstandings.

In abridged form: The HP starts with intrinsic features of V that follow from the Maximal Iterative Conception and then provides a method for turning these features into precise mathematical criteria which ultimately yield first-order statements. The process is dynamic, whereby the choice of criteria together with their first-order consequences can change over time, as indicated in the last quote above. So one does not arrive at “unrevisable intrinsically justified” statements, as sometimes criteria and their consequences are discarded as the programme progresses. This already happened to the IMH: it must be synthesised with ordinal-maximality, and if this is done using the Friedman-Honzik form of reflection (\#-generation) this removes its anti-large-cardinal consequences.

The above description obviously leaves huge gaps, in particular it does not explain what the mathematical criteria are about and how they are chosen. But it gives the rough idea and I hope clarifies that the word “intrinsic” is intended to apply to features of V and only to criteria and their consequences after a lengthy (practice-independent) process of analysis and synthesis has occurred. I plan to examine each feature of the HP carefully in further discussions with Pen. But we are in need of your response to her message!

Thanks,
Sy

PS: The HP is not concerned with justifications of consistency. The consistency of large cardinals is taken as given in the programme.

Re: Paper and slides on indefiniteness of CH

Dear Pen and Sy,

I have benefited from your exchange. I’ll try to add some input.

Sy: I have wanted to say something about your proposal. But I am still very unclear on how you understand the philosophical landscape, in particular, on how you understand the distinction between intrinsic and extrinsic justification.

One of the distinctive aspects of your view — a selling point, from a philosophical point of view — is that it involves pursuit of “intrinsic justifications”, as opposed to “extrinsic justifications”. But I am not sure how you are using these terms. From your exchange with Pen it seems that your usage is quite different from the original, namely, that of Gödel.

For Gödel a statement S is intrinsically justified relative to a concept C (like the concept of set) if it “follows from” (or it “unfolds”, or is “part of”) that concept. The precise notion intended is far from clear, but it seems clear that, whatever it is, intrinsic justifications are supposed to be very secure, not easily open to revision, and qualify as analytic. In contrast, on your usage it appears that intrinsic justifications need not be secure, are easily open to revision, and (so) are (probably) not analytic.

For Gödel a statement S is “extrinsically justified” relative to a concept C (like the concept of set) if it is justified (on the basis of reasons grounded in that concept) in terms of its consequences (especially its “verifiable” consequences), just as in physics. Again this is far from precise but it seems clear that extrinsic justifications are not as secure as intrinsic justifications but instead offer “probable”, defeasible evidence. In contrast, on your usage it appears that you do not understand “extrinsic justification” as an epistemic notion, but rather you understand it as a practical notion, one having to do with meeting the aims of a pre-established practice.

So, you appear to use “intrinsic justification” for an epistemic notion that is not as secure as the traditional notion but rather merely gives epistemic weight that falls short of being conclusive. Moreover, at points, when talking about intrinsic justifications you talk of testing them in terms of their consequences. So I think that by “intrinsically justified” you mean either “intrinsically plausible” or “extrinsically justified”.

I think you need to be more precise about how you use these terms and how your usage relates to the standard usage. This is especially important if the main philosophical selling point of your proposal is that it is re-invigorating “intrinsic justifications” in the sense of Gödel. (Good places to start in getting clear on these notions are the papers of Tony Martin and Charles Parsons.)


In what I say next I will use “intrinsic justification” in the standard sense, both for the sake of definiteness and because it is on this understanding that your view is distinctive from a philosophical point of view.

Let me begin with a qualification. I am generally wary of appeals to “intrinsic justification”, for the same reason I am generally wary of appeals to “self-evidence”, the reason being that in each case the notion is too absolute — it pretends to be a final certificate, an epistemic high-ground, a final court of appeal. But in fact there is little agreement on what is intrinsically justified (and on what is self-evident). For this reason, in the end, discussions that employ these notions tend to degenerate into foot-stamping. It is much better, I think, to employ notions where there is widespread intersubjective agreement, such as the relativized versions of these notions, notions like “A is more intrinsically plausible than B” and “A is more (intrinsically) evident than B”. This is one reason I find extrinsic justifications to be more helpful. They are piecemeal and modest and open to revision under systematic investigation. (I think you agree, since I think that ultimately by “intrinsic justification” you mean what is normally meant by “extrinsic justification”).

But let me set that qualification aside and proceed, employing the notion of “intrinsic justification” in the standard sense, for the reasons given above.

There is an initial puzzle that arises with your view.

THE PUZZLE

  1. You claim that IMH is intrinsically justified.
  2. You claim that inaccessible cardinals — and much more — are intrinsically justified.
  3. FACT: IMH implies there are no inaccessibles.

Contradiction!

The natural reaction at this point would be to think that there is something fundamentally problematic about this approach.

But perhaps there is a subtlety. Perhaps in (1) and (2) intrinsic justifications are relative to different conceptions.

When you claim that IMH is intrinsically justified, what exactly are you saying, and what is the case for the claim? Are you saying IMH is (a) intrinsically justified relative to our concept of set (which, on the face of it, concerns V), or (b) relative to the concept of being a countable transitive model of ZFC, or (c) relative to the concept of being a countable transitive model of ZFC that meets certain other constraints? Let’s go through these options one by one.

(a) IMH is intrinsically justified relative to the concept of set. I don’t see the basis for this claim. To the extent that I have a grasp on the notion of being intrinsically justified relative to the concept of set, I can go along with the claims that Extensionality and Foundation are so justified, and even the claims that Infinity and Replacement and Inaccessibles are so justified (thus following Gödel and others), but I lose grip when it comes to IMH. Moreover, IMH implies that there are no inaccessibles. Does that not undermine the claim that IMH is intrinsically justified on the basis of the concept of set? Assuming it does (and that this is not what you claim), let’s move on.

(b) IMH is intrinsically justified relative to the concept of being a countable transitive model of ZFC. I have a good grasp on the notion of being a countable transitive model of ZFC. And I think it is interesting to study this space. But when I reflect on this space — when I try to unfold the content implicit in this idea — I can reach nothing like IMH.

(c) IMH is intrinsically justified relative to the concept of being a countable transitive model of ZFC that meets certain other constraints. I can certainly see going along with this. But, of course, it depends on what the other constraints are. We have two options: (i) We can be precise about what we mean. For example, we can build into the notion that we are talking about the concept of being a countable transitive model of ZFC that satisfies X, where X is something precise. We might then deduce IMH from X. In this case we know what we are talking about — that is, we know the subject matter — but we merely “get out as much as we put in”. Not so interesting. (ii) We can be vague about what we mean: For example, we can say that we are talking about countable transitive models of ZFC that are “maximal” (with respect to something). But in that case we have little idea of what we are talking about (our subject matter) and it seems that “anything goes”.

You seem to want to resolve the conflict in (a) — between the claim that inaccessibles are intrinsically justified and the claim that IMH is intrinsically justified — by resorting to both intrinsic justifications on the basis of our concept of set (which gives inaccessibles) and intrinsic justifications on the basis of the hyperuniverse (understood as either (i) or (ii) under (c), which gives IMH), and you seem to want to leverage the interplay between these two in such a way that it gives us information about our concept of set (which concerns V). But what can you say about the relationship between these two forms of intrinsic justification? Is there some kind of “meta” (or “cross-domain”) form of intrinsic justification that is supposed to give us confidence about why intrinsic justifications on the basis of the hyperuniverse should be accurate indicators of truth on the basis of (or intrinsic justifications on the basis of) our concept of set?


One final comment: Here is an “intuition pump” regarding the claim that IMH is intrinsically justified.

THE FACTS:

  1. If there is a Woodin cardinal with an inaccessible above then IMH is consistent.
  2. If IMH holds then measurable cardinals are consistent.

So, if IMH is intrinsically justified (in the standard sense) then we can lean on it to ground our confidence in the consistency of measurable cardinals. For my part, the epistemic grounding runs the other way: IMH provides me with no confidence in the consistency of measurable cardinals (or of anything). Instead, the consistency of IMH is something in need of grounding. Fact (1) above provides me with evidence that IMH is consistent. Fact (2) does not provide me with evidence that measurable cardinals are consistent. I think most would agree. If I am correct about this then it raises further problems for the claim that IMH is intrinsically justified (in the standard sense).


I have further comments and questions about your notion of “sharp-generated reflection” and how you use it to modify IMH to \textsf{IMH}^\#. But those questions seem premature at this point, given that I am not on board with the basics. Let me just say this: The fact that you readily modify (intrinsically justified) IMH to \textsf{IMH}^\#, in light of the fact that IMH is incompatible with (intrinsically justified) inaccessibles, indicates that your notion of intrinsic justification is quite revisable and, I think, best regarded as “intrinsic plausibility” or “extrinsic justification” or something else.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

My current comments [on Woodin's August 25 email]:

I appreciate that you probably do not have the time to continue this — but perhaps some others will.

This thought-provoking reply is rather thin on an explanation of this “conception of V in which PD holds”. One thing I am not sure of is whether this involves a direct consideration and acceptance of PD on its own, or whether it instead involves a direct consideration and acceptance of the relevant large cardinals on their own that imply PD. Of course, some sort of “interactive mixture of the two” might make some sense, but obviously a clearer story would be just a direct consideration and acceptance of the relevant large cardinals, since they outright imply PD. Also, the phrase “conception of V in which PD holds” does suggest the “intrinsic” rather than the “extrinsic”. E.g., you didn’t say “PD holds in V because it is set-theoretically useful”.

Concerning

But as your work suggests this could well change. But even so, somehow a structural divergence alone does not seem enough (to declare that Con(ZFC + PD) is an indispensable part of that conception). Who knows, maybe there is an arithmetically based, strongly motivated hierarchy of “large cardinals” and Con(PD) matches something there.

This refers to #82 on my website. For the implicitly \Pi^0_1 statement equivalent to Con(SRP), you can really “see” the large cardinal indiscernibility in action. For the explicitly \Pi^0_1 statement equivalent to Con(SRP), you can also see it, perhaps not as clearly, and you can somewhat see it for the explicitly and implicitly finite \Pi^0_1 statements equivalent to Con(HUGE). Of course, my main goal was just to get something that I can get a large number of mathematicians to feel, and be shocked that they can’t get around it by cutting down generality: that serious philosophy has reentered mathematics in a way that simply cannot be (comfortably) removed by their usual intellectual removal methods. But I obviously will be able to kill several birds with one stone if I can show that you can merely strip down large cardinal hypotheses to the bare bones in the integers (rationals) by simply writing down a purely arithmetical picture that all mathematicians find “perfectly natural”. Of course, I am not there yet, but I have a real good start.

Harvey

Re: Paper and slides on indefiniteness of CH

Looking over the subject line and the recipient list, I think that we have lately been focusing on rather technical matters, and I would like to review the open threads we have with, in alphabetical order, Hugh, Peter, Sol, and Sy.

With Hugh. I asked Hugh why he believes/feels/intuits that Con(ZFC) implies Con(ZFC + PD). Hugh said he has to return to Cambridge because the semester is starting. I have some sympathy for that, as I used to be employed. Of course, it would be nice if Hugh gets some time to explain more about this implication. I don’t know the extent to which Peter believes in this implication, and maybe Peter can comment on this. Of course, as you know, I think that Incon(ZFC) and Incon(ZFC + PD) warrant different kinds of Press Releases (Hugh and Peter probably agree with that).

With Peter, I suggested a sharp difference between \omega and P(\omega). For the former, we have no proper substructure of (\omega,0,+1), but for the latter we have plenty of substructures of (P(\omega), \dots), where \dots is any finite list of operations, natural or not. And there are also stronger and various other kinds of statements along these lines. I think Peter has ideas about criticizing this distinction – perhaps claiming that this distinction is biased or unimportant. An example: if I say “finitely generated” and “not finitely generated”, then Peter could object that I have biased the comparison using “finitely”, and make similar kinds of objections. But here we actually have generation from a single famous element via a famous operation. Recall that I was prompted to make this point because Peter claims that he has thoroughly debunked certain kinds of fundamental distinctions between \omega and P(\omega). Rather than delve into Peter’s writings, I’m sure we would all greatly prefer a nice friendly careful discussion interactively online, taking advantage of modern technology.
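The first claim is just induction in display form:

\[ S \subseteq \omega,\ 0 \in S,\ \forall x\, (x \in S \rightarrow x + 1 \in S) \quad\Longrightarrow\quad S = \omega, \]

so any substructure of (\omega,0,+1) is everything; whereas since P(\omega) is uncountable, the downward Löwenheim-Skolem theorem already yields proper, indeed countable, substructures of (P(\omega),\dots).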

Also Peter mentioned the difficulty of getting “convergence” in a public email exchange like this, mentioning “hydra”. I don’t believe that “convergence” should be the goal. Rather, it should be public exposure and interchange so as to ascertain what the disagreements are, with the hopeful consequence of exposing new philosophical/foundational insights and generating new (kinds of) Theorems that bear on the issues.

With Sol, I responded to his belief that P(\omega) is indefinite, or prima facie indefinite, whereas \omega is definitely definite, in an absolute sense. My view is that there is a sliding scale of “definiteness” or “clarity” or whatever, ranging from, say, the 3×3 black and white chessboard (yes, non-chessplayers, try to fully visualize it in a single clear mental image, with the left corner black, and you will like it), all the way up to various iterations of the power set operation. I stated, in various different wordings, the

THESIS. Corresponding to each level, we have natural formal systems T, and for each such T, there are corresponding simple, perfectly natural mathematical (implicitly and explicitly) \Pi^0_1 sentences of clear mathematical interest which are demonstrably equivalent to Con(T). There are “more” of these as we move up in level of interpretation power.

and asked how this Thesis relates to the issues of a “sliding scale of definiteness”. Maybe it is unrelated, or maybe it is inextricably intertwined. In any case, I offered my opinion that the establishment of this Thesis, or progress on this Thesis, plays an essential role in the discussion of “definiteness” – but precisely how it should affect the discussion is not yet clear. I offered that the same can be said of Gödel’s incompleteness theorems, which have been used to justify many conflicting points of view.
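One relevant calibration fact, if I am recalling it correctly: for finitely axiomatized sequential theories S and T,

\[ S \text{ is interpretable in } T \quad\Longleftrightarrow\quad \mathsf{EFA} \vdash \mathrm{Con}(T) \rightarrow \mathrm{Con}(S), \]

which is one reason levels of interpretation power and \Pi^0_1 consistency statements track each other so tightly.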

With Sy, I asked for a clear step-by-step presentation of the proposal to generate new axioms for set theory based on new forms of “maximality”. The idea of using relationships between countable transitive models of set theory for this purpose is, prima facie, paradoxical, since countable universes appear to be at odds with “maximality” even if we have the downward Löwenheim-Skolem theorem. But we need some transparently stated justification that is generally understandable, so that we can see the merits of this proposal.

Harvey