Towards a New Epistemology of Mathematics
September 14-16, 2006
My contribution will concentrate on some situations arising from set-theoretical developments which, as I'll show, are usually passed over in silence, or even misinterpreted, by the philosophies of set theory subscribed to by most practitioners and philosophers today. Two situations will receive particular attention: the multiple-universe phenomenon and the epistemic status of the notion of forcing. As to the former, I'll sketch the ways it is usually interpreted and argue that they leave unexplained both Jensen's view that V=L may be a very attractive axiom (Jensen 1995) and Martin's view that referring to the universe of all sets may make sense after all, the multiple-universe phenomenon notwithstanding (Martin 2001). As to the latter, I'll note that the notion of forcing has shifted from being conceived as a mere technical device to being regarded as a fundamental set-theoretic concept (see H. Friedman's FOM report on the panel on CH held in Atlanta in January 2005), and observe that a philosophical explanation of this fact is yet to be given.
I will interpret these situations as witnessing the need for a novel philosophy of set theory.
The rise of the use of computers in mathematical research is raising interesting new philosophical questions about the nature of proof and of justification within mathematics. On the one hand, mathematicians have produced 'computer-aided' proofs - for example Appel and Haken's 1976 proof of the celebrated Four-Color Theorem - which are much too long and complex to ever be fully checked by human mathematicians. On the other hand, computers have been used to verify large numbers of specific instances of general mathematical claims - for example for Goldbach's Conjecture - thus building up a large body of what might be seen as inductive evidence. The term "Experimental Mathematics" has been applied to both of these developments. I am interested in whether there is any conceptual unity to the use of this term by mathematicians, and what philosophical issues are at stake in accepting the legitimacy of experimental mathematics.
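The instance-checking mentioned above can be illustrated by a short script (a hypothetical sketch, not any particular published verification) that confirms Goldbach's Conjecture for a range of small even numbers; each confirmed instance adds to the body of inductive evidence without constituting a proof:

```python
# Sketch: check Goldbach's Conjecture (every even n > 2 is a sum of
# two primes) for a small range -- inductive evidence, not a proof.

def is_prime(n):
    """Trial division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_witness(n):
    """Return the first prime pair (p, n - p) summing to even n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Any even number in the range without a witness would refute the conjecture.
assert all(goldbach_witness(n) is not None for n in range(4, 10000, 2))
```

A search like this, pushed to astronomically large bounds by distributed computation, is exactly the kind of "experimental" evidence whose epistemic status is at issue.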
I will discuss visualization in mathematics from a historical and didactical point of view. A common notion, historically as well as today, is to distinguish between 'visualizable' and 'non-visualizable' mathematics. I criticize this distinction since it does not take into consideration where, and to whom, we visualize. The main example I will discuss, both historically and didactically, is continuous nowhere differentiable functions. In this connection I will consider Felix Klein's distinction between naive and refined mathematics.
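The standard example of such a function, stated here for reference with Weierstrass's original parameter conditions, is:

```latex
% Continuous everywhere, differentiable nowhere (Weierstrass, 1872):
W(x) \;=\; \sum_{n=0}^{\infty} a^{n} \cos\!\bigl(b^{n}\pi x\bigr),
\qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
```

The uniform convergence of the series makes continuity obvious, while the ever-finer oscillations that destroy differentiability are precisely what resist any single visualization.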
In a recent paper, 'How Mathematicians May Fail to be Fully Rational', I advocated the adoption in the philosophy of mathematics of Alasdair MacIntyre's general notion of tradition-constituted enquiry. A central component of this notion requires of a rational tradition that it know the history of its successes and failures. This raises the question as to whether, were such a history to be written, it would fall foul of the criticism contemporary historians of mathematics have levelled at mathematicians' histories: that they are largely 'Royal-road-to-me' accounts. I shall address this question in the context of a research programme known as 'higher-dimensional algebra', and consider the charge mathematicians may make in return that historians are unable to treat research programmes which run for decades, supported by tens or hundreds of mathematicians from many countries and institutions.
The rise of modern, structural algebra may be characterized in terms of the consolidation of a certain image of the discipline that developed gradually since the turn of the century, received special impetus with the work of Emmy Noether beginning in 1920, and eventually became epitomized in van der Waerden's famous textbook of 1930, Moderne Algebra. The paradigmatic presentation of the discipline as conceived at the turn of the century is the one presented in Heinrich Weber's classical Lehrbuch der Algebra, whose first volume appeared in 1895.
The way from the "classical" to the "modern" conception of the discipline can be investigated from several perspectives. The most immediate, and perhaps necessary, one is to look at the milestone articles that progressively produced the main concepts, theorems and techniques that came to stand at the center of algebraic research as it was practiced through the 1920s.
Parallel to this, however, one may look for additional hints that clarify how the practitioners of the discipline interpreted this progressive evolution and how their image of algebra changed accordingly. One illuminating way to do so is to look at the leading German review journal of the period, the Jahrbuch über die Fortschritte der Mathematik. It turns out that the changing classificatory schemes adopted by the journal to account for the current situation at various important crossroads of this story add significant insights to our understanding of it.
Recent years have seen a growing acknowledgement within the mathematical community that mathematics is cognitively/socially constructed. Yet to anyone doing mathematics, it seems totally objective. The sensation in pursuing mathematical research is of discovering prior (eternal) truths about an external (abstract) world. Although the community can and does decide which topics to pursue and which axioms to adopt, neither an individual mathematician nor the entire community can choose whether a particular mathematical statement is true or false, based on the given axioms. Moreover, all the evidence suggests that all practitioners work with the same ontology. (My number 7 is exactly the same as yours.) How can we reconcile the notion that people construct mathematics, with this apparent choice-free, predetermined objectivity? I believe the answer is to be found by examining what mathematical thinking is (as a mental activity) and the way the human brain acquired the capacity for mathematical thinking.
In order to further the debate on whether mathematics needs new axioms, it seems useful to say what role axioms play in mathematics, so that their need can be analyzed. If we see what role axioms have been called upon to play historically, then we can see whether ZFC is sufficient, or whether more axioms might be necessary.
While it may seem plausible that axioms are inherently obvious statements that can be used to establish theorems unassailably, I point out that this may be neither possible nor necessary. In addition, it doesn't seem to fit the historical facts. Instead, I argue that the role of axioms is and has been to systematize relatively uncontroversial facts that mathematicians can accept from a wide variety of philosophical positions. Once the axioms are generally accepted, mathematicians can expend their energies on proving theorems instead of arguing philosophy. The fictionalist and the platonist can adopt the same axioms, and thus use each other's theorems, despite each thinking that the other has a confused picture of their shared goal.
Given this account of the role of axioms, I suggest that in order for a new axiom to be adopted, it must meet four requirements: it must be widely acceptable, it must have interesting consequences, it must help avoid philosophical disagreements, and it must be proven independent of existing axioms. Violating the third requirement gives what Feferman calls "structural axioms" rather than "foundational axioms", while violating the fourth gives a "conjecture", "hypothesis", or "theorem" rather than an axiom.
Penelope Maddy has recently proposed a similar view in Naturalism in Mathematics, but she suggests that the philosophical questions bracketed by adopting the axioms can in fact be ignored forever. I contend that these bracketed questions are in fact important, and should ideally be resolved at some point. I concede that their resolution is unlikely to affect the ordinary practice of mathematics, though it may have effects in the margins of mathematics, including with regard to the large cardinal axioms Maddy would like to support.
The German mathematician David Hilbert's most distinctive contribution to the debate about the foundations of mathematics was his insistence that ordinary mathematics could stand on its own without recourse to philosophical foundations. In order to develop this line of Hilbert's thought, one must reconsider the traditional conception of Hilbert as a "finitist" and "formalist" philosopher of mathematics. According to that conception, Hilbert endorsed formalism as a thesis about the nature of mathematics and found in finitism an especially secure foundation on which to situate dubious infinitary techniques. I argue that Hilbert's actual position was more subtle:
No ordinary mathematical techniques are of questionable legitimacy, and any attempt to secure mathematics on an epistemological base is misguided, because the security inherent in mathematical methods is already greater than the security of the philosophical systems that recommend particular epistemological foundations. Hilbert's finitism and formalism were instead methodological constraints consistent with his thoroughly naturalistic conception of mathematics. He designed his foundational program not to show that mathematics is epistemologically secure, but to show why no philosophical defense of mathematics is needed to demonstrate its security.
The subject of my paper is the investigation of mathematical processes which can be acknowledged as 'perceptual', in particular visual/spatial. Such investigation is part of the broader question of which epistemology of mathematics is more suited to account for its real processes.
In the first section, I present the background of my investigation, arguing that there is a need for a conception of mathematics alternative to the standard, or 'logocentric', one typical of last century's studies. I discuss the idea that mathematics is a human activity and a cognitive process, and thus that it is similar to other sciences: it proceeds by testing hypotheses and is fallible. Mathematics deals with problem solving, and to this aim it makes use of different formats as tools to display the given information. There is a pluralism and gradualism in the forms of representation used in mathematics: each of these formats is a legitimate tool and is cognitively and computationally relevant.
In the second section, I take into account results from cognitive science to show that an interdisciplinary investigation into mathematical diagrams is possible. Psychology provides evidence for (i) the origin of mathematics in perception and the usefulness of perception in solving problems; (ii) the existence of regularities and grouping laws in our mathematical perception that allow us to see configurations in diagrams. Nevertheless, mathematical diagrams are used specifically to solve mathematical problems: the investigation into mathematical perception has to relate also to more theoretical assumptions based on background knowledge. Mathematical diagrams are both icons, when they are considered only in their depictive and 'literal' sense, looking at the spatial and topological relations they display, and symbols, when they are interpreted as meaningful configurations, at a semantic level. The perceptual element is thus a necessary but not sufficient condition for diagrams to display mathematical information.
The motivation for a study of this kind is twofold: (i) my intention to put forward arguments in favour of a renewed epistemology of mathematics, showing that it is characterised by the use of different representations to display information; (ii) my wish to evaluate whether philosophy can assume a leading role in managing interdisciplinary epistemological investigations.
Paying attention to mathematical practice is easier said than done. Mathematical practice must become the matter of some determinate academic discipline before it can yield any philosophical fruit. Otherwise we will not move beyond anecdotes and the uncritical reproduction of mathematicians' folklore. But which discipline? Formal Logic, History, Sociology and (why not?) Evolutionary Psychology are all candidates (there are others). In this talk I will argue that the obvious answer--all of the above--is unworkable because at least some of these candidates (History and Logic) have incompatible presuppositions. I shall then sketch some practical consequences for the project of studying the epistemology of mathematics through mathematical practice.
A key focus of mathematical didactics is the area of mathematical learning. This requires a special view of mathematics and its internal structure. The presentation will show that mathematics cannot be seen as an objective body of knowledge, legitimized by proofs, when approached from the perspective of mathematical learning. Instead, the subject of mathematics has to be interpreted as a discursive discipline, developed by human beings within their social and cultural environment.
A logical system, called Projectively Modal Ontology, is described. The system unites the formal language of Stanisław Leśniewski's Ontology with ideas from Platonic philosophy. Many mathematical objects (e.g. variables, functions, vectors) have a basic semantics in which principles can be restricted by conditions to form aspects of those principles (the relation of a principle to its aspects is like the relation of a three-dimensional body to its two-dimensional projections). Projectively Modal Ontology presents this semantics in a formal way. The language, axioms, definitions and theorems of the system are considered in detail, so that a more Platonic version of the foundations of mathematics (with no sets but "unities" as primitives) can be presented.
Despite its abstract nature, mathematics is firmly rooted in our lifeworld. This is not only true in the obvious sense that the historical development of mathematics and that mathematical education start with the lifeworld, but -- as I want to show in this paper -- also in the much deeper sense that even the most abstract concepts of mathematics correspond with concrete and pictorial lifeworld notions. This correspondence is vital for our creative understanding in mathematics, and it is documented by the fact that mathematical understanding aims at making theorems obvious or even trivial.
On the one hand, Ludwig Wittgenstein holds that a mathematical proof constitutes the meaning of the sentence proven. This has such counterintuitive consequences as 1) that an undecided mathematical sentence has no meaning and is not understood, 2) that by creating a proof you change the meaning of the proven sentence, i.e., when you have finished the proof you have proven another sentence than the one you started with, 3) that no unambiguous sentence can be proven by different proofs. On the other hand, Wittgenstein explicitly denies some of these counterintuitive consequences, especially 1) and 3). How can this apparent inconsistency be explained? I think an interpretation of Wittgenstein's ideas which explains away this apparent inconsistency also has the merit of shedding some light on the debate between Quinn/Jaffe and Thurston about the role and importance of mathematical proof.
David Hilbert's (1862-1943) formalization of geometry around the turn of the last century led to a battle among mathematicians over whether intuition can be dispensed with in mathematics. Among others, Felix Klein (1849-1925) believed that mathematics cannot be treated exhaustively through logical deduction and considered it necessary to use intuition in mathematics. He emphasized the importance of interaction between intuition and the results attained through the axioms. I will investigate Klein's thoughts and ideas about intuition and the distinction he makes between what he calls naïve and refined intuition. Further, I will discuss Klein's belief that the axioms are not truths a priori, but the result of the idealization of our inexact intuition.
Category theory (CT) is progressively replacing (or complementing) set theory (ST) as the mathematical lingua franca in which mathematicians working in various areas present their results and communicate them to each other. This suggests viewing CT as a "new foundation" of mathematics, alternative to standard ST foundations like ZFC. A list of first-order axioms for CT can indeed be written down (as in the case of ST), and a foundation of mathematics as a (or even "the") category of categories can reasonably be argued for (see Lawvere F.W., 1966, "The Category of Categories as a Foundation of Mathematics" in: La Jolla Conference on Categorical Algebra, Springer). However, in my view, this approach doesn't entirely capture what is potentially involved in the conceptual change "from sets to categories". It seems to me more pertinent to rethink the standard notion of a foundation (relevant to mathematics), and in particular the idea of Hilbert-style axiomatics, from the new categorical perspective.
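For illustration, a fragment of such a first-order axiomatization (a sketch in the spirit of an elementary theory of categories, not Lawvere's exact axiom list) might read:

```latex
% A category: arrows with domain and codomain assignments and a
% partial composition operation.
\forall f\,\forall g\;\bigl(\mathrm{cod}(f) = \mathrm{dom}(g)
    \;\rightarrow\; \exists!\, h \;\; h = g \circ f\bigr)
% Composition is associative wherever it is defined:
h \circ (g \circ f) \;=\; (h \circ g) \circ f
% Every object A carries an identity arrow 1_A:
1_A \circ f = f, \qquad g \circ 1_A = g
```

That such axioms can be written down at all is what licenses treating CT, formally, on a par with ZFC; the abstract's point is that this formal parity does not exhaust the conceptual change.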
In my talk I shall argue that CT should be seen as a powerful means of mutual interpretation of mathematical concepts and theories, allowing for the integration of mathematics into a connected whole rather than a foundational core to which mathematical theories are ultimately reduced, and that CT provides an epistemic integration rather than foundations of mathematics in the usual sense. I shall show that Hilbert-style axiomatics is a special ("centralized") case of categorical integration. I shall also argue that the claim that CT supports the structuralist view of mathematics cannot be accepted without a profound revision of that view. Finally, I shall show how the old problem of the relationship between mathematics and logic looks from the categorical perspective.
This paper consists of a historical and a philosophical part. In the first I will clarify a common misconception about the development of group theory; in the second I will draw some philosophical conclusions based on these findings.
In 1870 Jordan proved that the composition factors of two composition series of a group are the same. Almost 20 years later Hölder (1889) was able to extend this result by showing that the factor groups, which are quotient groups corresponding to the composition factors, are isomorphic. This result, nowadays called the Jordan-Hölder Theorem, is one of the fundamental theorems in the theory of groups.
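In modern notation the theorem can be stated as follows:

```latex
% Jordan-Hölder: given two composition series of a group G,
%   G = G_0 \triangleright G_1 \triangleright \cdots \triangleright G_r = \{e\},
%   G = H_0 \triangleright H_1 \triangleright \cdots \triangleright H_s = \{e\},
% the lengths agree, r = s, and for some permutation \sigma the
% composition factors are pairwise isomorphic:
G_i / G_{i+1} \;\cong\; H_{\sigma(i)} / H_{\sigma(i)+1}
\qquad (0 \le i < r).
```

A small example: the cyclic group of order 6 has the two composition series with subgroups of orders 6, 3, 1 and 6, 2, 1; both yield the factors Z/2 and Z/3, merely in a different order.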
The fact that Jordan, who was working in the framework of substitution groups, was able to prove only a part of the Jordan-Hölder Theorem is often used to emphasize the importance and even the necessity of the abstract conception of groups, which was employed by Hölder (see, for example, Wussing 1984, van der Waerden 1985, Nicholson 1993, and Corry 1996).
However, as a little-known paper from 1873 reveals, Jordan had all the necessary ingredients to prove the Jordan-Hölder Theorem at his disposal (namely, composition series, quotient groups, and isomorphisms), despite the fact that he was considering only substitution groups and that he did not have an abstract conception of groups. Thus, I argue that the answer to the question posed in the title is "Yes".
I suggest two possible reasons why this observation has been overlooked by most commentators on the development of group theory: First, Jordan's paper received only scant attention, and reviewers did not even mention his use of quotient groups (e.g., Netto 1875). Second, at first sight Hölder's own proof indeed appears to rely essentially on the use of abstract groups. Nevertheless, there are later proofs of the Jordan-Hölder Theorem (e.g., in Weber 1896) which make use only of methods and conceptions that were available to Jordan in 1873.
Thus, I conclude that it was not the lack of the abstract notion of groups which prevented Jordan from proving the Jordan-Hölder Theorem, but the fact that he did not ask the right research questions that would have led him to this result. In other words, the historical episode discussed above shows that mathematical progress depends in part on considerations that go beyond purely mathematical ones.
The spectacular success of string theory in geometry has divided the mathematical community. Many heuristic proofs advanced by string theorists could be made rigorous by geometers. But there were failures as well. Apart from sociological concerns, among them who should be credited for a result, mathematicians wondered about the ontological status of such non-rigorous results. Were they more than simple conjectures? The eminent mathematicians Arthur Jaffe and Frank Quinn have proposed to consider them as "theoretical mathematics", because theoretical results still require independent corroboration. In my contribution I discuss ways to obtain a suitable ontology for theoretical mathematics.
A natural first candidate is Lakatos's quasi-empiricist ontology of mathematics. On this account, heuristics and conjectures are the driving force of mathematical progress. Proofs themselves are constantly refined thought experiments. But considering theoretical mathematics as informal ancestry fails to account for the axiomatic character of the concepts relevant for today's mathematical physics, including string theory. Axiomatics, to Lakatos's mind, is merely justificationist, if not dogmatic.
In this respect, John von Neumann's opportunistic axiomatics performs better. Insisting that the best inspirations of mathematics stem from the physical sciences, he proposed a set of pragmatic criteria of success, some of which are distinctive for mathematics. In virtue of its great reliability and its ability to rigorously define the limits of a given concept, mathematics occupies a special position among the sciences. Freedom in concept formation is counterbalanced by the interaction with the empirical sciences, which is necessary to prevent aestheticism. Within von Neumann's pragmatist ontology, axiomatization is thus both exploratory and justificationist. One may always hope that opportunistic strategies can be given a sound mathematical meaning. At this point, von Neumann echoed Hilbert's optimism about the applicability of mathematics in the sciences.
As recent work of Mark Wilson has shown, it is quite difficult to disprove mathematical optimism even in the domain of applied mathematics. Combining this perspective with the earlier approaches teaches us that the problem of the ontology of theoretical mathematics is not one-sided. Setting up mathematical structures is, on pain of aestheticism, not fully independent of how these structures are applied to real-world problems. Mathematicians might thus constantly oscillate between an opportunistic stance, driven by the quest for application, and an optimistic stance, in which one tries to give mathematical meaning to successful application strategies.
Is it possible to explain to beginners what formal logic is really about while teaching it? Can it be taught in such a way that nothing that has been sold before as a law of logic has to be renounced when it comes to non-classical logics? The traditional approach starts from inferences in natural languages and arrives, by step-by-step abstraction, at "formalization". I tried to do it the other way around, starting with uninterpreted formal semantics (though not uninterpreted calculi) as games, alongside and independently of informal reasoning, and linking the two later on. This can be seen as a way of making Hilbert's basic idea fruitful for teaching logic by transferring it to formal semantics. Does it work? Well enough to have resulted in a logic book that follows the new method (Niko Strobach, Einführung in die Logik, Darmstadt: WBG 2005) and to provide me with interesting classroom experiences to discuss.
Most mathematicians appear to uphold a hybrid conception of mathematical truth. On the one hand, they entertain a consensual notion of it, whereby what is true is what is accepted by the community. On the other hand, as this seems to open the door to relativism, they also defend its objectivity, in terms of correspondence (realism) or coherence (formalism). This hybridity might be framed in Peircean terms, with discursive or social practices not diminishing but enhancing objectivity, which is only ever asymptotically reached. In any case, the community model of mathematical practice is of high practical value, since mathematicians are at least as interested in whether provisional results can be relied on as in their ultimate verity. Consequently, trustworthiness is a very important issue to mathematicians. However, there seems recently to have been a dramatic increase in the essential 'informality' of mathematical results, provisional or definitive, with a rising number of extremely long, complicated, digital, specialized, experimental or otherwise elusive proofs putting to the test the limits of humans' mechanical or intuitive mathematical powers. Might we be at the dawn of a new crisis and/or revolution in the philosophy of mathematics? In this paper, some of the empirical material that is possibly relevant to effectively coping with this question is presented.