All 20 "Rules for History of Philosophy"


A while ago I had the idea to suggest some guidelines encapsulating what I see as good practice in studying the history of philosophy. With any luck, these rules are exemplified, not routinely violated, by the podcast itself. These are not really “rules” of course, only suggestions of best practice based on my own limited experience. I would love to hear other ideas and have further discussion here on the website.

The “rules” were posted on an extremely irregular basis over a couple of years, and I eventually got up to 20. So that people don’t have to comb back through the blog, here is the entire list in one place.

Rule 1: It's possible for the same idea to appear independently more than once

It strikes me that a common error in history of philosophy is to see that two figures/traditions have put forward the same idea, and immediately infer a historical connection. For instance: atomism or monism emerging in both ancient Greek and classical Indian philosophy. Or: al-Ghazali's and Hume's discussions of causation. Yes, the similarities are striking, and there might be a historical connection, but the similarity does nothing in and of itself to show that there is such a connection. Rather it only raises the question of whether there was influence one way or the other. Often, the simplest explanation is just that people thinking about a certain topic will naturally tend towards a certain, limited range of positions (like, either bodies can be infinitely divided, or not - and in the latter case one is an atomist).

Rule 2: Respect the text

This is my version of what is sometimes called the "principle of charity." A minimal version of this rule is that we should assume, in the absence of fairly strong reasons for doubt, that the philosophical texts we are reading make sense. This holds not only for outstanding famous thinkers but also for lesser lights: even if they were not earth-shattering innovators, they usually didn't just write rubbish. Here it's also worth bearing in mind that until very recently (the last century) any text that has survived to get into your hands has already been through a process of selection by earlier readers. So they are likely to be reasonably good. But even without this observation, it still seems obvious (to me at least) that useful history of philosophy doesn't involve looking for inconsistencies and mistakes, but rather trying one's best to get a coherent and interesting line of argument out of the text. This is, of course, not to say that historical figures never contradicted themselves, made errors, and the like, but our interpretations should seek to avoid imputing such slips to them unless we have tried hard and failed to find a way of resolving the apparent slip.

Rule 3: Suspect the text

As I've frequently emphasized on the podcast, texts often have a long and complicated history of transmission. A work by, say, Aristotle was first written down well over two millennia ago; it's not unlikely that even the very first copy/copies had mistakes, given that it would presumably have been dictated to a scribe. To reach us, it then had to be copied by hand many, many times, with the earliest surviving copies being copies of copies of copies... and those earliest surviving copies come from the Byzantine period, many centuries after Aristotle. Of course things aren't quite so daunting with more recent works, but certainly anything produced before the invention of printing involves copying by hand, and there are philological issues to contend with even in the case of early printed works. This means that, if you are really getting into the nitty gritty of a pre-modern philosophical text, you need to beware of the existence of many variants in the text, which could radically alter the meaning. Scribes made mistakes, incorporated glosses into the main text, and made their own emendations to fix problems they found in their copies (these scribes were not stupid, by the way: their emendations may well be right!). And that isn't even taking into account the possibility of outright tampering. The podcast ran afoul of this when I emphasized the salacious story about Avicenna's unrestrained sexual appetite while dying of colic. I subsequently became aware of a recent article showing persuasively that this was a later, hostile addition to the biography of Avicenna written by one of his students. (See the comments on the relevant episode.) The upshot is that historians of philosophy need to be philologists too, insofar as they can manage it, and to take seriously the work of scholars working on textual transmission, or even collaborate with them.

Rule 4: Respect the context

Podcast listeners will know that I put a lot of emphasis on the wider historical context within which philosophy was produced. To some extent it should be obvious how necessary this is: how can we understand, say, Plato and Aristotle's political philosophy without knowing something about the political situation of Athens in their day, or understand Hobbes without knowing about English history? But historical context can be relevant in more surprising ways; my favorite example of this is the parallel between early Islamic debates over the eternity of the universe and the contemporary debate over the eternity or createdness of the Koran. (Actually, though I've drawn this comparison in many places including the podcast, I don't know that anyone agrees with me about it, but I still think it's right.)

There are at least two worries we might have here. First, that history of philosophy is turned into something that is more history than philosophy. Sometimes people speak dismissively of the "history of ideas," in which philosophical theories are nothing but reflections of other historical events. But I strongly feel that history of philosophy is both a kind of history and a kind of philosophy. Understanding the historical context will help us understand philosophical arguments, but going through and evaluating those arguments is still a philosophical enterprise.

Second, that this rule makes it nearly impossible to do the history of philosophy. Are we really supposed to become experts, not only on all these philosophers, but also on the whole context they lived in, taking into account everything from political events to social circumstances, economic factors, etc? My answer would be, basically, yes. There is no point at which you can say, "ok, I've learned enough about the historical context, nothing I learn further will help or be relevant." In principle, it is always worth looking at the context more carefully, no matter how well you understand it. The limits are imposed by what we can manage in terms of time and expertise. Like some of the other rules I'm proposing, this rule is intended as an open-ended encouragement to strive for an ideal which is not practically reachable.

Rule 5: Take "minor" figures seriously

I suppose no one is going to be surprised by this one, given the "without any gaps" slogan. One of the main points I'm trying to make with this podcast is that, if you want to understand the history of philosophy, you can't just hop from one great thinker to another, leaving out everything that happened in between. Of course the famous names are those who drew us all into the subject in the first place: I am not alone in having caught the philosophy bug by being exposed to Plato. But even if all you want to do is understand the famous figures, you have to remember that they are responding to less famous figures who came right before them or who were their contemporaries. We've seen plenty of examples in the podcast so far. Furthermore, as we've also seen, the so-called "minor" figures have made significant contributions themselves.

Previously ignored authors are routinely "discovered" in scholarship and pushed into the front rank. In the series on medieval philosophy, for instance, we look at John Buridan who was previously relatively obscure but has gotten a lot of attention in recent secondary literature. Another point to consider here is that all the figures who leap to mind when you think of "great philosophers" have been men. So ignoring the "minor" figures means leaving out the contributions made by women authors throughout the history of philosophy. Historically, attitudes towards women have almost guaranteed that they would be evaluated as less important than their male counterparts. While some, for instance Wollstonecraft, are now taken seriously as major thinkers, we have a long way to go in terms of rescuing women thinkers from undeserved obscurity.

None of this is to say that it is illegitimate for a historian to spend much of their time reading, say, Plato, or Descartes. These are complex, deep and rewarding thinkers who seem to be almost inexhaustible in rewarding our attention. But as a discipline, the history of philosophy would benefit if more effort were devoted to the B team.

Rule 6: Learn some dates

This one may seem obvious, but I mention it because it does really come in handy. As I've had occasion to mention on the podcast, I don't have a very good head for dates, so I try to remember some specific ones as landmarks -- like the death dates of significant philosophers. Then you can at least get a vague idea of when another philosopher was active by knowing whether they came earlier or later than that landmark figure. (A good one to memorize is Socrates' death date of 399 BC, because you can work backward to know when the Presocratics were and forward for Plato and Aristotle.) It's also a good idea to learn some other non-philosophical dates, to help with knowing what the context of the philosophers' work was (see rule 4).

While I'm on this subject, don't forget the timelines here on the website (in the menu above) which give you the dates for all the philosophers I've mentioned on the podcasts thus far, with links to the episodes where they are covered.

Rule 7: Ask yourself why they care

This and the next couple of rules are going to be about avoiding anachronism. That seems obvious enough, but anachronism is surprisingly hard to avoid in the history of philosophy, so I thought I would break the issue down into several aspects. This first one is, I think, often overlooked. Instead of assuming that the historical figures we study are motivated by the same philosophical worries that concern us, we need to understand why they care about each issue they raise. Often it will be because of something in the historical context (again, see rule 4), a view held by a predecessor, or something else in their own philosophical system. Seeing what led them to a particular argument or discussion will help us understand that argument or discussion.

My favorite example here is the medieval debates over the eternity of the world. We might even be tempted to dismiss the whole debate as uninteresting, since modern physics has rendered it obsolete. But if we dig into the motivation for the debate (as I tried to do in, for instance, episodes 144, 161, and 252) we see that the eternity debate was not only about eternity. It was about God's relationship to the world, and more abstractly, about how to understand the concepts of necessity and causation.

Rule 8: Read the whole text

I shouldn't even have to say this! But I do, because in fact it's very common to take individual passages or arguments or claims out of their textual context. Perhaps the best example of all is something I mention in podcast episode 205, on Anselm's ontological argument: the argument fits onto a page or two and is nearly always read by itself without going through the rest of the work in which it appears (his Proslogion). In fact that argument is only the first step in a lengthy attempt to grasp God, and it's impossible to understand correctly what Anselm is up to unless you read the whole book. That's an extreme example, but it's not atypical, I think.

Of course, there may be more or less free-standing bits of text that don't need to be read along with the rest of the work in which they appear - some philosophers write aphoristically, for instance. But even in that sort of case, we shouldn't just assume that (say) a collection of aphorisms and short texts by Nietzsche has been put together with no thought regarding structure or thematic arc. At the other end of the spectrum, it amazes me that people often read bits of Platonic dialogues as if they could be understood in isolation - even though it's patently obvious that Plato put immense effort into the unity and structure of each dialogue. (In fact he even talks about the organic unity of a good speech in the Phaedrus.) 

Now, it's not always easy or even possible to read works as a whole. There are texts that are preserved only as fragments, as with the Presocratics and early Stoics; similar problems arise with, say, anonymous glosses or notes in medieval manuscripts, where we can't be sure what (if any) other material was written by the same annotator. Then there are massive works where reading the whole thing is a major commitment; I wouldn't tell someone it is worthless to read one book of the Republic unless they are also going to be reading the other nine in the near future. As with the other rules, this is therefore more an ideal to shoot for. Whenever possible, consider textual evidence in light of the rest of the work, for instance by considering what the author may have been trying to do in the work as a whole, and what function this particular part of the text plays in that whole.

This is incidentally another way to avoid anachronism. By being more attentive to the goals and project of the whole work, we are less likely to jump to conclusions about one isolated passage and to import anachronistic philosophical concerns into that passage.

Rule 9: Learn the terminology

Another obvious one, perhaps, but also worth mentioning. Not all philosophers develop their own technical or semi-technical vocabulary, but many do. (Sometimes even those who officially make a big deal out of not worrying about terminology, like Plato.) When reading any philosopher, you need to know which words have a technical meaning and what they mean – this obviously requires knowing at least enough of the primary language to track the terms in question. (I actually considered having a more general rule to the effect of “learn the primary language,” but I worry that this could be discouraging: please do read Plato, even if you can’t read Greek! Still, it really does go without saying that there is a significant sense in which you can’t in fact read Plato if you can’t read Greek.)

This is another rule that has to do with avoiding anachronism. The more we know about a philosopher’s language, including not only the way terms were generally used at his or her time but also the way that this philosopher in particular uses terms, the less likely we are to import our own assumptions about what these terms must mean. There are many examples where scholars have pointed out that interpreters have mistakenly been taking a given word to mean what we today would mean by it, whereas actually it meant something different – one that comes to mind is “cause” in Aristotle. The best way to guard against such mistakes is to track the use of a word across the philosopher’s works, using context (both in these works and in other works of the time) to get a better grip on exactly what the word means.

Rule 10: Silence is not louder than words

One of the most tempting things to do when you are reading a philosophical text is to assume that, if the philosopher you're reading hasn't mentioned something you would expect to be mentioned, then they have omitted it on purpose. Allusions to predecessors, suppressed premises, allusions to historical/religious context, etc. all may cause what seem to be loud silences when they are absent. And definitely, being alive to the possibility that a philosopher is purposefully not saying something should be in every historian's toolkit. But it's a tool to be used with great caution. What we might expect a philosopher to say is going to depend to a great extent on what our own interests and philosophical worldview look like, so breaking this rule can be another source of anachronism. The same goes for our (inevitably very partial) understanding of the philosopher's intellectual and historical context. How, we might think, can a philosopher not mention such-and-such a historical event that we think of as really crucial... they must be avoiding mention of it on purpose! At its worst, this sort of reading allows us to project our own concerns onto the text with wild abandon.

It's an interesting question what exactly you need to have as evidence before arguing from silences in this way; one kind of license might be a philosopher who actually tells you that some points are deliberately being suppressed (as Maimonides does in his famous preface to the Guide). But generally, I think one should always err on the side of working out the philosopher's priorities and ideas from what they do say rather than what they don't.

Rule 11: Think critically

With all these worries about avoiding anachronism, you may have gotten the impression that I am only worried about "getting the text right," and in fact I do think that is a first step in dealing with any historical source material. However, just as I said above that history of philosophy is a kind of history, it is also a kind of philosophy! Philosophy comes in not only when you are reconstructing the position (because you need to make sense of the ideas in the text "from the inside" which is a philosophical task) but also in assessing the arguments you've read and, hopefully, now understood in all their complexity, historical context, etc. 

Here I like something that we used to emphasize a lot when I taught in England (it's much less emphasized in Germany, I find): that from the very beginning of their education in philosophy, students should not just summarize and present a text in their essays or discussion in class, but also say what they think about it, consider possible counter-arguments, etc. Of course this often meant that students were being asked critically to assess arguments and ideas they weren't yet in a position to understand fully, but it's nonetheless a good approach because it trains students to think critically about what they are reading.

The most obvious reason to do this is that we are in the end interested in whether any of these philosophical views are true! But even if your motivation is strictly historical, you will often need to think critically about a given philosopher's ideas to understand why later philosophers (or even the same philosopher, after re-thinking) rejected, or carried forward, those ideas in certain ways.

Rule 12: Think about the audience

All good writers, teachers and speakers know to bear in mind the audience they are writing for; think about what will interest them, what their concerns may be, what they already know and what they want to learn. Obviously not all philosophers have been good writers, teachers and speakers, and some philosophical texts seem to have been written with no particular audience in mind (or even with an “audience be damned!” attitude). But usually, texts are written with at least some conception of the readership. This can be an important guide to interpretation. It is vital to know, for instance, what a philosopher could take for granted in terms of background knowledge in their intended audience, or which other texts the audience will be likely to know. Just to give a specific example, it is almost impossible to overestimate the presence in an ancient Greek’s mind of Homer and Hesiod, or the Bible in a medieval reader’s mind – these texts could be brought to mind even with single words or vague allusions, much as we can bring to mind TV shows or movies with a single word or phrase. Likewise it may be useful to bear in mind, say, that an audience member is likely to have primarily theological concerns in mind, or primarily political ones, in thinking about how the author has framed arguments aimed at that audience: what does such-and-such an argument indirectly imply about the Trinity, or about the legitimacy of monarchy?

The result is that the historian of philosophy needs to know as much as possible about who the audience for a text was likely to be, and as much as possible about what that audience would have read, known, and thought. This is obviously related to knowing about the historical context more generally, but it is a more specific and to some extent more challenging task, one even impossible to carry out fully, since for most periods we have little hope of stepping entirely into the shoes of the audience members. Again, it’s more of an ideal to shoot for.

Rule 13: Take metaphors seriously

The history of philosophy is full of metaphors, analogies and similes - from Plato's cave to Neoplatonic "emanation" to Rawls' "veil of ignorance". In general, I am a big supporter of taking seriously the "literary" features of philosophical texts, like structure or characterization and dramatic setting. Metaphors are a particularly interesting case, though, because one needs to decide how exactly to apply the metaphor. Obviously many philosophical metaphors have been parsed and analyzed in great detail - no one would say that Plato's cave has received insufficient attention. Nonetheless I think there are a lot of such metaphors that bear further thought. For instance I once wrote a paper on the widespread ancient tendency to compare the following things to one another: the individual (or the soul); the household; the city; and the cosmos. It turns out to be useful to dwell on exactly how such metaphors are cashed out, and what effect they have on philosophers' ways of thinking. That comparison between a city or other society and the cosmos tends to push them in the direction of arguing for monarchy, because the cosmos is likewise for them ruled by a single divine principle. (Or was it the other way around, that they draw the analogy in the first place as a way of justifying their political preconceptions?) And then there is the issue of what, if any, argumentative weight a metaphor has - should we be more persuaded by a philosophical view just because it is illuminated by a rhetorically powerful metaphor? Again, think of Plato's cave and how much less resonance it would have if he had just said something like, "I think people in everyday life are paying attention to images of reality, instead of true reality," without giving the metaphor. Yet that resonance itself doesn't make the philosophical position more convincing... or does it? Again, it's just one example of what could be a more general rule, which is to pay heed to literary features of texts.

Rule 14: Take religion seriously

Ok, this one might be controversial. But it was much on my mind when I wrote the episodes on medieval philosophy, where religion is woven into pretty much every text I was looking through. As I wrote the scripts I thought a lot about how to present the material in a way that will be interesting and seem relevant to listeners who don't care much about religion, without misrepresenting the material or for that matter letting down listeners who do have an interest in the religious side of things. At any rate, it seems to me important to remember that the vast majority of figures available for us to study in the history of philosophy have been religious believers. This goes not only for the obvious cases like the medieval Latin Christians but also for pagan thinkers of antiquity. We can assume that nearly all of the people I covered in episodes 1-100 (i.e. before arriving at ancient Christianity) were practicing pagans, and that includes household names like Socrates, Plato and Aristotle. They may have had culturally unusual interpretations of the religion of their day but they were in some sense themselves religious (by which I mean that they believed in divine entities and presumably engaged in cultic practices) and more importantly for us, they felt the need to engage with religion in their works. So religious issues are (to differing degrees, but almost always to some degree) woven into the very fabric of the philosophical works we are reading from antiquity, and this also goes of course for the medieval period in various cultures, and for early modernity. As we’ve seen in the series on Indian philosophy, religious issues also played an important role there. Nowadays most professional philosophers in Europe and the US seem to be atheists, as far as I can tell, but that is a very recent development, even if one can point to occasional atheists in earlier periods (Hume is a favorite example).

What does this mean for the historian? In the first place that we need to learn about religious context just like the other aspects of historical context. No surprise there. But it also means something more challenging, which is that one needs to take an objective and open-minded attitude towards the philosophers' religious beliefs. Insofar as we are historians of philosophy, our goal should not be to take inspiration for our own religious faith if we have it, or to find the mistakes made by great religious authorities for the sake of reinforcing our own lack of faith if we don't have it. Rather it should be to understand how the religious views interacted with and influenced the philosophical views – for instance, how Augustinian ideas about grace affected views on free will. Actually I would go further and say that we shouldn't even worry which aspects of a thinker's worldview are "religious" as opposed to "philosophical." Much of the time this dividing line is going to be blurry or even non-existent, and there is no reason to get anxious about it. I know from comments I've seen here on the website that some listeners think that philosophy is antithetical to religion; whatever merit that may have as a philosophical position nowadays, it is not a good place to start from in doing history of philosophy.

This is not to say, of course, that one's own beliefs remain irrelevant. If you are an atheist you had better be ready to say what is wrong with Anselm's ontological argument for the existence of God, once you have done your best to understand it! It's just that, as I've argued before in this series of rules, the first and very challenging step is to understand the texts you are reading, and taking religion seriously is part of that.

Rule 15: Be broadminded about what counts as “philosophy”

This is in a way a generalization of the previous rule to take religion seriously. The point I want to make with this rule has as its obvious starting point the frequent observation that, until very recently (like, only the last couple of centuries) the word "philosophy" included much more than we would include today. Even during the Enlightenment, people we would call "scientists" would have referred to themselves as "natural philosophers." Of course that by itself might just mean that the word has changed meaning. But we need to remember that historical figures would have seen topics of inquiry that for us are no longer "philosophical" as being part and parcel of "philosophy"; they didn't recognize the same disciplinary boundaries that we do, so they moved very freely from topics like epistemology and metaphysics to topics like astronomy, mathematics or medicine. This is why I have devoted so much attention to "scientific" and even "pseudo-scientific" subjects in the podcast, covering things like medicine, astronomy, and astrology.

But it's not just science: historically the boundaries between philosophy on the one hand, and theology or mysticism on the other, have been quite blurry or just non-existent. I won't go into the theology point again, except to refer back to the Islamic world episodes and all the philosophy we saw being done by representatives of "kalam" (systematic theology). We also saw some philosophically interesting material in Sufis and Kabbalists, with mutual influence and re-purposing of ideas about negative theology, the soul, and so on, from philosophy to mysticism or vice-versa. Even a topic like Islamic jurisprudence turned out to have important implications in ethics and epistemology.

The moral of this story, then, is that historians shouldn't restrict their attention to texts, figures and movements that seem "philosophical" in our sense. Philosophical material is not philosophical because of where it appears, but because (to make a long story short) it is philosophically interesting.

Rule 16: Respect texts about texts

A whole genre of philosophical writing that traditionally suffers from neglect is the commentary. Actually there is a whole range of texts about other philosophical texts, which would include commentaries but also glosses, paraphrases, epitomes, and the like – I am referring to all this sort of thing, but to simplify I’ll mostly just talk about “commentaries.” A good example, and also an example where prejudice has largely been overcome now, is the massive body of philosophical commentaries on Plato, Aristotle and other philosophical works that was produced in late antiquity. Thanks to Richard Sorabji’s Ancient Commentators Project (which I worked with in London for some years) these commentaries are now mostly available in English and have been pretty well integrated into history of philosophy. There are also many commentaries in Latin medieval philosophy and in the Islamic tradition. In fact, one reason for the widespread myth that philosophy in the Islamic world ended after the 12th century or so is that thereafter, philosophy was often written in the form of glosses and commentaries, which are always in danger of not being taken seriously. (I interviewed Robert Wisnovsky about this here on the podcast.)

There are at least three reasons why we should take such texts seriously, and include them in the history of philosophy. First, they can still fulfill their original purpose of illuminating the text commented upon. Alexander of Aphrodisias was not only a superb philosopher in his own right, but also had a thorough and intimate knowledge of Aristotle’s works (plus he was a native speaker of ancient Greek!). He is thus a very useful guide to textual and philosophical problems in the source text – that doesn’t mean he’s always right in his interpretations of course, but he is pretty well always worth consulting. Admittedly not all commentators reach his standard; maybe only Averroes can compete with him as an insightful and interesting commentator on Aristotle. But the mere fact that a commentary has survived down to the present day is usually a sign that many generations of readers found it useful.

Second, commentators are themselves philosophers and say interesting and original things in the context of commentating – sometimes this happens as a kind of digression from the commentary, but you can also find fascinating material in the midst of commenting on a passage. It’s often precisely when the commentator has trouble with the source text that he or she is going to be innovative – a Platonist commenting on Aristotle, for instance – and the innovation may show itself in very subtle ways, for instance slightly but significantly different word choice as a source text is paraphrased.

Third, there is something philosophical about the commentary activity itself. What these older commentators were doing is much like what we are doing when we read historical philosophical texts today: trying to make sense of them and find what is true in them. The methods and presuppositions a commentator brings to a text can be illuminating for our own practice. For instance, do they use a “principle of charity,” trying to offer readings that will make the source text come out true or at least coherent or plausible, and if so how do they do so? As you’ll know by now I’m firmly convinced that doing history of philosophy is itself a philosophical enterprise, and we may have no texts that illustrate this point better than texts about texts from earlier time periods.

Rule 17: Focus on the primary text, not secondary literature

I often tell my students, "I would always rather you read the primary text one more time than go read a piece of secondary literature." The point of this is to encourage students to form their own impressions and analysis of a historical source, rather than just reproducing what scholars have already written about that source. This is not to say that secondary literature is useless. It would be pretty hypocritical for me to say that, given that I produce it myself! But one needs to think carefully about how to use it, and about the balance between reading the primary source and using scholarly literature. I think that it is a good rule of thumb for everyone - from beginning student to professional historian of philosophy - to focus on the primary text, and to have a clear idea what one is trying to get out of secondary literature when one does turn to it. Some uses are pretty much unproblematic, for instance:

• It may help provide historical context for the primary source, e.g. what other texts the author is responding to; often you just won't be able to get that out of the primary text (editorial notes indicating sources or parallels in other works are, of course, themselves a "secondary" intervention and not part of the primary text).
• If you want to produce new research about the primary text you obviously need to know what has already been said, so that you aren't just reinventing the wheel.
• General secondary works (like this podcast and the books based on it!) can give you a broad sense of what primary texts are out there, and which you may want to study more closely. To employ a metaphor I've used before, the podcast is akin to a travel guide, which tells you which cities and landmarks you may want to visit; but you shouldn't only read the guide book, you should go visit yourself.

The tricky part comes when secondary literature tries to help you understand the primary text, by making distinctions or observations you may not have seen yourself. Of course this is useful too; indeed it is usually the point of reading published scholarship on history of philosophy. But it is more treacherous, because having read this scholarship you run the risk of coming to the primary text without "fresh eyes" and only seeing the problems or solutions others have already found in it. Hence the point of my advice to students: when in doubt, make up your own mind first and then check to see how your understanding of the text compares to what others have said.

Rule 18 for history of philosophy: don't essentialize

In reading about Indian philosophy for the podcast I have been struck that, especially in older secondary literature, you'll come across claims like "an interest in the self is fundamental to the Indian worldview" or "non-violence is deeply rooted within the humanism of Indian culture." Such claims, made by both Indian and non-Indian scholars, are usually meant as compliments. But to my mind they are reductive and, to be frank, silly. In one case, which actually inspired me to devise this new rule, an author said that non-violence (ahimsa) was fundamental to the Indian worldview, so that the spectacular and tragic violence of mid-20th-century Indian history must have been somehow a violation or aberration of Indians' true nature! That looks suspiciously like a theory that is immune to counterevidence. One sees this with other cultures too. I've often seen - and not only in older literature - remarks that Islam is, or isn't, a "religion of peace," is "intolerant" or "tolerant," etc.

The truth is that cultures, including religious cultures, are complex and marked by internal disagreement, and they develop over time. So we should see them as historical phenomena, not as having some sort of essential character that is acquired by all the adherents of a given religion or members of a given culture.

Probably it is easier to make this point about cultures or geographical regions than religions. It seems just evidently ridiculous to suppose that the population of India has, in general, had a commitment or even tendency to any particular philosophical view or ethical maxim from the time of the Upanisads down to the current day. Lurking below the surface here is our urge to stereotype - just as Italians are emotional and Germans love discipline, so Indians are supposedly fascinated by the self and committed to non-violence.

With religion, things are trickier. I think I would have to admit that someone who is actually a Muslim might have a stake in what Islam "really is committed to," e.g. on the basis that there are correct and incorrect interpretations of the Koran and hadith. But I see no reason for a non-Muslim, or even a Muslim historian of philosophy who is writing in his or her capacity as a historian, to think in these terms. Rather the question should be, "what have actual Muslims in such-and-such a period believed about their religion?" Anyone who's dipped into the Islamic world episodes of the podcast knows that the answer to that is as varied as the thinkers that I covered, to say nothing of those I didn't.

This matters for the history of philosophy in particular because of the widespread tendency to expect that certain (especially so-called "non-western") philosophical traditions will have a distinctive, essential character - more "spiritual", more "determinist," or whatever. This is a bad approach. We are much more likely to discover tensions and disagreements within a tradition of any significant historical scope, than we are to discover some kind of enduring character that marks all thought from within that tradition. And supposing that frequently recurring ideas within a culture somehow derive from the "innate character" of that culture is lazy, and a way of avoiding the more interesting question: what historical or intellectual reasons underlie the prevalence of such ideas?

Rule 19 for history of philosophy: beware of jargon

This is, I think, good advice for all kinds of writing in the humanities but it's especially relevant for philosophers and historians of philosophy. Contemporary philosophy, both analytic and continental (to use some terminology that in itself is questionable), bristles with off-putting jargon and also technical tools ranging from logical notation to abbreviations and numbered propositions. There's certainly a place for this: philosophy is, among other things, about precision and rigor, and formal languages and jargon can be very precise. But analytic philosophers often make things unnecessarily hard on their readers by using technical symbols when normal language could say the same thing quite easily, or expecting the reader to bear in mind what lots of numbered theses stand for. (I read a book for the podcast that had so many numbered propositions in it that it needed a several-page-long appendix to list them all, forcing the reader constantly to flip back and forth between the main text and the appendix.) As for "continental" philosophy, the scandals of parody articles being accepted for peer review speak for themselves. Every time you introduce a new piece of terminology, abbreviation, or tag (like referring back to some philosophical claim as P, or 4*) you make it harder for the reader to stay with you, and an accumulation of these devices will make your text almost impossible to read. Of course their use is often justified, and what is incomprehensible for a general audience is often straightforward for a specialized audience. But the rule should be: don’t formalize, or use jargon, unless the gain in clarity, rigor etc. is worth the burden you’re placing on the reader by doing so.

There are two reasons that this point is especially relevant to history of philosophy. One is that the use of contemporary technical tools and jargon brings with it the risk of anachronism (and by now you know how I feel about anachronism). My favorite example is the use of the “backwards E” or existential quantifier (∃), for instance ∃x which would be read “there is an x.” You can readily find examples of this symbol being used in work on ancient philosophy. Of course such notation was not used then, but that isn’t the problem. The problem is that one can have a long debate about whether ancient thinkers had a notion of existence that would correspond to the use of this quantifier, where anything can be put in for “x”. I would argue that they did not. Just imagine what Aquinas would say if you insisted that God, a created substance, or a created accident must all “exist” or “be” in the same sense, because they can all be substituted for x in ∃x. Similarly, using bits of jargon from contemporary philosophy can cover up the interesting fact that earlier thinkers lacked precisely the concepts or presuppositions behind that jargon. Again, I’m not saying it is never warranted, and I myself am willing to apply a term like “compatibilist” or “physicalist” to, say, the Stoics. But you have to be very clear in your own mind what these terms mean and whether they truly apply.

The second reason is that historical texts have their own jargon (see rule 9: learn the terminology). Of course using these terms is not anachronistic, unless you apply a term from one period of history to another period. But again, it is a barrier to understanding for the reader. I hate it when people write about Aristotle and use untranslated Greek, or about Avicenna with untranslated Arabic. This is like putting a note at the top of the piece that says “if you can’t read these languages, I don’t want to talk to you.” The same goes for unexplained bits of technical language (say, using “supposit” in a discussion of medieval philosophy without explaining it). Here too of course, the rule is not absolute. You might be writing a detailed discussion of the original terminology, which inevitably presupposes that the reader knows the original language. Or you may intend to write only for other specialists in medieval thought, who are just going to be annoyed and bored if you tell them things they already know. My suggested rule of thumb though is to avoid erecting unnecessary barriers to understanding.

Rule 20 for history of philosophy: things are always more complicated than you think

For this final rule I considered several options, like “learn some geography” which is definitely a good idea (compare to rule 6 about learning some dates), or exhorting people to explore philosophy from more than one culture or more than one branch of philosophy (not just ethics, but also epistemology, etc). But eventually I decided the best piece of advice to close with is this: “things are always more complicated than you think.” In a way this sums up the core message of my so-called “rules.” Like plain old history, history of philosophy is very complicated and there is no real limit to the things you might want, or need, to know if you really want to understand how and why ideas developed. Hence my earlier pieces of advice to explore the context of historical texts, the role of lesser-known authors, and so on.

But I also suggest this last piece of advice with a view to the core activity of the historian of philosophy, which is reading philosophical texts. One thing I have learned from participating in many philosophy reading groups over the years (and not least from MM McCabe, who was for many years a colleague of mine at King’s College London) is that a good philosophical text will keep yielding insights the longer and more closely you read it. Of course you can’t, for practical reasons, just keep reading and re-reading the same page forever, even if that page was written by Plato or Kant. But one should also resist the thought “ok, I basically get the point of this text,” or “I already know what this author thinks about this topic,” and swiftly move on. Slow reading, and repeated reading, is crucial. Towards this end, it is useful to remind yourself that the text you’re looking at is more complicated than you think. Just assume you haven’t yet figured it out fully. Of course not every text rewards this kind of scrutiny; perhaps there are even some bits of Plato that aren’t this rich (if so I haven’t found them). But with any given text, as for the history of philosophy as a whole, it doesn’t hurt to assume that there is always more to discover.

Brady on 2 January 2017

This is a very sensible

This is a very sensible approach to intellectual history, thanks. Your core message - take thinkers seriously, and things are always more complicated - is great and easily neglected. But I almost feel like there should be a little bit of room for unserious history of philosophy too. (Not in the ‘popularize and have a sense of humor’ way, because you obviously do that, in a ‘willfully sloppy’ way.) Lots of good philosophers are horrible historians, and use caricatures of their opponents and predecessors as foils for their own ideas. Some artists seem to have that tendency too. Maybe, sometimes you need to simplify others to make room for your own nuance and abstraction. Although I guess that’s a dangerous path to start down!

 

The paper you mentioned in Rule 13 sounded interesting. I did some googling, and I assume you meant "State of Nature: Human and Cosmic Rulership in Ancient Philosophy". The only version I was able to find was in the German essay collection, and since you have the sole English essay in the book I'd rather not purchase it. Is there a standalone copy you could direct me to? In any case, thank you again for the great work you do with your podcasts. 

In reply to by Brady

Peter Adamson on 2 January 2017

If you email me I can send

If you email me I can send you a copy of the article (peter.adamson@lrz.uni-muenchen.de).

And as for your first point, I agree - especially your point that often great philosophers have been terrible historians! But this was after all a list about how to do it well, not how to do it badly but succeed nonetheless, as it were.

Kris McDaniel on 4 January 2017

Great post, and very helpful

Great post, and very helpful to think through. I'd consider adding two other rules (I know your list was not meant to be complete). They might be corollaries of rule 19, but might be worth considering on their own. If I may:

21. Be open to the possibility that there are elements of the text in question that cannot be translated without loss of meaning into terms being used in contemporary philosophical debates. Perhaps certain bits of technical terminology used in historical texts have at best only analogues to contemporary terms.

22.  Be open to the possibility of indeterminacy of meaning.  Sometimes an unclarity in what is meant reflects that there is no clear thing that is meant.  Consequently, one should be cautious about attributing overly determinate theses when the text itself seems to under-determine what is meant.

In reply to by Kris McDaniel

Peter Adamson on 4 January 2017

Hi Kris - I agree with both

Hi Kris - I agree with both of those. I think I meant to cover your 21 in my 19, but hadn't really thought about 22. For me, concluding that "nothing clear was meant at all" would be a last resort, cf. the principle of charity earlier in the list. However it could certainly be that a given text underdetermines our interpretation, just because it is too compressed or unclearly written. I happen to think this about De Anima 3.5 for instance, one of the most intensely discussed paragraphs in all philosophy. I assume Aristotle had a very clear idea what he wanted to say but it hasn't come across to us in a form that we can decode with any confidence.

In reply to by Peter Adamson

Kris McDaniel on 9 January 2017

Hi Peter,

Hi Peter,

Thanks for the thoughtful response.  The suggested rule is to be open to indeterminacy rather than leap towards assuming it, if only to caution against over-interpreting or reading too much into texts.   

One interesting case of indeterminacy is suggested by Hartry Field, who considers how 'mass' was used in pre-relativistic physics. Did Newton mean by 'mass' relativistic mass or proper mass? Field argues that 'mass' in this context is indeterminate with respect to these two possibilities. This is an interesting case because in some sense there was no way for the theoretician in that original context to have been clearer, and yet there is still indeterminacy in meaning. It's not clear(!) that appeals to charity have a place in cases like this.

I don't know if you are interested in further reflections on these two suggested amendments, but I discuss them in this paper here

Thanks again for the awesome work you are doing. And I love NOGAPS.

In reply to by Kris McDaniel

Peter Adamson on 10 January 2017

Thanks, that's a really

Thanks, that's a really interesting point! We used to talk about this when I was in London - MM McCabe liked to make the point that we shouldn't jump to congratulate ourselves on being able to diagnose an "ambiguity" in a given Greek word like, say, "logos" which can mean about 50 different things from our point of view, because after all to an ancient Greek it just meant logos. This is a potentially very complicated issue: what is the difference between shades of meaning (maybe 50 shades, in the case of logos?) that might even be philosophically fruitful, and ambiguity that is philosophically unhelpful? Anyway definitely something one needs to think about, though I guess it pretty much presupposes not just acquaintance but deep knowledge of the primary language.

David Vessey on 4 January 2017

Beautifully done Peter! I

Beautifully done Peter! I expect to use this for my students (and to keep my excesses in check) for years to come. I appreciate--and would stress even more--that philosophical texts are a rich source of philosophical insights; that's what makes them philosophically interesting in addition to all the other ways they can be interesting. I think reading in such a way that you are attentive to philosophical insights risks anachronism (because what one finds insightful today may have been dreary boilerplate in other contexts), but it also provides a corrective for uncharitable readings and for taking "philosophy" too narrowly.

In reply to by David Vessey

Peter Adamson on 4 January 2017

Thanks! It would be great to

Wow, a comment from the one and only Dave Vessey! Thanks - it would be great to hear what your students make of the list.

Tad Brennan on 6 January 2017

Great stuff, Peter! I agree

Great stuff, Peter! I agree with all of this, and I'm glad to have it all in one place in order to direct my students to it.

I don't think I have anything to add, except perhaps the advice never to imitate the prose-style of your chosen historical figure. Plato is the archetype of those who cannot be imitated; Aristotle of those who shouldn't be.  In this as in other ways they divide their successors into two camps (though it is possible to synthesize them, as e.g. Wittgenstein does.) 

May I point out a possible typo for you to correct?  In your #16, I could not construe "Of what these older commentators were doing is much like what we are doing when we read historical philosophical texts today," except on the assumption that you had originally written "Much of what..." and then incompletely revised it.

And I noticed a charming ambiguity in #3:  "this was a later, hostile addition to the biography of Avicenna written by one of his students"--did the student write the biography or the hostile addition?  It's the possibility of such different readings in even such a simple sentence that assures me that Google Translate will not put us out of work too soon.

Thanks again!

In reply to by Tad Brennan

Peter Adamson on 7 January 2017

Hi Tad! Glad you liked the

Hi Tad! Glad you liked the post and thanks for catching the typo, I will fix that now (but your comment here will forever record that the typo was made).

The hostile addition is a later intervention, not from the student, but I think I will leave that one so everyone can enjoy the ambiguity. As for Google Translate, I take refuge in the thought that ancient Greek and classical Arabic are likely to be among the languages Google will defer as being too difficult to cope with.

Rob Piercey on 6 January 2017

Outstanding post, Peter!

Outstanding post, Peter! Thanks for assembling these in one place.

In reply to by Rob Piercey

Peter Adamson on 7 January 2017

Hi Rob! Great to hear from

Hi Rob! Great to hear from you and thanks, I'm glad you like the "rules".
 

Martin Schonfeld on 21 January 2017

Thoughtful post and valuable

Thoughtful post and valuable rules. I concur with Kris McDaniel's comment above about adding Rules 21 and 22. That said, I’d propose Rule 23: Take Science Seriously.

Convention discourages retrospective projections on the history of philosophy. But sometimes anachronistic readings are a good thing, especially in texts in natural philosophy and metaphysics. Science has settled questions previously assumed to be perennial, and for an informed reading of the classics such findings matter. Consider the question of nonhuman minds (e.g. Descartes vs. Leibniz) or the question of material self-organization (e.g. early Kant). All too often metaphysical texts are treated as cultural artefacts devoid of truth content, and for the sake of objectivity scholars assume a skeptical or agnostic stance. I think this does philosophy a disservice, since it creates the illusion that there are open questions when in fact there are not. 

Shelby on 24 July 2017

I appreciate these points and find many of them to be crucial, though I think there are some things that merit mentioning. It is important first and foremost to remember that one is not a philosopher but a historian; it might seem tempting to try and answer some of the perennial questions raised in philosophy, but anyone in their right mind should avoid it. It doesn't matter if you have a degree from Harvard and a glowing track record: if for centuries great minds could not achieve some consensus on these topics, you won't be able to provide a magic answer to them either. In fact, it is important to remember that you're not as smart as you like to think, and years down the line, sooner or later, people will likely have forgotten your work, if not ridiculed it. Your work should be devoid of self-interest, and to that end it must be rigorous. I'll add a practical comment to that effect: if you're writing a paper, come away from it after some time and then re-read it critically. Approach it the way you might approach the work of a rival; actively attempt to prove its thesis wrong, its author a fool. If your work survives this, then send it to a colleague and ask them to do the same; if it survives, only then submit it. It's important also never to take a 'moralistic' approach to the objects of your study; there may be many things found in classic texts that will no doubt disgust you. However, one must not let this cloud one's judgement. Always remember that in a century or so (maybe more, who knows?) your own lifestyle may be viewed as equally deplorable; in fact one should bear in mind the deplorable things one has done and how one would not like these things to be the judgement of one's character. Last but not least, always pay attention to language; remember how language plays a crucial role in our social construction of reality. No translation is neutral, so be careful how you describe a set of practices, beliefs, doctrines or modalities.
If it is difficult to do this, it is necessary to make explicit one's inability to be genuinely objective for these reasons. While academia values objectivity, it is in fact impossible and you should make this explicit; however, always try to be fair, distanced and detached. I should also mention that it is a waste to study something you have no attachment to: don't study Islamic philosophy if you have little to no personal interest in Islam or the Muslim world, though this should rightly be self-evident.

P.S. If you happen to work or study at KCL, never, I repeat never, take the AKC course; its only use is as a badge of self-pride. Though, do not take my word for it, see for yourself.

In reply to by Shelby

Peter Adamson on 25 July 2017

Well, I can't say I agree

Well, I can't say I agree with all that. Most obviously, I have spent my career studying (among other things) philosophy in the Islamic world, even though I don't have anything personally invested in Islam as a religion. I mean, I guess I have a "personal interest" in it in some sense but basically it is just the historical thing I got interested in studying.

Also, as the rules state, I think that the history/philosophy dichotomy is a false one: to do good history of philosophy you have to think philosophically as well as historically. That doesn't guarantee that your work will be remembered of course! You're probably pretty much right about that.

And as for the AKC, I taught on it once! That was a while ago when my main job was at King's. I hope my part at least wasn't a waste of time.

A. on 26 July 2017

Rule 14 is interesting,

Rule 14 is interesting, particularly your attempt at defining religiosity by the belief in divine authority and cultic practices. American nationalism relies in many ways on the belief in divinity, the abstract power and might of the nation; it also has its ritual behaviours, singing anthems or saluting the flag, its holidays, Thanksgiving and Independence Day. In some ways, it even has its own authoritative text to which adherents relate, the constitution, laid down by a group of prophetic founding fathers, whose narratives are particularly important points of debate. Yet these beliefs and practices are characterised as distinctly secular. A question that can be asked is: is the distinction between religious and secular simply a contemporary phenomenon? Are we simply projecting a set of concepts onto the past? Maybe it's this sort of question that a historian of philosophy should be concerned with.

Often histories of philosophy resemble organised hagiographies, with a set of chapters each devoted to one particular thinker (or movement) in chronological order, with a brief summary of their work, life, influences and legacy. But maybe historians of philosophy should be equally concerned with specific embedded concepts that mould the intellectual world in which philosophers worked, and the ways in which doing philosophy involved the creation and sustenance of a set of institutions that shape the lived experience of people; which in turn shaped philosophical ideas. For example, how did faith transform from a virtue, 'I am faithful to my friend' or 'I am faithful to my wife', to a particular mode of interpreting the world that exists in relation to others, such as science or reason, from which it is separate and distinct? How did this latter idea come to be? Can we trace its history throughout the ages in the work of various thinkers? How do these concepts shape the way we think and do philosophy today?

I'm sorry if I'm making faulty assumptions; for all I know scholars in this field do deal with such issues, as I really don't know much about the history of philosophy as a field, save for what I've observed on this podcast. In that case I apologise. On a side note, I'm not sure if you'll answer this, but I'd like to ask what kind of advice you would give to a prospective undergrad with an interest in the history of philosophy in the Islamic world? I'm not convinced that a BA in philosophy is useful for someone who wants to study the history of philosophy, let alone a completely different tradition of philosophical thinking to that often taught in the academy.

In reply to by A.

Peter Adamson on 26 July 2017

Thanks for that thoughtful

Thanks for that thoughtful post. I agree that the concept of "religion" is in need of definition; it comes up in other philosophical traditions too where for instance people worry about whether Buddhism is a religion or a philosophy, or both, or neither, or whatever. Since I am not a religious studies scholar, my main goal is just to get people not to exclude things from history of philosophy on the basis that they are "religion and not philosophy." So, recognizing that religion has "blurry edges" is grist to my mill, as it makes it harder to draw that hard and fast religion vs philosophy contrast.

Re. your second practical question, studying philosophy in the Islamic world seriously is difficult because you need a background in both philosophy and the languages (unless you are a native speaker). So a good route is a BA in philosophy then an MA in Islamic studies, or vice versa.

 

Product of my … on 23 November 2017

I have a question about

I have a question about historical presentism, that is, applying current moral standards and world views to historical times. I'm wondering what you think about it and how you practice the history of philosophy in relation to it. One approach might be simply to report what people have said, understanding their times from their own perspective and in their own context, without passing comment. Another might be to take a moral relativist position. Moral relativism isn't popular with a lot of people, so maybe someone would adopt that approach in their day job, while wearing their historian's hat, but believe something else, personally, out in the real world.

One of the ways historians justify writing new books is by saying that past events need to be reinterpreted in light of today's concerns and world views. That seems to be a presentist position. Another is to say we need to learn from the mistakes of history, which probably means making moral judgements about what things are mistakes. Also, there's an awful lot of history. Just selecting what is interesting to study contains an element of presentism. Why, for example, the lives of women in the past and not the lives of the elderly? That would seem to reflect a concern of the present. This might not be a problem for you, as you seem to be rejecting the notion of selection. No gaps!

In the US Declaration of Independence, it's stated that "all men are created equal", but some of the same people who wrote that also owned slaves. There, we don't need to apply our present-day standards to criticise them; we can criticise them on their own terms. Similarly, Marcus Aurelius was the Roman emperor who wrote this: "The universal nature has made rational animals for the sake of one another to help one another according to their deserts, but in no way to injure one another" (Meditations, 9.1). But Aurelius ruled over a Roman state that endorsed institutionalised slavery and he did little to abolish it. Again, we can criticise him on his own terms. With Aristotle, it's harder to criticise him on his own terms. As far as I know (not so much), he doesn't claim that all people are created equal, or that we should in no way injure one another. His views may be self-consistent.

Mary Midgley, in Animals and Why They Matter, was critical of Rousseau and his novel Émile. According to Midgley, Rousseau urges men to be free, but urges women to obey their men and to accept subjugation. When men enter society: "each, while uniting himself with all, may still obey himself alone and still remain as free as before". When writing about women he says: "the genuine mother of a family is no woman of the world. She is almost as much a recluse as a nun in her convent" and that "Woman is especially made for man's delight". Rousseau also writes that with girls: "all their lifelong they will have to submit to the strictest and most enduring restraints, those of propriety ... They have, or ought to have little freedom ... she should early learn to submit to injustice and to suffer the wrongs inflicted on her by her husband without complaint".

(Midgley's point here was to show that, just as the claims of women may not have been given a fair hearing in the past, the claims of animals may not be getting a fair hearing today.)

On his own terms Rousseau may not be inconsistent in having one approach for men and another for women. Today we generally think that wrong. If Midgley were a historian of philosophy, would she be wrong to criticise Rousseau in this presentist way? Is it fair of Midgley to apply the standards of today to Rousseau, who lived in a different time? Should we think Rousseau sexist only if he was even more sexist than the standards of his time, regardless of whether he is sexist by the standards of our time? Would a jury of his peers think him unfair to women, or would they think his views more or less about right? No one is raised in a vacuum. Everyone is a product of their time and culture. If I lived in Rousseau's day, I might have similar views. In a hundred years' time, people may look back at now and think us blinded by well-meaning ideology in saying there should be equal numbers of men and women as engineers and nurses.

It seems to me that philosophers are supposed to challenge established prejudice, received wisdom and the status quo. I find philosophers more blameworthy for uncritically endorsing the established prejudices, received wisdom and status quo of their time than I would, say, playwrights or politicians. I wouldn't expect anyone to be untouched by the views of their day, but philosophers would be failing to live up to their job description more than playwrights or politicians would be. As a matter of philosophy, we should be able to criticise thinkers on whatever grounds we can justify. As a matter of the history of philosophy, or of history more widely, is that criticism out of place and inappropriate?

A longish post, but as it wasn't obviously covered by your twenty rules, I thought I'd make it.

In reply to by Product of my …

Peter Adamson on 24 November 2017

That is a great question and one I am very interested in. As it happens, only a couple of days ago a column of mine was published in the magazine Philosophy Now - you can read it here - which asks how we should deal with repugnant views expressed by figures in the history of philosophy. I have found that some of the first responses to my column assume I am thinking about it the way you pretty much are, namely that it is a matter of "criticizing" (or perhaps even condemning) Aristotle, Kant or whoever expressed these awful things about other peoples, women, etc. Actually, I don't find that to be a particularly interesting question: it's hardly surprising that Aristotle was, from a present-day point of view, a misogynist, given that nearly everyone was at the time, and he is no more (or less) blameworthy for that than all the other Greeks who shared his views. Perhaps you could argue that philosophers should be held to a higher standard, but I tend to think that that is rather unrealistic: in general, I don't think it's fair to blame thinkers for failing to have ideas they might have had, and we should focus on exploring the good ideas they did have.

What I am more interested in, and in the column am addressing, is the question of whether those apparently good philosophical ideas are "infected" by the misogyny (or other forms of bigotry) found in the same works or authors, so that they can no longer, or only with great care, be considered as attractive views to us today. If that is the question, then adopting a "presentist" attitude is only natural because we are asking precisely whether these past ideas are still viable for us now.

In reply to by Peter Adamson

Product of my … on 25 November 2017

I'm in agreement that it's better to focus on good ideas rather than on bad ideas. Where they may not have given much thought to it, perhaps it's not fair to blame a thinker for casually echoing, in passing, the racist or sexist views of their day. With Aristotle on slavery and Rousseau on women, as they explicitly wrote about it in more than passing, any criticism of them is not about them "failing to have ideas they might have had", but about criticising the ideas they did have, after considering the matter.

As I'm a poor scholar and tend to approach things piecemeal, the notion of ideas being "infected" hasn't been a problem for me. If I were against utilitarianism, but thought Peter Singer had good ideas in relation to animal rights, I don't think I'd find those ideas infected by his utilitarianism. Then again, if politician X, whom I didn't endorse, supported an idea that I did endorse, I'd likely more readily think of that idea as being infected by politician X.

What motivated my question was listening to a number of history podcasts in which this issue often raises itself. A lot of presentist criticism seems to presume that we in the present are either right in our moral views, or have made progress, neither of which I necessarily agree with. I'm leaning towards making a distinction between criticism and blame. I might favour criticism, in the neutral sense of engaging with someone's position or actions, but prefer to avoid prescribing moral blame or condemnation. That might also mean not prescribing moral praise. I'm complimentary of Jeremy Bentham - for being pro pushpin, gay sex and animal rights - if it increases the amount of pleasure in the world. Views that were ahead of his time. Part of what I like about Bentham is that his utilitarianism seems consistent, unlike Mill's. I'd probably take pushpin over poetry.

It's interesting that in your article you write about "catching out" philosophers in self-contradiction, almost as if that's a cheap thing to do. It seems to me that's one of the most important tests of someone's position. If someone is self-consistent, that position is at least viable. It doesn't mean we have to adopt it; it may be one of many viable positions. Also, there may be positions that are significantly flawed, but still have interesting insights. Still, I don't know if I'm understanding rightly, in reading you as thinking it cheap to look for self-contradiction. To me, maybe naively, consistency seems fundamental.

Thanks for the reply and the podcasts!

In reply to by Product of my …

Peter Adamson on 26 November 2017

Wow, there's a lot there - let me pick out a few things to respond to:

"If I was against utilitarianism, but thought Peter Singer had good ideas in relation to animal rights, I don't think I'd find those ideas infected by his utilitarianism."

This strikes me as an odd thing to say: I mean, Singer's entire rationale for animal rights is utilitarian. It seems like you are saying that you are happy to accept the conclusions, or perhaps individual claims, of philosophers and discard pretty well everything else if you aren't in agreement with it. But surely the interesting thing philosophers do is not make individual claims (anyone can do that: God exists! God doesn't exist! We have free will! We don't!), the interesting part is the arguments they give and how their various claims all hang together. That's why I think the "infection" issue is so troubling.

"A lot of presentist criticism seems to presume that we in the present are either right in our moral views, or have made progress, neither of which I necessarilly agree with."

This raises an interesting issue which is almost a paradox: you seem to be saying that one can simultaneously hold a thesis (say, a moral view) while also not holding that the thesis is true. So you might say, "I don't criticize philosopher P or culture C for believing that women are inferior to men, because even though I myself believe that women are not inferior, I don't presume that I am right in believing that." How do you go about believing something, perhaps very strongly, without also holding that people who reject that belief are wrong?

"I'm complimentary of Jeremy Bentham - for being pro pushpin, gay sex and animal rights": Really? Does he actually apply his view to gay sex or are you extrapolating from what he says?

"It's interesting that in your article you write about "catching out" philosophers in self-contradiction. Almost as if that's a cheap thing to do." I agree that philosophers should be consistent; my point was that as sympathetic interpreters we should try to make sense of their views, looking for coherence and consistency, not seize on apparent contradictions as if finding these were the goal of interpretation. Or to put it another way, ascribing self-contradiction may certainly occur, but it is something of a last resort.

In reply to by Peter Adamson

Product of my … on 28 November 2017

My views on presentism aren't clear to me, and that's why I posted about it. I'm interested and want to explore it. When people say we can't reproach Kant for his seemingly racist views, because he was writing in a time and place where those views were common, that sort of makes sense to me. But when other people say of course we can reproach Kant where his views seem racist, that also sort of makes sense to me. So if what I posted might not seem to add up, it shouldn't be surprising. That said, I think I can reply to your points.

1)

My posts get rejected if I include urls. I searched for "Jeremy Bentham gay sex" and the first few results gave me what's set out below.

This is from the Guardian article:

The main impetus for Bentham's obsession with sexual freedom was his society's harsh persecution of homosexual men. Since about 1700, the increasing permissiveness towards what was seen as "natural" sex had led to a sharpened abhorrence across the western world of supposedly "unnatural" acts. Throughout Bentham's lifetime, homosexuals were regularly executed in England, or had their lives ruined by the pillory, exile or public disgrace. He was appalled at this horrible prejudice. Sodomy, he argued, was not just harmless but evidently pleasurable to its participants. The mere fact that the custom was abhorrent to the majority of the community no more justified the persecution of sodomites than it did the killing of Jews, heretics, smokers, or people who ate oysters – "to destroy a man there should certainly be some better reason than mere dislike to his Taste, let that dislike be ever so strong".

Nick Booth writes this in the UCL blog, promoting his talk on Bentham:

... Bentham, working from this philosophy, came up with some pretty modern-sounding ideas. These include supporting suffrage for women, calling for animal rights and the abolition of slavery. It also led him to advocate for gay rights, at a time when being homosexual in Great Britain could result in the death penalty. Bentham believed that things that gave pleasure were good. Sex gave pleasure, therefore it was good – as long as it was consensual and no-one got hurt. So as something good, sex should not be reserved for a man and a woman in marriage; it should be open to all – same sex, different sex, multiple partners. Professor Schofield summed it up rather succinctly – "Sex is the purest and most intense form of pleasure: hence, sexual liberty."

2)

I said: A lot of presentist criticism seems to presume that we in the present are either right in our moral views, or have made progress, neither of which I necessarilly agree with.

You replied: So you might say, "I don't criticize philosopher P or culture C for believing that women are inferior to men, because even though I myself believe that women are not inferior, I don't presume that I am right in believing that."

There are some things where there seems to be a fact of the matter: 2+2=4, H2O is water, the Earth revolves around the sun. There are some things where I don't believe there is a fact of the matter. Is Shakespeare better than The Simpsons? Is Bach better than Tupac? Different people will have different opinions. It depends on the criteria they use for judging "better". All I would ask is that each person apply their different criteria consistently.

Similarly, in morality, I don't think there is a fact of the matter. Crudely put, people may begin with different moral values, and because of that, even if people are entirely consistent and coherent, they'll probably end up believing different things. Chaim may start out with the value that we should honour the Jewish God. Belinda may start out with the value that we should promote individual liberty. Chaim may be consistent and coherent in saying we should stone gay people to death. Belinda may be quite consistent in saying gay people should be allowed to live as they please. Given the different starting values and beliefs each person holds, each person's position makes sense. There is no paradox, and there is no fact of the matter about which values to start out with. Similar to how there is no fact of the matter about what criteria to use to determine whether Bach is better than Tupac.

A criticism of this kind of view is that it makes moral debate difficult. If people begin with different values, there may be no common ground from which to begin debate. I agree with that, but it needn't be direct moral debate. They can debate on the other person's terms. So Belinda can say: if you truly believe in honouring God, that would entail doing such and such, where Chaim might not like doing such and such. Belinda might undermine Chaim's starting value by undermining his belief in God. There may be facts of the matter that affect the application of their values. For example, is it a fact that the Bible says that men who have sex with other men should be stoned to death? Yes it is. Leviticus 20:13. Moral debate is possible, but people may need to address their interlocutor on their interlocutor's own terms. This is why I emphasise criticising people on their own terms and emphasise consistency. Given their different starting points, and because there is no fact of the matter about where to start, Chaim and Belinda would only be wrong if they were inconsistent or got their other facts wrong somehow.

(I use the Jewish God here because some Christians say the New Testament supersedes the Old Testament, so they are not bound by what it says in the Old Testament.)

To speak on presentism more directly, a podcast I was recently listening to talked about the Bible passage Deuteronomy 22:28-29. There, a man who has raped a virgin daughter is ordered to marry the victim and to pay her father 50 shekels. Many might shake their heads in disbelief today. However, the point was made on the podcast that the alternative would be for the father and her brothers to get "justice" by killing the rapist. There was no police force, no courts, no state institutions, just a loose association of tribes. The father and brothers of the killed man would then seek "justice" themselves, killing someone from the daughter's family in their own revenge, beginning a blood feud that might lead everyone concerned to live in fear and violence. That kind of blood feud, spanning several generations, happened not so long ago between the Hatfields and the McCoys in the US South. In light of the desire to avoid a blood feud, if the podcast describes the situation fairly, the Bible's solution makes some sense. It still doesn't explain why the rapist has to marry the victim, or why the money should go to the father and not the daughter. The idea is that the daughter is now sullied goods and no one will want her, but give her the money in her own right and she might be much more attractive. Setting that aside, the point is that our morality today presumes things like police, courts and justice through the state, things that the people of the early Bible had no access to. We can't always straightforwardly judge other peoples and times by our moral standards. Whether it's a lack of state institutions or something else, the circumstances and problems are often different.

Using your example, if a family lived in circumstances where the value of a child depended on how well that child could use violence to defend the family, and if in general boys could use violence better than girls, this in a time before guns, then that family may be justified in saying girls are inferior to boys.

3)

You said: But surely the interesting thing philosophers do is not make individual claims (anyone can do that: God exists! God doesn't exist! We have free will! We don't!), the interesting part is the arguments they give and how their various claims all hang together.

There are lots of Western people who seem to reject Buddhist beliefs about rebirth and nirvana, but who also embrace Buddhist beliefs about the self, letting go of desire and the benefits of meditation. It seems to me someone could accept Newton's mathematics without worrying about his views on alchemy and the Bible. The Stoics believed something like this: that the universe, or nature, or God, was a unified ordered thing, with each person having their proper place within that order, a kind of teleology, a purpose at work in the cosmos. I don't see why someone can't reject that, but still embrace the Stoic views about some things being outside our control, with the good and the bad lying only within the things that are inside our control.

I agree the arguments people give are interesting, but to me, whether or not it all hangs together is not especially troubling. Peter Singer makes the point that an adult chimpanzee is cognitively more advanced than a human baby or someone mentally disabled. If we can experiment on adult chimpanzees, why can't we experiment on human babies or the mentally disabled? Is it simply speciesism? Singer may be motivated by utilitarian principles, but utilitarianism has no bearing on the validity of that question. The argument is a good one, and it can stand apart from utilitarianism. There's no need for that argument to hang together with anything else.

One of the reasons I post online is so that people much sharper than me can point out where I'm going wrong. Hope to learn something. So I look forward to you shredding my post!

In reply to by Product of my …

Peter Adamson on 29 November 2017

The Bentham stuff is fascinating, thanks! Re. 2, the basis of what you are saying there is moral relativism, which I think one should just reject, and pretty obviously so. It follows from moral relativism that, e.g., we are not in a position to assert (as a straightforwardly true claim, as opposed to a mere expression of our own belief, cultural attitude, etc.) that torturing babies is wrong, or that committing genocide is worse than not committing genocide. I think you may be falling into the trap of going from "more information or context would help me understand why culture C thinks practice P is ok" to "I am in no position to say that culture C is wrong to practice P." But it seems to me that such moral claims as "genocide is morally worse than eating an apple" are so obviously true that nothing could possibly be said in favor of relativism that should lead one to doubt them. Therefore, while there may be _something_ to be said for thinking more relativistically, e.g. being modest about our own certainty in hard cases, or being less ready to be judgmental, in general total moral relativism is to my mind a non-starter. And if total moral relativism is false, then your worries about presentism are misplaced: we can certainly judge slave-holding societies as being immoral insofar as they had slaves.

Re. your third point, I would simply agree: you can take a Stoic attitude without invoking the Stoic deterministic cosmology. In fact Epictetus hardly ever does invoke that cosmology, but is the greatest exponent of the ethical theory. The only real pressure on this view comes if you ask yourself why we should be so accepting of things that are outside our control: the classic Stoic view is that we should accept these things as the will of god, and if you cut god out of the picture the view could tip into despair (I can't control external things, and those things are horrific!) rather than beatific acceptance.

In reply to by Peter Adamson

Product of my … on 1 December 2017

Earlier you said: But surely the interesting thing philosophers do is not make individual claims (anyone can do that: God exists! God doesn't exist! We have free will! We don't!), the interesting part is the arguments they give and how their various claims all hang together.

In your last post you said: But it seems to me that such moral claims as "genocide is morally worse than eating an apple" are so obviously true that nothing could possibly be said in favor of relativism that should lead one to doubt them.

If the best argument against relativism is that some things "are so obviously true", then to my mind, you haven't done any more than assert a claim. According to what you wrote before, "anyone can do that". People in Europe used to think about atheism the way many seem to think about relativism: it's just obviously wrong. The existence of God is "so obviously true". As it was an article of faith for those Europeans then that God existed, so it seems to be an article of faith for many today that moral facts exist.

There are no objective, universal, mind-independent prescriptive moral truths out there. The universe is indifferent to us and doesn't care. There are no moral facts. Not even "queer" moral facts. Some wasps lay their eggs in caterpillars so that the larvae can eat the caterpillar from the inside out. Cats play with mice before killing them, and we cuddle up to them after they've done it. That's the way the universe is, that's the existence we've been thrust into, that's the human condition we face and have to make the best of. In the US people used to think of torture as being almost as abhorrent as genocide or slavery. The circumstances have changed and the views of many Americans have changed. That's how it works. I believe that genocide is wrong. I believe that slavery is wrong. But it's not clear that's obvious. It wasn't obvious to the many people who carried out genocides in Europe or in other parts of the world. Slavery wasn't wrong to the many people who practiced it. There is no fact of the matter about genocide or slavery being wrong. I bite that bullet. I'm not falling into any trap.

I wouldn't call myself a relativist, because too many people have misconceptions about what that means. It needn't mean anything goes, it needn't mean moral criticism is impossible and it needn't mean tolerating what other people want to do. At base, I'm a moral anti-realist. There are no moral facts on which to ground moral frameworks. Still, we find ourselves here. We have to make decisions about how we want to live and how we want to organise our societies. There are no moral facts about which moral values we should adopt, but we should be consistent with the moral values we have adopted and be correct about any relevant matters of fact involved in applying them.

If you or anyone else can make a good case for moral realism, or can demonstrate the existence of moral facts, I'm open to being persuaded. Some moral facts "are so obviously true" doesn't persuade me and seems no different to you or anyone else stamping their feet, no matter how many feet are stamping.

In reply to by Product of my …

Peter Adamson on 1 December 2017

Ah, ok. The reason I confronted you with claims like "genocide is obviously worse than non-genocide" or "torturing babies is repugnant" is that I have very often encountered moral relativism and moral anti-realism (it is typically the opening position taken by undergrad students), but most people will immediately give up that stance upon being confronted with such a clear example. You're right that such examples in themselves do nothing to refute moral anti-realism, it's just that very few people are willing to disbelieve or suspend judgement about them - or rather pretend to do so, which I suppose is what is really going on when people endorse moral anti-realism. It's sort of like external world skepticism, in that it is difficult to refute but nearly impossible genuinely to believe.

If however you want to dig in your heels and adopt relativism or anti-realism, then I agree it is going to be very hard to refute you. I personally don't find it very interesting to adopt or argue against such radical skeptical positions since, as I say, they just seem to me obviously false. But as you say, "your view is obviously false" is not an argument.

I guess that the best one can do against the moral anti-realist is challenge you to make good on the sort of thing you say here: "It needn't mean anything goes, it needn't mean moral criticism is impossible and it needn't mean tolerating what other people want to do... There are no moral facts about which moral values we should adopt, but we should be consistent to the moral values we have adopted and be correct about any relevant matters of fact involved in applying them." Now, as a moral anti-realist there is no obvious reason why any of that should be true. You could easily say that in fact, anything goes. However, having admitted otherwise you take on a pretty huge burden of argument, because you now need to explain to those who do have realist moral beliefs (i.e. everyone, basically) why on your view not just anything goes; or even why we "should" be consistent given that there are no facts about "should". I'm not saying this is impossible, I'm just saying that it is a mountain to climb, philosophically.

Given that this is such a huge challenge for you as a moral anti-realist and that, as I say, moral anti-realism is totally counterintuitive and has no obvious upsides apart from the fact that it is very hard to refute, I really don't see it as an attractive view in any sense. In general, I don't think "view X is hard to refute" is a good reason to adopt view X. I guess that often in philosophy one's choice of view will come down more to the following considerations: what speaks in its favor, what advantages does it have, how hard is it going to be to square with our other commitments? I mean, ultimately a radical skeptic who refuses to commit to ANY claims at all is irrefutable; but who wants to be a radical across-the-board skeptic?

In reply to by Peter Adamson

Product of my … on 1 December 2017

My position is that there are no moral facts. No moral truths. But we are all here and we all have to do something. We have to do something with our lives and have to organise our societies in some way. As there are no moral facts, there are no right answers as to how to do that, but we still have to do that. What is a fair rate of tax, for example? Would it be a flat rate or should the rich pay more? Should I spend my life chasing more money? Or should I do something else with my life? There are no moral facts, but we still have to take a position on things like that, despite there being no fact of the matter answers to those questions. Even refusing to take a position, is a form of taking a position. Even doing nothing, is a form of doing something. We are all here and we all have to live our lives in one way or another.

It's not about proving or refuting anything. It's about deciding what we think is best, while accepting that any answers we come up with are not grounded on any moral facts. There is no right or wrong. All I can do is think about my position, then try to live my life accordingly and, where I'm interested to, try to persuade other people to see things as I do. They may or may not see things as I do. They may persuade me to see things as they do. But whatever is the case, there is no fact of the matter about it.

What I would try to persuade people of is that once people have adopted their moral values, they should be consistent with them and be correct about any facts that are relevant to that. If I put a high value on individual liberty, I should be consistent with that, even where people use that liberty to do things I might prefer them not to. If I put a high value on promoting happiness, I should try to be correct about the actual matters of fact as regards what promotes happiness. As people's moral values have a high importance in my account, I would prefer people to give some thought to the moral values they hold. That's what I would try to persuade people of, where I'm interested in trying to persuade people.

Now Paula could say: "I don't care about being consistent, or about being correct, or about thinking out the values I hold." I personally think Paula would be wrong to take that approach, but Paula wouldn't be wrong in any fact-of-the-matter sense. It may be entirely rational to take that position. It's hard thinking work that a lot of people might not enjoy, and they could probably spend their time more happily doing other things. All I can do is try to persuade. All anyone can do is try to persuade, because there are no facts of the matter. If Paula thinks anything goes, she can live her life on that basis and make her case for that. She would need to remember that anything goes would justify someone else doing anything to her, if that someone else had the power to do it. If Simona thinks anything doesn't go, she can similarly live her life on the basis of the moral framework she's adopted and make her case for her moral framework, if she's interested to.

Al Gore said that before the civil rights legislation was passed in the 1960s, the conversations were won. Conversations are one form of persuasion. Around 1492, my understanding is the Spanish told the Jews that they could leave, convert or be killed. Force of various kinds is another form of persuasion. Marx talked about how the powerful create moral ideologies that help justify and entrench their positions of power in society. That's also a form of persuasion.

I don't see that I have any "huge challenge" because I'm not attempting any huge task. There are no moral facts, but as we are all here, we have to do something with the situation we face. So, if we're interested in this kind of thing, we can think it out and try to persuade other people of what we think, while being open to being persuaded by what other people think. Or, if they can square it with themselves and if they're willing to face the potential social costs, people can live in any way they want to, without ever thinking much about it and however much other people might try to persuade them to live otherwise. Ultimately, there is no fact of the matter about it, because there are no moral facts.

In reply to by Product of my …

Peter Adamson on 2 December 2017

Yes, I get that you deny the existence of moral facts, and I take your point that even a moral anti-realist has to get on with life somehow (as I suggested before I suppose the anti-realist does that by, effectively, living as if moral realism were true and that is almost what you say here).

The difficult question is how you can make sense of all the things you are saying that are not negative claims about morality, for instance "once people have adopted their moral values, they should be consistent with them": if confronted with someone who has no interest in that you would have to say, "well look, it is not true that you should be consistent since there is no fact of the matter about that, but you should be consistent anyway". Which is, to put it mildly, puzzling. Or again "I personally think Paula would be wrong to take that approach": so you think she is wrong, even though there is no fact of the matter about whether she is wrong.

I suspect what you have in mind with that "personally" is something like: I know that there are no real moral truths, yet I find myself having my own moral intuitions and beliefs, which I realize are not really true but have no choice but to live in accordance with. So there is a deep cognitive dissonance built into your view. I guess that the strategy you should adopt to resolve this is to give some new interpretation to words like "wrong" or "should", e.g. "wrong" may equal "it is not to my taste" and "should" could mean "I wish that..." There are plenty of attempts to do that in 20th century ethics, e.g. emotivism, which argued that moral judgments are sort of like saying "yay!" or "yuck!"

But as I say, the real question here is - given that all this is so bizarre and complicated - why adopt moral anti-realism in the first place? It seems like all you are willing to say about that is "there are no moral truths." But that is just a statement of your anti-realist position, it is not a reason for the position. And the position is, as I say, deeply unattractive in all kinds of ways, so your reason had better be pretty darn good. Like I said before, I suspect moral anti-realists usually get into their way of thinking by confusing the need for tolerance of other views, and the need for an appropriate degree of moral humility, with the far more radical notion that there are just no moral truths at all. I think there is a lot to be said for tolerance and humility but nothing to be said for anti-realism, so I would just suggest avoiding the slide from the former to the latter.

In reply to Peter Adamson

Product of my … on 5 December 2017

Again, if you or anyone else can make a good case for moral realism, or demonstrate the existence of moral facts, I'm open to being persuaded. People often argue about who has the burden of proof. My approach to that is to try to give my case as best I can, regardless of who I think bears that burden more.

Some people think we can find moral facts or moral truth in our emotions and intuitions. If moral facts don't have any causal influence in the universe, how did our intuitions evolve to track them? If moral facts do have a causal influence in the universe, why can no one detect them, measure them or scientifically study them? We can independently verify the theory of relativity. Why can't people independently verify these natural moral facts? If our natural emotions and intuitions are good, then xenophobia as part of the in-group/out-group bias is good. Many have pointed out that genocide may be good from an evolutionary perspective. According to the selfish gene type stuff, the goal is to maximise the number of your genes in a viable gene pool. Killing off groups of people who have different genes, genocide, achieves that. If our natural emotions and intuitions are good and we have natural emotions and intuitions that encourage us to be hostile to people different from us, genocide may be good.

In any case, I'm skeptical that our intuitions are innate. I suspect that they are mostly socialised into us. If so, when people say our moral theories should match up with our moral intuitions, all they are doing is seeking to justify the status quo through circularity. Our moral theories would then match up with the beliefs and values we've been socialised into, not with any moral truth. People's intuitions about gay sex, IVF, women in combat sports, transgender people and many other things have changed substantially in a short period of time. People in other cultures and times often seem to have very different intuitions to ours today. The Chinese eat dogs, Indians have arranged marriages, the Aztecs practiced human sacrifice, the Greeks enjoyed sex with young boys, the Egyptian royal family favoured incest, the Romans went crazy for gladiators fighting to the death, many peoples practiced cannibalism, slavery and infanticide. Even now, Americans are coming over to the view that torture is permissible. If our emotions and intuitions are not innate, if they change as individuals' social worlds change and if they differ from culture to culture, it doesn't seem they're guided by any fixed moral truth.

I don't believe in a God or Gods. These moral facts, if they exist, don't seem to be natural facts, as we can't scientifically get at them. The natural world doesn't seem to behave morally and we are just animals evolved by chance. The universe appears indifferent to us. Genghis Khan was responsible for the death of millions of people. The universe didn't punish him. I would guess he was very pleased with the way his life turned out. He's not in hell, hasn't been reborn as a slug, he's just dead. Why don't these moral facts seem to have any force on what people do? They didn't stop Genghis Khan. Some people talk about an inner moral sense or a moral faculty that tunes in to moral truth. How does that work? Seems like nothing more than appealing to magic as I see it. We have consciences, but Freud pointed out that these are largely socialised into us. Appealing to evolved emotions and intuitions doesn't seem to work. Most people think some of our evolved emotions and intuitions are good and some are bad. That seems to mean we need some kind of further moral framework to help us decide which are good and which are bad. We can't simply point to evolved emotions and intuitions as being moral truth. It may be that most people believe in moral realism, but most people in Europe probably used to believe in a God, souls, witches, fairies, alchemy, the four humours and all other kinds of stuff. Nietzsche said something like that people have given up God, but have still retained a lot of the God baggage. Moral realism is part of that baggage.

You say my view is "puzzling" and "bizarre". That's how an attachment to moral realism seems to me. It seems like an article of faith with no basis. The puzzling and bizarre nature of the moral realist view is often hidden, because moral realists tend to talk about moral facts and moral truth without explaining what these things are. Which you haven't. If they did explain what these things are, likely it would show what bizarre and puzzling things these moral facts would need to be. I imagine that a few hundred years ago atheism would have seemed puzzling and bizarre to many. Your argument is similar to someone who might've said a hundred years ago that we have to believe in a God because look at the "unattractive" consequences. If moral facts exist, why don't moral realists tell us what these moral facts are and how they arrived at them, so we can verify these moral facts? As they don't seem to be able to do that, maybe they should question their faith in these moral facts. I don't see the point in invoking moral realism if no one knows what these moral facts are, no one knows how we might find them and no one would even know if we had somehow stumbled across them. The claim that moral facts don't exist makes compelling Ockham's razor sense to me.

I don't see my view as amounting to a matter of taste, though I don't have huge problems with that view. Consistency and persuasion are important parts of how I think about things. I prefer strawberry ice cream over chocolate ice cream. I also prefer chocolate spread over strawberry jam. It might be said that's inconsistent. In a matter of taste, I don't find that disconcerting. In a moral framework, I would want to address that. I also like to think that I'm open to being persuaded to see things differently in moral matters. I don't think anyone could persuade me to prefer chocolate ice cream over strawberry ice cream, even if I was open to that. So I don't think my view amounts to "yay" or "yuck". If it was said that it's a matter of taste that I want to be consistent and open to persuasion, that's not the best way to describe it. That would seem to mean that deontologists are deontologists as a matter of taste, utilitarians are utilitarians as a matter of taste and so on. At base, there are probably going to be some values or beliefs that are just brute starting points. Taste doesn't seem an appropriate way to describe those starting points. As before, those starting points can be argued for and are amenable to persuasion.

You characterise me as saying: "well look, it is not true that you should be consistent since there is no fact of the matter about that, but you should be consistent anyway".

I'm not claiming what I believe is true and I'm not claiming what I might want to persuade someone else of is true. If people don't want to be consistent, they are wrong as I see it, by my framework, but not in any fact of the matter sense. In the Euclidean framework, parallel lines do not meet. In non-Euclidean frameworks, parallel lines can meet, for example, lines of longitude on a globe. To say parallel lines can meet in the Euclidean framework is wrong, but not in non-Euclidean frameworks. This doesn't seem puzzling to me. I believe what I believe; if I try to persuade someone else, they don't have to take it up. They may be wrong by my framework, but not by their own framework, and there is no fact of the matter about which framework to adopt.

You also think I will have to say: I know that there are no real moral truths, yet I find myself having my own moral intuitions and beliefs, which I realize are not really true but I have no choice but to live in accordance with them.

I disagree that I have "no choice but to live in accordance with them". I do have a choice. I'm not virtuous and often fail to live up to what I believe in. I'm free to fail. There is no "deep cognitive dissonance" in my position. I also don't "find myself" having moral beliefs. I have to spend time thinking them through, I don't just wake up with them of a morning. Not only that, but I recognise that my beliefs are provisional. As I change, as the situations I find myself in change and as I'm influenced by other people persuading me to see things differently, what I believe will likely change. There are no moral facts of the matter and I do the best I can in the face of that freedom.

I get the feeling we might start going round in circles. I've written a lot, so unless you say something that I'll just burst if I don't respond to, I've probably said all I want to. Quite likely that means I'll reply with a post twice as long as this one. Whether I post again or not, it's been enjoyable to think about :)

To sum up what I think, there are no moral facts and there are no moral truths. The universe is indifferent to us and doesn't care what we do. But we are all here. We have to live our lives in one way or another and we have to organise our societies in one way or another. There are no moral facts, but we all have to make moral choices, even if that moral choice is to avoid thinking about them. Things like:

Should I eat meat? Should we intervene in Syria? Should we allow abortion? Should I have an abortion? Should I spend my hard earned money on pristine new Air Jordans or should I give the money to charity? Should we organise universal healthcare paid for by taxation? Should I fool around with some hot young thing, where I haven't agreed it with my partner and know my partner won't find out? Should I spend $10,000 on my daughter's life saving operation or use that money to save the lives of a hundred poor Africans? Should I put my sole surviving parent in a care home or should I encourage them to come and live out their days with me in my house?

As you seem to be a moral realist, maybe you'll explain what the right answer is to all those things and how you arrived at the moral truth underlying those right answers.

I say there are no moral facts about those things, but we all have to make moral choices. A lot of the time we have the luxury of not making an explicit choice. I don't have to form a view about Syria, because my view doesn't count for much in the end. If I eat, which everyone does, I do have to make a decision about whether or not to eat meat. In the past in the UK, it may sincerely not have occurred to many that eating meat might be wrong; I doubt anyone in the UK can sincerely say that now.

We might have strong views about those things. We may unthinkingly adopt and echo the views of our culture. We may look to other people for advice. We may be influenced by salient matters of fact about financial cost, our health, how other people will judge us, whether or not $10,000 will actually save the lives of a hundred poor Africans and so on. We may be guided by general moral principles. We might try to think it out for ourselves. We may try to persuade other people to see things as we do. Whatever approach we take though, there is no fact of the matter about what the right thing to do is.

Sartre has an example of a young man in France who had a choice between looking after his mother or joining the army to fight in WW2. If I remember rightly, Sartre left that an open question. That's what moral life is for everyone: an open question. Because there are no moral facts.

Kailer Mullet on 25 October 2018

I clearly don't read fast enough to study philosophy. I couldn't even apply the rule on reading the whole text to this list of rules. Primary sources, indeed. The primary ones are the tough ones. I like the secondary ones better because they are easier. And if that means I'm not properly understanding the work, then so be it. I've read primary sources of some things, and I don't feel it gives me that much deeper an understanding. Certainly not deep enough to pay the cost of mining that dense prose everyone used to write in. I swear, any book written before 1905 is unreadable. Back then books cost a lot, so folks didn't want to waste money on little ones that you could read over a weekend. People back then wanted to get their money's worth. We're talkin' tomes. Something that will get you through to spring planting. But it's tougher to write long books, so back then authors liked to write in big complicated sentences to really pad things out. They wouldn't assume that someone knows what a thought is. Better to derive it from first principles. Really helps to get that page count up.

Barry Cotter on 25 October 2018

> As for "continental" philosophy, the scandals of parody articles being accepted for peer review speak for themselves.

I suggest you look at the articles and the journals they were submitted to before impugning continental philosophy as such. Gender studies and feminist philosophy may be crap, but the only quasi-philosophical journal which accepted or published anything was Hypatia. Cultural studies and women’s studies are activism, not philosophy, and should be ejected from the academy, but they’re not philosophy and they’re not claiming to be philosophy of any kind, continental or analytical. The Wikipedia article on the Grievance Studies affair provides a useful précis. https://en.m.wikipedia.org/wiki/Grievance_Studies_affair

In reply to Barry Cotter

Peter Adamson on 25 October 2018

Oh, well my post above that you're responding to predates the so-called "grievance studies" scandal, which only just happened, by two years. What I was thinking of was the similar hoax perpetrated by Sokal against a journal called "Social Text." Perhaps I should have said "postmodern" rather than "continental," actually, that might have been more accurate; but the target there was in any case more obviously a part of the philosophy industry than in this more recent case. I hasten to add though that I am far from wanting to cast aspersions on all of continental philosophy, which I find fascinating though difficult. I am just saying that in general jargon has its downsides and both the analytic and continental traditions suffer from it.

In reply to Peter Adamson

Barry Cotter on 30 October 2018

Forgive me for not looking at the date posted, I came here through marginalrevolution.com and assumed your rules had been posted more recently. Do you think that postmodernists in American academia, the literary “Theory” people, are taken seriously by non-analytic philosophers? Reading Brian Leiter’s blog one gets the impression that there is at the very least a substantial minority that view them with utter contempt.

Is there some reason for the very small selection of your podcasts available on iTunes?

Thank you for an extremely interesting article.

In reply to Barry Cotter

Peter Adamson on 30 October 2018

Just to start with the iTunes thing since it's easier: aren't most of them on there? They have a rule that only 300 episodes will appear when you browse iTunes so it will miss out the first 15 or 20, but if you hit subscribe it will still give you all of them starting with the first episode.

Re. postmodernists, I would say that there is indeed a split. Analytically trained people tend not so much to sneer at it, in my experience, as to ignore it - at least in the UK where I spent quite a bit of my career. Probably it is seen, often, as not part of philosophy at all. However in some departments especially in the US there are both "continental" and "analytic" people and they have to try to get along somehow. When things go well there is mutual respect. My own view on this is that authors like Heidegger and Derrida are of great interest to many students and, from my passing familiarity with them, both genuinely interesting and very, very difficult to understand. So the effort/reward balance is not quite as favorable as it is in most analytic philosophy, where you can usually follow what is going on without too much trouble and get out of it whatever it has to offer. But that doesn't mean that it isn't worth the effort, of course.

In reply to Peter Adamson

Alexander Johnson on 23 November 2018

Speaking of both the Sokal hoax and "beware of jargon", I thought this article was very interesting.

In reply to Alexander Johnson

Peter Adamson on 23 November 2018

I liked the remark "I don’t know what this means. But I wrote it and I was rewarded for it."

The best thing I have read about the Sokal Squared hoax is in a recent London Review of Books: the gist was that the trio of hoaxers first got their submissions turned down flatly, but raised their game and by mistake wound up writing papers that were actually pretty good, having mastered the tools of the intellectual field they were trying to critique, which explains how they got the papers accepted.

In reply to Peter Adamson

Barry Cotter on 24 November 2018

If three people working part time can get enough publications to have a shot at tenure in one year they either understand the field or there’s nothing there to understand. The three authors do not appear to have emerged from their efforts with much greater respect for the fields they published in than when they started. I don’t expect anything from gender studies so that wasn’t too surprising. If you think the LRB may have been right and the articles were actually good I suggest looking at the article accepted at Hypatia, or the R&R’d one. You will think substantially less of that journal afterwards.

Thank you for the podcasts. I’m sure you don’t get told they’re great enough.
