Monday 20 July 2015

Virtual Travel

I went to a family gathering this weekend, about two hundred miles from the places I call 'home'. This is actually the first time this decade I've been that far from home (I don't get about much). The journeys there and back took seven and five hours respectively by car, and were the longest car journeys I've been on in the same period.

Travel is starting to become a theme of my writing about video games. Many of my favourite in-game moments have to do with travelling through virtual worlds. A big part of my excitement for upcoming games like Tales of Zestiria, Xenoblade Chronicles X and Final Fantasy XV is that they offer vast new worlds to explore.

By contrast, I really don't like to do anything in the real world that's vaguely analogous to my virtual explorations. I don't like hiking, I don't like driving (or being a passenger, since I can't legally drive myself anywhere), and I generally don't like to travel. There are a variety of reasons, but the biggest is quite simply that real travel is a lot of effort.

I want to see mountains like the ones my dad likes to climb without getting sore feet. I want to experience the vast sweeps of landscapes like the ones we drove through this weekend (some of which were very pretty) without the steadily hardening crick in the base of my spine. I want to float through the world as effortlessly and intangibly as a videogame camera.

And I thought until this weekend that there was no harm in this, provided I kept to my lane and didn't get too whiny when I actually did have to travel. But Actually Travelling, and looking at the landscapes I travelled through with the same critical eye I've been training on videogames, put me ill at ease.

Games, by the limits of their technology and the demands of their audience, compress distance. Sure, in some recent games you can walk for hours in a straight line without touching the sides. But you could walk the real world for the same length of time and in many places not even leave the valley you start in. I found myself wondering, as we drove over the crest of a hill and the horizon retreated on Friday afternoon, how dishonest it is to indulge in this, and how harmful.
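To put rough numbers on that compression, here's a back-of-the-envelope sketch (every figure in it - walking pace, the stretch of road, how long the 'same' stretch takes in-game - is my own illustrative assumption, not a measurement of any actual game):

    # Back-of-the-envelope comparison of real and in-game travel times.
    # All figures are illustrative assumptions, not data from any game.

    WALKING_SPEED_MPH = 3.0        # a typical human walking pace
    real_distance_miles = 20.0     # one sweeping vista seen from a car window
    ingame_crossing_hours = 0.67   # assume ~40 minutes to walk the 'same' stretch in-game

    real_walk_hours = real_distance_miles / WALKING_SPEED_MPH
    compression = real_walk_hours / ingame_crossing_hours

    print(f"Real walk: {real_walk_hours:.1f} hours")    # ~6.7 hours
    print(f"In-game walk: {ingame_crossing_hours:.1f} hours")
    print(f"Compression factor: ~{compression:.0f}x")   # ~10x

Under those assumptions the game world is doing roughly a tenfold compression, which is the scale of discrepancy I'm gesturing at below.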

It would be a pretty shallow critique to say that the shortening of distance in games is straightforwardly misrepresentative and creates harmful expectations of travel - no better than tired old arguments that the mere presentation of violence is sufficient to induce people to be more violent. It's more complex than that.

But games are so often about the mastery of space, the individual eventually rising to an effortless, unchallenged mobility. There's no better example of this than the 'airship moment' in JRPGs, the point at which you've explored most of the world on foot and, to avoid forcing you to retread old ground, you get a tool that allows you to hop or float to wherever you want, bypassing even the abstractions that are supposed to add labour and time back into the earlier compressed journeys.

This is all a bit unfocussed and musing-y (which is why it's here and not on my actual games blog). The problem probably has more to do with the way games construct mastery than the way they handle travel and distance. But this weekend, looking out at the same sweeping vista for twenty miles and realising it would take ten times as long to cross on foot as in-game, I had a moment of very sharp discomfort. I hope I'll be able to hold that in mind as I develop my critical ideas on this theme.

Wednesday 1 July 2015

Feels and Reals


'Reals over feels!' and variants thereof have become a slogan for the internet right in the ongoing shitstorm over whether 'objective' journalism is a thing. Simplistic as it is, it's an expression of an ideology with roots in the work of some much-celebrated twentieth-century philosophers (at least, those in the English or anglocentric tradition), and indicative of a subtle shift they engineered in how we use language about truth and reality.

The general meaning of 'Reals over feels!' is that one kind of proposition, understood as impersonal and objective, should be considered more true, or more worthy of consideration, than another, understood as personal and subjective. Statements about objects independent of any personal perspective are considered accurate; statements about how something seems to some perspective are considered inaccurate or invalid.

In the journalistic field, the problem with this is that all journalism, no matter how diligent, is perspectival. This issue generalises, though; there are, quite simply, no non-perspectival facts. This is not the same as saying that there are no facts. How we arrive at the idea that there are objective facts despite the truth of this claim is a topic to which I'll return later.

Let's start with the basics. We have known since Descartes that the only sure foundation for knowledge is conscious experience – that the only thing that absolutely cannot be doubted is the current content of our conscious fields. I may be only hallucinating that I sit in front of a computer right now, or the image of the computer that I see may be the product of a Matrix-like simulation (or, in Descartes' scenario, a trick played by an evil demon), but I cannot be mistaken that I seem to see the computer; I cannot be mistaken that a computer appears before me.

Absolutely every other thing we take ourselves to know is known by inference, and every means of inference we have, we know to be periodically fallible[1]. So however reliable a claim of knowledge beyond immediate awareness may be, we know there must still be some small chance that it is wrong.

Why emphasise this? Well, as the physical sciences developed in the wake of Descartes, the gulf between direct awareness and the theories of the sciences widened dramatically. The seventeenth century gave us cells, the nineteenth atoms and the twentieth quanta, so tiny that any conventional notion of being directly conscious of them goes out the window.

My point is not to dismiss the theories of science, not at all; just to remind us where they come from. What I want to repudiate is the relatively recent philosophical contention that because direct awareness is of appearances that often do not correspond closely to the equivalent deliveries of scientific theory, it is inherently misleading.

The difference is subtle. No-one denies that appearances can be deceiving. The question is whether appearances are inherently deceiving. The shift from the former claim to the latter was accomplished in philosophy largely through the rise of Bertrand Russell, whose dominance pushed aside (and in some cases toppled outright) the British Idealists and Continental Phenomenologists.

Idealism (in this, rather than the political, context) is the metaphysical and/or epistemological theory that the world, or at least our knowledge of it, is fundamentally based on experiential/conscious facts. Phenomenology is a philosophical method that requires one to start from what is observed most directly, explain that, and then build on the explanations. Both these positions have clear roots in the idea that conscious awareness comes first.

Russell, along with friends like G.E. Moore and disciples like the young A.J. Ayer, held that these approaches had given rise to obscure and absurd metaphysical systems, convoluted theories that were of little use in actually explaining things. It's true that the phenomenologies and idealisms of the 19th century were complex, but then so is the world[2]. Russell in particular held logical clarity to be the most important virtue of philosophical systems, and was willing to ignore or bury a great many issues that would not submit to logic-based treatments[3].

What all the theories discussed so far, including those of Russell and Moore, have in common is that they are attempts to explain the relationships between the experiences of different people. We take my experience, now, of sitting in front of a computer, to be accurate precisely because if you came and took my seat, you would have a similar experience, and because if I come back to this room later today and sit in this same seat, I will have another, similar, experience.

The standard early modern philosophical explanation of this consistency would be that there is an object, the computer, in this room that produces computer-like experiences for any who sit in front of it. With modern science, however, we know that the object in this room can't be simply described as 'a computer'; it's an immensely complex structure of polymers and electronics, each themselves complex structures of molecules, which are made up of atoms, which are in turn complex structures of subatomic particles and fields that can be described mathematically and logically but not terribly intelligibly.

This sets up a huge discrepancy between the experience of sitting at the computer and the scientific description of what's going on (it gets even worse if you try to factor in a scientific description of me, and/or the process of perception). The tendency of the (anglocentric) philosophers of the 20th century was to argue that this meant the experience was deceptive and false, while the scientific description was accurate and true.

But as I argued here, the scientific description is actually much less useful than the experiential one in most cases. Our knowledge, the everyday stuff that enables us to find our way to the shops and so on, is overwhelmingly experiential in character.

And this hints at another reason for preferring the experiential to the scientific; where scientific knowledge is useful, it is only because of some effect on experience that it enables us to generate. The microscopic precision that allows Intel to inscribe GHz CPUs on a postage stamp is only worth achieving because computers enable wondrous new experiences, whether that means exploring the Mushroom Kingdom or establishing personal relationships that stretch around the globe or even just being able to do your own accounting without needing pages and pages of maths paper.

In truth, all values – not just the emotional or aesthetic, but every kind of utility as well – are values only from some human[4] perspective. Feels are reals, both in the general sense that perspectival facts are real (because they are the only kind of fact), and in the specific sense that emotional perspectives are important, because it is from those emotional perspectives that the values which make anything we do worthwhile spring.

The only question that remains is why, if all this is correct, some people are so convinced that there is an 'objective' perspective, one that is right above all others. I said above that the issue is about the relationship between experiences; Russell and his colleagues came to see the explanation for the relationship as more fundamental than the experiences it relates, but there is another process at work here too.

To examine the relationship among experiences generally, you must have a set of experiences to generalise from. Ideally, as indeed the theory behind the scientific method suggests, this sample will be representative; if it is not, there is a much bigger chance of missing something important. In practice, you cannot include experiences of which you know nothing.

And the philosophers I've discussed here had relatively narrow ranges of experience to draw on. Descartes lived most of his life in and around the courts of seventeenth-century Europe, and other influential early modern philosophers like Locke, Hume and Kant moved in similarly rarefied circles. Russell and his cronies were ensconced in the ivory towers of British academia (and I can tell you from personal experience just how narrow the windows there are).

Not only did these men have a limited range of experiences to draw on; they also had, or have subsequently gained, a great deal of influence to pronounce with. Their positions, social class, shared ethnicity and so on have made them Great Men with Important Views; people who have differing opinions seem unimportant by contrast. This actually applies to their historical opponents every bit as much as to marginalised people today; one hardly hears the names of Bradley and Meinong in philosophy classrooms anymore.

The tendency of self-proclaimed logical thinkers to exclude dissenting opinions from both history and contemporary debate should by this point be sadly familiar. It's a self-reinforcing process; when dissent has already been shut out once, it is much easier to dismiss a second time. People clinging to Russell's model now, a hundred years down the line, may not even realise how trapped they are in it[5]. Open-minded reflection on the views of people from different backgrounds and demographics is the only antidote.

In summary, the idea of an 'objective fact' is a mirage. The only indubitable propositions are subjective in character. The philosophical models that allow us to link them together into a coherent world are at best intersubjective, a negotiation shaped by social pressures much more than 'purely intellectual' considerations (if there are any such things).





[1] It's worth stressing at this point that awareness is not purely sensory – memory is a kind of awareness, so your memory of an event (though not the event itself, if it is in the past) may serve as the foundation for some knowledge, or at least reasoning.

[2] There's an interesting comparison here with the current state of quantum physics. Its more phenomenological elements – the mechanics that describe and predict actual measurements – are the most accurate science yet developed by man, but the interpretations that seek to explain why those relationships exist... well, here's the Wikipedia page on interpretations of quantum mechanics. Just count how many different interpretations there are; don't try to wrap your head around them all.

[3] This Spockish attitude persists today in the myriad ways our culture insists on quantifiability and computability. Things (like emotions) which are messy to compute tend to be regarded with suspicion.

[4] In the interests of inclusion, this should be 'appropriately human-like perspective', really. We want to be able to extend values and valuation to sentient aliens, sophisticated animals and so on.

[5] Which doesn't excuse their lack of awareness, since they're also likely to have more spare time and money to support self-reflection with than other social groups.

Wednesday 6 May 2015

On Voting

I will be voting in the general election. I will probably vote in every general election in my adult lifetime (this will actually only be my second - I missed out by a month in 2005). I vote in every local election, too. I was going to write a post this week with a thorough and general defence of democratic participation, appealing to cynics and anarchists alike.

But I'm not sure that's helpful, particularly given how miserable the system's current offerings are. I don't have quite the same conviction regarding the importance of voting that I used to; the reasons for my remaining conviction (which is still, obviously, fairly strong) are weaker, narrower, more personal.

I used to argue, when challenged to defend centralised government, that the global scale of contemporary problems like overpopulation and climate change would require a global coordination only possible through centralisation, and that a purely bureaucratic centralisation was at least as dangerous as one with some element of democracy.

But, quite without realising it, I betrayed this argument completely when writing The Second Realm. It's not really emphasised or investigated in the story, but governance in The Second Realm (actually in the First Realm) is localised, with only the very lightest central coordination. Society functions as a network of small communities each communicating with and mutually supporting its neighbours.

Granted, it's a much smaller society with very different problems to ours and a surplus of natural resources, but apparently I don't (universally) believe central, hierarchical governance is necessary. I can at least imagine us surviving without it.

Maybe, then, voting won't always be necessary. I'm still voting this time round, though, for a whole bunch of well-trodden reasons: because the parties aren't all the same, and with a population of 60 million to work with even small differences may improve lives for lots of people; because my abstention would be read by the mass media as apathy, which makes my skin crawl; because over longer terms than the parliamentary, there's at least some reason to think that many small voices add up.

It would be disingenuous to overlook the privilege of my upbringing in this, of course; part of the reason I'm voting, and that I don't feel completely hopeless about it, is that I've grown up with the idea that my voice will be heard and will make a difference. At quite a deep level I'm not inclined to see voting as futile.

But that's also a form of optimism, and it's possible - even important - to be optimistic without being naive.

Tuesday 31 March 2015

Boiled Potatoes and the Analytic Method, part 7

I found myself in need of counselling last year. The counselling I received was extremely helpful, but it's only as, in the intervening time, I've started to study critical perspectives from gender and race discourse in depth that I've been able to understand the wider context of my difficulties. These approaches emphasise connectedness; the marketing of children's toys, for example, contributes to a domestication of women that in turn commodifies their sexuality and devalues their consent, leading to rape culture.

By contrast, the idiom of 'analytic philosophy', the tallest and remotest of the academic ivory towers, to which I've given a decade of my life and all my adulthood, puts detachment and abstraction foremost. It was detachment and abstraction - an overdose of both - that led me to counselling. What follows is a reflection on that journey.

In part 1, I discussed the specific experience that led me to seek counselling.

In part 2, I talked about a lack of emotional sensation that I discovered during my counselling sessions.

In part 3, I blamed everything on boiled potatoes (and allowing my everyday life to become too bland).

In part 4, I surveyed the rise of analytic philosophy and attempted to show how it rejects the spiritual and the emotional.

In part 5, I evaluated analytic philosophy and the limits of its conception of meaning.

In part 6, I identified the limit that the analytic method places on discourses of morality and responsibility. 

Part 7: What Pieces Are You So Scared Of?

I wasn't expecting to write this part in quite the mode I'm in at the moment. I've been feeling generally pretty positive and upbeat so far this spring, and was looking forward to rounding this series out with a similarly cheerful summation on the theme of healing and embracing a life that values emotional sensation.

But I had a bad weekend in a handful of little ways that left me feeling a bit on the low side. As ever when I get on a downer, I started to pull back from things, and especially from people. Anxiety sets in, loading every potential encounter with a hundred disaster scenarios.

There's a numbing process that's part of this, too. It's a defensive reflex, I think: shutting down the mechanisms of self-regard and self-care that would identify the problem, to avoid having to think about it. We're supposed to solve problems by disinvesting, stepping outside ourselves to look at them 'objectively'. This is supposed to make solutions clearer and less clouded by emotion. But sometimes the problem is the emotion, more than anything else.

In my head, at least, this sits side-by-side with the analytic method. They present themselves to me as the same process. For years I have embraced them as one, and identified all sorts of objective solutions to my problems - limited budget, for example, or shared living environments that aren't well cared for, or (when I was still living at home) the fact that my parents insist on listening to the radio news four times a day, making it completely inescapable.

The real problem, though, is and has always been the denial of inner sensation, the failure to attend to so many important dimensions of well-being, the determination to rise above 'meat'. I am starting to learn, though. Slowly, I'm thawing out.

It starts, perhaps predictably in my case, with music. Music has always offered the most purely emotional experiences of my life - I don't have the theoretical knowledge to analyse it the way I can tackle novels, films and now, to a certain extent, video games. It's in music that I'm normally closest to engaging bodily - while I'm a terrible dancer, I'm also basically incapable of standing still when there's music playing.

And I have some incredibly talented musician friends. Look, I know no-one ever takes my music recommendations, but click that last link and listen to Sam's most recent album. Seriously, it's not long, and the last track is the first piece of music in a decade to bring tears to my eyes. It's five minutes that I can get completely lost in. Sometimes it's good to be lost.

Sometimes getting lost is exactly what I need. Some problems don't need the analytic distance of the cartographer - the map is clear, the map is the problem, the map shows you all too clearly what stands between you and the shining horizon. The map tells you what the walk is like, but sometimes you need to stop thinking about that and walk anyway. That's the point at which the map can't tell you anything useful.

Monday 23 March 2015

Boiled Potatoes and the Analytic Method, part 6

I found myself in need of counselling last year. The counselling I received was extremely helpful, but it's only as, in the intervening time, I've started to study critical perspectives from gender and race discourse in depth that I've been able to understand the wider context of my difficulties. These approaches emphasise connectedness; the marketing of children's toys, for example, contributes to a domestication of women that in turn commodifies their sexuality and devalues their consent, leading to rape culture.

By contrast, the idiom of 'analytic philosophy', the tallest and remotest of the academic ivory towers, to which I've given a decade of my life and all my adulthood, puts detachment and abstraction foremost. It was detachment and abstraction - an overdose of both - that led me to counselling. What follows is a reflection on that journey.

In part 1, I discussed the specific experience that led me to seek counselling.

In part 2, I talked about a lack of emotional sensation that I discovered during my counselling sessions.

In part 3, I blamed everything on boiled potatoes (and allowing my everyday life to become too bland).

In part 4, I surveyed the rise of analytic philosophy and attempted to show how it rejects the spiritual and the emotional.

In part 5, I evaluated analytic philosophy and the limits of its conception of meaning.

Part 6: On Taking Responsibility

The concept of moral responsibility has been at the heart of my journey through analytic philosophy. The first philosophical system I encountered which inspired and moved me was existentialism, a position that has moral responsibility as its foundation and centrepiece. This stands in direct opposition to the determinism which characterised much of 20th-century analysis.

Science has long been thought to promise a perfect system for predicting human behaviour (I choose my words carefully here, since few practising scientists have embraced this belief - it belongs more to the realm of 'popular' or at least establishment commentary). It's a classic modernist tenet, and for a while scientific discoveries did seem to be progressing in that direction. Neuroscience and psychology made great strides through the nineteenth century and into the twentieth.

Still, as early as 1942, Isaac Asimov could acknowledge, with his invention of 'psychohistory' in the Foundation short stories, that a truly determinist understanding of human behaviour was out of reach, prohibited by the fundamentally probabilistic character of quantum physics. This is not to claim that prediction of human behaviour is impossible, only that it can never be done with complete certainty.

Philosophers, who have been arguing with Laplace's demon for two centuries now, were slower to catch on. Even ten years ago, when I was in my first year at university, hard determinism was still discussed as a plausible theory, rather than merely a far-fetched possibility. So great was my determination (hah) to hold onto moral responsibility that I once refused to read an assigned article because of its determinist slant, which is about as defiant as I've ever been towards a teacher.

Determinism and scientism suit the analytic approach. They are theories of absolute knowledge and certainty, of everything in its place, clear and predictable. In denying the possibility of free will, they deny the meaningfulness of the aesthetic, reducing emotions, beliefs and principles to the purely causal.

This outlook has persisted despite the eventual demise of hard determinism. The philosophers who would have been determinists in a previous generation now begrudgingly begin their papers with 'we know that hard determinism is false, but...' and go on to argue that quantum randomness leaves the defender of free will no better off.

The point is not entirely without merit. Fundamental randomness does not guarantee a meaningful freedom of will. Free will theorists have long held that free will is a necessary condition of moral responsibility. The best they can claim from quantum theory is the existence of a narrow sliver of space in which freedom of the will might hide.

More insidiously, the post-determinists have targeted moral responsibility itself, even as free will theorists began to abandon the connection between will and responsibility (the resulting positions are myriad, and better covered in detail elsewhere). The essence of the new determinist argument concerns motivation, understood as whatever mental state in an agent results in their action.

An agent is morally responsible for an action, the argument goes, if their action is a product of a motivation in an appropriate way (that is, not subject to hypnosis or other control). Motivations, though, are products of the agent's character, and said character is a product of the agent's birth and upbringing. If we are to hold agents responsible for their actions, then, it seems that we must hold them responsible for their upbringing and their ancestry. This, the post-determinists argue, is absurd.

And, on the face of it, it does sound absurd. A person cannot literally be responsible for their own birth - that would require their choices to precede their own existence. This argument, the causal argument, seems to present a profound challenge to the existence of moral responsibility.

And yet... Let us come at this from another angle. Critical theories, such as Marxism, feminism and queer theory, recognise differences among birth circumstances as important social phenomena. The concept of privilege is vital to understanding these models, and their well-grounded demands for social justice.

These days, it is common to hear reactionaries crying that it is not their fault they were born male, or white, or middle class, or straight, or cisgendered, or able-bodied, or neurotypical. Strictly, they are not wrong - but then, you will find no serious feminist arguing that they are. What the reactionaries are doing, though, is relying on the same simplistic, causal understanding of moral responsibility as the post-determinist analysts.

Responsibility for the circumstances of our birth, for the privileges and attitudes therefrom, is something we take. It is not something we are born to, nor something we are morally entitled to ignore. The essence of maturity, of adulthood, is making this transition; this is the sense in which children are innocent.

Practically speaking, the act of taking responsibility consists in critical self-reflection, the willingness to examine our own behaviour and the attitudes which condition it, and the seeking of ways to change them where appropriate. It is the act of taking seriously our relations, both structural and specific, to others, rather than viewing ourselves as isolated particles predestined to bang into one another with whatever arbitrary results a crude social physics dictates.

Theoretically, taking responsibility requires detaching responsibility from the purely causal, embracing the messy illogicality of a putatively free choice to escape the fist of determinism. The result is not a neat theory; it has little of the clarity that analysis craves. But it is honest and liberating, and above all else it allows a hope for general, meaningful change that the determinist mindset can never offer.

(part 7)

Monday 16 March 2015

Boiled Potatoes and the Analytic Method, part 5

I found myself in need of counselling last year. The counselling I received was extremely helpful, but it's only as, in the intervening time, I've started to study critical perspectives from gender and race discourse in depth that I've been able to understand the wider context of my difficulties. These approaches emphasise connectedness; the marketing of children's toys, for example, contributes to a domestication of women that in turn commodifies their sexuality and devalues their consent, leading to rape culture.

By contrast, the idiom of 'analytic philosophy', the tallest and remotest of the academic ivory towers, to which I've given a decade of my life and all my adulthood, puts detachment and abstraction foremost. It was detachment and abstraction - an overdose of both - that led me to counselling. What follows is a reflection on that journey.

In part 1, I discussed the specific experience that led me to seek counselling.

In part 2, I talked about a lack of emotional sensation that I discovered during my counselling sessions.

In part 3, I blamed everything on boiled potatoes (and allowing my everyday life to become too bland).

In part 4, I surveyed the rise of analytic philosophy and attempted to show how it rejects the spiritual and the emotional.

Part 5: Aesthetics and Anaesthetics

I only recently made the etymological connection between 'aesthetics' and 'anaesthetics', but it's hardly an earthshaking revelation. Aesthetics is (roughly) the study of art, a fundamentally sensory thing; anaesthetics make us numb, insensate. The common Greek root, aisthēsis, originally means perception.

It would not be too far wide of the mark to describe analytic philosophy as anaesthetic. Above all else, what analytic philosophy denies is the subjective. It is the search for objective answers to the grand philosophical questions. The whole analytic construction of 'rationality' opposes the value of personal perspectives, appealing to a transcendent reason which may or may not bear any real connection to the divine intellect of the early modern or classical rationalists.

But analytic philosophy undoubtedly has its advantages. The detachment it advocates can be absolutely crucial for some debates. It's particularly important when responding to criticism; one cannot, after all, take up the point of view of another while clinging to one's own. There are other ways to develop the ability to detach, but practice in the analytic method is a particularly effective and pure one.

(Note: it's far from perfect, as anyone who's ever pricked the ego or threatened the funding of an academic can attest).

And the analytic tradition in philosophy has real triumphs to its name, too; the systems of formal logic developed in the first half of the twentieth century are not just a huge step forward over their arcane predecessors. They are legitimately powerful tools of reasoning, at least within the limits Gödel's incompleteness theorems set, and underpin much of modern computing.
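As a small illustration of that last point (my own sketch, nothing to do with Russell's actual notation): the validity of a propositional formula can be checked purely mechanically, by enumerating truth assignments - and it's this mechanisability that, scaled up enormously, sits underneath circuit design and program verification.

    # A minimal sketch of mechanised propositional logic: check whether a
    # formula is a tautology by brute-force enumeration of truth assignments.
    from itertools import product

    def is_tautology(formula, num_vars):
        """True if `formula` (a function of num_vars booleans) always holds."""
        return all(formula(*values)
                   for values in product([False, True], repeat=num_vars))

    # Modus ponens as a schema: ((p -> q) and p) -> q.
    # Material implication a -> b is encoded as (not a) or b.
    modus_ponens = lambda p, q: (not ((not p or q) and p)) or q
    print(is_tautology(modus_ponens, 2))  # True: valid under every assignment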

Another important product of the analytic tradition, one that is rather more complicated to endorse, is its discourse on meaning. This is usually what definitions of analytic philosophy centre on, but the analytic discourse on meaning is almost exclusively linguistic - it concerns words and sentences, spoken and written. In aesthetics, on the other hand, languages are only a small subset of things that mean (the first part of this video has a pretty robust introduction to some of these ideas, referencing the omega of analytic philosophy, Wittgenstein).

And in aesthetics, meaning is a very different beast to the meaning of the analysts. It is lived, experienced, bodily, not a clinical study of how words point to things in the world. Analysts have devoted a great deal of work to establishing what it means to say something exists; in aesthetics, the question is simply 'is it felt?'

The modern technophile's - my - obsession with transcending 'meat' (as William Gibson perfectly put it in Neuromancer) is born of this analytic understanding of meaning, thought and reason. We disdain bodily hedonism for the 'higher pleasures' of the mind, and in doing so fail to realise that our 'higher pleasures' are really just contempt for other ways of seeing the world, other tools that are in their own way as valuable as, and in many ways richer than, those we have learned.

Aesthetic comprehension, in a way, is a much more basic part of the human condition than analytic. This, perhaps, explains some of our disdain; a baby can feel, but only a sophisticated adult can 'really think'. That we can believe this while yearning for our lost, or innocent, or joyful childhoods is a testament to the spectacular power of the (archetypally white, male etc.) privileged ego.

(part 6)

Monday 9 March 2015

Words Matter

(content warning: discussion of ableist terms, reference to other slurs)

I changed the URL and title of this blog yesterday, to remove the ableist slur 'stupid'. I apologise wholeheartedly for not doing this sooner and for failing to treat this issue with the gravitas it deserves until now.

The rest of this post is addressed to anyone who thinks this is making mountains out of molehills, or that I needn't have bothered making the change.

Let me start with the obvious: words matter. They have power. I'd be a pretty poor writer if I didn't believe that. And power is always dangerous - not necessarily always harmful, but always accompanied by the danger of causing harm.

Words can become harmful in lots of ways, but one of the most serious is when they are used to justify (or in the justification of) harmful policies. We rightly regard racial slurs like the n-word as harmful because of their association with governmental policies and societal patterns of slavery and segregation - policies and patterns with costs both measurable (in death and injury figures) and immeasurable (in lost human potential and complex, oppressive legacies).

Why, then, is 'stupid' harmful? It is, after all, a very common word, and one not normally connected to any great opprobrium.

The simplest answer to this is a direct comparison with racist language. Constructions of intelligence have sometimes been used as viciously as constructions of race to justify policies every bit as horrible. The eugenics movements of the early 20th century are the best examples of this - in the 20s and 30s, several 'developed' nations forcibly sterilised people who failed to meet certain standards of 'intelligence' (usually measured with IQ tests - the exact purpose and value of which remain controversial to this day); America did so most extensively, while Britain segregated the 'feeble-minded' into institutions under the Mental Deficiency Act. More famously, Nazi Germany sent people to concentration camps and even gas chambers on 'intelligence' grounds as well as racial ones.

It's generally good policy to not throw around as insults words that the Nazis used to justify genocide.

One final thing; I want to point out how easy it is to overlook this issue. When I started this blog, I was twenty-three, already in possession of a master's degree and well on my way to a doctorate - hardly able to claim general ignorance, and yet I had no idea that my choice of phrase (a reference to the famous slogan from Bill Clinton's 1992 campaign) could be harmful. Worse, I was literally working in disability support for students at the time - none of my (actually quite limited) training had addressed this issue.

And it gets even worse than that, because a year or two later I was working with a student whose course included modules of disability studies and special educational needs. There were several lectures about ableist language, including specific problems with the language of intelligence, and I still didn't see a problem with my own blog title. It's very easy to dismiss issues when they require you to change.

Learning to rid my everyday vocabulary of words like 'stupid' - to put them in the proscribed category where they belong - is not easy. But there are plenty of better words, both as insults and to refer to things that are strange/absurd. We can - and should - live without words that are imbued with such harm.

Here's a great resource for examining ableism generally, and here's their excellent collection of articles addressing specific ableist terms.