Wednesday, September 20, 2017

The Moral Problem of Accelerating Change





Holmes knew that killing people was wrong, but he faced a dilemma. Holmes was a member of the crew on board the ship The William Brown, which sailed from Liverpool to New York in early April 1841. During its Atlantic crossing, The William Brown ran into trouble. In a tragedy that would repeat itself some 70 years later during the fateful first voyage of The Titanic, the ship struck an iceberg off the coast of Canada. The crew and half the passengers managed to escape to a lifeboat. Once there, tragedy struck again. The lifeboat was so laden with people that it started to sink. Something had to be done.

The captain made a decision. The crew would have to throw some passengers overboard, leaving them to perish in the icy waters, but lightening the boat. It was the only way anyone was going to get out alive. Holmes followed these orders and was complicit in the deaths of 14 people. But the remaining passengers were saved. Holmes and his fellow crew were their saviours. Without doing what they did, everyone would have died. For his troubles, Holmes was eventually prosecuted for murder, but the jury refused to convict him of that charge. They reduced the conviction to one of manslaughter, and Holmes served only six months in jail.

I discuss this case every year with students. Most of them share the jurors’ sense that although Holmes intentionally killed people, he didn’t deserve much blame for his actions. In the circumstances, most of us would have been hard pressed to act differently. Indeed, many of my students think he should have avoided punishment altogether.

Holmes’s story illustrates an important point: morality is contextual. What we ought to do is dependent on what is happening around us. Sometimes our duties and obligations can change. You probably don’t think about this phenomenon too much, taking it as a natural and obvious feature of the moral universe, but the contextual nature of morality poses a challenge during times of accelerating technological change.

That’s one of the central ideas motivating Shannon Vallor’s recent book Technology and the Virtues. I’m still working my way through it (I’ve read approximately 65 pages at the time of writing) but it is provoking many thoughts in my mind and I feel I have to get some of them down on the page. This post is my first attempt to do so, examining one of the key arguments developed by Vallor over the opening chapters of the book.

That argument comes in two parts. The first part claims that there is a particularly acute and important moral problem facing us in the modern age. Vallor calls this the problem of ‘acute technosocial opacity’; I’m going to give it a slightly different name: the moral problem of accelerating change. The second part argues for a solution to this problem: developing a technology-sensitive virtue ethics. I’m going to analyse and evaluate both parts of the argument in what follows.

Before I get into the details, a word of warning. What I am about to say is highly provisional. As noted, I’m still reading Vallor’s book. I am very conscious of the fact that the problems I raise with certain aspects of her argument might be addressed later in the book. So take what I am about to say with a hefty grain of salt.


1. The Moral Problem of Accelerating Change
We are living through a time of accelerating technological change. This is one of the central theses of futurists like Ray Kurzweil. In his infamous 2005 book The Singularity is Near, Kurzweil maps out the exponential improvements in various technologies, including computing speed, size and density of transistors, data storage and so on. Some of these improvements are definitely real: Moore’s law — the observation that the number of transistors that can fit on an integrated circuit doubles every two or so years — is the most famous example. But Kurzweil and his fellow futurists take the idea much further, arguing that converging trends in artificial intelligence, biotech, and nanotech hold truly revolutionary potential for human society. Kurzweil believes that we are heading towards a ‘singularity’ where humans and machines will merge together and we will suffuse the cosmos with our intelligence. Others are less optimistic, thinking that the singularity holds much darker promises.

You don’t have to be a fully signed-up Kurzweilian to believe that there is something to the notion of accelerating change. We all have a sense that things are changing pretty quickly. Jobs that were once stable and dependable sources of income have been automated or eliminated. Digital and smart technologies that were non-existent ten years ago are embedding themselves in our daily lives, turning us all into screen-obsessed zombies. This is to say nothing of the advances in other technologies, such as AI, 3-D printing and brain-computer interfaces. You might think that we can handle all this change — that although things are moving quickly they are not moving so quickly that we cannot keep up. But this assessment might be premature. One of the key insights of Kurzweil’s work — one that has been taken onboard by others — is that accelerating change has a way of sneaking up on us. A doubling of computer speed year-on-year is not that spectacular for the first few years, particularly if you start from a low baseline, but after ten or twenty years the changes become truly astronomical. It’s like that old puzzle about the lily pad that doubles in size every day. If it covers half the pond on day 47, when does it cover the entire pond? Answer: on day 48. One more day is enough to cover the whole pond.
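The arithmetic behind this ‘sneaking up’ effect is easy to check for yourself. Here is a quick sketch (purely illustrative; the day-48 pond is just the puzzle’s own framing):

```python
# The lily pad puzzle: coverage doubles every day and the pond is
# fully covered on day 48. Working backwards, it was half covered on
# day 47 -- and, on day 20, less than a millionth of a percent covered.

def coverage(day, full_day=48):
    """Fraction of the pond covered on a given day (doubles daily)."""
    return 2.0 ** (day - full_day)

for day in (20, 40, 47, 48):
    print(f"day {day}: {coverage(day):.8%} of the pond covered")
```

For almost the entire 48 days the growth is invisible to a casual observer, which is exactly why exponential change catches us unprepared.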

Accelerating change poses a significant moral challenge. We all seek moral guidance — even the committed moral relativists among us try to figure out what they ought to do. But as noted in the introduction, moral guidance is often contextual. It depends, critically, on two variables: (i) what is happening in the world around us and (ii) what is within our power to control. Once upon a time, no one would have said that you had a moral obligation to vaccinate your children. It wasn’t within your power to do so. But with the invention of vaccines for the leading childhood illnesses, as well as the copious volumes of evidence in support of their safety and efficacy, what was once unimaginable has become something close to a moral duty. Some people still resist vaccinations, of course, but they do so knowing that they are taking a moral risk: that their decision could impose costs on their child and the children of others. Consequently, there is a moral dimension to their choice that would have been historically unfathomable.

Accelerating change ramps up the problem of moral contextuality. If our technological environment is rapidly changing, it’s hard to offer concrete guidance and moral education to people about what they ought to do. They may face challenges and have powers that are beyond our ability to predict. This is something that most historical schools of moral thought did not envisage. As Vallor notes:

The founders of the most enduring classical traditions of ethics — Plato, Aristotle, Aquinas, Confucius, the Buddha — had the luxury of assuming that the practical conditions under which they and their cohorts lived would be, if not wholly static, at least relatively stable…the safest bet for a moral sage of premodern times would be that he, his fellows, and their children would confront essentially similar moral opportunities and challenges over the course of their lives. 
(Vallor 2016, 6)

All of this suggests that the following argument is worthy of our consideration:


  • (1) In order to provide practical and useful moral guidance to ourselves and our cohorts, we must be able to predict and understand the moral context in which we will operate.
  • (2) Accelerating technological change makes it extremely difficult to predict and understand the moral context in which we and our cohorts will operate.
  • (3) Therefore, accelerating technological change impedes our ability to provide practical and useful moral guidance.


Support for premise (1) derives from the preceding discussion of moral contextuality. If what we ought to do depends on the context, we need to know something about that context in order to provide practical guidance. Support for premise (2) derives from the preceding discussion of accelerating change. Admittedly, I haven’t provided a robust case for accelerating change, but I would suggest that there is something to the idea that is worth taking seriously. I also think the argument as a whole is worthy of serious scrutiny. The question is whether there is any solution to the problem it identifies.


2. The Failures of Abstract Normative Ethics
One possible solution lies in abstract normative principles. Students of moral philosophy will no doubt be suspicious of premise (1). They will know that modern ethical theories — in particular the theories associated with Immanuel Kant and proponents of utilitarianism — offer a type of moral guidance that makes no appeal to the context in which a moral choice must be made.

Consider Kant’s famous categorical imperative. There are various formulations of it, but the most popular and widely discussed is the ‘universalisation’ formulation (note: this is my wording, not Kant’s):

Categorical Imperative: You ought to only act on a maxim of the will that you can, at the same time, will as a universal maxim.

In other words, whenever you are about to do something, ask yourself: would it be acceptable for everyone else, in this circumstance, to act as I am about to act? Are my choices universalisable? If not, then you are making special exceptions for yourself and not acting in a moral way. Note how this principle is supposed to ‘float free’ of all contexts. It should work whatever fate may throw your way.

Consider also the basic principle of utilitarianism. Again, there are many formulations of utilitarianism, but they all involve something like this:


Utilitarian Principle: Act in a way that maximises the amount of pleasure (or some other property like ‘happiness’ or ‘desire satisfaction’) and minimises the amount of pain, for the greatest number of people.

This principle also floats free of context. No matter what circumstance you find yourself in, you should always aim to maximise pleasure and minimise pain.

Vallor finds both of these solutions to the problem of accelerating change lacking. The issue is essentially the same for both. Although they may seem to be context-free, abstract moral principles, translating them from their abstract form into practical guidance requires far greater knowledge of moral context than initially seems to be the case. To know whether the rule you wish to follow is truly universalisable, you have to be able to predict its consequences in multiple scenarios. But prediction of that sort is elusive in an era of rapid technological change. The same goes for figuring out how to maximise pleasure and minimise pain. This has been notoriously difficult for utilitarians given the complex causal relationships between acts and consequences. This was true even before the era of accelerating technological change. It will hardly be better in it.

For what it is worth, I think Vallor is correct in this assessment. Although abstract moral principles might seem like a solution to the problem of accelerating change, they falter in practice. That said, I think there is some value to the abstraction. Having a general rule of thumb that can apply to all contexts can be a useful starting point. We are always going to find ourselves in new situations and new contexts, irrespective of changes to our technologies. In those contexts we will have to work with the moral resources we have. I may walk into a new context and not know what choice is universalisable or likely to maximise pleasure, but I can at least know what sorts of evidence I should seek out to inform my choice.


3. The Virtue Ethical Solution


Vallor favours a different solution to the problem of accelerating change. She argues that instead of finding solace in abstract moral principles, we should look to the great virtue ethical traditions of the past. These are the traditions associated with Aristotle, Confucius and the Buddha. These traditions emphasise moral character, not moral principles. The goal of moral education, according to these traditions, is to train people to develop virtuous character traits that will enable them to skilfully navigate the ethical challenges that life throws their way.

Why is this a compelling solution to the problem of accelerating change? An analogy might help. As a university lecturer in the 21st century, I am very aware of the challenge of educating students for the future. The common view of higher education is that it is about conveying information. A lecturer stands at a lectern and tries to transfer his/her notes into the minds of the students. The students learn specific propositions, theories and facts that they later regurgitate in exams and, if we are lucky, in their professional lives. The problem with this common view is that it seems ill-equipped to deal with the challenges of the modern world. The information that I have in my notes will soon be outdated. For example, if I am teaching students about the law, I have to be cognisant of the fact that the rules and cases that I am explaining to them today may be overturned or reformed in the future. When the students step out into the professional world, they will have to cope with these new laws — ones they haven’t learned about in the course of their education.

So education cannot simply be an information-dump. It wouldn’t be very useful if it were. This is why there is such an emphasis on ‘skills-based’ education in universities today. The goal of education should not be to get students to learn facts and propositions, but to develop skills that will enable them to handle new information and knowledge in the future. The skill of critical thinking is probably foremost amongst the skills that universities try to cultivate among their students. Most course descriptions nowadays suggest that critical thinking is a key learning objective of college education. As I understand it, this skill is supposed to enable students to critically assess and evaluate any kind of information, argument, theory or policy that might come their way. The successful critical thinker is, consequently, capable of facing the challenges of a changing world.

The goal of virtue ethics is much the same. Virtue ethical traditions try to cultivate moral skills among their adherents. The virtuous person doesn’t just learn a list of rules and regulations that they slavishly follow in all circumstances; they cultivate an ability to critically reflect upon new moral challenges and judge for themselves what the best moral solution might be. This may require casting off the principles that once seemed sensible. As Vallor puts it:

Moral expertise thus entails a kind of knowledge extending well beyond a cognitive grasp of rules and principles to include emotional and social intelligence: keen awareness of the motivations, feelings, beliefs, and desires of others; a sensitivity to the morally salient features of particular situations; and a creative knack for devising appropriate practical responses to those situations, especially where they involve novel or dynamically unstable circumstances. 
(Vallor 2016, 26)

The claim then is that cultivating moral expertise is the ideal way in which to provide moral guidance in an era of accelerating change:

[Ask yourself] which practical strategy is more likely to serve humans best in dealing with [the] unprecedented moral questions [raised by technological advances]: a stronger commitment to adhere strictly to fixed rules and moral principles (whether Kantian or utilitarian)? Or stronger and more widely cultivated habits of moral virtue, guided by excellence in practical and context-adaptive moral reasoning? 
(Vallor 2016, 27)

This is a direct challenge to premise (1) of the argument from accelerating change. The claim is that we do not need to know the particulars of every moral choice we might face in the future to provide moral guidance to ourselves and our cohorts. We just need to develop the context-adaptive skill of moral expertise.


4. Criticisms and Concerns
Of course, the devil is in the detail. Vallor’s book is an attempt to map out and defend exactly what this skill of moral expertise might look like in an era of accelerating technological change. As already noted, I haven’t read the whole book. Nevertheless, I have some initial concerns about the virtue ethical solution that I want to highlight. I know that Vallor is aware of most of these, so hopefully they will be addressed later on.

The first is the problem of parochiality. Prima facie, virtue ethics seems like an odd place to find solace in a time of technological change. The leading virtue ethical traditions are firmly grounded in the parochial concerns of now-dead civilisations: Ancient Greece (Aristotle), China (Confucius) and India (the Buddha). Indeed, Vallor herself acknowledges this, as is clear from the quote I provided earlier on about the luxury these iconic figures had in assuming that things would be roughly the same in the future.

Vallor tries to solve this problem in two ways. First, she tries to argue that there is a ‘thin’ core of shared commitments across all of the leading virtue ethical traditions. This core of commitments can be divorced, to some extent, from the parochial historical concerns of Ancient Greece, China and India. These commitments include: (i) a belief in flourishing as the highest ideal of human existence; (ii) a belief in virtues as character traits shared by certain exemplary figures; (iii) a belief that there is a practical path to the cultivation of moral expertise; and (iv) some conception of human nature that is relatively fixed and stable. Second, she tries to identify virtues that are particularly relevant to our era. She does this by adopting Alasdair MacIntyre’s theory of virtues, which argues that virtues are always tied to the inherent goods of particular social practices. She then tries to argue that there is a set of goods inherent to modern technosocial practice. These goods are mainly tied to our growing global interconnectedness, and the consequent need to cultivate global wisdom, community and justice.

Both of these attempts to overcome the problem of parochiality are interesting and worthy of greater consideration. I hope to examine them in more depth at a later stage. I want to fixate, however, on one aspect of Vallor’s ‘thin’ theory of virtues because I think it reveals another important problem: the problem of human nature. As she notes, all virtue ethical theories share the idea that the goal of moral practice should be to promote human flourishing. They also share the belief that the path to this goal is determined by some conception of human nature. It is because there is a relatively stable and fixed human nature that we can meaningfully identify certain practices and traits as conducive to human flourishing. Vallor accepts that the ‘thick’ details of this theory will vary between the traditions, but also seems committed to the notion that there is some stable core to what is conducive to human flourishing. For example, when commenting on the need to develop social bonds and a sense of community, she says:

Humans in all times and places have needed cooperative social bonds of family, friendship, and community in order to flourish; this is simply a fact of our biological and environmental dependence. 
(Vallor 2016, 50)

This quote shows, I think, how the virtue ethical solution to the problem of accelerating change is to swap abstract and fixed principles for an abstract and fixed human nature. I think this is problematic.

I’m certainly not a denier of human nature. I think there probably are some stable and relatively fixed aspects of human nature, at least for humans as they are currently constituted. But that’s the crucial point. One of the biggest moral challenges posed by technological development is the fact that it is no longer just the environment around us that is changing. Technologies of human-machine integration or human enhancement threaten the historical stability of our ‘biological and environmental dependence’. Two potential technological developments seem to pose a particular challenge in this regard:

The Hyperagency Challenge: This arises from the creation of enhancement technologies that allow us to readily control and manipulate the constitutive aspects of our agency, i.e. our beliefs, desires, moods, motivations and dispositions. If all these things can be erased, changed, overridden, and altered, the idea that there is an internal, fixed aspect of our nature that serves as a moral guide becomes more questionable. I’ve written two papers about this challenge in the past, so I won’t say any more about it here.

The Hivemind Challenge: This arises from the creation of technologies that blur the boundary between human and machine, and enable greater surveillance and interconnectedness of human beings. As I’ve noted in the past, such technologies could, in extreme forms, erode the existence of a stable, individual moral agent. Since most virtue ethical traditions (even the more communitarian ones) assume that the target of moral education is the individual agent, this challenge also calls into question the utility of virtue ethics as a guide to our changing times. Indeed, if we do become a global hivemind, the idea of ‘human’ nature would seem to go out the window.

I don’t know how seriously we should take these challenges. You could argue that the technologies that will make them possible are hypothetical and outlandish — that for the time being we will have a relatively stable nature that can serve as the basis for a technomoral virtue ethics. But if the relevant technologies could be realised, it might call into question the long-term sustainability of a virtue ethical solution to the problem of accelerating change.

The final problem I have is the problem of calibration. This is a more philosophical worry. It is a worry about the philosophical coherence of virtue ethics itself. The claim made by many virtue ethicists is that moral expertise is a skill that is cultivated through practice. The moral expert is someone who can learn from their experiences and the experiences of others, and use their judgment to hone their ability to ‘see’ what is morally required in new contexts. What has never been quite clear to me is how the moral expert is supposed to calibrate their moral sensibility. How do they know that they are honing their skill in the right direction? How can they meaningfully learn from their experiences without some standards against which to evaluate what they have done? I’m not exactly sure what the answer is, but it seems to me that it will require some appeal to abstract moral standards. The budding moral expert will have to assess their actions by appealing to standards such as the general utility of pleasure over pain, the desirability of individual autonomy and control, the typical superiority of impartial universal rules over partial and parochial ones. In sum, it seems like the contrast between virtue ethics and abstract moral principles and standards may not be that sharp in practice. We may need both if we are going to successfully navigate these changing times.





Tuesday, August 29, 2017

Episode #28 - Walch on the Misunderstandings of Blockchain Technology


In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary’s University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies of University College London. Angela was nominated for “Blockchain Person of the Year” for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies.

You can download the episode here. You can listen below. You can also subscribe on iTunes or Stitcher.



Show Notes

  • 0:00 - Introduction
  • 2:06 - What is a blockchain?
  • 6:15 - Is the blockchain distributed or shared?
  • 7:57 - What's the difference between a public and private blockchain?
  • 11:20 - What's the relationship between blockchains and currencies?
  • 18:43 - What is a miner? What's the difference between a full node and a partial node?
  • 22:25 - Why is there so much confusion associated with blockchains?
  • 29:50 - Should we regulate blockchain technologies?
  • 36:00 - The problems of inconsistency and perverse innovation
  • 41:40 - Why blockchains are not 'immutable'
  • 58:04 - Why blockchains are not 'trustless'
  • 1:00:00 - Definitional problems in practice
  • 1:02:37 - What is to be done about the problem?
 


Saturday, August 26, 2017

New papers on Moral Enhancement and Brain-Based Lie Detection




I have a couple of new papers available online. The first looks at the moral freedom objection to moral enhancement. The second tries to rebut an interesting philosophical objection to the use of brain-based lie detection. Both papers are set to appear in edited books in 2018. Details and links to pre-publication versions below (just click on the paper title):

Moral Enhancement and Moral Freedom: A Critique of the Little Alex Problem
Abstract: A common objection to moral enhancement is that it would undermine our moral freedom and that this is a bad thing because moral freedom is a great good. Michael Hauskeller has defended this view on a couple of occasions using an arresting thought experiment called the 'Little Alex' problem. In this paper, I reconstruct the argument Hauskeller derives from this thought experiment and subject it to critical scrutiny. I claim that the argument ultimately fails because (a) it assumes that moral freedom is an intrinsic good when, in fact, it is more likely to be an axiological catalyst; and (b) there are reasons to think that moral enhancement does not undermine moral freedom.

Brain-based Lie Detection and the Mereological Fallacy: Reasons for Optimism
Abstract: There has been much hype about the implications of contemporary developments in neuroscience for the law. Pardo and Patterson are skeptical of this hype. They argue that a good deal of the hype stems from simple philosophical errors and conceptual confusions. In the course of this critique, they offer particular objections to the forensic use of brain-based lie detection methods. Although agreeing with the authors about the need for skepticism and conceptual clarity, this chapter argues that they get things wrong when it comes to their skepticism of brain-based lie detection. This is for three reasons. First, in their critique they focus too heavily on the problems associated with the more speculative and less empirically grounded fMRI-based methods, not enough on the more robustly grounded EEG-based methods. Second, when focus is switched to these methods, their main philosophical critique of the use of neuroscience in law – the neurolaw mereological fallacy – has much less bite. And third, they neglect to address the merits of brain-based lie detection methods relative to existing methods for inferring what a witness does or does not believe. When these three critiques are factored in, the future looks brighter for this particular use of neuroscience in law. 





Monday, August 21, 2017

The 'In Principle' Objection to Privatisation




A few years ago, they privatised the Irish water supply. Rather than water being a freely provided public service, funded out of general taxation, water was now to be a privately supplied good, with each household paying an annual fee that varied depending on usage. It proved to be quite a controversial move, leading to numerous protests and a significant loss of legitimacy for the government. So much so, in fact, that the future of water privatisation in Ireland is currently uncertain.

The privatisation of formerly public services often proves controversial. Privatisation is a major feature of the so-called ‘neo-liberal’ agenda. It is often favoured by economists and policy wonks on the grounds of efficiency. The typical argument is this: if we learned nothing else from the history of communism and socialism, it is that government agencies aren’t particularly good at supplying scarce resources. The incentives are out of whack. They are often hugely wasteful, and tend to over-supply or under-supply goods and services. Private agents, motivated by profit and incentivised by prices, are often much more efficient, supplying just as much as the market demands, at a price that maximises societal welfare. (Note: this isn’t always true: for certain goods and services even classical economists will agree that private markets can fail to be efficient — I talk about this in more detail in my posts on Hayek’s famous argument against centrally planned economies).

And yet the process still proves controversial, with many philosophers and activists resisting the wave of privatisation. Sometimes their arguments are strictly empirical in nature: they disagree with the economists and policy wonks who insist upon the efficiency of private markets and the inefficiency of the state. And there are, indeed, empirical studies that support their disagreements. But sometimes their arguments are more philosophical or normative in nature: they hold that, irrespective of the consequences of privatisation, there is something morally suspect about the process. It leads to the selling off of the public sphere and the erosion of public authority and legitimacy. They challenge privatisation on principled grounds, not empirical ones.

Avihay Dorfman and Alon Harel are possibly the leading defenders of this ‘in principle’ objection to privatisation. Over the past few years, they have authored a number of papers that try, with increasing degrees of sophistication and rigour, to present a robust, non-empirical objection to privatisation. They argue that there are some ‘intrinsically public goods’ that should never be handed over to private agents, and they work hard to identify the key properties of these goods.

Although I cannot hope to do justice to the full body of their work on this topic, I do want to look at their main line of argument in the remainder of this post. I do so by analysing and evaluating one of their most recent papers, entitled ‘Against Privatisation as Such’, which appeared in the Oxford Journal of Legal Studies in 2016. Their argument in that paper is that privatisation is objectionable because it undermines public engagement with and responsibility for certain kinds of decision.


1. Two Conceptions of Privatisation
To understand the argument, we first need to understand what it means to privatise something. We all have an intuitive and commonsense understanding of what this entails. The opening example of the privatisation of Irish water gives us some sense of what happens. When the government ‘privatises’ the provision of a particular good or service, it transfers decision-making authority for the provision of that good or service to a private agent (company/corporation). This private agent will then follow a slightly different set of incentives/reasons than a public agency would when supplying the good or service. The hope is that they will follow a set of incentives/reasons that enables them to supply the good or service in a more efficient manner.

This gives us two distinct ways of understanding the process of privatisation. Both of these have been identified and discussed in the academic debate:


The Reasons View - To privatise the provision of a good or service, X, is to change (wholly or partially) the reasons for which someone supplies that good or service.

The Agency View - To privatise the provision of a good or service, X, is to transfer the decision-making authority in relation to that good or service to a private agent.


Dorfman and Harel note that many theorists seem to endorse the Reasons View, and in doing so they often stumble upon an interesting way in which to defend privatisation. One of the features of the Reasons View is that it pays little attention to the identity of the decision-maker when it comes to classifying a particular decision as being ‘private’ or ‘public’. Instead, it focuses on the reasons utilised by the decision-maker. So on this view what is distinctive about private decision-making processes is that they are motivated by things like profit and loss and other relevant economic variables, and not by concerns like fairness, justice and the common good. These concerns are more typically associated with public decision-making processes.

One of the consequences of this method of categorisation is that it is possible for private agents to act in a public-spirited way (or vice versa, i.e. for public agents to act in a privatised way). You could imagine a private company being contracted by the government to provide a good or service on the basis that they direct themselves to the common good. You could also imagine private companies and contractors acting for a combination of reasons, some of which are strictly economic in nature and others of which are more public spirited. Indeed, this might lead you to defend privatisation on the grounds that it gives you the ‘best of both worlds’: it brings the efficiencies of the private sector without necessarily losing the public touch.

The essence of Dorfman and Harel’s case against privatisation is that the Reasons View gets it wrong. Privatisation is not solely or even primarily about changing the reasons for which a decision is made; it is really about the transferal of authority. This means that the Agency View is more correct, and when you understand privatisation in terms of agency, you begin to see why it might be objectionable in principle: because it changes the nature and locus of legitimacy in society.

The defence of this argument comes in two parts. The first part highlights the flaws in the Reasons View; the second explains why the transferal of authority is so problematic.


2. Against the Reasons View
Dorfman and Harel don’t present their case against the Reasons View in formal terms, but I’m going to do so, for ease of exposition. Their argument is a very simple one and works like this:



  • (1) If the Reasons View of privatisation were correct, then all we should care about (when it comes to debating the pros and cons of the process) are the reasons motivating the decisions, not who makes them.

  • (2) We do not only care about the reasons motivating particular decisions; we also care about the identity of the agent making those decisions.

  • (3) Therefore, the Reasons View of privatisation must be incorrect.



The key to this argument is the second premise. Dorfman and Harel develop a few lines of support for this premise. One of them is to look at how people talk about privatisation. They consider the work of Richard Bauman, who once identified five characteristics of privatisation:

(1) the complete or partial sell-off (through asset or share sales) of major public enterprises; (2) the deregulation of a particular industry; (3) the commercialization of a government department; (4) the removal of subsidies to producers; and (5) the assumption by private operators of what were formerly exclusively public services, through, for example, contracting out. 
(Bauman 2000, 2 - sourced in Dorfman and Harel 2016)

They argue that it is difficult to make sense of Bauman’s five characteristics if you favour the Reasons View. While some of the characteristics could be understood in terms of changing the reasons for which a decision is made (specifically, characteristics 2, 3 and 4), others cannot. Indeed, the other characteristics are probably best understood in terms of the Agency View. Dorfman and Harel then argue that if someone like Bauman wished to stick with the Reasons View he would need to explain away the fact that some of the relevant characteristics of privatisation are concerned with agency.

Another line of argument comes from the debate about the normative justification for punishment. The typical rationales for punishment are either retributive or consequentialist in nature. The retributive rationale holds that it is intrinsically good to punish people in proportion to their wrongdoing. The consequentialist rationale holds that punishment is justified if it achieves some desirable end (e.g. deterrence). Both rationales are, to some extent, concerned with the reasons motivating the decision to punish. This might suggest that the debate about the justification of punishment plays out in an arena that is shaped by the Reasons View. But this is not the case. Many of the participants in the debate about the normative justification of punishment, be they retributive or consequentialist in their leanings, hold that there is another condition that must be satisfied before punishment can be legitimate. They hold that the punishment must be administered by a public official. Indeed, most theorists of punishment implicitly or explicitly assume that private individuals are never the appropriate administrators of punishment. This is why they usually balk at the notion of individuals taking it upon themselves to punish wrongdoers (so-called ‘vigilante justice’). It is difficult to explain this in terms of the Reasons View.

A final line of support comes from legal doctrine. Dorfman and Harel reference two cases in their article that highlight how courts often reject the privatisation of public services on grounds that are unrelated to the Reasons View. One case comes from the Indian Supreme Court and had to do with outsourcing of police services on short-term contracts. The court rejected this practice on the grounds that policing was something that must ‘necessarily…be delivered by forces that are and personnel who are completely under the control of the state’ (Dorfman and Harel 2016, 410). The other case is Israeli and had to do with the privatisation of prison services. This was rejected by the court on the grounds that only public officials are normatively competent to deny someone’s liberty.

In short, Dorfman and Harel reject the Reasons View because it conflicts with how we characterise and critique the process of privatisation. They argue that if you pay attention to both of these things you will find that the identity of the agent is a paramount concern.


3. In Defence of the Agency View
The preceding line of argument only gets us so far. We have cause to reject the Reasons View, and we have seen that the identity of the agent making the decisions is a matter of considerable concern. The problem is that this concern is somewhat mysterious in character. Why must punishment (or whatever) be administered by a public agent? The defender of the Agency View owes us some account of that.

Dorfman and Harel try to step up to the plate and provide us with exactly that. Their argument is quite complex and convoluted. I’ll present a simplified version of it here. In essence, it has to do with the need for certain decisions (punishment being a good example) to be publicly legitimised. If you are going to make decisions that could harm people, deprive them of their liberty, or redistribute core rights and responsibilities, you need those decisions to be publicly legitimate. The problem, according to Dorfman and Harel, is that you will never get that legitimacy if you privatise those decisions. The reason for this is that privatisation necessarily undermines and erodes public responsibility for, and engagement with, the relevant decision, both of which are essential for legitimacy.

The argument seems to work like this:


  • (4) In order for particular decisions to have public legitimacy they must emanate from the public (i.e. the public must be engaged with the decision-making process and must be able to take responsibility for the decision).

  • (5) In order for a decision to emanate from the public it must be made by an agent who defers to the public in a particular way.

  • (6) A private agent, contracted to make those decisions, cannot defer to the public in the right way.

  • (7) Therefore, privatised decision-making lacks the legitimacy that is required for particular decisions.


You could challenge several aspects of this argument. It is certainly a little bit sketchy about which decisions require this form of legitimation, and there is plenty of disagreement in the academic literature about the conditions that must be satisfied in order for a decision to be legitimate. Nevertheless, most of the action in Dorfman and Harel’s article centres on premises (5) and (6).

Let’s consider premise (5). Many ‘public’ decisions are made by individual decision-makers (public officials, government ministers, government agencies, etc.). Members of the general public may have little direct influence and involvement in those decisions. Nevertheless, in order for the decisions to be legitimate, the individual decision-maker must defer to the public in making the decisions. They must see and understand themselves to be public servants. They must take the public’s views into consideration and be answerable to the public for what they do. In short, they must be part of a normative community/institution that engages with and answers to the general public. That’s the view that underlies premise (5).

Premise (6) is then defended on the grounds that private agents, who are contracted to make similar decisions, can never show the right kind of deference. This is fleshed out in what Dorfman and Harel term the ‘different contracts’ argument. A public official is employed under a particular set of norms, norms that require their integration into and answerability to the general public. A private agent is employed under a contractual agreement. Their duties and obligations will be set by the terms and conditions of that contractual agreement. Their duty is not to the general public, it is to the contract. There is consequently distance between them and the general public, not deference.

You might respond to this by arguing that deference to the public could be built into the terms and conditions of the contractual agreement. But Dorfman and Harel argue that this won’t work. They argue that every privatisation agreement will have to afford the private agent a ‘zone of permissibility’ (or ‘autonomy’) in which they are free to exercise their own judgment about what to do. This zone of permissibility will necessarily remove them from the kind of public deference that is required. The reason for this is that in order to make any sense at all, a privatisation agreement must defer to the skills and judgment of the private agent. Recall that the whole point of privatisation is that private agents are able to provide a good or service more efficiently than public agents. This necessarily implies a zone of permissibility. The problem is that within that zone of permissibility, the private agent will have an ‘immunity’ or ‘claim right’ against the state: they will be legally entitled to resist state interference and direction. This is what distances them from the public.

Dorfman and Harel concede that, in principle, a private agent could be fully integrated into a public agency, but they argue that in such a case the private agent would cease to be ‘private’. They also concede that there are some seemingly public agencies that have zones of autonomy that seal them off from political interference. They give an independent election monitoring agency as an example. But they argue that such agencies are not ‘private’ in any meaningful sense. They still serve and engage with the public in the fulfilment of their duties. They are not employed or constituted under the same kind of private contractual agreement as a private agent.


4. Concluding Thoughts
That’s Dorfman and Harel’s argument in very broad outline. Suffice to say there is a lot of detail and nuance missing from this summary. For those who want that detail and nuance, I recommend reading the original paper. Granting that my summary is imperfect, I nevertheless want to close with three critical reflections on the argument presented above.

First, I’m not sure I am entirely convinced by the whole ‘different contracts’ line of argument. It seems to me that contracts of employment (or service provision) are pretty fluid things. There is a classical notion of contracts that views them as strictly private agreements, untethered from general norms and outside influences. But this classical notion is obviously flawed, and certainly does not represent any contemporary legal position on the nature of a contractual agreement. Nowadays, so-called private contractual agreements are frequently influenced, directed and constrained by public policy. Consumer protection legislation, for example, severely restricts what can be put into a private contract. Given this, I’m not sure why the contractual agreement underlying a privatisation arrangement couldn’t be heavily influenced and constrained by public policy, and, more importantly, why the contract couldn’t insist on deference to the public. I’m also not sure that I am convinced by the claim that a ‘zone of permissibility’ makes that big a difference in this regard since, presumably, the employment contracts of public officials will include similar zones of permissibility. That doesn’t completely distance them from the general community, nor grant them a relevant claim right.

Second, I’m somewhat puzzled by the distinction between the Reasons View and the Agency View. One problem I have is that Dorfman and Harel’s defence of the Agency View seems to bring them full circle, back to a position that is very similar to the Reasons View. Think about it. When you boil it down, their major line of objection is that privatisation agreements give private agents a ‘zone of permissibility’ in making decisions. But why is this so objectionable? Because it means they do not defer to the public in making decisions, i.e. they do not take the public appropriately into account in their decision-making. This seems pretty close to saying that the problem is that they don’t act for sufficiently ‘public’ reasons. Now, I’m sure Dorfman and Harel would respond by saying that their argument is not just about reasons, it is also about the formal legal immunities that the contractual agreement would grant the private agent. But if those formal immunities are up for grabs — as I suggested in the previous paragraph — then it seems like reasons for action would be the only relevant difference.

Third, I worry that the argument as a whole is a little bit too clever. It seems to come perilously close to being true by definition. Privatisation is being defined as the process whereby decision-making authority is transferred to a private agent through a contractual agreement that distances them from the public. Distancing thus becomes the defining feature of privatisation. And this distancing, in turn, is why the process is objectionable. Any so-called privatisation arrangement that doesn’t involve this distancing is not really privatisation at all.

That said, I do find something compelling in the line of argument sketched by Dorfman and Harel. I do think public responsibility and engagement are important in some contexts, and I do worry that privatisation agreements tend to be more corrosive of those virtues. But that’s not to say that they are always and everywhere more corrosive, or that they might not have other, countervailing virtues.




Friday, August 18, 2017

Does Human Nature Exist? On the Philosophy of Human Nature




We hear the term bandied about all the time. A man cheats on his wife. We are told that this is simply part of his 'nature’ - that men have evolved to be philanderers. Two young men fight on the streets, taunting and goading each other on. This too is said to be part of their nature - they have evolved modules that predispose them toward violence and jockeying for status. Some people have dedicated their lives to studying and identifying all the constituent elements of human nature, convinced that their inquiries are unearthing important truths about the human condition.

Are they right? Is there a tractable concept or idea of human nature that can form the basis of their inquiries? Or are they like theologians debating the properties of angels dancing on the head of a pin? This is a fundamentally philosophical question. It has nothing to do with particular claims about human nature — such as the two, highly contentious claims with which I opened this post — and everything to do with the concept of human nature. How ought it to be understood or defined?

According to some philosophers, there is no such thing as human nature. According to them, to think that humans (or other animals) have some stable ‘nature’ is contrary to one of the central tenets of modern evolutionary biology. Others think that there is a defensible concept of human nature. In this post, I want to take a look at some of the arguments that are presented in this debate.

I do so through the lens of Edouard Machery’s article ‘A Plea for Human Nature’. As you might guess from the title, Machery is one of the philosophers who thinks that there is a defensible concept of human nature. He defends his view by looking at two arguments from the work of David Hull, one of the leading critics of the concept of human nature. Let’s take a look at what he has to say.


1. Two Concepts of Human Nature
Machery’s defence of human nature hinges on a particular understanding of human nature. He argues that there are two concepts of human nature at play in the contemporary debate. One of them is the ‘essentialist view’ of human nature:

Essentialist view of Human Nature = The claim that human nature is determined by the set of necessary and sufficient properties of humanness, coupled with the claim that the properties that are part of human nature are distinctive of human beings.

This is a classic, Platonic view. It is premised on the belief that every object, event or state of affairs (every ‘kind’) has a set of necessary and sufficient properties that determine its ontological status (a set of ‘essences’). The essence of being a triangle, for example, would be ‘having three sides’. Any object that had more than three sides could not be a triangle. In the case of human beings, this essentialist view usually translates into the claim that things like intelligence, humour, morality, reason, and language are distinctively and essentially human. They are what define us and mark us out as different from other animals. They constitute our nature as human beings.

The essentialist view is to be contrasted with something Machery calls the ‘nomological view’:

Nomological view of Human Nature = The claim that human nature is the set of properties that humans tend to have due to the evolution of their species.

The nomological view does not try to identify what is distinctive or special about human beings. It simply tries to identify properties that humans exhibit that are best explained by their evolutionary (and not by their cultural) heritage. Examples of properties that proponents of this view claim to be part of human nature would include bipedalism, sexual dimorphism, and large brains.

Machery’s defence of human nature works like this: He argues that most of the critics of human nature have taken aim at the essentialist view, not the nomological view. He favours the nomological view and thinks that it withstands the criticisms usually levelled at the essentialist view.



2. Hull’s Anti-Essentialism Argument
To see how this plays out, let’s first consider David Hull’s famous anti-essentialist argument. Here is the relevant text from Hull’s paper setting out this argument:

Generations of philosophers have argued that all human beings are essentially the same, that is, they share the same nature… In this paper, I argue that if ‘biology’ is taken to refer to the technical pronouncements of professional biologists, in particular evolutionary biologists, it is simply not true that all organisms that belong to Homo Sapiens as a biological species are essentially the same… periodically a biological species might be characterised by one or more characters which are both universally distributed among and limited to the organisms belonging to that species, but such states of affairs are temporary, contingent and relatively rare. 
(Hull 1986, 3)

In this passage, Hull is highlighting one of the key findings of evolutionary biology. Since the Neo-Darwinian synthesis was formulated in the first half of the 20th century, evolutionary biology has been wedded to anti-essentialist thinking. Indeed, one of the most vigorous defenders of evolution — Richard Dawkins — starts his book-length defence of the truth of evolution with a chapter outlining its anti-essentialism.

According to the modern view, species are not immutable Platonic kinds. They are all part of one great tree of life. Individual organisms reproduce by exchanging and combining genetic material. This, allied with occasional mutations in DNA, leads to variation in their offspring. The core truth of evolutionary biology is that life (across space and time) is just one teeming mass of variation, with some stable clusters of organisms within it. These stable clusters only exchange genetic material with one another and they form what we call ‘species’. But their clustering is just a contingent accident of evolutionary history and even within these breeding populations there is considerable variation in offspring.

As a result, there is no ‘essence’ to any particular species. As soon as you identify a property that you think is shared by all (and only) members of a particular species, you are sure to find another member of that species who lacks that property. This knocks the essentialist view of human nature on the head. What's more, it is consistent with our everyday experience of humanity. For every allegedly distinctive property of humanity — reason, morality, language — we can find other animals who share some version of those properties or humans who lack them. Some defenders of essentialism might try to avoid this problem by focusing on ‘statistically characterised essences’, i.e. by claiming that rather than there being a specific set of properties that humans must have (in order to count as humans), there is instead a set of properties of which an individual must share a certain proportion (in order to count as human). Hull argues that this doesn’t work because, in practice, it has proved impossible to define species membership using such clusters of properties.

Hull’s argument could then be reconstructed like this:


  • (1) If there is a human nature, it will be because there is a set of necessary and sufficient properties that are distinctively human (i.e. shared by all and only members of the species Homo sapiens).
  • (2) There is no set of necessary and sufficient properties that are distinctively human (nor any statistical set).
  • (3) Therefore, there is no such thing as human nature.


Machery concedes premise (2) of this argument. He thinks Hull is absolutely right to claim that there are no essences of humanness. Where he thinks the argument goes wrong is in the first premise, i.e. in the assumption that the essentialist view is the only game in town. He argues that if we adopt the nomological view, we end up with something that is unscathed by Hull’s argument. Indeed, the nomological view is designed to be consistent with evolutionary theory. It does not claim that there are particular properties that all and only members of Homo sapiens share. It merely claims that there are some properties that we tend to share as a result of our evolutionary history. These properties could be shared by other species and may not be shared by some members of our own species. That does not mean they are not part of our nature.


3. Criticisms of the Nomological View
The nomological view is not beyond criticism. Though it may avoid the clutches of Hull’s argument, there are some potential problems. Machery discusses two in his paper.

The first is that the nomological approach is too reformatory. That is to say, it moves us too far away from the traditional conception of human nature, such that the concept of human nature no longer performs the function we expect of it in our scientific and everyday discourse. When people refer to something being part of human nature, they have in mind those properties and traits that are distinctively human. The nomological view doesn't give them this.

Machery responds to this by arguing that the concept of human nature has played many roles in human history and although the nomological concept fails to fulfil some of those roles, it does fulfil others. In particular, he thinks it helps to mark out humans as a special group in evolutionary history and to identify properties that are likely to be shared by members of this group, irrespective of culture or background.

The second problem with the nomological view is that it might be over-inclusive. That is to say, it might include too many properties within the definition of human nature. There is a terrible tendency to assume that every trait or property that is shared by the majority of humans must have its origins in our evolutionary history — i.e. to suggest the ‘universals’ of the human condition are aspects of human nature. But this cannot be right. Machery gives the example of the belief that water is wet. This is a universal belief, but clearly it cannot be part of human nature. It is a belief prompted by exposure to water not by evolutionary processes. The problem is that, trivially, evolution has contributed to the belief that water is wet because it has provided us with the sensory apparatus that enables us to form this belief. Nevertheless, it doesn’t seem right to claim that the belief is part of our nature.

The solution to this problem, according to Machery, is to argue that although evolution does trivially contribute to the existence of any trait or disposition shared by humanity, not all such traits and dispositions can be ultimately explained by evolutionary processes. The phrase ‘human nature’ should be reserved for those traits that can be ultimately explained by these processes.

This, however, is easier said than done. Many of the most contentious debates between evolutionary psychologists and their critics, for example, tend to centre on whether certain, seemingly universal (or near-universal), traits can be best explained by evolutionary processes or not. Consider again the opening examples: the dispositions of young men toward violence and philandering. Some people will want to argue that these traits are products of our evolutionary histories; some will want to argue that they are the result of certain consistently present environmental factors.

So, in short, even if we accept the nomological view of human nature, there will be plenty of debate left about the actual contents of human nature. Philosophy alone cannot resolve those debates but it can, at least, clarify what we are debating.




Thursday, August 17, 2017

The Reality of Virtual Reality: A Philosophical Analysis


The Holodeck - Star Trek

There is an apple in front of me. I can see it, but I can’t touch it. The reason is that the apple is actually a 3-D rendered model of an apple. It looks like an apple, but exists only within a virtual environment — one that is projected onto the computer screen in front of me. I can interact with the apple. I have an avatar that I can control on the screen. That avatar is a virtual projection of my self. It can pick up the apple, throw it around the virtual room, or eat it. But I can’t touch it and interact with it using my own physical hands.

Is the apple real? Of course not: it’s virtual. But are virtual objects (or events or states of affairs) ever real? This question is of considerable importance. We already live a considerable amount of our lives online (in ‘virtual’ worlds). We interact with people virtually. We deal in virtual goods and services. And if the prognostications of technological enthusiasts are anything to go by, we will probably live more and more of our lives in virtual worlds in the future. With the emergence of immersive virtual reality (VR) headsets such as the Oculus Rift, the Samsung Gear, and Sony PlayStation VR, we can now participate in highly realistic and engaging virtual activities. It would be nice to know whether any of these qualify as being ‘real’, particularly given that the technology is marketed to us as virtual reality.

The reality (or unreality) of the virtual is, fundamentally, a philosophical question. And, fortunately, philosophers have already begun to answer it. The philosopher Philip Brey, in particular, has developed a sophisticated framework for thinking about the reality of virtual reality. He argues that some virtual objects and events are obviously not real (they are merely representations or simulacra), but others are every bit as real as their real world analogues. He suggests that we use John Searle’s theory of social reality to tell the difference.

I want to analyse and evaluate Brey’s proposed framework in the remainder of this post.


1. The Physical and the Functional
To warm up, let’s think a little bit more about the opening example of the virtual apple. As Brey points out, this ‘apple’ clearly exists in some form. It is not a mirage or hallucination. It really exists within the virtual environment. But its existence has a distinctive metaphysical quality to it. It does not exist qua real apple. You cannot bite into it or taste its flesh. But it does exist qua representation or simulation. In this sense it is somewhat like a fictional character. Sherlock Holmes is not real: there was no one by that name living at 221b Baker Street in London in the late 1800s, nor did anyone answering his description solve the crimes that befuddled the hapless inspectors from Scotland Yard. But Sherlock Holmes clearly does exist qua fictional character. There are agreed upon facts about his appearance, habits, and intellect, as well as what he did and did not do qua fictional character.

So Sherlock Holmes and the apple have a simulative reality, but nothing more. They do not and cannot exist qua real apple or real person. Why not? The answer seems to lie in the essentially physical nature of apples and detectives. An apple does not exist qua real apple unless it has certain physical properties and attributes. It has to have mass, occupy space, consist in a certain mix of proteins, sugars and fats, and so on. A virtual apple cannot have those properties and hence cannot be the same thing as a real apple.

The same goes for detectives like Sherlock Holmes, although there are some complexities here. Human detectives have to have mass, occupy space, and consist in a certain mix of proteins and metabolic processes. But do all detectives have to have these properties? Here we get into one of the great debates in philosophy. It seems to be at least conceivable that there could be a virtual detective that could solve real world crimes in the same manner as Sherlock Holmes. Imagine a really advanced artificial intelligence (AI) that is constantly fed data about crimes and criminal behaviour. It spots patterns and learns how to solve crimes based on this data. You could then feed information about new crimes into the AI and it could spit out a solution. This AI program would then be a ‘real’ detective, not a mere simulation or representation of a detective. In fact, you don’t really have to imagine such a detective. Companies like PredPol are already creating them.

We can draw some lessons from these examples. First, we can see that there are at least some kinds of entities — like apples and human detectives — that are essentially physical in nature. We can call them essentially physical kinds. These are objects, events and states of affairs that must have some specific physical properties in order to qualify as an instance of the relevant kind. Virtual versions of these kinds can never be real; they can only be simulations or representations. But then there are other kinds that are not essentially physical in nature. A ‘detective’ would seem to be an example. A detective is a non-physical functional kind: an entity qualifies for membership of the class of detectives in virtue of the function it performs — attempting to investigate and solve crimes — not in virtue of any physical properties it might have. Virtual versions of these kinds can be every bit as real as their real-world equivalents.

Some functional kinds are essentially physical in nature. A lever is a functional kind. A wooden stick can be counted as a ‘real’ instance of a lever in virtue of the function it performs, but it can only perform that function because it has certain physical characteristics. Just try lifting a heavy object with a virtual lever — one simulated on the screen of your smartphone. You won’t be able to do it. On the other hand, a spirit level does not require any particular physical shape or constitution. You can quite happily assess the levelness of your bookshelf with a spirit level that has been simulated on the screen of your smartphone.

Furthermore, the term ‘non-physical functional kind’ is something of a misnomer. Objects and entities that belong to that class will have some physical instantiation (after all, virtual objects are physically instantiated, in some symbolic form, in computer hardware); it’s just that they don’t require any particular or specific physical characteristics in order to perform the relevant function.


2. Social Kinds and Social Reality
So there are some essentially physical kinds: virtual instances of these kinds can only be simulacra. There are also some non-physical functional kinds: virtual instances of these kinds can be as real as their real-world equivalents. Are there any other kinds whose virtual instances can be every bit as real as their real-world equivalents? Yes, there are: social kinds. These are a sub-category of non-physical functional kinds, and they are particularly interesting because of their practical importance and their ontological origins.

In terms of their importance, it goes without saying that large chunks of the reality with which we engage on a daily basis are social in nature. Our relationships, jobs, financial assets, property, legal obligations, credentials, social status, and so on, are all socially constructed and sustained. Brey argues that much of this social reality can be recreated in virtual form. He argues that we can use John Searle’s theory of social reality as a guide to when and whether social kinds can be ‘ontologically reproduced’ (as he puts it) in virtual form.

To understand his proposal, we need first to understand Searle’s theory. Searle distinguishes physical kinds and social kinds along two dimensions:* their ontology (what they are) and their epistemology (how we come to know of their existence). He argues that physical kinds are distinctive in virtue of the fact that they are ontologically objective and epistemically objective. An apple does not depend on the presence of a human mind for its existence — it is thus ontologically objective. Furthermore, we can come to know of its existence through intersubjectively agreed upon methods of inquiry — it is thus epistemically objective.

Social kinds are distinctive because they are ontologically subjective and epistemically objective. Money depends on human minds for its existence. Gold, silver, paper and other physical tokens do not count as money in virtue of their physical properties or characteristics (contrary to what people often believe). They count as money because human minds have conferred the functional status of money on them through an exercise of collective imagination. In other words, particular physical tokens only count as money because most of us agree that they count as money. In theory, we can confer the functional status of money on any token, be it an exquisitely sculpted metal coin or a digital register of bank balances. In practice, certain tokens are better suited to the functional task than others. This is due to their durability and incorruptibility. Nevertheless, this hasn’t stopped us from conferring the functional status of money on virtual tokens. Indeed, most money that is in existence today is virtual in nature: it only exists in digital bank balances; it does not, and never will, exist in the form of notes or coins. We happily pay for goods and services with this ‘virtual’ money, even though it lacks physical tangibility. This virtual money is still epistemically objective in nature. I cannot unilaterally imagine more money into my bank account. My current financial status is a matter of intersubjectively agreed upon fact.
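The twin properties of virtual money (ontological subjectivity plus epistemic objectivity) can be sketched in code. The ledger below is a deliberately crude model, not a claim about how any real banking system works: balances exist only as entries in a shared register, yet no individual account holder can unilaterally wish funds into existence.

```python
class Ledger:
    """Toy model of virtual money: balances exist only as entries in a
    shared register (ontologically subjective), yet an account holder
    cannot unilaterally change them (epistemically objective)."""

    def __init__(self):
        self._balances = {}

    def open_account(self, holder, deposit=0):
        self._balances[holder] = deposit

    def transfer(self, payer, payee, amount):
        # A balance only changes through a transaction recorded on both
        # sides of the register, the analogue of intersubjective agreement.
        if self._balances.get(payer, 0) < amount:
            raise ValueError("insufficient funds")
        self._balances[payer] -= amount
        self._balances[payee] += amount

    def balance(self, holder):
        return self._balances[holder]

bank = Ledger()
bank.open_account("alice", 100)
bank.open_account("bob")
bank.transfer("alice", "bob", 40)
print(bank.balance("alice"), bank.balance("bob"))  # → 60 40
```

Note that nothing in the model requires notes or coins: the tokens are pure bookkeeping entries, which is precisely the situation of most money in existence today.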

Searle argues that many social kinds share these twin properties of ontological subjectivity and epistemic objectivity. Examples include marriages, property, legal rights and duties generally, corporations, political offices and so on. He calls these ‘institutional facts’. These are social kinds that come into existence through the collective agreement upon a constitutive rule. The constitutive rule takes the form ‘X counts as Y in context C’. In the case of money, the constitutive rule might read something like ‘Precious metal coins with features a, b and c count as money for the purposes of purchasing goods and services’. Searle doesn’t think that we explicitly formulate constitutive rules for all social objects and events. Some constitutive rules are implicit in how we behave and act; others are more explicit.
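The logical form of a constitutive rule is simple enough to state as a data structure. The sketch below is an illustrative simplification of Searle's formula, nothing more: status functions are conferred by collective agreement, not by the physical properties of the token.

```python
class Institution:
    """Toy model of Searle's constitutive rules: 'X counts as Y in context C'.
    An institutional fact exists just in case the relevant rule has been
    collectively agreed."""

    def __init__(self):
        self._rules = set()  # (x_type, status, context) triples

    def agree(self, x_type, status, context):
        # Collective agreement brings the institutional fact into existence.
        self._rules.add((x_type, status, context))

    def counts_as(self, x_type, status, context):
        return (x_type, status, context) in self._rules

society = Institution()
society.agree("metal coin", "money", "buying goods")
society.agree("digital bank balance", "money", "buying goods")

print(society.counts_as("digital bank balance", "money", "buying goods"))  # → True
print(society.counts_as("seashell", "money", "buying goods"))              # → False
```

The model makes every rule explicit, which Searle denies for real societies, but it captures the key point: what the X term is made of plays no role in whether it counts as Y.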

What’s interesting about Searle’s theory is that it means that much of our everyday social reality is, in a sense, already ‘virtual’ in nature. It doesn’t depend on any physical, real world properties or characteristics for its existence. Money, marriages, property, rights, duties, political offices and the like do not exist ‘out there’ in the physical world; they exist inside our (collective) minds. They are fictional projections of our minds over the physical reality we inhabit. In principle, we can project the same social reality over anything, including the representations and simulations that exist within virtual reality. Thus, according to Brey, we can ontologically recreate things like money, marriage, rights, duties, political offices, and so forth in virtual worlds. All it takes is some collective imagination and will.


3. Conclusion: What's real and what's not?
Brey’s view on the reality of virtual reality can be summarised as follows:

Essentially physical kinds: entities that must have some specific physical properties or characteristics to qualify as instances of the relevant kind. These can never be ontologically reproduced in a virtual environment; their virtual form can only ever be a simulation or representation (e.g. apples, chairs, cars).

Non-physical functional kinds: entities that qualify as instances of the relevant kind in virtue of the functions they perform, not in virtue of any particular physical properties or characteristics. These can be ontologically reproduced in a virtual environment; their virtual form can be every bit as real as their real-world equivalents.

Social kinds: a sub-set of non-physical functional kinds whose existence depends on collective agreement upon a constitutive rule (of the form ‘X counts as Y in C’). These too can be ontologically reproduced in a virtual environment; their virtual form can be every bit as real as their real-world equivalents.
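The summary above reduces to a single criterion, which can be stated as a short sketch (the classifications given here are illustrative, and, as discussed below, contestable for particular cases):

```python
from dataclasses import dataclass

@dataclass
class Kind:
    name: str
    essentially_physical: bool
    social: bool = False  # social kinds are a sub-set of the non-physical

def virtual_instance_can_be_real(kind):
    # Brey's criterion: only kinds with no essential physical requirements
    # can be ontologically reproduced in a virtual environment.
    return not kind.essentially_physical

for k in [Kind("apple", True),
          Kind("detective", False),
          Kind("money", False, social=True)]:
    print(k.name, virtual_instance_can_be_real(k))
# → apple False
# → detective True
# → money True
```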

Note that this theory covers virtual objects, events and states of affairs. It does not include virtual actions. As Brey points out in his paper on the topic, virtual actions have to be treated differently for the simple reason that virtual actions are typically performed by human controllers of characters operating in virtual worlds. As such, virtual actions can have ‘extravirtual’ origins and effects, and this means that they share a much more fluid relationship with reality than do virtual objects and events. Virtual actions are constantly spilling over into the real world. It would require another post to clarify the exact ontological status and classification of these acts, but suffice it to say virtual actions are often every bit as real as real world actions.

For what it is worth, I think Brey’s theory is pretty much spot on. There clearly are some objects and events that require a particular physical instantiation and this can never be recreated in virtual form; and there are also clearly other objects and events that do not depend on a particular physical instantiation (that are ‘multiply realisable’ - to use the philosophical parlance). I also agree that much of our everyday social reality can be recreated in virtual form because it depends for its existence on collective agreement. I think this is an important observation because its consequences could be far reaching. We can certainly quibble about the utility of Searle’s specific theory of how social kinds come into existence, but there is general agreement that much of the social world is constructed by the minds of human actors. (If you are interested in a slightly different theory of social kinds, see my previous post on the philosophy of social construction).

That said, I think there might be an alternative approach to differentiating between the virtual and the real that is overlooked by the theory. I’m not sure that defining everything that is represented or created on a computer as ‘virtual’ captures what we really mean by the term. Indeed, I tend to favour something closer to an exclusionary definition of the virtual. In other words, I would prefer a definition of the virtual that necessarily excludes reality: one that holds that the virtual can never be real.

Furthermore, even though I agree with the theory in its current form, I think there will be much disagreement over specific cases. For any particular object or event, people might disagree about whether or not it requires some essential physical property or characteristic. Consider the debate about the human mind. There are some philosophers, called functionalists, who think that the human mind can be realised in multiple different physical forms. It is, consequently, not an essentially physical kind. There are others who think that only a human brain could instantiate a mind. This means that it is an essentially physical kind. We can expect disagreements of this sort to arise over allegedly real objects and events that are instantiated in virtual worlds, even if we agree on the general principles that apply to distinguishing that which is real from that which is merely a simulation. To be fair, Brey recognises this point. One of his main observations is that virtual objects and events tend to exist in an ontologically uncertain/contested state.


* Searle uses slightly different terminology in his work. He distinguishes between brute facts and institutional facts. 




Wednesday, August 9, 2017

Podcast - Why we should create artificial offspring

I recently had the pleasure of being a guest on the RoboPsych podcast. I was interviewed by hosts Tom Guarriello and Julie Carpenter about my recent paper 'Why we should create artificial offspring'.  The paper is an extended thought experiment, arguing that creating artificial offspring might be good for humanity.  The podcast explored many of the key ideas in the paper and some other issues too. You can listen below or follow this link to the RoboPsych website. While there, you should check out the other episodes. There have been a number of interesting guests and topics.