Monday, 31 October 2011

Is sea water healthy?

Presocratic Greek philosopher Heraclitus [Herra-cl-eye-tus] gave us the following fragment to consider:

Sea water is very pure and very foul, for, while to fishes it is drinkable and healthful, to men it is hurtful and unfit to drink.

What I want to draw from this fragment is that purity or foulness is not a property of the water, but a reaction of the individual towards the water.

This relates to the discussion of the definitions of health and disease in the philosophy of medicine. Philosophers of medicine have invested a great deal of effort in trying to define what disease is. Is blindness a disease, for instance? Heraclitus looked at the difference in reaction between a man and a fish. I want to extend this to the difference between individual human beings.

Sea water is foul to a man because he is not biologically adjusted to it, whereas a fish is adjusted to it. In the same way, a man born blind may be perfectly well adjusted to life without sight and ought not be labelled as diseased, but a man who has had sight all his life and is blinded as the result of an accident may be called diseased since he is maladjusted to his new condition and may be wise to seek treatment.

So where philosophers of medicine have troubled to seek a definition of disease to suit all of mankind, I counsel against being so general.

I instead seek a kind of meta-definition that would compare a man's actual condition with the condition that he is adjusted to.
I am interested in looking at an individual's biological fitness and how it fluctuates on a local basis. I am examining a definition of disease under which an individual counts as diseased when his fitness drops a certain threshold below his local average, as the result of some event or culmination of circumstances.

The individual in the graph below is considered diseased after the time his fitness dips below the disease threshold.
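The idea can be sketched in code. This is a minimal illustration of the threshold definition above, not a worked-out model; the window size, threshold fraction and fitness numbers are all assumptions made up for the example.

```python
# A minimal sketch of the threshold definition above. The window size,
# threshold fraction and fitness series are illustrative assumptions only.

def disease_intervals(fitness, window=5, threshold=0.8):
    """Return the times at which fitness falls below a fraction
    of its local (trailing-window) average."""
    flagged = []
    for t, f in enumerate(fitness):
        local = fitness[max(0, t - window):t + 1]   # local neighbourhood
        local_avg = sum(local) / len(local)
        if f < threshold * local_avg:               # below the disease threshold
            flagged.append(t)
    return flagged

# a fitness series that dips sharply after an "event" at t = 6
series = [10, 10, 11, 10, 9, 10, 4, 3, 5, 9, 10]
print(disease_intervals(series))  # flagged as diseased at t = 6, 7, 8
```

The point of the trailing window is that "diseased" is judged relative to what this individual was recently adjusted to, not relative to mankind in general.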

Monday, 24 October 2011


I have a remote for my Bose speaker system for listening to music, and when I adjust the volume, I find myself clicking the volume button multiple times in succession to achieve a large change in volume. Now the remote comes with a function whereby I can simply hold the volume button and the volume changes continuously.

So why don't I do this? Surely it must be more efficient? It seems that if I hold the volume button and listen for the right volume before releasing, there is a lag between me hearing the right moment and actually releasing the button, so the volume overshoots and the music ends up too loud or too quiet. Instead, I click the button in succession, even though this is comparatively slow.

My brainwave was this: I can work out my lag. How many 'clicks' worth do I overshoot by? By finding this number, n, I can simply hold the volume button, judge by ear, and when I find the right volume, compensate for the lag by immediately pressing the opposite volume button n times. Amazing! I have now spent 20 minutes thinking about and discussing how I can save a few seconds on large volume adjustments when using the remote for my Bose speaker system. Practical philosophy in action (irony intended)!
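For what it's worth, the compensation number n is just the reaction lag multiplied by how fast the volume ramps while the button is held. A sketch, with made-up numbers (I never actually measured the remote):

```python
# A sketch of the lag-compensation trick above. The lag and ramp rate
# are made-up numbers, not measurements of the actual Bose remote.

def overshoot_clicks(reaction_lag_s, ramp_steps_per_s):
    """n = how many volume steps the held button overshoots by
    during the reaction lag before release."""
    return round(reaction_lag_s * ramp_steps_per_s)

# e.g. a 0.3 s reaction lag while the volume ramps at 10 steps per second
n = overshoot_clicks(0.3, 10)
print(n)  # hold, release when it sounds right, then press the opposite button n times
```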

Sunday, 9 May 2010

Why you should listen to me

Isn't it great when a study confirms what you already suspected? There's a significant correlation between robust daydreaming and superior intelligence.1

So, am I really saying that because I am, and have always been since childhood, a thinker and a dreamer, I have 'superior intelligence', for which my musings should be respected?

Well... yes. Perhaps I ought to add that I'm blushing furiously as I type. I'm sure some of the students around me in the library have noticed.

What I'm really searching for is a way to give credence to my meditations.2 To convince myself, and maybe others, that I'm not just another guy with his own crazy opinions, but that I'm one of those [rare] types that really can think clearly and brightly about all aspects of the world and its big questions.

Maybe I'm worth noticing. Maybe I could be the next Descartes or Hume.

There I go, daydreaming again...

2: ... and subtly working in the blog title...

Friday, 7 May 2010

A moral question

Why don't you rob a bank? The benefit of doing so would be, needless to say, enormous. But very few people do it. Why not? Is it because it's so much hard work, what with all the logistics and technology and months of preparation involved? Is it because, if you are caught, the costs are so high that they put the enormous benefit into stark perspective? Is it because it's completely immoral?

I suspect that it's the last answer that most people would give, without so much as blinking. But what does that really mean? Does everyone share an innate sense of moral duty that makes them not want to rob the bank? Obviously not, since there are some people in the world who do go ahead and try. Maybe 'immoral' in this context, for some people at least, is some kind of shorthand for 'the costs and risks are too high to justify the act'.

So how do we go about finding out whether someone is unmotivated to steal for reasons of undiluted moral duty, or merely because they consider the costs to be higher than the benefits? With science, of course! We will do a thought experiment, and to approximate experimental conditions we will isolate the variable we want to test, in this case the morality, by taking the cost/benefit analysis out of play. If we make the result of a cost/benefit analysis unequivocally positive, then it's obvious that any person who still resists robbing a bank is resisting for moral reasons.

So let us enter the theoretical realm of the thought experiment. As the experimenter, I shall ask you to imagine that in this realm, robbing a bank is incredibly easy, and you have a 100% guarantee that no-one will ever find out. Imagine perhaps that you're sat in front of a computer and that all you have to do is press enter, and any amount of money you desire will be strategically siphoned into various accounts prepared for you in advance. It's all set up in complete anonymity. You will leave and no-one will know that it was you. You know you wouldn't even be brought in for questioning, since there would be no traces leading back to you. Not even God would know it was you1. Now here comes the question: would you press enter?

There is a problem with the experiment, however, and that is that it's impossible for the experimenter to get any meaningful data. Let's break this down. The experimentee has two things going on: the answer he gives to the experimenter, and the answer he actually believes. As I will explain, there is motive for lying here. Since there are two factors, each with two possible answers, there are four possible outcomes to consider:

1) The experimentee would, and tells this to the experimenter.

2) The experimentee would, but tells the experimenter he wouldn't.

3) The experimentee wouldn't and tells this to the experimenter.

4) The experimentee wouldn't, but tells the experimenter he would.

Now, some of these are more likely to occur than others. I suspect that, if we can assume the experimentee holds a basic level of sincerity and respect for the experiment, it's unlikely that we'll get (4).

(1) is the scenario from which we learn the most, except that we only learn that the experimentee is either not very intelligent, or has not really thought it through very well. If he had thought about it, he would realise that telling the experimenter marks him out as 'immoral', held back only by the policing ability of society's law enforcement. Therefore, scenario (2) is far more likely from the person who would. However, at this point, how do we differentiate the person in scenario (2) from the person in scenario (3)? Any person telling the experimenter that he wouldn't isn't giving the experimenter any useful information, since it is still anyone's guess what the actual tendencies of the experimentee are.

So this is an interesting thought experiment, but unfortunately the only person you can experiment with reliably is yourself!

1: We must assume, for the purposes of the thought experiment, that if you believe in an omniscient God, you discount his omniscience in this case. If you do not, then the experiment can't advance, since you are still exposed to policing. The experiment tries to isolate the morality of the individual from any kind of external morality, whether imposed by society, law or God.

Sunday, 25 April 2010


Humans are notoriously bad at judging risks. I’m talking about the decision-making process behind actions. We all make hundreds of decisions every day. When do we decide to cross the road? What do we decide to put in our sandwich for lunch? What religion do we choose to follow? Not all of these sound like risks, but a decision to take a risk is still a decision, and it follows the same decision-making process as any other. We don’t consider a decision to be a risk when the cost of making the wrong decision isn’t very great. That’s the only distinction between a conventional decision and a risk.

The decision-making process is actually pretty straightforward, as I shall demonstrate with my Risk Matrix™ later. However, there is a substantial amount of judgment to be done, and this is where our human nature allows us to screw up, make the wrong decisions and put ourselves at risk.

Every decision starts with a question: Should I do action X?

Action X might be to cross the road, for instance. Then there are two factors to consider, probability and cost/benefit, each assessed for both the good outcome and the bad outcome. These are shown in the Risk Matrix™ below.

In our road crossing example, there might be few cars on the road so the probability of successfully crossing the road (good outcome; P) will be rather high compared to the probability of being involved in an accident (bad outcome; p). Let’s say you decide that you have a 99% chance of successfully crossing the road. So P = 0.99 and p = 0.01.

Next we need to do some cost-benefit analysis. We can assume that there are more benefits of reaching the other pavement than there are costs. It could be that the supermarket is on the other side of the street. The benefit is that we can get some bread and milk before going home. The cost is that it’ll take us longer to get home, since we’ll have to do the shopping and cross back over the road. But we decide that overall the benefits are greater than the costs, so we assign a net benefit of +1.

On the other hand, there are considerable costs associated with unsuccessfully crossing the road and ending up in an accident. These include injury resulting in time spent at the hospital, preventing you from doing things you would need or want to do. There could be some benefits, such as not having to go to school and hand in your unfinished homework, but overall the costs will outweigh the benefits. Since there is some uncertainty in exactly how bad the accident could be, we’ll assign it an average cost of -100 (it would be worth about 100 shopping trips).

Now, in order to determine whether to perform the action of crossing the road, we multiply each cost/benefit (b for the good outcome, c for the bad) by its probability and sum:

(P x b) + (p x c)

If the result is a positive number, then the action is worth making. If the result is negative, then the action should not be taken; it is too risky!

In our example,

(P x b) + (p x c)

= (0.99 x 1) + (0.01 x -100)

= 0.99 - 1

= -0.01

The result is negative and we conclude that in these conditions crossing the road would not be a smart move. Perhaps wait until some cars have gone past and re-evaluate.
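The calculation is simple enough to write out in code. A minimal sketch; the function and variable names are mine, invented for the example:

```python
# Evaluate the Risk Matrix formula (P x b) + (p x c) for the road-crossing
# example above. Positive result: take the action; negative: too risky.

def expected_value(P, b, p, c):
    """P, p: probabilities of the good/bad outcome; b, c: their net values."""
    return P * b + p * c

# the values from the road-crossing example
ev = expected_value(P=0.99, b=1, p=0.01, c=-100)
print(ev)  # negative (about -0.01), so don't cross yet
```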

However, remember at the start I said that humans are notoriously bad at judging risks? Well, this hasn’t changed; the issue is in the values we assign to each part of the equation. If we were drunk we might misjudge the likelihood of succeeding; the road might be busy and we might ignore a high probability of collision. Maybe we’re young and innocent, don’t realise the implications of being involved in an accident, and so misjudge the net cost of a bad outcome.

Sometimes we’re blinded by one impressive value in the Risk Matrix™ and do not notice the importance of the other values. Here are two examples.

The first is that of the gambler. The question is: should he put all his money on red? The benefit of winning is obviously massive. He could walk away with thousands of pounds. So excited is he by this prospect that he ignores the underwhelming probability of him actually succeeding. Given the low probability of success and the high cost of failure, he shouldn’t make the gamble, but it can be easy to be blinded by the spectacular benefit of success.
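The gambler's case runs through the same formula. As an illustration only, assuming a single even-money bet on red on a European roulette wheel (18 red pockets out of 37):

```python
# The gambler's bet on red, using the same (P x b) + (p x c) formula as the
# road-crossing example. European roulette odds assumed for illustration.

P = 18 / 37   # probability the ball lands on red
p = 19 / 37   # probability it doesn't
b = 1         # win: the stake comes back doubled, net +1 stake
c = -1        # lose: the stake is gone, net -1 stake

ev = P * b + p * c
print(ev)  # about -0.027 stakes per spin: the gamble should not be made
```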

The other example is that of the agnostic. He comes across a religious group that tells him that their god punishes non-belief with eternal suffering in the afterlife. The agnostic may at first question the existence of this god, but the religious group tells him that even if he’s not sure whether or not their god exists, at least he can avoid eternal suffering by becoming a follower. The question is: should I ignore this god? The cost of a bad outcome (that after all this god does exist) is so great that he might neglect to consider the probability of the god existing, which could turn out to be so slight as to mitigate the impressive costs.

If there were to be a moral to this story, it would be to consider all corners of the Risk Matrix™ when making an important, life-changing decision. Always make sure you have a good understanding of both the probabilities and the costs/benefits involved.

Friday, 16 April 2010

Science is just a religion

It seems to be a theistic claim common to the intertubes1 that science is a religion, or that atheism takes just as much faith as religion. These are clearly erroneous claims, for a series of reasons that I won’t go into in detail. However, I’m not sure what the point of this argument is in the first place.

Does reducing science or atheism to the status of religion make it less respectable? Does it make them more easily dismissible as opinion? Yes, very likely, and that’s probably why a great number of rational thinkers are offended by statements like these. In fact, this might even be the reason: theists observe that these claims rile their ‘opponents’ and thus keep using them.

However, the reality is quite alarming. If making science a religion makes it less respectable, and making atheism a faith makes it more dismissible, then does that not mean that the theists are calling their own beliefs and religions less respectable and dismissible as ‘just an opinion’? Surely that’s counter to all that they’re trying to achieve?

1: ... though special mention must also be made of YouTube here.

Tuesday, 6 April 2010

Shades of grey

Many people believe in absolute truths. Killing is wrong, God exists, Picasso is beautiful, Frenchmen are more romantic, what goes up must come down. I think people come to believe these through inductive learning. All around them are examples of these with no exceptions, or people telling them these things, again with no exception or contradiction.

Later in life, these people may come into contact with realities that contradict their conceived laws of nature: war, atheists, art critics, Italians (only kidding), NASA. If they have lived too long in their bubble, they will twist and contort what they are witnessing to fit their beliefs. Other people are wrong. Argument and hostility may break out. Others, less entrenched, will modify their rules, ever adding exceptions. Killing is wrong. Unless God orders it. Whatever goes up must come down. Unless it’s a rocket.

Spend too long with these ‘laws’ unchallenged and it becomes easy to fall into a trap: subscription to ever-increasing doses of absolute truth. Smaller and smaller trends are deemed truths, with heavier and heavier bias. Scrutiny is met with hostility. Life becomes a set of rules too easily created, with too many appendices.

An open mind (minds seem to be better opened earlier than later) realises that in an inductive system, an exception breaks the rule, rather than being added to the rule. When confronted with an exception, the open mind attempts to understand what is really going on and rewrites the rules in their entirety in order to maintain stronger congruence.

Under closer scrutiny, the world is much greyer than we assume. It’s all too easy to label things black and white without thinking about it too much. Sometimes it takes one person to come along and shake the box a bit.

Think of sexuality. First there was just heterosexuality. Then homosexuality came out of the closet and we were forced to change the rules. Now a person could be gay or straight. Then bisexuality came along. There is still misunderstanding and hostility towards this even now (remember the second and third paragraphs). Some more open-minded people have changed the rules to gay, straight or bi. Then more contradictions came along, in the form of asexuality and trans-sexuality. Alfred Kinsey came along and shook the box. He told us to stop thinking in terms of absolutes. Don’t categorise when no categories exist.

The world is not to be divided into sheep and goats. It is a fundamental of taxonomy that nature rarely deals with discrete categories... The living world is a continuum in each and every one of its aspects.
- Alfred Kinsey, 1948