Preamble - This is a vignette that considers what might happen to the cars of today if we move to a grid-based system of controlled modular transport.  It is the Clarkson nightmare scenario, in Basingstoke.

Basingstoke Car Park

Join us from 15 March 2050 as The Council for Greater South West London opens ‘Basingstoke Car Park.’  This car park will be a fascinating and fun day out for the entire family.

Celebrating 180 years of the internal combustion engine and petrol power, Basingstoke Car Park is a must-see.  Set in the picturesque suburban conurbation 85, the car park offers three square miles of motoring pleasure.

Learn first-hand the power and the excitement of the internal combustion engine.  Smell the exhaust fumes.  Hear the roar of a petrol engine and experience the pull of a 70 brake horsepower engine.  Ride and drive* a selection of automobiles from throughout the ages.  See a number of famous British cars, from the ‘Talbot Salara’ to the classic ‘Mini Metro’, and see the most prestigious car ever made – the ‘Rolls Royce’**.  Experience the thrill of driving your own ‘Nissan Micra’ (manufactured in the UK in 2012) around a full-scale 20th century urban park.

* Please note – driving extends to the depression of an accelerator pedal between the speeds of 10-20mph.  Safety equipment must be worn at all times and each live experience is undertaken within an AI-monitored centralised safety grid.

** Please note – the Rolls Royce on display, although the last car to be designed by a human in the UK, was manufactured in China, Unified Korea and India.


Experience ‘Live’ Driving

If you’ve enjoyed attractions at other car parks, such as Slough, ‘Home of the first pedestrian crossing’, or Milton Keynes, the alleged home of the first roundabout, then you’ll love Basingstoke Car Park.  Not only does it offer more roundabouts than any other car park (eight, in contrast to Milton Keynes’ five), it also allows you to experience what motoring conditions were really like in the late 20th century.

In fully steerable*** self-controlled cars you and your family will be able to experience the thrill of driving in live conditions.  All of our cars allow you to experience the sensation of ‘starting’ the ignition.  Hear the crackle of the FM radio.  Feel the power of ‘disc braking’ as you negotiate the challenges of changing lanes and turning right.

The experience allows everyone to enjoy the joy of driving as fast as they want (between 20-30mph) and the excitement of live decision making.

*** Please note – the experience is limited to eight replica ‘Nissan Micras’ operating in 30-minute sessions.  The number of passengers per car is limited to four.  All acceleration is controlled within the range of 20-30mph.  Braking, if not applied by the driver in appropriate circumstances (such as approaching a roundabout), will be applied automatically by the centralised safety grid.


The Life of a Motorist

As well as the first ‘live’ driving experience to be had in the UK for twenty years, Basingstoke Car Park offers a unique opportunity to experience what life was really like for a motorist.  Through a range of simulations and exhibits you can see just how perilous the daily commute used to be.  Experience the perils of human decision making and undertake a simulated motorway driving experience: driving at 70mph, late at night, in wet conditions.  Experience the dangers of human error and fatigue, and learn of the phenomenon of ‘road rage’ in our psychology of driving exhibition.

Visit our Driving Hall of Fame where you can interact with automated replicas of key ‘drivers from history’.  Drive in our Grand Prix simulation: see things from Michael Schumacher’s point of view and feel what it’s like to drive at 200mph, entirely under your own control!  Meet key figures from the dawn of ‘Health and Safety’ and see a full-scale animatronic version of the Lord Jeremy Clarkson of Beaulieu, famous for establishing the parks for endangered automobiles popular in the 2020s, which allowed those wishing to retain their driving skills to do so in specialist enclosures.  Hear excerpts from his famous ‘petrol heads unite’ protest of 2025, when he encouraged motorists to converge on Central London but made it no further than the congestion at the M25 Heathrow Toll barrier.

Finally, before you leave, ensure you have a picture taken by a genuine 20th century ‘speed camera’.  How fast can you go before it flashes?

How to get to us

Basingstoke Car Park is located on the outskirts of the Wider London Grid; a mere 30 minutes from the Administrative Centre using autonomous transport units and a short distance from the major transit network.

By Mass Transit.  Simply travel to the Southern Network Node and take a transport to Basingstoke Central Hub.  You will be able to either walk the five-minute journey to the Car Park or take an eco-shuttle through the local grid.

By Autonomous Unit.  Use the following co-ordinates for your autonomous transport unit – SG 82, CP3.  This will take your transport through the Wider London Commuter Grid and out to suburban conurbation 85.

Other Attractions

On your journey, why not learn about some of the attractions in advance and take advantage of your ‘travel time’ to book some activities.  You could even reserve a table at our motorway ‘service station’.  Here, in a full-scale replica of the original ‘Reading M4 Services’, you can experience the full range of delicious food that a British motorist would have eaten at the end of the twentieth century.  Experience the delights of an ‘Olympic breakfast’ and a ‘cup of tea’ at our replica ‘Little Chef’****.

Come and visit Basingstoke Car Park.  A thrilling day out for all the family.

****Please note – due to Department of Health guidelines, Olympic breakfasts consist of soy sausages and soy-blended Fake-on™.





In light of the Julian Assange and Edward Snowden leaks, will the Secret Services change the way they operate?

Modern day spies may have qualities more akin to those of reporters than of the archetypal cinematic agent, James Bond.  Spies deal, after all, in information that will be useful to their bosses, and they build up social networks in order to gain this information.  These networks include friends and acquaintances who have been persuaded to pass on useful information, perhaps for a financial reward, or perhaps because they believe it to be in the public interest.  Possibly they may have been persuaded to pass on their information after being subjected to a little blackmail – nothing too ugly, but just sufficient to persuade a hesitant informer.  All in a day’s work for both the reporter and the professional spy, and both are inclined to justify their work in terms of the national interest.

However, there are some differences: reporters protect their sources and disclose their findings, while spies positively avoid disclosing either sources or findings to the public.  This approach has made the public nervous of late, raising questions as to whether everything spies do is justified.  The use of information gained through torture by third parties is just such a case, one the public has too little information about to make up its own mind.  The publications from Julian Assange and Edward Snowden have provided material to better inform the public about what information the intelligence services collect.  With greater expectations of transparency and demands for accountability across all systems of governance, what does this mean for security agencies reliant on classified information?  Are we coming back to the age-old debate of who polices the police, or checks the Cheka?

So, when the spy chiefs of GCHQ, MI5 and MI6 reported to the Parliamentary Committee for the first time in a public forum, there was speculation that something had changed in the spy business.  But as the dust settles on the briefing folders of our spy chiefs, it is time to consider whether, and how, anything will change.

The Use of Spies

Spies have been around for a long time.  Alexander the Great used them, as have most, if not all, military leaders.  In the military context, spies have been used to find out the strengths and weaknesses of opposing armies, or to persuade tribes to support one side or another; these seem acceptable pastimes for spy folk.  However, given the shadowy world spies operate in, their activity is, and will remain, controversial.  Espionage, fake identities and betrayal have become firmly associated with this trade over the course of history.  Since before Tarpeia betrayed Rome for the gold bracelets worn by the Sabines, treason has never been well received in any state.  The thought that someone (a fifth columnist, a red under the bed, a terrorist) might betray the society they live in has always made the public nervous.  Governments have sought to allay this concern, but have they only ‘set a thief to catch a thief’ – or, more pertinently, set a spy to catch a spy?  Or are our spies honourable individuals possessing boundless integrity, ensuring that the public would agree with all of their actions?

At present, spies either collect information that helps this country’s national interests or they counter the efforts of other nations’ spies that would harm those interests.  But who tells the intelligence community what the national interest is?  It is clearly a challenge to identify any course of action that stands the long-term test of being in the national interest.  No one wishes to have another 9/11 or 7/7.  But what is the Government’s strategy to prevent such a recurrence?  The terrorist activities of the Provisional IRA were resolved not by listening to everyone’s telephone calls but by resolving the issues that created conflict in Northern Ireland.  Whether the intelligence services agreed with this strategy or not is irrelevant; it was decided by the Government of the day, elected and accountable to the public.  But politicians can get it wrong too.  For example, did Tony Blair use his public popularity and ‘force of personality’ to persuade the nation to support a second war against Iraq in 2003?  Voices from that date and since have struggled to demonstrate the ‘national interest’ that that campaign supported.  Consequently, it is difficult to determine whom we trust to identify the national interest, whether in the short or long term.

The Future

The future is likely to see the world become a more complicated place than today, with continued globalisation blurring the boundaries between police work and spying.  Preventing nuclear proliferation or the use of WMD are clearly worthy causes for spies; preventing money laundering by drug barons perhaps seems more like police work, albeit both benefit from the ability to access conversations and messages throughout the world.  And while the public may accept the need for email intercepts when they help identify a group of active terrorists, they will increasingly expect a level of assurance that their privacy has not been invaded unnecessarily.  The revelations from Edward Snowden have raised the question of proportionality: can the security services justify monitoring everybody to identify a few targets?  At one time phone tapping required a legal sanction before it was permitted.  Perhaps the time is coming for some form of ombudsman to monitor the intelligence services’ surveillance targets and reassure the public that their interests are being served by those working in their name.

“Who watches the watchmen?”

The presence of the three spy chiefs answering questions in public before the Parliamentary Committee for Intelligence and Security perhaps indicates that it is the intelligence community who have, subtly, opened the debate about change.  They said little that was new or unexpected, but their presence, in a public forum, placed the issue of personal privacy in the public domain.  The intelligence services wish their activities and information to remain covert, and they argue that anything less than complete secrecy compromises their ability to safeguard our interests.  If we agree with this, the lack of public outcry will be tantamount to sanctioning this position.  Conversely, if we feel there should be safeguards in place to ensure the accountability and responsibility of the intelligence community to its employers, then does greater pressure need to be applied through government to the Security Services?

The intelligence community has focussed the arguments about its activities on preventing terrorism.  It is an emotive focus; MI5 stated that 34 terrorist plots have been disrupted in the UK since 7 July 2005, presumably by use of covert interception of email and phone conversations (under the GCHQ Tempora programme).  But preventing terrorism is only a part of the work that the security services undertake.  Would the public be as happy to surrender their rights to privacy for objectives other than the prevention of terrorism?  It is, surely, for the public to have their say, but achieving an informed debate may be a challenge.





Is there a final boundary in human evolution?  Could there ever be a point when the mind could leave the body?  What would it mean for humanity as we know it, if we are no longer ‘shackled to the flesh’?

At the moment, such a development is an ‘outlier’ trend.  Most of the speculation about how and what would happen if the ‘soul’ left the body has come from science fiction.  This has been done in mainstream films, like The Matrix, but perhaps most famously in William Gibson’s novel Neuromancer.  In this story, the main protagonist is able to interface directly with computers and control them, moving himself virtually away from his body.

Science fiction, as well as being great fun, is able to provide fictional solutions to the challenge of technology enabling someone’s mind to effectively transcend their body.  But what about fact?  What technology would be required to do this, and what technology is emerging that could come close to enabling such a leap?

Current trends

At present, most advances in this area of science concern the therapeutic application of technologies to help people who suffer from disabilities or who have lost limbs or some part of their sensory systems.  Such therapeutic advances are driving sophisticated neuronal control of artificial limbs and computers.  In simple terms, such control is achieved either through reading brain signals (using external sensors, such as plastic caps with lots of electrodes in them, that measure brain activity) or through direct connections to the brain, known as neuronal interfaces.

Strides are being made in both areas of research, but particular developments are being seen in neuronal interfaces as our knowledge of medicine improves and the scale of our technology shrinks.  Research into neuronal interfaces has been in development since the 1980s and has come to represent a range of sophisticated techniques through which people can form new neuronal connections with new appliances.  Put simply, using neuronal interfaces, people can train themselves (by forming the necessary neuronal connections) to move artificial limbs and cursors, thereby augmenting their own physiology.[1]  Such advances have tremendous therapeutic benefit and can help people who have lost the functionality of their limbs to retrain and grow the brain cells required to derive movement and functionality through the augmented appendages.[2]

There is considerable demand for such technology, and significant strides look likely over the next 30 years.  Most advances are likely to start with therapeutic applications, but these will probably expand into commercial markets as people seek to enhance their own natural abilities, either out of interest or for competitive advantage.  As understanding of and demand for such technology increase, so will its abundance.  Research will probably focus on making such augmentations as painless and efficient as possible – for example, neural ‘dust’ technology currently being developed at the University of California, Berkeley offers a system of thousands of ultra-tiny chips permanently inserted into the brain that can both monitor and communicate data through ultrasound.  It is probably a fair assumption that such technology will continue to be sought, and this will increase the sophistication of how we integrate with technology and how we communicate.

Augmentation technology and the ‘Last leap’.

Neural interfaces can take advantage of an important property of the brain known as neuroplasticity.  This is what allows us to learn: the neurons in the brain are constantly able to form new connections as we learn new things.  So, for example, if someone has a new arm attached and a neuronal interface connected, with time and training (and a lot of trial and error in the brain) an associated collection of neurons for moving that arm will eventually form.
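
The trial-and-error process described above can be sketched as a toy model.  In the minimal sketch below (the two-dimensional ‘brain signal’, the learning rate and the identity target are illustrative assumptions, not a real neural interface), a hypothetical interface starts with random connection weights between signal features and cursor movement, and error feedback gradually strengthens the right connections:

```python
import random

def train_interface(trials=2000, lr=0.1, seed=0):
    """Toy 'neural interface': learn a 2x2 mapping from brain-signal
    features to cursor movement by trial and error (a simple delta rule).
    The mapping the user intends here is the identity: signal -> cursor."""
    rng = random.Random(seed)
    # Start with random, useless 'connections' (weights).
    w = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    for _ in range(trials):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # simulated brain signal
        target = x                                    # intended cursor move
        y = [sum(w[i][j] * x[j] for j in range(2)) for i in range(2)]
        # Error feedback (watching the cursor miss) nudges each connection.
        for i in range(2):
            for j in range(2):
                w[i][j] += lr * (target[i] - y[i]) * x[j]
    return w

w = train_interface()
# After enough trials the learned connections approximate the intended
# mapping: w is close to the identity matrix.
```

The point of the sketch is only the shape of the process: connections start arbitrary, and repeated feedback, not explicit programming, is what wires signal to movement.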

The question now is: with this amazing ability and the increasing sophistication of technology, how long will it be until the technology reaches the levels speculated about in science fiction?  If someone can train their brain to recognise an augmented appendage, would it be possible for them to train themselves to use and recognise a new brain-space?  In theory, the space where the soul meets the body can be altered.  Could a new space be integrated into the human brain that allows people to move their consciousness into a new area?  If so, would this represent the means of enabling the last leap, when consciousness can leave the biological body and move into something new?

So let’s speculate about what this could mean for the soul.  Imagine a technology that took advantage of natural processes like neuroplasticity and gradually replaced each section of the brain’s data storage.  Piece by piece, it would move through all the essential functions, transferring connections from the living parts, little by little, into a replicated, artificial brain, switching off one as it opened another.  Theoretically, following a gradual and controlled process, the old human brain could be switched off and replaced with a functioning artificial brain that could, in theory, hold all the same neuronal data that constituted the soul of that person.  Ideas like the ‘exocortex’ describe such a technology.

In reality, it is probably unlikely there would be a sudden ‘leap’ where a person could gradually fill up one artificial brain and switch off their real one.  More likely, and probably sooner, some people will see the benefits of acquiring an alternative ‘brain space’ as they go through life.  This presents an interesting evolutionary trend: would such a person spend more and more time in the artificial brain, with the biological, original brain being less and less required?  The situation is similar to a ‘drone’ in an Iain M Banks novel, where such robotic systems are built with three brains based on hard computer components, orders of magnitude faster (due to their smaller nano-components) than the back-up brain.

So what?

If you look at current technologies and, as far as we can, those being developed over the next 10-30 years, a ‘leap’ is probably unlikely, but the gradual evolution of an augmented brain is a real possibility.  What this means for the human species is unclear.  Will we see the development of ‘trans-humans’?  Access to such technology is also likely to be unevenly distributed: in the early days, most people, except a brave few pioneers, are unlikely to volunteer until both the clinical and real-life benefits and competitive advantages are observed, and in the early stages the technology is likely to be expensive and therefore not available to all.

The trend for human augmentation will continue.  It will set new taboos and change perceptions of what it is to be human.  The development and use of all such technology will probably be controversial and much debated, but as our understanding of it increases and the associated costs go down, one thing is certain: people will start using it for a variety of different reasons – some for genuine therapeutic applications, others for simple competitive advantage.







Globalisation isn’t new.  As a process it has been running for thousands of years.  People have always travelled the earth, and increasingly they have been able to travel further and further.  Communication systems have grown in complexity and spread out to cover the world.  Goods, services and ideas have been transported around the globe for a long time.

The Eastern Telegraph Company System – 1899 chart of underwater telegraph cables. 

Up until the 1980s, globalisation was generally limited by how fast a person (or an item) could travel.  Most of the exchange of goods and people was through transport systems.  Although the telegraph existed, its use was generally limited; telegrams (although understood by most people) weren’t a hugely common means of communication.  Eventually, this gave way to the telephone, which to begin with generally facilitated better communication within a nation.  However, in the 1980s, with the development of satellite technology, people were able to communicate all around the world, initially through telephones and later through computers (although the bulk of the internet was established through a continually improving cable-based infrastructure across much of the world).  Television’s influence grew wider still with the development of satellite TV.

These developments meant ideas and information became shared ever more quickly.  Today, internet-based services and virtual collaborations mean businesses and projects form without any geographical boundaries.  Through YouTube and Twitter, the scope to ‘go viral’ and have a single idea or product reach billions in a matter of days, even hours, is a real possibility.  But what does this coming together mean socially?  Are we working toward a ‘singular’ global society?  What does the ease of access and spread of global communications mean for the diverse societies found around the world?

Globalisation and society.

Until recently, when globalisation happened more slowly, most societies existed as ‘microcosms’ – diverse small environments where different languages, cultures and ideas dominated.  To help understand this trend and what it means, consider an example that every genetics student learns, from the African Great Lakes.

The Great Lakes of Africa cover a very large area.  They are home to a family of fish called cichlids.

The Great Lakes (mainly Lake Malawi, Lake Victoria and Lake Tanganyika) are home to somewhere between 800 and 2,100 species of cichlid.  Nearly all of these species are endemic (meaning they evolved in, and are confined to, a particular place) to the lake they inhabit.  Cichlids are often used in genetics research because they are known to both adapt and evolve quickly, which explains why there are so many diverse cichlid species across the Great Lakes.  Each species has adapted to its own niche, so much so that it will now only mate with its own ‘kind’.  Its own kind might be a blue fish, or it could be more subtle: it might only mate with a fish that does a particular kind of swimming motion.  When such diversification abounds, the distinction between what is and what isn’t a species can become quite particular.

So, what does this have to do with globalisation and diversity?

Now, consider each Great Lake as its own environment.  What would happen if some kind of freak geological rebalancing – say an earthquake – suddenly brought all of the lakes level with each other and they all ran together?  You would have a large number of cichlids, each adapted to its own environmental niche, which had suddenly, without warning, changed.  This would probably be a difficult time for a lot of the cichlids.  There would be extensive competition amongst those seeking to occupy the same space.  In the short term there would be a lot of fighting.  Then, as things stabilised, you would see the forces of selection operating over this wide range of species.  The overall, larger lake environment resulting from all the smaller lakes running together would be more uniform, so the variability of the wider environment would probably decrease.  The number of specialist environments the cichlids could adapt to would shrink, probably leading to a reduction in the total number of cichlid species – a smaller total number of species, but probably with a greater number of fish in each.
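
The thought experiment can be sketched numerically.  In this toy model (the lake sizes and niche counts are invented purely for illustration), each lake starts with a set of niches, one endemic species per niche; merging the lakes into a larger but more uniform environment leaves fewer niches, each won by a single species:

```python
import random

def merged_species_count(lakes, merged_niches, seed=1):
    """Toy model of the lake-merging thought experiment.
    `lakes` gives the number of niches (and so endemic species) per lake.
    After merging, the more uniform environment supports only
    `merged_niches` niches, each held by one surviving species."""
    rng = random.Random(seed)
    species = [f"lake{l}_sp{n}" for l, niches in enumerate(lakes)
               for n in range(niches)]
    # Competition: each surviving niche is won by one species at random;
    # the rest in that niche go extinct or never re-establish.
    survivors = rng.sample(species, merged_niches)
    return len(species), len(survivors)

before, after = merged_species_count(lakes=[40, 30, 30], merged_niches=25)
# before = 100 endemic species across three lakes; after = 25 in the merged lake.
```

The particular winners are arbitrary; the point is the direction of the change – fewer species, each presumably more numerous, once the specialist niches disappear.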

So, is this the case with globalisation?  With the coming together of many different, localised societies, are we seeing a great, global ‘mixing’?  Are all of the different ‘lakes’ of human society, with their different cultures, norms and practices, merging?  Thankfully, this is not being brought about by some catastrophe – some kind of continental drift in reverse.  Instead, the geography is staying the same; it is the movement of people and the transmission of knowledge and ideas between different communities that is bringing everyone together.  And as it does, what does it mean?  Does it mean we all become increasingly similar, characterised by a declining number of global norms, which become increasingly inconsequential?

Globalisation and human society.

Are human environments all becoming increasingly similar?  Is there any evidence for this?  Consider some of the following trends.

Firstly, living habitats could be seen as becoming more uniform.  It is now more common for people to live in the urban environment.  This means people tend to live in smaller habitations confined to smaller areas.  They tend to find employment in the urban environment that is often based on services, moving away from traditional agriculture as the main source of employment.  People tend to live in higher-density housing, perhaps increasingly in apartments rather than houses.  (If you’re interested in facts and figures for this trend, take a look at the excellent UN Urbanisation Prospects database.)

Second, while the physical environment most people find themselves in becomes smaller and more uniform, this, combined with trends in current employment patterns, drives a trend for smaller family sizes.  Today, in most countries there is generally a benefit to having a family of 1-2 children, as such a level probably has less impact on long-term individual earnings.  This is a large shift away from the ‘traditional’ notion of ‘hunter-gatherers’ with large families, which has often been the basis for larger family sizes in most societies around the world.  The trend is further reinforced by increasing awareness of global norms for family size.  As more people get TV, they see entertainment programmes – sitcoms and the like – that generally show families with 1-2 children; through such media, the ‘traditional’ family size reduces towards ‘developed’ levels.

Third: language.  As people move more and gravitate to densely populated areas, the number of rare languages sadly seems to decline.  Along with the languages go the cultures and beliefs practised.  The young look to modernity and opportunity and leave behind the ‘old ways’.  This is further facilitated by modern communications.  Deceptively simple devices like smartphones and tablets, and increasingly affordable computing power, mean that almost everywhere, anybody can access a network.  With such tools, quick and accessible communication thrives on a language that is easy to communicate in.  Symbols, emoticons and general overarching conventions dominate, allowing messages to be conveyed to as many people as quickly as possible.  At present, we could be seeing languages blend into a handful of macro-type forms that are easier for the mainstream to understand and that can blend into each other for ease of mass communication.

The Enduring Voices project, part-funded by National Geographic, predicts that by 2100 half of the world’s 7,000 languages may have disappeared.
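
As a rough check on what that prediction implies, here is the arithmetic under two illustrative assumptions (a baseline year of about 2015, and a steady exponential decline rather than an uneven one):

```python
# Implied rate if half of 7,000 languages vanish between roughly 2015 and 2100,
# assuming a steady exponential decline (both assumptions are illustrative).
years = 2100 - 2015                      # ~85 years, i.e. the half-life
annual_rate = 1 - 0.5 ** (1 / years)     # fraction of languages lost per year
first_year_losses = 7000 * annual_rate   # roughly 57 languages in the first year
```

In other words, the prediction amounts to losing on the order of one language a week, which is why the trend is so often described as a steady erosion rather than a sudden collapse.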

Finally, and perhaps even more contentious than some of the trends described above, there is the issue of politics.  Is political choice changing?  Are the ideas and values of political parties becoming increasingly similar?  In the UK, for example, the political differences between the two main parties, Conservative and Labour, used to be quite stark.  But in the past 20 years both parties have become increasingly centrist on a wide variety of issues.  Labour has become more embracing of trade and business, whilst the Conservative party has become, generally, less conservative on issues of immigration and perhaps social care.  Has this trend been driven by parties having to adapt to the general consensus of the electorate, which is, for the reasons described above, more centrist itself?  Does this mean that, politically, we can expect the parties to become more like each other?  It’s hard to call – this certainly remains an interesting trend that warrants more thought…watch this space on that one too…

So what?

As a species, are humans becoming less diverse?  Well, perhaps yes and perhaps no.  The trends described above generally describe change at the national level.  In the future, what people define themselves as, and the norms they subscribe to, could be less bounded by the norms, traditions or rules of their locality.  Nationality, for example, could be a less significant marker of who a person is as, increasingly, people subscribe to a more global system of norms.  But in a potential population of 9 billion, what’s ‘normal’ could be huge – the ‘mainstream’ is, and will be, massive.

In a lot of ways, ‘standardisation through globalisation’ is a good thing.  It increases economic opportunity and standards of living for billions.  At one level, traditional diversity may be sacrificed to the majority vote, but does it mean people generally live longer, are generally healthier and more tolerant?

However, the process of change is never smooth, quite apart from the loss of unique languages, species and customs.  Many of those who practise and protect such things do not want them to disappear.  This can lead to many outcomes: at one end, peaceful awareness-building and campaigning to increase protection; unfortunately, violence is at the other.  That is the challenge of the change that globalisation brings – as the environment becomes increasingly uniform, what does this mean for the vast number of beautiful and exotic creatures that populate our earth?

And one final point about the cichlids.  Another interesting thing about them is that, despite the differences between them, they are in the main very similar genetically.  This means that they could interbreed if they chose to.  But they don’t choose to.  Sexual selection is very strongly in operation, which means that behaviour becomes the most significant factor in separating species.  Despite all the trends pointing to everyone becoming increasingly similar in terms of global norms, there is a counter-trend: the growth of interest-based activities.  One of the fastest-expanding industries globally is based around what people choose to do in their spare time.  In the future, could people be more like the cichlids – could people meet and marry and form families only with those that share the same interests as them?  Probably, but this wouldn’t lead to different types of human species – probably not, at least not for about a million years, and Warcraft isn’t likely to be in its present form by then, is it?





In Star Trek, they don’t need money any more.  The logic for why isn’t particularly clear, but the main idea is that by the 23rd century humanity has moved away from money and instead uses ‘credits’.  These credits are generally based on ‘work’, which people are encouraged to do for self-advancement rather than aggrandisement through wealth.  Another feature is that the free market no longer exists.  A further aspect of the Star Trek world is that all this takes place within a federation – a single, political earth – and perhaps this is another thing the creators thought could conceivably allow a system without money.  The predictions made in Star Trek don’t really go much further than this, but there are interesting speculations on how the federation could conceivably have moved away from money.


Ok, so this isn’t the most specific of models for the future.  But it does expose a few interesting things about where we are now and how they could change.  Firstly, we are currently dependent on the strength of currencies (at present this is principally the dollar) and the performance of the markets to dictate value.  Second, we are still at a level where national economies compete with each other.  And finally, today, people generally accumulate wealth in a manner that isn’t necessarily proportional to how much they physically ‘work’.

To understand what could happen to money in the future, it is interesting to see how both money and the idea of ‘value’ have evolved.  Roughly, it went like this: barter, stone tablets, metal coins (and paper money), the gold standard, the dollar.


To begin with, people bartered, exchanging what they produced and made for the other things they needed.  As this happened more, commodities started to arise – items used by almost everyone, such as salt, spices, tobacco or tea.  These had values that could be measured more easily, but still had issues surrounding their weight and perishability.

Use of coins and paper money

This led to the use of money proper when, from around 700 BC, metal coins started to appear.  This was because metal was relatively available and didn’t perish.  Each coin was made to represent a certain value, enabling people to compare the costs of the items they wanted.  And, as more and more people started to use them, cities and countries began to adopt them as a means of currency.

With coins came the first really global currency.  From the 17th to 18th centuries, the famous ‘pieces of eight’, or Spanish dollars, were used in trading throughout the Americas, Asia and Europe.  The spread of the Spanish dollar was enabled both by Spain’s dominance of the world stage and by the quality and purity of the silver used to form the coin, which meant it held its value well.  In parallel with coins, however, paper money was also being used in some countries.  Paper money is thought to have originated in the Tang Dynasty in China around 700 AD, though the most circulated and recognised form originated around 1100 AD in the Song Dynasty.  These early notes were effectively promises to pay the bearer in coins.


 An example of a Song Dynasty ‘Jiaozi’, the world’s earliest paper money.[2]

Money and gold

The thing is, as paper money became more popular, the notes themselves became the markers of value.  They were exchanged for coin less and less, and more and more became valuable in their own right.  But this value had to represent something.  To begin with, it made the most sense to tie it to the most valuable commodity – gold, or sometimes silver.  From the 18th century onward, coinciding with British imperial rule, representative money was backed by a government or bank promise to exchange it for a certain amount of silver or gold; the British pound, or Pound Sterling, for example, was once guaranteed to be redeemable for a pound of sterling silver.[3]  Governments and banks thus used the ‘gold standard’ as a measure of value.  This system underpinned the majority of financial systems around the world, with most countries taking the value of their representative money from the gold standard; the international gold standard collapsed during World War I, but versions of the system persisted until around 1944.

Representative money

After 1944, ‘commodity money’ changed to ‘representative’ money.  Put simply, this meant money itself no longer had to be linked to a store of value like gold.  The change occurred shortly after World War II, when the ‘Bretton Woods’ system was developed to rebuild the international economy.  This system of value management established a series of rules and financial relations between the world’s major industrial powers.[4]  In simple terms, it meant that each country maintained an exchange rate for its currency that was tied to the US dollar.  In 1971, this led to the removal of the gold standard altogether when the US terminated the convertibility of the US dollar to gold, establishing something known as a ‘fiat currency’.  ‘Fiat’ is Latin for ‘let it be done’, and ‘fiat money’ differs from representative money in that its value is given by a government.  This means that ‘legal tender’ laws can be enforced by a state, and the refusal of legal tender in favour of some other form of payment is technically illegal.[5]

Roughly, this is where we are today.  Governments now assign values to currencies, and they do this mostly by taking the value from the dollar, which is generally held to be the global reserve currency.  Although there have been challengers to the dollar as the world’s most traded currency (the Japanese Yen and the Euro in recent times), it has remained the de facto world currency.  In 2012, 61.1% of official foreign exchange reserves were held in dollars, compared to 24.3% in Euros, 4% in Pound Sterling and 4.1% in Japanese Yen.  It is worth noting, though, that currency controls are currently in place for both the Chinese RMB and the Indian Rupee.

So what for the future?

So this takes us up to where we are now.  What does it mean for where we could be in thirty years’ time?  Could the Chinese Renminbi be the dominant reserve currency?  Could Bitcoin represent a new ‘virtual’ value, banked through distributed servers?  Or could 3D printing and new measures of work lead to a ‘Star Trek’-style global economy (stick with me on that one…)?

What will be the dominant currency in 30 years’ time?

As discussed, the dollar is currently the most commonly held reserve currency.  As a result, the US economy receives economic benefits from having ‘reserve currency status’: the US can run slightly higher trade deficits, and its economy generally benefits from the number of dollars being bought by other countries.  However, how long is this likely to last?  The growth of the Chinese and Indian economies could see the development of other currencies that are increasingly traded.  But for both countries a lot needs to happen first.

Currently the Chinese economy benefits internally from having currency restrictions.  These make it (relatively) easy for the Chinese government to control capital flows and interest rates, and mean the economy can benefit from setting low interest rates for foreign businesses and cheap capital, which drives growth.  As China still requires a high level of economic growth for both industrial expansion and internal stability, it is likely that for at least the next 10 years it will keep its current currency restrictions.  Later on, though, probably in the next 10–20 years as the Chinese economy starts to slow, it will become more in the national interest to trade the renminbi globally to accrue the economic benefits of a traded currency.  This is also the case for India, where most of these issues hold true.  However, what may become key as we move further out into the future is the bankability of the institutions in each of these countries.  One of the main reasons the dollar is such a trusted currency is the resilience of, and the associated degree of trust in, the US Treasury.  For investors to hold the same degree of trust in the RMB and the Rupee, significant developments would be needed in the accountability of, and overall trust in, both Chinese and Indian institutions.  From today’s viewpoint, India probably has the edge in making such strides, despite current perceptions of corruption, due to its liberal democracy and attitude to the free market.  This is something the Chinese Communist Party has a long way to go in embracing, and a critical uncertainty for the long-term viability of the Chinese economy.

But the increased significance of China, India and other strong economies does not stop the role of the dollar being debated, especially in the light of the 2008 financial crisis.  For example, since 2009, various possible alternatives to the current system have been suggested.  These include:

  • Diversifying the list of currencies used as reserves and using agreed measures to promote major regional financial centres.
  • The development of a supra-national reserve currency issued by international financial institutions.
  • The use of the International Monetary Fund’s Special Drawing Rights (a currency basket comprising dollars, euros, yen and sterling) as a super-sovereign reserve currency.
  • A petro-currency, backed by oil reserves in oil-producing countries.

These ideas represent a range of alternatives, some perhaps with a greater degree of political bias than others.  What’s interesting is how they illustrate that a supra-national or super-sovereign alternative to a single national currency is increasingly being debated.  Given the continued growth and significance of the global market, this could be an increasingly significant trend, and one which could also drive the development of virtual currencies.

Virtual currencies

If alternatives to the dollar are being debated, and if globally we are starting to seek new ‘benchmarks’ for value, could this lead to a new currency being created?  Given the national advantage a reserve currency creates, could it be possible that the next one is no longer based on a single national currency at all?  This sounds outlandish but, with the development of online currencies, value could be tied to something that is no longer based on the performance of a single state.  Take Bitcoin.  For those who don’t know what Bitcoin is, it’s an online currency that has no central bank and relies solely on an internet-based peer-to-peer network.  At present, Bitcoin is the most widely used alternative currency and, as of March 2013, its monetary base is valued at over $800 million.[6]
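How can a currency work with no central bank?  In Bitcoin’s case, the peer-to-peer network agrees on the ledger through ‘proof of work’: computers race to find a number (a ‘nonce’) that makes a block’s hash meet a difficulty target.  A heavily simplified sketch of the idea (the real protocol uses double SHA-256 over a structured block header and a numeric target; the string block and hex-zeros difficulty here are illustrative assumptions):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Toy proof of work: find a nonce such that SHA-256(data + nonce)
    starts with `difficulty` hex zeros. Lower targets = more work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # proof found: cheap to verify, costly to forge
        nonce += 1

# A 'transaction' standing in for a real block header.
nonce, digest = mine("Alice pays Bob 1 coin", difficulty=4)
print(nonce, digest)
```

The point of the design is asymmetry: finding the nonce takes many hash attempts, but any peer can verify it with a single hash, which is what lets strangers on a network trust the ledger without a bank in the middle.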

A virtual currency could have significant implications for how we use money in the future.  Perhaps it favours moving away from paper money and coins completely.  Perhaps it challenges the notion of ‘legal tender’.  It doesn’t quite get us to the ‘Star Trek’ future yet, but it is an important step.  And if you take this trend and compare it to the development of 3D printing, that’s where things start to get really interesting.

3D printing, money and value

Forgive me; like Bitcoin, I’m sure 3D printing is something that everyone, everywhere has heard of.  But for anyone who hasn’t: 3D printing, or additive manufacturing as it’s known technically, has the ability to revolutionise manufacturing.[7]  Although this is by no means a given for the future, it could have some interesting implications for what we need money for and what we value.


3D printing systems such as this will be increasingly common over the next 10–20 years.

In the next 20 years, 3D printing has the potential to revolutionise consumption.  Instead of needing people to make objects, a lot of material requirements could be met by ‘simple’ home-based units.  Imagine a single station that produces your clothes, tools and utensils.  Imagine another in your kitchen that produces synthetic protein and makes food to order.  You could even have another in your bathroom dispensing all of your immunological needs, linked to an internet-based disease tracker that tells you what diseases are circulating.[8]  In this world, what would you actually be paying for?  You would most probably undertake virtual transactions, paying for access to particular licences/blueprints/versions.  For this, you would go to particular vendors that guarantee access to the latest and/or best ideas.  At the same time, a lot of the ‘basic’ items you require could probably be made without paying for blueprints at all.  Open-source, free-to-access assembly codes for things like shoes, plates, even furniture would be available.  This would mean you could produce what you need for the cost of the substrate (and, of course, the initial cost of the 3D printer).

This creates the image of an interesting future, but what does it mean for money?  Ideas and licences are likely to be purchased much as apps, music and books are purchased today – mostly through online transactions in a huge virtual marketplace.  In such an environment, would a virtual currency be even more likely?  Something like Bitcoin will probably exist, but with accountability to some kind of authority.  The question is, would this be national, global or virtual?  And then, if we go back to what money was initially made to do, what does it mean?  If people just need money to pay for certain things and everything else is free, where does value reside?

Could this be a stepping stone to the ‘Star Trek’ future?  Could this be the first development of ‘credits’?  In such a future, ideas become even more valuable.  Could people start to offer their time as a means to earn high-value credits?  Could people do the work they require – participating in ideas-generation sessions, creative work, or clinical trials (and yes, please note, I am currently available for all three!) – for shorter and shorter periods?  Or could higher credits be earned for more unpleasant tasks?  And what would this mean for how people earn their wealth today?  It is estimated that in 2012, the world’s 100 richest people earned $240 Bn.[9]

The Star Trek future of money

The idea of money – of what money is – has come a long way and has meant different things at different times in history.  The idea posited in Star Trek, that we might not need it in the future, may not be as crazy as it sounds: essentially it all depends on what we value.  In the future, should manufacturing change and become localised, and money move to a more virtual form, we may see people being able to sell their time with greater and greater efficiency, which could benefit their quality of life and also challenge today’s grossly unbalanced income differentials.

Perhaps, increasingly, the big question will be: what do people value?  Could we be moving to a world where people place greater importance on ‘time’ and self-advancement than on the pursuit of wealth?  Technology could increasingly make this possible, and the values we hold dear could become more and more based around our notion of ‘self’ and how we wish to live.  Perhaps we’re already there – debates around ‘gross domestic happiness’ and the pursuit of a balanced life are certainly more common today than they were five years ago.  How long, then, till we can do without money?  Will we see a new system of value arising in the next 30 years?  Well, if you consider the increasing importance of happiness metrics to policy formation in Western countries, it could be sooner than that…




[4] states.




[8] See ‘The Chinese dinosaur’ for a short story exploring these trends.

[9] World’s 100 richest earned enough in 2012 to end global poverty 4 times over –





In the future, as the capability to ‘hack’ life increases who or what will regulate ethics?

Biological sciences have seen incredible advances in the past fifty years. Since Watson and Crick first determined the structure of DNA in 1953, there have been countless advances that have allowed companies, states and even interested individuals to start experimenting with ‘life’ as we know it. Traditionally, most advances were made in academic research institutions, but over the years more and more corporate research has been conducted, meaning that breakthroughs now occur in both sectors. In recent years, with the greater pooling of knowledge and research through the internet and ever more commercially available equipment, a third sector has started to grow: the interested ‘hobbyist’ who tinkers in their basement to produce different genes and even life forms. Today this practice is often referred to as biohacking, and is generally conducted by a community of interested individuals rather than by government, academic or commercial groups.

Alba, the fluorescent bunny, was genetically engineered in 2000; she was produced by the artist Eduardo Kac.

The term ‘biohacking’ refers to the practice of combining biological research with the modern-day phenomenon of ‘hacking’. However, there are slight differences in meaning. In the classic, computer-based context, ‘hacking’ a network generally focuses on exploiting weaknesses and circumventing controls. The developing biohacker community has a more positive focus: rather than seeking to exploit weakness, it is about seeking to understand how something works and attempting to change it, often simply to see what happens.

The increasing interest in this area, and the potential further development of a ‘movement’ or established community, will probably lead to some interesting ethical discussions in which the state will have to play a part. Historically, the notion of ‘genetic engineering’ has been a controversial one in many cultures. In the UK particularly, the notion of ‘Frankenstein foods’ has been a long-standing political issue. Generally, at least in democratic states, discussions and policies to do with ‘life’ tend to be driven by the view of the populace. In Western democratic systems, the prevalence of Christianity tends to shape people’s beliefs, and means that political choices are often made to contain and control such research with a view to keeping it at a ‘morally acceptable’ level within legal guidelines. These guidelines seek to reflect the general consensus of the society they serve. This is especially the case in the UK where, because genetic manipulation is seen as a political issue, ethical standards are developed to keep its application confined to particular types of research – the rules for genetic engineering in the UK, set by the Medical Research Council, state that it is legal to genetically engineer mice, cows, pigs, sheep and goats. This means that today the rules for what is allowed in research on ‘life’ are set, or driven, by the state and, in a democracy, are representative of the will of the people.

So, at the state level, there are checks and balances in place to ensure the rules on ‘life’ research reflect the overall will of the people. But in the future, in a world of increasing global interconnectedness, greater individual empowerment and increased diversity of corporations and services (see the Open Futures Project for more details behind these potential trends), who will regulate, or ensure a consistent ethical code of conduct for, research? Before we get to this, let’s look at the other players in the life-research game and consider who could be the key agents of change in the future.


Research into ‘genetic engineering’ has long been controversial, especially when conducted by a corporation. The perception is that a company, generally a generic multinational, is manipulating and creating new forms of life purely to make more profit. This may or may not be true. Certainly, a company’s bottom line is profit, but current legislation seems to be quite good at holding companies to account, generally because they have to operate and/or trade somewhere in the West. For example, one of the more controversial ‘agri-bio’ companies of recent years, Monsanto, is frequently moderated by the US legal system to avoid the undue release of genetically modified seeds and foodstuffs. So corporations wishing to research and release genetically modified life forms in certain countries and markets will face different rules and regulations. And, similar to governments, the larger a company becomes, the more subject it becomes to its own reputation and, perhaps more significantly, its shareholders.

The rest of the world

As genetic research is generally subject to legal checks and balances, and to the political considerations of societal beliefs on the sanctity of life, its usage varies by country. Emerging players around the world are developing genetic research and applying different measures of judgement to its application. For example, in China it is considered ethically unreasonable not to use any form of clinical practice that can help improve a person’s life, so the alteration of the genome to improve human life is considered perfectly acceptable.

Around the world there is a range of different national policies on the genetic manipulation of both ‘life’ in general and the human genome specifically, and on how this research is applied to medicine as well as to the production of food and animal products. Many states do not apply the same laws or ethical principles to how research is conducted. This is not to say they don’t have frameworks; it is that their frameworks differ, and this difference amounts to a wide range of differing laws (sometimes no laws at all) that allow multinational corporations and individuals to conduct research that would be illegal in another country.

The Biohackers

Now we return to the biohackers. As a group they represent a relatively new entry to the field and, interestingly, they work for neither a corporation, a university nor a government. They are a new breed (not a genetically altered one – well, not yet!) in the area of genetic research. Essentially, they are enthusiasts empowered by the increasing availability of genetic research equipment and material. Using material online, people are learning to develop and grow their own forms of life; they are hobby cloners – the biohackers, the people who are learning how to ‘crack life’ in their own time. Some are motivated by interest, others by profit; either way, they represent a new movement, a group at the individual level who may be empowered to deliver considerable social change, especially if they choose to experiment on themselves and other volunteers. Interestingly, the biohacking cause has been likened to the development of steam power in the 18th century, when interested individuals such as James Watt developed their own research and ideas that helped power the industrial revolution.

What does this mean for the future and who, or what could regulate research on ‘life’?

Commercial drivers for genetic engineering are likely to continue. The demands of a growing planet with fewer resources could drive the need to produce more efficient or hardier forms of life (or perhaps the development of synthetic meat) – see ‘The Open Futures Project, The Natural World’. At the same time, the growth of ‘interest-based’ researchers linked through online communities is likely to increase the speed of research and the pace of new developments and discoveries. As these occur, society and culture will adapt. What is considered controversial today, and where there are moral objections to genetic research, will probably alter over time. How will each generation respond, and what will be taboo? How did people react to Dolly the sheep, or to Alba? What will be extreme or taboo in the future? How will society respond to fringe groups and lone inventors who create new forms of life or augment their own bodies with biological or physical enhancements? At what pace could such challenges take hold, and how could they challenge the norms and values of societies such as the West which, although open, still takes many of its key philosophical and moral steers from the Christian faith?

However, as well as all the potential advancements and possible social changes, it is worth reflecting on the potential risks. As more researchers become confident and empowered to alter and change life, new and altered species are likely to come into existence. The risk of dangerous diseases being created, by accident or design, by either a rogue state or an unhinged individual increases. The possibility of such a risk will probably lead to state and regulatory interest in movements such as biohacking, as the state undertakes its most basic remit of providing security to its citizens. So, either through the state or the market, at some point legislation and monitoring will occur. States will increasingly have to alter and adjust their positions to control and provide order to a game that enrages many of their citizens, whilst protecting the rights of those who seek to play. The development of ‘biohacking’ as a movement or a cause could be the first step in a social change that exposes an ongoing political tension for many Western states, which will have to arbitrate between the Bible and the biohackers.