On cyborg anthropology, citizen journalism and the universal basic income in relation to AI automation.

This is just the tip of the iceberg.

“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them.” 
― Terry Pratchett & Stephen Baxter, The Long Earth

 

Wait! Am I a cyborg?

Cyborg anthropology is the study of the relationship between humans and technology, computers in particular. It is as much an in-depth look at human beings as a species as it is a study of our past and its prophetic insights into our future.

Rather than thinking of a cyborg as a physically augmented human, think of cyberspace as an augmentation of the human mind. With that in mind, cyborg anthropology is also a lens for spying on the horizon and whatever lies just beyond it.

Cyborg anthropology is quite a sweeping idea and can encompass a variety of subjects; in this case, however, the lens is aimed at the ubiquitous availability of information we are currently immersed in as a species.
 

Tech is cheap but so is life

This is really about the ubiquity of cheap technology finding its way into the poorest parts of the world. Technological stepping stones are giving developing communities the benefits of, for want of a better phrase, more advanced infrastructure without the big payouts such infrastructure usually demands. The spread of the telephone through the developed world set off a domino effect in technology.

That domino effect laid a strong foundation for the smartphone. A satellite launched from the Russian cosmodrome in southern Kazakhstan, using parts developed by NASA that were made with the help of COMAC (the Commercial Aircraft Corporation of China), now allows a young businesswoman in Kenya (ranked 6th on the extreme poverty index) to support her family using microfinance that she controls from her smartphone.

Though there isn't currently strong backing for independent regulation in most of the affected areas, it is still a stride out of poverty for some of the worst-affected people on the planet.
 

Reports just in: Cyborg rights

Citizen journalism is a natural progression of an interconnected, globalised world, combining the twin powerhouses of cheap tech and social media. The debate over its validity as objective journalism is as complex as the stories it covers, so sidestepping that wriggling can of worms (it warrants its own post), it has contributed to some earth-shattering revelations.

Take the Arab revolutions, popularised by the term Arab Spring, which started on 17 December 2010 with the Tunisian revolution and energetically spilled over into 5 other countries, with fragmented enthusiasm in a further 15 countries in the region. We now stand in the shadow of what has largely been a colossal failure, leading many to jump on the bandwagon of calling the current climate in the region the Arab Winter, a term coined by Prof James Simms Jr.

The effect of citizen journalism on this particular occasion was staggering, not only because of the speed, depth and scale at which it travelled, but because of the content. These were people vindicating themselves through technology and social media. For the first time it was obvious how much power the internet had when used this way, so much so that within the time frame of the revolutions, Egypt, Libya and Syria shut down the internet within their borders.

Tunisia, Saudi Arabia and Bahrain, meanwhile, breached confidentiality laws, hacked into accounts, and arrested and allegedly killed some agitators. This rehashed the interesting question of whether the internet should be deemed a human right, a conversation started at the 2003 World Summit on the Information Society.

The question was resolved in 2016, when the UN Human Rights Council passed a non-binding resolution condemning governments that withhold access to the internet. This means that if any government intentionally disrupts its people's access to the internet, the UN will cross its arms and pout, because technically no law has been broken.

 

Stepping stones

If we think about the technologies approaching us at an ever-growing pace, we can start to think about the impact they will have globally. The main piece of future technology we hold, tentatively, by the throttle is AI. Tentatively, because AI developers have already reached the point where they don't fully understand the reasoning behind the decisions their programs make. Modern AI systems are structured as artificial neural networks.

In the same way a baby's potential neural networks make connections through repetitive learning, the AI, metaphorically speaking, does the same, so any reasoning happens inside its virtual mind. The University of California, Berkeley, amongst others, has created a process for going through the machine-readable reasons the system had for doing whatever it did in any one scenario and analysing them.

The long strings of code are then deciphered into legible sentences, allowing future algorithms to provide an interface for asking questions and deciphering answers. Up until the AI learns how to lie, of course.
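As a loose illustration of that "connections through repetitive learning" analogy (a toy sketch, not any real lab's interpretability tooling), here is a single artificial neuron in Python that learns the logical AND function by nudging its connection weights every time it gets an answer wrong:

```python
# Toy perceptron: one artificial "neuron" learning AND by repetition.
# Purely illustrative -- real systems stack millions of these.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights (the "learned" part)
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # strengthen or weaken each connection based on the mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # learned AND: [0, 0, 0, 1]
```

Nothing in the final weights says "this is AND" in plain English, which is exactly the legibility problem the Berkeley-style work is trying to crack at vastly larger scale.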

The point still stands that we are on the verge of a paradigm-shifting technology with far-reaching consequences in the coming years. Moore's law has stayed relatively on point since its inception in 1965 by Gordon Moore, co-founder of Intel: it describes transistor counts doubling roughly every two years (often quoted as performance doubling every 18 months), creating the exponential curve you may or may not have come across.
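That exponential curve is just repeated doubling: count(t) = start × 2^(years / doubling period). A quick back-of-the-envelope sketch in Python (the starting count is the oft-cited Intel 4004 figure, used here purely as an illustration):

```python
# Moore's law as repeated doubling: count(t) = start * 2 ** (years / period)
def moores_law(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# Illustrative: ~2,300 transistors (Intel 4004, 1971) projected 20 years forward
print(moores_law(2_300, 20))  # 10 doublings: 2300 * 1024 = 2,355,200
```

Ten doublings turns thousands into millions, which is why the curve looks flat for ages and then goes vertical.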

A graph depicting the growth of AI automation counted in unemployment levels and the volume of our collective impending doom counted in decibels

This translates quite nicely into sales projections over the same period of time (which is quite obviously a glitch in the matrix. WE ARE IN A SIMULATION!). The higher the top of the curve climbs, the cheaper the tech at the bottom of the curve becomes. When AI and AI-integrated tools become as readily available as the Xolo X900 (yes, that is regrettably pronounced “yolo”), Intel's first low-cost smartphone, the need for human input in a lot of industries is going to be severely cut.

That goes even for poorer countries, where rampant free-market capitalism exploits people in need for cheap labour. When AI automation becomes more economical than cheap labour abroad, and the factories where that cheap labour was sourced start upgrading to AI-automated machines, what will happen to the global citizen?

 

The rise of the automaton.

So when exactly are we going to be sidelined by the machines? Is there a date we can collectively put in our calendars as the apocalypse? The short answer is no. The somewhat less short answer is we'll have to get back to you. Among the wild speculators, the Guardian suggested 6% of all jobs will be automated by 2021, while the Metro chimed in that it would be 40% by the dawn of 2030.

One of many studies floating around is the Future of Humanity Institute's survey (yes, apparently that is a thing that is real), published on arXiv (which you can, if you so wish to have your mind melted, read here). The survey covered 352 machine learning researchers across the world. They predicted a 50% chance that machine learning will outperform humans in all tasks within about 45 years. That's the same as your chances of running for a bus in London and catching it.

The finale won't come for 120 years, according to the North American researchers, when all jobs will be man-handled/gender-neutral-handled/...bot-handled by complete high-level machine intelligence (HLMI) automation; the Asian researchers in the survey put that eventuality 44 years earlier. The study was published on 24 March 2017, so pencil in somewhere between 2093 and 2137 for the potential apocalypse and/or utopia.

Ironically, the survey asked the researchers about their own redundancy, which seems somewhat mean-spirited, but it had to be asked: when will machine learning researchers be automated? 88 years is apparently the answer; however, if you were asked that question, it would be justifiable to fudge the numbers a bit to give yourself an extra decade's worth of funding.

This is akin to the scene in the first Avengers movie where Iron Man push-starts the turbine to get the engine up and running again: the half second he gets to exclaim he may be in a bit of a pickle before the weight of humanity's unemployment smacks him in the back, leaving him winded and somewhat broken inside... wait, I'm mixing metaphors. The point still stands that passing the torch is going to be difficult, and in more places than I care to think about it is going to be exceedingly painful.

Within the next decade, quantum computing is pegged to achieve quantum supremacy, the point at which a quantum computer trumps the strongest classical supercomputers. When it does, it will be able to boost normal computers significantly through connected cloud computing. Using quantum simulation, quantum-assisted optimisation and quantum sampling, machine learning will get a shunt into the future (read in more depth in this article from Nature: the international weekly journal of science).

Also within that period, a growing list of tasks will be struck off as automated. It starts out relatively mundane, but an obvious trend emerges.

  1. Learn how to play Angry Birds – 3 years
  2. Transcribe speech – 7 years
  3. Translate (versus an amateur) – 7.5 years
  4. Read text aloud – 8 years
  5. Write a secondary-school-level essay – 9 years
  6. Explain its own actions in video games – 10 years
  7. Replace retail salespeople – 15 years
This means that a 3-year-old child going into nursery this academic year (2017) will, if this trend is accurate, leave school with no available unskilled jobs and a waning skilled job market. That is going to put intense strain on the education system as a whole and on its ability to ready the generations to come; as currently framed, it is an unwinnable scenario we all need to learn how to tackle. It will probably create a culture of lifelong learning in the coming decades.
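Mapping those survey estimates onto calendar years (with the 2017 publication as the baseline, and assuming a child leaves school at around 18) makes the squeeze easy to see:

```python
# Rough calendar projection of the survey's task-automation estimates,
# counted from the 2017 publication date. Estimates as listed above.
SURVEY_YEAR = 2017
estimates = {
    "play Angry Birds": 3,
    "transcribe speech": 7,
    "translate (vs amateur)": 7.5,
    "read text aloud": 8,
    "write school essay": 9,
    "explain game actions": 10,
    "replace retail staff": 15,
}
for task, years in estimates.items():
    print(f"{task}: ~{SURVEY_YEAR + years}")

# A 3-year-old starting nursery in 2017 finishes school at roughly 18:
print("school-leaving year:", SURVEY_YEAR + (18 - 3))  # 2032
```

Retail automation at ~2032 lands in exactly the year that child walks out of the school gates, which is the whole point.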

 

A free lunch

There are a couple of potential ways of combating the AI automation of the majority of society's jobs, but none really come to mind apart from the universal basic income. UBI is not a new idea, and it is beginning to regain traction in political arenas around the world. The general idea is unconditional free money for all, which to a lot of people coming across it for the first time sounds like an implausible utopian dream.

One of the first incarnations of the idea came in the 16th century from Thomas More, in his book Utopia, published in 1516. Since then it has been batted about in various forms, leading to the modern experimental form we are witnessing today. A little-known fact: in 1969 Richard Nixon almost passed a bill that would have legislated a form of UBI across the U.S., but before he could sign, one of his aides highlighted a study claiming higher divorce rates due to women's financial independence. This was of course not true, but it has been used repeatedly to discredit the idea ever since.

There has also been some stirring among venture capitalists on the subject of universal basic income. Sam Altman is the president of Y Combinator, the Silicon Valley seed accelerator that has invested in companies such as Airbnb and Dropbox. The Y Combinator blog has revealed a pilot project they are funding in Oakland, California; depending on the outcome, they may go on to fund a fully fledged experimental scheme.

The current debate always comes back to an awkward fact for the “for” side: there is no long-term data on whether a UBI is beneficial or sustainable. This study, and others like it going on around the world, will provide the quantitative data needed to back those claims, and will also give a qualitative insight into the transformative capabilities of implementing a UBI.

There is a fierce debate wherever you find the idea of UBI, with valid points on both sides that should be considered and tested fully before implementing it in society; the last thing we want is to cause the collapse of society with one swift blow. The main argument against, which is more a reasonable question than a stone wall, is: how can you possibly pay for this? Well, one answer that has been put forward is replacing the current welfare system.

  • UK welfare budget for 2013/14 (as an example)
  • Total welfare spending: £251 billion
  • Population: 64.5 million
  • Of which children: 15 million
  • Per head (including children): £3,891 annually / £324 monthly
  • Per head (adults only): £5,071 annually / £423 monthly
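Those per-head figures are just simple division over the rounded totals, so they are easy to sanity-check (the adult figure varies slightly depending on the exact adult headcount used):

```python
# Back-of-the-envelope check of the 2013/14 UK per-head welfare figures above
total_welfare = 251e9   # £251 billion total welfare spend
population = 64.5e6     # total UK population
children = 15e6         # of which children

per_head_all = total_welfare / population
per_head_adult = total_welfare / (population - children)

print(round(per_head_all))    # ~£3,891 per year (~£324 per month)
print(round(per_head_adult))  # ~£5,071 per year (~£423 per month)
```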

This is in no way a definitive way of paying for such a mammoth initiative, but it does shine a meek light onto the first step of a debate that is too big to cover in an already wide-ranging post. It's a weighty subject to end on, so it will have to be fleshed out in more depth in a later post, but if this brief introduction has piqued your interest, then check out compass for further reading and basicincome for a way to join in the debate.

Thanks for reading! Don't forget to comment below with your opinions on the topics covered in this post.

Exhaustedly,

Chris from RockdoveRehab