Ray Kurzweil, The Singularity and the accelerating pace of progress

August 16, 2010

Over the last six to nine months American inventor and futurist Ray Kurzweil has had a profound influence on how I see the future and on how I run my life today.  His theories on the evolution of technology and its impact on society, and on the impact technology can have on our health and longevity today, have been around for a while, but it is only this year that I have become familiar with them, largely through reading two of his books (Transcend and The Singularity) and taking the weekly newsletter from his website, KurzweilAI.net.

I have only just finished reading The Singularity, which I found sufficiently profound and thought-provoking that I’m going to summarise it over a series of blog posts, of which this is the first.  (Regular readers will know I occasionally write single posts about books I have liked very much.  This is the first time I will write a series.)


The full title of the book is The Singularity Is Near: When Humans Transcend Biology, and the subtitle gives a hint as to how far Kurzweil goes with his conclusions.  I have very much enjoyed reading the book (despite the fact that it is a) weighty and b) densely written) and have talked excitedly with a number of friends about its various parts.  Those conversations showed me that getting Kurzweil’s arguments across clearly and convincingly is tough, I think largely because of the surprisingly far-reaching nature of his conclusions and the fact that they involve a change in what it means to be human (or, more properly, what it means to be alive).  So I thought I would set them out in a series of blog posts – probably around half a dozen.

In these blog posts I’m going to put his arguments together piece by piece and then finish with the conclusion.  I’ve taken this approach as each of the steps on the way is very important in its own right, and because I think it would be impossibly hard to make the conclusion seem credible in a standard-length blog post.  If you want to cut straight to the chase you can check out the Wikipedia article about the book (also linked above).

The first idea to get across is that the rate of change in life/society/technology has always been increasing exponentially.  I lump life, society, and technology together like that because their rate of change graphs look the same, although the time periods on the x-axis are successively shorter.  The idea of the accelerating rate of change was first introduced to me by Anthony Giddens in my opening sociology lecture at Cambridge back in 1992 and is also something I mentioned on this blog before, when I first came across Kurzweil’s graph showing the continued exponential growth in computing power over the last one hundred years.  I’ve reproduced the chart below because it is (along with its brethren) fundamental to Kurzweil’s argument.

This exponential growth in computing chart is one of many, many examples Kurzweil gives of the exponential rate of change.  In every case he produces historical evidence showing the trend going back over a long period of time, and his argument at this stage is that the rate of change has been increasing exponentially for a long time and there is no reason why we should expect it to stop now.  Other historical examples of exponential increases beyond the one above include the evolution of the universe since the Big Bang, the evolution of life from single-cell organisms to humans, the growth of the US fixed-line phone industry, mass adoption of inventions, DRAM density, transistor prices, microprocessor clock speed, supercomputer power, DNA sequencing cost, the number of internet hosts, the decrease in size of nanotech devices, e-commerce revenues and indeed US GDP.

The important things to note here are:

  • exponential increases are present in all walks of life (note the references to biotech and nanotech above – that will be important later)
  • they are not limited to a single paradigm or technology – e.g. exponential increases in computing power pre-date Moore’s law and silicon substrates and will most likely continue long after the silicon chip becomes as dated as the valves we used to use

Kurzweil makes a big play of the fact that the human mind doesn’t easily grasp exponentials, and that at any given point the rate of change will be experienced as linear, particularly in the early stages before any given technology reaches the ‘knee of the curve’.  In Kurzweil’s mind, and I buy into this, the intuitive ease of understanding straight-line change, coupled with the difficulty of imagining a future radically different from what we have today, combine to make us collectively underestimate the change that is coming.
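A little arithmetic makes this concrete (my illustration, with made-up numbers – not Kurzweil's own data): if a quantity doubles every two years, extrapolating the first couple of years as a straight line wildly underestimates where it ends up.

```python
# Illustrative comparison of linear intuition vs. exponential reality.
# Assume a quantity that doubles every two years, starting at 1.

def exponential(years: float, doubling_period: float = 2.0) -> float:
    """True exponential trend."""
    return 2 ** (years / doubling_period)

# Extend the growth observed in the first two years as a straight line.
slope_per_period = exponential(2) - exponential(0)  # 1.0 per two years

def linear(years: float) -> float:
    """Naive linear extrapolation from the early data."""
    return 1 + slope_per_period * (years / 2)

for y in (2, 10, 20):
    print(f"year {y:2d}: linear says {linear(y):6.1f}, exponential says {exponential(y):6.1f}")
```

After twenty years the linear guess is 11 while the exponential reality is 1,024 – which is roughly the ‘knee of the curve’ surprise Kurzweil describes.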

Another way to look at this is to ask yourself whether people living in 1910 would have believed today’s reality to be possible.

Given the exponential rate of change we can expect people looking back on 2010 from 2060 to see a similar level of difference as we see looking back to 1910.

To close I want to return to the title of this post.  You might have noticed that I used the word ‘progress’ in the header but until this point have been talking only about ‘change’ in the body of the post.  I made that choice because change isn’t necessarily progress, and, whilst I believe that in this case it is, I wanted to separate the two arguments.  Whether or not you believe all this change has been a good thing, and will continue to be a good thing, the most important thing to realise is that it has been happening and, absent some huge shift in government or a huge disaster, will continue to happen.  In fact, absent total annihilation of the human race, the rate of change will probably continue to increase exponentially even if there is a big disaster – the two world wars of the last century didn’t have a noticeable impact on the rate of change.  However, returning to progress, I’m with Kurzweil in believing that the world has become a significantly better place – as measured by human happiness or, more objectively, by the lessening of human suffering.  On average we live longer, eat better, and endure less tragedy (e.g. infant mortality) than our forebears.

  • Anon

    Unfortunately, Kurzweil's argument about an exponential increase in computing power isn't equivalent to the machines being able to contextualize the wealth of data points or knowledge across the Web, and hence to “conscious” machines.

    This is because computing power can be analogized as the speed of the machines. However, without appropriate directional guidance in the machine-learning environment, the velocity of advancement would produce a different curve to Kurzweil's postulations.

    For anyone interested, there was an excellent report in 2008 that presents arguments both consistent with Kurzweil's position and in contrast with it:

    * http://spectrum.ieee.org/static/singularity

    Interestingly, in 2010, whilst the UK government cut the Web Science Institute budget (a JV between Oxford and Southampton Universities), Google and NASA support Kurzweil's Singularity University – so there may be something in his approach, even if it doesn't have all the answers, or even some of them (yet).

  • DrJohnty

    I can’t see that any argument against the exponential growth of technology can possibly make sense. The reason is simple: if we look at the past, the rate of progress has always been accelerating. It just started from a low level, which is why exponential growth has such bearing. To explain this, let’s start with a 1c coin and double it each day. After a week you would have $1.28 – not a lot, but a significant increase. Hardly noticeable! After another week you would have about $164. Now the growth pattern really starts to emerge, because it is about to kick off: by the end of week three you would have roughly $21,000, and by the end of week four about $2,684,354. A week after that you’re drowning in cash with nearly $344 million, and just three days more takes you past $2.7 billion. This is the nature of exponential growth, and it is exactly what has been going on with technology since the very beginning. You hardly notice the growth at first, but the effect becomes increasingly clear. The last century did not equate to 100 years of progress at today’s rate but to around 18 or 19 years, because of the rapidly accelerating rate of progress. I estimate we will make more progress by the mid to late 2020s than we made in the entire 20th century, and that we will do it again by the mid to late 2030s. Ray Kurzweil loves to play with the figures relating to exponential growth, and he has more of an understanding of this area than anyone else I can think of. What to me is very apparent (and maths is not my strong point!) is that Ray Kurzweil is correct when he says that the power of technology per dollar doubles every twelve months, and that the rate of growth is accelerating.
    It seems likely, based on the laws of exponential growth, that our technology will (as Ray himself says) be over 1,000 times more powerful in just ten years – and a billion times more powerful in twenty-five years.
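The doubling arithmetic in the comment above is easy to verify; here is a minimal sketch (my code, not the commenter's):

```python
def after_days(days: int) -> float:
    """Dollar value of a 1c coin after `days` daily doublings."""
    return 0.01 * 2 ** days

# Milestones from the comment: weeks one to five, then three more days.
for d in (7, 14, 21, 28, 35, 38):
    print(f"after {d} days: ${after_days(d):,.2f}")
```

Week one gives $1.28, week five nearly $344 million, and three days after that over $2.7 billion – the shape of the curve matters far more than the starting point.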

  • Thanks for this. I will be coming to Kurzweil's arguments as to why machines will become as intelligent and subtle as humans in later posts. I agree that this is an area where one must make a leap of faith, particularly around the notion of consciousness.

  • Anon

    No, for the simple reason that we can’t reverse engineer the biochemicals that elicit emotions in our brains and contribute to their sense-making, morals, values and consciousness.

    Sure we can broadly map the functional, mechanistic zones (http://content.answcdn.com/main/content/img/McGrawHill/Encyclopedia/images/CE093200FG0010.gif) and the synaptic connectors of the human brain. We can also generate MRI heatmaps of activity, but we have no scientific tools to precisely simulate the biochem-induced emotions.

    That’s why machines can’t be conscious as humans are.

    Moreover, human memory is not directly equivalent to computer storage, so even in this reverse-engineering practice that we’ve adopted as a computing norm we’re missing out on some key context. After all, our memories are intrinsically connected with emotions and values (established, evolving and random), whereas a file or data object in computer storage is tied to either alphabetic ordering or some taxonomic convention (noun, verb, location, etc.) with no emotional values, elicited connotations, time context or randomness.

    As an associated point, that partially explains the limitations of current semantic structures as well as why the NLP in translation software still struggles with grammatical tenses in the subjunctive — since the subjunctive is about uncertainty, personal subjectivity/feelings and time relativism (or we can call it conjunctions).

    As it is, the AI community hasn’t yet started to appropriately include emotions or time conjunctions into the algorithms; conjunctions being distinct from definitive time (@ HH:MM:SS on DD/MM/YYYY) and time periods (from N to N-n or N+n), by the way.

    Ah and by emotions I don’t mean the fuzzy logic that’s currently being used in sentiment engines which are primitive at best and autistic at worst.

    There are other mathematical and code innovations ahead before we can even “reverse engineer” the functional components of the brain in any sort of salient software way — much less factor in the biochemical contributions that make up consciousness.

    Luckily, those innovations are on the horizon……………

  • Anon

    Machine consciousness and subtleties (to deal with ambiguity and moral context of content, for example) will need more than leaps of faith.

    It will need us to innovate new scientific instruments, new mathematics, new code, new constructs of value (moral, perception, tastes and more) and new socio-economics.

    If we, for example, proxy the Web as being some type of “Global Brain”, a coalescence of global community values and we recognize that there are still unsolvables and unknowns about the natural human brain………..Then we may realize that our current approach to consciousness (whether organic or man-made) is both incoherent and incomplete.

    There may be useful links here:

    * http://knol.google.com/k/the-global-brain-the-s

    For where machine intelligence is it's worthwhile tracking the progress of IBM's Watson machine:

    * http://www.nytimes.com/2010/06/20/magazine/20Co

    For machine consciousness there's missing mathematics and code that can't yet be found in any of the Semantic structures out there — despite the advancements and deployments in AI and NLP.

    The AI community grasps this.

    Ongoing discussions center around Wolfram Alpha's algorithms and whether we will be able to compute the Theory of Everything (in a sense the Singularity):

    * http://www.ted.com/talks/stephen_wolfram_comput

    At present, the binary and Bayesian roots of code mean the machines are efficient calculators rather than effective valuers. There's a difference between “efficiency” (computational power) and “effectiveness” (solution provision), and likewise between engines that are calculators and those that are valuers.

    Let's also throw this into consideration: ever since 1950 Alan Turing has perplexed us all with his question, “Can machines think?” He then provided us with the principle that a machine capable of holding a text-based conversation as if it were human would be deemed to pass the Turing test if it could fool around 30% of human judges.

    The more interesting approach would be to ask, “How can we program machines to make sense?”

    Sense => consciousness.

    It segues with the Singularity because it gets down to our definitions of what constitutes “intelligence”, how we test for it, if our current notions of it are coherent and complete, and importantly how intelligence transforms and mutates (underpinned by inherited DNA in organic matter and tagged algorithms in machine matter).

    Anyway, it's all fascinating and challenging stuff!

  • Thanks again for a great comment. Both the Watson machine and Wolfram's computation are fascinating and a clear advance on where we are today, but I agree some considerable way short of what most humans would regard as conscious.

    Kurzweil's main argument here (as I'm sure you know) is that by reverse engineering the human brain and copying the salient parts in software we will create something that is equivalent to the human brain in raw intelligence (albeit with a number of big advantages in terms of speed, data entry etc.), and that such a computer will inevitably seem just like a person and is therefore to all intents and purposes conscious.

    Do you buy that?

  • Anon

    Japanese knotweed grows exponentially; by comparison, the Amazon rainforest doesn't. That's an example of exponential growth not necessarily bringing beneficial developments with it, and it also points us to the need for cultivation processes in our drive towards technological progress. That too is what I mean about speed being different from velocity; the latter encompasses direction (i.e., in the case of tech, some form of cultivation or curating).

    The proliferation of technology (data content, devices, server farms, network cabling etc) is undeniable and Eric Schmidt referenced this recently:

    * http://techcrunch.com/2010/08/04/schmidt-data/

    Theories of exponential growth are neither new nor baseless, so no one is arguing against Kurzweil about the 1c becoming US$15 mln in 3 weeks. By the way, the Chinese don't use the 1c example; we use a chessboard, with one grain of rice on the first square, doubling across all 63 subsequent squares.

    Mankind has witnessed exponential growth and grasped it as a mathematical concept even as far back as the invention of fire — think of the phrase “it exploded like wildfire” before Western mathematicians like Pascal had axiomatically labelled binomials, polynomials and exponentials (all of which actually have their origins in Persian and Oriental maths that preceded the Western axioms by anywhere between 500 and 1800 years).

    Anyway, mathematics aside, the crux of the Singularity is less about the exponential growth and whether/how we cope with it. It's more about what constitutes “consciousness” in machines because a Singularity — just as a Semantic Web Stack — doesn't necessarily result in the machines being able to understand each other, human context or being able to adapt their algorithms beyond the bounds that humans have coded in. In computing terms, the bounds are the arrays (of data selection, for example) that we enable the machines to search/sort/connect through.

    An increase in processing power just means they're faster, but not necessarily fitter (which takes us back to Darwin, intelligence, evolution, and whether the machines evolve – inter alia Skynet / Asimo (Honda, from Isaac Asimov) / The Matrix).

  • Anon

    Typo: US$150 mln.

    Also, I meant to write, “Ah and by emotions I don’t mean the fuzzy logic that’s currently being used in sentiment engines which are dyslexic at best and autistic at worst.”


  • Thanks again for all the comments.

    I'm a little puzzled by your finishing line which seems to contradict the rest of your comment.

    If innovations that “factor in the biochemical contributions that make up consciousness” are on the horizon then doesn't that imply that machines will eventually become conscious?

    Even if they are a long way short today.

  • Anon

    The innovations on the horizon refer to the functional components of the brain rather than to the biochemical contributions.

    Functional components refer to the way we categorize time and information clusters, for example. Comment threads as well as search lists illustrate this. Before Web 2.0 and FOAF structures, content tended to be ordered either alphabetically or time-stamped. Now we see Google doing this:

    * http://www.youtube.com/watch?v=qDnmQ9Mmj1Q

    On threads we see in-line / clustered structuring of comments; still time-stamped, but at least relevant to discrete strands of argument.

    We also see the marvel that is debategraph from my friend, David Price (ex-Cambridge):

    * http://debategraph.org/Stream.aspx?nID=7714

    There are still other functional components to be innovated, though — before we even get to the challenge of, “How do we inject / simulate biochemical emotions into the machines to make them conscious like humans?”

    Btw, this has an entire realm of morals and ethics associated with it (IT literally and figuratively) that will need to be publicly debated and legislated for!

    On the flip side of the coin of, “How do we make machines conscious?” is the question, “How do we make Man more efficient?”

    In medical research there's the emergence of bionic (prosthetic) technology for those who've lost limbs in accidents – artificial limbs that can “feel” – as well as neural chips for the treatment of depression or paralysis (possibly Alzheimer's), and taste sensors to help the visually impaired navigate their way around:

    * http://www.technologyreview.com/biomedicine/19759/

    * http://www.technologyreview.com/special/neuro/

    * http://www.youtube.com/watch?v=OKd56D2mvN0

    * http://www.telegraph.co.uk/health/healthnews/73

    Both sides of the coin are about how to bind Man and machine together (either by proxying one's characteristics in the other or by wholesale copying) in ways that enable not simply thinking but……….sense-making.

    So the point about the Singularity is whether the exponential growth of processing power enables us to optimize Mankind's sense-making.

    Otherwise, instead of beneficial development we may end up with some tragic mutation that results in….war, destruction and deaths.

    Because, after all, the machines don't have a natural grasp or framework of emotions that humans have and which inform our constructs of morals, values and humanity.


  • Interesting post, Nic, and I look forward to reading The Singularity.

    But my first thought on reading your introduction in email form was “Crikey, Nic's just joined a cult.”

    Reading the post and the comments, I'm still wondering…

  • 🙂 you make me wonder if my enthusiasm for Kurzweil's work contributes to my difficulty convincing people of its veracity (which is maybe your point…)
