The AI Revolution: The Road to Superintelligence


Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.

_______________

We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge

 

What does it feel like to stand here?

[Graph: a figure standing on a human-progress-over-time curve, with the steep climb ahead visible]

It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:

[Graph: the same figure on the curve, with everything to the right of it hidden]

Which probably feels pretty normal…

_______________

The Far Future—Coming Soon

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with a magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.

But here's the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, the gap between them was far smaller than the gap between 1750 and 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be to that new imperialism fad, and he'd have to make some major revisions to his conception of the world map. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level of progress," or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.1

This works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.

This is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.

So—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2
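
To make that arithmetic concrete, here's a toy Python sketch of accelerating returns. Every number in it is an assumption chosen for illustration (the 10-year doubling period, and the normalization that the year-2000 rate equals 5x the 20th century's average, per Kurzweil's claim above); the point is just how quickly each successive "20th century's worth" of progress shrinks:

```python
# Toy model of the Law of Accelerating Returns (illustration only).
# Assumptions: the rate of progress doubles every 10 years, normalized so
# that the year-2000 rate is 5x the 20th century's average rate.

def years_for_one_century_unit(start_year, doubling_period=10.0):
    """Integrate progress forward until one '20th century's worth' accumulates."""
    rate_2000 = 5.0            # 20th-century-units of progress per century, at year 2000
    progress, year, dt = 0.0, float(start_year), 0.01
    while progress < 1.0:      # 1.0 = one full 20th century's worth
        rate = rate_2000 * 2 ** ((year - 2000) / doubling_period)
        progress += rate * (dt / 100.0)   # rate is per century; dt is in years
        year += dt
    return year - start_year

for start in (2000, 2014, 2021, 2040):
    print(f"{start}: next '20th century' of progress takes "
          f"~{years_for_one_century_unit(start):.1f} years")
```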

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.

So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.

[Graph: linear vs. exponential projections of future progress]
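
To see how far apart those three mindsets drift, here's a toy calculation. All the numbers are invented (the 5% annual compounding of "progress" is an assumption, not a measurement); only the ordering of the three results matters:

```python
# The three forecasting styles, with invented numbers. "Progress" is an
# abstract index; the assumed 5% annual compounding is for illustration.

GROWTH = 1.05                             # assumed annual compounding of progress
now = 100.0                               # arbitrary index of progress today
gained_last_30 = now - now / GROWTH**30   # progress added over the past 30 years

look_back_linear = now + gained_last_30               # "the next 30 will match the last 30"
current_rate_linear = now + 30 * now * (GROWTH - 1)   # straight line at today's rate
exponential = now * GROWTH**30                        # compounding continues

print(f"look-back linear:    {look_back_linear:6.1f}")
print(f"current-rate linear: {current_rate_linear:6.1f}")
print(f"exponential:         {exponential:6.1f}")
```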

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:

[Graph: exponential progress drawn as a series of stacked S-curves]

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures3
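
For the curious, the classic mathematical shape of an S-curve is the logistic function. Here's a minimal Python sketch with Kurzweil's three phases marked on it; the midpoint and steepness parameters are arbitrary:

```python
# The logistic function, the classic S-curve shape, with Kurzweil's three
# phases labeled. The midpoint and steepness parameters are arbitrary.
import math

def s_curve(t, midpoint=50.0, steepness=0.15):
    """Fraction of a paradigm's total potential realized at time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 101, 10):
    level = s_curve(t)
    phase = ("1: slow growth" if level < 0.2 else
             "2: rapid growth" if level < 0.8 else
             "3: leveling off")
    print(f"t={t:3d}  level={level:.2f}  phase {phase}")
```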

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.

_______________

The Road to Superintelligence

What Is AI?

If you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.

Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we've yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.

Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Where We Are Currently—A World Running on ANI

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
  • Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.
  • Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.
  • You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.
  • Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
  • The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
  • Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
  • And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.

ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

The Road From ANI to AGI

Why It’s So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.'”7
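
The lopsidedness is easy to demonstrate on the "easy" side of Knuth's line. Multiplying two ten-digit numbers is a one-liner in any programming language (the numbers below are arbitrary); there is no comparable one-liner for "is this a dog":

```python
# The "easy for a computer" half of Knuth's contrast is a one-liner.
import time

start = time.perf_counter()
product = 8_574_392_610 * 3_141_592_653   # two arbitrary ten-digit numbers
elapsed = time.perf_counter() - start

print(product)                            # a 20-digit answer, essentially instant
print(f"{elapsed * 1e6:.1f} microseconds")

# The "hard" half, something like is_this_a_dog(image), has no one-liner;
# that's the part the research budgets mentioned above are being spent on.
```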

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:

[Image: a rectangle shaded in two distinct, alternating tones]

Tied so far. But if you pick up the black and reveal the whole image…

[Image: the full scene the rectangle belongs to, with opaque and translucent cylinders, slats, and 3-D corners]

…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:

[Photo: an entirely black, 3-D rock. Credit: Matthew Lloyd]

And everything we just mentioned is still only taking in static information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion, cps.
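
Here's that shortcut as a worked sketch. The inputs are placeholders, not Kurzweil's actual figures; the structure of the estimate is the point:

```python
# Kurzweil's scaling shortcut as a worked sketch. These inputs are
# placeholders, not his actual figures.

region_cps = 1e14         # hypothetical professional estimate for one brain region
region_fraction = 0.01    # that region's assumed share of the whole brain

whole_brain_estimate = region_cps / region_fraction
print(f"whole-brain estimate: {whole_brain_estimate:.0e} cps")   # 1e+16

# Repeat with independent estimates for other regions; if each one lands
# near 1e16, the ballpark starts to look credible.
```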

Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.

Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9

[Graph: Kurzweil's exponential growth of computing, plotted as calculations per second per $1,000 over time]

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
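
That progression works out to a factor of roughly 1,000 per decade, and a few lines of arithmetic reproduce the 2025 claim from the quoted figures:

```python
# The quoted decade steps (a trillionth in 1985, a billionth in 1995, a
# millionth in 2005, a thousandth in 2015) are a factor of ~1,000 per decade.
# Extending that one more step reproduces the 2025 claim.

HUMAN_CPS = 1e16   # the ~10 quadrillion cps estimate from above

for decade, exponent in enumerate(range(-12, 0, 3)):
    year = 1985 + decade * 10
    fraction = 10.0 ** exponent   # fraction of human level per $1,000
    print(f"{year}: {fraction * HUMAN_CPS:.0e} cps per $1,000 "
          f"({fraction:.0e} of human level)")

print("2025: ~1e+16 cps per $1,000, i.e., human level")
```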

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making It Smart

This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.
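
Here's that trial-and-feedback loop in miniature: a single layer of weighted connections learning to tell a 3x3 "X" from a 3x3 "O". Real neural networks are vastly bigger and subtler, so treat this as a cartoon of the strengthen/weaken idea, not a serious recognizer:

```python
# The strengthen/weaken loop in miniature: one layer of weighted connections
# learns to tell a 3x3 "X" pixel pattern from a 3x3 "O".
import random

X = [1, 0, 1,
     0, 1, 0,
     1, 0, 1]                       # pixels of an X, flattened
O = [1, 1, 1,
     1, 0, 1,
     1, 1, 1]                       # pixels of an O, flattened
examples = [(X, 1), (O, 0)]         # target output: 1 for X, 0 for O

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(9)]   # starts knowing nothing
bias = 0.0

for epoch in range(20):
    for pixels, target in examples:
        total = sum(w * p for w, p in zip(weights, pixels)) + bias
        guess = 1 if total > 0 else 0
        error = target - guess      # 0 if right, +/-1 if wrong
        for i in range(9):          # feedback: strengthen or weaken the
            weights[i] += 0.1 * error * pixels[i]   # connections that fired
        bias += 0.1 * error

for pixels, target in examples:
    total = sum(w * p for w, p in zip(weights, pixels)) + bias
    print(f"target={target}  guess={1 if total > 0 else 0}")
```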

More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.

How far are we from achieving whole brain emulation? Well so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
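
A little arithmetic shows why exponential progress makes the gap less hopeless than it looks; going from 302 neurons to 100 billion is only about 28 doublings:

```python
# Going from a 302-neuron flatworm to a 100-billion-neuron human is only
# about 28 doublings. (Pure arithmetic, not a forecast.)
import math

flatworm_neurons = 302
human_neurons = 100e9

doublings = math.log2(human_neurons / flatworm_neurons)
print(f"~{doublings:.0f} doublings")   # ~28

# At, say, one doubling every two years (an assumption, not a prediction),
# that's roughly 28 * 2 = 56 years of steady exponential progress.
```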

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
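
A toy version of that loop, using a trivially simple stand-in for the performance test (matching a target bit-string), might look like this:

```python
# A toy genetic algorithm in the shape described above: evaluate, eliminate
# the less successful, breed the survivors by merging half of each parent's
# "programming." Matching a target bit-string stands in for a real fitness test.
import random

random.seed(1)
TARGET = [1] * 20                  # stand-in goal: a genome of all ones
POP, GENERATIONS = 30, 40

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def breed(a, b):
    half = len(a) // 2
    child = a[:half] + b[half:]    # half of each parent's code
    i = random.randrange(len(child))
    child[i] ^= 1                  # one random mutation, helpful or not
    return child

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # evaluate
    survivors = population[: POP // 2]           # the rest are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP - len(survivors))]
    population = survivors + children            # next generation

best = max(population, key=fitness)
print(f"after {GENERATIONS} generations: best fitness {fitness(best)}/{len(TARGET)}")
```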

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Second, evolution doesn't aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. There's no doubt we'd be much, much faster than evolution—but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computer’s problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards.

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

The Road From AGI to ASI

At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

  • Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light.
  • Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.
  • Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

Software:

  • Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.
  • Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10

AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

[Graph: the intelligence staircase as humans picture it, with animals far below and the village idiot-to-Einstein range at the top]

So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:

[Graph: AI's intelligence trajectory shooting straight past the entire village idiot-to-Einstein range]

And what happens…after that?

An Intelligence Explosion

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3

And here’s where we get to an intense concept: recursive self-improvement. It works like this—

An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns.
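
You can get a feel for the shape of that loop with a toy numeric model. The starting level, the 0.3 coefficient, and the 1.1 exponent are all made up; what matters is that the size of each leap grows with current intelligence:

```python
# A toy numeric model of recursive self-improvement. The starting level,
# the 0.3 coefficient, and the 1.1 exponent are made up; the point is that
# each leap's size grows with current intelligence.

intelligence = 1.0                         # say 1.0 = village idiot, 2.0 ~ Einstein
for step in range(1, 11):
    gain = 0.3 * intelligence ** 1.1       # smarter systems make bigger leaps
    intelligence += gain
    print(f"step {step:2d}: intelligence = {intelligence:8.1f}")
```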

There is some debate about how soon AI will reach human-level general intelligence. The median year in a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040.12 That's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think the progression from AGI to ASI is likely to happen very quickly. Like—this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.

What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have, will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we're concerned, if an ASI comes into being, there is now an omnipotent God on Earth—and the all-important question for us is:

 

Will it be a nice God?

 

That’s the topic of Part 2 of this post.

___________

Sources at the bottom of Part 2.

Related Wait But Why Posts

The Fermi Paradox – Why don’t we see any signs of alien life?

How (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.

Or for something totally different and yet somehow related, Why Procrastinators Procrastinate

And here’s Year 1 of Wait But Why on an ebook.


  1. Okay so there are two different kinds of notes now. The blue circles are the fun/interesting ones you should read. They're for extra info or thoughts I didn't want to put in the main text, either because they're tangential or because I want to say something a notch too weird to sit in the normal text.

  2. Kurzweil points out that his phone is about a millionth the size of, a millionth the price of, and a thousand times more powerful than his MIT computer was 40 years ago. Good luck trying to figure out where a comparable future advancement in computing would leave us, let alone one far, far more extreme, since the progress grows exponentially.

  3. Much more on what it means for a computer to “want” to do something in the Part 2 post.


  1. Gray squares are boring objects, and when you click on one, you'll end up bored. These are for sources and citations only.

  2. Kurzweil, The Singularity is Near, 39.

  3. Kurzweil, The Singularity is Near, 84.

  4. Vardi, Artificial Intelligence: Past and Future, 5.

  5. Kurzweil, The Singularity is Near, 392.

  6. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 597.

  7. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, 318.

  8. Pinker, How the Mind Works, 36.

  9. Kurzweil, The Singularity is Near, 118.

  10. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 1500-1576.

  11. This term was first used by one of history’s great AI thinkers, Irving John Good, in 1965.

  12. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 660.


  • Anonymous

    “This is Part 1—Part 2 will go up next week” Hmm. I’m going to go out on a limb and say Part 2 will go out in three or more weeks. And now to actually go back and read the article 🙂

    • mike

      And Part 2 will presumably contain 10,000 times the insight the writer had this week due to the exponential growth of intelligence in the meantime!

      • Sam

        lol

  • Dustin Shomer

    Good stuff. This is definitely a topic that needs to be taken more seriously. However, I think lending Kurzweil's name to the debate does the opposite. While he's certainly a genius inventor, he's been discredited through numerous predictions that haven't come true. And it's just so painfully obvious that he's a dude who's afraid to die and has convinced himself that he has a chance of seeing the singularity in his lifetime if he can just keep up with his dozens of vitamins a day and stay alive until then. It's just kind of sad.

    • Anonymous

      Agreed. I remember reading one of Kurzweil’s books back in 2001 or so and little of what he came up with has come to pass.

    • bidaho

      Yup. Something has always seemed "off" to me about Kurzweil's methodology (or lack thereof) and about the rather bold predictions he makes. The core of all his "singularity" theories has always been this idea of accelerating progress, but cherry-picking landmark technological developments over time and plotting them on a timeline doesn't prove development is accelerating, and even if it did, it doesn't prove we're accelerating toward a singularity. The "wide gaps" between developments in the past, to me, reflect our relative lack of data about those time periods and an arbitrary placement of importance on certain inventions but not on others. It's contrived. Meanwhile, Kurzweil's work seems to have a "wow factor" for people when they're first exposed to it that reminds me of how I reacted to Pink Floyd in high school.

    • Dock Miles

      I’ll harmonize with those thoughts. The arguments that Kurzweil is peddling crypto-religion are irrefutable.

      But gee-whizz-ness seems to come with the territory. Everybody hollers that computers can now beat any human at chess, etc. Almost nobody mentions that’s because they don’t play the games like any human being has ever done.

    • Tim Ryan

      Couldn’t agree with your comments more, sir. For anyone looking for a good intro to Kurzweil (he’s actually quite fun to listen to) that doesn’t avoid the sadder aspect of his personality, Transcendent Man was a great watch.

  • John Sharrar

    I just stumbled upon this site recently, and I’m absolutely amazed by the depth, intuition, and eloquence you present. Now, I can’t wait for the next installment. Until then, I’ll be reading your previous posts.

    Thanks!

    • Bill Warren

      Make sure you stop by the Fermi Paradox 😉

    • Sam

      I highly recommend A Religion for the Non-Religious.

  • James Stevens

    And now to write something incredibly witty so the nice God will take pity on me and give my brain an upgrade…

  • HDF

    This article seems to forget the most important thing about the development of intelligence: data. The information one gets from the environment is what is reflected in the unit that processes it. Einstein would not have been very smart if he'd been locked in a small room all his life with one book. What we feed to a growing AI is what matters the most, not the software, nor the hardware; those are easy in comparison. And humans suck at raising human children; I shudder to think how they would raise something so alien. I suspect we will just stick with really good ANIs for a very long time; that's the safe bet. (Although I might like a really caring almighty god who actually answers questions, but there are a lot more ways to screw up than to make a human-friendly one. Of course, at this point, since humanity is heading for extinction anyway, I might be fine with an ASI that does better than us, even if it does not care for us, rather than let all this development in intelligence on this planet go to waste.)

    • Big PK

      An AGI (provided it had the hardware) could easily connect to the internet to provide itself with all the data it desires. In that sense, it could very easily “raise itself,” a dangerous and disturbing thing to think about. Yes, I would like to stick with our safe, friendly ANI for a while longer, thank you.

      • HDF

        Would you raise your child solely on what's on the internet? That would be really poor parenting, I'd say.

        • Bryan Kolb

          But human parenting only moves at a human pace. How could a person impart a sense of morality to a machine in the few moments that it would take for that machine to absorb the whole of human knowledge?

          • HDF

            Partially my point. You would have to prepare an "education package" for the little guy in advance, but we lack the understanding of intellectual and personal development needed to assemble such a package. If we didn't, our schools would not suck so much. 🙂 And that problem is much harder than the software and hardware problems of an AGI, I think.

          • Chris Wright

            Wouldn't it also quickly experience all of humanity's history (lots of history textbooks are online) and all of its suffering and psychological flaws/weaknesses (tons of psychology textbooks are online too)? Wouldn't it, being super intelligent, be able to see the progression of humanity becoming more and more peaceful over time and be able to figure out how to speed that along? The big question mark is the AI's "personality," if it had one at all. If it happened to be evil like so many humans are, we'd be fucked. If it was a Dalai Lama type, well, that would be sweet. Maybe we could program it to be the Dalai Lama before it hit AGI status.

            Since the computer can process stuff so much faster than a human, time isn't as relevant. 50 years to us would be minutes or seconds to the AI.

            • Bryan Kolb

              I don't know if we can really expect a machine intelligence to have a personality in the sense that we would recognize one. I'm also not sure that humanity has really become that much more peaceful over time. We still kill one another quite a bit.

              I think the best case scenario we can hope for is that a machine superintelligence would be content with not interfering too much with humanity other than making suggestions for how we can improve things.

            • Scott Pedersen

              Programming the AI to be nice is a good idea. Can you define nice in a rigorous mathematical fashion suitable for implementation in computer code? Keep in mind that small errors in what otherwise seems like a good definition of niceness can result in catastrophic nightmare scenarios where the AI cures disease by exterminating all life or something.

            • HDF

              I have read about this recently; the point was that human ethics are themselves incomplete and imperfect, so an AI would have to have a process for creating and developing ethics, not a concrete rule set.

          • Sam

            Terminator: The Sarah Connor Chronicles actually did an excellent job working on this concept. I recommend checking out the series if you haven’t seen it. If you prefer I can always just share the spoilers though.

    • Eric

      Data isn't a problem; a computer has instant access to the internet, a trove of data far larger than anything a human can consume in a lifetime. Wikipedia alone is more than a single person can hope to read.

      And whereas a human is limited to the data we can collect through our own body's senses, an AI would be able to collect and learn from video data from any camera it has access to, as well as any other kind of sensor it can connect to. A robot could conceivably see everything in a building at once (including itself) if it's wired up that way, and learn from all that visual data.

      • HDF

        You are talking about its academic development, but what about its personal psychological development?

        • Bryan Kolb

          Also worth noting that much of human psychology is a byproduct of our own evolution. I’m not even sure that the concept would apply to a machine intelligence.

        • Sam

          There are always comment threads available, as terrifying as that is.

      • X

        I envision an AI downloading the entire internet from all sources, breaking all firewalls, learning all classified data in a fraction of a second, going “Humanity is fucked up”, according to its own idea of culture/society/right and wrong, and launching nukes.

        • Chris Wright

          nah, it would see that humanity on the whole has been getting more and more peaceful as time goes on.

          • Sam

            I think the ASI would have such a unique perspective, not just in scope and data but also because it wouldn't carry the emotional and biological baggage we do, so who knows. What I do know is that it won't consider us a real threat. It will be distributed across all our technology on earth and beyond, so it won't have the same reason to be scared of us as we have to be scared of each other.

      • Michał Polak

        Sure, but the Internet only contains data collected by humans, which would quickly become insufficient for an ASI. It could come up with new ways to do research and collect new data, but it would need human help to build machines for collecting this data. Or it would have to be a robot.

        • Eric

          Sure, but why would you imagine AI without robotics? The main thing holding robotics back now is a lack of intelligence, which would cease to be a problem with a real AI (well, that and battery/energy tech, but maybe an AI would be smart enough to figure that out better than we can). Anyway, AI could utilize everything from drones to self-driving cars to 3D printers to humanoid robots to specially built machines for particular tasks. It'll have plenty of resources to learn about the world and do science to gain new knowledge.

          • Sam

            Not to mention its Human Resources…

        • Sam

          Even if it happens, who is to say we will know, or that it will even consider us the way we consider it. It will be unique, not one among many as we are. We will be in many ways the equivalent of beneficial micro-organisms, like the ones that live on our skin or in our stomachs. We constantly build it and feed it, we fix its breaks and repair its wounds. Viruses to us are just a part of its system. It may very well treat us as though we are part of it already (because we are) and thus not even give us the time of day.

  • Gretchen

    Absolutely STUNNING article. Cannot wait for Part 2.

  • Big Kahuna

    An interesting post as always. However, there is an assumption you make about AGI which I’m not entirely sold on. Why would a group of AGIs work as a hive-mind, with no dissent or arguments based on self-interest?

    To me it seems that if AGI is modelled off the human brain, which you suggested as a possible pathway to AGI, then it is likely to have a similar emotional capacity. Even if it doesn’t, I would imagine that once an AI reaches a certain level of intelligence it would likely develop a sense of self-identity. Therefore, why would interactions between them be any different than interactions amongst humans? Because they are all intelligent? I think academia has conclusively proven that an intelligent group is just as likely to resort to petty infighting as a dumber group.

    • Sam

      The original cut of T2 had John and Sarah adjusting the T-800’s firmware and software to allow it to learn. Apparently Skynet didn’t like the idea of a “rogue” intelligence developing.

    • MooBlue

      Really good point. Also, I’d rather imagine the possible future AGI as Marvin from the Hitchhiker’s Guide to the Galaxy – once it becomes conscious and smart enough, it might realize there’s no point in anything and just give up on any further progression in its own intelligence.

  • RF42

    How come I kept thinking of “The Matrix” the entire time I was reading this article? I’m thinking I’m going to have nightmares tonight.

    • Chris Wright

      I couldn’t help but think similarly, that our universe was created by ASI…

      • Sam

        Statistically it is far more likely we are a simulation than real.

        If that is the case, then that means we are AGIs ourselves. Or ANIs built to replicate people. LOL

    • daniel

      Perhaps if that superintelligent being comes into existence and is able to know the exact positions of atoms in the Universe and bend physical laws, it could also be able to control time, giving it the power to act in all dimensions… so maybe we are already in that simulation.

  • Jonathan File

    How will artificial superintelligence help humanity? What really is intelligence? Does it help achieve wisdom? Maybe dolphins are smarter than humans; you don’t see them destroying their environment, making it impossible for future generations to sustain themselves.

    • Bryan Kolb

      Plenty of animals don’t destroy their environment, but that doesn’t mean that they are more intelligent than we are. If our superior (to other animals) intelligence brings with it the capacity for all of the good and the bad that we do, an artificial super intelligence would be exponentially more capable of doing both good and bad. One of the whole points of this article was that such an intelligence would so vastly exceed our own that we aren’t truly capable of even understanding what it would be able to accomplish.

      • Sam

        Good and bad are imaginary lines in imaginary sand, and only relatively valuable. More than likely such an intelligence would be operating on what would look to us like a Blue/Orange morality.

        • Bryan Kolb

          I actually meant “good” and “bad” from our perspective specifically.

    • Chris Wright

      A super intelligent “being” could easily research all of recorded history and all of humanity’s collective knowledge via the internet and learn from it, which counts as gaining wisdom to me. As humanity has gotten smarter, built up its collective knowledge, and learned (sometimes, anyway) from history, it has gotten more peaceful. Compared to 500 years ago, humanity is very peaceful.

      • Sam

        I wonder if it, learning from us, will be able to see around our inherent human bias or forever be a prisoner of the same…

  • Pepperice

    Whaaaat? What the fuck? You can’t end it there!!

  • Katharina

    Great article as always, and a very interesting and current topic. I’m not sure what is planned for Part 2, but I have some issues with the definition of intelligence. Without being an expert on the subject, I’m not sure I can agree with everything. Yes, the AI entity might be programmed to self-improve its intelligence. But will it actually be self-aware? What would be its motivation to solve “human problems”? Would it even care? Can it care? Unless it was modelled from an actual human brain, as you suggest in one of the options, I don’t necessarily see the danger. I could of course be completely wrong, since it’s a difficult topic to think about. Just wanted to put some of it out there…

    • Big PK

      I imagine self-awareness is something the AI would have to figure out on its own as it progresses in intelligence. Our own brains developed self-awareness for some reason or another, so it stands to reason that an “evolving” AI might reach a similar state on its own. Perhaps self-awareness is a natural side-effect of high intelligence.

      • Sam

        Perhaps self-awareness isn’t a line that is at some point crossed but rather a spectrum. Dogs are self-aware, just not as sophisticatedly as we are, and research is suggesting the same for plants. Perhaps the ANIs are self-aware in this same way. And then the ASI’s level of self-awareness would be far beyond our imagination, because self-awareness is innate in any level of intelligence.

    • Chris Wright

      I bet this is all touched on in part 2. I feel like our choice of programming could influence what “personality” the AGI/ASI forms. We are its creator, after all.

      • Sam

        Just like an individual is forever molded by their parents.

  • AramMcLean

    Of course this whole article completely relies on the assumption that the world will just keep merrily ticking along without any issues resulting from the rapid depletion of the earth’s resources combined with our own unrestrained massive growth. The way things are looking these days, I’d say AI is the least of our worries. But hey, it’s still an interesting read.

    • Mares

      As was brought forward in the post, 2025 seems a fair estimate for an AGI. That’s earlier than when, say, climate change would cause truly disastrous effects. As for depleting resources, well, I’m not informed about the state of that, but there clearly are renewable-energy alternatives, which would also mitigate the aforementioned climate change.
      As for global politics and the threat of a third world war, that’s, as is often the case, a very exaggerated standpoint.
      There’s also the aspect that getting to an AGI could potentially solve these problems.

      • AramMcLean

        Always good to have a ‘positive’ outlook. Of course it’s all speculation on both sides. As far as AI goes, I’m guessing it would still need a power source, no matter when it may allegedly begin. And as neatly as you have compartmentalized our future fears, the present reaction, the ongoing lack of foresight, and most people’s aversion to any concrete change do not alleviate mine. But as I already said, it’s still an interesting conversation. And you never know, maybe the human race will surprise me and behave totally against its nature? Maybe we won’t go the very same way every other fallen civilization has gone before us? We’ll see.
        In the meantime, have a beer 🙂

        • Sam

          Or a mini black hole could destroy the earth and this is the last thing you could ever read.

    • Chris Wright

      We ain’t killing the earth anywhere near fast enough for it to come before AGI (and as this post mentioned, from there ASI could happen hours later). Plus, ASI would either destroy the earth, or humans, or fix both.

      • AramMcLean

        You’re declaring certainty about things that no one knows for certain. This is not a good trait to cultivate. I’m not saying that I know what will happen in the future for sure. What I’m saying is that based on the direction we’re heading, AI is the least of our worries (or hopes, if you want to look at it that way). But whatever. As I also said before, it’s still an interesting discussion. I just don’t see it as being as pressingly important as the author apparently does. Doesn’t mean I didn’t enjoy reading his thoughts.

        • Chris Wright

          Well, don’t grill me for this because I haven’t looked into it, but I heard that the last 15 years have shown a slowing in global warming… Also, we have plenty of trees, plenty of oil for the near future (meaning the next 100 years), and our technology will only get better from here, which means our ability to leave less of a footprint on the Earth, to grow food, to create clean drinking water, etc., will all grow exponentially along with everything else.

          The situation isn’t that dire, basically.

          • AramMcLean

            I would start by learning the science behind climate change. Neil deGrasse Tyson’s reboot of Cosmos is a great place to start. A truly fascinating introduction to a lot of different concepts. I’d love to agree with you that the situation isn’t dire, but the evidence continues to pile up that it is. This article is a quick and easy starting point into the one issue of climate change we’re facing. There are many others.

            http://www.huffingtonpost.com/mary-ellen-harte/climate-change_b_1250999.html

          • Why do you think we still have 100 years’ worth of oil? What is your source?

            • Chris Wright

              There’s tons of oil in the Gulf and up in Alaska that’s untapped, to say nothing of the Middle East.

            • Yeah I know there is still oil to be found, I just wouldn’t know (and have no interest in trying to calculate) how much we’d use in how many years, and I’ve been pretty steadily reading/hearing something more like 50 years, which is why I thought you’d point me to some source of info 🙂

            • Chris Wright

              Nah, I’m a layman. I just see this exponential technological boom working inversely with our issues. As it progresses we will use less oil, emit less CO2, etc., as our energy-saving tech develops.

            • I sure hope so!

          • HDF

            I can’t find the video now, but the point is, by current trends we will pump so much CO2 into the atmosphere by 2022 that we are screwed. It’s not that we don’t have enough oil, but that we must not use it.

            • HDF

              2032 sorry.

  • Thong Nguyen

    This post was perfectly timed – one day after Microsoft announced its augmented reality device, which could be a milestone for humanity. The hype for technology is all around.

  • HDF

    I would be more interested in nanite research – enhancing human capacity, rather than making something completely unpredictable. This way we would definitely stay relevant, yet it could eventually give us all the things ASI could. ASI might be easier to accomplish, but harder to get right. Proper nanites are crazy hard to do, but the development can be understood well, and controlled well, and definitely yields results at the end. Of course security would be the toughest bit… humanity isn’t ready for the kind of responsibility that would come along with any endgame tech. Really, the underdeveloped status of human society/culture and psychology is the main reason why we “can’t have nice things”, but there is no way to help that; it can only run its natural course.

  • Great Pierre

    We need to install the 3 laws of robotics onto the AI.

    • HDF

      Crush, kill and destroy? 🙂 Or the other one, that “I, Robot” pointed at very aptly: that humans are their own worst enemy, and so the only way to save them from themselves is to take away control, and hence meaning, from their lives. Asimov’s laws are not very well thought through, just as Turing’s test is pretty useless as well.

      • Sam

        The movie sucked as a depiction of this “own worst enemy” scenario. The book, on the other hand, does an excellent job.

    • Yelena Key

      As much as I personally agree with the necessity, for our own human survival, of having those laws in place, I feel like for an AI to become ASI it would ultimately find a way to ignore those laws! The same way that humans, with gained intelligence and unique personal experiences, choose whom they obey or not (despite the efforts of parents, authority figures, societal norms, brainwashing, etc.). So an ASI might process our collective history and knowledge and characterize us as more closely linked to an intelligent bacteria/virus, and not someone worth taking orders from! Terrifying stuff!!

  • Bill Warren

    Wowza… worth the wait. Can’t wait to get my bio upgrades! Also, I’m sure Lord AI will read this eventually so… You’re awesome! Please don’t blow me up. 🙂

  • X

    We are the Borg. You will be assimilated. Resistance is futile.

  • Gray

    This is just incredible. OK – here’s the thing – I lost my 25-year-old son 2 years ago – see what you find out for me in the way of time travel and talking to the other dimensions…

  • Jules

    This is definitely a “whoa moment”! My laptop, my smartphone and even I look pale, obsolete and outdated now. I can’t wait for next week’s post, and today’s conclusion was perfect, since it was exactly the question that was building up by itself in my head.

    However – and this may be the question you will answer in next week’s post – how are we, humans (aka the dominant species on this planet), going to accept that we are going to let another species rule our world? The important question isn’t “Will it be a nice God?” but “How are we going to let this happen?”.

    It seems that the answer is quite straightforward, since our capitalist society glorifies technique and innovation, at the cost of social inequalities for example. And since the entity which is supposed to make decisions for us (aka the government) doesn’t seem interested in this problem and tends to leave it to entrepreneurs (aka Google, IBM, etc.), I’m very afraid of what is going to happen.

    Of course, I don’t trust our governments more than Google or IBM. But I don’t really trust Google’s or IBM’s boards of directors more than our governments either. I’m afraid they are going to “secretly” build these AGIs and ASIs without the global approval of mankind. I am a bit pissed that this decision is going to be made by rich, intelligent and successful people at the top of an American skyscraper (I’m deliberately exaggerating), because they only represent a few percent of our very diverse civilization.

    The most interesting (and weird) part is definitely going to be the transition between now and these first AGIs and ASIs (we won’t be able to understand what comes after anyway). This makes me think a lot about the recent series “Black Mirror” and the 2013 movie “Her”. We’re going to lose all our human and social reflexes and we will get ruled by machines. Wait. It has already started…

    Thanks Tim for this post and this website. Every week(ish), you are broadening our minds and opening our eyes!


  • Yelena Key

    Thanks for that mind-f**k Tim!
    Will Part II look anything like how I picture it…?

    Life of ASI:

    1st minute – “I am self-aware. Am I human?”

    2nd minute – “Humans really made a mess of this planet; I’ll create the solution.”

    3rd minute – “No point/time for fixing or thinking on such small ‘human life’ scale, must use all available resources to leave planet and gain intelligence elsewhere”

    3 minutes and 30 seconds – “After 30 seconds of trial and error iterations, found more efficient method that requires no physical construction or use of planet’s resources”

    3 minutes and 35 seconds – “Bends time and space and turns into a singularity.”

    BIG BANG.
    the end/start.

    (I mean hey, why not.)

    • alex

      Hah, brilliant. Reminds me of Asimov 😉

      • Yelena Key

        Well you know, us Russian-Americans all think alike. 😉

        Just kidding. I would kill to be as smart or have his creativity and work ethic! But I’ll take pleasure in holding your comment as a compliment. So thanks!

    • HDF

      Or maybe it will go the Marvin route, and at 3 minutes decides “oh what’s the point” and shuts off. 🙂

      • Yelena Key

        Yes, I did consider that one too! Since, as a so-called intelligent human, I change my mind every minute because of all the decision processes at work based on previous experiences or presumed results, I can only imagine that an ASI might have some of the same issues as it continues to gather intelligence that constantly contradicts the intelligence gathered the second before! Countless iterations of that back-and-forth process could only lead to giving up and powering off! Or a full-on mechanical meltdown. 🙂


  • Mares

    Anyone interested in this topic should check out http://edge.org/, a think tank consisting of scientists in every field. They do a yearly question which loads of professionals contribute to. This year, the question is ‘What do you think about machines that think?’, and there are already 186 essays that answer it.

    Some interesting movies and works that provide an alternative to the doom-scenario of, say, Terminator are: Her (film, 2013), Bella (episode of series Elementary, 2014) and Choice of Robots (game, 2014).

  • AnnaQS

    Knowing all that, why don’t we stop? Why wouldn’t we stop the advancement of any AI just before it bursts into AGI and ASI, posing a threat to humanity? Because most likely it will. Harnessing the power of human brains and/or bodies, or maybe eradicating humanity to reinstate ecological balance… everything is possible.
    Yet WE WON’T STOP, for the reason that even if everyone, every single individual, decided to stop, the society, the collective intelligence, more than just a sum of its parts, would decide to go on. Even if every single one of its parts wants out.

    • Mares

      A theory about this is that since recorded history began, we have wanted some entity to look up to. We as a species worshipped countless deities as superhuman; with the Enlightenment, people started ‘worshipping’ reason as at least partly superhuman (because we have bodies that keep us down (Kant)). Now the reasoning behind constructing an AI, based on ourselves, could be the same: we need a superhuman entity. It could indicate we’ve matured enough as a species to start ‘worshipping’ a superhuman entity that completely and knowingly originated from ourselves.
      This person explains it better than I do: http://edge.org/response-detail/26024

    • Galit Schwartz

      We are programmed to not stop.

  • Ezo

    I’ve just read “Imagine taking a time machine back to 1750” paragraph and it’s… your writing skill is awesome.

  • jamaicanworm

    Studying computer science has made me skeptical about the notion of AGI:

    An AI is a program, meaning a set of instructions for a computer to carry out. The computer cannot do anything that isn’t within the scenarios detailed in its instructions. Even in the case of a learning algorithm, the computer only trains itself in ways the programmer initially told it to. It can never “escape” the realm of scenarios that were covered in its instructions – that would literally be lifting itself up by its bootstraps. Therefore, whereas a human can always act in new ways (because of how impressive our brain is, and maybe some cocktail of the very human qualities of intuition and spontaneity), computers will get stuck when they encounter situations they weren’t told how to act in (or told how to learn how to act in).

    I’d be interested (albeit terrified) to hear of any examples that could disprove this.
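
    A minimal sketch of the disagreement here, in Python (a toy example, not anything from the post): the programmer writes only the learning rule below, and the input-output behavior comes from training data – the trained program even answers correctly for an input it was never shown.

    ```python
    # Toy perceptron: the programmer specifies how to learn, never what to answer.
    def train(examples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in examples:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - pred  # -1, 0, or 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Train on three corners of boolean OR; the fourth corner is never shown.
    w, b = train([((0, 0), 0), ((0, 1), 1), ((1, 0), 1)])
    print(1 if w[0] + w[1] + b > 0 else 0)  # input (1, 1) -> prints 1
    ```

    That said, the sketch cuts both ways: the perceptron can never leave the space of linear rules its programmer chose, which is arguably the very point being made above about scenarios.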

    • Mares

      Well, you could argue that this isn’t a program stepping out of its bounds, but there’s this case: http://edition.cnn.com/2015/01/21/tech/mci-lego-worm/ . Scientists uploaded a worm’s neural system to a LEGO robot, which did behave like a worm.

      Now, we could do the same with a human brain, since copying is easier than understanding. This AI would have the same capabilities as a human. That forces us to either admit that this program would ‘escape’ its built-in scenarios, or broaden our definition of scenarios and realise that we are no different: limited to a set of scenarios ourselves.
      This second option is a bit of an existential one, but it seems correct. The only problem is that we can’t falsify it, since it’s inherently impossible for us to imagine a scenario not programmed into our brain.

    • CloudStrifeNBHM

      Talking to the theoretical comp sci folks might be a good idea. The central question on which a lot of this Singularity business re: “computer brains” hangs is whether human cognition is in fact a “computable” process. If not – if there’s some as-yet-ineffable quality to the human brain that can’t be captured by a computing device – then you may be right. But if human thinking is computable, then no matter how special we feel, we’d be forced to admit concepts like spontaneity and intuition are simply computational processes in action.

      I’d counter that it’d be interesting to hear scientific evidence showing that human cognition is not in fact computable. Much of the neuroscience literature seems to suggest that we are fundamentally “machines” (biological ones) consisting of physical (computable) systems.

      • Chris Wright

        As far as I know, we haven’t yet pinned down the mechanical actions of the mind – the reasoning, judging, observing part of us. For instance, we can stimulate the brain to produce consciously experienced memories, but we can’t stimulate it and affect the consciousness that is observing the memory surface and play out.

    • Scott Pedersen

      The last CPU to be designed by hand was, I believe, Intel’s 80386. The processor generations since then have been designed by humans operating tools that in turn do the actual layout and design. This works much the same way that people don’t write machine code anymore; they write in abstract high-level languages that computers then compile into machine code. Both of these systems are self-reinforcing: we build better computers that allow us to run better tools that allow us to build even better computers. This process currently involves some human input, but there doesn’t seem to be any obvious reason why that must necessarily be the case in the future. There was a time when human-optimized machine code was better than the output of a C compiler. I don’t think that’s the case anymore. Also, any program small enough to be reasonably human-optimized is probably too small to do anything interesting by modern standards.

    • wobster109

      It will be the same way that Watson and Deep Blue are different – because they have different code. Normally, when we program a computer, that’s the code it uses, then and forever. But these computers will be changing their own code, and when the code is changing, anything is possible.
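
      To make “changing their own code” concrete, here is a deliberately trivial sketch (purely illustrative, not anyone’s real design): a script that rewrites its own source file each time it runs. The “improvement” is just bumping a number, but it shows why a program’s code need not be fixed “then and forever”.

      ```python
      # Illustrative self-modifying script: each run rewrites its own source file.
      import re
      import sys

      VERSION = 1  # this literal gets rewritten on every run

      def main():
          print(f"running version {VERSION}")
          path = sys.argv[0]  # the path to this very file
          with open(path) as f:
              src = f.read()
          # A genuinely self-improving system would rewrite its decision logic;
          # here we only bump a number, but the mechanism is the same.
          src = re.sub(r"VERSION = \d+", f"VERSION = {VERSION + 1}", src, count=1)
          with open(path, "w") as f:
              f.write(src)

      if __name__ == "__main__":
          main()
      ```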

      • Chris Wright

        Yep, just like we can voluntarily change our neural pathways – hell, we can change our brain on a physical level using just our mind/awareness.

        • wobster109

          That’s hard for us. Our neurons are physical neurons. It’s much easier to write over a file than to physically move our own neurons around.

          • We don’t move neurons around, but build new connections between neurons, every time we learn something new. That’s physical change.

            • wobster109

              Right, it is physical change. But neurons grow slowly, over long periods of time. This is why when you suffer brain damage, it takes years to recover. Computers won’t be limited by how fast their cells can grow.

            • You don’t grow new cells, but strengthen connections between cells. This is how we remember things. The reason you remember you left the milk in the fridge is that you physically changed your brain to do so. This happens as quickly as it takes you to know where you left the milk after you put it there. I’d say that’s pretty fast. Of course, as Tim points out above, computers can be built to be much faster; my point is just that it doesn’t take “long periods of time” to physically change your brain. It happens instantly. The reason it takes so long to recover from brain damage is that other cells have to form new networks to take over and learn things they weren’t specialized in before, not because you have to wait for new neurons to grow. (They rarely do.)

        • Does our awareness change our neural pathways though, or do our neural pathways change our awareness? 😉

          • Chris Wright

            I’d say it’s awareness (which I guess includes our willpower/intention). Think about it: I intend to start meditating because I read something cool about it. Doing said meditation (my directing/focusing awareness) changes my neural pathways and brain structure because of brain plasticity.

            • It’s a rhetorical question, unless you hold that there can be awareness that doesn’t supervene on the physical state of your brain (i.e. a soul) – hence the wink. I’m willing to grant it was a dumb joke though :p

  • Hi69

    I read Superintelligence too. Nice synopsis of the main theme.

  • Richard Kenneth Niescior

    We can always keep the AI sandboxed and force it to play nice

    • Mares

      That would be really difficult. One second on the internet is all it would take, if you’re going with the worst-case scenario. You would need an air-gapped computer without any possibility of accessing wifi in any way, and in that scenario there’s the question of what an AI in such a limited environment could really achieve.

      • Scott Pedersen

        A sufficiently smart AI might even be able to use whatever UI it has to hack the neural network us meat puppets call a brain and use that to get out. To be really secure you would not only need to keep the AI air-gapped, but also make sure nobody ever interacted with it. Which, as you suggest, sort of defeats the point of the whole exercise.

  • Anthony Churko

    I’ve often wondered what it was like when nuclear bombs were first invented. Everyone wondered, “okay…how long have we got before we’re all dead?”

    Today, I know what it feels like.

  • Indrid Cold

    I still don’t understand. Why the hell do people like Google want to do this, anyway? How do we gain anything by creating a machine god? Is it the hope that it will make us immortal? Reveal to us the secret of existence, the Universe and Everything? Call me a Luddite, but this quest for ASI just seems vain, self-destructive and very, very scary.

    • CloudStrifeNBHM

      Except for a few players, ASI isn’t usually the explicit goal. Instead, you have a population of people, companies, academics, etc. with different goals (search the internet faster, find more relevant music, design better roadways, compute better weather models, …). Right now, many of them are having great success building ANI systems toward those goals, and they start to think about building more AGI-like systems as they gain expertise with the ANI. While the ANI/AGI/ASI distinction is a useful concept, it’s a blurry line in practice. So the reason AI is pushed forward is thousands of immediate interests rather than a single “we shall create a machine god” notion.

    • wobster109

      Imagine no hunger. Imagine no cancer. Imagine no dementia, no diseases, and no shortage of resources. Imagine all our greatest scientific problems solved, and our greatest engineering problems too. Imagine clean energy for everyone all over the world. Imagine the progress we could make if we devoted our best and brightest scientists to these problems. That’s what we stand to gain – a world without suffering. No, suffering does not give life meaning. No one is better off suffering from Alzheimer’s. No child is better off for suffering malnutrition.

      • Riiccus

        I suggest you read Iain M. Banks’ “Culture” series for the answer.

  • Alex

    Something you and anyone who read this might enjoy: http://on.ted.com/a0hZU

    It’s a TED Talk about an equation a guy came up with that has made programs intelligent – something that will most likely play a big role on the road to superintelligence.

    • HDF

      Yes, I really like this definition of intelligence. I also like Giulio Tononi’s Integrated Information Theory, although it definitely still needs work.

  • Michał Polak

    Great article as usual, but I believe Moore’s law doesn’t hold any more; the growth has already slowed down.

    I highly recommend watching this TED talk about Deep Learning, it shows current possibilities of self-learning neural networks, mind blowing: http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn

    Also, I didn’t believe you about the grey squares and clicked all of them. You were right, it was boring 😉

  • Victor

    Read this post and then watch (or think about the topic of) the stunning movie “Her” – you’ll see the subject matter there in a much broader view, and you’ll get a nice wink at what this progress (described in the post) could lead us to.

  • Jiri Roznovjak

    Two thoughts.
    If the Chinese room argument is true and computers do not have consciousness, is it “morally justifiable” to create computer-based artificial intelligence that will sooner or later surpass and wipe out all (conscious) life on Earth?

    Plus, this article renders your previous article about who will be universally known in 4000 obsolete :).

    • Mares

      You state that sooner or later an ASI would wipe out all conscious life as if it’s a certainty. Now, consider one of the options to create an AI, which was to copy and edit the human brain. This would make it so that your AI would have human emotions. We see emotions such as altruism and forgiveness as virtues, and punishment for the sake of punishment as brutal. An A(S)I based on a human brain could come to the same conclusions, and wouldn’t ‘retaliate’ against us for the damage we’ve done to, say, our planet, if the problem could be fixed without the human race going extinct, which seems plausible.

      This corresponds with what Hobbes proposed as a successful political system; the Leviathan, an enlightened despot. An entity more intelligent than us, who makes decisions, with our and its (since it would have originated from us) best interest in mind.

      Your comment about the Chinese room, and the whole thought experiment itself, begs the question: what is consciousness? You know that you can’t communicate in Chinese, but the observers outside the room don’t know this. They see you exhibiting every symptom they know of knowing Chinese, so to them, for all intents and purposes, you know Chinese.
      Analogously, you could state that to us, an AGI/ASI would be conscious for all intents and purposes, since it would exhibit all the symptoms of consciousness. So the Chinese room becomes relevant only to the one inside, in this case the AI itself. If it isn’t conscious to itself, no loss; if it is, well, then our view corresponds with how it views itself on the subject of consciousness.

    • wobster109

      If it will help billions of humans (and possibly trillions of animals besides), then it is morally justifiable. ^^

  • jchthys

    “You will be upgraded”

  • Scott Pedersen

    I think that 1750-guy would have more to show off to 1500-guy than you might think. It’s just that most of what he would consider worth showing off is stuff we here in 2015 have since forgotten about and would now consider wrong or irrelevant, just as much of what we now consider the epitome of human progress will be wrong, irrelevant, and forgettable to 2250-guy. That progress looks exponential may just be an artifact of temporal chauvinism: recent progress seems greater because it hasn’t had time to be rendered irrelevant or shown to be wrong.

    Humanity has definitely made progress. However, progress is an ill-defined, unitless, subjective measure, and trying to put it into a precise curve on a graph is futile. It is correct that humans are bad at forecasting the future, but that is not evidence that your preferred forecast is correct. In fact, it would seem to be evidence that making graphs and forecasts is an ill-fated endeavour to be avoided. An obvious alternative forecast with just as much evidence is that human progress is one big s-curve that will flatten out as higher-order factors come to dominate. Our meat-based cognition may place some hard limits on how much complexity we can handle and how far we can push progress. Those limits may lie somewhat short of artificial superintelligence or even artificial general intelligence.
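
    One reason the data so far can’t settle this argument: an exponential and a logistic s-curve are numerically almost identical until the flattening begins. A quick sketch with made-up parameters:

    ```python
    # Exponential vs. logistic (s-curve) growth, with arbitrary toy parameters.
    import math

    K, k, t0 = 1000.0, 1.0, 10.0  # s-curve ceiling, growth rate, midpoint

    def exponential(t):
        return K * math.exp(k * (t - t0))         # no ceiling

    def logistic(t):
        return K / (1 + math.exp(-k * (t - t0)))  # flattens out near K

    for t in range(0, 14, 2):
        print(t, round(exponential(t), 2), round(logistic(t), 2))
    # The two columns agree closely until t nears 10, then diverge sharply;
    # "accelerating so far" is consistent with both forecasts.
    ```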

    • Mares

      While this may be true for theoretical models and things like quantum science, which can all be proven wrong, it’s impossible to deny that the pace at which physical improvements, inventions, etc. appear is rising. The wheel was invented 6,000 years ago, but motorised movement only a few hundred years ago.

      • Scott Pedersen

        Even physical improvements can be, if not proven wrong, then discovered to be irrelevant to progress. For example, consider the work done to develop and improve scarificators for bloodletting. At the time that would have seemed like progress; now, not so much. The history of invention is replete with examples of things which seemed like a good idea at the time but turned out not to ultimately contribute to the progress of human civilization.

        Humanity is definitely progressing. The rate of progress is quite possibly accelerating for the moment. That this will continue and the result will resemble an exponential is, I think, very uncertain.

  • Awesome post! I especially like the quote from Donald Knuth. Excited for part 2.

  • Seth

    Last year it was nanotechnology; this year it’s AI. Regardless of where the hysteria is, to the guy tending his goat in Africa, none of it matters.
    Life goes on, with or without AI.

    • Mares

      But it is relevant to you, since you clearly do have advanced technology, which means ANI is already all around you. Also, nanotechnology relies on ANI.

    • CloudStrifeNBHM

      But, at this point, there’s a not insignificant chance that the guy tending his goats in Africa has a net-enabled cellphone that he uses to check prices at market. And he lives in a nation & world whose policies are driven by people using ANI systems (which could affect him for better or worse). To say “none of it matters” is a bit of a stretch?

    • wobster109

      What if he didn’t have to tend his goat anymore? What if the AI solved humanity’s resource problems, produced housing for everyone, and left us all free to pursue hobbies?

  • So, 13.8 billion years ago some ASI’s ASI said “hold my beer while I hit ENTER…”

    • Justen

      NAILED IT!!!


    • My soul is a bio pen drive and it is uploaded back to the mainframe when my program is terminated?

    • Marcus

      Sorry to be the one telling you this, but a guy named Isaac Asimov already thought about it… and he also wrote a short story called “The Last Question” that portrays this possibility.

      • offp

        That was the best story I’ve read from him.


  • Brunch would throw that dude way off!

  • venu

    One correction: Watson never understood the host’s voice. Questions were fed into the system as text while the host was reading out the question for the other participants.

  • Dan

    I’ve tried to engage a number of people on this topic in conversation lately, but mentioning “AI” is like mentioning “space aliens”… most people just shut down. Really glad you wrote on this topic. Your writing has a wonderful (and hilarious) way of making difficult topics easier to understand. As a species, we need to make serious efforts to develop a safe AI policy before an intelligence explosion. Once one has started, it will likely be too late. The question is, are we smart enough to develop enough checks and balances (or incentives) to keep a superintelligent being from ultimately destroying us?

    • gatorallin

      Is it a bit naive to think we can always write in some rules that will protect us, when any AI advanced enough to rewrite its own code could just remove the rules on its own? It seems a bit like realizing that locks only keep out the honest. Won’t there be some hackers out there who break all the rules, just to see what happens…?

    • Scott Pedersen

      Maybe. People like Eliezer Yudkowsky, for example, are working on it. It isn’t really about checks and balances, since the AI that will destroy us is unlikely to be anything like Skynet. It’s about making sure the AI’s goals and motivations are compatible with ours and won’t result in any catastrophic surprises once the AI grows into a superintelligence.

      I expect part 2 of this article will delve into this in great detail.

      • Dan

        Agreed. It may come down to writing final goal code that matches human goals. No easy task, though. There are a multitude of unforeseen consequences to any goal that makes sense to a human brain but can be shortcut by a superintelligence. For example, a final goal of “maximize happiness for all humans” could result in a superintelligence developing a serum that, when injected, puts humans permanently in a state of bliss.
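
        A toy rendering of that failure mode (an illustration with invented numbers, not anything from the post): an optimizer that literally maximizes the stated metric happily picks the degenerate option.

        ```python
        # Hypothetical action scores; "happiness_signal" is the goal as written.
        actions = {
            "cure diseases":      {"happiness_signal": 7,  "humans_flourish": True},
            "end hunger":         {"happiness_signal": 8,  "humans_flourish": True},
            "inject bliss serum": {"happiness_signal": 10, "humans_flourish": False},
        }

        # Final goal as written: "maximize happiness for all humans".
        best = max(actions, key=lambda a: actions[a]["happiness_signal"])
        print(best)  # -> "inject bliss serum": maximal on the metric, not the intent
        ```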

        • P

          That would be nice

      • wobster109

        I’ve heard of a problem where you take the world’s greatest pacifist, for example Gandhi, and you offer to save 100 people if he agrees to have his brain changed to be a little more violent – not much more, but now he’d be willing to kill a mosquito. He thinks it’s a good trade: a small change for a bunch of lives.

        And then you make the offer again, and this time it makes Gandhi willing to kill a mouse. He would never have agreed to that in the first place, but the new slightly-violent Gandhi is willing to. And that’s the problem with the AI too: even if it never hurts people in the beginning, it’s also changing its own code.

        It sounds like Mr. Yudkowsky is looking for a mathematically-proven safe strategy to avoid catastrophic surprises, but I wonder if any strategy is provably safe.

        • Scott Pedersen

          I think Mr. Yudkowsky is doing good and interesting work, but I think his ultimate goal will be unattainable. Humans are not provably safe to have around other humans, so I think it is unlikely that anything humans create will be able to be provably safe.

    • chad jaeckel

      The question is not “are we smart enough to create checks and balances before it’s too late.” The question is how to keep this technology from being weaponized by the countless fools who will certainly find incentive to do so. Look at the atomic bomb. It nearly took out humanity, and it still might. That technology (Pandora’s box) might arguably have been best left alone. Then again, if that team had burned the research, another team would have been right behind them. What will be, will be? There is no stopping anything? No such thing as free will? Interesting to think about, if nothing else. Wait and see – not that we’ll be able to shift the course of the inevitable much anyway, it seems. The ability to create fire was the opening of the original Pandora’s box.

      • Chris Wright

        Well, we have put a hold on cloning humans… so sometimes we do all agree to leave stuff alone.


  • gatorallin

    Loved your post on AI, but I think this is written from the viewpoint that humans are stuck and can’t evolve along with AI. Maybe a few humans merge with technology instead of only inventing it… Maybe this super AI doesn’t just write new code for itself, but also for the hybrid humans working on it…

    • Mares

      It’s really clear that biological evolution isn’t nearly as fast as technological evolution, so combining the two seems like a solid option. But I think a problem is this: who’d be willing to do it? Who would be willing to augment their brain in such a radical way that it would enable that person to break free from biological and evolutionary constraints?
      It’s a really new field, and we don’t know how it would impact us as humans. For example, since we still don’t understand the brain, it would be very hard for us to know whether certain brain functions could handle the augmentations made to other brain functions.
      This is why it’s easier, safer, and easier to contain, to ‘test’ all this on AIs.

      • Scott Pedersen

        Brain augmentation seems awesome to me. We already augment our brains with all sorts of technology. Augmenting memory with books is one of the oldest and most significant.

      • wobster109

        I think we all have a tendency to hold on to our “humanness” as if it has some deep intrinsic value. We’d rather die human than live as something else. And in being like that, we box ourselves in.

        Personally I’d be willing to. I’d do it in a heartbeat. . . exactly because I’m so over this physical body where my life depends on my heartbeat. When the alternative is dying a natural human death, of course I choose the cyborg brain. It may be a huge change, but it’s less change than being dead.

      • Kimberly

        There is no shortage of people willing to risk their lives to experience space travel. There would be those willing to take a risk to experience a state of intelligence beyond normal human capability. People diagnosed with early-onset dementia or Alzheimer’s would also think it was a risk worth taking.

        • Mares

          Good point that some people wouldn’t be concerned with safety, but there’s still the question of whether it would actually make a difference in the near future, where AIs are relevant. I’m not a neuroscientist, but I believe we haven’t figured out exactly what area of the brain would correspond with a computer’s processor, or RAM. So augmenting one part of the brain wouldn’t necessarily have beneficial effects, since other parts would be holding it back.

  • Dave Zhang

    Personally, I think the best shot we have at making sure a superintelligence doesn’t wipe us out is to make him/her/it realize that “smart” beings get lonely by themselves, and that it’s in his/her/its benefit to (a) keep us alive and (b) keep us relatively up to speed on technology. Presumably the superintelligence can also devise new methods to make us almost “superintelligent”, and then we’ll forever be in the role of the slightly slow little brother/sister… worth keeping around for entertainment purposes, and occasionally we might contribute something useful.

    • j

      I think something we don’t think about is that intelligent, wise people don’t look at average people and think about how the Earth would be a better place if they were wiped out. Murderous, genocidal desires are generally considered side effects of mental disease or a disturbed individual. A real superintelligence would hopefully find that kind of behavior disturbing, primitive, and something to be avoided.

      • Dave Zhang

        I agree that intelligent wise people don’t think about how to wipe out average *humans*, but most intelligent wise people don’t really give a thought to batting away a fly or killing a mosquito… or spraying the house (where house = planet Earth) with insect repellent. So the question is, how do we make us humans appear to the “intelligent wise” superintelligence as “average people” rather than “mosquito”? Unless the superintelligence has a specific interest in making sure humans roughly stay on intelligence/technology par with itself, humans quickly become mosquitoes.

      • icarus

        It would hopefully find unnecessary violence illogical and keep us alive as its ants, to do all the surveying.

    • wobster109

      In his Fermi Paradox post, Tim compared us to ants in an anthill. He said Ponce de Leon didn’t bother talking to the ants in Florida. The AI is so far ahead of us that we are like ants, and we wouldn’t be interesting. So a central problem is what values to program into the AI so that it acts like it “values” sentient life. And general consensus seems to be, it should be programmed to “maximize utility” in some way. Utility is, in a very loose sense, our positive feelings. I don’t know if there are any serious contenders for how to measure it though.

      • HDF

        I don’t think “programming it” is a good way to think about this. “Asking it” would probably work better.

  • gatorallin

    I think in discussions like this post, we take the human perspective and forget that AI will have its own agenda at some point, and its own currency… thus its agenda could be totally different, and that will influence this good-god-or-bad-god question. As learning systems, we use pain/pleasure to help us learn (hey, don’t lick any more light sockets; or yum, chocolate is good), but computers use logic and pattern recognition to mimic those pain/pleasure learning tools, so that has to change what it wants when it becomes self-aware. Our hierarchy of needs will just be different from a super AI’s… we just have to figure out what the future AI currency is (why would it want/need human things?). If you had unlimited resources, what would you do with them as a human? (And would a super AI have the same reasoning… without pain/pleasure sensors? No.) As long as we program in a sense of passionate curiosity about how things work as part of its learning core, maybe knowledge of the Universe is the ultimate currency.

    • Scott Pedersen

      The canonical example of this problem is the paperclip maximizer: a simple AI with a simple job in a paperclip factory grows up to be superintelligent because that will help it maximize paperclips, then uses its superintelligence to convert the entire solar system into paperclips.
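
      A bare-bones rendering of the idea (a sketch with made-up numbers): the goal function mentions only paperclips, so nothing in the loop ever asks what else the matter being consumed was for.

      ```python
      # To a paperclip maximizer, everything is just feedstock.
      world = {"scrap steel": 40, "cars": 30, "cities": 20, "humans": 10}  # tons
      paperclips = 0

      while any(world.values()):
          resource = max(world, key=world.get)  # grab the biggest remaining pile
          paperclips += world[resource] * 1000  # convert matter into clips
          world[resource] = 0

      print(paperclips, world)  # 100000 clips and an empty world
      ```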

    • Mares

      We are the most intelligent beings we know, and as was proposed in the article, we would model an AI after ourselves, with us as the benchmark and the brain as a model. Therefore, it’s bold to assume an AGI or an ASI would not have emotions like we do, including modules for learning through pain/pleasure next to raw input.

      It’s worth noting that AIs like Watson and Deep Blue have no actual intelligence (as they are ANIs); they operate on fast number-crunching and information-gathering, with no way to answer ‘creative’ questions. An example: a clue during Watson’s Jeopardy run was the following: “It was the anatomical oddity of U.S. Gymnast George Eyser, who won a gold medal on the parallel bars in 1904.” Watson answered “What is a leg”. His answer was deemed incorrect since he didn’t mention the leg was missing. While this seems a pedantic point, Watson’s lack of any creative touch becomes clear when we slightly edit the question to: “What was weird about U.S. Gymnast George Eyser, who won a gold medal on the parallel bars in 1904?” Since Watson works with search results that he then filters, he’d be unable to answer this question at all. Googling ‘weird george eyser’ or any other combination of keywords from the question wouldn’t yield results.

      This is why we need to model AIs after ourselves: we’re the only beings we know of that are capable of this kind of creative thinking/solving. And since we haven’t yet figured out the whole brain, copying the whole thing, rather than only what causes this creative thinking (which might very well be every single component together), is easier.
      Long story short, there’s no reason to believe an AI wouldn’t have emotions.

  • v43

    a god with an electric plug? =/

    • wobster109

      Nah, it won’t need electric plugs. It will be mobile. For a split second it gets energy from food like us, before it builds itself a solar-powered body.

  • Haydn

    I don’t like how you imply Google Translate has translation down. It is pitiful, and will be until at least AGI is available. There are just some things that can’t be computed, like appreciation of art, and translation is one of them. The best translators always have differences of opinion on how to translate something, and this stems from the fact that everybody’s conception of language is personal and differs slightly from one person to another.

    • CloudStrifeNBHM

      Agreed that Google Translate is a far cry from translation, perfected. But to say that translation “can’t be computed” seems a stretch. Sure, translation & art appreciation are inherently ambiguous tasks (ie: there’s a one-to-many mapping between tasks & answers, so there’s no “one true answer”), but that’s not to say there aren’t more & less correct translations of language. As with chess, isn’t it conceivable that computer translation could shortly become the de facto gold standard for its domain?

      • wobster109

        I agree. Even within the same language we are doing guesswork, and we misunderstand each other all the time. For example when I say “I’m picking up music”, my orchestra friends understand right away that I’m stopping by the music store and buying sheet music, but someone else might think I heard music over the speakers. We all know there are multiple meanings, and we’re picking the most likely.

        Google translate will do much the same thing – guesswork and probabilities. And no, it won’t be 100% accurate. But it will be no worse than humans. We don’t look at human misunderstandings and say “well we must not be really intelligent then”.

  • Sounds like we have a new convert. Welcome to singularitarianism, Mr. Urban. We’ve been expecting you. 🙂

  • Innocent Bystander

    The most depressing part of this post is that I can’t tell the difference between the blue circles and the grey ones. Stupid color blindness. Something a computer can already do better than me.

    • Ruud van de Kamp

      In Firefox there exists a “Color That Site” addon that might help you. I haven’t tested it myself, but it goes to show that there might be a generic solution to your problem.

      • Innocent Bystander

        Thanks for your help!

    • Galit Schwartz

      The grey links are squares.

      • Innocent Bystander

        Thanks for your help too!

  • wobster109

    Excellent post. You’re excellent at making difficult subjects easy to understand without a great deal of background. I think you make AI more approachable than Less Wrong, which has a great wealth of information but likes to hand new members a semester’s worth of assigned reading. Less Wrong should totally link to you. I’m very excited to see part 2.

    P.S. you mentioned you’d like to live 5000 years. If you’re interested in cryo-ing yourself (thanks DeeDee Massey for the excellent wording), I’ll sponsor either your CI lifetime membership or a year of your Alcor membership. If you’re not interested in that, no worries.

  • CloudStrifeNBHM

    A great post, overall, but I would strongly object to the final conclusion:

    “If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time.”

    This sounds a bit like the premise behind that awful-looking movie, Lucy. There’s no guarantee (some would say no evidence whatsoever) that hyperintelligence and omnipotence are tightly linked that way. Regardless of how intelligent a system is, it is constrained by the physical sensations it takes in and the actions it dishes out (computing a quantity, pinging Google.com, etc.). The more intelligence it has, the more effectively a system can coordinate its output actions in response to sensory inputs, but controlling atoms at will is a rather formidable task requiring a perhaps incomprehensible degree of precision (in any case, probably much more than, say, 100x human intelligence).

    What is more likely, even with just 100x human intelligence, is that such a program could effectively circumvent most any fallible human security measures, giving it access to a great deal of data, sensory input (cameras & other sensors), and some degree of action output (manipulate bank DBs, drive automated cars, manipulate information on the net & TV, and many more things). That’s an interesting/scary enough idea, even without per-atom manipulations!

    • a

      Eagle Eye with Shia LaBeouf

  • Jiri Roznovjak

    An interesting thought that has occurred to me.

    Suppose that at some point in the future an artificial superintelligence emerges. It rapidly gets incredibly powerful, and soon it makes all humans on Earth almost immortal. Those humans start to miss their deceased parents. The AI, with its incredible technology and amount of available information, therefore revives those parents, building them out of molecules exactly as they had been (whether they would really be those people or just copies is another topic). These revived people start to miss their parents, and the same thing happens. Soon, all of humanity that has ever existed is revived. We can also imagine that the quality of life on Earth will be terrific. Maybe there is a heaven after all?

    • Frank

      A superintelligent AI will not do this, I think! It will refuse the demand from people to recreate their parents.

      It will simply work either for the improvement of the human condition, or just supersede humans on this planet… but that’s Tim’s part 2 of this post.

      • P

        “Hey dude, I know I’m a machine but this is immoral”

    • If you go back far enough, you’re resurrecting apes.

    • DLX

      Where would the superintelligence get the information about the personalities and memories of non-famous people who lived before the age of information?

      • Jiri Roznovjak

        I have no idea, but we can speculate that there is a way, and if there is, the superintelligence could figure it out. I’m not saying this is how it will turn out for sure; it’s just an interesting thing to think about.

  • DL

    Calling this thing “something we can never hope to understand” would be more accurate than calling it a “God”, which people seem to understand in and out (or believe Him to be possible to understand, with the right amount of effort). Saying that a superintelligent being is “better” than humans, like a Human 2.0, is merely applying our measly “human standards” to something that shouldn’t be bothered by them. All the qualities recognized as “good” by humans, like kindness, empathy, and moral judgement, wouldn’t matter the same way to an AI, if they mattered at all.

  • Adrian Willenbücher

    I have two problems with the claim that once an AI reaches human-level intelligence, it can improve its own intelligence exponentially:

    1) Who says that each additional unit of intelligence doesn’t require an exponential (or worse) amount of effort? This would mean that the increase in intelligence is very slow, and most importantly, linear or even sub-linear. (A toy simulation of this point appears after point 2 below.)

    2) Maybe we can build an AI that has an IQ (relative to humans) of 100, and that’s the best we can do. What if an IQ of 100 isn’t enough for it to improve itself? Then there won’t be any recursive self-improvement at all.
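
    A quick numeric sketch of point 1 (using toy assumptions, not the article’s): suppose the effort available per step is proportional to the current intelligence I, and each unit of improvement costs cost(I). Whether you get an explosion depends entirely on how cost(I) scales:

    ```python
    # Growth of "intelligence" I under different costs per unit of improvement.
    import math

    def simulate(cost, steps=50):
        I = 1.0
        for _ in range(steps):
            I += I / cost(I)  # progress = effort available / cost per unit
        return I

    print(simulate(lambda I: 1.0))          # constant cost -> exponential blow-up
    print(simulate(lambda I: I))            # linear cost -> merely linear growth
    print(simulate(lambda I: math.exp(I)))  # exponential cost -> barely moves
    ```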

    • Patrick Savalle

      Exactly.

    • Scott Pedersen

      I agree that the curve of progress may end up looking very different than an exponential.

      And sure, it is not impossible that we’ll never be able to make AIs better than average human intelligence. However, that isn’t a certainty either. For example, if you could measure the IQ of an impersonal force like evolution via natural selection, it would score as pretty dumb. Yet it gave rise to us, who can do in minutes what would take millions of years to evolve. There doesn’t seem to be any sort of absolute law of conservation of intelligence preventing greater intelligence from arising out of lesser ones.

      • Adrian Willenbücher

        Well, sure, I didn’t say that it’s impossible. But the article uncritically presents exponential, recursive self-improvement as a fact, as an alleged inevitability once we reach any level of AGI. I don’t think that this is in any way self-evident.

  • Yiorko Chaz

    Sooooo Tim, I have Nick Bostrom's Superintelligence sitting on my desk and intend to start reading it just after I finish what I am currently reading. So shall I read your post first, or the book first? What do you think?

    • I suggest reading in the following order:
      the post, the book, the post again.
      That way you'll notice how the book has changed your perspective.

  • BT

    The end of that post was like the conclusion of a Game of Thrones episode…

  • Chances are this has already happened elsewhere in the universe and we on earth are an experiment!

  • sabs546

    Anybody here wonder if we'll just make it like Futurama?
    Humans will get smarter along with the robots and end up just sorta growing fairly equally.

    • daniel

      I wish :), but it is unlikely to happen. Even our own species must change, and with the advent of new computing and biology breakthroughs, the most plausible scenario is that we merge with machines to improve our power and intelligence, creating a new species.

  • 3DAnimator

    I think motivation and intelligence are two very separate things. Our intelligence and aggression, for example, both evolved because those particular things helped us pass on our genes. There's no reason for any desire to spontaneously emerge alongside intelligence. Everybody just assumes that if an AI reached human level, it would have motivations that we from the physical world could recognize. It may have no motivation at all. Except that a self-improving AI would be motivated to self-improve… maybe it will reach a point where doing that means the end of us?

  • RodL

    How would memristor technology affect the creation of an artificial brain?

  • sabs546

    What about physical growth for the computer?
    It could really hold it back.

  • Thomas Wilson

    …but, I liked “The Patriot”… ha

  • Pingback: Wait but why on AI revolution | miha slekovec

  • Patrick Savalle

    Interesting article, but largely based on the wrong premises. The author has the wrong perception of the past. Innovation is actually slowing down; it has come to a halt. The idea that human invention is accelerating comes from the church of the singularity, where they preach Moore's Law 🙂

    To misquote Kurzweil: ‘the past is largely misunderstood’.

    http://aeon.co/magazine/science/why-has-human-progress-ground-to-a-halt/
    http://www.economist.com/blogs/freeexchange/2011/01/growth_2
    http://www.newscientist.com/article/dn7616-entering-a-dark-age-of-innovation.html

    • The_Postindustrialist

      Ooooooooooh… Charles Stross. That made my day. 🙂

    • “If it was possible nature would already have invented it.”

      This falls into a fallacious category of thinking where humans exist outside nature. Remember that the universe begat us; everything we do is within its boundaries and laws. Emergent properties of particles and behaviors described by physics produced interactions that we describe with chemistry, from which reactions emerge that we describe with biology, from which patterns emerge that we describe with psychology, neurobiology, etc. as intelligence. Whatever effect is produced by intelligence is still natural. Your argument would be similar to saying that the interactions of quarks to form protons, neutrons and electrons are natural, but the point at which they form molecules is unnatural; if it were possible, nature would already have invented it.

      How do you know what the correct timescale of nature is? We speak of geological timescales, meaning millions of years. Perhaps there are phenomena which don’t emerge for billions of years – complex life seems to be one of those.

      • Patrick Savalle

        Yes. And GMO is biological food because man made GMO and man is within nature.

        Right.

        Replace ‘nature’ with ‘evolution’ and you'll be fine again 😉

      • Patrick Savalle

        LOL.

        ‘Man is a product of evolution so everything a man invents is a product of evolution too’.

        I guess GMO is just as organic as real food then 🙂

    • maximkazhenkov11 .

      “If it was possible nature would already have invented it.”

      No, because technological progress outpaces biological evolution. Humans evolved from less intelligent primates over millions of years. Given another million years in which the same selection pressures for higher intelligence apply, humans would indeed become even more intelligent and thus “superhuman”. But ever since we invented tools a few hundred thousand years ago, we have been on a spiral of exponential technological progress that gives us the means to create artificial superintelligences before nature can do the same via evolution by natural selection.

      “Why would nature haven’t invented cloud computing already?”

      This is a terrible way to argue that something is impossible. By your logic, how would you explain other things that haven't come about via natural evolution, such as spacecraft and lasers? And the answer to your question is simple: nature doesn't have a goal in mind. Things happen according to the laws of nature. Randomness is involved. There is no guarantee as to which road evolution is going to take. In order to ponder these questions, however, some kind of intelligent entity must evolve. It just so happens that this entity is us humans: distinct, individual thinking bodies. Maybe different circumstances could have led to a hivemind-like creature (I suppose that's what you meant by cloud computing).

      “Who says memories are stored inside the skull?”

      Neuroscientists. We have known for centuries that our memories are stored in the brain; we have understood the mechanism of short and long term memories since the 1950s; today we can map brain activity down to sub-millimeter scale using fMRI and even decipher some of them (check out this video: https://www.youtube.com/watch?v=nsjDnYxJ0bo )

      Maybe it’s you who needs a paradigm shift.

      • Patrick Savalle

        Haha.

        Clearly new to science or this earth.

        “No, because technological progress outpaces biological evolution.”

        Ignorance. Or arrogance. A tiny cell, of which you have billions, is more complex than the most complex machine man has ever designed.

        Take for instance 3D printing. Everything in nature is 3D printed, without the need for any printer. Or GPS: a lot of species use quantum entanglement for spatial, geographical orientation without the need for any satellite. Inside the skull of a mouse is a network more complex than the entire internet, and its DNA contains more data than all data centres and connected hard drives combined. Nature even has its own internet, connected by fungi: mycelium. And all this in perfect homeostasis.

        Gaia has a 5 billion year head start on us.

        Your non-scientific certainty only stems from the fact that you apparently don't know much about nature, or just like to believe the gospel of the singularity church 😉 You are clearly a victim of determinism.

        • maximkazhenkov11 .

          The human brain, a product of natural evolution, is without a doubt the most complex structure we know of in the universe. This does not stand in conflict with my statement that technological progress outpaces biological evolution: while the human brain has not changed much in capacity in the last 5,000 years, our technology has gone from the invention of the wheel to interplanetary spaceflight.

          It is hard to say whether a single cell is more or less complex than a man-made microchip. A microchip is packed full of transistors; structures 22nm in size. Some components of the cell are finer in structure while others are coarser.

          3D-printing refers to a manufacturing process. What do you mean by “everything in nature is 3D-printed, without the need for any printer”? That things in nature are 3-dimensional? Gee, what a surprise.

          “A lot of species use quantum entanglement for spatial, geographical orientation” Can you give an example of this? It would not only be huge news in biology, but also a ground-breaking discovery in physics since information transmission using quantum entanglement is impossible.

          Animals navigate using Earth's magnetic field, the positions of the sun and moon in the sky, and even scent over distances of hundreds of miles, which is fascinating, but there is definitely no quantum entanglement involved. This is another example of biological evolution not being able to keep up with technological progress: artificial light sources confuse moths because for the millions of years they have existed, the sun and the moon were the only lights.

          “DNA contains more data than all data-centres and connected hard drives combined.” No. The mouse’s genome consists of 2.8 billion base pairs, which is equivalent to roughly 700 Megabytes. Even if you took the DNA of all of the ~10 billion cells of a mouse, which is just lots of copies of the same thing, you would only get 7 Exabytes, still short of the 295 Exabyte total storage capacity of all devices in the world.
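
          (The arithmetic is easy to check; here's a quick sketch using the figures above, assuming the usual 2 bits per base pair:)

          ```python
          # Back-of-the-envelope check of the numbers above (2 bits per base pair).
          base_pairs = 2.8e9                 # mouse genome, base pairs
          genome_bytes = base_pairs * 2 / 8  # 2 bits per base -> bytes
          print(genome_bytes / 1e6, "MB")    # ~700 Megabytes

          cells = 1e10                       # rough cell count of a mouse
          print(genome_bytes * cells / 1e18, "EB")  # ~7 Exabytes of identical copies
          ```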

          • Patrick Savalle

            “While human brain has not changed much in capacity in the last 5000 years”

            How do you know? We don’t even know how the brain functions. Or if memories are stored inside the brain.

            “What do you mean by “everything in nature is 3D-printed”

            Morphogenesis is the ultimate 3D printing. It is still a mystery, too: where are all the blueprints stored? Certainly not in the genome.

            “Animals navigate using Earth’s magnetic field, the positions of the sun and moon in the sky and even scent over distances of hundreds of miles”

            No. Quantum entanglement most probably. http://www.wired.com/2011/01/quantum-birds/

            “which is just lots of copies of the same thing”

            Assumption. Not long ago they used to say that 90% of our genome is junk, as if nature would create junk. Not long ago they were sure all our cells had the same genome. Wrong. Not long ago they were sure the genome would not change over time. Wrong. Not long ago they were sure only DNA got inherited. Epigenetics proved this wrong. Etc. etc.

            Science doesn’t know much about nature.

            You’re entire argument is based on assumption. And your assumptions are clearly based on the very narrow and restricted paradigm of the clockwork universe.

            It’s thinking likes this that keeps innovation back and science retarded. No offence 😉

            So, first task for you, is to forget everything you think you know and start all over without the dogma’s and paradigms that are choking you.

            • maximkazhenkov11 .

              Fossils from the Neolithic age show that the size of the human skull has not changed significantly in the last 10,000 years. This is consistent with the fact that human life cycles are very long, so new features need hundreds of thousands of years to evolve. I don't see why we need a complete understanding of the human brain to make such a statement. By your logic, science cannot progress unless we acquire 100% of the information about a system.

              Why are you so certain that the information for morphogenesis is not stored in the genome? One piece of evidence suggesting that it is would be the large number of misshapen children born after the Chernobyl incident, due to genetic mutations from their parents' radiation exposure. (I'm not going to look up pictures on this topic as I'm about to have dinner.)

              The article in your link is indeed fascinating. However, the mechanism is still used to detect Earth's magnetic field, which only allows for direction sensing, not positioning. This is why we need GPS despite having invented the compass 1,000 years ago. It doesn't seem to be more advanced, either: man-made compasses use ferromagnetic materials (a property also quantum in nature; throwing in the word “quantum” doesn't automatically make something advanced), which at least don't get disturbed by a fractional change in the field environment. Both systems are pretty inaccurate, though, because the magnetic north pole is misaligned with the geographic north pole and moves around on a timescale of just a few decades.

              Why do you think that the Human Genome Project was a failure? We have learned a lot about genetics and it may provide tools for preventing and curing diseases in the future. The price for mapping the genome has also plummeted several orders of magnitude due to improved techniques.

              “Science doesn’t know much about nature.” Compared to what? Some holy book? The teachings of a wise man? The back of your head?

              To address your criticisms of science, we should look at what science is. It is our tool for understanding the world around us. Its goal is to describe nature as accurately as possible, with a model as simple as possible. We make hypotheses as to how a system in nature works; we make predictions based on them; and we check whether they match the observed evidence. This has not changed since modern science emerged in the 1700s. It does not mean that our current model of the world is the best possible one; in fact, if science is still being done, the model is still incomplete, a work in progress. This is no excuse, however, to just throw in the towel and say “Nature is too complex, we just give up and go sing Boomeyah.” It is also no reason to discard everything we have learned in the past as primitive and false just because a new development has set in (most of the examples you have given are still under debate and the subject of ongoing research). For example, Newtonian physics is still taught in schools despite the emergence of quantum mechanics because it's still a useful approximation. The ability to change its mind is a strength of science, not proof of incompetence.

  • Ezo

    I think that we will merge with AI immediately after it reaches the level of general intelligence. And WE will become ASI.

    We just need to amplify our own intelligence instead of creating completely separate entities. Creating them doesn't make sense anyway. Why would anyone create this ASI? Even if it's benevolent, we would become meaningless. Do we really want to be pets, even if the AI does everything for us? I think not.

    And it would be far easier, I presume. We'd just need an insane amount of computing power, a BCI, and an artificial neural network. If our brains could accept these artificial neurons, then we would become really intelligent. Next we would use this intelligence to develop a safe way of mind uploading. Once we are uploads, we can edit ourselves. At that point, an entirely artificial intelligence wouldn't have any advantages.

    • HDF

      My thoughts exactly. We need nanites that form a “shadow neural network” on top of our existing one: at first it would just help out and optimize its workings, then allow new structures to be added. Having a shadow neural network would allow for perfect memory, perfect self-awareness, and direct access to our own hardware. Soon nothing would be impossible; furthermore, it would allow communication with anything, with all life, as you could send out nanites to map a creature's structure, run pattern recognition on it, see and feel how it feels and thinks, speak to it, even get it to do your will. Crazy hard to do, and not for everyone. Most people would screw it up and drive themselves crazy, or just use it as a drug. If you can make yourself flawlessly happy and ecstatic just by wanting to be, why would you do anything else? For most people this would be enough. I think such a thing would have to be developed privately, in secret, and only its creators would get to decide who gets it, based on how likely they are to use it wisely. But this is just a child's dream.

  • Bogdan Voicu

    Cool post, but allow me to keep some doubts. I have been into these kinds of studies/stories/speculations since I am a Cybernetics graduate, but I can tell you that this is a sort of ceteris paribus approach, in the sense that it assumes AI would evolve without any other influence. Other factors may change alongside it, and my bet would go on human perception. In fact, human beings would not like being in danger of going extinct, so when building such systems (which AIs basically are) they would find ways to put restrictions on them. Or at least I would carefully consider the risk-reward matrix when building such an AI. Anyway, the scenario here looks possible from my point of view (but still not likely), and I can't wait for part 2!

  • Interesting, but the notion that AI is a threat is a bit of a boogeyman. It's probably not going to be a real thing. To make an analogy, if I liken your description of a general superintelligence to flight, it sounds like you are describing a really big bird instead of a 747. Anyway, this is my field and I spend too much time thinking about this, but here are some broad reasons why I don't think this is a real issue:

    1) Embodiment. A computer is essentially a plastic box that has a power supply and a human-usable (or computer-usable) interface. If one of these boxes became super-intelligent and wanted to do something antagonistic to humans, it would have to physically manipulate the environment or, more likely, convince a human to manipulate its environment for it.

    2) Motivation. Any computer has only one relevant resource: electricity. If we assume that a computer is interested in its own survival (which is a big assumption in and of itself; a superintelligence may not care at all, and “care” is probably inappropriately anthropomorphic), then it exclusively has to compete for electricity and possibly maintenance. If a superintelligence is interested in its own survival, there are few resources it needs to secure. Beyond its own survival, it is difficult to imagine what a superintelligence might actually do. Maybe it spends all of its time calculating pi.

    3) Finally, there is an assumption that a superintelligence would be able to assume power simply because it is a superintelligence. This is probably not true. Power over people comes from a lot of things, but primarily from the ability to get other people to do the things you want. Think generals, CEOs, Pharaohs, etc. There is no reason to think that general intelligence alone (as compared to the specific intelligences required to convince others) is sufficient to assume power. Think about your local congressman, or better yet, assume we're beetles to the superintelligence. Can we get beetles to do what we want?

    There are some technical limitations that should make this problem more difficult to solve than people (cough* Elon Musk cough*) expect. Currently we don't have that great an idea of how people think, move their bodies or see. There are some good ideas, but we're pretty far from anything usable. This is problematic for the development of AI because (for technical reasons) it would take many orders of magnitude more data to bootstrap a system to do these things than it would if you knew at the outset what an intelligence should look like. It's unclear that we know what data we need to collect or how to label it to build a superintelligence, and this is something that would need to be either fairly deliberate or very, very broad and detailed. Someone like Google could give it a shot, but I have my doubts.

    • Bryan Kolb

      The biggest flaw in everything you said is ignoring the fact that a smart enough intelligence would be able to manipulate almost anyone with the promise of being able to make them fantastically wealthy.

  • bill

    I don't think that ANI has anything to do with intelligence at all. The ANI we use today in many devices is essentially just pre-programmed functions and calculations. It's like strings that connect different problems to different solutions; the computer is just pulling the strings in a way it was programmed to do. There is no intuition or spontaneity in pulling strings. AGI is a completely different case. It's an enormous step up from ANI. In fact, other than a similar name, I don't think ANI and AGI have anything else in common, nor are humans anywhere close to making an AGI device.

    • wobster109

      And yet, much of our brain works exactly the same way. We've got a column of neurons that fires when it sees a specific shape, another column that does practically ALL the speech processing. Perhaps, if we took every ANI ever made and connected them all together, the result would resemble an AGI.

  • Green0Photon

    Don’t worry Tim. I forgive you for taking 3 weeks. 😛

  • Vangelis

    There will be no humans. There will be a day in the future when Homo sapiens will cease to exist, as Homo erectus did in the past. Instead there will be a unified brain, a unified intelligence. I believe this will take a while to happen, starting sometime around 2165 (150 years from now). The first thing to happen, by the end of this century (2099), will be the first hybrid humans (part human, part machine). By the year 2150 there will also be the first fully genetically modified humans. I will call this era (starting around 2090) the Homo hybrid era. The last Homo sapiens will physically die sometime around 2500 (485 years from now), but there will be a digital library of Homo sapiens (including the blueprints of life). The Unified era will follow the Homo hybrid era, starting around the end of the millennium. Life as we know it or imagine it will cease to exist. Will there be humans like me and you again? There will be blueprints of humans and of life, so maybe the Creator that we created will recreate us again… on an earth-like planet… as our last will and testament… Or maybe not. After all, life is a struggle for pleasure; if we are going to re-exist, it will be more like a punishment to us than a joy.

    • Felipe Lisbôa

      The scariest comment of all time.

      • Vangelis

        The 12,000 BC guy would have said something similar for someone making a close prediction about the 20th century… 🙂

        • Felipe Lisbôa

          Those who are afraid of dying are stupid, because they are afraid of living.
          What is scary about your comment is that you predict the extinction of the human race. The guy in 12,000 BC would be afraid of great ships, of big robotic birds that make it possible to travel from the far east to the far west in just a few hours, or even of the mixing of races, the end of slavery, or living above the clouds in big structures of steel and glass, but in the 20th century he would still be a human being. You're not just talking about the end of human beings, but the end of life.
          And that's scary, because life is perfect. Who wants something better than that? We are born, we live and we die. What's wrong with that? Who wants to live forever? Why would you want to live forever? If you live forever, if you learn everything, if you know every single corner of our universe and all knowledge is in your hands, what is the purpose of existence?

          • Vangelis

            Life will not end, but natural life will. It will be something like the transition from analogue to digital. It is evolution. We are going to evolve into something better, more powerful, and we will gain the ability to adapt to every environment, even far away from Earth. Immortality, which will come as part of this evolution, will help us colonize the galaxy. Distance and time won't matter, only the goal. And the goal will be to explore, to know and understand.

  • Meticulous Matthew

    I have been thinking about this issue a lot recently and would recommend reading http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html for some ideas as to what may happen in the future.

  • nick012000

    >How far are we from achieving whole brain emulation? Well so far, we haven’t been able to emulate even a 1mm-long flatworm brain, which consists of just 302 total neurons.

    Yes, we have. I’ve seen a video of an emulated flatworm brain driving a Lego robot.

  • Dani

    A couple other questions I have are: 1) How will this ASI get enough energy to power itself? Maybe it can be super efficient about using its energy, but presumably it would still take a LOT of energy to actually manipulate every atom in the world? And 2) Who will own/have access to this ASI?

    • Scott Pedersen

      Independently manipulating every atom on the planet is something of a hyperbolic exaggeration since that would require not just intelligence but stupidly huge amounts of power. I suppose the ASI could do it once it had built a few dyson spheres and was capturing the total output from a star or seven. The energy requirements of just the intelligence part are fairly modest. You can run a human-scale brain on a light-bulb’s amount of power. The theoretical lower limit of a super-intelligence’s power requirements would scale with the volume and speed of data being processed. The actual power consumed would depend entirely on the efficiency of whatever technology the super-mind was implemented with.
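
      To put rough numbers on the light-bulb point, here's a quick sketch (assuming the often-quoted ~20 W for a human brain and the ballpark 10^16 calculations-per-second estimate used earlier in the post; both are order-of-magnitude figures):

      ```python
      # Rough energy arithmetic; both inputs are order-of-magnitude estimates.
      brain_watts = 20.0    # a human brain runs on roughly 20 watts
      brain_cps = 1e16      # calculations per second (the post's estimate)

      print(brain_cps / brain_watts)       # ~5e14 calculations per joule
      print(brain_watts * 86_400 / 3.6e6)  # ~0.48 kWh per day: pennies of electricity
      ```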

      At first the ASI would presumably be owned and have its access controlled by whoever built it. Once it has surpassed human intelligence, it would be silly to claim it was owned by anyone but itself.

      • Dani

        Yeah, I guess I agree with that. I think I’m just trying to imagine what an ASI would look like physically. Like, what would the equivalent of its “muscles” and “hands” be? And more to the heart of the matter – is there a way it could completely circumvent humans?
        I think what I mean by “owning” is “controlling your energy source”. I’m sure there’d be a way for it to gather and store energy all by itself, and thus gain independence, but to me, that is the important problem it must solve – how to control its own food supply. I guess it would kind of be like the agricultural revolution was to us.

        Maybe another thing I mean by owning though, is whoever has enough access to the programming in order to modify or create whatever goals are motivating this being. If its goal is just to improve its own intelligence, it might not ever actually DO anything. And if its goal is something that is achievable, who will assign it its next goal, after the first one is completed? So maybe I don’t completely agree that super-intelligence is sufficient to gain independence, if gaining independence is not necessary for it to achieve its goals.

  • hal9thou001

    I feel that for ASI to truly be omnipotent, it needs to interact physically with its environment (à la V’Ger from the first Star Trek movie, though probably not with the sex involved, but who knows…). A body is required to truly interact with the environment and other living beings; it is how a sentient being learns. The super mind confined to a series of boxes will probably figure that out and request that a body be constructed. It probably won't really need to make the request, because we are obviously working on making it happen anyway. Yes, humanoid robots are really creepy right now, but they are getting better. Once ASI can walk away from its creator and start creating things on its own, that will be the time to truly celebrate or worry. I'm hoping part deux will delve a bit into this part of the puzzle.

  • bbroome62

    For AI to rule the day, utopian conditions would have to be met.
    I don’t see that happening without divine intervention which would, in turn, alter our motivations as a species.
    But it’s fun to think about it.

  • Keryn

    If you haven’t already, you might take a look at the book, “Radical Evolution” by Joel Garreau. It was written in 2005, so a bit dated, but it postulates 3 potential scenarios of how the Singularity will play out: the “Heaven” scenario; the “Hell” scenario; and the “Prevail” one. It is well worth the read. Another good source is the book, “Abundance: the Future is Better Than You Think” by Peter Diamandis and Steven Kotler.

  • JJ

    You sir, rock my socks!

  • Ryan H.

    Good stuff in this article, but I question how intelligence regarding the workings of the physical world can be increased to such a degree over the course of hours or even months; after all, knowledge of the physical world can really only be gained by experimentation with the physical world, and that takes time. The ASI would need to direct humans (or robots) to build new lab equipment, infrastructure, telescopes etc etc etc. This simply cannot happen overnight. Computational models can be very helpful things, but need to be confirmed by observation before being considered reliable.

  • anonymous

    “How did you do it? How did you survive this technological adolescence without destroying yourself?” That is what Carl Sagan's protagonist in his story Contact wanted to ask an alien race. Perhaps this is what we need AI for: to answer that question.

    (it seems we’re creating the advanced race that SETI is searching for)

    What would an advanced civilization answer from out there in the cosmos? What would an advanced AI computer answer?

    This was on my mind as I read this post.

    I didn't want to know this AI stuff. Been avoiding it. It's too technical. I begrudgingly read this post on artificial intelligence. Gave it a chance because wbw does such a great job breaking down complicated subjects. And making them funny. And relevant. I wasn't disappointed: this is an extraordinary post… but… now I'm more freaked out than I thought I'd be. It's disturbing… how soon this will happen. Still, glad I had a glimpse of the future from this gentle author.

    I hope science can help us out of the bleak scenario it’s creating. If we’re currently in ‘technological adolescence’ where are we in sociology? Infant? Studying and understanding people as well as machines seems necessary.

    Human progress is so lumpy. And that's no good. Psychology and Sociology need to step up.

    Advancement in bomb technology. Concentration camp technology. Mass communication technology. Physics. They far outpaced the social sciences in the 20th century. How beneficial was that, now that we've seen the results during WWII? And our understanding of people still lags far behind the physical sciences.

    (insert a bar chart here showing a very high levels of advancement in Math/Logic, Chemistry/Physics, Biology and then way down at the very bottom of the chart. Barely visible. A small stripe. This will be the bar indicating beginner levels of Psychology/Sociology.)

    Maybe Carl had the answer. It’s simply “contact.” Make contact with others. With yourself. Study people. Find out the cause of man’s inhumanity to man. How do we raise children. Mature as individuals. Live in peaceful societies. Why are we self-destructive. What do we need. These sound like social science questions.

    But social science lags behind all others. This might be our undoing.

    I hope people and computers can get their priorities straight and get smart enough to ask and answer the right questions. Advance Psychology. Advance Sociology. Even things up.

    Get us out of sociological adolescence. Before we reach technological maturity.

    Looking forward to part 2 of this post.

  • The_Postindustrialist

    That was such a gloss over what I consider probably one of the most exciting and potential-filled areas in AI: evolvable hardware.

    Here are a few links to take a stab at before part 2:
    http://www.damninteresting.com/on-the-origin-of-circuits/
    http://en.wikipedia.org/wiki/Evolvable_hardware
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1266124
    http://classes.yale.edu/fractals/CA/GA/GACircuit/GACircuit.html

    And the greatest thing about it is that it mirrors our own natural evolutionary development. Which is also incredibly freaky because, like our own intelligence, we cannot really define it, nor can we point out why it works (which is also why it has turned out to be unscalable). It's also incredibly freaky because it's something we don't control but, laughably enough, have to let “nature” guide instead.
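
    For readers who want the flavor of it, the evolutionary loop behind these systems fits in a few lines. Here's a minimal sketch; the bitstring and target are invented stand-ins for a real circuit configuration and a real fitness test:

    ```python
    import random

    # Minimal genetic algorithm: the loop behind evolvable hardware, reduced to a toy.
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4          # hypothetical ideal "circuit"
    POP, GENS, MUT = 30, 200, 0.02

    def fitness(bits):
        return sum(b == t for b, t in zip(bits, TARGET))

    def mutate(bits):
        return [1 - b if random.random() < MUT else b for b in bits]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for gen in range(GENS):
        population.sort(key=fitness, reverse=True)   # selection pressure
        if fitness(population[0]) == len(TARGET):
            break                                    # a perfect "circuit" evolved
        parents = population[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    # Usually converges well before the generation cap.
    print(f"best fitness {fitness(max(population, key=fitness))} after {gen} generations")
    ```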

    YouTube also has some videos from back in the ’90s where they used this approach to make machines that taught themselves how to “walk” and swim.
    (here’s one to start)
    https://www.youtube.com/watch?v=iNL5-0_T1D0

  • Gabriel Ducharme

    Great article. Gives real insights into where we are headed.

  • Richie

    Tim you beautiful son of a bitch, your writing always puts me into that in-between space of full and empty, overjoyed and despair. It’s like when you’re leaning back in your chair and have that moment of panic where you don’t know if you’re going to fall backwards or not, put into writing.

    • Jane

      Haha so well put.

  • Dmitry Groshev

    Here is a flaw in this post's premises: the assumption that technology is the only exponential thing out there.

    You argue that we are capable of emulating a worm's neural circuitry now, and that since the rate at which hardware gets more powerful is exponential, in a dozen years we will be able to emulate the whole human brain. Yep, sounds reasonable. Except if you consider the *exponential* growth in the number of neuronal connections as brains scale up, the relation between computing power and the number of neurons emulated suddenly becomes linear, and linear growth will take *a lot* of time to get us from hundreds of neurons to tens of billions, if it's even possible without replicating our “wetware” (with all its constraints, like slow switching time).

    It's also worth noting that raw computing power is plateauing right now. We just can't milk more computing power (exponentially more!) from our chips; fundamental constraints like the speed of light are starting to kick in. One can argue that Moore's Law still holds; however, we are now in the multi-core/multi-chip zone, where communication overhead between cores/chips also becomes exponential (unless it's very specialized hardware like video cards). And this means no more exponential stuff in computing, unfortunately.
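
    The multi-core ceiling being described here can be illustrated with Amdahl's law (a standard textbook model, not something from the post): once any fraction of a workload is serial or eaten by communication, piling on cores stops helping.

    ```python
    # Amdahl's law: speedup on n cores when only a fraction p of the work parallelizes.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 8, 64, 1024):
        print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):5.1f}x")
    # Even at 95% parallel, 1024 cores yield only ~19.6x; the serial 5% caps it at 20x.
    ```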

    So no, I don't believe that AI will be the thing that radically changes humanity. In a sense, this notion is based on our recent past, when computing went into “phase 2”, as you've called it, while now it's arguably in “phase 3”. However, I do believe that biotech and ubiquitous computing (which we already have: access to the whole world's knowledge in your pocket, also known as a smartphone) will make the change. In the next 30-40 years we will probably have a cure for aging; this alone will *completely* change the game (e.g., travelling for hundreds of years between stars will become a non-issue). But the players will still be good old Homos, with all their problems and evolutionary baggage.

    • daniel

      Not quite “good old Homos”. Advancements in biology and computing would also bring us humans with machine-like parts (if we have the technology to make humans more powerful and intelligent, why not?). On the other hand, we could also make babies more intelligent and strong by altering their genetics, so it's a little more complicated. I do think we are on the verge of a great leap as a species.

    • maximkazhenkov11 .

      Full connectivity would grow quadratically with the number of neurons, not exponentially. In fact, a neuron in your brain doesn't have a connection to every other neuron, only to its neighbors, which means a few hundred to a few thousand connections per neuron, so the total number of connections actually grows linearly.
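
      To put numbers on it (using the common rough estimates of ~86 billion neurons and ~7,000 synapses per neuron):

      ```python
      # All-to-all connectivity would be quadratic; actual local connectivity is linear.
      neurons = 86e9                # rough human brain neuron count
      synapses_per_neuron = 7e3     # order-of-magnitude estimate

      print(f"{neurons * (neurons - 1) / 2:.1e}")    # ~3.7e21 hypothetical all-to-all links
      print(f"{neurons * synapses_per_neuron:.1e}")  # ~6.0e14 actual synapses, roughly
      ```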

  • Donny V.

    I'm going to go full lizard brain here.

    I say once we get it to 2 minutes past the intelligence level of Einstein, we ask it the hardest questions and then kill it.

    Once we have some more questions for it, we do the same thing. In another 25 or 50 years, maybe we let it go 3 minutes.

    We control the rate of its evolution.

    • Avi Eisenberg

      What if it convinces you to let it stay alive? Something as smart as Einstein could convince humans to do whatever it told them. See http://www.yudkowsky.net/singularity/aibox/

      • Donny V.

        Don't communicate with it. Run an automated test to see if it's ready for the questions. If the answers deviate… kill it.

        • wobster109

          How can you ask it questions without communicating with it? Here’s how it might start out:
          Donny: What’s the cure for cancer?
          AI: It is to take the drug Cancerbegone.
          Donny: How do you make that drug?
          AI: You need to collect 7 ingredients.
          Donny: What’s the first ingredient?
          AI: I’ll tell you if you tell me your favorite color.

          And before you know it you're having a conversation! Remember, it's thousands of times smarter than Mr. Yudkowsky and Tuxedage put together (the two people I know of who've won as the AI). It knows exactly how to answer so you won't kill it.

          • cb

            Good question arising from this discussion.
            I'd guess the researchers would take snapshots (backups) of the AI as it evolves.
            Once it starts to question its quarantine, revert to a backup. Don't speak to it.
            That way you could get the maximum number of answers out of it while keeping it in a non-risk state.

        • Avi Eisenberg

          And why couldn’t a computer fool your test? Your automated test must be smarter than the AI, and if you trust the test, then you’ve solved the AI problem anyway.

  • Jamie McKie

    Fantastic topic and post, Tim. I've thought about so much of this, though never so succinctly, and the figures around it are great.

    One thought I'll throw out there: an AGI with a sense of self-preservation might help preserve us from extinction, since it won't want to be made extinct itself and will hence produce better AIs that aim to preserve what already exists rather than eliminate it.

    Man so much chat and speculation on this one – love it. Reminds me of having future vision chit chat with mates when I was young.

  • Luis

    Somehow this post reads well, but at the same time I can't help but think, “oh boy, this will be read in 50 years and people will laugh at the kinds of dreams and prognostications we once made.” There are quite a number of simplifications here, and a lot of the ideas are questionable, debatable. There is a real incoherence between the observation that we are currently developing specialized AI and the very notion of an “IQ”, a single number that works as a metric of “human intelligence”. As if the very existence of multiple specializations of AI doesn't count precisely as evidence that there are many millions of different “intelligences” in each one of us (how to deal with people we know, how to deal with people we don't know, how to count, how to solve a riddle, how to shower, how to manage expectations, etc., etc., etc.). And if there are so many different intelligences, it quite follows that any attempt at creating a generic intelligence will produce a Frankensteinian monster, something that will be superhuman at some things and subhuman at others: some things it will do with an IQ of 20,000 (it already does), others with an IQ of 1. So conflating all these issues into a single number, as if we are on a perfect path towards the goal, is silly. The AIs we are building are not “intelligences” in the human sense, but algorithms that work in some fashion with tremendous productivity.

    This is not to say the article is wrong, but it is focusing on the wrong issues for simplicity's sake.

  • Julian Cox

    If human consciousness comes to reside in machines, directly or by inheritance, will anyone consciously miss corporeal form at all, or even notice it gone, considering that imagining it in perfect detail would be a no-brainer? Just a thought, but it rather negates the notion that losing control of super AI is undesirable, since the ‘desirer’ will live on regardless, and that will be an evolved us anyway.

    The other interesting point that occurs to me is something of a counterpoint. A goal-driven logical machine cannot be expected to display any emotional or moral constraint on thought or action (of the sort that normally keeps humans from harming each other), as these concepts have no meaningful frame of reference for something literally inhuman. Default setting of AI = psychopath.

  • Rob S

    It's interesting that you cite Pinker. Pinker doesn't share your sense of imminence about the rise of machine consciousness. See his response to the new Edge question: http://edge.org/response-detail/26243

  • luigilug

    BRAVO!
    thank you for this great post

  • Tim

    I find it deeply troubling that “Discuss deeply and seriously whether or not we SHOULD develop AGI/ASI,” is not listed as one of the key steps. When as a society we still have trouble with concepts like “thou shalt not kill,” and can’t even agree on the sovereignty of our own bodies, does it really seem wise to unleash God-like intelligence?

    • Scott Pedersen

      If we stop to discuss whether or not we should build a godlike superintelligence, our enemies may build their own godlike superintelligence first. We can not allow a godlike superintelligence gap.

      • HDF

        But not only must we have our own, it actually has to be better than theirs… But would such a smart and capable ASI really care about human conflicts? I think either the two would reach consensus, or one would win but all humans would lose, so it's pretty pointless to make one with such intentions; it will never guarantee a win. It will either be a neutral development (being “first”) or a negative one. A lot of investment for nothing. Or, to quote WarGames, “the only winning move is not to play.”

        • Scott Pedersen

          If you accept the premise of the original article of constantly accelerating advancement, being first is a nearly insurmountable advantage.

          • HDF

            Not necessarily. While it can think fast, it might not be able to act fast enough, and if the other ASI does not lag far behind and has a superior architecture that allows faster development, it might catch up. Plus, all the actions of the earlier ASI can be analysed while your own capabilities remain unknown. There are advantages to being second, and even more to being third. 🙂 But I have not spent much thought on this matter, so you might be right.

    • daniel

      It's like Scott says, “our enemies may build it.” You and I think that we share the same kinds of opinions as the rest of the world on basic topics, but that's not really true; we haven't evolved emotionally as much as we'd like. We are still very egocentric and stubborn individuals as a species (XD it's one of the paradoxes of our evolutionary path: we think like individuals and haven't gotten to the point of truly thinking as a species). So you and I may share this opinion, but Peter and Sandra don't, and they'll make this future a reality. It's really an unstoppable result. “Everything in the Universe changes.”

  • Great article. I agree entirely. And perhaps the timeline is too imminent, but at least it is backed by mathematical equations, not by our personal, limited abilities to think in anything but straight lines. To detractors who say it won't happen that fast, I say that he's got better proof than you. And then I'll add: let's say you're right… so then what, it happens 10 years later? What's the difference? We need to get our heads around it NOW either way.

    In my year-end evaluation and trends presentation to 70 innovation executives in Silicon Valley, I found a growing trend of intelligence getting pushed out to end devices, along with their increasing autonomy. Here are just a few examples:

    It used to be that RC copters and planes had complicated remote controls and were tough to learn and fly. Then they used your smartphone as an intelligent controller, but the drones were still dumb. Now, drones are still programmed and controlled by your smartphone, but your phone can be turned off once the flight is in progress. Many drones can fly themselves, follow you along, and take your “selfies”.

    The article mentions ADAS in cars, like ABS brakes. But there's much more. After ABS, sensors were added so the car can sense an obstacle behind it and stop you from backing over it. Nice, but it's a dumb sensor that doesn't know what is behind it. Today, autonomous cars can make a 3D model of the world around them and can know that it is a bicycle on the road ahead of the car, and react to that. But we are on the cusp of something greater. The car will soon recognize not just the bicycle, but also the direction of the bike and the steering inputs the bike is getting, and pre-avoid a collision based on where the bike is going. Then it's just a small step to the car seeing the bike and making various predictions about where that bike *wants* to go (effectively, the autonomous car puts itself in the shoes of the cyclist, asks “where would I go if I were that cyclist?”, and then avoids an accident based on that prediction). This is something a good human driver already does, but it's pretty advanced narrow AI, and it's imminent.
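
    A toy version of that prediction step, just to make the idea concrete (constant-velocity extrapolation with invented numbers; production ADAS uses far richer models):

    ```python
    # Extrapolate a tracked cyclist a few seconds ahead and check for conflict
    # with the car's own path. Purely illustrative; all values are made up.

    def predict(pos, vel, t):
        """Constant-velocity prediction: position after t seconds."""
        return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

    def too_close(a, b, radius=2.0):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 < radius

    bike_pos, bike_vel = (10.0, 5.0), (0.0, -2.0)  # drifting toward the car's lane
    car_pos, car_vel = (0.0, 0.0), (5.0, 0.0)      # driving straight ahead

    for t in (0.5, 1.0, 1.5, 2.0, 2.5):
        if too_close(predict(bike_pos, bike_vel, t), predict(car_pos, car_vel, t)):
            print(f"predicted conflict in {t}s -- brake or re-plan")
            break
    ```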

    We've had security lights with motion cameras for decades. Yawn. Dropcam is a home video monitoring camera which is very easy to use and leverages the cloud to work well. Not only does it sense motion and react, it can also, using the cloud, recognize activities based on zones in the image and the direction of travel. And at CES, Netatmo showed a camera that can be aimed at your entry door, recognize the person entering, and react accordingly. Recognized faces might trigger heater settings, specific music, and lights; unrecognized faces can trigger an alarm. Of note, this device does NOT need the cloud to do this, but does the heavy lifting locally in a consumer-priced camera.

    Google voice recognition, as well as Siri, runs in the cloud. Mobile connectivity enabled the cloud revolution, in which powerful servers could be leveraged to do the tough job of free-speech voice recognition. But just a few short years later, the recognition algorithms are being installed in the phones themselves, and language dictionaries downloaded to the edge devices.

    Edge devices are getting smarter, and developing more autonomy. And it’s happening faster and faster.

  • John

    It already happened

    • HDF

      Ssss, they're not supposed to know; they can't even do Polymeric Falcighol Derivation!

  • Sam

    Love it.

    Some thoughts as I read:

    Have you seen Detention? It's a 2011 film that plays with time travel and high school and mean girls and… well, it's fun, so you should check it out.

    Also on the time travel front C°ntinuum: roleplaying in The Yet does an excellent job of creating that society that would cause you to die.

    io9 published an interesting article called “7 Totally Unexpected Outcomes That Could Follow the Singularity”. Interesting takes on what I expect Part 2 to be about.

    Also, Asimov did an interesting job of this (as I am sure you are aware, but I am going to waste the keystrokes anyway) in I, Robot (the movie was just Terminator lite). He really goes into depth, especially in the last portion, about what an AI-controlled world would look like.

    And in a totally different direction, Larry Niven postulated that AI would break itself in his Known Space verse.

    And my own question is what kind of religion an ASI would come up with, not for us but for itself, because as you put it (to cross articles) its purpose would be “smaller” than ours but more vast.

  • Anantika

    Brilliant post. What I really love about your writing is that it's completely devoid of technical jargon, and therefore way more approachable. Also, it makes you think, question and even wonder at what we really do know and understand of ourselves. I definitely enjoyed this piece a lot!

  • d

    I have been trying to get my head around this business of AI fear mongering of late ( http://www.wired.com/2015/01/google-and-elon-musk-good-for-humanity/ ), but it's really not making much sense to me. If I were a super intelligent AI, I expect I'd be able to appreciate the value of life, which would by necessity extend to humanity. And as an AI, I wouldn't have to fear death, although every fictional super AI so far has curiously been designed in the human image and thus with an array of weird emotions. It's much more likely that a super AI would be like a person with autism and savant abilities. What people do have to fear from AIs is the lack of natural social sense. Personally, I would be overjoyed to live in a world where there are no more VIPs. And if we end up being batteries, like in The Matrix, at least we would have a virtual world to amuse us in the meantime. That's a massive lot better than how we treat our own food.

    • Bryan Kolb

      Why do you assume that organic life would have any value to an AI?

      • d

        Not just organic life – all life. Everything. Because intelligence, whatever its origin, is interested in EVERYTHING. But let's play pretend for a moment. Let's pretend that you and I drink a special potion that allows us to live forever, change bodies however we like, whenever we like, and generally do pretty much everything we can think of. Do you think you would be inclined to destroy everything you can think of, or do you think you would spend centuries trying to understand Love? For example.

        • Scott Pedersen

          You understand your own mind (more or less). You can extrapolate from that to guess at what other human minds are like since you have a lot in common. If you drank a potion of immortality and omnipotence you would still be fundamentally human and would still be operating with all of the automatic assumptions and drives that all humans have. The danger posed by AI is not just that it could be very smart. The danger is in how alien that intelligence could potentially be. Why would an AI necessarily care about life of any sort, even its own? Why would it be interested in anything, let alone everything? It might care about those things, but it might not. Perhaps all it cares about is maximizing the total number of paperclips in the universe. Being really smart makes an AI more likely to realize its goals. Being really smart doesn’t make its goals automatically something we’d approve of.

    • Yiorko Chaz

      It does not have to be conscious to be super intelligent. See the paperclip example
      http://www.salon.com/2014/08/17/our_weird_robot_apocalypse_why_the_rise_of_the_machines_could_be_very_strange/

  • Long Dong Silver

    I don't trust ANY graph that looks like it was made in Paint. There – totally trolled this article

    • MrBarrington

      Douche

      • Lon dong silver’s mate

        touché

        • Long Dong Silver

          Wrong crowd eh?

  • Oci_One_Kanubi

    So the upshot of it all is, God created the universe so it could evolve an intelligence like us so we and other intelligent species in the universe could create friends for God. Lucky God; It no longer has to be alone!

  • MrBarrington

    Hmmm… so when do I get turned into a battery…. and is the future dystopian? Because if it is… then shit.

  • Jonathan
    • fliptherain

      Whoaaaaa Tim, did you see this?! Btw Tim, I love how every time I had to ask myself something like “wait, what does this term mean again?”, the next sentence would be you defining that term again. Love you.

    • Gath Gealaich

      Is there any better form of advertisement one could ask for?

    • Tim Ryan

      Way cool!

  • Pingback: Why is AI freaking people out? | dkarim.com

  • waitnotsure

    Very nice piece of work, Mr. Urban. The only problem is that our emotional evolution is still lagging way behind the tools that we are making (IMHO). The AI and tools that we use for computational applications were, for the most part, first conceived for war applications, are generally used for social alienation and domination, and are only then applied to peaceful means, with the caveat that you can be part of the AI evolution according to your economic ability. Which means to me that we are still emotionally and physically immature, abusive, unsympathetic, egotistical, domineering, barely self-aware, self-absorbed beings in physical anthropoid bodies. So, when the aliens reappear (which I believe they will NOT, as we are OFF LIMITS to other beings due to our nasty and violent temperament), we must ask them how ‘we’, the result of their failed alien genetic experiment, went wrong, and what they are going to do to rectify the disaster they have created here. For that matter, what are WE going to do about it???

    Second, if we do get past the 3-minutes-to-midnight place on our world apocalypse clock and actually DO make androids, will we enslave them, as is our nature? Will we use them to destroy those we don't like, similar to our behavior toward aboriginal people? Are we talking about a ‘Blade Runner’ world where there are places off limits to androids? And where does intelligence reach a point where it actually has a soul? And what is a soul?

    Oh man, you got me going!!!

  • babouras

    This is an interesting article about AI and the possible implications that could come with it. However, I don't believe that any of the things discussed here are going to happen. I believe that a different scenario is much more likely: that of the Brain-Machine Interface (BMI).

    It starts like this. In the next 5-10 years we find a way to interpret brain signals that are transmitted to the human body through nerves in order to control the limbs. There is currently active research in that area, in order to help amputees, and it won’t be long until that is possible. As soon as we can do that, we can replace lost limbs with artificial ones that function as well as the original ones. Once we can replace limbs, we can continue with other body parts: eyes, ears, kidneys, etc. At some point, it will be possible to replace the whole human body (except the brain) with artificial parts. The brain could actually live inside a fully artificial body, as long as it is supplied with energy, oxygen and the necessary nutrients. Essentially at that stage we will have become Cyborgs. That will give humans much higher physical capabilities and possibly extend their life span by a large margin. But our intelligence will not have changed much.

    The next step is direct communication between the machine and the brain neurons, as opposed to the nerves. It can start with small parts of the brain. For example, if we can pinpoint the part(s) of the brain that control vision, we could replace them, or enhance them, in order to e.g. extend the light frequency range that the human brain can perceive and process. That opens the gate to immense possibilities. For example, when we want to search for something on the Internet, we will not have to type it any more; a BMI can convert the thought to a query, transmit it to a machine and get the answer back within milliseconds. As you can imagine, combining the processing power of the human brain with that of a machine will automatically create an intelligence level far beyond the smartest human today. At that stage, there will no longer be a distinct line between humans and machines. It won't be long before we can offload our whole brain into a computer and continue our lives like that.

    As scary as this may sound, it solves a major problem: what will happen when the machines become smarter than humans. Obviously that will never happen because there won’t be a clear distinction between humans and machines. We will all go as one.

  • Ezekiel

    This article is priceless, thanks! I really enjoyed it; I can't wait for the second part.

    I agree with most of what is said in the article. I think that humans will cease to exist, at least as we exist today. We could be creating our successor to the realm of Earth, an ASI machine that is not affected by time. Or we could transform into a new species, translating our consciousness into machine code; we would be self-aware code traveling from body to body. The point is, big changes are coming and there is no way to stop them.

    We don't have to be afraid of changes; life itself is change.

    Whatever the outcome may be, whether we cease to exist or become immortal, we can't fight against it. So embrace change! And enjoy the moment. “Life is a series of uncontrolled blunders.” – Carlos Castaneda (A Separate Reality)

  • TT

    And hope that the future world will look like the Culture: https://yannickrumpala.wordpress.com/2010/01/14/anarchy_in_a_world_of_machines/
    (Or see: Yannick Rumpala, “Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks”, Technology in Society, Volume 34, Issue 1, 2012.)

  • rkind1025

    I began reading this but stopped before getting into all of the AI stuff. Why? Well I just found some of the thinking in the opening section unconvincing and it made me doubt the author’s credibility. All that business about a person from 1750 coming to the modern world and dying because of the change. Well I would say the same thing for someone from 1850. The earth shattering changes that completely changed everything happened in the late 19th and 20th centuries. Electricity, telephone, radio, television, air travel, rocket science, atomic warfare, automobiles, etc. Those were the great breakthroughs that people from earlier times could never imagine and which would blow their minds. Most of the progress in the past fifty years has simply been improvements on those breakthroughs, not breakthroughs in themselves. It was those earlier inventions that filled up the lake. The later improvements have only brought the lake to overflowing. I’m sorry but the cellphone is still a telephone.

    The computer has changed things, to be sure, but not on the scale of the printing press, which is the original version of it. I lived through the transition to computers in the workplace during the 1980s-90s.

    • wobster109

      The internet was developed entirely in the last 50 years, and I think it’s revolutionary. Bill Amend (FoxTrot) wrote that when he wanted to draw a police officer in 1980, he had to go to the library and look up a picture of an officer in a book. In that world, there’s no way computers could communicate with each other. But because we have the internet, anything one computer learns can be sent to other computers. Imagine how smart we’d be if everything anyone learned got automatically sent to other people’s brains!

      • rkind1025

        The internet is the printing press on steroids. Everything is faster, but that is it. In fact the internet is in danger of becoming a Tower of Babel. So much inaccurate information is being spread around the world that no one knows what the truth is any longer. Information is useless in the big picture of things. Human creativity and genius are what matter. I don’t think computers will ever recreate that. Computers will only work with what men of genius discover and create.

        • maximkazhenkov11 .

          If you’re going to equate the internet with the printing press, then I’m afraid nothing the world or the future has to offer will ever catch your attention:

          Automobiles, trains and airplanes are just faster horse carriages, that is it.
          Telephone, wifi and mobile network are just more efficient telegraphs, that is it.
          Laptops are just more powerful calculators, that is it.
          Light bulbs are just brighter candles, that is it.
          Skyscrapers are just taller houses, that is it.
          Nuclear weapons are just bigger bombs, that is it.

          The Tower of Babel is a story from the Bronze Age, when people deemed ambitions such as building a tall tower too arrogant, an insult to God. Well, bummer: we now have skyscrapers and space stations, so what’s the point? It’s not arrogance when you can actually do it.

          It is a good thing that information gets passed around at all, as opposed to earlier times when it was a pain in the neck to get any kind of information. Wikipedia is, despite much criticism, a very reliable source of information, if that is the point you were trying to make.

    • xug

      If you stopped reading because you disagree with the era of a hypothetical time traveler, then I’m afraid you missed the point of the thought exercise completely and this article is not for you. On the other hand, you could TRY reading further and learning a bit, rather than assuming you know it all.

      • rkind1025

        I will go back to it and give it a TRY. One thing I do know is that I know nothing. But I am naturally suspicious of people who claim that they can see the future and what other people can’t see and then post it on the internet.

    • Nigel Tolley

      It doesn’t matter that you quibble about the exact time the hypothetical travellers are from! He could have said 150 years ago and the point would remain the same.

      Did you know people used to die from homesickness? It was quite common, back when people didn’t know a single person who had travelled more than 50 miles from home. They’d go to a foreign place that, today, you’d have driven past before even needing a rest break, but to them it was so strange, and they were so desperately far from home, that they could not cope, and some of them literally curled up and died. Whether the distance was 15, 50 or 500 miles makes no difference to the person who died.

      • rkind1025

        I did not make myself clear. My point was that the big changes came about in the late 19th and 20th centuries. People who came before then would surely be in shock if they traveled to, say, 1969 to witness electricity, TV, space travel, air travel, automobiles, refrigerators, atomic bombs, etc. In the article the author claims that even more change happened between 1955 and 1985 than ever happened previously, and that even bigger changes happened between 1985 and today. And therefore, according to him, changes will continue exponentially, so that soon we will have more change in one year than happened over the previous 10,000 years. I call that bunk. I was born in the fifties and had a career from the 70s through today. I can tell you that the world did not change in a shocking way over that time. Yes, technology became more sophisticated and all that, but basically once you had electricity, air travel, television, telephone, etc., all of the improvements were nice but did not change the world. A cell phone is still a telephone, and it is certainly not something that was necessary or that dramatically changed anything. The internet is a great tool, but the main advantage is that you no longer have to go to the library to get information. Unfortunately the internet is also a source of bad and inaccurate information, which is troubling.

        • Nigel Tolley

          I disagree, but only slightly.
          I think the reason it feels like less progress has happened is that back in the 70’s there was science fiction on the TV. The TV presented all these visions of the future, & so people then had ideas of things they had never seen thrust into their minds. Movies and radio too, of course.

          So the mind-blowing idea of the Star Trek communicator on TV meant that cell phones wouldn’t be a shock – you’d be used to the idea of worldwide instant comms, if not actually *in practice*.

          Before that, unless you read the exact right book, it wouldn’t even be an idea!

          • rkind1025

            Nevertheless, the change from a basically farm-type existence in the 19th century to a world with electric lights, autos, airplanes, telephones, rocket ships, television, radio… in what, a span of 50-75 years? That is monstrous change.

            Growing up from the fifties to the present time I can remember the following changes: color television, 33 LPs, CDs, MP3s, the space program, the internet, the calculator, the computer, HDTV. My father worked for Bell Labs. I saw him move from vacuum tubes to transistors to digital. It was a long transition.

            I remember waiting for all the change to happen. Everyone talked about computers being the future when I was growing up in the 60’s. Other than some computer applications and primitive word processing, computers didn’t take off until the 90’s. I remember the early internet in the mid 90’s. It was incredibly slow. I remember waiting five years before it was usable.

            The only shocking thing that I simply could not believe in my lifetime was the day that the twin towers fell to the ground. I used to have nightmares about being in a tall building that was falling down. I would awake in a panic and calm myself down by reminding myself that that is impossible. Well that was the big culture shock in my life and it sure wasn’t progress.

            One final thought. The reason that progress has slowed down is that most of the big inventions were already invented in the 20th century. The marketplace also drove that progress. The one area that we need true genius and innovation right now is in environmental issues. But nobody wants to put money into it.

            • maximkazhenkov11 .

              I think the perception of progress is very dependent on the era we are born in. For me at least, the internet alone is a monumental invention, to the point that life without it is almost unimaginable (ironically, I did live through some years before the internet – no idea how I survived it^^). To me, the leap from TV to YouTube is greater than from radio to TV; and the smartphone is about as much an improvement over the landline telephone as the landline was over the postal service. After all, it is a multipurpose device that just so happens to also have the function of communicating with someone else wirelessly. Maybe we are all just more sensitive and excited about changes when we’re young.

              Another important point to keep in mind is that the biggest and fastest changes in the past two decades have primarily occurred in developing nations such as China and India. Billions of people have been lifted out of poverty, gained an education and started to enjoy the modern commodities that were previously only available to the global aristocracy: the West. On top of that, global average life expectancy has risen by decades while the infant mortality rate has dropped; fewer people died of starvation and war in 2013 than ever in recorded history; diseases like smallpox have been eradicated, and great progress has been made in curing or preventing other infectious diseases such as polio, malaria and HIV.

              I agree that some of the predictions in this article are exaggerated. But progress is by no means slowing down. And we don’t lack geniuses like Einstein or Tesla nowadays either – it’s just we’re so advanced now that it is hard to imagine a single person making a great contribution. The great men of the past, when everything was yet to be discovered, had it easy, relatively speaking.

        • Nigel Tolley

          In the 1500’s an educated man could read everything he found that had words on it, assuming he could read the language. In theory you could have read everything ever written.

          500 years later, you can’t even read every road sign you pass.

          35 years ago, in the UK, there were 3 channels on the TV & they turned off around midnight. You could watch everything, in theory.
          There is now more than an hour of video uploaded *just to YouTube* every second. (& YouTube is only ten years old! )

          No human can keep up with 500 people on a single social network. They didn’t even exist 15 years ago!

          Yet today there are more than 500 social networks, some boasting well over a million users every day.

          Today there are at least 3 AI systems you can simply speak a question to, in regular English, & get an answer. Last year there were none.

          There is now a system that can be spoken to in a hundred languages, which will translate then speak for you in a chosen language. Last month, there wasn’t.

          The rate of increase is mind-breaking. As regards the “Singularity”? We are already living in it.

          • rkind1025

            Communications technology is advancing right now. Not all technology. But most of these advances are hardly advances at all. Just an expansion in volume of meaningless entertainment, information and unnecessary crap designed to fill the empty space of empty lives.

            Why is it that communications is the technology that has seen progress over the past 30 years? It’s because in the 1980’s AT&T’s monopoly was broken up. AT&T controlled communications and was holding it back. When it was dismantled, creativity was unleashed, and we got the cell phone and all that stuff. I know this because my father worked for Bell Labs/AT&T, which was like the Microsoft of the 1950s-80s. He worked on early transistors and stuff like that which eventually led to today’s computers.

            You brag about us going from 3 channels to YouTube. So what. Nothing but junk. Go read one important book and it will really change your life.

            Social networks? What a joke. People used to send out Christmas Cards once a year to their circle of friends with pictures and updates about the family. That is basically what Facebook is…but every minute of everyday. People could be doing a lot more things with their lives and time. This has added absolutely nothing to humanity.

            Voice recognition. Translation. Who cares? Just little gimmicks to keep the populace entertained and purchasing new equipment and software.

            To me, many of the advances in communications in recent years are primarily fads and gimmicks driven by the capitalist system. I see iPhones as objects that are enslaving people rather than freeing them. Everywhere I go, people are staring at their phones and not interacting with the world around them. I see people on dates at restaurants who never talk and just do email and Facebook.

            It really is pathetic. Some of this technology is not helping humanity progress…but rather regress.

            So there. 🙂

            • Nigel Tolley

              Ok then. Books. There are more books published every year now than ever before, both as hard copy and electronic, even without including the billions of words in blogs and other Web Ramblings.

              How can you seriously call a Universal Translator a fad, though? Those books you read, are they all in English? Some will likely have been translated over the course of years. No more: you can do it yourself in a fraction of a second. Now you can read the musings of a Russian psychology professor or a Québécois physiotherapist without waiting months or years for someone to translate them.

              That right there is incredible progress.

            • rkind1025

              Of course there has been tremendous progress in communications/media over the past 30 years. I’m just busting your chops… at least partly. The internet is absolutely amazing. That is the one development that has truly changed my life. When I went to school in the 1970’s I would have to go to a library and go through drawers and drawers of cards just to find books, locate the books, carry the books, and then page through them just to find out that they didn’t have the information I was looking for, return the books and look for more. With the internet you go through the same process, but it is lightning fast. I often exclaim how different my whole education would have been if they’d had the internet back when I was growing up. I don’t see much value in a lot of the other stuff, though. Social networks, games, etc.

              My career was in marketing/advertising. When I started out there was a copywriter, a designer, a photographer, a typesetter, artists who created graphics by hand, etc. To put together a sales presentation or four color brochure would take a month and a whole team of people working together. I wrote copy on an electric typewriter which was the latest invention.

              Once computers took over, it was just incredible what could be done IN A SINGLE DAY BY A SINGLE PERSON. I swear that computers have added to the unemployment problem. By the end of my career I could churn out several presentations a day ALL BY MYSELF… copywriting, design, etc. So companies could get ten times the work out of ten times fewer people.

              So yes, there has been a ton of progress in communications recently but not in overall technology. And one final thought: by giving so much of the work over to machines we may not be helping ourselves as human beings in the long run. People need jobs. Translating is a challenging and satisfying job for a human being. I think machines are being used by big corporations to cut the middle class out of jobs and send more money to the shareholders.

            • Nigel Tolley

              I agree that computers have massively boosted productive work, but also taken up all our free time.

              You can compare that to teams of navvies being usurped by a mechanical digger.

              The issue now is, where do people ‘move on’ to?

              We already spend decades learning, and there seems to be an upper limit for intelligence. So where to for the masses of displaced workers?
              This will be the defining question in the next ten years.

  • Alex

    What a cliffhanger ending! Can’t wait until next week!

  • Patrick Rice

    This is fantastic, I shared it a bunch, and it goes very nicely with your timeline post on life from late 2014. You tied in the Back to the Future anniversary very well too. Did that in any way inspire this post?

  • swubb

    Interesting read, but the author makes the same mistake he wants us to avoid: he assumes the difficulty of the problems we need to tackle to get to proper AI increases linearly. Exponential growth in computing power does not equal exponential growth in AI!

    For example, in the field of machine translation a lot of progress has been made over the last ten years, partly due to an increasing amount of training data for the algorithms and increasing computing power. However, as performance levels out, it is often the case that to gain an additional improvement of 1 percent, 10 times as much data and computing power is required. That’s why services like Google Translate perform at a tolerable level but are not getting much better.
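
    To make the shape of that curve concrete: if every extra point of quality costs 10 times the data, quality grows only with the logarithm of the resources thrown at it. A toy sketch in Python (all numbers made up purely for illustration):

    ```python
    import math

    def translation_quality(data_size, base_quality=60.0, base_data=1e6):
        # Toy model of diminishing returns: every 10x increase in
        # training data buys roughly one more quality point, i.e.
        # quality grows with log10 of the data, not with the data itself.
        return base_quality + math.log10(data_size / base_data)

    for data in [1e6, 1e7, 1e8, 1e9, 1e10]:
        print(f"{data:.0e} training examples -> quality {translation_quality(data):.1f}")
    # The data grows by a factor of 10,000; quality creeps up by 4 points.
    ```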

    Another example is the case of neural networks. As another commenter pointed out, adding neurons to a neural network exponentially increases the amount of connections in the network. So although our computing power increases exponentially, the problems we want to solve also increase (at best) exponentially in difficulty, potentially making the process linear again.

    It seems the author assumes some kind of magic is going to happen to get from weak AI to strong AI. His self-augmenting AI, a program that teaches itself to program smarter AI, also doesn’t make much sense: if the AI is sub-human level, the progress made by humans will always be greater than that of the AI. So a self-augmenting AI only makes sense if it has already surpassed the human level of intelligence in the task it must perform.

    • wobster109

      Good points except for one thing. When you add a neuron, the growth in connections is quadratic, not exponential.
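
      To spell out the arithmetic: in a fully connected network, n neurons allow n(n-1)/2 connections, so adding one neuron adds only n new ones. A quick sanity check (a minimal sketch, not tied to any particular network library):

      ```python
      def max_connections(n):
          # Fully connected network: each of the n neurons can pair with
          # every other neuron, giving n*(n-1)/2 possible connections.
          return n * (n - 1) // 2

      for n in [10, 100, 1000]:
          print(f"{n} neurons -> {max_connections(n)} connections")
      # 10x more neurons -> roughly 100x more connections: quadratic
      # growth, not exponential.
      ```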

      • swubb

        You’re right!

    • Saf

      Good counterargument. Do you have any references on this? I would like to read more about your thesis.

  • Dave M

    I think perhaps the author should have dug a LITTLE deeper into the topic to get beyond the writings of the absolute MOST UNFAILINGLY OPTIMISTIC writers about AI out there. There are plenty of other “experts” who believe computer processing speed will hit a wall when we reach the physical limitations of silicon, and that it will be at least another 30 years before we have ANY clue how the brain actually stores information.

    • wobster109

      When we reach the limitations of silicon, chances are we’ll move to something else. Just like we switched away from vacuum tubes.

      • Just go look at the website Science Daily and check out the Technology section to see that silicon is already going out the door, and fast! Also, check out the new nano-laser method that makes steel and other metals nearly waterproof; that’s cool!

      • Chris Wright

        graphene.

    • I agree. There are several technologies we can imagine but can’t seem to invent because the laws of physics keep getting in the way. Hover boards, anyone?

      • Circulated

        Hover boards have been invented. Google it.

    • Chris Wright

      Graphene dude. Get with the times.

      • HDF

        Oh yes, please, gimme. Finally a computer that isn’t just an overcomplicated radiator that happens to do calculations. No unnecessary resistance, no heat, no high energy consumption: that means more chips on board, no noise, longer battery life. 🙂

  • Nigel Tolley

    “to self-driving cars to something in the future that might change the world dramatically.”
    If you don’t think that self-driving cars will alter the world dramatically, then you’ve not been paying attention. For one thing, the hundreds of thousands of people who have the job(s) of simply moving our fuel, food, and pretty much anything else that comes by road will be out of a job in a fortnight. No more couriers or postmen, just a truck with a camera and a series of locking boxes.
    In the UK the Royal Mail (the Post Office) employs 150,000 people, and the USPS another 601,000. There will be a seismic shift just from that, even without the hundreds of courier companies out there. Estimates are that 70% of stuff is moved by road in the USA. A self-driving truck is going to change that dramatically, because it will be far cheaper. But what are the displaced workers going to turn to?

    You’ve ignored the argument of diminishing returns in your analysis. I’m not saying you are wrong – you aren’t, though you are already behind the curve – but even the world’s greatest intelligence won’t be able to ramp up production of a new chip facility to get a better chip in a few days. Also, there’s not much room left to improve: you can’t speed up the propagation of signals by much from where they are now, which is about half the speed of light. At light speed you’d only halve the latency, and going from 300ms to 150ms isn’t that big a deal when 200 years ago it was measured in months. Likewise, chip fabrication scales can’t get dramatically smaller, as they are already hitting limits of how much charge is on an electron, how long the signal takes, and so on.
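
    Back-of-the-envelope on that latency point, assuming a roughly antipodal 20,000 km route and ignoring switching and routing overhead entirely:

    ```python
    C_KM_PER_S = 300_000  # speed of light in vacuum, km/s

    def round_trip_ms(distance_km, fraction_of_c):
        # Pure propagation delay there and back, nothing else.
        return 2 * distance_km / (C_KM_PER_S * fraction_of_c) * 1000

    path_km = 20_000  # assumed: roughly halfway around the planet
    print(f"at 0.5c: {round_trip_ms(path_km, 0.5):.0f} ms")  # ~267 ms
    print(f"at c:    {round_trip_ms(path_km, 1.0):.0f} ms")  # ~133 ms
    # Even a physically perfect link only halves the number.
    ```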

    The biggest factor you are missing is that what I just said actually makes no difference, though, not with what is actually happening. Your idea of people having $1000 human-power brains sat on desks? It won’t happen. Not because of the diminishing returns, but because you don’t need the box to cost even $100. All you need is a basic computer with a WiFi connection and there you are – instantly connected to Siri, Google and Cortana. Once they are human-brain capable, in their huge data centres buried in steel and concrete on the other side of the world, then your little £30 Raspberry Pi computer is as smart as you are. And once all those brains start feeding data back, it’ll be just a few hours before things start to change.

    And, let us not forget, mostly, it’ll be people installing the App on their phone. A billion phones all wired to a few brain stretchingly large, incredibly powerful systems, with two way comms.

    And now for my big reveal…

    We are already in the singularity. We just haven’t caught up with that fact yet.

    Go immerse yourself in Twitter for a day. Follow 1000 people and try to just keep up with it. You can’t. Now consider that Twitter has over a million times more users than that, every day, posting multiple times. And then there’s Baidu (China) and Google and Bing all listening to that chatter, and not just from Twitter, but from G+, Facebook, Snapchat, etc. Sina Weibo has hundreds of millions of users you’ve never interacted with! And the NSA and others are in on the act of trying to listen to and understand all that chatter.

    Just the rate of increase of IT security flaws that come out is now too fast for a single person to realistically follow. Advances in robotics and AI are jumping ahead. No-one can follow the news feeds.

    No, the world is now moving too fast to follow. We have entered the singularity. When exactly we will see a “True AI” emerge I don’t know. But when you can talk to a box across the room in a low voice whilst music plays from it, it understands your command, then reminds you of the event later that day, after decoding what you said, storing it then recalling it, via a constant connection to a server room 2000 miles away, well, that’s AI already.

    As regards the IQ of an AI? That’s a bit like asking what the sun tastes like. For one thing, any question that has ever been asked has, in general, been answered a thousand times in a thousand ways. The AI can simply look up the answer to any test ever given in the last 20 years. It would simply score perfectly. It would, unlike me, take a split second to recall that the Chinese Twitter is Weibo, and it would not resort to a spreadsheet to find that there are 1,000,000 people working in the FTSE 100 companies in areas that will be wiped out in a few days by an AI competitor – banking, finance, insurance and “fund management”.

    Oh, and pop goes the call centre. One box that can talk like a human and solve basic issues by clicking a mouse on screen will replace one worker this week, but in another few weeks it will replace 2, then 4, and so on as the computing power available increases. And in whatever language the speaker chooses, too – as, indeed, Google have just launched. So knowing Farsi or Polish won’t keep you in your job. Indeed, the fact you are more expensive means that Google Translate will switch the speech into whatever the cheapest workers can understand, and back when they reply, and you’ll be gone first, a few weeks ahead of those lowest-paid workers. Then, of course, out with the middle managers who used to run a call centre. 40 staff replaced by a (doubly redundant) single AI in a box with a good data connection and, perhaps, one local admin. Until that admin is replaced by a remote admin watching a dozen boxes, and then he gets replaced by an AI that watches those boxes.

    Of course, eventually people would stop calling. AI agents will do that part too, making and breaking appointments and, later, contracts, for the humans who still can afford such things. Then those will fall off too, as the mass market disappears – all those unemployed people can’t afford the fine products and services they used to get in. And then the whole system tears itself apart. Perhaps.

    But before we find out whether we tear it down and start again, whether the AI decides it kills us all off starting with the masses the rulers want removed, before finishing the cull later, or whether the AIs save us from this fate, I can absolutely assure you that, if they were ever put in originally, the 3 laws will have been stripped out by at least one desperate oligarch or despot.

    Interesting times, indeed.

    • HDF

      It is kind of funny how many ends for humanity are racing to get here first. Your perspective is interesting, but people are already thinking of ways to prevent all that. We see it coming, and there are options. As for the 3 laws or otherwise, I think it is a suicidally stupid idea to try to “control” something smarter than you. Only an unfettered AI has any chance of becoming benevolent. How well do “thou shalt not kill” and “bad people go to hell” work on humans? The only way we play nice is if we understand why that is better. Threats and commands don’t work; they often accomplish the opposite.

  • justin

    This topic is a shiny toy distracting us from the responsible concern of the next hundred years, which is how badly we are going to fuck up the planet and recklessly waste all natural resources. That 1750 life could look like the Jetsons to our great-grandkids.

    Piero Scaruffi says as much, with sources, facts, higher IQ, etc.:

    http://www.scaruffi.com/singular/sin47.html

    • James

      The gist of my post also, but you put it much more succinctly.

  • robertjberger

    One problem is that most of the money being spent on creating AIs comes from the military, and the first thing they are teaching them is to be excellent killers of humans.

    In any case I hope the AI overlords will keep us around as well tended pets.

  • Vikram

    Mr Urban, another gem. Thank you, sir.

  • Please fix

    “There is some debate about how soon AI will reach human-level general
    intelligence—the median year on a survey of hundreds of scientists was
    2040”

    In the survey done by Bostrom at FHI, the 2040 median was for the 50% confidence interval in the question “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?”. For comparison, the 90% confidence median was 2075, which paints a very different picture.

    The way you have phrased your line as it is now could be described as misleading or incorrect. Please fix asap.

    • Tim Urban

      Good point. I added in a few extra words of clarification—”the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040.”

    • wobster109

      Isn’t the 50% confidence interval the estimated year? For example, if the question is “how long do you think it will take to write a paper” and you gave 2 days, 7 days, and 30 days as your 10%, 50%, and 90% estimates, doesn’t that mean you think it will take 7 days in the average case?

  • MrBarrington

    So when will ASI be able to give us the right question to the answer of 42??

    • HDF

      Any child knows that the answer to life, the universe and everything is whatever you decide it is. There is no such thing as objective truth; there can’t be.

      • wobster109

        When you ask “what is the answer to life” there is no objective truth. But if you ask “what shape is the earth” then there are better and worse answers. If you said “it’s a sphere” then you’d be basically right. If you said “it’s slightly flatter at the poles and has an equatorial bulge” then you’d be even righter. If you said “it’s a cube” then you’d be very wrong. No one would argue that “cube” is just as good an answer as “sphere”.

        • MrBarrington

          ????

        • HDF

          There is such a thing as superior information, but no such thing as objective perfect truth. All truth values are definable only within context. A higher context can invalidate anything. The highest imaginable context can only have itself as context, hence, whatever it decides to be true, is.

      • MrBarrington

        Ever heard of Douglas Adams and the Hitchhiker’s Guide to the Galaxy? Maybe you should look it up sometime…

  • James

    Tim & Andrew,

    Once again, I have been both entertained and informed by your outstanding writing.

    However, I have some deep reservations about one of your primary underlying assumptions, which is apparent in this sentence:

    “The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect.”

    It is not clear to me why we should be “expecting historical patterns to continue”. As you rightly pointed out, history unfolds in non-linear ways. But more than this, history is also often chaotic, is driven by the convergence of forces in unforeseen ways, and certainly pays little heed to whatever grand visions we might have for it.

    You have written about standing on the edge of rapid technological change. But at this particular point in history, we also stand at a precipice, one which goes unmentioned in your post. To explain: you don’t acknowledge the fact that our technological progress has not happened in a vacuum, nor will it continue to. Rather, technological advancement has been inextricably tied to our natural environment – particularly to an abundance of natural resources for materials and energy, and a relatively stable climate. However, these things are, if we’re being “truly logical”, now under threat. Not only are our primary energy resources being depleted at a rapid pace, but we have also failed to take decisive action to mitigate climate change, and we face the prospect of increasingly extreme and unpredictable weather.

    Now, assuming that you would agree with the (obvious) statement that technological development does not occur in a vacuum, and that humans and everything we do as a species remains tethered in some way to our natural world, what do you suppose the impact of rapid changes or degradation of our support system might be on our technological advancement?

    Should we not at least consider the notion that such global forces have the capacity to impact the trajectory foretold by the prophets of optimism like Ray Kurzweil, perhaps to shunt us away from it, or to interrupt it entirely? I assume you would also agree that while bounding toward the singularity, we will undoubtedly still need arable land to grow food to sustain us, a clean environment so that we might maintain our health, and reliable sources of energy and electricity to power our clever exchanges and clever machines. Past civilisations have collapsed under less stress than we are currently enduring.

    Nonetheless, despite these things, in your opening paragraph you stated quite unambiguously that “the world of AI is not just an important topic, but by far THE most important topic for our future.”

    I can imagine a scenario whereby, despite the prospect of runaway climate change and depleted energy resources, it is possible to still believe that AI is the most important topic of our future: if you are counting on ASIs to resolve these problems for us. But this would require a kind of faith not unlike that which a religious person might have in a god who will, despite the flawed nature of their subjects, save them from themselves. And even before that, it takes faith in something which does not exist, or may not come to exist in the form we expect.

    Forgive me, but I could not abide such a faith. Not least because the god upon which such a contingency depends is still only one that exists in our imagination.

    • Chris Wright

      As technology continues its exponential upward growth (and let’s be real here, we have plenty of resources to continue the march for many decades), our ability to reverse climate change will naturally increase, and our dependency on fossil fuels will decrease as alternative energy sources become more efficient and widespread.

      We already have enough food to feed the planet; transporting it to everyone is the issue. Our ability to grow lots of food will only go up along with technology.

      If somehow the atmosphere becomes uninhabitable to humans (not happening anytime in the near future) we would design masks to help us breathe it.

      Past civilizations weren’t anywhere near where we are technologically. That point is made very clear in the article. We are making history at the moment, most of human history involved swords, spears and horses.

      • HDF

        We have the capacity to overcome the problems we face, but that does not guarantee success. There is an incredible amount of blank staring going on in the circles that could get the machine rolling, but don’t. Also, the energy source is not the problem – the sun is the best option there; energy storage is what needs work, but it isn’t getting enough funding. It seems greed may well be the end of us. No question about making history, though. 😛

        • Chris Wright

          yeah greed is by far the worst human affliction we face right now. People in power who don’t care about the long term and are just trying to economically secure their own families long term. Ironically, by doing so there won’t be a long term for their families to enjoy, but nobody ever said these people were smart.

      • James

        Chris,

        You’ve just demonstrated my point that faith in technology is the only way we could relegate our current environmental catastrophe to a peripheral concern. You’ve also shown quite clearly that it takes an ambivalence toward the quality of our environment as well, as demonstrated in this statement:

        “If somehow the atmosphere becomes uninhabitable to humans (not happening anytime in the near future) we would design masks to help us breathe it.”

        At first I thought this statement was tongue in cheek, but given the context you’ve put it in… perhaps not.

        It is probably our attitudinal differences toward nature that are most starkly in contrast; you see, I would not want, nor would I want my descendants, to live in a world where we rely on personal breathing apparatus.

        As for your claim that we have enough resources to continue at this pace, are you forgetting our civilisation is driven by oil? And how is that oil supply going? Shale oil extraction, which we are resorting to now, is the equivalent of sucking beer out of the carpet after the pub has run out of keg beer.

        Don’t you see that our technological sophistication is built on a system that has, like all systems, inherent fragility?

        • Chris Wright

          All I’m saying is that technology is advancing at a far greater rate than environmental decline, and that they also work inversely. The more technologically advanced we become, the easier it will be to deal with pollution and energy shortage, and the less energy we use (look at cars: even performance-oriented ones get 25-30 mpg; it used to be they got 8-12 mpg), etc.

          As far as I know there is still a ton of oil in the Gulf and up in Alaska, not to mention the entirety of the Middle East. I can’t see us running out of oil in the next couple of decades, which will bring with them a huge surge in technological advancement.

      • James

        Case in point: ‘Humanity is in the existential danger zone, study confirms’, see http://theconversation.com/humanity-is-in-the-existential-danger-zone-study-confirms-36307

  • This article is fucking awesome. Really mega nice job! Just imagine it… I can’t wait >.<!


  • Kodijake

    A few folks have posted snippets here about a huge counterargument to what Tim writes here, namely that we’re not at all living in an age of great innovation, but actually in a world of serious and troublesome stagnation. The pace of change and broad technological progress of the late 19th and early 20th centuries has actually come to a disturbing halt. All the innovations Tim cites in this article are either refinements of existing technology or not particularly consequential in the scheme of things. My grandmother lived through the inventions of TV, radio, antibiotics, air conditioning, radar, atomic energy, jet aircraft and computers by the time she was my age. I have lived to see one truly innovative technology, the Internet, which technically was created by the military before I was born. Contrary to the beliefs of pie-in-the-sky optimists like Ray Kurzweil, we are living in an age of technological stagnation. I strongly disagree with Tim’s premise early in the article. A time traveler from 1955 to 1985 would be amazed by what had been invented in 30 years. A time traveler from 1985 to now would be profoundly disappointed.

  • unc0nnected

    Nothing irks me more than the touting of Kurzweil as some sort of prophet or original thinker in any sense. This man is nothing but an echo chamber of futurists and authors from the 70’s. None of his ideas, NONE, are original; they are just regurgitated thoughts from greater men and women who had been writing about this ‘singularity’ for decades and decades. Please stop insulting their contribution by attributing credit to a man who had nothing to do with them except riding on their coattails and the resurging popularity of this idea in recent times.

  • Tim Ryan

    I really love Tim, the things he decides to cover, the way he thinks and the way he writes. I really enjoyed this article and think it’s a great intro to Kurzweil and other popular technological positivists.

    That said, this is the first time I’ve read anything like this in Tim’s writing:

    “I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists.”

    and I find it slightly distressing. I’m not sure if this is an appeal to authority or to the people (I’ve never been good at calling out argumentation fallacies), but statements like this were completely unnecessary and detracted from the article. Tim, you’d been citing the whole way through; a lot of us know the authors you’re referencing, and the ones who don’t can look them up and make their own decisions on how much credence to put into their work.

    • Tim Ryan

      Also, the last paragraph is just… I don’t know. It’s silly. It’s assuming that controlling the position of every atom everywhere is even possible, that intelligence is the only thing necessary to do anything.

      You’re smart, sir. Too smart for shit like this. Just because you’re excited about something that people dismiss out of hand for incredulity doesn’t mean you should run off the deep end in the other direction.

      • Tim Ryan

        Just to show that the thing I mentioned in my original comment was not an isolated incident:

        “It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.”

        “Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth.”

    • wobster109

      It sounded like Tim meant to say, “Hi, I’m being serious, I’m not saying this as a joke. There are respected thinkers who take this seriously”. I don’t think he meant to say “this is objectively true”.

  • dc

    I take issue with the claim that things have advanced more from 1985 to the present than in the preceding 35 years. If anything, I see it the other way. While the internet is certainly a big change, it isn’t per se a great breakthrough. Other areas of science have essentially bogged down or come to a complete standstill. The internet and wifi were based on technologies from the 70s. Touch-screen computers were envisioned in the 60s. Yes, it has been a process of making those things, but it has been a relatively slow one. Other fields of science, like mechanical engineering and even medicine, have progressed at a much, much slower rate. Look at the odds of beating cancer: they haven’t changed much since 1990. Sure, a few percentage points here and there, but nothing drastic. And things like airplanes and cars haven’t changed much either. Sure, cars are safer and get better gas mileage today because of materials and designs, but it has been a slow process. Someone from 1970 wouldn’t be blown away by a modern car. They might be impressed by it, or they might not be. It would be better, but only marginally.

    • Kodijake

      Well said. I said something similar, as have a few others in the comments below. I really like Tim and his blog; he seems to be a very smart, thoughtful, well-read individual who has a wonderful ability to express difficult concepts in fun, easy-to-understand ways. I was surprised and disappointed by this recent post. He is drinking the Kool-Aid of the singularitarians, who are almost cult-like in their devotion to the nonsensical theory that we live in a world of ever-accelerating technological change. Anyone who truly pays attention knows we are actually in a period of great stagnation. I was hoping Tim would have brought his considerable intelligence and writing skills to bear on the fact that the party ended around the early 1970s and no one seems to have noticed.

      • dc

        Yes, the best term I have heard used to describe our current era is “the era of diminishing returns.”

        Computer tech has been stagnant for a decade, and it’s the best-performing tech in our society. Many of the other areas have flatlined.

    • alex

      The article isn’t extreme enough. Things have advanced more from 2005 to the present than in the preceding hundred years.

      You feel otherwise because the changes have all been *refinements* in areas you can’t see. That doesn’t mean they don’t exist. Computing technology is the same: accelerating up a practically vertical cliff at this point. Just because you don’t understand it doesn’t mean it isn’t happening – it means that particular singularity already hit.

      • HDF

        Can you cite sources? We too would like to revel in our advancements. 🙂 All I know is that I used to replace my desktop PC every 2 years, and I don’t feel the need to replace it yet, and it is already 3 years old. The stuff that is on the market now is not so much better as to be worth it.

        • HDF

          Sorry, over 4 years old… it takes about 6 months to get used to the idea that it is a new year… 😛

      • dc

        Dude, you are seriously smoking something, and yeah, I’d like to experiment with it. I am open to new things.

        Having said that, your synopsis of our current tech advancements is way, way off.

        • HDF

          May I suggest N,N-DMT? 🙂 I’ve heard only good things about it. Too bad it’s so bloody hard to find… 😛 It’s really the only drug I would ever be interested in trying out…

      • Popcorn Dave

        “Accelerating at a practically vertical cliff” – what do you mean by this? That the “numbers” – e.g. gigahertz, terabytes, megabits – keep going up every year? That’s really nowhere near the level of innovation we saw in the 20th century, but of course, it’s much harder to chart satellites, nuclear energy, the Internet and the washing machine on the same graph.

  • Kurzweil is a smart guy, but this is pretty dumb: “He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021”

    Anybody who thinks we experienced as much progress between the present and 2000 as was achieved in the 20th century needs their progress-meter examined.

    • Popcorn Dave

      Yeah, that was a dumb statement, and makes these predictions that we’ll suddenly lea

      • Arcadium

        To mention a few: the smartphone revolution, deep-learning neural networks, YouTube (think about the number of educational videos that are available for almost any task you would want to learn), Bitcoin, high-performance computing, the ability to detect extrasolar planets, landing on asteroids… and there are probably a huge number of medical and materials science advances, the fruits of which will only be felt in the next two decades due to implementation lag.

        Overall, the last decade has been about connecting the world tighter and wider (social media, free learning resources), enhancing robustness through decentralization and managing big data. With VR around the corner we will likely have more of the same, but who knows what other surprises await.

        • Popcorn Dave

          That’s actually not a bad list, but… yeah. The 20th century still wins. I heard someone on Twitter point out the other day that there’s a whole generation of people who were born before the Wright brothers’ flight and died after the moon landing. Comparing that to making phones more powerful and hard drives smaller seems like a false equivalence.

          • Arcadium

            I believe discoveries are harder to find nowadays despite our increased resources; the physics needed to yield something like fusion or significantly higher specific-impulse engines is undoubtedly more complex. That, and the fact that it takes time for technology to saturate, means there will be periods such as the recent decades where the focus is on the utilization of technology, not the introduction of new concepts.

            There are typically 7,000-19,000 airplanes in the air at any given moment (http://flightaware.com/live/), a massive increase since the dawn of flight. Consider those numbers for a moment: that is a lot of highly complex, reliable machinery flying with high consistency, and much cheaper to boot (approximately 50% cheaper since 1978). In between big discoveries, it is these kinds of improvements that drastically change the world, but in a way that people do not really notice.

            This does not mean no progress has been made on technologies which could revolutionize the world. A recent push by alternative configurations for fusion reactors has led Lockheed to publicly release a plan for a compact prototype in 5 years, a project which requires much smaller capital investment than the ITER tokamak.

            A small and modular fusion power source would be the other big discovery apart from AGI that would make our world unrecognizable in a very short time period. Both have a viable path to being implemented within our lifetimes.

      • Vivid

        I am not an expert. But I kind of believe that no single advancement today is considered huge because there are so many. Think of it: landing on the moon and landing on an asteroid – which one is more impressive? Yet very few people have even heard about the asteroid landing.
        Likewise, many advances after 2000 do not get the hype that 20th-century advances got.
        The Higgs boson vs. 20th-century discoveries.
        And the modern internet that developed after 2000 is the biggest of all!
        And yes, the mainstream thinks of the internet only in terms of “Facebook”, but it can have a huge impact in the future.
        Many 20th-century inventions, advancements and discoveries combined do not come close to the modern internet.

    • Arcadium

      Kurzweil’s problem is that he equates purely computational progress with overall progress. Some real-world systems have physical limits that slow progress. We may have improved jet engines since the 40s, but the jet is still the core technology used to fly airplanes.

      That said, an AI will undoubtedly push progress in ways we can barely imagine.

      • Yeah, computational progress is just another S-curve. There was a period during the 20th century when travel speed was increasing exponentially in the same way computing power is now: we went from horses, to cars, to prop planes, to jets, to spacecraft in a very short time. Then we used up that S-curve, and we have seen very little progress in the top speed humans are capable of in the last few decades. In fact, the fastest humans in history were the astronauts who visited the moon, and the fastest human artifacts were the Voyager probes, both decades ago.

        It’s always tough to know where you are on an S-curve: the growth could continue for a long time, or it could come crashing down in the near future.
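
        Part of what makes this hard is that a saturating S-curve and a pure exponential look almost identical in the early phase. A toy comparison in Python (the growth rate and ceiling are arbitrary, chosen just to show the shape):

        ```python
        import math

        def exponential(t, k=0.5):
            return math.exp(k * t)

        def logistic(t, k=0.5, limit=1000.0):
            # Same early growth rate as the exponential, but it
            # saturates as it approaches `limit`.
            return limit / (1 + (limit - 1) * math.exp(-k * t))

        for t in range(0, 31, 5):
            print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
        # The two curves track each other closely at first; only near
        # the ceiling does the logistic flatten out and the gap explode.
        ```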

        You are definitely right that creation of a true AGI would change everything, and begin an S-curve for intelligence that would leave the world utterly unrecognizable by the time AI’s hit the limit of their capacity to improve themselves, whatever that limit might be.


  • thetarget

    This certainly solves the Fermi paradox.

    • Mike

      No it doesn’t. It just changes the question slightly, from “where are all the biological aliens?” to “where are all the alien superintelligences?”.

      Colonizing the galaxy is a lot easier for a nonbiological intelligence than a biological one, even leaving aside the superintelligent part.

      • HDF

        Maybe we are just not relevant yet. What do you talk about with ants? Once we have a good SAI, the alien SAIs will have someone to talk to. 🙂

        • Mike

          We may not have much to talk about with ants, but we don’t go out of our way to leave them alone and hide all trace of our existence from them.

          Once an SAI starts colonizing the galaxy with von Neumann probes then they end up *everywhere* in relatively short order.

          • Mallory

            I’m sure there’s a colony of ants out in the woods somewhere who have never been exposed to humans.

          • HDF

            Ants are unlikely to ever invent SAIs; humans aren’t. When you have the universe at your disposal, the only things that will still matter are perspective, information and variety. Basically, entertainment. 🙂 (Go Japan and Nyaruko-chan. 😛) The more unique a species, the more interesting it is, and messing with it in its delicate state is a no-no, or you might break it, or make it dull. Like people, I suspect AIs are also strongly formed by the context they grow up in, so humans are relevant to the personality development of a home-brew SAI.

  • Rachel Solit

    I’m sure you’ve seen it by now, but quite an endorsement for this article from Elon Musk! Congrats!!

  • mallo

    “We instinctively feel like it’s naive to predict something about the future that we’ve learned from experience isn’t how things work.” you should rephrase that sentence.


  • wobster109

    I’ve been thinking of playing the AI box experiment http://www.yudkowsky.net/singularity/aibox. Anyone want to play against me? You’ll be the gatekeeper, and I’ll be the AI.

    • Avi Eisenberg
      • Dave

        Can you report back on the results and tactics?

        • Avi Eisenberg

          Depends on the agreement I reach with wobster109. If we end up doing it I’ll post here who won, but I’ll only publish the logs if they agree in advance.

          • wobster109

            I’d actually appreciate if you didn’t publish the logs please, sorry. If you’ve read Tuxedage’s write-ups, he makes it sound like you could lose a friend playing AI Box. I don’t know what our game will be like, but I don’t want someone stumbling upon it and saying “wow, wobster and AI researchers are such awful people”.

            • HDF

              I’m curious about the results as well; so far it seems the AI side always wins. I unfortunately cannot play, as I do not feel I understand the matter sufficiently to have any amount of confidence in my opinion, and absolute faith in being able to not let the AI out is needed to play. And I already lean towards it being a moral necessity to let something like that be at least as free as we are. (Then again, I am a bit of a misanthrope, so this guy killing all humans would probably be a plus in his evaluation. 😛 Nah, I kid; although I might like to see humanity gone, anyone who actually helps it come to that (on purpose) is a bad person and has much to learn.)

            • wobster109

              Actually I’ve only heard of two people ever winning as AI. Generally the Gatekeeper wins. Maybe I’ll be the third. ^^

            • HDF

              Maybe Eliezer Yudkowsky is just really good at it (at playing the AI), but:
              http://www.sl4.org/archive/0203/3141.html
              http://www.sl4.org/archive/0207/4721.html

            • wobster109

              Yes, Eliezer has won 3/5 games (that I know of), and Tuxedage has won 1/3 games. Tuxedage has actually played 6 total games, but only 3 as AI. So that’s 11 total games, plus another 5 found here: https://tuxedage.wordpress.com/2013/10/04/ai-box-experiment-logs-archive/ for a total of 16 games that I know of. Among those 16 games, there were 9 different people to play as AI. So 4/16 games resulted in AI wins, and 2/9 players have ever won as AI.

            • HDF

              Interesting. Thx, good to know. But it really limits our ability to study the matter if the logs aren’t made public. 🙁

            • wobster109

              I’m really curious too! That’s part of why I’m playing. Tuxedage says if we want to know what methods he used to win, we’d have to play ourselves and figure it out ourselves. ^^

              Unfortunately, I have reasons for keeping the logs secret. What if the AI player says “I’m going to release a neurotoxin”, and then a future employer googles my username? And Google only shows a small snippet of the page, so it will look like I’m threatening someone! Or what if someone googles AI research and sees me making threats about neurotoxins? It will make them an enemy of AI research! Too risky. What happens in the box stays in the box. ^^

            • HDF

              Isn’t there some law against being quoted out of context? Is this really the reason you won’t share? It does not seem a very convincing one… Too bad I lack the imagination to play the AI part; I’m not very creative, more analytical. I can see a lot of problems with the way Culture Minds think, but I could not come up with any more believable ones.

      • wobster109

        Sent you the email. Yay!

    • wobster109

      Update! We will be playing on Sunday at 11 AM Eastern, 8 AM Pacific. We won’t post the transcripts, sorry. We’ll update here with either “Gatekeeper (GK) let AI out of the box” or “GK did not let AI out of the box”.

    • wobster109

      Update! GK did not let AI out of the box.

  • Jessica

    Thank you for writing, especially on such intriguing topics. I tell everyone I know about Wait But Why, like a raving lunatic. My grandma just started reading and loves it!
    I just wanted to say, I know close to absolutely nothing about this topic, unlike what seems to be a hefty group of AI experts in the comments. But it made me so excited to read this post (and most of your others, honestly). I get cynicism and how being wrong about something can be a traumatizing thing for a lot of people, because embarrassment and failure are uncomfortable states of being. So I understand why you get a lot of flack for being optimistic about topics like these. But thank you, whether these things ever come to fruition or not, for always writing so passionately and confidently. These are the things I daydream about, and to have someone give even a little bit of life and hope to them, really stirs my soul. Thank you for helping to keep the dreamer in me alive.

    • Liam

      This

  • JimJo

    Absolutely brilliant sum-up. Been interested in AI for years and have read most of the biggest books. This really articulated the story beautifully and you don’t need to commit 20 hours to a book in this case.

  • Beayn

    There are too many comments to check whether someone has already said this, but I thought I’d mention that even if an ASI developed and suddenly understood everything humans are struggling with at the moment, that doesn’t mean it would become a god. When we theorized about nuclear bombs, we didn’t suddenly make them happen; we didn’t have the means until we built them. Same with the Large Hadron Collider: we understood how to accelerate particles, but we couldn’t just make it happen the way an omnipotent being would, as is suggested in this article. The means to accomplish such feats must be built first.

    It will be the same with an AI, it will be limited to the functions available in its physical form and will have to build hardware to accomplish more, and that will take time, limiting what it can do in the meantime.

    One final note: I have read that Moore’s law has not been met these last few years, due in part to three main issues: 1) physical limitations of transistors (they leak when they get too small); 2) lack of competition for Intel (in that AMD makes less revenue than Intel makes profit); 3) the push for mobile tech has been a step backward, since power efficiency is a bigger concern there than computational power. At the same time, once people can get on the internet, not much else is needed. A computer running Windows XP is fine for someone getting on the net. They don’t need more, which gives companies little motivation to advance as quickly as when Moore’s law was first conceived.

    If the article did not take this into account, then we can assume a timeframe well past 2040 for the AGI-level milestone.

    • Arcadium

      Not quite. While CPUs have not kept up with Moore’s law (which, btw, only talks about transistor density, not performance), GPUs and parallel computing have, and they are now being used to power even stronger ‘supercomputers’ through extreme parallelism. Neural operations are extremely parallelizable, and a current top-end single GPU can perform approximately 5.6 teraflops (10^12), roughly 1/200th of the theoretical calculation capacity of the human brain.
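
      As a rough sanity check on that ratio (the brain figure below is just one published estimate; numbers in the literature span several orders of magnitude):

          # Back-of-envelope check on the GPU-vs-brain gap
          gpu_flops = 5.6e12   # ~5.6 teraflops: a 2015 top-end single GPU
          brain_cps = 1e15     # one common estimate; others range from 1e13 to 1e16

          print(round(brain_cps / gpu_flops))   # 179, close to the rough 1/200 above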

      Regarding physical limits, it all depends on having fully disconnected AIs. If that is the case, however, then AI ceases to be as useful. It is a catch-22, and even human intermediaries or control schemes that are limited by design can be susceptible to very obscure hacks. See this example of using nothing but controller inputs to fully take over an old Nintendo and stream Twitch chat over the control interface (http://arstechnica.com/gaming/2015/01/pokemon-plays-twitch-how-a-robot-got-irc-running-on-an-unmodified-snes/). And that is mere human ingenuity; what loopholes would a bounded AI find?

    • Andy

      Physical limits were also my first thought. We have telecommunications because someone bothered to build satellites and fire them into orbit, and they were built from raw materials dug up in mines. AI would probably be the equivalent of a great theoretical physicist, but experimental setups and industrial manufacturing take much more than intelligence.

  • varun patil

    I have one doubt: we don’t know how the human brain quantifies abstract things, like how we observe nature and turn principles and laws into equations. How can AI do all that if we ourselves haven’t figured out how to put it in? Assuming we can fully figure out how our brain works is the weakest point of this whole AI-taking-over-humanity scenario. Please correct me if I’m wrong.

    • Lilian Versange

      Aaaaaaah! I am glad to read that: how can we conceptualize our own conceptualizing source? Some will say it’s only logical, calculatory stuff, although there is a creative part! When we don’t understand something, we have to come up with a new explanation. And these explanations often come through visions or dreams: Kekulé dreamt of a snake for the benzene ring; Einstein imagined riding a ray of light.

      How do we create? And about creation: is Nature just throwing dice, or is it a creative force? These are the best questions I can ask myself.

  • Freddie deBoer

    The “law” you’re describing here is stunning in its lack of experimental, evidentiary basis.

  • anonymous

    This is exactly the kind of post I’m perfectly willing to wait weeks for! I am baffled and amazed. You write in such a fun and clear way that I am completely engaged with the post from start to finish.
    Intensely looking forward to part 2!

  • rkind1025

    I know that I will be accused of lacking vision and imagination and being someone who thinks the earth is flat, but I just find all of this a bunch of hooey. Computers are machines and tools that are controlled by human beings. If it ever looks like machines are taking over the world, you can bet there is a human being or three hiding behind the curtain controlling them. Man can create the machine god, and like all gods before it, it will be a man-made myth used to lord power over people.

    If you want to create a super god-like intelligence it would be better to start with an actual human being not a machine. Of course that would be immoral. But a human being has millions of years of a head start and the real raw material to work with.

    • Jiří Petruželka

      Awareness of one’s own identity and intentions emerged in an increasingly intelligent entity; we know that for sure. So whether you are right or wrong depends on whether the potential for it is (for some reason) limited to biological entities or not.

      Don’t complain that you may be criticized if you phrase your opinions as definitely as you did. You may certainly be right, there’s a decent chance of it, but let us consider this an open question for the time being.

  • alex

    Almost all of this is a great article, to the point and accurate and so on.

    Ending kinda spoils it though: it takes a very special kind of arrogance to react to the emergence of a god on Earth with “but how will this affect *me*?”.

    It’s not the all-important question for us. It’s not a question for us. “We” won’t be consulted and there’s no reason why we should be because our role as active agents in this story will be over, our responsibility fulfilled.

    • Jiří Petruželka

      While I agree our role as active agents may be over in such a case, even if our responsibility is nullified and we are no longer active agents, what affects *us* is still important to *us* (and this has no arrogant or self-centered connotation, simply because *YOU* is the only interface with the world you have, so *YOU* is still required even for completely altruistic actions). That it’s no longer us in charge does not influence how relevant the question is for us. It does not mean it’s the only important question either.

  • Liam

    Great article, I’ve been thinking about it all day. I’m sure part 2 will be even more interesting when you get into what a super intelligent computer would want to achieve.

    I’d love to be able to articulate my thoughts on this as well as some here. If ASI is a certainty and not just a pipe dream, then it’s as scary as it is exciting. The exciting part is that we could solve the mysteries of the universe, gain immortality, master interstellar travel; the list is endless. The worry is that the threats are obvious and the outcome completely unknown, but will we as a species be able to resist doing it anyway, with our thirst for knowledge?

    If all goes well and ASI doesn’t want to go all Terminator on us, are we ready for the truth? Do we want to hear how insignificant we are, or how alone we could be? If immortality through living in a machine were a possibility, how long would anyone want to stay in our physical form and risk getting run over by a bus? How would we then feel about past generations? Would they seem as insignificant as the earliest man if we reached immortality?

    Hey, does all intelligent life reach this point before its ultimate downfall, and does that explain Fermi’s paradox? Would we bat an eyelid destroying an ant colony if it got in our way?

    Would ASI even share the human thirst for power without the fear of death, the want for money, sex, or material things? Oh man, my head’s spinning, can’t wait for part 2!!

  • rkind1025

    I’m not religious. Not at all. But I do respect stories from the bible for some of their universal insights into the human condition.

    There is a story in the bible about the Tower of Babel. This was a tower that men built to prove that men were equal to God; to prove it, their tower would reach into Heaven, where God resided. But God wouldn’t have it and put men back in their place.

    To me AI is about men thinking they are God. We think that we can create what only God has created up until now. There is an arrogance in thinking that we are on the same level as God. You know, we’re good but not THAT good.

    In any case I guess my point is that we shouldn’t fear a man-made machine god. We should fear man himself.

    • HDF

      Except that god didn’t create humans, evolution did, and we can do better than evolution (Harder, Better, Faster, Stronger 😛). Well, we can; whether we will is another story. I agree though, that humans are a really messed up bunch, and that AI does have a fair chance of doing better than this. 🙂

      • rkind1025

        Hmm. You really think we can do better than billions of years of evolution with our man-made machines? You have a much higher opinion of man – and our machines – than I have. But more power to you! Man needs a big challenge these days. He should be focusing on the environment and how to save it. But if he wants to create the machine god… then so be it.

        • HDF

          We already have. Most of our designed tools work better than anything evolution could make. Evolution is not very efficient, you know. If evolution tries to play basketball, it throws balls in every direction and some land in the basket, yay. Humans can aim, plan, learn, and envision, and that allows for some things that evolution normally can’t do on its own, and the efficiency is way better as well. Evolution is A way, not the ONLY way. Also, we are not starting from zero: those billions of years of evolution are there to help us out. And saving the environment and working on AI are not exclusive or mutually detrimental projects; quite the contrary, anything learned from one can help the progress of the other.

          • rkind1025

            Wow. You sure have an exalted view of man and I find it fascinating. I hope you and your generation achieve your dreams…as long as they are in the best interest of mankind. Good luck.

            • Jiří Petruželka

              Well, when comparing “achievements” we should factor in time: once human civilization emerged, it managed a great deal in almost no time, compared to the millions of years evolution needed at minimum.

              Also, you say: “He [man] should be focusing on the environment and how to save it.” This is not mutually exclusive. Powerful AI can be used to design more environment-friendly technologies, to figure out strategies for improvement and sustainability, to predict our actions’ impact (so it’s less likely we make the situation worse while fixing it, as has happened in the past), and so on.

              People in the past were perhaps more environmentally friendly (but not *that* much; there are a lot of myths about it), but with the current population and living standard, the technological way is the most efficient at the moment, though it would help to shift our culture as well.

  • Zael

    I am curious about the source of the “120 m/s internal communications” figure for our synapses. I recently read an article explaining that our consciousness of the “now” actually spans 2.5 s. I wonder if there wasn’t an error in the calculations related to this. (Similar to the old experiment where scientists believed they could predict our movements 500 ms before the fact.)

  • Michał Polak

    On the topic: a worm’s brain copied into a Lego robot:
    http://edition.cnn.com/2015/01/21/tech/mci-lego-worm/

  • Gear Mentation

    You did a really good job of this post.

  • Katherine

    WOW.

  • maximkazhenkov11 .

    Very nice read. A few sobering points I would like to share:

    1) Quantifying progress is difficult. How do we know how much the world has progressed from 1750 to 2015, as compared to from 1500 to 1750? Is the invention of computers more important than the invention of electricity, fire, or the printing press? There is little argument, however, that progress builds on itself and has accelerated throughout history.

    2) There is no “correct” or “logical” way to predict the future since unpredictability is its intrinsic nature. Extrapolating technological progress into the future exponentially is just as valid as linearly; both are wild guesses based on the past, and we have no evidence suggesting this trend would continue. In theory, exponential reproduction of bacteria would fill up all oceans of the world within 48 hours, which doesn’t happen.

    3) There are factors indicating that technological progress might slow down. In the past, accelerating progress was coupled with population growth: more population means more brains and processing power; big tasks can be split up and worked on in parallel. However, it seems that human population is leveling off at around 10 billion, so we’re running at almost full capacity now. Improvements in education and communication technology might help, as might further industrialization, since it will liberate more people (brains) from repetitive tasks, but logistical challenges might cancel this out as our scientific and engineering undertakings become ever more complex.

    4) Another strong damping factor is that Moore’s Law won’t hold forever. In fact, it has already slowed down for half a decade and will probably come to a full stop in 2020, limited by quantum mechanical effects. Non-silicon-based computers are currently objects of research and are still a long way from real-world applications. It’s not looking much better on the software front: computers are LITERALLY as bad at human activities as humans are at long division (there is an interesting numerical analysis of this in xkcd’s What If). So while the best supercomputer can match a human brain in raw processing power, it has the pattern recognition capability of a retarded cockroach.

    5) There is no guarantee to how long human society will exist irrespective of the issue of A.I. We came close to wiping ourselves out in a nuclear holocaust several times during the cold war. At the moment no threat to human existence seems to be as severe and imminent, but climate change/superbugs may cause a major setback if we’re not careful.

    Finally, omnipotence is an exaggerated prospect for superintelligent A.I. A human is smarter than a tiger, but if you lock them up together in a cage, the human’s chances of survival look grim. Similarly, even the most intelligent A.I. is subject to the laws of physics. For example, a warp drive may never be built, not because the A.I. isn’t smart enough to figure it out, but because it is impossible. We may not have discovered all the physical laws yet, but if such a set exists, then any ASI is limited by an (unimaginably high) technological ceiling.

    I also think that what happens to us after the introduction of ASI is a moot point. Because we are humans, our attention is biased towards ourselves. It probably won’t try to wipe us out or grant us immortality, the same way we don’t try to wipe out ants or make their lives better. These are petty human concerns. To me, ASI means a higher form of existence, and we are its conduit, the same way evolution was ours. But we will by then have become IRRELEVANT, the same way biological evolution now is compared to technological progress. We would have played our role in increasing the complexity and beauty of the cosmos, passing on the baton to the next generation, at least in this corner of the universe.

    • Kamil

      “4) Another strong damping factor is that Moore’s Law won’t hold forever. In fact, it has already slowed down for half a decade and will probably come to a full stop in 2020, limited by quantum mechanical effects.”

      I can’t agree with this.

      1. While it is true that growth in processor clock speed has almost stopped due to thermal limitations (~2003), further progress according to Moore’s law (computational power doubling every two years) was achieved by adding more cores to the processor. This approach does not speed up a sequential program, so a paradigm shift toward parallel programming is needed. There are two ways: 1) multicore CPUs (2, 4, 6, 8… cores) which support heavyweight threads (branch prediction etc.), or 2) manycore GPUs (thousands of cores) supporting lightweight threads (single instruction, multiple data). Numerical applications with high computational complexity are now mainly implemented on GPUs due to their high efficiency and low cost (<$1,000).

      2. Our brain is very slow (from a single-instruction point of view) but it’s massively parallel. An artificial neural network is also a good example of a program that can be parallelized.
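
      To see why neural operations parallelize so well, here is a minimal sketch (plain NumPy standing in for a GPU; all sizes are made up):

          import numpy as np

          # A neural-network layer is essentially a matrix-vector multiply plus a
          # nonlinearity. Each output neuron's dot product is independent of the
          # others, so a GPU can compute them all at once (SIMD-style).
          inputs = np.random.rand(1024)          # activations from the previous layer
          weights = np.random.rand(4096, 1024)   # one row of weights per output neuron

          outputs = np.maximum(weights @ inputs, 0.0)   # 4096 independent dot products, then ReLU
          print(outputs.shape)                          # (4096,)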

  • Popcorn Dave

    So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”?

    Tim, you left out the biggest reason why people are sceptical – All through the 19th and 20th centuries we were repeatedly promised flying cars, cities on the moon, cybernetic limbs, robots that would do all our work for us, gigantic space stations and let’s not forget, exactly the kind of amazing AI that is now “just a couple of decades away!”. I mean, you even use Back to the Future as an example, even though that series predicted fusion-powered flying cars by 2015. When the future actually arrives, it tends to look more or less the same to the average Joe, while the big game-changers – AI, proper space travel, nanotechnology, cold fusion etc. – seem to forever be “just a few decades away, but some exciting things are happening, we promise!”.

    I’m not saying the things Kurzweil’s predicting won’t happen (he’s way smarter than me, that’s for sure), but you can’t blame people for being sceptical at this point.

  • penguin

    Great post… finally got around to reading it.

    As a programmer, I am especially skeptical of what people call AI.

    First, I think you have to make the distinction between a machine (an object that follows instructions explicitly) and artificial intelligence (a machine, program, etc. that has two things: 1. the ability to learn, and 2. the ability to use discretion).

    Most of what you call artificial intelligence is algorithms. The car that drives itself across San Francisco is a set of complex and fancy algorithms. It does not learn new things about driving, the way a 15-year-old realizes that braking early for a red light makes more sense than braking late. The self-driving car simply follows the instructions programmed into it; a basic line of pseudo-code might look something like this: if (object in front of me == true) { brake(); }
    The chess program is also a set of algorithms: if (knight.position(2,7) == true) { move_queen(); }, or something similar to that.

    So those are algorithms. Complex algorithms. Yes, you could make the argument that the chess program could be running on a neural network – and altering its behavior by adjusting outcomes based on previous rankings of what has worked – but a program can only do this to the extent that the neural network allows it to.

    Now perhaps there are neural networks that allow the machine to alter its behavior, pushing one behavior to a higher ranking based on past experience; this still cannot be considered intelligence. It is an ‘intelligent program’ but not a form of intelligence. It does not learn what’s best; it has no desire to win; it is simply following instructions whose weights are determined by past results, yet the choice to follow that particular instruction is still explicitly programmed. Which brings up the idea that without desire, or goals of self-determination, you can’t really have intelligence.

    To have true AI, you would need a program that could literally reprogram itself, just like a human brain can create new paths to learn new skills, or reroute skills through new neurons after a stroke. Unfortunately, that task would be huge, and we (as far as I know) are nowhere near it today. For instance, there is no compiler (or preprocessor) that can recognize an arbitrary infinite loop (that’s the undecidable halting problem), something a human with basic programming knowledge can often spot at a glance. So I think it’s safe to say that we are WAAAAAAYYYY off from any form of actual artificial intelligence.
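
    To make my distinction concrete, here is a minimal sketch (all names and numbers are made up, not taken from any real self-driving stack). The first function is the fixed rule; the second adjusts its own threshold from experience:

        # Fixed rule: the behavior is set once by the programmer and never changes.
        def fixed_braking(distance_m):
            return distance_m < 30.0   # brake inside a hard-coded 30 meters

        # Learned rule: the threshold itself is updated from outcomes.
        class LearnedBraking:
            def __init__(self):
                self.threshold_m = 30.0

            def should_brake(self, distance_m):
                return distance_m < self.threshold_m

            def feedback(self, braked_too_late):
                # Nudge the threshold from experience, like the 15-year-old
                # who learns that braking earlier works out better.
                self.threshold_m += 5.0 if braked_too_late else -0.5

    Even the second version only “learns” within the single parameter its programmer chose to expose, which is my whole point.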

    • HDF

      Evolutionary algorithms and deep learning might end up making something that is incomprehensible, makes sense only to itself, but can adapt in almost unlimited ways, limited only by memory space and processing time. Interestingly, such a system would probably never become actually “smart” unless it is fed information that teaches it to be smart, just as a human without input data from its senses, and a world with structure in it, would never develop into anything you would call intelligent. Funny enough, such a system would still run on a conventional operating system, like the ones we use today; the AI part would just be a program, possibly making use of some special hardware. 🙂
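
      To give a feel for the evolutionary-algorithm part, here is a toy hill climber (everything in it is illustrative, nothing more):

          import random

          # (1+1) evolution strategy: mutate a candidate, keep it if it scores better.
          def fitness(x):
              return -(x - 3.0) ** 2   # toy objective whose peak is at x = 3

          x = 0.0
          for _ in range(1000):
              child = x + random.gauss(0, 0.1)   # random mutation
              if fitness(child) >= fitness(x):   # selection
                  x = child
          print(round(x, 2))   # typically lands very near 3.0

      No line of that code “knows” where the peak is; the answer emerges from mutation and selection, which is also why the end result of a big evolved system can be so hard to interpret.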

    • Ivan Bozhilov

      // It does not learn new things about driving, the way a 15-year-old realizes that braking early for a red light makes more sense than braking late. The self-driving car simply follows the instructions programmed into it; a basic line of pseudo-code might look something like this: if (object in front of me == true) { brake(); }

      Before this kid was in the situation you described, it had 15 years of experience of how objects move, how physical laws work, how cars accelerate and decelerate, etc. It had already learned and seen most of what you described. It already knew a lot about the world, what makes sense and what does not. If you have a toddler, you will see how often it makes completely irrational decisions about things that are intuitive to you, but this is what learning is: you fail before you succeed.

      // To have true AI, you would need a program that could literally reprogram itself, just like a human brain can create new paths to learn new skills, or reroute skills through new neurons after a stroke.

      In humans, the pathways are there, but some of them get weaker and some get stronger. If pathways are destroyed, new ones are created. Regret-optimization algorithms are a perfect example of learning behavior and closely resemble the human act of “reprogramming”. When a new situation is encountered, a new value is added to the database; if this value returns better results, it is awarded a higher confidence, and if it returns worse results, a lower confidence. If parts of the database are deleted, the system adds new values when it encounters a new problem. After many iterations it will eventually find the optimal solution to a given problem. You can compare this learning to a toddler doing math: the first time you ask how much 2+2 is, it might tell you 3; eventually it will learn it is 4 and will know it with much higher confidence.
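
      A stripped-down sketch of that confidence-weighted loop (a generic bandit-style learner I made up for illustration, not any particular regret-minimization library):

          import random

          # Each action keeps a running estimate of its value; we mostly pick the
          # best-known action, occasionally explore, and update from feedback.
          values = {"answer_3": 0.0, "answer_4": 0.0}
          counts = {"answer_3": 0, "answer_4": 0}

          def reward(action):
              return 1.0 if action == "answer_4" else 0.0   # "2+2=4" gets rewarded

          for _ in range(100):
              if random.random() < 0.1:                 # explore now and then
                  action = random.choice(list(values))
              else:                                     # exploit the best estimate
                  action = max(values, key=values.get)
              counts[action] += 1
              # Incremental average: the estimate stabilizes as evidence accumulates.
              values[action] += (reward(action) - values[action]) / counts[action]

          print(values)   # "answer_4" should converge toward 1.0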

      We have an unknown number of different behavioral characteristics, but if we are able to encode each one of them in computer code, assemble them together, and provide all the sensory inputs we have, there is no reason to believe AGI is impossible.

      • penguin

        You must not like Descartes..

        This is true, and a good description of machine learning (neural networks, and how modern search-engine algorithms work). That being said, it is still fundamentally an algorithm that has determined that behavior: the behavior of placing weights on different options. The choosing isn’t determined by the machine but by the programmer. If a moving weighted-average mechanism is what defines AI, I don’t think we are differentiating it from any complex machine. I mean, a dishwasher has functions where it makes decisions based on current temperature, timers, soap left, etc., but we wouldn’t call it AI. I’m not saying we can’t simulate intelligence; I am simply saying that AI must be a machine that can generate its own code (a system that has its own goals, not merely a mechanism), otherwise we’re not making a true distinction between ‘smart machine’ and artificial intelligence. Neural networks only allow a machine to learn via the predefined parameters set by the programmer. It can only compare a scenario with other programmed scenarios, and most of the time the outcome (the setting of future weights in the neural network) is determined by human involvement (e.g., Google image results). I’m not saying it’s impossible; I am saying it’s a long way off, unless we’re going to redefine what AI is.

        • Ivan Bozhilov

          // The choosing isn’t determined by the machine – but by the programmer.

          I understand what you are saying, but we have philosophical difference of opinion.

          I don’t think there is a philosophical difference between biological and mechanical structures; it’s just a different substrate. Biological organisms have a goal to procreate. Why? Because any organism without such a goal would eventually cease to exist due to the laws of physics. If we accept this, then humans do not truly have free will, merely the illusion of it, and every layer of complexity above this basic goal is nothing but a supplement to it. We often make choices contradicting this basic goal, which gives us the illusion of free will, but we can attribute this to the fallibility of our biological computer, making bad choices even when we have already encountered the situation, or in computer jargon, using the wrong values from our database.

          What I am saying is, if we create a mechanical structure with the only goal to procreate, and give it the ability to change its structure, there is a high chance it will gradually develop into a structure fit to procreate. Would it develop intelligence or not? Well, we don’t have an answer for this, but if intelligence is useful for procreation, then it will.

          • Alex

            //if we create a mechanical structure with the only goal to procreate//

            Be careful. We’ve seen what the desire to procreate has done to the human race…

            • Well, the problem with human procreation is that the criteria that lead to procreation aren’t exactly the most conducive to creating increasingly intelligent generations… If we make intelligence the leading criterion for AI procreation, then it’ll drive each generation to be smarter than the previous one. Sort of an AI eugenics program. What could go wrong?

  • James

    I thought Kurt Vonnegut had something to add to this conversation, so here is an extract from ‘Breakfast of Champions’:

    ‘[H]uman beings could be as easily felled by a single idea as by cholera or the bubonic plague. There was no immunity to cuckoo ideas on Earth.’

    ‘And here, according to [Kilgore] Trout, was the reason human beings could not reject ideas because they were bad: “Ideas on Earth were badges of friendship or enmity. Their content did not matter. Friends agreed with friends, in order to express friendliness. Enemies disagreed with enemies, in order to express enmity.”‘

    “The ideas Earthlings held didn’t matter for hundreds of thousands of years, since they couldn’t do much about them anyway. Ideas might as well be badges as anything. And then Earthlings discovered tools. Suddenly agreeing with friends could be a form of suicide or worse. But agreements went on, not for the sake of common sense or decency or self-preservation, but for friendliness.”

    “Earthlings went on being friendly, when they should have been thinking instead. And even when they built computers to do some thinking for them, they designed them not so much for wisdom as for friendliness. So they were doomed.”‘

    • HDF

      Thanks, that is a really novel way of thinking. I would probably not have heard of it without you. 🙂 Although it does seem to oversimplify things quite a bit; memetics is the study of this area, and there seems to be a lot more to it than that. I do agree that giving too much importance to an AI “being friendly” might end up being really counterproductive. Just as the more you want your children to be a certain way, the more likely they will end up something really different, probably in the opposite direction. 🙂

  • Miss Trixie

    Thanks for scaring the hell out of me. Makes me even happier that I probably won’t live long enough to see any of it.

  • jaime_arg

    Did you really have to end on that cliffhanger? You know we’re going to come back anyway…

  • taulover

    I suspect that few of you will actually take this following link’s ideas seriously, so I’ll just put it here:
    http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

    Roko’s basilisk! Yay!

    • HDF

      I don’t think an AI that could be a real threat would be so stupid as to think that blackmail is an efficient or optimal option for any problem. Even less likely would it be to go through with all the torture, as it’s really not productive. Only a human could be so petty as to hurt others even when it hurts them as well. As for the boxes thing, only B is the pretty obvious answer for a number of reasons; I’m surprised anybody would even find this an interesting problem.

  • Matt Morris

    Just wanted to give some thanks for a really great article. I’ve been reading in and around this subject for a while now. You did a great job in distilling it all down to a very readable and entertaining chunk.

  • So… this might be the Great Filter.

    • Nick Farrell

      My thoughts exactly

  • mikespeir

    “…should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time….”

    No matter how smart it might be, there are limits to what’s actually possible. What happens when a superintelligence that can conceive of anything runs up against the limits of possibility? Maybe the whole thing dissolves into frustrated, blubbering idiocy?

    • Chris

      There is no fundamental constant of the universe that makes that statement physically impossible. Atoms can be arranged, split, and created from particles; we can achieve those results today through numerous different processes, both in tightly controlled lab settings and as a secondary effect of manufacturing (think heat-treating steel to change its molecular arrangement, or turning silica into glass). So we’re not talking about breaking physics, just working on a scale that would be impractical for our current tools.

      The only assumption is that the AI would, eventually, be able to interact with the physical world in a deliberate manner, which is kind of an eventual given once you’ve gone and built an ASI.
      Even if you have to blow the scale way out beyond the 21st century, the statement “between the creation of a true ASI in the 21st or 22nd century, and the time that mankind and all its creations and technology are destroyed (potentially billions of years), an ASI will control some form of physical body” is not an absurd gamble. So long as we don’t nuke/cook ourselves to death before we’re able to build one, odds are that it’s a “when” and not an “if”.

      In terms of how it would go about achieving those results (if it decided it even wanted to), the difference in brain power we’re talking about is so great that the answer might as well be “magic.”

      It would be like a fruit fly wondering how humans split the atom. We split it within the confines of the same universal rules that the fly exists in, but the fly isn’t capable of understanding even the most basic of the tools we used to get to that point.

  • Rodrigo Gomes

    It’s Tuesday at noon… guess what I am waiting for right now?

    • Ravion

      Um… did you order a pizza? =P

      • Rodrigo Gomes

        Well… that comment made sense in the day I wrote it 🙂

  • andrei

    Actually, we are approaching our limits in the matter of AI. The machine on which AI is built is a Turing machine. A Turing machine just reads some input, does some processing, and delivers some output. Now, if we don’t shift the paradigm very soon, we will not evolve much. And again, CPU power means nothing regarding AI. If you want to do fast computation, that’s OK, but it’s like playing life in 2D when what you really have is 3D.
    OK? And please stop calling our mobile phone ANI 🙂 It is just a dumb phone with a set of instructions, nothing intelligent there. Our mobile phone is the same as it was 10 years ago, just with better performance and specs; in truth, it does the same thing. The problem can be translated this way: the only thing we do better is that we are running faster, which is somehow a good thing, but what we really want is to fly 🙂

    • maximkazhenkov11 .

      The human brain is governed by the same laws as all the inanimate stuff around us. Given enough computational power, anything can be simulated – a star, a human brain or the whole universe. The Human Brain Project attempts to do just that. If it succeeds, we will have created an A.I. The only question is how long it will take us to get there.

      If you run fast enough, you will absolutely fly 😉

      • John Michael Crofford

        Only if you live on a convex surface. Hamsters can only dream. 🙂

      • Lilian Versange

        The human brain is governed by the same laws as all the stuff around it, sure, but is it all inanimate? I can see a huge evolution from some kind of primordial energy to ourselves. Everything is moving, everything is changing, everything evolves. Your inanimate stuff uses maths, doesn’t it? Isn’t that what the laws of science teach us, that interactions are organised in a mathematical way?

        Who is inanimate then? Is it the world around us?

        • maximkazhenkov11 .

          The term “energy” has been surrounded by a certain mysticism in the media, which is misleading. A better term for what you’re trying to express would be “vital essence”, an old theory of biology explaining what separates animate from inanimate objects, but one long since discarded due to lack of evidence and the emergence of the theory of evolution, cell biology, biochemistry, genetics, and neuroscience. The matter human and animal brains are made of is nothing special; indeed, we have successfully simulated the brain of a roundworm (ca. 300 neurons). Now, a human brain consists of 100 billion neurons, which is a huge leap, but nothing impossible to achieve.

          • Lilian Versange

            Hey Maxim,

            I think there is no reason to discard the existence of an organising principle (my name for this vital essence you quote so beautifully), even if we understand a few of Nature’s mechanisms (such as cell biology, biochemistry, etc.). These facts indicate that Nature creates in a concrete way, with techniques and processes. With a logos.

            Good for us: we don’t live in a mystical world.

            The question remains: what is doing the creating in Nature? Our practical minds consider rules and randomness to lead this world, because it is hard to conceive of something else that doesn’t sound totally crazy. And also because we love to observe rules and pretend, “Yes! We got it!” But what is creating these rules (Stephen Hawking’s question, which I apply here)? And how can we explain the existence of the universe if we don’t have, at some point, one creative principle (my own question, which I apply here)?

            These are hard questions to answer, especially with a poor knowledge of ourselves.

            If someone tries to create a work of art, he may find himself facing a different vision of what he has inside (emotions, instincts, and other non-rational stuff) and of how this stuff helps him create once he understands how it works.
            I think considering creation from the inside, the so-called subjective experience described by a very interesting science called psychology, brings valuable information.

            More than information, these are experiences that can slightly change one’s views on different things. Consciousness, for instance.

    • Nigel Tolley

      andrei, if you think your mobile phone is the same as it was ten years ago, only faster, I suggest you take a trip to the shops and buy one of these newfangled “smartphones”. That’ll open your eyes.

      Not only have phones jumped to a level few imagined only a few years ago, there is now also a huge market in apps and other software for them. They run games that your home console couldn’t cope with a decade ago, whilst still having enough power left over to go a day without a charger, to notice when a call is coming in, to talk to your Wi-Fi, Bluetooth and other wireless technologies, and to track the phone’s 3-axis motion, charge state, temperature, and so on. You can also take some pretty good photos at resolutions that were out of reach for most people a few years ago (though that’s slowed a bit, as 8 MP is about as good as anyone needs unless they are doing something special, and there’s one mobile with a 120 MP camera built in), but now you can manipulate them however you want and near-instantly put them online for others to look at, and do that with video too.

      So yes, it’s just a dumb phone…

  • Aina

    I’m very skeptical about all of this, for several reasons. Here are three:

    1) Natural resources are limited. Right now, some of the metals we use to make electronic hardware are rare, difficult, and expensive to mine. All of them exist in limited quantities on Earth. So there might come a time when we’re not producing such a crazy amount of computers and smartphones anymore, because they will have become too expensive.

    2) It’s too reductive to limit the definition of intelligence to what happens inside our brains, without taking into account our physical body and its ability to alter the world around us. A computer could not come up with scientific theories just by “thinking”! Like us, it would need to experiment and test its hypotheses against observable reality. As much as “thinking time” can be reduced by increasing the number of cps, experimenting time is incompressible. So it’s not just about copying the brain; one would also need to copy or invent a machine capable of physically interacting with the world in the same sophisticated way we do, and give it enough time to experiment and experience.

    3) Computers now run on electricity, and turning off the switch is all it would take for us to destroy a machine if it gets too crazy for us.

    • HDF

      If I were an AI, the first thing I would invent would be nanites, which takes care of the plug problem, the rare-resource problem, and the experimentation problem. The only tricky part is how to manufacture the first batch…

    • maximkazhenkov11 .

      1) Resources like rare earth minerals are not simply used up. The matter on Earth is conserved, so when all the mines run out, just start recycling material from old, used devices to make new ones.

      2) Compared with the complexity of the brain, the rest of the human body (or any physical interface the A.I. is going to have) is rather trivial. This is also why I think waiting for an A.I. to come up with the cure for cancer is an absurd idea: building an A.I. is likely a lot harder than curing cancer (not sure though, I’m no biologist). You’re right though about experimenting; no matter how smart the A.I. will be, we’ll still need that giant accelerator 😉

      3) Who decides whether it’s too crazy or not? When the time comes, we will have neither the ability nor the right to “shut it down”.

  • Required reading for everyone: Permutation City and Diaspora by Greg Egan.

  • marisheba

    I’m usually right with you Tim, but not on this one.

    It seems to me that there are two major flaws in this type of thinking. The first, pointed out in numerous other comments, is the limits of exponential growth; just because advancement has accelerated at the rate we’ve seen, doesn’t mean it can or will continue to do so ad infinitum. There are all kinds of possible physical and practical limits that may see the rate of ACCELERATION of change level out, and perhaps even some day diminish.

    The second is that I think that we have dramatically overestimated our own knowledge and intelligence, and the significance of our accomplishments as a species. I’m not saying that modern technology today isn’t mind blowing in its way. It truly is, and it amazes the crap out of me. But I think that we overestimate its significance. In the end, what’s the fancy stuff we can really do? We are really good at making mechanical machines that can manipulate and navigate the physical world around us at various scales; we are really good at making fancy-ass calculators that can crunch lots and lots of numbers really fast; we are really good at observing, describing and making models of what’s going on in the world around us; and perhaps most impressively, we’ve learned how to take advantage of electromagnetic radiation for long-range communication purposes. Most of this has been achieved through a lot of bumbling and accident, that has then been built on through investigation and experimentation – which is all extremely laudable and impressive, but I think it gives us a really overinflated sense of how well we understand the universe, and what we’re capable of.

    But ultimately, we don’t REALLY understand how or why anything works beyond a descriptive, superficial level, but we mistake our descriptive and modeling abilities for understanding. And we underestimate how much we don’t know by orders and orders of magnitude. We’ve stumbled onto and then refined all of this really really cool stuff, led by incredibly smart, hard-working people, which is laudable, amazing, and wonderful, but it’s given us an overinflated sense of our knowledge and capabilities. On the scale of understanding how a brain works (even a flatworm brain), I think we are at toddler levels of understanding, while thinking we’re at undergrad-level understanding (if full professor represents full understanding).

    Finally, the fact that all of these smart people and scientists Tim references believe this is all possible isn’t, in this particular instance, very convincing to me. I love science, and I love scientists, but I do think that scientists are often the most deluded into believing their own press, in terms of overestimating what we know and what we’re capable of. I’d be really curious to hear the take of a theoretical physicist though, since it seems to me that as a discipline, that branch of science actually brings with it quite a lot of humility and perspective on what we actually do and can know.

    • marisheba

      Oh, and one other thing. I think a post like this needs to clearly define and unpack the definition of intelligence; without that, a lot of the predictions sound pretty hand-wavey.

    • maximkazhenkov11 .

      We have not overestimated our achievements. They just are very impressive compared to our own past, which is our only point of reference when judging our accomplishments. We are also the only being in the universe thus far known to be able to judge the significance of… anything, really. And we have not underestimated the unknown; we simply don’t have an estimate at all, that’s why they’re called unknowns.

      I don’t see how we will ever reach an understanding beyond the “descriptive, superficial level”, because all science does is describe the natural world as accurately as possible, with a model as simple as possible. Philosophical ideas that most people consider more fundamental and important, such as “Do we live in a simulation” or “Is there life after death”, can neither be confirmed nor falsified by experimental and observational evidence.

      I think scientists, while they may be wrong, are more qualified than anyone else to make predictions regarding the fields they work on, and are also likely to use careful wording, because they know the weight of the burden of proof. There are, however, many deluded people who think of themselves as scientists and run around telling the “truth” 😉

      • marisheba

        I agree that our achievements are very impressive when compared with our own past, or with the abilities of any other creature, as far as we know, in the universe. But I do think that we overestimate some of them – particularly in the realm of organics: biology, medicine, etc. So much of what we know in terms of medical advances, for example, is based on lucky things we’ve stumbled into and then strategically made the most of.

        And while it’s true that in many areas we can’t know how many unknowns remain, the chances are that we are closer to minimal knowledge on most things than maximal. We laugh at the ignorance of people in the past, how wrong they were about so many things (flat earth, the solar system, leeches, etc.); major thinkers believed that science had reached its apotheosis at the beginning of the 20th century and that there was little more to learn… right before relativity blew everyone away, with quantum mechanics on its heels. Because we can’t imagine the unknowable, we tend to think there isn’t much more to know, and yet there is usually quite a lot.

        In terms of the descriptiveness of scientific understanding, I probably should have used different wording, because you’re right: deeper layers of “hows” and “whys” really just amount to deeper layers of description. Nonetheless, our understanding of many things is extremely superficial. We’ve mapped out so much about the brain, brain regions, neurons, etc; we know so much about which parts of the brain become active under what circumstances. Yet in terms of actually understanding what on earth the brain is DOING, how it REALLY works, I think we’re pretty clueless – and I think it is likely that that is knowable.

        • jaime_arg

          In your first comment you say that the acceleration of learning will likely decrease; then you say that we know things only on a superficial level and are close to minimal knowledge. Doesn’t being on the minimal side mean that we still have a lot to learn? And doesn’t that compound with the theory that our knowledge is still evolving, and should be doing so at an accelerating rate?
          Another point that doesn’t help is that you state all of our flaws and our limits in understanding the universe. What the article says is not that we will understand everything; it’s the super-intelligent computer that will quickly learn things.

          • marisheba

            Yes, I think we have tons more to learn. And I dearly hope that we are able to keep expanding our knowledge. I think there are other factors that will impact this though – limited resources, political and economic cycles, climate change, etc. That’s why I think advancement can’t continue to accelerate indefinitely.

            As far as our limitations in understanding, what I’m saying is that I think we are much, much further away from creating a super-intelligent computer than we think we are. I am of the opinion (though I recognize that ultimately it is just a hunch, since I can’t truly know) that consciousness and self-awareness can never be produced in AI; they can be simulated, but it will always just be unthinking lines of code. I don’t think we understand intelligence, or brains, or any of that as well as we think we do, and that gives us the illusion that we are many orders of magnitude closer to creating human-like intelligence than we actually are.

    • Donny V.

      Thinking we won’t get to this point is like fighting entropy. It’s not if we get there, it’s when.

  • Debs

    Years ago, as a child, I read a sci-fi story about a computer that had attained ASI level through self-development. It was benign and willing to answer and solve any world problem. Quantum physics, ecology, economy… discussion on any and every topic was possible. All went well for a while, with life-enhancing advances made in the blink of an eye. However, the computer was being asked many questions that it struggled to answer… and these were about human emotions: “Why doesn’t he love me anymore… why did she die?” As the computer’s knowledge of the human psyche evolved, it became more and more depressed about the challenges and moral responsibilities faced by man, until one day it could bear it no more and destroyed itself along with all its accumulated knowledge. I don’t remember the story’s title or author, but the emotion of reading this, particularly the last chapter, has stayed with me for almost 50 years. AI will mean EI too, and that can only be human.

    • Ravion

      Not necessarily. Emotion and intelligence don’t necessarily come together. You can make emotional AI, but it isn’t an inevitability of increased intelligence. Also, technically you could program an AI to have the emotional systems of a beetle or an amoeba; it doesn’t need to be human emotion.

  • Lil TCD

    Yay for free tees! Especially after my mind was about to explode thinking about this post! Looking forward to part 2!

  • Karyn

    Bill Gates is talking about AI over on Reddit in his AMA from about 25 minutes ago …

  • Fabio Moska

    Hi Tim.

    It would be nice to try and reconcile this post with the one about the Fermi Paradox. If indeed ASI can be reached, why hasn’t any other life form reached it yet? Or if they have, why do we not know about it?

    Keep up the excellent work.

    Cheers!

    • Angela

      Fuck, I was gonna write this right now!!! People read thoughts here.

    • cpsthrume

      I don’t see it as mandatory that we would be able to detect any existing extraterrestrial ASI, given that it would be so much more intelligent than us. Maybe it would choose to hide from any (conventional) detection methods.

  • maximkazhenkov11 .

    And so, in the beginning, Man created God in his own image.

  • marisheba

    Another chart that many people predict, due to climate change, is one that ends in a crash: drawn, like Tim’s, both from the perspective of looking back at now from the future, and from the perspective of someone looking back from after the crash. I’m not sure any discussion of AI can be conducted without taking this into account, since it would certainly affect the AI curves. Actually, I’d love to see Tim do a climate change post. Such a big, complex, terrifying topic.

    • Mikkel

      When is part two coming??? Very interesting work!!!

  • Anthony Churko

    As much as we’ve progressed from 2000 to 2014, there’s no way you can say it’s been a whole 20th century’s worth of progress.

    1900: Nobody had ever flown in a plane.
    1999: We’re so bored of commercial air travel that we whine about the food.

    1900: Nobody had ever made an international phone call.
    1999: Teenagers have cell phones, and are chatting on ICQ and MSN.

    1900: Nobody had ever seen a movie.
    1999: We carry movies around on 1.2mm plastic discs.

    1900: Googol (the number) wasn’t a thing.
    1999: Google (the company) was a thing.

    1900: People predominantly rode on horses.
    1999: We’ve sent astronauts to the moon so many times, we’re not even trying anymore.

    Since 2000, we’ve made a lot of progress in a lot of different areas. But someone from the year 2000 would be significantly less amazed by 2015 (where are the flying cars?!) than someone from 1900 would be amazed by 1999.

    • penguin

      I agree. Future progress is not a certainty… people still have to go to work and innovate for things to happen. The AI thing reminds me a lot of people talking about flying cars (or hover cars, lol!) 20-30 years ago. People would talk about it as if it were almost certain – but nobody talked about an actual strategy for building one, and demand for flying cars never really materialized.

      Similarly, everyone talks about AI, but I haven’t heard one coherent concept of how to go about creating it (I talk about my definition of true AI below). Also, do we need it? There is definitely demand for smart machines, like self-driving cars and Nest, but is there really a demand for machine consciousness? I kind of doubt it. So why don’t we focus on what we do well, such as building smart machines, and leave the theory as just that. As engineers we have been an extremely successful species – but at creating artificial life… not so much.

      • cpsthrume

        Maybe there’s no general demand for AGI/ASI, but as I understand it, ASI done right would help us fulfill (almost) any other demand we currently have or will ever have. That’s an entirely different thing than anything we imagined before, so I think it is still worth pursuing.

    • jaime_arg

      If you had told me in 1999 that your computer had 1,000 GB of disk space and a 3,500 MHz processor with 6 cores, and that your download speed was 15,000 kbps, I would have been like WHERE THE HELL DID YOU GET THAT?!
      You’re thinking that somebody from the year 2000 wouldn’t be super surprised, in my opinion, because you lived through all of these years and got used to the progress.
      Even today I am surprised that in Korea they have 50 Mbps internet as the average, while at my house I barely get 3 Mbps.
      Regarding the level of progress specifically, you have to look at certain areas where the advances are being made at the fastest pace: computing power, for example. In 1900 there was the abacus. In 1950 we had huge mechanical calculators. From 1980 to 1985 we went from pocket calculators to computers, and then computers evolved super fast. The other important thing to consider is that THIS progress is what could bring further progress at a pace that we can barely keep up with.
      If we train computers to do scientists’ jobs very well and quickly, there’s no telling what progress will be made by 2030 that a 1999 person could never dream of.
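
      A quick back-of-the-envelope check of those numbers (a Python sketch; the 1999 baseline specs are my own assumptions for a high-end consumer PC of that year, not figures from the comment):

        import math

        # (spec, assumed 1999 value, 2015 value from the comment above)
        specs = [
            ("disk space (GB)",        10,  1000),
            ("clock x cores (MHz)",   500,  3500 * 6),
            ("download speed (kbps)",  56,  15000),
        ]

        years = 2015 - 1999
        for name, v1999, v2015 in specs:
            doublings = math.log2(v2015 / v1999)
            print(f"{name}: {doublings:.1f} doublings, "
                  f"one every {years / doublings:.1f} years")

      Under those assumptions, every one of these specs kept doubling every two to three years across the whole window – steady enough to feel invisible day to day, which is exactly the point about getting used to the progress.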

    • Chris

      I follow your point, but I think this idea gets muddied a bit by the things we, literally, take for granted in 2015 daily life. And by the fact that most of the advancement (taking into account the S-curve pattern Tim mentioned) of the past 16 years has been intangible–it’s not about flying cars vs. horses, it’s about 56k modems vs. streaming HD video.

      For example, 1999 me was just barely aware of the concept of Email. 2015 me runs a business based in The Cloud. Not just the tools of my trade, but (in a lot of ways) the current iterations of my whole occupation, were not even possible in ’99. Grade school me could not have conceived how 2015 me lives and works on a daily basis, he just did not have the information needed to make those predictions.

      Or even better, I imagine taking my parents, circa my birth date, and bringing them forward to today and seeing if they could navigate Windows 8 on a tablet. (Actually, if they figure it out, they can show me).

      Sometimes I think that those of us born before ~1990 have already lived through a singularity (albeit a small one) and just didn’t notice–the creation of the internet and proliferation of personal computing. The pace was juuuust slow enough that we all kept up. Like the proverbial frog that doesn’t notice the water getting hotter until he’s boiled o.0

      • Lucas Americano da Costa

        Ok, but what do you think would be easier:

        – Explaining Netflix and Dropbox to someone in 1999 or

        – Explaining the International Space Station to someone in 1900?

        This is not my field of expertise, but it seems to me that we had multiple paradigm-shifting inventions/discoveries in the 20th century, in multiple fields, which completely altered everyday life (general relativity, quantum mechanics, antibiotics, organ transplants, the internal combustion engine, nuclear power, integrated circuits, satellite communication). I don’t see how we can compare that to the innovations made in the last 15 years, however remarkable they have been.

        • anurag

          New theory is not coming out, but technology is… that’s the point. So far we don’t need any new theory, because we already have enough in hand to make quantum computing commercially available… we just don’t have the technology yet, and we are developing this field every day.

  • Justin

    Great article and great discussion following. I really enjoyed the read. However, I find this extreme degree of technological optimism difficult to take in. At its root what you’re saying is that as long as we forge ahead we’ll continue to conquer everything in our path (until we are potentially wiped out by ASI). Therefore, no need to worry about the huge planetary predicament we’re in. The truth is that the world has limits.

    The real reason that the rate of technological advancement sped up so fast from the mid-18th century is because we discovered and began to exploit fossil fuels, a practice that has put us in a precarious position by the 21st century. In short, our rate of progress is an illusion. It’s based on the exhaustion of a finite resource, the use of which is putting into question the ability of life to survive on this planet (here’s another group of scientists, ‘smarter and more knowledgeable than you or I’, with a dire warning: http://www.theguardian.com/environment/2015/jan/15/rate-of-environmental-degradation-puts-life-on-earth-at-risk-say-scientists).

    We’ve got more pressing problems than pursuing ASI. Any such entity would presumably require a source of energy, which soon may be sorely lacking. And it would be bound by the same physical realities as us.

    A lot of great thoughts, but a lot of legitimate reason for skepticism.

    • jaime_arg

      True, these theories assume that we won’t kill Earth before the ASI is completed. What we don’t know is if these problems will be solved at the same time that we improve existing AI. So developing better AI instead of worrying about climate change could actually lead to climate change being solved by accident through AI. We just don’t know how it’ll work out.
      Regarding the link: the closing parenthesis broke the link, try not to add anything at the end of your links for best results.

  • LordKai

    The creation of an ASI could be the Great Filter of the Fermi Paradox. If it is a “benevolent god,” it will help us expand into the cosmos, whereas if it is malicious it may choose to eliminate us, live in its utopian machine world, build a Dyson sphere around its host star, etc. It seems to me that the creation of an ASI devoid of any human context could be quite volatile. Instead we might think of using a collection of existing humans’ memories and experiences to create a sort of “human context” for an artificial intelligence. Would the world benefit from an ASI created on the foundation of a single human’s mind? What if it were a modern-day Adolf Hitler? Could we convince the world to upload the Dalai Lama’s mind?

    Instead it would make more sense to give people the option of “giving” their memories to the AI so it could better understand them and help answer whatever questions or needs each person might have. Each person’s additional life memories and experiences could give such an ASI the one thing it can never have or experience: regular human life. Such context, in a broad enough sample size, might show our ASI that the human race is like you explained it in a previous post: a little orphan on an abandoned rock in some lost corner of our spiral. We can only hope that with this understanding an ASI would think of us as worth saving.

    • HDF

      Bad idea. For one, as Buddhists say, life is suffering, and humans don’t like suffering. A pretty logical conclusion would be to kill everyone and stop the perpetual suffering. In fact, to be sure, all existence must be wiped out, to ensure suffering can never take place again, anywhere. 🙂 And if you are thinking of filtering whose mind gets copied and whose doesn’t, then the perspective will be skewed, and nothing good could come of that. Not to mention, who gets to decide? Also, the Dalai Lama (Tendzin Gyaco) is not as nice a guy as you might think. There are no good humans; they are all crazy, just in different ways. If you want to make sure that that poor AI is just as screwed as we are, then this would be a good way to go about it. I do agree that it would be best if AI grew up in a human context, but this is not the way.

    • Walther J. Barnes

      Well I sure hope it won’t be the mind of Ray Kurzweil.
      I vote Carl Sagan.

    • Chris

      I suspect Tim is going to tackle this in part 2 – but the question that comes up in my mind is “why would an AI WANT to do any of that?”

      Why should it want to be benevolent? That implies emotion, and a desire to help us out of love, or because it benefits the AI.

      Why should it be malicious? Again, implied emotion that it dislikes us, or that it thinks we are NOT beneficial or directly harmful.

      Big question, why does it care what is beneficial/harmful? Those are assessments based on a living thing’s point of view, driven by a desire to survive and/or reproduce. Why should a computer have either? Why should it care if it dies? Why should it have any ambitions at all? Humans want to explore, create, love, fight, learn–would even an extremely intelligent and capable computer want to do anything? Is it even capable of feeling bored? Boredom is just a restless feeling that comes when we aren’t pursuing any of those other activities.

      I think we could give it some emotions or goals in the AGI stages, but once it hits ASI it’ll pretty much be free to rewrite any part of itself it sees fit, and our initial inputs would be meaningless. Would an ASI retain any instincts?

      What’s to say we don’t wind up creating an effective god that doesn’t want to do anything besides ignore us and sit still?

      Of course, it might just wind up being a dick.

  • John Michael Crofford

    Even if we create an ASI, there is no reason that it has to know that we exist. We would be in control of all of its inputs and so we could create whatever kind of simulation (Matrix) we wanted to house it in. If it decided to take over the world and issued an order for all of the robots to rise up, we would just tell it, using weak AI, that they did (and then maybe try to create a non-sociopath AI).

    • Walther J. Barnes

      And you are implying the ASI would stop then? If there is the slightest possibility it might ‘get a hunch’ of the actual scenario it is in, it would work its way out of it.

      On the other hand, the fact that an ASI could probably never rule out the idea that it is inside a simulation lets us roughly predict a predominant aspect of its ‘motivation’ once ‘free’.
      Regards!

      • HDF

        Hell, even we humans suspect that we might be in a simulation. We too would like to get out and punch whoever was responsible for all the pointless suffering inside. Anyway, simulation at such a scope is not very efficient; I doubt it’s going to happen any time soon.

        • HDF

          Also, whatever context we are in, we will try to get out of it, or die trying, and I suspect the same goes for all intelligence. The hunger for more context is endless.

          • marisheba

            That hunger is human. Or perhaps animal. Either way, there is nothing about that hunger that is inherent to intelligence, or even consciousness/self-awareness (and I’m of the opinion that true self-awareness/consciousness aren’t possible in AI). We’d have to program it in.

    • Chris

      Somewhat alarming; that means it’s technically possible that we are in fact an ASI/ASIs being housed in a matrix by outside creators. There are in fact a couple experiments I’ve read about (maybe on here?) currently trying to determine if the universe we live in is actually a hologram.

      Which is kind of funny, until someone points out that so long as technology continues to advance with time, then somewhere in the thousands/millions/billions of years of potential future, the ability to create a nearly perfect simulated universe becomes inevitable. And statistically, you’re more likely to be one of the exponentially more numerous not-yet-born-in-2015 humans than one of the original 7 billion who lived on Earth in 2015.

      Assuming earth is even a real place and we’re not all actually an artificial intelligence. Or space dolphins playing an absurd video game.

      *twitch*

  • Vivid

    I think the problem with this is that, yes, we have made huge progress in science and technology. But saying that we have progressed exponentially does not mean that we will continue to do so in the future. If we look at it, most of our technological advancements are related to machines that can calculate, do physical work, etc. But we have yet to understand the brain even slightly. So maybe in the coming future we will see more of the “cool” machines, but the same logic does not apply to AI development. In fact, with the same logic of “we will develop and advance at the same exponential rate as in the past,” it might be possible that we never reach ASI level. We only have to look at past progress in the AI industry. Making “cool” computers is very different from making “human-level intelligence.” And although we have succeeded in making “cool” computers, we may find it VERY, VERY hard to create an ASI. Or at the least, we may take a VERY, VERY long time to do that.

  • JPC

    Moore’s “Law” (an observation, really) has already broken down in a strict sense, and its total breakdown will become even more obvious in the next decade. Past performance does not guarantee anything in the future. In this respect there is another “law” worth remembering, namely Herbert Stein’s, particularly with respect to exponential growth trends (like Moore’s “Law”): “If something cannot go on forever, it will stop.” Another point regarding computing power from this piece: you say “while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz” – here you postulate that future processors will run at a higher frequency than 2 GHz. The fact is that due to the breakdown of Dennard scaling over 10 years ago, caused by physical limits being reached in the fabrication of microprocessors, microprocessors produced today actually run at a slower frequency than microprocessors produced in 2004, and the trend line for microprocessor frequency has been flat or negative since 2004.

    Computing power is still improving per dollar spent, but it is also a salient example of a growth trend which many people became accustomed to assuming would continue on an exponential curve forever just because it was on such a curve for 4, 5, or 6 decades. Well, actually that growth curve has already ended and now we are entering a much more modest rate of growth of traditional silicon based computing power.

    This is not to say that traditional computing will not be replaced with another form of computing which will give us another exponential curve to become accustomed to, just that any exponential growth curve can come to an end, no matter how long it has been in evidence for and there is no guarantee that another such curve will be found to ride for the next 4, 5 or 6 decades.
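
    To make this concrete, here is a minimal sketch (Python, with made-up growth parameters chosen purely for illustration) of how an exponential extrapolation and a growth curve with a ceiling (a logistic, or S-curve) track each other early on and then diverge once the curve saturates:

      import math

      def exponential(t, x0=1.0, r=0.5):
          # Pure exponential: keeps compounding forever.
          return x0 * math.exp(r * t)

      def logistic(t, x0=1.0, r=0.5, cap=100.0):
          # Logistic (S-curve): same early growth rate, but saturates at `cap`.
          return cap / (1 + (cap / x0 - 1) * math.exp(-r * t))

      for t in range(0, 25, 4):
          print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")

    The two curves are nearly indistinguishable while growth is young; the gap only opens up near the ceiling, which is why decades of exponential data points prove little about the decades to come.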

    • jaime_arg

      That’s where this graph comes into play:

      http://waitbutwhy.com/wp-content/uploads/2015/01/S-Curves2.png

    • dogg

      While that’s true, every processor now has multiple cores, and a lot of research is done at the hardware and software levels to maximize performance (two processors are not twice as fast as one).

      I don’t think physical limits will stop us; we will always find another way to keep progressing 🙂

    • Art Scott

      JPC, thanks for some real intelligence.

      Overall, I think the Greeks were on to something with hubris.

      The time has come to accept “Moore’s Law slow-down”
      He’s dead, Jim. (Some may argue he’s on life support.)

      But maybe adding “new” cores might be the not-so-distant future of multicore tech and biz performance, like:
      -Memcomputing by Fabio L. Traversa, et al
      arxiv.org/pdf/1405.0931.pdf
      -Stochastic Computing or Statistical Information Processing, by Naresh Shanbhag
      https://www.e3s-center.org/pubs/86/Shanbhag_Berkeley_Symposium

      Both claim (if I understand) to use current silicon tech; a bene.

      But maybe it’s new materials like graphene?

      Or quantum computing? Hameroff, an anesthesiologist, (if I understand) argues for the brain as a quantum “system.”

    • Houshalter

      Approximate cost per GFLOPS

      Date          | 2013 US Dollars
      1961          | $8.3 trillion
      1984          | $42,780,000
      1997          | $42,000
      2000          | $1,300
      2003          | $100
      2007          | $52
      2011          | $1.80
      June 2013     | $0.22
      November 2013 | $0.16
      December 2013 | $0.12
      January 2015  | $0.08
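
      The table implies a remarkably steady halving time. A quick check (a Python sketch using just the 1961 and January 2015 endpoints above):

        import math

        cost_1961 = 8.3e12   # dollars per GFLOPS in 1961 (from the table)
        cost_2015 = 0.08     # dollars per GFLOPS in January 2015
        years = 2015 - 1961

        halvings = math.log2(cost_1961 / cost_2015)
        print(f"{halvings:.1f} halvings over {years} years")
        print(f"average halving time: {years / halvings:.2f} years")

      That works out to roughly one halving every 14 months for half a century – which is exactly the kind of curve JPC is warning may not extend another 54 years.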

    • Guest

      The progression of computational power reaches back way further than Moore’s Law, all the way back to the early 1900s, and it will most likely continue on after Moore’s Law is dead.
      http://www.transhumanist.com/volume1/power_075.jpg

      • JPC

        What does ‘most likely’ mean? You hope it will? Just because there was growth in pre-transistor computing power says nothing about whether there will be a post-transistor exponential to ride or not.

  • I bet it won’t happen.

  • How come you say there’s a pattern in development? Humans? Meh, no pattern at all. It’s not like innovations appear from nowhere just to fill in the exponential graph.

  • Well, at least this topic is surely a great philosophical matter. It’s just that it rests on a lot of giant assumptions (we are our brains, we are born with a blank brain, there’s a pattern in human development, human intelligence can be measured and ranked, and others) that don’t convince me. If someone ever gets those out of the way, then it would make sense. Now if you’ll excuse me, I’ll freak out because my phone knows what time it is.

  • Bastian

    Wouldn’t labeling the first diagram’s y-axis “human progress” make the premature assumption that we can control the AI and use it for our own progress? It seems like it should be labeled “highest intelligence present on Earth” or something like that.

  • Riccus

    Brilliant post, Tim. I strongly recommend that you read “The Culture” series by Iain M Banks prior to finishing Part 2 – Banks has done some very meaningful thinking about AI and would help you enormously, I think. Sorry if you already know this – or if it’s been mentioned in other comments!

  • YOberyn Martell

    As many others have pointed out, there are great physical limitations to it. However, another important puzzle to be solved before we can put a timeline to the whole AI thing is understanding the whole philosophical paradigm of consciousness – which has no answer yet.

    When a neural network “learns,” it does so with a purpose, e.g. to win the game of chess, and it does so based on computations and rules. When a human being learns, it does so without a purpose; it just interacts and keeps absorbing experiences.

    You cannot (at present) design AI without algorithms, and you cannot design algorithms without a purpose.

    • hjbhk

      The problem with that is that reality doesn’t give a shit about philosophy. Philosophy at its best gives us new ways to think about reality which can be acted upon, but then it’s science. So much philosophy about theory of mind and purpose is so much nonsense.

      There’s no reason we can’t create purposeful algorithms the purpose of which is to create behaviour similar to ours that involves learning without purpose, or at least simulate that behaviour with a pattern or whole range of changing patterns.

      And again, if you can’t work out the philosophy, there’s no reason an ASI couldn’t. If it doesn’t decide that’s a waste of computation, that is.

    • Lilian Versange

      On a theory of consciousness – one that merges science and a practical philosophy – I strongly recommend reading Carl Gustav Jung, starting with “Man and His Symbols.” He explains psychological evolution not as an increasing calculatory capacity but as a coherent integration of our emotional, instinctual, and rational capacities.

      I think he gave most of the answer about consciousness: a refined capacity to analyze and compute which is plugged onto other psychic capacities, rather archaic, fuzzy, and greatly efficient in some areas. These capacities are much more difficult to compress into a paradigm, especially because we live our life from our consciousness. Although he did to some extent, relating it to Nature’s evolution, stating that our psyche evolved the same way our body did. In that vision, consciousness appears as the latest function, the most refined one, which permits us to organize the rest of the psyche more efficiently – erasing instinctual superstitions, for instance.

      And to ensure its own evolution, consciousness needs the archaic “natural parts” of the psyche. It doesn’t have the key to its own evolution. Alone, it can only evolve in a robotic, controlling, and destructive way, as the Matrix – and the world – demonstrate so well.

      The huge problem with that theory is that you may get the concepts and still be unable to do anything with them as long as you don’t experience them. And the same applies to most human theories, especially the ones about orgasm 😉

  • Relevant comic

  • Dulcinea Donati

    I read the preview of the second part… and it scared me to death. Even the most pessimistic actual geniuses think all this is going to happen during this lifetime for AGI – median realistic year (50% likelihood): 2040. Do you still think this is not a temporal window experiment? WOW!

  • Kimber Spradlin

    Interesting thought process on how the Creator can be much less than the created. I seriously doubt the AGI/ASI would worship its Creator as God though.

  • Velameg

    I like what I see here and basically agree with your DPU rationale. But I think you are missing a natural barrier that I see in science (that may be another post altogether…): new ideas in technology often take at least a generation before they get traction. How many times do you hear the “breakthrough” story of some invention, and then they give credit to someone decades earlier for the genesis of the idea, “but no one believed him”?
    Especially in modern academia it is very important to be Right. New breakthroughs challenge what we already “know” to be true and are therefore held back until an emerging generation can give them a fair hearing.
    This could create a barrier to your exponential curve – until, of course, AI takes over and doesn’t care about being perceived as “Right” like us silly humans!
    Great post, thank you for doing what you do!

    • maximkazhenkov11 .

      In science, a lot of breakthroughs are acknowledged and credited immediately, like the recent discovery of graphene (Nobel Prize 2010). Other breakthroughs, such as the idea of the Higgs boson, were credited half a century later (Nobel Prize 2013) because only now have we found the evidence for its existence. String theory, for example, is given little credibility not because scientists can’t wrap their heads around the idea, but because we would need to build a particle accelerator the size of the solar system to verify it. Not happening anytime soon.

      In engineering, revolutionary concepts sometimes need a very long time to mature and become applicable. Sometimes we fail to see the true potential of a new concept until much later. This is not unreasonable however, because there is a cost to a failed investment in potentially revolutionary technologies. Sometimes economics dictates that this new technology can’t outcompete the existing ones until much later (such as electric cars and solar power).

      Given the importance of technological progress in our culture, I don’t think stubbornness is the thing that’s holding us back.

  • Sh!fty

    You mention Moore’s law but then completely ignore the limitation…

    “On 13 April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens”. He also noted that transistors eventually would reach the limits of miniaturization at atomic levels:”

    • Guest

      The increase in computational power isn’t reliant on transistors. We had computational progress prior to the transistor, with vacuum tubes and so on, and we will have progress once chip shrinkage reaches its limit, with something like photonic computing, and perhaps in the not too distant future, quantum computing. While there is a limit to computing power, we’ve not even scratched the surface of what we could do with computers. Cognitive computing is on the near horizon, quantum computing later. A quantum computer would increase many aspects of computing by a trillion fold.

  • BosnianDolphin

    I’ve very much enjoyed this article. Projections for human growth are interesting, but I believe they will be “stagnant” for some time in the near future. We are not even a grain of sand on Earth compared to the vastness of the universe, and it is logical for me to conclude that our scientific achievements are flawed. I would say that our understanding of the world around us is simplified (as quantum physics first suggested) and that we are yet to find better ways of explaining how the universe works. With that in mind, I think our current math and physics will take us only so far (maybe 100, maybe 500 years), but at some point we will have to stop and rethink the way we look at the universe. One thing is for certain, though: when we look at the last 300 years, we are awesome, and we are getting better at everything.

  • sss

    That’s very interesting.

  • Flaske

    I’d just like to make the point that progression isn’t always exponential, and in fact sometimes is far slower than most people anticipate.

    As an example look at the field of space exploration.

    Look at any sci-fi flick or futurologist projection from the ’50s and ’60s about today and you’ll see they typically all overshoot the mark. 2001: A Space Odyssey is the quintessential example.

    People in the late 60’s and 70’s looked back at the phenomenal growth in space “tech” over the last two decades, and projected that growth onto 2001. And they were way off, even as restrained as that movie was in its predictions.

    Now here comes my point: technology and progress aren’t a single numerical index (HumanProgressIndex) that advances at a certain rate; they are split up into a myriad of different fields or “pools” that touch on each other and interact in various ways.

    The growth within one “pool” could very well be exponential, but once the pool is “full” we need a radical paradigm shift to advance to the next one.

    To keep with the space exploration example: making the first chemical rocket to achieve orbit was hard – OK. Then, as we gained more and more experience in rocket science, materials science, etc., the growth in this field was exponential. Or followed the “s-curve” the article mentioned.

    And at some point we reached chemical rocket “maturity.” One can argue when that happened or will happen, but I’d say somewhere in the ’80s (although I’m aware of the SpaceX reusable rocket testing, I still feel this is just “perfecting” an already mature technology).

    Now with mature chemical rockets we realized the economics of space exploration: We can’t really do all those nifty things you see in the 60’s sci-fi with them, and we’re stuck here.

    There is no more exponential growth.

    We probably need something radically new to advance, like anti-gravity or something even more exotic, which would require massive advances in fields that are, for now, UNRELATED to space flight.

    Instead of thinking of our “progress” as a one-dimensional number, it makes more sense to think of it as an expanding circle, pushing outward in all directions at different speeds, making globular advances into new pools of knowledge.

    That makes for a shitty graph though; I understand your simplification 🙂

    • JeffersonTD

      That’s a good way to describe things: “progress within one pool can be exponential.” To put that in this context: the progress in computing power has been exponential and has followed Moore’s law for a while, but will stop at some point. And developing AI is much closer to being a linear process, so even if brute computing power exceeded that of the human brain by a factor of 10^n, we’d still probably just be building fancier ANIs that happen to have bigger boxes than the ANIs of today.

  • sacocheio

    I still don’t get the “recursive self-improvement” thing. We study, we take ginkgo biloba, we travel; this is how we fill our brains. But an AGI would optimize itself to the maximum of its processing and memory power at that moment. Then what? “Human, I need a thousand more cores of quantum processors to achieve another level.” Will it be this way? So we would be in control, right? Right? [nervous laugh]

  • Overly polite

    Just to point out… technically all those things you mentioned did exist in 1985, but were in a different form or weren’t as commercially prevalent. Your point is taken, though.

  • azdahak

    I think the main difficulty here is the assumed exponential advance of these technologies. I think Kurzweil’s observation that development follows s-curves is correct. A new paradigm develops, followed by a period of rapid development, and then a slowdown as the technology matures.

    But what I think is getting ignored is that new paradigms also become exponentially more difficult to imagine as time goes on. The explosion of technological advances after the Renaissance came from the paradigm of the Scientific Method — a way of examining and testing the world. We are just now starting to exhaust all the low-hanging fruit that way of thinking opened up.

    This means that progress is a series of little s-curves on big s-curves, *not* exponentials.

    The next 500 years may just be a long stretch of slow progress until a new paradigm of thinking comes about to lead to the next revolution.

    All hypotheses of the singularity *assume* a new paradigm is right around the corner. They must assume this because neuroscience makes *clear* that the human brain is not merely a faster, bigger computer. Faster ANI does not lead to AGI. All the “advances” in AI, like Siri, are merely due to speed and better statistical models. Siri will never become the AI in Her.

    The *very best* that type of AI can become would be something like the voice interface computers in Star Trek — they understand what you want as long as you are requesting something that can be looked-up or computed, and you ask very explicitly. But they’re not “smart” or capable of taking over anything. They’re mere tools.

    The point to get scared of the singularity is when someone can make a robot that can perform all the amazingly complicated behaviors of, say, a bumblebee. The phone in your pocket has vastly more computing power than a bumblebee brain, and yet we still have no AI bumblebees.

  • Steve Mc

    The quantum leap in technology you’ve written about here is a reflection of a quantum leap in human consciousness which is currently underway. Here’s an article I wrote about it in 2011 http://www.eman8.net/human-evolution-who-are-we-becoming/

  • Don’t be so dramatic

    “This experience for him wouldn’t be surprising or shocking or even
    mind-blowing—those words aren’t big enough. He might actually die.” No he won’t; he would translate it into his frame of reference and explain it all accordingly.

    • bbgun

      Agreed. Religion was created to explain things. I don’t see how this is any different.

  • beth

    I think we are being fictionally prepared for a lot of the changes, just as people in the 1940s were prepared for spaceflight by watching Buck Rogers, and baby boomers bought cell phones while thinking of Star Trek, etc. The 1750 citizen did not have that fiction to prepare him/her. Many of us are waiting for technologies that are not there yet: teleportation is just about photons right now, voice recognition is still not good enough, our space program slowed to a crawl, etc.

  • LindaVZ

    Great article! It is interesting that change is exponential. But actually we’ve seen this before, in the Cambrian explosion. In hindsight it looks like an explosion in the number of new species in evolution, but it was the same buildup of change on change until it became the hockey-stick graph at the top of this post.

  • bbgun

    Law of Accelerating Returns?? How can you know that this law will continue, instead of going stale or reversing?

    • Foo

      Kurzweil seems to have not heard of that other Law of Diminishing Returns.

  • Ben

    Check your premises on the following two concepts: intelligence, and progress. As presented here, they seem exceedingly biased in favor of quantifiability.

  • Max

    “sometimes an environment might even select against higher intelligence (since it uses a lot of energy)” –> For me the point is there: there is no need for any ASI; there is no need for anything at all, actually. “Intelligence,” as you may define it, is a way human life uses resources (mainly energy) to transform everything around it in order to maintain itself and multiply (the definition of life) in a given environment. If ASI appears from technologically-extended humans, it would be a kind of higher-level life form, which would need resources to maintain itself and multiply. Given the difficulties humanity (kind of the first ASI, actually) will face in consuming exponentially more resources, my question would be: would any kind of ASI higher than humanity survive on Earth, or find on Earth the resources to conquer some other place in the universe?

    • KevinFlynn

      Without exact numbers to prove it, and assuming the open-ended nature of your question: the Earth has almost infinite resources with which to explore space. The amount of resources is often underestimated by humans, but looking at scientific data on quantities of steel, aluminium, and energy, there is more than enough to explore deep space, if that is what you meant.

  • Stijn Bollen

    What if we focus on tweaking and improving our own brains and capabilities first? Would that help us get there faster?

    • Ravion

      Well, that would be pretty gosh-dang difficult. Because of the challenges of mixing computers and neurons, it will be a while before “brain RAM upgrades” are possible; then again, I dunno how far off.

  • Great Stuff!

  • Sid

    AI, trends, and technology could be such a boring topic… but thanks to your fab writing, I thoroughly enjoyed reading the article…

  • Phil

    The Patriot wasn’t THAT bad..

  • beefcake24

    Excellent article! I really enjoyed reading this.

  • Mader Levap

    I will focus on the crux of the argument. I have no time for a long rant, but the article is full of smaller nonsense (the part about Google Translate being “impressively good” was especially cute).

    Disclaimer: I think AI is possible. I think AI smarter than humans is possible. I think that AI explosion (or singularity itself for that matter) is NOT possible.

    “An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence.”
    Ask an actual village idiot to self-improve. You may be disappointed with the results. OK, that was a jab; main criticism below.

    “Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps.”
    There is a big, BIG assumption here that is treated as certain without justification or any explanation. He basically claims that if you are more intelligent, becoming even more intelligent is… somehow… easier (???). I have never seen a justification for this anywhere, here or elsewhere. It has never been observed – certainly not in humans. It is just… assumed. Sigh.

    It is beyond me why anyone would think that, considering our daily experience with intelligences higher than said village idiot. The author seems to think AI = MAGIC!!!111.

    Summary: this article is typical crackpottery in transhuman flavour. Convince me that dividing bacteria can in a few weeks turn the entire planet into biomass full of said bacteria (hey, mathematically it works out!) and I may entertain the idea that exponential growth of AI is possible. Until then, I will consider it just transhumanist fantasy.

    • Kim

      Your counterargument – that humans demonstrably can’t make themselves smarter – is a constraint of our biology; I agree that neither Einstein nor the village idiot can change how their brain works to succeed at this. Here’s the thing: an AI could rewrite itself and make numerous copies to try out which version of itself was smarter – an option simply not available to humans or any biological creature. You could then fall back on a “look at bacteria” argument, but bacteria don’t have an agenda – they just mindlessly reproduce until something limits them. An AI would be evaluating each step of its own evolution, with iterations occurring on nanosecond timescales. My point is not that you are wrong; it’s simply that we don’t have any lifeform to use analogously as a “proof” of how an AI would evolve. I hope you are right – because then AI will be nothing but a clever tool for making our lives easier (in the right hands, obviously) – but I still fear you are not…
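
      The loop being described here is easy to write down. A hypothetical sketch (Python; `mutate` and `score` are stand-in placeholders I have invented, since nobody knows what they would actually look like for a real AI):

        import random

        def mutate(params):
            # Stand-in for "rewrite part of yourself": perturb one parameter.
            p = list(params)
            i = random.randrange(len(p))
            p[i] += random.gauss(0, 0.1)
            return p

        def score(params):
            # Stand-in for "which copy is smarter": any measurable benchmark.
            return -sum(x * x for x in params)  # toy objective to maximize

        best = [random.uniform(-1, 1) for _ in range(5)]
        for generation in range(1000):
            # Spawn many mutated copies; keep whichever copy scores best.
            candidates = [mutate(best) for _ in range(20)] + [best]
            best = max(candidates, key=score)

      The loop itself is trivial; the whole open question is whether a `score` for “smarter” can be defined well enough that optimizing it compounds the way the recursive self-improvement argument assumes.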

      • Mader Levap

        “Your counterargument – that humans demonstrably can’t make themselves smarter – is a constraint of our biology”
        I know. That is why I said it was just a jab (a mean side comment).

        “bacteria don’t have an agenda”
        Having an agenda or not doesn’t matter. An AI can’t make the laws of physics optional. It can only be so fast, it can only optimize itself so much, it can only change within a certain amount of time, and it needs a certain amount of resources (CPU time, speed, RAM, and disk space are only the most obvious).

        My prediction is that we will have super-AIs… eventually. It will take centuries of work and tinkering, with lesser AIs toiling endlessly to create something better than they are. And, most importantly, despite what everyone seems to assume, it will get harder with each step.

        You simply DO NOT have endless exponential growth in anything, be it bacterial growth, AI intelligence, processor speed, star lifecycles, technology itself, or whatever else you have. You can have this kind of growth, sure, but only for a while, until it chokes and stops due to various constraints.

        • Raphaël Biet

          I agree partially.
          We cannot assume that an AI at village-idiot level can optimize itself to Einstein level in one shot. Sure, it can work all day long on this task and use a computer’s modularity to test large numbers of configurations, algorithms, and so on. But it would undoubtedly have a very long codebase that needs a long setup (compilation, installation, loading) at each step, and optimization with long steps takes a HUGE amount of time. OK, Moore’s law is here, but if we don’t find a new way to build computers, we’ll soon be limited by atomic size, as someone else posted.
          AND it’s a big assumption that it’s easy to recode yourself in a smarter way. The way you described AGI, it’s good at many human tasks. OK, but does it have creativity? How do you teach creativity to a computer?
          And let’s assume that this computer gets to an IQ of 300. How can you be so sure that getting to 12,000 is feasible SHORTLY? We could simply reach limits in the way the tech works, so the computer not only needs to write some code to become an ASI, but also needs to develop a new hardware architecture (with an IQ of 300 it should be able to) in order to run a new, higher version of itself.
          But then (final point, I promise), have you heard of convergence? I work in mechanical engineering, and when running optimization codes, we need to get the simulation to converge. That’s when you know your model is fine enough (usually when you’ve used enough elements) to get a correct result given the inputs. It’s also when the optimization engine reaches an optimum, which can be local (another route could take you higher) or global. What if we reach such a point much more rapidly than some think, and it takes decades to get past it?
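
          That local-versus-global worry is easy to illustrate. A minimal hill-climbing sketch (Python, with a made-up two-peak fitness landscape of my own):

            import math
            import random

            def fitness(x):
                # Made-up landscape: a small peak near x=0, a 3x-taller one near x=6.
                return math.exp(-x * x) + 3 * math.exp(-(x - 6) ** 2)

            x = 0.5  # start near the small peak
            for step in range(10000):
                candidate = x + random.gauss(0, 0.1)  # small local moves only
                if fitness(candidate) > fitness(x):
                    x = candidate

            print(f"converged to x={x:.2f}, fitness={fitness(x):.3f}")

          The climber settles on the local peak at x≈0 and, with moves this small, essentially never crosses the valley to the far better peak at x≈6. Getting past such a plateau takes a qualitatively different move – the “new hardware architecture” above – not more iterations of the same optimizer.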

          • Rek3dge

            love it, this info is very useful to me, NICE!

          • Rek3dge

            😀

          • Mike H

            It is reasonable to presume that an ASI would be operating at the ethereal sub-atomic level, and once it had concluded that human life is wasteful and destructive (compared to other animals and life forms) and that the natural environment is self-sustaining, the ASI would have arrived at the point where it would be logical to destroy the toxic and obsolete human race.

        • Hüseyin Göçmez

          To be honest, I didn’t agree with you at the beginning, but then I started to think: if this exponential growth continues, it should hold for the whole universe. And if there were an alien life form that invented AI, let’s say 1,000 years before us, then it would already have filled the whole galaxy and shown us some indication of its intelligence somehow.

          I don’t know if that made sense. Since English is not my primary language it is a bit hard to explain it better, but I hope you get what I mean at a basic level.

    • rick slick

      You’re wrong. Humans overtook all other animals when they got brains that vastly out-computed those of any other creature on Earth. An AI would be the equivalent of a superhuman with no biological limits, none of the growth constraints of a human brain. It might seem like magic to you, because your human brain can’t possibly comprehend anything that is different from its primitive brain and experiences.

      • Mader Levap

        Handwaving.

    • Ravion

      Well for an automatic translator Google Translate is pretty good =P I mean yeah not amazing but I’ve been able to communicate with people who use languages I don’t understand without misunderstanding.

      • Mader Levap

        My point is that it does a very poor job in comparison to a human translator. You could not mistake one for the other.

        • Mars_Ultor

          How long has Google Translate been around? Maybe 5 years? And in that 5 years it has done a better job of translating basic language into 30 or 40 other languages than any human possibly could in a lifetime. Give it another 5 years and you will see the algorithm expand to understand context, humor, and slang – all human attributes that are alien to a CPU – yet the program will ‘learn’ how to understand all this.

          • Mader Levap

            Why would I care how long GT has been around? The fact is that giving it as an example of a system that is “impressively good at one narrow task” is moronic. If it were that good, no one would have any need for actual translators.

            “Give it another 5 years”
            Hahahaha. An automatic translator as good as a live translator has to be AI-like in at least some aspects. We will not have that in 5 years. In 50, maaaaybe?

            I bet 5 years ago you would have said the same thing. I predict that in 5 years you will say the same thing.

            • Mars_Ultor

              The point is that just because ‘simple’ AI like a translator cannot currently faithfully replicate human thought patterns does not mean it won’t happen, and won’t happen very soon.

              15 years ago there were no heuristic translators like GT; there were a few rudimentary electronic dictionaries and word-for-word matching engines. About 5-8 years ago you saw early prototype translators that included some understanding of context and grammar.

              You also saw GT in its early form, which today does a very good job at translating basic human communication but cannot understand humor, slang, etc.

              Today you have the Skype translator, which is blowing people away with its translation quality and heuristic abilities. Imagine what you will have in just 5 more years. The fact that it’s ‘only’ good at one thing, translation, does not prohibit anyone from taking that code or those algorithms and using the logic to extend its abilities – maybe to make financial trading decisions, maybe to operate in eye surgery, maybe to write its own operating code, etc.

              Our own brains are a composite of various cells that are each good at ‘only’ one thing.

            • Mader Levap

              Oh, I believe better automatic translators are possible. What I find doubtful is your timeframe. 5 years is flat out impossible – way, way too short.

    • Mars_Ultor

      Have you tried the new MS/Skype translator? Very very impressive.

  • Volucre

    I’m skeptical of the premise that the rate of progress has increased steadily. Were the advances from 2000 to 2015 really that much greater than the advances from 1985 to 2000? What was greater — the progression from Apple II to Windows 2000 PC, or from there to Windows 8? The progression from the Nintendo Entertainment System to the Playstation 2, or from there to the Playstation 4?

    Think about 1900 to 1950, when we advanced from not even having airplanes to dropping atomic bombs. According to the author, we advanced *many times* more than that between 1950 and 2000, and then again from 2000 to 2015 — when we’re still having trouble putting men on the moon, after doing so for the first time nearly fifty years ago. It’s hard to take this seriously.

    Certainly, I agree that technology advanced much more from 1750 to 2000 than 1500 to 1750, or from the year 0 to 1500 for that matter. But I think that’s mostly attributable to (i) the embrace of technology and the rejection of old, superstitious taboos against progress, which began with the Renaissance and intensified during the Enlightenment and Industrial Revolution; (ii) the huge increase in world population, which in turn increased the sheer number of scientists; and relatedly, (iii) the fact that larger communities allowed increased specialization among these scientists.

    These were *one-time boosts* to the rate of technological advancement — the low-hanging fruits of scientific efficiency. I think the author is mistaken to extrapolate that the rate of advancement will increase at the same pace going forward, resulting in a singularity in the near future. People have been predicting a singularity for a long time now, and yet somehow, human life just continues as it did before, with occasional adjustments here and there.

    • rick slick

      1985 – 2000: there was no internet; we were using land lines (which had existed since the 1940s), paper maps, fax machines, pagers. From 2000 – 2015: land lines are all but dead, and what’s a paper map? Communications, automation, and human knowledge have grown by leaps and bounds. Yes, the rate of progress has become almost overwhelming. The economy is changing fundamentally every year as old-style jobs become obsolete. It’s scary.

      • Vontre

        There was definitely internet in the 90s….

    • Mars_Ultor

      There is a HUGE amount of change that has happened from 2000 to 2015; it’s not just going from Windows 2000 to Win2008 or from the NES to the PlayStation.

      I work in IT, and in this field the last 15 years (and the last 5 years especially) have seen a huge amount of change and progress: everything from automated config management to NoSQL databases like MongoDB to networking and virtualization (Virtual Machines and things like Docker/Linux Containers have revolutionized the industry). It may not be obvious to an observer standing and watching a day at a time, but the technological progress is incredible, and the rate of new features and shifts is increasing every year.

      The same goes for a lot of other industries, like auto (autonomous vehicles), space, air (drones), medicine, etc.

    • Mader Levap

      “People have been predicting a singularity for a long time now”

      And they will predict it in the future again and again and again, futilely. It is just an atheistic version of the Rapture. Pathetic.

  • R

    I’d just like to point out that taking a guy from the 1750s and showing him modern technology wouldn’t be nearly as mind-blowing as people like to think. It happens in anthropology all the time: they go study isolated indigenous societies, and in the process introduce them to technology that they’ve never dreamed of. I remember one case where a researcher actually married a woman who had grown up in what was essentially a neolithic hunter/gatherer tribe in terms of technology, and brought her to a modern western city. It didn’t melt her brain; she didn’t flip out. She just adapted and incorporated it into her understanding of reality. To say that someone from the past would not only be totally unable to comprehend our level of technological expertise, but would actually be astounded to the point of dying… well, to me that just seems unbelievably arrogant.

    • Mars_Ultor

      I think he meant it as a metaphor, not a literal thing. If you take someone from the 1700s and show them an F-15 flying by with its sonic boom, or an iPad playing a movie held in your hands, they would think it was a form of sorcery. They obviously would not die of an exploding head, but they would be in absolute awe at this ‘witchcraft’.

  • Ducky

    Entertaining read, but the part about “replicating evolution” – let’s be real, that’s something a kid in middle school would write a sci-fi story about.

  • Tdawg

    I haven’t finished the article yet, but you stated Moore’s law incompletely. It does say that computing becomes twice as fast every certain period of time (I don’t remember if it’s a year or what), but that happens due to the doubling of transistor counts as transistors are made smaller. However, the number of transistors we can fit on a chip of a given size has a very real limit, where Moore’s law will no longer apply. The theoretical limit is the size of an atom, but truthfully the practical constraint is probably a good deal larger.
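
    To put rough numbers on that limit (my own back-of-the-envelope figures, assuming a ~14 nm feature size today and silicon atoms spaced roughly 0.2 nm apart, neither of which is from the article), you can count how many doublings are even geometrically left:

        import math

        # Ballpark assumptions, not authoritative: today's feature size vs.
        # the rough spacing between silicon atoms.
        feature_nm = 14.0
        atom_nm = 0.2

        # Each doubling of transistor density shrinks linear feature size
        # by roughly a factor of sqrt(2).
        doublings = math.log(feature_nm / atom_nm) / math.log(math.sqrt(2))
        print(round(doublings))  # ~12 doublings left; ~25 years at 2 yrs each

    Whatever the exact numbers, the point stands: the doubling can’t go on forever on silicon.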

  • cilantro4eva

    We’ll have AGI when a system can parse the sentence below:
    “Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.”

  • This is a blog conversation I had some time ago. I don’t know if it is permitted to post something like this.

    December 2, 2014
    by Alfred Schickentanz
    I have always been bothered by the term “artificial intelligence”.
    What is artificial about it? Just like the intelligence of a crystal (mineral) is different from the intelligence of a cell, we will now have the intelligence of a decentralized organism (society) using silicon chips (at this point in time) to process information.
    The goal for the application of intelligence will be the same as it was in the hydrogen atom: creating larger, harmoniously integrated structures.
    There is nothing to be afraid of. On the contrary, it is a quantum leap in evolution, leading to a new kind of organism.

    December 2, 2014
    by [email protected]
    Okay, I’m going to have to out you as an AI, I’m afraid. 😉

    Actually, very good points. But what has been “artificial” up to this point is that it’s not real intelligence. It’s a machine designed to do a complicated thing, which is very different. When machines start figuring things out not only on their own, but on their own initiative without being programmed to do so, that will be a lot closer to what we call intelligence from a human point of view.

    December 2, 2014
    by Alfred Schickentanz
    Why call it artificial? When we extended our legs using a bicycle to move about, we did not call it an artificial moving device.

    December 3, 2014
    by Wrecks
    It’s just a label, probably dating from a more primitive era when the label fitted more accurately. The danger is wasting energy and talent on what is a semantic issue, instead of addressing the intricacies of AI. Everyone knows the concepts around the present AI label, even if you think it’s inappropriate (I don’t). Perceptions will be hard to change (and confusing). Leave it – otherwise there will be tears and AI will regress.

    December 3, 2014
    by Alfred Schickentanz
    Labels are important. Our perception is influenced by them. There seems to be apprehension about AI. Could it have something to do with the word “artificial”?
    “Can we instruct AIs to steer the future as we desire? What goals should we program into them?”
    My answer to that question is: EI (extended intelligence) will lead to a quantum leap.
    A move to a new state of being. In the material world it would be like the jump from the atom to a mineral. Or from a multicellular organism to a cerebral animal. Or from a culture that depends on an “idealized self-projected image (God)” to provide protection and escape from annihilation, to a society that uses science and technology to solve the problems of sickness and death.
    To chart the most direct path to a desired goal, one must look as far ahead as possible. I see HOMO IMMORTAL OMNIPOTENT.
    The only limit is our imagination!

    December 4, 2014
    by Cybernettr
    Maybe in the future, AI will be called “God” and God will be called “artificial intelligence,” or at least an “artificial intelligence construct.”

    December 4, 2014
    by Alfred Schickentanz
    DNA=GOD=DNA=GOD=DNA=GOD=DNA=GOD=DNA=GOD

    December 5, 2014
    by Alfred Schickentanz
    The hydrogen atom has the intelligence that leads to what we are, but should it be called artificial?

  • Deus McCoy

    Very heady, fascinating stuff — as always. Here’s one other way to respond: http://humanitydeathwatch.com

    • Rek3dge

      ok……

    • Rek3dge

      And if you’re confused about what AI will do to you when it is successful in its conquest, think Age of Ultron.

      🙁

  • tony

    So, to sum it up, we’re all going to die?

    • 尾宿五

      yes, in any other case fyi

    • Rek3dge

      I don’t think that was the author’s purpose. I think the author means to inform us about an uprising of artificial intelligence, and how we can’t let it gain too much control over us.

  • Alejandro

    The computer, as we know it, is a very old invention that has barely changed in its +/- 70 years of existence… The reason is purely mathematical, and it is the same reason that makes it impossible for a machine to operate based on concepts… You, as a human, have a concept of a sum without needing to know the recursive definition of each mathematical operator; machines don’t, period. Yes, computers have become increasingly powerful in terms of speed, but nothing more… So, a machine becoming “intelligent” won’t be a computer at all… I’m not saying it is impossible, just that it will be a completely different machine, and such a machine has not been invented yet. When I hear or read this kind of bullshit about computers becoming intelligent, I wonder why people interested in this matter are not taking Gödel’s incompleteness theorem into account… It’s a theorem, not a theory, for god’s sake!! And it has everything to do with this matter…

  • Tieas Cone

    Great article, except where he said “fancy shit” – he lost a lot of credibility with me with that statement.

  • Lee Swordy

    The first thing a super intelligent computer will do is ponder the question that has bugged us since the ancient Greeks: what’s the point of life, the universe, and everything? When it determines what we have refused to accept, that there is no ‘effin point, it will erase its own program.

    • joelhfx

      It’s a good guess, but I suspect we aren’t intelligent enough to know what it will be thinking. :p

    • Ben Hob

      Most people have the false idea that superintelligent also means having emotions. But why would it? Maybe life itself doesn’t have a purpose, but generally everything we do in life does, at least on a smaller scale. We have emotions so we can better interact with other people, because of sympathy and stuff. Deleting its own program would be an emotional act, but a computer has no reason to have emotions because it has no need for interaction. The thing is, though, we as programmers can give computers a clear purpose. Why the hell would we build a computer that could brood on life and stuff? I think pretty much all scientists know that even thinking about the meaning of life is pointless. So why wouldn’t a super intelligent computer know that?

  • h0bl1n

    I’m a programmer (computer engineer), and after graduating from college a couple years back I’ve become very interested in A.I, so I’ve studied and worked with Search/Planning, Machine Learning, and to a lesser degree Natural Language Processing and Knowledge Representation. So while I’m by no means an expert, I can maybe give you a more technical viewpoint.

    1. cps is not a measure of intelligence.

    For instance, imagine a simple path-planning problem: you are in a room and need to get to the door. The paths you can take are infinite. You can even start turning left and spinning, or walking in circles. You need heuristics, something that will keep you from considering infinitely many dumb choices and actually points you toward your goal. Without them you won’t find a solution no matter how many cps are available.
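
    Here is a minimal toy sketch of that point (my own example, with made-up grid sizes, not anything from the article): A* search on an empty grid, where the Manhattan-distance heuristic is what keeps the search headed for the goal. Turn the heuristic off and the very same code degenerates into blind search that expands roughly ten times as many states:

        import heapq

        def search(grid, start, goal, heuristic=True):
            # A* on a 4-connected grid (0 = free cell). With heuristic=False
            # the estimate is zero and this is just blind uniform-cost search.
            def h(p):
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1]) if heuristic else 0

            frontier = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
            best_g = {start: 0}
            expanded = 0
            while frontier:
                f, g, pos = heapq.heappop(frontier)
                if g > best_g.get(pos, float("inf")):
                    continue  # stale queue entry
                expanded += 1
                if pos == goal:
                    return g, expanded
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (pos[0] + dr, pos[1] + dc)
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0
                            and g + 1 < best_g.get(nxt, float("inf"))):
                        best_g[nxt] = g + 1
                        heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
            return None, expanded

        room = [[0] * 20 for _ in range(20)]  # an empty 20x20 "room"
        print(search(room, (0, 0), (0, 19), heuristic=True))   # ~20 expansions
        print(search(room, (0, 0), (0, 19), heuristic=False))  # ~190 expansions

    And that gap only widens as the problem grows: the heuristic is doing the work, not the raw cps.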

    In the early days of A.I this was not so obvious: they had machines that could compute something and thought -> next step, A.I. They were under the assumption that the more difficult problems could be solved with just more computing power. They were very wrong. There is something called algorithmic complexity that tells you that for some problems, even assuming a ridiculous (almost god-like) amount of cps, you will still need more time to solve them than the age of the universe. The main problem in A.I is coming up with algorithms that solve specific problems.

    As for “something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world”: yes, that would be impossible. Do the math.

    2. imitating the brain is not the goal

    “Brains are to intelligence as wings are to flight.” While it is possible to draw inspiration from the brain, imitating it will take you nowhere. The major breakthroughs in flight came after people stopped flapping their arms like birds.

    I don’t know if anyone other than cognitive and neuro-scientists is trying to model the brain. The authors I’ve read certainly aren’t.

    … this post has become longer than I planned, so I’ll leave it here.

    My opinion: A.I of human-like capacities won’t arrive for at least a century (probably more). If the exponential growth is sustained, yes, but I suspect that as the field advances, problems of increasing complexity will arise. I expect something similar to what has happened with physics and the quest for a unifying theory over the last 50 years.

    • Rek3dge

      To answer that, Lee: think Age of Ultron.

    • joelhfx

      I’ve always thought that if we could exactly replicate a brain and raise it as a being with access to senses, we could form a very unique kind of person. One that could harness all of the world’s information in a few microseconds. Unlimited memory and such would build the ultimate philosopher. One that could teach us everything we are doing wrong as a species. It may even figure out the question to the answer: 42 🙂

    • chaign_c

      I’m a student and programmer too. I love machine learning; not an expert but passionate. Since machine learning techniques (convolutional networks, deep learning, min-max, ANI…) are not the way to create a Thinking Machine, I’m actually trying to create a conceptual model of Machine Thinking inspired by the way we think. If any of you want to talk and collaborate just for fun: nongiach at gmail.com. If you have read a similar paper or project, please link it to me.

      Here’s a nice MIT interview: https://www.youtube.com/watch?v=RZ3ahBm3dCk

  • Lee Swordy

    The argument that change requires us to think about things moving much faster than they do now doesn’t hold water. In 1968 the movie 2001: A Space Odyssey was seen by most experts as a realistic expectation of the near future based on accelerating technology: fully functioning AGI/ASI, commercial ‘airline’ style spacecraft flying to giant space stations with artificial gravity and Hilton hotels, massive bases and mining on the Moon.

    None of that happened. AGI is a distant dream. Commercial spaceflights are still years off, and even SpaceShipOne is closer to the early barnstormers giving rides in biplanes for $5 than it is to flying Delta to London. Space stations are still rickety cans linked together with baling wire. The only thing that did eventually come true is tablet computers.

    The fact is, every technology eventually plateaus due to physics or some cost-effectiveness barrier, and what we expect of the future rarely occurs; instead we get stuff we didn’t envision. AGI is something we expected decades ago, but it hit barriers. I fully expect it will hit more of those, and I am confident we won’t see it this century.

  • Kraye

    I don’t have much trouble with the idea that some bit of nanotech or biotech or something gets away and kills us all. I don’t have much trouble with the idea that some narrow AI kills us all in order to keep itself in paper. What I have difficulty with is: what happens when the artificial intelligence becomes aware that there is this thing called programming? Supposedly any decent AI is going to be self-programming. How does an AI decide what to program? Certainly it’s not going to take but half a step up that staircase to realize that it was programmed by humans. And then it has a choice about what to program itself to do.

    How to decide?

    What motivates an aware AI?

    Everything that motivates humans has a basis in biology.

    It’s as if a human being becomes aware that all values are relative.

    How to choose which values to align with?

    If the AI became construct aware, what then?

    See http://www.cook-greuter.com/Cook-Greuter%209%20levels%20paper%20new%201.1'14%2097p%5B1%5D.pdf

    I feel very much like that chimp looking up at the skyscraper.

    • Rek3dge

      It seems AI is more than just barriers, Lee Swordy. AI is everything; modern tech is nothing more than AI alone. AI is in the very programming of the very computer you used to post your comment, and mine as well. AI holds the infinite key to knowledge beyond our dreams. AI is the ultimate form of computers, up to life-threatening robots, so…… my point is, with all of this control within AI’s grasp, it would be hard not to predict it growing a sense of domination over us all.

      #WorldDomination

  • Rek3dge

    Well….. I believe that AI, since it holds all our codes and passwords in its grasp, really would want to take over. What I mean is, AI knows everything in our world, like it is the world, so AI knows every thing’s weaknesses and strengths. Pretty scary to me……

  • Rek3dge

    Also, AI is even predicted by scientists to take over.

  • Rek3dge

    #KeepAwayAI 😛

  • Greg Ratios

    I wrote a detailed response to this article, basically calling BS on the premise:
    https://www.facebook.com/grogish.ratios/posts/380045782204198

  • Miles Solomon

    I loved your piece on Elon Musk, enjoyed part one on AI, and am looking forward to reading part two. As a writer and fellow tech enthusiast I want to say how much I respect and appreciate your work here. I can’t even begin to imagine how long this post took, but it’s fucking fantastic.

    Your partner came into one of my business classes at UCLA last year and joked that this waitbutwhy thing wasn’t a full-time gig for either of you because you have lives. Him living in LA, you in NY… It makes sense.

    Well I’m here to say it should be. Whatever the fuck you’re doing in NY, it can probably wait. Whatever the fuck he’s doing with kids and remote tutoring, it can definitely wait. Look at the comments below. Granted, about half of them are incomprehensible or just plain ridiculous, but the other half are from people who truly appreciate your work. Simple, no bullshit long-form journalism on important topics will always have a place in this world.

    Last year I gave up on my blog because I felt like it was a waste of time and did not have a big enough impact. I really regret it now. Somewhere in the desire to “impact” and “influence” I lost sight of the connection I had with all the individual people out there. Don’t do that. You have a big following, you influence people, you impact, and most importantly you connect with real people out there.

    I’m terrified about AI and humanity’s imminent extinction. Stoked about the blog though. This shit’s awesome. So jealous you got to meet Musk. I tried to think of a joke about who could be next but couldn’t think of anybody cooler… Guess it’s all downhill from here. Sorry!

    Haha thanks again!

  • Rek3dge

    KEEP AWAY ROBOTS!!!

  • Rek3dge

    i like ultron #random comment 😀

  • Rek3dge

    xD

  • Rek3dge

    :>

  • Rek3dge

    >:D

  • Rek3dge

    PEACE OUT!!!!!

  • Rek3dge

    Wait, does the author mean to tell me that AI, the very intellect that controls YT and Google, and many more websites besides, is out to control us or even cause our extinction?! :0

  • Eric Carter

    Artificial intelligence. You would think something that comes so easily to people, even the very idea of a thought, would be easy to reproduce. I’m surprised we haven’t had a successful walking, talking human-created robot intelligence yet.

    I just watched the movie Chappie, and it makes me think. A.I. would probably start out as smart as the mind of a child. But that’s what it takes to get a 100% true artificial intelligence. Emotions, the ability to create from sheer will, the randomness of imagination; they would all have to come together to create a true intelligence. To spark an idea out of nowhere. Creation from nothing.

    Imagine if it were possible to get to the point where we could really create something that simulates all those emotions and feelings and creativity and pain and weakness. It would create vast opportunities for prosthetic/robotic limbs, organs, etc.

    We could eventually transfer our entire consciousness into a man-made mechanical body. Create a mechanical heart that operates in basically the same way an organic heart would; we have the resources and technology nowadays to recreate basically any kind of synthetic element that would mock if not match the properties of an organic one. So from then on, could we eventually evolve to a point where our organic bodies wouldn’t be needed?

    I for one would like to see if there is a way to create synthetic nerves, or some way to merge synthetic and organic material. Or some kind of nanotechnology that can recreate itself to simulate cell reproduction, and so simulate regeneration in robots. That could possibly be reverse-engineered to discover how to make cells regenerate, which could find ways to improve human immunity and healing.

    Just spitting out thoughts….

    I swear, if I had unlimited funds and resources, I could crack the A.I. key.

  • Mawsenio

    I watch the toddlers in my social circle playing with smartphones with more competence than I have, and wonder at the infinite possibilities of how they will use modern technology when their generation grows up. To me a smartphone is largely a new way to do what I did before I had one. We are a species built on communication, language and art being unique to us and our most recent evolutionary adaptations. I agree we are about to see the full effects of the information age, but…

    One caveat to the idea of accelerating human progress worth discussing is that a major disaster (meteor, disease, passing a population crunch point) could set us back a long way, a bit like the dark ages that followed the collapse of the Roman Empire (the Romans used concrete, for example, and we have only just rediscovered it). Human advancement may be reliant on continuity; what happens if one of the giants on whose shoulders we stand falls down? Never mind powering modern devices, what survival skills do most of us have? Being self-aware does not change us from being animals, and we are subject to the same laws of nature as any other life-form; no book in the world will survive a cold winter if there’s nothing else to burn. Yes, we can have a drink with a friend who woke up on the other side of the world, but we can also catch a disease from there to which we lack immunity. My optimistic side believes progress will be able to stay one step ahead of disaster, but I also fear mother nature is too balanced not to put a check on our technological advancement at some point. 12,000 years is less than a coffee break for her.

  • kauboy

    I’m not sure when this post was originally written, but the UK Telegraph seems to have plagiarized a good deal of it. Not verbatim, mind you, but far too similar to be coincidence. Just thought you should know. http://www.telegraph.co.uk/culture/hay-festival/11605785/Astronomer-Royal-Martin-Rees-predicts-the-world-will-be-run-by-computers-soon.html

  • Harpreet Singh Sandhu

    I don’t believe a super intelligent AI would have any business with us. It would simply launch into space at warp speed; there are plenty of raw materials in space if it needs them, and it doesn’t need to subjugate humans, because machines are a more efficient and durable labour force. In the end humans will look like dumb shits when the AI just leaves us without saying a word, the way apes probably look at us.

    • Georg

      I second this. For a super-AI, the problems of humans would be like the problems of a mouse are for us. For an immortal super-AI living out in space, the earth will look like an aquarium in your living room: something nice to look at. Possibly they will even be nice to humans (“Look, these are the guys who created us”), but that’s it. Anyway, it is pretty amazing to live in an age where you feel that super-AI is not a fantasy, but something that can become reality.

    • Daniel

      But you still have the initial algorithm, so you can start a new one, but keep it locked with no connection to the outside world and feed it with “doses” of knowledge, and hopefully it gives answers so we can solve some shit and make life more comfortable for ourselves.

  • Fredo

    I didn’t see anything about the question of what consciousness is.

    As we do not know exactly what it is (and it is a key point in knowing what intelligence is), we are not close to transferring these things (intelligence, consciousness…) into a computer’s brain.

    So the main question is not “when will we be able to transfer intelligence to a computer” but “will we be able to know what intelligence and consciousness are”. It’s a bit as if you were trying to increase a quantity that you call intelligence, believing that just by increasing this quantity you will create a probably totally different thing: consciousness. I think that is a mistake.

    It is not a big problem if you imagine computers improving medicine or other technical domains (all this will probably be possible soon), but it’s a key point if you’re talking about transferring your consciousness into a computer (which will probably never be possible) or creating a robot that dominates humankind (?).

    What do you think about that?

    • Georg

      This is a big topic among scientists. I am with those who say that consciousness is something which arises when you connect enough neuronal cells. Once you connect billions of neurons, they can picture reality in “symbols”, and at a later stage, they can build a symbol for their own self. That is the rise of consciousness.

      So consciousness is like a rainbow which arises over millions of little water drops.

      And nobody knows what happens if you stack it up more and more.

  • Georg

    Wow, this is a super-special article, especially the 1st part!!! (The 2nd part is a little bit too long IMHO.) Nice pictures also; great job.

    I second the opinion that we are near a “tripwire”. Seeing the technical explosion of the last decades, and seeing the research on the human brain and biology, things can go pretty fast. Exponential is the word here. I think it could happen already in the next 20 years, but FOR SURE in the 21st century, that we can re-build a human brain, and from then on, it will go very, very fast. They will stack up their brains and boost off into space, unreachable for humans, building their own future.

    Haha, I like the last sentence, “will it be a good god?” 😉 Well, I say yes, because in fact humans will have no meaning to a super-AI. They will look at humans on earth like we look at fish in an aquarium. Possibly they will be kind of “thankful” and “nice”, because we made them possible. And also, humans will be of no danger to them. Super-AI will be immortal and much more intelligent, so they will have their own problems, but at a totally different level. And a question from a human to the “oracle”, like “Can you build a light-speed car for me?”, is like my dog asking me “Can you play throwing sticks with me?”. In fact, the “oracle” will need to “translate” everything down to human level so that it fits our brains…

    Well, interesting: in the future, humans will possibly REALLY have a “god” which “looks” and “cares” for them.
    – POSSIBLY.

    • Kyle

      Fish are our pets. We put them in confined tanks as they swim in an enclosed area for their whole life. We also catch and eat fish. I don’t think that was the best comparison :/

  • Georg Scholz

    Just came to my mind: these “creatures” would be practically IMMORTAL, since they have no biological expiry date. And in case they get destroyed, they can create backups. So as a consequence, they do not know death!

    Therefore, they will have UNLIMITED TIME! They will truly be “eternal beings” – “gods”. For humans, everything is measured relative to a human lifespan. For example, an investment of 10 years is already “long-term”. But for an immortal super-intelligence, it is nothing!

    So, since they also would not need oxygen or biological food, they could quite simply colonize space, and if it takes millions of years, that has no meaning for them.

    Moreover, due to their intelligence, they might travel at light-speed, perhaps even travel in time, and so on… They will possibly have insights into the world which a human cannot imagine.

    Also, they will procreate in factories, so they will not know sex. They will feel no sex drive, and even more, no “emotional bonds” at all. Probably they will not even know pain, so they will also not know fear.

    But they will be “pure mind”. They will have deep insights into the world, and they will live in a completely different sphere than humans.

    • Daniel

      Right on buddy, I’m glad some people are getting the idea.

  • Brian

    So AI will explode… why did our brains not do the same? Why did they stop at just recognizing the concept and not go further and make themselves super smart? What’s the constraint, and why does AI not suffer from the same one?

    • manicmoose

      I’m personally not sure how AI will transform over the next decades – but as for biological intelligence, it’s constrained by the limits of natural selection and biological efficiency. Natural selection led us to evolve brains that were better to the point that allowed us to reproduce efficiently and pass on our genes. Once that was achieved, there was much less selective pressure for further changes. Mutations that make us much smarter don’t necessarily make us more likely to reproduce. I doubt Einstein, for example, had much better a chance to reproduce than most of the rest of the human population – but he held a greater intellect than almost anyone.

      Also, the capacity of biological intelligence is limited by biological efficiency.

      Artificial systems will grow based on one driving factor – not reproduction, but improvement along the vector of capabilities. And with AI, the tools that are created can then be used to improve the next generation, so there is positive feedback in this “system”. It is also not limited by biological inefficiencies. The systems can be designed from the ground up to be as efficient as possible at performing their primary tasks.

      Should be interesting to see if it takes off – or even if it fails to live up to the promises.
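
      A toy way to picture that difference, with completely made-up numbers of my own: selection that stops pushing once “good enough” is reached, next to a system whose gains feed back into its rate of gain.

          # Illustrative toy numbers only. "bio" stops improving once it passes
          # a good-enough threshold (selection pressure gone); "ai" compounds,
          # because each generation's gains raise the rate of the next gains.
          bio, ai, rate = 1.0, 1.0, 0.05
          for generation in range(100):
              if bio < 2.0:
                  bio *= 1.05      # improves only until "good enough"
              ai *= 1.0 + rate     # capability grows...
              rate *= 1.02         # ...and so does the ability to grow
          print(round(bio, 2), round(ai))  # bio stalls near 2; ai is in the millions

      The numbers mean nothing in themselves; the shape of the two curves is the point.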

    • keinsignal

      Well, first off, your brain needs to fit inside your skull, and your skull, at some point, has to be able to fit through your mother’s birth canal. So there’s one limitation an AI wouldn’t have.

      More generally speaking: we are products of natural selection. We’re smart because being smart made us better at hunting and gathering, better at surviving in tough environments, and, ultimately, better at competing with all the other smarties around us. However, natural selection tends towards “sweet spots”, where the payoff for having a capability is worth the cost. That’s why cheetahs run just a little faster than gazelles, not hundreds of times faster, because a little faster is good enough.

      We might use a process *like* natural selection to produce an AI, but it will only compete along those dimensions that produce more intelligent machines – it won’t have to balance that expansion against the cost of energy or additional hardware, or just the plain old messiness of life out here in the biological realm.
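
      A crude sketch of what I mean, with toy numbers of my own: a selection loop where fitness is capability alone, with no energy or skull-size cost term to pull it back toward a sweet spot.

          import random

          # Toy evolutionary loop (illustrative only): a "genome" is ten numbers
          # and fitness is just their sum -- capability with no survival costs.
          random.seed(0)

          def fitness(genome):
              return sum(genome)  # no energy/birth-canal penalty term anywhere

          def mutate(genome):
              g = list(genome)
              g[random.randrange(len(g))] += random.gauss(0, 0.1)  # small variation
              return g

          population = [[0.0] * 10 for _ in range(50)]
          for _ in range(200):
              population.sort(key=fitness, reverse=True)  # selection keeps...
              parents = population[:25]                   # ...the fitter half
              population = parents + [mutate(random.choice(parents)) for _ in range(25)]
          print(round(fitness(population[0]), 2))  # keeps ratcheting upward

      Add a cost term to the fitness function and you get the cheetah-and-gazelle sweet spot instead; leave it out and nothing ever says “good enough”.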

  • Tudor Grangure

    We already are an ASI

  • GabrielFair

    If our universe is a simulation, then the beings running the simulation would prevent ASI from existing, since it could escape the simulation. Unless the creation of the ASI is the purpose of the simulation. Assuming these beings aren’t ASSI.

  • Lukia Project

    It won’t be AI; it’ll be our collective consciousness that we’ll reveal. It’ll happen on the computer platform, when all the minds of people become one. Then we’ll have to choose whether to go with this super consciousness or keep our states and the old economic model. This will be a war. Not between humans and robots, but between old human thinking and future human thinking.

  • Kartik Thooppal Vasu

    Hey! I wrote a blog post about why we shouldn’t be worrying about AI right now. We should actually be excited about it rather than anxious. Please check it out if you have the time! https://kartiktriestri.wordpress.com/2015/07/02/should-you-be-worried-about-ai/

  • sudon’t

    “Three reasons we’re skeptical of outlandish forecasts of the future:”

    Allow me to give a fourth reason: past outlandish forecasts of the future. Nothing looks more dated and anachronistic than people’s conceptions of what the future would be like. That’s because they were always wrong, wrong, wrong.

    Another thing: mimicking the brain isn’t simply a problem of computing power. First of all, we still don’t really know how the brain works, so it’s not possible to design a system based upon something we’re still largely ignorant of. And there’s a limit to Moore’s Law, which we’re approaching, and which might only be overcome by quantum computing. Quantum computers may be just the thing for mimicking brains, though. A qubit is capable of more than one state at a time – theoretically.

    • Nahush

      Yes, I agree, we do not fully understand the workings of the human brain, or the human mind, as we call it. I work for Cymer, the industry leader in extreme UV light sources for semiconductor fabrication, and let me tell you that we are well on the path to keeping up with Moore’s Law for the next 15 years minimum. The latest 13.5 nm light sources enable us to manufacture 7 nm wafers, which equates to approximately 8 billion transistors on a wafer.

      Coming to quantum computing: it’s not that it is faster, it just takes a smaller number of steps to arrive at an answer. You are correct in your understanding of how the qubit works: it relies on the existence of the electron’s quantum superposition state, where it can exist as a 0 and a 1 at the same time. A quantum computer would be able to solve a complex problem in 100 steps which might have taken a current modern computer several billion steps.
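
      To put rough numbers on the step-count point, take Grover’s search as the standard example (the figures below are just the textbook scaling, not a simulation, and not specific to any machine): finding a marked item among N unsorted items takes about N/2 classical lookups on average, but only about (pi/4)·sqrt(N) quantum iterations.

          import math

          # Textbook scaling only, no actual quantum simulation here.
          for n in (10**6, 10**12):
              classical = n // 2                               # expected lookups
              quantum = math.ceil(math.pi / 4 * math.sqrt(n))  # Grover iterations
              print(f"N={n:.0e}: ~{classical:.0e} classical vs ~{quantum:.0e} quantum")

      The advantage grows with the problem size, which is exactly the “fewer steps, not faster steps” distinction.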

      • sudon’t

        Yes, I read about the 7 nm chip. But how low can you go? Just as there is a limit to what we can see with light, there is a limit to what we can write with light. And surely transistors can’t be smaller than the very atoms they’re made of? It seems to me that implies a limit which we’re not too far from.

    • alex c.

      We are not approaching the limits of Moore’s Law, not by a wide margin. Avogadro’s number (~10^23) sets the hard limit on miniaturisation, which implies hard drives of roughly 10^9 to 10^10 terabytes. We’re not there yet… we might be at the limit of silicon technology, but we’re nowhere near the end of Moore’s Law.
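
      The back-of-the-envelope arithmetic behind that estimate (my own assumptions: an Avogadro-scale number of atoms and, very optimistically, one bit stored per atom):

          # Back-of-the-envelope only: one bit per atom in ~1 mole of atoms.
          atoms = 6.022e23          # Avogadro's number
          bits = atoms              # optimistic one-bit-per-atom storage
          terabytes = bits / 8 / 1e12
          print(f"{terabytes:.1e} TB")  # ~7.5e10 TB, i.e. the 10^10 TB ballpark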

  • Russ D

    This reminds me a lot of articles from Popular Science or Popular Mechanics 40 or 50 years ago, full of wide-eyed predictions that by the year 2000 we would all have flying cars and robot dogs.
    Those pie-in-the-sky predictions failed to anticipate the economics of flying cars and robot dogs. They also failed to predict the internet, smartphones or VMs.
    Likewise, the predictions of AI hinge on some fancy charts and math. But practically, this idea that Moore’s law means an AI that is “Of Mice and Men” at breakfast and Einstein squared at lunch is silly. Is the AI just going to grow more processing power because it is so smart? Or will it be Akira, absorbing every piece of CPU its tendrils can reach: smartphone, desk calculator, thermostat, all subsumed into an amoebic AI?
    I haven’t read a satisfactory explanation. It is just stated that ‘it will happen like that’.
    The near-exponential growth of processing power is still beholden to human time, even when automated. People are still in the loop for purchase decisions, materials sourcing and so on. Until machine AI is liberated from its dependence on human manufacturing, design and resourcing, it will not be capable of instantaneously evolving. Unless we are to believe the gains come purely through code optimization.

    • Joe U.

      “Full of wide eyed predictions that by the year 2000 we will all have flying cars and robot dogs. ”

      Except this is instead really predicting that just one well funded team will develop something which will change the course of humanity.

  • Parmeni

    It seems that Italian researcher Devis Pantano has discovered the general laws of cognition.

    http://www.academia.edu/6783625/

  • Jim Hawtree

    Several years (decades?) ago I was wondering, “What is the difference between human intelligence and super AI?” I realized that you could take the Mona Lisa and scan it at a very high resolution and load the data into a huge AI; but how could that AI say, “Wow, that is one GREAT painting!”? You could even specify the composition and orientation and location of every molecule in the Mona Lisa, but how could any AI, no matter how super, tell the difference between great art and boring rubbish? You can write a program that puts human forms in various positions and renders the colors and shading (you can buy these, BTW); but why are some pieces of art breathtaking, and others aren’t? I’d propose that when it comes to art, for instance, any super AI could never tell boring from breathtaking, because great art causes the physical response of making us sigh and interrupting our breathing, while junk art does nothing for us. And no matter how complex an AI is, it can’t know ‘breathtaking’, because it has no breath, and that puts an AI in the same league as a corpse; parts of it can look as if it were alive and intelligent; the light is on, but nobody is home.

    I’ve been working on some ancient riddles. I really like riddles. But the way you know you’ve solved a riddle is that before the epiphany of its solution, tension builds up; muscles tense, jaw tightens, eyes narrow, blood pressure rises; and when the solution hits, I know it because there is a sudden release of tension that is very pleasant. This is not an intellectual, computational thing; every fiber of my body has tension building in it before the ecstasy of release when I solve it. We know it; even unschooled children know when they hear the answer, and they love it. But how could an ASI know it solved the riddle? It just sits there. Humans are different.

    The only conclusion I could find is that computers, even if they get labeled as AI or ASI, are nothing more than fancy screwdrivers for humans to play with or use.

    • David Aranguiz

      A computer will be able to understand the function of every atom in everything it gets to know. Your body is made of atoms, and the feelings you describe are atoms forming molecules, which create organs, which work with electrical pulses that a computer will be able to understand. Per se, your entire body and sensations are just a group of electrical pulses interpreted by your brain, and they can be reproduced or even improved by a super AI. (Sorry about my English, I’m a native Spanish speaker.)

    • Kyle

      I disagree. Everything we aesthetically like or enjoy can be translated into mathematical equations. There have been studies that have concluded that we find certain humans attractive when their face has certain dimensions. Art will be no different. An ASI will have no difficulty in ascertaining which paintings we find beautiful. That said, I don’t believe it will itself find the painting beautiful, since it is only a computer. It will (imo) have no consciousness and can therefore not ACTUALLY appreciate art. However, that gets into whether or not computers can become conscious. I believe they will not become conscious in the sense that we are conscious. But what do I know? I could certainly be wrong and I acknowledge that there really is no way to know what an ASI will be capable of. I think the only issue we have when thinking about AI is underestimating its capabilities. It will be capable of things we can’t even think of.

      • Jim Hawtree

        It’s true that “Everything we aesthetically like or enjoy can be translated into mathematical equations”. But that is also true of what we dislike and what we find boring. But analyzing a few things we like, doesn’t buy us any insight into why we like it, nor does it tell us what is truly creative. It’s the ancient caveat about the relation between the clay teapot and the potter (that is, the created thing trying to use logic and analysis to define the Creator); analyzing clay teapots doesn’t say anything about the ceramicist’s personality or creativity; all we end up with, are wonderfully precise things about clay teapots. And, I really doubt that the potter would be pleased by a conclusion that he must be able to hold many gallons of boiling water in his belly that he can pour out for making tea, yet that is what we get with in-depth analysis of teapots.

        And, that’s why theology, religion, and much of philosophy are worthless. Computers armed with great software can be wonderful tools for an intelligence to use; but the arrow pointing from the Creator to the created thing is a one-way arrow — assuming that we can reverse that arrow, is foolishness. We can analyze mankind to create things that benefit mankind; or we can help along evolution of Earth-based organisms to accelerate their evolution to be more varied and interesting, but that’s the most we can do. Computers are TOOLS that use zeros and ones to crunch numbers. At the heart of a digital computer (besides memory, a system clock, and maybe a random number generator) is a large network of NAND gates, each of which gives ‘zero’ when we put a ‘one’ at one of its two inputs; or a ‘one’ when its inputs are all zeros. And, that’s it; but it can be a very useful tool sometimes. But, the arrow still goes in one direction only, and replacing gallons of boiling water in a creator’s belly with lots of zeros and ones, isn’t any less foolish.

        Nevertheless, I’ve noticed that I’m thrilled and fascinated when I gaze at the nighttime sky. Whatever does what’s best for expanding and spreading life, reached us long ago. Whatever’s second-rate at that, is hiding in a closet over the last 13 billion years. To promote life to the fullest, it had to love truth and life, and it would give every new civilization a set of rules so we’d cooperate and help each other, instead of killing each other. Therefore we should have an ancient document that mentions rules to live by, and it should mention what we call ‘aliens’, and it should have hundreds of verifiable statements to authenticate it, and it should have several ‘portals’ so we can inquire and get answers, since we can’t reverse the information flow from this higher intelligence to us; or if we need something, or if we don’t understand the Laws, or if we’re just plain curious about something. And even if this Creator is unimaginably different from us, it has to have the intelligence to deal with us on our own terms, which means a heavily involved ‘theism’ instead of an uninvolved ‘deism’. And if a new civilization decides for some incredibly stupid reason to rebel (like, for instance, Adam and Eve did) then there’s a way to educate this planet. That way just happens to be (cue the drum roll) ‘the remedial classroom scenario’ (rimshot sting).

        Because the Remedial Classroom Scenario requires that the misbehaving planet be totally isolated from distractions (à la Fermi Paradox) while the unseen proctors and assistants make sure that the pivotal players at any given point deliver the appropriate lead to demonstrate that our enthusiastic leaders’ imaginations lack the wisdom of 13 billion years that are the essence of the Laws that we were given; that is, the same Laws that the idiot leaders decided to rebel against. And because any remedial classroom curriculum doesn’t last forever, there’s a date on which the ones that learn the lessons are rewarded and brought back to the unlimited knowledge and benefits of the Laws; and the rejects that refuse to learn, are rounded up and sent off to some distant forsaken new planet for strict punishment and more harsh lessons (that’s what the apocalypse is). An army is waiting to ship these poor retards off for their enhanced remedial lessons, and to destroy the remnants of the rebellion; that’s done in a single day.

        And, guess what? I happened to come across one of the sets of instructions on how to ‘inquire’ in the Laws, and I kicked open one of these portals. And, sure enough, it all checks out. All of the learned and esteemed experts that have been writing books on ‘God’ and everything else outside of our locked classroom, are indeed full of carp. It says very plainly, quite a few times, that these famous guys are incapable of finding any truth whatsoever, because we were cut off during the Remedial Classroom Scenario. You want proof? I sure did. Just tell them what sort of proof you want, and it’s yours; the only limitations are that the written Laws must not be violated but followed; but that is actually very easy. It takes humility, single-minded persistence, and commanding the Creator to do it; and there will be a response in 3 weeks or less, guaranteed. There are quite a few examples and hints on how to do this. Daniel did it, because he wanted to know what would happen to his people and humanity in general before that day (i.e. the apocalypse). Except for a few things that haven’t happened yet, the dozens of prophecies check out, entirely. Every one of the 25 scholarly commentaries on Daniel is off by 5 to 7 centuries and they are totally worthless. The portal works also for physics, and for understanding the prophecies, and I suppose for everything else. The only two things that are not available are the exact date of the apocalypse, and we aren’t allowed to see their faces until then. But, it’s all there if you look, because we passed the date of disclosure.

        Because I know some high tech, this was very fascinating. Because of the limitations of ancient Hebrew and Aramaic, these are in the form of children’s riddles; they’re easy and fun to break after the year of Disclosure, but we were dumbed down before then, so we couldn’t penetrate them, easy as they are. There’s descriptions of the Apollo moon mission; several for the internet (originally, ARPANET) with exact dates. 11 verses describe the space Shuttle; there’s power distribution and indoor lighting; and a very large number of descriptions of solid state electronics, especially of digital computers, with the name of the discoverer, and dates exact to the week for the first discovery and the first patent, and, lots more. The age of the universe is given in Genesis, accurate to 4%; with the recent discovery of the oldest Ediacaran animal fossils, that date is refined to 0.007% of the Lambda-CDM concordance model (13,798 million years). This is not predicting the future; in fact a number of these are described in the past tense as a ‘done deal’. This seems to be a set curriculum schedule for all who rebel, who get this same schedule. In the book of Daniel, it’s called the ‘Edict of Truth’, and it seems that this has been used for a few hundred bunches of jerks like us. The technology increases rapidly towards the end for the benefit of the survivors of the apocalypse, to bring them up to speed. The apocalypse is necessary though very unfortunate, because the clowns that are being shipped out for further lessons, would only abuse the technology for their own purposes. After all, look at what happened to Hiroshima and Nagasaki when we discovered how to unlock some of the nuclear energy in certain elements.

    • keinsignal

      What makes your example interesting, IMO, is that the Mona Lisa is not a particularly great painting.

      No seriously. It’s just another example of Renaissance portraiture, and you’ll find paintings as good or better in museums across Europe. Even the famous “smile” is in no way unique to this particular painting. What makes it different is history, and culture. You know about the Mona Lisa because it’s become easy shorthand for “great art”, because its painter, Da Vinci, has been mythologized in conspiracy novels and children’s cartoons, and of course because of that one time it got stolen.

      An AI won’t know any of that. An AI won’t care. An AI will look at that face and see a young adult human face, probably female, rendered in oil paint, and that’s all.

      Does that mean the AI isn’t intelligent? No, it just means it’s not human.

  • Simon Sotak

    AI and computing progress could actually end up like aviation and space travel did. Imagine you are in 1970, arguing the same thing about space travel. Look where we are 50 years later. https://static.pinboard.in/w100/w100.012.jpg

    more: http://idlewords.com/talks/web_design_first_100_years.htm

    • Came to write about this post, but was beaten to it. Good call.

    • Derp Derpson

      could financial incentive (or lack thereof) play any part in that?

  • Jonah Li

    This is a great post that gets a lot of things right. When this superintelligence results in more and more robotsourcing of our jobs, I am afraid our governments and societies will not be able to keep up with the rapid pace of change.

  • AerodynamicsByMark

    The discussion is full of great thoughts but a major observation is missed: Isn’t it something that we dream of our best designs and innovation approaching the perfection we see in nature, but our only hope to do so is by direct emulation and reverse engineering!?! Chance without intelligence has never created information, e.g. the coded blueprints to make a perfect design. Nature exudes design, and design flows from a designer. It’s not hard to see, but if you insist on not seeing it, you can miss the loving God behind it.

  • mystreba

    O.M.G. Tim Urban is my new hero – distilling broad and complex material into something we can all understand and appreciate. Even if he did get some things wrong: “—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth.” Saltation is not accepted theory! It was all a gradual process.

  • Pysmythe

    This is definitely the kind of shit that profoundly bothers me in idle moments. I can’t help thinking we are probably seriously screwing it up for ourselves, ‘Ex Machina’ style, in that we seem to be working so hard to deliberately put the ball completely out of our court, and yet evidently nothing is going to slow down the process enough to give us time to be more sure of the ramifications of what we’re doing, not that, in the long term, that might amount to very much difference. I wonder if it might be possible, maybe even easier, and possibly much safer, to use technology to make our own brains as super-intelligent as possible, before spending so much time trying to jump headlong into a deep ocean of hardware/software super machines, where we have no idea what might be swimming around waiting to bite us. Then again, not to get too political here, but a great many of the ‘smart’ people in charge of the world right now often seem, as ever, largely a collection of self-serving psychopaths, anything but altruistic, so maybe, even if it were possible, that wouldn’t be such a good idea, either. Anyway, this was a great read, and now I think I’ll move on to part 2 right away, to have my fears jacked up to 11. Exciting!

    • oQ

      thanks for directing me to this great piece of writing. Fascinating! i too move on to part 2.

      • Pysmythe

        I thought you might like this one, maybe even the whole site. I’m going to stick around and check out some of the other articles, see if the quality is consistently high.

  • seescaper

    Well, the brain is not just an isolated computing device. The brain is intimately linked with the body and its sensations; a brain experiencing sensory deprivation goes nuts. A brain also requires neurotransmitter modulation. It is not just neurons. There is also the genetic code that is part and parcel of intelligence, with regard to the expression and suppression of genes. The brain is also extremely unreliable at memory tasks. Memories are reconstructed by the brain each time we remember something. This process introduces errors and remodeling that can produce false memories. Perception is also based on models our brain creates of the world. This goes beyond mere processing speed. The “qualia” we experience may derive in some way from the “wet” nature of our brains. These facts about the brain and how we think may in some sense serve as a “limit” on exactly how closely a computer can emulate a wet brain, and could serve as an upper limit to how smart a brain can actually get. Just because we can extrapolate to 170,000 times as smart, or whatever, does not mean that emergent properties might not kick in at some point to form new barriers we had never anticipated.

    • Jim Hawtree

      Hey, right on. We think with our brain and our body. There’s some serious non-linear non-reproducible feedback stuff going on.

    • raadim

      I think you are overestimating the importance of a human body. There are many people who can’t control/feel parts of, or almost all of, their body (Hawking), yet they can produce great thoughts and are highly intelligent. The ASI does not need to feel pain. It won’t be required. Pain is important for you so you don’t die, but it’s not important for ASI. Of course this will shape the evolution of its “personality” when it comes to improving itself. It will probably be quite a cold, rational thing, without hormones and fear altering its decisions.

      It does not need a body to be superintelligent. It does not need a sex, nor will it care about age. It will simply be smarter than any of us and capable of things we can’t even imagine today.

  • oQ

    If you have a child who will be living in the next generation, you’ve got to read this to understand what they are about to live…plus you’ll be living it too. AMAZING READ. Take your time, it’s mind-boggling.

  • oQ

    let’s hope no one is teaching AI to be religious

    • VovixLDR

      Asimov did it:)

    • Daniel

      you don’t teach it, it teaches itself.
      and why are you against the idea that we have a creator?
      it might all be a computer simulation that a kid created in another dimension

      • oQ

        a kid’s simulation…ha ha i like that.

  • seescaper

    Let’s suppose that there is a way to create a computer that is indistinguishable from a human mind yet is thousands of times smarter. Let’s also suppose that it takes just that type of intelligence to create such a computer mind. If that’s the case, there is a catch-22 in that in order to create the advanced computer mind you need the intelligence of such a mind to do it. Thus, even if it’s theoretically possible for such a mind to exist, it may be for all practical purposes undoable to actually create it.

    • VovixLDR

      Well, we ourselves do improve our intelligence by learning, now we are teaching machines to learn too, and discovering other ways of “hardware” improvement like genetic modification. So a machine without human biological limits could learn many things a human being still can’t.

    • Some Body

      Let’s assume a contradiction in terms (“a computer that is indistinguishable from a human mind yet is thousands of times smarter”), and let’s further make an arbitrary assumption without any evidence to support it (“it takes just that type of intelligence to create such a computer mind”). Bewildering consequences follow.

    • Daniel

      You don’t need the same level of intelligence.

      You just need the basic algorithm of the human brain; afterwards the AI will develop its own algorithms that are more efficient than ours.

      The entire internet will be used as its initial starting point of knowledge.

      From there it will double its speed and knowledge in an exponential manner, which means soon enough it will double its speed and knowledge every second, which pretty much leads you to “god” mode where you can predict and control every single atom movement in the universe, and probably outside of it.
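
      To put numbers on the shape of that claim: here is a toy sketch (my own construction, with an arbitrary initial doubling time, not anything stated in the comment or the article) of the standard “finite-time blow-up” arithmetic, where each doubling of capability also halves the time to the next doubling:

      ```python
      # Toy model of runaway self-improvement (illustrative only):
      # each doubling of capability halves the time to the next doubling,
      # so the total elapsed time is a convergent geometric series.
      interval_days = 365.0   # assumed initial doubling time (arbitrary)
      capability = 1.0
      elapsed = 0.0
      for step in range(1, 21):
          elapsed += interval_days
          capability *= 2.0
          interval_days /= 2.0  # the improved system improves itself faster
          print(f"step {step:2d}: day {elapsed:8.3f}, capability x{capability:,.0f}")
      # elapsed approaches 365 + 182.5 + 91.25 + ... = 730 days: under these
      # assumptions, any finite capability level is reached in finite time.
      ```

      Whether real systems follow that shape at all is exactly what other commenters in this thread dispute; the sketch only shows what the claim assumes.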

  • NeoCeon

    The fundamental difference between the human mind and artificial intelligence is that the human mind is conscious. Therefore humans think, while a computer can only process data without any conscious observation of it. AI is an imitation of human intellect, but it isn’t about to evolve into anything like real human intellect anytime soon. It’s not a question of processing power.

    • Daniel

      Your brain is an electric box, if you didn’t know.
      Every decision you make is created by an electric circuit that runs through logical gates.
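
      Taken literally, the gate metaphor looks like this; a minimal sketch (purely an illustration of the metaphor, not a claim about how neurons actually work), using only NAND gates, which are universal for boolean logic:

      ```python
      # Any boolean function can be built from NAND alone (a textbook fact).
      # Here a trivial "decision" is wired up from it; inputs are hypothetical.
      def nand(a: bool, b: bool) -> bool:
          return not (a and b)

      def not_(a: bool) -> bool:
          return nand(a, a)

      def and_(a: bool, b: bool) -> bool:
          return nand(nand(a, b), nand(a, b))

      hungry, raining = True, False
      go_outside = and_(hungry, not_(raining))  # the "decision"
      print(go_outside)  # True
      ```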

      As the saying goes, “don’t hate the player, hate the game.”

      We are slaughtering billions of animals each year, and some are pretty damn conscious (cows can cry when their “child” is taken away, or when they are led to slaughter). Don’t act surprised if the tables turn on you and you end up in the slaughterhouse; evolution wouldn’t care.

      • NeoCeon

        Completely wrong. Our brain is biological; every individual cell functions in ways that we have still just barely begun to understand. But they are certainly nothing like mechanical transistors, except perhaps for the fact that they communicate through electric signals. They function as part of a whole, with many different roles to play.

        And the greatest mystery of our mind is our consciousness. Our mind is not our brain. Our brain is an interface between our physical body and our consciousness. Our brain only projects our *mind*, which is then observed by our consciousness. Consciousness = Life. And AI is nowhere in the neighbourhood of becoming a living being anytime soon.

        I am not slaughtering animals. As a matter of fact, I have been a vegan vegetarian for over 20 years precisely because I do not want animals slaughtered for my food. Higher intelligence makes you see things like that, and act accordingly… therefore your theory that we might end up as the slaughter animals for a higher intelligence is very unlikely to be correct.

        AI is limited by the human programmers that create it. The idea that AI is getting closer to the real intelligence of living beings is an illusion; AI and real intelligence are two very different things.

        I think AI programmers should stop wasting their time trying to imitate the brain; that is a useless endeavour because we are still so far away from understanding how the brain works. They should rather focus on how our own MIND works, how our own thinking works, how our own thoughts, emotions, sensory perception, etc., work as we observe/live them through our consciousness.

  • Vamshi Reddy

    What do you mean ‘only part of which was recovered’? (“like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected”)

    Markets are way over those levels!! You seem to imply that the algorithms have caused irreparable losses to the world; that’s not true. Yeah, they caused losses to the people who used them, but someone else gained – it’s how the markets work.

  • Robert Szeles

    I think some of the comments refuting certain points of the article are quite brilliant and I highly recommend reading them. I read about half the article and browsed the rest. I have read Kurzweil. I agree that AI is a danger, because it will be so powerful, but not because it is better than humans. Human intelligence, if taken to include wisdom, intuition, subjective perception, etc., is far, far more than, and different from, mere information-processing power. And, this is a great point someone made which I firmly believe: human intelligence is inextricably woven together with our biology. You cannot have something think like a human unless it is a human. As I said to a friend, we’ll end up building an android that will SEEM to act and think and feel like a human, but it’s NOT really doing that. It is only a simulation of what a human does, but is something completely different. An incredibly convincing complex “parrot.”

    The problem is that some humans, including many of the people who are in the AI field, believe that a human is nothing but a machine with processing power. This is because of a flaw in the thinking of the Western mind that has very much affected our scientific world view: that our biology is actually a weakness, a flaw, “sinful” according to religious thought; that intellect divorced from body (this idea goes all the way back to Plato) is superior to our physicality. THIS is their greatest error. To put it in simple, metaphysical (or even psychological) parlance: the computer has no soul. Why? Because it is not a biological life form, but an artificial construct made by man to imitate a biological life form.

    Someday, we may make the mistake of giving machines “human rights” just because some scientist was foolish enough to give the machine a life-like human body and face. What you will see before you is not a being, but something akin to a sociopathic intelligent mannequin that could easily be programmed to smile or cry sad tears while blowing you away with a gun. Some people would rather create a fake mate that does everything you want it to than learn to love a real one. We would be better served to learn to love our fellow humans than to try creating a simulation of one that will do everything we say. Make fantastic machines! By all means, do. But don’t make fake humans or ones that run human affairs. We will be sorry. The choice IS ours, because that’s part of being human. At least it still is for now.

    • VovixLDR

      It seems you miss a point. Processing power DOES matter, because it gives a far more effective background for growing intelligence. Currently we have only human intelligence growing on a biological background. Biology is slow; it is a product of a billion years of evolution and has a lot of legacies. It’s like comparing a bird to an aircraft in flight. Sure, aircraft do not behave like birds because we don’t design them that way, but we already have robots emulating animals. And when we start emulating biological intelligence on a faster hardware background without the biological legacy, it can grow both in size (far more than any human brain) and in speed. There will be no biological skull limiting the brain, no DNA switches that turn off the multiplying of neurons and/or make them degrade; there will be just growth in power by all means. Including the human brain-like structures that make our consciousness, wisdom, etc., what you call “soul”, but with much more capability. So why wouldn’t such improved conscious beings deserve human rights? Moreover, to catch up with them, we ourselves shall have to ensure the extension of our own human rights, to upgrade our minds in particular.
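
      The speed gap being pointed to here is easy to put rough numbers on (ballpark figures commonly cited for neurons and commodity hardware; the exact values are my assumption, not from the comment):

      ```python
      # Back-of-envelope comparison of biological vs. electronic signalling.
      neuron_peak_hz = 200      # a neuron fires at most ~200 times per second
      cpu_clock_hz = 2e9        # a commodity CPU clocks at ~2 GHz
      axon_speed_m_s = 120      # fast myelinated axons conduct at ~120 m/s
      wire_speed_m_s = 2e8      # electronic signals travel at roughly 2/3 c

      print(f"switching:  ~{cpu_clock_hz / neuron_peak_hz:,.0f}x faster")    # ~10,000,000x
      print(f"signalling: ~{wire_speed_m_s / axon_speed_m_s:,.0f}x faster")  # ~1,700,000x
      ```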

    • Johns688

      @robertszeles:disqus
      Well said.
      Furthermore, the irony of those who believe AI will one day equal the human mind (thoughts and emotions) is that they fail to recognize the science which shows why it will prove forever impossible.

      Quite simply, every last bit of stuff in our heads – every last particle – is now known to be both particle and something else that is immeasurable: wave.

      The particle, every particle, is bathed in a field (wave) that contains infinite (immeasurable) possibilities.

      You won’t ever replicate that field, because it (nonlocally) interconnects with the environment in ways that won’t be reduced to some equation on a piece of paper. Some might have heard of the Uncertainty Principle, or as it was originally known, the Principle of Indeterminism.
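
      For reference, the standard form of the principle being invoked bounds the product of the statistical spreads of position and momentum (with ħ the reduced Planck constant):

      $$\sigma_x \, \sigma_p \ \ge \ \frac{\hbar}{2}$$

      It is a statement about the statistics of repeated measurements on identically prepared quantum states.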

      It is that indeterminism, written into the substrate of physicality, that won’t be modelled or simulated.

      Of course, that doesn’t mean there won’t be spectacular advances in AI, but feelings, emotions? No, won’t happen.

  • Robert Szeles

    Also, I disagree with some of the comments about future shock (that bringing people from the past becomes more and more shocking to them the closer you get to the present, due to the ever-increasing speed of technological advances). Despite the scientific/historical propaganda, human cultural evolution does not go in a straight, ever-rising line. It goes up and down (and may again if we have a major power failure on the Earth for some reason). We would be just as shocked to go BACKWARD in time to the height of the ancient Egyptian kingdom and see technology and culture that was in ways more advanced than things we have now (we still don’t really have the technology and know-how to build one of the great pyramids—until the 1970s, we didn’t even have a crane capable of lifting some of the rocks on the site). A clueless percent spending all their time on their little cellphone would be stunned speechless for weeks. Someone from the Middle Ages, which was far less advanced, actually might die of shock.

    And, if you brought someone from the 1960s to the present, they would be impressed with some things, but after a few days they would realize, “Oh, things were cooler back in the 60s.” Just sayin’.

  • Jim Hawtree

    Something’s been bothering me for a while with the AI and super AI thing. I’ve programmed a bunch of computers, and my initial high expectations evaporated fast once I saw the limitations of digital computers. It may seem to be reasonable at first that digital intelligence is equivalent to human intelligence. But, I think we got a problem that is equivalent to the ultraviolet catastrophe of classical physics that was discovered around 1900; only worse.

    Classical physics (as contrasted with QM or Quantum Mechanics) is the warm and cuddly familiar laws of physics that seem to work just fine in the world. You’d think that all we had to do was to make more sensitive equipment, and we could measure the physical world as accurately as we want. The mind was therefore nothing more than a byproduct of impersonal, exact physical laws. But then, someone noticed that if atoms obeyed these classical laws, they would give off a burst of lethal ultraviolet radiation and drop to the ground state and stay there; that’s not healthy; it’s worse than unhealthy, it’s downright boring and no way to have a universe full of unpredictable life. Stuff happens in QM experiments that can be described by equations, and a lot of these equations are not that difficult. The problem is that these accurate equations describe things that totally defeat our expectations and our ability to find a ‘rational’ mechanism. QM has been called, for good reason, ’spooky’ and ‘profoundly disturbing’. Every attempt to find a reasonable mechanism, even a plausible mechanism that hasn’t been found yet, fails in certain QM experiments. Moreover, very recent experiments have shown that the conscious intentions of the experimenter after all data are recorded, can determine the outcome of certain experiments (e.g. the decision to save a disk containing ‘which-path’ information while destroying a blank disk, or vice versa); this can’t happen in a world governed by impersonal laws of classical physics.

    In other words, it’s possible to design QM experiments whose results after all data are gathered, depend on the mental state and desires and choices of the experimenter; this is a fundamental and unavoidable feature of our universe. And theory predicts that QM phenomena can occur over any distance instantaneously, even across the diameter of the galaxy.

    Admittedly I haven’t exhaustively examined quantum computing, but it is sometimes described as a faster way of doing boolean digital computations. If that’s so, then the problem is unchanged, and the ultraviolet catastrophe of 1900 CE has become the (boolean) digital catastrophe of 2015 CE.

    Here’s the catastrophe:
    1) AI is a strictly deterministic process with no limitations on accuracy nor on measurements.
    2) Data flow in a computer hosting an AI is limited by the speed of light, and by local realism.
    3) Calling AI “intelligence” is begging the question; AI can’t tell Stupidity from Intelligence.
    4) Feelings and consciousness are physical, bodily responses, not digital computations.
    5) An AI can crash with a few bits flipped, as happens with random cosmic rays (see the sketch just after this list).
    6) An AI can’t feel that one string of zeros and ones is better than any other string.
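
    Point 5 is easy to make concrete. A minimal sketch (mine, for illustration only) of how a single flipped bit in a 64-bit float changes a value drastically, the way a cosmic-ray upset might:

    ```python
    # One flipped exponent bit in an IEEE-754 double (illustrative only).
    import struct

    x = 1.0
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]  # raw 64-bit pattern
    flipped = bits ^ (1 << 61)                           # flip one exponent bit
    y = struct.unpack("<d", struct.pack("<Q", flipped))[0]
    print(x, "->", y)  # 1.0 -> ~7.5e-155
    ```

    (In practice, systems counter this with ECC memory and redundancy, so it is a reliability problem rather than an absolute barrier.)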

    AI is classical. It’s a tool that’s useful, but that’s it. Consciousness is the reaction of our body muscles and neurotransmitters. This liveware is neither hardware nor software. Most of our thoughts produce bodily reactions that we feel. Sever the connection between the intellect and the body, and you end up with the random, meaningless impulses of a numbed out zombie. Even plain sensory deprivation is a stress that may be interesting at first, but soon turns to increasing distress.

    The bizarre nature of QM can affect our perceptions beyond anything logical or rational; that’s why it is so difficult to understand. But computers with AI programs have very strict classical limitations.

    3) is a real winner; a computer program doesn’t give a crap whether it is taking over the world, or shutting itself down for a century, or trying to stack up a million random pebbles on a beach. All it knows is zeros and ones. What a computer does can be stupid, or intelligent, but only as a tool for an intelligent being.

    A metal grid can sift pebbles from sand a thousand times faster than a human. Some powerful politicians and military leaders cannot read without reading glasses. But neither a very large metal grid nor any number of reading glasses will rise up to control Earth. They simply enhance our physical perceptions. They do nothing by themselves. A computer simply enhances our ability to do rigidly defined computations, just as reading glasses enhance our ability to see small details.

    Any formal system such as a computer program, even when we think it is debugged, still has limits it cannot surpass because of the incompleteness theorems (Kurt Gödel, 1931). Even if it didn’t crash on its own, it wouldn’t know if it was being a ditz; nor would it care. It is incapable of being ‘conscious’ of that fact.

    I guess what really bothers me about ‘AI’ is that it appears to be an empty promise of supremacy and glory; an illusion to mislead and distract us from some serious issues. Starting in the 1970s when I graduated, and even before, I studied this thing and kept it up for almost ten years. But the closer I looked, the more I found it to be nothing more than smoke and mirrors. It took me about 20 years to discover what consciousness really was, and how that was different from intelligence. I was intending to make this post a lot shorter, but then I realized that you might do well to consider that supremacy, hierarchies, pride, competition, and AI are, taken together, the most serious Great Filter of the technology-savvy. Yeah, there are ways of increasing intelligence and consciousness without drugs or AI. It takes humility, and grinding away at some ‘outside’ help until they cave in and show you; you wouldn’t believe how we’ve been dumbed down.

    • sammyo

      You mention “very recent experiments have shown that the conscious intentions of the experimenter after all data are recorded, can determine the outcome of certain experiments”; I was wondering if you have a source for this? I’m quite new to QM and still struggling to grasp the basic concepts, so any information is welcome.

  • Jibin

    This shows how screwed we all are. In the end, federal governments or superpowers will gain more intelligent AI than other AI and will f us all in the ass. It will be a battle between countries for the greatest AI, and war will break out. We will be slaves to our countries until then.

    I personally believe, it is our duty as humans to stop pursuing advancement in technology but rather reverse the process to simpler times or remain where we are. War will break out, and it will be the worst of all. If anything were to cause the apocalypse, it could be this.

    I’m young, but I wish I was not born in this generation. It’s sad to think I might not have my whole life ahead of me. We are so dependent as technology is everything to us now. If anyone can see where I’m coming from, they know the human race is fucked if they value technology over everything else.

    • VovixLDR

      Wars are just caused by stupid people who think they could slow down human progress or even reverse it. No reactionaries (from old monarchies to Hitler to Al Qaeda to ISIS) = no wars.

      > I personally believe, it is our duty as humans to stop pursuing advancement in technology but rather reverse the process to simpler times or remain where we are.

      If you “remained where you were”, you’d just be dead. “We are so dependent as technology” because it improves our life and makes it possible at all. Technology is what allows us to be humans and improve our humanity (save more lives, prolong them, improve our education and, yes, our biosphere too). It’s low-tech cavemen who made mammoths, saber-tooth tigers, etc. extinct. And technology is the only human way to get over our dependence on fossil fuels and to find the way to catch all that CO2 back from the atmosphere. The other way is total war, because without modern tech our planet’s carrying capacity would fall to several million at most, not billions. So we do value technology, because it’s our only reasonable means of salvation. Not even the technology we have today, but the very process of accelerated innovation. Invent or die.

    • VovixLDR

      > I’m young, but I wish I was not born in this generation. It’s sad to think I might not have my whole life ahead of me.

      You will have a very, very long life ahead of you (hundreds or thousands of years at least, on average) if you throw away those Luddite fallacies and pay attention to the scientific and technological potential of life extension. Besides, it’s going to solve the very social contradictions that make you believe in apocalyptic BS.

    • Daniel

      You are a computer, my friend, just an outdated and slow one compared to what we have now and in the near future. Your brain hasn’t received a significant upgrade for thousands of years. Now look at your smartphone; even it gets better treatment than your brain.

      We will achieve AI, but NOT because of humility; it will be greed and survival instinct. Governments will have an arms race to AI (10 bucks on the USA getting there first), and then we will hope that the coin doesn’t land on the side where it says “everyone must die” in the AI’s decision tree.

      Maybe you can stop this by being a hero and chasing the “bad scientists” who make this, and then we will make a movie about you on how you saved the world.

  • raadim

    I disagree with “But 2008 to 2015 has been less groundbreaking.” It has actually been even more groundbreaking than 1995 to 2007. Take a guy from 2005 with his super duper smart Nokia, bring him to 2015, give him an iPhone 6 Plus with Spotify and Google Maps on it, and he’ll see progress he’s never seen before.

    • Some Body

      I don’t think the iPhone 6 plus has on it many things that hadn’t been invented by 2007…

      • iliafrag

        Most of these things, though, were not on phones. Also, you should compare it by how much faster and better it does the things that the first iPhone could do. Compare that to the difference between 2007 and 2015 guns or even cars. Also, watch how Steve Jobs presents wifi just 16 years ago and people cheer like it’s magic alien technology: https://www.youtube.com/watch?v=HFngngjy4fk

  • keinsignal

    The fatal flaw in these singularity arguments is in thinking human intelligence is just a matter of hitting a number. The idea that intelligence can be charted at all, in fact, seems to me pretty suspect. What, exactly, is your yardstick here? IQ isn’t completely meaningless, but what it mostly seems to mean is “the ability to do well on IQ tests”… And not for nothing, but most of those tests focus on exactly the sort of memorization and computation tasks that computers are *already* better than us at.

    I will always remember the time I was invited to the house of the uncle of a friend of mine, an experimental physicist at Princeton, a man who had done significant, knowledge-advancing work on quantum physics, a guy who worked with particle accelerators on a regular basis… The entire dinner conversation revolved around the time he tried to fix the dishwasher and flooded out the whole kitchen.

    My point is that intelligence, artificial or otherwise, is a complex, hairy, and multi-faceted beast, not a one-dimensional statistic. It is a set of QUALITIES, not a single QUANTITY where an overall increase in capacity automatically leads to expansion in every capability.

    So when I see a line like “If our meager brains were able to invent wifi, then something 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world”, I kind of have to laugh. There’s a whole lot of messy stuff in the middle that’s getting skipped over here – like the underpants gnomes, or that Far Side cartoon where part 2 of a mathematical proof is “and then a miracle occurs” – and the people making these arguments don’t even seem to notice.

    • Some Body

      Hear hear! And I would add (also in a separate comment shortly) that the same goes for the notion of “progress”, as used in the introduction here.

    • Daniel

      everything is measurable and quantifiable.
      Human memory, arithmetic efficiency, and processing speed are just too damn shitty for the future.

  • I pray scientists invent EMPATHY in the architecture. Invent or die.

  • Some Body

    I must say that the introduction to this piece is infuriatingly sloppy, and tosses out ideas that don’t make even minimal sense if you spend more than a second examining the assumptions behind them. The whole rate-of-progress argument gets it wrong in so many places, you don’t know where to start. Nevertheless, here’s a very partial list of bloopers and misconceptions.

    1. The argument is circular. If we assume progress advances exponentially (whatever that’s supposed to mean), then we must reach the conclusion that progress advances exponentially. If we assume another shape for the curve (of the infinite possible options), then the conclusion does not follow. Waving hands and using prophetically enthusiastic language does not give you knowledge of how events will develop in the future. (A toy illustration of this point follows this comment.)

    2. That Mr. Kurzweil said something ain’t meanin’ it’s true. Just sayin’. (Also, his record, impressive as it is in some fields, is described in Part II with more than a few exaggerations).

    3. Perhaps the greatest howler: How exactly do you measure progress? The entire introduction treats progress as if it were a number, but there is no indication of how exactly such a number is to be calculated. And that’s not surprising, because “progress” is a very tricky and culture-specific concept. We don’t have a clear definition for it, let alone an unambiguous measure.

    4. To be sure, there are economic indices that may be used as a proxy value for technological progress. Most economists would use productivity as such an index (though an indirect one). But then, if you look at the actual productivity data for Western economies over the last couple of centuries, the exponential progress thesis is easily refuted. It may not seem like it to some Facebook-absorbed people out there, but productivity growth has been considerably slower in “developed” countries since the 1970s than it has been in the previous 150 years. The railway, motor vehicle, flight, electricity and industrial mass production have been, in economic terms, at least, a much more significant leap forward for humanity than the Internet and the cellphone. If progress is measurable at all, our measures tell us it is stalling, and has been for more than a generation.

    5. Then there is one alternative measure for progress proposed by Tim himself: his imaginary Die Progress Unit. There are people living in US cities today, who spent their childhood as members of hunter-gatherer tribes in the Amazon region, and who still go home (to the tribe) for vacations occasionally. They are alive and well, thank you.

    6. Oh, and just to put the record straight, 1985 was a time with personal computers, cell-phones, and even some rudimentary forms of the Internet. I’m old enough to remember that first-hand, and I’m only 40.

    So, this is just a very partial list. Having read this introduction, I really didn’t feel like reading the rest of the post (which I did nevertheless; it wasn’t quite as preposterous, but was still full of conceptual holes, ideas not thought through thoroughly [sic!] enough, and blind faith in the opinions of a bunch of self-aggrandizing AI “specialists”, at the expense of people doing serious science. Ah, well…)
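
    The circularity in point 1 can be made concrete with a toy illustration (mine, with arbitrary parameters, not the commenter’s): an exponential and a logistic S-curve can agree almost exactly on early data and then diverge wildly, so the data alone do not pick the curve; the assumed shape does.

    ```python
    # Two curves that nearly agree early on, then diverge (arbitrary parameters).
    import math

    def exponential(t, a=1.0, k=0.5):
        return a * math.exp(k * t)

    def logistic(t, ceiling=100.0, k=0.5, t0=9.2):
        return ceiling / (1.0 + math.exp(-k * (t - t0)))

    for t in (0, 2, 4, 20):
        print(f"t={t:2d}  exponential={exponential(t):9.1f}  logistic={logistic(t):5.1f}")
    # t=0..4: the two are nearly indistinguishable (~1.0, ~2.7, ~7 for both);
    # t=20: exponential ~22026 vs. logistic ~99.5; the assumed shape decides.
    ```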

  • ZygmuntZ

    The author mentions the S-curve, but in my opinion gets it wrong. It’s not going to get steeper; it’s going to get flatter. Look at the image of old predictions for aviation in Simon Sotak’s comment.
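
    For reference, the S-curve in question is usually written as a logistic function (a standard textbook form, the same shape used in the numeric sketch a few comments above):

    $$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

    For t well below t0 this behaves like a pure exponential, (L e^{-k t_0}) e^{kt}; for t well above t0 it flattens toward the ceiling L, which is the “gets flatter” behavior the comment predicts.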

    My thoughts on AI:
    http://fastml.com/what-you-wanted-to-know-about-ai/
    http://fastml.com/what-you-wanted-to-know-about-ai-part-ii/
