On the Topic of Mind Uploading…


If you read my previous post on sci-fi books, you’ll know that lately, the topic of mind uploading, particularly as it relates to the technological singularity, has been present in much of my reading material. While the concept seems plausible, at least in a sci-fi sort of way (the Matrix being a case in point), most people scoff at the idea of it really happening, and openly laugh at the suggestion of it happening in the next 50 years.

I tend to be on the other side of this particular fence. I think that it is not only likely, but almost a certainty in the next 50 years. So, I decided to post up an exploration and perform some calculations to figure out if it is plausible. First though, let me explain mind uploading and get some basic prerequisites out of the way.

With mind uploading, the basic idea is that due to Moore’s law and miniaturization, computers will soon be powerful enough to rival the human brain in sheer processing power. Once computing systems reach this level of miniaturization and efficiency, the theory is that they will be able to simulate human brains either at a virtualized level (“pretend” brains based on a simplified simulation of the underlying physics of reality) or at a fully simulated level (brains where every possible event, down to the quantum level, is fully simulated).

The first thing I had to tackle when thinking about this was determining the estimated processing capacity of the brain. From there, it should be relatively straightforward to determine how close we are.

Luckily, many scientists have already pondered this exact question and arrived at an answer. Based on our current understanding of how the brain works (which I will tackle a little later on), the expected total processing power is in the range of 100,000,000 MIPS (10^14 instructions per second). This is helpful, but only somewhat, because computational speeds are generally measured in FLOPS (Floating-Point Operations Per Second) rather than MIPS (Millions of Instructions Per Second). Both of these are somewhat subjective, however, so a little work needs to be done to convert between them. Let me dig into this a bit.

For MIPS, the atomic unit is an Instruction, which is pretty darn flexible. For example, an instruction could be writing data from one part of a chip to another (which is not a calculation at all), it could be doing very simple math (2 + 3), or it could be calculating the precise X/Y/Z position of an object moving in 3-D space (in the case of a very specialized DSP). What an instruction actually represents depends on the chip’s core instruction set, which means MIPS is only a valid comparison between chips with identical instruction sets.

A FLOP, on the other hand, has as its atomic unit a Floating-Point Operation. A floating-point operation is much more standardized: it is a calculation using a floating-point number (such as 2.56 × 10^47). This is a pretty good indicator of the raw processing ability of a digital computer, but it still doesn’t account for some general-purpose tasks. Luckily, in most modern general-purpose processors, MIPS and FLOPS tend to run pretty much neck and neck.

In any event, if we convert the brain’s processing power to FLOPS, we end up with 100 TeraFLOPS (10^14 FLOPS), and we have a number we can compare things to. It also turns out that, based on this number, there are a number of computer systems today that are powerful enough to simulate the brain. For example, a Cray XT3 is almost identically powerful, and it’s a relatively old supercomputer (2004). The most powerful supercomputers today are approximately 1,000 times as powerful.
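
To make that conversion concrete, here is a minimal Python sketch using the figures above, assuming the rough one-FLOP-per-instruction parity described earlier (the constant names are mine, purely for illustration):

```python
# Back-of-envelope conversion of the brain's estimated MIPS figure into FLOPS,
# assuming roughly one floating-point operation per instruction, as discussed above.

BRAIN_MIPS = 100_000_000                 # 10^8 MIPS, the estimate cited above

instructions_per_second = BRAIN_MIPS * 1_000_000   # MIPS -> instructions per second
flops = instructions_per_second                     # ~1 FLOP per instruction (assumption)

print(f"Estimated brain throughput: {flops:.1e} FLOPS (~{flops / 1e12:.0f} TeraFLOPS)")
# -> Estimated brain throughput: 1.0e+14 FLOPS (~100 TeraFLOPS)
```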

So, if we can already (theoretically anyhow) simulate a brain, why haven’t we? Well, as the good people working on the Blue Brain project can attest, it’s not enough just to have the processing power. You also have to have an understanding of how the brain really works, and build software to emulate that. Power without direction is, at best, useless.

Also, I speculate that we will find, as we dig into this, that the human brain is vastly more complex than we have given it credit for. In fact, I suspect that the brain is largely a quantum computer, primarily aimed at calculating things that would be non-computable functions on conventional computers, and that it has only rudimentary analog conventional computing capabilities.

I am definitely not the first to have this idea, and research is still ongoing in this direction, but I believe that describing the brain as a quantum computer clears up a lot of contradictions inherent when you describe it as a classical computer.

For example, classical or conventional computers, whether they be a handheld calculator or a supercomputer, are all exceptionally good at math. The slowest general purpose computer you can find today can calculate pi out to around a million digits faster than you can read this sentence. Even the most gifted human on the planet cannot begin to compete with the speed of a computer even one one-thousandth as powerful in overall “brain-power” for general number crunching.

On the other hand, conventional computers are generally horrible at estimations. This is the nature of the computing platform. To estimate, it has to use an algorithm to approximate “fuzzy” logic and come up with a rough number. A human, on the other hand, can “eyeball” something and estimate it, often with a fairly high degree of accuracy, with very little effort.

Some scientists completely disagree with this notion. They state that the problem is one of software: human brains are good at probabilistic logic and bad at traditional or “crisp” logic simply because that is what our brains are wired, or programmed, to be good at. This is possible as well, but it is a much more complex answer. I think the simpler answer is that the brain is a different kind of computer altogether, one that expresses states as a range of possibilities rather than as absolutes.

Furthermore, the brain does this automatically, estimating states without any conscious effort. In fact, it takes conscious effort to nail an estimate down to a solid number. In our minds, we can very easily compare things (such as two glasses with different amounts of water) by unconsciously estimating, without ever arriving at a value for each glass. If we are asked to give a percentage of fullness for each glass, that takes effort, but the comparison itself is effortless.

In a computer, you must first estimate each value, which is a huge undertaking when given only visual evidence, and then compare the values. The whole process takes an enormous amount of computing power, but humans do it without a thought. To me, this points to a very large disparity, and one that is more easily explained by differences in platform than by differences in software.
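
As a toy illustration of that disparity (entirely hypothetical numbers, just to make the point concrete), a quick Python sketch shows that ranking two noisy "glances" is far more forgiving than producing an accurate absolute value from one of them:

```python
# Toy illustration (hypothetical numbers): ranking two noisy quantities is far more
# forgiving than pinning down either quantity's absolute value from the same data.
import random

random.seed(1)

def glance(true_fill, noise=0.15):
    """One imprecise 'eyeball' reading of a glass's fill level (0.0 to 1.0)."""
    return true_fill + random.uniform(-noise, noise)

glass_a, glass_b = 0.70, 0.45    # true fill levels, unknown to the observer

# A single noisy glance at each glass almost always ranks them correctly...
trials = 10_000
correct = sum(glance(glass_a) > glance(glass_b) for _ in range(trials))
print(f"Correct 'which is fuller?' judgements: {correct / trials:.0%}")

# ...yet any single glance can misjudge a glass's absolute fill by up to 15 points.
print(f"One glance at glass A: {glance(glass_a):.2f} (true value: {glass_a})")
```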

Ultimately though, with a sufficiently powerful computer, all of this would be moot. We could simply simulate the underlying physics of the universe for the space a human brain occupies, without even having to know how the brain works. We would just need an unerringly accurate snapshot of the brain down to the quantum level. This, however, presents a completely different problem, as Heisenberg’s uncertainty principle states that we cannot know both the exact position and momentum of any single particle. Still, perhaps using “default” values for the position and momentum of the underlying particles would suffice, as long as the major structures are accurate. It is hard to know without trying.

So, assuming that we can either figure out the underlying “software” of the brain, or we can get around Heisenberg’s uncertainty principle and create a molecular model of the brain, can we potentially simulate a brain with silicon in a smaller, more efficient space than the actual “meat brain” we were born with?

To answer this question, the first question I had to ask myself was: Based on the physical laws as we understand them, what are the computational limits of matter in our universe? This question is actually relatively easy to answer, because someone has already answered it. Bremermann’s limit and the Bekenstein bound determine the maximum computational power of a self-contained computer with a given mass and the maximum uncompressed information storage capacity of an area of space, respectively.

Bremermann’s limit states that the maximum information processing ability of a gram of matter is roughly 2.56 × 10^47 bits per second. Now this is an interesting (and very large) number, but, unfortunately, it isn’t very useful. As we mentioned previously, FLOPS is really what we need in order to compare computational capability, so that is what we need to convert to. In order to convert this number to FLOPS, we first need to determine how many bits are involved in each floating-point operation.

Assuming 32-bit floating-point numbers (32 bits per FLOP), Bremermann’s limit for a 1-gram processor is about 2.56 PetaFLOPS (2.56 × 10^15 FLOPS). That’s a much smaller (and more useful) number. You can roughly estimate that a self-contained computer of this power would be about the size of a single cubic centimeter. For reference, a computer half this powerful today (the Cray Jaguar) takes up over 340,000,000 cubic centimeters.
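
A quick sketch of what that density gap looks like, using only the figures quoted in this post (the per-gram FLOPS value and the Cray Jaguar volume):

```python
# Comparing computational density: the post's 2.56 PetaFLOPS-per-gram (~per cubic cm)
# Bremermann figure versus the Cray Jaguar figures quoted above.

LIMIT_FLOPS_PER_CM3 = 2.56e15          # 2.56 PetaFLOPS in roughly 1 cubic centimeter
JAGUAR_FLOPS = 2.56e15 / 2             # "half this powerful", per the post
JAGUAR_VOLUME_CM3 = 340_000_000        # rough volume quoted above

jaguar_density = JAGUAR_FLOPS / JAGUAR_VOLUME_CM3
print(f"Cray Jaguar:        {jaguar_density:.1e} FLOPS per cubic cm")
print(f"Bremermann machine: {LIMIT_FLOPS_PER_CM3:.1e} FLOPS per cubic cm")
print(f"Density gap:        ~{LIMIT_FLOPS_PER_CM3 / jaguar_density:.1e}x")
# -> a gap of roughly 7e+08, i.e. hundreds of millions of times denser
```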

To reach Bremermann’s limit using Moore’s law (doubling the processing capacity we can fit into a given space every 18 months) will take about 42 years. That assumes we can maintain Moore’s law for that long, of course, which is unlikely due to diminishing returns. Still, the possibility is definitely there.
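
That 42-year figure comes from repeatedly halving the volume gap, as laid out in the data section at the end of this post; a small sketch of the same arithmetic:

```python
# Reproducing the 42-year estimate: halve the ~340,000,000 cubic cm machine once per
# 18-month Moore's-law cycle until it fits in roughly 1 cubic cm.
import math

current_volume_cm3 = 340_000_000     # Cray Jaguar volume quoted above
months_per_halving = 18              # Moore's-law doubling period used in this post

# log2(340,000,000) is ~28.3, and the post stops once the volume drops to ~1.2 cm^3,
# i.e. after 28 halvings.
halvings = int(math.log2(current_volume_cm3))
months = halvings * months_per_halving
print(f"{halvings} halvings -> {months} months (~{months // 12} years)")
# -> 28 halvings -> 504 months (~42 years)
```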

At that point, we should theoretically be able to make something with a mass of 1 nanogram (roughly the size of a cubic nanometer, or half the diameter of a DNA helix) that can process at a rate of 2.56 MegaFLOPS (2,560,000 FLOPS). To put this in perspective, we should be able to create a molecular-sized computer that can process as fast as an Intel Core 2 Duo. You will literally be able to fit more processing power than today’s most powerful supercomputer into the lint on your clothing. Even if we do not reach Bremermann’s limit in 42 years, we should still be able to put a useful amount of computing power into dust-sized processors in the near future.

So, now that we understand Bremermann’s limit, will we be able to simulate a human brain in less space than a real “meat brain” occupies? That question takes a little more work.

First, remember that we essentially have two major ways of simulating a brain: we can either emulate it (i.e., create software that operates on the same principles and runs on a computer system powerful enough to host a pseudo-brain), or fully simulate the physics involved and use that, along with a particle-level map of the brain, to simulate it completely.

If we go the emulation route, we need a computer system with enough power to model the brain (roughly 100 TeraFLOPS) plus a little extra for overhead (a low-level OS and some type of virtualization/emulation engine). So, figure about 120 TeraFLOPS.

We also need to consider storage (memory). The brain has roughly the equivalent of 100 TB (100 TeraBytes, or roughly 100,000 GB) of storage. Again, add ~20% for overhead, so we actually need about 120 TB. So, for the emulation route, we need to pack 120 TeraFLOPS of processing power and 120 TB of storage into 1.5 kg of mass.
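
For reference, a tiny sketch of that budget (the 20% overhead is the rough allowance described above, not a measured value):

```python
# The emulation budget: brain-level capacity plus the rough ~20% overhead allowance
# for a low-level OS and the virtualization/emulation engine.

BRAIN_FLOPS = 100e12      # ~100 TeraFLOPS, the estimate from earlier in the post
BRAIN_STORAGE_TB = 100    # ~100 TB equivalent storage
OVERHEAD = 0.20           # rough allowance, not a measured figure

print(f"Compute budget: {BRAIN_FLOPS * (1 + OVERHEAD) / 1e12:.0f} TeraFLOPS")  # 120
print(f"Storage budget: {BRAIN_STORAGE_TB * (1 + OVERHEAD):.0f} TB")           # 120
```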

Can we build a system of this power that has an equal mass? Well, my calculations on Bremermann’s limit already answered this question: yes. Furthermore, creating a system with the required processing power that fits in roughly half the space of the brain (~600 cubic cm) should take us around 10.5 years. So that part is definitely feasible, assuming we can figure out the “software” aspect, and assuming that our brains really do roughly resemble classical computers.
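
The 10.5-year figure follows the same halving logic as before, starting from the processor volume listed in the data section; a sketch:

```python
# The ~10.5-year estimate: shrink today's ~89,012 cubic cm of processors (see the data
# section) down to ~600 cubic cm, halving the volume once every 18 months.
import math

todays_volume_cm3 = 89_012     # total volume of the required processors, per the data section
target_volume_cm3 = 600        # roughly half the brain's ~1200 cubic cm
months_per_halving = 18

halvings = int(math.log2(todays_volume_cm3 / target_volume_cm3))   # ~7 halvings
months = halvings * months_per_halving
print(f"{halvings} halvings -> {months} months (~{months / 12:.1f} years)")
# -> 7 halvings -> 126 months (~10.5 years)
```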

For the storage, I had to reuse the Bekenstein bound. The Bekenstein bound describes the maximum amount of information that can be stored in a given amount of energy and space. For a space of 600 cubic cm (a sphere with a radius of 5.232 cm) and a mass of 0.75 kg, this works out to 1.01 × 10^42 bits (roughly 1.26 × 10^29 TB). This is much larger than what we need, so it’s definitely possible. However, how long will it take?
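
Here is a small sketch of that calculation, using the general form of the Bekenstein bound quoted in the data section and treating the 600 cubic cm as a sphere:

```python
# Bekenstein bound for the 600 cubic cm / 0.75 kg storage volume, using the general
# form quoted in the data section: bits ~ 2.5769087e43 * mass(kg) * radius(m).
import math

BEKENSTEIN_COEFF = 2.5769087e43           # bits per (kilogram * meter)

volume_cm3 = 600.0
mass_kg = 0.75
radius_m = ((3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)) / 100   # sphere radius in meters

max_bits = BEKENSTEIN_COEFF * mass_kg * radius_m
print(f"Radius:            {radius_m * 100:.3f} cm")          # ~5.232 cm
print(f"Bekenstein bound:  {max_bits:.2e} bits")              # ~1.01e+42 bits
print(f"                   {max_bits / 8 / 1e12:.2e} TB")     # ~1.26e+29 TB
```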

Luckily, storage density has been increasing at a rate that’s actually faster than Moore’s law, and there are current techniques (such as electron quantum holography) that demonstrate it is possible to store up to 3 EB (3,000,000 TB) in a square inch. So this part is actually possible today, and in a space considerably smaller than the 600 cubic cm we allocated for it, meaning we could devote more of that space to the processing unit.
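
Just to show how little area that density implies for our 120 TB budget (a rough sketch using the figures above):

```python
# Area needed for the 120 TB storage budget at the ~3 EB-per-square-inch density
# the post cites for electron quantum holography.

STORAGE_NEEDED_TB = 120
DENSITY_TB_PER_SQ_INCH = 3_000_000     # 3 EB = 3,000,000 TB per square inch

area = STORAGE_NEEDED_TB / DENSITY_TB_PER_SQ_INCH
print(f"Area required: {area:.1e} square inches")   # 4.0e-05 square inches
```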

Ultimately, it seems brain emulation in a human brain-sized package should be feasible, at least theoretically, within 5-10 years. That’s pretty encouraging news, but we can’t lose sight of the likelihood that we really don’t understand how the brain operates well enough to emulate it with any degree of success.

If we go with the particle-level simulation, however, we don’t have to understand how the brain works. We simply have to understand how the underlying physics works (which we presumably have a decent grasp on) and have an extremely detailed map or scan of the brain (which we do not have and may prove to be an impossible challenge).

The other downside of a full simulation is that it requires much more in the way of processing and storage resources, and it is ultimately impossible to do in less space than the brain itself already occupies. This conclusion also comes from the Bekenstein bound. In addition to describing the maximum amount of information you can store in a given space, the Bekenstein bound can be used to describe the maximum amount of information needed to perfectly describe (down to the quantum level) a physical system. Since you would need a chunk of mass at least as large as the human brain just to store the information contained in the sub-atomic particles that make up the brain, you cannot possibly fully simulate the brain in less space.

This makes some sense; after all, the universe is already simulating itself as fast as it possibly can. Put another way, you cannot simulate the universe from inside the universe at a faster rate than the universe is currently running, with the possible sole exception of a simulation running from inside the event horizon of a singularity. In fact, a simulated reality must, at least to some degree, run slower than actual reality. However, this ultimately may not matter, since within a simulated reality, time would be effectively meaningless.

Even if the simulation can only run a brain at 50% of the speed of the real thing, we gain a lot of functionality that simply doesn’t exist in our meat brains. For example:

  • Death can become a thing of the past
  • We can “back up” our brains, saving distinct states to restore to (effectively giving us “undo” points)
  • We can run multiple copies of ourselves, spawning new copies to do things like explore space
  • We can forego the “meatbody” support apparatus, and go with a slimmer power source (such as solar or nuclear power) that can run for years
  • We can do a lot of things non-destructively (since we can back up and spawn new copies) that we can currently only do destructively, like severing connections in the brain to try to understand what they do
  • We can, once we learn enough about how the brain works, easily expand the capacity of our minds by adding more computing resources

This, of course, is just the tip of the iceberg, but it’s still pretty enticing.

So, ultimately, do I think the technological singularity in the shape of mind uploading is possible? Yes, assuming we can cross a few major hurdles. The hardware is already at a point where it is feasible, at least from an emulation standpoint – we just have to get the software in place. Do I think it is likely to happen in my lifetime? Hard to say, but I am hopeful. One thing’s for sure – the next 50 years are going to be very interesting.

DATA USED FOR ANALYSIS:

Bremermann’s limit: 2.56 × 10^47 bits per second per gram (3.2 × 10^46 bytes) (4 × 10^39 MIPS) (2.56 × 10^15 FLOPS)

1000 Core i7’s needed to emulate brain (virtual simulation)

≈5.9 × 10^32 Core i7’s needed to simulate brain (full simulation, assuming 5 FLOPS per instruction, 1 instruction per bit per second on average)

Time to gram-sized emulation of brain: 15 years

Time to gram-sized simulation of brain: 160.5 years – Impossible based on Bremermann’s limit

Time to Bremermann’s limit: 504 months (42 years); calculated by repeatedly halving 340,000,000 until the result drops to roughly 1.2, which takes 28 halvings; 28 × 18 months (the Moore’s-law doubling period) = 504 months

Number of Cray XT5 cabinets needed to emulate brain: 17.45

Number of AMD Opteron processors per cabinet: 187

Rough processor dimensions (including ceramic slug): 34 mm x 35 mm x 4 mm (4,760 mm^3)

Total volume of necessary processors: 89,012 cm^3

Time to ~600 cm^3 emulation of brain: 10.5 years

Bekenstein bound: Human brain: 3.01505 × 10^32 GB (3.01505 × 10^41 bytes / 2.41204 × 10^42 bits)

Bekenstein bound: General: 2.5769087 × 10^43 × (mass in kilograms) × (radius in meters) bits

Bekenstein bound: 1-gram object: 2.5769087 × 10^38 bits (32,211,358,750,000,000,000,000,000 TB)

Brain weight: 1.5 kg

Brain volume: ~1200 cm^3

Brain processing power: 100,000,000 MIPS (10^14 instructions per second)

Brain memory: 100 TB (10^5 GB)
