MARCELO GLEISER: If the brain is essentially a machine, a device that can capture information about the world and process this information into action, we may wonder if it is possible to construct an artificial brain, an artificial intelligence, or AI. After all, we can model the brain as having hardware, that is, the neurons and the synapses that connect them, and software, even though we don't quite know what the software is. We understand that this software must be expressed in terms of the firing of neurons and the flow of biochemicals in the brain, but we don't know how it works. Given that we live in the age of computers and information processing, it is natural to speculate that we can model the brain as a computer, as a device that has both hardware and software, even if this picture may be simplistic. Of course, as we have seen before, we really don't understand the nature of consciousness or how the brain creates a sense of self. But many scientists believe that we can create a simulation of the brain so complex that it will somehow become conscious of itself. Nobody knows if this is possible or not. But many people are trying. Historically, humans have always been fascinated with automata, with robots that obey mechanical laws and behave kind of like us. The difference is that now those machines would also have some kind of brain, and would behave as close to humans as we can make them. Alan Turing, the mathematician who formulated the halting problem that we examined before, created something called the Turing test, a method to determine whether a machine can mimic humans in a convincing way. In a Turing test, a human sits in front of a computer screen typing questions to an unseen interlocutor. From the answers, the person must determine whether they are talking to a human or a computer. 
Although some computer programs have fooled human judges before, it is fair to say that we don't have anything that even comes close to a thinking machine. But what about computers that have beaten the best humans in games? For example, IBM's Deep Blue computer that beat the world chess champion, Garry Kasparov, or IBM's Watson computer that beat Jeopardy champions? Or, more recently, Google DeepMind's program that beat one of the world's best Go players. Are these machines intelligent? They are not. They are still running programs, using their enormous processing speeds and rapid access to gigantic databases to beat humans in games. DeepMind's Go-playing program apparently developed its own strategies as it played along, using machine learning techniques, programs that become smarter as they are used. Even so, it is still far from being a thinking machine, a machine that not only follows instructions, but actually creates new instructions, new knowledge, and has a sense of self-awareness. The question, then, is whether inventing a thinking machine is even possible. Is it a matter of time? Or are there fundamental obstacles to creating an artificial intelligence? The answer depends on who you talk to. Optimists like inventor Ray Kurzweil and many others believe it's simply a matter of time: soon enough, perhaps by the year 2040 or so, the processing power of computers will be so enormous, and the sophistication of our programs so amazing, that they will have an intelligence vastly superior to our own. This is sometimes called the singularity, a point in history where machines become more intelligent than humans. Such a possibility brings out all kinds of nightmarish visions, like the old Frankenstein story. Will we create a monster more powerful than we are, capable of destroying us? Philosopher Nick Bostrom from Oxford University, and others, have alerted us to the dangers of artificial intelligence. 
Some even say it will be the last invention that we make, because once machines become intelligent, we become obsolete as a species. Bostrom and others are investigating ways by which we can guarantee that these machines will not destroy us, but will work with us to improve our quality of life and to help solve the world's biggest problems. There is another way, though, to think about our future. It's not necessarily about a machine that we build out there, but about the way that we are already becoming machines. Think of your cell phone, and how desperate you are if you forget it at home or if you lose it. You feel lost and disconnected from the world, as if a piece of you is missing. In a very real sense, smartphones are an extension of ourselves, part of who we are. Our apps are like fingerprints, unique to each of us, extensions of our bodies. The tendency is that this kind of amplification of our human abilities through interaction with machines will grow more and more, to the point that, 30 or 40 years from now, we are going to be very different creatures from what we are today: hybrids between biological matter and digital circuits, just like in sci-fi movies. Kurzweil and others believe that this is our future, that we will eventually become pure information, information that can be transferred from machine to machine in a kind of immortal existence. Clearly, if this is our future, we should be thinking about it, and wondering if this is the way we want to go. What will these creatures of the future be like? What conception of reality will they have? Well, one thing is certain: as has happened in the past, our worldview is rapidly changing, and what we call reality today, and how we fit in the world, is going to be very different from our conception of reality tomorrow. 
Still, the laws of nature will continue to apply in the same way, and even if these creatures of the future are vastly superior to us, they will still have to obey the laws of nature, and explore reality with their tools to extract incomplete information about the world. Their island of knowledge will be bigger than ours, but it will still be surrounded by an ocean of the unknown, and by unknowables that will surely mystify them as much as they mystify us.