Interview | Dr. Ben Goertzel
Artificial General Intelligence, Transhumanism and Open Source Transhumanism
Transhumanism Meets Design Conference
New York City | 14-15 May 2011
Dr. Ben Goertzel is Chairman of Humanity+; CEO of AI software company Novamente LLC and bioinformatics company Biomind LLC; leader of the open-source OpenCog Artificial General Intelligence (AGI) software project; Chief Technology Officer of biopharma firm Genescient Corp.; Director of Engineering of digital media firm Vzillion Inc.; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence Conference Series. His research encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. Dr. Goertzel has published a dozen scientific books, 100+ technical papers, numerous journalistic articles, and the futurist treatise A Cosmist Manifesto. Before entering the software industry he served as university faculty in several departments of mathematics, computer science and cognitive science in the US, Australia and New Zealand.
Dr. Goertzel spoke with Critical Thought’s Stuart Mason Dambrot following his talk at the recent 2011 Transhumanism Meets Design Conference in New York City. His presentation, Designing Minds and Worlds, asked and answered two key questions: How can we design a world (virtual or physical) so that it supports ongoing learning, growth and ethical behavior? And how can we design a mind so that it takes advantage of the affordances its world offers? These are fundamental issues that bridge AI, robotics, cyborgics, virtual world and game design, sociology, psychology and other areas. His talk addressed them from a cognitive systems theory perspective and discussed how they’re concretely being confronted in his current work applying the OpenCog Artificial General Intelligence system to control game characters in virtual worlds.
SM Dambrot: We’re here with Dr. Ben Goertzel, CEO of Novamente, leader of OpenCog and Chairman of Humanity+ [at the 2011 Humanity+ Transhumanism Meets Design Conference in New York City]. Thank you so much for your time.
Dr. Goertzel: It’s great to be here.
SM Dambrot: In your very interesting talk yesterday, you spoke about the importance of the relationship between minds and worlds. Could you please expound on that a bit in terms of Artificial General Intelligence?
Dr. Goertzel: As an AGI developer this is a very practical issue, which initially presents itself in a mundane form – but many subtle philosophical and conceptual problems are lurking there. From the beginning, when you’re building an AGI system you need that system to do something – and most of AI history is about building AI systems to solve very particular problems, like planning and scheduling in a military context, finding documents online in a Google context, playing chess, and so forth. In these cases you’re taking a very specific environment – a specific set of stimuli – and some very specific tasks, and customizing an AI system to do those tasks in that environment, all of which is quite precisely defined. When you start thinking about AGI – Artificial General Intelligence, in the sense of human-level AI – you not only need to think about a broader set of cognitive processes and structures inside the AI’s mind, you need to think about a broader set of tasks and environments for the AI system to deal with.
In the ideal case, one could approach human-level AGI by placing a humanoid robot capable of doing everything a human body can do in the everyday human world, and then the environment is taken care of – but that’s not the situation we’re confronted with right now. Our current robots are not very competent when compared with the human body. They’re better in some ways – such as withstanding extremes of weather that we can’t – but by and large they can’t move around as freely, they can’t grasp things and manipulate objects as well, and so on. Moreover, if you look at the alternatives – such as implementing complex objects and environments in virtual and game worlds – you encounter a lot of limitations as well.
You can also look at types of environments that are very different from the kinds of environments in which humans are embedded. For example, the Internet is a kind of environment that is immense and has many aspects that the everyday pre-Internet human environment doesn’t have: billions of text documents, data from weather satellites, millions of webcams…but when you have a world for the AI that’s so different from what we humans ordinarily perceive, you start to question whether an AI modeled on human cognitive architecture is really suited for that sort of environment.
Initially the matter of environments and tasks may seem like a trivial issue – it may seem that the real problem is creating the artificial mind, and then when that’s done, there’s the small problem of making the mind do something in some environment. However, the world – the environment and the set of tasks that the AI will do – is very tightly coupled with what is going on inside the AI system. I therefore think you have to look at both minds and worlds together.
SM Dambrot: What you’ve just said about minds and worlds reminds me of two things. One is the way living systems evolved – that is, species evolve not in a null context but rather, as you so well put it, tightly coupled to, in this case, an environmental niche; every creature’s sensory apparatus is tuned to that niche, so the mind and world co-evolve. The other is what you mentioned yesterday when discussing virtual and game worlds – that physics engines are not being used in all interactive situations – which leads me to ask what you think will happen once true AGIs are embodied.
Dr. Goertzel: If we want to, we can make the boundary between the virtual and physical worlds pretty thin. Roboticists work mostly in robot simulators, and a good robot simulator can simulate a great deal of what the robot confronts in the real world. There isn’t a good robot simulator for walking out in the field with birds flying overhead, the wind, the rain, and so forth – but if you’re talking about what occurs within someone’s house, a lot can be accomplished.
It’s interesting to see what robot simulators can and can’t do. If we were trying to simulate the interior of a kitchen, for example, a robot simulator can deal with the physics of chairs and tables, pots and pans, the oven door, and so forth. Current virtual worlds don’t do that particularly well because they only use a physics engine for a certain class of interactions, and generally not for agent-object or agent-agent interactions – but these are just conventional simplifications made for the sake of efficiency, and can be overcome fairly straightforwardly if one wants to expend the computational resources on simulating those details of the environment.
If you took the best current robot simulators, most of which are open source, and integrated them with a virtual world, then you could build a very cool massively multiplayer robot simulator. The reason this hasn’t happened so far is simply that businesses and research funding agencies aren’t interested in it. I’ve thought a bit about how to motivate work in that regard. One idea is to design a video game that requires physics – for example, a robot wars game in which players build robots from spare parts, and the robots do battle. You could also make the robots intelligent and bring some AI into it, which if done correctly would lead to the development of an appropriate cognitive infrastructure.
Having said that, going back to the kitchen – what would current robot simulators not be able to handle, but would have to be newly programmed? Dirt on the kitchen floor, so that in some areas you could slip more than others; baking, so that when you mix flour and sugar and put the mixture in the oven the right chemistry happens – which is beyond what any current physics engine can really do; paper burning in the flame of a gas stove; and so on. The open question is how important these bits and pieces of everyday human life are to the development of an intelligence.
There’s a lot of richness in the everyday human world that little kids are fascinated by – fire, cooking, little animals – because this is part of the environmental niche that humans adapted to. Even the best robot simulators don’t have that much richness, so I think that it’s an interesting area to explore. I think we should push simulators as far as we can, create robot simulators with virtual worlds, and so forth – but at the same time I’m interested in proceeding with robotics as well because there’s a lot of richness in the real world and we don’t yet know how to simulate it.
The other thing you have to be careful of is that most of the work done with robots now completely ignores all this richness – and I’m as guilty of that as anybody. When we use robots in our lab in China, do we let the robots roam free in the lab? Not currently. We made a little fenced-off area, we put some toys in it, and we made sure the lighting is OK, because the robots we’re using (Aldebaran Nao robots) cost $15,000 and have a tendency to fall down. It’s annoying when they break – you have to send them back to France for repair.
So, given the realities of current robot technology we tend to keep the robots in a simplified environment both for their protection, and so that their sensation and actuation will work better. They work, they’re cool, and they pick up certain objects well – but not most of those in everyday human life. When we fill the robot lab only with objects they can pick up, we’re eliminating a lot of the richness and flexibility a small child has.
SM Dambrot: This raises two more questions: Is cultural specificity required for any given AGI, and is it necessary to imbue an AGI with a sense of curiosity?
Dr. Goertzel: Our fascination with fire is an interesting example. You wonder to what extent it’s driven by pure curiosity versus our actual evolutionary history with fire – something that’s been going on for millions of years. I think our genome is programmed with reactions to many things in our everyday environment which drive curiosity – and fire and cooking are two interesting examples.
Having said that, yes, curiosity is one of the base motivators. We’re already using that fact in our OpenCog work. One of the top-level demands, as we call them, of our system is the ability to experience novelty, to discover new things. There are two such demands: discovering new things in the external world, and having the experience of learning new things internally – novelty can come through either external or internal discovery. So we’ve already programmed things very similar to curiosity as top-level goals of the system. Otherwise you could end up with a boring system that just wanted all of its basic needs gratified, and would then sit there with nothing to do.
SM Dambrot: That’s very interesting – especially the internal novelty drive. That seems even more exciting in terms of any type of AGI analogue to human intelligence, because we spend so much time discovering ideas internally.
Dr. Goertzel: Some people more than others – it’s cultural to some extent. I think we as Westerners spend more time intellectually introspecting than do people from Eastern cultures. Being from a Jewish background, I grew up in a culture particularly inclined towards intellectual introspection and meta-meta-meta thinking.
On a technical level, what we’ve done to inculcate the OpenCog system with a drive for internal novelty, internal learning and curiosity is actually very simple: It’s based on information theory and is related to work by Jürgen Schmidhuber and others on the mathematical formulation of surprise. In an information-theoretic sense, OpenCog is always trying to surprise itself.
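To make the information-theoretic idea concrete: in Schmidhuber’s formulation, the surprise of an observation can be measured as its negative log-probability under the agent’s current predictive model. Below is a minimal sketch of that measurement in Python, assuming a simple frequency-based model; the class name and the Laplace smoothing are illustrative choices, not OpenCog’s actual code.

```python
import math
from collections import Counter

class SurpriseTracker:
    """Toy information-theoretic surprise: -log2 p(observation)
    under a running frequency model of everything seen so far."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def surprise(self, observation):
        # Laplace smoothing keeps the surprise of unseen events finite
        p = (self.counts[observation] + 1) / (self.total + len(self.counts) + 1)
        return -math.log2(p)

    def observe(self, observation):
        s = self.surprise(observation)
        self.counts[observation] += 1
        self.total += 1
        return s

tracker = SurpriseTracker()
for obs in ["red", "red", "red", "blue"]:
    print(obs, round(tracker.observe(obs), 2))  # the novel "blue" scores highest
```

A curiosity-driven agent would then favor actions whose outcomes score high on this measure – or, in Schmidhuber’s refinement, actions that most improve the model’s ability to compress its history.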
SM Dambrot: I recall that when Prof. Schmidhuber was discussing Recurrent Neural Networks at Singularity Summit ’09, he talked about how the system looks for that type of novelty in its bit configurations.
Dr. Goertzel: That’s right – and what we do with OpenCog is quite similar to that. These are ideas that I encountered in the 1980s in the domain of music theory, based on Leonard Meyer’s Emotion and Meaning in Music. He was analyzing classical music – Bach, Mozart and so forth – and the idea he came up with was that aesthetically good music is all about the surprising fulfillment of expectations, which I thought was an interesting phrase. Now, if something is just surprising it’s too random, and some modern music can be like that – modern classical music in particular. If something is just predictable – pop music is often like that, and some classical music seems like that – it’s boring. The best music shows you something new yet it still fulfills the theme in a way that you didn’t quite expect to be fulfilled – so it’s even better than if it just fulfilled the theme.
I think that’s an important aesthetic in human psychology, and if you look at the goal system of a system like OpenCog, the system is seeking surprise but it also gets some reward from having its expectations fulfilled. If it can do both of those at once then it’s getting many of its demands fulfilled at the same time, so in principle it should be aesthetically satisfied by the same sorts of things that people are.
This is all at a very vague level, because I don’t think that surprise and fulfillment of expectations are the ultimate equation of aesthetics, music theory or anything else. It’s an interesting guide, though, and it’s interesting to see that the same principles seem to hold up for human aesthetics in quite refined domains, and also for guiding the motivations of very simple AI systems in video game-type worlds.
SM Dambrot: I’ve been wondering about materials and the structure of those materials. Do you think it’s important or even necessary to have something patterned on our neocortical structure – neurons, axons, synapses, propagation – in order to really emulate our cognitive behavior, or is that not so relevant?
Dr. Goertzel: The first thing I would say is that in my own primary work right now with OpenCog, I’m not trying to emulate human cognition in any detail, so for what I’m trying to do – which is just to make a system that’s as smart as a human in vaguely the same sorts of ways that humans are, and then ultimately capable of going beyond human intelligence – I’m almost sure that it’s not necessary to emulate the cognitive structure of human beings. Now, if you ask a different question – let’s say I really want to simulate Ben Goertzel and make a robot Ben Goertzel that really acts, thinks, and hopefully feels like the real Ben Goertzel – to do that is a different proposition, and it’s less clear to me how far down one needs to go in terms of emulating neural structure and dynamics.
In principle, of course, one could simulate all the molecules and atoms in my brain in some kind of computer, be it a classical or quantum computer – so you wouldn’t actually need to get wet and sticky. On the other hand, if you need to go to a really low level of detail, the simulation might be so consumptive of computing power that you might be better off getting wet and sticky with some type of nanobiotech. When you talk about mind uploading, I don’t think we know yet how micro or nano we need to get in order to really emulate the mind of a particular person – but I see that as a somewhat separate project from AGI, where we’re trying to create human-like, human-level intelligence that is not an upload of any particular person. Of course, if you could upload a person, that would be one path to a human-level AGI…it’s just that it’s not the path I’m pursuing now – not because it’s uninteresting, but because I don’t know how to progress directly and rapidly on it right now.
I think I know how to build a human-level thinking machine…I could be wrong, but at least I have a detailed plan, and I think if you follow this plan for, let’s say, a decade, you’d get there. In the case of mind uploading, it seems there’s a large bottleneck of information capture – we don’t currently have the brain scanning methods capable of capturing the structure of an individual human brain with high spatial and temporal accuracy at the same time, and because of that we don’t have the data to experiment with. So if I were going to work on mind uploading, I’d start by trying to design better methods of scanning the brain – which is interesting but not what I’ve chosen to focus on.
SM Dambrot: Regarding uploading, then, how far down do you feel we might have to go? Is imaging a certain level of structure sufficient? Do we have to capture quantum spin states? I ask because Max More mentioned random quantum tunneling in his talk, suggesting that quantum events may be a factor in cryogenically-preserved neocortical tissue.
Dr. Goertzel: I’m almost certain that going down to the level of neurons, synapses and neurotransmitter concentrations will be enough to make a mind upload. When you look at what we know from neuroscience so far – such as what sorts of neurons are activated during different sorts of memories, the impact that neurotransmitter levels have on thought, and the whole area of cognitive neuroscience – I think there’s a pretty strong case that neurons and glia, the molecules intervening in interactions between these cells, and other things on this level are good enough to emulate thought without having to go down to the level of quarks and gluons, or even (as Dr. Stuart Hameroff suggests) the level of the neuron’s microtubular cytoskeletal structure. I wouldn’t say that I know that for certain, but it would be my guess.
From the perspective of cryogenic preservation, you might as well cover all bases and preserve things so well that even if our current theories of neuroscience and physics turn out to be wrong, you can still revive the person. So from Max More’s perspective as CEO of Alcor, I think he’s right – you need to preserve as much as you can, so as not to make any assumptions that might prevent you from reviving someone.
SM Dambrot: Like capturing a photograph in RAW image format…
Dr. Goertzel: Yes – you want to save more pixels than you’ll ever need just in case. But from the viewpoint of guiding scientific research, I think it’s a fair assumption that the levels currently looked at in cognitive neuroscience are good enough.
SM Dambrot: What’s your take on the Blue Brain Project? They’ve apparently emulated a cat’s neocortical structure and announced that their goal is to emulate a human neocortex within, at this point, roughly eight years.
Dr. Goertzel: This is a long and complex story regarding a number of fascinating simulations done on IBM supercomputers. If you look at what Henry Markram did in simulating a cortical column in the Blue Brain project, that was very interesting from a number of standpoints – yet in some ways it didn’t do everything some people think it did. In simulating that column, Markram had to dig deeply into the equations of the flow of charge along a single neuron – and he actually published some really cool papers in Biological Cybernetics about adjusting those equations based on the measurements he and his team made. On the other hand, when you look at what the actual simulation he ran was, you can see that they did not actually simulate the precise input/output behavior of the cortical column.
What you’d like to see ideally is a simulation where, if you feed some input into the column and get some output from the column, you see exact agreement with what you’d get from a real cortical column. They didn’t do that; what they did do was create a simulated column that statistically had the same input/output properties as a real column. That’s worthwhile and interesting, but it’s not uploading a cortical column. Since we don’t know the information coding of the column’s inputs and outputs, we don’t really know if we’ve captured everything that’s there. Imagine simulating my input/output properties as a language user in this way: from the statistical standpoint of acoustic analysis the simulation would look like it had the same input/output properties as I do, yet it would be missing the information content.
Now, the cat brain that you mention was actually Dharmendra Modha’s work. It was a totally different project, based on IBM hardware that was the next generation from what Markram used. They simulated a neural network similar in size and connection complexity to a cat’s brain. However, the pattern of connections was random – not derived from study of the cat brain – and it didn’t go down to the level of neurotransmitter concentrations either. It was a wonderful hardware demonstration of building a formalized neural network of that huge size, but it didn’t have the same dynamics or structures as a cat brain, because we don’t know what those are.
As it happens, Modha’s team at IBM has done some other work aimed at understanding those structures, and published quite an interesting paper on the structure of the monkey brain in which they curated thousands of neuroscience papers and charted which regions of the monkey brain connected to other regions, trying to parse the connection structure just on a region-to-region level. There are hundreds of brain regions and hundreds of thousands of papers on how they’re connected. Also, they were the first to sort through all the different nomenclatures and sub-literatures in the world to create a coherent database of the connections between different parts of the monkey brain.
So that’s interesting, and eventually if you bring that kind of connectivity diagram together with the kind of simulation that they did, potentially you could get a large-scale simulation with more of the same structures and dynamics as a real animal’s brain – but they haven’t gotten there yet.
Open Connectome is another interesting project, at Johns Hopkins University, to mention in that regard. It’s at a somewhat earlier stage than what Modha’s team did with the monkey brain, but it’s all open source. Their scientists upload connectivity data from different parts of the brain, and make open-source tools where anyone can go online and help map out neurons, synapses and what’s connecting to what in the data – and this could produce a much more fine-grained map of the connectivity structure. If something like that succeeds, then you could really make a large-scale brain simulation that does what the brain does – which is something that neither Markram nor Modha did in their simulations.
SM Dambrot: That kind of open-source project would have a significant benefit to a wide community of neuroscientists.
Dr. Goertzel: Yes – they want to go Web 2.0 with it: They want not only to have scientists upload their data, but also to have people from around the world log on and help interpret the data. It’s interesting – there are some image processing tasks that people are good at but computers aren’t that good at. For example, with three-dimensional imaging data – the type of data that the Johns Hopkins researchers have uploaded – people can look at it and see, “yes, there’s a neuron there, and it’s pointing to another neuron over here.” Current image processing tools, however, are quite weak with 3D data.
So right now, there’s a role for people to look at this 3D data and see what’s connected to what. Once AI is a little further advanced at 3D image processing tasks, the role of people will shift to correcting the AI’s mistakes, and then ultimately the AI could obsolete people – in part by leveraging the training data obtained from people’s image classification judgments made by using the Open Connectome web interface.
SM Dambrot: Would you consider this the next step in the progression of distributed processing – SETI@home, Folding@home, and so on?
Dr. Goertzel: In a sense – but those use home computing power to do number crunching, whereas Open Connectome uses human brain power. It would be interesting if you could take a page from the Google Image Labeler that Luis von Ahn created at Carnegie Mellon University – he made labeling images online into a game, to make it fun for people to provide textual labels for images, but it’s a game with a purpose: the labeling then serves as AI training data. It’s not exactly Name the Neuron – the point is not to label a neuron but rather to identify it and where it’s connecting – but I think it could be approached in a similar way.
SM Dambrot: Another interesting topic from your talk yesterday was the use of virtual and gaming worlds to provide an AI with a space to explore – specifically the block world.
Dr. Goertzel: In the AI project I’m currently doing with Hong Kong Polytechnic University (PolyU), the basic goal is to demonstrate OpenCog doing something in a videogame world which will be interesting to the game industry. At the end of this two-year project, which is jointly funded by the Hong Kong government and my company Novamente LLC, we want to create an OpenCog agent in a game through a partnership with a game company – to both generate money for ongoing research, and establish a way to set the AI up in communication with potentially millions of people around the world who would be the AI’s teachers.
Then the question becomes: What type of game world should we use for our current prototype experiments? We’ve done some work before using a game platform called Multiverse, in which the actor is a virtual dog that learns tricks – which is interesting as a platform for imitation and reinforcement learning, but it’s limited. We wanted something with more versatility, but not so much that it would confuse our early-stage AI.
An AGI Preschool is a cool idea. I want to do it, but it’s a bit much for right now – less in terms of the AI, which could probably handle it, than in terms of resources for game development. In a preschool you have a lot of things that are hard to simulate in a video game – a sandbox and Play-Doh, for example – so we settled on a game world modeled on the video game Minecraft, because it’s relatively simple from a game development perspective yet provides a lot of flexibility in terms of the AI interacting with the world. In Minecraft, everything in the world is made of small blocks, which can be used to build anything – a ladder, a tower, or even a statue that looks like oneself. There’s a lot of opportunity for flexibility and creativity, but because everything is made out of blocks you don’t have to deal with scripting sand and other difficult objects, and you don’t have to do as much artwork and animation.
In short, we made this decision both to simplify the AI’s job in terms of perception and action, so it could focus more on cognition, learning, planning and construction, and to simplify game world construction – a world made of blocks is basically Democritus’s model of the cosmos, on a larger scale. Still, there are various decisions to make – in the physics of the game world, for example, you can build a very narrow tower of blocks and gravity doesn’t make it fall down.
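To make the design concrete: a world in which everything is a typed block reduces perception and action to reading and writing cells of a sparse voxel grid. The sketch below is illustrative only – hypothetical names, not the actual OpenCog or game code – and includes the naive gravity check that, as noted above, the game world’s physics omits.

```python
class BlockWorld:
    """Toy Minecraft-style world: a sparse grid of typed blocks."""

    def __init__(self):
        self.blocks = {}  # (x, y, z) -> block type

    def place(self, x, y, z, block_type):
        self.blocks[(x, y, z)] = block_type

    def remove(self, x, y, z):
        self.blocks.pop((x, y, z), None)

    def block_at(self, x, y, z):
        return self.blocks.get((x, y, z), "air")

    def is_supported(self, x, y, z):
        # Rests on the ground, or on another block directly beneath it
        return z == 0 or (x, y, z - 1) in self.blocks

world = BlockWorld()
for z in range(5):  # the "very narrow tower" mentioned above
    world.place(0, 0, z, "stone")
print(all(world.is_supported(0, 0, z) for z in range(5)))  # True
```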
SM Dambrot: Adding realistic physics would give you the best of both: you’d have real-world constraints coupled with the simplicity of using repetitive units to construct objects.
Dr. Goertzel: That’s right. And of course, in terms of transfer to a physical robot, you can give that robot blocks to play with in the robot lab. It transitions fairly well into building with wooden blocks, Lego blocks and so on. This natural transition path for the game world into robotics will probably be done in the Hong Kong project, which is focused on game AI.
SM Dambrot: You also discussed various types of memory in human cognition. Does AI memory conform to these?
Dr. Goertzel: Overall, my approach to AI is not based on neuroscience, primarily because I don’t think we know enough about neuroscience to drive AI design – and the neuroscientists I talk to tell me the same thing. It is inspired by cognitive psychology to a significant extent. The different types of memory I used to design OpenCog are pretty well established in cognitive psychology, in the sense that we seem to have different mechanisms, with different response-time characteristics, for, say, procedural knowledge versus semantic knowledge. If you dig into the neuroscience, there are many distinctions between these types of memory, in that various parts of the brain are differentially active during different types of memory.
For example, there’s evidence that the cerebellum is involved during action sequences – the basal ganglia also come into it – even when those sequences don’t involve motor action. In spatial knowledge, there are complex interactions between the posterior parietal cortex, hippocampus, entorhinal cortex, and so forth. We’re not at the stage where neuroscientists have a clear picture of how each of the different types of memory is implemented. So clearly the same biochemical and cellular mechanisms underlie the different kinds of memory in the brain, and there’s much overlap in terms of the brain regions and dynamics involved, alongside significant differences in which brain regions and which neurotransmitters come into play. The details are still unfolding.
If you look at what you can do on a computational neuroscience level now, you can do things like build a model of the hippocampus and medial temporal lobe, connect it to your model of the parietal cortex, and study how that implements spatial memory. The hippocampus and medial temporal lobe tend to deal more with allocentric coordinates (such as third-person top-down, or bird’s-eye, views), while the parietal cortex tends to handle first-person egocentric views, in both head-centered and eye-centered frames. Neuroscientists have different opinions about the brain’s coordination of these different perspectives – and I’ve been doing some consulting in this direction through Novamente. However, to me this is a different pursuit than trying to build a human-level thinking machine, because the neuroscience is just too diverse, particular and unfinished.
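The allocentric/egocentric distinction is easy to state computationally: an egocentric view is just world-frame data re-expressed relative to the agent’s own position and heading. Here is a minimal 2D sketch – purely illustrative, and in no way a model of the cortical computation itself:

```python
import math

def allocentric_to_egocentric(point, agent_pos, agent_heading):
    """Convert a world-frame (allocentric) 2D point into the agent's
    egocentric frame: translate to the agent's origin, then rotate
    by minus its heading (radians, 0 = facing along +x)."""
    dx = point[0] - agent_pos[0]
    dy = point[1] - agent_pos[1]
    cos_h, sin_h = math.cos(-agent_heading), math.sin(-agent_heading)
    return (dx * cos_h - dy * sin_h, dx * sin_h + dy * cos_h)

# An agent at (1, 1) facing +y (heading pi/2) sees a landmark at
# world point (1, 2) as straight ahead: roughly (1, 0) egocentrically.
print(allocentric_to_egocentric((1, 2), (1, 1), math.pi / 2))
```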
SM Dambrot: Especially given the idea that AGI is ideally substrate-independent.
Dr. Goertzel: Substrate independence is an interesting notion, and as a mathematician I would like to aspire to that – yet as an AGI designer I’m constantly pushed away from it. The OpenCog design now is not that substrate-independent – in fact, in many ways it’s customized to operation on a network of symmetric multiprocessor Von Neumann machines.
In the just-finished first draft of my new book Building Better Minds, the core mathematics is substrate-independent – for instance, it would work on a massively parallel MIMD machine, like the Connection Machine that Danny Hillis built at MIT starting in the 1980s – but on the other hand, there’s also a lot of content and code heavily tied to the particular hardware we’re currently using. For example, we have to write code to multithread among 16 processors (or however many processors our individual SMP machines have), and we will then have to write code to network many of these multiprocessor machines together. That has a lot of consequences – for example, if you’re running on 1,000 machines, each with 100GB of RAM, you have issues of how to dynamically and adaptively partition knowledge among those machines. How do your logical inference control and procedure learning mechanisms make use of this clustered structure of your knowledge base?
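For a flavor of the partitioning problem: the simplest scheme just hashes each knowledge item to a machine, which balances load but ignores the harder adaptive requirement that items used together in inference should live on the same machine. A toy sketch with hypothetical names – not OpenCog’s actual mechanism:

```python
import hashlib

class KnowledgePartitioner:
    """Toy static partitioner: assign each knowledge atom to one of
    N machines by hashing its ID. The adaptive problem described
    above -- co-locating atoms that are used together -- is much harder."""

    def __init__(self, machines):
        self.machines = machines

    def machine_for(self, atom_id):
        digest = hashlib.sha256(atom_id.encode()).hexdigest()
        return self.machines[int(digest, 16) % len(self.machines)]

cluster = KnowledgePartitioner([f"node-{i:03d}" for i in range(1000)])
print(cluster.machine_for("InheritanceLink(cat, animal)"))
```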
Once you go in that direction you’re adapting your systems to a network of symmetric multiprocessor machines, which is an infrastructure very different from a Connection Machine or a human brain – so if you gave us a Connection Machine with a trillion processors, we could port our mathematical algorithms, but much of the code would have to be rewritten, as would the intermediate layer of algorithms that we use as a “glue” between the mathematics and the hardware.
In short, efficiency leads you away from substrate independence, so as an AGI designer you want to formulate your core cognitive algorithms and structures in a substrate-independent way. At least that’s my approach. On the other hand, you could take a different view: If you’re less of a mathematician and more of an engineer or biologist, then your approach could be to grow a mind out of the substrate, which is what happens with the human brain – evolution didn’t start with abstract mathematics of thought that was then implemented on wetware.
SM Dambrot: This reminds me of our discussion a few minutes ago about the ways worlds and minds interact, in that the brain is tied in with the world in which it evolved.
Dr. Goertzel: The brain is part of the world – it’s made of the same stuff as the world around it. It’s more a matter of one part of the world co-evolving with another – and what we’re doing with AGI right now is engineering, not evolution.
A long time ago – before I started seriously working on AGI – I had the same thought many others have: Why not evolve a brain by implementing an artificial ecosystem across the Internet, setting some artificial chemistry and biology in motion, and letting the AGI emerge from the digital primordial soup? The obvious conclusion you come to after a while is: yes, that’s really cool – but an ecosystem has many more molecules than any one brain, so it’s going to require orders of magnitude more computing power than any individual brain does, and it’s probably not the best approach to take.
SM Dambrot: Since we’re at the Humanity+ Transhumanism Conference, my last question is about the connection between your work in AGI and Transhumanism.
Dr. Goertzel: From a certain standpoint, working on an AGI is a purely technical and engineering pursuit, which could be done by a small group – such as me and five or ten other guys locked in a basement somewhere, just coding our hearts out all day. On the other hand, that’s not really the way things are going – we’re developing our AGI in an open-source project with people around the world, trying to recruit new programmers, and with funding that so far has largely been for vertical-market applications, not just pure research. Therefore, in practice – since our development of AGI is distributed around the world and coupled with businesses, universities and various other entities – there’s been a fair amount of interoperation between the AGI outreach and the Transhumanism outreach that I’ve been doing.
As an example, our AGI project at Hong Kong Polytechnic University – where we’re developing OpenCog for video games – involves Gino Yu, who runs the lab, and who with me is also organizing the Humanity+ Hong Kong conference on December 3-4, 2011. Through that conference we’ll get Hong Kong technology and business people attending, potentially leading to connections for more OpenCog commercial projects or university collaborations, in turn potentially leading to funding that will feed OpenCog development.
There’s a lot of cross-pollination scientifically as well: The OpenCog work is integrating many different AI tools, one of which is machine learning – a particular AI discipline based on learning by example that could itself be integrated with probabilistic reasoning, analogical inference and generalization. I’m using machine learning in my bioinformatics work to analyze genetics data – and in that bioinformatics work I’m collaborating with Genescient, a company whose founding Chief Scientist was Michael Rose, whom I met at the Transhumanism-related Immortality Conference in 2005.
What I’d like to do in the next couple of years, among many other things, is to use OpenCog for the genetics work by pulling in probabilistic reasoning and concept learning, so that we’re not just doing machine learning but also some AGI-type cognition about that bioinformatics data. That would be a case of OpenCog integrating more advanced technology into a bioinformatics project for engineered life extension – a project founded through a connection made at another futurist conference. At the moment, it’s all one big social and intellectual network, rather than being siloed into AGI, Transhumanism, and so on.
To a large extent, that’s my own personal approach – there are certainly very solid AGI researchers who have no connection with the Transhumanist community, and of course there are Transhumanists thinking about AGI who have no connection with AGI research. I’m always interested in connecting things together – my main focus in life is making intellectual progress on scientific issues, but I spend a certain percentage of my time pulling people, social networks and ideas together, which I think is also valuable.
As a final example, at the AGI ’11 Conference – a technical AGI conference which will be held at the Google campus in Mountain View, California – we’ll have a Future of AGI Workshop before the conference, which should attract Transhumanists who wouldn’t necessarily attend the technical meeting. Pulling the community together like this can have a lot of impact – some Transhumanists may be involved in practical projects that could benefit from AGI technology; others, or their friends and associates, may have a technical background and so might want to get involved with AGI work; and of course meeting and talking with real AGI theorists may help them speculate about the future in ways that are better grounded than they might otherwise have been.
SM Dambrot: If you would, please take a final moment to give us additional details about the AGI and Transhumanist conferences later this year, as well as when we might expect your upcoming books.
Dr. Goertzel: AGI 2011, to be held in Mountain View on August 3-6, is in large part a technical and scientific conference for those involved in Artificial General Intelligence, but the pre-conference workshop, as well as the keynotes and demo sessions, will be interesting to everyone – so I encourage you to register soon, as attendance is capped at some 200 people due to the size of the venue at Google.
The Humanity+ @ Hong Kong Conference will be held on December 3-4, 2011, at Hong Kong Polytechnic University’s Chiang Chen Studio Theatre. It should be very interesting in terms of bringing in scientists and futurists from mainland China who don’t circulate much in the world-at-large or intersect with their Western counterparts – so I’m psyched about the cross-cultural admixture there.
In terms of my technical AGI book, Building Better Minds, its release date of course depends on the publisher, but my guess would be late 2011 or early 2012. I’m also working on an AGI trade book, tentatively titled Faster Than You Think, which should also come out in 2012.
SM Dambrot: Thank you so much, Dr. Goertzel.
Dr. Goertzel: Thank you for the interview.
Copyright 2011 PhysOrg.com. Originally published in two parts (Part 1 ~ Part 2)
