I think it remains to be seen — maybe we’ll understand better what taste is at the end of all of that.
VARUN MAYYA: Amazing. So you’re saying if you run a lot of experiments, you eventually develop a sense for what experiments are worth spending your time on. That’s amazing.
The Future of Medical AI
VARUN MAYYA: I have a question for my wife. She texted me on WhatsApp in the morning and she said you should ask this question, which is, we’ve seen AlphaFold — what is in the future of medical AI? What is something we can look forward to?
DEMIS HASSABIS: Well, AlphaFold, I think, was just the beginning. It was this 50-year grand challenge in science and in biology. Understanding the structure of a protein is incredibly useful for understanding disease and eventually developing drugs. But it’s only one small component of the drug discovery process. It’s an important component, but it’s a small component.
So we are trying to develop many other technologies adjacent to AlphaFold — Isomorphic mostly in the biochemistry and chemistry area — to develop the right compounds that will bind to the right part of the protein, but also other properties we care about, like the toxicity and the absorption properties of these compounds, to make sure that in the human body they do the right things and don’t have any side effects. And in some ways those things are even more complicated than the protein structure.
But we have a lot of belief that this is possible because of AlphaFold — a challenge that was thought to be almost impossible, and we were able to do it. So I do think these methods can scale to these very, very difficult problems. We have some very promising results at Isomorphic in developing these technologies further.
And I hope that eventually we’ll be able to bring down the drug discovery process — it takes on average about 10 years to come up with a new drug — by a factor of 10, to a matter of months, maybe even weeks. That could be possible. It sounds like science fiction today, but then so was finding the structures of all 200 million proteins known to science — and we’ve managed to fold and put out predictions for all of them now. That would have seemed impossible 10 years ago. So I think the same kind of thing will happen over the next decade with drug design.
AI and the Future of Game Development
VARUN MAYYA: I think this is a fascinating use case of AI, right — there is a person sitting on stage right now whose work is going to be used by so many people who are sick and need those drugs over time. So thank you so much for all your contributions.
I actually want to switch tracks here and want to talk about something totally different, which is a personal passion of mine, which is gaming. We’re working on a game — I would say we wanted to make a world class game, and we wanted to do it bootstrapped. But you’ve been a game developer, right? Game dev was one of the first hats you wore. You worked at Bullfrog back then.
The minute Genie came out and I saw it, I spent like five seconds looking at the screen and I was like, wait, I need to use this. And then I used it and I was like, okay, I can’t tell a story yet. How long do I have left? What’s the next three or four years of game development going to look like?
DEMIS HASSABIS: So I saw your game — it’s a great looking game, by the way. Thanks for sending me the video of it. I love game design and game development. That’s kind of how I started my professional career, but also my journey into AI.
When I was a teenager, I was working for Bullfrog, which at the time was probably Europe’s premier development house. They did some really creative simulation games where AI was a core part, like Theme Park. And actually, that’s when I decided, around about 16 years old, that AI was going to be my career, when I saw how much enjoyment people got from interacting with that game AI and the potential of that.
And I think it’s come full circle now. Games used to be the cutting edge of where technology was developed — graphics, AI, and also hardware like GPUs were of course invented for games, and now we use them for AI development. And now maybe AI has got good enough that it can help with game development, like you’re saying.
I think it’s going to help with many things. For example, creating assets and graphics, 3D models — the technologies are pretty good now and probably in the next year or two will be pretty amazing. From just concept art, it could probably create the 3D asset.
What I’m most excited about is the new genres of games that might be possible now that we have AI. For example, massive multiplayer online games populated with NPCs that are actually smart and can advance the storyline. I think there’ll also be very useful tools for bug testing and auto-balancing games.
But then you mentioned Genie. Genie 3 is our world model. What you’re able to do with that — for those of you not familiar, we just released a kind of beta version of it recently where you can just type in a prompt and you get a playable world. You can only play it for one minute and then it’s sort of like a dream and it disappears, because it can only stay coherent for a minute. But I think over the next four or five years we’ll be able to extend that time.
But as you say, that doesn’t necessarily make for a fun game yet. It’s like an interactive movie and it’s fascinating to try, but it still requires game design, game mechanics, and all of the amazing things that the games industry has built. So it may just facilitate faster prototyping and faster iteration of ideas. And then hopefully there’ll be a new golden era of game development, like it was when I was in games in the early 90s, where you could have small teams that could experiment with really creative ideas because it was fast and cheap enough to prototype and build those games. Hopefully these tools will allow us to do that again in the games industry.
The Making of a Polymath
VARUN MAYYA: Very cool. I have a question for both of you. This is a question about a difference between the average person I’ve spent time with and both of you, which is that both of you seem very cross functional. One day you could be playing chess, the next day making games, the next day you’re in life sciences, the next day you’re in AI.
And I think the word for this is polymath. It’s just very fascinating being around polymaths, because the range of conversations you can have with those people is so wide. How does one truly become a polymath? I know it’s a tough question. The answer might just be, “Hey, you’re born with it,” but I’m going to shoot my shot anyway.
The Polymath Mindset: Curiosity Across Disciplines
GOVINDAN RANGARAJAN: Well, maybe others may have a different phrase for it — jack of all trades and master of none. But still, I think it is just the basic curiosity. You’re curious about so many different disciplines, there are so many fascinating things to do that you just get into different areas. I think it is that basic curiosity which drives all this.
DEMIS HASSABIS: Yeah, so I thought a lot about this and I think for me at least, I’ve always had an insatiable curiosity from when I can remember. And even in my games career that happened. So I started playing chess for the England Junior Chess teams. But then I realized there were many other cool games out there like Go and poker and really interesting things as well, just beyond chess. And a lot of chess players just stay with chess and that’s all they play.
And so even in games I could feel myself drawn to — there are so many interesting things, as the professor says, that are interesting in the world. But there’s also another thing too, which is that I think a lot of the best inventions, especially in the modern era, will come at the intersection of two or more subjects. And you can think of DeepMind, when we started it, as a kind of combination of neuroscience, engineering and machine learning. It was sort of the intersection of all of that. And now, you look at Isomorphic — it’s an intersection of machine learning, chemistry and biology.
So I love those areas, and I think a lot of the fastest progress is still happening at those intersections now as well. I’d encourage you to become expert in two or more areas and then find the connections between them, but also the analogies between them. And there are a lot of interesting analogies when you look at things from a first principles point of view.
And then the other thing too is that I think I’ve just been drawn — my kind of favourite people from the past, my heroes, are kind of the polymaths really, like you said, like da Vinci or Aristotle, who I feel like didn’t really see the boundaries between — not just the sciences, but even art and science and philosophy. And I like that approach. I feel these are all about finding out about the world, but just using different techniques.
So in the end, if you’re curious about how the universe works, you should be curious about it from all these different viewpoints. And I suppose for me, building AI as this sort of ultimate tool for science and discovery — that’s kind of given me the excuse to learn about a lot of other subject areas, which I’ve loved doing, because we can apply AI to those areas.
The Problem with Siloed Learning in India
VARUN MAYYA: Professor, do you think they’re making a mistake in science in India by having too many siloed ways of learning? Because sometimes when I speak to people, they say, “Hey, I’m a mechanical engineer and there’s no way I’d be interested in any other type of engineering.” Do you feel that’s a mistake?
GOVINDAN RANGARAJAN: Yeah, I think it’s a mistake. I think probably the original mistake was made when we abandoned universities and started specialized research institutions — we are an example of that ourselves. We lost that crosstalk between different disciplines. And we have become so siloed — law is different, management is different, medicine is different. We are trying to remedy that by bringing medicine back here and things like that. But I think it’s a serious issue that India faces, and it’s going to become a bigger issue with AI coming in, when you really need this intersection of disciplines. So it’s going to be a problem.
DEMIS HASSABIS: Yeah, maybe I can just give a couple of pieces of advice or tips on how to do that. I think there are a couple of things you need. One reason it’s hard to be multidisciplinary is of course one has to be a world-leading expert in at least one domain. This is also why siloing has happened in departments, because you have to have that, otherwise you can’t contribute at the frontier of discovery.
But then what I’ve at least done — and I think everyone can do — is develop techniques to quickly learn to, maybe, a grad level in other subject areas. How do you transfer your own learning? And of course this is what we’re trying to do with AI systems, but you can also do it with your own mind — find those connection points, understand it from what you know from first principles, so you can quickly apply it very fast to a new area or new domain, at least to a sufficient level of understanding, so you can combine it with your expert area.
And I think the other reason I’ve seen in university systems that people don’t do this more is that it takes a little bit of humility — or maybe confidence and humility, both together — to become a beginner again in some other area when you’re already maybe a world expert in one area. Let’s say machine learning. And then, “Oh, I don’t know that much about biology. So I’m going to be willing to learn from the experts, start again, and be willing to put the effort in to do that learning.” I would encourage everyone to do that. It’s really worthwhile. But I think sometimes the academic system doesn’t reward that side of doing things.
Defining AGI: The Goalposts and the Real Test
VARUN MAYYA: Fantastic. I have a question on general intelligence. I’m sure everyone has this question — I’ve had it for a very long time, even before the entire AI wave. I grew up reading Asimov and I just said, “One day we’re going to have AGI.” But as I got older and I saw Gemini come out and a bunch of models come out, I said, “This feels like AGI.” But then the goalposts moved, and we said, “No, no, but it has to do this.”
So I made a joke out of it — my Twitter username is “Waiting for AGI” because it’s the kind of internal joke I have with myself. But a question to you, Demis — what’s the capability you would see where you would go, “That’s AGI”?
DEMIS HASSABIS: Yeah, I agree. Look, my definition of AGI has never changed. We’ve always defined it — and I’ve always defined it since I started working on this 20, 30 years ago — as a system that can exhibit all the cognitive capabilities humans can.
Now, why is that important? First of all, because the brain is the only existence proof we have — that we know of, maybe in the universe — of a general intelligence. That’s also partly why I studied neuroscience, because I wanted to understand the only data point that we have that this is possible, and understand that better. And so that’s the definition I use. It’s quite a high bar, because it means if you wanted to test the system against that, it would have to be capable of all the things humans can do with this brain architecture, which is incredibly flexible.
It’s clear today’s systems, although they’re very impressive and improving, don’t do a lot of those things. True creativity, continual learning, long-term planning — they’re not good at those things. Another thing that is missing is consistency across the board in capabilities. Of course, in some circumstances they can get gold medals on International Maths Olympiad questions, like we did last summer with our systems, but they can still fall over on relatively simple maths problems if you pose them in a certain way. That shouldn’t happen with a true general intelligence. It shouldn’t be a jagged intelligence like that.
So there’s still quite a lot of things missing. I think the kind of test I would be looking for is maybe training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity like Einstein did in 1915. That’s the kind of true test of whether we have a full AGI system. And I think we’re still a few years away from that, but I think that’s going to be possible eventually. It’s clear today’s systems couldn’t do that.
VARUN MAYYA: Professor, do you think about AGI?
GOVINDAN RANGARAJAN: Well, I’m just a consumer of AI right now, not an expert, so I’ll defer to Demis on this question. But the way it’s going, it’ll probably happen sometime. I think it’s enough of a useful tool right now that everybody should use it. We need not worry about AGI — that’s what I tell all the students and faculty. It’s a good enough tool for you to use right now and accelerate your research.
Balancing Commercial Pressure and Blue Sky Research
VARUN MAYYA: Interesting. Demis, how do you balance both? How do you balance the commercial pressure — this is Google, we have to make money — and also this is DeepMind, we have to do research? There are some short-term pressures and long-term pressures. I’d just like to know how you think about this.
DEMIS HASSABIS: Well, look, there are these competing pressures. The answer is we just do both to the maximum. And that’s one advantage we have with our size — we can explore both to the limit. So we have a large research team. I think we have the broadest and deepest research bench of any organization in the world. But we are also like the engine room of Google these days, and we have to support that too. That’s what in the end brings in the revenues and the money and the funds to do more research.
So we have to get that balance right. Roughly half my team work on those kinds of immediate priorities and support for those things — and that’s very exciting too, because building foundation models like Gemini is on the shortest path to AGI in my opinion. But then we have half the team who are doing the next frontier. It’s sort of my job as the leader of that organization to protect the blue sky research and make sure it has room to flourish and deliver maybe on an 18-month, 2-year timescale or more. We make sure that we’re not just overly focused on the near term.
So the short answer is we need both, and we’ve got that balance pretty much right over the last decade — new innovations, but also plugging them into the latest products so billions of people around the world can benefit.
Funding Science in India: Balancing Priorities
VARUN MAYYA: Professor, do you think about the same problems when it comes to funding for science in India? Like, how do you balance what you want to do versus what there are grants or funding for? Is that a big challenge?
GOVINDAN RANGARAJAN: It is always a bit of a challenge because India does not have infinite resources, so it has to prioritize. There are these national missions like the AI mission, the semiconductor mission, the quantum mission and things like that, but there are deliverables that you are supposed to deliver on. That of course conflicts with the general attitude academia has where you, as Demis said, just do blue sky research.
But I think on the whole we have been able to balance the two. There are enough avenues for getting funding for basic research. And even with applied research, I think very interesting open research problems can come out of even pursuing applied research. So one can do the balance of the two. But funding in general needs to increase in India — we are still at 1.7% of GDP. I think at least we should go to 2%. That would be much more comfortable and, given our aspirations, I think that would be warranted also.
Advice for Indian Software Engineers in the Age of AI
VARUN MAYYA: I have a worry — and this is a personal worry — which is that 200-plus billion dollars of India’s exports is IT services. I read this post recently which said it’s just a certain amount of tokens. It’s a worry because we have software engineers who are good but not great. Some of the great ones end up going abroad. But the ones that stay behind are now competing against models that are just getting better at doing software. Do you have advice for people in software right now who are working on these projects and seeing AI rapidly improve?
DEMIS HASSABIS: Look, I think a lot of areas are going to get disrupted and change. I think with change there are challenges, but also come opportunities. So what I would recommend every engineer today, wherever they are, is to lean into these AI tools, get incredibly good at using them.
I think there is a lot of untapped potential there for the youth of today, wherever they are. What one engineer can do will probably be 10x what it is today. I think new startups are going to happen that couldn’t have been done before. And in some ways it equalizes the playing field, because everyone around the world has access to pretty much the same tools. So it’s about everybody figuring out how best to integrate that into their workflows, and then we’ll see what new industries or new services come out of that. But I think there will be some new, maybe higher-level versions of the things that we do today.
VARUN MAYYA: So you’re saying there might be a higher level version of software where you just prompt the thing. But for a lot of engineers in India, it just feels like it’s not the craft anymore, because you just feel like you’re writing in English.
DEMIS HASSABIS: Well, maybe there’ll be a different sort of craft. First of all, we’re not there yet. But secondly, when I was starting off in games, we used to write in assembler language. But then when I was writing Theme Park, we went to C, and of course now we have Python and all these even higher-level languages. So one could view this as a continual abstraction that is happening.
I think that broadens the access to creativity — more people can try out their ideas and build their ideas. So maybe it’s a slightly different skill set that’s needed. But going back to this question of taste, I think that’s going to increasingly become the valuable differentiator.
What Excites You Most About the Future?
VARUN MAYYA: I have one last question for both of you — what is something from your field or what you do every day that is very exciting to you right now, but the world hasn’t heard of yet, and that you can potentially reveal without violating NDAs, that we can look forward to a few years from now?
GOVINDAN RANGARAJAN: Well, from the limited knowledge that I have, I think what is going to surprise people is the progress in math. If you look at the general public, math is always thought of as something inaccessible, populated by geniuses — which of course it’s not. But I think when people see that AI is making tremendous progress in math, they’re going to be surprised that such a field — thought to be so difficult — can be cracked open by AI. It can be, because math is based on axioms and definitions, and predictions can be proved either right or wrong. Those qualities make it much more accessible to AI than other fields.
DEMIS HASSABIS: For me, the thing I’m most looking forward to is AI in the physical world. I think robotics is going to come of age in the next two to three years. There are still a lot of things that have to be solved, in my opinion, but I think we’re getting to the point where there will be some breakout moment.
I think also AI understanding the physical world — we’ve tried very hard to do that with Gemini as a multimodal model, probably the best in the world at that — so that you could have an assistant that’s maybe on your glasses and comes with you, or on your phone, and understands the world around you and the context around you. Obviously we’re seeing self-driving cars about to become a reality around the world.
And then I’m excited about things like automated labs that may speed up scientific discovery — not just in theory, but also in the practical realm too. I think that’s all going to come in the next maybe five years or so.
Audience Q&A: Why Build Intelligence Like Ours?
VARUN MAYYA: Thank you so much for your time. I want to open the floor to the audience. We’ll take two or three questions, run them through the spam filter — which is me — and then take them to the people on stage.
Hello. Mine’s not a very technical question, but you mentioned studying neuroscience as the only data point that you have for general intelligence. We have all this talk of moving towards neuromorphic designs and everything, but why do we want to move forward to an intelligence that is similar to ours? Not only does it lead to a loss of capital for a large percentage of people, it also leads to a loss of identity. And there are a lot of things that we are bad at which AI is already doing — like when you’re in a needle-in-a-haystack kind of situation, you can ask it to do literature reviews or pool large amounts of data together to find something specific. Why not make an alternate intelligence that works in synchrony with ours instead of trying to replace human intelligence?
DEMIS HASSABIS: I want to be clear about this — it’s not about replacing human intelligence. The thing about the human brain is that it’s the only thing we know of that you can think of, approximately, as a Turing machine. If you want to think about it mathematically, we have to understand what true generality is. Turing showed that with a Turing machine, and our brains — I think most people would accept — are some kind of approximate Turing machine. So if you are interested in general intelligence that can be applied across the board, it has to have roughly that set of capabilities. At least that’s the only set that we know of. Other animals are not general enough — they don’t have big enough prefrontal cortexes, for example.
So it’s not really about replacing humans. It’s about understanding what general intelligence is. The reason the industry is pursuing this is because we find that with these general tools, they can transfer to specialized domains. So it’s probably going to be more efficient to develop a general structure that can be used in more specialized domains than to develop hundreds of specialized systems. That’s the economic pressure you’re seeing. So there are two different things — one is a scientific question about what a general system is, and the other is a more economic question.
Using AI Without Losing First Principles Thinking
VARUN MAYYA: I have a question. How do we use AI to deepen first principles thinking without removing the struggle that builds real understanding? Do you have a framework or steps that would make it easier for us?
DEMIS HASSABIS: I think it’s down to the individual. It’s like the Internet and computers — you can use them in ways that will degrade your thinking, but you can also use them in ways that enhance it. We were talking earlier about becoming a polymath. Well, today that’s almost a dream come true. With YouTube and all the information on the Internet, for someone who wants to learn something very quickly, up to say undergrad level, it’s all there — the best lecturers in the world, all of that.
So that’s one way you can use this technology. Obviously, if you use AI in a lazy way, it will make you worse at critical thinking. But that’s down to you as individuals. No one can help you do that. The technology is sitting there neutrally. You need to be smart enough to use these new technologies in ways that will enhance your thinking rather than make it worse.
VARUN MAYYA: I get the point you’re trying to make, but as you said, we have so many resources for learning, and we might get lost deciding which resource to use. What would be a better mindset for narrowing down the resources and getting into the learning?
DEMIS HASSABIS: I think the number one thing you should do while you’re in school is work out how you learn. Learning to learn — that’s my number one recommendation. I’m surprised that it is not taught more in schools. Figure out how you learn best. There isn’t going to be one answer for everyone. You need to think through how you work best, what environment, what modes you learn best in, and then double down on that.
VARUN MAYYA: Did something work for you, like the art of how to learn?
DEMIS HASSABIS: A little bit, yes, but it’s not possible to explain it in one minute. It’s many things — it’s just developing the mind. For me, it was actually games. I trained my mind on multiple games that exercise different parts of the thinking process, getting really good and capable at that. It’s kind of the way we developed AI in the early days of DeepMind — using games as a proving ground for testing out ideas.
Memory, the Hippocampus, and AI
VARUN MAYYA: I have a question around memory. Amongst all the neurological aspects, I find memory to be very intriguing — the way the hippocampus works, how we try to model episodic memory, semantic memory, long-term and short-term. Sometimes I get glimpses of what happened in my childhood. It’s not about weighted averaging — it’s some glimpses that remain. And for example, if you hear a keyword, it strikes something for you, and probably something else entirely for someone else. How are foundation models trying to handle this problem? Because currently it’s very systematic, something that you can interpret. But as far as I’ve seen, memory is a very abstract concept. It’s very difficult to understand how the brain resolves it. What’s your take on it?
DEMIS HASSABIS: I agree with you — it’s one of the most interesting things. That’s why I studied memory as well, and the hippocampus, and imagination, partly because machines in those days were very bad at those things, and to some extent still are.
I would say we are badly approximating the hippocampus at the moment with the context window. The context window is more like working memory — humans only have a working memory of about seven items, plus or minus two. But of course, a computer can have, like Gemini, a million-token context window. The problem is, I think that’s still not as good as episodic memory. It’s kind of brute force — you’re remembering everything, when in fact most tokens are irrelevant. You want to only remember the important things, which is the way human memory works. We remember emotional things better than neutral things — both positive and negative. Maybe that’s one of the functions of emotion.
We don’t need to remember everything we’ve seen today. We’ll just remember some of the key moments that might be useful for learning, for future use, or for imagining and simulating new scenarios. So I think even in the realm of AI and machines, where we can have millions — or maybe one day tens of millions or billions — of memory units, you still pay a cost of searching that memory.
Our video models, or Project Astra, which is supposed to work on glasses, can record maybe 20 minutes of video — that’s about a million tokens. First of all, that’s not a lot of time. Secondly, to then find something in that is quite expensive, because you have to look through everything. So I think, ironically, one of the things we may be missing is forgetting — or, in computer science language, garbage collection — so that you actually compress what you’re remembering and consolidate it. For those of you neuroscientists in the audience, you’ll know what I mean. It just makes the things you are remembering more efficient to search through.
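The idea Demis describes — scoring a memory at write time and forgetting low-value ones instead of keeping an append-only log — can be sketched as a toy store. This is purely illustrative: the `EpisodicStore` class and its salience scores are invented for the example, not a description of how Gemini or Project Astra actually manage memory.

```python
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class Memory:
    salience: float                      # value judgment made at write time
    text: str = field(compare=False)     # the memory itself; not used for ordering


class EpisodicStore:
    """Toy episodic memory: keep only the top-k most salient memories,
    rather than a raw append-only log (the 'context window' approach)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap: list[Memory] = []    # min-heap ordered by salience

    def write(self, text: str, salience: float) -> None:
        # The value judgment happens at write time: when capacity is hit,
        # the least salient memory is the one that gets "forgotten".
        heapq.heappush(self._heap, Memory(salience, text))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)    # garbage-collect the least salient

    def recall(self) -> list[str]:
        # Search is cheap because the store stays small and consolidated.
        return [m.text for m in sorted(self._heap, reverse=True)]


store = EpisodicStore(capacity=3)
store.write("bought coffee", 0.1)
store.write("won the award", 0.9)
store.write("saw a friend", 0.4)
store.write("missed the bus", 0.2)
store.write("signed the deal", 0.8)
print(store.recall())  # → ['won the award', 'signed the deal', 'saw a friend']
```

The point of the sketch is the contrast: a context window pays to store and search every token, while a salience gate compresses at write time, so the low-value entries ("bought coffee", "missed the bus") never survive to be searched at all.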
VARUN MAYYA: I have a follow-up to that. In high school biology, I read that the amygdala and the hippocampus fire together. So, as you said, when you’re very emotional, you tend to remember things. Is there an amygdala equivalent for LLMs?
DEMIS HASSABIS: Not at the moment, but maybe there should be. I don’t think you’d want it to be emotional or amygdala-like in the human sense. But maybe some kind of value judgment at the point of writing the memory — one that makes a calculation on how useful this memory would be for future learning or future behavior — would probably be pretty useful. And it’s something that we are researching.
VARUN MAYYA: Amazing. Thank you so much, everybody, for joining, and of course, thank you for being on stage and giving us your time.