2022 has been an interesting year. Perhaps the biggest change is that I left academia and started getting serious about AI safety. I am now head of research at Conjecture, a London-based startup with the mission of solving alignment. We are serious about this and we are giving it our best shot. Given the state of AI progress, I think it is important work.
In general, this year has been one of disruption, change, and a number of opportunities. This has resulted in a fair bit of intellectual progress, especially in terms of growing a more tacit knowledge of how businesses, startups, and investing actually operate in practice. On the other hand, it has meant much less explicit and legible intellectual knowledge gained this year in terms of papers published or lecture courses completed. Overall, I think this change has probably been good – the move to Conjecture has been extremely positive, I feel, although time will have to tell about the longer-term implications. I feel that it was definitely time to move out of academia: I had exhausted most of the gains from being there and was into the regime of steep diminishing returns. My only major mistake is that I probably should have moved sooner – perhaps after my PhD, or after the first year of my postdoc rather than after 1.5 years. My time at Conjecture thus far has been an incredible learning experience – probably comparable to my masters or my year at Sussex. My prediction is that diminishing returns will set in within about 6 months to a year, but more time is likely necessary for a full consolidation. This is approximately how my life seems to have gone so far – years of sudden growth with moves, followed by a year or two of consolidation. This pattern seems likely to repeat. It is unclear whether I could do better, perhaps by moving more often, or whether periods of consolidation are necessary.
In general, my AI timelines have shortened dramatically this year and now stand at about 3-10 years to strongly superhuman AGI. AI progress has been absolutely stunning, and it is clear that there now exists a direct path to AGI for anyone who wants to walk it. It is clear that we will experience the singularity easily within my lifetime and probably within the next 20 years. This means that, for people of my age, technologies like anti-aging are irrelevant, as likely are most major biology advances – iterated embryo selection, artificial wombs, etc. are super cool but look to be post-singularity rather than pre-singularity technologies; a 20-year lead time to impact is just too long in the age of AGI.
In terms of general intellectual progress, probably the biggest and most obvious ones are:
1.) My thinking on alignment has shifted dramatically. I now feel like I have a sensible grasp of the issues and a rough strategy for how to address alignment. Interestingly, my views have increasingly diverged from the LessWrong consensus the more directly I have gotten involved in the problem and the field. To be honest, this is likely due to my natural contrarianism rather than to better epistemics, but perhaps it represents insight.
2.) As a corollary to this, I now regularly post blog posts and ideas on LessWrong. This may seem small, but I spent years wanting to post things and always being too intimidated to do so, so this is actually a big personal step for me.
3.) My personal blog is doing much better now in that I actually write posts, albeit mostly about alignment. 2022 has been a great year for the blog.
4.) I understand language models at a much deeper level now. A lot of my work at Conjecture has been interpretability on LLMs, and diving into understanding large models has been super interesting.
5.) I have a pretty good understanding of exactly how large LLMs are trained and the steps necessary to do ML at scale, including the infrastructure involved.
6.) I have a much better grasp of the theory and practitioners' intuitions relating to training large models, absorbed by osmosis from hanging around Conjecture's engineering team.
7.) I now have a pretty good sense of the path to AGI and the remaining bottlenecks.
Progress on my academic research:
1.) I feel like I have come to a fairly complete understanding of the space of predictive coding and other biologically plausible alternatives to backprop. This understanding is best reflected in my final two papers for Rafal: https://arxiv.org/pdf/2207.12316 and https://arxiv.org/pdf/2206.02629. Both papers were also accepted to ICLR (after rejections at NeurIPS), which was very nice.
2.) I feel like I have a pretty good idea of how credit assignment works in the brain now (at least static credit assignment; temporal credit assignment still eludes me).
3.) I feel like I have a fairly good, albeit pretty speculative, high-level picture of how the brain works at a macro level, including the remaining ways in which we differ from current ML. This includes some ideas about how human values actually come about, with relevance to alignment.
4.) I understand a lot about the space of neuromorphic hardware and how it fits into biologically plausible learning algorithms. I think this space is very interesting, but probably not relevant before AGI on short timelines, although post-singularity AGIs will almost certainly be neuromorphic.
5.) I generally feel like I grok how the academic world works and its incentives – how to get into conferences and journals, etc. I have been very slow on this path due to a lack of good mentorship and my own stupidity and incapacity, but slowly the fog has cleared here.
6.) I feel like I now fully understand the FEP and related literature and have integrated the insights there into my general ML worldview.
Miscellaneous lecture courses / other developments.
1.) I did a fair bit of Susskind's Theoretical Minimum advanced courses, including the one on quantum physics and the first two on particle physics, which were kind of interesting and fun, though I got a bit lost on some of it.
2.) I finished lecture courses on computational geometry and computer graphics which were interesting.
3.) I finished up a computer architecture and distributed computing course.
4.) I read a textbook on evolutionary theory.
5.) Reading books has generally gone very poorly this year, since I have had little time, and in the evenings I just crash and watch shows on Netflix. I need to improve this.