Overall, 2021 was a much less productive year of growth than 2020. The main reason for this is that, in retrospect, 2020 was exceptional: for the first time, in Chris Buckley’s lab in Sussex, I had proper research mentorship and direction, and I also lucked into an extremely exciting time for active inference and predictive coding, with lots of things becoming clear fairly straightforwardly and with amazing collaborators (primarily Alec and Chris). The result was a major improvement in my research skill, in that I became capable of doing proper research well for the first time. I feel like the progress in 2021 was due to the velocity of 2020, but with almost zero further acceleration. 2020 was thus effectively a ‘zero to one’ year for me and hence, in retrospect, not really replicable in later years.

The second reason for lower apparent productivity was that I finished my PhD and moved to Oxford to do a postdoc with Rafal Bogacz. While I have had a lot of research productivity here as well, the overall cadence of publishing has been slower and more effort is put into each paper, especially with regard to presentation and figures. This quality-over-quantity approach in the lab has meant that much of my 2021 work was not published until 2022, and some of it has still not come out as of the writing of this post.

I also moved into a more supervisory role and had far more supervision and meetings to do than in 2020, when I was essentially doing full-time research with no other commitments. This ate into my productivity to a very large extent, and much more than I realized before this year. Having just one or two meetings in a day can often completely derail the deep and uninterrupted focus you need for good intellectual progress, at least for me. To some extent this is probably a personal failing of mine, and partly that I hadn’t yet adapted to having a lot of meetings, but lots of people online talk about having similar issues. Finally, I became distracted in other ways as well, such as trying to build an (ultimately doomed) AI startup, which taught me a lot about how to build websites and distributed systems but not about research, as well as starting some consulting work. All of this, while interesting in its own right, led to a severe diminishment in research and intellectual productivity.

At the end of 2021, therefore, I was very unsatisfied with my progress. However, looking back on it from the vantage point of nearly another year later (late summer 2022), I can see 2021 as more of a year of consolidation and solidification of existing gains, during which a fair amount of solid progress was nevertheless made.

This included:

1.) I integrated well into Rafal Bogacz’s research group, made some solid research progress, and became a core member of the team there. I will talk about this in more detail later, once everything has fully become history, but I think we had a great time and made a lot of progress overall here in 2021.

2.) I gained a very good (although not fully complete until early 2022) understanding of both predictive coding away from the backprop limit (i.e. prospective configuration) as well as exactly how the backprop limit works for general energy-based models. I now feel like I fully understand predictive coding and where it fits into the algorithmic scheme built up in the rest of machine learning (a small code sketch of the core predictive coding dynamics is included after this list). I think there is still a lot of untapped potential here in various aspects which I have been and will continue to work on.

3.) I gained a very good fundamental understanding of model-free RL algorithms and successor representations, and innovated in this space with the reward basis paper, which I think captures something fundamental about how the basal ganglia operate (see the toy example of linear reward recombination after this list).

4.) I finally understood the basics of how associative memories and Hopfield networks work, how they relate to attention, and the surprisingly simple (but obscured by complexity) way that all of these networks operate via the same small set of operations (sketched in code after this list). This led to my first ICML paper (Universal Hopfield Networks).

5.) I understood the literature on temporal credit assignment and what the fundamental problems here are for the brain, but did not make any direct progress on this question.

6.) I finally fully understood all details of the free energy principle.

7.) I solidified my computer science fundamentals, learning how distributed systems and databases work and taking university courses in these areas. I also took a course on computer architecture which turned out to be highly useful. I also learnt how to use Kubernetes, which was a pretty baffling experience overall.

8.) I finally sat down and learned about tax law and how accounting works, and started investing some of my (meagre) postdoc salary. Alas, right at the top, as it turned out.

9.) I also finally sat down and properly learnt the anatomy of the brain: what every region is and what it does. It was embarrassing that my knowledge before this was so partial and piecemeal. This has given me a surprising amount of global and general understanding of how the brain works, and was much more useful than I thought it would be.

10.) I now finally understand how variational message passing and general message passing algorithms on factor graphs work (a tiny sum-product example is included after this list). I am weirdly proud of this, since I tried and failed to understand it in both 2019 and 2020.

11.) I learnt how to use JAX (a minimal example of its core transformations is included after this list).

12.) I feel like my global picture of machine learning is slowly clicking into place; this happened primarily in 2021, although a lot of it happened in 2022 as well.

13.) My AGI timelines shortened significantly, to approximately 10-15 years, although I also updated my views to think that recursive self-improvement and FOOM are much less likely to be possible.
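
To make item 2 slightly more concrete, here is a minimal sketch of the kind of predictive coding network I mean: activities are relaxed to minimise a sum of squared prediction errors with the input and output clamped, and the weights are then updated with local, Hebbian-like rules. The layer sizes, nonlinearity and learning rates are purely illustrative, and this is my own toy code rather than the implementation from any particular paper.

```python
import numpy as np

# Minimal predictive coding sketch (illustrative only; hyperparameters are arbitrary).
# Energy F = sum_l 0.5 * ||x[l] - W[l-1] f(x[l-1])||^2 with input and output clamped.
rng = np.random.default_rng(0)
f, df = np.tanh, lambda a: 1.0 - np.tanh(a) ** 2
sizes = [4, 8, 2]
W = [rng.normal(0, 0.1, size=(sizes[l + 1], sizes[l])) for l in range(2)]

def infer_and_learn(inp, target, n_inference=50, lr_x=0.1, lr_w=0.01):
    x = [inp, W[0] @ f(inp), target]                  # clamp input and output, init hidden at its prediction
    for _ in range(n_inference):                      # relax the hidden activity to minimise the energy
        e1 = x[1] - W[0] @ f(x[0])                    # prediction error at the hidden layer
        e2 = x[2] - W[1] @ f(x[1])                    # prediction error at the output layer
        x[1] += lr_x * (-e1 + df(x[1]) * (W[1].T @ e2))
    # local weight updates using the equilibrium prediction errors; with the output
    # clamped and errors small, these approximate the corresponding backprop updates
    e1, e2 = x[1] - W[0] @ f(x[0]), x[2] - W[1] @ f(x[1])
    W[0] += lr_w * np.outer(e1, f(x[0]))
    W[1] += lr_w * np.outer(e2, f(x[1]))

infer_and_learn(rng.normal(size=4), np.array([1.0, -1.0]))
```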
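For item 3, the core linearity that makes successor representations and reward bases work can be shown in a few lines. This is a toy illustration of the general idea under a fixed policy, not the formulation from the reward basis paper itself.

```python
import numpy as np

# Successor representation and linear reward recombination (toy illustration).
# For a fixed policy with transition matrix P and discount gamma, the value
# function is linear in the reward: V = M r with M = (I - gamma * P)^{-1}.
rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9

P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)                 # row-stochastic transitions under the policy
M = np.linalg.inv(np.eye(n_states) - gamma * P)   # successor representation (discounted expected occupancies)

# Two reward "bases" (e.g. food and water) and a per-basis value function for each.
b_food, b_water = np.eye(n_states)[1], np.eye(n_states)[3]
V_food, V_water = M @ b_food, M @ b_water

# Because V is linear in r, changing motivational weights recombines values instantly,
# without any re-learning: r = w_food * b_food + w_water * b_water.
w_food, w_water = 2.0, 0.5
V_total = w_food * V_food + w_water * V_water
assert np.allclose(V_total, M @ (w_food * b_food + w_water * b_water))
```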
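For item 4, the ‘very simple set of operations’ is essentially similarity, separation, and projection. Below is a toy sketch of this pipeline with a dot-product similarity and a softmax separation (which recovers the modern Hopfield network / attention-style update); the patterns and parameters are made up for illustration, and this is not the paper’s own code.

```python
import numpy as np

# Similarity -> separation -> projection view of associative memory (toy sketch).
def retrieve(memories, query, beta=5.0):
    # memories: (N, d) stored patterns; query: (d,) possibly corrupted pattern.
    sims = memories @ query                        # similarity: dot product with each stored memory
    weights = np.exp(beta * (sims - sims.max()))   # separation: softmax sharpening (max subtracted for stability);
    weights /= weights.sum()                       #   other separation functions give other members of the family
    return weights @ memories                      # projection: weighted combination of stored memories

rng = np.random.default_rng(0)
memories = rng.choice([-1.0, 1.0], size=(10, 64))  # 10 random binary patterns
query = memories[3].copy()
query[:20] *= -1                                   # corrupt part of one pattern
recovered = retrieve(memories, query)
print(np.mean(np.sign(recovered) == memories[3]))  # should be close to 1.0
```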
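For item 10, here is about the smallest sum-product example I can think of: a two-variable factor graph where the marginal over one variable is computed by passing messages rather than enumerating the joint. This is my own toy illustration of the general message passing scheme, not a variational message passing implementation or any particular library’s API.

```python
import numpy as np

# Sum-product message passing on a tiny factor graph: prior p(x1) and pairwise f(x1, x2);
# we want the marginal over x2.
prior_x1 = np.array([0.6, 0.4])                        # factor over x1 (2 states)
pairwise = np.array([[0.9, 0.1],                       # factor f(x1, x2), rows indexed by x1
                     [0.2, 0.8]])

msg_prior_to_x1 = prior_x1                             # factor-to-variable message from the prior
msg_x1_to_pairwise = msg_prior_to_x1                   # variable-to-factor: product of other incoming messages
msg_pairwise_to_x2 = pairwise.T @ msg_x1_to_pairwise   # sum out x1: sum_x1 f(x1, x2) * msg(x1)

marginal_x2 = msg_pairwise_to_x2 / msg_pairwise_to_x2.sum()
print(marginal_x2)                                     # matches direct marginalisation below
print((prior_x1 @ pairwise) / (prior_x1 @ pairwise).sum())
```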
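And for item 11, the basic JAX workflow that took some getting used to: write a pure function of its parameters and transform it with grad and jit. The toy loss and shapes here are arbitrary.

```python
import jax
import jax.numpy as jnp

# Tiny JAX example: a pure loss function transformed with grad and jit (illustrative only).
def loss(params, x, y):
    w, b = params
    pred = jnp.tanh(x @ w + b)
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))          # compiled gradient of the loss w.r.t. params

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (3, 2))
b = jnp.zeros(2)
x = jax.random.normal(key, (5, 3))
y = jnp.ones((5, 2))

dw, db = grad_fn((w, b), x, y)
print(dw.shape, db.shape)                  # (3, 2) (2,)
```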