Intellectual Progress in 2023

2023 has also been an interesting year. I spent the first half of the year at Conjecture, with a brief stint cofounding Apollo, and then cofounding a soon-to-be-revealed (with any luck) startup which I shall have to remain fairly quiet about for now. There has been lots of change and personal... [Read More]

Open source AI has been vital for alignment

Epistemic Status: My opinion has been slowly shifting towards this view over the course of the year. It is contingent upon the current situation being approximately maintained – i.e. that open source models continue to trail the capabilities of the leading labs by a significant margin. [Read More]

Addendum to Grokking Grokking

In my original Grokking Grokking post, I argued that grokking could be caused simply by diffusive dynamics on the optimal manifold. That is, while an overparametrized network is trained to zero loss, the weight dynamics minimize the loss until they hit an optimal manifold of solutions... [Read More]
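The toy script below (my own sketch, not code from the post) illustrates the kind of dynamics the excerpt describes: an overparametrized model whose zero-loss solutions form a manifold, with noisy gradient updates that keep pulling the weights back to that manifold while letting them diffuse along it.

```python
# Toy sketch (illustrative, not from the post) of diffusive dynamics on an optimal manifold.
# The model y = w1 * w2 * x is overparametrized: every point with w1 * w2 = 1 fits the single
# training pair (x=1, y=1) exactly, so the zero-loss solutions form a one-dimensional manifold.
# Once the weights reach it, the gradient only pulls them back towards the manifold, and the
# injected SGD-like noise makes them wander along it while the training loss stays near zero.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([3.0, 1.0 / 3.0])      # starts on the manifold, away from the low-norm region near (1, 1)
lr, noise_scale, steps = 1e-2, 1e-2, 50_000

def grad(w):
    resid = w[0] * w[1] - 1.0        # residual on the single training example
    return 2.0 * resid * np.array([w[1], w[0]])

for _ in range(steps):
    w = w - lr * grad(w) + noise_scale * rng.normal(size=2)  # gradient step + injected noise

print("final weights:", w)
print("training loss:", (w[0] * w[1] - 1.0) ** 2)  # stays near zero: motion is mostly along the manifold
```

As I read the excerpt, the post's argument is that this kind of slow walk along the zero-training-loss manifold is what eventually carries an overparametrized network into the region of solutions that generalise, which is why test performance can jump long after the training loss has flatlined.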

Strong infohazard norms lead to predictable failure modes

Obligatory disclaimer: This post is meant to argue against overuse of infohazard norms in the AI safety community and to demonstrate failure modes that I have personally observed. It is not an argument that infohazard norms should never be used anywhere, or that true infohazards do not exist. None of this is meant to... [Read More]

Preference Aggregation as Bayesian Inference

A fundamental problem in AI alignment, as well as in many of the social sciences, is preference aggregation. Given a number of different actors who have specific preferences, what is a consistent way of making decisions that ensures that the outcome is fair and, ideally, that all of the... [Read More]
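The excerpt cuts off before the actual construction, but one simple way to phrase aggregation in Bayesian terms (a sketch of my own, not necessarily the scheme the post develops) is to treat each actor's preferences as a likelihood over outcomes, here a softmax of hypothetical per-actor utilities, and combine them with a prior via Bayes' rule. The names `aggregate`, `utilities`, and `weights` below are all illustrative.

```python
# Minimal sketch of preference aggregation phrased as Bayesian inference (illustrative only).
# Each actor's utilities are turned into a likelihood over outcomes via a softmax, and the
# likelihoods are combined with a prior by Bayes' rule. The resulting posterior is a
# product-of-experts whose log-probability is a weighted sum of the utilities, so actors with
# larger weights pull the collective decision harder.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def aggregate(utilities, weights=None, prior=None):
    """utilities: (n_actors, n_outcomes) array; weights: per-actor scaling factors."""
    n_actors, n_outcomes = utilities.shape
    weights = np.ones(n_actors) if weights is None else np.asarray(weights)
    prior = np.full(n_outcomes, 1.0 / n_outcomes) if prior is None else np.asarray(prior)

    log_post = np.log(prior)
    for u, w in zip(utilities, weights):
        log_post += np.log(softmax(w * u))   # each actor contributes one likelihood term
    return softmax(log_post)                 # normalise back to a distribution over outcomes

# Three actors, four candidate outcomes: the posterior concentrates on the outcome that
# every actor rates reasonably well, rather than on any single actor's favourite.
U = np.array([[3.0, 1.0, 0.0, -2.0],
              [0.0, 2.0, 3.0, -1.0],
              [1.0, 2.0, 1.0,  2.0]])
print(aggregate(U))
```

Because the log-posterior is just a weighted sum of utilities plus the log-prior, the per-actor weights behave like bargaining power, and a uniform prior reduces the whole scheme to a softmax over total (weighted) utility.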