Preventing Goodhart with homeostatic reward functions.

Current decision theory and almost all AI alignment work assume that we will build AGIs with some fixed utility function that they will optimize forever. This naturally runs the risk of extreme Goodharting: if we do not get exactly the ‘correct’ utility function, then the slight differences between our... [Read More]
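As a rough illustration of the contrast the title is pointing at (everything below is a made-up sketch, not code from the post): a fixed monotone utility always rewards pushing a proxy further, whereas a homeostatic reward peaks at a setpoint and penalizes over-optimization in either direction.

```python
import numpy as np

def fixed_proxy_utility(state: np.ndarray, weights: np.ndarray) -> float:
    # Monotone utility: more of the proxy is always better, so a strong
    # optimizer is pushed arbitrarily far along it (classic Goodhart failure).
    return float(weights @ state)

def homeostatic_reward(state: np.ndarray, setpoints: np.ndarray,
                       tolerance: float = 1.0) -> float:
    # Reward peaks when each tracked variable sits at its setpoint and falls
    # off on either side, so there is no incentive to over-optimize any proxy.
    deviation = (state - setpoints) / tolerance
    return float(-np.sum(deviation ** 2))

# Toy comparison: pushing the state far past the setpoints keeps increasing
# the fixed proxy utility but makes the homeostatic reward strictly worse.
setpoints = np.array([1.0, 0.5])
for scale in (1.0, 10.0, 100.0):
    state = setpoints * scale
    print(scale,
          round(fixed_proxy_utility(state, np.ones(2)), 2),
          round(homeostatic_reward(state, setpoints), 2))
```

The point of the toy comparison is just that the homeostatic objective is bounded above and achieved at the setpoint, so "apply more optimization power" stops being the answer to everything.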

Don't argmax; Distribution match

I mentioned this briefly in a previous post, but thought I should expand on it a little. Basically, argmax objectives, as used in AIXI and many RL systems, are exceptionally bad from an alignment perspective due to the standard and well-known issues of Goodharting, ignoring uncertainty, etc. There have... [Read More]
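To make the contrast concrete, here is a minimal sketch (the action set, noise scale, and Boltzmann-style target are illustrative choices, not anything specific from the post): an argmax policy hands all probability to whichever action happens to draw the largest estimation error, while a policy chosen to match a target distribution degrades smoothly with that error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five actions that are genuinely equally good, plus noisy reward estimates.
true_reward = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

def argmax_policy(estimate):
    # Put all probability on the single best-looking action, i.e. maximally
    # exploit whatever estimation error the reward estimate contains.
    p = np.zeros_like(estimate)
    p[np.argmax(estimate)] = 1.0
    return p

def matched_policy(estimate, temperature=1.0):
    # Policy chosen to match a Boltzmann target distribution over actions;
    # minimizing KL to that target simply returns the target itself, which
    # spreads probability and is far less sensitive to any one estimate.
    logits = estimate / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

for trial in range(3):
    estimate = true_reward + rng.normal(scale=0.5, size=true_reward.shape)
    print("estimates:", np.round(estimate, 2))
    print("  argmax :", argmax_policy(estimate))
    print("  matched:", np.round(matched_policy(estimate), 2))
```

Across the trials, the argmax policy jumps between actions purely on noise, while the matched policy's probabilities wobble only slightly; that difference in sensitivity is the alignment-relevant part.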

AGI will have learnt reward models.

There has been a lot of debate and discussion recently in the AI safety community about whether AGI is likely to optimize for fixed goals, i.e. to be a wrapper mind. The term wrapper mind is largely a restatement of the old idea of a utility maximizer, with AIXI as a canonical... [Read More]
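For concreteness, here is a minimal sketch of what a learnt reward model (as opposed to a hand-coded utility function) typically looks like: the reward is inferred from preference comparisons in the style of current RLHF reward models. The feature dimension, Bradley-Terry fit, and hyperparameters below are all illustrative, not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: outcomes are feature vectors, and the "true" reward is
# a function we never hard-code into the agent.
def true_reward(x):
    return x @ np.array([2.0, -1.0, 0.5])

# Preference data: pairs of outcomes and which one is preferred.
X_a = rng.normal(size=(500, 3))
X_b = rng.normal(size=(500, 3))
prefer_a = (true_reward(X_a) > true_reward(X_b)).astype(float)

# Learned reward model: a linear scorer fit by gradient descent on the
# Bradley-Terry log-likelihood  p(a > b) = sigmoid(r(a) - r(b)).
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    diff = (X_a - X_b) @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    grad = (X_a - X_b).T @ (p - prefer_a) / len(prefer_a)
    w -= lr * grad

# The fit roughly recovers the direction of the true reward, up to scale.
print("learned reward direction:", np.round(w / np.abs(w).max(), 2))
print("true reward direction:   ", np.round(np.array([2.0, -1.0, 0.5]) / 2.0, 2))
```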

Why not just stop FOOM?

AI alignment given FOOM seems exceptionally challenging in general. This is fundamentally because we have no reasonable bounds on the optimization power a post-FOOM agent can apply. Hence, for all we know, such an agent could go arbitrarily off-distribution, which destroys our proxies, and could defeat any method... [Read More]