Gradual Disempowerment Might Not Be So Bad

In LessWrong-style alignment terminology, we generally talk about gradual disempowerment of humanity as one of the default bad outcomes. The idea here is that AIs will slowly automate more and more jobs and gradually take over the economy. Ultimately this will result in the concerted economic influence and decision making... [Read More]

Thoughts on (AI) consciousness

Note: I was inspired to write this after discussions with Anil Seth and Jonas Mago on AI consciousness, where, of course, I mostly disagreed with them. As with everything on consciousness, the empirical evidence is extremely sparse, so it is mostly a game of conflicting intuitions. Strong opinions lightly held,... [Read More]