In LessWrong-style alignment terminology, we generally talk about the gradual disempowerment of humanity as one of the default bad outcomes. The idea here is that AIs will slowly automate more and more jobs and gradually take over the economy. Ultimately this will result in the concerted economic influence and decision making...
[Read More]
Whence Human Talents Neurobiologically?
Epistemic status: Just something I was musing over. I don’t have any answers to this.
[Read More]
Space Warfare Seems Mostly Defense Dominant
Epistemic status: I’ve done some thinking and research on this but I am not a physicist and I can easily be wrong about specific things.
[Read More]
Continual learning explains some interesting phenomena in human memory
Epistemic status: Far from certain and mostly speculation, but it does make sense.
[Read More]
Thoughts on (AI) consciousness
Note: I was inspired to write this after discussions with Anil Seth and Jonas Mago on AI consciousness, where, of course, I mostly disagreed with them. As with everything on consciousness, the empirical evidence is extremely sparse, so it is mostly a game of conflicting intuitions. Strong opinions lightly held,...
[Read More]