Epistemic Status: Just some quick thoughts written without a particularly deep knowledge of SLT, so caveat emptor.
[Read More]
The Biosingularity Alignment Problem Seems Harder than AI Alignment
One alternative to the AI-driven singularity that is sometimes proposed is the biosingularity, specifically one focused on human intelligence augmentation. The idea here is that we first create what is effectively a successor species of highly enhanced humans, and these transhumans are then better placed to solve the alignment problem....
[Read More]
Addendum to Fertility, Inheritance, and the Concentration of Wealth
Recently I was having a conversation about where the missing billionaires are, prompted by a book which asks why there are so few descendants of historical magnates with fortunes equal or comparable to those of their founders. I.e. why the fortunes of e.g. Rockefeller descendants do not match that of the original Rockefeller (although...
[Read More]
Gradual Disempowerment Might Not Be So Bad
In LessWrong-style alignment terminology, we generally talk about gradual disempowerment of humanity as one of the default bad outcomes. The idea here is that AIs will slowly automate more and more jobs and gradually take over the economy. Ultimately this will result in the concerted economic influence and decision making...
[Read More]
Whence Human Talents Neurobiologically?
Epistemic Status: Just something I was musing over. I don’t have any answers to this.
[Read More]