<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Beren&apos;s Blog</title>
    <description>Thoughts on AI, Neuroscience, and other things that interest me.</description>
    <link>http://www.beren.io/</link>
    <atom:link href="http://www.beren.io/feed.xml" rel="self" type="application/rss+xml" />
    
      <item>
        <title>AI Monotheism vs AI Polytheism</title>
        <description>
          
          Epistemic note: This is the beginning of a planned series of posts trying to think about what a highly multi-polar post-AGI world would look like and to what extent humanity or human values could survive in such a world depending on our degree of alignment success. This is all highly...
        </description>
        <pubDate>Wed, 07 Jan 2026 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2026-01-07-AI-Monotheism-vs-AI-Polytheism/</link>
        <guid isPermaLink="true">http://www.beren.io/2026-01-07-AI-Monotheism-vs-AI-Polytheism/</guid>
      </item>
    
      <item>
        <title>Two Mechanisms of Decadence</title>
        <description>
          
          Epistemic status: Obviously speculative sociology. Probably pretty obvious to some, but I’m just trying to crystallize these ideas from my mind onto paper. I was recently on a random walk through some old SSC posts and stumbled upon his review of the Cyropaedia. His interlude about the ‘Fremen mirage’ and the...
        </description>
        <pubDate>Tue, 06 Jan 2026 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2026-01-06-Two-Mechanisms-of-Decadence/</link>
        <guid isPermaLink="true">http://www.beren.io/2026-01-06-Two-Mechanisms-of-Decadence/</guid>
      </item>
    
      <item>
        <title>Intellectual Progress in 2025</title>
        <description>
          
          It is now 2026 and we are halfway through the decade of the 2020s. If we think back to the halcyon days of January 2020, certainly a lot has happened, especially in AI. The first half of the decade has essentially been the discovery and then incredible exploitation of...
        </description>
        <pubDate>Thu, 01 Jan 2026 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2026-01-01-Intellectual-Progress-in-2025/</link>
        <guid isPermaLink="true">http://www.beren.io/2026-01-01-Intellectual-Progress-in-2025/</guid>
      </item>
    
      <item>
        <title>Initial Quick Thoughts on Singular Learning Theory</title>
        <description>
          
          Epistemic Status: Just some quick thoughts written without a super deep knowledge of SLT, so caveat emptor. Recently, I happened to run into Jesse Hoogland at the Post-AGI workshop and we got to discussing his work on SLT. SLT had been vaguely in the air when I was at Conjecture...
        </description>
        <pubDate>Wed, 24 Dec 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-12-24-Initial-Quick-Thoughts-on-Singular-Learning-Theory/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-12-24-Initial-Quick-Thoughts-on-Singular-Learning-Theory/</guid>
      </item>
    
      <item>
        <title>The Biosingularity Alignment Problem Seems Harder than AI Alignment</title>
        <description>
          
          One alternative to the AI-driven singularity that is sometimes proposed is effectively the biosingularity, specifically focused on human intelligence augmentation. The idea here is that we first create a successor species of highly enhanced humans, and then these transhumans are better placed to solve the alignment problem....
        </description>
        <pubDate>Sun, 30 Nov 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-11-30-The-Biosingularity-Alignment-Problem-Seems-Harder-than-AI-Alignment/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-11-30-The-Biosingularity-Alignment-Problem-Seems-Harder-than-AI-Alignment/</guid>
      </item>
    
      <item>
        <title>Addendum to Fertility, Inheritance, and the Concentration of Wealth</title>
        <description>
          
          Recently I was having a conversation about Where Are the Missing Billionaires, a book which questions why there are so few descendants of historical magnates with fortunes equal or comparable to those of their founders. I.e. why the fortunes of e.g. Rockefeller descendants do not match those of the original Rockefeller (although...
        </description>
        <pubDate>Sun, 30 Nov 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-11-30-Addendum-to-Fertility-Inheritance-and-Concentration-of-Wealth/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-11-30-Addendum-to-Fertility-Inheritance-and-Concentration-of-Wealth/</guid>
      </item>
    
      <item>
        <title>Gradual Disempowerment Might Not Be So Bad</title>
        <description>
          
          In LessWrong-style alignment terminology we generally talk about gradual disempowerment of humanity as one of the default bad outcomes. The idea here is that AIs will slowly automate more and more jobs and slowly take over the economy. Ultimately this will result in the concerted economic influence and decision making...
        </description>
        <pubDate>Sun, 23 Nov 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-11-23-Gradual-Disempowerment-Might-Not-Be-So-Bad/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-11-23-Gradual-Disempowerment-Might-Not-Be-So-Bad/</guid>
      </item>
    
      <item>
        <title>Whence Human Talents Neurobiologically?</title>
        <description>
          
          Epistemic status: Just something I was musing over. I don’t have any answers to this. Something that is very obvious about humans is that they have strengths and weaknesses. Our intelligence is ‘spiky’. Some things we are good at; some things we are not. We often have an intrinsic talent...
        </description>
        <pubDate>Sat, 22 Nov 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-11-22-Whence-Human-Talents-Neurobiologically/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-11-22-Whence-Human-Talents-Neurobiologically/</guid>
      </item>
    
      <item>
        <title>Space Warfare Seems Mostly Defense Dominant</title>
        <description>
          
          Epistemic status: I’ve done some thinking and research on this, but I am not a physicist and I could easily be wrong about specific things. Sometimes when thinking about the long term future, it is interesting to think about the offense-defense dynamics of a fully colonized ‘mature’ universe. E.g. suppose...
        </description>
        <pubDate>Sat, 22 Nov 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-11-22-Space-Warfare-Seems-Mostly-Defense-Dominant/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-11-22-Space-Warfare-Seems-Mostly-Defense-Dominant/</guid>
      </item>
    
      <item>
        <title>Continual learning explains some interesting phenomena in human memory</title>
        <description>
          
          Epistemic Status: Far from certain and mostly speculation, but it does make sense. Recently, I was pondering how continual learning works in the brain and realized that the interaction of our brain’s continual learning mechanisms with the hippocampal memory system would naturally explain a lot of the weirdness about how...
        </description>
        <pubDate>Sat, 11 Oct 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-10-11-Continual-Learning-Explains-Interesting-Phenomena-Human-Memory/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-10-11-Continual-Learning-Explains-Interesting-Phenomena-Human-Memory/</guid>
      </item>
    
      <item>
        <title>Thoughts on (AI) consciousness</title>
        <description>
          
          Note: I was inspired to write this after discussions with Anil Seth and Jonas Mago on AI consciousness, where, of course, I mostly disagreed with them. As with everything on consciousness, the empirical evidence is extremely sparse so it is mostly a game of conflicting intuitions. Strong opinions lightly held,...
        </description>
        <pubDate>Wed, 06 Aug 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-08-06-Thoughts-On-AI-Consciousness/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-08-06-Thoughts-On-AI-Consciousness/</guid>
      </item>
    
      <item>
        <title>Millennials as the Forever Generation</title>
        <description>
          
          Epistemic status: Obviously speculative, but interesting. Sometime in the next few decades it seems likely that we will have the singularity. If all goes well and we successfully create aligned AI systems, humanity will continue to exist but in an entirely new phase. This will likely include the vindication of...
        </description>
        <pubDate>Mon, 04 Aug 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-08-04-Millennials-as-the-Forever-Generation/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-08-04-Millennials-as-the-Forever-Generation/</guid>
      </item>
    
      <item>
        <title>The Limit of Prediction is not Omniscience</title>
        <description>
          
          Epistemic status: Mostly re-litigating old debates, I believe. Hopefully still somewhat interesting. This is just a short post on a small point which took me a worryingly long time to realize. For a while people were claiming that the pretraining next token prediction objective could directly lead to superintelligence...
        </description>
        <pubDate>Sun, 03 Aug 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-08-03-The-Limit-Of-Prediction-Is-Not-Omniscience/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-08-03-The-Limit-Of-Prediction-Is-Not-Omniscience/</guid>
      </item>
    
      <item>
        <title>Most Algorithmic Progress is Data Progress</title>
        <description>
          
          Epistemic Status: Fairly sure about this from experience, but could be missing crucial considerations. I don’t present any super detailed evidence here, so it is theoretically just vibes. When forecasting AI progress, forecasters and modellers often break it down into two components: increased compute and ‘algorithmic progress’. My...
        </description>
        <pubDate>Sat, 02 Aug 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-08-02-Most-Algorithmic-Progress-is-Data-Progress/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-08-02-Most-Algorithmic-Progress-is-Data-Progress/</guid>
      </item>
    
      <item>
        <title>Do We Want Obedience or Alignment?</title>
        <description>
          
          One question which I have occasionally pondered is: assuming that we actually succeed at some kind of robust alignment of AGI, what is the alignment target we should focus on? In general, this question splits into two basic camps. The first is obedience and corrigibility: the AI system should execute...
        </description>
        <pubDate>Sat, 02 Aug 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-08-02-Do-We-Want-Obedience-Or-Alignment/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-08-02-Do-We-Want-Obedience-Or-Alignment/</guid>
      </item>
    
      <item>
        <title>Should we be behaviorist about an AI&apos;s values?</title>
        <description>
          
          Epistemic note: Very short point and I’m pretty uncertain on this myself. Trying to work out the arguments in blog format. In the alignment discourse I notice a lot of vaguely described but very real worry along the lines of “Even if we train an AI to be aligned and...
        </description>
        <pubDate>Sun, 11 May 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-05-11-Should-We-Be-Behaviourist-About-AIs-Values/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-05-11-Should-We-Be-Behaviourist-About-AIs-Values/</guid>
      </item>
    
      <item>
        <title>Preliminary Thoughts on Reward Hacking</title>
        <description>
          
          Epistemic status: Early thoughts. Some ideas but no empirical testing or validation as yet. I’ve started thinking a fair bit about reward hacking recently. This is because frontier models are reportedly beginning to show signs of reward hacking, especially for coding tasks. Thus, the era of easy-to-align pretraining-only models appears...
        </description>
        <pubDate>Sun, 27 Apr 2025 00:00:00 -0700</pubDate>
        <link>http://www.beren.io/2025-04-27-Preliminary-Thoughts-On-Reward-Hacking/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-04-27-Preliminary-Thoughts-On-Reward-Hacking/</guid>
      </item>
    
      <item>
        <title>Why Not Sparse Hierarchical Graph Learning</title>
        <description>
          
          Recently Noumenal Labs announced themselves and I read their white paper. Although it is pretty light on specifics, it seems clear that their issue with LLMs, and with NNs more generally, is that these models do not properly reflect in their structure the true underlying generative process of reality; effectively, that they do...
        </description>
        <pubDate>Sat, 01 Mar 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-03-01-Why-Not-Sparse-Hierarchical-Graph-Learning/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-03-01-Why-Not-Sparse-Hierarchical-Graph-Learning/</guid>
      </item>
    
      <item>
        <title>The Scaling Laws Are In Our Stars, Not Ourselves</title>
        <description>
          
          Epistemic status: Pretty uncertain. This is a model I have been using to think about neural networks for a while; it does have some support, but is not completely rigorous. I hear a lot of people talk about scaling laws as if they are a property of specific models or...
        </description>
        <pubDate>Sat, 01 Mar 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-03-01-The-Scaling-Laws-Are-In-Our-Stars-Not-Ourselves/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-03-01-The-Scaling-Laws-Are-In-Our-Stars-Not-Ourselves/</guid>
      </item>
    
      <item>
        <title>Current neural networks are not overparametrized</title>
        <description>
          
          Occasionally I hear people say or believe that NNs are overparametrized and base their intuitions on this idea. Certainly there is a small academic literature around phenomena like double descent which implicitly assumes an overparametrized network. However, while overparametrized inference and generalization is certainly a valid regime...
        </description>
        <pubDate>Sat, 01 Mar 2025 00:00:00 -0800</pubDate>
        <link>http://www.beren.io/2025-03-01-Current-Neural-Networks-Are-Not-Overparametrized/</link>
        <guid isPermaLink="true">http://www.beren.io/2025-03-01-Current-Neural-Networks-Are-Not-Overparametrized/</guid>
      </item>
    
  </channel>
</rss>
