<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
    <channel>
      <title>Quartz 4</title>
      <link>https://wiki.vectra.dev</link>
      <description>Last 10 notes on Quartz 4</description>
      <generator>Quartz -- quartz.jzhao.xyz</generator>
      <item>
    <title>README</title>
    <link>https://wiki.vectra.dev/README</link>
    <guid>https://wiki.vectra.dev/README</guid>
    <description><![CDATA[ Personal Obsidian-style wiki powered by a Claude Code plugin. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:48:42 GMT</pubDate>
  </item><item>
    <title>Adaptive Computation Time</title>
    <link>https://wiki.vectra.dev/wiki/OpenMythos/Adaptive-Computation-Time</link>
    <guid>https://wiki.vectra.dev/wiki/OpenMythos/Adaptive-Computation-Time</guid>
    <description><![CDATA[ Adaptive Computation Time (ACT) is a mechanism that allows a neural network to allocate a variable number of processing steps to each input, spending more computation on difficult examples and fewer steps on simple ones. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:46:46 GMT</pubDate>
  </item><item>
    <title>Spectral Radius Constraints for Loop Stability</title>
    <link>https://wiki.vectra.dev/wiki/OpenMythos/Spectral-Radius-Constraints-for-Loop-Stability</link>
    <guid>https://wiki.vectra.dev/wiki/OpenMythos/Spectral-Radius-Constraints-for-Loop-Stability</guid>
    <description><![CDATA[ Spectral radius constraints are a stability mechanism applied to the learnable parameters of recurrent neural architectures to guarantee that iterative state updates remain bounded and do not diverge over many loop iterations. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:45:31 GMT</pubDate>
  </item><item>
    <title>Mixture of Experts with Shared Experts</title>
    <link>https://wiki.vectra.dev/wiki/OpenMythos/Mixture-of-Experts-with-Shared-Experts</link>
    <guid>https://wiki.vectra.dev/wiki/OpenMythos/Mixture-of-Experts-with-Shared-Experts</guid>
    <description><![CDATA[ Mixture of Experts with Shared Experts (MoE with Shared Experts) is a hybrid feed-forward routing strategy used in transformer models. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:45:05 GMT</pubDate>
  </item><item>
    <title>Looped Transformer</title>
    <link>https://wiki.vectra.dev/wiki/OpenMythos/Looped-Transformer</link>
    <guid>https://wiki.vectra.dev/wiki/OpenMythos/Looped-Transformer</guid>
    <description><![CDATA[ A Looped Transformer is a neural network architecture in which one or more transformer blocks are reused across multiple sequential iterations rather than instantiated once per layer, trading raw parameter count for configurable computational depth. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:44:06 GMT</pubDate>
  </item><item>
    <title>Recurrent-Depth Transformer</title>
    <link>https://wiki.vectra.dev/wiki/OpenMythos/Recurrent-Depth-Transformer</link>
    <guid>https://wiki.vectra.dev/wiki/OpenMythos/Recurrent-Depth-Transformer</guid>
    <description><![CDATA[ A Recurrent-Depth Transformer (RDT) is a neural language model architecture that achieves computational depth by iterating over a shared set of transformer blocks rather than stacking many unique layers. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:42:48 GMT</pubDate>
  </item><item>
    <title>wiki</title>
    <link>https://wiki.vectra.dev/</link>
    <guid>https://wiki.vectra.dev/</guid>
    <description><![CDATA[ Personal knowledge base. ]]></description>
    <pubDate>Tue, 21 Apr 2026 20:19:55 GMT</pubDate>
  </item>
    </channel>
  </rss>