<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Lab Notebook - Jayadeep KM]]></title><description><![CDATA[Thoughts on software, systems, and whatever I’m experimenting with this week]]></description><link>https://jayadeep.me</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 18:05:14 GMT</lastBuildDate><atom:link href="https://jayadeep.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AI is not replacing Devops Engineers, It is making us more Valuable]]></title><description><![CDATA[There is significant hype and fear around AI replacing roles such as software engineers, technical writers, and translators. The debate is loud, emotional, and often polarized. As a Platform/DevOps en]]></description><link>https://jayadeep.me/ai-for-devops</link><guid isPermaLink="true">https://jayadeep.me/ai-for-devops</guid><category><![CDATA[Devops]]></category><category><![CDATA[AI]]></category><category><![CDATA[jobs]]></category><category><![CDATA[developer productivity]]></category><dc:creator><![CDATA[Jayadeep km]]></dc:creator><pubDate>Sat, 21 Feb 2026 15:18:56 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/697df4372d5fdda31acd5c67/d364878f-3796-40d6-be43-ac4f848f5b72.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There is significant hype and fear around AI replacing roles such as software engineers, technical writers, and translators. The debate is loud, emotional, and often polarized. As a Platform/DevOps engineer with more than a decade of experience in the field, I have a different perspective.</p>
<p>From what I have seen in practice, DevOps engineers are among the biggest beneficiaries of AI so far. Rather than replacing us, AI has amplified our effectiveness. It has increased our output, shortened iteration cycles, and made our impact more visible. In many cases, this increased leverage actually makes it easier for organizations to justify expanding platform and DevOps headcount rather than reducing it.</p>
<h3>1. IaC and CI/CD Have Become Much Easier</h3>
<p>A large part of DevOps work is writing Infrastructure as Code using tools like Terraform and Ansible. We also spend a lot of time defining CI/CD workflows in systems such as GitHub Actions or Azure DevOps. These are mostly declarative languages. They are structured, opinionated, and designed to catch errors early.</p>
<p>This structure works very well with AI.</p>
<p>Most Terraform modules, pipeline definitions, and configuration files follow common patterns. There is usually a recommended way to solve a problem. The documentation is clear, and thousands of public examples exist. Because of this, AI models can generate useful and often correct configurations with relatively simple prompts.</p>
<p>Even smaller models can produce solid results. In many cases, the generated Terraform code or CI workflow works with only small adjustments. Unlike complex application code, you are not dealing with deep business logic or unpredictable edge cases. The surface area is smaller and more constrained. This reduces the time spent on boilerplate and syntax.</p>
<p>Instead of searching documentation or copying examples from multiple sources, you can generate a starting point instantly and refine it. The real value then shifts from writing YAML or HCL to designing better systems, improving reliability, and enabling other teams.</p>
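<p>Because the generated HCL or YAML is only a starting point, it helps to run it through the toolchain's own checks before anything ships. As a sketch (standard Terraform CLI commands; the review loop itself is my suggestion, not part of any AI tool):</p>
<pre><code class="lang-shell"># Treat AI-generated Terraform as a draft: format, validate, and
# review the plan before anything is applied.
terraform fmt -check          # enforce canonical formatting
terraform validate            # catch schema and type errors early
terraform plan -out=tf.plan   # inspect exactly what would change
</code></pre>
<p>The declarative structure is what makes this loop cheap: most mistakes surface in <code>validate</code> or <code>plan</code> long before they touch real infrastructure.</p>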
<h3>2. Delegating Boring Work to AI</h3>
<p>Unlike application developers, DevOps engineers spend a large portion of their time on tasks that do not require deep infrastructure expertise. These tasks are necessary, but they are repetitive and time-consuming. AI has been especially useful here.</p>
<p><strong>Creating and managing tickets.</strong> Writing Jira tickets with proper descriptions, acceptance criteria, and story point estimates used to take more time than it should. Now, a short prompt is often enough to generate a well-structured ticket. With MCP integrations, the ticket can even be created automatically. I also include the ticket context when working on a task and ask the AI to draft status updates as progress is made. Instead of seeing Jira as overhead, I now treat it as structured memory that AI tools can reference and maintain.</p>
<p><strong>Maintaining documentation.</strong> Platform teams usually own many internal tools and services. Keeping documentation accurate is important, but it often gets postponed because it feels secondary to delivery. AI reduces that friction. After completing a task, I provide the relevant documentation context and ask the model to update it. What used to take an extra hour at the end of a task now takes minutes. The result is more consistent and up-to-date internal documentation.</p>
<p><strong>Communication and announcements.</strong> DevOps work involves constant communication with stakeholders, vendors, and internal teams. Writing clear Slack messages, emails, or internal announcements requires effort and context switching. AI helps structure the message quickly. I focus on the key points and let the model refine the wording. The result is usually clearer than a rushed draft written between meetings.</p>
<p><strong>Presentations and data visualization.</strong> Platform engineers frequently need to explain costs, reliability metrics, or architectural decisions. AI tools make it easier to analyze vendor bills, summarize large datasets, and generate simple graphs for understanding trends. When preparing demos or internal presentations, I can provide bullet points and generate a clean HTML slide deck in a short time. This reduces preparation time and allows me to focus more on the substance of the discussion.</p>
<p>In all of these cases, AI does not replace DevOps expertise. It reduces the time spent on operational overhead and frees up attention for higher-value engineering work.</p>
<h3>3. Versatility and Adaptability as a Core Strength</h3>
<p>Another important factor is the versatility of the DevOps skill set. Over the years, we have had to adapt continuously to new tools, platforms, and paradigms. Configuration management tools replaced shell scripts, containers reshaped deployment models, Kubernetes changed orchestration, GitOps redefined delivery workflows, and cloud-native architectures altered infrastructure design. None of these shifts were optional. DevOps engineers learned to evaluate new technologies quickly, understand their trade-offs, and integrate them into existing systems without disrupting production.</p>
<p>This constant adaptation has trained us to think in terms of systems, abstractions, and automation rather than specific tools. We are used to reading documentation, experimenting in controlled environments, and moving from zero to working implementation in a short time. AI is simply another major shift in the tooling landscape. Companies now face rapid changes driven by AI capabilities, new platforms, and evolving best practices. The ability to assess, integrate, and operationalize these technologies is critical. DevOps engineers already operate at that intersection of infrastructure, automation, and developer enablement. That adaptability is not being replaced by AI, it is becoming more valuable because of it.</p>
<h3>4. Higher Productivity Across Teams Increases the Need for Stronger Platforms</h3>
<p>AI has increased productivity across almost every technical role. Software engineers are producing more code. Designers are iterating on concepts faster. Architects can evaluate and refine system designs in a fraction of the time it previously required. The overall pace of development has accelerated.</p>
<p>But higher output creates new pressure on the underlying systems.</p>
<p>More code means more builds, more deployments, and more infrastructure changes. Faster iteration cycles mean pipelines need to be reliable and scalable. Rapid experimentation increases the risk surface, which requires stronger security controls, better observability, and tighter governance. When teams move faster, instability compounds faster as well.</p>
<p>This acceleration does not reduce the need for DevOps. It increases it. If application teams can ship features at twice the speed, the platform must be able to support that speed without becoming a bottleneck. That requires well-designed CI/CD pipelines, resilient infrastructure, automated policy enforcement, and mature monitoring and incident response processes.</p>
<p>In short, AI amplifies development capacity. DevOps is what makes that capacity sustainable.</p>
<h3>Conclusion</h3>
<p>AI is changing how we work, but for DevOps it has mostly acted as a force multiplier. It reduces time spent on repetitive tasks, speeds up infrastructure work, and increases the overall pace of software delivery. That faster pace creates stronger demand for reliable platforms, secure systems, and scalable pipelines.</p>
<p>As AI accelerates development across teams, the need for engineers who can operationalize, secure, and stabilize that output becomes even more critical. The role is evolving, not shrinking.</p>
<hr />
<p><em>For transparency, this article was edited with the help of AI to improve grammar and wording. The ideas and opinions are based on my own experience.</em></p>
]]></content:encoded></item><item><title><![CDATA[Plan Mode — The Most Underrated Superpower in AI Coding Agents]]></title><description><![CDATA[When I first started using plan mode in Claude Code, it completely changed how I approach problems with AI coding agents.
Before that, I did what most people do:
Ask for a feature → let the agent write code → fix the mess afterward.
The results were ...]]></description><link>https://jayadeep.me/ai-agent-plan-mode-deep-dive</link><guid isPermaLink="true">https://jayadeep.me/ai-agent-plan-mode-deep-dive</guid><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[cursor]]></category><category><![CDATA[opencode]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Jayadeep km]]></dc:creator><pubDate>Wed, 11 Feb 2026 07:09:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770792699300/808a56f6-8224-4754-b3d7-0ae6200519ba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I first started using plan mode in Claude Code, it completely changed how I approach problems with AI coding agents.</p>
<p>Before that, I did what most people do:</p>
<p>Ask for a feature → let the agent write code → fix the mess afterward.</p>
<p>The results were as expected:</p>
<ul>
<li><p>Over-engineered implementations</p>
</li>
<li><p>Scattered edits across the repository that were hard to review</p>
</li>
<li><p>Code that technically worked but slowly turned into a black box</p>
</li>
</ul>
<p>Then I discovered plan mode.</p>
<p>Instead of jumping straight into implementation, the agent was forced to slow down, analyze the codebase, ask clarifying questions, and produce step-by-step instructions. The output became drastically better and more maintainable.</p>
<p>I assumed plan mode made the model think harder. It doesn’t. It simply removes the ability to act. That constraint forces analysis first. Understanding this distinction is the key to using AI coding agents effectively. Let’s break it down.</p>
<p>Plan mode works by changing three things.</p>
<ul>
<li><p>The system prompt</p>
</li>
<li><p>Allowed tools</p>
</li>
<li><p>Execution context</p>
</li>
</ul>
<h2 id="heading-system-prompt">System Prompt</h2>
<p>Every coding agent runs with a <strong>system prompt</strong> that defines its role, rules, and constraints. On each step, the model combines this with the user prompt, current workspace state, and tool outputs to decide what to do next. Because it has the highest priority, it enforces safe, consistent behavior and overrides conflicting instructions.</p>
<p>In build mode, the objective is execution. The agent is encouraged to modify files, run commands, and complete the task as quickly as possible. (Interestingly, Codex is designed to be more autonomous, while Claude is designed to collaborate more with the user. That is a topic for a separate blog post.)</p>
<p>In plan mode, the objective shifts to analysis. The agent is instructed to read the codebase, ask clarifying questions, and produce a step-by-step plan without making changes.</p>
<p>You can find a reverse-engineered system prompt for Claude Code plan mode <a target="_blank" href="https://github.com/Piebald-AI/claude-code-system-prompts/blob/main/system-prompts/agent-prompt-plan-mode-enhanced.md">here</a>. The interesting part in that system prompt is:</p>
<pre><code class="lang-markdown">=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY planning task. You are STRICTLY PROHIBITED from:
<span class="hljs-bullet">-</span> Creating new files (no Write, touch, or file creation of any kind)
<span class="hljs-bullet">-</span> Modifying existing files (no Edit operations)
<span class="hljs-bullet">-</span> Deleting files (no rm or deletion)
<span class="hljs-bullet">-</span> Moving or copying files (no mv or cp)
<span class="hljs-bullet">-</span> Creating temporary files anywhere, including /tmp
<span class="hljs-bullet">-</span> Using redirect operators (&gt;, &gt;&gt;, |) or heredocs to write to files
<span class="hljs-bullet">-</span> Running ANY commands that change system state

Your role is EXCLUSIVELY to explore the codebase and design implementation plans. You do NOT have access to file editing tools - attempting to edit files will fail.
</code></pre>
<p>Similarly, you can find the exact plan mode system prompt for opencode in their open source codebase <a target="_blank" href="https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/plan.txt">here</a>.</p>
<p>Another part of the prompt that I really like is:</p>
<pre><code class="lang-markdown"><span class="hljs-strong">**NOTE:**</span> At any point in time through this workflow you should feel free to ask the user questions or clarifications. Don't make large assumptions about user intent. The goal is to present a well researched plan to the user, and tie any loose ends before implementation begins.
</code></pre>
<p>This is the key part that makes plan mode powerful. Instead of making assumptions, the agent asks the user clarifying questions until it can produce a concrete plan. The same model produces very different output simply because the goal defined in the prompt is different.</p>
<h2 id="heading-tool-restrictions">Tool restrictions</h2>
<p>In Claude Code, <code>plan</code> mode is not just a system prompt but also a permission model. A list of available tools is sent along with the system prompt to the model. The model then returns instructions for the agent to execute those tools and send the output in the next step. The coding agent makes sure that only the allowed tools are executed, irrespective of what the model asks to do.</p>
<p>OpenCode also behaves in a similar way. <strong>Here</strong> is the source where it defines the default permissions for plan mode. It resolves to something like the following:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">"plan":</span> {
      <span class="hljs-attr">"model":</span> <span class="hljs-string">"&lt;default&gt;"</span>,
      <span class="hljs-attr">"tools":</span> {
        <span class="hljs-attr">"write":</span> <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"edit":</span> <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"bash":</span> <span class="hljs-literal">false</span>
      }
  }
</code></pre>
<p>Because it cannot modify anything, it is forced to reason first. This constraint has a larger impact on output quality than any prompt wording. Removing the ability to act naturally encourages the model to analyze.</p>
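<p>The enforcement can be pictured as a simple allowlist check on the agent side. This is a toy sketch only (not code from Claude Code or OpenCode; the tool names are illustrative):</p>
<pre><code class="lang-shell">#!/usr/bin/env bash
# Toy sketch of agent-side tool gating: in plan mode the model can
# still *request* any tool, but the agent only executes read-only ones.
PLAN_MODE_ALLOWED="read grep glob ls"

dispatch_tool() {
  local tool="$1"
  for allowed in $PLAN_MODE_ALLOWED; do
    if [ "$tool" = "$allowed" ]; then
      echo "executing: $tool"
      return 0
    fi
  done
  # Denied requests go back to the model as tool errors, which
  # nudges it toward analysis instead of action.
  echo "denied: $tool"
  return 1
}

dispatch_tool read           # prints "executing: read"
dispatch_tool edit || true   # prints "denied: edit"
</code></pre>
<p>The point is that the guarantee lives in the agent, not in the model’s goodwill: even if the model ignores the prompt and asks for an edit, the request never executes.</p>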
<h2 id="heading-the-role-of-the-plan-artifact">The role of the plan artifact</h2>
<p>The plan itself is not just documentation. It is an execution artifact. In most plan-based agents, the plan becomes the input to the next phase. How this handoff happens has a big impact on behavior.</p>
<h3 id="heading-claude-code">Claude code</h3>
<p>In <strong>Claude Code</strong>, plan mode produces a detailed markdown file with explicit, step-by-step instructions. This file is saved to a special directory (usually under the <code>~/.claude</code> folder) and treated as the single source of truth for the next phase.</p>
<p>When switching from plan → build:</p>
<ul>
<li><p>the session context is cleared</p>
</li>
<li><p>a fresh agent starts</p>
</li>
<li><p>only the markdown file is provided as input</p>
</li>
</ul>
<p>Effectively, <strong>plan → plan.md → prompt for build</strong></p>
<p>Because the steps are already concrete, you can often allow edits and execute the entire change in one pass. In practice, this works well. I frequently iterate on the plan first, refine it until it’s precise, and only then switch to build. Once the plan is solid, implementation becomes mostly mechanical.</p>
<p>Sometimes I keep the plan inside the project directory instead of the default location so I can review and tweak it manually before execution.</p>
<h3 id="heading-cursor">Cursor</h3>
<p><strong>Cursor</strong> follows a similar model. It also generates a plan artifact and treats it as the basis for execution. Since it’s an IDE, you can inspect and edit the plan directly before building. The behavior is effectively the same: clear context and execute from the plan.</p>
<h3 id="heading-opencode">Opencode</h3>
<p><strong>OpenCode</strong> handles this differently. There is no dedicated plan file by default.</p>
<p>The model produces a plan conversationally, but when switching to build:</p>
<ul>
<li><p>the session context is preserved</p>
</li>
<li><p>only the system prompt changes</p>
</li>
<li><p>edit tools become available</p>
</li>
</ul>
<p>So the same conversation continues with more permissions. It uses the <code>build-switch</code> system prompt to indicate that the model is now allowed to use edit tools (<a target="_blank" href="https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/build-switch.txt"><strong>source</strong></a>).</p>
<p>Because context is not reset and there is no explicit plan artifact, execution is less deterministic. The model still carries earlier assumptions and exploratory reasoning, which sometimes leads to premature edits. In my experience, this makes one-shot execution less reliable compared to Claude’s “plan → clean build” workflow. This is much less of an issue if you are using a high-reasoning model, though.</p>
<p>To get similar behavior, I explicitly ask OpenCode to write a <code>plan.md</code>, refine it manually, then start a fresh session and feed that file as input myself. That recreates the same “compiled prompt” approach.</p>
<h2 id="heading-summary-when-to-use-plan-mode-vs-build-mode">Summary: When to Use Plan Mode vs Build Mode</h2>
<p>Most official docs explain plan vs build in abstract terms. I found that too vague. After using these agents daily, I follow a simple rule of thumb.</p>
<h3 id="heading-build-mode">Build mode</h3>
<p>Use build when the task is simple and deterministic.</p>
<ul>
<li><p>easy, one-shot changes</p>
</li>
<li><p>straightforward implementation</p>
</li>
<li><p>only one obvious way to do it</p>
</li>
<li><p>I already know exactly how I’d code it myself</p>
</li>
<li><p>writing documentation, RFC or specs</p>
</li>
</ul>
<p>Most people assume docs or RFCs belong in plan mode. In practice, they’re iterative editing tasks. It’s faster to write, tweak, and refine directly in build mode.</p>
<h3 id="heading-plan-mode">Plan mode</h3>
<p>Use plan when thinking matters more than typing.</p>
<ul>
<li><p>changes that affect multiple files</p>
</li>
<li><p>unclear or open-ended problems</p>
</li>
<li><p>I’m not sure how to implement it yet</p>
</li>
<li><p>I want the agent to propose steps first</p>
</li>
<li><p>brainstorming or exploring a codebase</p>
</li>
<li><p>I don’t want the agent touching files yet</p>
</li>
</ul>
<p>If I have to think, I use plan. If I already know the solution, I use build.</p>
<h2 id="heading-final-takeaway">Final takeaway</h2>
<p>Plan mode doesn’t make the model smarter. It makes the instructions clearer. Clear instructions produce better code. Most developers don’t naturally write detailed prompts for an LLM. Plan mode does that for you. It turns vague intent into a concrete specification, and that specification becomes the prompt for the build agent.</p>
<p>Once you understand this, the workflow becomes simple: plan first, then execute.</p>
<p>That shift alone dramatically improves reliability.</p>
]]></content:encoded></item><item><title><![CDATA[My €400 Budget NAS That Replaced Google Drive and the Cloud]]></title><description><![CDATA[I wanted to setup a NAS server for quite some time. So finally I took the time to put together a NAS which is high-performing and on a tight budget. Let me walk you through the hardware and software components for the NAS and how I set them up.
TL;DR...]]></description><link>https://jayadeep.me/my-budget-nas-that-replaced-google-drive-and-the-cloud</link><guid isPermaLink="true">https://jayadeep.me/my-budget-nas-that-replaced-google-drive-and-the-cloud</guid><category><![CDATA[network attached storage]]></category><category><![CDATA[storage]]></category><category><![CDATA[TrueNAS]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[Backup]]></category><dc:creator><![CDATA[Jayadeep km]]></dc:creator><pubDate>Sat, 07 Feb 2026 10:27:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770459991485/0bbf2031-07a8-4938-b409-143f163cfc41.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had wanted to set up a NAS server for quite some time, so I finally took the time to put together a high-performing NAS on a tight budget. Let me walk you through the hardware and software components of the NAS and how I set them up.</p>
<p><strong>TL;DR</strong></p>
<ul>
<li><p>€400 total cost</p>
</li>
<li><p>4TB mirrored storage</p>
</li>
<li><p>TrueNAS + Docker apps</p>
</li>
<li><p>Offsite backups to Hetzner</p>
</li>
<li><p>Handles Photos, Media, Kubernetes backups</p>
</li>
</ul>
<h1 id="heading-the-case-and-cpu"><strong>The Case and CPU</strong></h1>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/case.png" alt="PC" class="image--center mx-auto" /></p>
<p>Disks and backups matter far more than CPU for a NAS. Even an old i5 is overkill unless you're transcoding or running many containers. I wanted something that could run a few Docker containers and was also extensible. That’s how I came across this used PC, an <strong>HP EliteDesk 800 G2</strong>, on <a target="_blank" href="https://www.ebay.ch/itm/255736941927"><strong>eBay</strong></a>. It was perfect for my use case for the following reasons:</p>
<ul>
<li><p>The 6th-gen i5 is power-efficient and can easily run a few Docker containers in addition to the NAS server components. The whole setup consumes around ~25W under moderate load with a few containers.</p>
</li>
<li><p>There are slots for 3 hard disk drives and SSDs, with 5 SATA ports available.</p>
</li>
<li><p>Highly extensible, with three PCIe x4 slots and a PCIe x16 slot</p>
</li>
<li><p>Supports up to 64GB of RAM. The machine came with 8GB included (2x4GB), which was enough for a start.</p>
</li>
<li><p>It supports Intel QuickSync for video transcoding</p>
</li>
</ul>
<p>The motherboard and CPU don’t support ECC memory, but that was a trade-off I was willing to accept.</p>
<p><strong>Cost: 70 Euros</strong></p>
<h1 id="heading-hard-disks"><strong>Hard Disks</strong></h1>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/hdd.webp" alt="HDD" /></p>
<p>This is where I made my first mistake. I went for really cheap SSDs from AliExpress: two 4TB drives from a seller named <strong>KANDA Store</strong> <a target="_blank" href="https://www.aliexpress.com/item/1005005796672976.html?spm=a2g0o.order_list.order_list_main.39.2cd91802vt80V6"><strong>here</strong></a>. 4TB SSDs for $40 was obviously too good to be true. These drives are usually fake firmware paired with tiny flash chips.</p>
<p><strong>Lesson learned</strong>: <em>Never trust suspiciously cheap storage.</em></p>
<p>Learning from my mistake, I ordered a pair of <strong>WD Red Plus 4TB</strong> drives from <a target="_blank" href="https://www.digitec.ch/en/order/117157227"><strong>Digitec</strong></a>. They are nearly silent, the performance was better than I expected, and they came with a 5-year warranty.</p>
<p>This was the most expensive component of the NAS build. As someone who takes backups seriously, it was well worth spending on reliable hard disks.</p>
<p>I had a 120GB SSD from an old laptop lying around, so I decided to use that as the boot disk.</p>
<p>The layout I planned was pretty simple: 2x4TB disks in a mirrored pool (RAID1), with the 120GB SSD as the boot drive. Mirroring cuts the capacity in half but protects against a single drive failure, providing about <strong>3.6TB</strong> of usable storage space.</p>
<blockquote>
<p><strong>Remember</strong>: RAID is not backup - it only protects against disk failure, not deletion or corruption.</p>
</blockquote>
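<p>On TrueNAS the pool is created through the web UI, but under the hood it boils down to a ZFS mirror. A rough command-line equivalent (device names are illustrative) would be:</p>
<pre><code class="lang-shell"># Create a mirrored pool named "tank" from the two 4TB disks.
# A mirror vdev keeps a full copy on each disk, so usable capacity
# is that of a single disk (~3.6TB after formatting overhead).
zpool create tank mirror /dev/sda /dev/sdb
zpool status tank   # shows both disks under a single mirror-0 vdev
</code></pre>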
<p><strong>Cost: 220 Euros</strong></p>
<h1 id="heading-memory"><strong>Memory</strong></h1>
<p>The PC came with 8GB of RAM included, and that was more than enough for the initial build. Once I decided to run more apps like Jellyfin, Immich, and MinIO on the NAS, it made sense to add more RAM.</p>
<p>The motherboard has 4 slots available for RAM, so I bought two 16GB memory sticks from AliExpress <a target="_blank" href="https://www.aliexpress.com/item/1005002946536496.html?spm=a2g0o.order_list.order_list_main.5.2cd91802vt80V6"><strong>here</strong></a>. This gave me 32GB of RAM with room for expansion later if needed.</p>
<p><strong>Cost: 54 Euros</strong></p>
<h1 id="heading-total-cost"><strong>Total Cost</strong></h1>
<p>The minimal version of the NAS was ready for ~290 Euros. With the additional RAM and a few other components (LAN cables, a switch, SATA cables, etc.), the total cost came to just over 400 Euros. But I was able to cut down my cloud spending on backups and Google storage to compensate for this.</p>
<h1 id="heading-services-i-run"><strong>Services I run</strong></h1>
<p>Here is a screenshot of my storage layout.</p>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/nas.png" alt="NAS" /></p>
<p>Below are the most important services my NAS runs:</p>
<ul>
<li><p>NFS server used by my homelab kubernetes cluster for PVC backups</p>
</li>
<li><p>Storage for Immich, an alternative to Google Photos for storing my photos and videos</p>
</li>
<li><p>Samba server <code>studio</code>, used for video editing across devices</p>
</li>
<li><p>NFS server for <code>tv</code>. It hosts my home videos as well as content I download through the *arr stack</p>
</li>
</ul>
<p>I will make a separate post to go in depth about the services I’m running in the Homelab.</p>
<h1 id="heading-data-protection"><strong>Data Protection</strong></h1>
<p>By moving my digital life to a self-maintained NAS server, I also bear the risk of losing everything in the event of a catastrophic failure (as already happened when both of the AliExpress SSDs failed at the same time). So it is important to have an offsite or cloud backup of the entire NAS that I can restore from.</p>
<p>The Hetzner <a target="_blank" href="https://www.hetzner.com/storage/storage-box/"><strong>Storage Box</strong></a> was the perfect solution for this. 1TB of storage costs only ~3.8 Euros per month, and it comes with SSH and rsync access. I just had to set up rsync cron jobs on my NAS to back up periodically.</p>
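<p>As a sketch, such a backup job can be a single crontab entry (the user, host, and paths below are placeholders; Hetzner Storage Boxes accept rsync over SSH on port 23):</p>
<pre><code class="lang-shell"># Hypothetical crontab entry on the NAS: push the pool to the
# Storage Box every night at 03:00.
0 3 * * * rsync -az --delete -e "ssh -p 23" /mnt/tank/ u123456@u123456.your-storagebox.de:nas-backup/
</code></pre>
<p><code>--delete</code> keeps the remote copy an exact mirror; drop it if you want the backup to retain files that were deleted locally.</p>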
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/rsync.png" alt="Rsync" /></p>
<p>Another important part is setting up alerts correctly. TrueNAS has a sensible default configuration for alerting. I used Telegram alerts in addition to email notifications. I already had a <code>homelab</code> channel set up in Telegram for Alertmanager notifications, so it made sense to re-use it for the NAS as well.</p>
<h1 id="heading-running-apps"><strong>Running Apps</strong></h1>
<p>I was hesitant to use the hypervisor from TrueNAS to run apps, as VMs are more resource-intensive than Docker containers. Then I came across a cool project named <a target="_blank" href="https://github.com/Jip-Hop/jailmaker"><strong>Jailmaker</strong></a>. By following the tutorial in the readme, I was able to quickly set it up along with Docker and <a target="_blank" href="https://dockge.kuma.pet/"><strong>Dockge</strong></a>.</p>
<p>Dockge is a UI on top of Docker and docker-compose. It lets you create and manage Docker stacks through the UI, which is quite handy given the limited shell access from TrueNAS (and the additional pain of exec-ing into the jails).</p>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/dockge.png" alt="Dockge" /></p>
<p>I run the following services in Dockge:</p>
<ul>
<li><p>Handbrake -&gt; Batch video transcoding before archiving</p>
</li>
<li><p>Immich -&gt; Alternative to Google Photos. It has almost all the features Google Photos has, and a very similar UI.</p>
</li>
<li><p>Jellyfin -&gt; For media playback across devices. I was able to set up hardware transcoding with Intel QuickSync by configuring GPU passthrough in Docker.</p>
</li>
<li><p>MinIO -&gt; Alternative to Amazon S3. I use it for restic backups from my PC and a few apps running on k8s</p>
</li>
<li><p>Nginx Proxy Manager -&gt; A graphical front-end to Nginx. I use it as a reverse proxy and certificate manager for the above apps</p>
</li>
</ul>
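<p>For reference, the QuickSync passthrough for Jellyfin amounts to handing the container the Intel iGPU’s render device. A hedged <code>docker run</code> sketch (paths are illustrative; in Dockge this would live in a compose stack instead):</p>
<pre><code class="lang-shell"># Expose /dev/dri so Jellyfin can use VA-API/QuickSync for
# hardware transcoding; mount the media share read-only.
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /mnt/tank/tv:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
</code></pre>
<p>Hardware transcoding still has to be enabled afterwards in Jellyfin’s playback settings.</p>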
<h1 id="heading-monitoring-and-uptime-checks"><strong>Monitoring and Uptime checks</strong></h1>
<p>So I have alerts set up for problems with the regular operation of the NAS. But what about the apps running in Docker? What if the whole NAS is down, or the network is down?</p>
<p>For this, I have two more things set up.</p>
<ul>
<li><p>Healthchecks → Whole server dead?</p>
</li>
<li><p>Uptime Kuma → Specific service dead?</p>
</li>
</ul>
<h3 id="heading-healthchecksio"><strong>Healthchecks.io</strong></h3>
<p><a target="_blank" href="https://healthchecks.io/"><strong>healthchecks.io</strong></a> is a dead man’s switch service that notifies you when a service goes down. The way it works is quite simple: the server has to ping a health-check endpoint at a predefined interval (I set it to 1 minute in my case). If it fails to do so, the service notifies you by email or other channels. This is quite handy for catching cases where the network itself is down or the whole NAS has stopped responding.</p>
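<p>The ping itself can be a one-line cron job (the UUID below is a placeholder for your check’s URL):</p>
<pre><code class="lang-shell"># Hypothetical crontab entry: ping healthchecks.io every minute.
# If pings stop arriving (NAS offline, network down), the service alerts.
* * * * * curl -fsS -m 10 --retry 3 https://hc-ping.com/&lt;your-check-uuid&gt; &gt; /dev/null
</code></pre>
<p>The flags keep cron quiet but failures visible: <code>-f</code> treats HTTP errors as failures, <code>-sS</code> silences progress output while still showing errors, and <code>-m 10</code> caps the request at 10 seconds.</p>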
<h3 id="heading-uptime-kuma"><strong>Uptime Kuma</strong></h3>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/uptime.png" alt="Uptime" /></p>
<p><a target="_blank" href="https://github.com/louislam/uptime-kuma"><strong>Uptime Kuma</strong></a> can ping your services on regular intervals and notify if the service is down. It even gives a nice status page summarizing the uptime of various services.</p>
<p><img src="https://blog.jayadeep.me/posts/4-my-budget-nas-setup/status.png" alt="Status" /></p>
<p>I’m running Uptime Kuma in my Kubernetes cluster. So unless both the k8s cluster and the NAS apps fail at the same time, this should work as expected. And if both die at the same time, I probably have bigger problems.</p>
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>Was it worth the effort and money? Absolutely! Setting up everything and configuring the automations, backups etc. took quite a long time. For €400 I now have full control over my data, lower monthly cloud costs, and a setup I can expand anytime. For me, that trade-off was easily worth it.</p>
]]></content:encoded></item><item><title><![CDATA[From Arch Linux to NixOS: Diving into the Nix Ecosystem – Part 1]]></title><description><![CDATA[Nix is a concept known for its steep learning curve. It is hard to understand, harder to explain and nearly impossible to master - even for the nerdiest Linux enthusiasts.
In this post, I would like to share my journey of embracing Nix as a long-time...]]></description><link>https://jayadeep.me/arch-to-nixos-part1</link><guid isPermaLink="true">https://jayadeep.me/arch-to-nixos-part1</guid><category><![CDATA[Nix]]></category><category><![CDATA[ArchLinux]]></category><category><![CDATA[Declarative Programming]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Jayadeep km]]></dc:creator><pubDate>Fri, 06 Feb 2026 07:04:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770361480106/9a8d481d-3646-4e19-b83d-dedf2697bde3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Nix is a concept known for its steep learning curve. It is hard to understand, harder to explain and nearly impossible to master - even for the nerdiest Linux enthusiasts.</p>
<p>In this post, I would like to share my journey of embracing Nix as a long-time Arch Linux user. Hopefully I can inspire a few people to get curious about Nix and even start using it.</p>
<p>This is Part 1 of the series, where I’ll cover how I got started and the background that led me here. Future posts will dive deeper into the technical side.</p>
<h1 id="heading-where-i-come-from"><strong>Where I come from</strong></h1>
<p>I used Arch Linux as my daily driver for nearly five years. It was surprisingly stable and easy to use; the Ubuntu system I used before had more issues than the so-called “unstable” Arch Linux. I used the i3 window manager, with Vim as my editor of choice, and updated packages quite frequently. Arch Linux is usually the first to receive package updates, and the AUR (Arch User Repository) had almost every package I needed. The Arch Wiki is great, and most issues were easy to fix with a bit of googling, thanks to a huge community of users.</p>
<p>However, there were a few quirks that annoyed me a little. I had to reinstall the OS a few times, due to a broken hard disk, migrating to a different laptop, and so on. Every time I reinstalled, it took me over a day to set everything up, and I still ended up with a slightly different configuration. Each time I fixed an issue (for example, getting Bluetooth to work), I would forget how it was done and have to research it all over again the next time. I eventually started documenting the packages I installed and the fixes for common issues, but those notes became quite messy over time.</p>
<h1 id="heading-declarative-programming"><strong>Declarative Programming</strong></h1>
<p>I was quite fascinated by the idea of declarative programming. From <a target="_blank" href="https://en.wikipedia.org/wiki/Declarative_programming"><strong>Wikipedia</strong></a>:</p>
<blockquote>
<p><em>Declarative programming is a non-imperative style of programming in which programs describe their desired results without explicitly listing commands or steps that must be performed</em></p>
</blockquote>
<p>I was working heavily with Terraform at the time, so the idea of declaratively configuring an entire OS sounded really interesting to me. To explain why this is such a game-changing concept, here is a simple example of the difference between the two styles:</p>
<p>Imagine you want to create a user on your system. In the imperative style, you would:</p>
<ol>
<li><p>Create new user - <code>jayadeep</code></p>
</li>
<li><p>Assign group for the user - <code>net-admins</code></p>
</li>
<li><p>Create home directory - <code>/home/jayadeep</code></p>
</li>
<li><p>Assign home directory permissions</p>
</li>
</ol>
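<p>As a shell sketch, the imperative version might look like this (exact commands and flags vary by distribution; this assumes the usual shadow-utils tools):</p>

```shell
# Imperative: each step is an explicit command, executed in order.
groupadd net-admins                      # make sure the group exists
useradd jayadeep                         # 1. create the user
usermod -aG net-admins jayadeep          # 2. assign the group
mkdir -p /home/jayadeep                  # 3. create the home directory
chown jayadeep:jayadeep /home/jayadeep   # 4. assign home directory permissions
```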
<p>You just instruct the system to perform these steps in order, and the system blindly accepts the instructions and executes them. In the declarative style, it would instead be:</p>
<p>Make sure following user exists with the given configuration:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">jayadeep</span>
<span class="hljs-attr">groups:</span> [<span class="hljs-string">net-admins</span>]
<span class="hljs-attr">homeDir:</span> <span class="hljs-string">'/home/jayadeep'</span>
</code></pre>
<p>Then the corresponding program runs the logic needed to ensure that a user with the given attributes exists on the system. <a target="_blank" href="https://dev.to/khophi/explain-declarative-vs-imperative-programming-like-i-m-5-2a1l"><strong>Here</strong></a> is a more detailed explanation of the difference between the two.</p>
<p>In short, declarative syntax makes our life easier by hiding the implementation details.</p>
<h1 id="heading-hearing-about-nix-and-getting-started"><strong>Hearing about Nix, and getting started</strong></h1>
<p>Nix is a package manager, a build system, an operating system (NixOS), and much more. Before getting into the complexity of the Nix ecosystem, let me explain where I started. I first heard about how Nix can configure an entire OS declaratively, so I downloaded a NixOS ISO, booted it in a VM, and started tinkering with the configuration.</p>
<p>Here is a sample NixOS configuration:</p>
<pre><code class="lang-nix">{ config, pkgs, ... }:
{
  boot.loader.systemd-boot.<span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;
  boot.loader.efi.<span class="hljs-attr">canTouchEfiVariables</span> = <span class="hljs-literal">true</span>;

  users.users.<span class="hljs-attr">alice</span> = {
    <span class="hljs-attr">isNormalUser</span> = <span class="hljs-literal">true</span>;
    <span class="hljs-attr">extraGroups</span> = [ <span class="hljs-string">"wheel"</span> ]; <span class="hljs-comment"># Enable ‘sudo’ for the user.</span>
    <span class="hljs-attr">initialPassword</span> = <span class="hljs-string">"test"</span>;
  };

  environment.<span class="hljs-attr">systemPackages</span> = <span class="hljs-keyword">with</span> pkgs; [
    cowsay
    lolcat
  ];

  system.<span class="hljs-attr">stateVersion</span> = <span class="hljs-string">"23.11"</span>;
}
</code></pre>
<p>Although the syntax is a bit weird (it looks like JSON, but isn’t), the configuration is easy to read and understand. What it does is the following:</p>
<ul>
<li><p>Enable systemd bootloader (alternative to grub)</p>
</li>
<li><p>Add a user named <code>alice</code>, with initial password <code>test</code></p>
</li>
<li><p>Install <code>cowsay</code> and <code>lolcat</code> packages</p>
</li>
</ul>
<p>With a bit of googling, I came up with a basic configuration I could install and boot with, although I still didn’t have much of a clue about the inner workings of Nix. Once the configuration is ready, I just had to run the following command, which immediately applies the changes:</p>
<pre><code class="lang-bash">sudo nixos-rebuild switch
</code></pre>
<h1 id="heading-first-working-configuration"><strong>First working configuration</strong></h1>
<p>Here is my first commit on Git: <a target="_blank" href="https://github.com/kmjayadeep/nixdots/commit/cc68e219574a9a91cbc18fb2878a0be3840036fd"><strong>Nixdots</strong></a></p>
<p>This was my first working configuration, after days of tinkering with the config, restructuring, and following various tutorials.</p>
<p>One clear advantage NixOS has over other distributions is the sheer size of the Nix package ecosystem. It has over 80,000 packages available in its repositories, which is more than the AUR or any other package manager out there. So I was confident I would be able to configure the system well enough to completely switch from Arch Linux.</p>
<h1 id="heading-my-first-impressions"><strong>My first impressions</strong></h1>
<p>Declaring the list of packages like <a target="_blank" href="https://github.com/kmjayadeep/nixdots/blob/4c732a4671903804f4dede9a0a40ffd6d4d16771/modules/packages.nix"><strong>this</strong></a> was life-changing for me. It allowed me to discard my messy notes. At any point in time, I know exactly what packages are installed on the system just by looking at this file, which makes it really easy to reproduce the system after a reinstall.</p>
<p>Look how <a target="_blank" href="https://github.com/kmjayadeep/nixdots/blob/4c732a4671903804f4dede9a0a40ffd6d4d16771/modules/system.nix#L89"><strong>easy</strong></a> it is to enable Bluetooth in NixOS!</p>
<pre><code class="lang-nix"><span class="hljs-attr">hardware</span> = {
  <span class="hljs-attr">bluetooth</span> = {
    <span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;
    <span class="hljs-attr">settings</span> = {
      <span class="hljs-attr">General</span> = {
        <span class="hljs-comment"># Show battery percentage of bt headset</span>
        <span class="hljs-attr">Experimental</span> = <span class="hljs-literal">true</span>;
      };
    };
  };
};
</code></pre>
<p>This small snippet did all the magic. Since NixOS forces us to manage everything declaratively, once we fix an issue and persist it in code, it is fixed forever! Every package and every configuration option is versioned and hashed, and Nix goes to great lengths to ensure that the result is highly reproducible, byte for byte, no matter where or when the configuration is applied.</p>
<h1 id="heading-summary-and-upcoming-posts"><strong>Summary and Upcoming posts</strong></h1>
<p>You can find my current NixOS config on GitHub: <a target="_blank" href="https://github.com/kmjayadeep/nixdots">https://github.com/kmjayadeep/nixdots</a>. In the upcoming posts, I will explain more about Nix package manager, Flakes, Devshells and more.</p>
]]></content:encoded></item><item><title><![CDATA[Taking notes with SilverBullet.md]]></title><description><![CDATA[About two years ago, I wrote about my productivity trio, which includes Notes, MindMap, and Todo lists. In that article, I explained my setup using simple Bash scripts and plain text Markdown files to organize my notes. As a big fan of simple tools a...]]></description><link>https://jayadeep.me/taking-notes-with-silverbulletmd</link><guid isPermaLink="true">https://jayadeep.me/taking-notes-with-silverbulletmd</guid><category><![CDATA[Productivity]]></category><category><![CDATA[silverbullet]]></category><dc:creator><![CDATA[Jayadeep km]]></dc:creator><pubDate>Thu, 05 Feb 2026 05:58:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770269641891/0a3fff63-0c5f-4826-92bc-cbd4614547eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>About two years ago, I <a target="_blank" href="https://medium.com/@jayadeepkm/notes-mindmap-and-todo-how-i-set-up-the-perfect-productivity-trio-using-simple-bash-scripts-5376ff750170"><strong>wrote</strong></a> about my productivity trio, which includes Notes, MindMap, and Todo lists. In that article, I explained my setup using simple Bash scripts and plain text Markdown files to organize my notes. As a big fan of simple tools and plain text note-taking, I was thrilled to discover <a target="_blank" href="http://Silverbullet.md"><strong>SilverBullet.md</strong></a> a few months ago. I liked it immediately and ended up migrating all my notes to it.</p>
<p>So what makes it interesting? Here’s why I switched:</p>
<h2 id="heading-markdown-local-files">Markdown + Local Files</h2>
<p>Like Obsidian, SilverBullet stores everything as Markdown files. That means I can keep using all my existing plain-text tools.</p>
<p>I run a cron job that automatically creates Git commits, so I can easily revert or browse history. I also use <a target="_blank" href="https://syncthing.net/">Syncthing</a> to sync across multiple devices. And when I need heavy editing or bulk reorganization, I just open the folder in Neovim.</p>
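<p>The commit step itself is nothing fancy. Here is a plain-shell sketch of the idea (my actual job differs in the details, and the function name and message format below are just examples):</p>

```shell
# Commit any pending changes in a notes repo, for use from a cron job.
auto_commit() {
  dir="$1"
  git -C "$dir" add -A
  # Only create a commit when something actually changed
  git -C "$dir" diff --cached --quiet ||
    git -C "$dir" commit -q -m "auto-commit: $(date -u +%Y-%m-%d)"
}
```

<p>Wrapped in a small script, this can be invoked from a crontab entry; if nothing changed since the last run, no commit is created.</p>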
<p>If I ever want to leave SilverBullet, I simply copy the files and move on. No vendor lock-in.</p>
<h2 id="heading-web-interface-with-pwa">Web interface with PWA</h2>
<p>This might sound counter-intuitive: why choose a web app over a native app?</p>
<p><strong>Portability</strong>.</p>
<p>I host SilverBullet on a cheap $5 Hetzner cloud server that I already use for other projects. Then I install it as a PWA on my phone and tablet. Now my notes are available everywhere.</p>
<p>SilverBullet caches the entire space using web workers, so it works fully offline too. It honestly feels like a native app on every platform.</p>
<p>Here is my Podman Compose definition for SilverBullet</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">psuite-wiki:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">docker.io/zefhemel/silverbullet:2.3.0</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">psuite-wiki</span>
    <span class="hljs-attr">hostname:</span> <span class="hljs-string">psuite-wiki</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">always</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">SB_FOLDER=/psuite-data/psuite-wiki/data</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">SB_NAME=Wiki</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">SB_DESCRIPTION=Jayadeep's</span> <span class="hljs-string">personal</span> <span class="hljs-string">Wiki</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">GUID=1000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">PUID=1000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">SB_USER=wiki:${WIKI_PASSWORD}</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/opt/podman-compose/mounts/syncthing/data:/psuite-data</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"3000:3000"</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">psuite-network</span>
    <span class="hljs-attr">healthcheck:</span>
      <span class="hljs-attr">test:</span> [<span class="hljs-string">"CMD"</span>, <span class="hljs-string">"wget"</span>, <span class="hljs-string">"--quiet"</span>, <span class="hljs-string">"--tries=1"</span>, <span class="hljs-string">"--spider"</span>, <span class="hljs-string">"http://localhost:3000"</span>]
      <span class="hljs-attr">interval:</span> <span class="hljs-string">30s</span>
      <span class="hljs-attr">timeout:</span> <span class="hljs-string">10s</span>
      <span class="hljs-attr">retries:</span> <span class="hljs-number">3</span>
      <span class="hljs-attr">start_period:</span> <span class="hljs-string">15s</span>
</code></pre>
<h2 id="heading-programmable-notes">Programmable notes</h2>
<p>SilverBullet comes with a built-in embedded language called <a target="_blank" href="https://silverbullet.md/Space%20Lua">Space Lua</a>. You can add scripts directly inside your notes and render dynamic content in real time. This is a powerful feature that lets you extend your notes, for example:</p>
<ul>
<li><p>Query your notes like a database. Filter using tags or attributes and show them as a table or in any format you like.</p>
</li>
<li><p>Make network calls or execute Bash scripts. For example, I have a script written in Space Lua that runs <code>git commit</code> as a cron job</p>
</li>
</ul>
<p>Here is an example from my notes. The MindMap page contains a custom command that can create a new MindMap page based on today’s date. Then there is a Lua query that lists all MindMap pages in a table.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770269682089/f2bb5d90-e193-4e6a-a05e-d8911152f117.png" alt class="image--center mx-auto" /></p>
<p>The corresponding query is</p>
<pre><code class="lang-lua">${ template.each(query[[
  from index.tag "mindmap"
]], templates.fullPageItem)}
</code></pre>
<p>If you are interested in my setup, all my homelab and cloud configuration is open source. You can find the Ansible playbook for SilverBullet and related services <a target="_blank" href="https://github.com/kmjayadeep/homelab-iac/tree/main/hetzner/ansible-nova">here</a>.</p>
]]></content:encoded></item></channel></rss>