<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://manuelmc.pocosmhz.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://manuelmc.pocosmhz.org/" rel="alternate" type="text/html" /><updated>2026-02-22T17:38:56+00:00</updated><id>https://manuelmc.pocosmhz.org/feed.xml</id><title type="html">Manuel Molina’s blog</title><subtitle>Systems engineering. Tools. Clouds. Whatever.</subtitle><entry><title type="html">NordPass login issue under Debian</title><link href="https://manuelmc.pocosmhz.org/2026/02/22/nordpass-login-issue-debian.html" rel="alternate" type="text/html" title="NordPass login issue under Debian" /><published>2026-02-22T17:06:00+00:00</published><updated>2026-02-22T17:06:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2026/02/22/nordpass-login-issue-debian</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2026/02/22/nordpass-login-issue-debian.html"><![CDATA[<p>I’ve been using <a href="https://nordpass.com">NordPass</a> as my password manager of choice for some years now.</p>

<p>It works like a charm under Mac and Ubuntu Linux. However, I ran into an issue when Debian became my new desktop of choice.</p>

<p>Thanks to ChatGPT I’ve been able to fix an issue that has been annoying me for months.</p>

<p>Under Linux, no matter what, NordPass comes as a <a href="https://en.wikipedia.org/wiki/Snap_(software)">Snap application</a>. Once you have installed and started the app, it takes you to your default web browser so you can authenticate yourself on their website.</p>

<p>Once you’re authenticated, you’re offered a link of the form <code class="language-plaintext highlighter-rouge">nordpass://vault?action=login&amp;status=done&amp;verify_token=dfc088f3ab8d5be4b554d5d994dffc4cb5c0fb2e676bcf427b6322c4c3bb2f62</code> that you can open, so it takes you back to the desktop app.</p>

<p>Well, that link does open under Ubuntu, but not under Debian. The reason? This is a custom protocol handler, and my suspicion is that Firefox ESR is not allowed to open external protocol handlers <em>by default</em>.</p>

<p>The steps to fix this are:</p>

<p>1) Check if Debian knows the protocol</p>

<p>Run:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">xdg-mime query default x-scheme-handler/nordpass</code></pre></figure>

<p>If nothing is returned, that’s the problem:
NordPass hasn’t registered itself properly.</p>

<p>2) Manually register NordPass as the handler</p>

<p>First find the Snap desktop file:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">ls</span> /var/lib/snapd/desktop/applications | <span class="nb">grep</span> <span class="nt">-i</span> nordpass</code></pre></figure>

<p>You should see something like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>snap.nordpass.nordpass.desktop
</code></pre></div></div>
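<p>For the registration to stick, that desktop file has to declare the scheme in its <code class="language-plaintext highlighter-rouge">MimeType</code> entry. This is the relevant line (a sketch; check the actual file shipped by the snap):</p>

```
# Inside snap.nordpass.nordpass.desktop, the line that matters for URL handling:
MimeType=x-scheme-handler/nordpass;
```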

<p>Now register it:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">xdg-mime default snap.nordpass.nordpass.desktop x-scheme-handler/nordpass</code></pre></figure>

<p>Then update the desktop database:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">update-desktop-database ~/.local/share/applications</code></pre></figure>

<p>Now test:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">xdg-open <span class="s2">"nordpass://test"</span></code></pre></figure>

<p>If NordPass opens, the issue is fixed.</p>
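<p>Under the hood, <code class="language-plaintext highlighter-rouge">xdg-mime default</code> simply records the association in <code class="language-plaintext highlighter-rouge">~/.config/mimeapps.list</code>. You can simulate the resulting entry on a throwaway file to see what it looks like (a sketch; the /tmp path is only for illustration):</p>

```shell
# Simulate the association line that `xdg-mime default` writes.
# The real file is ~/.config/mimeapps.list; /tmp is used here for illustration.
f=/tmp/mimeapps.list
printf '[Default Applications]\nx-scheme-handler/nordpass=snap.nordpass.nordpass.desktop\n' > "$f"
grep 'x-scheme-handler/nordpass' "$f"
```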

<p>3) If it still fails (Snap sandbox issue)</p>

<p>Snap apps sometimes cannot receive custom URL callbacks because of missing portal integration.</p>

<p>Check if the snap has desktop integration:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">snap connections nordpass</code></pre></figure>

<p>Look for:</p>

<p>desktop
desktop-legacy
xdg-desktop-portal</p>

<p>If not connected:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>snap connect nordpass:desktop
<span class="nb">sudo </span>snap connect nordpass:desktop-legacy</code></pre></figure>

<p>4) Firefox setting check</p>

<p>In Firefox:</p>

<ul>
  <li>Go to: <code class="language-plaintext highlighter-rouge">about:config</code></li>
  <li>Search: <code class="language-plaintext highlighter-rouge">network.protocol-handler.expose.nordpass</code></li>
  <li>If it exists and is set to true, change it to false.</li>
  <li>If it doesn’t exist, create:
    <ul>
      <li>Type: Boolean</li>
      <li>Name: network.protocol-handler.expose.nordpass</li>
      <li>Value: false</li>
    </ul>
  </li>
  <li>Restart Firefox.</li>
</ul>

<p>Setting <code class="language-plaintext highlighter-rouge">expose</code> to false tells Firefox not to handle the scheme itself and to hand it over to the system’s external handler instead.</p>
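<p>If you prefer to script this instead of clicking through <code class="language-plaintext highlighter-rouge">about:config</code>, the same preference can be shipped as a <code class="language-plaintext highlighter-rouge">user.js</code> line in your Firefox profile (a sketch; the real file lives under <code class="language-plaintext highlighter-rouge">~/.mozilla/firefox/PROFILE/user.js</code>, the /tmp path below is only for illustration):</p>

```shell
# Write the preference as a user.js line.
# Real target: ~/.mozilla/firefox/PROFILE/user.js (adjust the path).
prefs=/tmp/user.js
echo 'user_pref("network.protocol-handler.expose.nordpass", false);' > "$prefs"
cat "$prefs"
```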

<p>🔎 Why this happens</p>

<p>Snap applications:</p>

<ul>
  <li>Run sandboxed</li>
  <li>Sometimes fail to register URL schemes properly</li>
</ul>

<p>Debian 13 is stricter with portals and desktop integration.</p>

<p>This is not a NordPass login problem — it’s a custom protocol handler registration issue.</p>]]></content><author><name>Manuel Molina</name></author><category term="debian" /><category term="trixie" /><category term="nordpass" /><category term="chatgpt" /><summary type="html"><![CDATA[I’ve been using NordPass as my password manager of choice for some years now.]]></summary></entry><entry><title type="html">Debian boot and login screens customization</title><link href="https://manuelmc.pocosmhz.org/2026/02/22/debian-boot-and-login-customization.html" rel="alternate" type="text/html" title="Debian boot and login screens customization" /><published>2026-02-22T15:50:00+00:00</published><updated>2026-02-22T15:50:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2026/02/22/debian-boot-and-login-customization</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2026/02/22/debian-boot-and-login-customization.html"><![CDATA[<p>I like to take advantage of the UI customization options that Linux offers.
I’m currently using Debian 13 “Trixie”. Sometimes you like the default theme, sometimes you don’t.</p>

<p>In any case, I want to describe here the two main changes I made to my desktop.</p>

<h1 id="grub-default-image">GRUB default image</h1>
<p>In my personal opinion, the default <a href="https://en.wikipedia.org/wiki/GNU_GRUB">GRUB</a> background image for Debian 13 is a bit sad.</p>

<p>Thus, the steps I took to change it for something closer to my taste were:</p>

<p>1) GRUB has some limitations.</p>

<p>✅ Supported formats</p>

<ul>
  <li>PNG (recommended)</li>
  <li>JPG/JPEG</li>
  <li>TGA</li>
</ul>

<p>PNG works best.</p>

<p>Using a wallpaper from a Lenovo Thinkpad fan page, I created a customized image for this background. Using <a href="https://en.wikipedia.org/wiki/GIMP">GIMP</a>, I created a 1024x768 PNG image, which would suit almost any display at boot time.</p>

<p>You can check your GRUB resolution with:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># grep GRUB_GFXMODE /etc/default/grub</span></code></pre></figure>

<p>If not set, GRUB may default to something like 1024x768.</p>

<p>You can explicitly set resolution:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">GRUB_GFXMODE</span><span class="o">=</span>1920x1080
<span class="nv">GRUB_GFXPAYLOAD_LINUX</span><span class="o">=</span>keep</code></pre></figure>

<p>2) Put the image in the right place.</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># cp mybackground.png /boot/grub</span></code></pre></figure>

<p>3) Tell GRUB to Use the Image</p>

<p>Edit:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">sudo </span>nano /etc/default/grub</code></pre></figure>

<p>Add or modify this line:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">GRUB_BACKGROUND</span><span class="o">=</span><span class="s2">"/boot/grub/mybackground.png"</span></code></pre></figure>

<p>Make sure:</p>

<ul>
  <li>The path is correct</li>
  <li>Quotes are included</li>
  <li>The file exists</li>
</ul>
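<p>Putting steps 1 to 3 together, the relevant fragment of <code class="language-plaintext highlighter-rouge">/etc/default/grub</code> ends up looking like this (a sketch with the values used above; adjust the resolution to your display):</p>

```
# /etc/default/grub (fragment)
GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX=keep
GRUB_BACKGROUND="/boot/grub/mybackground.png"
```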

<p>4) Update GRUB</p>

<p>After saving:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">sudo </span>update-grub</code></pre></figure>

<p>This regenerates:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">/boot/grub/grub.cfg</code></pre></figure>

<p>Reboot to test.</p>

<p>5) (Optional) Enable Graphics Mode if Needed</p>

<p>If the background doesn’t appear, ensure graphics mode is enabled:</p>

<p>In <code class="language-plaintext highlighter-rouge">/etc/default/grub</code>, verify:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">GRUB_TERMINAL</span><span class="o">=</span>console</code></pre></figure>

<p>If it’s set to serial or something else, the background won’t show.</p>

<p>If needed, comment it out:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#GRUB_TERMINAL=console</span></code></pre></figure>

<p>Then run:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">sudo </span>update-grub</code></pre></figure>

<h1 id="lightdm-background-image">LightDM background image</h1>
<p>I’m using <a href="https://en.wikipedia.org/wiki/Xfce">Xfce</a> for Debian 13 as my default desktop.</p>

<p><a href="https://en.wikipedia.org/wiki/LightDM">LightDM</a> is the display manager in place, so this is where we have to configure the background image for the login screen.</p>

<p>However, I noticed that there is an easier and more straightforward way of changing the desktop theme altogether.</p>

<p>Debian comes with a list of desktop themes, which you can see by running:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>update-alternatives <span class="nt">--list</span> desktop-theme
/usr/share/desktop-base/ceratopsian-theme
/usr/share/desktop-base/emerald-theme
/usr/share/desktop-base/futureprototype-theme
/usr/share/desktop-base/homeworld-theme
/usr/share/desktop-base/joy-inksplat-theme
/usr/share/desktop-base/joy-theme
/usr/share/desktop-base/lines-theme
/usr/share/desktop-base/moonlight-theme
/usr/share/desktop-base/softwaves-theme
/usr/share/desktop-base/spacefun-theme</code></pre></figure>
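<p>As a side note, if you just want the bare theme names out of that list, a short sed pipeline does the trick (illustrative; pipe the real <code class="language-plaintext highlighter-rouge">update-alternatives --list desktop-theme</code> output the same way):</p>

```shell
# Strip the directory prefix and the "-theme" suffix from two sample entries.
printf '%s\n' /usr/share/desktop-base/ceratopsian-theme /usr/share/desktop-base/emerald-theme \
  | sed 's|.*/||; s|-theme$||'
```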

<p>“Ceratopsian” is the default, and it’s the one I don’t like.</p>

<p>You can browse those folders and look at the images. When you find one you like, change the default theme system-wide by running:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">sudo </span>update-alternatives <span class="nt">--set</span> desktop-theme /usr/share/desktop-base/futureprototype-theme
update-alternatives: using /usr/share/desktop-base/futureprototype-theme to provide /usr/share/desktop-base/active-theme <span class="o">(</span>desktop-theme<span class="o">)</span> <span class="k">in </span>manual mode</code></pre></figure>

<p>If you fancy the GRUB image that comes with your theme, just undo the changes done in the GRUB section of this post. You’ll get the default image that comes with the selected theme.</p>]]></content><author><name>Manuel Molina</name></author><category term="debian" /><category term="trixie" /><category term="grub" /><category term="lightdm" /><category term="chatgpt" /><summary type="html"><![CDATA[I like to take advantage of the UI customization options that Linux offers. I’m currently using Debian 13 “Trixie”. Sometimes you like the default theme, sometimes you don’t.]]></summary></entry><entry><title type="html">Proxmox major version upgrade</title><link href="https://manuelmc.pocosmhz.org/2025/12/23/proxmox-major-upgrade.html" rel="alternate" type="text/html" title="Proxmox major version upgrade" /><published>2025-12-23T17:20:00+00:00</published><updated>2025-12-23T17:20:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/12/23/proxmox-major-upgrade</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/12/23/proxmox-major-upgrade.html"><![CDATA[<p>Some months after <a href="/2025/04/13/proxmox-home-cluster-i.html">starting to use Proxmox</a> 8, it’s time to upgrade to <a href="https://www.proxmox.com/en/about/company-details/press-releases/proxmox-virtual-environment-9-0">Proxmox 9</a>.</p>

<p>We’ll be using the <a href="https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#In-place_upgrade">in-place upgrade</a> method. I’ll walk you through the steps I took to upgrade my home cluster.</p>

<h1 id="upgrade-all-nodes-to-latest-minor-version">Upgrade all nodes to latest minor version</h1>
<p>Starting from major version 8, be sure that all nodes are already on the latest minor version:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# pveversion
pve-manager/8.4.14/b502d23c55afcba1 <span class="o">(</span>running kernel: 6.8.12-17-pve<span class="o">)</span></code></pre></figure>

<h1 id="check-prerequisites">Check prerequisites</h1>
<p>Double check that you have a correct Ceph status (if shared storage is in use) and a healthy PVE cluster, as detailed in the Prerequisites section of the upgrade documentation.</p>

<h1 id="pve8to9-checklist-script">pve8to9 checklist script</h1>
<p>Run the script on all nodes prior to starting any upgrade, and make sure it reports no errors or serious warnings.</p>

<p>I got the following message; failures like this one have to be fixed before starting the upgrade:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# pve8to9 |&amp; <span class="nb">grep</span> <span class="nt">-i</span> FAIL
FAIL: systemd-boot meta-package installed. This will cause problems on upgrades of other boot-related packages. Remove <span class="s1">'systemd-boot'</span> See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#sd-boot-warning <span class="k">for </span>more information.
FAILURES: 1</code></pre></figure>

<p>In my case, it was safe to do this on all nodes:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# apt-get remove systemd-boot
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  proxmox-kernel-6.8.12-13-pve-signed proxmox-kernel-6.8.12-14-pve-signed
Use <span class="s1">'apt autoremove'</span> to remove them.
The following packages will be REMOVED:
  systemd-boot
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 250 kB disk space will be freed.
Do you want to <span class="k">continue</span>? <span class="o">[</span>Y/n] Y
<span class="o">(</span>Reading database ... 80236 files and directories currently installed.<span class="o">)</span>
Removing systemd-boot <span class="o">(</span>252.39-1~deb12u1<span class="o">)</span> ...
Processing triggers <span class="k">for </span>man-db <span class="o">(</span>2.11.2-2<span class="o">)</span> ...</code></pre></figure>

<h1 id="put-a-node-in-maintenance-mode">Put a node in maintenance mode</h1>
<p>We’ll start performing the upgrade to one node, and we’ll repeat these steps on every node, one at a time.
We’ll always wait for the cluster to be 100% available before moving on to the next node upgrade.</p>

<p>To put a node in maintenance mode, go to the shell and do:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ha-manager crm-command node-maintenance <span class="nb">enable </span>pve01</code></pre></figure>

<p>After VMs and CTs have been migrated, we can see there are none left on the node we’re about to upgrade:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ha-manager status
quorum OK
master pve03 <span class="o">(</span>active, Tue Dec 23 19:56:08 2025<span class="o">)</span>
lrm pve01 <span class="o">(</span>maintenance mode, Tue Dec 23 19:56:09 2025<span class="o">)</span>
lrm pve02 <span class="o">(</span>active, Tue Dec 23 19:56:07 2025<span class="o">)</span>
lrm pve03 <span class="o">(</span>active, Tue Dec 23 19:56:04 2025<span class="o">)</span>
service vm:100 <span class="o">(</span>pve03, started<span class="o">)</span>
service vm:101 <span class="o">(</span>pve02, started<span class="o">)</span>
service vm:102 <span class="o">(</span>pve02, started<span class="o">)</span>
service vm:104 <span class="o">(</span>pve03, started<span class="o">)</span>
service vm:105 <span class="o">(</span>pve02, started<span class="o">)</span></code></pre></figure>

<h1 id="perform-software-upgrade">Perform software upgrade</h1>
<p>This step is split into several sub-steps.</p>

<h2 id="update-debian-base-repositories-to-trixie">Update Debian Base Repositories to Trixie</h2>
<p>Run the following to update the base OS repositories:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# <span class="nb">sed</span> <span class="nt">-i</span> <span class="s1">'s/bookworm/trixie/g'</span> /etc/apt/sources.list
root@pve01:~# <span class="nb">sed</span> <span class="nt">-i</span> <span class="s1">'s/bookworm/trixie/g'</span> /etc/apt/sources.list.d/pve-enterprise.list</code></pre></figure>
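<p>If you want to preview what that substitution will do before touching the live files, you can dry-run it on a scratch copy first (a sketch using a throwaway file; note the missing <code class="language-plaintext highlighter-rouge">-i</code>, so nothing is modified):</p>

```shell
# Dry-run the bookworm -> trixie substitution on a sample line.
f=/tmp/sources.list.test
printf 'deb http://deb.debian.org/debian bookworm main contrib\n' > "$f"
sed 's/bookworm/trixie/g' "$f"
```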

<h2 id="add-the-proxmox-ve-9-package-repository">Add the Proxmox VE 9 Package Repository</h2>
<p>In my case, I’m using the no-subscription repository, so I do this:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# <span class="nb">cat</span> <span class="o">&gt;</span> /etc/apt/sources.list.d/proxmox.sources <span class="o">&lt;&lt;</span> <span class="no">EOF</span><span class="sh">
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF</span></code></pre></figure>

<p>In any case, double-check with <code class="language-plaintext highlighter-rouge">apt update</code> that you’re not leaving behind traces of the Proxmox 8 repositories and that the new ones work. Some traces may remain in <code class="language-plaintext highlighter-rouge">/etc/apt/sources.list</code>.</p>

<h2 id="update-the-ceph-package-repository">Update the Ceph Package Repository</h2>
<p>In case you’re using Ceph shared storage, you must also update the Ceph package repository.</p>

<p>In my case, I’m using the no-subscription repository, so I do this:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# <span class="nb">cat</span> <span class="o">&gt;</span> /etc/apt/sources.list.d/ceph.sources <span class="o">&lt;&lt;</span> <span class="no">EOF</span><span class="sh">
Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF</span></code></pre></figure>

<p>Also, I made sure no traces of the previous Ceph repository were left behind:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# <span class="nb">rm</span> /etc/apt/sources.list.d/ceph.list </code></pre></figure>

<h2 id="refresh-package-index">Refresh package index</h2>
<p>Run the command and make sure you don’t have any errors:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# apt update
Hit:1 http://ftp.es.debian.org/debian trixie InRelease
Hit:2 http://security.debian.org trixie-security InRelease                 
Hit:3 http://ftp.es.debian.org/debian trixie-updates InRelease             
Hit:4 http://security.debian.org/debian-security trixie-security InRelease 
Hit:5 http://download.proxmox.com/debian/ceph-squid trixie InRelease
Hit:6 http://download.proxmox.com/debian/pve trixie InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
639 packages can be upgraded. Run <span class="s1">'apt list --upgradable'</span> to see them.</code></pre></figure>

<h2 id="upgrade-the-system-to-debian-trixie-and-proxmox-ve-90">Upgrade the system to Debian Trixie and Proxmox VE 9.0</h2>
<p>The following command could take quite some time, so be aware:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:</code></pre></figure>

<p>[…]</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">639 upgraded, 155 newly installed, 61 to remove and 0 not upgraded.
Need to get 886 MB of archives.
After this operation, 1,729 MB of additional disk space will be used.
Do you want to <span class="k">continue</span>? <span class="o">[</span>Y/n] </code></pre></figure>

<p>Say yes and go ahead with the upgrade.</p>

<p>There will be some questions about configuration changes to be overridden or not by the incoming packages. Answer according to your preferences.</p>

<h2 id="check-result--reboot-into-updated-kernel">Check Result &amp; Reboot Into Updated Kernel</h2>
<p>If the previous command exited without error, you can re-check with <code class="language-plaintext highlighter-rouge">pve8to9</code> and confirm that everything is in place for a reboot.</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# <span class="nb">sync</span> <span class="p">;</span> init 6 <span class="p">;</span> <span class="nb">exit</span></code></pre></figure>

<h2 id="post-upgrade-actions">Post-upgrade actions</h2>
<p>When you log in to the upgraded cluster node, please check:</p>
<ul>
  <li>Cluster status is OK.</li>
  <li>Proxmox VE 9 deprecates HA groups in favor of HA rules. If you are using HA and HA groups, HA groups will be automatically migrated to HA rules once all cluster nodes have been upgraded to Proxmox VE 9.</li>
</ul>

<p>Now you can disable maintenance mode by doing:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ha-manager crm-command node-maintenance disable pve01</code></pre></figure>

<p>Optionally, you can take the chance to normalize any package sources that still use the old format:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# apt modernize-sources
The following files need modernizing:
  - /etc/apt/sources.list
  - /etc/apt/sources.list.d/pve-enterprise.list

Modernizing will replace .list files with the new .sources format,
add Signed-By values where they can be determined automatically,
and save the old files into .list.bak files.

This <span class="nb">command </span>supports the <span class="s1">'signed-by'</span> and <span class="s1">'trusted'</span> options. If you
have specified other options inside <span class="o">[]</span> brackets, please transfer them
manually to the output files<span class="p">;</span> see sources.list<span class="o">(</span>5<span class="o">)</span> <span class="k">for </span>a mapping.

For a simulation, respond N <span class="k">in </span>the following prompt.
Rewrite 2 sources? <span class="o">[</span>Y/n] Y
Modernizing /etc/apt/sources.list...
- Writing /etc/apt/sources.list.d/debian.sources

Modernizing /etc/apt/sources.list.d/pve-enterprise.list...</code></pre></figure>

<p><strong>Be aware that any commented entries would be uncommented, so double check with <code class="language-plaintext highlighter-rouge">apt update</code> after that and act accordingly.</strong></p>

<h1 id="wrap-up">Wrap up</h1>
<p>After you’ve done the upgrade procedure on all nodes, do a few final checks.</p>

<p>Check cluster health:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve03:~# pvecm status
Cluster information
<span class="nt">-------------------</span>
Name:             myclust
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
<span class="nt">------------------</span>
Date:             Wed Dec 24 02:20:02 2025
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1.20c
Quorate:          Yes

Votequorum information
<span class="nt">----------------------</span>
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
<span class="nt">----------------------</span>
    Nodeid      Votes Name
0x00000001          1 192.168.18.131
0x00000002          1 192.168.18.132
0x00000003          1 192.168.18.133 <span class="o">(</span><span class="nb">local</span><span class="o">)</span></code></pre></figure>

<p>Check HA status:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve03:~# ha-manager status
quorum OK
master pve02 <span class="o">(</span>active, Wed Dec 24 02:22:06 2025<span class="o">)</span>
lrm pve01 <span class="o">(</span>active, Wed Dec 24 02:22:03 2025<span class="o">)</span>
lrm pve02 <span class="o">(</span>active, Wed Dec 24 02:22:04 2025<span class="o">)</span>
lrm pve03 <span class="o">(</span>active, Wed Dec 24 02:21:59 2025<span class="o">)</span>
service vm:100 <span class="o">(</span>pve03, started<span class="o">)</span>
service vm:101 <span class="o">(</span>pve01, started<span class="o">)</span>
service vm:102 <span class="o">(</span>pve02, started<span class="o">)</span>
service vm:104 <span class="o">(</span>pve03, started<span class="o">)</span>
service vm:105 <span class="o">(</span>pve01, started<span class="o">)</span></code></pre></figure>

<p>Check Ceph (shared storage) status:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve03:~# pveceph status
  cluster:
    <span class="nb">id</span>:     f35872da-c5a3-4599-af36-b99c2b64c0f3
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum pve01,pve02,pve03 <span class="o">(</span>age 16m<span class="o">)</span>
    mgr: pve03<span class="o">(</span>active, since 16m<span class="o">)</span>, standbys: pve01, pve02
    osd: 3 osds: 3 up <span class="o">(</span>since 15m<span class="o">)</span>, 3 <span class="k">in</span> <span class="o">(</span>since 4M<span class="o">)</span>
 
  data:
    pools:   3 pools, 65 pgs
    objects: 28.56k objects, 106 GiB
    usage:   307 GiB used, 1.8 TiB / 2.1 TiB avail
    pgs:     65 active+clean
 
  io:
    client:   0 B/s rd, 450 KiB/s wr, 0 op/s rd, 91 op/s wr</code></pre></figure>

<p>With that <code class="language-plaintext highlighter-rouge">HEALTH_OK</code>, we’re done.</p>]]></content><author><name>Manuel Molina</name></author><category term="hypervisor" /><category term="home" /><category term="ha" /><category term="budget" /><category term="proxmox" /><summary type="html"><![CDATA[Some months after starting to use Proxmox 8, it’s time to upgrade to Proxmox 9.]]></summary></entry><entry><title type="html">iSight in 2025</title><link href="https://manuelmc.pocosmhz.org/2025/12/18/isight-in-2025.html" rel="alternate" type="text/html" title="iSight in 2025" /><published>2025-12-18T21:50:00+00:00</published><updated>2025-12-18T21:50:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/12/18/isight-in-2025</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/12/18/isight-in-2025.html"><![CDATA[<p>A few years ago I bought a webcam that was also quite a piece of industrial design at the time of its release in 2003: <a href="https://en.wikipedia.org/wiki/ISight">iSight</a>.</p>

<p><img src="/content/images/2025-12-18-isight-in-2025/iSight.jpg" alt="iSight" /></p>

<p>With a <a href="https://en.wikipedia.org/wiki/IEEE_1394">Firewire</a> 400 interface, it is a 640x480 CCD camera with a max frame rate of 30 fps.</p>

<p>It has been perfectly usable with every Firewire-equipped Mac ever since. Or so I thought, until I tried to use it with macOS versions newer than Catalina: Apple removed support for it, and it’s clear that OpenCore Legacy Patcher <a href="https://forums.macrumors.com/threads/firewire-isight-audio-is-back-for-catalina-and-bs.2272444/page-2">might not</a> bring it back.</p>

<p>Ok, at least I have it working with macOS El Capitán.</p>

<p>But I started wondering … “And what about Linux?”</p>

<p>And yes, I managed to make it work under Linux!</p>

<h1 id="sound-works-but-video-does-not">Sound works but video does not</h1>
<p>We can just plug in the camera and see the stereo audio input:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@whisky:~# <span class="nb">cat</span> /proc/asound/cards
 0 <span class="o">[</span>iSight         <span class="o">]</span>: iSight - Apple iSight
                      Apple iSight <span class="o">(</span>GUID 000a27000414c178<span class="o">)</span> at fw1.1, S400
 1 <span class="o">[</span>Intel          <span class="o">]</span>: HDA-Intel - HDA Intel
                      HDA Intel at 0x8c004000 irq 37</code></pre></figure>

<p>And it showed up in the audio mixer.</p>

<p>However, if you try to list V4L2 devices, you get:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@whisky:~# v4l2-ctl <span class="nt">--list-devices</span>
Cannot open device /dev/video0, exiting.</code></pre></figure>

<h1 id="but-video-is-there">But video is there</h1>
<p>… as you can check by running this and watching the video:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">manuelmc@whisky:~<span class="nv">$ </span>vlc dc1394://</code></pre></figure>

<p>It’s coming from the raw dc1394 source. There is no camera driver, only FireWire access.</p>

<h1 id="checking-driver-support">Checking driver support</h1>
<p>Long story short: Support for FireWire video interfaces in the kernel was deprecated and replaced as part of a large rework of the FireWire stack around the 2.6.37 kernel (released in early 2011).</p>

<p>At that point the old FireWire device drivers (like <code class="language-plaintext highlighter-rouge">ohci1394</code>, <code class="language-plaintext highlighter-rouge">raw1394</code>, <code class="language-plaintext highlighter-rouge">video1394</code>) were replaced with the new unified <code class="language-plaintext highlighter-rouge">firewire_core</code>/<code class="language-plaintext highlighter-rouge">firewire-ohci</code> stack, and the dedicated legacy iSight video/kernel driver (where present) was abandoned.
This change meant that userspace had to handle the camera (via libraries like libdc1394) rather than the kernel exposing it as a native V4L or <code class="language-plaintext highlighter-rouge">/dev/video*</code> device driver.</p>

<h1 id="using-another-route">Using another route</h1>
<p>After bringing back the kernel source tree for 5.8.x and confirming that there is no support for iSight FireWire video (even if you compiled it yourself), I started looking in other places.</p>

<p>I managed to get plenty of information thanks to ChatGPT and put it together to compile the following guide.</p>

<h1 id="apple-isight-firewire--v4l2-bridge-on-ubuntu-2404-kernel-68-for-zoommeetetc">Apple iSight (FireWire) → V4L2 Bridge on Ubuntu 24.04 (Kernel 6.8) for Zoom/Meet/etc.</h1>

<p>This guide explains how to use an <strong>Apple iSight FireWire</strong> camera on modern Linux
(<strong>Ubuntu 24.04.3, kernel 6.8</strong>) from applications that require a <strong>V4L2 webcam</strong>
such as <strong>Zoom, Google Meet, Microsoft Teams</strong>, etc.</p>

<p>Your iSight works via <strong>libdc1394</strong> (for example, <code class="language-plaintext highlighter-rouge">vlc dc1394://</code> shows live video and
the green LED turns on), but it is <strong>not exposed as <code class="language-plaintext highlighter-rouge">/dev/video*</code> by the kernel</strong>.
That is expected on modern kernels.</p>

<p>To solve this, we create a <strong>virtual V4L2 webcam</strong> using <strong>v4l2loopback</strong> and feed it
from the iSight video stream.</p>

<hr />

<h2 id="what-you-will-build">What you will build</h2>

<ul>
  <li><strong>Input:</strong> Apple iSight FireWire (IIDC / libdc1394)</li>
  <li><strong>Bridge:</strong> FFmpeg <em>or</em> GStreamer (depending on availability)</li>
  <li><strong>Output:</strong> v4l2loopback virtual webcam (<code class="language-plaintext highlighter-rouge">/dev/videoN</code>)</li>
  <li><strong>Result:</strong> Applications see a normal V4L2 camera (e.g. <em>iSight-Virtual</em>)</li>
</ul>

<hr />

<h2 id="0-baseline-checks-important">0) Baseline checks (important)</h2>

<h3 id="confirm-the-camera-works-via-firewire--dc1394">Confirm the camera works via FireWire / dc1394</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vlc dc1394://
</code></pre></div></div>

<p>If you see video and the LED turns green, FireWire and libdc1394 are working.</p>

<h3 id="confirm-there-is-no-native-v4l2-device">Confirm there is no native V4L2 device</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>v4l2-ctl <span class="nt">--list-devices</span>
</code></pre></div></div>

<p>The iSight should <strong>not</strong> appear here. This is normal.</p>

<hr />

<h2 id="1-install-required-packages-ubuntu-2404">1) Install required packages (Ubuntu 24.04)</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt update
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\</span>
  v4l2loopback-dkms v4l2loopback-utils <span class="se">\</span>
  v4l-utils <span class="se">\</span>
  libdc1394-25 <span class="se">\</span>
  vlc
</code></pre></div></div>

<p>Install FFmpeg (optional, see section 3):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> ffmpeg
</code></pre></div></div>

<p>Install GStreamer (recommended fallback and often required on 24.04):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\</span>
  gstreamer1.0-tools <span class="se">\</span>
  gstreamer1.0-plugins-base <span class="se">\</span>
  gstreamer1.0-plugins-good <span class="se">\</span>
  gstreamer1.0-plugins-bad <span class="se">\</span>
  gstreamer1.0-plugins-ugly
</code></pre></div></div>

<hr />

<h2 id="2-create-the-virtual-v4l2-webcam-v4l2loopback">2) Create the virtual V4L2 webcam (v4l2loopback)</h2>

<p>Create one virtual device (example: <code class="language-plaintext highlighter-rouge">/dev/video10</code>):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>modprobe v4l2loopback <span class="se">\</span>
  <span class="nv">video_nr</span><span class="o">=</span>10 <span class="se">\</span>
  <span class="nv">card_label</span><span class="o">=</span><span class="s2">"iSight-Virtual"</span> <span class="se">\</span>
  <span class="nv">exclusive_caps</span><span class="o">=</span>1
</code></pre></div></div>

<p>Verify it exists:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">ls</span> <span class="nt">-l</span> /dev/video10
v4l2-ctl <span class="nt">--list-devices</span>
</code></pre></div></div>

<p>You should see <strong>iSight-Virtual</strong>.</p>

<h3 id="important-note-about-formats">Important note about formats</h3>
<p>On some systems, <code class="language-plaintext highlighter-rouge">v4l2-ctl -d /dev/video10 --list-formats-ext</code> may show only:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Type: Video Capture
</code></pre></div></div>

<p>…with no formats listed. This is normal with <code class="language-plaintext highlighter-rouge">exclusive_caps=1</code> and/or with how
some apps/drivers query loopback devices. The bridge can still work fine.</p>

<h3 id="optional-load-automatically-at-boot">Optional: load automatically at boot</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">echo</span> <span class="s1">'options v4l2loopback video_nr=10 card_label="iSight-Virtual" exclusive_caps=1'</span> | <span class="se">\</span>
  <span class="nb">sudo tee</span> /etc/modprobe.d/v4l2loopback-isight.conf
</code></pre></div></div>

<hr />

<h2 id="3-option-a--use-ffmpeg-only-if-dc1394-input-is-supported">3) OPTION A — Use FFmpeg (ONLY if dc1394 input is supported)</h2>

<h3 id="31-check-if-your-ffmpeg-supports-dc1394">3.1 Check if your FFmpeg supports dc1394</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ffmpeg <span class="nt">-hide_banner</span> <span class="nt">-formats</span> | <span class="nb">grep</span> <span class="nt">-i</span> 1394
</code></pre></div></div>

<p>If you see <strong>dc1394</strong>, you can use FFmpeg.</p>

<p>If you do <strong>NOT</strong> see <code class="language-plaintext highlighter-rouge">dc1394</code> (common on Ubuntu 24.04), you will get:
<code class="language-plaintext highlighter-rouge">Unknown input format: 'dc1394'</code> and you must use <strong>OPTION B (GStreamer)</strong>.</p>

<h3 id="32-ffmpeg-bridge-command">3.2 FFmpeg bridge command</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ffmpeg <span class="se">\</span>
  <span class="nt">-hide_banner</span> <span class="nt">-loglevel</span> warning <span class="se">\</span>
  <span class="nt">-f</span> dc1394 <span class="nt">-framerate</span> 30 <span class="nt">-video_size</span> 640x480 <span class="nt">-i</span> dc1394:// <span class="se">\</span>
  <span class="nt">-vf</span> <span class="s2">"format=yuyv422"</span> <span class="se">\</span>
  <span class="nt">-f</span> v4l2 /dev/video10
</code></pre></div></div>

<p>Leave this running while using Zoom / Meet.</p>

<hr />

<h2 id="4-option-b--use-gstreamer-recommended--works-on-ubuntu-2404">4) OPTION B — Use GStreamer (RECOMMENDED / works on Ubuntu 24.04)</h2>

<p>This is the <strong>preferred and reliable method</strong> on Ubuntu 24.04.</p>

<h3 id="41-confirm-dc1394src-is-available">4.1 Confirm dc1394src is available</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gst-inspect-1.0 dc1394src
</code></pre></div></div>

<p>If it prints element details, the plugin is available.</p>

<h3 id="42-start-the-gstreamer--v4l2-bridge-known-good-pipeline">4.2 Start the GStreamer → V4L2 bridge (KNOWN-GOOD PIPELINE)</h3>

<p>This pipeline is confirmed to work with <code class="language-plaintext highlighter-rouge">v4l2loopback</code> even in setups where a strict pixel format
(e.g. <code class="language-plaintext highlighter-rouge">format=YUY2</code>) fails to link:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gst-launch-1.0 <span class="nt">-v</span> <span class="se">\</span>
  dc1394src <span class="o">!</span> <span class="se">\</span>
  videoconvert <span class="o">!</span> <span class="se">\</span>
  video/x-raw,width<span class="o">=</span>640,height<span class="o">=</span>480,framerate<span class="o">=</span>30/1 <span class="o">!</span> <span class="se">\</span>
  v4l2sink <span class="nv">device</span><span class="o">=</span>/dev/video10
</code></pre></div></div>

<p>Leave it running while using your conferencing app.</p>
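<p>If you would rather not keep a terminal open, a user systemd unit can supervise the bridge. This is only a sketch: the unit name and the <code class="language-plaintext highlighter-rouge">gst-launch-1.0</code> path are assumptions for a typical Ubuntu install, and the <code class="language-plaintext highlighter-rouge">ExecStart</code> line simply mirrors the pipeline above (systemd passes the <code class="language-plaintext highlighter-rouge">!</code> characters through as plain arguments):</p>

```ini
# ~/.config/systemd/user/isight-bridge.service (illustrative name/location)
[Unit]
Description=iSight FireWire to v4l2loopback bridge

[Service]
# Same pipeline as above; adjust device/resolution to your setup
ExecStart=/usr/bin/gst-launch-1.0 dc1394src ! videoconvert ! video/x-raw,width=640,height=480,framerate=30/1 ! v4l2sink device=/dev/video10
Restart=on-failure

[Install]
WantedBy=default.target
```

<p>Enable it with <code class="language-plaintext highlighter-rouge">systemctl --user enable --now isight-bridge.service</code>. Remember that only one bridge may feed the loopback device at a time.</p>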

<h3 id="why-we-do-not-force-formatyuy2-here">Why we do NOT force <code class="language-plaintext highlighter-rouge">format=YUY2</code> here</h3>
<p>You may see an error like:</p>

<blockquote>
  <p>could not link videoconvert0 to v4l2sink0, neither element can handle<br />
video/x-raw, format=YUY2, …</p>
</blockquote>

<p>This happens because <code class="language-plaintext highlighter-rouge">v4l2sink</code> (writing to v4l2loopback) may not accept that exact
caps combination on your build. Letting GStreamer negotiate the format is the most
portable solution.</p>

<h3 id="43-if-the-image-is-unstable-try-lower-frame-rate">4.3 If the image is unstable, try lower frame rate</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gst-launch-1.0 <span class="nt">-v</span> <span class="se">\</span>
  dc1394src <span class="o">!</span> <span class="se">\</span>
  videoconvert <span class="o">!</span> <span class="se">\</span>
  video/x-raw,width<span class="o">=</span>640,height<span class="o">=</span>480,framerate<span class="o">=</span>15/1 <span class="o">!</span> <span class="se">\</span>
  v4l2sink <span class="nv">device</span><span class="o">=</span>/dev/video10
</code></pre></div></div>

<hr />

<h2 id="5-use-the-camera-in-zoom--meet--teams">5) Use the camera in Zoom / Meet / Teams</h2>

<ol>
  <li>Start <strong>one</strong> bridge only (FFmpeg <em>or</em> GStreamer).</li>
  <li>Open Zoom / browser / Teams.</li>
  <li>Select camera: <strong>iSight-Virtual</strong>.</li>
</ol>

<hr />

<h2 id="6-troubleshooting">6) Troubleshooting</h2>

<h3 id="device-or-resource-busy">“Device or resource busy”</h3>
<p>Only one program can access the camera stream at a time.
Close VLC or any other app using the iSight.</p>
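<p>If it is not obvious which process is holding the device, you can walk <code class="language-plaintext highlighter-rouge">/proc</code> directly, which is handy when <code class="language-plaintext highlighter-rouge">lsof</code> or <code class="language-plaintext highlighter-rouge">fuser</code> are not installed. A sketch; the device path is an assumption, so adjust it to the node you are debugging:</p>

```shell
# Find processes with an open file descriptor on the device.
# DEV is an assumed path; change it to the node you are debugging.
DEV=/dev/video10
found=0
for fd in /proc/[0-9]*/fd/*; do
  # Each fd is a symlink to the file the process has open
  if [ "$(readlink "$fd" 2>/dev/null)" = "$DEV" ]; then
    pid=${fd#/proc/}; pid=${pid%%/*}
    echo "$DEV is in use by PID $pid ($(cat "/proc/$pid/comm" 2>/dev/null))"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no process currently has $DEV open"
fi
```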

<h3 id="virtual-camera-not-listed">Virtual camera not listed</h3>
<p>Check:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>v4l2-ctl <span class="nt">--list-devices</span>
</code></pre></div></div>

<p>Reload v4l2loopback if needed:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>modprobe <span class="nt">-r</span> v4l2loopback
<span class="nb">sudo </span>modprobe v4l2loopback <span class="nv">video_nr</span><span class="o">=</span>10 <span class="nv">card_label</span><span class="o">=</span><span class="s2">"iSight-Virtual"</span> <span class="nv">exclusive_caps</span><span class="o">=</span>1
</code></pre></div></div>

<hr />

<h2 id="7-why-this-bridge-is-required-background">7) Why this bridge is required (background)</h2>

<ul>
  <li>Apple iSight FireWire is an <strong>IIDC camera</strong>, not USB UVC.</li>
  <li>Modern Linux kernels do not expose IIDC cameras as V4L2 nodes by default.</li>
  <li>Audio still works via <code class="language-plaintext highlighter-rouge">snd-isight</code>, but video is typically user-space only.</li>
  <li>Therefore, a <strong>loopback bridge</strong> is the correct modern solution.</li>
</ul>

<hr />

<h2 id="quick-checklist">Quick checklist</h2>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" /><code class="language-plaintext highlighter-rouge">vlc dc1394://</code> shows live video (baseline)</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" /><code class="language-plaintext highlighter-rouge">/dev/video10</code> exists (v4l2loopback)</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />GStreamer pipeline runs without errors (recommended)</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Zoom/Meet selects <strong>iSight-Virtual</strong></li>
</ul>

<hr />

<h2 id="conclusion">Conclusion</h2>

<p>On Ubuntu 24.04, <strong>GStreamer + v4l2loopback</strong> is the most reliable way to use an
Apple iSight FireWire camera with modern applications.</p>

<p>FFmpeg can be used <strong>only</strong> if it was built with dc1394 input support.</p>

<h1 id="additional-script">Additional script</h1>
<p>Keep in mind that you must have the pipeline running before any application tries to use it.</p>

<p>You might want to have a script at hand for turning the camera on when you need it.</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cat</span> <span class="o">&gt;</span> ~/isight-autostart.sh <span class="o">&lt;&lt;</span><span class="sh">'</span><span class="no">EOF</span><span class="sh">'
#!/usr/bin/env bash
set -euo pipefail

DEV=/dev/video10
LABEL="iSight-Virtual"

# Ensure v4l2loopback exists
if [ ! -e "</span><span class="nv">$DEV</span><span class="sh">" ]; then
  sudo modprobe v4l2loopback video_nr=10 card_label="</span><span class="nv">$LABEL</span><span class="sh">" exclusive_caps=1
fi

# Start gstreamer bridge if not running
if ! pgrep -f "gst-launch-1.0.*v4l2sink device=</span><span class="nv">$DEV</span><span class="sh">" &gt;/dev/null 2&gt;&amp;1; then
  nohup gst-launch-1.0 -v </span><span class="se">\</span><span class="sh">
    dc1394src ! videoconvert ! video/x-raw,width=640,height=480,framerate=30/1 ! </span><span class="se">\</span><span class="sh">
    v4l2sink device="</span><span class="nv">$DEV</span><span class="sh">" </span><span class="se">\</span><span class="sh">
    &gt;/tmp/isight-bridge.log 2&gt;&amp;1 &amp;
  sleep 1
fi

# Launch whatever you want
exec "</span><span class="nv">$@</span><span class="sh">"
</span><span class="no">EOF

</span><span class="nb">chmod</span> +x ~/isight-autostart.sh</code></pre></figure>

<h1 id="proof-of-it">Proof of it</h1>
<p>Here you have it!</p>

<p><img src="/content/images/2025-12-18-isight-in-2025/google-meet-isight.png" alt="A happy user" /></p>]]></content><author><name>Manuel Molina</name></author><category term="macOS" /><category term="unsupported" /><summary type="html"><![CDATA[A few years ago I bought a webcam that was also quite a piece of industrial design at the time of its release on 2003: iSight.]]></summary></entry><entry><title type="html">Using Ceph RBD as Container Storage Interface in Proxmox</title><link href="https://manuelmc.pocosmhz.org/2025/07/02/using-ceph-rbd-csi.html" rel="alternate" type="text/html" title="Using Ceph RBD as Container Storage Interface in Proxmox" /><published>2025-07-02T20:50:00+00:00</published><updated>2025-07-02T20:50:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/07/02/using-ceph-rbd-csi</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/07/02/using-ceph-rbd-csi.html"><![CDATA[<p>This is the third post about Proxmox, after my <a href="/2025/04/15/proxmox-home-cluster-ii.html">previous post</a> in which I discussed the basic steps for Ceph installation.</p>

<p>In that post, I mentioned using CephFS. Here, however, we’re going to implement persistent storage through Ceph <a href="https://docs.ceph.com/en/reef/rbd/">RBD</a>.</p>

<p>I’ll use both Ceph’s documentation on <a href="https://docs.ceph.com/en/reef/rbd/rbd-kubernetes/">block devices and Kubernetes</a> and <a href="https://fabreur.medium.com/kubernetes-using-ceph-rbd-as-container-storage-interface-csi-6ab4177a0fc3">this</a> Medium post from <a href="https://fabreur.medium.com">Fabio Reis</a>.</p>

<h1 id="storage-pool-creation">Storage pool creation</h1>
<p>We already have a RBD pool named <code class="language-plaintext highlighter-rouge">pool1</code> in which we allocate storage volumes for Proxmox VMs.</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ceph osd pool <span class="nb">ls </span>detail
pool 1 <span class="s1">'.mgr'</span> replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 19 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00
pool 2 <span class="s1">'pool1'</span> replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 479 lfor 0/479/477 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 1.31</code></pre></figure>

<p>Thus, we’re going to create a separate one for our Kubernetes cluster.</p>

<p>From the same shell in a Proxmox cluster node, let’s create a new pool:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ceph osd pool create kubernetes
pool <span class="s1">'kubernetes'</span> created</code></pre></figure>

<p>We need to initialize a newly created pool prior to use:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# rbd pool init kubernetes</code></pre></figure>

<p>We need to create a new user for Kubernetes and ceph-csi. Execute the following and <strong>record the generated key</strong>:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ceph auth get-or-create client.kubernetes mon <span class="s1">'profile rbd'</span> osd <span class="s1">'profile rbd pool=kubernetes'</span> mgr <span class="s1">'profile rbd pool=kubernetes'</span>
<span class="o">[</span>client.kubernetes]
    key <span class="o">=</span> <span class="nv">AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg</span><span class="o">==</span></code></pre></figure>

<p>For configuring <code class="language-plaintext highlighter-rouge">ceph-csi</code>, we also require a ConfigMap object stored in Kubernetes to define the Ceph monitor addresses for the Ceph cluster. Collect both the Ceph cluster unique <strong>fsid</strong> and the <strong>monitor addresses</strong>:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# ceph mon dump
epoch 3
fsid b9127830-b0cc-4e34-aa47-9d1a2e9949a8
last_changed 2025-04-16T01:29:22.792421+0200
created 2025-04-15T20:21:39.833395+0200
min_mon_release 19 <span class="o">(</span>squid<span class="o">)</span>
election_strategy: 1
0: <span class="o">[</span>v2:192.168.18.131:3300/0,v1:192.168.18.131:6789/0] mon.pve01
1: <span class="o">[</span>v2:192.168.18.132:3300/0,v1:192.168.18.132:6789/0] mon.pve02
2: <span class="o">[</span>v2:192.168.18.133:3300/0,v1:192.168.18.133:6789/0] mon.pve03
dumped monmap epoch 3</code></pre></figure>

<p><strong>Note</strong>: <code class="language-plaintext highlighter-rouge">ceph-csi</code> currently only supports the legacy V1 protocol. Hence, we’ll use the v1 addresses with port 6789.</p>
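<p>As a convenience, the v1 endpoints can be pulled out of the <code class="language-plaintext highlighter-rouge">ceph mon dump</code> output with a bit of text processing. The following sketch assumes the exact line format shown above, with the dump pasted into a shell variable:</p>

```shell
# Turn the "v1:" monitor endpoints from `ceph mon dump` into a
# comma-separated list (assumes the "[v2:...,v1:IP:6789/0] mon.NAME"
# line format shown above; here the dump is pasted into a variable).
mon_dump='0: [v2:192.168.18.131:3300/0,v1:192.168.18.131:6789/0] mon.pve01
1: [v2:192.168.18.132:3300/0,v1:192.168.18.132:6789/0] mon.pve02
2: [v2:192.168.18.133:3300/0,v1:192.168.18.133:6789/0] mon.pve03'

mon_hosts=$(printf '%s\n' "$mon_dump" \
  | sed -n 's/.*v1:\([0-9.]*:6789\)\/0.*/\1/p' \
  | paste -sd, -)
echo "$mon_hosts"
# 192.168.18.131:6789,192.168.18.132:6789,192.168.18.133:6789
```

<p>The resulting list maps directly onto the <code class="language-plaintext highlighter-rouge">mon_hosts</code> values used below.</p>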

<h1 id="kubernetes-connection-to-ceph-storage">Kubernetes connection to Ceph storage</h1>
<p>With all this information, we can now continue with the Ceph document from <a href="https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/#generate-ceph-csi-configmap">this point</a> in our Kubernetes cluster, and create the required assets manually.</p>

<p>Better still, we can use the <a href="https://artifacthub.io/packages/helm/ceph-csi/ceph-csi-rbd">ceph-csi-rbd Helm chart</a> together with Terraform and apply something like this:</p>

<p><code class="language-plaintext highlighter-rouge">variables.tf</code>:</p>

<figure class="highlight"><pre><code class="language-hcl" data-lang="hcl"><span class="nx">variable</span> <span class="s2">"k8s_clusters"</span> <span class="p">{</span>
  <span class="nx">description</span> <span class="p">=</span> <span class="s2">"Kubernetes clusters configuration"</span>
  <span class="nx">type</span> <span class="p">=</span> <span class="nx">map</span><span class="err">(</span><span class="nx">object</span><span class="err">(</span><span class="p">{</span>
    <span class="nx">nodes</span> <span class="p">=</span> <span class="nx">map</span><span class="err">(</span><span class="nx">object</span><span class="err">(</span><span class="p">{</span>
      <span class="nx">ip_address</span> <span class="p">=</span> <span class="nx">string</span>
      <span class="nx">ip_gateway</span> <span class="p">=</span> <span class="nx">string</span>
    <span class="p">}</span><span class="err">))</span>
    <span class="nx">ceph</span> <span class="p">=</span> <span class="nx">object</span><span class="err">(</span><span class="p">{</span>
      <span class="nx">username</span>      <span class="p">=</span> <span class="nx">string</span>
      <span class="nx">key</span>           <span class="p">=</span> <span class="nx">string</span>
      <span class="nx">mon_hosts</span>     <span class="p">=</span> <span class="nx">list</span><span class="err">(</span><span class="nx">string</span><span class="err">)</span>
      <span class="nx">cluster_fsid</span>  <span class="p">=</span> <span class="nx">string</span>
      <span class="nx">rbd_pool</span>      <span class="p">=</span> <span class="nx">string</span>
    <span class="p">}</span><span class="err">)</span>
  <span class="p">}</span><span class="err">))</span>
  <span class="nx">default</span> <span class="p">=</span> <span class="p">{</span>
    <span class="nx">k8s01</span> <span class="p">=</span> <span class="p">{</span>
      <span class="nx">nodes</span> <span class="p">=</span> <span class="p">{</span>
        <span class="nx">k8s01cp01</span> <span class="p">=</span> <span class="p">{</span>
          <span class="nx">ip_address</span> <span class="p">=</span> <span class="s2">"192.168.1.5"</span>
          <span class="nx">ip_gateway</span> <span class="p">=</span> <span class="s2">"192.168.1.1"</span>
        <span class="p">}</span>
        <span class="nx">k8s01cp02</span> <span class="p">=</span> <span class="p">{</span>
          <span class="nx">ip_address</span> <span class="p">=</span> <span class="s2">"192.168.1.6"</span>
          <span class="nx">ip_gateway</span> <span class="p">=</span> <span class="s2">"192.168.1.1"</span>
        <span class="p">}</span>
      <span class="p">}</span>
      <span class="nx">ceph</span> <span class="p">=</span> <span class="p">{</span>
        <span class="c1"># we omit the client. prefix !</span>
        <span class="nx">username</span> <span class="p">=</span> <span class="s2">"kubernetes"</span>
        <span class="nx">key</span>          <span class="p">=</span> <span class="s2">"AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg=="</span>
        <span class="nx">mon_hosts</span>    <span class="p">=</span> <span class="p">[</span>
            <span class="s2">"192.168.18.131:6789"</span><span class="p">,</span>
            <span class="s2">"192.168.18.132:6789"</span><span class="p">,</span>
            <span class="s2">"192.168.18.133:6789"</span>
        <span class="p">]</span>
        <span class="nx">cluster_fsid</span> <span class="p">=</span> <span class="s2">"b9127830-b0cc-4e34-aa47-9d1a2e9949a8"</span>
        <span class="nx">rbd_pool</span>    <span class="p">=</span> <span class="s2">"kubernetes"</span>
      <span class="p">}</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span></code></pre></figure>

<p><code class="language-plaintext highlighter-rouge">k8s_storage.tf</code>:</p>

<figure class="highlight"><pre><code class="language-hcl" data-lang="hcl"><span class="nx">resource</span> <span class="s2">"kubernetes_namespace"</span> <span class="s2">"ceph_csi_rbd"</span> <span class="p">{</span>
  <span class="nx">metadata</span> <span class="p">{</span>
    <span class="nx">name</span> <span class="p">=</span> <span class="s2">"ceph-csi-rbd"</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="nx">resource</span> <span class="s2">"helm_release"</span> <span class="s2">"ceph_csi_rbd"</span> <span class="p">{</span>
  <span class="nx">name</span>       <span class="p">=</span> <span class="s2">"ceph-csi-rbd"</span>
  <span class="nx">repository</span> <span class="p">=</span> <span class="s2">"https://ceph.github.io/csi-charts"</span>
  <span class="nx">chart</span>      <span class="p">=</span> <span class="s2">"ceph-csi-rbd"</span>
  <span class="nx">version</span>    <span class="p">=</span> <span class="s2">"3.14.1"</span>
  <span class="nx">namespace</span> <span class="p">=</span> <span class="nx">kubernetes_namespace</span><span class="err">.</span><span class="nx">ceph_csi_rbd</span><span class="err">.</span><span class="nx">metadata</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="err">.</span><span class="nx">name</span>
  <span class="nx">values</span> <span class="p">=</span> <span class="p">[</span>
    <span class="nx">templatefile</span><span class="err">(</span><span class="s2">"${path.module}/source/helm/ceph/ceph-csi-rbd-values.tpl.yml"</span><span class="p">,</span> <span class="p">{</span>
      <span class="nx">ceph_conf</span> <span class="p">=</span> <span class="nx">var</span><span class="err">.</span><span class="nx">k8s_clusters</span><span class="p">[</span><span class="s2">"k8s01"</span><span class="p">]</span><span class="err">.</span><span class="nx">ceph</span>
    <span class="p">}</span><span class="err">)</span>
  <span class="p">]</span>
<span class="p">}</span></code></pre></figure>

<p>The aforementioned <code class="language-plaintext highlighter-rouge">ceph-csi-rbd-values.tpl.yml</code> is based on the default values file of that Helm chart, which you can get from <a href="https://artifacthub.io/packages/helm/ceph-csi/ceph-csi-rbd?modal=values">here</a>.</p>

<p>The keys I’ve customized are:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">csiConfig</code></li>
  <li><code class="language-plaintext highlighter-rouge">storageClass</code></li>
  <li><code class="language-plaintext highlighter-rouge">secret</code></li>
</ul>
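<p>For reference, the relevant pieces of my template look roughly like this. Treat it as a sketch: the key names come from the chart’s default values file, so double-check them against the chart version you install, and the interpolations refer to the <code class="language-plaintext highlighter-rouge">ceph_conf</code> object passed in from Terraform above:</p>

```yaml
# Excerpt of ceph-csi-rbd-values.tpl.yml (illustrative; verify key
# names against the chart's own values.yaml for your chart version)
csiConfig:
  - clusterID: "${ceph_conf.cluster_fsid}"
    monitors:
%{ for mon in ceph_conf.mon_hosts ~}
      - "${mon}"
%{ endfor ~}

storageClass:
  create: true
  name: csi-rbd-sc
  clusterID: "${ceph_conf.cluster_fsid}"
  pool: "${ceph_conf.rbd_pool}"

secret:
  create: true
  name: csi-rbd-secret
  userID: "${ceph_conf.username}"
  userKey: "${ceph_conf.key}"
```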

<p>With that, you apply the chart and all the resources cited in the Ceph docs are created.</p>

<h1 id="kubernetes-storage-test">Kubernetes storage test</h1>
<p>Let’s create a storage volume from inside our Kubernetes cluster:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@k8s01cp01:~# <span class="nb">cat </span>raw-block-pvc.yaml 
<span class="nt">---</span>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

root@k8s01cp01:~# kubectl apply <span class="nt">-f</span> raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created

root@k8s01cp01:~# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
raw-block-pvc   Bound    pvc-18ef4c64-caaf-44e5-a123-6502238a2a1e   1Gi        RWO            csi-rbd-sc     &lt;<span class="nb">unset</span><span class="o">&gt;</span>                 19s</code></pre></figure>

<p>and now we see the volume inside Ceph:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# rbd <span class="nb">ls</span> <span class="nt">-p</span> kubernetes
csi-vol-e5fa9aa4-13b9-4f09-a366-91621a34c264</code></pre></figure>
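<p>To actually consume a <code class="language-plaintext highlighter-rouge">volumeMode: Block</code> claim from a workload, a pod attaches it under <code class="language-plaintext highlighter-rouge">volumeDevices</code> rather than <code class="language-plaintext highlighter-rouge">volumeMounts</code>. A sketch along the lines of the raw-block-pod example in the Ceph docs; the image and device path are illustrative:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: app
      image: debian:bookworm
      command: ["sleep", "infinity"]
      volumeDevices:
        # The PVC shows up inside the container as this raw block device
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```

<p>If you attach the claim to a pod like this, delete the pod first when you later want to remove the claim.</p>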

<p>Now, if we remove the volume from Kubernetes:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@k8s01cp01:~# kubectl delete pvc/raw-block-pvc
persistentvolumeclaim <span class="s2">"raw-block-pvc"</span> deleted</code></pre></figure>

<p>… we can confirm that Ceph no longer has it:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pve01:~# rbd <span class="nb">ls</span> <span class="nt">-p</span> kubernetes
root@pve01:~# </code></pre></figure>]]></content><author><name>Manuel Molina</name></author><category term="hypervisor" /><category term="home" /><category term="ha" /><category term="budget" /><category term="kubernetes" /><summary type="html"><![CDATA[This is the third post about Proxmox, after my previous post in which I discussed the basic steps for Ceph installation.]]></summary></entry><entry><title type="html">Lenovo Thinkpad Gobi 2000 WWAN adapter support under Linux</title><link href="https://manuelmc.pocosmhz.org/2025/06/08/lenovo-gobi-2000-wwan.html" rel="alternate" type="text/html" title="Lenovo Thinkpad Gobi 2000 WWAN adapter support under Linux" /><published>2025-06-08T19:07:00+00:00</published><updated>2025-06-08T19:07:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/06/08/lenovo-gobi-2000-wwan</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/06/08/lenovo-gobi-2000-wwan.html"><![CDATA[<p>In my <a href="/2025/06/06/lenovo-x201-fingerprint-debian-12.html">previous post</a> about one of my Lenovo Thinkpad laptops, I mentioned that it has WWAN capabilities through a <a href="https://www.thinkwiki.org/wiki/Qualcomm_Gobi_2000">Qualcomm Gobi 2000 adapter</a>.</p>

<p>Here I’ll put my notes on how to make it work under Linux.</p>

<p>Steps:</p>

<ol>
  <li>Install the firmware loader for this WWAN modem:
    <pre><code class="language-Shell"> $ sudo apt install gobi-loader
</code></pre>
  </li>
  <li>Create the folder where we’ll host the firmware files:
    <pre><code class="language-Shell"> $ sudo mkdir /lib/firmware/gobi
</code></pre>
  </li>
  <li>Now you need the original firmware files. From the Lenovo Support web site, you can download the required package <a href="https://support.lenovo.com/us/es/downloads/ds001302">here</a>: get <a href="https://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles/7xwc48ww.exe">7xwc48ww.exe</a>. Run it on a supported Windows system (or at least extract the files). After that, following <a href="https://www.thinkwiki.org/wiki/Qualcomm_Gobi_2000#Obtaining_the_Firmware">this table</a>, place the files in the folder created in the previous step:
    <pre><code class="language-Shell"> # cp 0/UQCN.mbn UMTS/amss.mbn UMTS/apps.mbn /lib/firmware/gobi/
</code></pre>
    <p>I did it with the Vodafone firmware, but your mileage may vary.</p>
  </li>
  <li>
    <p>Reboot your system.</p>
  </li>
  <li>If you did it fine, you’ll see the following device now configured:
    <pre><code class="language-Shell"> $ lsusb | grep Qualcomm
 Bus 002 Device 006: ID 05c6:9205 Qualcomm, Inc. Gobi 2000
</code></pre>
  </li>
</ol>
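<p>Before rebooting, it may be worth double-checking that the three firmware files are where <code class="language-plaintext highlighter-rouge">gobi-loader</code> expects them. A small sketch (file names per the ThinkWiki table; adjust if your carrier image differs):</p>

```shell
# Check that every firmware file gobi-loader needs is present.
FW_DIR=/lib/firmware/gobi
checked=0
missing=""
for f in UQCN.mbn amss.mbn apps.mbn; do
  checked=$((checked + 1))
  [ -e "$FW_DIR/$f" ] || missing="$missing $f"
done
if [ -n "$missing" ]; then
  echo "missing from $FW_DIR:$missing"
else
  echo "all $checked Gobi firmware files in place"
fi
```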

<p>Additional links to check:</p>
<ul>
  <li><a href="https://www.thinkwiki.org/wiki/Qualcomm_Gobi_2000">Qualcomm Gobi 2000</a>. Detailed information coming from ThinkWiki.</li>
  <li><a href="https://github.com/vmikhailenko/gobictl">How to set up Gobi 2000 GPS in Linux</a></li>
  <li><a href="https://thinkpad-forum.de/threads/x201-qualcomm-und-win-10.189978/">X201 Qualcomm und Win 10</a>. It’s in German. There you’ll find how to get this device working under Windows 10; I can tell you where to find the file <code class="language-plaintext highlighter-rouge">win8beta_7xwc45ww.zip</code>, which is also valid under Windows 11.</li>
</ul>]]></content><author><name>Manuel Molina</name></author><category term="debian" /><category term="lenovo" /><category term="wwan" /><summary type="html"><![CDATA[In my previous post about one of my Lenovo Thinkpad laptops, I mentioned that it has WWAN capabilities through a Qualcomm Gobi 2000 adapter.]]></summary></entry><entry><title type="html">Lenovo Thinkpad X201 fingerprint sensor and Debian 12</title><link href="https://manuelmc.pocosmhz.org/2025/06/06/lenovo-x201-fingerprint-debian-12.html" rel="alternate" type="text/html" title="Lenovo Thinkpad X201 fingerprint sensor and Debian 12" /><published>2025-06-06T14:45:00+00:00</published><updated>2025-06-06T14:45:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/06/06/lenovo-x201-fingerprint-debian-12</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/06/06/lenovo-x201-fingerprint-debian-12.html"><![CDATA[<p>On December last year I manage to build a special Lenovo Thinkpad X201 with some very specific treats:</p>
<ul>
  <li>3G embedded support (Qualcomm, Inc. Qualcomm Gobi 2000)</li>
  <li>Lenovo Integrated Webcam</li>
  <li>Upek Biometric Touchchip/Touchstrip Fingerprint Sensor</li>
  <li>Gemalto (was Gemplus) Compact Smart Card Reader Writer</li>
</ul>

<p>It was cheap, fun to configure and easy to carry wherever I needed to go.</p>

<p>In this post I’m going to describe how I configured the fingerprint sensor.</p>

<p>For Microsoft Windows 11 you don’t have to do anything special, as the device is automatically configured. You just go to your user preferences, add fingerprint login and follow the procedure to enroll your fingerprint(s).</p>

<p>For Debian 12, I’ll describe the quick procedure below.
But before that, please let me link to the original blog entry by the device driver author <a href="http://www.reactivated.net/weblog/archives/2008/07/upek-touchstrip-sensor-only-147e2016-on-linux/">here</a>.</p>

<p>Now with the procedure:</p>

<ol>
  <li>Install some Debian packages:
    <pre><code class="language-Shell"> $ sudo apt install fprintd libpam-fprintd
</code></pre>
  </li>
  <li>Configure PAM (Pluggable Authentication Modules):
    <pre><code class="language-Shell"> $ sudo pam-auth-update
</code></pre>
    <p>In the PAM configuration menu, select “Fingerprint authentication” and click OK.</p>
    <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> [*] Fingerprint authentication
 [*] Unix authentication
 [*] Register user sessions in the systemd control group hierarchy
 [ ] Create home directory on login
 [*] GNOME Keyring Daemon - Login keyring management

 &lt;Ok&gt;    &lt;Cancel&gt;
</code></pre></div>    </div>
  </li>
  <li>Enroll Your Fingerprint.
 After restarting your system, you should be able to enroll your fingerprint. Use the <code class="language-plaintext highlighter-rouge">fprintd-enroll</code> tool or a graphical interface (if available).
    <pre><code class="language-Shell"> $ fprintd-enroll 
 Using device /net/reactivated/Fprint/Device/0
 Enrolling right-index-finger finger.
 Enroll result: enroll-stage-passed
 Enroll result: enroll-stage-passed
 Enroll result: enroll-stage-passed
 Enroll result: enroll-stage-passed
 Enroll result: enroll-stage-passed
 Enroll result: enroll-completed
</code></pre>
    <p>You will need several swipes for the fingerprint to be completely enrolled; repeat until you see the <code class="language-plaintext highlighter-rouge">enroll-completed</code> message.</p>
  </li>
  <li>Now you can log in to your system using your login manager, like <a href="https://github.com/canonical/lightdm">LightDM</a> in my case.
 Also, you can use your fingerprint to authenticate yourself in other situations:
    <pre><code class="language-Shell"> $ sudo su -
 Swipe your right index finger across the fingerprint reader
 # 
</code></pre>
  </li>
  <li>Optionally, you can enroll other fingerprints or manage them through command-line utilities:
    <pre><code class="language-Shell"> $ fprintd-list manuelmc
 found 1 devices
 Device at /net/reactivated/Fprint/Device/0
 Using device /net/reactivated/Fprint/Device/0
 Fingerprints for user manuelmc on Upek TouchChip Fingerprint Coprocessor (swipe):
 \- #0: right-index-finger
</code></pre>
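    <p>For completeness, the same <code class="language-plaintext highlighter-rouge">fprintd</code> suite also lets you test and remove enrollments — a sketch of commands I did not need on this machine, so take the exact invocations with a grain of salt:</p>

```shell
 $ fprintd-verify            # swipe the enrolled finger to test the stored print
 $ fprintd-delete manuelmc   # remove the fingerprints enrolled for that user
```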
  </li>
</ol>]]></content><author><name>Manuel Molina</name></author><category term="debian" /><category term="lenovo" /><category term="fingerprint" /><summary type="html"><![CDATA[In December last year I managed to build a special Lenovo Thinkpad X201 with some very specific traits: 3G embedded support (Qualcomm, Inc. Qualcomm Gobi 2000) Lenovo Integrated Webcam Upek Biometric Touchchip/Touchstrip Fingerprint Sensor Gemalto (was Gemplus) Compact Smart Card Reader Writer]]></summary></entry><entry><title type="html">Old Ruby version installation with RVM</title><link href="https://manuelmc.pocosmhz.org/2025/06/06/old-ruby-install-with-rvm.html" rel="alternate" type="text/html" title="Old Ruby version installation with RVM" /><published>2025-06-06T14:07:00+00:00</published><updated>2025-06-06T14:07:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/06/06/old-ruby-install-with-rvm</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/06/06/old-ruby-install-with-rvm.html"><![CDATA[<p>This is a quick post on how to install an old Ruby version on your Linux or Mac computer, with the help of <a href="https://rvm.io/">RVM</a>, the Ruby Version Manager:</p>

<ol>
  <li>First and foremost, if you haven’t already, let’s install RVM:
 For the current user (no system-wide installation), run:
    <pre><code class="language-Shell"> \curl -sSL https://get.rvm.io | bash -s stable --ruby
</code></pre>
  </li>
  <li>Once you have RVM installed, we’ll first build the legacy OpenSSL that old Ruby versions need:
    <pre><code class="language-Shell"> $ rvm pkg install openssl
</code></pre>
  </li>
  <li>With that dependency installed, we’re now able to install the selected Ruby version with OpenSSL support:
    <pre><code class="language-Shell"> $ rvm install ruby-3.0.5 --with-openssl-dir=$HOME/.rvm/usr
</code></pre>
    <p>It will take a while.</p>
  </li>
  <li>With the version installed, let’s install <a href="https://bundler.io/">Bundler</a>:
    <pre><code class="language-Shell"> $ rvm all do gem install bundler
</code></pre>
  </li>
  <li>Let’s use the newly installed Ruby version:
    <pre><code class="language-Shell"> $ rvm use ruby-3.0.5

 RVM is not a function, selecting rubies with 'rvm use ...' will not work.

 You need to change your terminal emulator preferences to allow login shell.
 Sometimes it is required to use `/bin/bash --login` as the command.
 Please visit https://rvm.io/integration/gnome-terminal/ for an example.
</code></pre>
    <p>Oops! It looks like we need to allow a login shell for the RVM shell function to work:</p>
    <pre><code class="language-Shell"> $ bash --login
 $ rvm use ruby-3.0.5
 Using /home/manuelmc/.rvm/gems/ruby-3.0.5
</code></pre>
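    <p>Optionally — this is an extra step, not something RVM does by itself — you can make this version the default for new login shells:</p>

```shell
 $ rvm use ruby-3.0.5 --default
 $ ruby -v   # should now report ruby 3.0.5
```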
  </li>
</ol>]]></content><author><name>Manuel Molina</name></author><category term="ruby" /><category term="rvm" /><category term="openssl" /><summary type="html"><![CDATA[This is a quick post on how to install an old Ruby version on your Linux or Mac computer, with the help of RVM, the Ruby Version Manager:]]></summary></entry><entry><title type="html">Proxmox home cluster (II)</title><link href="https://manuelmc.pocosmhz.org/2025/04/15/proxmox-home-cluster-ii.html" rel="alternate" type="text/html" title="Proxmox home cluster (II)" /><published>2025-04-15T17:10:00+00:00</published><updated>2025-04-15T17:10:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/04/15/proxmox-home-cluster-ii</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/04/15/proxmox-home-cluster-ii.html"><![CDATA[<p>This is the second part of my <a href="/2025/04/13/proxmox-home-cluster-i.html">previous post</a> about Proxmox basic installation.</p>

<p>Now we have a working cluster, where we can start virtual hosts and <a href="https://en.wikipedia.org/wiki/LXC">LXC</a> containers. However, the storage is local to each node.</p>

<p>What we’ll do next is add shared storage to this cluster.</p>

<p>As a starting point, I used <a href="https://tech.lobobrothers.com/proxmox-y-ceph-de-0-a-100-parte-iii/">this blog post from Lobobrothers</a> (it’s in Spanish).</p>

<p>The steps I followed:</p>

<ol>
  <li>
    <p>If you installed Proxmox with a single disk, and left enough disk space in the installation process (see previous post), <strong>now is the time to partition that space</strong> to make it available for Ceph:</p>

    <p>See the current configuration:</p>
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> root@pve01:~# fdisk /dev/nvme0n1

 Welcome to fdisk <span class="o">(</span>util-linux 2.38.1<span class="o">)</span><span class="nb">.</span>
 Changes will remain <span class="k">in </span>memory only, <span class="k">until </span>you decide to write them.
 Be careful before using the write command.

 This disk is currently <span class="k">in </span>use - repartitioning is probably a bad idea.
 It<span class="s1">'s recommended to umount all file systems, and swapoff all swap
 partitions on this disk.


 Command (m for help): p

 Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
 Disk model: KINGSTON SNV3S1000G                     
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: gpt
 Disk identifier: 712DDA48-E2CF-4783-AEAD-5103BB5DAD17

 Device             Start        End    Sectors   Size Type
 /dev/nvme0n1p1        34       2047       2014  1007K BIOS boot
 /dev/nvme0n1p2      2048    2099199    2097152     1G EFI System
 /dev/nvme0n1p3   2099200  421529599  419430400   200G Linux LVM

 Command (m for help): 
</span></code></pre></div>    </div>
    <p>Create a new partition:</p>
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> Command <span class="o">(</span>m <span class="k">for </span><span class="nb">help</span><span class="o">)</span>: n
 Partition number <span class="o">(</span>4-128, default 4<span class="o">)</span>: 
 First sector <span class="o">(</span>2099200-421529599, default 2099200<span class="o">)</span>: 
 Last sector, +/-sectors or +/-size<span class="o">{</span>K,M,G,T,P<span class="o">}</span> <span class="o">(</span>2099200-421529599, default 421529599<span class="o">)</span>: 

 Created a new partition 4 of <span class="nb">type</span> <span class="s1">'Linux filesystem'</span> and of size 730.5 GiB.

 Command <span class="o">(</span>m <span class="k">for </span><span class="nb">help</span><span class="o">)</span>: w
 The partition table has been altered.
 Syncing disks.
</code></pre></div>    </div>
  </li>
  <li>
    <p>After logging in to the admin console, click on Datacenter -&gt; Ceph. You’ll receive this message:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-not-installed.png" alt="Ceph not installed warning" />
Accept it to proceed.</p>
  </li>
  <li>
    <p>We’re offered a screen to select which version to install:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-install-selection.jpg" alt="Ceph install selection" />
After selecting the latest non-subscription version, we click on <em>Start squid installation</em>.</p>
  </li>
  <li>
    <p>Once we finish the installation, click on <em>Next</em>.
Select the network for Ceph to use:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-config.png" alt="Ceph configuration selection" /></p>
  </li>
  <li>
    <p>Installation of Ceph in the first node is done.
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-finished.png" alt="Ceph configuration selection" /></p>
  </li>
  <li>
    <p>As we’ve been told, we’ll repeat the steps in the nodes <code class="language-plaintext highlighter-rouge">pve02</code> and <code class="language-plaintext highlighter-rouge">pve03</code>.</p>
  </li>
  <li>
    <p>Now, for each of the nodes, we click on <code class="language-plaintext highlighter-rouge">Ceph</code> -&gt; <code class="language-plaintext highlighter-rouge">OSD</code> -&gt; <code class="language-plaintext highlighter-rouge">Create: OSD</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-create-osd.png" alt="Ceph create OSD" /></p>
  </li>
  <li>
    <p>Again, for each of the nodes, we click on <code class="language-plaintext highlighter-rouge">Ceph</code> -&gt; <code class="language-plaintext highlighter-rouge">Monitor</code> -&gt; <code class="language-plaintext highlighter-rouge">Create</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-create-monitor.png" alt="Ceph create Monitor" /></p>
  </li>
  <li>
    <p>For nodes <code class="language-plaintext highlighter-rouge">pve02</code> and <code class="language-plaintext highlighter-rouge">pve03</code> we’ll create additional manager processes, by clicking on <code class="language-plaintext highlighter-rouge">Ceph</code> -&gt; <code class="language-plaintext highlighter-rouge">Monitor</code> -&gt; <code class="language-plaintext highlighter-rouge">Manager</code> -&gt; <code class="language-plaintext highlighter-rouge">Create</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-create-manager.png" alt="Ceph create additional manager" /></p>
  </li>
  <li>
    <p>Now, if you were hosting different storage types, you would want to establish a new set of CRUSH rules for the OSDs. If you have a single storage space per node and OSD, you can safely skip this step.</p>
  </li>
</ol>
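<p>As a quick sanity check on the partitioning in step 1, the ~730.5 GiB that fdisk reports for the new partition can be reproduced from the sector counts shown above. The sketch below assumes 512-byte sectors and the usual 33-sector GPT backup table at the end of the disk:</p>

```shell
disk_sectors=1953525168                # total sectors, from the fdisk header
first_free=421529600                   # first sector after /dev/nvme0n1p3
last_usable=$(( disk_sectors - 34 ))   # GPT keeps a backup table at the disk's end
size_gib=$(( (last_usable - first_free + 1) * 512 / 1024 / 1024 / 1024 ))
echo "${size_gib} GiB"                 # integer part of the ~730.5 GiB fdisk reports
```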

<p>In case you want to run Ceph with different sets of disks per node (spinning, SSD, or a bunch of disks split into different groups), please go ahead and check the <a href="https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_device_classes">Ceph CRUSH &amp; Device Classes</a> section of the documentation.</p>

<ol start="11">
  <li>
    <p>If we now click on <code class="language-plaintext highlighter-rouge">Ceph</code> -&gt; <code class="language-plaintext highlighter-rouge">OSD</code> we can see something like this:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-osd.png" alt="Ceph OSD list" /></p>
  </li>
  <li>
    <p>Now it’s time to create a Ceph pool, where virtual machine disks will be stored. Go ahead and click on <code class="language-plaintext highlighter-rouge">Ceph</code> -&gt; <code class="language-plaintext highlighter-rouge">Pools</code> -&gt; <code class="language-plaintext highlighter-rouge">Create</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-pool1.png" alt="Ceph create pool" /></p>
  </li>
  <li>
    <p>We can check under <code class="language-plaintext highlighter-rouge">Datacenter</code> -&gt; <code class="language-plaintext highlighter-rouge">Storage</code> that we do have <code class="language-plaintext highlighter-rouge">pool1</code> available for shared storage:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/ceph-shared.png" alt="Ceph shared storage" /></p>
  </li>
</ol>
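<p>If you prefer the command line over the GUI, the same Ceph setup maps onto the <code class="language-plaintext highlighter-rouge">pveceph</code> tool. This sketch comes from the Proxmox documentation rather than my own session, so treat the exact flags as version-dependent:</p>

```shell
 root@pve01:~# pveceph install                     # per node, like the GUI wizard
 root@pve01:~# pveceph mon create                  # one monitor per node
 root@pve01:~# pveceph mgr create                  # extra managers on pve02/pve03
 root@pve01:~# pveceph osd create /dev/nvme0n1p4   # the partition created earlier
 root@pve01:~# pveceph pool create pool1           # the shared pool for VM disks
```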

<p>Now we’re ready to create a new virtual machine in the Ceph shared storage.</p>

<p>From here on, these are optional steps.</p>

<p>If you plan to use <a href="https://github.com/ceph/ceph-csi">Ceph CSI</a> in your Kubernetes cluster, with <a href="https://docs.ceph.com/en/latest/cephfs/">CephFS</a>, you must add now at least one <a href="https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pveceph_fs_mds">Ceph Metadata Server</a>.</p>

<ol>
  <li>
    <p>You can create an MDS through the Proxmox VE web GUI’s <code class="language-plaintext highlighter-rouge">Node</code> -&gt; <code class="language-plaintext highlighter-rouge">CephFS</code> panel -&gt; <code class="language-plaintext highlighter-rouge">Create</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/cephfs-create-mds.png" alt="Ceph MDS" /></p>
  </li>
  <li>
    <p>In the dialog box, confirm the creation of the first metadata server:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/cephfs-create-mds-first.png" alt="Ceph create first MDS" /></p>

    <p>You can create more servers, and it is a good idea to do so, but be aware that only one will be active and the rest of them will remain on standby.</p>
  </li>
  <li>
    <p>Let’s do the same with the second and third node, selecting each node and naming the new metadata servers accordingly:</p>

    <p><img src="/content/images/2025-04-15-proxmox-home-cluster-ii/cephfs-create-mds-second.png" alt="Ceph create second MDS" /></p>
  </li>
  <li>
    <p>Now, and before you create any Ceph FS, the status of all MDS is <code class="language-plaintext highlighter-rouge">standby</code>:
<img src="/content/images/2025-04-15-proxmox-home-cluster-ii/cephfs-mds-list.png" alt="Ceph MDS list" /></p>
  </li>
</ol>
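<p>You can also cross-check the daemons from any node’s shell; I’m hedging on the exact output format, which varies between Ceph releases:</p>

```shell
 root@pve01:~# ceph mds stat
  3 up:standby
```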

<p>We’ll take care of creating a Ceph FS in a separate post, together with its use case.</p>]]></content><author><name>Manuel Molina</name></author><category term="hypervisor" /><category term="home" /><category term="ha" /><category term="budget" /><category term="proxmox" /><summary type="html"><![CDATA[This is the second part of my previous post about Proxmox basic installation.]]></summary></entry><entry><title type="html">Proxmox home cluster (I)</title><link href="https://manuelmc.pocosmhz.org/2025/04/13/proxmox-home-cluster-i.html" rel="alternate" type="text/html" title="Proxmox home cluster (I)" /><published>2025-04-13T19:10:00+00:00</published><updated>2025-04-13T19:10:00+00:00</updated><id>https://manuelmc.pocosmhz.org/2025/04/13/proxmox-home-cluster-i</id><content type="html" xml:base="https://manuelmc.pocosmhz.org/2025/04/13/proxmox-home-cluster-i.html"><![CDATA[<p>For some time I’ve been wondering about the best way to have a steady solution to use as a home lab.
That involves a whole lot of factors to think about. Many of them are cost-related.</p>

<p>Nowadays you can think about having a small cluster in your cloud provider of choice.
Even if you automate the creation and destruction of such a lab through <a href="https://en.wikipedia.org/wiki/Infrastructure_as_code">IaC</a>, you’ll incur costs that might end up being a real burden.</p>

<p>What about a dedicated server? Even the cheapest ones are either not big enough or you have to pay extra to have proper storage.</p>

<p>And what about using real hardware and hosting it at home? Well, I checked some second-hand carrier-grade hardware providers and they offer you really affordable solutions. See <a href="https://www.give1life.com">Give1life</a>, <a href="https://www.jetcomputer.net">Jet Computer</a> or <a href="https://serverando.de/">Serverando</a>, but there are many others.</p>

<p>As familiar as I am with the hosting and datacenter world, there are three limiting factors in this solution: noise, heat and electric power. Yes, they’re affordable and reliable. But you’ll need a room for that purpose. As I don’t plan on having one, let’s check another solution that came to me.</p>

<h1 id="configuration-selection">Configuration selection</h1>
<h2 id="compute-platform">Compute platform</h2>
<p>There is a new trend of small form factor PCs, called <a href="https://en.wikipedia.org/wiki/Mini_PC">mini PCs</a>, that are very handy when you lack proper desktop space. Not only are they small, they’re also very energy efficient.</p>

<p>For a small desktop PC to browse the web and do basic daily chores, they were fine. However, right now, you can find a very wide variety of options. Some of them are not so <em>basic</em>.</p>

<p>Some friends talked me into checking the <a href="https://www.intel.com/content/www/us/en/products/sku/231803/intel-processor-n100-6m-cache-up-to-3-40-ghz/specifications.html">Intel N100</a>, a CPU from Intel’s 12th gen <a href="https://en.wikipedia.org/wiki/Alder_Lake#Alder_Lake-N">Alder Lake</a> architecture. Indeed, very power savvy (between 6 and 12 watts), but also very capable (4 CPU cores at up to 3.4 GHz).</p>

<p>Theoretically they’re able to handle up to 16 GBytes of RAM. That is more than enough for a low cost PC. You can even use it as a workstation, if you stretch the concept a bit.</p>

<p>They got me interested, and I set my eyes on the <a href="https://www.aliexpress.com/item/1005006445443589.html">Texhoo QN10</a> mini PC. Please <strong>do not</strong> be misled by the QN10 SE model.</p>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/texhoo-qn10.png" alt="TexHoo QN10" /></p>

<p>As I was about to settle with the idea of a 16 GB RAM configuration, with dual 2.5 Gbit ethernet interface, multiple USB ports, WIFI6 etcetera, I bumped into <a href="https://diycraic.com/2024/12/27/intel-n100-mini-pc-32gb-ram-as-a-home-server/">this blog post</a> stating that <strong>there was a stable 32 GBytes RAM configuration</strong>.</p>

<p>For every cluster node, I bought one <a href="https://www.amazon.es/dp/B0BLTDTD86">Crucial RAM DDR5 32GB 5600MHz SODIMM CL46 - CT32G56C46S5</a> module, as suggested by <a href="https://diycraic.com/user/stepyon/">Stepyon</a> in the blog, and one <a href="https://www.amazon.es/dp/B0DBR3DZWG">Kingston NV3 NVMe PCIe 4.0 SSD Internal 1TB M.2 2280-SNV3S/1000G</a>, both of which have a reasonable price-quality ratio.</p>

<p>Remember that you have to explicitly enable the virtualization options in your BIOS. I also recommend setting up power management so the mini PC returns to its previous state after a power outage.</p>

<h2 id="networking">Networking</h2>
<p>Let’s select some interconnect hardware on a budget.
I opted for an unmanaged switch with a passive heat sink: the <a href="https://www.amazon.es/dp/B09M2RXCVN">Tenda TEM2010X 8 x 2.5 Gbit port switch</a>.</p>

<h1 id="proxmox-installation">Proxmox installation</h1>

<h2 id="software-installation">Software installation</h2>
<p>In order to install Proxmox, I selected the version 8.3.1 ISO and downloaded it from their <a href="https://www.proxmox.com/en/downloads">downloads page</a>.</p>

<p>It is very straightforward, but I’ll go quickly through the steps for a standalone installation here:</p>

<ul>
  <li>First, you can settle with the standard installation option:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-install-menu.jpg" alt="Proxmox installation menu" /></p>

<ul>
  <li>Then you’re asked to accept the license:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-license.jpg" alt="Proxmox license" /></p>

<ul>
  <li>Now you are presented with the Proxmox HD selection dialog:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-target-hd.jpg" alt="Proxmox target HD selection" /></p>
<ul>
  <li>Here you have two options:
    <ol>
      <li>If you either don’t plan on creating a Ceph cluster, or you plan to do it with a secondary disk, just click on OK and leave the default options (ext4 FS).</li>
      <li>If you want to use part of the only disk we have for Ceph, please click on <code class="language-plaintext highlighter-rouge">Options</code> and reduce the size as stated below:
  <img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-target-hd-resize.jpg" alt="Proxmox target HD resize" /></li>
    </ol>

    <p>Now accept and continue with the next step.</p>
  </li>
  <li>In this step you adjust your locale options:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-location-tz.jpg" alt="Proxmox location and timezone selection" /></p>

<ul>
  <li>Here you’ll set up a password for the <em>root</em> user, or administrator user of the node, and an email address for notifications:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-admin-passwd.jpg" alt="Proxmox admin password and notification email address" /></p>

<ul>
  <li>Here we’ll set up an IP address for the management interface. We can only set up a single network interface here, but this can be changed later:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-mgmt-network-cfg.jpg" alt="Proxmox management network interface configuration" /></p>

<ul>
  <li>Before we continue, we are shown the summary of our configuration. If you agree, go ahead:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-summary.jpg" alt="Proxmox summary" /></p>

<ul>
  <li>During the next two or three minutes, you’ll be entertained with some advertising while the packages are installed:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-pkg-install.jpg" alt="Proxmox package installation" /></p>

<ul>
  <li>If you selected the option for automatic reboot after installation, you will see this screen after reboot:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-first-boot-grub.jpg" alt="Proxmox first boot GRUB menu" /></p>

<ul>
  <li>And finally, after a few seconds, you have the login screen:
<img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-login.jpg" alt="Proxmox server login screen" /></li>
</ul>

<h2 id="basic-setup-of-a-node">Basic setup of a node</h2>
<p>Once you have a node correctly installed, you can access it via web, as suggested by the console prompt shown before.</p>

<ul>
  <li>You’ll get the login page, so please, log in:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-web-login.jpg" alt="Proxmox web login" /></p>

<ul>
  <li>Once you log in for the first time, you’ll be warned about not having an enterprise license installed.</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-license-warning.jpg" alt="Proxmox license warning" /></p>

<ul>
  <li>
    <p>After dismissing the warning, you’ll get to the main admin view:
<img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-main-node-view.jpg" alt="Proxmox main view" /></p>
  </li>
  <li>
    <p>In order to add the free repositories, we’ll access <code class="language-plaintext highlighter-rouge">pve01</code> (this node) -&gt; Updates -&gt; Repositories:</p>
  </li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-repositories-view.jpg" alt="Proxmox software repositories view" /></p>

<p>(and yes, we’ve already made some tests :smirk: )</p>

<ul>
  <li>Here, we’ll disable these two:
    <ul>
      <li>Enterprise Proxmox</li>
      <li>Ceph Quincy Enterprise</li>
    </ul>
  </li>
  <li>And we’ll <strong>add</strong> these two:
    <ul>
      <li>Proxmox PVE no subscription</li>
      <li>Ceph Quincy no subscription</li>
    </ul>
  </li>
  <li>You have now the following options on screen:</li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-free-repositories-view.jpg" alt="Proxmox software repositories view after enabling free ones" /></p>
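<p>Under the hood, these toggles just manage plain APT source entries. On a Debian 12 based Proxmox 8 node the no-subscription entries look roughly like this — the file names are my assumption, check yours under <code class="language-plaintext highlighter-rouge">/etc/apt/sources.list.d/</code>:</p>

```text
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
```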

<ul>
  <li>Let’s configure the network now in order to use the two 2.5 Gbit interfaces instead of one. We can do this through the web interface and apply the changes once we agree to them. Here’s what we need to do:
    <ul>
      <li>Click on <code class="language-plaintext highlighter-rouge">pve01</code> node, then System -&gt; Network.</li>
      <li>Delete the current <em>bridge</em>: click on <code class="language-plaintext highlighter-rouge">vmbr0</code> and then <em>Remove</em>.</li>
      <li>Click on <em>Create</em> -&gt; <em>Linux bond</em>
        <ul>
          <li>Name: <code class="language-plaintext highlighter-rouge">bond0</code></li>
          <li>Slaves: <code class="language-plaintext highlighter-rouge">enp2s0 enp3s0</code> (the two wired network interfaces)</li>
          <li>Mode: <code class="language-plaintext highlighter-rouge">balance-alb</code> . See <a href="https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_bond">here</a> for more details. This is the best configuration if you don’t have a <a href="https://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol">LACP</a> capable network switch.</li>
          <li>Click on <em>Create</em> button.</li>
        </ul>
      </li>
      <li>Now click on <em>Create</em> -&gt; <em>Linux bridge</em>. Fill the dialog with this:
        <ul>
          <li>Name: <code class="language-plaintext highlighter-rouge">vmbr0</code></li>
          <li>Bridge ports: <code class="language-plaintext highlighter-rouge">bond0</code>.</li>
          <li>IPv4/CIDR: <code class="language-plaintext highlighter-rouge">192.168.18.131/24</code>. Same network address you used before, including network mask.</li>
          <li>Gateway (IPv4): <code class="language-plaintext highlighter-rouge">192.168.18.1</code>. Same gateway as in the previous configuration.</li>
          <li>Click on <em>Create</em> button.</li>
        </ul>
      </li>
      <li>The changes are previewed and you can go through them before confirming:</li>
    </ul>
  </li>
</ul>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-network-changes-preview.jpg" alt="Proxmox network changes preview" /></p>

<ul>
  <li>If you agree, click on <code class="language-plaintext highlighter-rouge">Apply configuration</code>.</li>
</ul>
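<p>For reference, the bond plus bridge created above end up as stanzas in <code class="language-plaintext highlighter-rouge">/etc/network/interfaces</code>. This is roughly what Proxmox generates — interface names and option spellings may differ on your hardware:</p>

```text
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0 enp3s0
        bond-miimon 100
        bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
        address 192.168.18.131/24
        gateway 192.168.18.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```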

<h2 id="cluster-creation">Cluster creation</h2>
<p>Once the three nodes are online and able to talk to each other through the network, we’ll double check the <a href="https://pve.proxmox.com/wiki/Cluster_Manager#_requirements">requirements</a> for creating the Proxmox cluster.</p>

<p>Now, for a change, we’re going to create the cluster via command-line, following the steps detailed in the <a href="https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_create_cluster">docs</a>:</p>

<ol>
  <li>Create the cluster.
 From the first node, run:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> root@pve01:~# pvecm create tejar
 Corosync Cluster Engine Authentication key generator.
 Gathering 2048 bits <span class="k">for </span>key from /dev/urandom.
 Writing corosync key to /etc/corosync/authkey.
 Writing corosync config to /etc/pve/corosync.conf
 Restart corosync and cluster filesystem
</code></pre></div>    </div>
    <p>We check the current status:</p>
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> root@pve01:~# pvecm status
 Cluster information
 <span class="nt">-------------------</span>
 Name:             tejar
 Config Version:   1
 Transport:        knet
 Secure auth:      on

 Quorum information
 <span class="nt">------------------</span>
 Date:             Tue Apr 15 01:06:31 2025
 Quorum provider:  corosync_votequorum
 Nodes:            1
 Node ID:          0x00000001
 Ring ID:          1.5
 Quorate:          Yes

 Votequorum information
 <span class="nt">----------------------</span>
 Expected votes:   1
 Highest expected: 1
 Total votes:      1
 Quorum:           1  
 Flags:            Quorate 

 Membership information
 <span class="nt">----------------------</span>
     Nodeid      Votes Name
 0x00000001          1 192.168.18.131 <span class="o">(</span><span class="nb">local</span><span class="o">)</span>
</code></pre></div>    </div>
  </li>
  <li>Add the second (and subsequent) node(s):
 Log in to the node you want to join to the cluster and run:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> root@pve02:~# pvecm add 192.168.18.131
 Please enter superuser <span class="o">(</span>root<span class="o">)</span> password <span class="k">for</span> <span class="s1">'192.168.18.131'</span>: <span class="k">***********</span>
 Establishing API connection with host <span class="s1">'192.168.18.131'</span>
 The authenticity of host <span class="s1">'192.168.18.131'</span> can<span class="s1">'t be established.
 X509 SHA256 key fingerprint is 80:EC:8B:A3:1F:B0:41:C6:F3:5C:E2:7A:47:6C:66:EE:12:DD:B9:EF:CE:0D:43:85:6A:70:A6:92:A7:DA:0F:27.
 Are you sure you want to continue connecting (yes/no)? yes
 Login succeeded.
 check cluster join API version
 No cluster network links passed explicitly, fallback to local node IP '</span>192.168.18.132<span class="s1">'
 Request addition of this node
 Join request OK, finishing setup locally
 stopping pve-cluster service
 backup old database to '</span>/var/lib/pve-cluster/backup/config-1744672204.sql.gz<span class="s1">'
 waiting for quorum...OK
 (re)generate node files
 generate new node certificate
 merge authorized SSH keys
 generated new node certificate, restart pveproxy and pvedaemon services
 successfully added node '</span>pve02<span class="s1">' to cluster.
</span></code></pre></div>    </div>
    <p>In the command above we’ve used the IP address of an existing cluster node.</p>
  </li>
  <li>
    <p>Repeat the previous step with the third and subsequent nodes to join the cluster.</p>
  </li>
  <li>Check cluster status:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> root@pve03:~# pvecm status
 Cluster information
 <span class="nt">-------------------</span>
 Name:             tejar
 Config Version:   3
 Transport:        knet
 Secure auth:      on

 Quorum information
 <span class="nt">------------------</span>
 Date:             Tue Apr 15 01:14:00 2025
 Quorum provider:  corosync_votequorum
 Nodes:            3
 Node ID:          0x00000003
 Ring ID:          1.d
 Quorate:          Yes

 Votequorum information
 <span class="nt">----------------------</span>
 Expected votes:   3
 Highest expected: 3
 Total votes:      3
 Quorum:           2  
 Flags:            Quorate 

 Membership information
 <span class="nt">----------------------</span>
     Nodeid      Votes Name
 0x00000001          1 192.168.18.131
 0x00000002          1 192.168.18.132
 0x00000003          1 192.168.18.133 <span class="o">(</span><span class="nb">local</span><span class="o">)</span>
</code></pre></div>    </div>
  </li>
</ol>
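<p>A side note on the <code class="language-plaintext highlighter-rouge">Quorum: 2</code> line in the output above: corosync’s votequorum requires a strict majority of the expected votes, which with one vote per node boils down to:</p>

```shell
nodes=3
quorum=$(( nodes / 2 + 1 ))   # strict majority of the expected votes
echo "With ${nodes} nodes, quorum is ${quorum} votes"
```

<p>That is also why a three-node cluster survives the loss of one node, while a two-node cluster cannot lose any without extra configuration.</p>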

<h1 id="wrap-up">Wrap-up</h1>
<p>We have a working Proxmox cluster with three nodes. In upcoming posts we’ll deal with <a href="https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster">Ceph shared storage</a> configuration and other minor issues.</p>

<p><img src="/content/images/2025-04-13-proxmox-home-cluster-i/proxmox-cluster-summary-view.jpg" alt="Proxmox cluster summary view" /></p>]]></content><author><name>Manuel Molina</name></author><category term="hypervisor" /><category term="home" /><category term="ha" /><category term="budget" /><category term="proxmox" /><summary type="html"><![CDATA[For some time I’ve been wondering about the best way to have a steady solution to use as a home lab. That involves a whole lot of factors to think about. Many of them are cost-related.]]></summary></entry></feed>