<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>False Gems</title>
    <link>https://gem.org.ru/</link>
    <description>Some thoughts without any particular direction</description>
    <pubDate>Sun, 19 Apr 2026 09:24:49 +0000</pubDate>
    <item>
      <title>How I Brought Some Convenience and Organization to My Script Collection</title>
      <link>https://gem.org.ru/how-i-put-some-convenience-and-organization-to-my-script-collection</link>
      <description>&lt;![CDATA[I keep scripts in one place and use desktop entries so Rofi can launch them.&#xA;&#xA;~/Sync/scripts/&#xA;├── lock.sh&#xA;├── argo-translate.sh&#xA;├── backup.sh&#xA;└── applications/&#xA;    ├── lock.desktop&#xA;    ├── argo-translate.desktop&#xA;    └── ...&#xA;&#xA;Rofi&#39;s drun mode uses XDG desktop entry directories. To include my custom applications folder, I used XDG_DATA_DIRS:&#xA;&#xA;XDG_DATA_DIRS=&#34;$HOME/Sync/scripts:$XDG_DATA_DIRS&#34; rofi -show drun&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I keep scripts in one place and use desktop entries so Rofi can launch them.</p>

<pre><code>~/Sync/scripts/
├── lock.sh
├── argo-translate.sh
├── backup.sh
└── applications/
    ├── lock.desktop
    ├── argo-translate.desktop
    └── ...
</code></pre>

<p>Rofi&#39;s drun mode uses XDG desktop entry directories. To include my custom applications folder, I used <code>XDG_DATA_DIRS</code>:</p>

<pre><code>XDG_DATA_DIRS=&#34;$HOME/Sync/scripts:$XDG_DATA_DIRS&#34; rofi -show drun
</code></pre>
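
<p>For completeness, a matching desktop entry might look like this (the <code>Name</code> and <code>Exec</code> path are illustrative, not from the original setup; desktop entries do not expand <code>~</code>, so <code>Exec</code> needs an absolute path):</p>

<pre><code># ~/Sync/scripts/applications/lock.desktop
[Desktop Entry]
Type=Application
Name=Lock Screen
Exec=/home/user/Sync/scripts/lock.sh
Terminal=false
</code></pre>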
]]></content:encoded>
      <guid>https://gem.org.ru/how-i-put-some-convenience-and-organization-to-my-script-collection</guid>
      <pubDate>Sat, 27 Dec 2025 07:44:17 +0000</pubDate>
    </item>
    <item>
      <title>FIPS-compliant Flask and Flask-WTF</title>
      <link>https://gem.org.ru/fips-complaint-flask-and-flask-wtf</link>
      <description>&lt;![CDATA[Recently we deployed a Flask application on a RHEL9 server with FIPS mode enabled. It started fine but refused to serve any requests. The logs were filled with Unsupported DigestmodError messages.&#xA;&#xA;FIPS (Federal Information Processing Standards) mode disallows, system-wide, any hashing algorithm considered insecure. But vanilla Flask (and its batteries) often uses sha1. We stumbled upon two cases.&#xA;&#xA;A standard Flask stack often uses sha1 by default in two key places:&#xA;&#xA;flask sessions&#xA;&#xA;The default secure cookie sessions use itsdangerous for signing, which can default to sha1. The fix was easy: Flask&#39;s session interface is designed to be subclassed. We can create a custom session class inheriting from SecureCookieSessionInterface and tell it to use sha256 as the digest method.&#xA;&#xA;flask-wtf&#xA;&#xA;Serializing and signing the CSRF token here also uses itsdangerous, which again defaults to sha1. This is the trickier part. As of now, flask_wtf does not provide a simple config option to change the digest method. We have to create a custom CSRFProtect implementation forcing it to use a sha256 serializer.]]&gt;</description>
      <content:encoded><![CDATA[<p>Recently we deployed a Flask application on a RHEL9 server with FIPS mode enabled. It started fine but refused to serve any requests. The logs were filled with <code>Unsupported DigestmodError</code> messages.</p>

<p>FIPS (Federal Information Processing Standards) mode disallows, system-wide, any hashing algorithm considered insecure. But vanilla Flask (and its batteries) often uses <code>sha1</code>. We stumbled upon two cases.</p>

<p>A standard Flask stack often uses <code>sha1</code> by default in two key places:</p>

<h3 id="flask-sessions">Flask sessions</h3>

<p>The default secure cookie sessions use <code>itsdangerous</code> for signing, which can default to <code>sha1</code>. The fix was easy: Flask&#39;s session interface is designed to be subclassed. We can create a custom session class inheriting from <code>SecureCookieSessionInterface</code> and tell it to use <code>sha256</code> as the digest method.</p>
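
<p>A minimal sketch of that subclass (the class name and secret key here are illustrative, not from our deployment):</p>

<pre><code class="language-python">import hashlib

from flask import Flask
from flask.sessions import SecureCookieSessionInterface


class Sha256SessionInterface(SecureCookieSessionInterface):
    # itsdangerous signs the session cookie with this digest; Flask defaults to sha1
    digest_method = staticmethod(hashlib.sha256)


app = Flask(__name__)
app.secret_key = "change-me"  # placeholder
app.session_interface = Sha256SessionInterface()
</code></pre>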

<h3 id="flask-wtf">Flask-WTF</h3>

<p>Serializing and signing the CSRF token here also uses <code>itsdangerous</code>, which again defaults to <code>sha1</code>. This is the trickier part. As of now, <code>flask_wtf</code> does not provide a simple config option to change the digest method. We have to create a custom <code>CSRFProtect</code> implementation forcing it to use a <code>sha256</code> serializer.</p>
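
<p>flask_wtf builds its CSRF serializer with <code>itsdangerous</code> internally (salt <code>wtf-csrf-token</code>), so the core of the fix is a serializer whose signer is forced to <code>sha256</code>. A sketch of just that piece (the secret key is a placeholder; wiring this into a custom <code>CSRFProtect</code> is left out):</p>

<pre><code class="language-python">import hashlib

from itsdangerous import URLSafeTimedSerializer

# the same kind of serializer flask_wtf builds, but with the signer
# digest forced to sha256 instead of the sha1 default
serializer = URLSafeTimedSerializer(
    "change-me",  # app.secret_key in a real app
    salt="wtf-csrf-token",
    signer_kwargs={"digest_method": hashlib.sha256},
)

token = serializer.dumps("csrf-token-data")
assert serializer.loads(token) == "csrf-token-data"
</code></pre>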
]]></content:encoded>
      <guid>https://gem.org.ru/fips-complaint-flask-and-flask-wtf</guid>
      <pubDate>Tue, 11 Nov 2025 13:33:51 +0000</pubDate>
    </item>
    <item>
      <title>TIL about socket port byte order</title>
      <link>https://gem.org.ru/til-about-socket-port-byte-order</link>
      <description>&lt;![CDATA[I was writing a simple server in arm64 assembly and was trying to bind port 300 (spartan).&#xA;&#xA;.hword 0x012c  // htons(300)&#xA;&#xA;The server would bind fine, but to an odd port like 11265. The issue was byte order (endianness).&#xA;&#xA;My &#34;discoveries&#34; are:&#xA;&#xA;Network byte order is big-endian&#xA;ARM64 is little-endian&#xA;&#xA;I was storing 0x012c as .hword on ARM64, so the bytes land in memory as 2c 01. The kernel reads them in network (big-endian) order, interpreting the port as 0x2c01 = 11265.&#xA;&#xA;The solution was to define the byte order explicitly:&#xA;.byte 0x01, 0x2c&#xA;&#xA;The htons() function will handle it properly, but with assembly you have to do it manually.]]&gt;</description>
      <content:encoded><![CDATA[<p>I was writing a simple server in arm64 assembly and was trying to bind port 300 (spartan).</p>

<pre><code class="language-assembly">.hword 0x012c  // htons(300)
</code></pre>

<p>The server would bind fine, but to an odd port like 11265. The issue was byte order (endianness).</p>

<p>My “discoveries” are:</p>
<ul><li>Network byte order is big-endian</li>
<li>ARM64 is little-endian</li></ul>

<p>I was storing <code>0x012c</code> as <code>.hword</code> on ARM64, so the bytes land in memory as <code>2c 01</code>. The kernel reads them in network (big-endian) order, interpreting the port as <code>0x2c01 = 11265</code>.</p>

<p>The solution was to define the byte order explicitly:</p>

<pre><code>.byte 0x01, 0x2c
</code></pre>

<p>The <code>htons()</code> function will handle it properly, but with assembly you have to do it manually.</p>
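
<p>The same picture can be sketched in Python (just to illustrate the two byte orders; this is not part of the original server):</p>

<pre><code class="language-python">import socket
import struct

port = 300  # 0x012c

# network byte order is big-endian: high byte first
assert struct.pack(">H", port) == bytes([0x01, 0x2c])

# htons returns the value whose native memory layout matches network order;
# on a little-endian host that value is 0x2c01 = 11265
print(socket.htons(port))
</code></pre>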
]]></content:encoded>
      <guid>https://gem.org.ru/til-about-socket-port-byte-order</guid>
      <pubDate>Fri, 31 Oct 2025 18:23:56 +0000</pubDate>
    </item>
    <item>
      <title>aspe:keyoxide.org:6Y7KI4OG4YF5X3X5ASKPTXTRJ4</title>
      <link>https://gem.org.ru/aspe-keyoxide-org-6y7ki4og4yf5x3x5askptxtrj4</link>
      <description>&lt;![CDATA[aspe:keyoxide.org:6Y7KI4OG4YF5X3X5ASKPTXTRJ4]]&gt;</description>
      <content:encoded><![CDATA[<p>aspe:keyoxide.org:6Y7KI4OG4YF5X3X5ASKPTXTRJ4</p>
]]></content:encoded>
      <guid>https://gem.org.ru/aspe-keyoxide-org-6y7ki4og4yf5x3x5askptxtrj4</guid>
      <pubDate>Sun, 06 Apr 2025 08:18:29 +0000</pubDate>
    </item>
    <item>
      <title>Resolving DNS Issues with Firefox, Chrome, and curl on Arch Linux: A Quick Fix</title>
      <link>https://gem.org.ru/resolving-dns-issues-with-firefox-chrome-curl-on-arch-linux-a-quick-fix</link>
      <description>&lt;![CDATA[Recently, I faced an issue where curl (and browsers) couldn&#39;t resolve a hostname, but other tools like dig and nslookup worked fine. I want to share how I solved this problem in a simple way.&#xA;&#xA;When I tried to use curl I got this error:&#xA;Could not resolve host: my.local.hostname&#xA;But when I used other tools:&#xA;dig my.local.hostname&#xA;nslookup my.local.hostname&#xA;They both returned the correct IP address for my.local.hostname.&#xA;&#xA;Understanding the Cause&#xA;I learned that different tools resolve DNS in different ways:&#xA;dig and nslookup: they query DNS servers directly, bypassing the system&#39;s settings.&#xA;curl: it uses the system&#39;s resolver library, which follows the configuration in /etc/nsswitch.conf.&#xA;I checked my /etc/nsswitch.conf file and found this line:&#xA;&#xA;hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns&#xA;&#xA;This line tells the system how to resolve hostnames. The [!UNAVAIL=return] part means that if the resolve service (systemd-resolved) is available but doesn&#39;t find the hostname, the lookup stops there and the dns method is never tried. That&#39;s why curl couldn&#39;t resolve the hostname, even though DNS itself was working.&#xA;&#xA;The Solution&#xA;&#xA;I changed the hosts line to remove the [!UNAVAIL=return] part and reordered the methods to prioritize DNS:&#xA;&#xA;hosts: files dns myhostname mymachines resolve&#xA;&#xA;and restarted everything DNS-related:&#xA;&#xA;sudo systemd-resolve --flush-caches&#xA;sudo systemctl restart systemd-resolved&#xA;&#xA;Afterthoughts&#xA;&#xA;If you don&#39;t need systemd-resolved, you can disable it:&#xA;&#xA;sudo systemctl disable --now systemd-resolved&#xA;&#xA;Then update /etc/nsswitch.conf:&#xA;&#xA;hosts: files dns myhostname]]&gt;</description>
      <content:encoded><![CDATA[<p>Recently, I faced an issue where <code>curl</code> (and browsers) couldn&#39;t resolve a hostname, but other tools like <code>dig</code> and <code>nslookup</code> worked fine. I want to share how I solved this problem in a simple way.</p>



<p>When I tried to use <code>curl</code> I got this error:</p>

<pre><code>Could not resolve host: my.local.hostname
</code></pre>

<p>But when I used other tools:</p>

<pre><code class="language-bash">dig my.local.hostname
nslookup my.local.hostname
</code></pre>

<p>They both returned the correct IP address for <code>my.local.hostname</code>.</p>

<h2 id="understanding-the-cause">Understanding the Cause</h2>

<p>I learned that different tools resolve DNS in different ways:</p>
<ul><li><code>dig</code> and <code>nslookup</code>: they query DNS servers directly, bypassing the system&#39;s settings.</li>
<li><code>curl</code>: it uses the system&#39;s resolver library, which follows the configuration in <code>/etc/nsswitch.conf</code>.</li></ul>

<p>I checked my <code>/etc/nsswitch.conf</code> file and found this line:</p>

<pre><code>hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns
</code></pre>

<p>This line tells the system how to resolve hostnames. The <code>[!UNAVAIL=return]</code> part means that if the <code>resolve</code> service (systemd-resolved) is available but doesn&#39;t find the hostname, the lookup stops there and the <code>dns</code> method is never tried. That&#39;s why <code>curl</code> couldn&#39;t resolve the hostname, even though DNS itself was working.</p>
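
<p>Python&#39;s resolver goes through the same libc/NSS path as <code>curl</code>, so it makes a quick cross-check (the hostname is the same placeholder as above):</p>

<pre><code class="language-python">import socket

# getaddrinfo follows /etc/nsswitch.conf via the libc resolver,
# so it reproduces the failure that dig and nslookup do not see
try:
    infos = socket.getaddrinfo("my.local.hostname", None)
    print(infos[0][4][0])
except socket.gaierror as exc:
    print("NSS lookup failed:", exc)
</code></pre>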

<h2 id="the-solution">The Solution</h2>

<p>I changed the <code>hosts</code> line to remove the <code>[!UNAVAIL=return]</code> part and reordered the methods to prioritize DNS:</p>

<pre><code>hosts: files dns myhostname mymachines resolve
</code></pre>

<p>and restarted everything DNS-related:</p>

<pre><code>sudo systemd-resolve --flush-caches
sudo systemctl restart systemd-resolved
</code></pre>

<h2 id="afterthoughts">Afterthoughts</h2>

<p>If you don&#39;t need <code>systemd-resolved</code>, you can disable it:</p>

<pre><code>sudo systemctl disable --now systemd-resolved
</code></pre>

<p>Then update <code>/etc/nsswitch.conf</code>:</p>

<pre><code>hosts: files dns myhostname
</code></pre>
]]></content:encoded>
      <guid>https://gem.org.ru/resolving-dns-issues-with-firefox-chrome-curl-on-arch-linux-a-quick-fix</guid>
      <pubDate>Mon, 07 Oct 2024 12:40:00 +0000</pubDate>
    </item>
    <item>
      <title>How to fix Rust tools that link against libgit2.so</title>
      <link>https://gem.org.ru/how-to-fix-rust-tools-that-bind-libgit2-so</link>
      <description>&lt;![CDATA[Recently, after a system update, I could no longer run some of my system tools written in Rust, like exa and bat:&#xA;$ bat --version&#xA;bat: error while loading shared libraries: libgit2.so.1.4: cannot open shared object file: No such file or directory&#xA;The fix was easy: rebuild the binaries&#xA;$ cargo install exa bat --force&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Recently, after a system update, I could no longer run some of my system tools written in Rust, like <code>exa</code> and <code>bat</code>:</p>

<pre><code class="language-bash">$ bat --version
bat: error while loading shared libraries: libgit2.so.1.4: cannot open shared object file: No such file or directory
</code></pre>

<p>The fix was easy: rebuild the binaries:</p>

<pre><code>$ cargo install exa bat --force
</code></pre>
]]></content:encoded>
      <guid>https://gem.org.ru/how-to-fix-rust-tools-that-bind-libgit2-so</guid>
      <pubDate>Thu, 08 Sep 2022 11:26:51 +0000</pubDate>
    </item>
    <item>
      <title>How to fix yum after CentOS 8 went EOL </title>
      <link>https://gem.org.ru/how-to-fix-yum-after-centos-8-went-eol</link>
      <description>&lt;![CDATA[  Error: Failed to download metadata for repo &#39;appstream&#39;: Cannot prepare internal mirrorlist: No URLs in mirrorlist&#xA;&#xA;So now we have the same issue that we had with CentOS 6, and we can fix it as described in the previous post.&#xA;&#xA;$ sed -i &#39;s|baseurl=http://vault.centos.org|baseurl=http://vault.epel.cloud|g&#39; /etc/yum.repos.d/CentOS-Linux-*&#xA;&#xA;Alternative (AlmaLinux)&#xA;&#xA;The issue with the fix above is that we are now on a frozen repo that will never be updated. If you want the latest security updates, you may consider migrating to one of CentOS&#39;s successors. An AlmaLinux migration script is located here. Basically, it looks like this:&#xA;$ sudo dnf -y upgrade&#xA;$ curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh&#xA;$ sudo bash almalinux-deploy.sh]]&gt;</description>
      <content:encoded><![CDATA[<blockquote><p>Error: Failed to download metadata for repo &#39;appstream&#39;: Cannot prepare internal mirrorlist: No URLs in mirrorlist</p></blockquote>

<p>So now we have the same issue that we had with CentOS 6, and we can fix it as described in the <a href="/deprecated-centos-6">previous post</a>.</p>

<pre><code>$ sed -i &#39;s|baseurl=http://vault.centos.org|baseurl=http://vault.epel.cloud|g&#39; /etc/yum.repos.d/CentOS-Linux-*
</code></pre>



<h2 id="alternative-almalinux">Alternative (AlmaLinux)</h2>

<p>The issue with the fix above is that we are now on a frozen repo that will never be updated. If you want the latest security updates, you may consider migrating to one of CentOS&#39;s successors. An AlmaLinux migration script is located <a href="https://github.com/AlmaLinux/almalinux-deploy">here</a>. Basically, it looks like this:</p>

<pre><code class="language-bash">$ sudo dnf -y upgrade
$ curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh
$ sudo bash almalinux-deploy.sh
</code></pre>
]]></content:encoded>
      <guid>https://gem.org.ru/how-to-fix-yum-after-centos-8-went-eol</guid>
      <pubDate>Fri, 01 Jul 2022 07:57:56 +0000</pubDate>
    </item>
    <item>
      <title>Hardware troubleshooting</title>
      <link>https://gem.org.ru/hardware-troubleshooting</link>
      <description>&lt;![CDATA[Another post for the &#34;suffering journal&#34;. I experienced a lot of hardware failures:&#xA;&#xA;SSD disks became read-only or threw other I/O errors&#xA;The video card did not start on power-on; I had to restart each time.&#xA;Other system freezes of unknown origin.&#xA;&#xA;It went on for a month, and I tried replacing SATA cables, disabling &#34;spoiled&#34; disks, running memory checks, and the rest of the voodoo too. I started scaring myself with a shopping list in case the motherboard was broken.&#xA;&#xA;It was the power supply unit. There were no visible signs like a bulged capacitor or burn marks, though. I took a chance, bought a new PSU, and all the problems were gone.]]&gt;</description>
      <content:encoded><![CDATA[<p>Another post for the “suffering journal”. I experienced a lot of hardware failures:</p>
<ul><li>SSD disks became read-only or threw other I/O errors.</li>
<li>The video card did not start on power-on; I had to restart each time.</li>
<li>Other system freezes of unknown origin.</li></ul>

<p>It went on for a month, and I tried replacing SATA cables, disabling “spoiled” disks, running memory checks, and the rest of the voodoo too. I started scaring myself with a shopping list in case the motherboard was broken.</p>

<p>It was the power supply unit. There were no visible signs like a bulged capacitor or burn marks, though. I took a chance, bought a new PSU, and all the problems were gone.</p>
]]></content:encoded>
      <guid>https://gem.org.ru/hardware-troubleshooting</guid>
      <pubDate>Thu, 02 Dec 2021 08:53:16 +0000</pubDate>
    </item>
    <item>
      <title>Syncthing and NFS</title>
      <link>https://gem.org.ru/syncthing-and-nfs</link>
      <description>&lt;![CDATA[TLDR: Dockerised Syncthing with an NFS-mounted folder is a bad idea.&#xA;&#xA;About a year ago I started migrating from Dropbox to Syncthing. The structure of the new file synchronization flow was simple: one always-online node and a bunch of consumers. It was fine at first, but after a couple of months I noticed an increasing number of updated-file notifications for files that had not been modified on any device I was aware of. Another strange thing: tools whose dotfiles were in the synced folder started complaining about missing config files and resetting to their initial state.&#xA;&#xA;At first I blamed the Android app and excluded it from the flow. I had no luck, and the mess with my dotfiles continued. Nodes were often unsynchronized, and the logs showed a lot of deleted/created statuses for files that nobody touched. A file could be deleted and reappear again in a matter of hours.&#xA;After days of troubleshooting, I located the source of the issue: my main node setup. The Syncthing process on that node was dockerized, and the mounted sync folder was an NFS mount. Changing it to SMB had no effect.&#xA;&#xA;So my current solution is a structural change. Now each node syncs with all the others, with no master, and each node uses its own local filesystem. Works like a charm.]]&gt;</description>
      <content:encoded><![CDATA[<p><em>TLDR: Dockerised Syncthing with an NFS-mounted folder is a bad idea.</em></p>



<p>About a year ago I started migrating from Dropbox to Syncthing. The structure of the new file synchronization flow was simple: one always-online node and a bunch of consumers. It was fine at first, but after a couple of months I noticed an increasing number of updated-file notifications for files that had not been modified on any device I was aware of. Another strange thing: tools whose dotfiles were in the synced folder started complaining about missing config files and resetting to their initial state.</p>

<p>At first I blamed the Android app and excluded it from the flow. I had no luck, and the mess with my dotfiles continued. Nodes were often unsynchronized, and the logs showed a lot of deleted/created statuses for files that nobody touched. A file could be deleted and reappear again in a matter of hours.
After days of troubleshooting, I located the source of the issue: my main node setup. The Syncthing process on that node was dockerized, and the mounted sync folder was an NFS mount. Changing it to SMB had no effect.</p>

<p>So my current solution is a structural change. Now each node syncs with all the others, with no master, and each node uses its own local filesystem. Works like a charm.</p>
]]></content:encoded>
      <guid>https://gem.org.ru/syncthing-and-nfs</guid>
      <pubDate>Fri, 26 Nov 2021 10:30:07 +0000</pubDate>
    </item>
    <item>
      <title>After a vacation</title>
      <link>https://gem.org.ru/after-a-vacancy</link>
      <description>&lt;![CDATA[I&#39;ve spent almost two weeks without a laptop and with very restricted mobile internet. When I sat down at the keyboard again, the excitement wasn&#39;t at the level it used to be, not even close. Maybe I&#39;m just tired after a more or less hardcore trailing experience.&#xA;&#xA;That place is a nice tool for writing, by the way. It could be a neat way to draft post ideas and tidy up formatting. But such short posts shouldn&#39;t exist as blog posts; they fit the tweet format, which is not my intention.]]&gt;</description>
      <content:encoded><![CDATA[<p>I&#39;ve spent almost two weeks without a laptop and with very restricted mobile internet. When I sat down at the keyboard again, the excitement wasn&#39;t at the level it used to be, not even close. Maybe I&#39;m just tired after a more or less hardcore trailing experience.</p>

<p>That place is a nice tool for writing, by the way. It could be a neat way to draft post ideas and tidy up formatting. But such short posts shouldn&#39;t exist as blog posts; they fit the tweet format, which is not my intention.</p>
]]></content:encoded>
      <guid>https://gem.org.ru/after-a-vacancy</guid>
      <pubDate>Fri, 14 May 2021 12:14:48 +0000</pubDate>
    </item>
  </channel>
</rss>