<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Dev&Ops]]></title><description><![CDATA[Personal blog of Andreas Mosti. Thoughts, ideas and crazy hacks from my life as a developer and operations guy.]]></description><link>http://blog.amosti.net/</link><generator>Ghost 0.11</generator><lastBuildDate>Sun, 01 Mar 2026 09:25:21 GMT</lastBuildDate><atom:link href="http://blog.amosti.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Scared of Normal]]></title><description><![CDATA[<p>A topic that's been on my mind lately is the idea of simplicity. In a world that often values complexity and sophistication, simplicity can easily be overlooked or undervalued. Be warned — here come some rambling thoughts.</p>

<p>I own a bike. It’s not an expensive bike, but in terms of</p>]]></description><link>http://blog.amosti.net/scared-of-normal/</link><guid isPermaLink="false">32b486de-eb33-44c9-90e4-8df796448603</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sat, 28 Feb 2026 14:15:52 GMT</pubDate><content:encoded><![CDATA[<p>A topic that's been on my mind lately is the idea of simplicity. In a world that often values complexity and sophistication, simplicity can easily be overlooked or undervalued. Be warned — here come some rambling thoughts.</p>

<p>I own a bike. It’s not an expensive bike, but in terms of functionality I genuinely think it’s the pinnacle of human mobility and freedom. It takes up little space, gets me around quickly, keeps me fit, and runs on calories — arguably the most abundant fuel on the planet. It’s a simple machine, yet hard to imagine life without. Put on a pair of studded tires and you can ride it 365 days a year, in almost any weather. The one mechanical service I couldn’t fix myself cost me 500 kroner.</p>

<p><img src="https://i.imgur.com/xf0uf9F.jpeg" alt="The bike"></p>

<p>A human tendency I’ve become increasingly aware of is what I like to call “comfort creep,” or hedonic adaptation. The idea is simple: once we get used to a certain level of comfort, that level quietly becomes the baseline. Going below it doesn’t feel neutral — it feels like loss. We start taking comfort for granted, and worse, we begin tying it to our identity and sense of self-worth. Suddenly even small sacrifices — like not choosing the “Pro” model of the new iPhone — feel like a subtle status drop. And just like that, we justify spending money on things we don’t really need, all to preserve a slightly inflated sense of well-being. Once other people start noticing or commenting on your nice things, the whole “keeping up with the Joneses” mechanism kicks in, and the spiral of comparison accelerates.</p>

<p>Things get even more interesting when you bring in René Girard’s theory of “mimetic desire.” The core idea is unsettlingly simple: our desires are not entirely our own. We learn what to want by watching what others want. We don’t just desire objects — we desire them because someone else has marked them as desirable. That sets off a subtle but powerful cycle of imitation, comparison, and consumption. Your next-door neighbor goes to a five-star restaurant, and suddenly you find yourself craving the same experience.</p>

<p>It helps explain why celebrities and influencers have such an impact on consumer behavior. They don’t just advertise products; they model desire itself. Before we know it, we’re not buying something because we need it, but because it fits into a hierarchy of wanting we never consciously signed up for.</p>

<p>But back to the story.</p>

<p>Mindful of comfort creep, I’ve never felt the urge to buy an expensive car. My first car was a 2000 Honda Civic hatchback. Japanese reliability, almost no features. It did one thing well: got me from A to B. In 2018 my girlfriend crashed it right before we were going on holiday, and we got a rental while it was being repaired — a 2015 Ford Fiesta. Modest by any standard, but compared to the Civic it felt like stepping into the future. Heated seats. CarPlay. Civilization.</p>

<p>That small jump in comfort felt enormous. And that’s the point. Even minor upgrades recalibrate our baseline. What once felt luxurious becomes normal almost overnight. Keeping the bar low is, in a way, a hack: if your baseline is modest, even small improvements feel like abundance.</p>

<p>The same thing happened with phones. My 2020 iPhone SE — the cheapest model — served me faithfully for six years, until newer iOS updates slowed it down to near unusable levels. I had an old iPhone 12 Pro lying around and switched. On paper, a big upgrade. In practice? Within weeks it just felt normal.</p>

<p>A couple of years ago I read the book "Jakten på den grønne lykken" by Bjørn Stærk. He argues that we are born as evolved apes, but increasingly act like giants — maximizing our resources and capabilities, taking up more space, consuming far beyond our needs. We’ve become extraordinarily comfortable, but in the process we’ve created problems we never had before.</p>

<p>Me being privileged enough to ride my bike year-round feels like a quiet protest against that. It’s a reminder that we can get by with less — and that sometimes less really is more.</p>

<p>What scares me isn’t luxury. It’s how quickly it becomes normal.</p>]]></content:encoded></item><item><title><![CDATA[How to Remember What You Know (Without Becoming a Full-Time Clerk)]]></title><description><![CDATA[<h2 id="waityoutakenotes">Wait, you take notes?</h2>

<p>Over the years I have read a lot of articles and blog posts on note-taking and knowledge management. I'm an avid reader, and one of my first <a href="https://blog.amosti.net/how-i-read/">blog posts</a> was about how I organize my reading lists. That post is 11 years old now, and I</p>]]></description><link>http://blog.amosti.net/how-to-remember-what-you-know-without-becoming-a-full-time-clerk/</link><guid isPermaLink="false">e865d468-1cbf-44b0-84d1-d5bafa0d4dea</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Mon, 26 Jan 2026 18:50:02 GMT</pubDate><content:encoded><![CDATA[<h2 id="waityoutakenotes">Wait, you take notes?</h2>

<p>Over the years I have read a lot of articles and blog posts on note-taking and knowledge management. I'm an avid reader, and one of my first <a href="https://blog.amosti.net/how-i-read/">blog posts</a> was about how I organize my reading lists. That post is 11 years old now, and I like to think I've grown a bit since then (intellectually, that is), but I can confirm that the Trello board described in that old mind-fart of a post is still in use today.</p>

<p>One thing that keeps popping up is the importance of having a system to capture and organize what you learn. Without such a system, knowledge is easily forgotten or lost in the chaos of everyday life. Over the years I have tried various ways of capturing notes from the books I read. It might be true that the best books you read will change the way you think and act in the world without you writing a single note or quote down, but let's be real: those books are few and far between. Most of the time, if you want to remember what you read, you have to write <em>something</em> down and return to it later to refresh your memory.</p>

<h3 id="handwritinggreatformemoryterribleforretrieval">Handwriting: Great for Memory, Terrible for Retrieval</h3>

<p>The first couple of years I used good old notebooks extensively, and to some degree still do. The science is quite clear here: if you want to remember something, writing it down by hand is the best way to do it. The problem with notebooks—apart from being hard to organize and search through later—is that reviewing a book you just read becomes a time-consuming exercise in hunting for squiggly lines and pencil highlights, then transcribing them into a notebook, only to never look at them again. Don't get me wrong, I had lots of nice dates with myself at cafes, writing down notes from books I had just read, but the return on investment was low—and there is frankly not enough time in the day for that (throw in a love for endurance sports and there you have it).</p>

<h2 id="thekillersystem">The killer "system"</h2>

<p>It's no surprise then that I have gravitated towards digital book knowledge systems over the years, but instead of shoehorning my notes into a generic note-taking app like [fill in your favorite app here that will be gone in 2 years], I have built my own simple solution around Markdown files stored in a Git repository, plus a few small extraction scripts. Throw in a tiny backend for serving them, and there we are. It's simple, it's flexible, it's plain text, and it will outlive most apps out there. The basic building block is this Markdown template:</p>

<pre><code class="language-markdown"># Book Title

&lt;![]()&gt;

### Metadata

- Author:
- Full Title:
- Category: #books

### Highlights

-
</code></pre>

<p>That's the target template. About 99% of the source material comes from four main sources:</p>

<ul>
<li>Amazon Kindle</li>
<li>Apple Books</li>
<li>Physical books</li>
<li>Manual transcription directly into Markdown files</li>
</ul>

<p>Kindle highlights are by far the easiest to capture.</p>

<p><img src="https://i.imgur.com/BckEjhc.png" alt="Kindle highlights"></p>

<p>Whenever I highlight something on my Kindle device or in the Kindle app, it is automatically synced to my Amazon account. When I'm done reading, I send the notes to myself via email using the "Export Notes" feature in the Kindle device or app. The exported notes come in a simple HTML format that is easy to parse. I have a small <a href="https://github.com/andmos/ReadingList-Data/blob/master/ExtractBookNotes.csx">script</a> that extracts the highlights and formats them into the Markdown template above.</p>

<p>A similar process is used for Apple Books highlights, although the export step is a bit more manual.</p>

<p><img src="https://i.imgur.com/gdNxV0W.png" alt="Apple Books highlights"></p>

<p>The trick is to select all highlights and tap "Share". The easiest approach is to share them to yourself via email to get a standardized format. Again, a small <a href="https://github.com/andmos/ReadingList-Data/blob/master/ExtractAppleBooksNotes.csx">script</a> does the same job as for the Kindle notes.</p>

<p>The last piece of the puzzle is physical books. This is how I read most books, and for a long time manual transcription was the only way to capture notes from them. Thankfully Apple solved this for me in iOS 15 with the introduction of Live Text in photos:</p>

<p><img src="https://i.imgur.com/b16KXes.jpeg" alt="Live Text in iOS"></p>

<p>Now I just open the camera app on my phone, select the text in the book, and copy and paste it into the Markdown format I keep in the Notes app. From there it's just formatting and adding metadata. Voilà—note captured.</p>

<h2 id="randomwisdomondemand">Random Wisdom on Demand</h2>

<p>All my book notes are stored in a public <a href="https://github.com/andmos/ReadingList-Data/tree/master/BookNotes">GitHub repository</a>. This makes it easy to search through them later, and if I want to look something up quickly, the GitHub Markdown viewer does a great job.</p>
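<p>For quick offline searches, a local clone plus <code>grep</code> is all it takes. A couple of hypothetical queries (the search terms are just examples, nothing special about the repository):</p>

<pre><code class="language-bash"># List every book note that mentions "habit" (case-insensitive)
grep -ril "habit" BookNotes/

# Show matching highlights with line numbers and one line of context
grep -rin -A 1 "compound interest" BookNotes/
</code></pre>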

<p>What I've found to be the real killer feature is the random lookup script I wrote a while back. It's a simple backend that keeps all the notes in memory, with an API endpoint that picks a random highlight from the entire collection and shows it to me. Hit refresh and a new note is selected. It's a great way to be reminded of what I've read over time—and with habitual use it might just help me internalize some of the knowledge I've captured. Or how about some random wisdom to start a new terminal session? <code>curl</code> and <code>bash</code> to the rescue:</p>

<pre><code class="language-bash">randomBookNote() {  
    curl -s https://app.amosti.net/reading/api/notes/random \
        | jq -r '.note + " - " + .authors[0] + ", " + .title'
}
</code></pre>

<p><img src="https://i.imgur.com/CLJ0smf.png" alt="Random highlight"></p>

<p>The system and content are open source and available on <a href="https://github.com/andmos/ReadingList-Data">GitHub</a>.</p>

<p>The example notes from this article are taken from the following books:</p>

<ul>
<li><em>Beyond the Mountain</em> — Steve House</li>
<li><em>Training for the Uphill Athlete</em> — Steve House, Scott Johnston, and Kilian Jornet</li>
<li><em>Everybody Loves Our Town</em> — Mark Yarm</li>
<li><em>The Psychology of Money</em> — Morgan Housel</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Verify HTTP links with awesome_bot]]></title><description><![CDATA[<p>When writing articles, blog posts, technical papers, READMEs etc., it's common to reference other web pages. Your content may not move, but there is no guarantee other people's stuff won't - sites go down, links change and certificates expire. Nothing can be as frustrating as finding that 6 year old</p>]]></description><link>http://blog.amosti.net/verify-http-links-with-awesome_bot/</link><guid isPermaLink="false">58f8b40a-eb23-4168-838f-af576e6dbf03</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Mon, 29 May 2023 15:45:40 GMT</pubDate><content:encoded><![CDATA[<p>When writing articles, blog posts, technical papers, READMEs etc., it's common to reference other web pages. Your content may not move, but there is no guarantee other people's stuff won't - sites go down, links change and certificates expire. Nothing can be as frustrating as finding that 6 year old blog post with the answers to your problems, only to find it full of dead links. Thankfully you can set a good example and check your own writing for dead links with a little Ruby tool called <a href="https://github.com/dkhamsing/awesome_bot">awesome_bot</a>.</p>

<p>The <code>awesome_bot</code> name comes from the "awesome pages" that are rather popular on GitHub. If you don't know about Awesome, these are usually collections of links to "awesome" tools, resources, libraries etc. for a specific topic or ecosystem. Examples are <a href="https://github.com/quozd/awesome-dotnet">awesome-dotnet</a>, <a href="https://github.com/enaqx/awesome-react">awesome-react</a>, <a href="https://github.com/amusi/awesome-ai-awesomeness">awesome-ai-awesomeness</a> or <a href="https://github.com/ligurio/awesome-ci">awesome-ci</a> - the list is endless. Needless to say, these pages contain lots and lots of links, hence the need to verify that all those links are still alive.</p>

<p>If you are into Ruby and gems, <code>awesome_bot</code> can be installed with <code>gem</code>:</p>

<pre><code class="language-shell">gem install awesome_bot  
</code></pre>

<p>I prefer the container approach, and the <a href="https://github.com/dkhamsing/awesome_bot#docker-examples">dkhamsing/awesome_bot image can be used</a>.</p>

<p>An example execution of <code>awesome_bot</code> might look like this:</p>

<pre><code class="language-shell">$ docker run --rm -v $(pwd):/mnt -t andmos/awesome-bot -f **/*.csv --allow-redirect --allow 429
&gt; Checking links in misc/Roasteries.csv
&gt; Will allow errors: 429
&gt; Will allow redirects
Links to check: 14  
  01. https://www.kaffebrenneriet.no/
  02. https://www.timwendelboe.no/
  03. https://sh.no/
  04. https://www.kaffa.no/
  05. https://www.srw.no/
  06. https://www.fjellbrent.no/
  07. https://jacobsensvart.no/
  08. https://www.facebook.com/stormkaffe
  09. https://www.pala.no/
  10. https://www.langorakaffe.no/
  11. https://inderoy.coffee/
  12. https://bonneribyen.no/
  13. https://www.facebook.com/brentkaffe/
  14. https://senjaroasters.com/
Checking URLs: ✓✓✓✓✓✓✓→✓✓✓✓✓✓  
No issues :-)  
</code></pre>

<p>In this example we check links from CSV files and use the flags <code>--allow-redirect</code> to, well, allow redirects (redirects are reported as errors otherwise) and <code>--allow 429</code> to whitelist the "Too many requests" status code.</p>

<p>If something is off with a link, like a 404, <code>awesome_bot</code> will exit with a non-zero exit code and list the issues in the report:</p>

<pre><code class="language-shell">$ docker run --rm -v $(pwd):/mnt andmos/awesome-bot -f *.md
&gt; Checking links in README.md
Links to check: 1  
  1. https://www.an.no/some/dead/link
Checking URLs: x

Issues :-(  
&gt; Links
  1. [L1] 404 https://www.an.no/some/dead/link
&gt; Dupes
  None ✓

Wrote results to ab-results-README.md.json  
Wrote filtered results to ab-results-README.md-filtered.json  
Wrote markdown table results to ab-results-README.md-markdown-table.json  
</code></pre>

<p><code>awesome_bot</code> can be scheduled to run with your favorite CI system - here is a GitHub Actions example:</p>

<pre><code class="language-yaml">name: Verify Links  
on:  
  pull_request:
  workflow_dispatch:
  schedule:
    - cron:  '0 13 * * 1'
jobs:  
  Awesome-bot:
    name: Run Awesome-bot
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Verify Links
        run: |
          docker run --rm -v $(pwd):/mnt andmos/awesome-bot -f *.md --allow-redirect --allow 429 --allow-ssl --white-list "nasdaq.com,researchgate.net"
</code></pre>]]></content:encoded></item><item><title><![CDATA[Github Actions and Terraform, revisited]]></title><description><![CDATA[<p>This week we finally got to do some Terraform work again on a new project.</p>

<p>The setup is simple: we have some Azure resources we want to create, so there is no way around Infrastructure as Code and, by our choice, the battle-proven Terraform from Hashicorp.</p>

<p>Since the code lives</p>]]></description><link>http://blog.amosti.net/github-actions-and-terraform-revisited/</link><guid isPermaLink="false">3b7df2ec-432a-4ed4-b747-62c12cd21fc2</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Thu, 05 Jan 2023 19:07:52 GMT</pubDate><content:encoded><![CDATA[<p>This week we finally got to do some Terraform work again on a new project.</p>

<p>The setup is simple: we have some Azure resources we want to create, so there is no way around Infrastructure as Code and, by our choice, the battle-proven Terraform from Hashicorp.</p>

<p>Since the code lives up on Github, we want to automate the steps involved with the Terraform code and work with a standard pull request workflow. <br>
This includes:</p>

<ul>
<li>Lint and format the code.</li>
<li>Run validation.</li>
<li>Create the plan to be executed.</li>
<li>Post the changes as a pull request comment and review the changes.</li>
<li>If all is good and approved, apply the changes on merge.</li>
</ul>

<p>or visualized:</p>

<p><img src="https://content.hashicorp.com/api/assets?product=tutorials&amp;version=main&amp;asset=public%2Fimg%2Fterraform%2Fautomation%2Fpr-master-gh-actions-workflow.png" alt=""></p>

<p>Thankfully this is not a new subject to explore. There are tons of examples on how to work with Terraform and Github Actions, including <a href="https://developer.hashicorp.com/terraform/tutorials/automation/github-actions">an excellent post from Hashicorp</a> on how to write this workflow.</p>

<p>Upon reading this post and writing out the code, we <em>nearly</em> got what we wanted.</p>

<p>The generated pull request post with the plan itself looks like this with the standard example:</p>

<p><img src="https://user-images.githubusercontent.com/1283556/210856729-bb4f080d-8a96-4e88-aeba-dafefb82506d.png" alt=""></p>

<p>There were a couple of things we didn't fancy here. First off, the plan itself is hidden away behind the drop-down menu, making it easy to miss if you're not careful - and you should be - Terraform is unforgiving if resources get deleted by mistake.</p>

<p>Second, the plan has no color highlighting, making it a bit hard to read. It appears just as a wall of text where details might be missed.</p>

<p>And third, the comment itself should contain <em>a brief summary of what is actually going to happen</em> without having to read the full plan. I'm talking about this part:</p>

<pre><code class="language-shell">Plan: 0 to add, 1 to change, 1 to destroy.  
</code></pre>
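<p>Pulling that single line out of a textual plan is trivial. A minimal sketch, assuming the plan has been saved to a file called <code>tfplan.out</code> (the name is just a placeholder):</p>

<pre><code class="language-shell"># Render the saved plan as plain text and keep only the summary line
terraform show -no-color tfplan.out | grep -E '^Plan:'
</code></pre>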

<p>After some more searching we found <a href="https://blog.testdouble.com/posts/2021-12-07-elevate-your-terraform-workflow-with-github-actions/">another excellent post by Andrew Walker</a> who introduces enhanced formatting of the <code>terraform plan</code> output with the use of <code>diff</code>. <br>
<code>diff</code> is a nice syntax for color-highlighting changes in files, just like Github's pull request view. In a nutshell, <code>diff</code> uses the following color coding:</p>

<p><img src="https://cdn-blog.testdouble.com/img/elevate-your-terraform-workflow-with-github-actions/diff-colors.98276d25d0b6c7af1821d6304e6f79d296a61f79b59860d63f98eb3a88f6d3d2.png" alt=""></p>

<p>Great to highlight what will be <em>created</em>, <em>deleted</em> or <em>changed</em> when a <code>terraform plan</code> is executed.</p>

<p>After some inspiration from our friend Andrew and some more tinkering, we came out with an output we were quite happy with:</p>

<p><img src="https://user-images.githubusercontent.com/1283556/210859359-591133fa-c13c-4794-82ba-50bf01aed519.png" alt=""></p>

<p>The overview itself is much cleaner, with less clutter. The checks are still there, but shown as emojis instead of text. The summary is also at the top of the comment.</p>

<p>The plan itself shows like this:</p>

<p><img src="https://user-images.githubusercontent.com/1283556/210972701-f4ad59ea-6a1c-4161-aa9f-4c1dc0b4e19b.png" alt=""></p>

<p>The observant Terraform author will notice something here. The symbol for "update in place" should be <code>~</code>, not <code>!</code>. This is a little trick we did to get the orange color for in-place changes, not just additions and destroyed resources.</p>
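<p>For the curious, the massaging itself boils down to a couple of <code>sed</code> passes. This is a simplified sketch of the idea, not the exact step from our workflow: move the change marker to the first column so GitHub's <code>diff</code> renderer picks it up, and rewrite <code>~</code> to <code>!</code> to get the orange highlight for in-place changes:</p>

<pre><code class="language-shell"># tfplan.out is a placeholder for the saved plan file
terraform show -no-color tfplan.out \
    | sed -E 's/^([[:space:]]+)([-+~])/\2\1/' \
    | sed 's/^~/!/' &gt; plan.diff
</code></pre>

<p>The resulting <code>plan.diff</code> is then wrapped in a fenced <code>diff</code> code block in the pull request comment.</p>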

<p>So far we are quite happy with this layout.</p>

<p>The following gist contains the workflow. Hopefully it can serve as inspiration for others down the road.</p>

<script src="https://gist.github.com/andmos/e349f693d04a27e8630dc358b4a36f24.js"></script>]]></content:encoded></item><item><title><![CDATA[Extend your automated tests to the Shell]]></title><description><![CDATA[<blockquote>
  <p>As last year, this is my <a href="https://www.bekk.christmas/post/2022/7/extend-your-automated-tests-to-the-shell">contribution</a> to the <a href="https://www.bekk.christmas/">advent calendar</a> we do over at Bekk. Hope you like it!</p>
</blockquote>

<h2 id="testingtesting">Testing, testing</h2>

<blockquote>
  <p>"Testing shows the presence, not the absence of bugs." - <a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1969.PDF">Edsger W. Dijkstra</a></p>
</blockquote>

<p>As professional software developers, writing automated tests has become second nature to what we do.</p>]]></description><link>http://blog.amosti.net/extend-your-automated-tests-to-the-shell/</link><guid isPermaLink="false">79859cb3-e22b-4b98-b10c-66514e626353</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Wed, 07 Dec 2022 09:46:00 GMT</pubDate><content:encoded><![CDATA[<blockquote>
  <p>As last year, this is my <a href="https://www.bekk.christmas/post/2022/7/extend-your-automated-tests-to-the-shell">contribution</a> to the <a href="https://www.bekk.christmas/">advent calendar</a> we do over at Bekk. Hope you like it!</p>
</blockquote>

<h2 id="testingtesting">Testing, testing</h2>

<blockquote>
  <p>"Testing shows the presence, not the absence of bugs." - <a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1969.PDF">Edsger W. Dijkstra</a></p>
</blockquote>

<p>As professional software developers, writing automated tests has become second nature to what we do. The exact number of tests, the amount of code coverage, unit vs. integration vs. integrated tests etc. may still be up for debate, but the industry as a whole seems to have settled on the practice of automated tests as a way to ensure the viability of a codebase over time. The <a href="http://www.extremeprogramming.org">Extreme Programming</a> methodology of the late 1990s even went as far as stating <a href="http://www.extremeprogramming.org/rules/unittests.html">"Code without tests may not be released"</a>. <br>
Even with the general consensus among developers that automated testing is something <em>we should do</em>, it's still an open choice: one can actively choose <em>not to write tests</em> as well.  </p>

<p><a href="https://www.davefarley.net/?p=278">Dave Farley wrote a blog post some years ago</a> about how we should use techniques from classical science and engineering and apply them to software development to make the field "a real engineering field, not just a pretend-to-be", and to archive this, testing is essential. To be frank, in how many other engineering fields is testing of the things created optional?</p>

<p>Used correctly, automated testing is a great way to apply the essence of science: trying to falsify our hypotheses about the code.</p>

<p>Every programming language or ecosystem that's worth taking seriously has a testing library, or sometimes whole frameworks created for them. For C# and dotnet <a href="https://github.com/xunit/xunit">xunit</a> has become somewhat of a standard, in Java and JVM land <a href="https://junit.org/junit5/">JUnit</a> has been around for a long time, and for JavaScript, both frontend and backend, testing with <a href="https://jestjs.io/">Jest</a> has seen a lot of traction.</p>

<p>Writing our unit, integration or integrated tests in the same language as the production code and keeping the tests close is the de facto standard.</p>

<p>But aren't we forgetting something? Our systems do not stop at the application code level. To be able to build and deploy our code, chances are we have some build scripts and deployment pipelines, or some good old "glue scripts" keeping it all together. For much of this heavy lifting, most of us still depend on good old shell scripts, most notably written for the old workhorse Bourne Again Shell, or Bash.</p>

<p>If anything, chances are slim that Bash is going away anytime soon, and scripts written for the shell deserve tests of their own. <br>
The shell is also often a forgotten environment to test systems from in its own right. There are lots of CLIs that can be used to test different aspects of our systems, and there is no rule proclaiming that integration or load tests need to be written or hosted in the same codebase as our application.</p>

<p>It's time to introduce shell testing with <code>bats</code>.</p>

<h2 id="introducingshelltestingwithbats">Introducing shell testing with Bats</h2>

<p><code>bats</code> is a <a href="http://testanything.org/">TAP, or "Test Anything Protocol"</a> compliant testing framework for Bash. It provides a simple way to verify that the *NIX programs you write behave as expected.
The initial public release of <code>bats</code> was <a href="https://github.com/sstephenson/bats">done back in 2011 by Sam Stephenson</a>, but the project was archived and put in a read-only state in 2016. As of 2017, the current actively maintained <a href="https://github.com/bats-core/bats-core">fork, known as bats-core</a> has been looked after by the bats-core organization, and the project is still under active development.</p>
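<p>If you want to follow along, <code>bats-core</code> can be installed from a package manager or straight from the Git repository. For example (assuming macOS with Homebrew, or any system with Git available):</p>

<pre><code class="language-sh"># macOS with Homebrew
brew install bats-core

# Or install from source into /usr/local
git clone https://github.com/bats-core/bats-core.git
cd bats-core
./install.sh /usr/local
</code></pre>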

<p>To show how writing tests with <code>bats</code> works, let's write some black-box tests verifying a REST API with <code>curl</code>. <br>
Our "system under test", or "SUT", is <a href="https://swapi.dev/">The Star Wars API</a>. This will show how <code>bats</code> works, and demonstrate one of its use cases.</p>

<p>We begin with a test to make sure Luke Skywalker is the first entity of the <code>/people</code> endpoint. <br>
As we will see, tests are set up much like in most other testing frameworks, here following the "arrange, act, assert" pattern.</p>

<pre><code class="language-sh">#!/usr/bin/env bats

@test "GET: /people first person entity is Luke Skywalker" { # The first line contains the test-annotation and a name / explanation of the function to be tested.
    #Arrange
    local EXPECTED_PERSON="Luke Skywalker"
    local ACTUAL_PERSON
    #Act
    ACTUAL_PERSON="$(curl -s https://swapi.dev/api/people/1/ |jq '.name' --raw-output)" # Here we do our call to the Star Wars API with curl and parse the JSON with jq
    # Assert
    [ "${ACTUAL_PERSON}" == "${EXPECTED_PERSON}" ] # The assert is a normal Bash test, where we check the actual person against what we expect.
}
</code></pre>

<p>To run the test, use the <code>bats</code> command:</p>

<pre><code class="language-sh">$ bats tests/star-wars-api.bats 
star-wars-api.bats  
 ✓ GET: /people first person entity is Luke Skywalker

1 tests, 0 failures  
</code></pre>

<p>We can expand on this and check if the <code>/planets</code> endpoint contains Naboo:</p>

<pre><code class="language-sh">@test "GET: /planets contains 'Naboo'" {
    local EXPECTED_PLANET="Naboo"
    local ACTUAL_PLANET 

    ACTUAL_PLANET="$(curl -s https://swapi.dev/api/planets/ | EXPECTED_PLANET="$EXPECTED_PLANET" jq '.results[] | select(.name == env.EXPECTED_PLANET).name' --raw-output)"

    [ "${ACTUAL_PLANET}" == "${EXPECTED_PLANET}" ]
}
</code></pre>

<p>The two tests are now run together:</p>

<pre><code class="language-sh">$ bats tests/star-wars-api.bats 
star-wars-api.bats  
 ✓ GET: /people first person entity is Luke Skywalker
 ✓ GET: /planets contains 'Naboo'

2 tests, 0 failures  
</code></pre>

<p>Great. Those examples show the overview of <code>bats</code>.  For more ideas on how to use <code>bats</code> for integration tests, <a href="https://zachholman.com/">Zach Holman</a> has a great <a href="https://zachholman.com/posts/integration-tests">blog post with that topic in mind</a>.</p>

<p>Next, let's spice it up with some Test Driven Development (TDD) on some actual shell scripts.</p>

<h2 id="usingtddinourshellscriptsforfunandprofit">Using TDD in our shell scripts for fun and profit</h2>

<p>Let's begin simple: we want a function to extract the author from a Markdown file containing book notes, but we want the job to exit if no argument (i.e. a file) is given. This being TDD, we want a failing test describing the logic we want before we actually write some code.</p>

<p>In this example, we will see the keyword <code>run</code>, which is used to run a function, as well as the <code>$status</code> and <code>$lines</code> variables provided by <code>bats</code>. <a href="https://bats-core.readthedocs.io/en/stable/writing-tests.html#">The overview of bats variables can be found here</a>.</p>

<pre><code class="language-sh">#!/usr/bin/env bats

function setup(){  
    source ./extract-booknotes.sh # The setup method sources in the script we want to test.
}

function teardown(){  
  echo "$output" # The teardown method echoes the "$output" variable from bats itself. This helps with debugging failing tests.
}

@test "extract_author no argument" { # The first line contains the test-annotation and a name / explanation of the function to be tested.
    run get_author  # The `run` keyword comes from bats and is for triggering functions or script.

    [ "${status}" -eq 1 ] # The $status variable contains the return code from the functions or scripts being tested, 
    [ "${lines[0]}" == "Missing argument file" ] # While the $lines variable is an array of strings from the functions or script we want to test.
}
</code></pre>

<p>When run against a script with an empty function body (which is in fact a Bash syntax error), this test will (not surprisingly) fail:</p>

<pre><code class="language-sh">#!/usr/bin/env bash

function get_author(){

}
</code></pre>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✗ extract_author no argument
   (from function `setup' in test file tests/extract-booknotes.bats, line 4)
     `source ./extract-booknotes.sh ' failed with status 2
   ./extract-booknotes.sh: line 9: syntax error near unexpected token `}'


1 test, 1 failure  
</code></pre>

<p>And if we fill in the code:</p>

<pre><code class="language-sh">#!/usr/bin/env bash

function get_author(){  
    local FILE="$1"
    if [[ -z $FILE ]]; then 
        echo "Missing argument file"
        exit 1
    fi
}
</code></pre>

<p>And run the tests again:</p>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✓ extract_author no argument

1 test, 0 failures  
</code></pre>

<p>We get some green tests.</p>

<p>Cool. Now for the functionality. We want to extract the author from the markdown notes file:</p>

<pre><code class="language-md">### Metadata

- Author: Kilian Jornet
- Full Title: Above the Clouds: How I Carved My Own Path to the Top of the World
- Category: #books
</code></pre>

<p>Again, let's start with the test:</p>

<pre><code class="language-sh">@test "extract_author with 'Above The Clouds' file as argument" {
    run get_author BookNotes/Above.The.Clouds.md

    [ "${status}" -eq 0 ]
    [ "${lines[0]}" == "Kilian Jornet" ]
}
</code></pre>

<p>And that test will fail since we haven't updated our code yet:</p>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✓ extract_author no argument
 ✗ extract_author with 'Above The Clouds' file as argument
   (in test file tests/extract-booknotes.bats, line 20)
     `[ "${lines[0]}" == "Kilian Jornet" ]' failed


2 tests, 1 failure  
</code></pre>

<p>Cool. Now let's do some regex-matching, an operation that's good to have tests for:</p>

<pre><code class="language-sh">function get_author(){  
    local FILE="$1"
    if [[ -z $FILE ]]; then 
        echo "Missing argument file"
        exit 1
    fi
    local AUTHOR
    AUTHOR=$(grep -oP '(?&lt;=Author:\s)(\w+).*' "$FILE") # &lt;--- This one right here. I'm not comfortable with regex spread around, so a test for this is quite nice.
    echo "$AUTHOR"
}
</code></pre>

<blockquote>
  <p>Note: For this piece of code, the GNU version of <code>grep</code> is used. It might not work as expected on macOS with BSD <code>grep</code>.</p>
</blockquote>
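<p>If you need the script to run on stock macOS, one workaround (a sketch, not the code from the accompanying repository) is to drop the Perl-compatible regex and let <code>sed</code> do the matching instead; this works with both GNU and BSD tools:</p>

<pre><code class="language-sh"># Print whatever follows "- Author: " in the metadata section
AUTHOR=$(sed -n 's/^- Author:[[:space:]]*//p' "$FILE")
</code></pre>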

<p>With this piece of code in place, the test should be green:</p>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✓ extract_author no argument
 ✓ extract_author with 'Above The Clouds' file as argument

2 tests, 0 failures  
</code></pre>

<p>And indeed it is.</p>

<p>Don't believe the tests? Let's try the method ourselves:</p>

<pre><code class="language-sh">$ source extract-booknotes.sh
$ get_author BookNotes/Above.The.Clouds.md 
Kilian Jornet  
</code></pre>

<p>Don't you know it, it works! Tests don't lie.</p>

<p>How about the title? We have the author, but not the title? That's kind of backwards. <br>
Let's begin by specifying how the function should look, again from our tests:</p>

<pre><code class="language-sh">@test "extract_title no argument" {
    run get_title

    [ "${status}" -eq 1 ]
    [ "${lines[0]}" == "Missing argument file" ]
}
</code></pre>

<p>Same as before: no file argument should give the "Missing argument file" output and exit code 1. No code written yet, no green test.</p>

<p>With that in place we can focus on extracting the title from the file. As always, the test first:</p>

<pre><code class="language-sh">@test "extract_title with 'Above The Clouds' file as argument" {
    run get_title BookNotes/Above.The.Clouds.md

    [ "${status}" -eq 0 ]
    [ "${lines[0]}" == "Above the Clouds: How I Carved My Own Path to the Top of the World" ]
}
</code></pre>

<p>How do you think running this test will go? That's a no. We don't have any code yet.</p>

<p>Let's break out that regex to match the title from the markdown:</p>

<pre><code class="language-sh">function get_title(){  
    local FILE="$1"
    if [[ -z $FILE ]]; then 
        echo "Missing argument file"
        exit 1
    fi
    local TITLE
    TITLE=$(grep -oP '(?&lt;=Full Title:\s)(\w+).*' "$FILE")
    echo "$TITLE"
}
</code></pre>

<p>With this line in place, the tests go green:</p>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✓ extract_author no argument
 ✓ extract_author with 'Above The Clouds' file as argument
 ✓ extract_title no argument
 ✓ extract_title with 'Above The Clouds' file as argument

4 tests, 0 failures  
</code></pre>

<p>NICE. But wait. What about refactoring? Isn't that a part of TDD?</p>

<p>Yes indeed. The observant reader may have noticed that the two functions have something in common: both take a file name as their argument, and both do the parsing with regular expressions. If no argument is given, they return an error exit code and the message "Missing argument file".</p>

<p>Let's begin by extracting the input validation from the <code>get_author</code> and <code>get_title</code> functions and call it <code>_file_exists</code>. This function will be more generalized and standalone. As always, the function's behavior is first described with tests, which makes for great documentation of the function.</p>

<pre><code class="language-sh"># The behavior should be the same for no arguments, exitcode 1 and 'Missing Argument file' as output:
@test "_file_exists with no argument" {
    run _file_exists

    [ "${status}" -eq 1 ]
    [ "${lines[0]}" == "Missing argument file" ]
}

# Since the function validating the input file now can be extracted from the book notes parser, it can check the script itself:
@test "_file_exists with 'extract-booknotes.sh' file as argument" {
    run _file_exists extract-booknotes.sh

    [ "${status}" -eq 0 ]
    [ "${lines[0]}" == "extract-booknotes.sh" ]
}
</code></pre>

<p>Then we can extract the input check directly from the functions:</p>

<pre><code class="language-sh">function _file_exists(){  
    local FILE="$1"
    if [[ -z $FILE ]]; then 
        echo "Missing argument file"
        return 1
    fi
    echo "$FILE"
}
</code></pre>

<p>Here we could also extend the <code>_file_exists</code> code with a branch handling the name of a non-existent file, but to stay strictly in line with TDD we should only add enough production code for the tests to pass, which we have done with the extracted snippet.</p>
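<p>For the curious, such an extension would follow the exact same rhythm: first a failing test describing the new behavior, then just enough code to make it pass. A sketch of what it could look like (not part of the accompanying repository):</p>

<pre><code class="language-sh"># Test first: an argument pointing to a non-existent file should also fail.
@test "_file_exists with non-existing file as argument" {
    run _file_exists no-such-file.md

    [ "${status}" -eq 1 ]
    [ "${lines[0]}" == "File not found: no-such-file.md" ]
}
</code></pre>

<pre><code class="language-sh"># Then the extra branch in the function:
function _file_exists(){  
    local FILE="$1"
    if [[ -z $FILE ]]; then 
        echo "Missing argument file"
        return 1
    fi
    if [[ ! -f $FILE ]]; then
        echo "File not found: $FILE"
        return 1
    fi
    echo "$FILE"
}
</code></pre>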

<p>The refactored <code>get_author</code> function now looks like this:</p>

<pre><code class="language-sh">function get_author(){  
    FILE=$(_file_exists "$1") # &lt;--- Here we run the input through the validation function
    if [ $? -eq 1 ]; then  # &lt;--- And then check the return code from the function
        echo "$FILE" # &lt;--- Before we return the output from the validation if it fails
        exit 1 # &lt;--- Then, exit.
    fi
    local AUTHOR
    AUTHOR=$(grep -oP '(?&lt;=Author:\s)(\w+).*' "$FILE")
    echo "$AUTHOR"
}
</code></pre>

<p>How about the tests?</p>

<pre><code class="language-sh">$ bats tests/extract-booknotes.bats 
extract-booknotes.bats  
 ✓ extract_author no argument
 ✓ extract_author with 'Above The Clouds' file as argument
 ✓ extract_title no argument
 ✓ extract_title with 'Above The Clouds' file as argument
 ✓ _file_exists with no argument
 ✓ _file_exists with 'extract-booknotes.sh' file as argument

6 tests, 0 failures  
</code></pre>

<p>Did the code get any better or clearer? It is more general, with a better abstraction, and thanks to the tests the refactoring could be done safely, with guardrails in place.</p>

<h2 id="conclusion">Conclusion</h2>

<p>In this post we have seen how our shell scripts can also have automated tests, thanks to <code>bats</code>. <br>
Hopefully this will motivate you to extend your thinking about tests to also include the pieces outside the application code.</p>

<p>As a colleague of mine said: "Every piece of code in the repository that is necessary to bring value to the customer is critical, and should thus also be tested. If we think we have a bug in the deployment pipeline, we don't guess. We test and prove."</p>

<p>All code examples from this post can be found <a href="https://github.com/andmos/bats-examples">in this Git repository</a>.</p>]]></content:encoded></item><item><title><![CDATA[Backing up API data to GitHub with Flat-Data]]></title><description><![CDATA[<p>For the last 10 years or so I have used Trello as my preferred service to keep track of my reading.</p>

<p>I <a href="https://blog.amosti.net/how-i-read/">wrote a blog post</a> about this setup and my technique with regards to reading, which, upon re-reading, I see that both my technique for reading and writing-skills has</p>]]></description><link>http://blog.amosti.net/backing-up-api-data-to-github-with-flat-data/</link><guid isPermaLink="false">c58ea838-0ddb-4628-8334-ac1c62716b62</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Mon, 17 Jan 2022 19:16:18 GMT</pubDate><content:encoded><![CDATA[<p>For the last 10 years or so I have used Trello as my preferred service to keep track of my reading.</p>

<p>I <a href="https://blog.amosti.net/how-i-read/">wrote a blog post</a> about this setup and my technique with regards to reading, which, upon re-reading, I see that both my technique for reading and writing-skills has improve, so that's that.</p>

<p>In 2017 <a href="https://techcrunch.com/2017/01/09/atlassian-acquires-trello/?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAB4EB-u4iDYnVbIaNgQ5Zc3vh2a4oO12KL7tXvnrIAjHpVo76z6XqXP_F2I7-TDusMGfPvzMr21IRMIyUbenmxznHq_Ap4RcAzuG0yRLxve7Kc8naDdpLEgavPQkU7wuxfOIet-bTDAHjct5eB8DZujMtQiVKaZ2JSq1ji7v7Ypy">Atlassian acquired Trello</a>, and safe to say, thing are starting to become more "enterprisey" over there.</p>

<p>I figured I won't be staying with Trello forever, and the amount of sunk cost I have compounded in my simple reading list is astonishing. After some grooming, the backlog currently has 242 items and my done list has 450 entries. </p>

<p>What can I say, I like to read.</p>

<p>Some years ago I began my humble planning for moving away from Trello by writing a <a href="https://github.com/andmos/ReadingList">wrapper API service</a> for the reading list, <br>
using the Trello API itself as the repository layer, so the switch to some other service could be easier. Maybe this API will one day act on top of something like SQLite or Firebase, who knows. To make sure I always had a copy of the data from Trello, I wrote a small bash script with some <code>curl</code> calls and set up a cronjob. Now, cronjobs are messy: I need to remember that I have it running, correct it if it runs into some error, and - after all - back up the data that comes out of it.</p>
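<p>That old setup looked roughly like the sketch below (the board id, key, token and paths are placeholders, not the actual script):</p>

<pre><code class="language-sh">#!/usr/bin/env bash
# backup-readinglist.sh - dump all cards on the Trello board to a dated JSON file
# Run weekly from cron, e.g.: 0 18 * * 1 ~/bin/backup-readinglist.sh
curl -s "https://api.trello.com/1/boards/MY_BOARD_ID/cards?key=$TRELLO_KEY&amp;token=$TRELLO_TOKEN" \
    -o "$HOME/backups/readinglist-$(date +%F).json"
</code></pre>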

<p>Searching for a better solution than bash and cronjobs, the next step in this pet project of mine came in February 2021, when GitHub released a cool project called <a href="https://next.github.com/projects/flat-data">Flat-Data</a>.</p>

<p>In the project's own words:</p>

<blockquote>
  <p>Flat Data is a GitHub action which makes it easy to fetch data and commit it to your repository as flatfiles. The action is intended to be run on a schedule, retrieving data from any supported target and creating a commit if there is any change to the fetched data.</p>
</blockquote>

<p>So in a nutshell, Flat-Data allows us to set up a simple GitHub Action, point it at some data source (like an API or SQL database), query the data, run some processing on it, and have it checked in to Git. A great tool for data scientists I would <br>
imagine, or for a guy who just wants to back up his reading list from an over-engineered API layer on top of Trello.</p>

<p>It's easy to get started. All we need is a repository and a GitHub Action. <br>
My repo is <a href="https://github.com/andmos/ReadingList-Data">ReadingList-Data</a> and contains a few files: <code>backlog.json</code>, <code>done.json</code>, <code>reading.json</code> and <code>postprocess.js</code>.</p>

<p>The GitHub Action itself using Flat-Data looks like this:</p>

<script src="https://gist.github.com/andmos/00fe8179aac87a805e2d4d12a749058e.js"></script>

<p>Every Monday afternoon this action runs, fetches the current status from the reading list API, runs it through the post-processor <br>
script (a simple Deno script that only formats the JSON) and creates commits with the changes. It could not be simpler.</p>

<p>As a bonus, the commit history works as a great timeline showing when a book was added to the backlog or when I finished reading it. </p>

<p><img src="https://user-images.githubusercontent.com/1283556/149825223-9894be37-ff14-4788-92dd-e3eb654e06cd.png" alt="Commit-history"></p>

<p>This is a quite simple and banal use case for Flat-Data, but hopefully it will inspire someone out there.</p>]]></content:encoded></item><item><title><![CDATA[Deterministic systems with Nix]]></title><description><![CDATA[<blockquote>
  <p>This is a cross-post of <a href="https://www.bekk.christmas/post/2021/13/deterministic-systems-with-nix">my contribution</a> to this year's <a href="https://www.bekk.christmas/">advent calendar</a> we do over at Bekk. Hope you like it!</p>
</blockquote>

<h2 id="introduction">Introduction</h2>

<p>Setting up reliable environments for our software is tricky. <br>
The task has kept developers and sysadmins up at night for decades. Making environments and packages truly reproducible and</p>]]></description><link>http://blog.amosti.net/deterministic-systems-with-nix/</link><guid isPermaLink="false">4c869f73-1644-4fb4-b3d7-3c618f088230</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Thu, 16 Dec 2021 19:01:24 GMT</pubDate><content:encoded><![CDATA[<blockquote>
  <p>This is a cross-post of <a href="https://www.bekk.christmas/post/2021/13/deterministic-systems-with-nix">my contribution</a> to this year's <a href="https://www.bekk.christmas/">advent calendar</a> we do over at Bekk. Hope you like it!</p>
</blockquote>

<h2 id="introduction">Introduction</h2>

<p>Setting up reliable environments for our software is tricky. <br>
The task has kept developers and sysadmins up at night for decades. Making environments and packages truly reproducible and reliable for more than a few weeks before regression sets in is surely no easy task. In this post, we'll see how we can set up truly deterministic, reproducible and even ephemeral environments with the help of a clever set of tools called Nix, so we can sleep better, knowing our systems can be installed literally from scratch and be guaranteed the same binary packages down to the lowest dependencies.</p>

<h2 id="whatsinaname">What's in a name?</h2>

<p>Let's face it, "Nix" has a quite ambiguous name that can reference a lot of things, so first, let's get the naming out of the way.</p>

<p>When people hear "Nix", they might think about "*nix", or the commonly spoken variant (without the asterix) "nix", the short name the industry has adopted for systems based on good old UNIX. Linux is "nix", macOS is "nix", BSD is "nix" - and in a way, "Nix" is also... well, "nix." Confused? Yeah.</p>

<p>Nix, in our context, refers to three things: the <a href="https://nixos.wiki/wiki/Nix_Expression_Language">Nix Expression Language</a>, a pure, lazy, functional language. This language makes up the foundational building blocks of the Nix package manager, which can be installed on any "*nix" system (<a href="https://nixos.org/manual/nix/stable/quick-start.html">like Linux or macOS</a>) or as its own unique Linux distro, <a href="https://nixos.org/">NixOS</a>. So a language, a package manager and even a distro. What's this all about?</p>

<h2 id="whatmakesnixsospecial">What makes Nix so special?</h2>

<p>With the naming out of the way, what makes Nix so special? What does it have to offer that <code>apt</code>, <code>yum</code>, or <code>brew</code> don't have?</p>

<p>First off, it's cross-platform. The Nix Package Manager <a href="https://nixos.org/manual/nix/unstable/installation/supported-platforms.html">can run on the most common Linux systems, as well as macOS</a>, but that is true for many package managers these days, and is not its main advantage.</p>

<p>What makes Nix special is how it manages packages and dependencies. Nix guarantees reproducible packages, which means that all steps involved in building a package can be run again and again with the same outcome, and if any of the variables in the dependency chain change (all the way down to low-level packages like <code>libc</code>), it will result in a new version of this package, that can be installed side-by-side with the old version. This is possible thanks to the nature of the functional Nix language. From the docs:</p>

<blockquote>
  <p>Nix is a purely functional package manager. This means that it treats packages like values in purely functional programming languages such as Haskell — they are built by functions that don’t have side-effects, and they never change after they have been built. </p>
</blockquote>

<p>Nix stores packages in the Nix store, usually the directory <code>/nix/store</code>, where each package has its own unique subdirectory such as</p>

<pre><code class="language-sh">/nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1
</code></pre>

<p>where <code>b6gvzjyb2pg0…</code> is a unique identifier for the package that captures all its dependencies (it’s a cryptographic hash of the package’s build dependency graph). This enables many powerful features.</p>

<p>In more practical terms, this is accomplished by generating hash values of all dependencies going <em>in</em> to the package build, as well as the <em>outcome</em> of the build itself.</p>
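<p>If you have Nix installed you can poke at this yourself: every store path knows its full dependency closure. Using the documentation example path from above (any installed store path will do):</p>

<pre><code class="language-sh"># Print every store path the package depends on (its closure)
nix-store --query --requisites /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1

# Or render the same information as a dependency tree
nix-store --query --tree /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1
</code></pre>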

<h2 id="kickingthetires">Kicking the tires</h2>

<p>Let's look at an example package called <code>hello</code>. <br>
The Nix-script responsible for building the package can be found in the <a href="https://github.com/NixOS/nixpkgs">nixpkgs-github repo</a> <br>
(all packages are essentially built and installed from these scripts) and it looks like this:</p>

<script src="https://gist.github.com/andmos/9c56554310a6a1dd653d997bcfeae943.js"></script>

<p>We begin by installing it to the user's environment:</p>

<script src="https://gist.github.com/andmos/c1d48189a5ad662c59bbf25c54f9bb53.js"></script>

<p>As we can see, a lot of things were required for a simple program that prints out <code>Hello, World!</code>.</p>

<p>One might look at this list and think "Hey, I see curl on there - curl is already installed on my machine, why do I need it again, and won't multiple versions of the same package wreak havoc on my machine?"</p>

<p>To address the first comment, this neat little trick is what makes Nix-packages self-contained and immune to what else might be installed on the system. <br>
Other package managers, like <code>apt</code> or <code>brew</code>, are often heavily dependent on there being <em>one</em> version of a package or its transitive dependencies. <br>
This is why installing packages on different systems can lead to quite different results, and why a package update can break a system. <br>
With Nix, all package dependencies come bundled and are stored in their own hashed directories. <br>
The model Nix follows is that every transitive dependency must be defined in a Nix-expression that can be built itself, thus supplying a dependency chain of "build instructions" all the way to the lowest parts. <br>
If one lower-level dependency changes, the main package can not be seen as the exact same version as we had before, and will be installed side-by-side with the old version, completely isolated. <br>
This nifty feature is why Nix and NixOS have become favorites among developers and system administrators alike: it makes for highly robust and deterministic systems that are easy to update or roll back.</p>

<p>To address the concern of multiple versions of <code>curl</code>, let's take a look at what we have on our <code>PATH</code> after the install of <code>hello</code>:</p>

<pre><code class="language-sh">$ which curl
/usr/bin/curl
</code></pre>

<p><code>curl</code> being a transitive dependency of <code>hello</code> does not place it on our <code>PATH</code>; the version of <code>curl</code> provided by macOS is still in place.</p>

<p>If we install <code>curl</code> as a top level package, the story would be different:</p>

<script src="https://gist.github.com/andmos/19dd36c37fa4b3afa2c942bb5a9e8f5b.js"></script>

<p>To keep track of which version of a Nix package is currently being used, Nix leverages symlinks:</p>

<script src="https://gist.github.com/andmos/0e5c437602621d098c0dcb7c62a06602.js"></script>

<p>If we regret installing <code>curl</code> via Nix or something broke, Nix keeps track of the user's environment in <br>
<a href="https://nixos.wiki/wiki/NixOS#Generations">Generations</a>, making it easy to roll back the system:</p>

<script src="https://gist.github.com/andmos/0361e12c6b59dd874450370052556350.js"></script>

<h2 id="creatingreproducibledevelopmentenvironmentswithnixshell">Creating reproducible development environments with nix-shell</h2>

<p>Another powerful tool provided with Nix is <code>nix-shell</code>. Software teams have always struggled with the famous "works on my machine" syndrome, where a build or piece of code works as expected on one machine, but not on another. <br>
Creating reproducible development environments has been the holy grail for many, and tools like <a href="https://www.packer.io/">Packer</a> and <a href="https://www.vagrantup.com/">Vagrant</a> take the virtual machine route to solve this, by building VM images that can have tools pre-installed or installed via provisioning systems like <a href="https://www.vagrantup.com/docs/provisioning/ansible">Ansible</a>.</p>

<p>Another way to solve this is with containers, typically with <a href="https://www.docker.com/">Docker</a> and <a href="https://docs.docker.com/compose/">Docker-Compose</a>. <br>
Both virtual machines and container technology have pros and cons, but the major drawback is that it is quite hard to make truly reproducible environments. <br>
Both are great for freezing a setup in time (like a VM image or a container image), but are hardly deterministic. <br>
A badly written <code>Dockerfile</code> can produce different results when built on two different systems.  </p>

<blockquote>
  <p>As a side note, it is possible to build reproducible and small Docker images <a href="https://nix.dev/tutorials/building-and-running-docker-images">with Nix</a>.</p>
</blockquote>

<p><code>nix-shell</code> on the other hand leverages the power of Nix to build local, reproducible, isolated and ephemeral shell-environments.</p>

<p>Let's say we want  <code>python3</code> but don't want to install it user/system-wide. It is possible to use <code>nix-shell</code> to provide an on-demand shell with just <code>python3</code>:</p>

<script src="https://gist.github.com/andmos/c89cfa43a073fd6c263307ac0279e7f9.js"></script>

<p>As we can see, no <code>python3</code> package was installed on the system, but with <code>nix-shell</code> we are able to download the package with all dependencies and make it available in a local nix-shell. <br>
When exiting the shell, no version of <code>python3</code> is available on the <code>PATH</code>.</p>

<p>With the Nix language, it is possible to write declarations for these shells that can be shared among the development team. <br>
Let's say the team is maintaining a Java application, deployed on Kubernetes, and wants a setup that just works™ on all systems:</p>

<script src="https://gist.github.com/andmos/d6c853be08f78def1e6241bc9470aff5.js"></script>

<p>This script can be stored in the root of the Java-project and added to version control. When a developer wants the environment, a simple command will provide it:</p>

<script src="https://gist.github.com/andmos/0d76eda18d21d0f502958f464fe861e4.js"></script>

<p>As we can see, the selected packages and dependencies are all installed.</p>

<p>To clean up old <code>nix-shell</code> sessions, we can simply run  </p>

<pre><code class="language-sh">$ nix-collect-garbage
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>This post has been a brief intro to Nix and what it can provide in terms of reproducible, isolated systems. It is possible to do so much more than just install pre-built packages. I would recommend <a href="https://shopify.engineering/shipit-presents-how-shopify-uses-nix">How Shopify Uses Nix</a> for further inspiration on how to build and deliver software using Nix, as well as checking out the <a href="https://github.com/nix-community/home-manager">home-manager</a> project for managing user environments.</p>

<p>Interested in building your first Nix package? See the excellent <a href="https://nix-tutorial.gitlabpages.inria.fr/nix-tutorial/first-package.html">nix-tutorials</a> website and start hacking!</p>]]></content:encoded></item><item><title><![CDATA[Containerize FluentMigrator for effortless db migrations]]></title><description><![CDATA[<h2 id="continuingthecontainerization">Continuing the containerization</h2>

<p>Last year I wrote about how to set up <a href="https://blog.amosti.net/local-reverse-proxy-with-nginx-mkcert-and-docker-compose/">a local reverse proxy with nginx and mkcert via Docker-Compose</a>. <br>
Being able to spin up a local, production-like reverse proxy to use while developing is great, but why stop there?</p>

<p>Sooner or later the need for a database</p>]]></description><link>http://blog.amosti.net/containerize-fluentmigrator-for-effortless-db-migrations/</link><guid isPermaLink="false">84db819b-9d5d-4a01-aba2-8e49a3132de7</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sun, 21 Feb 2021 11:50:17 GMT</pubDate><content:encoded><![CDATA[<h2 id="continuingthecontainerization">Continuing the containerization</h2>

<p>Last year I wrote about how to set up <a href="https://blog.amosti.net/local-reverse-proxy-with-nginx-mkcert-and-docker-compose/">a local reverse proxy with nginx and mkcert via Docker-Compose</a>. <br>
Being able to spin up a local, production-like reverse proxy to use while developing is great, but why stop there?</p>

<p>Sooner or later the need for a database to store the application's data will emerge, and with any data structure comes the need for change - adding, updating or deleting elements of the structure. In other words, the need for <em>migrations</em>.</p>

<p>Back in the day, a common practice for a team relying on a database (depending on the complexity of the application and the maturity of the team) was to share a single database instance for development. Setting up a local database can be tricky, and up until 2017 Microsoft's <a href="https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-server-on-linux/">SQL Server was only available on the Windows platform</a>, requiring a VM for local development for *nix users. The single, shared-instance strategy is also quite limiting for teams working in parallel on tasks requiring database and/or application code changes. A developer testing a database change on a branch can easily break the main branch when a single instance is used.</p>

<p>In 2017 Microsoft released SQL Server 2017 (and now 2019) with Linux support, and with it, thankfully, <a href="https://hub.docker.com/_/microsoft-mssql-server">Docker support</a>.</p>

<p>A clean instance of MS SQL 2019 can be added to a <code>docker-compose</code> setup as easily as this:</p>

<script src="https://gist.github.com/andmos/ec3838d72ea0e6137e9798f267ee59b4.js"></script>

<p>With <code>docker-compose</code>, every developer on a team can have their own version of the database.</p>
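<p>Spinning up and tearing down a private instance then becomes a couple of commands (assuming the compose service is named <code>db</code>, as later in this post):</p>

<pre><code class="language-shell">$ docker-compose up -d db    # start a private SQL Server instance in the background
$ docker-compose down -v     # tear it down again, including the data volume
</code></pre>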

<h2 id="runningmigrationswithfluentmigrator">Running migrations with FluentMigrator</h2>

<p>The next step is bootstrapping the structure of the database. For .NET, <a href="https://docs.microsoft.com/en-us/ef/">Entity Framework</a> or <a href="https://fluentmigrator.github.io/">FluentMigrator</a> are popular choices. Let's focus on FluentMigrator.</p>

<p>A common pattern for handling migrations is creating a dedicated .NET <code>csproj</code> file where the migrations live. Let's call it <code>MyApp.Migrations</code>. <br>
After <a href="https://fluentmigrator.github.io/articles/quickstart.html?tabs=runner-in-process">writing some migrations</a>, the easiest way of running them is via the FluentMigrator <a href="https://fluentmigrator.github.io/articles/runners/dotnet-fm.html">dotnet tool dotnet-fm</a>. With <code>dotnet-fm</code>, running migrations is as easy as</p>

<pre><code class="language-sh">dotnet-fm migrate --processor SqlServer2016 --assembly MyApp.Migrations.dll --connection "Data Source=myConnectionString"`  
</code></pre>

<h2 id="bootstrappingthedatabasewithdockercompose">Bootstrapping the database with Docker-Compose</h2>

<p>Now we have a <code>docker-compose</code> containing an MS SQL instance, and we have a <code>csproj</code> file containing some database migrations. To save us from having to run the migrations manually, the process of bootstrapping the development environment can be automated by containerizing the process running the migrations. For this, we create a <code>Dockerfile</code> that compiles the migration project, grabs the <code>dotnet-fm</code> tool for running FluentMigrator and wraps it up with an <code>entrypoint</code> for running.</p>

<p>The <code>Dockerfile</code>:</p>

<script src="https://gist.github.com/andmos/b33e2f07b6b1ceb8b9e6e6bfe074f5d6.js"></script>

<p>Some things to notice here.</p>

<p>The FluentMigrator library (installed with NuGet) and the <code>dotnet-fm</code> tool need to be the same version, so the <code>sed</code> command on line 7 grabs the version-string from the <code>csproj</code> file and uses it to install the correct version of the <code>dotnet-fm</code> tool on line 9.</p>

<p>On line 12 a little shell script called <a href="https://github.com/eficode/wait-for"><code>wait-for</code></a> is cloned. It is used to wrap the execution of <code>dotnet-fm</code> and <em>wait</em> for the database to become available. This is a neat trick to handle the timing issues that can occur when running via <code>docker-compose</code>, where the migrations can be executed before the database is ready.</p>
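<p>Conceptually, the entrypoint ends up looking something like this (a sketch; the real host, port and connection string live in the gist above, and <code>$CONNECTION_STRING</code> is a placeholder):</p>

<pre><code class="language-shell">./wait-for db:1433 -- dotnet-fm migrate \
    --processor SqlServer2016 \
    --assembly MyApp.Migrations.dll \
    --connection "$CONNECTION_STRING"
</code></pre>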

<p>The <code>Dockerfile</code> is multi-stage, so the compiled library, the <code>dotnet-fm</code> binary and the <code>wait-for</code> script are copied to a <code>dotnet</code> runtime image. The <code>netcat</code> package installed on line 17 is a runtime dependency for <code>wait-for</code>.</p>

<p>With the <code>Dockerfile</code> in place, the final <code>docker-compose.yaml</code> file:</p>

<script src="https://gist.github.com/andmos/cc5d63023d68cdfad5de953fcdc22c78.js"></script>

<p>The whole thing spins up with <code>docker-compose up</code>. When <code>db</code> is ready, the migrations run and bootstrap the database.</p>]]></content:encoded></item><item><title><![CDATA[Cross post: Take Argo CD for a spin with K3s and k3d]]></title><description><![CDATA[<p>For the second year in a row, my current employer <a href="https://www.bekk.no/">Bekk</a> has set out on an ambitious December journey:</p>

<p><a href="https://bekk.christmas/">Bekk Christmas</a>, 264 tech articles in 24 days, each day of the advent calendar.</p>

<p>One of the categories was <em>thecloud.christmas</em>, so it felt natural to me to contribute here.</p>

<p>So</p>]]></description><link>http://blog.amosti.net/cross-post-take-argo-cd-for-a-spin-with-k3s-and-k3d/</link><guid isPermaLink="false">b8420786-72f6-4963-96e1-375a15e3f8a6</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sun, 20 Dec 2020 11:33:43 GMT</pubDate><content:encoded><![CDATA[<p>For the second year in a row, my current employer <a href="https://www.bekk.no/">Bekk</a> has set out on an ambitious December journey:</p>

<p><a href="https://bekk.christmas/">Bekk Christmas</a>, 264 tech articles in 24 days, each day of the advent calendar.</p>

<p>One of the categories was <em>thecloud.christmas</em>, so it felt natural to me to contribute here.</p>

<p>So here it is, from the 13th of December, my post about Kubernetes, ArgoCD, K3s and k3d. Enjoy <a href="https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d">Take Argo CD for a spin with K3s <br>
and k3d</a>!</p>]]></content:encoded></item><item><title><![CDATA[Local reverse-proxy with Nginx, mkcert and Docker-Compose]]></title><description><![CDATA[<h2 id="goodpracticesfromthetwelvefactorapp">Good practices from the Twelve-Factor app</h2>

<p>When developing modern web applications or services, the <a href="https://12factor.net/port-binding">Twelve-factor app</a> taught us that our service</p>

<blockquote>
  <p>is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a</p></blockquote>]]></description><link>http://blog.amosti.net/local-reverse-proxy-with-nginx-mkcert-and-docker-compose/</link><guid isPermaLink="false">f369dc9d-7e66-4773-9aa1-55bed4d422a9</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Fri, 10 Apr 2020 11:43:29 GMT</pubDate><content:encoded><![CDATA[<h2 id="goodpracticesfromthetwelvefactorapp">Good practices from the Twelve-Factor app</h2>

<p>When developing modern web applications or services, the <a href="https://12factor.net/port-binding">Twelve-factor app</a> taught us that our service</p>

<blockquote>
  <p>is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.</p>
</blockquote>

<p>What this means is that our apps written with modern frameworks (like <a href="https://docs.microsoft.com/en-us/aspnet/core/?view=aspnetcore-3.1">ASP.NET Core</a>) should provide their own web servers, exposing an HTTP port, and not require anything in front for hosting, like <code>IIS</code> or Apache <code>HTTPD</code>. For local development, you should be able to run the app without any third-party hosting components, and the app should be reachable on, for example, <code>http://localhost:5000/</code>.</p>
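<p>In other words, something along these lines should be all it takes (a sketch; the project path is hypothetical and the port depends on the app's Kestrel configuration):</p>

<pre><code class="language-shell">$ dotnet run --project src/MyService   # Kestrel starts and binds a local HTTP port
$ curl http://localhost:5000/          # the app answers over plain HTTP
</code></pre>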

<p>Now hosting the app <em>directly</em> in this way in a production setting is something you <em>don't want</em> to do for obvious reasons: the infrastructure layer of the app would grow thick, the developer would have to code in all sorts of hardening, routing etc., and, not to mention the security concerns, that self-hosted HTTP server would be exposed all on its own as an attack vector. When running in production, a component suitable for the <a href="https://en.wikipedia.org/wiki/Reverse_proxy">reverse-proxy</a> role should be responsible for binding a public-facing hostname to the app(s), as well as doing HTTPS termination - the app itself should focus on what it does best, the business logic (this is why it exists in the first place), while a component like <a href="https://www.nginx.com/">nginx</a> or <a href="http://www.haproxy.org/">HAProxy</a> handles hostname binding, HTTPS and load-balancing of incoming requests.</p>

<p>Modern platforms like <a href="https://kubernetes.io/">Kubernetes</a> or <a href="https://www.openshift.com/">OpenShift</a> offer <em>routes</em> that give the app an externally reachable hostname, load-balance the application when it runs on different nodes, and provide HTTPS termination up front. For small solutions that don't need a container orchestrator, plain old nginx in front works great.</p>

<p>All modern applications <em>should</em> be hosted with SSL and HTTPS. Thanks to projects like <a href="https://letsencrypt.org/">Let's Encrypt</a>, trusted SSL certificates can be obtained for free, and the world is now, slowly but surely, moving to HTTPS as the default. This does not mean that our application's first meeting with HTTPS should be in a staging or production environment; it should also be possible to develop and test locally with HTTPS as the default. Thanks to modern tools, running a local reverse-proxy with a valid HTTPS certificate is quite straightforward.</p>

<h2 id="localreverseproxywithssltermination">Local reverse-proxy with SSL termination</h2>

<p>Let's say we have a single application, <code>MyService</code>, that is written with ASP.NET, running with <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel?view=aspnetcore-3.1">Kestrel</a>. The app has no code for HTTPS redirects and knows nothing about any SSL certificate or setup; it only talks HTTP on port 80. The application is destined for a life in a container orchestrator of some kind, so it has a <code>Dockerfile</code>. To be able to run the <code>MyService</code> application via HTTPS in an environment <em>similar but not equal to</em>, let's say, Kubernetes, we need to run it behind a reverse-proxy when testing locally. We also need some sort of SSL certificate. Earlier in the post I mentioned Let's Encrypt, which offers free certificates, but to be able to leverage it, a public-facing hostname is needed. For local development, a self-signed certificate is plenty. Now the road down self-signed certificates can be quite dirty and lead to many half-working solutions and "not trusted" warnings in the browser. One easy solution is using a great tool called <a href="https://github.com/FiloSottile/mkcert">mkcert</a>. <code>mkcert</code> is a simple CLI that registers a trusted CA on your machine, both in the local certificate store and in all installed browsers, and can generate certificates from this CA.</p>

<p>To install <code>mkcert</code> with <code>brew</code>:</p>

<p><code>$ brew install mkcert</code></p>

<p>Then, install the <code>mkcert</code> CA and generate a certificate:</p>

<script src="https://gist.github.com/andmos/7fae6b63942f0c27f65cd1fd5dc9e47d.js"></script>
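<p>The gist boils down to two <code>mkcert</code> invocations (a sketch; the generated file names depend on the hostnames you pass in):</p>

<pre><code class="language-shell">$ mkcert -install      # create a local CA and add it to the system and browser trust stores
$ mkcert localhost     # issue a certificate + key for localhost, signed by that CA
</code></pre>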

<p>Please note, this CA and the certificates generated from it are for <em>local</em> purposes only.</p>

<p>Next up, let's configure <code>nginx</code> to work as a reverse-proxy with SSL termination:</p>

<script src="https://gist.github.com/andmos/76fed99e90c2370eab3abcfd316d604e.js"></script>

<p>This configuration will tell <code>nginx</code> to listen on <code>localhost</code>, port <code>5000</code>, with the generated certificate from <code>mkcert</code>. Requests to <code>/</code> are then forwarded to the app, which listens on plain old HTTP on port <code>80</code>.</p>

<p>The whole thing is then tied together with <code>docker-compose</code>:</p>

<script src="https://gist.github.com/andmos/b09aeb7bdef0e0d991140e199f41ea6f.js"></script>

<p>Now run the whole thing with</p>

<pre><code class="language-shell">$ docker-compose up
</code></pre>

<p>Navigating to <code>https://localhost:5000/</code> reveals a nice HTTPS symbol:</p>

<p><img src="https://i.imgur.com/Yo2Jqgt.png" style="zoom:50%;"></p>
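<p>The same can be sanity-checked from the terminal; whether <code>curl</code> picks up the <code>mkcert</code> CA depends on how it was built, but with the system trust store in play the handshake should succeed without <code>-k</code>:</p>

<pre><code class="language-shell">$ curl -v https://localhost:5000/   # the verbose output shows the certificate chain from the local mkcert CA
</code></pre>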

<h2 id="note">Note</h2>

<p>Unless <code>nginx</code> is used as the reverse-proxy in the live environment, this solution will not be <em>exactly</em> at parity with staging or production, but the mechanisms and practices should be similar. As a rule of thumb, the Twelve-Factor app <a href="https://12factor.net/dev-prod-parity">talks about the importance of dev/prod parity</a>.</p>]]></content:encoded></item><item><title><![CDATA[Automate Docker base image updates with Watchtower]]></title><description><![CDATA[<p>If you host some simple hobby services with plain old Docker, chances are high that you have been thinking about how to automate the deployment process. If the services are small enough and you host them on your own servers or VMs, going to the PaaS cloud or introducing Kubernetes</p>]]></description><link>http://blog.amosti.net/automate-docker-base-image-updates-with-watchtower/</link><guid isPermaLink="false">48e6b568-63f1-434d-8518-6a78915cc32c</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sun, 03 Nov 2019 09:43:52 GMT</pubDate><content:encoded><![CDATA[<p>If you host some simple hobby services with plain old Docker, chances are high that you have been thinking about how to automate the deployment process. If the services are small enough and you host them on your own servers or VMs, going to the PaaS cloud or introducing Kubernetes with a sophisticated CI/CD pipeline is, in most cases, total overkill.</p>

<p>Why invest more time in setting up the complicated hosting and scheduling platform than it took to write that 500-line single-container web service?</p>

<p>Don't fear, <a href="https://containrrr.github.io/watchtower/">watchtower</a> is here.</p>

<p>Watchtower is a single-process container that runs on your system and <em>polls</em> a container registry (private or public, like Dockerhub) at given intervals to check for new versions of the base image on the running container(s) you want to update. If it detects a new image, Watchtower stores the parameters used to start the running container, like startup arguments and environment variables, pulls down the new image, stops the running container, and starts it up again, with the same parameters, but with the new image. Easy as that. This simplifies the deployment process, and the only automation needed is a CI pipeline that builds the service's container image and pushes it to the registry when code is committed. Here is an example from my simple <a href="https://github.com/andmos/ReadingList">ReadingList</a> API:</p>

<p>In <code>.travis.yml</code>:</p>

<script src="https://gist.github.com/andmos/4783be0dda67cd8e74d598ef92c6006b.js"></script>

<p>On push to master, if the build and test steps are successful, the image is tagged as <code>latest</code> and pushed to DockerHub. Nothing more to it.</p>
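<p>Under the hood that pipeline presumably boils down to the usual two Docker commands (a sketch; the image name matches the repository):</p>

<pre><code class="language-shell">$ docker build -t andmos/readinglist:latest .
$ docker push andmos/readinglist:latest
</code></pre>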

<p>Now on my private <a href="https://welcome.linode.com/">Linode</a> server the ReadingList API has been started <em>once</em> with the correct env-variables, so it's running as intended. Then Watchtower comes in:</p>

<pre><code class="language-shell">docker run -d --name watchtower  -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower readinglist  
</code></pre>

<p>Watchtower is now running and has access to <code>docker.sock</code> to be able to start and stop containers.</p>
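<p>By default Watchtower polls at a fixed interval; if a different cadence is wanted, it can be tuned with the <code>--interval</code> flag (in seconds) - a sketch:</p>

<pre><code class="language-shell">docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 300 readinglist   # check for a new image every 5 minutes
</code></pre>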

<p>When a change to ReadingList is merged and a new image is pushed, Watchtower kicks in and replaces the running container with a new, updated version:</p>

<pre><code class="language-shell">docker logs watchtower  
time="2019-10-30T13:06:39Z" level=info msg="Found new andmos/readinglist:latest image (sha256:803aa566d2dd53f3ec774406f6bd8e20cb4e926006cdc1012dc663e206fbc9dc)"  
time="2019-10-30T13:06:40Z" level=info msg="Stopping /readinglist (45ceaaa2de930b22424438d1f2d078796feead127b0ab578c0ff8ac14dc8630e) with SIGTERM"  
time="2019-10-30T13:06:41Z" level=info msg="Creating /readinglist"  
time="2019-10-30T13:13:38Z" level=info msg="Waiting for running update to be finished..."  
</code></pre>

<p>And that's it. Be mindful, though: this approach has limitations. When managing larger multi-container systems in production, something like Kubernetes and a more thorough setup is recommended, but for the casual, single-container hobby project Watchtower works just fine.</p>]]></content:encoded></item><item><title><![CDATA[Github Actions and publishing artifacts to Azure Blob Storage]]></title><description><![CDATA[<h3 id="intro">Intro</h3>

<p><a href="https://github.com/features/actions">Github Actions</a> is a welcomed edition to the (still) growing world of CI/CD tools.
Since Actions is Github's own tool, it integrates more closely to your repo and the Github Workflow, with actions to automate tasks around issues, pull-requests, releases etc. Writing a task that regularly, say, check</p>]]></description><link>http://blog.amosti.net/github-actions-and-publishing-artifacts-to-azure-blob-storage/</link><guid isPermaLink="false">9c4507a5-3c83-4582-8749-e92f2e73c8aa</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Thu, 03 Oct 2019 16:59:13 GMT</pubDate><content:encoded><![CDATA[<h3 id="intro">Intro</h3>

<p><a href="https://github.com/features/actions">Github Actions</a> is a welcomed edition to the (still) growing world of CI/CD tools.
Since Actions is Github's own tool, it integrates more closely to your repo and the Github Workflow, with actions to automate tasks around issues, pull-requests, releases etc. Writing a task that regularly, say, check issues and mark them as stalled if it hasn't been any activity for some time has, would mean leveraging the Github API when running in other tools, while abstractions for these kinds of integrations are present directly in Github Actions. That makes their "workflow" semantics more comprehensive than just plain CI/CD capabilities.</p>

<p>Extensibility is at the core of Github Actions. All workflows consist of one or more actions, and these actions can be run natively or via containers. Referencing a third-party action is as easy as knowing the action's Github namespace.</p>

<script src="https://gist.github.com/andmos/22a0276f9288c9eb281fc49e6833a114.js"></script>

<p>In this example the <em>workflow</em> <code>Tests</code> will trigger on <code>git push</code>, run on macOS, Ubuntu and Windows, check out code with the action <a href="https://github.com/actions/checkout">actions/checkout</a>, and install Python via <a href="https://github.com/actions/setup-python">actions/setup-python</a>. These two actions are "official", hence the "actions" namespace. The next task installs the Python package manager <a href="https://poetry.eustace.io">Poetry</a> and is a third party action: <a href="https://github.com/dschep/install-poetry-action">dschep/install-poetry-action</a>. Since these actions directly reference living repositories, specifying a release or branch (<code>dschep/install-poetry-action@v1.2</code>) will save you from some unpleasant discoveries.</p>

<p>To speed up the build, these actions can be run directly from DockerHub:</p>

<script src="https://gist.github.com/andmos/1ddb8949fba768fc6373c91beab4f7a1.js"></script>

<h3 id="uscaseuploadartifactstoazureblobstorage">Uscase: Upload artifacts to Azure Blob Storage</h3>

<p>Github Actions is still in its early days, so there aren't actions for everything just yet. The other day I needed a workflow to build and publish an Electron app for macOS and Linux, with the artifacts stored in Azure Blob Storage. In Azure Blob Storage we have two containers, one for <code>dev</code> and one for <code>release</code>, so the app can be tested before it is released out to the world. Here is the workflow:</p>

<script src="https://gist.github.com/andmos/416601771109493b49aba3591e3f7c2c.js"></script>

<p>So this workflow will only trigger on push to the <code>dev</code> branch. <br>
<a href="https://github.com/actions/setup-node">actions/setup-node@master</a> installs node version <code>12.10</code>, and <code>electron-builder</code> is used to package and sign the (macOS) app. Here we also see secrets in play: <code>${{ secrets.BASE_64_CERT }}</code> holds an encrypted signing certificate.</p>
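<p>The upload step at the end of the workflow is essentially an <code>az</code> CLI call; a hypothetical equivalent (storage account and source folder names are made up here) looks like this:</p>

<pre><code class="language-shell">az storage blob upload-batch \
  --account-name mystorageaccount \
  --destination dev \
  --source dist/
</code></pre>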

<p>Finally, I could not find a suitable action for uploading to Azure Blob Storage directly, but going via <a href="https://github.com/azure/actions/">azure/actions/login</a> worked great to auth against Azure and give access to the <code>az</code> CLI. All that is needed is generating an <a href="https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest">Azure Service Principal</a> to give the workflow access to just the Blob Storage containers, store the credentials as a secret, and we are good to go. For a great intro to using dotnet core with Github Actions, check out <a href="https://hjerpbakk.com/blog/2019/10/03/asp-net-core-and-github-actions">Runar's post</a>.</p>]]></content:encoded></item><item><title><![CDATA[Running GBFS bikeshare functions with OpenFaaS for fun and profit]]></title><description><![CDATA[<h2 id="intro">Intro</h2>

<p>Micro-mobility has gotten a lot of hype over the last couple of years. In many cities all over the world rentable city bikes, cargo bikes and electrical scooters has popped out and seriously changed the way people move, and it doesn't look like you can <a href="https://companies.bnpparibasfortis.be/en/article?n=will-micro-mobility-redesign-the-smart-city">spell "smart city" without</a></p>]]></description><link>http://blog.amosti.net/running-gbfs-bikeshare-functions-with-openfaas-for-fun-and-profit/</link><guid isPermaLink="false">3d9c2fe7-0b73-41cd-a03f-39929041ee9e</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Tue, 04 Jun 2019 20:22:59 GMT</pubDate><content:encoded><![CDATA[<h2 id="intro">Intro</h2>

<p>Micro-mobility has gotten a lot of hype over the last couple of years. In many cities all over the world rentable city bikes, cargo bikes and electrical scooters have popped up and seriously changed the way people move, and it doesn't look like you can <a href="https://companies.bnpparibasfortis.be/en/article?n=will-micro-mobility-redesign-the-smart-city">spell "smart city" without micro-mobility</a>. The consulting company McKinsey estimates that the micro-mobility market will reach a <a href="https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/micromobilitys-15000-mile-checkup">value of $200 billion to $300 billion in the United States in 2030</a>. What gives micro-mobility systems an advantage is the accessibility and the digital-first approach providers have taken. Most bikeshare systems are activated via smartphones, and the bikes themselves are IoT devices producing data <a href="https://urbansharing.com/">for the providers</a> or <a href="https://trondheimbysykkel.no/en/open-data">the public</a> to explore. To make micro-mobility more useful, interoperability is important, and many standards have surfaced. One of these is the <a href="https://github.com/NABSA/gbfs">General Bikeshare Feed Specification</a> (GBFS).</p>

<h2 id="generalbikesharefeedspecificationgbfsandusecases">General Bikeshare Feed Specification (GBFS) and use cases</h2>

<p>GBFS is an open data standard for bikeshare systems developed by the <a href="http://www.nabsa.net">North American Bikeshare Association</a>. GBFS provides info about the system itself, including real-time data like available stations, capacity, available bikes and locks etc. This API can be freely used to build 3rd party systems or apps. <a href="https://github.com/andmos/BikeDashboard">How about a dashboard that shows your closest station and the local weather forecast</a>, or an <a href="https://github.com/gffny/blue-bike-skill">Amazon Echo skill to check for available bikes</a>? <br>
The <a href="https://github.com/NABSA/gbfs/blob/master/systems.csv">GBFS systems overview</a> currently contains 228 providers using GBFS.</p>

<p>A search for GBFS <a href="https://github.com/topics/gbfs">on github topics</a> shows a lot of projects integrating with the standard, including my <a href="https://github.com/andmos/BikeshareClient">GBFS Bikeshare client for dotnet</a>. This client is a great starting point for exploring several exciting concepts: serverless and function as a service.</p>

<h2 id="serverlessfunctionasaserviceandopenfaas">Serverless, function as a service and OpenFaaS</h2>

<p><a href="https://martinfowler.com/articles/serverless.html">Serverless architecture</a> comes as a result of the rising popularity of cloud computing, where providers like Google, Microsoft and Amazon have raised the abstraction level when deploying software. At the infrastructure as a service (IaaS) level you have to mange VMs, the platform as a service (PaaS) level want your binaries or containers, while the function as a service (FaaS) provider needs one thing: your code. The runtime, scalability etc. Is taken care of by the cloud vendor.</p>

<p>FaaS can be looked at as breaking up the <a href="https://martinfowler.com/articles/microservices.html">microservice pattern</a> into smaller pieces. Examples of functions can be transforming input data and storing it in a database, resizing images, handling messages on a queue or, to stay in the micro-mobility domain, checking the availability status of a bikeshare station.</p>

<p>The biggest criticism directed at serverless and FaaS is vendor lock-in. Amazon has Lambda, Microsoft has Azure Functions, and Google has Cloud Functions. Since these platforms require plain code to run, some platform specific boilerplate or project types are needed for each platform, thus not contributing to portability between vendors.</p>

<p>So vendor lock-in is one thing, but I would also like the possibility to run a serverless FaaS solution on that old VMWare cluster in the basement, the Mac mini rack or on the Raspberry Pi spotted in the wild. Luckily, <a href="https://www.openfaas.com/">OpenFaaS</a> is here to help. OpenFaaS is a framework that leverages container technology to run functions in containers on top of orchestrators like Docker Swarm and Kubernetes. This breaks the FaaS architecture free from the cloud vendors, providing the freedom to deploy serverless applications in any environment offering a container orchestrator.</p>

<p><img src="https://pbs.twimg.com/media/DFrkF4NXoAAJwN2.jpg" alt="OpenFaaS architecture">
The OpenFaaS architecture.</p>

<h2 id="babysfirstgbfsbikesharefunctions">Baby's first GBFS bikeshare functions</h2>

<p>So let's put OpenFaaS to work and build some GBFS powered bikeshare functions.</p>

<p>For development purposes, running OpenFaaS via Docker Swarm is a good approach. <br>
<a href="https://docs.openfaas.com/deployment/docker-swarm/">The installation of OpenFaaS and initialization of a Swarm cluster is straight forward</a>.</p>

<p>My preferred language is <code>C#</code> and <code>dotnet core</code>, so to write the <code>dotnet</code> function a <a href="https://docs.openfaas.com/cli/templates/#templates">template</a> is needed. <br>
Github user <a href="https://github.com/burtonr">burtonr</a> has written a nice <a href="https://github.com/burtonr/csharp-kestrel-template">dotnet template</a> with Kestrel and async support, perfect for high performance HTTP functions. To fetch the template:</p>

<pre><code class="language-shell">$ faas-cli template pull https://github.com/burtonr/csharp-kestrel-template
</code></pre>

<p>Next, let's have a look at what a typical function might look like. The <a href="https://github.com/NABSA/gbfs/blob/master/systems.csv">GBFS systems list</a> is currently in <code>CSV</code> format, but I prefer to abstract it away and offer it as <code>JSON</code> via an HTTP endpoint.</p>

<p>To create a new OpenFaaS function:</p>

<pre><code class="language-shell">$ faas-cli new --lang csharp-kestrel gbfs-systems-function
Function created in folder: gbfs-systems-function  
Stack file written: gbfs-systems-function.yml  
</code></pre>

<p>As the output shows, a folder for the function and a <code>stack</code> file are now created, next to the template.</p>

<pre><code class="language-shell">$ tree
.
├── gbfs-systems-function
│   ├── FunctionHandler.cs
│   └── FunctionHandler.csproj
├── gbfs-systems-function.yml
└── template
    └── csharp-kestrel
        ├── Dockerfile
        ├── Program.cs
        ├── Startup.cs
        ├── function
        │   ├── FunctionHandler.cs
        │   └── FunctionHandler.csproj
        ├── root.csproj
        └── template.yml
</code></pre>

<p>To write the function, <code>gbfs-systems-function/FunctionHandler.cs</code> is edited. Here is the full function:</p>

<script src="https://gist.github.com/andmos/13822f83e76cfab2b764d40f6ca884e8.js"></script>

<p>The <code>Handle</code> method is the entrypoint for the function. In this case the input is not validated; upon triggering, the function parses the <code>systems.csv</code> file and serializes it as JSON.</p>

<p>To build and deploy the function, take a look at the <code>stack</code> file:</p>

<script src="https://gist.github.com/andmos/5f8244497cf2bf36687d14d8ffdb3e7d.js"></script>

<p>Notice the <code>image:</code> tag. Since OpenFaaS runs functions in Docker containers, this is the name of the container image that is created, and is the artifact of the build.</p>

<p>To build, deploy and trigger the function:</p>

<pre><code class="language-shell">$ faas-cli build -f gbfs-systems-function.yml
$ faas-cli deploy -f gbfs-systems-function.yml

$ echo "" |faas-cli invoke gbfs-systems-function # or via Curl
$ curl -d "" localhost:8080/function/gbfs-systems-function
</code></pre>

<p>The last two commands will return <code>JSON</code> straight from the new function.</p>

<p>To push the image to <a href="https://hub.docker.com/">Docker hub</a>:</p>

<pre><code class="language-shell">$ faas-cli push -f gbfs-systems-function.yml
</code></pre>
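<p>As a side note, the build, push and deploy steps can also be combined into one command:</p>

<pre><code class="language-shell">$ faas-cli up -f gbfs-systems-function.yml
</code></pre>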

<h2 id="buildingagbfspoweredslackbot">Building a GBFS powered Slack bot</h2>

<p>So that example function was rather simple. Let's make something more useful: a Slack bot for showing available bikes and locks at bikeshare stations. Right off the bat this seems like a nice use case for multiple functions, since a function should optimally do only one thing. So for the bot we need a <code>bikeshare-function</code> that takes the name of a bikeshare system's station as input, and returns the number of available bikes and locks for that station as output.</p>

<p>Not much code needed here either:</p>

<script src="https://gist.github.com/andmos/a8116f0121bd8e75d4d3371a01642775.js"></script>

<p>To link the function to a GBFS system, the GBFS discovery URL must be exposed to the function via an environment variable in the <code>stack</code> file.</p>

<p>When deployed, the <code>bikeshare-function</code> can be invoked:</p>

<pre><code class="language-shell">$ curl -d "skansen" localhost:8080/function/bikeshare-function
{"Name":"Skansen","BikesAvailable":19,"LocksAvailable":2}
</code></pre>

<p>The next function is the Slack bot itself. The bot will trigger on mentions and call the <code>bikeshare-function</code> with a bikeshare station name. For creating Slack apps, <a href="https://api.slack.com/slack-apps">see the documentation</a>.</p>

<p>As expected, under 100 lines of code here too:</p>

<script src="https://gist.github.com/andmos/6e992d653377f7bb934ddb817cf47388.js"></script>

<p>Now there are a couple of things to notice here. <br>
For the bot to be able to call the <code>bikeshare-function</code>, it needs to know where the <a href="https://docs.openfaas.com/architecture/gateway/">OpenFaaS API gateway</a> is. When running via Docker Swarm this address defaults to <code>http://gateway:8080/</code>, but it is <a href="https://github.com/openfaas/workshop/blob/master/lab4.md#call-one-function-from-another">recommended to make this address customizable</a>, as it may vary between environments and orchestrators.</p>

<p>The next thing to notice is secrets. The bot needs an OAuth token when connecting to Slack, and this token should be kept secret. OpenFaaS <a href="https://docs.openfaas.com/reference/secrets/">integrates with Swarm and Kubernetes secrets</a>, both reachable from <code>faas-cli</code>:</p>

<pre><code class="language-shell">$ faas-cli secret create secret-api-key \
  --from-file=slackToken.txt
</code></pre>

<p>The token is then written to a file inside the container, which must be read at runtime:</p>

<pre><code class="language-csharp">var botToken = File.ReadAllText(@"/var/openfaas/secrets/bikeBotSlackToken");  
</code></pre>

<p>The final <code>stack</code> file:</p>

<script src="https://gist.github.com/andmos/b7037ab2266393737db10097366bc20f.js"></script>

<p>To initialize the Slack bot:  </p>

<pre><code class="language-shell">$ curl -d "" localhost:8080/function/bikeshare-slack-function
Bot initializing  
</code></pre>

<p>The bot is now online and can be asked for station status:</p>

<p><img width="651" alt="Skjermbilde 2019-06-04 kl  22 00 29" src="https://user-images.githubusercontent.com/1283556/58909797-4d732c00-8714-11e9-8bf6-026fe7e1dff5.png"></p>

<h2 id="conclusion">Conclusion</h2>

<p>These are exciting times for micro-mobility and the city of the future. For solutions like bikeshare systems to reach their potential and help cities become more accessible, integration with other smart city systems is almost a requirement. Thanks to open standards like GBFS and the serverless paradigm, creating new applications that leverage and combine data is only a couple of lines of code away. Frameworks like OpenFaaS help democratize serverless and FaaS, giving developers tools to run functions where and how they want.</p>

<p>Finally, all code for this post <a href="https://github.com/andmos/BikeshareFunction">can be found on Github</a>.</p>]]></content:encoded></item><item><title><![CDATA[Code Coverage for dotnet core with Coverlet, multi-stage Dockerfile and codecov.io]]></title><description><![CDATA[<h2 id="entercoverlet">Enter Coverlet</h2>

<p>The one thing I missed when moving away from full-framework and Visual Studio to VSCode and dotnet core, was simple code coverage.</p>

<p>Given the easy tooling <code>dotnet</code> provides, with <code>dotnet build</code>, <code>dotnet test</code> and <code>dotnet publish</code>, I looked for something that integrated nicely with these commands without adding</p>]]></description><link>http://blog.amosti.net/code-coverage-for-dotnet-core-with-coverlet-multistage-dockerfile-and-codecov-io/</link><guid isPermaLink="false">04fb64ee-3fee-40a8-84c9-c8e2d9279c91</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sun, 26 May 2019 13:38:51 GMT</pubDate><content:encoded><![CDATA[<h2 id="entercoverlet">Enter Coverlet</h2>

<p>The one thing I missed when moving away from full-framework and Visual Studio to VSCode and dotnet core, was simple code coverage.</p>

<p>Given the easy tooling <code>dotnet</code> provides, with <code>dotnet build</code>, <code>dotnet test</code> and <code>dotnet publish</code>, I looked for something that integrated nicely with these commands without adding too much complexity to the code project itself. After some googling, I stumbled over Scott Hanselman's <a href="https://www.hanselman.com/blog/NETCoreCodeCoverageAsAGlobalToolWithCoverlet.aspx">blogpost</a> about a cool little project called <a href="https://github.com/tonerdo/coverlet">Coverlet</a>. Coverlet was just what I was looking for:</p>

<blockquote>
  <p>Coverlet is a cross platform code coverage library for .NET Core, with support for line, branch and method coverage.</p>
</blockquote>

<p><code>coverlet</code> can be installed as a <code>dotnet tool</code> with</p>

<pre><code class="language-shell">dotnet tool install --global coverlet.console  
</code></pre>

<p>to make it globally available, providing its own <a href="https://github.com/tonerdo/coverlet#code-coverage">CLI tool that runs directly against the test assemblies</a>.</p>

<p>The strategy I have settled on is using the <code>coverlet.msbuild</code> package that can be added to your test projects with  </p>

<pre><code class="language-shell">dotnet add package coverlet.msbuild  
</code></pre>

<p>When using the <code>coverlet.msbuild</code> package, no extra setup is needed, and <code>coverlet</code> integrates directly with <code>dotnet test</code> with some extra parameters:</p>

<pre><code>dotnet test /p:CollectCoverage=true /p:Threshold=80 /p:ThresholdType=line /p:CoverletOutputFormat=opencover  
</code></pre>

<p>The key here is <code>/p:CollectCoverage=true</code>, the parameter that enables collection of code coverage. If no other option is specified, the coverage will be reported to the console when the tests are finished running:</p>

<pre><code class="language-shell">+-----------------+--------+--------+--------+
| Module          | Line   | Branch | Method |
+-----------------+--------+--------+--------+
| BikeshareClient | 93.2%  | 94.6%  | 85.7%  |
+-----------------+--------+--------+--------+
</code></pre>

<p>Now the other parameters specified in the example are <code>/p:Threshold=80</code> and <code>/p:ThresholdType=line</code>: if the code coverage drops below 80%, the build breaks, while <code>/p:CoverletOutputFormat=opencover</code> writes a report in the <a href="https://github.com/opencover/opencover/wiki/Reports">opencover</a> format.</p>

<h2 id="multistagedockerfile">Multi-stage Dockerfile</h2>

<p>For most new projects, I have found myself using a simple <code>Dockerfile</code> along with some CI/CD tool like <a href="https://travis-ci.org/">Travis</a>, <a href="https://www.appveyor.com/">AppVeyor</a> or <a href="https://azure.microsoft.com/nb-no/services/devops/pipelines/">Azure Pipelines</a>. This approach helps keep the builds simple, as large <code>Dockerfiles</code> are harder to work with. The sole purpose of <code>Docker</code> is to keep things reproducible no matter the environment it builds and runs images in, so migrating from one CI provider to another is hardly any work. Building locally will always match the result on the CI system.</p>

<p>But let's say we build using <a href="https://docs.docker.com/develop/develop-images/multistage-build/">multi-stage Dockerfiles</a>. In a multi-stage build, we separate the SDK, build and test tools in one image, while copying the resulting artifacts to another image, more suitable for production runtimes. The rule is: have a small production image containing just what is needed for running your artifacts. Just one problem: how do we take care of that <code>coverage.opencover.xml</code> file? We don't want to transfer that file to the production image just to grab hold of it; code coverage results don't belong in a production image.</p>

<p>Thankfully, <code>Docker</code> stores intermediate <em>layers</em> that can be referenced after the image is built. <br>
Here is our example multi-stage <code>Dockerfile</code>:</p>

<script src="https://gist.github.com/andmos/1ccfb13473a896f598cd51cccbe3fa4c.js"></script>

<p>In short, we build, test and publish the app with the <code>microsoft/dotnet:2.2-sdk</code> base image, before copying over the binaries to the <code>microsoft/dotnet:2.2-aspnetcore-runtime</code> image.</p>

<p>To use <code>coverlet</code> and extract code coverage, this line does the trick:</p>

<pre><code class="language-shell">RUN dotnet test /p:CollectCoverage=true /p:Include="[BikeDashboard*]*" /p:CoverletOutputFormat=opencover  
</code></pre>

<p>Notice the <code>label</code> on line 3:</p>

<pre><code class="language-shell">LABEL test=true  
</code></pre>

<p>With the label, it is possible to look up the id of the <code>docker build</code> <em>layer</em> containing the code coverage file, create a container from that <em>image layer</em> and use <code>docker cp</code> to grab hold of the coverage XML. Take a look:</p>

<pre><code class="language-shell">export id=$(docker images --filter "label=test=true" -q | head -1)  
docker create --name testcontainer $id  
docker cp testcontainer:/app/TestBikedashboard/coverage.opencover.xml .  
</code></pre>

<h2 id="wrappingitupwithtravisandcodecovio">Wrapping it up with Travis and codecov.io</h2>

<p>So now we have a simple build chain with a multi-stage <code>Dockerfile</code> and code coverage generation. As a last feature, the coverage report can be used by code coverage analyzers like <a href="https://codecov.io/">codecov.io</a>. codecov.io <a href="https://github.com/apps/codecov">integrates with Github</a>, and can automatically analyze incoming pull-requests and break a build if merging the PR would make coverage drop. Quite nifty.</p>

<p>Integrating codecov.io with CI systems like Travis is done with a one-liner, thanks to the provided <a href="https://docs.codecov.io/docs/about-the-codecov-bash-uploader">upload-script</a>. When using Travis, not even a token is required.</p>
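<p>That one-liner is the bash uploader, pointed at the coverage file we copied out of the test layer (a sketch; the <code>-f</code> path matches the <code>docker cp</code> above):</p>

<pre><code class="language-shell">curl -s https://codecov.io/bash | bash -s -- -f coverage.opencover.xml
</code></pre>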

<p><code>.travis</code> example file:</p>

<script src="https://gist.github.com/andmos/65143919934e8f5deeb02c6705f9e780.js"></script>]]></content:encoded></item><item><title><![CDATA[Ensure consistent Markdown style with Markdownlint]]></title><description><![CDATA[<p>Markdown is great. It's easy and flexible, and provides a good markup language even non-technical people can understand and enjoy. But, that flexibility and customizability can come at a cost. Document buildup can be done in many ways, and it can be hard to ensure consistency when working with multiple</p>]]></description><link>http://blog.amosti.net/ensure-consistent-markdown-style-with-markdownlint/</link><guid isPermaLink="false">4e562b9e-adf2-4d08-8b81-3e12617d76ef</guid><dc:creator><![CDATA[Andreas Mosti]]></dc:creator><pubDate>Sat, 05 Jan 2019 16:04:00 GMT</pubDate><content:encoded><![CDATA[<p>Markdown is great. It's easy and flexible, and provides a good markup language even non-technical people can understand and enjoy. But, that flexibility and customizability can come at a cost. Document buildup can be done in many ways, and it can be hard to ensure consistency when working with multiple documents and contributors.</p>

<p>I like to think of markup languages as code, and most code deserves a good style guide. <a href="https://github.com/DavidAnson/markdownlint">Markdownlint</a> is a good alternative.</p>

<p><code>markdownlint</code> provides <a href="https://github.com/DavidAnson/markdownlint/blob/master/doc/Rules.md">a nice set of standard rules</a> when writing markdown, like:</p>

<ul>
<li>Heading levels should only increment by one level at a time</li>
<li>Lists should be surrounded by blank lines</li>
<li>First line in file should be a top level heading</li>
<li>No empty links</li>
<li>No trailing spaces</li>
<li>No multiple consecutive blank lines</li>
</ul>

<p>just to name a few. It also ensures consistency in headers, like</p>

<pre><code class="language-markdown">My Heading  
===
</code></pre>

<p>vs.</p>

<pre><code class="language-markdown"># My Heading
</code></pre>

<p>Another smart rule is ensuring a language is specified for code blocks. <br>
The <a href="https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint">VSCode extension</a> shows a squiggle when a code block is missing a language description:</p>

<p><img src="https://user-images.githubusercontent.com/1283556/176104410-42b63ddb-ead2-4a38-b9a6-1a1ffcc82b97.png" alt="MarkdownLint Example"></p>

<p>If some rules don't fit your style or project, they can be overridden with a <code>.markdownlint.json</code> file:</p>

<pre><code class="language-markdown">{
    "MD013": false, // Disable line length rule.  
    "MD024": false // Allow Multiple headings with the same content.
}
</code></pre>

<p>The easiest way to start using <code>markdownlint</code> is to install the extension for <a href="https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint">VSCode</a> or <a href="https://atom.io/packages/linter-node-markdownlint">Atom</a> (RIP Atom), or integrate it with builds using <a href="https://github.com/sagiegurari/grunt-markdownlint">Grunt</a>, <a href="https://github.com/xt0rted/markdownlint-problem-matcher">Github Actions</a> etc. My preferred way is directly with the <a href="https://github.com/igorshubovych/markdownlint-cli">markdownlint-cli</a>.</p>
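<p>With the CLI installed from npm, linting a whole repository is a one-liner:</p>

<pre><code class="language-shell">$ npm install -g markdownlint-cli
$ markdownlint '**/*.md' --ignore node_modules
</code></pre>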

<p>For my <a href="https://github.com/andmos/Coffee">Coffee recipes</a> I use a simple container with Github Actions:  </p>

<script src="https://gist.github.com/andmos/a32940491b540ff5a1bf487ac0b26046.js"></script>

<p>If any rules are broken, it breaks the build.</p>]]></content:encoded></item></channel></rss>