webcodr

Please don't do this with switch statements

The classic C-like switch statement is fine, but it has its flaws. It’s no coincidence that modern languages offer alternatives: Kotlin has when, Rust has match, and Zig ships a more fine-tuned version of switch.

I’m currently in the early stages of rewriting a large and complex Java code base in Kotlin. Some parts of this codebase are really ugly, unnecessarily complicated and convoluted. Yesterday I came across a nasty use of a switch statement in an operation on a Java Stream. Unfortunately I can’t share the real code, but imagine something like this:

int value1;
int value2;

for (...) {
    switch (enumValue) {
        case FOO:
            if (something) {
                if (somethingElse) {
                    value1 = someConvolutedStreamOperations();
                    value2 = someOtherConvolutedStreamOperations();
                    // imagine 30 more lines here

                    break;
                } else {
                    value2 = 0;
                }
            }

        case BAR:
        default:
            value1 = 1;
            value2 = 2;

            break;
    }
}

This was part of a Java class with over 1,000 lines of code. Streams with many operations everywhere, and sometimes very deep nesting thanks to old-style Java code. The original case for Enum.FOO stretched over almost an entire screen, so it’s not easy to spot potential pitfalls. After I ran IntelliJ’s Kotlin migration tool and cleaned up all errors, it was time to run the unit tests: four out of 24 failed.

As you can imagine, it’s not straightforward to find problems in such a large and complex class, but I came across a warning about an unused value assignment. It wasn’t immediately clear why it was there, so I compared the original Java file with the Kotlin version.

Since Kotlin has no switch, IntelliJ converted it to when. Unlike its Java counterpart, when can handle null and is exhaustive with enums. There is also no break keyword, and that’s exactly the problem here.

Look at the Java code and think about what happens on Enum.FOO if something is true but somethingElse isn’t. The break keyword is never reached and the switch statement falls through to the default block. As a result, value1 and value2 are assigned the default values.

IntelliJ’s migration tool is quite good, but it didn’t catch that, and the generated when statement was wrong. That’s also the reason for the unused-assignment warning. I assume the migration tool was confused by the scope of break, since both switch and for can use it. There is also no real equivalent in Kotlin for such a structure: you have to assign both variables in every branch of the when statement. After fixing that, all tests ran successfully.
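To illustrate, here is a sketch of how a corrected when might look. All names are hypothetical stand-ins, since the real code can’t be shared, and the stream operations are reduced to placeholder functions:

```kotlin
// Hypothetical stand-ins for the real enum and the stream operations
enum class MyEnum { FOO, BAR }

fun someConvolutedStreamOperations() = 42      // placeholder result
fun someOtherConvolutedStreamOperations() = 7  // placeholder result

fun compute(enumValue: MyEnum, something: Boolean, somethingElse: Boolean): Pair<Int, Int> =
    when (enumValue) {
        MyEnum.FOO ->
            if (something && somethingElse) {
                Pair(someConvolutedStreamOperations(), someOtherConvolutedStreamOperations())
            } else {
                // in the Java original this path silently fell through to the
                // default block; in Kotlin it has to be written out explicitly
                Pair(1, 2)
            }
        MyEnum.BAR -> Pair(1, 2) // the old default branch
    }
```

Since when is an expression, both values can be returned as a pair instead of mutating two variables, which also removes the dead value2 = 0 assignment entirely.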

It’s a really good example of how not to use switch statements, especially with nested control structures inside a case. Since the original author of the code is no longer available, I can only guess why it was written this way, probably to avoid duplication, or the fallthrough may even have been unintended. It might seem like an elegant solution if you’re familiar with the code, but to other people it’s just an unnecessary pitfall that should be avoided. It’s unintuitive at best, not easy to comprehend and can lead to nasty bugs, especially if the behaviour is unintended. That’s why Kotlin, Rust, Zig and other modern languages avoid break in their alternatives to switch statements in the first place.

Fixing no A2DP with Bluetooth headsets on Linux

Please beware that the following instructions are suitable for media consumption only! After these changes your headset can’t make any calls without a dedicated microphone until you undo them.

Having trouble with the audio quality of your Bluetooth headset on Linux? It sounds awful when you’re listening to music and videos? Well, congratulations, I had the same problem and found a solution. At least if you’re only into listening and won’t make any calls. This works on Ubuntu, Ubuntu-based distributions like Pop!_OS, or any other distribution that relies on BlueZ and PipeWire/WirePlumber for Bluetooth audio.

What’s wrong?

Bluetooth has different profiles for different things. If you want to make a call, your headset will switch to the Hands-Free Profile (HFP). The available bandwidth is shared between audio input and output, and different audio codecs are used. It’s good for calls, but really shitty if you want to listen to music. The headset needs to switch to A2DP (Advanced Audio Distribution Profile) for good sound quality. This should happen automatically, and HFP should only be active if you’re making a call. I never had trouble with this on macOS or Windows, but I’m trying Pop!_OS now. It worked for a few days, but today the headset would only connect with HFP, and streaming music or watching videos was as pleasant as dental treatment with a power drill.

Many searches later, I found out that WirePlumber (the session manager for the PipeWire multimedia framework) has some bugs that trigger HFP on Bluetooth headsets even if there’s no call. That’s pretty annoying, but at least somewhat easy to solve. There’s a solution in the Arch Linux Wiki, but it needs some modifications for Ubuntu-based distributions.

The solution

First you need to create a directory path in your home directory:

mkdir -p ~/.config/wireplumber/bluetooth.lua.d/

This creates a directory that allows you to override the default WirePlumber Bluetooth config without overwriting the original file.

Now copy the original config to the overwrite directory:

cp /usr/share/wireplumber/bluetooth.lua.d/50-bluez-config.lua ~/.config/wireplumber/bluetooth.lua.d/

You can now edit the file in the override directory.

Beware that the original instructions from the Arch Linux Wiki contain a conf file, but at least on Ubuntu (and Pop!_OS) the config file is written in Lua, so it’s a completely different syntax.

Look for bluez5.roles, which should be commented out. I’d recommend not replacing the comment and just putting the following line below it. That makes it easier to undo if something goes wrong or you need to enable HFP again.

["bluez5.roles"] = "[ a2dp_sink a2dp_source ]"

Save the file and restart the Bluetooth service:

sudo systemctl restart bluetooth

Now reconnect your device and A2DP should be working fine.

Awesome CLI Tools

There are some incredibly useful CLI tools out there. Here’s a list with some awesome tools I’m using for my daily work.

atuin

Atuin is a shell history replacement with fuzzy-finding search and sync/backup options (self-hosted if you like). It’s written in Rust (blazingly fast!) and stores the history entries in an SQLite database. You can even import your current history from your shell. Atuin supports bash, zsh, fish and Nushell.

bat

Use cat a lot? Bat is a cat clone on steroids with syntax highlighting, themes, git integration and much more.

eza

Everyone needs ls, right? Nope, eza is much better: colored output, icons via Nerd Fonts, git status tracking per file, tons of display options …

tldr

Reading a man page can be frustrating. Why can’t I just have the TL;DR version? Well, tldr does exactly that. Just use it like man and enjoy the TL;DR version of a man page.

zoxide

Compared to its modern siblings, cd is a little dated and clunky. With zoxide you can easily jump to directories without typing the full path. It stores a history of your visited paths, so you can jump to your directories via keywords.

chezmoi

You’re using multiple computers or just want a simple and reliable way to store your dotfiles? Chezmoi is your friend: it stores your dotfiles in a git repo with syncing capabilities across devices. It’s even possible to encrypt your files. If you have secrets in your dotfiles, chezmoi comes with integrations for many password managers to safely store your passwords, tokens etc.

starship

Your shell looks boring? Just theme it with starship! It’s pretty easy to build your own theme, and if you don’t want to, there are many themes available. Starship also has integrations for many dev tools to show the current git status or the currently active versions of your runtime environments like Node.js, Rust, Go, Java etc.

fzf

Finding files with the usual suspects works fine, but fzf can do it faster and much more intuitively. It’s a fuzzy-finding search across your current directory, processes, git commits, history (if you don’t like Atuin) and much more.

ripgrep

Ripgrep is a really fast regex-based search tool and can do much more than grep alone.

btop

Another top variant? Yup, but btop is way more like its modern GUI-based colleagues on macOS or Windows, with CPU and GPU usage, process trees, I/O and disk activity, battery status …

Micro DSLs for builders with Kotlin

The builder pattern is a great tool, and it’s heavily used in many Java projects and dependencies. But in a Kotlin code base it looks a little odd and out of date. In this short post I will show you how to write a micro DSL on top of a builder with just a few lines of code.

I’m using Spring’s ResponseCookie class as the base for the DSL, as it already has a builder on board.

A little example:

val cookie = ResponseCookie
    .from("cookie name", "cookie value")
    .httpOnly(true)
    .path("/")
    .build()

How would this look with a micro DSL?

val cookie = createCookie("cookie name", "cookie value") {
    httpOnly(true)
    path("/")
}

Instead of calling the static method ResponseCookie.from(), which returns a ResponseCookieBuilder object, you just give the function three parameters: two strings for name and value, and a trailing lambda with the builder context. There is also no need to call ResponseCookieBuilder.build() anymore. It’s shorter and easier to read. Since this is only a small example, the advantages are not that big. Micro DSLs really shine with large and often-used builders. They can also help to automate things, see below.

How?

fun createCookie(
    name: String, 
    value: String, 
    lambda: ResponseCookieBuilder.() -> Unit
) = ResponseCookie.from(name, value).apply(lambda).build()

Et voilà, a new micro DSL is born. Declaring the trailing lambda with ResponseCookieBuilder as its receiver type does the trick: inside the lambda, this is an instance of ResponseCookieBuilder, so its methods can be called directly. All we have to do is create a builder with ResponseCookie.from() and call Kotlin’s apply function on it with the lambda. apply runs the lambda against the builder instance and returns that instance, so we can call build() on the result and return the finished ResponseCookie.

You can use this little trick with all builders. Need more automation? No problem! In my current project we’re using such micro DSLs to create product configurations. The factory method runs sanity checks after the lambda has been applied to the builder object. It also fetches a YAML file via the product ID given to the builder, parses the data into an object and puts it into a property of the builder. After the configuration object is created, the factory method adds it to a map and returns the instance, which is stored in a variable. It’s now possible to access the configuration directly via its variable name or to fetch it from the map. The variable is very useful for tests, but if we have to fetch the configuration dynamically by ID from a string, the map is the way to go.
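Here’s a stripped-down sketch of that pattern. All names are invented and the YAML loading is reduced to a comment, since the real project code can’t be shown:

```kotlin
// Hypothetical product configuration built via a micro DSL factory
class ProductConfig private constructor(val id: String, val displayName: String) {

    class Builder(val id: String) {
        var displayName: String = ""
        fun build() = ProductConfig(id, displayName)
    }

    companion object {
        // map for dynamic lookup by product ID
        private val registry = mutableMapOf<String, ProductConfig>()

        fun create(id: String, lambda: Builder.() -> Unit): ProductConfig {
            val builder = Builder(id).apply(lambda)
            // in the real project, a YAML file would be fetched via the
            // product ID here and parsed into a property of the builder
            val config = builder.build()
            // sanity check after the lambda was applied
            require(config.displayName.isNotBlank()) { "displayName must be set" }
            registry[id] = config
            return config
        }

        fun byId(id: String): ProductConfig? = registry[id]
    }
}
```

Usage then looks like val premium = ProductConfig.create("premium") { displayName = "Premium Plan" }, and the same instance can later be fetched dynamically with ProductConfig.byId("premium").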

Of course there are other ways of achieving such automation, but the micro DSL approach is simple, improves readability and can also reduce redundant code. You can even easily nest builders to create a more powerful DSL. Spring’s Kotlin extensions also rely on micro DSLs and extension functions. Take Spring Security, for example: its fluent interface for the security configuration is awful to read and difficult to understand, but Spring also provides a Kotlin extension with a micro DSL for that. So much more intuitive and easier to read. There are many more extensions, such as for bean creation, the MVC mock in tests etc.

US International Keyboard Layout Without Dead Keys

Depending on your country’s keyboard layout, writing code can be quite annoying. The German ISO layout is an exceptional pain in the ass, as almost all relevant symbols require a modifier key, sometimes even two (I’m looking at you, Apple). German has some special characters (ä, ö, ü, ß) and it makes sense to have them readily available without modifier keys, but it just sucks for programming.

I decided to switch to the US ANSI layout and bought two new keyboards: a Keychron K3 Pro for my MacBook Pro (light and portable, perfect if I have to go to my company’s office) and a Keychron Q1 version 2 for my Windows PC. By the way, the Q1 is heavily modded and will be tweaked further in the coming weeks. I will write an article about the mods after the keyboard is finished.

Both keyboards work great, but to type German special characters I have to use the US international layout on Windows, and there’s a catch. Who would have thought, with Microsoft involved? Certain keys like single/double quote or accents/tilde are so-called dead keys.

If you press them, nothing happens at first. Only after pressing the next key does the symbol appear. So, if you want to type a text wrapped in double quotes, you press the key and the double quote only appears as soon as you type the first character of the actual text. It’s annoying as fuck and drives me crazy.

Solution

Unfortunately Microsoft does not ship a US international layout without dead keys for Windows. Most Linux distributions do exactly that, but I guess that’s too easy for a big international corporation like Microsoft.

What to do? After a little bit of googling, I found a neat little tool, the Microsoft Keyboard Layout Creator. I’ve never heard of this program before but it’s legit and works fine.

If you’re having trouble with the dead keys like me, I recommend the following steps:

  1. Download the Microsoft Keyboard Layout Creator from Microsoft’s website
  2. Open it and load the US international layout via File -> Load Existing Keyboard...
  3. All dead keys are shown with a light grey background. You can remove their dead key status via the context menu
  4. Don’t forget to activate the shift layer via the checkboxes left of the keyboard layout and disable the dead keys as well
  5. Save your config
  6. Build the layout via Project -> Build DLL and Setup Package
  7. After the build finished, a dialog will ask you to open the build directory. Open it and run setup.exe to install the new layout
  8. Restart Windows. This should not be necessary, but unfortunately Windows is Windows …
  9. Go to Settings -> Time and Language -> Language and Region
  10. Select your preferred language and click the three dots and choose Language options
  11. Use Add a keyboard and select United States International - No Dead Keys
  12. I recommend removing all other keyboard layouts, so they can’t interfere

Now you should have a working US international layout without those annoying dead keys. Happy typing!

Introducing Server Runner

In my recent adventures with Rust, I planned to write a REST API with the help of the excellent book “Zero To Production In Rust” by Luca Palmieri. That’s still happening, but as a small side project, I wanted to write some kind of CLI tool.

A few weeks ago I wrote a bash script to run some web servers and check their status until they’re up and running. When all servers are ready, a command is executed, and all servers are shut down after this command finishes. Since I hate bash with a passion, I asked a friend to help me: ChatGPT.

I would never trust an AI in this day and age to write a code base for me, but for small scripts? Why not. As long as the scope is small and I can understand the code, ChatGPT is a really good tool. That script does exactly what I want and it’s easy enough to understand, even for me as a bash hater.

But I wanted to do this properly, so I decided to rewrite the script as a small CLI program in Rust: Server Runner. Not a very creative name, but it does what it says.

Configuration

Server Runner is quite simple and just needs a small YAML file as configuration. Here’s a small example.

servers:
  - name: "Hello World"
    url: "http://localhost:3000"
    command: "node index.js"
command: "node sleep.js"

To start server runner, just run:

server-runner -c servers.yaml

Server Runner executes all server commands defined in the config section servers and waits until the URLs return HTTP 200. When all servers are up and running, the primary command is started. After that command finishes, all server processes are killed.

How do I get it?

Server Runner is available as a Cargo crate and will soon be published on NPM with executables for macOS (aarch64, x86_64), Linux (aarch64, x86_64) and Windows (x86_64).

Installation via Cargo

cargo install server-runner

The source code is available on GitHub.

Terminal evolved

I always saw myself as a casual user of the terminal. I preferred zsh with the Prezto framework within iTerm 2 with tabs, and that was about it. No more! A colleague of mine introduced me to kitty as a terminal emulator, together with tmux and Neovim. That’s a lot to swallow. I was never a fan of the vi/vim user experience and more of a mouse guy. Well, what should I say? It’s awesome once you get used to it. Let me explain …

JFYI

kitty, tmux and nvim are available in Homebrew on macOS and should be also available in your favorite package manager on Linux.

Be aware that most examples contain some macOS-specific settings marked with corresponding comments, as I am a Mac user.

kitty

iTerm 2 is a pretty good terminal emulator with many features and way better than Apple’s sorry excuse for a terminal. To be fair, the macOS Terminal app has gotten better over the years, but it still lacks essential features like true color support. As good as iTerm 2 is, there’s one catch: iTerm 2 is slow. GPU-accelerated alternatives like kitty render much faster. Don’t get me wrong, iTerm 2 is no slouch and works well, but if you’re on your way to becoming a terminal power user, you will notice it. Switching between tmux windows is much faster in kitty or other terminal emulators like Alacritty. The latter is a really nice app, but unfortunately has some trouble with macOS key bindings within tmux, and I found no easy solution for that. Kitty works out of the box.

Taming the kitten

Kitty’s configuration is very well documented, but can be overwhelming. There are hundreds of options to explore. One of the most important is the font. Grab yourself a Nerd Font, add it to your OS and specify the font family. I’m using “Hack Nerd Font Mono” for this example. Just open your kitty config in ~/.config/kitty/kitty.conf and add the following:

# Replace with your preferred font
font_family      Hack Nerd Font Mono
bold_font        auto
italic_font      auto
bold_italic_font auto
# Replace with your preferred font size in points
font_size        17.0

Save and reload the config via menu bar. Enjoy!

Now that’s out of the way, how about some comfort features?

# Set how many lines the buffer can scroll back
scrollback_lines 10000 

# Auto-detect URLs
detect_url yes
# Open URLs with ctrl + click
mouse_map ctrl+left press ungrabbed,grabbed mouse_click_url 

# Copy the mouse selection directly to the clipboard
copy_on_select yes 
# Paste on right click
mouse_map right press grabbed,ungrabbed no-op
mouse_map right click grabbed,ungrabbed paste_from_clipboard

# Enable macOS copy & paste via CMD + c/v
map cmd+c copy_to_clipboard
map cmd+v paste_from_clipboard

# Jump to beginning and end of a word with alt and arrow keys (macOS)
map alt+left send_text all \x1b\x62
map alt+right send_text all \x1b\x66

# Jump to beginning and end of a line with cmd and arrow keys (macOS)
map cmd+left send_text all \x01
map cmd+right send_text all \x05

# Nicer titlebar on macOS
macos_titlebar_color background

# Make vim the default editor for the kitty config
editor vim

Want some color? No problem, there are hundreds of themes available just a Google search away. I prefer Catppuccin Macchiato, but choose whatever you want. kitty config files support includes, so it’s easy to add a theme:

include ./theme.conf

Put the file theme.conf in the same directory as the kitty config and paste your theme of choice into it.

tmux

So, what the hell is tmux? If you work in a terminal, tmux will be one of your best friends. Did you ever run something complex in the shell and accidentally close the terminal window, or have something similar happen during an SSH session? It sucks.

tmux sessions to the rescue! A session stays open until you close it, so even if your internet connection breaks down during an SSH session, nothing will vanish. Just connect to the server again and re-attach to the tmux session. Everything will be as you left it.

Just type tmux new, or if you want to give the session a name, tmux new -s my_new_session. Of course tmux can handle multiple sessions. To list all open sessions use tmux ls, and to attach to a session type tmux a -t session_name.

After opening a new session, tmux will display window 0. Need more windows? No problem. Need a window inside a window? No problem, those are called panes. Windows can be split into horizontal or vertical panes, as many and as wild as you like.

Inside a tmux session you can trigger commands via a so-called prefix key, followed by one or more keys to tell tmux what you want to do. The default prefix key is ctrl + b. To split your current window into two horizontal panes press ctrl + b followed by %, for a vertical split use ctrl + b and ".

To close a pane, just exit the shell of the pane with exit. You can switch panes with ctrl + b followed by an arrow key in the corresponding direction.

Our new best friend ctrl + b is not the most intuitive key combination. I recommend using ctrl + a and mapping the caps lock key to ctrl (pro tip: macOS can do this for you without extra tools or customizable keyboard firmware). It’s way faster and easier to press. Of course, you can map whatever key combination you want, just beware of conflicts with other combinations like cmd + space.

To change the command key, go to your tmux config in ~/.tmux.conf and add the following lines:

unbind C-b
set -g prefix C-a
bind-key C-a send-prefix

This unbinds ctrl + b and sets the prefix key to ctrl + a. To reload the tmux config inside a session use tmux source-file ~/.tmux.conf.

More? More!

There is much more you can do. Here are some recommendations.

# Enable mouse support
set -g mouse on

# Set history limit to 100,000 lines
set-option -g history-limit 100000

# Enable true color support
set-option -sa terminal-overrides ",xterm*:Tc"

# Start windows and panes at 1, not 0
set -g base-index 1
set -g pane-base-index 1
set-window-option -g pane-base-index 1
set-option -g renumber-windows on

# Open new panes in the same directory as their parent pane
bind '"' split-window -v -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"

# Vim style pane selection
bind h select-pane -L
bind j select-pane -D 
bind k select-pane -U
bind l select-pane -R

# Shift arrow to switch windows
bind -n S-Left  previous-window
bind -n S-Right next-window

# Don't scroll down on copy via mouse
unbind -T copy-mode-vi MouseDragEnd1Pane

With mouse support you can resize panes via drag & drop and even get a context menu with a right click. By default tmux assigns numbers to windows for fast switching via ctrl + b and a number key. Unfortunately the developers decided to begin with 0. This is technically correct, but on the keyboard it’s quite unintuitive, so we can tell tmux to begin with 1. The rest is pretty much self-explanatory.

Neovim

Why Neo? Good old vim is extensible via Vimscript. It works, but it’s like bash: ugly as fuck. Neovim is a fork of vim that replaces Vimscript with support for Lua-based extensions. So it’s still blazingly fast(tm) and much nicer for writing extensions.

You could set up Neovim and the necessary extensions yourself, but I wouldn’t recommend it in the beginning. Pre-built configs like AstroNvim or NvChad will massively speed up the process and have great defaults. It can be very overwhelming to get used to vim/nvim, so I would recommend waiting with your own config until you’re more familiar with a keyboard-based editor.

HELP! I can’t quit vim!

Don’t worry, you are not the first and will certainly not be the last. vim is a so-called modal editor. It has different modes like normal, insert, visual etc. As you may have noticed, typing will not add text to the buffer. You need to press certain keys like i to enter insert mode and edit text. There are other keys to enter insert mode, and every one of them has a slightly different but pretty useful function, like o, which creates a new line below the cursor and starts insert mode.

To exit vim you have to leave insert mode by pressing esc. Now you are in normal mode and can quit by typing : to enter command-line mode, then q for quit, followed by enter to execute the command.

If you want to save a file, enter command-line mode and use w for write. It’s possible to chain certain commands: wq will save the file and quit vim.

Congratulations, you now know how to exit vim!

To move the cursor you can just use the arrow keys, as in most other editors. But there is a more efficient key mapping in normal mode: h (left), j (down), k (up) and l (right). No need to move your hand to the arrow keys anymore. To be honest, I’m still not comfortable with this way of navigation, but it’s objectively more efficient than moving the right hand to the arrow keys.

Of course vim has way more navigation possibilities. For example, the cursor can jump forward by one word with w and backwards with b. Press $ to jump to the end of the current line or 0 to the beginning. G navigates to the last line and gg jumps to the first line. And there is so much more to explore. I recommend a decent vim cheat sheet to learn from. But do yourself a favor and don’t try to learn all keys at once. You will only become frustrated and give up more easily; it’s just too much to learn everything in the beginning.

Netflix developer and Twitch streamer ThePrimeagen designed a Neovim plug-in to learn the navigation commands as a game. It’s pretty good and fun: https://github.com/ThePrimeagen/vim-be-good

AstroNvim has many plug-ins out of the box. Syntax highlighting, linting, auto-formatting etc. are all there, but you need to install the corresponding servers, parsers etc.

To do this, enter command-line mode and use LspInstall followed by the language name to install a language server. A Tree-sitter parser can be installed with TSInstall followed by the language name. If there is nothing available, both commands will recommend plug-ins according to your input, if possible.

Be aware that LSPs and Tree-sitter will not bring the necessary tools with them. If you install tooling for Rust, rust-analyzer has to be installed on the system. The same goes for ESLint, Prettier, Kotlin, Java etc.

The End

… for now

That was a lot to unpack, but there is so much more to show you. I will be back with more productivity tools and tips in the near future. Stay tuned!

Real-world performance of the Apple M1 in software development

There are enough videos on YouTube out there to show how awesome the new Macs are, but I want to share my perspective as a software developer.

About six weeks ago, I was too hyped not to buy an ARM-based Mac, so I ordered a basic MacBook Air with 8 GB RAM (16 GB was hard to get at the time). As strange as it sounds, I don’t regret buying only 8 GB of RAM. On an Intel-based Mac this would be an absolute pain in the ass; even my old 15” MacBook Pro Late 2017 with 16 GB sometimes struggles with RAM usage.

It’s really amazing how well this small, passively cooled MacBook Air keeps up. In many scenarios it even surpasses my MacBook Pro with ease. I never had an Intel-based MacBook Air, but the last time I used a dual-core CPU for development it was not pretty, and that was a pretty decent i5, not an ultra-low-voltage i3.

Speed, Speed, Speed

Unfortunately I couldn’t really develop software on the MacBook Air for a while, since Java and IntelliJ were not available for aarch64-based Macs. Of course I tried Rosetta 2, but at least for these two it’s quite slow. NodeJS, on the other hand, is incredibly fast.

All this changed after my Christmas vacation. IntelliJ was updated, and thankfully Azul released a JDK 8 for ARM Macs. A native version of Visual Studio Code is also available and quite fast.

So, no more introductions, here are some real-world scenarios and numbers.

I currently work on a Java project with a steadily growing Kotlin codebase. It’s a little special, since another part of the application is written in Ruby. We’re using JRuby, so everything is bundled together with a Vue-based frontend in a WAR file with Maven.

Maven Build Times

All build times are with fully cached dependencies, so there is no interference from my internet connection.

Used devices:

  • 15” MacBook Pro Late 2017: Intel Core-i7 7700HQ, 16 GB RAM
  • 13” MacBook Air Late 2020: Apple M1, 8 GB RAM
  • PC: AMD Ryzen 9 3950X, 32 GB RAM
Device        Build time with tests   Build time without tests
MacBook Pro   223 s                   183 s
MacBook Air   85 s                    63 s
PC            84 s                    66 s

Well, a small, passively cooled MacBook Air is as fast as a full-blown, custom water-cooled 16-core monster of a PC. The MacBook Pro gets utterly destroyed. To put it plainly: the cheapest notebook Apple makes destroys a MacBook Pro that costs more than twice as much.

Ruby Unit Test Times

The test suite contains 1,087 examples. Please keep in mind that I had to use Rosetta 2 in order to get everything running on the MacBook Air, since not all of the Ruby gems we use are compatible with ARM at this time. All tests were run with Ruby 2.7.1.

Device                        Test duration
MacBook Pro (native)          1.9 s
MacBook Air (with Rosetta 2)  1.1 s
PC (native)                   1.3 s

Yeah, it’s quite fast compared to a suite of Java-based unit tests, but even here the MacBook Pro has no chance at all.

Frontend Build Times

The frontend is a Vue-based single-page application. As with Ruby, I had to use NodeJS with Rosetta 2, since not all of the modules we use are compatible with ARM.

Device                        Build duration
MacBook Pro (native)          27.8 s
MacBook Air (with Rosetta 2)  20.7 s
PC (native)                   20.6 s

Well, it’s more than obvious now that the MacBook Pro has no chance at all against my MacBook Air. And it’s not just the performance: after a few seconds of load, the MacBook Pro sounds like my F/A-18C in DCS immediately before a carrier launch, while the MacBook Air has no fan and therefore makes no noise at all.

And the battery life. Oh my god. Ten straight hours of development with IntelliJ and Visual Studio Code is entirely possible now, all while staying cool and quiet.

Even the dreaded battery murderer Google Meet is no problem anymore. My MacBook Pro on battery would last perhaps 2.5 hours max. The MacBook Air is capable of 8, perhaps even 9 hours of Meet. It’s as insane as an 8-hour Meet itself.

Ah, yes, there is another thing: Meet does not cripple the performance anymore. The MacBook Air is totally usable with a Meet going on, while my MacBook Pro becomes sluggish as hell and is barely usable (even without Chrome and frakking Google Keystone).

Conclusion

I will make it short: if your tools and languages are already supported, or at least quite usable with Rosetta, go for it. I would recommend 16 GB or more (depending on future models) if you want to buy one. I’m surprised that an 8 GB MacBook Air is this capable, and to be honest, I don’t feel like it’s going to be a problem for a while, but no one regrets more RAM …

Air vs Pro

The 13” MacBook Pro is a little faster over longer periods of load thanks to active cooling, it has more GPU cores, the loved-or-hated Touch Bar and a bigger battery. If you need that, go for it. But if you can, I’d recommend waiting for the new 14” and 16” Pro models.

They will be real powerhouses with 8 instead of 4 Firestorm cores, vastly more RAM and even bigger batteries. And hey, perhaps they’ll come with MagSafe and some other ports we MacBook users haven’t seen in a while.

Ryzen vs Apple Silicon and why Zen 3 is not so bad as you may think

Apple’s M1 is a very impressive piece of hardware. If you look at benchmarks like SPEC or Cinebench, it’s an absolute beast with ridiculously low power consumption compared to a Ryzen 9 5950X.

Is Zen 3 really that inefficient?

10 W vs. 50 W looks really bad for AMD, but the answer to this question is a little more complicated than a raw comparison of power usage.

Here are basic things to know:

  • Apple’s M1 has four high-performance cores (Firestorm) and four high-efficiency cores (Icestorm); this is called a big.LITTLE architecture.
  • AMD’s Ryzen 9 5950X is a traditional x86 CPU with 16 cores and 32 threads.

So what? The benchmarks are about single-core performance, and the Ryzen needs 40 W more. What a waste.

Right, but this says nothing about the efficiency of a single core.

I don’t have a 5950X, but I do have its older brother, the 3950X. Same core count and about the same power usage. Monitoring a single-core run of Cinebench R20 with HWiNFO64 shows the active core drawing about 10 to 12 W.

Okay, but what happened to the other 38 watts?

As you may know, AMD introduced a so-called chiplet architecture with Ryzen 3000. A Ryzen 9 5950X has three dies: two with 8 cores each, and a separate I/O die that handles PCIe 4.0, DDR4 memory and the Infinity Fabric links to the CPU dies.

This I/O die consumes 15 to 17 W. It’s a quite large chip produced on a 12 nm node by GlobalFoundries. That alone is a big reason for higher power usage compared to a modern node like TSMC’s N7P (7 nm), which is used for the other two chiplets.

Why 12 nm? It’s a compromise: TSMC’s 7 nm production capacity is still somewhat limited, and AMD already takes a fairly large share of it. All Zen 2 and Zen 3 cores are produced in 7 nm, as are all modern AMD GPUs, and of course there are other customers as well.

I hope AMD improves the I/O die dramatically with Zen 4 (Ryzen 6000). A modern process node and better energy-saving functions would do the trick.

There are still 21 W unaccounted for!

The other 15 cores are also unaccounted for. 21 W divided by 15 cores is about 1.4 W per core on average. That seems a bit high, but it’s possible. It depends on background tasks from the OS, the current Windows power plan and so on.
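Written out, this back-of-the-envelope accounting looks like the following sketch. The wattage figures are the rough estimates from this post (measured with HWiNFO64 on a 3950X), not precise measurements:

```python
# Rough power accounting for a Ryzen 9 5950X / 3950X during a
# single-core benchmark run. All figures are rough estimates.
package_power = 50.0  # W, total package draw under single-core load
active_core = 12.0    # W, the one busy core (10-12 W range)
io_die = 17.0         # W, the 12 nm I/O die (15-17 W range)

remainder = package_power - active_core - io_die
idle_cores = 15  # the remaining, mostly idle cores

print(remainder)                           # 21.0 W unaccounted for
print(round(remainder / idle_cores, 1))    # ~1.4 W per idle core
```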

To be fair, AMD’s per-core energy-saving functions could be a little better. They are not bad, but there’s room for improvement.

Process nodes

For a fair comparison we also have to include the process nodes. As mentioned above, Ryzen cores are manufactured on TSMC’s 7 nm node. Apple is one step ahead in this category: the M1 and its little brother, the A14, are already on 5 nm, again from TSMC.

This alone could account for up to 30% less power usage. There are no exact numbers available, but 20 to 30% would be reasonable.

Core architecture

Current Zen 3 cores and Apple’s Firestorm cores have fundamental architectural differences. Modern x86 cores are comparatively small but clock quite high: under a single-core load like Cinebench R23 you will get about 5 GHz from a 5950X, while a Firestorm core will only clock up to 3.2 GHz.

Wait, what? Firestorm at 3.2 GHz is about as fast as Zen 3 at 5 GHz? That can’t be true. Yet it is. According to the benchmarks, a Firestorm core can execute about twice as many instructions per cycle (IPC) as a Zen 3 core.
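As a sanity check, the trade-off can be modeled very crudely as throughput ≈ clock × IPC. The 2× IPC figure is the approximation quoted above; a real comparison also depends on caches, memory and the workload:

```python
# Crude single-core throughput model: performance ~ clock * IPC.
# Numbers are the approximations from this post, IPC normalized to Zen 3.
zen3_clock_ghz, zen3_ipc = 5.0, 1.0
firestorm_clock_ghz, firestorm_ipc = 3.2, 2.0

zen3 = zen3_clock_ghz * zen3_ipc
firestorm = firestorm_clock_ghz * firestorm_ipc

# Roughly on par (even slightly ahead) despite 1.8 GHz less clock speed.
print(round(firestorm / zen3, 2))  # 1.28
```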

Firestorm is an extremely wide core design with many execution units. This is complemented by absurdly large L1 caches and re-order buffers, much larger than on any x86 core ever.

This allows Apple to lower the clock speed significantly without compromising performance. Lower clock speeds also mean lower voltage and power usage.

But there’s more: since the A13 (iPhone 11), Apple has built many incremental power-saving functions into every part of their SoCs. They can lower the power usage to an absolute minimum, or turn parts of the silicon off and back on very quickly, so the chip is ready the moment the user needs more computational power and shuts those parts down again just as fast.

Modern x86 CPUs have similar functions, but not quite as advanced, nor in as many parts of the chip, as current Apple SoCs. Remember AMD’s I/O die, which draws basically the same power all the time.

Instruction set

x86 is old and can be quite cumbersome to work with. Apple’s CPUs are based on ARM, a more modern instruction set architecture. The same tasks can often be achieved with fewer instructions, which means faster computation and lower power usage.

Under the hood, current x86 CPUs have nothing to do with their older cousins, but the instruction set is still the same. To be fair, over the years AMD and Intel have added far more modern instructions like AVX for certain operations, but the core of the instruction set is still 40 years old.

Conclusion

Taking all this into account, a Zen 3 core is not that bad compared to a Firestorm core. Of course Apple has advantages over x86, but they are more about the process node, the different architecture and the instruction set than about the efficiency of the cores themselves.

Well, and then there’s Intel. The slowly awakening giant is not without hope, but they are way behind Apple and AMD. As long as their high-performance CPUs are stuck on 14 nm, they are … to put it simply, fucked.

10 nm is on its way, but there are still problems. The next release of desktop- and server-class CPUs will still be on 14 nm, but with a more efficient architecture backported from the 10 nm designs. It will help, but they need a better process node, and they need it ASAP.

10 nm could be ready for the big chips in 2021; at least new Xeons on 10 nm have been announced. But the question is: how good is the node? Some of the already released 10 nm notebook CPUs are quite good, so Intel could be back in a year or two.

They also plan to release Alder Lake in 2021, a big.LITTLE-style CPU design. That would be much appreciated in notebooks, but the software has to be ready: without decent support from the operating system, such a CPU will not work properly.

There’s another problem: Intel’s 7 nm node. According to YouTubers like AdoredTV or Moore’s Law Is Dead, 7 nm could be the 10 nm disaster all over again. Let’s hope not.

Three competing CPU vendors on par with each other would be amazing, not just for pricing, but also for computational power. Look at what AMD has done in the past three years. If someone had said in 2016 that AMD would be kicking Intel’s and Nvidia’s asses four years later, they would have been called a madman. The same goes for Apple.

2020 is a shitty year, but the hardware? Awesome would be an understatement.

webcodr goes Netlify CMS

Until today, posts on webcodr were published via a simple Git-based workflow: if I wanted to create a new post, I had to open the repository in Visual Studio Code and create a Markdown file. After pushing the commit with the new file, a GitHub hook notified Netlify to pull the repo, build the site and publish it.

It was quite simple and effective, but it lacked comfort and did not work on iOS/iPadOS devices. After buying a new iPad Air and a Magic Keyboard, I wanted a pragmatic way to write posts without my MacBook or PC.

I had always wanted to try Netlify CMS, so this was my chance. The transition was really simple: I followed the guide for Hugo-based sites and adjusted the config to my setup. That’s it. Netlify CMS works just fine with the already existing Markdown files. Even custom front matter fields within the Markdown files are no problem; just set them up in the config file.
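For reference, a minimal Netlify CMS `config.yml` for a Hugo site looks roughly like this. The folder names and front matter fields are illustrative, not the actual webcodr configuration:

```yaml
backend:
  name: git-gateway   # Netlify Identity + Git Gateway
  branch: master

media_folder: static/images   # where uploads land in the repo
public_folder: /images        # how they are referenced in posts

collections:
  - name: posts
    label: Posts
    folder: content/posts     # Hugo's content directory
    create: true
    fields:
      - { label: Title, name: title, widget: string }
      - { label: Date, name: date, widget: datetime }
      - { label: Body, name: body, widget: markdown }
```

Custom front matter fields simply become additional entries in the `fields` list of the collection.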

A little more configuration in the Netlify admin panel is necessary, but the guide explains everything very well. It was just a matter of 15 minutes to get everything going.

If you use a static site generator and want a little more comfort, Netlify CMS makes it really simple. They provide guides for every major player like Gatsby, Jekyll or Nuxt, and of course Hugo. You don’t even have to use GitHub: GitLab and Bitbucket are supported as well, as are more complex workflows for multiple editors, custom authentication with OAuth and custom media libraries.

Editor

Netlify CMS supports Markdown and has a basic but decent editor with a rich-text mode. I wanted a little more, though, so I decided to write my posts in Ulysses, a specialized writing app with support for GitHub-flavored Markdown, including a syntax-highlighted preview. It’s available for macOS and iOS/iPadOS, and all files and settings are synced via iCloud. So, once you have set everything up, you’re good to go on any of your Apple devices.

Since Ulysses requires a subscription, I will use this to „force“ me to write more posts. 😁

I wrote this post entirely on my iPad Air, and so far I’m quite happy with the new workflow. Of course, I will not write every post this way: new posts with code examples will be easier to handle on a Mac or PC. (Hey Apple, how about IntelliJ on an iPad?)

btw: even as an enthusiast of mechanical keyboards, I have to say it’s quite nice to type on a Magic Keyboard. I just have to get used to the smaller size. The Magic Keyboard for the 10.9” iPad Air or 11” iPad Pro is a compromise in size, and some keys like Tab, Shift, Backspace, Enter and the umlaut keys (the bracket keys on English keyboard layouts) are way smaller compared to normal-sized keyboards. Its bigger brother for the 12.9” iPad Pro has a normal layout, but that monster of an iPad seems a bit excessive for my use case.