Planet Tor

@blog October 4, 2022 - 00:00 • 2 days ago
Arti 1.0.1 is released: Bugfixes and groundwork

Arti is our ongoing project to create a next-generation Tor client in Rust. Last month, we released Arti 1.0.0. Now we're announcing the next release in its series, Arti 1.0.1.

Over the last month, our team's time has been filled with company meetings, vacations, COVID recovery, groundwork for anticensorship features, and follow-up from Arti 1.0.0. Thus, this has been a fairly small release, but we believe it's worth upgrading for.

This release fixes a few annoying bugs (including one that would cause a busy-loop), tightens log security, exposes an API for building circuits manually, and contains some preparatory work for anticensorship support, which we hope to deliver in early November.

You can find a more complete list of changes in our CHANGELOG.

For more information on using Arti, see our top-level README, and the documentation for the arti binary.

Thanks to everybody who has helped with this release, including Alexander Færøy, Trinity Pointard, and Yuan Lyu.

Also, our deep thanks to Zcash Community Grants for funding the development of Arti!

@blog October 3, 2022 - 00:00 • 3 days ago
The Role of the Tor Project Board and Conflicts of Interest

Over the last couple of weeks, friends of the Tor Project have been raising questions about how the Tor Project thinks of conflicts of interest and its board members, in light of the reporting from Motherboard about Team Cymru. I understand why folks would have questions, and so I want to write a bit about how the board of directors interacts with the Tor Project, and how our conflict of interest process works.

The Role of the Board

First off, a word about non-profit boards of directors. Although every non-profit is unique in its own way, the purpose of a board of an organization like The Tor Project, with a substantial staff and community, is not to set day-to-day policy or make engineering decisions for the organization. The board's primary role is a fiduciary one: to ensure that Tor is meeting its obligations under its bylaws and charter, and to exercise “hire/fire” power over the executive director. Although staff members may consult board members with relevant expertise over strategic decisions, and board members are selected in part for their background in the space, the board is separate from the maintenance of and decision-making on Tor's code, and a board seat doesn't come with any special privileges over the Tor network. Board members may be consulted on technical decisions, but they don't make them; the Tor Project's staff and volunteers do. The Tor Project also has a social contract which everyone at Tor, including board members, has to comply with.

When we invite a person to join the Board, we are looking at the overall individual, their experience, expertise, character, and other qualities. We are not looking at them as representatives of another organization. But because Board members have fiduciary duties, they are required to agree to a conflict of interest policy. That policy defines a conflict as “...the signee has an economic interest in, or acts as an officer or a director of, any outside entity whose financial interests would reasonably appear to be affected by the signee's relationship with The Tor Project, Inc. The signee should also disclose any personal, business, or volunteer affiliations that may give rise to a real or apparent conflict of interest.”

Handling Conflicts of Interest

Like most conflict processes under United States law, non-profit conflicts rely on individuals to assess their own interests and the degree to which they might diverge. The onus is often on individual board members, who know the extent of their obligations, to raise questions about conflicts to the rest of the board, or to recuse themselves from decisions.

It also means that conflicts, and perceived conflicts, change over time. In the case of Rob Thomas's work with Team Cymru, the Tor Project staff and volunteers expressed concerns to me at the end of 2021, spurring internal conversations. I believe it is important to listen to the community, and so I worked to facilitate discussions and surface questions that we could try to address. During these conversations, it became clear that although Team Cymru may offer services that run counter to the mission of Tor, there was no indication that Rob Thomas's role in the provision of those services created any direct risk to Tor users, which was our primary concern. This was also discussed by the Board in March and the Board came to the same conclusion.

But of course, not actively endangering our users is a low bar. It is reasonable to raise questions about the inherent tension between Team Cymru's business model and Tor's mission of private and anonymous internet access for all. Rob Thomas's reasons for choosing to resign from the board are his own, but it has become clearer over the months since our initial conversation how Team Cymru's work is at odds with the Tor Project's mission.

What's Next

We at Tor (me, the board, staff, and volunteers) will continue these conversations to identify how to do better based on what we have learned here.

I have been working with the board to see where things can be done better in general. One of these initiatives is changing the Tor Project's board recruitment process. Historically, recruitment for board slots has been ad hoc, with current board members or project staff suggesting potential new candidates. This selection process has limited the pool of candidates, and meant that the board does not always reflect the diversity of experiences or perspectives of Tor users. For the first time, we are running an open call for our board seats. Although this may seem unrelated to the idea of conflicts, we believe that more formalized processes create healthier boards that are able to work through potential conflict issues from a number of different angles.

Finally, let's talk about infrastructure for a moment. Our community has, rightly so, also raised concerns regarding the Tor Project's usage of Team Cymru infrastructure. Team Cymru has donated hardware and significant amounts of bandwidth to Tor over the years, mostly for web mirrors and for internal projects like build and simulation machines.

As with all hardware that the Tor Project uses, we cannot guarantee perfect security when there is physical access, so we operate from a position of mistrust and rely on cryptographically verifiable reproducibility of our code to keep our users safe. As we would with machines hosted anywhere, the machines hosted at Team Cymru were cleanly installed using full disk encryption. This means the setup with Team Cymru was no different from that of any other provider we would use, and the level of risk for our users was the same as with other providers.

But given the discussion of conflicts above, it's not tenable to continue to accept Team Cymru's donations of infrastructure. We had already been planning to move things out since early 2022. It is not a simple or cheap task to move everything to some other location, so this process is going to take some time. We've already moved the web mirrors away, and are working on the next steps of this plan to completely move all services away from Team Cymru infrastructure. We thank the community for its patience with this process.

@ooni October 3, 2022 - 00:00 • 3 days ago
New online OONI training course launched by Advocacy Assembly
We are excited to share that a free, online OONI training course (“Measuring Internet Censorship with OONI tools”) has been launched today on Small Media’s Advocacy Assembly platform! Through this course, you will learn how to measure internet censorship through the use of OONI tools. You will also learn how to access and interpret real-time OONI data on internet censorship around the world. Today, the course is available in English, Arabic, Spanish, and Farsi. ...
@anarcat September 29, 2022 - 19:05 • 6 days ago
Detecting manual (and optimizing large) package installs in Puppet

Well this is a mouthful.

I recently worked on a neat hack called puppet-package-check. It is designed to warn about manually installed packages, to make sure "everything is in Puppet". But it turns out it can (probably?) dramatically decrease Puppet's bootstrap time when it needs to install a large number of packages.

Detecting manual packages

On a cleanly managed workstation, it looks like this:

root@emma:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
0 unmanaged packages found
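Under the hood, a check like this boils down to a set difference: packages that dpkg considers manually installed, minus packages declared in Puppet. A minimal sketch of that idea (the helper function and file names here are hypothetical, not puppet-package-check's actual code):

```shell
# Set difference between two *sorted* package lists: lines present in the
# first file (manually installed) but absent from the second (managed by
# Puppet). Hypothetical helper sketching the idea behind puppet-package-check.
unmanaged_packages() {
    comm -23 "$1" "$2"
}

# Usage sketch (file names are placeholders):
#   apt-mark showmanual | sort > manual.txt
#   # ...dump the Package resources from your Puppet catalog, sorted,
#   # into puppet.txt, then:
#   unmanaged_packages manual.txt puppet.txt
```

`comm -23` prints only lines unique to the first input, which is exactly the "installed but unmanaged" set; both inputs must be sorted for it to work.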

A messy workstation will look like this:

root@curie:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
288 unmanaged packages found
apparmor-utils beignet-opencl-icd bridge-utils clustershell cups-pk-helper davfs2 dconf-cli dconf-editor dconf-gsettings-backend ddccontrol ddrescueview debmake debootstrap decopy dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dnsdiag dropbear-initramfs ebtables efibootmgr elpa-lua-mode entr eog evince figlet file file-roller fio flac flex font-manager fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi freetype2-demos ftp fwupd-amd64-signed gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdisk gdm3 gdu gedit gedit-plugins gettext-base git-debrebase gnome-boxes gnote gnupg2 golang-any golang-docker-credential-helpers golang-golang-x-tools grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gtypist gvfs-backends hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-gtk3 ibus-libpinyin ibus-pinyin im-config imediff img2pdf imv initramfs-tools input-utils installation-birthday internetarchive ipmitool iptables iptraf-ng jackd2 jupyter jupyter-nbextension-jupyter-js-widgets jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools keyboard-configuration kfind konsole krb5-locales kwin-x11 leiningen lightdm lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 lynx lz4json magic-wormhole mailscripts mailutils manuskript mat2 mate-notification-daemon mate-themes mime-support mktorrent mp3splt mpdris2 msitools mtp-tools mtree-netbsd mupdf nautilus nautilus-sendto ncal nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache npm2deb ntfs-3g ntpdate nvme-cli nwipe obs-studio okular-extra-backends openstack-clients openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv 
playerctl plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby ruby-dev ruby-feedparser ruby-magic ruby-mocha ruby-ronn rygel-playbin rygel-tracker s-tui sanoid saytime scrcpy scrcpy-server screenfetch scrot sdate sddm seahorse shim-signed sigil smartmontools smem smplayer sng sound-juicer sound-theme-freedesktop spectre-meltdown-checker sq ssh-audit sshuttle stress-ng strongswan strongswan-swanctl syncthing system-config-printer system-config-printer-common system-config-printer-udev systemd-bootchart systemd-container tardiff task-desktop task-english task-ssh-server tasksel tellico texinfo texlive-fonts-extra texlive-lang-cyrillic texlive-lang-french texlive-lang-german texlive-lang-italian texlive-xetex tftp-hpa thunar-archive-plugin tidy tikzit tint2 tintin++ tipa tpm2-tools traceroute tree trocla ucf udisks2 unifont unrar-free upower usbguard uuid-runtime vagrant-cachier vagrant-libvirt virt-manager vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas wireshark xapian-tools xclip xdg-user-dirs-gtk xlax xmlto xsensors xserver-xorg xsltproc xxd xz-utils yubioath-desktop zathura zathura-pdf-poppler zenity zfs-dkms zfs-initramfs zfsutils-linux zip zlib1g zlib1g-dev
157 old: apparmor-utils clustershell davfs2 dconf-cli dconf-editor ddccontrol ddrescueview decopy dnsdiag ebtables efibootmgr elpa-lua-mode entr figlet file-roller fio flac flex font-manager freetype2-demos ftp gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdu gedit git-debrebase gnote golang-docker-credential-helpers golang-golang-x-tools gtypist hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-pinyin imediff input-utils internetarchive ipmitool iptraf-ng jackd2 jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools kfind konsole leiningen lightdm lynx lz4json magic-wormhole manuskript mat2 mate-notification-daemon mktorrent mp3splt msitools mtp-tools mtree-netbsd nautilus nautilus-sendto nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache ntpdate nwipe obs-studio openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby-feedparser ruby-magic ruby-mocha ruby-ronn s-tui saytime scrcpy screenfetch scrot sdate seahorse shim-signed sigil smem smplayer sng sound-juicer spectre-meltdown-checker sq ssh-audit sshuttle stress-ng system-config-printer system-config-printer-common tardiff tasksel tellico texlive-lang-cyrillic texlive-lang-french tftp-hpa tikzit tint2 tintin++ tpm2-tools traceroute tree unrar-free vagrant-cachier vagrant-libvirt vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas xdg-user-dirs-gtk xlax xmlto xsensors xxd yubioath-desktop zenity zip
131 new: beignet-opencl-icd bridge-utils cups-pk-helper dconf-gsettings-backend debmake debootstrap dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dropbear-initramfs eog evince file fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi fwupd-amd64-signed gdisk gdm3 gedit-plugins gettext-base gnome-boxes gnupg2 golang-any grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gvfs-backends ibus-gtk3 ibus-libpinyin im-config img2pdf imv initramfs-tools installation-birthday iptables jupyter jupyter-nbextension-jupyter-js-widgets keyboard-configuration krb5-locales kwin-x11 lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 mailscripts mailutils mate-themes mime-support mpdris2 mupdf ncal npm2deb ntfs-3g nvme-cli okular-extra-backends openstack-clients plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap ruby ruby-dev rygel-playbin rygel-tracker sanoid scrcpy-server sddm smartmontools sound-theme-freedesktop strongswan strongswan-swanctl syncthing system-config-printer-udev systemd-bootchart systemd-container task-desktop task-english task-ssh-server texinfo texlive-fonts-extra texlive-lang-german texlive-lang-italian texlive-xetex thunar-archive-plugin tidy tipa trocla ucf udisks2 unifont upower usbguard uuid-runtime virt-manager wireshark xapian-tools xclip xserver-xorg xsltproc xz-utils zathura zathura-pdf-poppler zfs-dkms zfs-initramfs zfsutils-linux zlib1g zlib1g-dev

Yuck! That's a lot of shit to go through.

Notice how the packages get sorted between "old" and "new" packages. This is because popcon is used to mark which packages are "old": if you have unmanaged packages, the "old" ones are likely things that you can uninstall, for example.

If you don't have popcon installed, you'll also get this warning:

popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

The warning can otherwise be safely ignored, but you won't get "help" prioritizing the packages to add to your manifests.
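If you're curious how popcon data can drive that prioritization: /var/log/popularity-contest records, per package, when its most-recently-used file was last accessed. Assuming the usual popcon log format (data lines of `<atime> <ctime> <package> <mru-file>` wrapped in POPULARITY-CONTEST marker lines — an assumption, and this helper is hypothetical, not the tool's actual code), sorting packages by staleness could look like:

```shell
# List packages least-recently used first, based on popcon's atime column.
# ASSUMED log format: "<atime> <ctime> <package> <mru-file>" data lines,
# between POPULARITY-CONTEST-0 / END-POPULARITY-CONTEST-0 marker lines.
popcon_oldest_first() {
    awk '!/^(END-)?POPULARITY-CONTEST/ { print $1, $3 }' "$1" \
        | sort -n \
        | awk '{ print $2 }'
}

# Usage: popcon_oldest_first /var/log/popularity-contest | head -20
```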

Note that the tool ignores packages that were marked (see apt-mark(8)) as automatically installed. This implies that you might have to do a little bit of cleanup the first time you run this, as Debian doesn't necessarily mark all of those packages correctly on first install. For example, here's how it looks on a clean install, after Puppet ran:

root@angela:/home/anarcat# ./bin/puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
127 unmanaged packages found
ca-certificates console-setup cryptsetup-initramfs dbus file gcc-12-base gettext-base grub-common grub-efi-amd64 i3lock initramfs-tools iw keyboard-configuration krb5-locales laptop-detect libacl1 libapparmor1 libapt-pkg6.0 libargon2-1 libattr1 libaudit-common libaudit1 libblkid1 libbpf0 libbsd0 libbz2-1.0 libc6 libcap-ng0 libcap2 libcap2-bin libcom-err2 libcrypt1 libcryptsetup12 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libedit2 libelf1 libext2fs2 libfdisk1 libffi8 libgcc-s1 libgcrypt20 libgmp10 libgnutls30 libgpg-error0 libgssapi-krb5-2 libhogweed6 libidn2-0 libip4tc2 libiw30 libjansson4 libjson-c5 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 liblocale-gettext-perl liblockfile-bin liblz4-1 liblzma5 libmd0 libmnl0 libmount1 libncurses6 libncursesw6 libnettle8 libnewt0.52 libnftables1 libnftnl11 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnss-systemd libp11-kit0 libpam-systemd libpam0g libpcre2-8-0 libpcre3 libpcsclite1 libpopt0 libprocps8 libreadline8 libselinux1 libsemanage-common libsemanage2 libsepol2 libslang2 libsmartcols1 libss2 libssl1.1 libssl3 libstdc++6 libsystemd-shared libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo6 libtirpc-common libtirpc3 libudev1 libunistring2 libuuid1 libxtables12 libxxhash0 libzstd1 linux-image-amd64 logsave lsb-base lvm2 media-types mlocate ncurses-term pass-extension-otp puppet python3-reportbug shim-signed tasksel ucf usr-is-merged util-linux-extra wpasupplicant xorg zlib1g
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

Normally, there should be no unmanaged packages here. But because of the way Debian is installed, a lot of libraries and some core packages are marked as manually installed, and they are of course not managed through Puppet. There are two solutions to this problem:

  • really manage everything in Puppet (argh)
  • mark packages as automatically installed

I typically choose the second path and mark a ton of stuff as automatic. Then either it gets auto-removed, or it stops being listed. In the above scenario, one could mark all libraries as automatically installed with:

apt-mark auto $(./bin/puppet-package-check | grep -o 'lib[^ ]*')

... but if you trust that most of that stuff is actually garbage that you don't really want installed anyways, you could just mark it all as automatically installed:

apt-mark auto $(./bin/puppet-package-check)

In my case, that ended up keeping basically all libraries (because of course they're installed for some reason) and auto-removing this:

dh-dkms discover-data dkms libdiscover2 libjsoncpp25 libssl1.1 linux-headers-amd64 mlocate pass-extension-otp pass-otp plocate x11-apps x11-session-utils xinit xorg

You'll notice xorg in there: yep, that's bad. Not what I wanted. But for some reason, on other workstations, I did not actually have xorg installed. Turns out having xserver-xorg is enough, and that one has dependencies. So now I guess I just learned to stop worrying and live without X(org).

Optimizing large package installs

But that, of course, is not all. Why make things simple when you can have an unreadable title that is trying to be both syntactically correct and click-baity enough to flatter my vain ego? Right.

One of the challenges in bootstrapping Puppet with large package lists is that it's slow. Puppet lists packages as individual resources and will basically run apt install $PKG on every package in the manifest, one at a time. While the overhead of apt is generally small, when you add things like apt-listbugs, apt-listchanges, needrestart, triggers and so on, it can take forever setting up a new host.

So for initial installs, it can actually make sense to skip the queue and just install everything in one big batch.

And because the above tool inspects the packages installed by Puppet, you can run it against a catalog and get a full list of all the packages Puppet would install, even before Puppet is running.

So when reinstalling my laptop, I basically did this:

apt install puppet-agent/experimental
puppet agent --test --noop
apt install $(./puppet-package-check --debug \
    2>&1 | grep '^puppet packages' \
    | sed 's/puppet packages://;s/ /\n/g' \
    | grep -v -e onionshare -e golint -e git-sizer -e github-backup -e hledger -e xsane -e audacity -e chirp -e elpa-flycheck -e elpa-lsp-ui -e yubikey-manager -e git-annex -e hopenpgp-tools -e puppet \
) puppet-agent/experimental

That massive grep is there because a lot of packages are currently missing from bookworm: those are all packages that I have in my catalog but that haven't made it to bookworm yet. Sad, I know. I eventually worked around that by adding bullseye sources so that the Puppet manifest could actually run.

The point here is that this improves the Puppet run time a lot. All packages get installed at once, and you get a nice progress bar. Then you actually run Puppet to deploy configurations and all the other goodies:

puppet agent --test

I wish I could tell you how much faster that ran. I don't know, and I will not go through a full reinstall just to please your curiosity. The only hard number I have is that it installed 444 packages (which exploded into 10,191 packages with dependencies) in a mere 10 minutes. That might also be because the packages were already downloaded.

In any case, I have that gut feeling it's faster, so you'll have to just trust my gut. It is, after all, much more important than you might think.

Similar work

The blueprint system does something similar to this:

It figures out what you’ve done manually, stores it locally in a Git repository, generates code that’s able to recreate your efforts, and helps you deploy those changes to production

That tool has unfortunately been abandoned for a decade at this point.

Also note that the AutoRemove::RecommendsImportant and AutoRemove::SuggestsImportant settings are relevant here. If set to true (the default), a package will not be removed if it is (respectively) a Recommends or Suggests of another package (as opposed to a normal Depends). In other words, if you want to also auto-remove packages that are only a Suggests of another package, you would, for example, add this to apt.conf:

AutoRemove::SuggestsImportant false;

Paul Wise has tried to make the Debian installer and debootstrap properly mark packages as automatically installed in the past, but his bug reports were rejected. The other suggestions in this section are also from Paul, thanks!

@blog September 29, 2022 - 00:00 • 7 days ago
New Alpha Release: Tor Browser 12.0a3 (Android, Windows, macOS, Linux)

Tor Browser 12.0a3 is now available from the Tor Browser download page and also from our distribution directory.

Tor Browser 12.0a3 updates Firefox on Android, Windows, macOS, and Linux to 102.3.0esr.

We use this opportunity to update various other components of Tor Browser as well:

  • NoScript 11.4.11

This version includes important security updates to Firefox. We also backport the following Android-specific security updates from Firefox 105:

Additionally, the HTTPS-Everywhere extension has been removed and its functionality replaced with HTTPS-Only mode on Android.

The full changelog since Tor Browser 12.0a2 is:

@anarcat September 28, 2022 - 15:12 • 8 days ago
Evaluating suspend battery use with systemd

This is a quick hack that will allow you to do some (manual) computations on power usage during suspend on your laptop, using systemd hooks.

It might be possible to use a similar hack on non-systemd systems, of course; you just need something that fires a hook on suspend and resume.

On systemd, this happens thanks to systemd-suspend.service. That service is not designed to be called directly, but it fires off a series of hooks and targets that make it possible to do things before and after suspend. There are targets you can hook other services into, but the really much easier way is to just drop a shell script in /usr/lib/systemd/system-sleep/.

The simplest way I found to dump battery usage is with the tlp program (Debian package):

apt install tlp
tlp-stat -b

This should show you something like this:

root@angela:~# tlp-stat  -b
--- TLP 1.3.1 --------------------------------------------

+++ Battery Features: Charge Thresholds and Recalibrate
natacpi    = inactive (laptop not supported)
tpacpi-bat = inactive (laptop not supported)
tp-smapi   = inactive (laptop not supported)

+++ Battery Status: BAT
/sys/class/power_supply/BAT/manufacturer                    = TPS
/sys/class/power_supply/BAT/model_name                      = S10
/sys/class/power_supply/BAT/cycle_count                     = (not supported)
/sys/class/power_supply/BAT/charge_full_design              =   6040 [mAh]
/sys/class/power_supply/BAT/charge_full                     =   6098 [mAh]
/sys/class/power_supply/BAT/charge_now                      =   6098 [mAh]
/sys/class/power_supply/BAT/current_now                     =    850 [mA]
/sys/class/power_supply/BAT/status                          = Full

Charge                                                      =  100.0 [%]
Capacity                                                    =  101.0 [%]

Then you just need to hook that into a simple shell script, say in /lib/systemd/system-sleep/tlp-stat-battery (it needs to be executable):

#!/bin/sh
# tlp - systemd suspend/resume hook
# Copyright (c) 2020 Thomas Koch <linrunner at> and others.
# This software is licensed under the GPL v2 or later.

case $1 in
    pre)  tlp-stat -b ;;
    post) tlp-stat -b ;;
esac

Then when your laptop suspends, the script will run before sleep and dump the battery stats in the systemd journal (or syslog). When it resumes, it will do the same, so you will be able to compare.

Then a simple way to compare suspend usage is to suspend the laptop for (say) 10 minutes and see how much power was used. This is the usage on my Purism Librem 13v4, for example:


sep 28 11:19:45 angela systemd-sleep[209379]: --- TLP 1.3.1 --------------------------------------------
sep 28 11:19:45 angela systemd-sleep[209379]: +++ Battery Features: Charge Thresholds and Recalibrate
sep 28 11:19:45 angela systemd-sleep[209379]: natacpi    = inactive (laptop not supported)
sep 28 11:19:45 angela systemd-sleep[209379]: tpacpi-bat = inactive (laptop not supported)
sep 28 11:19:45 angela systemd-sleep[209379]: tp-smapi   = inactive (laptop not supported)
sep 28 11:19:45 angela systemd-sleep[209379]: +++ Battery Status: BAT
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/manufacturer                    = TPS
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/model_name                      = S10
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/cycle_count                     = (not supported)
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_full_design              =   6040 [mAh]
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_full                     =   6098 [mAh]
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/current_now                     =   1024 [mA]
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/status                          = Discharging
sep 28 11:19:45 angela systemd-sleep[209655]: Charge                                                      =   99.1 [%]
sep 28 11:19:45 angela systemd-sleep[209656]: Capacity                                                    =  101.0 [%]


sep 28 11:29:47 angela systemd-sleep[209725]: --- TLP 1.3.1 --------------------------------------------
sep 28 11:29:47 angela systemd-sleep[209725]: +++ Battery Features: Charge Thresholds and Recalibrate
sep 28 11:29:47 angela systemd-sleep[209725]: natacpi    = inactive (laptop not supported)
sep 28 11:29:47 angela systemd-sleep[209725]: tpacpi-bat = inactive (laptop not supported)
sep 28 11:29:47 angela systemd-sleep[209725]: tp-smapi   = inactive (laptop not supported)
sep 28 11:29:47 angela systemd-sleep[209725]: +++ Battery Status: BAT
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/manufacturer                    = TPS
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/model_name                      = S10
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/cycle_count                     = (not supported)
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_full_design              =   6040 [mAh]
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_full                     =   6098 [mAh]
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/current_now                     =    850 [mA]
sep 28 11:29:47 angela systemd-sleep[209961]: /dev/sda:
sep 28 11:29:47 angela systemd-sleep[209961]:  setting standby to 36 (3 minutes)
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/status                          = Discharging
sep 28 11:29:47 angela systemd-sleep[210013]: Charge                                                      =   99.0 [%]
sep 28 11:29:47 angela systemd-sleep[210018]: Capacity                                                    =  101.0 [%]
sep 28 11:29:47 angela systemd[1]: systemd-suspend.service: Succeeded.

The important parts are, of course:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]

In other words, 8 mAh were used in the 10-minute (and 2-second) test I did. This works out to around 48 mAh per hour or, with this 6098 mAh battery, about 127 hours, roughly 5 days.

Obviously, an improvement would be to actually write this to a file, do the math, and log only the result. But that's more work and I'm lazy right now; exercise for the reader, I guess.
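For what it's worth, the math above is easy to script. Here's a rough sketch (not part of the hook above) that reads the first and last charge_now journal lines from stdin, with the elapsed time in seconds as an argument; the 6098 figure is this particular battery's charge_full from the tlp output:

```shell
# Compute average drain (mAh/h) and projected runtime from the first and
# last "charge_now = N [mAh]" lines on stdin; $1 is elapsed seconds.
# A sketch; 6098 is charge_full for this particular battery.
suspend_drain() {
    awk -v secs="$1" '
        /charge_now/ { v[++n] = $(NF-1) }  # numeric field right before [mAh]
        END {
            drain = (v[1] - v[n]) * 3600 / secs
            printf "%.1f mAh/h, %.1f hours runtime\n", drain, 6098 / drain
        }'
}

# Usage: journalctl -t systemd-sleep | grep charge_now | suspend_drain 602
```

With the two readings above (6045 and 6037 mAh over 602 seconds), this reproduces the ~48 mAh/h and ~127 hour figures.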

Update: someone actually wrote a full Python/SQLite implementation of this already, which does all the computations above, although it's hardcoded for the Framework laptop and will not necessarily work out of the box with other systems.

@ooni September 25, 2022 - 00:00 • 11 days ago
Iran blocks social media, app stores and encrypted DNS amid Mahsa Amini protests
Protests erupted in Iran over the last week following the death of Mahsa Amini, a 22-year-old Kurdish woman who was reportedly beaten to death by Iran’s morality police for allegedly violating strict hijab rules. Amid the ongoing protests, which have reportedly resulted in at least 31 civilian deaths, Iranian authorities cracked down on the internet in an attempt to curb dissent. Over the past week, Iran experienced severe mobile network outages, in addition to increased levels of internet censorship. ...
@anarcat September 24, 2022 - 18:39 • 11 days ago
Why nationalize the Internet in Quebec

I wrote a somewhat-too-long article on how to nationalize the Internet (in English), but I realized I never really explained why.

Quebec should nationalize the Internet because the time has come to invest in a cheaper, fairer, and decentralized public network. This ambitious project could revolutionize the architecture of our networks, enable greater innovation, and perhaps even help reduce our energy consumption.

The right time

The first reason to reconsider Quebec's network architecture is that the time is right. Across Quebec, municipalities must progressively replace obsolete sewer systems and lead-contaminated drinking water pipes.

This is a unique opportunity to install a fibre-optic network. It is not every day that a city opens up everything beneath its streets. We must take advantage of it, now.

More broadly, many of our networks are still copper-based (the old telephone and cable television systems), and we are completing a transition to fibre optics. The question is: who will own this critical resource?

Investing in the future

Connecting everyone to fibre is an investment, not an expense. Building public fibre infrastructure is a project comparable to the electrification of the 20th century. Just as back then, we can choose to invest in the private or the public sector, in an obsolete technology (copper) or in the one that will carry us into the next millennium (fibre).

Currently, we are choosing to invest in the private sector, to the benefit of antiquated companies built on copper networks, especially in the "last mile". Instead of this banditry, we could create a public network that would ultimately generate revenue, just as Hydro-Québec now does.

For fairness

Internet access is no longer merely a privilege, it is a right. You need Internet access, for example, to cross the Canadian border, and more and more services are moving online.

Currently, Internet access amounts to a private, regressive tax. This situation further disadvantages the poor, who often cannot afford to pay $50-100/month for network access.

Establishing a public network would not completely eliminate these disparities, but it would allow fairer criteria for Internet access, for example by offering sliding-scale rates based on income.


When governments handed a billion dollars to these private companies, they set a benchmark to evaluate whether the companies had done their job. The bar for "high speed" was set rather low: 50 megabits per second. Yet that level of service has been available for more than a decade, an eternity in technology.

We have therefore paid a fortune for an obsolete network.

If, instead, we invest in a scalable fibre-optic network, it will be easy to offer better services in the future. The municipality of Chattanooga, Tennessee, has just deployed a 25-gigabit service (that is, 25,000 megabits per second), 500 times faster than the standard set in Quebec. To do so, the city did not have to reopen conduits or run new wires; the fibre installed in 2010 was perfectly sufficient.

Such speeds are dizzying, and one may well wonder what they could be used for. But the rise of remote work has changed the game: video conferencing, once a science-fiction artifact, is now part of everyday life. And although, by some miracle, our copper networks manage to keep up with demand, it is not without compromises in signal quality. Ultimately, this again hampers the participation of the less well-off, who do not have access to the best, more expensive connections.

But we could also go further. With speeds beyond a gigabit, it becomes possible to host services of all kinds at home. Normally, such services are only viable in an air-conditioned data center. I know something about this, having co-founded Réseau Koumbit nearly 20 years ago now. The costs of such an operation are prohibitive, and they are why Koumbit is, to this day, one of the few independent hosting providers still operating in Quebec.

The whole network is centralized: capacity is concentrated downtown. By creating a universal fibre network, any point on the network can become a server. This would allow healthier competition on the Internet, which is currently dominated by big players such as Google, Amazon, Facebook, Apple and Microsoft.

Concretely, a public network should be federated around municipalities and MRCs (regional county municipalities), each with the autonomy to manage its own network. This would eliminate an entire class of problems similar to the catastrophic Rogers outage of July 2022, which affected customers from coast to coast.

This idea may seem strange at first, but it is in fact how many public services operate. Public transit, garbage collection, road maintenance: these responsibilities are generally municipal. The same should be true of telecommunications, provided of course that the province supplies backbones, or at least the financial resources to interconnect certain remote regions...

At the very least, the network should be designed with a shared, neutral backbone, even if that backbone ends up managed by a private company. Currently, the backbones are all managed by the same companies. Bell Canada, for example, offers Internet service, but also telephone, television, newspapers, magazines and radio, all while providing a backbone for other Internet providers.

Saving the climate

A centralized network like Amazon's or Google's can yield energy savings, because these companies have an incentive to optimize resources to reduce their costs. But their business model, often based on free services, sometimes hides the real costs of those products.

Ultimately, large data centers lead to waste. Heat dissipation, for example, is a huge problem in this model: this summer's heat wave caused cloud service outages in Great Britain. Worse still, these private companies' overuse of electricity overloads the power grid, to such a point that some London neighbourhoods can no longer build new housing to address the housing crisis.

It might be possible to design a different Internet, truly decentralized, where your little "router" could also act as a server for your contacts, photos and personal files and, why not, let you publish your own content online.

Perhaps, in this way, we would also save on ecological costs, by cutting air-conditioning expenses and reusing machines that, most of the time, do not do much at home.

This article was rejected by Le Devoir.

@anarcat September 19, 2022 - 16:41 • 17 days ago
Looking at Wayland terminal emulators

Back in 2018, I made a two part series about terminal emulators that was actually pretty painful to write. So I'm not going to retry this here, not at all. Especially since I'm not submitting this to the excellent LWN editors so I can get away with not being very good at writing. Phew.

Still, it seems my future self will thank me for collecting my thoughts on the terminal emulators I have found out about since I wrote that article. Back then, Wayland was not quite at the level it is now: it has become the default in Fedora (2016), Debian (2019), RedHat (2019), and Ubuntu (2021). Also, a bunch of folks thought they would solve everything by using OpenGL for rendering. Let's see how things stack up.


In the previous article, I touched on those projects:

  • Alacritty: releases! scrollback, better latency, URL launcher, clipboard support, still not in Debian, but close
  • GNOME Terminal: not much? couldn't find a changelog
  • Konsole: outdated changelog; color, image previews, clickable files, multi-input, SSH plugin, sixel images
  • mlterm: long changelog, but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!)
  • pterm: Wayland support
  • st: unparseable changelog; suggests scroll(1) or scrollback.patch for scrollback now
  • Terminator: moved to GitHub, Python 3 support, not being dead
  • urxvt: no significant changes, a single release, still in CVS!
  • Xfce Terminal: hard-to-parse changelog, presumably some improvements to paste safety?
  • xterm: notoriously hard-to-parse changelog; improvements to paste safety (disallowedPasteControls), fonts, clipboard improvements?

After writing those articles, bizarrely, I was still using rxvt even though it did not come out of the review as shiny as I would have liked. The color problems were especially irritating.

I briefly played around with Konsole and xterm, and eventually switched to XTerm as my default x-terminal-emulator "alternative" in my Debian system, while writing this.

I quickly noticed why I had stopped using it: clickable links are a huge limitation. I ended up adding keybindings to open URLs in a command. There's another keybinding to dump the history into a command. Neither are as satisfactory as just clicking a damn link.


Figuring out my requirements is actually a pretty hard thing to do. In my last reviews, I just tried a bunch of stuff and collected everything, but a lot of things (like tab support) I don't actually care about. So here's a set of things I actually do care about:

  • latency
  • resource usage
  • proper clipboard support, that is:
    • mouse selection and middle button uses PRIMARY
    • control-shift-c and control-shift-v for CLIPBOARD
  • true color support
  • no known security issues
  • active project
  • paste protection
  • clickable URLs
  • scrollback
  • font resize
  • non-destructive text wrapping (i.e. resizing a window doesn't drop scrollback history)
  • proper Unicode support (at least latin-1, ideally "everything")
  • good emoji support (at least showing them, ideally "nicely"), which involves font fallback

Latency is something I particularly wonder about under Wayland. Kitty seems to have been pretty diligent at doing latency tests, claiming 35ms with a hardware-based latency tester and 7ms with typometer, but it's unclear how those numbers would hold up under Wayland because, as far as I know, typometer does not support Wayland.


Those are the projects I am considering.

  • darktile - GPU rendering, Unicode support, themable, ligatures (optional), Sixel, window transparency, clickable URLs, true color support, not in Debian
  • foot - Wayland only, daemon mode, sixel images, scrollback search, true color, font resize, URLs not clickable but keyboard-driven selection, proper clipboard support, in Debian
  • havoc - minimal, scrollback, configurable keybindings, not in Debian
  • sakura - libvte (an original libvte gangster), Wayland support, tabs, no menu bar, dynamic font size, in Debian
  • termonad - Haskell? in Debian
  • wez - Rust, Wayland, multiplexer, ligatures, scrollback search, clipboard support, bracketed paste, panes, tabs, serial port support, Sixel, Kitty and iTerm graphics, built-in SSH client (!?), not in Debian
  • XTerm - status quo, no Wayland port obviously
  • zutty - OpenGL rendering, true color, clipboard support, small codebase, no Wayland support, crashes on bremner's, in Debian

Candidates not considered


I would really, really like to use Alacritty, but it's still not packaged in Debian, and they haven't fully addressed the latency issues although, to be fair, maybe it's just an impossible task. Once it's packaged in Debian, maybe I'll reconsider.


Kitty is a "fast, feature-rich, GPU based" terminal emulator, with ligatures, emojis, hyperlinks, plugins, scripting, tabs, layouts, history, file transfer over SSH, its own graphics system, and probably much more I'm forgetting. It's packaged in Debian.

So I immediately got two people commenting (on IRC) that they use Kitty and are pretty happy with it. I've been hesitant in directly talking about Kitty publicly, but since it's likely there will be a pile-up of similar comments, I'll just say why it's not the first in my list, even if it might, considering it's packaged in Debian and otherwise checks all the boxes.

I don't trust the Kitty code. Kitty was written by the same author as Calibre, which has a horrible security history and generally really messy source code. I have tried to do LTS work on Calibre, and have mostly given up on the idea of making that program secure in any way. See calibre for the details on that.

Now it's possible Kitty is different: it's quite likely the author has gotten some experience writing (and maintaining for so long!) Calibre over the years. But I would be more optimistic if the author's reaction to the security issues were more open and proactive.

I've also seen the same reaction play out on Kitty's side of things. As anyone who has worked on writing (or playing with) non-XTerm terminal emulators knows, it's quite a struggle to make something (bug-for-bug) compatible with everything out there. And Kitty is in that uncomfortable place right now where it diverges from the canon and needs its own entry in the ncurses database. I don't remember the specifics, but the author also managed to get into fights with those people as well, which I don't feel is reassuring for the project going forward.

If security and compatibility weren't such a big deal for me, I wouldn't mind so much, but I'll need a lot of convincing before I consider Kitty more seriously at this point.

Next steps

It seems like Arch Linux defaults to foot in Sway, and I keep seeing it everywhere, so it is probably my next thing to try, if/when I switch to Wayland.

One major problem with foot is that it's yet another terminfo entry. They did make it into ncurses (patch 2021-07-31) but only after Debian bullseye stable was released. So expect some weird compatibility issues when connecting to any other system that is older or the same as stable (!).
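A workaround I have seen suggested for that (not something the foot developers necessarily endorse, and `ssh_compat` is just a name I made up) is to advertise a ubiquitous terminfo entry when connecting to such older hosts, at the cost of hiding foot-specific capabilities:

```shell
# Sketch: wrap ssh so remote sessions see a terminfo entry that even
# pre-2021 ncurses databases know about, instead of TERM=foot.
ssh_compat() {
    TERM=xterm-256color command ssh "$@"
}
```

This only papers over the problem for interactive sessions, of course; the remote host still has no idea what foot actually supports.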

One question mark with all Wayland terminals, and Foot in particular, is how much latency they introduce in the rendering pipeline. The foot performance and benchmarks look excellent, but do not include latency benchmarks.

No conclusion

So I guess that's all I've got so far, I may try alacritty if it hits Debian, or foot if I switch to Wayland, but for now I'm hacking in xterm still. Happy to hear ideas in the comments.

Stay tuned for more happy days.

@ooni September 16, 2022 - 00:00 • 20 days ago
Azerbaijan and Armenia block TikTok amid border clashes
Earlier this week, on 12th September 2022, fighting erupted between Azerbaijani and Armenian troops along their border. Over the next few days, community members in Azerbaijan reported that the TikTok app was blocked locally. We analyzed OONI network measurement data to investigate the block. We found that TikTok has been blocked in both Azerbaijan and Armenia over the last few days. In this report, we share our technical findings. In both Armenia and Azerbaijan, we found TLS and DNS level interference of TikTok domains and endpoints during the border clashes. ...
@kushal September 10, 2022 - 16:21 • 26 days ago
khata, under WASI

While I am slowly learning about WebAssembly, I was also trying to figure out where I can use it; that is generally how I learn anything new. So, as a start, I thought of compiling my static blogging tool khata into WASM and then running it under WASI.

While trying to do that, the first thing I noticed was that I cannot just create a subprocess (I was doing that to call rsync internally), so that became the top thing to fix on the list. First, I tried the rusync crate, but it failed at runtime because it uses threads internally. After asking around a bit more in various Discord channels, I understood that the easiest way would be to just write that part of the code myself.

That part is now done, which means this blog post is actually rendered using wasmtime:

wasmtime --dir=. khata.wasm
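For reference, the whole build-and-run cycle looks roughly like this; a sketch that assumes a standard Cargo project layout, with the wasm32-wasi target and wasmtime installed:

```shell
# Install the WASI target (once), then build khata against it.
rustup target add wasm32-wasi
cargo build --release --target wasm32-wasi
# Run the module, granting it access to the current directory only.
wasmtime --dir=. target/wasm32-wasi/release/khata.wasm
```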

I am very slowly learning about the spec and its limitations. But this has been a very interesting and exciting learning journey so far.

@blog September 10, 2022 - 00:00 • 26 days ago
New Alpha Release: Tor Browser 12.0a2 (Android, Windows, macOS, Linux)

Tor Browser 12.0a2 is now available from the Tor Browser download page and also from our distribution directory.

This marks our first alpha on the Firefox ESR 102 series, and our first Android release based on the Firefox ESR (Extended Support Release) series.

In the past, our Tor Browser for Android releases have been based on the Firefox Rapid Release schedule. Going forward, Tor Browser for Android will be based on the latest Firefox ESR (same as our Windows, macOS and Linux releases) and we will be back-porting Android-specific security updates from the Rapid Release branches.

This updated release schedule should allow us to improve the general stability of Tor Browser for Android and be more confident in our releases going forward.

Tor Browser 12.0a2 updates Firefox on Android, Windows, macOS, and Linux to 102.2.0esr.

We took this opportunity to update various other components of Tor Browser as well:

  • Tor
  • Tor Launcher 0.2.39 (Desktop only)
  • NoScript 11.4.10
  • Go 1.18.5 (Android only)

This version includes important security updates to Firefox. We also backport the following Android-specific security updates:

Notably, we have also enabled HTTPS-Only Mode for Tor Browser for Android (this feature is already enabled for Desktop). This feature, when enabled, overrides HTTPS-Everywhere functionality. For now this can be reverted in the 'Settings > Privacy and security' pane.

The full changelog since Tor Browser 12.0a1 is:

@anarcat September 8, 2022 - 14:45 • 28 days ago
Complaint about Canada's phone cartel

I have just filed a complaint with the CRTC about my phone provider's outrageous fees. This is a copy of the complaint.

I am traveling to Europe, specifically to Ireland, for 6 days for a work meeting.

I thought I could use my phone there. So I looked at my phone provider's services in Europe, and found the "Fido roaming" services:

The fees, at the time of writing, are fifteen (15!) dollars PER DAY to get access to my regular phone service (not unlimited!!).

If I do not use that "roaming" service, the fees are:

  • 2$/min
  • 0.75$/text
  • 10$/20MB

That is absolutely outrageous. Any random phone plan in Europe will be cheaper than this, by at least one order of magnitude. Just to take any example:

Those fine folks offer a one-time, prepaid plan for €15 for 28 days which includes:

  • unlimited data
  • 1000 minutes
  • 500 text messages
  • 12GB data elsewhere in Europe

I think it's absolutely scandalous that telecommunications providers in Canada can charge so much money, especially since the most prohibitive fees (the "non-prepaid" rates) are automatically charged if I happen to forget to remove my SIM card or put my phone in "airplane mode".

As advised, I have called customer service at Fido for advice on how to handle this situation. They have confirmed those are the only plans available for travelers and could not accommodate me otherwise. I have notified them I was in the process of filing this complaint.

I believe that Canada has become the technological dunce of the world, and I blame the CRTC for its lack of regulation in that matter. You should not allow those companies to grow into such a cartel that they can do such price-fixing as they wish.

I haven't investigated Fido's competitors, but I will bet at least one of my hats that they do not offer better service.

I attach a screenshot of the Fido page showing those outrageous fees.

I have no illusions about this having any effect. I thought of filing such a complaint after the Rogers outage as well, but felt I had less standing there because I wasn't affected that much (e.g. I didn't have a life-threatening situation myself).

This, however, was ridiculous and frustrating enough to trigger this outrage. We'll see how it goes...

"We will respond to you within 10 working days."

Response from CRTC

They did respond within 10 days. Here is the full response:

Dear Antoine Beaupré:

Thank you for contacting us about your mobile telephone international roaming service plan rates concern with Fido Solutions Inc. (Fido).

In Canada, mobile telephone service is offered on a competitive basis. Therefore, the Canadian Radio-television and Telecommunications Commission (CRTC) is not involved in Fido's terms of service (including international roaming service plan rates), billing and marketing practices, quality of service issues and customer relations.

If you haven't already done so, we encourage you to escalate your concern to a manager if you believe the answer you have received from Fido's customer service is not satisfactory.

Based on the information that you have provided, this may also appear to be a Competition Bureau matter. The Competition Bureau is responsible for administering and enforcing the Competition Act, and deals with issues such as false or misleading representations, deceptive marketing practices and collusion. You can reach the Competition Bureau by calling 1-800-348-5358 (toll-free), by TTY (for deaf and hard of hearing people) by calling 1-866-694-8389 (toll-free). For more contact information, please visit

When consumers are not satisfied with the service they are offered, we encourage them to compare the products and services of other providers in their area and look for a company that can better match their needs. The following tool helps to show choices of providers in your area:

Thank you for sharing your concern with us.

In other words, complain with Fido, or change providers. Don't complain to us, we don't manage the telcos, they self-regulate.

Great job, CRTC. This is going great. This is exactly why we're one of the most expensive countries on the planet for cell phone service.

Live chat with Fido

Interestingly, the day after I received that response from the CRTC, I received this email from Fido, while traveling:

Date: Tue, 13 Sep 2022 10:10:00 -0400 From: Fido To: REDACTED Subject: Courriel d’avis d’itinérance | Fido

Roaming Welcome Confirmation


Date: September 13, 2022
Account number: [redacted]

Antoine Beaupré!

We are writing to let you know that at least one user registered on your account recently connected to a roaming network.
Below is the roaming welcome text message sent to the user(s), which contained the roaming rates.

Roaming welcome text message

Recipient: REDACTED

Date and time: 2022-09-13 / 10:10:00
Hello, this is Fido: Welcome to your destination! You are enrolled in Fido Nomade, so use your data, talk and text just like you do at home. As of March 1, 2022, the rate for this destination is $15/day (+ taxes), valid every day until 11:59 p.m. ET, no matter which time zone you are in. Have a good trip! Questions? Visit or dial +15149333436 (toll-free).

Need help?

    Discover our roaming options and stay in touch abroad without worrying about your

  • Manage your account
    Manage your Fido products and services at home or on the road with My

This email is generated automatically; please do not reply. This email (and any attachments) is confidential. If you are not the intended recipient, please delete this message and destroy all copies.


I found that message utterly confusing (and yes, I can read French). Basically, it says that some user (presumably me!) connected to the network while roaming. I had just disabled airplane mode on my phone to debug a Syncthing bug, but had not enabled roaming.

So this message seemed to say that I would be charged 15$ (per DAY!) for roaming from now on. Confused, I tried their live chat to try to clarify things, worried I would get charged even more for calling tech support on *611. This is a transcript of the chat:

F: Hi! What are we doing today?

Type in your question or choose from the options below: * Track my Equipment Order * View Bill Online * Payment Options * iPhone 14 Pre-Order

A: i received a message about roaming while abroad but i did not enable roaming on my phone, will i be charged anyways?

F: I think I know what you’re asking for.

Select the topic that best matches your request or try rephrasing your question.

A: no

F: Thank you, this will help us to improve! Would you like to chat with a specialist?

Chat with a specialist

I'll get a specialist to help you with this.

It appears that you're not signed in. Your session may have timed out.

To save time and identify your account details, please sign in to My Account.

  • Sign in
  • I'm not able to sign in

Have any questions specific to your Fido account? To service you faster, please identify yourself by completing the form below.

A: Personal info Form submitted

F: Thank you! I'll connect you with the next available specialist.

Your chat is being transferred to a Live Chat agent. Thanks for your patience.

We are here to assist you and we kindly ask that our team members be treated with respect and dignity. Please note that abuse directed towards any Consumer Care Specialist will not be tolerated and will result in the termination of your conversation with us.

All of our agents are with other customers at the moment. Your chat is in a priority sequence and someone will be with you as soon as possible. Thanks!

Thanks for continuing to hold. An agent will be with you as soon as possible.

Thank you for your continued patience. We’re getting more Live Chat requests than usual so it’s taking longer to answer. Your chat is still in a priority sequence and will be answered as soon as an agent becomes available.

Thank you so much for your patience – we're sorry for the wait. Your chat is still in a priority sequence and will be answered as soon as possible.

Hi, I'm [REDACTED] from Fido in [REDACTED]. May I have your name please?

A: hi i am antoine, nice to meet you

sorry to use the live chat, but it's not clear to me i can safely use my phone to call support, because i am in ireland and i'm worried i'll get charged for the call

F: Thank You Antoine , I see you waited to speak with me today, thank you for your patience.Apart from having to wait, how are you today?

A: i am good thank you

[... delay ...]

A: should i restate my question?

F: Yes please what is the concern you have?

A: i have received an email from fido saying i someone used my phone for roaming

it's in french (which is fine), but that's the gist of it

i am traveling to ireland for a week

i do not want to use fido's services here... i have set the phon eto airplane mode for most of my time here

F: The SMS just says what will be the charges if you used any services.

A: but today i have mistakenly turned that off and did not turn on roaming

well it's not a SMS, it's an email

F: Yes take out the sim and keep it safe.Turun off or On for roaming you cant do it as it is part of plan.

A: wat

F: if you used any service you will be charged if you not used any service you will not be charged.

A: you are saying i need to physically take the SIM out of the phone?

i guess i will have a fun conversation with your management once i return from this trip

not that i can do that now, given that, you know, i nee dto take the sim out of this phone

fun times

F: Yes that is better as most of the customer end up using some kind of service and get charged for roaming.

A: well that is completely outrageous

roaming is off on the phone

i shouldn't get charged for roaming, since roaming is off on the phone

i also don't get why i cannot be clearly told whether i will be charged or not

the message i have received says i will be charged if i use the service

and you seem to say i could accidentally do that easily

can you tell me if i have indeed used service sthat will incur an extra charge?

are incoming text messages free?

F: I understand but it is on you if you used some data SMS or voice mail you can get charged as you used some services.And we cant check anything for now you have to wait for next bill.

and incoming SMS are free rest all service comes under roaming.

That is the reason I suggested take out the sim from phone and keep it safe or always keep the phone or airplane mode.

A: okay

can you confirm whether or not i can call fido by voice for support?

i mean for free

F: So use your Fido sim and call on +1-514-925-4590 on this number it will be free from out side Canada from Fido sim.

A: that is quite counter-intuitive, but i guess i will trust you on that

thank you, i think that will be all

F: Perfect, Again, my name is [REDACTED] and it’s been my pleasure to help you today. Thank you for being a part of the Fido family and have a great day!

A: you too

So, in other words:

  1. they can't tell me if I've actually been roaming
  2. they can't tell me how much it's going to cost me
  3. I should remove the SIM card from my phone (!?) or turn on airplane mode, but the former is safer
  4. I can call Fido support, but not on the usual *611, and instead on that long-distance-looking phone number, and yes, that means turning off airplane mode and putting the SIM card in, which contradicts step 3

Also notice how the phone number from the live chat (+1-514-925-4590) is different than the one provided in the email (15149333436). So who knows what would have happened if I would have called the latter. The former is mentioned in their contact page.

I guess the next step is to call Fido over the phone and talk to a manager, which is what the CRTC told me to do in the first place...

I ended up talking with a manager (another 1h phone call) and they confirmed there is no other package available at Fido for this. At best they can provide me with a credit if I mistakenly use the roaming by accident to refund me, but that's it. The manager also confirmed that I cannot know if I have actually used any data before reading the bill, which is issued on the 15th of every month, but only available... three days later, at which point I'll be back home anyways.


@anarcat September 7, 2022 - 01:34 • 29 days ago
Deleted GitLab forks from my account

I have just deleted two forks I had of the GitLab project in my account. I did this after receiving a warning that quotas would now start to be enforced. It didn't say that I was going over quota, so I actually had to go look in the usage quotas page, which stated I was using 5.6GB of storage. So far so good, I'm not going to get billed because I'm below the 10GB threshold.

But still, I found that number puzzling. That's a lot of data! Maybe wallabako? I build images there in CI... Or the ISOs in stressant?

Nope. The biggest disk users were... my forks of gitlab-ce and gitlab-ee (now respectively called gitlab-foss and gitlab-ee, but whatever). CE was taking up roughly 1GB and EE was taking up the rest.

So I deleted both repos, which means that the next time I want to contribute a fix to their documentation — which is as far as I managed to contribute to GitLab — I will need to re-fork those humongous repositories.

Maybe I'm reading this wrong. Maybe there's a bug in the quotas system. Or, if I'm reading this right, GitLab is actually double-billing people: once for the source repository, and once for the fork. Because surely repos are not duplicating all those blobs on disk... right? RIGHT?

In either case, that's rather a bad move on their part, I feel. With GitHub charging 4$/user/month, it feels like GitLab is going to have trouble competing while charging 20$/user/month as a challenger...

(Update: as noted in the comments by Jim Sorenson, this is actually an issue with older versions of GitLab. Deleting and re-forking the repos will actually fix the issue so, in a way, I did exactly what I should have done. Another workaround is to run the housekeeping jobs on the repo, although I cannot confirm this works myself.)

But maybe it's just me: I'm not an economist, surely there's some brilliant plan here I'm missing...

In the meantime, free-ish alternatives include (currently free for public repos) and (2$/mth, but at least not open-core, unfortunately no plan for a container registry). And of course, you can painfully self-host GitLab, gitea, pagure, or whatever the current fancy git web frontend is.

@blog September 5, 2022 - 00:00 • 1 months ago
Boosting Adoption of Tor Browser Using Behavioral Science
This is a guest post by Peter Story.

This blog post summarizes research presented at The Privacy Enhancing Technologies Symposium in July 2022. The research was conducted by a team from Clark University, Carnegie Mellon University, University of Michigan, and the University of Maryland.

As part of our research, we used an experiment to test the effectiveness of different nudging interventions at increasing adoption of Tor Browser. We found that our nudge based on Protection Motivation Theory nearly doubled the odds that participants would use Tor Browser. Our results also show that users commonly encounter usability challenges when using Tor Browser, and that people use Tor Browser for a variety of benign activities. Our study contributes to a greater understanding of factors influencing the adoption of Tor Browser, and how nudges might be used to encourage the adoption of Tor Browser and similar privacy enhancing technologies.


Browsing privacy tools can help people protect their digital privacy. However, our research suggests that the tools which provide the strongest protections (e.g., Tor Browser) are less widely adopted than other tools. This may be due to usability challenges, misconceptions, behavioral biases, or mere lack of awareness. This convinced us to test ways to increase adoption of Tor Browser.

'When did you most recently use [TOOL]?' Tool usage in descending order: antivirus software, ad blockers, private browsing, VPNs, and Tor Browser

Nudging Experiment


Specifically, we tested the effectiveness of three different nudging interventions, designed to encourage adoption of Tor Browser. First, we tested an informational nudge based on protection motivation theory (PMT), designed to raise awareness of Tor Browser and to help participants form accurate perceptions of it. Next, we added an action planning implementation intention (AP), designed to help participants identify opportunities for using Tor Browser. Finally, we added a coping planning implementation intention (CP), designed to help participants overcome challenges to using Tor Browser, such as extreme website slowness. We tested these nudges in a longitudinal field experiment with 537 participants.

Surveys 1 through 5. The PMT and action planning treatments were administered in Survey 2, while the coping planning treatment was administered in Survey 3.

The quote below is an excerpt from our PMT-based nudge. We wrote this text to help participants accurately gauge their susceptibility to browsing privacy threats. Our nudge addressed well-defined threats and common misconceptions about other tools’ protections, as suggested by our prior work.

Many different organizations can gather information about your browsing activity. Here are just a few examples:

And unfortunately, most browsing tools offer only partial protection against these privacy threats. For example:

  • Private browsing only partially hides your browsing from advertisers, and does nothing to hide your location from websites or your browsing from your internet service provider or the government
  • Most VPNs do nothing to hide your browsing from advertisers, many VPNs keep logs which can be accessed by the government, and some VPNs even spy on their users
  • Ad blockers only partially hide your browsing from advertisers, and do nothing to protect against other privacy threats

Other equally important parts of the PMT-based nudge addressed the protections offered by Tor Browser, and gave instructions for using Tor Browser effectively. Our action planning nudge encouraged participants to list privacy-sensitive activities they planned to use Tor Browser for in the coming week. Our coping planning nudge explained how to overcome common challenges to using Tor Browser; for example, using the “New Circuit” button to resolve extreme website slowness. Our paper contains more details about our nudges.


We found that our PMT-based nudge increased use of Tor Browser in both the short- and long-term; participants who saw our PMT-based nudge were nearly twice as likely to report using Tor Browser as those in our control group. Our coping planning nudge also increased use of Tor Browser, but only in the week following our intervention. We did not find statistically significant evidence of our action planning nudge increasing use of Tor Browser.

The tables below summarize these findings. For odds ratios, 1.5, 2, and 3 are the conventional thresholds for small, medium, and large effect sizes, respectively. Results significant at α = 0.05 are bolded. Only participants who encountered challenges could be given the opportunity to form coping plans to overcome those challenges, which is why we tested those comparisons separately.

Short-Term Effects of Nudging Interventions

Comparison | Use of Tor Browser | Odds Ratio | p-value
Control vs PMT | Survey 3: 14.9% vs 24.2% | 1.83 | 0.026
PMT vs PMT+AP | Survey 3: 24.2% vs 29.8% | 1.33 | 0.125
PMT+AP vs PMT+AP+CP | Survey 4: 34.4% vs 40.0% | 1.27 | 0.173
PMT+AP vs PMT+AP+CP (those who encountered challenges) | Survey 4: 42.3% vs 65.9% | 2.64 | 0.027

Long-Term Effects of Nudging Interventions

Comparison | Use of Tor Browser | Odds Ratio | p-value
Control vs PMT | Survey 5: 15.4% vs 27.3% | 2.05 | 0.011
PMT vs PMT+AP | Survey 5: 27.3% vs 32.2% | 1.26 | 0.211
PMT+AP vs PMT+AP+CP | Survey 5: 32.2% vs 29.2% | 0.87 | 0.691
PMT+AP vs PMT+AP+CP (those who encountered challenges) | Survey 5: 47.6% vs 41.5% | 0.78 | 0.678
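As a rough sanity check, the odds ratios above can be reproduced from the reported usage percentages. For example, the short-term Control vs PMT comparison (plain arithmetic on two proportions, not the paper's regression model, so the result needn't match the table exactly):

```python
def odds_ratio(p_treatment, p_control):
    """Odds ratio between two proportions: odds = p / (1 - p)."""
    odds_t = p_treatment / (1 - p_treatment)
    odds_c = p_control / (1 - p_control)
    return odds_t / odds_c

# Survey 3, Control vs PMT: 14.9% vs 24.2% of participants used Tor Browser.
print(round(odds_ratio(0.242, 0.149), 2))  # → 1.82, close to the 1.83 reported
```

This also illustrates why the paper reports odds ratios rather than raw percentage differences: an odds ratio near 2 counts as a medium effect regardless of the baseline rate.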


Our results suggest that there are opportunities to increase adoption of Tor Browser using nudging techniques, particularly those based on protection motivation theory (PMT). Certainly, not everyone is interested in using Tor Browser. However, our nudging techniques show that many people are willing to give it a try, and that our PMT-based nudge can encourage a significant percentage to continue using Tor Browser in the long term. We also tested nudges based on action and coping planning implementation intentions. Although we did not find evidence of these plans further increasing long-term adoption of Tor Browser, those who were given the opportunity to form coping plans were more likely to use Tor Browser in the subsequent week.

Several things should be considered when translating our results to a real-world deployment of nudges. First, our participants knew our nudges were part of a research study. However, how people respond to information depends on which entity delivers that information. Nudges may be more or less effective depending on how people perceive the entity administering the nudges. Tor Browser itself might serve as a trusted messenger for nudges, perhaps incorporating nudges into the browser’s homepage, or displaying them in the UI when various challenges are encountered. For example, if Tor Browser can determine that a website is blocking Tor users, Tor Browser might explain this and recommend using an alternative website.

Second, we only recruited participants who we thought would be highly motivated to use Tor Browser. Specifically, we recruited participants who had prior experience with other privacy tools, and who expressed a high level of interest in preventing at least one privacy threat Tor Browser can protect against. Our intuition was that it would be easier to detect the effects of our nudges among these participants in our experiment; perhaps similar targeting should be employed when deploying nudges in the wild.

Finally, additional research is needed to fully understand the effects of our nudges. We asked people to report whether they used Tor Browser during our study, but we did not capture finer nuances of their behavior. For example, our PMT nudge reminded participants that Tor Browser’s protections are reduced if one logs in to websites. Did participants understand and follow this recommendation? Also, our action planning nudge was designed to help people identify opportunities to use Tor Browser. Although it did not significantly increase the number of participants who reported using Tor Browser in the previous week, did it increase consistency of using Tor Browser? If someone regularly performs a particular privacy-sensitive activity, their identity can be revealed if they forget to use Tor Browser even once. Similarly, our coping plan was designed to help people continue using Tor Browser in spite of challenges. Free text responses from participants suggest their coping plans were helpful to them, though we don’t know this definitively. Future work should explore whether nudges can have these kinds of positive effects.

Other Findings

Did people encounter challenges using Tor Browser?

How commonly did people encounter challenges when using Tor Browser? Did our coping plans address the most common types of challenges? We asked the study participants who reported trying to use Tor Browser whether they had encountered any challenges. The majority of these participants reported encountering some form of challenge, with extreme website slowness being the most common, and websites not working being the second most common.

Challenges encountered, in descending order: 'I did not encounter any challenges,' 'Websites were extremely slow,' 'Websites did not work,' 'Other'

Our findings suggest that people are likely to encounter challenges when using Tor Browser, so it is important to address these challenges using coping planning or other approaches. Our coping plan templates addressed the two most commonly encountered types of challenges, supporting the validity of our experiment. Please refer to our paper for details about the “Other” challenges.

What activities do people use Tor Browser for?

We encouraged participants in our action planning treatment group to make plans for using Tor Browser. In their plans, we invited participants to list privacy-sensitive activities they might perform using Tor Browser. If they didn’t want to disclose a certain activity, we told them to write “prefer not to disclose.” In total, participants wrote 598 activities, which we coded to identify common themes.

In the table below, we summarize the ten most common types of activities reported by participants. The “Described” column shows the number of times participants described each type of activity. The “Performed” column shows the number of activities participants reported actually performing, while the “Performed Using Tor Browser” column shows the number of activities participants reported using Tor Browser to perform.

Activities

Code | Description | Described | Performed | Performed Using Tor Browser
PREFER NOT TO DISCLOSE | Either the literal text “prefer not to disclose,” or something close to it. | 192 | 100 | 33
SHOPPING | Looking up information about consumer products, regardless of intention to purchase. | 55 | 39 | 13
FINANCIAL | Looking up information about financial products (e.g., stocks, bitcoin), mortgages, banking, insurance, salaries, applying to jobs, etc. | 49 | 35 | 8
VAGUE | A vaguely defined activity, such as “using a search engine” or “researching things.” | 49 | 41 | 25
NEWS | Looking up information about politics, celebrities, current events, document leaks, etc. | 40 | 29 | 17
NSFW | Pornography or other “Not Safe For Work” content. | 36 | 29 | 15
MEDICAL | Accessing medical information. Includes personal care and cannabis. | 33 | 21 | 11
VIDEOS | Watching videos, movies, or streaming. | 26 | 18 | 10
YOUTUBE | Using YouTube. | 25 | 22 | 12
SNOOPING | Looking up information about non-celebrities (e.g., ex’s, friends, background checks) or similar entities (e.g., employers, competitors). | 20 | 11 | 4
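As an illustration of how to read one row of the table, take the VAGUE activities (plain arithmetic on the numbers above, not an analysis from the paper):

```python
# VAGUE row: 49 activities described, 41 performed,
# 25 performed using Tor Browser.
described, performed, with_tor = 49, 41, 25
print(f"{performed / described:.0%} performed")        # → 84% performed
print(f"{with_tor / described:.0%} used Tor Browser")  # → 51% used Tor Browser
```

In other words, participants followed through on most of their planned activities, but only used Tor Browser for a fraction of them.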

Our participants shared that they used Tor Browser for many innocuous activities, including shopping, reading the news, and researching medical topics. This stands in contrast to the illicit nature some ascribe to Tor Browser. Our findings suggest that websites may benefit from supporting Tor traffic to cater to privacy-sensitive visitors. For example, the New York Times is available as an Onion Service.

@kushal September 2, 2022 - 06:39 • 1 months ago
Johnnycanencrypt 0.9.0 release

3 days ago I released Johnnycanencrypt 0.9.0. Here is the changelog:

- Adds `setuptools-rust` as build system.
- Key.uids now contains the certification details of each user id.
- `merge_keys` in rjce now takes a force boolean argument.
- `certify_key` can sign/certify another key by both card and on disk primary key.

The first biggest change is related to build system, now we are using setuptools-rust to build. This change happened as dkg is working towards packaging the module for Debian.

The other big change is about certifying someone's key. We can use the primary key (either on disk or on Yubikey) to do the signing.

k = ks.certify_key(
    my_key,
    k,
    ["Kushal Das <>", "Kushal Das <>"],
    SignatureType.PositiveCertification,
    password=password,
)

In the above example I am signing two user ids of the key k using my_key with a PositiveCertification.

@blog September 2, 2022 - 00:00 • 1 months ago
Arti 1.0.0 is released: Our Rust Tor implementation is ready for production use.

Back in 2020, we started work on a new implementation of the Tor protocols in the Rust programming language. Now we believe it's ready for wider use.

In this blog post, we'll tell you more about the history of the Arti project, where it is now, and where it will go next.

Background: Why Arti? And How?

Why rewrite Tor in Rust? Because despite (or because of) its maturity, the C Tor implementation is showing its age. While C was a reasonable choice back when we started working on Tor in 2001, we've always suffered from its limitations: it encourages a needlessly low-level approach to many programming problems, and using it safely requires painstaking care and effort. Because of these limitations, the pace of development in C has always been slower than we would have liked.

What's more, our existing C implementation has grown over the years to have a not-so-modular design: nearly everything is connected to everything else, which makes it even more difficult to analyze the code and make safe improvements.

A move to Rust seemed like a good answer. Started at Mozilla in 2010, and now maintained by the Rust Foundation, Rust has grown over the years into an independently maintained programming language with great ergonomics and performance, and strong safety properties. In 2017, we started experimenting with adding Rust inside the C Tor codebase, with a view to replacing the code bit by bit.

One thing that we found, however, was that our existing C code was not modular enough to be easily rewritten. (Rust's security guarantees depend on Rust code interacting with other Rust code, so to get any benefit, you need to rewrite a module at a time rather than just one function at a time.) The parts of the code that were isolated enough to replace were mostly trivial, and seemed not worth the effort, whereas the parts that most needed replacement were too intertwined with each other to practically disentangle. We tried to disentangle our modules, but it proved impractical to do so without destabilizing the codebase.

So in 2020, we started on a Rust Tor implementation that eventually became Arti. At first, it was a personal project to improve my Rust skills, but by the end of the summer, it could connect to the Tor network, and by September it sent its first anonymized traffic. After some discussion, we decided to adopt Arti as an official part of the Tor Project, and see how far we could take it.

Thanks to generous support from Zcash Community Grants starting in 2021, we were able to hire more developers and speed up the pace of development enormously. By October, we had our first "no major privacy holes" release (0.0.1), and we started putting out monthly releases. In March of this year, we had enough of a public API to be confident in recommending Arti for experimental embedding, and so we released version 0.1.0.

And now, with our latest release, we've reached our 1.0.0 milestone. Let's talk more about what that means.

Arti 1.0.0: Ready for production use

When we defined our set of milestones, we defined Arti 1.0.0 as "ready for production use": You should be able to use it in the real world, to get a similar degree of privacy, usability, and stability to what you would get with the C Tor client. The APIs should be (more or less) stable for embedders.

We believe we have achieved this. You can now use arti proxy to connect to the Tor network to anonymize your network connections.

Note that we don't recommend pointing a conventional web browser at arti (or, indeed, C Tor): web browsers leak much private and identifying information. To browse the web anonymously, use Tor Browser; we have instructions for using it with Arti.

Recent work

To achieve this, we've made many improvements to Arti. (Items marked as NEW are new or substantially improved since last month's 0.6.0 release.)

For a complete list of changes, including a list of just the changes since 0.6.0, see our CHANGELOG.

So, how's Rust been?

Our experience with Rust has been a definite success.

At every stage, we've encountered way fewer bugs than during comparable C development. The bugs that we have encountered have almost all been semantic/algorithmic mistakes (real programming issues), not mistakes in using the Rust language and its facilities. Rust has a reputation for being a difficult language with a picky compiler - but the pickiness of the compiler has been a great boon. Generally speaking, if our Rust code compiles and passes its tests, it is much likelier to be correct than our C code under the same conditions.

Development of comparable features has gone way faster, even considering that we're building most things for the second time. Some of the speed improvement is due to Rust's more expressive semantics and more usable library ecosystem—but a great deal is due to the confidence Rust's safety brings.

Portability has been far easier than with C, though sometimes we're forced to deal with differences between operating systems. (For example, when we've had to get into the fine details of filesystem permissions, we've found that most everything we do takes different handling on Windows.)

One still-uncracked challenge is binary size. Unlike C's standard library, Rust's standard library doesn't come installed by default on our target systems, and so it adds to the size of our downloads. Rust's approach to high-level programming and generic code can make fast code, but also large executables. We've been able to offset this somewhat with the Rust ecosystem's improved support for working with platform-native TLS implementations, but there's more work to do here.

Embedding has been practical so far. We have preliminary work embedding Arti in both Java and Python.

We've found that Arti has attracted volunteer contributions in greater volume and with less friction than C Tor. New contributors are greatly assisted by Rust's strong type system, excellent API documentation support, and safety properties. These features help them find where to make a change, and also enable making changes to unfamiliar code with much greater confidence.

What's coming next?

Our primary focus in Arti 1.1.0 will be to implement Tor's anticensorship features, including support for bridges and pluggable transports. We've identified our primary architectural challenges there, and are working through them now.

In addition, we intend to further solidify our compliance with semantic versioning in our high-level arti-client crate. We are confident that our intentionally exposed APIs there are stable, but before we can promise long-term stability we need to make sure that we have a way to detect and prevent changes to the lower-level APIs that arti-client re-exports. The cargo-public-api and cargo-semver-checks crates both seem promising, but we may need additional thinking.

(This semantic versioning difficulty is the primary reason why arti-client is still at 0.6.0 instead of 1.0.0. When we declare 1.0.0 for arti-client, we want to be sure that we can keep backward compatibility for as long as possible.)

We expect that Arti 1.1.0 will be complete around the end of October. We had originally estimated one month of the team's time for this work, but since we'll all be off for a week for a meeting, and then a few of us have vacations, it seems that we'll need to allocate two months in order to find a month of hacking time. (Such is life!)

And then?

After Arti 1.1.0, we're going to focus on onion services in Arti 1.2.0. They're a complex and important part of the Tor protocols, and will take a significant amount of effort to build. Making onion services work securely and efficiently will require a number of related protocol features, including support for congestion control, DOS protection, vanguards, and circuit padding machines.

After that, Arti 2.0.0 will focus on feature parity with the C tor client implementation, and support for embedding Arti in different languages. (Preliminary embedding work is promising: we have the beginnings of a VPN tool for mobile, embedding Arti in Java.) When we're done, we intend that Arti will be a suitable replacement for C tor as a client implementation in all (or nearly all) use contexts.

We've applied to the Zcash Community Grants for funding to support these next two phases, and we're waiting hopefully to see what they say.

And after that?

We intend that, in the long run, Arti will replace our C tor implementation completely, not only for clients, but also for relays and directory authorities. This will take several more years of work, but we're confident that it's the right direction forward.

(We won't stop support for the C implementation right away; we expect that it will take some time for people to migrate.)

How can you try Arti now?

We rely on users and volunteers to find problems in our software and suggest directions for its improvement. You can test Arti as a SOCKS proxy (if you're willing to compile from source) and as an embeddable library (if you don't mind a little API instability).

Assuming you've installed Arti (with cargo install arti, or directly from a cloned repository), you can use it to start a simple SOCKS proxy for making connections via Tor with:

$ arti proxy -p 9150

and use it more or less as you would use the C Tor implementation!
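Any SOCKS5-capable client can be pointed at that proxy. For example, a sketch in Python (this assumes the third-party requests package installed with SOCKS support, i.e. pip install requests[socks]; it is an illustration, not an official Arti interface):

```python
# Route a Python HTTP client through the local Arti SOCKS proxy.
# 127.0.0.1:9150 matches the -p 9150 flag used above.
proxies = {
    "http": "socks5h://127.0.0.1:9150",
    "https": "socks5h://127.0.0.1:9150",
}
# The "socks5h" scheme makes the proxy resolve hostnames, so DNS
# queries also go through Tor instead of leaking to the local resolver.

# With arti running, an anonymized request would look like:
# import requests
# r = requests.get("https://check.torproject.org/", proxies=proxies)
```

Remember that this anonymizes the connection, not the application: a full browser leaks identifying information, which is why Tor Browser is still recommended for web browsing.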

(It doesn't support onion services yet. If compilation doesn't work, make sure you have development files for libsqlite installed on your platform.)

If you want to build a program with Arti, you probably want to start with the arti-client crate. Be sure to check out the examples too.

For more information, check out the README file. (For now, it assumes that you're comfortable building Rust programs from the command line). Our CONTRIBUTING file has more information on installing development tools, and on using Arti inside of Tor Browser. (If you want to try that, please be aware that Arti doesn't support onion services yet.)

When you find bugs, please report them on our bugtracker. You can request an account or report a bug anonymously.

And if this documentation doesn't make sense, please ask questions! The questions you ask today might help improve the documentation tomorrow.

Whether you're a user or a developer, please give Arti a try, and let us know what you think. The sooner we learn what you need, the better our chances of getting it into an early milestone.


Thanks to everybody who has helped take us here from Arti 0.1.0, including: 0x4ndy, Alexander Færøy, Alex Xu, Arturo Marquez, Christian Grigis, Dimitris Apostolou, Emptycup, FAMASoon, feelingnothing, Jim Newsome, Lennart Kloock, Michael, Michael Mccune, Neel Chauhan, Orhun Parmaksız, Richard Pospesel, Samanta Navarro, solanav, spongechameleon, Steven Murdoch, Trinity Pointard, and Yuan Lyu!

And, of course, thanks to Zcash Community Grants for their support of this critical work! The Zcash Community Grants program (formerly known as ZOMG) funds independent teams entering the Zcash ecosystem to perform major ongoing development (or other work) for the public good of the Zcash ecosystem. Zcash is a privacy-focused cryptocurrency, which pioneered the use of zk-SNARKs. The Zcash ecosystem is driven to further individual privacy and freedom.

@blog August 30, 2022 - 00:00 • 1 months ago
New Release: Tor Browser 11.5.3 (Android)
@blog August 29, 2022 - 00:00 • 1 months ago
New Release: Tor Browser 11.5.2 (Android,Windows, macOS, Linux)

Tor Browser 11.5.2 is now available from the Tor Browser download page and also from our distribution directory.

Tor Browser 11.5.2 updates Firefox on Windows, macOS, and Linux to 91.13.0esr.

This version includes important security updates to Firefox:

We use the opportunity as well to update various other components of Tor Browser:

  • Tor
  • NoScript 11.4.9

The full changelog since Tor Browser 11.5.1 is:

  • All Platforms
    • Update Tor to
    • Update NoScript to 11.4.9
  • Windows + macOS + Linux
  • Build System
    • Windows + macOS + Linux
      • Update Go to 1.17.3
    • Android
      • Update Go to 1.18.5
@anarcat August 26, 2022 - 16:56 • 1 months ago
How to nationalize the internet in Canada

Rogers had a catastrophic failure in July 2022. It affected emergency services (as in: people couldn't call 911, but also some 911 services themselves failed), hospitals (which couldn't access prescriptions), banks and payment systems (as payment terminals stopped working), and regular users as well. The outage lasted almost a full day, and Rogers took days to give any technical explanation on the outage, and even when they did, details were sparse. So far the only detailed account is from outside actors like Cloudflare which seem to point at an internal BGP failure.

Its impact on the economy has yet to be measured, but it probably cost millions of dollars in wasted time and possibly led to life-threatening situations. Apart from holding Rogers (criminally?) responsible for this, what should be done in the future to avoid such problems?

It's not the first time something like this has happened: it happened to Bell Canada as well. The Rogers outage is also strangely similar to the Facebook outage last year, but, to its credit, Facebook did post a fairly detailed explanation only a day later.

The internet is designed to be decentralised, and having large companies like Rogers hold so much power is a crucial mistake that should be reverted. The question is how. Some critics were quick to point out that we need more ISP diversity and competition, but I think that's missing the point. Others have suggested that the internet should be a public good or even straight out nationalized.

I believe the solution to the problem of large, private, centralised telcos and ISPs is to replace them with smaller, public, decentralised service providers. The only way to ensure that works is to make sure that public money ends up creating infrastructure controlled by the public, which means treating ISPs as a public utility. This has been implemented elsewhere: it works, it's cheaper, and provides better service.

A modest proposal

Global wireless services (like phone services) and home internet inevitably grow into monopolies. They are public utilities, just like water, power, railways, and roads. The question of how they should be managed is therefore inherently political, yet people don't seem to question the idea that only the market (i.e. "competition") can solve this problem. I disagree.

10 years ago (in French), I suggested we, in Québec, should nationalize large telcos and internet service providers. I no longer believe this is a realistic approach: most of those companies have crap copper-based networks (at least for the last mile), yet are worth billions of dollars. It would be prohibitive, and a waste, to buy them out.

Back then, I called this idea "Réseau-Québec", a reference to the already nationalized power company, Hydro-Québec. (This idea, incidentally, made it into the plan of a political party.)

Now, I think we should instead build our own, public internet. Start setting up municipal internet services, fiber to the home in all cities, progressively. Then interconnect cities with fiber, and build peering agreements with other providers. This also includes a bid on wireless spectrum to start competing with phone providers as well.

And while that sounds really ambitious, I think it's possible to take this one step at a time.

Municipal broadband

In many parts of the world, municipal broadband is an elegant solution to the problem, with solutions ranging from Stockholm's city-owned fiber network (dark fiber, layer 1) to Utah's UTOPIA network (fiber to the premises, layer 2) and municipal wireless networks like the one connecting about 40,000 nodes in Catalonia.

A good first step would be for cities to start providing broadband services to their residents, directly. Cities normally own sewage and water systems that interconnect most residences and therefore have direct physical access everywhere. In Montréal, in particular, there is an ongoing project to replace a lot of old lead-based plumbing, which would be an opportunity to lay down a fiber network across the city.

This is a wild guess, but I suspect this would be much less expensive than one would think. Some people agree with me and quote this as low as 1000$ per household. There are about 800,000 households in the city of Montréal, so we're talking about an 800 million dollar investment here, to connect every household in Montréal with fiber, and incidentally a quarter of the province's population. And this is not an up-front cost: this can be built progressively, with expenses amortized over many years.
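The arithmetic is easy to check (the 1000$ per household figure is the quoted low-end guess; the 20-year build-out horizon is an assumption for illustration):

```python
households = 800_000           # households in the city of Montréal
cost_per_household = 1_000     # CAD, the low-end estimate quoted above
total = households * cost_per_household

print(f"${total / 1e6:.0f}M total")          # → $800M total
# Spread over an assumed 20-year progressive build-out:
print(f"${total / 20 / 1e6:.0f}M per year")  # → $40M per year
```

At that yearly pace, the outlay is small compared to the telco profits cited below, which is the core of the revenue argument.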

(We should not, however, connect Montréal first: it's used as an example here because it's a large number of households to connect.)

Such a network should be built with a redundant topology. I leave it as an open question whether we should adopt Stockholm's more minimalist approach or provide direct IP connectivity. I would tend to favor the latter, because then you can immediately start to offer the service to households and generate revenues to compensate for the capital expenditures.

Given the ridiculous profit margins telcos currently have — 8 billion $CAD net income for BCE (2019), 2 billion $CAD for Rogers (2020) — I also believe this would actually turn into a profitable revenue stream for the city, the same way Hydro-Québec is more and more considered as a revenue stream for the state. (I personally believe that's actually wrong and we should treat those resources as human rights and not cash cows, but I digress. The point is: this is not a cost point, it's a revenue.)

The other major challenge here is that the city will need competent engineers to drive this project forward. But this is not different from the way other public utilities run: we have electrical engineers at Hydro, sewer and water engineers at the city, this is just another profession. If anything, the computing science sector might be more at fault than the city here in its failure to provide competent and accountable engineers to society...

Right now, most of the network in Canada is copper: we are hitting the limits of that technology with DSL, and while cable has some life left to it (DOCSIS 4.0 does 4Gbps), that is nowhere near the capacity of fiber. Take the town of Chattanooga, Tennessee: in 2010, the city-owned ISP EPB finished deploying a fiber network to the entire town and provided gigabit internet to everyone. Now, 12 years later, they are using this same network to provide the mind-boggling speed of 25 gigabit to the home. To give you an idea, Chattanooga is roughly the size and density of Sherbrooke.

Provincial public internet

As part of building a municipal network, the question of getting access to "the internet" will immediately come up. Naturally, this will first be solved by using already existing commercial providers to hook up residents to the rest of the global network.

But eventually, networks should inter-connect: Montréal should connect with Laval, and then Trois-Rivières, then Québec City. This will require long haul fiber runs, but those links are not actually that expensive, and many of those already exist as a public resource at RISQ and CANARIE, which cross-connects universities and colleges across the province and the country. Those networks might not have the capacity to cover the needs of the entire province right now, but that is a router upgrade away, thanks to the amazing capacity of fiber.

There are two crucial mistakes to avoid at this point. First, the network needs to remain decentralised. Long haul links should be IP links with BGP sessions, and each city (or MRC) should have its own independent network, to avoid Rogers-class catastrophic failures.
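To make that concrete, a long-haul inter-city link of this kind is just a standard BGP peering between two independent networks. Here is a minimal sketch of what one side of a hypothetical Montréal-to-Laval session could look like in BIRD (a common open-source routing daemon); the AS numbers and addresses are purely illustrative, taken from the private-use and documentation ranges, not a real deployment:

```
# One side of a hypothetical BGP session between the Montréal
# municipal network (AS 64512) and Laval's (AS 64513), BIRD 2 syntax.
protocol bgp laval {
    local 192.0.2.1 as 64512;
    neighbor 192.0.2.2 as 64513;
    ipv4 {
        # accept the routes Laval announces...
        import all;
        # ...but only announce our own, statically configured prefixes
        export where source = RTS_STATIC;
    };
}
```

The point of this design is that each city keeps full control over its own routing policy, so a misconfiguration in one network cannot take the whole province down.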

Second, skill needs to remain in-house: RISQ has already made that mistake, to a certain extent, by selling its neutral datacenter. Tellingly, MetroOptic, probably the largest commercial dark fiber provider in the province, now operates the QIX, the second largest "public" internet exchange in Canada.

Still, we have a lot of infrastructure we can leverage here. If RISQ or CANARIE cannot be up to the task, Hydro-Québec has power lines running into every house in the province, with high voltage power lines running hundreds of kilometers to the far north. The logistics of long distance maintenance are already solved by that institution.

In fact, Hydro already has fiber all over the province, but it is a private network, separate from the internet for security reasons (and that should probably remain so). But this only shows they already have the expertise to lay down fiber: they would just need to lay down a parallel network to the existing one.

In that architecture, Hydro would be a "dark fiber" provider.

International public internet

None of the above solves the problem for the entire population of Québec, which is notoriously dispersed, with an area three times the size of France, but with only an eighth of its population (8 million vs 67). More specifically, Canada was originally a French colony, a land violently stolen from native people who have lived here for thousands of years. Some of those people now live in reservations, sometimes far from urban centers (but definitely not always). So the idea of leveraging the Hydro-Québec infrastructure doesn't always work to solve this, because while Hydro will happily flood a traditional hunting territory for an electric dam, they don't bother running power lines to the village they forcibly moved, powering it instead with noisy and polluting diesel generators. So before giving me fiber to the home, we should give power (and potable water, for that matter), to those communities first.

So we need to discuss international connectivity. (How else should we consider those communities than as peer nations anyway?) Québec has virtually zero international links. Even in Montréal, which likes to style itself a major player in gaming, AI, and technology, most peering goes through either Toronto or New York.

That's a problem that we must fix, regardless of the other problems stated here. Looking at the submarine cable map, we see very few international links actually landing in Canada. There is Greenland Connect, which connects Newfoundland to Iceland through Greenland. There's EXA, which lands in Ireland, the UK, and the US, and Google has the Topaz link on the west coast. That's about it, and none of those land anywhere near any major urban center in Québec.

We should have a cable running from France up to Saint-Félicien. There should be a cable from Vancouver to China. Heck, there should be a fiber cable running all the way from the end of the Great Lakes through Québec, then up around the northern passage and back down to British Columbia. Those cables are expensive, and the idea might sound ludicrous, but Russia is actually planning such a project for 2026. The US has cables running all the way up (and around!) Alaska, neatly bypassing all of Canada in the process. We just look ridiculous on that map.

(Addendum: I somehow forgot to talk about Teleglobe here. It was founded as a publicly owned company in 1950, growing international phone and (later) data links all over the world. It was privatized by the conservatives in 1984, along with rail and other "crown corporations". So that's one major risk to any effort to make public utilities work properly: some government might get elected and promptly sell them off to its friends for peanuts.)

Wireless networks

I know most people will have rolled their eyes so far back their heads have exploded. But I'm not done yet. I want wireless too. And by wireless, I don't mean a bunch of geeks setting up OpenWRT routers on rooftops. I tried that, and while it was fun and educational, it didn't scale.

A public networking utility wouldn't be complete without providing cellular phone service. This involves bidding for frequencies at the federal level, and deploying a rather large amount of infrastructure, but it could be a later phase, when the engineers and politicians have proven their worth.

At least part of the Rogers fiasco would have been averted if such a decentralized network backend existed. One might even want to argue that a separate institution should be set up to provide phone services, independently from the regular wired networking, if only for reliability.

Because remember here: the problem we're trying to solve is not just technical, it's about political boundaries, centralisation, and automation. If everything is run by this one organisation again, we will have failed.

However, I must admit that phone service is where my ideas fall a little short. I can't help but think it's an accessible goal too — maybe starting with a virtual operator — but it seems slightly less so than the others, especially considering how closed the phone ecosystem is.

Counter points

In debating these ideas while writing this article, the following objections came up.

I don't want the state to control my internet

One legitimate concern I have about the idea of the state running the internet is the potential it would have to censor or control the content running over the wires.

But I don't think there is necessarily a direct relationship between resource ownership and control of content. Sure, China has strong censorship in place, partly implemented through state-controlled businesses. But Russia also has strong censorship in place, based on regulatory tools: they force private service providers to install back-doors in their networks to control content and surveil their users.

Besides, the USA has been doing warrantless wiretapping since at least 2003 (and yes, that's 10 years before the Snowden revelations), so a commercial internet is no assurance that we have a free internet. Quite the contrary, in fact: if anything, the commercial internet goes hand in hand with the neo-colonial internet, just like businesses did in the "good old colonial days".

Large media companies are the primary censors of content here. In Canada, the media cartel requested the first site-blocking order in 2018. The plaintiffs (including Québecor, Rogers, and Bell Canada) are both content providers and internet service providers, an obvious conflict of interest.

Nevertheless, there are some strong arguments against having a centralised, state-owned monopoly on internet service providers. FDN makes a good point on this. But this is not what I am suggesting: at the provincial level, the network would be purely physical, and regional entities (which could include private companies) would peer over that physical network, ensuring decentralization. Delegating the management of that infrastructure to an independent non-profit or cooperative (but owned by the state) would also ensure some level of independence.

Isn't the government incompetent and corrupt?

Also known as "private enterprise is better skilled at handling this; the state can't do anything right".

I don't think this is a "fait accompli". If anything, I have found publicly run utilities to be spectacularly reliable here. I rarely have trouble with sewage, water, or power, and keep in mind I live in a city where we receive about 2 meters of snow a year, which tends to create lots of trouble with power lines. Unless there's a major weather event, power just runs here.

I think the same can happen with an internet service provider. But it would certainly need to hold itself to higher standards than what we're used to, because frankly, the internet is kind of janky.

A single monopoly will be less reliable

I actually agree with that, but that is not what I am proposing anyways. Current commercial or non-profit entities will be free to offer their services on top of the public network.

And besides, the current "ha! diversity is great" approach is exactly what we have now, and it's not working. The pretense that we can have competition over a single network is what led the US into the ridiculous situation where they also pretend to have competition over the power utility market. This led to massive forest fires in California and major power outages in Texas. It doesn't work.

Wouldn't this create an isolated network?

One theory is that this new network would be so hostile to incumbent telcos and ISPs that they would simply refuse to network with the public utility. And while it is true that the telcos currently do also act as a kind of "tier one" provider in some places, I strongly feel this is also a problem that needs to be solved, regardless of ownership of networking infrastructure.

Right now, telcos often hold both ends of the stick: they are the gateway to users, the "last mile", but they also provide peering to the larger internet in some locations. In at least one datacenter in downtown Montréal, I've seen traffic go through Bell Canada that was not directly targeted at Bell customers. So in effect, they are in a position of charging twice for the same traffic, and that's not only ridiculous, it should just be plain illegal.

And besides, this is not a big problem: there are other providers out there. As bad as the market is in Québec, there is still some diversity in Tier one providers that could allow for some exits to the wider network (e.g. yes, Cogent is here too).

What about Google and Facebook?

Nationalization of other service providers like Google and Facebook is out of scope of this discussion.

That said, I am not sure the state should get into the business of organising the web or providing content services. But I will point out it already does some of that through its own websites. It should probably keep itself to that, and also consider providing normal services for people who don't or can't access the internet.

(And I would also be ready to argue that Google and Facebook already act as extensions of the state: certainly if Facebook didn't exist, the CIA or the NSA would like to create it at this point. And Google has lucrative business with the US department of defense.)

What does not work

So we've seen one thing that could work. Maybe it's too expensive. Maybe the political will isn't there. Maybe it will fail. We don't know yet.

But we know what does not work, and it's what we've been doing ever since the internet has gone commercial.

In 1984 (of all years), the US Department of Justice finally broke up AT&T into half a dozen corporations, after a 10-year legal battle. Yet decades later, we're back to only three large providers doing essentially what AT&T was doing back then, and those are regional monopolies: AT&T, Verizon, and Lumen (not counting T-Mobile, which is from a different breed). So the legal approach really didn't work that well, especially considering the political landscape changed in the US, and the FTC seems perfectly happy to let those major mergers continue.

In Canada, we never even pretended we would solve this problem at all: Bell Canada (the literal "father" of AT&T) is in the same situation now. We have either a regional monopoly (e.g. Videotron for cable in Québec) or an oligopoly (Bell, Rogers, and Telus controlling more than 90% of the market). Telus does have one competitor in the west of Canada, Shaw, but Rogers has been trying to buy it out. The competition bureau seems to have blocked the merger for now, but it didn't stop other recent mergers like Bell's acquisition of one of its main competitors in Québec, eBox.

Regulation doesn't seem capable of ensuring those profitable corporations provide us with decent pricing, which makes Canada one of the most expensive countries (research) for mobile data on the planet. The recent failure of the CRTC to properly protect smaller providers has even led to price hikes. Meanwhile, the oligopoly is agreeing on its own price hikes, therefore becoming a real cartel, complete with price fixing and reductions in output.

There are actually regulations in Canada that were supposed to keep the worst of the Rogers outage from happening at all. According to CBC:

Under Canadian Radio-television and Telecommunications Commission (CRTC) rules in place since 2017, telecom networks are supposed to ensure that cellphones are able to contact 911 even if they do not have service.

I can personally confirm that my phone couldn't reach 911 services, because all calls would fail: the problem was that the towers were still up, so your phone wouldn't fall back to alternative service providers (which could have resolved the issue). I can only speculate as to why Rogers didn't take its cell phone towers out of the network to let phones work properly for 911 service, but it seems like a dangerous game to play.

Hilariously, the CRTC itself didn't have a reliable phone service due to the service outage:

Please note that our phone lines are affected by the Rogers network outage. Our website is still available:

I wonder if they will file a complaint against Rogers themselves about this. I probably should.

It seems the federal government thinks more of the same medicine will fix the problem and has told companies they should "help" each other in an emergency. I doubt this will fix anything, and it could actually make things worse if the competitors interoperate more, as it could cause multi-provider, cascading failures.


The absurd price we pay for data does not actually mean everyone gets high speed internet at home. Large swathes of the Québec countryside don't get broadband at all, and it can be difficult or expensive, even in large urban centers like Montréal, to get high speed internet.

That is despite having a series of subsidies that all avoided investing in our own infrastructure. We had the "fonds de l'autoroute de l'information", "information highway fund" (site dead since 2003, link) and "branchez les familles", "connecting families" (site dead since 2003, link) which subsidized the development of a copper network. In 2014, more of the same: the federal government poured hundreds of millions of dollars into a program called connecting Canadians to connect 280 000 households to "high speed internet". And now, the federal and provincial governments are proudly announcing that "everyone is now connected to high speed internet", after pouring more than 1.1 billion dollars to connect, guess what, another 380 000 homes, right in time for the provincial election.

Of course, technically, the deadline won't actually be met until 2023. Québec is a big area to cover, and you can guess what happens next: the telcos threw up their hands and said some areas just can't be connected. (Or they connect their CEO but not the poor folks across the lake.) The story then takes the predictable twist of giving more money out to billionaires, this time subsidizing Musk's Starlink system to connect those remote areas.

To give a concrete example: a friend who lives about 1000km away from Montréal, 4km from a small village of 2,500 inhabitants, recently got symmetric 100 mbps fiber at home from Telus, thanks to those subsidies. But I can't get that service in Montréal at all, presumably because Telus and Bell colluded to split that market. Bell doesn't provide me with such a service either: they tell me they have "fiber to my neighborhood", and only offer me a 25/10 mbps ADSL service. (There is Vidéotron offering 400mbps, but that's copper cable, again a dead technology, and asymmetric.)


Remember Chattanooga? Back in 2010, they funded the development of a fiber network, and now they have deployed a network roughly a thousand times faster than what we have just funded with a billion dollars. In 2010, I was paying Bell Canada 60$/mth for 20mbps and a 125GB cap, and now I'm still (indirectly) paying Bell for roughly the same speed (25mbps). Bell was throttling their competitors' networks until 2009, when the CRTC forced them to stop. Both Bell and Vidéotron still explicitly forbid you from running your own servers at home, and Vidéotron charges prohibitive prices which make it near impossible for resellers to sell uncapped services. Those companies are not spurring innovation: they are blocking it.

We have spent all this money for the private sector to build us a private internet, over decades, without any assurance of quality, equity or reliability. And while in some locations ISPs did deploy fiber to the home, they certainly didn't upgrade their entire network to follow suit, let alone allow resellers to compete on that network.

In 10 years, when 100mbps will be laughable, I bet those service providers will again punt the ball back into the public court and tell us they don't have the money to upgrade everyone's equipment.

We got screwed. It's time to try something new.


There was a discussion about this article on Hacker News which was surprisingly productive. Trigger warning: Hacker News is kind of right-wing, in case you didn't know.

Since this article was written, at least two more major acquisitions happened, just in Québec:

In the latter case, vMedia was explicitly saying it couldn't grow because of "lack of access to capital". So basically, we have given those companies a billion dollars, and they are now using that very money to buy out their competition. At least we could have given that money to small players to even out the playing field. But this is not how that works at all. Also, in a bizarre twist, an "analyst" believes the acquisition is likely to help Rogers acquire Shaw.

Also, since this article was written, the Washington Post published a review of a book bringing similar ideas: Internet for the People: The Fight for Our Digital Future, by Ben Tarnoff, at Verso books. It's short, but even more ambitious than what I am suggesting in this article, arguing that all big tech companies should be broken up and better regulated:

He pulls from Ethan Zuckerman’s idea of a web that is “plural in purpose” — that just as pool halls, libraries and churches each have different norms, purposes and designs, so too should different places on the internet. To achieve this, Tarnoff wants governments to pass laws that would make the big platforms unprofitable and, in their place, fund small-scale, local experiments in social media design. Instead of having platforms ruled by engagement-maximizing algorithms, Tarnoff imagines public platforms run by local librarians that include content from public media.

(Links mine: the Washington Post obviously prefers not to link to the real web; instead it doesn't link to Zuckerman's site at all and suggests Amazon for the book, in a cynical example.)

And in another example of how the private sector has failed us, there was recently a failure in the AMBER alert system where the entire province was warned about a loose shooter in Saint-Elzéar, except the people in the town itself, because they have spotty cell phone coverage. In other words, millions of people received a strongly worded, "life-threatening" alert for a city sometimes hours away, except the people most vulnerable to it. Not missing a beat, the CAQ party is promising more of the same medicine and giving more money to telcos to fix the problem, suggesting to spend three billion dollars on private infrastructure.

@anarcat August 25, 2022 - 19:28 • 1 months ago
One dead Purism laptop

The "série noire" (streak of bad luck) continues. I ordered my first Purism Librem 13v4 laptop in April 2019 and it arrived, unsurprisingly, more than three weeks later. But more surprisingly, it did not work at all: a problem eerily similar to this post talking about a bricked Purism laptop. Thankfully, Purism was graceful enough to cross-ship a replacement, and once I paid the extra (gulp) 190$ Fedex fee, I had my new elite laptop ready.

Less than a year later, the right USB-A port broke: it would deliver power, but no data signal (nothing in dmesg or lsusb). Two months later, the laptop short-circuited and completely died. And there went another RMA, this time without a shipping label or cross-shipping, so I had to pay the shipping fees.

Now the third laptop in as many years is as good as dead. The left hinge basically broke off. Earlier this year, I had noticed something was off with the lid: it was wobbly. I figured that was just the way the laptop was ("they don't make them as sturdy as they did in the good old days, do they"). But it was probably a sign of a much worse problem. Eventually, the bottom panel actually cracked open, and I realized that some internal mechanism had basically exploded.

The hinges of the Librem are screwed into little golden sprockets that are fitted into plastic shims in the laptop casing. The shims had exploded: after opening the back lid, they literally fell off (alongside the tiny golden sprockets). Support confirmed that I needed a case replacement, but unfortunately they are "out of stock" of replacement cases for the Librem 13, and have been for a while. I am 13th on the waiting list, apparently.

So this laptop is basically dead for me right now: it's my travel laptop. Its primary purpose is to sit at home until I go to a conference or a meeting or a cafe or upstairs or wherever to do some work. I take the laptop, pop the lid, tap-tap some work, close the lid. Had I used that laptop as my primary device, I would probably have opened and closed that lid thousands of times. But because it's a travel laptop, that number is probably in the hundreds, which means this laptop is not designed to withstand prolonged use.

I have now ordered a Framework laptop, 12th generation. I have some questions about its compatibility with Debian (and Linux in general), and concerns about power usage, but it certainly can't be worse than the Purism, in any case. And it can only get better over time: the main board is fully replaceable, and they have replacement hinges in stock, although the laptop itself is currently in pre-order (slated for September). I will probably post a full review when I actually lay my hands on the device.

In the meantime, I strongly discourage anyone from buying Purism products, as I previously did. You can read the full maintenance history of the laptop in the review page as well.

@anarcat August 22, 2022 - 17:17 • 1 months ago
Alternative MPD clients to GMPC

GMPC (GNOME Music Player Client) is an audio player based on MPD (Music Player Daemon) that I've been using as my main audio player for years now.

Unfortunately, it's marked as "unmaintained" in the official list of MPD clients, along with basically every client available in Debian. In fact, if you look closely, all but one of the 5 unmaintained clients are in Debian (ario, cantata, gmpc, and sonata), which is kind of sad. And none of the active ones are packaged.

GMPC status and features

GMPC, in particular, is basically dead. The upstream website domain has been lost and there has been no release in ages. It's built with GTK2 so it's bound to be destroyed in a fire at some point anyways.

Still: it's really an awesome client. It has:

  • cover support
  • lyrics and tabs lookups (although those typically fail now)
  • high performance: loading thousands of artists or tracks is almost instant
  • repeat/single/consume/shuffle settings (single is particularly nice)
  • (global) keyboard shortcuts
  • file, artist, genre, tag browser
  • playlist editor
  • plugins
  • multi-profile support
  • avahi support
  • shoutcast support

Regarding performance, the only thing that I could find to slow down gmpc is to make it load all of my 40k+ artists in a playlist. That's slow, but it's probably understandable.

It's basically impossible to find a client that satisfies all of those requirements.

But here are the clients that I found, alphabetically. I restrict myself to Linux-based clients.


CoverGrid looks real nice, but is sharply focused on browsing covers. It's explicitly "not to be a replacement for your favorite MPD client but an addition to get a better album-experience", so probably not good enough for a daily driver. I asked for a FlatHub package so it could be tested.


mpdevil is a nice little client. In my testing:

  • repeat, shuffle, single, consume mode
  • playlist support (although it fails to load any of my playlists with a UnicodeDecodeError)
  • nice genre / artist / album cover based browser
  • fails to load "all artists" (or takes too long to (pre-?)load covers?)
  • keyboard shortcuts
  • no file browser

Overall pretty good, but performance issues with large collections, and needs a cleanly tagged collection (which is not my case).


QUIMUP looks like a simple client, C++, Qt, and mouse-based. No Flatpak, not tested.


SkyMPC is similar. Ruby, Qt, documentation in Japanese. No Flatpak, not tested.


Xfmpc is the XFCE client. Minimalist, doesn't seem to have all the features I need. No Flatpak, not tested.


Ymuse is another promising client. It has trouble loading all my artists or albums (and that's without album covers), but it eventually does. It does have a Files browser which saves it... It's noticeably slower than gmpc but does the job.

Cover support is spotty: it sometimes shows up in notifications but not the player, which is odd. I'm missing a "this track information" thing. It seems to support playlists okay.

I'm missing an album cover browser as well. Overall seems like the most promising.

Ymuse is written in Golang. It crashed on a library update. There is an ITP in Debian.


For now, I guess that ymuse is the most promising client, even though it's still lacking some features and performance is suffering compared to gmpc. I'll keep updating this page as I find more information about the projects. I do not intend to package anything yet, and will wait a while to see if a clear winner emerges.

@atagar August 19, 2022 - 21:20 • 2 months ago
Status Report for August 2022

Hey there wonderful world, ’tis been a year since my last post.

Yesterday my family had some local excitement. Amid a mighty crash a tree fell in front of our house. It blocked the road and pancaked our neighborhood’s mailboxes.

Fallen tree

I’m glad cuz it was heart-warming serendipity. Within ten minutes a dozen of us from the neighborhood were cleaning it up. Folks then rebuilt the mailboxes the next day. I love Vashon. It’s such a friendly place to live.

I’m taking a break from open source to selfishly dabble in writing fiction. Amazon managers used to compliment me on my writing skills but like my coding I’m meticulous. After a full year I’m only 12,000 words into a book. By obsessively polishing each paragraph I’m comically slow.

Is it time efficient? Nay. Fun? Absolutely. I spent a week reading Shakespeare to attempt archaic dialog (dost thou wot such uneath quoths?). I also whipped up a python script to improve word diversity. I’m such an engineer…
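For what it's worth, a word-diversity checker of the kind described above fits in a few lines; this is only a sketch of the idea (the actual script isn't published here and surely differs), counting word frequencies and flagging the ones that repeat too often:

```python
import re
from collections import Counter

def overused_words(text, threshold=3, min_length=4):
    """Return (word, count) pairs for words of at least min_length
    letters that appear threshold times or more, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_length)
    return [(w, n) for w, n in counts.most_common() if n >= threshold]

sample = ("The knight rode through the dark forest. The dark trees "
          "loomed, and the dark path twisted onward through the forest.")
print(overused_words(sample))  # [('dark', 3)]
```

Running it over each chapter points out which words to vary; the threshold and minimum length are just knobs to tune to taste.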

My last experience with Tor and Wikipedia were disappointing so I’ll continue to focus on non-technical projects for a while. I still volunteer at Vashon’s Food Bank and Granny’s Attic. But at some point I’d like to get back into coding. That’s my trained profession after all. But I’ve discovered the importance of being appreciated within a project so TBD on where it’ll be.

@blog August 18, 2022 - 00:00 • 2 months ago
Open Call for Tor Board Candidates

We are happy to announce that for the first time the Tor Project Board is publishing an open call for candidates to become new members of the Board. The goal of this open call is to provide a way for the whole community to participate in this process.

We believe that this new process will not only help us find great new members for our Board but will also generate new relationships and bring us closer to the communities that Tor serves.

You can read the full open call here.

The current Tor Board is:

  • Alissa Cooper - Vice-chair
  • Dees Chinniah
  • Gabriella Coleman - Clerk
  • Julius Mittenzwei - Treasurer
  • Kendra Albert - Chair
  • Nighat Dad