Planet Tor

@blog January 14, 2022 - 00:00 • 4 days ago
Arti 0.0.3 is released: Configuration, predictive circuits, and more!

Arti is our ongoing project to create a working embeddable Tor client in Rust. It’s nowhere near ready to replace the main Tor implementation in C, but we believe that it’s the future.

We're working towards our 0.1.0 milestone in early March, where our main current priorities are stabilizing our APIs, and resolving issues that prevent integration. We're planning to do releases every month or so until we get to that milestone.

Please be aware that every release between now and then will probably break backward compatibility.

So, what's new in Arti 0.0.3?

Our biggest API change is that we've completely revamped our configuration system to allow changing configuration values from Rust, while the TorClient instance is running.

In the background, we've also implemented a system for “preemptive circuit construction.” Based on which ports you've used in the recent past, it predicts which circuits you'll likely need in the future, and constructs them in advance to lower your circuit latency.
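The concept can be sketched in a few lines of Python. This is purely illustrative: the class, the one-hour horizon, and the method names are invented for this example and are not Arti's actual algorithm or API.

```python
import time

# Illustrative only: remember which ports were used recently, so circuits
# supporting those ports can be built before they are requested again.
class PreemptivePredictor:
    def __init__(self, horizon=3600):
        self.horizon = horizon   # seconds a port stays "hot" after use
        self.last_used = {}      # port -> timestamp of last connection

    def note_usage(self, port, now=None):
        self.last_used[port] = time.time() if now is None else now

    def predicted_ports(self, now=None):
        now = time.time() if now is None else now
        return sorted(p for p, t in self.last_used.items()
                      if now - t < self.horizon)

predictor = PreemptivePredictor()
predictor.note_usage(443, now=1000)
predictor.note_usage(22, now=2000)
print(predictor.predicted_ports(now=4500))
```

A real client would build circuits whose exit policies cover the predicted ports, so that when the next connection request arrives a circuit is already waiting.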

There are also a bunch of smaller features, bugfixes, and infrastructure improvements; see the changelog for a more complete list.

And what's next?

Between now and March, we're going to be focused on three kinds of improvements.

We'll try to do our next release around the start of February. It might have a new error system, support for bootstrap reporting, easier setup, and more!

Here's how to try it out

We rely on users and volunteers to find problems in our software and suggest directions for its improvement. Although Arti isn't yet ready for production use, it should work fine as a SOCKS proxy (if you're willing to compile from source) and as an embeddable library (if you don't mind a little API instability).

Assuming you've installed Arti (with cargo install arti, or directly from a cloned repository), you can use it to start a simple SOCKS proxy for making connections via Tor with:

$ arti proxy -p 9150

and use it more or less as you would use the C Tor implementation!

(It doesn't support onion services yet. If compilation doesn't work, make sure you have development files for libsqlite installed on your platform.)

For more information, check out the README file. (For now, it assumes that you're comfortable building Rust programs from the command line). Our CONTRIBUTING file has more information on installing development tools, and on using Arti inside of Tor Browser. (If you want to try that, please be aware that Arti doesn't support onion services yet.)

When you find bugs, please report them on our bugtracker. You can request an account or report a bug anonymously.

And if this documentation doesn't make sense, please ask questions! The questions you ask today might help improve the documentation tomorrow.

Call for comments—Urgent!

We need feedback on our APIs. Sure, we think we're making them more complete and ergonomic… but it's the users' opinion that matters!

Here are some ideas of how you can help:

  1. You can read over the high-level APIs for the arti-client crate, and look for places where the documentation could be more clear, or where the API is ugly or hard to work with.

  2. Try writing more code with this API: what do you wish you could do with Tor in Rust? Give it a try! Does this API make it possible? Is any part of it harder than necessary? (If you want, maybe clean up your code and contribute it as an example?)


Thanks to everybody who has contributed to this release, including dagon, Daniel Eades, Muhammad Falak R Wani, Neel Chauhan, Trinity Pointard, and Yuan Lyu!

And thanks, of course, to Zcash Open Major Grants (ZOMG) for funding this project!

@blog January 12, 2022 - 00:00 • 6 days ago
New Foundations for Tor Network Experimentation
This is a guest post by Rob Jansen.

Hello, Tor World!

Justin Tracey, Ian Goldberg, and I (Rob Jansen) recently published some work that makes it easier to run Tor network experiments under simulation and helps us do a better job of quantifying confidence in simulation results. This post offers some background and a high-level summary of our scientific publication:

Once is Never Enough: Foundations for Sound Statistical Inference in Tor Network Experimentation

30th USENIX Security Symposium (Sec 2021)

Rob Jansen, Justin Tracey, and Ian Goldberg

The research article, video presentation, and slides are available online, and we've also published our research artifacts.

If you don't want to read the entire post (which provides more background and context), here are the main points that we hope you will take away from our work:

  • Better Models and Tools:

    • Contribution: We improved modeling and simulation tools to produce Tor test networks that are more representative of the live Tor network and that we can simulate faster and at larger scales than were previously possible.
    • Outcome: We achieved a significant new milestone: we ran simulations with 6,489 relays and 792k simultaneously active users, the largest known Tor network simulations and the first at a network scale of 100%.
  • New Statistical Methodologies:

    • Contribution: We established a new methodology for employing statistical inference to quantify test network sampling error and make more useful predictions from test networks.
    • Outcomes: We find that (1) running multiple simulations in independently sampled Tor test networks is necessary to draw statistically significant conclusions, and (2) larger-scale test networks require fewer repeated trials than smaller-scale test networks to reach the same level of confidence in the results.

More details are below!

Background: Tor Network Experiments

Network experimentation is of vital importance to the Tor Project's research, development, and deployment processes. Experiments help us understand and estimate the viability of new research ideas, test out newly written code, and measure the real-world effects of new features. Measurements taken during experiments help us gain confidence that Tor is working the way we expect it to.

Experiments are often run directly on the live, public Tor network, the one to which we all connect when we use Tor Browser. Live network experiments are possible when production-ready code is available and deployed through standard Tor software updates, or when code needs to be deployed on only a small number of nodes. Live network experiments allow us to gather, analyze, and assess information that is most relevant to the target, real-world network environment. (We maintain a list of ongoing and recent live network experiments from those who notify us.)

However, live network experiments carry additional, sometimes significant risk to the safety and privacy of Tor users and should be avoided whenever possible. As outlined by our Research Safety Board, we should use a private, test Tor network to conduct experiments whenever possible. Test networks such as those that are run in the Shadow network simulator are completely private and segregated from the Internet, providing an environment in which we can run Tor experiments with absolutely no safety or privacy risks. Test networks should be our only choice when evaluating attacks or other experiments that are otherwise unethical to run.

Private test networks have many important advantages in addition to the safety and privacy benefits they offer:

  • Test networks can help us more quickly test and debug new code during the development process. Even the best programmers in the world can occasionally introduce a bug that is not covered by more conventional unit or integration testing. Running a larger and more diverse test network can help us exercise complex corner cases and improve test coverage.

  • Test networks allow us to immediately deploy new code across the entire private network of Tor relays and clients without having to wait for lengthy deployment cycles. (If you run a Tor relay, thank you! and please keep it up to date.) Immediate deployment in test networks not only helps us tighten the development cycle and tune parameters, but also increases our confidence that things will work as expected when the code is deployed to the live network.

  • Test networks allow the community to more quickly design and evaluate novel research ideas (e.g., a performance enhancing algorithm or protocol) using prototypes without committing the time and effort that would be required to produce production-quality code. This allows us to more quickly learn about and identify design changes that are worth the additional development, deployment, and maintenance costs.

More Realistic Tor Test Networks


We want to ensure that Tor test networks that operate independently of the live Tor network still produce results that are relevant to the real world. The first Tor test network models were published about 10 years ago using network data published by Tor metrics, and our methodology has continued to improve over the years thanks to new privacy-preserving measurement systems (PrivEx and PrivCount) and new privacy-preserving measurement studies of Tor network composition and background traffic. These works have enabled us to create private Tor test networks whose characteristics are increasingly similar to those of the live Tor network.

We further advance Tor network modeling in our work. We designed a new network modeling approach that (1) can synthesize the state of the Tor network over time (rather than modeling a static point in time), and (2) can use a small number of background traffic generator processes to accurately simulate the traffic from a much larger number of Tor users (reducing the computing resources required to run an experiment). With these changes, we can now produce Tor test networks that are more representative than those used in previous work.


In the live Tor network, thousands of relays forward hundreds of Gbit/s of traffic from hundreds of thousands of users at an average point in time. Accurately reproducing the associated traffic properties in a test network requires a significant amount of computing resources. As a result, it has become standard practice for researchers to down-sample relays and create a smaller-scale Tor network that could be run with fewer computing resources. However, as we'll show in the next section, we find that smaller-scale test networks are less representative of the live Tor network and we have significantly less confidence in the results they produce. Therefore, it is beneficial to have more efficient tools that use fewer resources to run a simulation and that allow us to run larger-scale simulations.

After conducting a performance audit of Shadow, the tool we use to run Tor test network experiments, we implemented several accuracy and performance improvements that were merged into Shadow v1.13.2. Our improvements enable us to run Tor simulations faster and at larger scales than were previously possible. With our modeling and performance improvements, we achieved a significant new milestone: we ran simulations with 6,489 relays and 792k simultaneously active users, the largest known Tor network simulations and the first at a network scale of 100%. (Please note that these experiments required a machine with ~4TB of RAM to complete, but we think ongoing work could reduce this by a factor of 10.)

Improving Our Confidence in Test Network Results

A critical but understudied component of Tor network modeling is how the scale of the test network affects our confidence in the results it produces. Due in part to performance and resource limitations, researchers have usually run a single experimental trial in a scaled-down Tor test network. Because test networks are sampled using data from the live Tor network, there is an associated sampling error that must be quantified when making predictions about how the effects observed in sampled Tor networks generalize to the live Tor network. However, the standard practice was to ignore this sampling error.

In our work, we establish a new methodology for employing statistical inference to quantify the sampling error and to guide us toward making more useful predictions from sampled test networks. Our methodology employs repeated sampling and confidence intervals (CIs) to establish the precision of estimations that are made across sampled networks. CIs are a statistical tool that help us do better science; they allow us to make a statistical argument about the extent to which the simulation results are (or are not) relevant to the real world. In particular, CIs guide us to sample additional Tor networks (and run additional experiment trials) if additional precision is necessary to confirm or reject a research hypothesis.
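As a back-of-the-envelope illustration of the repeated-sampling idea (not the paper's exact estimator; a simple normal-approximation interval over per-network results is assumed here, and the numbers are made up), a short Python sketch:

```python
import math
import statistics

def confidence_interval(samples, z=1.96):
    """Normal-approximation 95% CI for the mean of per-network results."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return (mean - z * sem, mean + z * sem)

# Hypothetical per-run results, e.g. median time-to-download (seconds),
# one value per independently sampled test network.
runs = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 4.0]
low, high = confidence_interval(runs)
print(f"95% CI for the mean: [{low:.2f}, {high:.2f}]")
```

Because the standard error shrinks with the square root of the number of runs, repeating trials narrows the interval, which is exactly why a single trial cannot support statistically significant conclusions.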

We conducted a case study on Tor usage and performance to demonstrate how to apply our methodology to a concrete set of experiments. We considered whether adding 20% of additional load to the Tor network would reduce performance; we certainly expect that it should!

Figure 7a

Figure 7a plots the results of applying our statistical inference methodology to 1% scaled-down test networks in which we ran n={10,100} trials with network loads of ℓ={1.0,1.2} times the normal load. We can see that there is considerable overlap in the CIs, even when running n=100 repeated trials. In fact, this graph indicates that adding 20% additional load to the network reduces the time to download files, i.e., makes the network faster: the opposite of the outcome that we expected!

Figure 7b

Figure 7b plots the results of applying our statistical inference methodology to much larger 10% scaled-down test networks in which we ran n={5,10,100} trials with network loads of ℓ={1.0,1.2} times the standard load. Here we see that when running only n=5 trials, there is some separation between the CIs but still some overlap in the lower 80% of the distribution. However, running more trials produces more precise (narrower) CIs that increase our confidence in our hypothesis that adding 20% additional load to the network does in fact increase the time to download files.

We conclude from our case study on Tor usage and performance that (1) running multiple simulations in independently sampled Tor test networks is necessary to draw statistically significant conclusions, and (2) fewer simulations are generally needed to achieve a desired CI precision in larger-scale test networks than in smaller-scale ones.

An important takeaway is that our work demonstrates a methodology that those of us running Tor experiments in test networks can now follow in order to (1) estimate the extent to which our experimental results are scientifically meaningful, and (2) guide us toward producing more statistically rigorous conclusions.


Our methods and tools have been contributed to the open source community. If you're interested in taking advantage of our work, a good place to start is by first setting up Shadow, and then tornettools will help guide you through the process of creating Tor test networks, running simulations, and processing results. You can ask questions on Shadow's discussion page.

Practical Applications of Our Work

The Shadow team has adopted our tools as part of their automated, continuous integration tests which now include testing in private Tor test networks.

The core Tor network team has been building upon our contributions as they develop, test, and tune a new set of congestion control protocols that will begin to roll out in the coming months. Our work has helped them more rapidly prepare test network environments and more thoroughly explore the design space while tuning parameters. For more information, see the GitLab tracking issue, the congestion control proposal, and the Shadow congestion control experiment plan.

Thanks for reading!

All the best, ~Rob

[Thanks to Ian Goldberg and Justin Tracey for input on this post!]

@blog January 11, 2022 - 00:00 • 7 days ago
New Release: Tor Browser 11.0.4

Tor Browser 11.0.4 is now available from the Tor Browser download page and also from our distribution directory.

This version includes important security updates to Firefox.

Tor Browser 11.0.4 updates Firefox to 91.5.0esr and gives our landing page the usual Tor Browser look and feel back, removing the parts of our year end donation campaign.

Additionally, we update NoScript to the latest release (11.2.14) and bundle the Noto Sans Gurmukhi and Sinhala fonts for our Linux users again after the underlying font rendering issue got resolved.

Full changelog

The full changelog since Tor Browser 11.0.3 is:

Known issues

Tor Browser 11.0.4 comes with a number of known issues (please check the following list before submitting a new bug report):

@kushal January 7, 2022 - 09:10 • 11 days ago
Trouble with signing and notarization on macOS for Tumpa

This week I released the first version of Tumpa on Mac. Though the actual changes required for building the Mac app and dmg file were small, I had to tear out those few remaining hairs on my head to get it working on any Mac other than the build box. It was the classic case of "works on my laptop".

The issue

Tumpa is a Python application which uses PySide2 and also Johnnycanencrypt which is written in Rust.

I tried both the briefcase tool and manually calling the codesign and create-dmg tools to create the app bundle and the tumpa-0.1.3.dmg file.

After creating the dmg file, I had to submit it for notarisation to Apple, following:

xcrun /Applications/ --notarize-app --primary-bundle-id "in.kushaldas.Tumpa" -u "" -p "@keychain:MYNOTARIZATION" -f macOS/tumpa-0.1.3.dmg

This worked successfully; after a few minutes I could see that the job had passed. So I could then staple the ticket to the dmg file.

xcrun stapler staple macOS/tumpa-0.1.3.dmg

I could install from the file and run the application; sounds great.

But whenever someone else tried to run the application after installing it from the dmg, it showed the following.

mac failure screenshot


It took me over 4 hours of trying all possible combinations; finally I had to pass --options=runtime,library to the codesign tool, and that did the trick. Not being able to figure out how to get more logs on Mac made my life difficult.

I had to patch briefcase to make sure I can keep using it (also created the upstream issue).

--- .venv/lib/python3.9/site-packages/briefcase/platforms/macOS/	2022-01-07 08:48:12.000000000 +0100
+++ /tmp/	2022-01-07 08:47:54.000000000 +0100
@@ -117,7 +117,7 @@
                     '--deep', str(path),
-                    '--options', 'runtime',
+                    '--options', 'runtime,library',

You can see my build script, which is based on input from Micah.

I want to thank all of my new friends inside SUNET who were excellent helping hands testing the multiple builds of Tumpa. Later, many folks from IRC also jumped in to help test the tool.

@kushal January 5, 2022 - 13:50 • 13 days ago
Releasing Tumpa for Mac

I am happy to announce the release of Tumpa (The Usability Minded PGP Application) for Mac. This release contains the old UI (and the UI bugs), but creates RSA4096 keys by default. Right now Tumpa allows the following:

  • Create a new RSA4096 OpenPGP key. Remember to check the “Authentication” subkey checkbox if you want to use the key for ssh.
  • Export the public key.
  • Reset the Yubikey from the smartcard menu.
  • Upload the subkeys to a Yubikey (4 or 5).
  • Change the user pin/admin pin of the Yubikey.
  • Change the name and public key URL on the Yubikey.

The keys are stored in the ~/.tumpa/ directory; you can back it up to an encrypted USB drive.

You can download the dmg file from my website.

$ wget
$ sha256sum ./tumpa-0.1.3.dmg 
6204cf3253fbe41ada91429684fccc0df87257f85345976d9468c8adf131c591  ./tumpa-0.1.3.dmg

Download & install from the dmg in the standard drag & drop style. If you are using one of the new M1 boxes, remember to click on “Open in Rosetta” for the application.

Tumpa opening on Mac

Click on “Open”.

Here is a GIF recorded on Linux; the functions are the same on Mac.

Tumpa gif

Saptak (my amazing co-maintainer) is working on a new website. He is also leading the development of the future UI, based on usability reports. We already saw a few UI issues on Mac (especially while generating a new key); those will be fixed in a future release.

Feel free to open issues as you find them, and find us in the #tumpa channel on IRC.

@kushal January 3, 2022 - 10:31 • 15 days ago
2021 blog review

Last year I wrote only a few blog posts, 19 exactly. That also reduced the views, to around 370k from 700k the year before (IIRC).

The post about Getting TLS certificates for Onion services was the most-read post this year, with 9,506 views.

A major part of the year went to wondering whether we would survive it; India's medical system broke down completely (the doctors and staff did an amazing job using whatever was available). Everyone I know lost someone to COVID, including in our family. All three of us were down with COVID from the end of April, and the recovery was long. For a few days in between, I could not remember any names.

After the COVID worries had settled in my brain (that is, after getting the vaccines), we were waiting for our move to Sweden.

At the beginning of 2022, things look a bit settled for us. In the last few weeks of 2021, I managed to start writing again. I am hoping to continue this. You can also read the 2018, 2017, and 2016 reviews.

@kushal December 30, 2021 - 06:36 • 19 days ago
Johnnycanencrypt 0.6.0 released

A few days ago I released 0.6.0 of Johnnycanencrypt. It is a Python module written in Rust for OpenPGP using the amazing sequoia-pgp library. It allows you to access/use Yubikeys (without gpg-agent) directly from your code.

This release took almost a year. Most of the work was done long before, but I was not in a state to do a release.

Major changes

  • We can now sign and decrypt using both Curve25519 and RSA keys on the smartcard (we support only Yubikeys)
  • Changing of the password of the secret keys
  • Updating expiry date of the subkeys
  • Adding new user ID(s)
  • Revoking user ID(s)

I also released a new version of Tumpa which uses this, along with an updated package for Debian 11.

@kushal December 27, 2021 - 07:52 • 22 days ago
Using your OpenPGP key on Yubikey for ssh

Last week I wrote about how you can generate ssh keys on your Yubikeys and use them. There is another way of keeping your ssh keys secure: using your already existing OpenPGP key (along with an authentication subkey) on a Yubikey for ssh.

In this post I am not going to explain the steps for moving your key to a Yubikey, only the steps required to start using it for ssh access. Feel free to have a look at Tumpa if you want an easy way to upload keys to your card.

Enabling gpg-agent for ssh

First we have to add the gpg-agent.conf file with the correct configuration. Remember to use a different pinentry program if you are on Mac or KDE.

❯ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
❯ echo "pinentry-program $(which pinentry-gnome)" >> ~/.gnupg/gpg-agent.conf
❯ echo "export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)" >> ~/.bash_profile
❯ source ~/.bash_profile 
❯ gpg --export-ssh-key <KEYID> > ~/.ssh/

At this moment your public key (for ssh usage) is at ~/.ssh/ file. You can use it in the ~/.ssh/authorized_keys file on the servers as required.

We can then restart gpg-agent using the following command, and also verify that the card is attached and that gpg-agent can find it.

❯ gpgconf --kill gpg-agent
❯ gpg --card-status

Enabling touch policy on the card

We should also enable the touch policy on the card for the authentication operation. This means that every time you try to ssh using the Yubikey, you will have to touch the interface (the light will flash until you touch it).

❯ ykman openpgp keys set-touch aut On
Enter Admin PIN: 
Set touch policy of authentication key to on? [y/N]: y

If you still have servers where you have only the old key, the ssh client will be smart enough to ask you for the passphrase for those keys.

@ooni December 27, 2021 - 00:00 • 22 days ago
Year in Review: OONI in 2021
In light of the ongoing global COVID-19 pandemic, 2021 continued to be a challenging year for everyone. Yet, several exciting things happened in the censorship measurement world. In this post, we share some OONI highlights from 2021, as well as some upcoming OONI projects for 2022! Highlights include OONI Probe (automated OONI Probe testing, a new Debian package for OONI Probe, a new OONI Probe Command Line Interface for Linux and macOS) ...
@kushal December 26, 2021 - 05:34 • 23 days ago
Using onion services over unix sockets and nginx

I have explained before how to create Onion services; they provide an easy solution for exposing any service from inside your home network to the Internet, in a secure manner (with authorized services). But in all of those examples I used an IP/port combination to expose and talk to the internal service. Instead, you can also use unix sockets to do the same.

To do so, use the following style in your torrc file; this example is from my blog.

HiddenServiceDir /var/lib/tor/hidden/
HiddenServiceVersion 3
HiddenServicePort 80 unix:/var/run/tor-hs-kushal.sock
HiddenServicePort 443 unix:/var/run/tor-hs-kushal-https.sock

And the corresponding nginx configuration parts:

server {
    listen unix:/var/run/tor-hs-kushal.sock;

    server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
    access_log /var/log/nginx/kushal_onion-access.log;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen unix:/var/run/tor-hs-kushal-https.sock ssl http2;

    server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
    access_log /var/log/nginx/kushal_onion-access.log;
}

Now if you start tor and also nginx pointing to the same unix sockets, things will work fine. But nginx will fail to restart; you will have to remove the socket files by hand before it can start again. This happens due to a bug in nginx. You can edit the stop behaviour of the service and fix this issue:

systemctl edit nginx

Add the following to the override configuration file in the correct location (between the comments):

### Editing /etc/systemd/system/nginx.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry TERM/5 --pidfile /run/

### Lines below this comment will be discarded

If you go and read the original ExecStop value, you will find that it is using SIGQUIT, but that does not remove the socket files; only a SIGTERM does. You can read more in the upstream bug report.

After this nginx should be able to restart without any trouble.

Thank you to the reader who emailed and asked for this.

@anarcat December 22, 2021 - 16:51 • 26 days ago
L'Internet néo-colonial

Cet article est une traduction d'un article écrit originalement en anglais. Merci à Globenet (leur copie). Voir aussi les notes de la traduction pour le contexte particulier de cet article.

J'ai grandi avec Internet, et son éthique et sa politique ont toujours été importantes dans ma vie. Mais je me suis également engagé à d'autres niveaux contre la brutalité policière, pour de la bouffe, pas des bombes, l'autonomie des travailleur·euses, les logiciels libres, etc. Longtemps, tout cela m'a paru cohérent.

Mais plus j'observe l'Internet moderne — et les mégacorporations qui le contrôlent — et moins j'ai confiance en mon analyse originale du potentiel libérateur de la technologie. J'en viens à croire que l'essentiel de notre développement technologique est dommageable pour la grande majorité de la population de la planète, et bien évidemment pour reste de la biosphère. Et je ne pense plus que c'est un nouveau problème.

C'est que l'Internet est un outil néo-colonial, et ce depuis ses débuts. Je m'explique.

What is neo-colonialism?

The term "neo-colonialism" was coined by Kwame Nkrumah, the first president of Ghana. In Neo-Colonialism, the Last Stage of Imperialism (1965), he writes:

In place of colonialism, as the main instrument of imperialism, we have today neo-colonialism [...] [which], like colonialism, is an attempt to export the social conflicts of the capitalist countries. [...]

The result of neo-colonialism is that foreign capital is used for the exploitation rather than for the development of the less developed parts of the world.

Investment under neo-colonialism increases, rather than decreases, the gap between the rich and the poor countries of the world.

In short, if colonialism was Europeans bringing genocide, war, and religion to the rest of the world, neo-colonialism is Americans bringing capitalism to the rest of the world.

Before seeing how this applies to the Internet, we therefore need to take a short detour through the history of the United States. This matters because it would be difficult for anyone to separate neo-colonialism from the empire under which it develops, and here we can only name the United States of America.

United States Declaration of Independence

Let us start with the United States Declaration of Independence (1776). This might surprise many Americans, because the Declaration is not actually part of the United States Constitution, so its legal standing is debatable. It was nevertheless an influential philosophical force in the founding of the nation. As its author, Thomas Jefferson, put it:

It was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion.

In this aging document, we find the following gem:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

As a founding document, the Declaration still has an impact, in the sense that the quote above has been called:

an "immortal declaration", and "perhaps [the] single phrase" of the American Revolutionary period with such "continuing importance". (Wikipedia)

Let us reread that "immortal declaration": "all men are created equal". "Men", in this context, was limited to a certain set of people, namely "white male property owners or taxpayers, about 6% of the population". At the time of writing, women did not have the right to vote, and slavery was legal. Jefferson himself owned hundreds of slaves.

The Declaration was addressed to the King and was a list of grievances. One of the colonists' concerns was that the King:

has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.

This is a clear sign of the frontier myth, which paved the way for the colonization of the territory (and the extermination of its peoples) that some now call the United States of America.

The Declaration of Independence is obviously a colonial document, since it was written by colonists. None of this is really surprising, historically speaking, but it is worth remembering where the Internet was born: in the United States.

Declaration of the Independence of Cyberspace

Two hundred and twenty years later, in 1996, John Perry Barlow wrote a Declaration of the Independence of Cyberspace. By then, (almost) everyone had the right to vote (including women), and slavery was abolished (although some argue it persists in the form of the prison system); the United States had made enormous progress. Surely this text aged much better than the earlier declaration it was clearly inspired by. Let us see how it reads today and how well it matches the way the Internet is actually built now.

Frontiers of independence

One of the key ideas Barlow puts forward is that "cyberspace does not lie within your borders". In that sense, cyberspace is the ultimate frontier: having failed to colonize the moon, the American people turned inward, deeper into technology, but still carrying the frontier ideology. Fittingly, Barlow was one of the co-founders of the Electronic Frontier Foundation (the beloved EFF), founded six years earlier.

But there are other problems with this idea. As quoted on Wikipedia:

The declaration has been criticized for internal inconsistencies.[9] The declaration's assertion that "cyberspace" is a place removed from the physical world has also been challenged by people who point out that the Internet is always linked to its underlying geography.[10]

And indeed, the Internet is undeniably a physical object. Initially controlled and severely restricted by telecommunications companies like AT&T, it was somewhat "liberated" from that monopoly in 1982, when a lawsuit broke the monopoly up, a key event that arguably made the Internet possible.

(From then on, "backbone" providers could enter the competitive market and grow, eventually merging into new monopolies: Google has a monopoly on search engines and advertising, Facebook on communication for a few generations, Amazon on storage and computing capacity, Microsoft on hardware, and so on. Even AT&T is now almost as consolidated as it used to be.)

To be clear: all of these companies own gigantic data centers and intercontinental cables, and those prioritize the Western world, the heart of this empire. To give one example, Google's latest 7,000 km submarine cable does not connect Argentina to South Africa or New Zealand; it connects the United States to the United Kingdom and Spain. Hardly a revolutionary fiber.

Private Internet

But back to the Declaration:

Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.

In Barlow's thinking, "public" is bad and private is good, natural. Or, in other words, a "public construction project" is unnatural. And indeed, the modern "nature" of development is private: most of the Internet is now privately owned and operated.

I must admit that, as an anarchist, I loved that sentence when I first read it. I stood with "us", the oppressed, the revolutionaries. And, in a sense, I still do: I sit on the board of Koumbit and work for a non-profit that has reoriented itself to fight censorship and surveillance. Yet now I cannot help thinking that, collectively, we have failed to establish that independence and have placed too much trust in private companies. In hindsight it is obvious, but it was not thirty years ago.

The Internet is now accountable to none of the traditional political powers supposed to represent the people, or even just its users. The situation is actually worse than when the United States was founded (when "6% of the population could vote"), because the owners of the tech giants are a mere handful of people with the power to override any decision. There is only one boss at Amazon; his name is Jeff Bezos, and he has total control.

Social contract

Here is another claim from the Declaration:

We are forming our own Social Contract.

I remember the early days, back when "netiquette" was a common word; there was a feeling of having some sort of contract. Obviously not a written "standard" (or only barely; see RFC 1855), but a tacit agreement. How wrong we were! One only needs to look at Facebook to understand how problematic this idea becomes at the scale of a global network.

Facebook is the quintessential "hacker" ideology put into practice. Mark Zuckerberg explicitly refused to be the "arbiter of truth", which implies he will let lies proliferate on his platforms.

He also sees Facebook as a place where everyone is equal, which echoes the Declaration:

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

(Note, in passing, the omission of gender from that list, repeating the infamous "all men are created equal" of the US declaration.)

As the Wall Street Journal's (WSJ) "Facebook Files" later showed, those "social contracts" have hard limits inside Facebook. Some celebrities systematically escape moderation, including fascists and rapists. Drug cartels and human traffickers thrive on the platform. Zuckerberg himself has tried to tame the platform (to make it healthier, or to promote vaccination) and failed: Facebook became "angrier", and "anti-vax" conspiracies run rampant.

That is because the "social contract" behind Facebook and those big companies is a lie: their concern is profit, which comes through advertising and "engagement" with the platform, which in turn drives increased anxiety and depression among teenagers, for example.

Facebook's answer is that it is working really hard on moderation. But the truth is that even that system is severely biased. The WSJ showed that Facebook has translators for only 50 languages. It is surprisingly hard to count languages, but estimates of the number of living languages range between 3,000 and 7,000. So while 50 languages may sound like a lot at first, it actually covers only a tiny fraction of the population using Facebook. Just taking Wikipedia's list of the top 50 languages by number of speakers, you leave out Dutch (52nd), Greek (74th), and Hungarian (78th), and those are just a few arbitrarily chosen European nations.

Facebook struggles to moderate even a major language like Arabic. The platform has censored content from legitimate Arab media outlets when they mentioned the word al-Aqsa, because Facebook associates it with the Al-Aqsa Martyrs' Brigades, while those outlets were talking about the al-Aqsa Mosque... This bias against Arabs shows how Facebook reproduces American colonial politics.

The WSJ also points out that Facebook devotes only 13% of its moderation resources to users outside the United States, even though they represent 90% of its user base. Facebook spends three times as much moderation effort on "brand safety", which shows that its priority is not the safety of its users but that of its advertisers.

Military Internet

Sergey Brin and Larry Page are the Lewis and Clark of our generation. Just as the latter were sent by Jefferson (the same one) to declare sovereignty over the entire west coast of the United States, Google has declared sovereignty over all human knowledge with its mission statement: to "organize the world's information and make it universally accessible and useful". (It is worth noting that Page has somewhat questioned that mission, but only because it was not ambitious enough, Google having "outgrown" it.)

The Lewis and Clark expedition, much like Google, had a scientific pretext, because that is supposedly how you colonize a world. Yet both men were military officers who had to be given scientific training before setting out. The Corps of Discovery was made up of a few dozen soldiers and a dozen civilians, including York, an African-American slave owned by Clark, who was sold after the expedition, his ultimate fate lost to History.

And just like Lewis and Clark, Google has tight links with the military. For example, Google Earth was not originally built by Google; it came from the acquisition of a company called Keyhole, which had ties to the CIA. Those ties were folded into Google with the acquisition. Google's growing investment in the military-industrial complex eventually led Google workers to organize a revolt, so I do not know exactly how deeply Google is involved with the military apparatus right now. (Update: this November 2021 article says they will "proudly work with the Department of Defense".) Other companies, of course, have no such reservations: Microsoft, Amazon, and many others bid on military contracts with overflowing enthusiasm.

Spreading the Internet

I am obviously not the first to identify colonial structures in the Internet. In an article entitled "The Internet as an Extension of Colonialism", Heather McDonald correctly identifies the fundamental problems with "developing" new Internet "markets" of "consumers", arguing mainly that this creates a digital divide that produces a "lack of individual freedom and autonomy":

Many African people have gained access to these technologies but not the freedom to develop content such as web pages or social media platforms in their own way. Digital natives have much more power, and therefore use it to create their own space with their own norms, shaping their online world according to their own outlook.

But the digital divide is certainly not the worst problem we have to face on the Internet today. Coming back to the Declaration, we originally believed we were creating a whole new world:

This governance will arise according to the conditions of our world, not yours. Our world is different.

How I wish that were true. Unfortunately, the Internet is not that different from the offline world. Or, to be more precise, the values we built into the Internet, notably absolute freedom of speech, but also sexism, corporatism, and exploitation, are now exploding outside the Internet, into the "real" world.

The Internet was built with free software which, fundamentally, rests on the quasi-volunteer labor of an elite of white men with obviously too much free time (and also: no children). The mythical writing of GCC and Emacs by Richard Stallman is a good example, but the entire Internet now seems to run on disparate pieces built by hit-and-run programmers working in their generous spare time. As soon as one of those pieces breaks, it can compromise or bring down entire systems. (Then again, I wrote this article on my day off... [Translator's note: and what about the translators?!])

This model, which is fundamentally cheap labor, is spreading beyond the Internet. Delivery workers are exploited to the bone by apps like Uber, although it should be noted that those workers are organizing and fighting back. Conditions in Amazon's fulfillment centers defy the imagination, with breaks restricted to the point that people have to pee in bottles, and ambulances on standby to evacuate the bodies. At the height of the pandemic, staff were dangerously exposed to the virus. All this while Amazon is more or less taking over the entire economy.

The Declaration culminates in this prophecy:

We will spread ourselves across the Planet so that no one can arrest our thoughts.

That prediction, which seemed revolutionary at first, now sends shivers down the spine.

Colonial Internet

The Internet is, if not neo-colonial, outright colonial. The colonies had cotton fields and slaves; we have disposable phones and Foxconn. Canada has its cultural genocide; Facebook has its own genocides in Ethiopia and Myanmar, and it provokes stonings in India. Apple implicitly accepts the Uyghur genocide. And just like the slaves of the colonies, those atrocities are what keeps the empire running.

The Declaration actually ends like this, a quote I keep in my quotes file:

We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

That remains an inspiration to me. But if we want to make "cyberspace" more humane, we must decolonize it. Work towards cyberpeace instead of cyberwar. Establish clear codes of conduct, discuss ethics, and question our own biases, privileges, and culture. For me, the first step in decolonizing my own mind is writing this article. Breaking up the tech monopolies may be an important step, but it will not be enough: we also need a change of culture, and that is the hardest knot to untie.

Appendix: an apology to Barlow

I feel a little guilty about riddling Barlow's declaration with holes like this, point by point. It is somewhat unfair, especially since Barlow died a few years ago and cannot craft a response (even humbly assuming he would have read this). On the other hand, he himself acknowledged in 2009 that he had been a little too "optimistic", while saying that "everyone is getting more mature and smarter":

I'm an optimist. In order to be a libertarian, you have to be an optimist. You have to have a benign view of human nature, to believe that human beings left to their own devices are basically good. But I'm not so sure about human institutions, and I think the real point of argument here is whether or not large corporations are human institutions or some other entity we need to think about curtailing. Most libertarians are worried about government but not about corporations. I think we need to be worrying about corporations in exactly the same way we worry about government.

And, in a sense, it was a bit naive to expect Barlow not to be a colonizer. Barlow was, among other things, a rancher who grew up on a colonial ranch in Wyoming. The ranch was founded in 1907 by his great-uncle, 17 years after the state joined the Union, and only a generation or two after the Powder River War (1866-1868) and the Black Hills War (1876-1877), during which the United States stole the land occupied by the Lakota, the Cheyenne, the Arapaho, and other indigenous nations, in some of the last wars waged against the First Nations.

Appendix: related articles

There is another article with almost the same title as this one: "Facebook and the New Colonialism". (Curiously, the article's <title> tag is actually "Facebook the Colonial Empire", which I also find fitting.) The article is worth reading in full, but I liked this quote so much that I could not resist reproducing it here:

Representations of colonialism have long been present in digital landscapes. ("Even Super Mario Brothers," the video game designer Steven Fox told me last year. "You run through the landscape, stomp on everything, and raise your flag at the end.") But colonialism on the Internet is not an abstraction. The forces shaping a new kind of imperialism go beyond Facebook.

It goes on:

Consider, for example, digitization projects that focus primarily on English-language literature. If the web is supposed to be humanity's new Library of Alexandria, a living repository for all of humanity's knowledge, this is a problem. So is the fact that the vast majority of Wikipedia pages are about a relatively tiny portion of the planet. For instance, 14 percent of the world's population lives in Africa, but less than 3 percent of the world's geotagged Wikipedia articles originate there, according to a 2014 Oxford Internet Institute report.

And it offers another definition of neo-colonialism, while warning against overusing the word, as I am sort of doing here:

"I'm loath to toss around words like colonialism, but it's hard to ignore the family resemblances and recognizable DNA," said Deepika Bahri, an English professor at Emory University who focuses on postcolonial studies. In an email, Bahri summed up those similarities in list form:

  1. rides in like the savior
  2. bandies about words like equality, democracy, basic rights
  3. masks the long-term profit motive (see 2 above)
  4. justifies the logic of partial dissemination as better than nothing
  5. partners with local elites and vested interests
  6. accuses the critics of ingratitude

"In the end," she told me, "if it isn't a duck, it shouldn't quack like a duck."

Another good read is the classic "Code and Other Laws of Cyberspace" (1999, free PDF), which is also critical of Barlow's declaration. In Code is Law, Lawrence Lessig argues that

computer code (or "West Coast Code", in reference to Silicon Valley) regulates conduct in much the same way that legal code (or "East Coast Code", in reference to Washington, D.C.) does (Wikipedia).

And now it feels as though the West Coast has won over the East Coast, or perhaps recolonized it. In any case, the Internet now crowns emperors.

Appendix: translation notes

So. Some kind and extremely generous comrades (thank you, Globenet!) polished most of this translation. I copy-pasted it onto my blog, made a few edits, added links, and poof, I speak French again. It honestly feels strange even to write this note here, because it has been ages since I last wrote an article in French (the shame). More importantly, I do not know whether I would have written this article the same way at all had I started it in French.

For one thing, notions of colonialism are completely different in Quebec: francophones here hold the peculiar idea of being victims of (English) colonization, conveniently omitting the part where they took part in the genocide, but well, it is complicated.

For another, what about France's own colonial past! We would have to talk about Africa, Asia, and of course the Americas and the Caribbean too... Not to mention that writing this article from a French point of view would also require talking about the Minitel, FDN, and more...

In short, take all this with a grain of salt, and I regret not writing more in French. My apologies to the readers of my native language...

@kushal December 22, 2021 - 08:34 • 27 days ago
Setting up local mTLS environment using mkcert

mTLS, or mutual TLS, is a way of doing mutual authentication. When we talk about TLS in general, we usually talk only about TLS for the servers/services: the clients can verify that they are connected to the right server, but the server does not know much about the clients themselves. mTLS solves this, say for services talking to each other. To know more, please read the Cloudflare writeup on mTLS.
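Conceptually, the difference from ordinary TLS is one setting on the server side: the server additionally demands and verifies a certificate from every client. As a rough illustration (not part of the mkcert workflow itself), here is how that single knob looks with Python's standard ssl module; the file names in the comments are placeholders:

```python
import ssl

# Ordinary TLS server context: only the server proves its identity;
# clients are never asked for a certificate.
plain_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# mTLS server context: additionally demand and verify a certificate
# from every connecting client.
mtls_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
mtls_ctx.verify_mode = ssl.CERT_REQUIRED

# In a real setup you would also load the server's own key pair and the
# CA that issued the client certificates (placeholder file names):
# mtls_ctx.load_cert_chain("server.pem", "server-key.pem")
# mtls_ctx.load_verify_locations("rootCA.pem")

print(plain_ctx.verify_mode == ssl.CERT_NONE)      # clients stay anonymous
print(mtls_ctx.verify_mode == ssl.CERT_REQUIRED)   # clients must authenticate
```

The nginx directives we will set up below (ssl_client_certificate and ssl_verify_client) play exactly the role of verify_mode and load_verify_locations here.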

In this blog post we will see how we can use mkcert from Filippo Valsorda to set up a local environment, so that you can play around and learn.

Install nss-tools package for your system

For Fedora, I installed it via dnf.

$ sudo dnf install nss-tools -y

Getting mkcert

I grabbed the latest release from the GitHub release page.

$ wget
$ mv mkcert-v1.4.3-linux-amd64 ~/bin/mkcert
$ chmod +x ~/bin/mkcert

Setting up the local CA

$ mkcert -install
Created a new local CA 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊

This will create two important files inside of your user home directory.

❯ ls -l .local/share/mkcert/
.r--------@ 2.5k kdas 20 Dec 12:14 rootCA-key.pem
.rw-r--r--@ 1.8k kdas 20 Dec 12:14 rootCA.pem

Note: rootCA-key.pem is an important file; it can allow people to decrypt traffic from your system. Do not share it or copy it around carelessly.

The rootCA.pem file contains the CA certificate (and its public key); we can use the openssl tool to inspect it.

❯ openssl x509 -text -noout -in ~/.local/share/mkcert/rootCA.pem
        Version: 3 (0x2)
        Serial Number:
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O = mkcert development CA, OU = kdas@localhost.localdomain (Kushal Das), CN = mkcert kdas@localhost.localdomain (Kushal Das)
            Not Before: Dec 20 11:14:33 2021 GMT
            Not After : Dec 20 11:14:33 2031 GMT
        Subject: O = mkcert development CA, OU = kdas@localhost.localdomain (Kushal Das), CN = mkcert kdas@localhost.localdomain (Kushal Das)
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (3072 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Subject Key Identifier: 
    Signature Algorithm: sha256WithRSAEncryption

If you look closely at the X509v3 extensions section of the output, you will notice two important things:

  • It is a CA certificate
  • pathlen:0 means it cannot sign/create any new CA certificates, only leaf certificates. Run man x509v3_config to learn more.

Setting up certificate for local development

❯ cd ~/code/mtls-example
❯ mkcert localhost ::1

Created a new certificate valid for the following names 📜
 - "localhost"
 - ""
 - "::1"

The certificate is at "./localhost+2.pem" and the key at "./localhost+2-key.pem" ✅

It will expire on 20 March 2024 🗓

Starting a nginx podman container with the certificate

Next we will start an nginx container with podman to try out the certificate. On my Fedora machine, I also have to take care of SELinux. First, let us create a default.conf.

server {
  listen [::]:443 ssl http2 ipv6only=on;
  listen 443 ssl http2;
  server_name  localhost;
  ssl_protocols TLSv1.3;
  ssl_certificate /etc/nginx/conf.d/localhost+2.pem;
  ssl_certificate_key /etc/nginx/conf.d/localhost+2-key.pem;

  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}

Then, I will copy the rootCA.pem file in the current directory and start the container.

❯ cp ~/.local/share/mkcert/rootCA.pem .
❯ chcon -Rt svirt_sandbox_file_t .
❯ podman run --rm -p 8080:443 -v $PWD:/etc/nginx/conf.d/ nginx

and from another terminal I can verify the setup using curl.

❯ curl --tlsv1.3 https://localhost:8080
<!DOCTYPE html>
<title>Welcome to nginx!</title>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>

The same via the Python httpx module (including the commands to create and activate a Python virtualenv).

❯ python3 -m venv .venv
❯ source .venv/bin/activate
❯ python3 -m pip install httpx
>>> import httpx
>>> r = httpx.get("https://localhost:8080/", verify="./rootCA.pem")
>>> r
<Response [200 OK]>

Now let us enable client certificate verification in nginx

We will modify the default.conf to the following.

server {
  listen [::]:443 ssl http2 ipv6only=on;
  listen 443 ssl http2;
  server_name  localhost;
  ssl_protocols TLSv1.3;
  ssl_certificate /etc/nginx/conf.d/localhost+2.pem;
  ssl_certificate_key /etc/nginx/conf.d/localhost+2-key.pem;
  ssl_client_certificate /etc/nginx/conf.d/rootCA.pem;
  ssl_verify_client on;
  ssl_verify_depth  3;

  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}

and restart the podman container.

Now, let us try the same curl command and Python code.

❯ curl --tlsv1.3 https://localhost:8080
<head><title>400 No required SSL certificate was sent</title></head>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
>>> r = httpx.get("https://localhost:8080/", verify="./rootCA.pem")
>>> r
<Response [400 Bad Request]>

Creating a client side certificate and using the same

Here we are telling mkcert to use the name nehru in the client certificate. Note: I am running these commands on a different day, which is why the dates will not match the CA certificate dates :)

❯ mkcert -client nehru

Created a new certificate valid for the following names 📜
 - "nehru"

The certificate is at "./nehru-client.pem" and the key at "./nehru-client-key.pem" ✅

It will expire on 22 March 2024 🗓

If you run openssl x509 -text -noout -in ./nehru-client.pem and check the details of the certificate, you will notice the following:

        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Client Authentication, TLS Web Server Authentication

Next, we will use the same certificates in curl.

❯ curl --tlsv1.3 --key nehru-client-key.pem --cert nehru-client.pem https://localhost:8080

And then in Python.

>>> cert = ("./nehru-client.pem", "./nehru-client-key.pem")
>>> r = httpx.get("https://localhost:8080/", verify="./rootCA.pem", cert=cert)
>>> r
<Response [200 OK]>
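curl and httpx above are doing the same two things: trusting the local CA and presenting the client key pair during the handshake. As a minimal sketch, the same can be done with just the Python standard library; the file names and the localhost:8080 endpoint are assumed to match the examples above, and the request is only attempted when the files are actually present:

```python
import os
import ssl
import urllib.request

def mtls_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Client-side context that trusts ca_file and presents the given
    certificate/key pair during the TLS handshake."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.load_cert_chain(cert_file, key_file)
    return ctx

# Only attempt the request when the mkcert-generated files exist locally.
if os.path.exists("rootCA.pem") and os.path.exists("nehru-client.pem"):
    ctx = mtls_context("rootCA.pem", "nehru-client.pem", "nehru-client-key.pem")
    with urllib.request.urlopen("https://localhost:8080/", context=ctx) as resp:
        print(resp.status)
```

This is essentially what httpx builds internally from its verify= and cert= arguments.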

I hope this will help you start trying out mTLS in your local development environment. In future posts we will go through more in-depth examples.

@kushal December 21, 2021 - 04:46 • 28 days ago
ssh authentication using FIDO/U2F hardware authenticators

OpenSSH supports authentication using FIDO/U2F since the 8.2 release. Tokens are required to implement the ECDSA-P256 "ecdsa-sk" key type, but some (such as the Yubikey) also support Ed25519 (ed25519-sk) keys. In this example I am using a Yubikey 5.

I am going to generate a non-discoverable key on the card itself. This means that along with the card we will also have a key on disk, and one will need both to authenticate. If someone steals your Yubikey, they will not be able to log in with it alone.

✦ ❯ ssh-keygen -t ed25519-sk -f .ssh/id_ed25519_sk
Generating public/private ed25519-sk key pair.
You may need to touch your authenticator to authorize key generation.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in .ssh/id_ed25519_sk
Your public key has been saved in .ssh/
The key fingerprint is:
SHA256:CoQKA0blJ8A1xOwri167mIDb7rHxr59TYwI25ChOZ4Y kdas@localhost.localdomain
The key's randomart image is:
+[ED25519-SK 256]-+
|++*=             |
|o.o+o            |
|o +*..           |
|oE.*B            |
|+.+.oo  S        |
|.o . ...+        |
|+ =.  .+ .       |
|o++=. ..         |
|o*=o+++.         |

Here we passed the type of the key using the -t flag and saved the private key using -f. I pasted the public key into the server's ~/.ssh/authorized_keys file, and then also configured the ssh client on my laptop to use that specific key via the ~/.ssh/config file.

  User kushal
  IdentityFile ~/.ssh/id_ed25519_sk

Finally we can login via ssh.

✦ ❯ ssh
Enter passphrase for key '/home/kdas/.ssh/id_ed25519_sk': 
Confirm user presence for key ED25519-SK SHA256:CoQKA0blJ8A1xOwri167mIDb7rHxr59TYwI25ChOZ4Y
User presence confirmed

You will notice that after asking for the passphrase of the key, ssh asks you to touch the Yubikey to confirm user presence. You can read more in the tutorial from Yubico.

If you fail to touch the Yubikey in time, you will get an error like:

sign_and_send_pubkey: signing failed for ED25519-SK "/home/kdas/.ssh/id_ed25519_sk": invalid format

@blog December 21, 2021 - 00:00 • 28 days ago
Tor in 2022

It has become a tradition for me to write a blog post at the end of each year, sharing my vision for the Tor Project for the upcoming year. Before talking about what I see for us in 2022, I want to reflect on 2021 and how this has been a year of resilience for Tor.

Reflecting on a year of resilience

I’m very proud of every single person who contributed to Tor, the Tor Project staff, our core contributors, our community, and our supporters. 2020 was a year of sacrifice, but none of the stones thrown in our way stopped us from looking ahead and dreaming of a greater future. And in 2021, we bounced back to continue to shape this greater future.

The Tor Project is now financially stable and healthy after a couple of years of challenge. This is due to all the support we received, from each one of you who never gave up on the Tor Project, on our mission, and the power of Tor. This year, after layoffs in 2020, we were financially healthy and able to hire again, bringing amazing, skilled people to our teams.

With a healthy team, we were able to ship very important projects like bringing Snowflake to Tor Browser stable, discontinuing Tor Launcher and replacing it with the connect page, and launching the Tor Forum, a tool to better support our community and users.

Our project to bring congestion control to the Tor network has made tremendous progress, and we're close to seeing these features in a stable Tor release, thanks to the work done on the Shadow simulator, which allows us to test and calibrate new features for the network. Arti is now on its 0.0.2 release. Tens of thousands of users answered our usability surveys, giving us great insight into how to improve our tools. I've just named a few of the great things we accomplished in 2021.

Looking forward to 2022

My vision for 2022 is to keep Tor on this track, and our users are our priority when building this strategy.

We will continue to work on Arti so that Tor is on the right path for the future, and we will simultaneously continue to make improvements to the current Tor daemon code. Our congestion control project will make a big improvement to the user experience on the Tor network, and we want you to benefit from it right away. We already knew that page load and download speeds are big issues for our users, and our recent usability survey confirmed that. You've made it clear that it's important for us to address speed on the network, and by this time in 2022, we have high hopes that you will experience improvements.

As I write this blog post, our team and our community are dealing with new censorship attacks from Russia. When a user is facing censorship against the Tor network, it can be difficult for them to understand why they can’t connect and how exactly to change their Tor Browser configuration to circumvent this censorship. Our Anti-Censorship, UX, and Application teams have been working on this problem for a long time, and in 2022, we will ship a completely new experience that will automate the censorship detection and circumvention process, simplifying connecting to Tor for the users who need it the most.

In 2022 we will also begin the work to provide a better experience for users on mobile, starting with the Android platform, the most used mobile operating system in the world. We know that the user experience of Tor Browser for Android is different from Tor Browser for desktop. Many of the services you use in a browser on desktop (by visiting their website) have become a stand-alone app on mobile. We are doing research to better understand users' needs in this environment and will begin to design a Tor app for Android that will help you route your app connections through Tor, a ‘VPN-like’ app. This way, we can offer the robust protection of an encrypted and decentralized network like Tor for a wider variety of use cases.

The success of all of the above depends on ensuring that the Tor network is healthy. This year, our team has improved existing tools and created new ones that help us monitor the Tor network for malicious relays so that we can remove them from the network. In 2022, we will continue to improve these tools, and we will work on automating some of the steps in this process so that we can take faster action to protect the network. We will also roll out a series of initiatives to better organize our relay operator community and strengthen the relationship and trust between the Tor Project and the relay operators. This way our network, which is 100% based on volunteer (community) support, can grow and stay safe.

I see that 2022 will not be a year of resilience, but a year of passion. Our passion and your passion for the mission of the Tor Project is what keeps this fire burning. Millions of people all around the world depend on our technology, and it is the passion of the people behind it, like you, that makes Tor possible.

Thank you!

To end this post, I want to thank you for supporting Tor and for sharing this passion with us. I want to ask those who can to make a donation to the Tor Project. Your contribution is key for us to stay on track and achieve our goals for 2022. You can donate using cryptocurrency or old-fashioned fiat currency :) and get some amazing swag, including a limited edition DEF CON badge!

@blog December 20, 2021 - 00:00 • 29 days ago
New Release: Tor Browser 11.0.3

Tor Browser 11.0.3 is now available from the Tor Browser download page and also from our distribution directory

This release updates Firefox to 91.4.1esr and picks up a number of bug fixes. In particular, it should fix various extension-related issues and crashes that Windows users were experiencing. It should also solve the font-rendering problems that Linux users, especially on Ubuntu and Fedora systems, were reporting.

We also used the opportunity to upgrade various components to their respective latest versions: Tor, OpenSSL to 1.1.1m, and snowflake for enhanced censorship resistance.

Full changelog

The full changelog since Tor Browser 11.0.2 is:

Note: tor-browser#40698 and tor-browser#40721 were occurring due to the same underlying issue in Firefox. Mozilla has opted to make the ticket private, and we've followed suit on our end too.

Known issues

Tor Browser 11.0.3 comes with a number of known issues (please check the following list before submitting a new bug report):

@ooni December 20, 2021 - 00:00 • 29 days ago
iThena integration of OONI Probe boosts censorship measurement coverage worldwide
Over the last months, the iThena project integrated OONI Probe into their platform, resulting in a major spike in OONI censorship measurement coverage around the world. In this blog post, we’re excited to introduce you to iThena and discuss how they helped support censorship measurement worldwide. iThena, developed by the Cyber-Complex Foundation, is a distributed computation and measurement project based on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. ...
@kushal December 19, 2021 - 12:58 • 30 days ago
OpenSpace on Digitization, skills supply and lifelong learning

On the 8th of this month I attended a full-day OpenSpace on "Digitalisering, kompetensförsörjning och livslångt lärande" (digitalization, skills supply, and lifelong learning) organized by JobTechDev and Sunet. This was my first in-person event since Nullcon in March 2020, which brought some extra excitement. The night before, I looked up the venue and found, to my surprise, that we would be meeting at Internet Stiftelsen, the Swedish Internet Foundation.

I managed to reach the venue around 15 minutes before the event started and talked to a few people. At the beginning we all sat in a circle, and Leif & Greg (from JobTechDev) explained the format and the plan for the day. All in Swedish :P Though people moved to English after Leif pointed out that I was the only person in the room (we had 30+ participants) who neither speaks nor understands Swedish.

The board

I put in a topic on "How to run an Open Source project", and luckily all the other discussions I wanted to attend were in the same room.

So my day went by discussing (and learning a lot about different Swedish government organizations) various topics, including:

  • Micro Credentials
  • Data Licensing
  • Open Source project management
  • Solid project

During the discussion on Open Source, one thing was super clear: all the people present in the room (both developers and a good number of management folks) were convinced about writing and using Open Source technologies. My organization, Sunet, is already in a mode of writing only Open Source solutions. The rest of the orgs also agreed that they should put that in their organizational policy and make sure that they maintain proper Open Source projects. After all, we are all being paid by the government using public money.

At the end of the day we had a feedback session in the same manner as we started the day. I really loved the fact that at the very end, all the chairs were back in the exact same positions (rows/columns), and no one could even tell that there had been so many people in the room the whole day.

Among the various organizations that participated:

  • Arbetsförmedlingen
  • Skolverket
  • Myndigheten för yrkeshögskolan
  • Vetenskapsrådet
  • Universitets- och högskolerådet
  • Statistiska centralbyrån
  • Myndigheten för digital förvaltning (Digg)
  • Verket för innovationssystem (Vinnova)

Here are a few more photos from the beautiful venue.

Heart sign · Circular logo

Meeting so many people from all the different organizations was very refreshing for my mind.

@ooni December 17, 2021 - 00:00 • 1 months ago
Russia started blocking Tor
On 1st December 2021, some Internet Service Providers (ISPs) in Russia started blocking access to the Tor anonymity network. In this report, we share OONI network measurement data on the blocking of the Tor network and the Tor Project website in Russia. ...
@blog December 14, 2021 - 00:00 • 1 months ago
New Alpha Release: Tor Browser 11.5a1 (Windows, macOS, Linux)

Tor Browser 11.5a1 is now available from the Tor Browser download page and also from our distribution directory.

This is the first alpha version in the 11.5 series. This version updates Firefox to 91.4.0esr, which includes important security updates.

We are also fixing some of the known issues that were introduced with Tor Browser 11.0. In particular, this release should fix the crashes a lot of our Windows users have been seeing.

Note: A new PGP subkey was used for signing this release. You may need to refresh your keychain to get the updated key.

Full changelog

The full changelog since Tor Browser 11.0a10 is:

  • Windows + OS X + Linux
    • Update Firefox to 91.4.0esr
    • Tor Launcher 0.2.32
    • Bug 40059: YEC activist sign empty in about:tor on RTL locales
    • Bug 40386: Add new default obfs4 bridge "deusexmachina"
    • Bug 40393: Point to a forked version of pion/dtls with fingerprinting fix
    • Bug 40394: Bump version of Snowflake to 221f1c41
    • Bug 40438: Add Blockchair as a search engine
    • Bug 40646: Revert tor-browser#40475 and inherit upstream fix
    • Bug 40680: Prepare update to localized assets for YEC
    • Bug 40682: Disable network.proxy.allow_bypass
    • Bug 40684: Misc UI bug fixes in 11.0a10
    • Bug 40686: Update Onboarding link for 11.0
    • Bug 40689: Change Blockchair Search provider's HTTP method
    • Bug 40690: Browser chrome breaks when private browsing mode is turned off
    • Bug 40691: Make quickstart checkbox gray when "off" on about:torconnect
    • Bug 40698: Addon menus missing content in TB11
    • Bug 40700: Switch Firefox recommendations off by default
    • Bug 40705: "visit our website" link on about:tbupdate pointing to different locations
    • Bug 40706: Fix issue in HTTPS-Everywhere WASM
    • Bug 40714: Next button closes "How do circuits work?" onboarding tour
    • Bug 40718: Application Menu items should be sentence case
    • Bug 40721: Tabs crashing on certain pages in TB11 on Win 10
    • Bug 40725: about:torconnect missing identity block content on TB11
    • Translations update
  • Linux
    • Bug 40318: Remove check for DISPLAY env var in start-tor-browser
    • Bug 40387: Remove some fonts on Linux
@blog December 8, 2021 - 00:00 • 1 months ago
New Release: Tor Browser 11.0.2

Tor Browser 11.0.2 is now available from the Tor Browser download page and also from our distribution directory

This version updates Firefox on Windows, macOS, and Linux to 91.4.0esr. This version includes important security updates to Firefox.

Full changelog

The full changelog since Tor Browser 11.0.1 is:

  • Windows, MacOS & Linux:
    • Update Firefox to 91.4.0esr
    • Bug 40318: Remove check for DISPLAY env var in start-tor-browser
    • Bug 40386: Add new default obfs4 bridge "deusexmachina"
    • Bug 40682: Disable network.proxy.allow_bypass
  • Linux

Known issues

Tor Browser 11.0.2 comes with a number of known issues (please check the following list before submitting a new bug report):

  • Bug 40668: DocumentFreezer & file scheme
  • Bug 40382: Fonts don’t render
  • Bug 40679: Missing features on first-time launch in esr91 on MacOS
  • Bug 40667: AV1 videos show as corrupt files in Windows 8.1
  • Bug 40666: Switching svg.disable affects NoScript settings
  • Bug 40693: Potential Wayland dependency
  • Bug 40705: “visit our website” link on about:tbupdate pointing to different locations
  • Bug 40706: Fix issue in https-e wasm
@blog December 7, 2021 - 00:00 • 1 months ago
Responding to Tor censorship in Russia

Update: Right after we published this article, the Russian government has officially blocked our main website in Russia. Users can circumvent this block by visiting our website mirror.

Since December 1st, some Internet providers in Russia have started to block access to Tor. Today, we've learned that the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor), a Russian government bureaucratic entity, is threatening to censor our main website. Russia is the country with the second largest number of Tor users: more than 300,000 daily users, or 15% of all Tor users. As this situation could quickly escalate to a country-wide Tor block, it's urgent that we respond to this censorship! We need your help NOW to keep Russians connected to Tor!

Run a Tor bridge

Last month we launched the campaign Help Censored Users, Run a Tor Bridge to motivate more volunteers to spin up more bridges. The campaign has been a great success, and we've already achieved our goal of 200 new obfs4 bridges. Today, we have more than 400 new bridges.

But if the censorship pattern that we're analyzing in some Russian internet providers is deployed country-wide, we will need many more bridges to keep Russians online. Thanks to researchers, we've learned that the default bridges available in Tor Browser aren't working in some places in Russia - this includes Snowflake bridges and obfs4 bridges obtained dynamically using Moat. Russian users need to follow our guide to use bridges that are not blocked.

We are calling on everyone to spin up a Tor bridge! If you've ever considered running a bridge, now is an excellent time to get started, as your help is urgently needed. You can find the requirements and instructions for starting a bridge in the Help Censored Users, Run a Tor Bridge blog post.
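For a concrete starting point: the blog post linked above has the full requirements and instructions, but as a rough sketch, the torrc for an obfs4 bridge typically looks something like this. The ports, nickname, contact address, and obfs4proxy path below are placeholders you must adjust for your own system.

```
# torrc sketch for an obfs4 bridge (placeholders; see the official guide)
BridgeRelay 1
ORPort 9001
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 [::]:9002
ExtORPort auto
ContactInfo bridge-operator@example.com
Nickname MyObfs4Bridge
```

The key idea is that the obfs4 pluggable transport listens on a separate port and disguises Tor traffic so censors cannot fingerprint it, while the bridge itself is not listed in the public relay directory.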

We need the support of the Internet Freedom community

Teach users about Tor bridges

Digital security trainers and internet freedom advocates, your help is needed! As this instance of censorship limits direct access to our website, malicious actors could start phishing users with fake Tor Browsers or spreading disinformation about Tor. Teaching users how to bypass censorship and how to get the official Tor Browser version using GetTor or a mirror will be crucial. We need your help to spread accurate information about Tor and Tor bridges, particularly among Russian audiences.

Localize Tor

We have an extremely helpful and responsive Russian translator community, but we urgently need more volunteers. Learn how to become a Tor translator and join Tor's localization IRC channel, or use Element to connect.

Document internet censorship

Russian users can help us see how the Russian government is censoring the internet by running the OONI Probe app on their mobile or desktop devices. OONI, the Open Observatory of Network Interference, will test if and how Tor is being blocked by your internet provider. After installing, please run the "Circumvention test", which will check if circumvention tools like Tor are blocked. Internet measurements are important for detecting anomalous activity; a volunteer running OONI Probe and discussing the results with the Tor community was how we discovered the current censorship in Russia.

Apply pressure

International digital rights and human rights organizations must pressure Russia's government to immediately revert this censorship.

We will update this post if the situation changes. To receive a notification for updates, you can subscribe to our new Forum and click on the bell icon.

@ooni December 1, 2021 - 00:00 • 2 months ago
[Event Report] India, Let's Build the List
This is a guest post by The Bachchao Project, originally published here. The Bachchao Project in partnership with OONI hosted an online event on 9th and 10th October 2021 to update the Citizen Lab test list for India. The event, which was called “India, Lets build the list”, was organised to help strengthen community based monitoring of internet censorship in India. The event allowed experts from different fields to contribute to a curated list of websites that are relevant to India and which are regularly tested for censorship by volunteers in India. ...
@blog November 30, 2021 - 00:00 • 2 months ago
Privacy-Preserving and Incrementally-Deployable Support for Certificate Transparency in Tor
This is a guest post by Rasmus Dahlberg, Tobias Pulls, Tom Ritter, and Paul Syverson.

The Why of Certificate Transparency

There are many things that the web could be better at. One part relates to transparent management of TLS certificates. In case you are not familiar with certificates, websites present them to visitors in an attempt to prove their identities. For example: "I'm the website you intended to visit, and not some imposter."

The problem is that certificates can be issued by many different central authorities. If one of these authorities gets the issuance process wrong, e.g. due to mistakes, coercion, or compromise, there may be a mis-issued certificate for some domain name. A mis-issued certificate can be used by attackers to impersonate websites. This is obviously not great. In the context of Tor, it is also easy for an attacker to run an exit relay, which puts them in a perfect position to perform such attacks. Tor has been shipping the HTTPS Everywhere extension with Tor Browser for some time to stop any attackers at exit relays who try to hijack unencrypted connections. Although this is a valuable protection, it will not stop an attacker with access to a mis-issued certificate.

The goal of Certificate Transparency is to ensure that certificate mis-issuance does not go unnoticed. The idea is that before a browser accepts a certificate as valid, it must be visible in a public Certificate Transparency log. This is excellent, because now anyone can inspect the logs to see for themselves whether there are any mis-issued certificates. If something bad shows up, one can act on that accordingly.
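To make "visible in a public log" concrete: Certificate Transparency logs are append-only Merkle trees (RFC 6962), where a single root hash commits to every logged certificate, so an entry cannot be quietly removed or rewritten without changing the root. The sketch below illustrates the hashing scheme only; real logs hash encoded certificate entries rather than toy byte strings, and they also handle tree sizes that are not powers of two.

```python
import hashlib

# RFC 6962 domain-separates leaves (prefix 0x00) from interior
# nodes (prefix 0x01) so one can never be confused for the other.
def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_head(entries: list) -> bytes:
    """Root hash over a list of log entries.

    Simplification: assumes a power-of-two number of entries;
    RFC 6962 defines the split for arbitrary tree sizes.
    """
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = tree_head([b"cert-a", b"cert-b", b"cert-c", b"cert-d"])
print(root.hex())
```

Changing any single entry changes the root hash, which is why a browser (or, in the designs described below, a Tor relay acting on its behalf) can check a short inclusion proof against the root instead of downloading the whole log.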

Although the basic idea of Certificate Transparency is simple, the exact instantiation turns out to be less straightforward in the real-world. Should the public logs be parties that you trust blindly? Are there any privacy concerns? What about backwards-compatibility? These are all valid questions that we considered for Tor in particular.

The How in Tor

Like other browsers that already enforce Certificate Transparency partially, we used trusted logs as a starting point for our addition to Tor Browser. It is not ideal, but much better than what existed before. This reduces the attack surface from hundreds of trusted central authorities that issue certificates to a handful of Certificate Transparency logs: a significant win for the web!

Next, we showed how to relax these trust assumptions gradually while taking advantage of Tor Browser and the Tor network to preserve privacy. This would be a larger challenge for any browser that cannot leverage an anonymity network with thousands of relays across the globe. For example, to do better than simply having trusted logs available, you need to verify that public logging actually took place. That verification currently requires external interaction with other parties, effectively leaking browsing history to whomever you interact with. In contrast, our incremental designs let Tor relays do that verification for you. Your privacy is preserved by Tor, and aggregate leakages from Tor are reduced via caching.

For more detail we refer the interested reader to our paper and presentation.

Next steps include the following:

  • Complete partial Certificate Transparency enforcement in Firefox. This would bring Certificate Transparency with trusted logs to both Firefox and Tor Browser.
  • Create and implement a torspec proposal that uses Tor relays to increase your confidence that public logging actually happened when using Tor Browser.


It is exciting to see that Certificate Transparency can be deployed safely in Tor without being restricted to a weak attacker. Our incremental design considers an attacker that controls a fraction of Tor relays, at least one central authority that issues certificates, and all but one Certificate Transparency log. Our full design relaxes these trust assumptions further by allowing the attacker to control all Certificate Transparency logs.

@ooni November 30, 2021 - 00:00 • 2 months ago
Why Collaboration and Transparency is Key to Internet Measurement
This post was originally published on the Internet Society Pulse blog. With Internet shutdowns, disruptions and censorship events on the increase around the world, tracking where such events are happening and gathering evidence to help in the fight against them is becoming more and more important. Tracking these events is crucial because of the impact they have on society and the economy. When social media apps are blocked, for example, freedom of speech, access to information, and movement-building is hampered. ...