Planet Tor

@blog June 11, 2021 - 13:59 • 2 days ago
New Release: Tor Browser 10.5a16
New Release: Tor Browser 10.5a16 sysrqb June 11, 2021

Tor Browser 10.5a16 is now available from the Tor Browser download page and also from our distribution directory.

Note: This is an alpha release, an experimental version for users who want to help us test new features. For everyone else, we recommend downloading the latest stable release instead.

This version updates Firefox to 78.11esr and Fenix to 89.1.1. In addition, Tor Browser 10.5a16 updates Tor to 0.4.6.4-rc. This version includes important security updates to Firefox for Desktop and security updates for Android.

Warning:
Tor Browser Alpha does not support version 2 onion services. Tor Browser (Stable) will stop supporting version 2 onion services later this year. Please see the previously published deprecation timeline regarding Tor version 0.4.6. Migrate your services and update your bookmarks to version 3 onion services as soon as possible.

The full changelog since Tor Browser 10.5a15:

  • All Platforms
    • Update NoScript to 11.2.8
    • Update Tor to 0.4.6.4-rc
    • Bug 40432: Prevent probing installed applications
  • Windows + OS X + Linux
    • Update Firefox to 78.11.0esr
    • Bug 40037: Announce v2 onion service deprecation on about:tor
    • Bug 40428: Correct minor Cryptocurrency warning string typo
  • Android
    • Update Fenix to 89.1.1
    • Bug 40055: Rebase android-components patches on 75.0.22 for Fenix 89
    • Bug 40165: Announce v2 onion service deprecation on about:tor
    • Bug 40169: Rebase fenix patches to fenix v89.1.1
    • Bug 40170: Error building tor-browser-89.1.1-10.5-1
    • Bug 40453: Rebase tor-browser patches to 89.0
  • Build System
    • All Platforms
      • Update Go to 1.15.12
    • Android
      • Bug 40290: Update components for mozilla89-based Fenix
...
@blog June 2, 2021 - 16:17 • 11 days ago
New Release: Tor Browser 10.0.17
New Release: Tor Browser 10.0.17 sysrqb June 02, 2021

Tor Browser 10.0.17 is now available from the Tor Browser download page and also from our distribution directory.

This version updates Firefox to 78.11esr. In addition, Tor Browser 10.0.17 updates NoScript to 11.2.8, HTTPS Everywhere to 2021.4.15, and Tor to 0.4.5.8. This version includes important security updates to Firefox for Desktop.

Warning:
Tor Browser will stop supporting version 2 onion services later this year. Please see the previously published deprecation timeline. Migrate your services and update your bookmarks to version 3 onion services as soon as possible.

Note: The Android Tor Browser update will be available next week.

The full changelog since Desktop Tor Browser 10.0.16:

  • Windows + OS X + Linux
    • Update Firefox to 78.11.0esr
    • Update HTTPS Everywhere to 2021.4.15
    • Update NoScript to 11.2.8
    • Update Tor to 0.4.5.8
    • Bug 27002: (Mozilla 1673237) Always allow SVGs on about: pages
    • Bug 40432: Prevent probing installed applications
    • Bug 40037: Announce v2 onion service deprecation on about:tor
...
@blog May 28, 2021 - 16:54 • 16 days ago
Dreaming at Dusk: the Tor Project’s NFT Auction & What’s Next
Dreaming at Dusk: the Tor Project’s NFT Auction & What’s Next Al Smith May 28, 2021

In mid-May, the Tor Project held a nonfungible token (NFT) auction of a generative art piece we called Dreaming at Dusk, created by artist Itzel Yard (ixshells) and derived from the private key of the first onion service, Dusk.

This auction was held on Foundation and resulted in a final bid of 500 Ethereum (ETH), roughly $2M USD at the time of the auction, with the proceeds going towards the Tor Project and our work to improve and promote Tor.

Raising roughly $2M USD in one day breaks all records of individual giving we could possibly imagine, and we are extremely humbled and grateful for the success of this auction and what this means for the Tor Project nonprofit organization.

We deeply appreciate everyone who shared this effort and followed along, and want to share more about why we held this auction, the artist ixshells, and what happens next with the money raised.

Why auction an NFT?

If you have been following the Tor Project, you will know that 2020 was a difficult year for the organization (as for many nonprofits, small businesses, and people). We made the difficult decision to lay off one third of our staff in April 2020. Beyond the challenges brought on by COVID-19 and economic changes in 2020, Open Technology Fund—a long-time supporter and funder of the Tor Project and other efforts in our ecosystem—faced a political attack that froze its funds and halted one of our contracts.

Despite these challenges, we have made strong strides in regaining solid financial footing, much of which is a result of the 2020 year-end campaign (#UseAMaskUseTor) and your generous support during this time. We’ve also been able to re-hire several staff members and about a year on from that moment, we are in a better place.

Still, these disruptions made it very clear that our goal of diversifying our funding sources is critical, and that having a solid reserve of general operating funds would help us weather any future storms and keep the Tor Project a strong nonprofit for a long time. We are always looking for strategies to raise these kinds of funds.

Over the last several months, Tor community members have discussed the idea of auctioning an NFT as a fundraising campaign—could an effort like this help to raise general operating funds and keep Tor strong? After Freedom of the Press Foundation had such an epic success with their NFT auction with Edward Snowden for the piece called Stay Free, and after we saw that our specific audience responded positively to this auction, we decided to hold an auction of our own.

The NFT & the artist

We wanted to honor a piece of Tor history with this process, and with the deprecation of v2 onion services coming up rapidly, we decided to honor the very first .onion website known as Dusk, or duskgytldkxiuqc6.onion. We decided that the winner of our auction would receive two things: (1) the private cryptographic key used to create Dusk, and (2) a one-of-a-kind art piece generated using this key. We wanted this to be an opportunity to own a piece of history from the origins of the decentralized internet.

 

We decided to partner with ixshells, an artist from Panama, to create this piece. Beyond creating a one-of-a-kind piece of art, she helped us to mobilize the NFT community and raise awareness about what we do for privacy online among many who had never heard of Tor.

NFTs and climate change

We wanted to be mindful of the impact of the blockchain on climate change. Part of our decision to move forward came after we looked into the efforts Ethereum has been putting forward to address their part in this—here is a blog post from them that just came out about moving ETH from PoW to PoS and what this means for ETH’s climate impact.

We also decided that instead of buying carbon offsets as part of this auction, we would put money in the hands of those who are on the frontlines fighting for our planet. We chose to donate to the Munduruku Wakoborũn Women's Association, a grassroots indigenous organization in Pará, Brazil.

At the time of the auction, we decided to help them because illegal miners had attacked, burned, and destroyed their office. But more recently the home of their coordinator, Maria Leusa Kaba, was also burned and destroyed. If others would like to support their work, you can find more information here. The Munduruku Wakoborũn Women's Association is yet another example of the kinds of organizations and communities for whom we build our technology—people who need help to stay safe online in order to keep fighting for their rights. We invite others to help them as well.

Results of the auction

After roughly 24 hours of bidding, the NFT sold to the highest bidder, PleasrDAO, the decentralized autonomous organization that also purchased Edward Snowden's Stay Free (for 2,224 ETH, roughly $5.5 million at the time of that sale).

As a result of this auction, ixshells became the highest selling female NFT artist on Foundation. She was an excellent partner in this process, and we hope you check out the rest of her awesome generative artwork.

What’s next

The funds raised from this auction will help us to:

  1. Continue our work with grassroots communities in the Global South, providing training on privacy and security online, and give a percentage of the proceeds to one of these organizations.
  2. Improve the security of our network. Dusk might go, but v3 onion services are here to stay. These funds will help make onion services more resilient against deanonymization and denial of service (DoS) attacks.
  3. Work on Arti, a rewrite of Tor in Rust, which improves Tor's security, makes it more sustainable, easier to improve, and lighter-weight for mobile integration.
  4. Keep Tor strong for whistleblower solutions like SecureDrop and GlobaLeaks, used by sources to communicate about important stories with journalists without sacrificing their anonymity.

And of course, save a bit of it for our future.

...
@blog May 28, 2021 - 16:28 • 16 days ago
New release candidate: Tor 0.4.6.4-rc
New release candidate: Tor 0.4.6.4-rc nickm May 28, 2021

There's a new release candidate available for download. If you build Tor from source, you can download the source code for 0.4.6.4-rc from the download page on the website. Packages should be available over the coming weeks, with a new alpha Tor Browser release likely next week.

Remember, this is not a stable release yet, but we still hope that people will try it out and look for bugs before the official stable release comes out in June.

Tor 0.4.6.4-rc fixes a few bugs from previous releases. This is, we hope, the final release candidate in its series: unless major new issues are found, the next release will be stable.

Changes in version 0.4.6.4-rc - 2021-05-28

  • Minor features (compatibility):
    • Remove an assertion function related to TLS renegotiation. It was used nowhere outside the unit tests, and it was breaking compilation with recent alpha releases of OpenSSL 3.0.0. Closes ticket 40399.
  • Minor bugfixes (consensus handling):
    • Avoid a set of bugs that could be caused by inconsistently preferring an out-of-date consensus stored in a stale directory cache over a more recent one stored on disk as the latest consensus. Fixes bug 40375; bugfix on 0.3.1.1-alpha.

 

  • Minor bugfixes (control, sandbox):
    • Allow the control command SAVECONF to succeed when the seccomp sandbox is enabled, and make SAVECONF keep only one backup file to simplify implementation. Previously SAVECONF allowed a large number of backup files, which made it incompatible with the sandbox. Fixes bug 40317; bugfix on 0.2.5.4-alpha. Patch by Daniel Pinto.
  • Minor bugfixes (metrics port):
    • Fix a bug that made tor try to re-bind() on an already open MetricsPort every 60 seconds. Fixes bug 40370; bugfix on 0.4.5.1-alpha.
  • Removed features:
    • Remove unneeded code for parsing private keys in directory documents. This code was only used for client authentication in v2 onion services, which are now unsupported. Closes ticket 40374.
...
@ooni May 27, 2021 - 00:00 • 18 days ago
Making the OONI Probe Android app more resilient
We recently made OONI Probe Android more robust against accidental or deliberate blocking of our backend services. Specifically, we implemented support for specifying a proxy that speaks with OONI’s backend services. We also improved the build process to influence the TLS Client Hello fingerprint, which helps with avoiding accidental blocking. Since late 2020, community members have been reporting specific OONI Probe Android failures. ...
@blog May 26, 2021 - 16:57 • 18 days ago
Announcing new Board members
Announcing new Board members isabela May 26, 2021

We are excited to announce that three new members are joining the Tor Project’s Board of Directors: Alissa Cooper, Desigan (Dees) Chinniah, and Kendra Albert! Each new member comes to Tor with a different set of expertise that will help the organization and our community. At the end of this post, you can read each of their bios.

Please join us in welcoming Alissa, Dees, and Kendra to the Board!

Alissa Cooper is a Chief Technology Officer and Fellow at Cisco Systems and has served in a variety of leadership roles in the Internet Engineering Task Force (IETF). We are excited that Alissa is joining the Board; her expertise will help Tor continue to mature as an organization.

"I am honored to be joining the board of the Tor Project, an organization I have long admired for producing one of the most powerful and enduring privacy technologies on the Internet.”  — Alissa

Desigan Chinniah (aka @cyberdees) is a long-time supporter of Tor. He is a creative technologist with a strong background in the Free Software movement as well as in industry, with experience as an investor and in product. We are looking forward to his contributions to the Board and to Tor.

"I've cheered on Tor from afar for many years during my time at Mozilla Firefox and beyond. More recently I've seen just how powerful it's efforts in privacy infrastructure for the internet can be during the #EndSARS, #FreeHongKong, #BlackLivesMatter and other pivotal movements. I’m humbled and honored to join this special community to push forward their mission." — Dees

Kendra Albert is a public interest technology lawyer with a special interest in computer security and in protecting marginalized speakers and users. They serve as a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice law by working with pro bono clients. We are also honored to have Kendra with us; their legal expertise will be a big bonus to Tor.

"I have long admired the Tor Project's work in protecting private access to the Internet, especially in a time of increasing crackdowns on adult content and pervasive corporate surveillance. I'm honored to join Tor's board and play a part in its future." — Kendra

And as a reminder, the other nine members of the Tor Project’s Board are: Bruce Schneier, Cindy Cohn, Chelsea Komlo, Gabriella Coleman, Julius Mittenzwei, Matt Blaze, Nighat Dad, Rabbi Rob, and Ramy Raoof.

Full Biographies of Incoming Board Members:

Alissa Cooper: Alissa Cooper is a VP/CTO and Fellow at Cisco Systems. Her work advances the state of the art at the intersection of engineering, policy, and technical standards. She previously served as Vice President of Technology Standards at Cisco and in a variety of leadership roles in the Internet Engineering Task Force (IETF), including serving as IETF Chair from 2017 to 2021. She served as the chair of the IANA Stewardship Coordination Group (ICG) from 2014 to 2016. At Cisco she was responsible for driving privacy and policy strategy within the company's portfolio of real-time collaboration products before being appointed as IETF Chair. Prior to joining Cisco, Alissa served as the Chief Computer Scientist at the Center for Democracy and Technology, where she was a leading public interest advocate and technologist on issues related to privacy, net neutrality, and technical standards. Alissa holds a PhD from the Oxford Internet Institute and MS and BS degrees in computer science from Stanford University.

Desigan Chinniah: Dees or cyberdees, is a creative technologist. He has a portfolio of advisory roles and board positions within technology organizations in areas that include machine learning on encrypted data via homomorphic encryption (Zama), connectivity and edge of the network content delivery within emerging markets (BRCK), and alternative business models for the web via open protocols and web standards (Coil). Dees co-created Grant for the Web, a $100M philanthropic fund to boost open, fair, and inclusive standards and innovation for creators. He occasionally makes early stage investments with a focus on diverse and unrepresented founders. Dees is a stalwart of the web and has had check-ins at various dot-coms most notably almost a decade at Mozilla, starting at Firefox 3.8. A self-confessed geek, Dees lives in London with his wife, Sanne and their two kids, Summer Skye & Kiran Quinn. Visit: https://desiganchinniah.com for more. 

Kendra Albert: Kendra Albert is a public interest technology lawyer with a special interest in computer security and in protecting marginalized speakers and users. They serve as a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice law by working with pro bono clients. Kendra is also the founder and director of the Initiative for a Representative First Amendment. Before they joined the Clinic, Kendra worked with Marcia Hofmann at Zeitgeist Law. They serve on the board of the ACLU of Massachusetts, and as a legal advisor for Hacking // Hustling.
 

...
@meejah May 26, 2021 - 00:00 • 19 days ago
Libera dot chat
Apparently some drama ...
@anarcat May 24, 2021 - 18:04 • 20 days ago
Leaving Freenode

The freenode IRC network has been hijacked.

TL;DR: move to libera.chat or OFTC.net, as did countless free software projects including Gentoo, CentOS, KDE, Wikipedia, FOSDEM, and more. Debian and the Tor project were already on OFTC and are not affected by this.

What is freenode and why should I care?

freenode is the largest remaining IRC network. Before this incident, it had close to 80,000 users, which is small by modern internet standards -- even small social networks are larger by multiple orders of magnitude -- but large in IRC history. The network is also extensively used by the free software community: it is the default IRC network for many programs, and is used by hundreds if not thousands of free software projects.

I have been using freenode since at least 2006.

This matters if you care about IRC, the internet, open protocols, decentralisation, and, to a certain extent, federation as well. It also touches on who has rights over network resources: the people who "own" them (through money) or the people who make them work (through their labor). I am biased towards open protocols, the internet, federation, and worker power, and this might taint this analysis.

What happened?

It's a long story, but basically:

  1. back in 2017, the former head of staff sold the freenode.net domain (and its related company) to Andrew Lee, "American entrepreneur, software developer and writer", and, rather weirdly, supposedly "crown prince of Korea" although that part is kind of complex (see House of Yi, Yi Won, and Yi Seok). It should be noted the Korean Empire hasn't existed for over a century at this point (even though its flag, also weirdly, remains)

  2. back then, this was only known to the public as this strange PIA and freenode joining forces gimmick. it was suspicious at first, but since the network kept running, no one paid much attention to it. opers of the network were similarly reassured that Lee would have no say in the management of the network

  3. this all changed recently when Lee asserted ownership of the freenode.net domain and started meddling in the operations of the network, according to this summary. this part is disputed, but it is corroborated by almost a dozen former staff who collectively resigned from the network in protest, after legal threats, when it was obvious freenode was lost.

  4. the departing freenode staff founded a new network, irc.libera.chat, based on the new ircd they were working on with OFTC, solanum

  5. meanwhile, bot armies started attacking all IRC networks: both libera and freenode, but also OFTC and unrelated networks like a small one I help operate. those attacks have mostly stopped as of this writing (2021-05-24 17:30UTC)

  6. on freenode, however, things are going for the worse: Lee has been accused of taking over a channel, in a grotesque abuse of power; then changing freenode policy to not only justify the abuse, but also remove rules against hateful speech, effectively allowing nazis on the network (update: the change was reverted, but not by Lee)

Update: even though the policy change was reverted, the actual conversations allowed on freenode have already degenerated into toxic garbage. There are also massive channel takeovers (presumably over 700), mostly on channels that were redirecting to libera, but also channels that were still live. Channels that were taken over include #fosdem, #wikipedia, #haskell...

Instead of working on the network, the new "so-called freenode" staff is spending effort writing bots and patches to basically automate taking over channels. I run an IRC network and this bot is obviously not standard "services" stuff... This is just grotesque.

At this point I agree with this HN comment:

We should stop implicitly legitimizing Andrew Lee's power grab by referring to his dominion as "Freenode". Freenode is a quarter-century-old community that has changed its name to libera.chat; the thing being referred to here as "Freenode" is something else that has illegitimately acquired control of Freenode's old servers and user database, causing enormous inconvenience to the real Freenode.

I don't agree with the suggested name there; let's instead call it "so called freenode", as suggested later in the thread.

What now?

I recommend people and organisations move away from freenode as soon as possible. This is a major change: documentation needs to be fixed, and the migration needs to be coordinated. But I do not believe we can trust the new freenode "owners" to operate the network reliably and in good faith.

It's also important to use the current momentum to build a critical mass elsewhere so that people don't end up on freenode again by default and find an even more toxic community than your typical run-of-the-mill free software project (which is already not a high bar to meet).

Update: people are moving to libera in droves. It's now reaching 18,000 users, which is bigger than OFTC and getting close to the largest traditional IRC networks (EFnet, Undernet, IRCnet are in the 10-20k users range). so-called freenode is still larger, currently clocking 68,000 users, but that's a huge drop from the previous count, which was 78,000 before the exodus began. We're even starting to see the effects of the migration on netsplit.de.

Update 2: the isfreenodedeadyet.com site is updated more frequently than netsplit and shows tons more information. It shows 25k online users for libera and 61k for so-called freenode (down from ~78k), and the trend doesn't seem to be stopping for so-called freenode. There's also a list of 400+ channels that have moved out. Keep in mind that such migrations take effect over long periods of time.

Where do I move to?

The first thing you should do is to figure out which tool to use for interactive user support. There are multiple alternatives, of course -- this is the internet after all -- but here is a short list of suggestions, in preferred priority order:

  1. irc.libera.chat
  2. irc.OFTC.net
  3. Matrix.org, a modern IRC alternative which bridges with OFTC and (hopefully soon) with libera as well
  4. XMPP/Jabber also still exists, if you're into that kind of stuff, but I don't think the "chat room" story is great there, at least not as good as Matrix

Basically, the decision tree is this:

  • if you want to stay on IRC:
    • if you are already on many OFTC channels and few freenode channels: move to OFTC
    • if you are more inclined to support the previous freenode staff: move to libera
    • if you care about matrix users (in the short term): move to OFTC
  • if you are ready to leave IRC:
    • if you want the latest and greatest: move to Matrix
    • if you like XML and already use XMPP: move to XMPP

Frankly, at this point, everyone should seriously consider moving to Matrix. The user story is great, the web is a first-class user, it supports E2EE (as does XMPP), and it has a lot of momentum behind it. It even bridges well with IRC (which is not the case for XMPP), so it's a good option if you're worried about problems like this happening again.

(Indeed, I wouldn't be surprised if similar drama happens on OFTC or libera in the future. The history of IRC is full of such epic controversies, takeovers, sabotage, attacks, technical flamewars, and other silly things. I am not sure, but I suspect a federated model like Matrix might be more resilient to conflicts like this one.)

Changing protocols might mean losing a bunch of users however: not everyone is ready to move to Matrix, for example. Graybeards like me have been using irssi for years, if not decades, and would take quite a bit of convincing to move elsewhere.

I have mostly kept my channels on IRC, and moved either to OFTC or libera. In retrospect, I think I might have moved everything to OFTC if I had thought about it more, because almost all of my channels are there. But I kind of expect a lot of the freenode community to move to libera, so I am keeping a socket open there anyways.

How do I move?

The first thing you should do is to update documentation, websites, and source code to stop pointing at freenode altogether. This is what I did for feed2exec, for example. You need to let people know in the current channel as well, and possibly shut down the channel on freenode.

Since my channels are either small or empty, I took the radical approach of:

  • redirecting the channel to ##unavailable, which is historically the way we show channels have moved to another network
  • making the channel invite-only (which effectively enforces the redirection)
  • kicking everyone out of the channel
  • kickbanning people who rejoin
  • setting the topic to announce the change

In IRC speak, the following commands should do all this:

/msg ChanServ set #anarcat mlock +if ##unavailable
/msg ChanServ clear #anarcat users moving to irc.libera.chat
/msg ChanServ set #anarcat restricted on
/topic #anarcat this channel has moved to irc.libera.chat

If the channel is not registered, the following might work:

/mode #anarcat +if ##unavailable

Then you can leave freenode altogether:

/disconnect Freenode unacceptable hijack, policy changes and takeovers. so long and thanks for all the fish.

Keep in mind that some people have been unable to setup such redirections, because the new freenode staff have taken over their channel, in which case you're out of luck...

Some people have expressed concern about their private data hosted at freenode as well. If you care about this, you can always talk to NickServ and DROP your nick. Be warned, however, that this assumes good faith of the network operators, which, at this point, is kind of futile. I would assume any data you have registered on there (typically: your NickServ password and email address) to be compromised and leaked. If your password is used elsewhere (tsk, tsk), change it everywhere.
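
For reference, with the Atheme services that freenode ran, dropping a registered nick looks roughly like the commands below. This is a sketch from memory rather than something from my actual session; check /msg NickServ HELP DROP first, and note that newer Atheme versions ask you to confirm the drop:

/msg NickServ IDENTIFY yourpassword
/msg NickServ DROP yournick yourpassword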

Update: there's also another procedure, similar to the above, but with a different approach. Keep in mind that so-called freenode staff are actively hijacking channels for the mere act of mentioning libera in the channel topic, so tread carefully there.

Last words

This is a sad time for IRC in general, and freenode in particular. It's a real shame that the previous freenode staff have been kicked out, and it's especially horrible that the new policies of the network are basically making it open to nazis. I wish things had gone differently: now we have yet another fork in IRC history. While it's not the first time freenode has changed its name (it was called OPN before), this time the old freenode is still around, and this will bring much confusion to the world, especially since the new freenode staff is still claiming to support FOSS.

I understand there are many sides to this story, and some people were deeply hurt by all this. But for me, it's completely unacceptable to keep pushing your staff so hard that they basically all (except one?) resign in protest. For me, that's leadership failure at its utmost, and a complete disgrace. And of course, I can't in good conscience support or join a network that allows hate speech.

Regardless of the fate of whatever we'll call what's left of freenode, maybe it's time for this old IRC thing to die already. It's still a sad day in internet history, but then again, maybe IRC will never die...

...
@asn May 17, 2021 - 00:00 • 28 days ago
The Token Zoo is now open!

We recently launched the Token Zoo!

The Token Zoo is a knowledge base of various published (and unpublished) anonymous credential schemes and their properties. It might prove useful to you, if - like me - you’ve spent months going through the vast anonymous credential literature and you feel like you need some hand holding!

...
@atagar May 16, 2021 - 23:46 • 28 days ago
Status Report for May 2021

Hi all! This month I moved on from Tor to begin volunteering with Wikipedia. Covid taught me the importance of face to face contact, and Wikipedia has local Seattle meetups that could scratch an itch Tor didn’t.

Something I desperately look forward to now that I have…

Vaccine


To get my feet wet I invested this month toward standardizing and making minor adjustments to pywikibot

...
@blog May 13, 2021 - 19:40 • 1 months ago
Dreaming At Dusk
Dreaming At Dusk root May 13, 2021

 

 

The first star of a dying galaxy. Curves taking over while bits dream at dusk.

 
Welcome to the onion space. We've been here since 2004, and we grow every day. In a few months, some onions will rot, while others will blossom.
 
 
Dusk was the first onion; now there are hundreds of thousands.
 
 
In an entirely different corner of the universe, where smart contracts thrive, Dusk is being auctioned. You can bid for it by interacting with functions living in a chain of blocks.
 
The Dusk auction will last about 24 hours. It will end on Friday at around 20:00 UTC. When it ends, the winning bidder becomes the owner of a generative art piece created in collaboration with the artist @ixshells and derived directly from the Dusk private key. In November, the owner of the NFT will also receive the private key of Dusk directly from its current holder.
 
We know that mining takes a heavy toll on the environment. Part of the auction winnings will be donated to a grassroots organization fighting on the frontlines of the climate crisis. At the same time, we've been actively monitoring Ethereum's development, and we believe that their effort to move away from PoW is a fight worth fighting and can't come soon enough.
 
Welcome to The Auction. Please take your seat :)
...
@blog May 10, 2021 - 14:50 • 1 months ago
New release: Tor 0.4.5.8
New release: Tor 0.4.5.8 nickm May 10, 2021

We have a new stable release today. If you build Tor from source, you can download the source code for Tor 0.4.5.8 on the download page. Packages should be available within the next several weeks, with a new Tor Browser likely next week.

Tor 0.4.5.8 fixes several bugs in earlier versions, backporting fixes from the 0.4.6.x series.

Changes in version 0.4.5.8 - 2021-05-10

  • Minor features (compatibility, Linux seccomp sandbox, backport from 0.4.6.3-rc):
    • Add a workaround to enable the Linux sandbox to work correctly with Glibc 2.33. This version of Glibc has started using the fstatat() system call, which previously our sandbox did not allow. Closes ticket 40382; see the ticket for a discussion of trade-offs.
  • Minor features (compilation, backport from 0.4.6.3-rc):
    • Make the autoconf script build correctly with autoconf versions 2.70 and later. Closes part of ticket 40335.

 

  • Minor features (fallback directory list, backport from 0.4.6.2-alpha):
    • Regenerate the list of fallback directories to contain a new set of 200 relays. Closes ticket 40265.
  • Minor features (geoip data):
    • Update the geoip files to match the IPFire Location Database, as retrieved on 2021/05/07.
  • Minor features (onion services):
    • Add warning message when connecting to now deprecated v2 onion services. As announced, Tor 0.4.5.x is the last series that will support v2 onions. Closes ticket 40373.
  • Minor bugfixes (bridge, pluggable transport, backport from 0.4.6.2-alpha):
    • Fix a regression that made it impossible to start Tor using a bridge line with a transport name and no fingerprint. Fixes bug 40360; bugfix on 0.4.5.4-rc.
  • Minor bugfixes (build, cross-compilation, backport from 0.4.6.3-rc):
    • Allow a custom "ar" for cross-compilation. Our previous build script had used the $AR environment variable in most places, but it missed one. Fixes bug 40369; bugfix on 0.4.5.1-alpha.
  • Minor bugfixes (channel, DoS, backport from 0.4.6.2-alpha):
    • Fix a non-fatal BUG() message due to a too-early free of a string, when listing a client connection from the DoS defenses subsystem. Fixes bug 40345; bugfix on 0.4.3.4-rc.
  • Minor bugfixes (compiler warnings, backport from 0.4.6.3-rc):
    • Fix an indentation problem that led to a warning from GCC 11.1.1. Fixes bug 40380; bugfix on 0.3.0.1-alpha.
  • Minor bugfixes (controller, backport from 0.4.6.1-alpha):
    • Fix a "BUG" warning that would appear when a controller chooses the first hop for a circuit, and that circuit completes. Fixes bug 40285; bugfix on 0.3.2.1-alpha.
  • Minor bugfixes (onion service, client, memory leak, backport from 0.4.6.3-rc):
    • Fix a bug where an expired cached descriptor could get overwritten with a new one without freeing it, leading to a memory leak. Fixes bug 40356; bugfix on 0.3.5.1-alpha.
  • Minor bugfixes (testing, BSD, backport from 0.4.6.2-alpha):
    • Fix pattern-matching errors when patterns expand to invalid paths on BSD systems. Fixes bug 40318; bugfix on 0.4.5.1-alpha. Patch by Daniel Pinto.
...
@blog May 10, 2021 - 14:46 • 1 months ago
New release candidate: Tor 0.4.6.3-rc
New release candidate: Tor 0.4.6.3-rc nickm May 10, 2021

There's a new release candidate available for download. If you build Tor from source, you can download the source code for Tor 0.4.6.3-rc from the download page on the website. Packages should be available over the coming weeks, with a new Tor Browser release likely next week.

Tor 0.4.6.3-rc is the first release candidate in its series. It fixes a few small bugs from previous versions, and adds a better error message when trying to use (no longer supported) v2 onion services.

Though we anticipate that we'll be doing a bit more clean-up between now and the stable release, we expect that our remaining changes will be fairly simple. There will likely be at least one more release candidate before 0.4.6.x is stable.

Changes in version 0.4.6.3-rc - 2021-05-10

  • Major bugfixes (onion service, control port):
    • Make the ADD_ONION command properly configure client authorization. Before this fix, the created onion failed to add the client(s). Fixes bug 40378; bugfix on 0.4.6.1-alpha.
  • Minor features (compatibility, Linux seccomp sandbox):
    • Add a workaround to enable the Linux sandbox to work correctly with Glibc 2.33. This version of Glibc has started using the fstatat() system call, which previously our sandbox did not allow. Closes ticket 40382; see the ticket for a discussion of trade-offs.

 

  • Minor features (compilation):
    • Make the autoconf script build correctly with autoconf versions 2.70 and later. Closes part of ticket 40335.
  • Minor features (geoip data):
    • Update the geoip files to match the IPFire Location Database, as retrieved on 2021/05/07.
  • Minor features (onion services):
    • Add a warning message when trying to connect to (no longer supported) v2 onion services. Closes ticket 40373.
  • Minor bugfixes (build, cross-compilation):
    • Allow a custom "ar" for cross-compilation. Our previous build script had used the $AR environment variable in most places, but it missed one. Fixes bug 40369; bugfix on 0.4.5.1-alpha.
  • Minor bugfixes (compiler warnings):
    • Fix an indentation problem that led to a warning from GCC 11.1.1. Fixes bug 40380; bugfix on 0.3.0.1-alpha.
  • Minor bugfixes (logging, relay):
    • Emit a warning if an Address is found to be internal and tor can't use it. Fixes bug 40290; bugfix on 0.4.5.1-alpha.
  • Minor bugfixes (onion service, client, memory leak):
    • Fix a bug where an expired cached descriptor could get overwritten with a new one without freeing it, leading to a memory leak. Fixes bug 40356; bugfix on 0.3.5.1-alpha.
...
@blog May 5, 2021 - 14:33 • 1 months ago
Check the status of Tor services with status.torproject.org
Check the status of Tor services with status.torproject.org anarcat May 05, 2021

The Tor Project now has a status page which shows the state of our major services.

You can check status.torproject.org for news about major outages in Tor services, including v3 and v2 onion services, directory authorities, our website (torproject.org), and the check.torproject.org tool. The status page also displays outages related to Tor internal services, like our GitLab instance.

This post documents why we launched status.torproject.org, how the service was built, and how it works.

Why a status page

The first step in setting up a status page was to realize we needed one in the first place. I surveyed internal users at the end of 2020 to see what could be improved, and one of the suggestions that came up was to "document downtimes of one hour or longer" and generally improve communications around monitoring. The latter is still on the sysadmin roadmap, but a status page seemed like a good solution for the former.

We already have two monitoring tools in the sysadmin team: Icinga (a fork of Nagios) and Prometheus, with Grafana dashboards. But those are hard to understand for users. Worse, they also tend to generate false positives, and don't clearly show users which issues are critical.

In the end, a manually curated dashboard provides important usability benefits over an automated system, and all major organisations have one.

Picking the right tool

It wasn't my first foray into status page design. In another life, I had set up a status page using a tool called Cachet. That was already a great improvement over the previous solutions, which were to use first a wiki and then a blog to post updates. But Cachet is a complex Laravel app, which also requires a web browser to update. It generally requires more maintenance than we'd like, needing silly things like a SQL database and a PHP web server.

So when I found cstate, I was pretty excited. It's basically a theme for the Hugo static site generator, which means that it's a set of HTML, CSS, and a sprinkle of Javascript. And being based on Hugo means that the site is generated from a set of Markdown files and the result is just plain HTML that can be hosted on any web server on the planet.
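
To give a sense of what those Markdown files look like, here is a sketch of a cState incident entry. The front matter fields follow cState's documented conventions, and the content is invented for illustration rather than copied from our repository:

---
title: GitLab outage
date: 2021-05-05 05:00:00
resolved: true
resolvedWhen: 2021-05-05 07:00:00
severity: down
affected:
  - GitLab
section: issue
---

The GitLab instance was unreachable while it was migrated to a new server.
Service has since been restored.

Opening or resolving an incident is then just a commit that adds or edits one such file.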

Deployment

At first, I wanted to deploy the site through GitLab CI, but at that time we didn't have GitLab pages set up. Even though we do have GitLab pages set up now, it's not (yet) integrated with our mirroring infrastructure. So, for now, the source is hosted and built in our legacy git and Jenkins services.

It is nice to have the content hosted in a git repository: sysadmins can just edit Markdown in the git repository and push to deploy changes, no web browser required. And it's trivial to set up a local environment to preview changes:

hugo serve --baseUrl=http://localhost/
firefox http://localhost:1313/

Only the sysadmin team and gitolite administrators have access to the repository, at this stage, but that could be improved if necessary. Merge requests can also be issued on the GitLab repository and then pushed by authorized personnel later on, naturally.

Availability

One of the concerns I have is that the site is hosted inside our normal mirror infrastructure. Naturally, if an outage occurs there, the site goes down. But I figured it's a bridge we'll cross when we get there. Because it's so easy to build the site from scratch, it's actually trivial to host a copy of the site on any GitLab server, thanks to the .gitlab-ci.yml file shipped (but not currently used) in the repository. If push comes to shove, we can just publish the site elsewhere and point DNS there.
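
For a Hugo site, that .gitlab-ci.yml can be as small as the stock GitLab Pages recipe sketched below; this is the generic example, not necessarily the exact file shipped in our repository:

image: registry.gitlab.com/pages/hugo:latest

pages:
  script:
    - hugo                 # build the site into public/
  artifacts:
    paths:
      - public             # GitLab Pages serves this directory
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH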

And, of course, if DNS fails us, then we're in trouble, but that's the situation anyway: we can always register a new domain name for the status page when we need to. It doesn't seem like a priority at the moment.

Comments and feedback are welcome!

...
@pastly May 3, 2021 - 15:30 • 1 months ago
How I set up my websites with Tor and Nginx

I was recently asked how I setup my websites to:

  1. Redirect HTTP to HTTPS when not accessed via an onion service.
  2. Serve the website over HTTPS when not accessed via an onion service.
  3. Serve the website over HTTP when accessed via an onion service.

I will further explain:

  • How the .onion available button is obtained in my setup.
  • How to add an onion Alt-Svc that works.

I have a very simple setup. I have a tor daemon running on the same machine as nginx. As most of my websites are static, nginx serves their files directly in most cases. There is no software between tor and nginx; if there is for you, that drastically changes things and this post may be of little use to you. If you have extra software "behind" nginx (e.g. a python app generating a dynamic website), most likely this post will still be useful to you. For example, instead of telling nginx this like I do:

location / {
        try_files $uri $uri/ =404;
}

You might be telling nginx this:

location / {
        include proxy_params;
        proxy_pass http://unix:/path/to/some/app.sock;
}

I use Let's Encrypt as my CA, via the Certbot client, and Certbot automatically generates some of the nginx config you will see below.
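
For reference, the Certbot run that produces this kind of "managed by Certbot" config is roughly the following; the domain is the example used throughout this post, and the exact flags depend on how Certbot was installed:

sudo certbot --nginx -d flashflow.pastly.xyz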

All of the nginx config blocks are in one file, /etc/nginx/sites-available/flashflow.pastly.xyz. As is standard with nginx on Debian, there's a symlink to that file in /etc/nginx/sites-enabled/ and /etc/nginx/nginx.conf was already set to load files in /etc/nginx/sites-enabled/.

This post uses flashflow.pastly.xyz and its onion address as an example. Whenever you see flashflow.pastly.xyz or its onion address, mentally replace the domains with your own.

Redirect HTTP to HTTPS when not accessed via an onion service.

This is entirely handled by nginx and uses a server {} block automatically generated by Certbot. It is this:

server {
    if ($host = flashflow.pastly.xyz) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name flashflow.pastly.xyz;
    return 404; # managed by Certbot
}

All this block does is redirect to HTTPS. It is used when the user is visiting flashflow.pastly.xyz on port 80, as indicated by the server_name and listen lines.

Serve the website over HTTPS when not accessed via an onion service.

This is entirely handled by nginx. Again as the server_name and listen lines indicate, this block is used when the user is visiting flashflow.pastly.xyz on port 443 (using TLS). This is overwhelmingly automatically generated by Certbot too.

I slightly simplified this block as presented here. We will edit this block later in this post to add Onion-Location and Alt-Svc headers.

server {
    server_name flashflow.pastly.xyz;
    root /var/www/flashflow.pastly.xyz;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/flashflow.pastly.xyz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/flashflow.pastly.xyz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

Serve the website over HTTP when accessed via an onion service.

This is the nginx config block. It is a simplified version of the previous one, as it is also actually serving the website, but with plain HTTP and when the user is visiting the onion service, not flashflow.pastly.xyz.

server {
    listen 80;
    listen [::]:80;
    server_name jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion;
    root /var/www/flashflow.pastly.xyz;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}

These are the relevant lines from the tor's torrc. We will edit this block later in this post to add Alt-Svc support.

HiddenServiceDir /var/lib/tor/flashflow.pastly.xyz_service
HiddenServicePort 80

In this post I've shared two server {} blocks that tell nginx to listen on port 80. Nginx knows to use this block for onion service connections because the server_name (the hostname that the user's browser is telling nginx it wants to visit) is the onion service. Nginx uses the other server {} block with port 80 when the user's browser tells nginx that it wants to visit flashflow.pastly.xyz.

After adding those lines to the torrc, I reloaded tor (restart not required). Then I could learn what the onion address is:

$ cat /var/lib/tor/flashflow.pastly.xyz_service/hostname 
jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion

And from there knew what to put on the server_name line.

Whenever I edited nginx's config, I reloaded nginx when done (systemctl reload nginx) and verified it didn't say there was an error.

Whenever I edited tor's config, I reloaded tor when done (systemctl reload tor@default) and verified by checking tor's logs that there was no error (journalctl -eu tor@default) and that tor is still running (systemctl status tor@default).
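
Collected in one place, that reload-and-verify loop amounts to the following commands. This is a sketch assuming a Debian-style setup like the one described above:

# check the nginx config syntax, then reload it
sudo nginx -t && sudo systemctl reload nginx

# reload tor, confirm it is still running, and skim its logs for errors
sudo systemctl reload tor@default
systemctl status tor@default
journalctl -eu tor@default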

How the .onion available button is obtained in my setup.

Verify that the preceding steps are working. Verify that:

  1. Visiting http://flashflow.pastly.xyz redirects to https://flashflow.pastly.xyz to serve the website.
  2. Visiting http://jsd33qlp6[...]d.onion serves the website.

This button advertises the fact that the website is also available at an onion service, which improves users' security and may even improve their performance. Further, if they've configured Tor Browser to do so, Tor Browser can automatically redirect to the onion service instead of presenting a button for the user to maybe click.

Find the 2nd server {} block you added, the one that listens on port 443. We are now going to add a single line to it that instructs nginx to add an HTTP header in its responses.

server {
    server_name flashflow.pastly.xyz;
    [... lines omitted ...]
    location / {
        try_files $uri $uri/ =404;
        # ADD THE BELOW LINE
        add_header Onion-Location http://jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion$request_uri;
    }
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    [... lines omitted ...]
}

Reload nginx and verify it didn't say there was an error.

Visiting https://flashflow.pastly.xyz should now result in a purple .onion available button appearing in the URL bar when the page is done loading. Clicking it will take the user from https://flashflow.pastly.xyz/foo/bar to http://jsd33qlp6[...]d.onion/foo/bar.
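
A quick way to spot-check the header outside of Tor Browser is the same curl trick used later in this post for the Alt-Svc header; the expected output assumes the config above:

$ curl --silent --head https://flashflow.pastly.xyz | grep -i onion-location
onion-location: http://jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion/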

How to add an onion Alt-Svc that works.

Verify that the preceding steps are working. Verify that:

  1. Visiting http://flashflow.pastly.xyz redirects to https://flashflow.pastly.xyz to serve the website.
  2. Visiting http://jsd33qlp6[...]d.onion serves the website.
  3. (Optional) visiting https://flashflow.pastly.xyz results in a purple .onion available button in the URL bar.

This is another HTTP header that tells the browser there is another way to fetch the given resource that it should consider using in the future instead. The Alt-Svc header is used in contexts entirely outside of Tor, but it can also be used to tell Tor Browser to consider secretly fetching content from this host from an onion service in the future.

Common gotcha: The onion service must also support HTTPS. The onion service does not need a TLS certificate that is valid for the onion address: it should just use the same certificate as the regular web service, even though it is invalid for the onion service. The browser verifies that the certificate it gets from jsd33qlp6[...]d.onion is valid for flashflow.pastly.xyz when using the .onion as an Alt-Svc for the .xyz.

Add to the torrc the following line:

HiddenServiceDir /var/lib/tor/flashflow.pastly.xyz_service
HiddenServicePort 80
# ADD THE BELOW LINE
HiddenServicePort 443

Reload tor when done (systemctl reload tor@default) and verify by checking tor's logs that there was no error (journalctl -eu tor@default) and that tor is still running (systemctl status tor@default).

Find the 2nd server {} block you added, the one that listens on port 443. We are now going to add a single line to it that instructs nginx to add an HTTP header in its responses, and edit the server_name line to list the onion service.

server {
    # EDIT THE BELOW LINE
    server_name flashflow.pastly.xyz jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion;
    [... lines omitted ...]
    location / {
        try_files $uri $uri/ =404;
        # ADD THE BELOW LINE
        add_header Alt-Svc 'h2="jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion:443"; ma=86400;';
    }
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    [... lines omitted ...]
}

Reload nginx and verify it didn't say there was an error.

You can verify the Alt-Svc header is being sent by, well, inspecting the headers that nginx sends when you request either https://flashflow.pastly.xyz or https://jsd33qlp6[...]d.onion.

$ curl --head https://flashflow.pastly.xyz
HTTP/2 200 
server: nginx/1.14.2
[... lines omitted ...]
onion-location: http://jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion/
alt-svc: h2="jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion:443"; ma=86400;
[... lines omitted ...]


# the --insecure flag tells curl to keep going even though it will see a
# cert that isn't valid for the onion service. This is expected, as
# explained previously.
$ torsocks curl --insecure --head https://jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion
HTTP/2 200 
server: nginx/1.14.2
[... lines omitted ...]
onion-location: http://jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion/
alt-svc: h2="jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion:443"; ma=86400;
[... lines omitted ...]

Verifying that Tor Browser actually uses the headers is harder and beyond the scope of this post. The basic idea is to abuse Alt-Svc to serve something different up via the onion service and check that you get the different content after a couple of page refreshes.
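
One low-tech way to approach it, offered here as an untested sketch: because onion-routed requests reach nginx from the local tor daemon (127.0.0.1 in this setup) while direct requests arrive from the client's address, a temporary debug header (the name below is arbitrary) reveals which path a request took.

server {
    server_name flashflow.pastly.xyz jsd33qlp6p2t3snyw4prmwdh2sukssefbpjy6katca5imn4zz4pepdid.onion;
    [... lines omitted ...]
    location / {
        try_files $uri $uri/ =404;
        # TEMPORARY, for testing only: onion-routed requests show 127.0.0.1 here,
        # direct requests show the client's public address
        add_header X-Debug-Remote $remote_addr always;
    }
    [... lines omitted ...]
}

Load the site in Tor Browser, refresh a couple of times, and watch this header in the developer tools' network panel; once it reads 127.0.0.1, the browser has switched to the Alt-Svc route. Remove the header when done.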

...
@anarcat April 28, 2021 - 20:05 • 2 months ago
Building a status page service with Hugo

The Tor Project now has a status page which shows the state of our major services.

You can check status.torproject.org for news about major outages in Tor services, including v3 and v2 onion services, directory authorities, our website (torproject.org), and the check.torproject.org tool. The status page also displays outages related to Tor internal services, like our GitLab instance.

This post documents why we launched status.torproject.org, how the service was built, and how it works.

Why a status page

The first step in setting up a status page was to realize we needed one in the first place. I surveyed internal users at the end of 2020 to see what could be improved, and one of the suggestions that came up was to "document downtimes of one hour or longer" and generally improve communications around monitoring. The latter is still on the sysadmin roadmap, but a status page seemed like a good solution for the former.

We already have two monitoring tools in the sysadmin team: Icinga (a fork of Nagios) and Prometheus, with Grafana dashboards. But those are hard to understand for users. Worse, they also tend to generate false positives, and don't clearly show users which issues are critical.

In the end, a manually curated dashboard provides important usability benefits over an automated system, and all major organisations have one.

Picking the right tool

It wasn't my first foray into status page design. In another life, I had set up a status page using a tool called Cachet. That was already a great improvement over the previous solutions, which were to use first a wiki and then a blog to post updates. But Cachet is a complex Laravel app, which also requires a web browser to update. It generally requires more maintenance than we'd like, needing silly things like a SQL database and a PHP web server.

So when I found cstate, I was pretty excited. It's basically a theme for the Hugo static site generator, which means that it's a set of HTML, CSS, and a sprinkle of Javascript. And being based on Hugo means that the site is generated from a set of Markdown files and the result is just plain HTML that can be hosted on any web server on the planet.

Deployment

At first, I wanted to deploy the site through GitLab CI, but at that time we didn't have GitLab pages set up. Even though we do have GitLab pages set up now, it's not (yet) integrated with our mirroring infrastructure. So, for now, the source is hosted and built in our legacy git and Jenkins services.

It is nice to have the content hosted in a git repository: sysadmins can just edit Markdown in the git repository and push to deploy changes, no web browser required. And it's trivial to set up a local environment to preview changes:

hugo serve --baseUrl=http://localhost/
firefox http://localhost:1313/

Only the sysadmin team and gitolite administrators have access to the repository, at this stage, but that could be improved if necessary. Merge requests can also be issued on the GitLab repository and then pushed by authorized personnel later on, naturally.

Availability

One of the concerns I have is that the site is hosted inside our normal mirror infrastructure. Naturally, if an outage occurs there, the site goes down. But I figured it's a bridge we'll cross when we get there. Because it's so easy to build the site from scratch, it's actually trivial to host a copy of the site on any GitLab server, thanks to the .gitlab-ci.yml file shipped (but not currently used) in the repository. If push comes to shove, we can just publish the site elsewhere and point DNS there.

And, of course, if DNS fails us, then we're in trouble, but that's the situation anyway: we can always register a new domain name for the status page when we need to. It doesn't seem like a priority at the moment.

Comments and feedback are welcome!


This article was first published on the Tor Project Blog.

...
@blog April 27, 2021 - 08:23 • 2 months ago
Defend Dissent with Tor
Defend Dissent with Tor Gus April 27, 2021

Guest post by Glencora Borradaile

After 4 years of giving digital security trainings to activists and teaching a course called "Communications Security and Social Movements", I've compiled all my materials into an open, digital book - Defend Dissent: Digital Suppression and Cryptographic Defense of Social Movements hosted by Oregon State University where I am an Associate Professor. The book is intended for an introductory, non-major college audience, and I hope it will find use outside the university setting.

Defend Dissent has three parts:

  • Part 1 covers the basics of cryptography: basic encryption, how keys are exchanged, how passwords protect accounts and how encryption can help provide anonymity.  When I give digital security trainings, I don't spend a lot of time here, but I still want people to know (for example) what end-to-end encryption is and why we want it.
  • Part 2 gives brief context for how surveillance is used to suppress social movements, with a US focus.
  • Part 3 contains what you might consider more classic material for digital security training, focusing on the different places your data might be vulnerable and the tactics you can use to defend your data.

Each chapter ends with a story that brings social context to the material in that chapter (even in Part 1!) - from surveillance used against contemporary US protests to the African National Congress' use of partially manual encryption in fighting apartheid in South Africa in the 80s.

It should be no surprise that Tor is a star of Defend Dissent, ending out Parts 1 and 3. The anonymity that the Tor technology enables turns the internet into what it should be: a place to communicate without everyone knowing your business. As a professor, I love teaching Tor. It is a delightful combination of encryption, key exchange, probability and threat modeling.

In Defend Dissent, I aim to make Tor easy to understand and to explain it to audiences who may have never used it before. There are just three steps to understanding how Tor works:

1. Encryption allows you to keep the content of your communications private from anyone who doesn't have the key. But it doesn't hide your identity, or prevent an eavesdropper from knowing who you are communicating with and when.

Encryption keeps the content of your communications private

2. Assata can send Bobby an encrypted message even if they haven't met ahead of time to agree on a key for encryption. This concept can be used to allow Assata and Bobby to agree on a single encryption key. (Put an encryption key in the box.)

Exchanging a secure message without sharing a key

3. When Assata accesses Tor, the Tor Browser picks three randomly chosen nodes (her entry, relay and exit nodes) from amongst thousands in the Tor network. Assata's Tor Browser agrees on a key with the entry node, then agrees on a key with the relay node by communicating with the relay node through the entry node, and so on. Assata's Tor Browser encrypts the message with the exit key, then with the relay key and then with the entry key and sends the message along. The entry node removes one layer of encryption and so on. (Like removing the layers of an onion ...) This way, the relay doesn't know who Assata is - just that it is relaying a message through the Tor network.

I'm excited to share this accessible resource and to teach the world more about Tor, encryption, and secure communication. Even if you're a technical expert, Defend Dissent may help you talk to others in your life about how to use Tor and why these kinds of tools are so vital to social movements, change, and dissent.

For more details on how Tor works you can read the four chapters of Defend Dissent that lead to Anonymous Routing: What is Encryption?, Modern Cryptography, Exchanging Keys for Encryption, and Metadata.
Or discover other topics in defending social movements with cryptography.

...
@blog April 27, 2021 - 02:13 • 2 months ago
New Release: Tor Browser 10.5a15
New Release: Tor Browser 10.5a15 sysrqb April 26, 2021

Tor Browser 10.5a15 is now available from the Tor Browser download page and also from our distribution directory.

Note: This is an alpha release, an experimental version for users who want to help us test new features. For everyone else, we recommend downloading the latest stable release instead.

This version updates Firefox to 78.10esr and Fenix to 88.1.1. In addition, Tor Browser 10.5a15 updates Tor to 0.4.6.2-alpha. This version includes important security updates to Firefox for Desktop and security updates for Android.

Warning:
Tor Browser Alpha does not support version 2 onion services. Tor Browser (Stable) will stop supporting version 2 onion services later this year. Please see the previously published deprecation timeline regarding Tor version 0.4.6. Migrate your services and update your bookmarks to version 3 onion services as soon as possible.

Note: This version is not completely reproducible. We are investigating non-determinism in the Android Tor Browser build. Tor Browser for Windows, macOS and Linux are reproducible.

The full changelog since Tor Browser 10.5a14:

  • All Platforms
    • Update Tor to 0.4.6.2-alpha
  • Windows + OS X + Linux
    • Update Firefox to 78.10.0esr
    • Bug 40408: Disallow SVG Context Paint in all web content
  • Android
    • Update Fenix to 88.1.1
    • Bug 40051: Rebase android-components patches for Fenix 88
    • Bug 40158: Rebase Fenix patches to Fenix 88.1.1
    • Bug 40399: Rebase 10.5 patches on 88.0
  • Build System
    • All Platforms
      • Update Go to 1.15.11
    • Android
      • Bug 40259: Update components for mozilla88-based Fenix
...
@nickm April 27, 2021 - 00:00 • 2 months ago
Implementing Rust futures when you only have async functions

Rust doesn't yet support asynchronous functions in traits, but several important async-related traits (like AsyncRead and Stream) define their interface using functions that return Poll. So, what can you do when you have a function that is async, and you need to use it to implement one of these traits?

(I'll be assuming that you already know a little about pinning, futures, and async programming in Rust. That's not because they're easy “everybody-should-know-it” topics, but because I'm still learning them myself, and I don't understand them well enough to be a good teacher. You can probably keep reading if you don't understand them well.)

A little background

Here's a situation I ran into earlier this year. In the end, I only solved it with help from Daniel Franke, so I decided that I should write up the solution here in case it can help somebody else.

I've been working on Arti, an implementation of the Tor protocols in Rust. After a bunch of hacking, I finally got to the point where I had a DataStream type that provides an anonymous connection over the Tor network:

impl DataStream {
    pub async fn read(&mut self, buf: &mut[u8]) -> io::Result<usize>
    { ... }
    pub async fn write(&mut self, buf: &[u8]) -> io::Result<usize>
    { ... }
}

Now, there's a lot of complexity hiding in those ellipses. Tor isn't a simple protocol: when it's trying to read, it may need to wait for data to arrive. It may also need to send messages in response to arriving data. It might need to update internal state, or even tear down an entire Tor circuit because of a protocol error. Async functions made it possible to implement all of this stuff in a more-or-less comprehensible way, so rewriting those functions to explicitly return a typed future was not an option.

But I wanted DataStream to implement AsyncRead and AsyncWrite, so I could use it with other code in the Rust async ecosystem. So let's look at AsyncRead (because it's simpler than AsyncWrite). The only required method in AsyncRead is:

pub fn poll_read(
    self: Pin<&mut Self>,
    cx: &mut Context<'_>,
    buf: &mut [u8]
) -> Poll<io::Result<usize>>

This poll_read() has to check whether there is data that can be read into buf immediately, without blocking. If there is, we read the data and return the number of bytes we read. Otherwise, we have to schedule ourselves on cx, and return Poll::Pending.1
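
(If "schedule ourselves on cx" sounds vague: the usual pattern, sketched below with made-up names rather than anything from DataStream, is to stash the task's waker somewhere so that whatever eventually produces the data can wake the task and get poll_read called again.)

use std::task::{Context, Waker};

// A sketch with made-up names, not code from DataStream: remember the waker
// while we're pending...
fn remember_waker(cx: &mut Context<'_>, slot: &mut Option<Waker>) {
    *slot = Some(cx.waker().clone());
}

// ...and when data finally shows up, wake the task so the executor polls it again.
fn data_arrived(slot: &mut Option<Waker>) {
    if let Some(waker) = slot.take() {
        waker.wake();
    }
}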

Moving forward, getting stuck

Compare poll_read to the read function in DataStream. First off, there's a mismatch between how these two functions use their output buffers. Because DataStream::read is async, it returns a future that will hang on to its buffer until the future is finally ready. But poll_read has to return right away, and it can't store a reference to its buffer at all. So I started by defining a wrapper variant of DataStream that implements the behavior that poll_read would need:2

pub struct DataReaderImpl {
    s: DataStream,
    pending: Vec<u8>,
}
impl DataReaderImpl {
    fn new(s: DataStream) -> DataReaderImpl {
        DataReaderImpl {
            s,
            pending: Vec::new(),
        }
    }
    // Load up to 1k into the pending buffer.
    async fn fill_buf(&mut self) -> io::Result<usize> {
        let mut data = vec![0;1024];
        let len = self.s.read(&mut data[..]).await?;
        data.truncate(len);
        self.pending.extend(data);
        Ok(len)
    }
    // pull bytes from the pending buffer into `buf`.
    fn extract_bytes(&mut self, buf: &mut [u8]) -> usize {
        let n = cmp::min(buf.len(), self.pending.len());
        buf[..n].copy_from_slice(&self.pending[..n]);
        self.pending.drain(0..n);
        n
    }
}

Then, I thought, it ought to be easy to write AsyncRead! Here was my first try:

// This won't work...
impl AsyncRead for DataReaderImpl {
    fn poll_read(mut self: Pin<&mut Self>,
                 cx: &mut Context<'_>,
                 buf: &mut [u8]) -> Poll<io::Result<usize>> {
       if self.pending.is_empty() {
            // looks like we need more bytes.
            let fut = self.fill_buf();
            futures::pin_mut!(fut);
            match fut.poll(cx) {
                Poll::Ready(Err(e)) =>
                    return Poll::Ready(Err(e)),
                Poll::Ready(Ok(n)) =>
                    if n == 0 {
                        return Poll::Ready(Ok(0)); // EOF
                    }
                Poll::Pending =>
                    todo!("crud, where do i put the future?"), // XXXX
            }
        }

        // We have some data; move it out to the caller.
        let n = self.extract_bytes(buf);
        Poll::Ready(Ok(n))
    }
}

Almost there! But what do I do if the future says it's pending? I need to store it and poll it again the next time this function is called. But to do that, I won't be able to pin the future to the stack! I'll have to store it in the structure instead. And since the future comes from an async function, it won't have a type that I can name; I'll have to store it as a Box<dyn Future>.

Oh hang on, it'll need to be pinned. And sometimes there won't be a read in progress, so I won't have a future at all. Maybe I store it in an Option<Pin<Box<dyn Future>>>?

(This is the point where I had to take a break and figure out pin-projection3.)

But after I played around with that for a while, I hit the final snag: ultimately, I was trying to create a self-referential structure4, which you can't do in safe Rust. You see, the future returned by DataReaderImpl::fill_buf needs to hold a reference to the DataReaderImpl, and so the DataReaderImpl needs to outlive the future. That means you can't store the future inside the DataReaderImpl. You can't even store the future and the DataReaderImpl in the same struct: that creates self-reference.
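
(Here's roughly the shape I was trying to build, with made-up names, just to show where the self-reference sneaks in:)

// Illustrative only -- this is the dead end, not working code. The future
// returned by fill_buf() borrows the DataReaderImpl, so `in_progress` would
// have to hold a reference into `imp` right above it, and there's no
// lifetime we can write that makes that legal in safe Rust.
struct StuckReader {
    imp: DataReaderImpl,
    in_progress: Option<Pin<Box<dyn Future<Output = io::Result<usize>>>>>,
}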

So what could I do? Was I supposed to use unsafe code or some tricky crate to make a self-referential struct anyway? Was my solution fundamentally flawed? Was I even trying to do something possible‽

I asked for help on Twitter. Fortunately, Daniel Franke got back to me, looked at my busted code, and walked me through the actual answer.

Hold the future or the reader: not both!

Here's the trick: We define an enum that holds the DataReaderImpl or the future that its fill_buf function returns, but not both at once. That way, we never have a self-referential structure!

First we had to define a new variation on fill_buf that will take ownership of the reader when it's called, and return ownership once it's done:

impl DataReaderImpl {
    async fn owning_fill_buf(mut self) -> (Self, io::Result<usize>) {
        let r = self.fill_buf().await;
        (self, r)
    }
}

Then we had to define an enum that could hold either the future or the DataReaderImpl object, along with a wrapper struct to hold the enum.

type OwnedResult = (DataReaderImpl, io::Result<usize>);
enum State {
    Closed,
    Ready(DataReaderImpl),
    Reading(Pin<Box<dyn Future<Output=OwnedResult>>>),
}
struct DataReader {
    state: Option<State>
}

Note that the DataReader struct holds an Option<State>—we'll want to modify the state object destructively, so we'll need to take ownership of the state in poll_read and then replace it with something else.5

With this groundwork in place we could finally give an implementation of AsyncRead that works:

impl AsyncRead for DataReader {
    fn poll_read(mut self: Pin<&mut Self>,
                 cx: &mut Context<'_>,
                 buf: &mut [u8]) -> Poll<io::Result<usize>> {
        // We're taking this temporarily. We have to put
        // something back before we return.
        let state = self.state.take().unwrap();

        // We own the state, so we can destructure it.
        let mut future = match state {
            State::Closed => {
                self.state = Some(State::Closed);
                return Poll::Ready(Ok(0));
            }
            State::Ready(mut imp) => {
                let n = imp.extract_bytes(buf);
                if n > 0 {
                    self.state = Some(State::Ready(imp));
                    // We have data, so we can give it back now.
                    return Poll::Ready(Ok(n));
                }
                // Nothing available; launch a read and poll it.
                Box::pin(imp.owning_fill_buf())
            }
            // If we have a future, we need to poll it.
            State::Reading(fut) => fut,
        };

        // Now we have a future for an in-progress read.
        // Can it make any progress?
        match future.as_mut().poll(cx) {
            Poll::Ready((_imp, Err(e))) => { // Error
                self.state = Some(State::Closed);
                Poll::Ready(Err(e))
            }
            Poll::Ready((_imp, Ok(0))) => { // EOF
                self.state = Some(State::Closed);
                Poll::Ready(Ok(0))
            }
            Poll::Ready((mut imp, Ok(_))) => {
                // We have some data!
                let n = imp.extract_bytes(buf);
                self.state = Some(State::Ready(imp));
                debug_assert!(n > 0);
                Poll::Ready(Ok(n))
            }
            Poll::Pending => {
                // We're pending; remember the future
                // and tell the caller.
                self.state = Some(State::Reading(future));
                Poll::Pending
            }
        }
    }
}

Now when poll_read() takes ownership of the previous state, it either owns a DataReaderImpl or a future returned by owning_fill_buf()—but never both at once, so we don't have any self-reference problems. When poll_read() is done, it has to put a new valid state back before it returns.
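
(And here's a hypothetical usage sketch—the construction is mine, not code from Arti—assuming the AsyncRead in play is the one from the futures crate, so that AsyncReadExt::read is available:)

use futures::io::AsyncReadExt;

// Hypothetical usage: wrap a DataStream in the reader defined above and read
// from it with the ordinary async read() combinator.
async fn read_some(stream: DataStream) -> std::io::Result<Vec<u8>> {
    let mut reader = DataReader {
        state: Some(State::Ready(DataReaderImpl::new(stream))),
    };
    let mut buf = vec![0u8; 4096];
    let n = reader.read(&mut buf).await?;
    buf.truncate(n);
    Ok(buf)
}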

Conclusions

For the current version of all this code, have a look at tor_proto::stream::data in Arti. Note that the code in Arti is more complex than what I have in this post, and some of that complexity is probably unnecessary: I've been learning more about Rust as I go along.

I hope that some day there's an easier way to do all of this (with real asynchronous traits, maybe?) but in the meantime, I hope that this write-up will be useful to somebody else.


1

We might also have to report an EOF as Poll::Ready(Ok(0)), or an error as Poll::Ready(Err(_)). But let's keep this simple.

2

At this point I started writing my code really inefficiently, since I was just trying to get it to work. In the interest of clarity, I'll leave it as inefficient code here too.

3

It didn't turn out to be what I needed in the end, but I'm glad I learned about it: it has been the answer for a lot of other problems later on.

4

Self-referential structures in Rust require unsafe code and pinning. I spent a semi-unpleasant hour or two looking through example code here just to see what would be involved, and tried learning the rental crate, in case it would help.

5

We could probably use std::mem::replace for this too, but I don't expect there would be a performance difference.
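
(For example, a sketch: if the field were a plain State instead of an Option<State>, the take could look like this, with State::Closed as the throwaway placeholder:)

// Sketch: take ownership of the state, leaving a cheap placeholder behind.
fn take_state(state: &mut State) -> State {
    std::mem::replace(state, State::Closed)
}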

...
@anarcat April 25, 2021 - 01:02 • 2 months ago
Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.

This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because they didn't seem worth it, or my editors rejected them, or I just moved on:

  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brain child of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles, they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just an empty commit, with the commitlog being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me from writing about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)

Stalled

Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench: benchmarking HTTP benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would show the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)

...
@anarcat April 24, 2021 - 17:56 • 2 months ago
A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over five releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me thanks. If I switch to anything else than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - a really old Ruby app, which is what forced the Gameclock rename
  • Ghronos - also a really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

...
@blog April 23, 2021 - 01:04 • 2 months ago
Domain Shadowing: Leveraging CDNs for Robust Blocking-Resistant Communications
Domain Shadowing: Leveraging CDNs for Robust Blocking-Resistant Communications Mingkui Wei April 22, 2021

We invited guest blog author, Mingkui Wei, to submit a summary of their research to the blog this week. This blog post is based on the upcoming Usenix Security paper (full version here). Note that the domain shadowing ideas presented herein are intended to be a building block for a future system that doesn't exist for end-users yet. We hope this post will help system designers to think in new ways, and use those ideas to build new censorship circumvention tools.

What is Domain Shadowing?
Domain shadowing is a new censorship circumvention technique that uses Content Distribution Networks (CDNs) as leverage, similar in spirit to domain fronting. However, domain shadowing works completely differently from domain fronting and is stronger in terms of blocking-resistance. Compared to domain fronting, one big difference among many is that in domain shadowing the user is in charge of the whole procedure. In other words, the complete system can be configured solely by the user, without assistance from either the censored website or an anti-censorship organization.

How Domain Shadowing Works
We start this section by explaining how domain names are resolved and translated by a CDN.

A CDN acts like a reverse proxy that hides the back-end domain and presents only the front-end domain to the public. CDNs typically take one of two approaches to accomplish the name translation, as shown in the following two figures. To facilitate the illustration, assume the publisher's (i.e. the person who wants to use a CDN to distribute the content of their website) origin server is hosted on Amazon Web Services (AWS) and assigned the canonical name abc.aws.com, and that the publisher wants to advertise the website using the domain example.com, which is hosted on GoDaddy's name server.

Figure 1 shows the name translation procedure used by most CDNs, and we use Fastly as an example. To use Fastly's service, the publisher will first log into their Fastly account and set example.com as the frontend, and abc.aws.com as backend. Then, the publisher will create a new CNAME record in their GoDaddy's name server, which resolves the domain example.com to a fixed domain global.ssl.fastly.net. The remaining steps in Figure 1 are intuitive.

There are also some CDNs, such as Cloudflare, that host the domain's authoritative name server themselves. If this is the case, steps 2 and 3 in Figure 1 can be skipped (as shown in Figure 2).

Note that the last four steps in both figures show the difference in how name resolution is conducted when a CDN is involved. Specifically, a regular DNS server only responds to a DNS query with the location of the origin server, and the client must fetch the document itself, whereas the CDN actually fetches the web document for the client.

Name resolution by Fastly

Figure 1. Name resolution by Fastly

Name resolution by Cloudflare

Figure 2: Name resolution by Cloudflare

Based on the above introduction, we can now present how domain shadowing works. Domain shadowing takes advantage of the fact that when the domain binding (i.e. the connection between the frontend and the backend domains) is created, the CDN allows arbitrary domains to be set at the backend. As a result, a user can freely bind a frontend domain to any backend domain. To access a blocked domain (e.g. censored.com) within a censored area, a censored user only needs to take the following steps:

  1. The user registers a random domain as the "shadow" domain, for example: shadow.com. We assume the censor won't block this newly registered domain.
  2. The user subscribes to a CDN service that is accessible within the censored area but is not itself censored. A practical example would be a CDN that deploys all its edge servers outside the censored area.
  3. The user binds the shadow domain to the censored domain in the CDN service by setting the shadow domain as the frontend and the censored domain as the backend.
  4. The user creates a rule in their CDN account to rewrite the Host header of incoming requests from Host:shadow.com to Host:censored.com. This is an essential step since otherwise the origin server of censored.com will receive an unrecognized Host header and be unable to serve the request.
  5. Finally, to access the censored domain, the user sends a request to https://shadow.com from within the censored area. The request will be sent to the CDN, which will rewrite the Host header and forward the request to censored.com. After the response is received from censored.com, the CDN will return the response to the user "in the name" of https://shadow.com.

During this process, the censor will only see the user connect to the CDN using HTTPS and request resources from shadow.com, and thus will not block the traffic.

On a CDN that still supports domain fronting, we can apply domain fronting techniques to make domain shadowing stealthier. To do this, we still set shadow.com as the frontend and censored.com as the backend, but when an HTTPS request is issued from within the censored area, the user will request the front domain front.com and set the Host header to be shadow.com. This way, the censor only sees the user is communicating with the front domain and will not even suspect the user's behavior.

What’s the Benefit of Domain Shadowing?
Compared to its sibling, domain fronting, the obvious benefit of domain shadowing is that it can use any CDN (as long as the CDN supports domain shadowing, and based on our experiments most CDNs do) to access any domain. The censored domain does not need to be on the same CDN, which is a big limitation of domain fronting. Actually, the censored domain does not need to use a CDN at all. This is a big leap compared to domain fronting, which can only access the domains on the same CDN as the front domain.

Another shortcoming of domain fronting is that it can be (and is being) painlessly disabled by CDNs by mandating that the Host header of an HTTPS request match the SNI of the TLS handshake. Domain shadowing, on the other hand, is harder to disable, since allowing a user to configure the backend domains is a legitimate feature of CDNs.

Compared to VPS-based schemes, domain shadowing is (possibly) faster and does not need dedicated third-party support. It is faster because, compared to the proxy-on-VPS scheme that uses a self-deployed proxy to relay the traffic, domain shadowing's relays are the CDN's edge servers, which operate on the CDN's high-speed backbone network, and the whole infrastructure is optimized specifically to distribute content fast and reliably. The following figure compares the delay of fetching a web document directly from the origin server, using Psiphon, using proxy-over-EC2 (with 2 instances based on different hardware configurations), and using domain shadowing based on 5 different CDN providers (Fastly, Azure CDN, Google CDN, AWS CloudFront, and StackPath). From the figure, we can see domain shadowing beats the other schemes most of the time.

image: domain shadowing is (possibly) faster

Challenges of Domain Shadowing
At this moment, domain shadowing faces the following main challenges:

Complexity: The user must configure the frontend and backend domains in their CDN account for every censored domain they want to visit. Although such configuration can be automated using the CDN’s API, the user still needs sufficient knowledge of relatively complex operations, such as how to register with a CDN, enable API configuration and obtain API credentials, and how to register a domain.

Cost: Based on our survey, for 500 GB monthly data usage, the cost of using domain shadowing with a reputable CDN is about $40, which increases or decreases linearly with the data usage in general. If the user chooses to use an inexpensive CDN, the cost could be brought to under $10 per month. However, this still can't beat free tools such as Psiphon and Tor.

Security: By using domain shadowing, the browser "thinks" it is only communicating with the shadow domain, while the web documents actually come from the various censored domains (see the following figure, where we visit Facebook using Forbes.com as the shadow domain). Such domain transformation makes the Same-Origin Policy unenforceable. While a browser extension can help with this issue to some extent, the user must be aware of, and cautious about, which websites they visit.

Facebook screenshot

Privacy: CDNs intercept all HTTP and HTTPS traffic. That is, when a CDN is involved, the HTTPS connection is no longer between the client and the origin server, but between the client and the CDN edge server. Thus, the CDN is able to view and modify any and all traffic between the user and the target server. While such abuse is very unlikely, especially for large and reputable CDNs, users should be aware of the possibility.

Conclusion
We explained domain shadowing, a new technique that achieves censorship circumvention using CDNs as leverage. It differs from domain fronting, but can work hand-in-hand with domain fronting to achieve better blocking-resistance. While significant work is still needed to address all the challenges and make it deployable, we see domain shadowing as a promising technique for better censorship circumvention.

...
@kushal April 21, 2021 - 07:46 • 2 months ago
Adding dunder methods to a Python class written in Rust

Last week I did two rounds of my Creating Python modules in Rust workshop. During the second session on Sunday, someone asked if we can create standard dunder methods, say __str__ or __repr__. I never did that before, and during the session I tried to read the docs and implement it. And I failed :)

Later I realized that I should have read the docs carefully. To add those methods, we will have to implement PyObjectProtocol for the Rust structure.

#[pyproto]
impl PyObjectProtocol for Ros {
    fn __repr__(&self) -> PyResult<String> {
        let cpus = self.sys.get_processors().len();
        let repr = format!("<Ros(CPUS: {})>", cpus);
        Ok(repr)
    }

    fn __str__(&self) -> PyResult<String> {
        let cpus = self.sys.get_processors().len();
        let repr = format!("<Ros(CPUS: {})>", cpus);
        Ok(repr)
    }
}

>>> from randomos import Ros
>>> r = Ros()
>>> r
<Ros(CPUS: 8)>
>>> str(r)
'<Ros(CPUS: 8)>'

This code example is in the ros-more branch of the code.

...
@blog April 20, 2021 - 15:59 • 2 months ago
New Release: Tor Browser 10.0.16
New Release: Tor Browser 10.0.16 sysrqb April 20, 2021

Tor Browser 10.0.16 is now available from the Tor Browser download page and also from our distribution directory.

This version updates Firefox to 78.10esr and Fenix to 88.1.3 for Android devices. In addition, Tor Browser 10.0.16 updates NoScript to 11.2.4, and adds localization in Burmese. This version includes important security updates to Firefox for Desktop and security updates to Firefox for Android.

Warning:
Tor Browser will stop supporting version 2 onion services later this year. Please see the previously published deprecation timeline. Migrate your services and update your bookmarks to version 3 onion services as soon as possible.

Note: New macOS Users, please report if you experience trouble with Gatekeeper when installing this Tor Browser version, and provide the error and the version of macOS you are using.

Note: The Android Tor Browser update will be available next week.

The full changelog since Desktop and Android Tor Browser 10.0.15:

  • Windows + OS X + Linux
    • Update Firefox to 78.10.0esr
    • Update NoScript to 11.2.4
    • Bug 40007: Update domain fronting config for Moat
    • Bug 40390: Add Burmese as a new locale
    • Bug 40408: Disallow SVG Context Paint in all web content
  • Android
    • Update Fenix to 88.1.3
    • Update HTTPS Everywhere to 2021.4.15
    • Update NoScript to 11.2.6
    • Translations update
    • Bug 40052: Rebase android-components patches for Fenix 88
    • Bug 40162: Disable Nimbus experiments
    • Bug 40163: Rebase Fenix patches to Fenix 88.1.3
    • Bug 40423: Disable http/3
    • Bug 40425: Rebase 10.5 patches on 88.0.1
  • Build System
    • Android
      • Bug 40259: Update components for mozilla88-based Fenix
      • Bug 40293: Patch app-services' vendored uniffi_bindgen

    Changes:

    • Updated on 2021-04-23 to include Mozilla's Security Advisory
    • Updated on 2021-06-03 to include Android release information
...