Install Jitsi-Meet alongside ejabberd

Image of an office conference room with empty chairs

Since the coronavirus is forcing many of us to work from home, there is a high demand for video conferencing solutions. A popular free and open source tool for creating video conferences similar to Google's Hangouts is Jitsi Meet. It enables you to create a conference room from within your browser, for which you can then share a link with your coworkers. No client software is needed at all (except on mobile devices).

The installation of Jitsi Meet is super straightforward – if you have a dedicated server sitting around. Simply add the Jitsi repository to your package manager and (on Debian-based systems) type

sudo apt-get install jitsi-meet

The installer will guide you through most of the process (setting up nginx / apache, installing dependencies, even doing the Let's Encrypt setup) and in the end you can start video calling! The quick start guide does a better job explaining this than I do.

Jitsi Meet is a suite of different components that all play together (see the Jitsi Meet manual). Part of the mix is a Prosody XMPP server that is used for signalling. That means if you want the simple, easy setup experience, your server must not already run another XMPP server. Otherwise you'll have some manual configuration ahead of you.

I did that.

Since I already run a personal ejabberd XMPP server and don't have any virtualization tools at hand, I wanted to make Jitsi Meet use ejabberd instead of Prosody. In the end both should be equally suited for the job.

Looking at the configuration file that ships with Jitsi's bundled Prosody, we can see that Jitsi Meet requires the XMPP server to serve two different virtual hosts.
The file is located under /etc/prosody/conf.d/meet.example.org.cfg.lua

VirtualHost "meet.example.org"
        authentication = "anonymous"
        ssl = {
                ...
        }
        modules_enabled = {
            "bosh";
            "pubsub";
            "ping";
        }
        c2s_require_encryption = false

Component "conference.meet.example.org" "muc"
    storage = "memory"
admins = { "focus@auth.meet.example.org" }

Component "jitsi-videobridge.meet.example.org"
    component_secret = "<VIDEOBRIDGE_COMPONENT_SECRET>"

VirtualHost "auth.meet.example.org"
    ssl = {
        ...
    }
    authentication = "internal_plain"

Component "focus.meet.example.org"
    component_secret = "<JICOFO_COMPONENT_SECRET>"

There are also some external components that need to be configured. This is where Jitsi Meet plugs into the XMPP server.

In my case I don't want to serve three virtual hosts with my ejabberd, so I decided to replace auth.meet.jabberhead.tk with my already existing main domain jabberhead.tk, which already uses internal authentication. So all I had to do was add the virtual host meet.jabberhead.tk to my ejabberd.yml.
The ejabberd config file is located under /etc/ejabberd/ejabberd.yml or /opt/ejabberd/conf/ejabberd.yml depending on your ejabberd distribution.

hosts:
    ## serves as main host, as well as auth.meet.jabberhead.tk for focus user
  - "jabberhead.tk"
    ## serves as anonymous authentication host for meet.jabberhead.tk
  - "meet.jabberhead.tk"

The syntax for external components is quite different in ejabberd than it is in Prosody, so it took me some time to get it working.

listen:
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      "/http-bind": mod_bosh
    tls: true
    protocol_options: 'TLS_OPTIONS'
  -
    port: 5347
    module: ejabberd_service
    hosts:
      "focus.meet.jabberhead.tk":
        password: "<JICOFO_COMPONENT_SECRET>"

The configuration of the modules was a bit trickier on ejabberd, as the ejabberd config syntax seems to disallow duplicate entries. I ended up moving everything from the existing main modules: block into a separate host_config: for my existing domain. That way I could separate the configuration of my main domain from the config of the meet subdomain.

Update: I migrated my server (and this tutorial) to the new videobridge2 component. This means that we no longer need to configure the videobridge as an XMPP component in ejabberd’s configuration.

host_config:
  ## Already existing vhost.
  jabberhead.tk:
    s2s_access: s2s
    ## former main modules block, now further indented
    modules:
      mod_adhoc: {}
      mod_admin_extra: {}
      ...

  ## ADD THIS: New meeting host with anonymous authentication and no s2s
  meet.jabberhead.tk:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_disco: {}
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local

As you can see I only enabled required modules for the meet.jabberhead.tk service and even disabled s2s to prevent the anonymous Jitsi Meet users from contacting users on other servers.

Last but not least we have to add the focus user as an admin and also generate (not discussed here) and add certificates for the meet.jabberhead.tk subdomain. The certificate step is not necessary if the meet domain is already covered by the certificate in use.

certfiles:
  - ...
  - "/etc/ssl/meet.jabberhead.tk/cert.pem"
  - "/etc/ssl/meet.jabberhead.tk/fullchain.pem"
  - "/etc/ssl/meet.jabberhead.tk/privkey.pem"
...
acl:
  admin:
    user:
      - "focus@jabberhead.tk"
      ...

That's it for the ejabberd configuration. Now we have to configure the other Jitsi Meet components. Let's start with Jicofo, the Jitsi Conference Focus component.

My /etc/jitsi/jicofo/config file looks as follows.

JICOFO_HOST=localhost
JICOFO_HOSTNAME=meet.jabberhead.tk
JICOFO_SECRET=<JICOFO_COMPONENT_SECRET>
JICOFO_PORT=5347
JICOFO_AUTH_DOMAIN=jabberhead.tk
JICOFO_AUTH_USER=focus
JICOFO_AUTH_PASSWORD=<FOCUS_USER_SECRET>
JICOFO_OPTS=""
# Below can be left as is.
JAVA_SYS_PROPS=...

The videobridge configuration (/etc/jitsi/videobridge/config), in turn, looks as follows.

JVB_SECRET=<VIDEOBRIDGE_COMPONENT_SECRET>
## Leave below as originally was
JAVA_SYS_PROPS=...

Note that since the update to videobridge2, most config options have become obsolete and were replaced by options in the sip-communicator.properties file. So let's change /etc/jitsi/videobridge/sip-communicator.properties:

org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=jabberhead.tk
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=<JVB_USER_SECRET>
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@conference.meet.jabberhead.tk
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=<some UUID, just leave the original value>

Now we can wire it all together by modifying the Jitsi Meet config file found under /etc/jitsi/meet/meet.example.org-config.js:

var config = {
    hosts: {
        domain: 'meet.jabberhead.tk',
        anonymousdomain: 'meet.jabberhead.tk',
        authdomain: 'jabberhead.tk',
        focus: 'focus.meet.jabberhead.tk',
        muc: 'conference.meet.jabberhead.tk'
    },
    bosh: '//meet.jabberhead.tk/http-bind', // NOTE: or /bosh, depending on your setup
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@jabberhead.tk',

    testing: {
    ...
    }
...
}

Finally of course, I also had to register the focus and jvb users as XMPP accounts:

ejabberdctl register focus jabberhead.tk <FOCUS_USER_SECRET>
ejabberdctl register jvb jabberhead.tk <JVB_USER_SECRET>

Remember to replace the placeholders with strong passwords and also to stop and disable the bundled Prosody! That's it!
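If your system uses systemd (an assumption on my part – adjust to your init system), stopping and disabling the bundled Prosody can look like this:

sudo systemctl stop prosody
sudo systemctl disable prosody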

I hope this lowers the bar for some to deploy Jitsi Meet next to their already existing ejabberd. Lastly please do not ask me for support, as I barely managed to get this working for myself 😛

Update (11.04.2020)

With feedback from Holger I reworked my ejabberd config and disabled s2s for the meet vhost, see above.

Someone also pointed out that it may be a good idea to substitute prosody with a dummy package to save disk space and possible attack surface.
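For the record, a rough sketch of how such a dummy package could be built on Debian-based systems using the equivs tool (the exact control file contents and version number are up to you – treat this as an untested outline rather than a recipe):

sudo apt-get install equivs
equivs-control prosody
# edit the generated "prosody" control file: set "Package: prosody" and a sufficiently high version
equivs-build prosody
sudo dpkg -i prosody_*_all.deb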

Update (16.06.2020)

I migrated this guide to use the new videobridge2 package. The changes in a nutshell:

  • Remove videobridge component configuration from ejabberd config
  • Register a new jvb@yourdomain user for the videobridge
  • Update videobridge config and sip-communicator.properties files (see further up)

What I might look into next is how to use ejabberd’s bundled (since 20.04) STUN/TURN server instead of the default one…

Note: The Jitsi Meet components do some regular health checks. They do this by (as far as I understand) creating temporary MUC rooms every few seconds. Since ejabberd stores entity capabilities indefinitely, this may cause the caps_features table of ejabberd’s database to quickly fill up with lots of junk. Apparently this was fixed in later versions of ejabberd, however, if you still need a workaround, you may try deleting unused caps entries periodically.

Smack: Some more busy nights and 12 bytes of IV

Decorative image of an architecture blueprint

In the last months I stayed up late some nights, so I decided to add some additional features to Smack.

Among the additions is support for some new XEPs, namely:

  • XEP-0249: Direct MUC Invitations
  • XEP-0422: Message Fastening
  • XEP-0424: Message Retraction
  • XEP-0420: Stanza Content Encryption

I also started working on an implementation of XEP-0425: Message Moderation, but that one is not yet finished and needs more work.

Direct MUC invitations are a method to invite users to a group chat. Smack already had support for another similar mechanism, but this one is recommended by the XMPP Compliance Suites 2020.

Message Fastening is a generalized mechanism to add information to messages. That might be a reaction, e.g. a thumbs-up that is attached to a previous message.

Message Retraction is used to retract previously sent messages. Internally it is based on Message Fastening.

The Stanza Content Encryption pull request only teaches Smack what SCE elements are, but it doesn't yet teach it how to use them. That is partly because no E2EE specification actually uses them yet. That will hopefully change soon 😉

Anu brought up the fact that the OMEMO XEP is not totally clear about the length of initialization vectors used for message encryption. Historically most clients use a 16-byte IV, while normally you would want to use 12 bytes. Apparently some AES-GCM libraries on iOS only support 12-byte IVs, so using 12 bytes is definitely desirable. Most OMEMO implementations already support receiving 12-byte as well as 16-byte IVs.

That’s why Smack will soon also start sending OMEMO messages with 12 bytes IV.
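For illustration, here is what AES-GCM encryption with a 12-byte IV looks like using plain javax.crypto – a standalone sketch, not Smack's actual OMEMO code:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmIvExample {
    public static void main(String[] args) throws Exception {
        // Generate a random 128-bit AES key (stand-in for an OMEMO message key).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // 12-byte IV instead of the historically used 16 bytes.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("Hello there!".getBytes(StandardCharsets.UTF_8));

        // Decryption uses the same key and the same 12-byte IV.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}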

Re: The Ecosystem is Moving

Moxie Marlinspike, the creator of Signal, gave a talk at 36C3 on Saturday titled “The ecosystem is moving”.

The Fahrplan description of that talk reads as follows:

Considerations for distributed and decentralized technologies from the perspective of a product that many would like to see decentralize.

Amongst an environment of enthusiasm for blockchain-based technologies, efforts to decentralize the internet, and tremendous investment in distributed systems, there has been relatively little product movement in this area from the mobile and consumer internet spaces.

This is an exploration of challenges for distributed technologies, as well as some considerations for what they do and don’t provide, from the perspective of someone working on user-focused mobile communication. This also includes a look at how Signal addresses some of the same problems that decentralized and distributed technologies hope to solve.

https://fahrplan.events.ccc.de/congress/2019/Fahrplan/events/11086.html

Basically the talk is a reiteration of some arguments from a blog post with the same title he posted back in 2016.

In his presentation, Marlinspike basically states that federated systems have the issue of being frozen in time while centralized systems are flexible and easy to change.

As an example, Marlinspike names HTTP/1.1, which was released in 1999 and on which we have been stuck ever since. While it is true that a huge part of the internet currently runs on HTTP 1.0 and 1.1, one has to consider that its successor HTTP/2.0 was only released in 2015. Four or five years are not a long time to update the entirety of the internet, especially if you consider the fact that the big browser vendors announced that they would only support HTTP/2.0 for sites that are TLS encrypted.

Marlinspike then goes on to list four expectations that advocates of federated systems have, namely privacy, censorship resistance, availability and control. This is pretty accurate and matches my personal expectations pretty well. He then argues that Signal as a centralized application can fulfill those expectations just as well as, if not better than, a decentralized system.

Privacy

Privacy is often expected to be provided by means of data ownership, says Marlinspike. As an example he mentions email. He argues that even though he is self-hosting his email, “each and every mail has GMail at the other end”.

I agree with this observation and think that this is a real problem. But the answer to this problem would logically be that we need to increase our efforts to change that by reducing the number of GMail accounts and increasing the number of self-hosted email servers, right? This is not really an argument for centralization, where each and every message is guaranteed to have the same service at the other end.

I also agree with his opinion that a more effective tool to gain privacy is good encryption. He obviously makes the point that email encryption is unusable (probably hinting at PGP), totally ignoring modern approaches to email encryption like Autocrypt.

Censorship resistance

Federated systems are censorship resistant. At least that is the expectation that advocates of federated systems have. Every time a server gets blocked, the user simply switches to another server. The issue that Marlinspike points out is that every time this happens, the user loses their entire social graph. While this is an issue, there are solutions to this problem, one being nomadic identities. If some server goes down, the user simply migrates to another server, taking their contacts with them. Hubzilla does this, for example. There are also import/export features present in most services nowadays thanks to the GDPR. XMPP offers such a solution using XEP-0227.

But let's take a look at how Signal circumvents censorship, according to Marlinspike. He proudly presents domain fronting as the solution. With domain fronting, the client connects to some big service which is costly to block for a censor and uses that as a proxy to connect to the actual server. While this appears to be a very elegant solution, Marlinspike conceals the fact that Google and Amazon pretty quickly intervened and stopped Signal from using their domains.

With Google Cloud and AWS out of the picture, it seems that domain fronting as a censorship circumvention technique is now largely non-viable in the countries where Signal had enabled this feature.

https://signal.org/blog/looking-back-on-the-front/

Notice that the above quote was posted by Marlinspike himself more than one and a half years ago. Why exactly he brings this up as an argument remains a mystery to me.

Update: Apparently Signal still successfully uses Domain Fronting, just with content delivery networks other than Google and Amazon.

And even if domain fronting were an effective way to circumvent censorship, it could be applied to federated servers as well, adding an additional layer of protection instead of solely relying on it.

But what if the censor is not a foreign nation, but instead the nation where your servers are located? What if the US decides to shut down signal.org for some reason? No amount of domain fronting can protect you from police raiding your server center. Police confiscating each and every server of a federated system (or even a considerable fraction of them), on the other hand, is unlikely.

Availability

This brings us nicely to the next point on the agenda, availability.

If you have a centralized service then you want to move that centralized service into two different data centers. And the way you did that was by splitting the data up between those data centers and you just halved your availability, because the mean time between failures goes up since you have two different data centers, which means that it is more likely to have an outage in one of those data centers in any given moment.

Moxie Marlinspike in his 36c3 talk “The Ecosystem is Moving”

For some reason Marlinspike confuses a decentralized system with a centralized, but distributed system. It even reads “Centralized Service” on his slides… Decentralization does not equal distribution.

A federated system would obviously not be fault free, as servers naturally tend to go down, but an outage only causes a small fraction of the network to collapse, contrary to a total outage of a centralized system. There are even techniques to minimize the loss of functionality further, for example distributed chat rooms in the Matrix protocol.

Control

The advocates' argument regarding control says that if a service provider behaves undesirably, you simply switch to another service provider. Marlinspike rightfully asks how it can then be that many people still use Yahoo as their mail provider. Indeed that is a good question. I guess the only answer I can come up with is that most people probably don't care enough about their email to make the switch. To be honest, email is kind of boring anyways 😉

XMPP

Next Marlinspike talks about XMPP. He (rightfully) notes that due to XMPP's extensibility there is a morass of XEPs and that those don't really feel consistent.

The XMPP community already recognized the problem that comes with having that many XEPs and tries to solve this issue by introducing so called compliance suites. These are annually published documents that contain a list of XEPs that are considered vitally important for clients or servers. These suites act as maps that point a way through the XEP jungle.

Next Marlinspike states that the XMPP protocol still fails to be a suitable option for mobile devices. This statement is plain wrong and was already debunked in a blog post by Daniel Gultsch back in 2016. Gultsch develops Conversations, an XMPP client for Android which is totally usable and generally has lower battery consumption than Signal. Conversations implements all of the XEPs that the compliance suites list as required for mobile clients. This shows that implementing a decent mobile client for a federated system can be done and there is a recipe for it.

What Marlinspike could have pointed out instead is that the XMPP community struggles to come up with a decent iOS client. That would have been a fair argument, but spreading FUD about the XMPP protocol as a whole is unfair and dishonest.

Luckily the audience of the talk didn't fully buy into Marlinspike's weaker arguments, as demonstrated by some entertaining questions during the Q&A afterwards.

What Marlinspike is right about, though, is that developing a federated system is harder than building a centralized service. With a centralized service, you as the developer have control over the whole system and consequently over the users. However, this is actually the reason why we, the community of decentralized systems and federated protocols, do what we do. In the words of J.F. Kennedy, we do these things…

…not because they are easy, but because they are hard…

… or simply because they are right.

On “Clean Architecture”

Some skyscrapers photographed from the bottom to the top.

I recently did what I rarely do: buy and read an educational book. Shocking, I know, but I can assure you that I’m fine 😉

The book I ordered is Clean Architecture – A Craftsman’s Guide to Software Structure and Design by Robert C. Martin. As the title suggests it is about software architecture.

I've barely read half of the book, but I've already learned a ton! I find it curious that, as a halfway decent programmer, I often more or less know what Martin is talking about when he describes a certain architectural pattern, but I often didn't know said pattern had a name, or what consequences using said pattern really implied. It is really refreshing to see the bigger picture and have all the pros and cons of certain design decisions laid out in an overview.

One important point the book is trying to put across is how important it is to distinguish between important things, like business rules, and not-so-important things, i.e. details. Let me try to give you an example.

Let's say I want to build a reactive Android XMPP chat application using Smack (foreshadowing? 😉 ). Let's identify the details. Surely Smack is a detail. Even though I'd be using Smack for some of the core functionalities of the app, I could just as well choose another XMPP library like babbler to get the job done. But there are even more details, Android for example.

In fact, when you strip out all the details, you are left with a reactive chat application. Even XMPP is a detail! A chat application doesn't care what protocol you use to send and receive messages, heck, it doesn't even care whether it runs on Android or on any other device (that can run Java).

I'm still not quite sure if the keyword reactive is a detail, as I'd say it is more of a programming paradigm. Details are things that can easily be switched out and/or extended, and I don't think you can easily replace a programming paradigm.

The book does a great job of identifying and describing simple rules that, when applied to a project, lead to a cleaner, more structured architecture. All in all it teaches how important software architecture in general is.

There is however one drawback with the book: it constantly makes you want to jump straight into your next big project with lots of features, so it is hard to keep reading while being all that excited ;P

If you are a software developer – no matter whether you work on small hobby projects or big enterprise products, whether or not you aspire to become a software architect – I can only recommend reading this book!

One more thought: If you want to support a free software project, maybe donating books like this is a way to contribute?

Happy Hacking!

Closer Look at the Double Ratchet

In the last blog post, I took a closer look at how the Extended Triple Diffie-Hellman Key Exchange (X3DH) is used in OMEMO and which role PreKeys are playing. This post is about the other big algorithm that makes up OMEMO. The Double Ratchet.

The Double Ratchet algorithm can be seen as the gearbox of the OMEMO machine. In order to understand the Double Ratchet, we will first have to understand what a ratchet is.

Before we start: This post makes no guarantees to be 100% correct. It is only meant to explain the inner workings of the Double Ratchet algorithm in a (hopefully) more or less understandable way. Many details are simplified or omitted for the sake of brevity. If you want to implement this algorithm, please read the Double Ratchet specification.

A ratchet tool can only turn in one direction, which is what gives the algorithm its name.
Image by Benedikt.Seidl [Public domain]

A ratchet is a tool used to drive nuts and bolts. The distinctive feature of a ratchet tool over an ordinary wrench is that the part that grips the head of the bolt can only turn in one direction. It is not possible to turn it in the opposite direction.

In OMEMO, ratchet functions are one-way functions that basically take input keys and derive new keys from them. Doing it in this direction is easy (like turning the ratchet tool in the right direction), but it is impossible to reverse the process and calculate the original key from the derived key (analogous to turning the ratchet in the opposite direction).

Symmetric Key Ratchet

One type of ratchet is the symmetric key ratchet (abbrev. sk ratchet). It takes a key and some input data and produces a new key, as well as some output data. The new key is derived from the old key by using a so-called Key Derivation Function. Repeating the process multiple times creates a Key Derivation Function Chain (KDF-Chain). The fact that it is impossible to reverse a key derivation is what gives the OMEMO protocol the property of Forward Secrecy.

A Key Derivation Function Chain or Symmetric Ratchet

The above image illustrates the process of using a KDF-Chain to generate output keys from input data. In every step, the KDF-Chain takes the input and the current KDF-Key to generate the output key. Then it derives a new KDF-Key from the old one, replacing it in the process.

To summarize once again: Every time the KDF-Chain is used to generate an output key from some input, its KDF-Key is replaced, so if the input is the same in two steps, the output will still be different due to the changed KDF-Key.
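To make this a bit more tangible, here is a minimal sketch of one symmetric ratchet step, based on HMAC-SHA256. The real Double Ratchet uses HKDF with protocol-defined constants, so treat this purely as an illustration of the principle:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SymmetricRatchetSketch {

    private byte[] chainKey; // the current KDF-Key of the chain

    public SymmetricRatchetSketch(byte[] initialChainKey) {
        this.chainKey = initialChainKey;
    }

    // One ratchet step: derive an output key from (KDF-Key, input) and replace the KDF-Key.
    public byte[] step(byte[] input) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(chainKey, "HmacSHA256"));
        byte[] outputKey = mac.doFinal(input);      // output key of this step

        mac.init(new SecretKeySpec(chainKey, "HmacSHA256"));
        chainKey = mac.doFinal(new byte[]{0x01});   // the new KDF-Key replaces the old one

        return outputKey;
    }
}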

One issue of this ratchet is that it does not provide future secrecy. That means once an attacker gets access to one of the KDF-Keys of the chain, they can use that key to derive all following keys in the chain from that point on. They basically just have to turn the ratchet forwards.

Diffie-Hellman Ratchet

The second type of ratchet that we have to take a look at is the Diffie-Hellman Ratchet. This ratchet is basically a repeated Diffie-Hellman Key Exchange with changing key pairs. Every user has a separate DH ratcheting key pair, which is being replaced with new keys under certain conditions. Whenever one of the parties sends a message, they include the public part of their current DH ratcheting key pair in the message. Once the recipient receives the message, they extract that public key and do a handshake with it using their private ratcheting key. The resulting shared secret is used to reset their receiving chain (more on that later).

Once the recipient creates a response message, they create a new random ratchet key and do another handshake with their new private key and the senders public key. The result is used to reset the sending chain (again, more on that later).

Principle of the Diffie-Hellman Ratchet.
Image by OpenWhisperSystems (modified by author)

As a result, the DH ratchet is forwarded every time the direction of the message flow changes. The resulting keys are used to reset the sending-/receiving chains. This introduces future secrecy in the protocol.

The Diffie-Hellman Ratchet
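A rough sketch of such a DH ratchet step in Java (assuming Java 11 or newer, which ships X25519 key agreement; in the real protocol the resulting shared secrets are additionally run through the root chain, see below):

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import javax.crypto.KeyAgreement;

public class DhRatchetSketch {

    private KeyPair ratchetKeyPair;

    public DhRatchetSketch() throws Exception {
        this.ratchetKeyPair = KeyPairGenerator.getInstance("X25519").generateKeyPair();
    }

    public PublicKey getPublicRatchetKey() {
        return ratchetKeyPair.getPublic();
    }

    // On receiving: handshake of our current private key with the sender's public ratchet key.
    // The result would be used to reset the receiving chain.
    public byte[] receivingStep(PublicKey theirRatchetKey) throws Exception {
        return agree(ratchetKeyPair, theirRatchetKey);
    }

    // On replying: generate a fresh ratchet key pair, then handshake with their public key.
    // The result would be used to reset the sending chain.
    public byte[] sendingStep(PublicKey theirRatchetKey) throws Exception {
        ratchetKeyPair = KeyPairGenerator.getInstance("X25519").generateKeyPair();
        return agree(ratchetKeyPair, theirRatchetKey);
    }

    private static byte[] agree(KeyPair ours, PublicKey theirs) throws Exception {
        KeyAgreement dh = KeyAgreement.getInstance("X25519");
        dh.init(ours.getPrivate());
        dh.doPhase(theirs, true);
        return dh.generateSecret();
    }
}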

Chains

A session between two devices has three chains – a root chain, a sending chain and a receiving chain.

The root chain is a KDF chain which is initialized with the shared secret which was established using the X3DH handshake. Both devices involved in the session have the same root chain. Contrary to the sending and receiving chains, the root chain is only initialized/reset once at the beginning of the session.

The sending chain of the session on device A equals the receiving chain on device B. On the other hand, the receiving chain on device A equals the sending chain on device B. The sending chain is used to generate message keys which are used to encrypt messages. The receiving chain on the other hand generates keys which can decrypt incoming messages.

Whenever the direction of the message flow changes, the sending and receiving chains are reset, meaning their keys are replaced with new keys generated by the root chain.

The full Double Ratchet Algorithm's ratchet architecture

An Example

I think this rather complex protocol is best explained by an example message flow which demonstrates what actually happens during message sending / receiving etc.

In our example, Obi-Wan and Grievous have a conversation. Obi-Wan starts by establishing a session with Grievous and sends his initial message. Grievous responds by sending two messages back. Unfortunately the first of his replies goes missing.

Session Creation

In order to establish a session with Grievous, Obi-Wan first has to fetch one of Grievous' key bundles. He uses this to establish a shared secret S between him and Grievous by executing an X3DH key exchange. More details on this can be found in my previous post. He also extracts Grievous' signed PreKey, which serves as Grievous' initial public ratchet key. S is used to initialize the root chain.

Obi-Wan now uses Grievous' public ratchet key and does a handshake with his own private ratchet key to generate another shared secret, which is pumped into the root chain. The output is used to initialize the sending chain, and the KDF-Key of the root chain is replaced.

Now Obi-Wan established a session with Grievous without even sending a message. Nice!

The session initiator prepares the sending chain.
The initial root key comes from the result of the X3DH handshake.
Original image by OpenWhisperSystems (modified by author)

Initial Message

Now the session is established on Obi-Wan's side and he can start composing a message. He decides to send a classy "Hello there!" as a greeting. He uses his sending chain to generate a message key which is used to encrypt the message.

Principle of generating message keys from a KDF-Chain.
In our example only one message key is derived though.
Image by OpenWhisperSystems

Note: In the above image a constant is used as input for the KDF-Chain. This constant is defined by the protocol and isn't important for understanding what's going on.

Now Obi-Wan sends over the encrypted message along with his public ratchet key and some information like which PreKey he used, the current sending chain index (1), etc.

When Grievous receives Obi-Wan's message, he completes his X3DH handshake with Obi-Wan in order to calculate the exact same shared secret S as Obi-Wan did earlier. He also uses S to initialize his root chain.

Now Grievous does a full ratchet step of the Diffie-Hellman Ratchet: He uses his private and Obi-Wan's public ratchet key to do a handshake and initializes his receiving chain with the result. Note: The result of the handshake is the exact same value that Obi-Wan calculated earlier when he initialized his sending chain. Fantastic, isn't it? Next he deletes his old ratchet key pair and generates a fresh one. Using the fresh private key, he does another handshake with Obi-Wan's public key and uses the result to initialize his sending chain. This completes the full DH ratchet step.

Full Diffie-Hellman Ratchet Step
Image by OpenWhisperSystems

Decrypting the Message

Now that Grievous has finalized his side of the session, he can go ahead and decrypt Obi-Wan's message. Since the message contains the sending chain index 1, Grievous knows that he has to use the first message key generated from his receiving chain to decrypt the message. Because his receiving chain equals Obi-Wan's sending chain, it will generate the exact same keys, so Grievous can use the first key to successfully decrypt Obi-Wan's message.

Sending a Reply

Grievous is surprised by the bold actions of Obi-Wan and promptly goes ahead to send two replies.

He advances his freshly initialized sending chain to generate a fresh message key (with index 1). He uses the key to encrypt his first message “General Kenobi!” and sends it over to Obi-Wan. He includes his public ratchet key in the message.

Unfortunately though the message goes missing and is never received.

He then forwards his sending chain a second time to generate another message key (index 2). Using that key he encrypts the message "You are a bold one." and sends it to Obi-Wan. This message contains the same public ratchet key as the first one, but has the sending chain index 2. This time the message is received.

Receiving the Reply

Once Obi-Wan receives the second message, he does a full ratchet step in order to complete his session with Grievous. First he does a DH handshake between his private ratchet key and Grievous' public ratchet key which he got from the message. The result is used to set up his receiving chain. He then generates a new ratchet key pair and does a second handshake. The result is used to reset his sending chain.

Obi-Wan notices that the sending chain index of the received message is 2 instead of 1, so he knows that one message must have been missing or delayed. To deal with this problem, he advances his receiving chain twice (meaning he generates two message keys from the receiving chain) and caches the first key. If later the missing message arrives, the cached key can be used to successfully decrypt the message. For now only one message arrived though. Obi-Wan uses the generated message key to successfully decrypt the message.
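A simplified sketch of this skipped-key handling, reusing the hypothetical SymmetricRatchetSketch from further up (real implementations also limit how many skipped keys they keep around):

import java.util.HashMap;
import java.util.Map;

public class ReceivingChainSketch {

    private final SymmetricRatchetSketch receivingChain;
    private final Map<Integer, byte[]> skippedMessageKeys = new HashMap<>();
    private int chainIndex = 0;

    public ReceivingChainSketch(SymmetricRatchetSketch receivingChain) {
        this.receivingChain = receivingChain;
    }

    // Return the message key for the given chain index, caching keys of messages
    // that were skipped because they are missing or delayed.
    public byte[] messageKeyFor(int messageIndex) throws Exception {
        if (skippedMessageKeys.containsKey(messageIndex)) {
            return skippedMessageKeys.remove(messageIndex); // a missing message finally arrived
        }
        byte[] key = null;
        while (chainIndex < messageIndex) {
            chainIndex++;
            key = receivingChain.step(new byte[]{0x02});    // arbitrary constant input for this sketch
            if (chainIndex < messageIndex) {
                skippedMessageKeys.put(chainIndex, key);    // cache keys of not-yet-received messages
            }
        }
        return key;
    }
}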

Conclusions

What have we learned from this example?

Firstly, we can see that the protocol guarantees forward secrecy. The KDF-Chains used in the three chains can only be advanced forwards, and it is impossible to turn them backwards to generate earlier keys. This means that if an attacker manages to get access to the state of the receiving chain, they can not decrypt messages sent prior to the moment of attack.

But what about future messages? Since the Diffie-Hellman ratchet introduces new randomness in every step (new random keys are generated), an attacker is locked out after one step of the DH ratchet. Since the DH ratchet is used to reset the symmetric ratchets of the sending and receiving chain, the window of the compromise is limited by the next DH ratchet step (meaning once the other party replies, the attacker is locked out again).

On top of this, the double ratchet algorithm can deal with missing or out-of-order messages, as keys generated from the receiving chain can be cached for later use. If at some point Obi-Wan receives the missing message, he can simply use the cached key to decrypt its contents.

This self-healing property is what gave the Axolotl protocol (an earlier name of the Signal protocol, the basis of OMEMO) its name.

Acknowledgements

Thanks to syndace and paul for their feedback and clarification on some points.

Shaking Hands With OMEMO: X3DH Key Exchange

This is the first part of a small series about the cryptographic building blocks of OMEMO. This post is about the Extended Triple Diffie Hellman Key Exchange Algorithm (X3DH) which is used to establish a session between OMEMO devices.
Part 2: Closer Look at the Double Ratchet

In the past I have written some posts about OMEMO and its future and how it compares to the Olm encryption protocol used by matrix.org. However, some readers requested a closer, but still straightforward look at how OMEMO and the underlying algorithms work. To get started, we first have to take a look at its past.

OMEMO was implemented in the Android Jabber client Conversations as part of a Google Summer of Code project by Andreas Straub in 2015. The basic idea was to utilize the encryption library used by Signal (formerly TextSecure) for message encryption. So basically OMEMO borrows almost all the cryptographic mechanisms, including the Double Ratchet and X3DH, from Signal's encryption protocol, which is appropriately named the Signal Protocol. So to begin with, let's look at that first.

The Signal Protocol

The famous and ingenious protocol that drives the encryption behind Signal, OMEMO, matrix.org, WhatsApp and a lot more was created by Trevor Perrin and Moxie Marlinspike in 2013. Basically it consists of two parts that we need to further investigate:

  • The Extended Triple-Diffie-Hellman Key Exchange (X3DH)
  • The Double Ratchet Algorithm

One core principle of the protocol is to get rid of encryption keys as soon as possible. Almost every message is encrypted with another fresh key. This is a huge difference to other protocols like OpenPGP, where the user only has one key which can decrypt all messages ever sent to them. The latter can of course also be seen as an advantage OpenPGP has over OMEMO, but it all depends on the situation the user is in and what they have to protect against.

A major improvement that the Signal Protocol introduced compared to encryption protocols like OTRv3 (Off-The-Record Messaging) was the ability to start a conversation with a chat partner in an asynchronous fashion, meaning that the other end didn't have to be online in order to agree on a shared key. This was not possible with OTRv3, since both parties had to actively send messages in order to establish a session. This was okay back in the days when people would start their computer with the intention to chat with other users who were online at the same time, but it's no longer suitable today.

Note: The recently worked on OTRv4 will not come with this handicap anymore.

The X3DH Key Exchange

Let’s get to it already!

X3DH is a key agreement protocol, meaning it is used when two parties establish a session in order to agree on a shared secret. For a conversation to be confidential we require that only the sender and the (intended) recipient of a message are able to decrypt it. This is possible when they share a common secret (e.g. a password or shared key). Exchanging this key with one another has long been kind of a chicken-and-egg problem: How do you get the key from one end to the other without an adversary being able to get a copy of the key? Well, obviously by encrypting it, but how? How do you get that key to the other side? This problem was only solved after the Second World War.

The solution is the so-called Diffie-Hellman-Merkle Key Exchange. I don't want to go into too much detail about this, as there are really great resources about how it works available online, but the basic idea is that each party possesses an asymmetric key pair consisting of a public and a private key. The public key can be shared over insecure networks while the private key must be kept secret. A Diffie-Hellman key exchange (DH) is the process of combining a public key A with a private key b in order to generate a shared secret. The essential trick is that you get the exact same secret if you combine the secret key a with the public key B. Wikipedia does a great job at explaining this using an analogy of mixing colors.
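In Java (11 or newer, which provides X25519 out of the box), such a handshake can be sketched as follows; the point to take away is that both combinations yield the exact same secret:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;

public class DiffieHellmanSketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("X25519");
        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();

        // Alice combines her private key a with Bob's public key B ...
        KeyAgreement ka = KeyAgreement.getInstance("X25519");
        ka.init(alice.getPrivate());
        ka.doPhase(bob.getPublic(), true);
        byte[] secretAlice = ka.generateSecret();

        // ... and Bob combines his private key b with Alice's public key A.
        KeyAgreement kb = KeyAgreement.getInstance("X25519");
        kb.init(bob.getPrivate());
        kb.doPhase(alice.getPublic(), true);
        byte[] secretBob = kb.generateSecret();

        // Both arrive at the same shared secret.
        System.out.println(Arrays.equals(secretAlice, secretBob)); // true
    }
}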

Deniability and OTR

In normal day to day messaging you don’t always want to commit to what you said. Especially under oppressive regimes it may be a good idea to be able to deny that you said or wrote something specific. This principle is called deniability.

Note: It is debatable whether cryptographic deniability ever saved someone from going to jail, but that's not within the scope of this blog post.

At the same time you want to be absolutely sure that you are really talking to your chat partner and not to a so-called man in the middle. These desires seem to be conflicting at first, but the OTR protocol featured both. The user has an IdentityKey, which is used to identify the user by means of a fingerprint. The (massively and horribly simplified) procedure of creating an OTR session is as follows: Alice generates a random session key and signs its public part with her IdentityKey. She then sends that public key over to Bob, who generates another random session key with which he executes his half of the DH handshake. He then sends the public part of that key (again, signed) back to Alice, who does another DH to acquire the same shared secret as Bob. As you can see, in order to establish a session, both parties had to be online. Note: The signing part has been oversimplified for the sake of readability.

Normal Diffie-Hellman Key Exchange

From DH to X3DH

Perrin and Marlinspike improved upon this model by introducing the concept of PreKeys. Those basically are the first halves of a DH-handshake, which can – along with some other keys of the user – be uploaded to a server prior to the beginning of a conversation. This way another user can initiate a session by fetching one half-completed handshake and completing it.

Basically the Signal protocol comprises the following set of keys per user:

  • IdentityKey (IK): Acts as the user's identity by providing a stable fingerprint
  • Signed PreKey (SPK): Acts as a PreKey, but carries an additional signature of the IK
  • Set of PreKeys ({OPK}): Unsigned PreKeys

If Alice wants to start chatting, she can fetch Bob's IdentityKey, Signed PreKey and one of his PreKeys and use those to create a session. In order to preserve cryptographic properties, the handshake is modified as follows:

DH1 = DH(IK_A, SPK_B)
DH2 = DH(EK_A, IK_B)
DH3 = DH(EK_A, SPK_B)
DH4 = DH(EK_A, OPK_B)

S = KDF(DH1 || DH2 || DH3 || DH4)

EK_A denotes an ephemeral, random key which is generated by Alice on the fly. Alice can now derive an encryption key to encrypt her first message for Bob. She then sends that message (a so called PreKeyMessage) over to Bob, along with some additional information like her IdentityKey IK, the public part of the ephemeral key EK_A and the ID of the used PreKey OPK.

Visual representation of the X3DH handshake

Once Bob logs in, he can use this information to do the same calculations (just with swapped public and private keys) to calculate S from which he derives the encryption key. Now he can decrypt the message.

In order to prevent the session initiation from failing due to lost messages, all messages that Alice sends over to Bob without receiving a first message back are PreKeyMessages, so that Bob can complete the session, even if only one of the messages sent by Alice makes its way to Bob. The exact details on how OMEMO works after the X3DH key exchange will be discussed in part 2 of this series 🙂

X3DH Key Exchange TL;DR

X3DH utilizes PreKeys to allow session creation with offline users by doing 4 DH handshakes between different keys.

A subtle but important implementation difference between OMEMO and Signal is that the Signal server is able to manage the PreKeys for the user. That way it can make sure that every PreKey is only used once. OMEMO on the other hand solely relies on the XMPP server's PubSub component, which does not support such behavior. Instead, it hands out a bundle of around 100 PreKeys. This seems like a lot, but in reality the chances of a PreKey collision are pretty high (see the birthday problem).

OMEMO does come with some counter measures for problems and attacks that arise from this situation, but it makes the protocol a little less appealing than the original Signal protocol.

Clients should, for example, keep used PreKeys around until the end of catch-up of missed messages, to allow decryption of messages that were sent in sessions that have been established using the same PreKey.

A look at Matrix.org’s OLM | MEGOLM encryption protocol

Everyone who knows and uses XMPP is probably aware of a new player in the game. Matrix.org is often recommended as a young, rising alternative to the aging protocol behind the Jabber ecosystem. However, the founders do not see their product as a direct competitor to XMPP, as their approach to the problem of message exchange is quite different.

An open network for secure, decentralized communication.

matrix.org

During his talk at the FOSDEM in Brussels, matrix.org founder Matthew Hodgson roughly compared the concept of matrix to how git works. Instead of passing single messages between devices and servers, matrix is all about synchronization of a shared state. A chat room can be seen as a repository, which is shared between all servers of the participants. As a consequence communication in a chat room can go on, even when the server on which the room was created goes down, as the room simultaneously exists on all the other servers. Once the failed server comes back online, it synchronizes its state with the others and retrieves missed messages.


Olm, Megolm – What’s the deal?

Matrix introduced two different crypto protocols for end-to-end encryption. One is named Olm, which is used in one-to-one chats between two chat partners (this is not quite correct, see the updates below for clarifying remarks). It can very well be compared to OMEMO, as it too is an adaptation of the Signal Protocol by OpenWhisperSystems. However, due to some differences in the implementation, Olm is not compatible with OMEMO, although it shares the same cryptographic properties.

The other protocol goes by the name of Megolm and is used in group chats. Conceptually it deviates quite a bit from Olm and OMEMO, as it contains some modifications that make it more suitable for the multi-device use-case. However, those modifications alter its cryptographic properties.

Comparing Cryptographic Building Blocks

Protocol                     Olm                            OMEMO (Signal)
IdentityKey                  Curve25519                     X25519
FingerprintKey⁽¹⁾            Ed25519                        none
PreKeys                      Curve25519                     X25519
SignedPreKeys⁽²⁾             none                           X25519
Key Exchange Algorithm⁽³⁾    Triple Diffie-Hellman (3DH)    Extended Triple Diffie-Hellman (X3DH)
Ratcheting Algorithm         Double Ratchet                 Double Ratchet
  1. Signal uses an X25519 IdentityKey, which is capable of encryption as well as of creating signatures using the XEdDSA signature scheme. Therefore no separate FingerprintKey is needed. Instead the fingerprint is derived from the IdentityKey. This is mostly a cosmetic difference, as one less key pair is required.
  2. Olm does not distinguish between the concepts of signed and unsigned PreKeys like the Signal protocol does. Instead it only uses one type of PreKey. However, those may be signed with the FingerprintKey upon upload to the server.
  3. OMEMO includes the SignedPreKey, as well as an unsigned PreKey, in the handshake, while Olm only uses one PreKey. As a consequence, if the sender's Olm IdentityKey gets compromised at some point, the very first few messages that are sent could possibly be decrypted.

In the end Olm and OMEMO are pretty comparable, apart from some simplifications made in the Olm protocol. Those only marginally affect its security though (as far as I can tell as a layman).

Megolm

The similarities between OMEMO and Matrix’ encryption solution end when it comes to group chat encryption.

OMEMO does not treat chats with more than two parties any differently than one-to-one chats. The sender simply has to manage a lot more keys, and the number of required trust decisions grows by a factor roughly equal to the number of chat participants.

Yep, this is a mess but luckily XMPP isn’t a very popular chat protocol so there are no large encrypted group chats ;P

So how does Matrix solve the issue?

When a user joins a group chat, they generate a session for that chat. This session consists of an Ed25519 SigningKey and a single ratchet which gets initialized randomly.

The public part of the signing key and the state of the ratchet are then shared with each participant of the group chat. This is done via an encrypted channel (using Olm encryption). Note that this session is also shared between the devices of the user. Contrary to Olm, where every device has its own Olm session, there is only one Megolm session per user per group chat.

Whenever the user sends a message, the encryption key is generated by forwarding the ratchet and deriving a symmetric encryption key for the message from the ratchet's output. Signing is done using the SigningKey.

Recipients of the message can decrypt it by forwarding their copy of the sender's ratchet the same way the sender did, in order to retrieve the same encryption key. The signature is verified using the public SigningKey of the sender.
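A heavily simplified sketch of such a session – hypothetical class, a single-hash ratchet instead of Megolm's actual multi-part ratchet, and assuming Java 15+ for Ed25519 – just to illustrate the "advance ratchet, derive key, sign ciphertext" flow:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class MegolmLikeSessionSketch {

    private final KeyPair signingKey;  // Ed25519 signing key of the session
    private byte[] ratchet;            // simplified ratchet state, initialized randomly

    public MegolmLikeSessionSketch(byte[] initialRatchet) throws Exception {
        this.signingKey = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        this.ratchet = initialRatchet;
    }

    // Forward the ratchet and derive a symmetric message key from its output.
    public byte[] deriveMessageKeyAndAdvance() throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(ratchet, "HmacSHA256"));
        byte[] messageKey = mac.doFinal(new byte[]{0x00});
        ratchet = mac.doFinal(new byte[]{0x01}); // the old ratchet state is overwritten
        return messageKey;
    }

    // Sign the ciphertext with the session's SigningKey.
    public byte[] sign(byte[] ciphertext) throws Exception {
        Signature sig = Signature.getInstance("Ed25519");
        sig.initSign(signingKey.getPrivate());
        sig.update(ciphertext);
        return sig.sign();
    }
}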

There are some pros and cons to this approach, which I briefly want to address.

First of all, you may find that this protocol is way less elegant compared to Olm/OMEMO/Signal. It poses some obvious limitations and security issues. Most importantly, if an attacker gets access to the ratchet state of a user, they can decrypt any message that is sent from that point in time on. As no new randomness is introduced, as is the case in the other protocols, the attacker can gain access by simply forwarding the ratchet, thereby generating any decryption keys they need. The protocol defends against this by requiring the user to generate a new random session whenever a user joins/leaves the room and/or a certain number of messages has been sent, whereby the window of possibly compromised messages gets limited to a smaller number. Still, this is equivalent to having a single key that decrypts multiple messages at once.

The Megolm specification lists a number of other caveats.

On the pro side of things, trust management has been simplified, as the user basically just has to decide whether or not to trust each group member instead of each participating device – reducing the complexity from a multiple of n down to just n. Also, since no new randomness is introduced during ratchet forwarding, messages can be decrypted multiple times. As a result, devices do not need to store the decrypted messages. Knowledge of the session state(s) is sufficient to retrieve the message contents over and over again.

By sharing older session states with own devices it is also possible to read older messages on new devices. This is a feature that many users are missing badly from OMEMO.

On the other hand, if you really need true future secrecy on a message-by-message basis and you cannot risk an attacker getting access to more than one message at a time, you are probably better off taking the bitter pill, going through the fingerprint mess, and sticking to normal Olm/OMEMO (see the updates below for remarks on this statement).

Note: End-to-end encryption does not really make sense in big, especially public chat rooms, since an attacker could just simply join the room in order to get access to ongoing communication. Thanks to Florian Schmaus for pointing that out.

I hope I could give a good overview of the different encryption mechanisms in XMPP and Matrix. Hopefully I did not make any errors, but if you find mistakes, please let me know, so I can correct them asap 🙂

Happy Hacking!

Sources

Updates:

Thanks to Matthew Hodgson for pointing out that Olm/OMEMO also effectively uses a symmetric ratchet when multiple consecutive messages are sent without the receiving device sending an answer. This can lead to a loss of future secrecy, as discussed in the OMEMO protocol audit.

Also thanks to Hubert Chathi for noting that Megolm is also used in one-to-one chats, as Matrix doesn't have the same distinction between group and single chats. He also pointed out that the security level of Megolm (the criteria for regenerating the session) can be configured on a per-chat basis.

I am up to no good.

Photo of a security camera on a white wall.

I am a user of "the darknet". I use Tor to secure my communications from curious eyes. At the latest since Edward Snowden's leaks, we know that this might be a good idea. There are many other valid, legal use cases for Tor. Circumventing censorship is one of them.

But German state secretary Günter Krings (49, CDU) believes otherwise. Certainly he “understand[s], that the darknet may have a use in autocratic systems, but in my opinion there is no legitimate use for it in a free, open democracy. Whoever uses the darknet is usually up to no good.”

Günter Krings should know that Tor is only capable of reliably anonymizing users' traffic if enough "noise" is generated. That is the case when many people use Tor, so that those whose lives really depend on it can blend in with the masses. Shamed be he who thinks evil of it.

I'd like to know if Krings ever thought about the fact that maybe, just maybe, an open, free democracy is the only political system that is even capable of tolerating "the darknet". Forbidding its use would only bring us one step closer to becoming said autocratic system.

Instead of trying to ban our democratic people from using Tor, we should celebrate the fact that we are a democracy that can afford to have citizens who can avoid surveillance and who have access to uncensored information.

Update:

I heard a great thought about this: “So, once Germany becomes a dictatorship, the first thing we’ll do is to revert the anti-tor law.”

I Love Free Software Day 2019

Free Software is a substantial part of my life. I got introduced to it by my computer science teacher in middle school; however, back then I wasn't paying that much attention to the ethics behind it and rather focused on the fact that it was gratis and new to me.

Using GNU/Linux on a school computer wasn't really fun for me, as the user interface was not really to my taste (I'm sorry, KDE). It was only when I got so annoyed by the fact that my copy of Windows XP was 32-bit only and that I was supposed to pay the full price again for a 64-bit license, that I deleted Windows completely and installed Ubuntu on my computer – only to reinstall Windows again a few weeks later though. But the first contact was made.

Back then I was still mostly focused on cool features rather than on the meaning of free software. One day, however, I watched a talk by Richard Stallman and started to read more about what software freedom really is. At this point I was learning how to use Blender on Ubuntu to create animations and only rarely booted into Windows. But when I did, it suddenly felt oddly wrong. I realized that I couldn't truly trust my computer. This time I tried harder to get rid of Windows.

Someone once said that you only feel your shackles when you try to move. I think the same goes for free software. Once you realize what free software is and what rights it grants you (what rights you really have), you start to feel uncomfortable if you’re suddenly denied those rights.

And that’s why I love free software! It gives you back the control over your machine. It’s something that you can trust, as there are no secrets kept from you (except if the program is written in Haskell and uses monads :P).

My favorite free software projects for this year's I Love Free Software Day are the document digitization and management tool paperwork, the alternative Mastodon/Pleroma interface Halcyon and the WordPress ActivityPub Plugin. These are projects that I discovered in 2018/2019 and that truly amazed me.

I already wrote two blog posts about paperwork and the fediverse / the ActivityPub plugin earlier, so I’ll focus mainly on Halcyon today. Feel free to give those other posts a read though!

I'm a really big fan of the fediverse and Mastodon in particular, but I dislike Mastodon's current interface (two complaints about user interfaces in one post? Mimimi…). In my opinion Mastodon's column interface doesn't really give enough space to the content and is not very intuitive. Halcyon is a web client which acts as an alternative interface to your Mastodon/Pleroma account. Visually it closely resembles the Twitter UI, which I quite like.

Halcyon – An alternative user interface to Mastodon/Pleroma

As a plus, it is way easier to get people to move from Twitter to the fediverse by providing them with a familiar interface 😉

There are some public instances of Halcyon available which you can use to try out Halcyon for yourself; however, in the long run I recommend self-hosting it, as you have to enter your account details in order to use it. Hosting it doesn't take much more than a simple Raspberry Pi, as it's really lightweight.

I know that a huge number of free software projects are developed by volunteers in their free time. Most of them don't get any monetary compensation for their work, and people often take this for granted. Additionally, a lot of the feedback developers get from their users comes when things don't work out or break.

(Not only) today is a chance to give some positive feedback and a huge Thank You to the developers of the software that makes your life easier!

Happy Hacking!

Brussels Day 1 and 2

Atmosphere at a train station in Brussels

Days one and two of my stay in Brussels are over. I really enjoyed the discussions I had at the XMPP Standards Foundation Summit, which was held in the impressive Cisco office building in Diegem. It's always nice to meet all the faces behind those ominous nicknames that you only interact with through text chats for the rest of the year. Getting to know them personally is always exciting.

A lot of work has been done to improve the XMPP ecosystem and the protocols that make up its skeleton. For me it was the first time ever holding a presentation in English, which – in the end – did not turn out as badly as I expected – I guess 😀

I love how highly international the XSF Summit and FOSDEM events are. People from all over the world get together and, even though we are working on different projects and systems, we all have very similar goals. It's refreshing to see a different mindset and hear some different positions and arguments.

I've got the feeling that this post is turning into some sort of humanitarian advertisement, and sleep is a scarce commodity, so I'm going to bed now to catch some.