Day one and two of my stay in Brussels are over. I really enjoyed the discussions I had at the XMPP Standards Foundation Summit, which was held in the impressive Cisco office building in Diegem. It’s always nice to meet all the faces behind those mysterious nicknames that you only interact with through text chats for the rest of the year. Getting to know them in person is always exciting.
A lot of work has been done to improve the XMPP ecosystem and the protocols that make up its skeleton. For me it was also the first time I ever gave a presentation in English, which – in the end – did not turn out as badly as I expected – I guess 😀
I love how international the XSF Summit and FOSDEM events are. People from all over the world get together, and even though we are working on different projects and systems, we all have very similar goals. It’s refreshing to see a different mindset and hear some different positions and arguments.
I’ve got the feeling that this post is turning into some sort of humanitarian advertisement, and sleep is a scarce commodity, so I’m going to bed now to snatch some.
I recently got really excited when I noticed that the number of page views on my blog had suddenly skyrocketed from around 70 to over 300! What brought me back down to earth was the fact that I had also received around 120 spam comments on that single day. Luckily all of those were reliably caught by Antispam Bee.
Still, it would be nice to have accurate statistics about page views, and those stupid spam requests distort the numbers. Also, I’d like to fight spam tooth and nail, so simply filtering out the comments is not enough for me.
That’s why I did some research and found the plugin WP Fail2Ban Redux, which allows logging of spam comments for integration with the famous fail2ban tool. The plugin does not come with a settings page, so all settings and options have to be defined in wp-config.php. In my case it was sufficient to add a single setting:
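Roughly like this – the constant below is the plugin’s comment-spam logging switch as I remember it, so double-check the exact name against the WP Fail2Ban Redux readme for your version:

```php
// In wp-config.php: have WP Fail2Ban Redux log spam comments to syslog,
// so that a matching fail2ban jail can ban the offending IPs.
// Constant name from memory – verify against the plugin's readme.
define( 'WP_FAIL2BAN_REDUX_COMMENT_SPAM', true );
```

Together with a fail2ban jail watching the auth log, this bans hosts that keep submitting spam comments instead of merely hiding their output.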
Requirements on encryption change from time to time. New technologies pop up and crypto protocols get replaced by new ones. There are also different use-cases that require different encryption techniques.
For that reason there is a number of encryption protocols specified for XMPP, amongst them OMEMO and OpenPGP for XMPP.
Most crypto protocols have in common that they aim to encrypt certain parts of the message being sent, so that only the recipient(s) can read the encrypted content.
OMEMO is currently only capable of encrypting the message’s body. The body is encrypted and stored in a <payload/> element, which is added to the message. This is inconvenient, as it makes OMEMO quite inflexible. The protocol cannot be used to secure arbitrary extension elements, which might contain sensitive content as well.
<message from='email@example.com' to='firstname.lastname@example.org' id='send1'>
  <encrypted xmlns='eu.siacs.conversations.axolotl'>
    <!-- the payload contains the encrypted content of the body -->
    <payload>BASE64ENCODED</payload>
  </encrypted>
</message>
The modern OpenPGP for XMPP XEP also uses <payload/> elements, but to transport arbitrary extension elements. The difference is that in OpenPGP, the payload elements contain the actual payload as plaintext. Those <payload/> elements are embedded in either a <crypt/> or a <signcrypt/> element, depending on whether or not the message is signed before being passed through OpenPGP encryption. The resulting ciphertext is then appended to the message in the form of an <openpgp/> element.
<signcrypt xmlns='urn:xmpp:openpgp:0'>
  <payload>
    <body xmlns='jabber:client'>This is a secret message.</body>
  </payload>
</signcrypt>
<!-- The above element is passed to OpenPGP and the resulting ciphertext is included in the actual message as an <openpgp/> element -->
Upon receiving a message containing an <openpgp/> element, the receiver decrypts its content, does some validity checks and then replaces the <openpgp/> element of the message with the extension elements contained in the <payload/> element. That way the original, unencrypted message is reconstructed.
The benefit of this technique is that the <payload/> element can in fact contain any number of arbitrary extension elements. This makes OpenPGP for XMPP’s take on encrypting message content way more flexible.
A logical next step would be to take OpenPGP for XMPP’s <payload/> elements and move them into a new XEP which specifies their use in a unified way. They could then be used by OMEMO and any other encryption protocol as well.
The motivation behind this is that it would broaden the scope of encryption to cover more parts of the message, like read markers and other metadata.
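Such a unified element is of course not specified anywhere yet, but purely as a sketch, the plaintext that gets encrypted could look something like this (the idea of reusing <payload/> is taken from OpenPGP for XMPP; the combination with a Chat Markers element is my own illustration):

```xml
<!-- Hypothetical plaintext payload: a body plus a read marker
     (XEP-0333 Chat Markers), both hidden from the server -->
<payload>
  <body xmlns='jabber:client'>This is a secret message.</body>
  <displayed xmlns='urn:xmpp:chat-markers:0' id='message-1'/>
</payload>
<!-- The whole <payload/> would then be encrypted by OMEMO, OpenPGP
     or any other scheme and transported in that scheme's own wrapper -->
```

The point is that the encryption scheme no longer needs to know what it is protecting – any extension element can ride along in the payload.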
It could also become easier to implement end-to-end encryption in other scenarios such as Jingle file transfers. Even though there is Jingle Encrypted Transports, that protocol only protects the stream itself and leaves metadata such as the filename, size, etc. in the clear. A unified <encrypted/> element would make it easier to encrypt such metadata and could be the better approach to the problem.
Federated networks are AWESOME! When I started using Jabber/XMPP and first learned about the concept of federation, I was blown away. I could set up my own private chat server on a Raspberry Pi and still be able to communicate with people all over the internet. I did not have to rely on external service providers and could instead run my service on my own hardware.
About a year ago or so I learned about ActivityPub, another federated protocol, which allows users to share their thoughts, post links, videos and other content. Mastodon is probably the most prominent service that uses ActivityPub to create a Twitter-like microblogging platform.
But there are other examples like PeerTube, a YouTube-like video platform which allows users to upload, view and share videos with each other. Pleroma allows users to create longer posts than Mastodon and Plume can be used to create whole blogs. PixelFed aims to recreate the Instagram experience and Prismo is a federated Reddit alternative.
But the best thing about ActivityPub: all those services federate not only among themselves, but also across each other. For instance, you can follow PeerTube creators from your Mastodon account!
And now the icing on the cake: You can now also follow this particular blog! It is traveling the fediverse under the handle @email@example.com
Matthias Pfefferle wrote a WordPress plugin that teaches your WordPress blog to talk to other services using the ActivityPub protocol. That makes all my blog posts a part of the fediverse. You can even comment on the posts from within Mastodon, for example!
In my opinion, the internet depends too heavily on centralized services. Having decentralized services that are united in federation is an awesome way to take back control.
Just a quick hint: Mike Kuketz released a blog post about how you can use Blokada to block ads and trackers on your Android device. In his post, he explains how Blokada uses a local VPN to block DNS requests to known tracker/ad sites and recommends a set of rules to configure the app for the best experience.
He also briefly mentions F-Droid and gives some arguments for why you should get your apps from there instead of the Play Store.
The blog post is written in German and is available on kuketz-blog.de.
I live in a fast-paced world. News from all over the planet reach me within minutes, even seconds. This creates a huge, violent stream of information, trying to get into my mind.
Meanwhile I have less and less time on my hands and can only hastily process all the information I consume. Too often I catch myself quickly scrolling through the news feed, reading only the headlines of articles, or at best the excerpts.
I have to admit it: I depend on the news articles I read to be truthful, as I don’t have time to verify them on my own. I am at the mercy of journalists to tell me the stories the way they really happened.
At the same time journalists desperately try to get me to read their articles. They have to get clicks on their websites in order to survive, as printed newspapers are slowly dying.
As a result my news feed is flooded with sensational headlines and click-bait articles. Scandals are made to appear bigger than they really are or simply made up from thin air. Often the title of an article contradicts the content itself or is massively exaggerated.
Recent examples of this trend are the allegations around the YouTube creator PewDiePie, who is regularly accused by several news outlets of being a white supremacist, which – if you know his videos and understand his type of humor – is just absurd. Sure, there are some edgy jokes here and there, but they are exactly that: jokes and satire. Any regular viewer knows and understands this.
I really hate the term fake news, as it’s often used as a lazy excuse to ignore inconvenient facts, but reading badly researched articles like those around PewDiePie makes me question the credibility of some news organizations, and it makes me sad to see how shortsightedly some trade away credibility for clicks.
Another example would be the case of Claas Relotius, a journalist who wrote for Der Spiegel, a prominent German news magazine. Relotius deliberately made up a number of articles. This massively hurts the trustworthiness of the press, even though I think (and hope) that Der Spiegel itself is an otherwise reliable publication.
As I wrote earlier, I want to be able to depend and rely on the news. I don’t want to live in a world where people screaming “Fake News” are those who speak the truth.
So what solutions are there to fix these issues?
Journalism needs financing. Many sites greet you with popups that demand that you disable your ad-blocker to read their articles. For me, that is not an option.
Blocking advertisements is not – as often depicted by the advertising industry – simply a way to make my life more comfortable; it is actually a security measure. Ads spy on the user and can even be used to execute malicious code. As a proponent of the free software movement I believe that it’s my right to decide which software runs on my machines. Therefore I am convinced that it’s my right to disable ads.
In Germany we have the “Rundfunkbeitrag”, a fee that finances the public service broadcasters. Some people say that it is unfair to be forced to pay for something that you don’t necessarily consume. While I see their point (some people don’t own a TV or radio, so why should they pay?), I think that having independent journalism is more important. In the end that’s the whole reason behind this blog post.
I am not sure if subscribing to a news outlet in order to be able to read their articles is the right way to solve the issue. Sure, this is how it worked before the internet (you bought the newspaper), but things have changed. My biggest issue with the subscription model is that I could only subscribe to a limited number of news sites at once. That, however, makes me dependent on those sources. If I wanted to read an article from another site, I’d need to pay again.
One approach would be a unified subscription which would give you access to a variety of news sites. That way I wouldn’t be bound to a single source and the fee would ensure the editorial independence of the journalists. This idea is however not yet well thought out.
Maybe we need a Rundfunkbeitrag for newspapers. In the end the only difference between news on TV and newspapers is the medium that transports the content. Both are however created by journalists that are in need of financing to stay independent.
In the meantime I will consider whether I can afford to subscribe to a news site and, if so, which would be the right choice for me. Possible candidates are Der Spiegel (yes, I’d give them another chance, and yes, no HTTPS :/) and Netzpolitik.org, who solely rely on donations at the moment.
As a result I get a list of all txt metadata files that do not contain the string “Changelog”. Those are our culprits.
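The search essentially boils down to `grep -L`, which prints the names of files that do NOT match a pattern. Sketched here on two dummy files, since the real run happens inside an fdroiddata checkout:

```shell
# grep -L lists files without a match. In a real fdroiddata checkout
# this would be: grep -L 'Changelog' metadata/*.txt
workdir=$(mktemp -d)
printf 'Summary:App A\nChangelog:https://example.org/changes\n' > "$workdir/a.txt"
printf 'Summary:App B\n' > "$workdir/b.txt"
grep -L 'Changelog' "$workdir"/*.txt   # lists only b.txt
```

The file names and contents above are made up for illustration; only the `grep -L` idiom matters.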
For each of those files (quite a bunch) we now need to find a changelog. Unfortunately there is no standard place to put a changelog, and many developers don’t write changelogs for their apps at all.
Good places to search are any existing “changelog.md”, “changelog.txt”, etc., though I think @Izzy has covered all of those already. As a next step I’d search the app’s website (if it exists) for a changelog section, or do a quick Google search for it (this worked for me in the case of Wikipedia, PEP…). Lastly I check if the release section of the repository contains useful information (i.e. not just “Bump version”, but actually useful information about added features and such).
If I find any such information, I add the URL of the changelog to the metadata file. See for example the changes to the Wikipedia Android app metadata.
Usually I make those changes in a dedicated branch per app (e.g. wikipedia_changelog) and then create a merge request against the fdroiddata repository.
I hope my post will inspire someone to join in on the work 😀 I’m working my way from the bottom up (from ‘z’ to ‘a’), so it would be nice if I could meet somebody in the middle 😉
Planets are a thing of the 90s, but they are still quite cool, as they can bring a community closer together by helping users exchange ideas. I hope this will also work out for the F-Droid community 🙂
For that reason I proposed to set up a planet for F-Droid / FOSS Android development in the F-Droid forum. After explaining my idea, Hans suggested that I should give it a try and go serverless by basing the setup on GitLab Pages.
Up to that point I didn’t even know that GitLab Pages was a thing, as I had only ever come in touch with GitHub Pages (shame on me). However, setting everything up was pretty straightforward and I’m quite happy with the outcome.
I chose the planet software Venus for the job, as it was one of the only search results I found while researching the topic. It was also the one used by some planets I already followed personally. Venus is a Python program which fetches the list of registered blogs and creates a directory of static HTML/CSS files containing all the blog posts. That HTML can then be deployed somewhere (in our case GitLab Pages).
I configured GitLab CI to run Venus every 30 minutes. I might increase the interval at some point, as 30 minutes might be overkill.
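The CI setup is not much more than a scheduled pipeline that runs Venus and publishes its output; a .gitlab-ci.yml for it could look roughly like this (image, paths and the config file name are illustrative, not copied from the actual repository):

```yaml
# Sketch of a scheduled GitLab Pages deployment for a Venus planet.
# GitLab Pages serves whatever the job named "pages" puts into ./public.
pages:
  image: python:2.7                        # Venus is an old Python 2 code base
  script:
    - python planet.py planet_config.ini   # fetch feeds, render static HTML
    - mv output public                     # move the rendered site into place
  artifacts:
    paths:
      - public
  only:
    - schedules                            # triggered by a pipeline schedule
```

The 30-minute cadence then lives in a pipeline schedule in the project settings, not in this file.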
Design-wise I tried to mimic the style of the F-Droid website as closely as possible, but I’m not a web designer and hadn’t really worked with HTML + CSS before, so there are still a lot of things that can be improved. However, it was a lot of fun to experiment and come up with the current design through trial and error. If you want to jump in and help me with the CSS/HTML, feel free to contact me!
The only thing missing now are blogs! If you run a cool FOSS, Android development related project and/or blog about your adventures in the FOSS world, please apply to be included 🙂
For now the planet can be found here, but I hope that it can migrate to an F-Droid subdomain at some point.
Until very recently, my handling of paperwork was rather poor. I keep all my letters and invoices in a big binder. Unfortunately, at some point that binder fell out of order and I lost all motivation to sort new letters into it, so I started to insert fresh letters randomly. Eventually I lost even more motivation and began to just toss new letters into the compartment where I store the binder. It’s a big mess.
Paperwork massively simplifies the management of letters and other documents. Whenever I receive a new letter, I put it on my scanner, start Paperwork and digitize it. Paperwork automatically optimizes the scanned image and runs some OCR on it. All I have to type in manually is the date of the letter. Paperwork automatically tries to detect the sender and tags the document based on that. All letters from my bank are labeled accordingly, while letters from my power company are given another label.
At first I missed the ability to create separate collections for different types of letters, but I quickly realized that Paperwork’s approach of ordering letters just by date and tags is way superior. Just scan, enter a date and you are done.
If I need a certain document, I can (thanks to OCR) do a full text search. Yay!
Unfortunately there are some bugs. When I move my mouse over some documents, the image viewer turns plain white with some massive letters on it. I suspect it’s a bug in the OCR display. However, I can work around it by literally just moving my mouse around the document 😀 Also, sometimes all my documents disappear from the overview, but a quick restart brings them back.
I’m so glad that I found Paperwork. Finally I can get rid of a lot of useless letters 🙂 Now I’d like to know: how do you digitize your documents?