This year was the first time I had the chance to attend the open-source conference FOSDEM in Brussels. I almost made it there last year, but this time I could finally go with all my teammates from work, thanks to our company.

I’ve just read in a Medium article that FOSDEM is an inspiration heaven — and I couldn’t agree more!

My favourite talks

First, I’ll try to summarise the presentations that were a great source of inspiration for me. Most of them relate to topics I’m currently working on at my job, but I think they could be equally interesting and useful for others, even if this isn’t your core professional area.

Monitoring Legacy Java Applications with Prometheus

One of the first talks I attended was this one by Dr. Fabian Stäber, who had a truly amazing style: he was presenting everything using tmux, including his slides! (It looked cool and it totally reflected his presentation skills and level of preparation as well. It was a true gem!)

He shared a bunch of great tips on how to parse traditional Tomcat and JBoss logs without any changes on the server side, and none on the code side either! (The latter is usually pretty difficult to push through.) Then he exposed these parsed logs natively as a Prometheus-compatible endpoint, in or next to each application.

Basically, it means you can do real-time monitoring of your traditional Java monolith applications in a standardised format. And each time you need to change your log parser, you only change the grok rules for that specific application; it doesn’t affect any other part of your logging infrastructure, or the other applications being monitored and stored in the same ELK stack.
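To make that a bit more concrete: the exporter tails a log file, matches each line against grok patterns and serves the resulting counters and histograms on a /metrics endpoint that Prometheus can scrape. A tiny sketch of what that looks like from the outside (the port, metric name and labels here are made up for illustration, not taken from the talk):

# scrape the exporter running next to the Tomcat instance
curl -s http://localhost:9144/metrics | grep tomcat_requests_total
tomcat_requests_total{status="200"} 8231
tomcat_requests_total{status="500"} 17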

By the way, on JMX monitoring: did you know you can actually whitelist and blacklist specific metrics by regex, to avoid the performance penalty of too much load generated by JMX polling?
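If you go through the Prometheus jmx_exporter, that filtering is a config-level switch. From memory (so double-check the key names against the project README), it looks roughly like this:

cat > jmx-config.yml <<'EOF'
# poll only the MBeans we actually care about...
whitelistObjectNames:
  - "java.lang:type=Memory"
  - "java.lang:type=GarbageCollector,*"
# ...and explicitly skip the expensive ones
blacklistObjectNames:
  - "Catalina:type=RequestProcessor,*"
EOF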

I recommend taking a look at his grok-exporter GitHub project, and you can also watch the full video on the FOSDEM event website.

Blue elephant on-demand: Postgres + Kubernetes

I spent a great portion of my FOSDEM time on Kubernetes-related topics and database/distributed-storage talks. This one about Postgres on top of Kubernetes was one of my favourites.

Just imagine, they start their presentation with a quote from Father Kubernetes:

…and then they spend 50 minutes describing how to go against his recommendation. 😛 It was absolutely worth it, and let me show you why.

The guys were implementing the well-known Kubernetes Operator pattern developed by the CoreOS guys: a stateless manager service talks to the K8s API, watches the cluster state, and continuously tries to adapt your deployments and pods to the intended state, coordinating database master failovers, replication setup, and so on. This service is an outsider from the database’s point of view, so if it’s down, restarting, or relocating to a different host, you don’t experience any problems; it’s only a service that writes Kubernetes manifests to the Kube API, nothing more.
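The user-facing side of that pattern is just another manifest: you declare the database cluster you want as a custom resource, and the operator keeps reconciling reality towards it. Purely as an illustration, here is a made-up, simplified resource (not the actual Zalando CRD):

cat <<'EOF' | kubectl apply -f -
# hypothetical custom resource describing the desired cluster;
# the operator watches these objects and creates/repairs the real pods
apiVersion: example.org/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3        # one master + two streaming replicas
  volumeSize: 50Gi
EOF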

They also came up with some great ideas on automated credential management for the database clusters, using the Secrets API and service accounts to separate application-deployment access from application-runtime access. You never need to touch any database servers to replicate user privileges or change passwords when an employee leaves, etc.
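The building blocks behind that are standard Kubernetes: the credentials live in a Secret, and the application only ever sees them injected at runtime. A minimal hand-rolled sketch of the idea (in their setup the operator generates and rotates these for you; all names below are made up):

kubectl create secret generic orders-db-credentials \
  --from-literal=username=app_user \
  --from-literal=password="$(openssl rand -base64 24)"

# reference the secret from the deployment instead of baking credentials into the image
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 1
  selector:
    matchLabels: { app: orders-api }
  template:
    metadata:
      labels: { app: orders-api }
    spec:
      containers:
      - name: api
        image: example/orders-api:1.0
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: orders-db-credentials
              key: password
EOF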

The whole presentation was a true gem, I’m happy I had the chance to attend the Zalando guys’ talk!

Some really smart solutions and insights from the Zalando team

Take a look at their Patroni project: https://github.com/zalando/patroni — it runs on any environment where you have etcd, Consul or ZooKeeper, and of course on Kubernetes too. (And there’s Spilo for the Docker bundle.)

MySQL 8.0

Well, this is kind of an exception here regarding the technology: I attended a bunch of talks about this new database version, and there was one thing in common across all the presentations:

Don’t use it yet. In general, I’m very much looking forward to upgrading to and using the new version, but it looks pretty immature at the moment.

(The new role management in MySQL will be very powerful, and it has been long overdue. However, the way the Oracle guys chose to implement it shows very poor planning and a lot of rookie mistakes. For example, when you drop a role that is still referenced by some user, there’s no automatic cleanup and not even a warning message; you will just face random, unexplainable misbehaviour.)
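To illustrate the kind of footgun he meant, here is a made-up minimal scenario (the role and user names are mine); according to the talk, nothing stops you or even warns you at the last step:

mysql -e "CREATE ROLE app_read; GRANT SELECT ON shop.* TO app_read;"
mysql -e "CREATE USER 'alice'@'%' IDENTIFIED BY 'change-me';"
mysql -e "GRANT app_read TO 'alice'@'%'; SET DEFAULT ROLE app_read TO 'alice'@'%';"
# 'alice' still references the role, yet this goes through silently:
mysql -e "DROP ROLE app_read;"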

MySQL Point-in-time recovery like a rockstar!

This was literally the title of the talk. And damn, the guy was not even exaggerating: he had this genius idea about how to recover a master server using its own relay logs in a matter of minutes, with no external tools or software involved!

Without too many spoilers, here’s a key point of the idea:

  • It’s so simple, yet I would never have thought of it.
  • And there you go, you can do multi-threaded recovery, instead of replaying from a backup on a single thread with no option to improve performance (there’s a rough sketch after this list):
  • I highly recommend checking his slides, and hopefully the video will be uploaded soon as well. Until then, make sure to check Frédéric’s blog.
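Just to hint at the multi-threaded part (the real trick of turning the master’s own binary logs into relay logs is the spoiler, so go see the slides for that): once the events are applied by the replication SQL thread instead of piping mysqlbinlog into the server, you can simply throw parallel applier workers at them. A rough sketch with the stock 5.7/8.0 variable names, not the exact commands from the talk:

mysql -e "STOP SLAVE;"
mysql -e "SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';"
mysql -e "SET GLOBAL slave_parallel_workers = 8;"
# replay with the SQL (applier) thread only; the logs are already local
mysql -e "START SLAVE SQL_THREAD;"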

DNS privacy, Internet privacy

Certainly one of my favourite topics is online privacy. It turns out that even though we have GDPR on our necks and a lot has happened recently around online privacy and identity protection,

nobody cares or even thinks about DNS protection; it’s hugely underrated.

For example, you have all your fancy encrypted HTTPS traffic which nobody can see (ahem… please read some more practical notes about this in the next chapter below), yet you’re querying pornhub.com over a plaintext protocol untouched for decades, against a DNS server you probably never chose.

Which is probably your ISP’s DNS server.
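You can see this for yourself in about ten seconds: run the capture below in one terminal, browse anything in another, and watch every hostname you visit scroll by in cleartext.

# watch your own DNS queries leave the machine unencrypted
sudo tcpdump -n -i any port 53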

But no, you’re a smart guy: you’re using Google DNS (meh…) or some other provider. Yes, but

  • it’s still plaintext.
  • your ISP (if you’re lucky and it’s not some bad actor or Russian hackers) can easily man-in-the-middle it, and you won’t even notice.

So while everybody was busy securing their web browser traffic, e-mail messages and chats, the NSA realised that DNS is an obvious weak point in the chain that nobody thinks or cares about! They could (and can) collect valuable metadata regardless of whether everything else was secured. And if the NSA can, anybody else can too, and use it against you: redirect you to malicious websites, fake credit card sites or whatever, without you noticing anything.

So in a nutshell, what can you do?

  • Use encrypted DNS servers: DNSCrypt
  • Secure your plaintext DNS queries locally: DNS over TLS — the Stubby project (there’s a quick sketch after this list)
  • A great service: Quad9 — do you know the famous Google DNS, 8.8.8.8? Well, this is 9.9.9.9, or Quad9, and it turns out it has an undocumented but working DNS over TLS implementation! 😉
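For example, a minimal Stubby setup forwarding everything to Quad9 over TLS could look roughly like this (the keys are from memory, so compare with the example stubby.yml shipped with the project before relying on it):

# /etc/stubby/stubby.yml: listen locally, forward over TLS to Quad9
cat <<'EOF' | sudo tee /etc/stubby/stubby.yml
resolution_type: GETDNS_RESOLUTION_STUB
dns_transport_list:
  - GETDNS_TRANSPORT_TLS
tls_authentication: GETDNS_AUTHENTICATION_REQUIRED
listen_addresses:
  - 127.0.0.1
upstream_recursive_servers:
  - address_data: 9.9.9.9
    tls_auth_name: "dns.quad9.net"
EOF

# point /etc/resolv.conf at 127.0.0.1, then test:
dig @127.0.0.1 fosdem.org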

A drawback of DNS over TLS? It uses its own dedicated port (853), so a lot of company firewalls will cause problems. 2018 will be about DNS over HTTPS (= port 443), so hopefully we will see some traction in this painfully undervalued topic.

Honorable mentions

Package management over Tor

Definitely an interesting talk about privacy concerns and about how to operate and verify package managers and pools over onion routing. Did you know about the apt-transport-tor package for Debian/Ubuntu? Or that the famous pkgsrc tool from BSD is also available for Linux?
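For the Debian/Ubuntu case the setup is surprisingly small; something along these lines (the exact sources.list entry depends on your release and mirror, this one is just an example for Debian stretch):

sudo apt install tor apt-transport-tor
# route APT traffic through the local Tor daemon by switching the URI scheme
echo 'deb tor+http://deb.debian.org/debian stretch main' | \
  sudo tee /etc/apt/sources.list.d/tor.list
sudo apt update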

A lot of people would think HTTPS is a solution against the spying of the NSA, your own government, your ISP, or Romanian hackers in Starbucks. But Alexander Nanosov showed a great example: if you connect to a public software package repository and your traffic pattern shows a 7550 KB download over an encrypted channel… and guess what, there’s a 7550 KB sized package in the repository. I don’t care that it’s encrypted, I already know what you just downloaded! These are the little things I wouldn’t even think about myself…

Some Kubernetes fun

Did you think your K8s cluster was secure just because you use isolated Docker containers?

Check your mounts inside any container:

tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,seclabel,relatime)

Oh, so what’s in this folder?

# ls /run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token

Yes, you’re right! That’s the CA certificate of the API server plus the authentication token of your current service account. If you use the defaults (which is very likely the case, despite RBAC), you have access at least to the full namespace and all of its pods! Now I’m seriously considering creating a lot of isolated namespaces plus dedicated service accounts for specific services.
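Just to show how little stands between that folder and the API server, here is the classic probe from inside a container (whether the pod list actually comes back depends on how strict your RBAC setup is):

TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/$NS/pods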

More about Kubernetes security: I had always thought AppArmor and seccomp profiles were a real pain in the ass; turns out that’s not the case at all!

  • you can enable Docker’s default seccomp profile with a single pod annotation, and it provides pretty good defaults!
  • same for AppArmor
  • and you can set SELinux options via securityContext (there’s a sketch after this list)
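A minimal sketch of what this looked like with the 2018-era alpha/beta annotations (the pod and container names are made up, and newer Kubernetes releases have since moved seccomp into securityContext, so check the docs for your version):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo
  annotations:
    # Docker's default seccomp profile for the whole pod
    seccomp.security.alpha.kubernetes.io/pod: "docker/default"
    # runtime default AppArmor profile for the container named "app"
    container.apparmor.security.beta.kubernetes.io/app: "runtime/default"
spec:
  containers:
  - name: app
    image: nginx:1.13
    securityContext:
      seLinuxOptions:
        level: "s0:c123,c456"
EOF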

Random other topics

  • ClickHouse, a database analytics tool by Yandex
  • The Webmin of 2018: Cockpit by Red Hat
  • A super amazing backup tool: Duplicity. Why didn’t I know about this before?
  • How do you back up a Ceph cluster in a robust way? To another Ceph cluster!
  • KubeVirt is a super-trending topic at the moment, full of hype, but it turns out it’s still very immature — let’s see what the future holds and whether this project stays alive or not

…and a lot more talks I’ve already forgotten about; this article is already way too long. Go and check the rest of the talks, it’s a real goldmine!

This blog post originally appeared on Medium on February 18, 2018.