OpenVPN 2.4 CRL Expired Foo

I chose a nice Friday evening and a good Scotch to upgrade an older Ubuntu LTS to the latest and greatest. All went well until I wanted to connect one of the clients via VPN. All I saw was this nasty little line in the log files of the server:

... VERIFY ERROR: depth=0, error=CRL has expired: ...

Now, that’s not good. After a little bit of digging I found out that I am not the only one running into this issue when migrating to OpenVPN 2.4 while using CRLs. The good news is that there is a fix for it. The bad news is that it is, of course, not available in Ubuntu right now. But fret not, there is a workaround. It is not a nice one, but you can regenerate the CRL with a longer validity by doing the following.

Modify the OpenSSL configuration that you use to manage your certificates. If you use Easy RSA, it is most likely in /etc/openvpn/easy-rsa/ and called openssl-1.0.0.cnf. Look for the default expiration for certificates and CRLs. In my case that looked like this:

default_days = 3650 # how long to certify for
default_crl_days= 30 # how long before next CRL

Increase the default to something like this:

default_days = 3650 # how long to certify for
default_crl_days= 3650 # how long before next CRL

And now regenerate the CRL. This is assuming you are using Easy RSA and you are in the folder /etc/openvpn/easy-rsa:

openssl ca -gencrl -keyfile keys/ca.key -cert keys/ca.crt -out keys/crl.pem -config ./openssl-1.0.0.cnf
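
To double-check the result, you can print the validity period of the freshly generated CRL (same path as above):

openssl crl -in keys/crl.pem -noout -lastupdate -nextupdate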

After a restart of the OpenVPN server, the clients should be able to connect again.

Happy VPN’ing

Send Email from the Command Line using an External SMTP Server

Sending email in Linux is pretty straightforward once an email server is set up. Just use mutt or mail and all is good. But sometimes you actually want to test whether SMTP is working correctly, and not only on your box, but on a remote box. That is of course easy using an MUA like Thunderbird or Sylpheed, but that is not always feasible on a remote server in a remote network.

Luckily there is a solution using just the command line. To be precise, there are multiple solutions, starting with speaking SMTP by hand using telnet, which is pretty hardcore. So how about mailx or swaks?
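
Just for reference, such a raw telnet session on port 25 would look roughly like this (server responses omitted, addresses are the same placeholders as in the examples below; with authentication and STARTTLS this gets much more painful by hand):

telnet smtp.example.com 25
EHLO client.example.com
MAIL FROM:<me@example.com>
RCPT TO:<memyselfandi@example.com>
DATA
Subject: Test by hand

Testbody
.
QUIT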

mailx

mailx is part of multiple packages, like mailutils, but I prefer the heirloom-mailx version. This version allows you to specify a lot of SMTP connection details; just check out the manpage. On a Debian-based distribution you can quickly install it with

apt-get install heirloom-mailx

An email can be sent with:

echo "Testbody" | mailx -v \
-r "me@example.com" \
-s "Test Subject" \
-S smtp="smtp.example.com:587" \
-S smtp-use-starttls \
-S smtp-auth=login \
-S smtp-auth-user="me@example.com" \
-S smtp-auth-password="changeme" \
-S ssl-verify=ignore \
memyselfandi@example.com

This would send an email to memyselfandi@example.com using the SMTP server smtp.example.com with STARTTLS, without verifying the SSL certificate. There are of course tons of other options. Just play around with it.

Swaks – Swiss Army Knife for SMTP

Swaks, the Swiss Army knife for SMTP, is a great little tool on the command line that also offers options to test SMTP servers. And of course it supports encryption using TLS.

Just install it with the package manager of your distribution. Here is the Debian-based version:

apt-get install swaks

And you can send an email with:

echo "This is the message body" | swaks \
--to "memyselfandi@example.com" \
--from "me@example.com" \
--server smtp.example.com \
--auth LOGIN \
--auth-user "me@example.com" \
--auth-password "changeme" \
-tls

Yes, TLS is activated with a single-dash parameter. Swaks, being a Perl script, can also just be downloaded from the Swaks homepage and works nicely in Cygwin.

MySQL max_connections limited to 214 on Ubuntu Foo

After moving a server to a new machine with Ubuntu 16.10, I received some strange Postfix SMTP errors, which turned out to be a connection issue with the MySQL server:

postfix/cleanup[30475]: warning: connect to mysql server 127.0.0.1: Too many connections

Oops, did I forget to raise max_connections during the migration?

# grep max_connections /etc/mysql/mysql.conf.d/mysqld.cnf
max_connections = 8000

Nope, I didn’t. Did we all of a sudden have a surge in clients accessing the database? A quick look at MySQL’s process list looked fine. But something was off. So let’s check the value in the SQL server itself:

mysql> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 214   |
+-----------------+-------+
1 row in set (0.01 sec)

Wait, what?! A look into the error log gave the same result:

# grep max_connections /var/log/mysql/error.log
2017-06-14T01:23:29.804684Z 0 [Warning] Changed limits: max_connections: 214 (requested 8000)

Something is off here, and ye olde oracle Google has quite some hits on that topic. The problem lies with the maximum allowed number of open files: you can’t have more connections than open files. Makes sense. Some people suggest solving it via /etc/security/limits.conf. That is not so simple on Ubuntu anymore, because you first have to enable pam_limits.so. And even then it doesn’t work, because since Ubuntu switched to systemd (15.04 if I am not mistaken) this configuration only applies to user sessions, not to services/daemons.

So let’s solve it using systemd’s settings to allow for more connections/open files. First you have to copy the unit file so that you can make the changes we need:

cp /lib/systemd/system/mysql.service /etc/systemd/system/

Then add the following lines to the [Service] section of the new file using vi (or whatever editor you want to use):

vi /etc/systemd/system/mysql.service

LimitNOFILE=infinity
LimitMEMLOCK=infinity
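
As a side note: on newer systemd versions a drop-in override achieves the same without copying the whole unit file, and it survives package updates more gracefully. A minimal sketch:

systemctl edit mysql
# in the editor that opens, add:
#   [Service]
#   LimitNOFILE=infinity
#   LimitMEMLOCK=infinity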

Reload systemd:

systemctl daemon-reload

After restarting MySQL it was finally obeying the setting:

mysql> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 8000  |
+-----------------+-------+
1 row in set (0.01 sec)

The universe is balanced again.

Check SSL Connection Foo

It was that time of the year when I had to renew some SSL certificates. Renewing and updating them on the server is a nice and easy process. But checking whether the server is delivering the correct certificate, and that I updated and populated the intermediate certificates correctly, is a different story.

For websites it is quite easy. Every browser is pretty verbose about the certificate of an https connection. But mail clients are not so talkative. Luckily openssl can help here.

To get the certificate in PEM form to compare, you can simply call this command. Of course you have to replace <host> and <port> with the correct values, like example.com:993 for IMAPS on example.com:

openssl s_client -showcerts -connect <host>:<port>

If you want it a little bit more verbose, you can pipe the output through openssl again to get a more human-readable version:

openssl s_client -showcerts -connect <host>:<port> | openssl x509 -text

Sometimes the connection itself does not support SSL or TLS directly, so you have to give it a hint. For instance, for an SMTP connection with STARTTLS you can use:

openssl s_client -showcerts -connect <host>:25 -starttls smtp | openssl x509 -text

In my version of s_client only smtp, pop3, imap and ftp were supported protocols. If you are looking for more information about this, you will find it in the man pages of openssl and s_client.
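
And if you are only interested in the validity dates, you can trim the output down further (redirecting stdin from /dev/null makes s_client exit right away instead of waiting for input):

openssl s_client -connect <host>:<port> </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates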

Nagios Enabling External Command on Debian based Distributions

While debugging my check_disk problem after the 15.10 upgrade, I noticed that I had forgotten to enable external commands. They are handy when you want to re-schedule a check to see if your changes took effect. Again, something that is easily activated. So if you see something like this, you might want to make some changes:

Error: Could not stat() command file '/var/lib/nagios3/rw/nagios.cmd'!

The external command file may be missing, Nagios may not be running, and/or Nagios may not be checking external commands. An error occurred while attempting to commit your command for processing.

First stop the Nagios service with systemctl, service, or the init script, whatever your distribution prefers. Then, as root, edit the configuration file /etc/nagios3/nagios.cfg and check that the variable check_external_commands is set to 1:

check_external_commands=1

Afterwards update the permissions on the external command directory with the following:

dpkg-statoverride --update --add nagios www-data 2710 /var/lib/nagios3/rw
dpkg-statoverride --update --add nagios nagios 751 /var/lib/nagios3

And then start Nagios again. Et voilà, you can call external commands.
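
To verify that everything works, you can also push a command into the pipe by hand. A minimal test using Nagios' standard external command syntax (host name and service description are placeholders; adjust them to your setup):

now=$(date +%s)
printf "[%s] SCHEDULE_FORCED_SVC_CHECK;localhost;Disk Space;%s\n" "$now" "$now" > /var/lib/nagios3/rw/nagios.cmd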

Nagios check_disk Foo on Ubuntu 15.10

Another day, another foo, this time with the check_disk plugin for Nagios on Ubuntu. After updating to 15.10, my disk space check all of a sudden failed with this one:

DISK CRITICAL - /sys/kernel/debug/tracing is not accessible: Permission denied

It seemed a little odd, especially since I could access that path without problems before. So something had changed, but the workaround is actually fairly easy. As root, edit the file /etc/nagios-plugins/config/disk.cfg and change the command for check_all_disks: you need to add -A -i '/sys' to the command line. Your command for check_all_disks will then look like this:

# 'check_all_disks' command definition
define command{
        command_name    check_all_disks
        command_line    /usr/lib/nagios/plugins/check_disk -w '$ARG1$' -c '$ARG2$' -e -A -i '/sys'
}
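
You can also run the adjusted plugin call by hand to see if it passes before touching the service (the thresholds are just example values):

/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -e -A -i '/sys'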

Restart Nagios and all is good. After fixing it this way, I found that it is actually filed as bug 1516451 in Ubuntu’s Launchpad.

Happy monitoring.

Automatic Ubuntu Kernel Clean Up Foo (Update)

Cleaning up old kernel images on an Ubuntu machine is a quite annoying task. If you forget it and have a separate /boot partition, you will sooner or later run out of disk space. And then, of course, all your updates will fail.

Doing the clean up manually is, as mentioned, more than annoying and very tedious. But other smart people have spent some time and created a nice little one-liner that will get rid of old kernel versions. This one-liner will of course make sure that the currently running kernel is not removed. So it is very important to reboot after a kernel upgrade before you run it!

And without further ado I present….

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs apt-get -y purge
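
If you want to see what would be purged before pulling the trigger, replace the apt-get call with a simple echo:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs echo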

Update:
Not a big deal but a sudo snuck into the xargs call. It is now removed and shouldn’t cause any trouble anymore.

Too Many Open Files Foo With Chrome On Ubuntu 13.10

The last Chrome update and one of the last Thunderbird updates caused some strange crashes of either one of them on my Ubuntu 13.10. All is fine, it runs great, and all of a sudden *boom*, browser window gone, or email client gone.

Luckily .xsession-errors exists and there I could find some entries like this:

[3827:4038:0518/230904:ERROR:shared_memory_posix.cc(226)] Creating shared memory in /dev/shm/.com.google.Chrome.12UDei failed: Too many open files

Not good. But there is help. I had the same issue with MyEclipse in the past, although the fix didn’t seem necessary anymore since 13.10. Then again, I also haven’t used MyEclipse in a while. Anywhoo, here is what has to be done. And before I forget it: all these changes have to be done as root.

First check the setting for file-max with the following command

cat /proc/sys/fs/file-max

In my case this value seems fine, as it is well beyond the 200,000 that they recommend.

peter@majestix:~$ cat /proc/sys/fs/file-max
1627431

If that is below 200,000, you can raise it by adding the following line to /etc/sysctl.conf

fs.file-max=200000

Next up is the ulimit setting for open files. You can check it with the following command

ulimit -n

This one was set to 1024 in my case, which can be a little bit low. At MyEclipse they recommend setting it to 65535, and that’s what I did. Just add the following lines to /etc/security/limits.conf

* hard nofile 65535
* soft nofile 65535

Afterwards restart your machine and all should be fine. If you only had to change the sysctl.conf setting, you can activate that change with the following command

service procps start
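
As far as I know, you can also apply a changed sysctl setting directly, without going through the service:

sysctl -p /etc/sysctl.conf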

PS3 Media Server And Ubuntu Foo… Again

It feels like Groundhog Day all over again. After finding a relatively painless way of installing the PS3 Media Server on Ubuntu (PS3 Media Server And Ubuntu Foo), I found an easy way with a PPA (PS3 Media Server made easy) and thought all would be good when I re-installed my server with Saucy. I couldn’t have been more wrong. The latest Ubuntu version supported by the PPA is Raring, and it seems it stopped at version 1.81.0 of the PS3 Media Server. A quick check of the home page shows the current version is 1.90.1.

After some thought, I checked my old blog post and the configuration files from the PPA. So, this is a chimera of all these components and most importantly, it works. I can now feed media to my devices that don’t support the Plex Media Server.

So, let’s get started. Oh, before I forget it: all these steps need to be done as root!

Dependencies/Repositories

The good thing is that nowadays most of the media-related packages are already part of Ubuntu. So we can simply pull most dependencies directly from Ubuntu’s repository and don’t have to add tons of PPAs.

First and foremost you need Java. For some unrelated reasons I prefer the Oracle JDK. I know JDK 8 was just released, but 7 will do for the moment. And as far as I know PMS (yes, I use the abbreviation again) works fine with the OpenJDK, so this first step is kinda optional.

add-apt-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java7-installer

And if you want to try OpenJDK, you do the following

apt-get install openjdk-7-jdk

Now we need the media-related dependencies: all the encoders, decoders, muxers, etc. Most of it is already in Ubuntu and you might not need all of it (like dcraw). But I think it is better to have it installed and ready to use than to be surprised when a feature you never used before doesn’t work. So here we go.

apt-get install mplayer mencoder mediainfo ffmpeg imagemagick vlc flac dcraw

tsMuxeR is the only one missing from this list. Luckily Robert Tari created a PPA. Let’s just add it:

add-apt-repository ppa:robert-tari/main
apt-get update
apt-get install tsmuxer

Get PS3 Media Server

The project switched from Google Code to SourceForge but has its source code on GitHub. Confused? Well, so am I, but they must have their reasons and I don’t question it. Anywhoo, you can download the latest version (currently 1.90.1) from here:

http://sourceforge.net/projects/ps3mediaserver/

Installation

After downloading you can install PMS into /opt or any other directory you think might be useful (/usr/local, etc.). I personally prefer /opt. Here we go:

tar xzvf pms-1.90.1-generic-linux-unix.tar.gz -C /opt/
ln -s /opt/pms-1.90.1/ /opt/pms

Creating the symlink in the second step makes life easier for later updates. All the configuration and start/stop scripts just look for /opt/pms. Updating should be as easy as extracting the new package into /opt and recreating the symlink.
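
An update would then look roughly like this (the version number is of course hypothetical):

tar xzvf pms-2.00.0-generic-linux-unix.tar.gz -C /opt/
rm /opt/pms
ln -s /opt/pms-2.00.0/ /opt/pms
service ps3mediaserver restart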

Start Script

I based my scripts on the scripts from the PPA from Happy-Neko. So far I have only made some path corrections, but I am planning on moving more configuration options to the configuration in /etc/default/.

Here are the steps to get the Upstart script and the service configuration file, and to set the legacy link:

wget http://www.rfc3092.net/wp-content/uploads/2014/03/ps3mediaserver.conf_.gz
gunzip ps3mediaserver.conf_.gz
mv ps3mediaserver.conf_ /etc/init/ps3mediaserver.conf
wget http://www.rfc3092.net/wp-content/uploads/2014/03/ps3mediaserver.gz
gunzip ps3mediaserver.gz
mv ps3mediaserver /etc/default
cd /etc/init.d/
ln -s /lib/init/upstart-job ps3mediaserver
initctl reload-configuration

The last two steps set the legacy link, so that you can start the service using the old /etc/init.d mechanism, and tell Upstart to scan for new services.

Please check /etc/default/ps3mediaserver to see if it fits your needs. For instance, not everybody wants to run PMS as root. So take a minute and clean that up.

Configuration

With this setup, the configuration for PMS lives in the configuration directory of the user root:

/root/.config/ps3mediaserver

You can change this in /etc/default/ps3mediaserver. Here is how you get a basic configuration going.

wget http://www.rfc3092.net/wp-content/uploads/2014/03/PMS.conf_.gz
gunzip PMS.conf_.gz
mkdir -p /root/.config/ps3mediaserver
mv PMS.conf_ /root/.config/ps3mediaserver/PMS.conf

The configuration file is already updated to reflect the paths for all external tools. It does not contain a UUID for the server, because that is created automatically when you fire up the server for the first time.

You should take a look at the following settings (see also my blog post PS3 Media Server And Ubuntu Foo for tips):

  • folders
  • name
  • network_interface
  • hostname

folders is the only one of these that you definitely want to set to reflect your setup. The server is running as a service and is therefore headless. Just put a comma-separated list of directories in there. Something like

folders = /srv/videos,/srv/music

name, on the other hand, is just a cosmetic thing and defines the name under which the server shows up on the client.

Yet the default for network_interface can sometimes cause some grief. You might have to bind the server to a specific interface if some virtual interface seems more attractive to PMS.

hostname is similar to network_interface. Usually it should not be needed, but if you have multiple IPs on a device you might want to specify which IP the server binds itself to.

Service Start

Now that everything is set up you can start the service.

service ps3mediaserver start

The service should fire up without any errors. And if you encounter errors you will find the logs in /var/log/ps3mediaserver/root/.

Happy Streaming!

Final Thoughts

I think it is pretty straightforward to get PMS to work on Ubuntu. If I find some time… did I just write that?! Well, if I find some time, I will create a PPA with the latest version and try to keep it up to date.

BIND Journal Foo

After doing some updates to my DNS setup, I ran some standard checks. And it took me a while to realize that for some reason my zone didn’t load correctly and the secondary server was being used.

So I dove into the logs and, lo and behold, I saw this:

zone dusares.com/IN: journal rollforward failed: journal out of sync with zone
zone dusares.com/IN: not loaded due to errors.

And it dawned on me. I am currently implementing my own little dynamic DNS updater, and all the updates are stored in a journal. Fine, I can re-run my tests and simply remove that journal (the .jnl files; on Debian-based distributions they are in /var/lib/bind) before restarting BIND.

That works, of course, but is not the way you should handle things. Especially not if you need the content of the journal. So here is the correct way of doing it:

  1. rndc freeze dusares.com
  2. apply changes to zone file
  3. rndc reload dusares.com
  4. rndc thaw dusares.com
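
Put together as a shell session, that looks like this (the zone file path is just an assumption from a Debian-style layout; adjust it to your setup):

rndc freeze dusares.com
vi /etc/bind/db.dusares.com   # apply your changes and bump the serial
rndc reload dusares.com
rndc thaw dusares.com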