Adventures in Red Hat Enterprise Linux, CentOS, Fedora, OpenBSD and other open source solutions.

alternatives on RHEL 6 using update-alternatives

When multiple implementations or versions of the same software are installed on a system, the alternatives system can manage which one is used. Some examples of software that use alternatives are:

  • java (openjdk/oracle)
  • mta (sendmail/postfix/exim)
  • php (different versions)
  • zabbix-server (using mysql/psql)
  • gpg (using gpg/pgp)

Alternatives can be added to the system:

alternatives --install <link> <name> <path> <priority>

  1. link - the path the system uses to access the facility, common to all alternatives.
  2. name - an identifier common to all alternatives.
  3. path - the path to the unique alternative.
  4. priority - a number indicating which alternative is used in "auto" mode. Higher means it is more likely to be selected.
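
For example, a minimal sketch registering OpenJDK as the "java" alternative (the JVM path is an assumption; use the path of the package you actually installed):

alternatives --install /usr/bin/java java /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java 170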

When adding an alternative, sometimes "slaves" are required, as shown in the sketch below this list. For example, these "masters" and "slaves" relate to each other:

  • postfix - mailq
  • java - keytool
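
A hedged sketch of a master with a slave, using the same assumed OpenJDK paths as above: when the "java" master is switched, the "keytool" slave link follows automatically.

alternatives --install /usr/bin/java java /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java 170 \
  --slave /usr/bin/keytool keytool /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/keytool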

Alternatives can be selected in "manual" or "auto" mode. Manual means a selected alternative stays in use until another one is selected. Auto means the alternatives system automatically selects the alternative with the highest priority, so the selected alternative can change when a new alternative is added.
By default "auto" is used, which means package upgrades may cause a switch. Auto does have the benefit that vendor updates have the desired effect: when an alternative is removed (for example by removing the package that supplied it), the next preferred alternative is selected automatically, which is nice.
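
These commands inspect and switch modes, using the "java" name from above:

# Show the current selection and mode.
alternatives --display java

# Pick an alternative interactively; this switches to "manual" mode.
alternatives --config java

# Or set one non-interactively (also "manual" mode).
alternatives --set java /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java

# Return to "auto" mode, re-selecting the highest priority.
alternatives --auto java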

RPM SPEC pre/post/preun/postun argument values

This is mostly a reminder for myself, but I hope it helps you too.

RPM has 4 parts where (shell) scripts can be used:

  • %pre - Executed before installation.
  • %preun - Executed before un-installation.
  • %post - Executed after installation.
  • %postun - Executed after un-installation.

In all these sections the variable "$1" can be checked to see what's being done. (It is actually the number of instances of the package that will be installed after the action completes.) The possible actions are:

  • Initial installation
  • Upgrade
  • Un-installation

This table shows the values of "$1" per section, related to the action.

                      %pre            %preun          %post           %postun
Initial installation  1               not applicable  1               not applicable
Upgrade               2               1               2               1
Un-installation       not applicable  0               not applicable  0

This can be used, for example, when registering new services. In "%post":

case "$1" in
    # This is an initial install.
    chkconfig --add newservice
    # This is an upgrade.
    # First delete the registered service.
    chkconfig --del newservice
    # Then add the registered service. In case run levels changed in the init script, the service will be correctly re-added.
    chkconfig --add newservice

case "$1" in
    # This is an un-installation.
    service newservice stop
    chkconfig --del newservice
    # This is an upgrade.
    # Do nothing.

Good to know: this is the order of events for the RPM actions:

install         upgrade                                     un-install
%pre ($1=1)     %pre ($1=2)                                 %preun ($1=0)
copy files      copy files                                  remove files
%post ($1=1)    %post ($1=2)                                %postun ($1=0)
                %preun ($1=1) from the old RPM
                delete files only found in the old package
                %postun ($1=1) from the old RPM

So when upgrading the example package "software" from version 1 to version 2, this is the scriptlet order:

  1. Run %pre from "software-2".
  2. Place files from "software-2".
  3. Run %post from "software-2".
  4. Run %preun from "software-1".
  5. Delete files unique to "software-1".
  6. Run %postun from "software-1".

This means there are cases where "software-1" contains incorrect scripts and there is no way to upgrade. In that case the RPM can be uninstalled, which might execute different commands because $1 equals 0 (un-install) instead of 1 (upgrade).
When the RPM uninstall scripts fail too, the only way to fix things is to manually execute the intended commands. RPM is not perfect, but it's pretty well thought through!
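
When debugging such cases, it helps to first inspect the scriptlets of the installed package. Using "software" as the placeholder package name from above:

# Show the %pre, %post, %preun and %postun scriptlets of an installed package.
rpm -q --scripts software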

Missing telnet, but need to test a TCP connection? Try echo!

This is such a simple trick that I'm still surprised by it. When you need to test a (TCP) connection but have no telnet or nc, you can use this bash workaround:

echo > /dev/tcp/<hostname>/<port>
echo $?

You won't get a response back, but the exit status ($?) will be either:

  1. 0 - It worked.
  2. not 0 - It did not work.
    1. If it takes a long time before your prompt returns, no connection is possible.

You can also use this to send data over UDP (/dev/udp/<hostname>/<port>), but since UDP is stateless, the exit status will always be "0".
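
To keep a filtered port from hanging your shell, the trick can be wrapped in a timeout. A minimal sketch, assuming bash and coreutils' timeout; www.example.com and port 80 are placeholders:

if timeout 3 bash -c 'echo > /dev/tcp/www.example.com/80' 2>/dev/null ; then
    echo "Port open"
else
    echo "Port closed or filtered"
fi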

Becoming an NTP server for the NTP pool

So you have:

  1. About two hours to set it up.
  2. Some bandwidth to share.
  3. A fixed IP address.
  4. Some knowledge of Linux.

Good, you can help the project by becoming a member of the NTP pool. Internet-connected users will contact your host to get the correct time from your machine.

It's quite straightforward. Start by reading the "how to join" documentation on the pool.ntp.org website.

Change these parameters in /etc/ntp.conf:

# Add this line to allow anybody (default) limited access.
restrict default kod limited nomodify nopeer noquery notrap

# Comment out the default pool NTP servers:
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst

# Add servers from the public stratum 1/stratum 2 lists (see the join documentation).
# Choose machines that are physically close.
server hostname1.domain.tld
server hostname2.domain.tld
server hostname3.domain.tld
server hostname4.domain.tld
server hostname5.domain.tld
server hostname6.domain.tld
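
After changing the configuration, restart ntpd and make sure it starts at boot. A short sketch for RHEL 6 style init scripts:

sudo service ntpd restart
sudo chkconfig ntpd on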

Check if NTP is working:

ntpq -p
     remote             refid    st t  when  poll reach   delay   offset  jitter
================================================================================
-hostname1.domain.tld  x.x.x.x    2 u   107   512   377    9.694    1.765   1.152
-hostname2.domain.tld  x.x.x.x    2 u   125   512   377   18.407    3.662   0.998
-hostname3.domain.tld  x.x.x.x    2 u   126   512   377   31.769   -1.389   0.218
-hostname4.domain.tld  x.x.x.x    2 u   144   512   377   19.508    1.260   0.531
+hostname5.domain.tld  x.x.x.x    3 u   117   512   377   24.559    0.928   0.779
+hostname6.domain.tld  x.x.x.x    2 u   203   512   377   14.287   -0.134   0.133
*hostname7.domain.tld  .GPS.      1 u   211   512   377   67.740    0.268   0.308

Hint: it takes about six minutes for the minuses (-), plusses (+) and asterisks (*) to appear. The asterisk marks the selected time source, plusses mark good candidates and minuses mark outliers.

Now go back to the join page and register your server. It takes a couple of hours for your NTP server to prove it is stable before you will receive traffic.

The internet thanks you!

Rundeck on CentOS behind Apache HTTPD proxy

Rundeck is getting more and more attention, which is not strange; it's a wonderful tool to execute code on remote hosts.

I had some troubles figuring out how to make Rundeck work when installed behind an Apache HTTPD proxy. Here are the steps I took.

1. Install Rundeck

That's easy:

sudo rpm -Uvh <URL-of-the-rundeck-repo-RPM>
sudo yum install rundeck

Interesting to know: the configuration has been split off into a separate RPM, as the dependencies show:

rpm -qR rundeck

2. Let Rundeck use MySQL

By default Rundeck uses an H2 database. It's probably technically nice, but difficult to manage. I suggest using MySQL right from the start.

Rundeck comes with a MySQL connector, which is great:

rpm -ql rundeck|grep -i mysql

In the file /etc/rundeck/rundeck-config.properties, set the dataSource parameters:

dataSource.url = jdbc:mysql://localhost/rundeck
dataSource.username = rundeck
dataSource.password = SomePassword
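
Before creating the database and user, make sure the MySQL server itself is installed and running. A hedged sketch for RHEL 6 (package and service names may differ elsewhere):

sudo yum install mysql-server
sudo chkconfig mysqld on
sudo service mysqld start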

Now create the database and user in MySQL

mysql> create database rundeck;
mysql> grant all on rundeck.* to 'rundeck'@'localhost' identified by 'SomePassword';

Rundeck will provision the database automatically.

3. Configure Apache HTTPD

Install Apache HTTPD:

sudo yum install httpd

Add a file /etc/httpd/conf.d/rundeck.conf:

<Location "/rundeck">
        ProxyPass http://localhost:4440/rundeck
        ProxyPassReverse http://localhost:4440/rundeck

4. Configure Rundeck's profile

This is an important one; without this step you will see a very ugly Rundeck, because stylesheets and images are not loaded.
Change /etc/rundeck/profile. Somewhere in it you'll find the variable export RDECK_JVM. Add an option to it: -Dserver.web.context=/rundeck \. My result looks like this:

export RDECK_JVM=" \
        -Drdeck.config=/etc/rundeck \
        -Drdeck.base=/var/lib/rundeck \
        -Drundeck.server.configDir=/etc/rundeck \
        -Dserver.datastore.path=/var/lib/rundeck/data \
        -Drundeck.server.serverDir=/var/lib/rundeck \
        -Drdeck.projects=/var/rundeck/projects \
        -Drdeck.runlogs=/var/lib/rundeck/logs \
        -Drundeck.config.location=/etc/rundeck/ \
        -Dserver.web.context=/rundeck \
        $RUNDECK_TEMPDIR"

5. Start it all up (persistently)

sudo chkconfig httpd on
sudo service httpd start
sudo chkconfig rundeck on
sudo service rundeck start

Now you should be able to access http://yourhost/rundeck.
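
A quick check that the proxy answers, assuming curl is installed; the first line should contain an HTTP status code (a redirect to the login page is fine):

curl -sI http://localhost/rundeck/ | head -n 1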

Deploying web applications (war) using RPM packages

I'm actually not sure if this is the most logical approach, but you can use RPM packages to deploy web archives (wars) into an application server like Apache Tomcat.

There are benefits:

  • Deployments are very similar every time.
  • You can check the installed version. (rpm -q APPLICATION)
  • You can verify that the installation is still valid. (rpm -qV APPLICATION)
  • You can use Puppet to deploy these applications.

And there are drawbacks:

  • Apache Tomcat has to be stopped to deploy. This makes all installed web applications unavailable for a moment.
  • RPM is a package, WAR is also a package. A package in a package is not very logical.
  • Apache Tomcat "explodes" (unpacks) the WAR. That exploded directory is no managed by the RPM.

I've been using this method for a few years now. My conclusion: the benefits outweigh the drawbacks.
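
Since Apache Tomcat has to be stopped to deploy (the first drawback above), the scriptlets described in the earlier RPM post can handle the stop and start. A minimal sketch, assuming the Tomcat init script is named apache-tomcat:

%pre
service apache-tomcat stop > /dev/null 2>&1 || :

%post
service apache-tomcat start > /dev/null 2>&1 || :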

Here is what a SPEC file looks like:

Name: APPLICATION
Version: 1.2.3
Release: 1
Summary: The package for APPLICATION.
Group: Applications/Productivity
License: internal
Source: %{name}-%{version}.tar.gz
Requires: httpd
Requires: apache-tomcat
Requires: apache-tomcat-ojdbc5
Requires: apache-tomcat-jt400

BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildArch: noarch

%description
The package for APPLICATION

%prep
#%setup -n %{name}-dist-%{version}

%build
%{__cat} << 'EOF' > %{name}.conf
<Location "/%{name}">
ProxyPass http://localhost:8080/%{name}
ProxyPassReverse http://localhost:8080/%{name}
</Location>
EOF

%{__cat} << 'EOF' > %{name}.xml
<?xml version="1.0" encoding="UTF-8"?>
<Resource name="jdbc/DATABASE"
    validationQuery="select sysdate from dual"
EOF

%install
rm -Rf %{buildroot}
mkdir -p %{buildroot}/opt/apache-tomcat/webapps/
cp ../SOURCES/%{name}-%{version}.war %{buildroot}/opt/apache-tomcat/webapps/%{name}.war
mkdir -p %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost
cp %{name}.xml %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml
mkdir -p %{buildroot}/etc/httpd/conf.d/
cp %{name}.conf %{buildroot}/etc/httpd/conf.d/

%clean
rm -rf %{buildroot}

%files
%config /etc/httpd/conf.d/%{name}.conf
%config /opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml
/opt/apache-tomcat/webapps/%{name}.war

%changelog
* Tue Sep 9 2014 - robert (at)
- Initial build.
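
Building the package is then a matter of placing the war under SOURCES and running rpmbuild. A hedged one-liner, assuming a standard rpmbuild tree:

rpmbuild -ba SPECS/APPLICATION.spec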

Doing HTTPS requests from the command line with (basic) password authentication

Imagine you want to test a web service or site secured by SSL and a password. Here is how to do that from the command line.

In this example these values are used:

  • username: username
  • password: password
  • hostname: www.example.com

Generate the base64 encoded username and password combination:

echo -n "username:password"  | openssl base64 -base64

The output will be something like dXNlcm5hbWU6cGFzc3dvcmQ=. Create a file called "input.txt" containing the request, and paste that string into the Authorization header:

GET /some/directory/some-file.html HTTP/1.1
Host: www.example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=

N.B. The two empty lines are required.

Next, throw that file into openssl:

(cat input.txt ; sleep 3) | openssl s_client -connect www.example.com:443

The output will show all headers and HTML content so you can grep all you want.
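
When curl is available, the same test is a one-liner; shown here only as an alternative, with the same placeholder hostname (add -k if the certificate is self-signed):

curl -u username:password https://www.example.com/some/directory/some-file.html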

CloudFlare and F5 LTM X-Forwarded-For and X-Forwarded-Proto

If you want an application (such as Hippo) to be able to determine what page is served over what protocol (http/https), you must insert an HTTP header when proxying with Apache ProxyPass.
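
For the Apache ProxyPass case, a minimal sketch assuming mod_headers is loaded; it goes in the *:443 virtual host that does the proxying:

# Tell the backend application this request arrived over https.
RequestHeader set X-Forwarded-Proto "https"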

When you use CloudFlare, the correct headers are inserted by default.

When you use an F5 loadbalancer, or in fact any loadbalancer or proxy, you must tell the loadbalancer to insert these two headers: X-Forwarded-For and X-Forwarded-Proto.

When you use a combination of the two, you have to make the loadbalancer a little smarter; it must detect whether the header is present and add it if not. That can be done with iRules.

The first iRule adds "X-Forwarded-For" to the request if it's missing:

when HTTP_REQUEST {
    if { ![HTTP::header exists X-Forwarded-For] } {
        HTTP::header insert X-Forwarded-For [IP::remote_addr]
    }
}

The second one is a bit more complex; it verifies whether X-Forwarded-Proto is present and, if not, adds it based on whether the original request came in on port 80 (http) or port 443 (https):

when HTTP_REQUEST {
    if { ![HTTP::header exists X-Forwarded-Proto] } {
        if { [TCP::local_port] equals 80 } {
            HTTP::header insert X-Forwarded-Proto "http"
        } elseif { [TCP::local_port] equals 443 } {
            HTTP::header insert X-Forwarded-Proto "https"
        }
    }
}

Add these two iRules to your virtual server; with or without CloudFlare (or any other CDN), your application can find the two headers and decide how to rewrite traffic.

Zabbix Low Level Discovery for TCP ports on a host

You can let Zabbix do a port scan of a host and monitor the ports that are reported as open. I really like that option; it lets you quickly add a host and monitor changes on its TCP ports.

You'd need to:

  1. Place a script on the Zabbix server and all Zabbix proxies.
  2. Be sure "nmap" is installed. That's a port scanning tool.
  3. Create a Discovery rule on a template.

Place a script

Place this script in /etc/zabbix/externalscripts/ and change the owner to the user running the Zabbix server (presumably zabbix:zabbix). Also change the mode to 750.


#!/bin/bash
# Hypothetical filename: discover_tcp_ports.sh
# Print a Zabbix low level discovery JSON object with all open ports.

echo '{'
echo ' "data":['

nmap -T4 -F ${1} | grep 'open' | while read portproto state protocol ; do
    port=$(echo ${portproto} | cut -d/ -f1)
    proto=$(echo ${portproto} | cut -d/ -f2)
    echo '  { "{#PORT}":"'${port}'", "{#PROTO}":"'${proto}'" },'
done

echo ' ]'
echo '}'
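
A quick test from the shell, assuming the script was saved under the hypothetical name used above:

/etc/zabbix/externalscripts/discover_tcp_ports.sh 127.0.0.1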

Install NMAP

Depending on your distribution:

RHEL/CentOS/Fedora       Debian
sudo yum install nmap    sudo apt-get install nmap

Configure a Discovery rule in Zabbix

Select a template that you would like to add this discovery rule to. I've created a "Network" template that does a few pings and has this discovery rule.

I've listed the parameters that are required; the rest can be filled in however you like to use Zabbix.


  • Name: Open TCP ports
  • Type: External check
  • Key: discover_tcp_ports.sh[{HOST.CONN}] (use the filename you gave the script above)

This makes the variables {#PORT} and {#PROTO} available for use in the items and triggers.

Item Prototypes

  • Name: Status of port {#PORT}/{#PROTO}
  • Type: Simple check
  • Key: net.tcp.service[{#PROTO},,{#PORT}]
  • Type of information: Numeric (unsigned)
  • Data type: Boolean

Trigger Prototypes

  • Name: {#PROTO} port {#PORT}
  • Expression: {Template_network:net.tcp.service[{#PROTO},,{#PORT}].last(0)}=0

Now simply attach a host to this template to have it port scanned and the discovered open (TCP) ports monitored.

Automounting Windows CIFS Shares

It can be very useful to mount a Windows (CIFS) share on a Linux system. With automount it's easy to reach multiple servers and multiple shares on those servers.

The goal is to tell automount to pick up the hostname and share from the path, so that a user can simply do:

cd /mnt/hostname/share

Use these steps to set this up:

Install autofs:

yum install autofs

Add a few lines to auto.master:

echo "/mnt /etc/" >> /etc/auto.master

This tells autofs that "/mnt" is managed by autofs.

Create /etc/auto.smb:

echo "* -fstype=autofs,rw,-Dhost=& file:/etc/auto.smb.sub" > /etc/

Create /etc/auto.smb.sub:

echo "* -fstype=cifs,rw,credentials=/etc/${host:-default}.cred ://${host}/&" > /etc/auto.smb.sub

Create a credentials file for each server:

cat << EOF > /etc/hostname.cred
username=someuser
password=somepassword
EOF

And create a file with default credentials:

cat << EOF > /etc/default.cred
username=someuser
password=somepassword
EOF

Restart autofs:

service autofs restart

Now you should be ready to cd into /mnt/hostname/share. You will notice this takes a second or so to complete; that second is used to mount the share, after which you are presented with the data directly.
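
The one-time mount delay is easy to see with the time builtin ("hostname" and "share" are placeholders, as above):

time ls /mnt/hostname/share   # first run mounts the share
time ls /mnt/hostname/share   # second run returns instantly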

One drawback of this solution: the username/password is tied to the hostname, so if a share requires a different username/password, that's a problem.
