Me in IT UNIX/Linux Consultancy is based in Utrecht, The Netherlands, and specializes in UNIX and Linux consultancy. Experience with Red Hat Enterprise Linux (Red Hat Certified Architect), the Fedora Project, CentOS, OpenBSD and related Open Source products makes Me in IT UNIX/Linux Consultancy a great partner for implementing, maintaining and upgrading your environment.

Open Source software is an important aspect of any Linux distribution. Me in IT UNIX/Linux Consultancy uses Open Source software where possible and actively shares its experiences. In the articles section you will find many UNIX/Linux adventures, shared for others to benefit from.

Becoming an NTP server for pool.ntp.org

So you have:

  1. About two hours to set it up.
  2. Some bandwidth to share.
  3. A fixed IP address.
  4. Some knowledge of Linux.

Good, you can help the pool.ntp.org project by becoming a member of the NTP pool. Internet-connected users will connect to your host to get the correct time from your machine.

It's quite straightforward. Start by reading the how to join pool.ntp.org documentation.

Change these parameters in /etc/ntp.conf:

# Add this line to allow anybody (default) limited access.
restrict default kod limited nomodify nopeer noquery notrap

# Comment out the pool NTP servers:
#server 0.amazon.pool.ntp.org iburst
#server 1.amazon.pool.ntp.org iburst
#server 2.amazon.pool.ntp.org iburst
#server 3.amazon.pool.ntp.org iburst

# Add server from http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers
# Choose machines that are physically close.
server hostname1.domain.tld
server hostname2.domain.tld
server hostname3.domain.tld
server hostname4.domain.tld
server hostname5.domain.tld
server hostname6.domain.tld
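
After changing the configuration, restart the NTP daemon so the new settings take effect. A sketch for a sysvinit-based system such as CentOS 6 (use systemctl on systemd-based distributions):

sudo service ntpd restart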

Check if NTP is working:

ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-1.2.3.4              2.3.4.5    2 u  107  512  377    9.694    1.765   1.152
-hostname1.domain.tld 3.4.5.6    2 u  125  512  377   18.407    3.662   0.998
-hostname2.domain.tld 4.5.6.7    2 u  126  512  377   31.769   -1.389   0.218
-hostname3.domain.tld 5.6.7.8    2 u  144  512  377   19.508    1.260   0.531
+hostname4.domain.tld 6.7.8.9    3 u  117  512  377   24.559    0.928   0.779
+hostname5.domain.tld 7.8.9.10   2 u  203  512  377   14.287   -0.134   0.133
*hostname6.domain.tld .GPS.      1 u  211  512  377   67.740    0.268   0.308

Hint: it takes about six minutes before the minuses (-), plusses (+) and asterisks (*) appear.

Now go back to the join page and register your server. It takes a couple of hours for your NTP server to prove it is stable before you will receive traffic.

The internet thanks you!

Rundeck on CentOS behind Apache HTTPD proxy

Rundeck is getting more and more attention, and that's not strange; it's a wonderful tool to execute code on remote hosts.

I had some trouble figuring out how to make Rundeck work when installed behind an Apache HTTPD proxy. Here are the steps I took.

1. Install Rundeck

That's easy:

sudo rpm -Uvh http://repo.rundeck.org/latest.rpm
sudo yum install rundeck

Interesting to know: the configuration has been split off into a separate RPM:
rpm -qR rundeck
...
rundeck-config

2. Let Rundeck use MySQL

By default Rundeck uses an H2 database. It's probably technically fine, but difficult to manage. I suggest switching to MySQL right away.

Rundeck ships with a MySQL connector, which is great:

rpm -ql rundeck|grep -i mysql
/var/lib/rundeck/exp/webapp/WEB-INF/lib/mysql-connector-java-5.1.17-bin.jar

In the file rundeck-config.properties, set these dataSource parameters:

dataSource.url = jdbc:mysql://localhost/rundeck
dataSource.username = rundeck
dataSource.password = SomePassword

Now create the database and user in MySQL:

mysql> create database rundeck;
mysql> grant all on rundeck.* to 'rundeck'@'localhost' identified by 'SomePassword';

Rundeck will provision the database automatically.
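
Once Rundeck has started for the first time, you can verify that provisioning worked by listing the tables; a quick sanity check, assuming the credentials from above:

mysql -u rundeck -pSomePassword rundeck -e 'show tables;'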

3. Configure Apache HTTPD

Install Apache HTTPD:

sudo yum install httpd

Add a file /etc/httpd/conf.d/rundeck.conf:

<Location "/rundeck">
        ProxyPass http://localhost:4440/rundeck
        ProxyPassReverse http://localhost:4440/rundeck
</Location>

4. Configure Rundeck's profile

This is an important one; without this step you will see a very ugly Rundeck, because stylesheets and images are not loaded.
Change /etc/rundeck/profile. Somewhere in it you'll find the variable RDECK_JVM being exported. Add an option to it: -Dserver.web.context=/rundeck \. My result looks like this:

export RDECK_JVM="-Djava.security.auth.login.config=/etc/rundeck/jaas-loginmodule.conf \
        -Dloginmodule.name=RDpropertyfilelogin \
        -Drdeck.config=/etc/rundeck \
        -Drdeck.base=/var/lib/rundeck \
        -Drundeck.server.configDir=/etc/rundeck \
        -Dserver.datastore.path=/var/lib/rundeck/data \
        -Drundeck.server.serverDir=/var/lib/rundeck \
        -Drdeck.projects=/var/rundeck/projects \
        -Drdeck.runlogs=/var/lib/rundeck/logs \
        -Drundeck.config.location=/etc/rundeck/rundeck-config.properties \
        -Dserver.web.context=/rundeck \
        -Djava.io.tmpdir=$RUNDECK_TEMPDIR"

5. Start it all up (persistently)

sudo chkconfig httpd on
sudo service httpd start
sudo chkconfig rundeck on
sudo service rundeck start

Now you should be able to access http://yourhost/rundeck
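
As a quick check that the whole proxy chain works, request the page through Apache HTTPD; a sketch, run on the host itself:

curl -I http://localhost/rundeck/

A 200 response (or a redirect to the login page) means HTTPD is proxying to Rundeck correctly.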

Deploying web applications (war) using RPM packages

I'm actually not sure if this is the most logical approach, but you can deploy web archives (WARs) into an application server like Apache Tomcat using RPM packages.

There are benefits:

  • Deployments are very similar every time.
  • You can check the installed version. (rpm -q APPLICATION)
  • You can verify if the installation is still valid. (rpm -qV APPLICATION)
  • You can use Puppet to deploy these applications.

And there are drawbacks:

  • Apache Tomcat has to be stopped to deploy. This makes all installed web applications unavailable for a moment.
  • RPM is a package, WAR is also a package. A package in a package is not very logical.
  • Apache Tomcat "explodes" (unpacks) the WAR. That exploded directory is not managed by the RPM.

I've been using this method for a few years now. My conclusion: the benefits outweigh the drawbacks.

Here is what a SPEC file looks like:

Name: APPLICATION
Version: 1.2.3
Release: 1
Summary: The package for APPLICATION.
Group: Applications/Productivity
License: internal
Source: %{name}-%{version}.war
Requires: httpd
Requires: apache-tomcat
Requires: apache-tomcat-ojdbc5
Requires: apache-tomcat-jt400

BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildArch: noarch

%description
The package for APPLICATION

%prep
# No %setup; the war is taken from SOURCES directly in %install.
#%setup -n %{name}-dist-%{version}

%{__cat} << 'EOF' > %{name}.conf
<Location "/%{name}">
ProxyPass http://localhost:8080/%{name}
ProxyPassReverse http://localhost:8080/%{name}
</Location>
EOF

%{__cat} <<'EOF' > %{name}.xml
<?xml version="1.0" encoding="UTF-8"?>
<Context>
<Resource name="jdbc/DATABASE"
    auth="Container"
    type="javax.sql.DataSource"
    validationQuery="select sysdate from dual"
    validationInterval="30000"
    timeBetweenEvictionRunsMillis="30000"
    maxActive="100"
    minIdle="10"
    maxWait="10000"
    initialSize="10"
    removeAbandonedTimeout="60"
    removeAbandoned="true"
    minEvictableIdleTimeMillis="30000"
    jmxEnabled="true"
    username="USERNAME"
    password="PASSWORD"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    url="DATABASEURL"/>
</Context>
EOF

%install
rm -Rf %{buildroot}
mkdir -p %{buildroot}/opt/apache-tomcat/webapps/
cp %{_sourcedir}/%{name}-%{version}.war %{buildroot}/opt/apache-tomcat/webapps/%{name}.war
mkdir -p %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost
cp %{name}.xml %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml
mkdir -p %{buildroot}/etc/httpd/conf.d/
cp %{name}.conf %{buildroot}/etc/httpd/conf.d/

%clean
rm -rf %{buildroot}

%files
%defattr(-,tomcat,tomcat,-)
/opt/apache-tomcat/webapps/%{name}.war
%config /etc/httpd/conf.d/%{name}.conf
%config /opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml

%changelog
* Tue Sep 9 2014 - robert (at) meinit.nl
- Initial build.
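
Building and installing then works like any other RPM. A sketch, assuming the war and spec live in a standard ~/rpmbuild tree:

cp APPLICATION-1.2.3.war ~/rpmbuild/SOURCES/
rpmbuild -bb ~/rpmbuild/SPECS/APPLICATION.spec
sudo yum localinstall ~/rpmbuild/RPMS/noarch/APPLICATION-1.2.3-1.noarch.rpm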

Puppet manifests for DTAP environments

Here is how I implement a manifest to install an application in different environments (Development, Test, Acceptance, Production).

1. I package the application into an RPM.

2. I build a manifest (init.pp) that holds the shared properties:

# mkdir -p /etc/puppet/modules/APPLICATION/{manifest,file,template}s

# cat /etc/puppet/modules/APPLICATION/manifests/init.pp
class APPLICATION {
  package { APPLICATION:
    ensure => present,
  }

  file { "/opt/apache-tomcat/conf/Catalina/localhost/APPLICATION.xml":
    content => template("/etc/puppet/modules/APPLICATION/templates/APPLICATION.xml.erb"),
    notify  => Service["apache-tomcat"],
    require => Package["APPLICATION"],
  }
}

Add the template to the module:

# cat /etc/puppet/modules/APPLICATION/templates/APPLICATION.xml.erb
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Context>
<Context>
  <Resource name="jdbc/APPLICATION"
    auth="Container"
    type="javax.sql.DataSource"
    testWhileIdle="true"
    testOnBorrow="true"
    testOnReturn="false"
    validationQuery="select sysdate from dual"
    validationInterval="30000"
    timeBetweenEvictionRunsMillis="30000"
    maxActive="100"
    minIdle="10"
    maxWait="10000"
    initialSize="10"
    removeAbandonedTimeout="60"
    removeAbandoned="true"
    logAbandoned="true"
    minEvictableIdleTimeMillis="30000"
    jmxEnabled="true"
    username="<%= APPLICATIONUSERNAME %>"
    password="<%= APPLICATIONPASSWORD %>"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    url="<%= APPLICATIONDBURL %>"/>
</Context>

Now I make a manifest for each environment:

# cat /etc/puppet/modules/APPLICATION/manifests/development.pp
class APPLICATION::development {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/test.pp
class APPLICATION::test {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/acceptance.pp
class APPLICATION::acceptance {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/production.pp
class APPLICATION::production {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

On a machine, simply include the class for its environment:

include APPLICATION::development
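
For example in a node definition; a minimal sketch, with a hypothetical hostname:

# cat /etc/puppet/manifests/site.pp
node 'devserver01.example.com' {
  include APPLICATION::development
}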

Doing HTTPS requests from the command line with (basic) password authentication

Imagine you want to test a web service or site secured by SSL and a password. Here is how to do that from the command line.

In this example these values are used:

  • username: username
  • password: password
  • hostname: example.com

Generate the base64 encoded username and password combination:

echo -n "username:password" | openssl base64

The output will be something like dXNlcm5hbWU6cGFzc3dvcmQ=. Use that string in the Authorization header of a request saved in a file called "input.txt":

GET /some/directory/some-file.html HTTP/1.1
Host: example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=

N.B. End the file with an empty line (two newlines); it marks the end of the HTTP request.

Next, throw that file into openssl:

(cat input.txt ; sleep 3) | openssl s_client -connect example.com:443

The output will show all headers and HTML content so you can grep all you want.
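
For completeness: curl can do the same in one line, taking care of the TLS connection and the base64 encoding for you. Using the example values:

curl -u username:password https://example.com/some/directory/some-file.html

The openssl approach remains useful when you want to see the raw request and response headers going over the wire.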

CloudFlare and F5 LTM X-Forwarded-For and X-Forwarded-Proto

If you want an application (such as Hippo) to be able to determine over which protocol (http/https) a page is served, you must insert an HTTP header when using an Apache ProxyPass.
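
On the Apache side this can be done with mod_headers next to the ProxyPass; a minimal sketch for an SSL virtual host, with a hypothetical backend on port 8080:

<VirtualHost *:443>
        RequestHeader set X-Forwarded-Proto "https"
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
</VirtualHost>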

When you use CloudFlare, the correct headers are inserted by default.

When you use an F5 loadbalancer, or in fact any loadbalancer or proxy, you must tell the loadbalancer to insert these two headers: X-Forwarded-For and X-Forwarded-Proto.

When you use a combination of the two, you have to make the loadbalancer a little smarter; it must detect if the header is present and only add it if it is not. That can be done with iRules.

The first iRule is to add "X-Forwarded-For" to the header:

when HTTP_REQUEST {
  if { ![HTTP::header exists X-Forwarded-For] } {
    HTTP::header insert X-Forwarded-For [IP::remote_addr]
  }
}

The second one is a bit more complex; it needs to verify whether X-Forwarded-Proto is present and, if not, add it based on whether the original request came in on port 80 (http) or port 443 (https):

when HTTP_REQUEST {
  if { ![HTTP::header exists X-Forwarded-Proto] } {
    if { [TCP::local_port] equals 80 } {
      HTTP::header insert X-Forwarded-Proto "http"
    } elseif { [TCP::local_port] equals 443 } {
      HTTP::header insert X-Forwarded-Proto "https"
    }
  }
}

Add these two iRules to your virtual server, and with or without CloudFlare (or any other CDN) your application can find the two headers it needs to decide how to rewrite traffic.

Zabbix Low Level Discovery for TCP ports on a host

You can let Zabbix do a portscan of a host and monitor the ports that are reported as open. I really like this feature; it lets you quickly add a host and monitor changes in its TCP ports.

You'd need to:

  1. Place a script on the Zabbix server and all Zabbix proxies.
  2. Be sure "nmap" is installed. That's a port scanning tool.
  3. Create a Discovery rule on a template.

Place a script

Place this script in /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh and change the owner to the user running the Zabbix server (I presume zabbix:zabbix). Also change the mode to 750.

#!/bin/sh
# Zabbix low level discovery of open TCP ports on a host, using nmap.
# Usage: zabbix_tcpport_lld.sh <hostname-or-address>

echo '{'
echo ' "data":['

# -T4 speeds the scan up, -F limits it to the most common ports.
# The final sed strips the trailing comma from the last JSON element.
nmap -T4 -F ${1} | grep 'open' | while read portproto state service ; do
  port=$(echo ${portproto} | cut -d/ -f1)
  proto=$(echo ${portproto} | cut -d/ -f2)
  echo '  { "{#PORT}":"'${port}'", "{#PROTO}":"'${proto}'" },'
done | sed '$ s/,$//'

echo ' ]'
echo '}'
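
Running the script by hand is the easiest way to test it; hypothetical output for a host with SSH and HTTP open:

$ /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh 192.0.2.10
{
 "data":[
  { "{#PORT}":"22", "{#PROTO}":"tcp" },
  { "{#PORT}":"80", "{#PROTO}":"tcp" }
 ]
}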

Install NMAP

Depending on your distribution:

RHEL/CentOS/Fedora: sudo yum install nmap
Debian:             sudo apt-get install nmap

Configure a Discovery rule in Zabbix

Select a template that you would like to add this discovery rule to. I've created a "Network" template that does a few pings and has this discovery rule.

I've listed the parameters that are required; the rest can be filled in however you like to use Zabbix.

Discovery

  • Name: Open TCP ports
  • Type: External check
  • Key: zabbix_tcpport_lld.sh[{HOST.CONN}]

This makes the variables {#PORT} and {#PROTO} available for use in the items and triggers.

Item Prototypes

  • Name: Status of port {#PORT}/{#PROTO}
  • Type: Simple check
  • Key: net.tcp.service[{#PROTO},,{#PORT}]
  • Type of information: Numeric (unsigned)
  • Data type: Boolean

Trigger Prototypes

  • Name: {#PROTO} port {#PORT}
  • Expression: {Template_network:net.tcp.service[{#PROTO},,{#PORT}].last(0)}=0

Now simply attach a host to this template to have it port scanned and the open TCP ports monitored.

Automounting Windows CIFS Shares

It can be very useful to mount a Windows (CIFS) share on a Linux system. With automount it is easy to reach multiple servers and multiple shares on those servers.

The goal is to tell automount to pick up the hostname and share from the path, so that a user can simply do:

cd /mnt/hostname/share

Follow these steps to set it up.

Install autofs:

yum install autofs

Add a line to auto.master:

echo "/mnt /etc/auto.smb-root.top" >> /etc/auto.master

This tells autofs that "/mnt" is managed by autofs.

Create /etc/auto.smb-root.top:

echo "* -fstype=autofs,rw,-Dhost=& file:/etc/auto.smb.sub" > /etc/auto.smb-root.top

Create /etc/auto.smb.sub:

echo "* -fstype=cifs,rw,credentials=/etc/${host:-default}.cred ://${host}/&" > /etc/auto.smb.sub

Create a credentials file for each server:

cat << EOF > /etc/hostname.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF

And create a file with default credentials:

cat << EOF > /etc/default.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF
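
These files contain plaintext passwords, so restrict their permissions; a small precaution on top of the original setup:

chmod 600 /etc/*.cred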

Restart autofs:

service autofs restart

Now you should be ready to cd into /mnt/hostname/share. You will notice this takes a second or so to complete; that second is used to mount the share and present you with the data.

One drawback of this solution: the username/password is tied to the hostname, so if one share on a host requires a different username/password, that's a problem.

Popularity of Fedora Spins

Fedora has introduced Spins. These spins are ISOs that let a user quickly try a Live DVD of Fedora tailored to their needs.

They are ordered by popularity as measured by me, using BitTorrent to upload these DVDs to the rest of the world. The Ratio column shows the number of times the data has been uploaded.

Spin                         Ratio
Desktop i686                 14.00
Desktop x86_64               13.80
MATE Compiz x86_64           11.50
LXDE i686                    11.40
Design suite x86_64          10.30
Security x86_64               9.14
Xfce i686                     9.03
MATE Compiz i686              8.89
Scientific KDE x86_64         8.54
Electronic Lab x86_64         8.24
Xfce x86_64                   7.97
KDE i686                      7.52
Design suite i686             7.50
KDE x86_64                    7.48
Games x86_64                  7.31
Electronic Lab i686           6.69
LXDE x86_64                   6.68
Security i686                 6.63
Jam KDE x86_64                5.72
Games i686                    5.64
SoaS x86_64                   4.78
Scientific KDE i686           4.64
Robotics x86_64               4.11
SoaS i686                     3.98
Original (no spin) x86_64     3.91
Jam KDE i686                  3.58
Robotics i686                 3.28
Original (no spin) i686       3.04
Original (no spin) source     2.54

Without taking the architecture (x86_64 or i686) into consideration, this table shows the most popular spins:

Spin                 x86_64    i686   Total
Desktop               14.00   13.80   27.80
MATE Compiz           11.50    8.89   20.39
LXDE                   6.68   11.40   18.08
Design suite          10.40    7.50   17.90
Xfce                   9.03    7.79   16.82
Security               9.14    6.63   15.77
KDE                    7.48    7.52   15.00
Electronic Lab         8.24    6.69   14.93
Scientific KDE         8.54    4.64   13.18
Games                  7.31    5.64   12.94
Jam KDE                5.72    3.58    9.30
SoaS                   4.78    3.98    8.76
Robotics               4.11    3.28    7.39
Original (no spin)     3.91    3.04    6.95
Source (no spin)          -       -    2.54

And just to complete the overview, the popularity of the architectures:

Architecture Ratio
x86_64 110.84
i686 94.29

So I'm sure some spins are here to stay.

Interestingly, the non-branded (no-spin) DVD is not that popular; most people choose a specific spin.

Some spins see more popularity on the i686 architecture:

  • LXDE
  • KDE

Zabbix LLD (low level discovery) SNMP examples

In my opinion the low level discovery mechanism that Zabbix now offers is not easy to understand. It is, however, a very useful tool to set up a simple template that monitors hundreds of items at once.

The Zabbix documentation about low level discovery covers one type of discovery well: network interfaces.

Although that's a pretty important discovery, there are more tricks available. I ran into a problem where a Juniper SRX ran out of disk space. This was not monitored, so I added a discovery rule to find all storage devices and see how full they are. I added this discovery rule to a template called "SNMP devices". This means all devices that have that template applied will be "discovered". Many of these devices will not have local storage, though; that is not an issue, the discovery will simply fail for those devices.

I added this discovery rule:

  • Name: Available storage devices
  • Type: SNMPv2 Agent
  • Key: snmp.discovery.storage
  • SNMP OID: hrStorageDescr
  • SNMP community: {$SNMPCOMMUNITY} (This variable is set on host level and referred to here.)
  • Port: 161
  • Update interval (in sec): 3600 (Set this to 60 temporarily, to speed up the discovery process, but remember to set it back.)
  • Keep lost resources period (in days): 1
  • Filter: Macro: {#SNMPVALUE} Regexp: ^/dev/da|^/dev/bo (This ensures only mounts that have a physical underlying storage device are found, the rest will be ignored.)

That rule will discover devices such as these:

  1. /dev/da0s1a
  2. /dev/bo0s1e
  3. /dev/bo0s1f
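
To preview what the rule will find, you can walk the same OID by hand with net-snmp; a sketch, with a hypothetical host and community:

snmpwalk -v2c -c public 192.0.2.1 HOST-RESOURCES-MIB::hrStorageDescr
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: /dev/da0s1a
HOST-RESOURCES-MIB::hrStorageDescr.2 = STRING: /dev/bo0s1e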

Now that these devices have been discovered, you can get all kinds of information about them. This is done using the item prototypes. I created two; one to get the size of the device, the other to get the usage of the device. Those two can be used to calculate a percentage later, with a trigger prototype. Here is one of the two item prototypes:

  • Name: hrStorageSize {#SNMPVALUE}
  • Type: SNMPv2 Agent
  • Key: hrStorageSize.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageSize.{#SNMPINDEX}
  • SNMP community: {$SNMPCOMMUNITY}
  • Port: 161
  • Type of information: Numeric (unsigned)
  • Data type: Decimal
  • Units: bytes
  • Use custom multiplier: 2048 (Because SNMP reports sectors here, which is less intuitive in my opinion.)
  • Update interval (in sec): 1800 (Pretty long, but the size of a device will not change quickly.)

The other item prototype measures how many bytes (sectors) are used. I cloned the previous one and changed only these values:

  • Name: hrStorageUsed {#SNMPVALUE}
  • Key: hrStorageUsed.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageUsed.{#SNMPINDEX}
  • Update interval (in sec): 60 (Shorter, this will change.)

Now check if these items are being found by looking at the "latest data" for the host. You should start to see a few items appear. In that case you can set up the trigger prototype. This is a bit complex, because I want to alert when a device is 95% full.

  • Name: Disk space available on {#SNMPVALUE} ({ITEM.LASTVALUE1}/{ITEM.LASTVALUE2})
  • Expression: 100*{Template_SNMP_Devices:hrStorageUsed.["{#SNMPINDEX}"].last(0)}/{Template_SNMP_Devices:hrStorageSize.["{#SNMPINDEX}"].last(0)}>95

That should start to alarm when the disk is 95% full or more.

I hope this article helps you understand the capabilities of Zabbix LLD. It's a great feature, which I use to monitor blades, power supplies in chassis, network interfaces, disks and TCP ports. It makes templates much simpler, which I really like.
