I've got this blog of mine where I log in every now and then, so whenever there is a new version of WordPress, I see the update reminder message. There are, however, over a dozen other blogs under my wing, and I don't log into those nearly as often; in fact, I hardly log into them at all. Some of the more savvy admins of those blogs take care of the updates themselves, some don't. Leaving those blogs outdated poses a significant risk, so I thought I would create a tool to keep me informed.

It's also possible to activate Automatic Background Updates, but with a few custom-made plugins in place, I prefer some degree of human supervision over the update process: either testing the update on a development machine first, or at least being there to see immediately if something goes wrong.

So I wrote this Perl script to keep an eye on those websites for me. The only catch is that it relies on the "generator" meta tag to read a site's WordPress version, and some themes prefer to keep this information hidden.

Using cron, this script will email the results to your address at some convenient time; once a week sounds just about right for me.
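
A crontab entry for a weekly run could look something like this (the path and script name are just placeholders):

0 7 * * 1 /usr/local/bin/check_wp_versions.pl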

You only need to add your URLs to the list in the foreach loop:

#!/usr/bin/perl
#
#       .--' |
#      /___^ |     .--.
#          ) |    /    \
#         /  |  /`      '.
#        |   '-'    /     \
#        \         |      |\
#         \    /   \      /\|
#          \  /'----`\   /
#          |||       \\ |
#          ((|        ((|
#          |||        |||
#  perl   //_(       //_(   script :)

use strict;
use Data::Dumper;
use LWP::UserAgent;
use Email::MIME;

my $wp_url = 'https://wordpress.org/latest.tar.gz';
my $message = '';

# get information from $wp_url using HEAD method
my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;
my $response = $ua->head($wp_url);


if ($response->is_success) {
	if ($response->header('content-disposition') =~ m/wordpress-([0-9\.]+)\.tar\.gz/) {
		my $wp_version = $1;
		my $wp_site_content;
		my $wp_update_url;
		my $wp_site_version;
		
		foreach my $url (
			'http://www.example.com/blog/',
			'http://www.example2.com/',
			# 'add more here',
		) {
			#print "\n$url\n";
			my $wp_site = LWP::UserAgent->new;
			$wp_site->timeout(10);
			$wp_site->env_proxy;
			$wp_site_content = $wp_site->get($url);
			if ($wp_site_content->is_success) {
				if ($wp_site_content->decoded_content =~ m/<meta\s+name="generator"\s+content="WordPress\s+([0-9\.]+)"/) {
					$wp_site_version = $1;
					if ($wp_site_version ne $wp_version) {
						$wp_update_url = $url;
						$wp_update_url =~ s/\/$//; # remove the trailing /
						$message .= "- $url is version $wp_site_version (update here: $wp_update_url/wp-admin/update-core.php ) \n";
					}
					else {
						$message .= "- $url is up-to-date\n";
					}
				}
				else {
					$message .= "- unable to determine WP version of $url - hidden generator tag?\n";
				}
			}
			else {
				$message .= "- unable to read $url\n";
			}
			undef $wp_site;
			undef $wp_site_content;
			undef $wp_site_version;
			sleep 1;
		}

		if ($message ne '') {
			$message = "The following sites have been checked against the current stable version of WordPress: \n\n\n" . $message;
			my $email_message = Email::MIME->create(
			  header_str => [
			    From    => 'from@example.com',
			    To      => 'to@example.com',
			    Subject => 'WordPress sites report',
			  ],
			  attributes => {
			    encoding => 'quoted-printable',
			    charset  => 'ISO-8859-1',
			  },
			  body_str => $message,
			); 
			use Email::Sender::Simple qw(sendmail);
			sendmail($email_message);
		}
	}
	else {
		die 'Unable to determine current stable WordPress version';
	}
	
}
else {
    die $response->status_line;
}

And this is what the incoming email looks like (assuming there are a few URLs in the list):

Subject: WordPress sites report

- http://www.example.com/blog/ is up-to-date
- http://www.example2.com/ is version 3.9.2 (update here: http://www.example2.com/wp-admin/update-core.php )
- http://www.example3.com/ is version 4.0 (update here: http://www.example3.com/wp-admin/update-core.php )
- http://www.example4.com/ is up-to-date
- unable to determine WP version of http://www.example5.com/ - hidden generator tag?

 


When using awk to display information, the print function by default appends a newline after every record. I wanted to copy the results and use them as an actual array in a program, though, and preferred to have them all on one line. So printf had to be used instead of print:

awk '{printf "\"%s\", ", $1;}' file.txt

The result would be:

"value1", "value2", "value3", "value4", "value5", "value6"

which I could copy and use as an actual value of an array somewhere else.
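
If the trailing comma gets in the way, the same pipeline can be finished off with sed:

awk '{printf "\"%s\", ", $1;}' file.txt | sed 's/, $//'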


There was this website with regularly updated content I wanted to follow. Unfortunately, they had no RSS feed available and I didn't feel like checking the website every now and then. Also, all I needed were the posts containing particular keywords in the title. So I wrote a tiny script to do the mundane task for me and let me know when the keywords I was interested in appeared:

#!/bin/bash

TMPFILE=`mktemp /tmp/website_content.XXXXXX` || exit 1

wget --output-document="$TMPFILE" --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.1.0" http://someinterestingurl.com/

RESULT=`cat $TMPFILE | grep -i 'keyword-1\|keyword-2\|keyword-3'`

if [[ `echo ${#RESULT}` -gt "0" && `echo $RESULT | wc -l` -ge "1" ]]; then
	echo "yes"
	echo $RESULT | mail -s "new keywords found" example@example.com
fi

rm $TMPFILE

However, the results appeared on a single line when the $RESULT variable was echoed, not line by line as grep prints them to standard output. The problem turned out to be bad usage of echo: echo $RESULT had to be replaced by echo "${RESULT}" (my bad); after that, the script worked just fine.

#!/bin/bash

TMPFILE=`mktemp /tmp/website_content.XXXXXX` || exit 1

wget --output-document="$TMPFILE" --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.1.0" http://someinterestingurl.com/

RESULT=`cat $TMPFILE | grep -i 'keyword-1\|keyword-2\|keyword-3'`

if [[ `echo ${#RESULT}` -gt "0" && `echo "${RESULT}" | wc -l` -ge "1" ]]; then
	echo "yes"
	echo "${RESULT}" | mail -s "new keywords found" eaxmple@example.ccom
fi

rm $TMPFILE
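
The difference is easy to demonstrate in isolation; without the quotes, word splitting collapses the newlines:

RESULT=$(printf 'line one\nline two\n')
echo $RESULT | wc -l        # prints 1 - newlines lost to word splitting
echo "${RESULT}" | wc -l    # prints 2 - newlines preserved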

I've got a small home server made from a half-broken laptop nobody wanted any more. The screen wasn't working, but otherwise the machine was fine, so I took it, put CentOS 6 on it, and it's been serving as a media center at home for nearly two years now.

Now, I wanted to put MiniDLNA on it so that various small Android devices could stream video and music from it. In Debian, it's as straightforward a task as installing the minidlna package (kudos to the maintainers). But in CentOS, things got more complicated: there was no package in the main repositories. I could find some RPMs to download here and there, but compiling the software from source seemed like the best option.

The source can be found here:
http://sourceforge.net/projects/minidlna/

However, the

./configure

ended in an error:

configure: error: libavutil headers not found or not usable

There were some packages starting with libav* in the available repos, but none of them helped. Eventually, it turned out that the problem could be solved by installing the ffmpeg package. One way to get it is to use the media-oriented ATrpms repository:

vim /etc/yum.repos.d/atrpms.repo
[atrpms]
name=Fedora Core $releasever - $basearch - ATrpms
baseurl=http://dl.atrpms.net/f$releasever-$basearch/atrpms/stable
gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms
enabled=1
gpgcheck=1
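
With the repository in place, the missing dependency and the build itself boil down to something like this (a sketch; ffmpeg-devel is my assumption about the exact package providing the libavutil headers):

yum install ffmpeg ffmpeg-devel
tar xf minidlna-*.tar.gz
cd minidlna-*
./configure
make
make install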

After installing ffmpeg, compilation went just fine and I can now play cartoons for the kids or listen to music on any device at home.


It's very convenient, even necessary, to use a version control system, e.g. CVS or Git. One can go back in history and easily trace the changes, especially when something goes wrong. Git, being a distributed system, can be used to develop locally, and once your work is ready, you can push it upstream to your repository, where it is available for others to see or pull. Next, you'd probably want to apply the patches to the production server, where the software is actually deployed. Most often, this is done by uploading the files via FTP.


This approach is common and usually serves well. Over the years, however, I have found that it leaves room for human error. A few times I forgot to upload a file that was part of the patch and had to deal with the resulting problems later. Then it occurred to me that I could use the version control features for deployment as well. You need shell access, of course, but where possible, this approach can be very convenient, and it eliminates the human factor.


Now, it can sometimes be useful not to check out all the contents of the repository on the production server. Let's say you have this directory structure:

|
|__Documentation
|__SQL_structure_files
|__public_html

where only the public_html directory needs to be deployed. To check out only a part of the repository, you can use Git's sparse checkout feature in the directory on the production server where you want to put the files:

git init 
git remote add origin git@repo.example.com:/git/repo_name
git config core.sparsecheckout true
echo public_html >> .git/info/sparse-checkout
git pull origin master

On the first line, you initialize a new repository. Then you add your remote repository to the list of remotes. On the third line, you enable the sparse-checkout feature, which allows you to check out partial content only. On the fourth line, you define which directories you want to check out; it's only public_html in this case. Finally, you pull, which fetches the repository contents and checks out only the allowed paths.
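
From then on, deploying the next batch of changes is just a matter of repeating the last step on the production server:

git pull origin master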


If you have a disk image with standard partitions, you can mount any of its partitions as a loopback device using the offset given by fdisk:

user@machine:/home/user#  fdisk -l disk.img

Disk disk.img: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00073e63

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   204802047   102400000    7  HPFS/NTFS/exFAT
/dev/sda2       204804094  1465147391   630171649    5  Extended
/dev/sda5       204804096  1457340415   626268160   83  Linux
/dev/sda6      1457342464  1465147391     3902464   82  Linux swap / Solaris

The sda5 partition starts at sector 204804096. We know from fdisk -l that one sector is 512 bytes, so we multiply: 204804096 * 512 = 104859697152

Now we can mount the partition using the specified offset (in bytes):

mount -o loop,offset=104859697152 -t ext4 disk.img /mnt
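
The shell can do the arithmetic for you, too, which is one less step to get wrong:

mount -o loop,offset=$((204804096 * 512)) -t ext4 disk.img /mnt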

LVM

However, if there is LVM inside the image, the procedure described above won't work. In this case, we need to set up the whole image as a loop device first:

user@machine:/home/user#  fdisk -l lvmdisk.raw

Disk lvmdisk.raw: 250.1 GB, 250058268160 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488395055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bad8e

Device Boot      Start         End      Blocks   Id  System
/home/user/lvmdisk.raw1   *          63      208844      104391   83  Linux
/home/user/lvmdisk.raw2          208845   488392064   244091610   8e  Linux LVM

losetup is used to set up the loop device /dev/loop0, while kpartx reads the image's partition table and maps the partitions as virtual block devices in /dev/mapper.

losetup /dev/loop0 lvmdisk.raw
kpartx -a /dev/loop0

We'll now be able to see the partitions in /dev/mapper as /dev/mapper/loop0p1 and /dev/mapper/loop0p2, and we can work with the volume groups as usual:

vgscan
vgchange -ay VolGroup00
mount -t ext4 /dev/mapper/VolGroup00-LogVol00 /mnt
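
When you are done with the volumes, the whole chain can be torn down again (assuming the same volume group and loop device as above):

umount /mnt
vgchange -an VolGroup00
kpartx -d /dev/loop0
losetup -d /dev/loop0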

Some tasks you find yourself doing only once in a while, so they never stick in your memory for long. Adding a new device to Observium (a great monitoring tool) when that device is off the premises, which makes it worth bothering with SNMPv3 (for its encryption, which v2 lacks), is definitely one of those moments.

Step 1: SNMP installation on the monitored device (it was Debian this time)

apt-get install snmp snmpd libsnmp-dev

Step 2: SNMP daemon configuration

vim /etc/snmp/snmpd.conf

Look for this line; by default, the daemon listens on localhost only, so you need to add the address of the interface on which it should also listen:

agentAddress  udp:127.0.0.1:161,udp:192.168.1.105:161

Further on, you need to uncomment (i.e. allow) the user we're going to use, called "authOnlyUser" in this case, and also add the string "priv" after the username; that enforces the use of encrypted traffic, which is the main advantage here:

#  Full read-only access for SNMPv3
rouser   authOnlyUser   priv

Step 3: Add the snmpv3 user

If the daemon is running, you need to stop it before you can add the user:

service snmpd stop

Then you can create the user:

net-snmp-config --create-snmpv3-user -ro -a ZM367Q7gtd2o3bB -A SHA -x roL98LMQI39hpic -X AES authOnlyUser
service snmpd start

Let's elaborate on the options:
-ro – the user has read-only access
-a – the authentication password
-A – the type of hash used for authentication (SHA or MD5)
-x – the encryption key
-X – the encryption type (AES or DES)
authOnlyUser – the actual username

Step 4: test the connection

It's a good idea to allow SNMP only from the machine which gathers the data. You can test the connection using snmpwalk:

snmpwalk -v3 -l authPriv -u authOnlyUser -a SHA -A ZM367Q7gtd2o3bB -x AES -X roL98LMQI39hpic host
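
As for allowing SNMP only from the monitoring machine, a pair of iptables rules along these lines would do it (a sketch; 192.0.2.10 stands in for the Observium server's address):

iptables -A INPUT -p udp --dport 161 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p udp --dport 161 -j DROP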

Step 5: Add the device to Observium

If all went well, it’s time to add the device:

./add_device.php hostname ap v3 authOnlyUser ZM367Q7gtd2o3bB roL98LMQI39hpic sha aes 161 udp

The key and password above were randomly generated and are shown only for the sake of readability.


Following an upgrade in Debian Jessie/testing, I was left with no panels at all after logging into the MATE desktop. Running dpkg -s mate-desktop, I saw that MATE had been upgraded from 1.6 to 1.8. My first thought was to try

apt-get install mate-desktop-environment

and it actually did the trick. I'm glad I didn't need to resort to GNOME Shell yet.


It's always best to commit your work only once it's done. Sometimes the changes span days or weeks, and this is where stashing comes in handy. Sometimes, however, I start work at the office and want to carry on at home in the evening. So I commit the half-done changes to the dev branch, push it to the origin, and check it out at home again. Once the work is done and ready to be deployed, I need to see which files have been modified. If there were only one commit, it would be easy, but since the work spans several commits, I need a list of all the changed files:

git diff --name-only SHA1 SHA2

SHA1 and SHA2 are the hashes of the two commits. You can get them displayed using:

git log

or to get a better format:

git log --graph --date-order -C -M --pretty=format:"<%h> %ad [%an] %Cgreen%d%Creset %s" --all --date=short
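
If the deployed state is marked by a branch (say master, which is an assumption about the workflow), the hashes don't have to be looked up at all; the same list can be taken straight from the range:

git diff --name-only master..dev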

I am no big fan of cloud solutions, mainly because I don't like entrusting my data to people I don't know. But the cloud is still just a tool, and like any other tool, it really depends on how you decide to use it.

I was looking at the Amazon Glacier storage prices and it occurred to me that I could actually find it useful. There are important data that simply shouldn't leave the premises, but I also have data worth keeping that aren't critical enough to be part of my redundancy plans. They include some really old stuff (10+ years obsolete, long-gone websites about Ultima Online written in PHP 3) that I keep mostly for nostalgic reasons, and some old OS installation images: nothing I would really miss should my disk fail, and nothing important enough to bother keeping two copies of in separate locations. So I decided to entrust them to Glacier: it's cheap, it's reasonably safe to presume that the data won't be lost over time, and the data is so mundane that no employee-turned-evil with access to it would actually be interested. All that said, the thought of putting my data somewhere out there felt weird, even though there is absolutely nothing private or secret in it. This is where GPG comes in handy.

This command asks you for a passphrase and creates an encrypted copy of a file using the AES256 cipher:

gpg -c --cipher-algo AES256 file.txt

This command can restore the file:

gpg -d file.txt.gpg > file.txt

The thing is, I had a directory with dozens of archived files and it simply wouldn’t do to repeat the process for all of them and enter the passphrase manually over and over again. So this is a workaround to make things more automated:

1. We store the password in a file. Be sure to keep its permissions at 0600 and to delete the file afterwards:

echo -n PASSWORD > password_file
chmod 600 password_file

2. We pass the password to the gpg command. We can either use the yes command together with gpg's file-descriptor option --passphrase-fd, feeding the password file to gpg on descriptor 3, or we can use gpg with the --passphrase-file option.

yes | gpg --passphrase-fd 3 -c --cipher-algo AES256 file.txt 3<password_file

3. We need to repeat the process for every file in the given directory; find -exec will take care of that:

find . -name "*.*" -type f -exec bash -c 'gpg --passphrase-file password_file -c --cipher-algo AES256 "$0"' {} \; -exec echo {} \;

or

find . -name "*.*" -type f -exec bash -c 'yes | gpg --passphrase-fd 3 -c --cipher-algo AES256 "$0" 3<password_file' {} \; -exec echo {} \;

The files are reasonably safe from prying eyes now and ready to be uploaded to Glacier.
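
Restoring everything later works the same way in reverse; a sketch, again relying on the password file (older gpg versions take --passphrase-file as-is, newer ones may also need --batch):

find . -name "*.gpg" -type f -exec bash -c 'gpg --passphrase-file password_file -d "$0" > "${0%.gpg}"' {} \;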
