Monday, November 30, 2009

Install LAMP On Ubuntu

Install Apache

To start off we will install Apache.

1. Open up the Terminal (Applications > Accessories > Terminal).

2. Copy/Paste the following line of code into Terminal and then press enter:

sudo apt-get install apache2

3. The Terminal will then ask you for your password; type it and then press enter.

Test Apache

To make sure everything installed correctly we will now test Apache to ensure it is working properly.

1. Open up any web browser and then enter the following into the address bar:

http://localhost/

You should see a folder entitled apache2-default/. Open it and you will see a message saying "It works!", congrats to you!

Install PHP

In this part we will install PHP 5.

Step 1. Again open up the Terminal (Applications > Accessories > Terminal).

Step 2. Copy/Paste the following line into Terminal and press enter:

sudo apt-get install php5 libapache2-mod-php5

Step 3. In order for PHP to work and be compatible with Apache, we must restart Apache. Type the following code in Terminal to do this:

sudo /etc/init.d/apache2 restart

Test PHP

To ensure there are no issues with PHP let's give it a quick test run.

Step 1. In the terminal copy/paste the following line:

sudo gedit /var/www/testphp.php

This will create and open a file called testphp.php.

Step 2. Copy/Paste this line into the testphp.php file:

<?php phpinfo(); ?>

Step 3. Save and close the file.
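If you'd rather not open an editor at all, the same file can be created straight from the Terminal. Here's a small sketch; the temporary directory stands in for /var/www so it runs anywhere, and on the real server you'd pipe through sudo tee, since a plain > redirect runs as your own user and would be refused:

```shell
# On the real server you would run:
#   echo '<?php phpinfo(); ?>' | sudo tee /var/www/testphp.php
# Here we demonstrate against a temporary directory instead of /var/www.
DOCROOT=$(mktemp -d)
echo '<?php phpinfo(); ?>' | tee "$DOCROOT/testphp.php"
```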

Step 4. Now open your web browser and type the following into the address bar:

http://localhost/testphp.php

The page should show you information about your PHP installation (version, loaded modules and so on).

Install MySQL

To finish this guide up we will install MySQL. (Note - of Apache, PHP and MySQL, MySQL is the most difficult to set up. I will provide some great resources for anyone having trouble at the end of this guide.)

Step 1. Once again open up the amazing Terminal and then copy/paste this line:

sudo apt-get install mysql-server

Step 2 (optional). In order for other computers on your network to view the server you have created, you must first edit the "Bind Address". Begin by opening up Terminal to edit the my.cnf file.

gksudo gedit /etc/mysql/my.cnf

Change the line

bind-address = 127.0.0.1

and replace 127.0.0.1 with your server's IP address (127.0.0.1 is the default, which only allows connections from the local machine).

Step 3. This is where things may start to get tricky. Begin by typing the following into Terminal:

mysql -u root

Following that, copy/paste this line at the mysql> prompt (the "mysql>" part is the prompt itself, so don't type it):

SET PASSWORD FOR 'root'@'localhost' = PASSWORD('yourpassword');

(Make sure to change yourpassword to a password of your choice.)

Step 4. We are now going to install a program called phpMyAdmin which is an easy tool to edit your databases. Copy/paste the following line into Terminal:

sudo apt-get install libapache2-mod-auth-mysql php5-mysql phpmyadmin

After that is installed our next task is to get PHP to work with MySQL. To do this we will need to open a file entitled php.ini. To open it type the following:

gksudo gedit /etc/php5/apache2/php.ini

Now we are going to have to uncomment the following line by taking out the semicolon (;).

Change this line:

;extension=mysql.so

To look like this:

extension=mysql.so
Now just restart Apache and you are all set!

sudo /etc/init.d/apache2 restart

Quick note to anyone who encountered problems with setting up the MySQL password, please refer to this page: MysqlPasswordReset

I applaud everyone who has taken the time to read this guide. This guide is also my first ever so I would love to hear back from the public on what you guys think! Just don't be too harsh. ;)

* PHPMyAdmin did not work until the following was added to /etc/apache2/apache2.conf:
Include /etc/phpmyadmin/apache.conf

* Copy your web site files to: /var/www
* Your database files are stored in: /var/lib/mysql

* Restarting web server apache2
apache2: Could not reliably determine the server's fully qualified domain name, using for ServerName ... [OK]
* To fix that warning, you need to edit the httpd.conf file. Open the terminal and type:
sudo gedit /etc/apache2/httpd.conf
Now, simply add the following line to the httpd.conf file.
ServerName localhost
Save the file and exit from gEdit. Finally restart the server.
sudo /etc/init.d/apache2 restart

The simplest way of installing a LAMP server in Ubuntu or Debian is to let Tasksel do it: run sudo tasksel, select 'LAMP Server' and then follow the prompts. You can make adjustments to the configuration as needed after the install, but you will have a working LAMP server right out of the box! Or type directly: sudo tasksel install lamp-server

Saturday, November 28, 2009

Find files in Linux

Searches based on file name

The simplest kinds of search are those based on file name, and the shell's filename wildcard matching provides a starting point for this. For example, the command
$ ls *invoice*
will list all file names in the current directory containing the string invoice. Not too impressive? Why not try something like:
$ ls */*invoice*
which will list files with invoice in the name in any subdirectory of your current directory. Then you can extend the idea to whatever level you want, maybe using something like this:
$ ls  *invoice*  */*invoice*  */*/*invoice*
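To see these per-level wildcards in action without touching your real files, here's a small sketch using a scratch directory (all file names invented for the demo):

```shell
# Build a three-level scratch tree containing 'invoice' files.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p a/b
touch invoice1.txt a/invoice2.txt a/b/invoice3.txt
# One pattern per directory level, exactly as described above.
found=$(ls *invoice* */*invoice* */*/*invoice*)
echo "$found"
```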
If you want to search the entire file system for a file based on a file name, the slocate command provides a solution. For example,
$ slocate invoice
will find all files with names that contain the string invoice. You'll find that slocate is lightning fast because it uses a pre-built index of filenames. This index is built using the program updatedb (the slocate command with the -u option does the same thing) which is usually run once a day via cron or anacron.
On Ubuntu installations, the slocate database is /var/lib/slocate/slocate.db. This is the only down-side of slocate - it won't find files that were created since updatedb was last run.

Don't give me bad news...

Among the output from find you'll often notice a bunch of error messages relating to directories we don't have permission to search. Sometimes, there can be so many of these messages that they entirely swamp the 'good' output. You can easily suppress the error messages by redirecting them to the 'black hole' device, /dev/null. To do this, simply append 2> /dev/null to the command line.
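As a runnable sketch of the redirect, using a scratch directory and a deliberately nonexistent path so the error is predictable:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/open"
touch "$tmp/open/notes.conf"
# Without 2>/dev/null, find would complain about /no/such/dir on stderr;
# with it, only the good output survives. ('|| true' keeps the script going
# despite find's non-zero exit status for the missing directory.)
results=$(find "$tmp" /no/such/dir -name '*.conf' 2>/dev/null || true)
echo "$results"
```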

S for secure

In case you were wondering, the s in slocate stands for 'secure'. Here's the scoop on this: the updatedb program (the one that builds the index) runs with root privilege, so it can be sure of seeing all the files. This means that potentially there will be files listed in the slocate.db index that ordinary users should not be able to see. These might be system files, or they might be private files belonging to other users.
The slocate index also keeps a record of the ownership and permissions on the files, and the slocate program is careful not to show you file names that you shouldn't be able to see. There was (I think) an older program called locate that wasn't this smart, but on a modern Linux distribution, slocate and locate are links to the same program.

Specialised search: which and whereis

There are a couple of more specialised search tools, whereis and which, that should be mentioned for the sake of completeness. The program whereis searches for the executable, source code and documentation (manual page) for a specified command. It looks in a pre-defined list of directories. For example:
$ whereis ls
ls: /bin/ls /usr/share/man/man1/ls.1.gz
tells us the location of the executable (binary) and the man page for the ls command. The which command is even more specialised. It simply looks up a specified command on our search path, reporting where it would first find it. For example:
$ which vi
tells us that the vi command is in /usr/bin/vi. Effectively, this command answers the question "If I entered the command vi, which program would actually get run?"
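A quick sketch you can run anywhere; command -v is the POSIX shell's built-in equivalent of which, handy on systems where which isn't installed:

```shell
# Ask where 'ls' would be found on the search path.
ls_path=$(command -v ls)
echo "$ls_path"
```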

Searching on steroids: find

At the other end of the scale is the top-of-the-range search tool, find. In addition to filename-based searching, find is able to locate files based on ownership, access permissions, time of last access, size, and much else besides. Of course, the price you pay for all this flexibility is a rather perplexing command syntax. We'll dive into the details later, but here's an example to give you the idea:
$ find /etc -name '*.conf' -user cupsys -print 
find: /etc/ssl/private: Permission denied 
find: /etc/cups/ssl: Permission denied 
In this example, find is searching in (and below) the directory /etc for files whose name ends in .conf and that are owned by the cupsys account.
Generally, the syntax of the find command is of the form:
$ find <where to look> <what to look for> <what to do with it>
The "where to look" part is simply a space-separated list of the directories we want find to search. For each one, find will recursively descend into every directory beneath those specified. Our table below, titled Search Criteria For Find, lists the most useful search criteria (the "what to look for" part of the command).

Search criteria for find

-name string File name matches string (wildcards are allowed) -name '*.jpg'
-iname string Same as -name but not case sensitive -iname '*tax*'
-user username File is owned by username -user chris
-group groupname File has group groupname -group admin
-type x File is of type 'x', one of: f - regular file
d - directory
l - symbolic link
c - character device
b - block device
p - named pipe (FIFO)
-type d
-size +N File is bigger than N 512-byte blocks (use suffix c for bytes, k for kilobytes, M for megabytes) -size +100M
-size -N File is smaller than N blocks (use suffix c for bytes, k for kilobytes, M for megabytes) -size -50c
-mtime -N File was last modified less than N days ago -mtime -1
-mtime +N File was last modified more than N days ago -mtime +14
-mmin -N File was last modified less than N minutes ago -mmin -10
-perm mode The file's permissions exactly match mode. The mode can be specified in octal, or using the same symbolic notation that chmod supports -perm 644
-perm -mode All of the permission bits specified by mode are set. -perm -ugo=x
-perm /mode Any of the permission bits specified by mode is set -perm /011
And the smaller table Actions For Find, below, lists the most useful actions (the "what to do with it" part of the command). Neither of these is a complete list, so check the manual page for the full story.

Actions for find

-print Print the full pathname of the file to standard output
-ls Give a full listing of the file, equivalent to running ls -dils
-delete Delete the file
-exec command Execute the specified command. All following arguments to find are taken to be arguments to the command until a ';' is encountered. The string {} is replaced by the current file name.
If no other action is specified, the -print action is assumed, with the result that the pathname of the selected file is printed (or to be more exact, written to standard output). This is a very common use of find. I should perhaps point out that many of the search criteria supported by find are really intended to help in rounding up files to perform some administrative operation on them (make a backup of them, perhaps) rather than helping you find odd files you happen to have mislaid.
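Here's a small sketch of two of those actions against a scratch directory (file names invented for the demo; be careful with -delete on real data):

```shell
tmp=$(mktemp -d)
touch "$tmp/keep.txt" "$tmp/old1.log" "$tmp/old2.log"
# -delete removes every matching file...
find "$tmp" -name '*.log' -delete
# ...and with no action given, this find falls back to the default -print.
remaining=$(find "$tmp" -type f)
echo "$remaining"
```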

Why is this not a command?

The which command can - occasionally - give a misleading answer, if the command in question also happens to be a built-in command of the bash shell. For example:
$ which kill
tells us that the kill command lives in /bin. However, kill is also a built-in bash command, so if I enter a command like
$ kill -HUP 1246
it will actually run the shell's built-in kill and not the external command.
To find out whether a command is recognised as a shell built-in, an alias, or an external command, you can use the type command, like this:
$ type kill
kill is a shell builtin

Learning by Example

It takes a while to get your head around all this syntax, so maybe a few examples would help ...
Example 1 This is a simple name-based search, starting in my home directory and looking for all PowerPoint (.ppt) files. Notice we've put the filename wildcard expression in quotes to stop the shell trying to expand it. We want to pass the argument '*.ppt' directly and let find worry about the wildcard matching.
$ find ~ -name '*.ppt'

Example 2 You can supply multiple "what to look for" tests to find and by default they will be logically AND-ed, that is, they must all be true in order for the file to match. Here, we look for directories under /var that are owned by daemon:
$ find /var -type d -user daemon

Example 3 This shows how you can OR tests together rather than AND-ing them. Here, we're looking in /etc for files that are either owned by the account cupsys or are completely empty:
$ find /etc -user cupsys -or -size 0

Example 4 This uses the '!' operator to reverse the sense of a test. Here, we're searching /usr/bin for files that aren't owned by root:
$ find /usr/bin ! -user root

Example 5 The tests that make numeric comparisons are especially confusing. Just remember that '+' in front of a number means 'more than', '-' means 'less than', and if there is no '+' or '-', find looks for an exact match. These three examples search for files that have been modified less than 10 minutes ago, more than 1 year ago, and exactly 4 days ago. (This third example is probably not very useful.)
$ find ~ -mmin -10
$ find ~ -mtime +365
$ find ~ -mtime 4

Example 6 Perhaps the most confusing tests of all are those made on a file's access permissions. This example isn't too bad: it looks for an exact match on the permissions 644 (which would be represented symbolically by ls -l as rw-r--r--):
$ find ~ -perm 644

Example 7 Here we look for files that are writeable by everybody (that is, by the owner and the group and the rest-of-world - all three write bits must be set). The two examples are equivalent; the first uses the traditional octal notation, the second uses the same symbolic notation for representing permissions that chmod uses:
$ find ~ -perm -222
$ find ~ -perm -ugo=w

Example 8 Here we look for files that are writeable by anybody (that is, by either the owner, the group, or rest-of-world - any one of the write bits is set):
$ find ~ -perm /222
$ find ~ -perm /ugo=w

Example 9 So far we've just used the default -print action of find to display the names of the matching files. Here's an example that uses the -exec option to move all matching files into a backup directory. There are a couple of points to note here. First, the notation {} gets replaced by the full pathname of the matching file, and the ';' is used to mark the end of the command that follows -exec. Remember: ';' is also a shell metacharacter, so we need to put the backslash in front to prevent the shell interpreting it.
$ find ~ -mtime +365 -exec mv {} /tmp/mybackup \;
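The same idea can be tried safely in a scratch directory. This sketch assumes GNU find and GNU touch, whose -d option lets us fake an old timestamp:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/backup"
touch -d '400 days ago' "$tmp/ancient.txt"   # pretend this file is old
touch "$tmp/fresh.txt"
# Move everything modified more than a year ago into the backup directory.
find "$tmp" -maxdepth 1 -type f -mtime +365 -exec mv {} "$tmp/backup" \;
```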

Never mind the file name, what's in the file?

As we've seen, tools such as find can track down files based on file name, size, ownership, timestamps, and much else, but find cannot select files based on their content. It turns out that we can do some quite nifty content-based searching using grep in conjunction with the shell's wildcards. This example is taken from my personal file system:
$ grep -l Hudson */*
Here, we're asking grep to report the names of the files containing a match for the string Hudson. The wildcard notation */* is expanded by the shell to a list of all files that are one level below the current directory. If we wanted to be a bit more selective on the file name, we could do something like:
$ grep -l Hudson */*.txt
which would only search in files with names ending in .txt. In principle you could extend the search to more directory levels, but in practice you may find that the number of file names matched by the shell exceeds the number of arguments that can appear in the argument list, as happened when I tried it on my system:
$ grep -l Hudson  */*  */*/*
bash: /bin/grep: Argument list too long
A more powerful approach to content-based searching is to use grep in conjunction with find. This example shows a search for files under my home directory ('~') whose names end in .txt, that contain the string Hudson.
$ find ~ -name '*.txt' -exec grep -q Hudson {} \; -print
This approach does not suffer from the argument list overflow problem that our previous example suffered from. Remember, too, that find is capable of searching on many more criteria than just file name, and grep is capable of searching for regular expressions, not just fixed text, so there is a lot more power here than this simple example suggests.
If you're unclear about the syntax of this example, read The truth about find, below left. In this example, the predicate -exec grep -q Hudson {} \; returns true if grep finds a match for the string Hudson in the specified file, and false if not. If the predicate is false, find does not continue to evaluate any following expressions, that is, it does not execute the -print action.
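You can watch this short-circuit behaviour in a scratch directory (file names and contents invented for the demo):

```shell
tmp=$(mktemp -d)
printf 'about the Hudson river\n' > "$tmp/rivers.txt"
printf 'nothing relevant here\n'  > "$tmp/other.txt"
printf 'Hudson again\n'           > "$tmp/notes.md"
# grep -q only ever runs on the *.txt files, and -print only fires
# for the file where grep found a match.
matches=$(find "$tmp" -name '*.txt' -exec grep -q Hudson {} \; -print)
echo "$matches"
```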

The truth about find

The individual components of a find command are known as expressions (or, more technically, as predicates). For example, -user cupsys is a predicate. The find command operates by examining each and every file under the directory you ask it to search and evaluating each of the predicates in turn against that file.
Each predicate returns either true or false, and the results of the predicates are logically AND-ed together. If one of the predicates returns a false result, find does not evaluate the remaining predicates. So for example in a command such as:
$ find . -user chris -name '*.txt' -print
if the predicate -user chris is false (that is, if the file is not owned by chris) find will not evaluate the remaining predicates. Only if -user chris and -name '*.txt' both return true will find evaluate the -print predicate (which writes the file name to standard output and also returns the result 'true').
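A sketch that contrasts the implicit AND with an explicit -or, using invented file names in a scratch directory:

```shell
tmp=$(mktemp -d)
touch "$tmp/alpha.txt" "$tmp/alpha.log" "$tmp/beta.txt"
# Two tests in a row are AND-ed: only alpha.txt passes both.
both=$(find "$tmp" -name 'alpha*' -name '*.txt')
# Grouped with \( \) and joined by -or, either test is enough.
either=$(find "$tmp" \( -name 'alpha*' -or -name '*.txt' \))
echo "$both"
echo "$either"
```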

Linux Filesystem Tour

We're starting our tour in the root directory, and I suppose you might say this is the, er, high point of the tour, because it's right at the top level of the directory hierarchy. (I don't write the jokes, folks, I just read this script.)
Interestingly, the root directory is the only directory that doesn't have a name. Most people will tell you it's called /, but it isn't really. It's just that when we write down an absolute path name, we start with /, and in the case of the root directory, there's nothing else to write - we're done.
As we set off, on our left you'll notice a directory called /root. This is a little confusing: it isn't the root directory, it's just a directory called root. It's actually the private property of the system administrator. In fact, it's the super-user's home directory.
I have an uncle who's the system administrator of a Solaris system, and he's always complaining that he doesn't have a private home directory. His home directory is /, and he hates it. How would you like to have to hang your washing out to dry in the town square? Can we take a look in /root, Hal? Oh, apparently not - Colonel Linux says no. Well, there's no harm in asking, but /root is one of the few directories for which ordinary users don't have read permission.
Generally speaking, Linux permissions implement a 'look but don't touch' policy. The main exception is in a user's home directory, where they can do whatever they want.

Understanding the root partition

One of the principles guiding the organisation of the filesystem is to allow it to be split across multiple disk partitions (or multiple disks) in a rational manner, and to allow appropriate pieces of it to be shared between machines. Key to this is the notion of the root partition.
When Linux boots, the kernel attaches a single filesystem partition all by itself. This is known as the root partition. Any other partitions that need to be attached are mounted by the mount command, usually under control of entries in the file /etc/fstab. Because in the early stages of startup, only the root filesystem is available, it must contain everything needed for the system to function and attach the other pieces of the filesystem.
Tools on the root partition include the init program (which starts all the other processes), a shell, mount and the /etc/fstab file. The File System Hierarchy standard specifies a number of directories that must lie within the root partition.
Speaking of Colonel Linux, our next port of call is /boot, which is where the colonel lives, so get your cameras or Print Screen buttons ready. He has a file there called something like vmlinuz-2.6.19, which is the (compressed) image of the Linux kernel. When Linux is booted, the boot loader (usually Grub) brings this file into memory and starts it running.
You'll also notice a file there called something like initrd-2.6.19.img, which is an initial RAM disk image. It contains the modules the kernel needs when it's booting, before it can access the filesystem by itself. Don't delete these files, folks, or we won't be able to reboot. Do you want to put me out of a job?
Ah, the guys on the back row have noticed a directory called lost+found. Well done, guys! You'll see a directory of this name at the top level of any partition that contains an ext2 or ext3 filesystem. What's it for? Well, you might think it's a meeting point for stray BSD visitors, ha ha, but it's there for a program called fsck, which checks the consistency of the filesystem.
If fsck finds a file that appears to be intact but doesn't actually have a name, it will create an entry for it in lost+found. To be honest, this hardly ever happens nowadays, so lost+found is probably just an empty directory. Just leave it alone, and stop worrying about it.

Configuration city

All right, coming up on your right you'll see a directory called etc. Historically, we think the name just stood for 'et cetera' (literally 'and other things'). This was the place to put all the stuff that didn't seem to fit anywhere else. We used to be able to stop off here for hot dogs, but nowadays it has become the home for a large collection of system configuration files and scripts.
Some of these files are critically important; for example, /etc/passwd contains the account information for all the locally-defined logins (including root's). You'll also notice /etc/inittab there, which tells init what to do (init is a really special program because when Linux boots, it's the only program that gets started automatically by the kernel. It's responsible for starting all the other services, including the ones that let you log in.)
Also important is /etc/fstab, which tells us which other filesystems should be mounted. You probably don't mess with any of these unless you know what you're doing; errors in these files might prevent the system from booting or stop you from logging in.
While in /etc you'll find the configuration files for various network services - there's /etc/xinetd.conf, which configures xinetd, and /etc/syslog.conf, which configures syslog. All of these files are plain text files, by the way, so the only tool you really need to configure Linux is a text editor. My preferred editor is Vi, but then Hal says I'm weird in other ways, too.
As /etc passes out of view on our right, you'll see /home coming up on the left. This is our residential district. Under here, you'll find the home directories of individual users. For example, Hal's home directory is /home/hal (isn't that right, Hal?).
This is generally a pretty smart neighbourhood, but it's up to individual users what they do under here. Some folks scatter everything around in the one directory, others are really organised with things kept in multiple levels of carefully-named folders.
By default, file permissions under /home allow you to list and examine other users' files, but you can't change them. Of course, individual users can tighten the permissions if they wish. Young Tom has a directory called /home/tom/photos on which only he has read permission. Tom, we'd all love to know what's in there...
Now, Hal likes to speed through /mnt. It's not very exciting, folks, I'm afraid - just an empty directory to temporarily mount other filesystems on to. Right next to /mnt we come to /media. This directory contains sub-directories that are used as mount points for removable media such as floppy discs or CD-ROMs. Probably the younger ones among you won't remember floppy discs? Anyway, the idea is that the hotplug system mounts media on to here automatically when they're inserted.
We've stopped going in there on the tour, ever since an alarming incident last month when Hal drove into /media/cdrom, just to prove it was empty. Then someone shoved a CD in, the hotplug daemon woke up, and wallop - suddenly this entire hierarchy of holiday snaps opened up in front of us. Gave us the willies, I can tell you.

The virtual part of the tour

Ah, now, the directory we're coming up to, /proc, is really interesting because it doesn't actually exist. All the files in here are just a figment of the kernel's imagination - they don't correspond to any information that's actually spinning round on the disc.
Colonel Linux was explaining to me the other week that he had all these internal data structures to keep track of things like memory usage and lots of per-process information like their environment variables. In the olden days, commands like ps (which displays information about running processes) used to fish around inside the memory image of the kernel to snag the information it needed.
The colonel wasn't too wild about this - he said that it felt like having a postmortem performed on you while you were still alive. So he came up with the idea of making this information available as if it were a collection of files. That way, programs can find the information they need just by opening and reading these imaginary files, just like they would open and read any other file.
Hal's going to drive into /proc so we can look around. You can get a hint that there's something weird about /proc because if you do an ls -l in here most of the files have zero length, but if you examine their content with cat or less, they're not empty!
As I said, the so-called files in here show us the content of internal kernel data structures as plain text. For example, the file cmdline shows us the arguments that the kernel was booted with. The file cpuinfo shows us what the kernel knows about the CPU (or CPUs) it's executing on. The file meminfo tells us more about the virtual memory system than we probably wanted to know. And so on.
Sorry, madam, would you mind sitting down? Hal's just going to take this tight bend to show you a collection of directories with names like '3412'. These names are process IDs, and their directories contain yet more imaginary files that provide access to per-process information.
There is actually some documentation on all of this (try man 5 proc for details), but much of the information is at too low a level to be intelligible to the average tourist. In most cases, it's better to use programs like top and ps, which will show you the per-process information in a more digestible form.
Mostly, we think of /proc as a read-only file system, but in fact there are some 'files' under /proc/sys that contain various kernel-tuning parameters that you can adjust by writing to them. For example, we can reduce a parameter called TCP FIN TIMEOUT from 60 to 50 like this:
# cd /proc/sys/net/ipv4
# cat tcp_fin_timeout
# echo 50 > tcp_fin_timeout
# cat tcp_fin_timeout
OK, hands up those of you who don't have the faintest idea what a TCP FIN TIMEOUT is, or why you might want to adjust it? Yes, most of you. I thought as much. For 99% of us, the best thing is to leave this stuff alone.
Just down the street from /proc we come to /sys. This is another of those 'imaginary' filesystems. It was added to the 2.6 kernel to make it easier for kernel-level code, such as device drivers, to exchange data with programs running in user space.
The hierarchy under /sys enables you to see the hardware environment (the busses, devices and so on) that the kernel has discovered, but unless you're rewriting, say, the Linux hotplug subsystem, you should probably ignore it entirely. There is a book /proc et /sys, written by Olivier Daudel and published by O'Reilly, that documents all this... in French.
Highlights of the filesystem tour

Toilet stop

We're going to visit /dev now. 'Dev' is short for 'devices', and there are some strange critters living in here. They don't really behave like ordinary files. If you do an ls -l in here and look carefully, you'll see a 'b' or a 'c' as the first character on the line.
The b's are so-called block devices and represent devices that are block-structured and can be randomly accessed - usually this means disk partitions. For example, this little guy just here, /dev/hda1, is the first partition on the first hard drive. He's a block device.
On this system, hda1 is the root partition (it might be different on your machine depending on how you installed it); in fact, everything we've visited so far on our tour sits on the partition represented by this guy, so he's kept pretty busy. As soon as you click on Shutdown, he's looking forward to a bath, cocoa and bed.
On the left you'll see a large number of what are known as 'tty' devices (hi, guys!). They're character devices, representing character-based terminals. For example, Linux is typically configured to support six virtual terminals. From your graphical desktop you can reach them with the key combinations Ctrl+Alt+F1 through Ctrl+Alt+F6 (and you can get back to the desktop with Ctrl+Alt+F7). Anyway, these six virtual terminals are the devices /dev/tty1 through /dev/tty6.

History lesson

The name 'tty' originally stood for 'teletype'. A teletype was a mechanical typewriter-style printing device with a keyboard. One model in particular, the ASR33, was extremely popular on mini-computers in the 1970s, when even Colonel Unix was still in adolescence and Colonel Linux was just a twinkle in his father's eye. Teletypes are long gone, but the name has stuck. Nowadays, a tty is a character-based screen of some sort.
Moving on, straight ahead of the bus lives a very strange guy called /dev/null. He lives in that cave with the dark entrance. As we get closer you can see it has 'Abandon hope all ye who enter here' inscribed around the cave entrance. We've seen folk go in there, but no one ever comes out. Some people call it a black hole and use it to throw away unwanted output from programs. Er, we seem to be getting a little close to the entrance, Hal. Don't go in... you'll never... don't go in, Hal. Turn around...!
Phew, that was close.

The gritty side of town

Well, after all that imaginary stuff I'm sure you'd like to see some real files again, and there are plenty of them over here in what I think of as our industrial estate, made up of the two directories /bin and /sbin. For bin, think "binary" - most of what you'll see in here are executable programs.
Why are there two? Well, the idea is that stuff that ordinary users might want to use, such as Vi and tar and rm and date, lives in /bin, whereas things that only the super-user is likely to want to use are in /sbin. You can think of the 's' in /sbin as standing for 'system' or perhaps 'super-user'.
For example, ifconfig (which sets network card parameters) and iptables (which establishes firewall rules) are in /sbin, because only root is allowed to do those things. On most Linux distributions, /bin is included on the search path for a normal user account but /sbin is not. Of course, ordinary users can easily access these commands using full path names such as /sbin/iptables, but it won't do them a lot of good because most of these commands won't let you do anything unless you're root.
There's an important corner of our industrial estate called /lib. Actually it's rather a large corner. 'Lib' stands for 'library', and the files in here are shared libraries required by the system programs in /bin and /sbin. (If you're from a Windows world you will know them as DLLs.) One critically important library that's used by practically everything is the standard C library, libc. If you chose to delete this file, almost everything would instantly stop working.
While we're still looking at our industrial estate, we're going to end our tour by taking a quick look inside a very important directory called /usr. I should warn you that /usr is usually on a different partition, so you may feel a bit of a bump as we cross the mount point. Go slow please, Hal!
As we look around in /usr, we see directories that seem to repeat some of those in the root directory. In particular we see /usr/bin, /usr/sbin and /usr/lib. Indeed, these do contain the same sort of stuff that we saw in /bin, /sbin and /lib earlier. That is, /usr/bin contains user-level commands, /usr/sbin contains system administration commands, and /usr/lib contains libraries.
So why is this stuff spread across two separate sets of directories? The answer is to do with partitioning. The stuff in /usr is often on a separate partition, which doesn't get attached into the filesystem until a relatively late stage in the boot process. Those really critical components used in the early stages of booting must lie within the root partition, not in /usr.
Splitting stuff up in this way also keeps to a minimum those parts of the filesystem that need to be intact to do a single-user boot. The stuff in /bin, /sbin, and /lib is needed, but the stuff in /usr isn't. It's also possible to mount /usr into the filesystem read-only, for improved security, and on a network it may be possible to share /usr out from a single file server, at least among machines sharing a common hardware architecture.
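On a system partitioned this way, the read-only mount is a one-line change; a hypothetical /etc/fstab entry (the device name here is an assumption for illustration) might look like:

```
# mount /usr read-only (device name is hypothetical)
/dev/sda5  /usr  ext3  ro,defaults  0  2
```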
Actually, it turns out that the great majority of executables and libraries live in /usr, and relatively few in the root partition. A check on the disk space usage of the system I'm currently running looks like this:
$ du -sh /bin /sbin /lib /usr/bin /usr/sbin /usr/lib
4.9M    /bin
6.3M    /sbin
109M    /lib
92M     /usr/bin
5.7M    /usr/sbin
625M    /usr/lib
The figures you'll see on your own system will be different of course, but the general message will be the same: most stuff is under /usr, and the root partition can be relatively small.
There are a few directories like /var and /tmp and /opt that we haven't visited, but I know I have to get you back in time for you to catch a few big downloads at the FTP mirrors, so we'll return to the root directory where we began and close our tour. Take care getting off the bus. We hope you'll spend a few minutes in our souvenir shop, where you can buy public key rings with a plastic Tux on the fob and postcards that say 'I did an ls -R of /proc and survived!'. So long!"

Filesystem flora and fauna

There are seven kinds of 'creature' living in the filesystem. When you do a long directory listing (ls -l), the very first character of each line tells you the type of creature you're looking at. These are shown in the first column of the table. We also did a population count of each type on an Ubuntu system; these figures are shown in the second column. Of course, as they used to say in the car adverts, "your mileage may vary".
Type Population Description
- 102,314 An ordinary file. This is by far the commonest type.
d 14,701 Directory. A directory is a container for other entries. Some people call them folders.
l 15,258 A symbolic link. These are tiny files that contain the name of some other file, similar to a shortcut in Windows. So if I have, for example, a symbolic link from /etc/motd to /var/run/motd, and a program opens /etc/motd, the kernel says, "Aha, that's a symbolic link. He doesn't really mean /etc/motd, he means /var/run/motd", and it opens that instead. Symbolic links are sometimes called symlinks or soft links.
c 785 A so-called character device (also sometimes called a raw device or a character special file). These entries serve to give names to devices. Some, like /dev/console, correspond to actual physical devices. Others correspond to pseudo-devices; for example, /dev/random provides access to the kernel's random number generator.
b 65 A block device. Block devices are most commonly disks - /dev/hda2, say, is partition 2 on hard drive a. Generally speaking, a character device supports reading and writing of a sequential byte stream, and a block device supports random access. The real distinction, however, is that the kernel provides a layer of buffering for block devices so that they are read and written a complete block at a time. It does not do this for character devices. Practically all device files live in /dev.
s 34 Unix-domain sockets. These are named 'communication endpoints' in the filesystem. They are used in a slightly similar way to TCP and UDP sockets, except that they only support inter-process communication between processes running on the same machine.
p 7 Named pipes. These are so rare they should be on the endangered species list! Like Unix-domain sockets, they are named endpoints used for inter-process communication.
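Two of these creatures are easy to conjure up yourself; a quick sketch with a symbolic link and a named pipe:

```shell
# Two filesystem creatures you can create yourself.
mkdir -p /tmp/creatures && cd /tmp/creatures

# 1. A symbolic link: a tiny file holding another file's name.
echo "hello from the target" > target.txt
ln -sf target.txt link.txt
readlink link.txt   # the stored name: target.txt
cat link.txt        # the kernel follows the link: hello from the target

# 2. A named pipe: a rendezvous point for two processes.
# Opening a FIFO blocks until both ends attach, so background the writer.
mkfifo demo.pipe
echo "message through the pipe" > demo.pipe &
cat demo.pipe       # prints: message through the pipe
rm demo.pipe
```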

Linux tips

#1: Check processes not run by you

  • Difficulty: Expert
  • Application: bash
Imagine the scene - you get yourself ready for a quick round of Crack Attack against a colleague at the office, only to find the game drags to a halt just as you're about to beat your uppity subordinate - what could be happening to make your machine so slow? It must be some of those other users, stealing your precious CPU time with their scientific experiments, webservers or other weird, geeky things!
OK, let's list all the processes on the box not being run by you!
ps aux | grep -v `whoami`
Or, to be a little more clever, why not just list the top ten time-wasters:
ps aux  --sort=-%cpu | grep -m 11 -v `whoami` 
It is probably best to run this as root, as this will filter out most of the vital background processes. Now that you have the information, you could just kill their processes, but much more dastardly is to run xeyes on their desktop. Repeatedly!

#2: Replacing same text in multiple files

  • Difficulty: Intermediate
  • Application: find/Perl
If you have text you want to replace in multiple locations, there are several ways to do this. To replace the text Windows with Linux in all files in current directory called test[something] you can run this:
perl -i -pe 's/Windows/Linux/;' test*
To replace the text Windows with Linux in all text files in current directory and down you can run this:
find . -name '*.txt' -print | xargs perl -pi -e 's/Windows/Linux/ig'
Or if you prefer this will also work, but only on regular files:
find -type f -name '*.txt' -print0 | xargs --null perl -pi -e 's/Windows/Linux/'
Saves a lot of time and has a high guru rating!

#3: Fix a wonky terminal

  • Difficulty: Easy
  • Application: bash
We've all done it - accidentally used less or cat to list a file, and ended up viewing binary instead. This usually involves all sorts of control codes that can easily screw up your terminal display. There will be beeping. There will be funny characters. There will be odd colour combinations. At the end of it, your font will be replaced with hieroglyphics and you don't know what to do. Well, bash is obviously still working, but you just can't read what's actually going on! Send the terminal an initialisation command:
and all will be well again.

#4: Creating Mozilla keywords

  • Difficulty: Easy
  • Application: Firefox/Mozilla
A useful feature in Konqueror is the ability to type gg onion to do a Google search based on the word onion. The same kind of functionality can be achieved in Mozilla by first clicking on Bookmarks > Manage Bookmarks and then Add a New Bookmark. Add the URL as:
Select the entry in the bookmark editor and click the Properties button. Enter the keyword as gg (or anything else you choose) and the process is complete. The %s in the URL will be replaced with the text after the keyword. You can apply this hack to other kinds of sites that rely on you passing information on the URL.
Alternatively, right-click on a search field and select the menu option "Add a Keyword for this Search...". The subsequent dialog will allow you to specify the keyword to use.

#5: Running multiple X sessions

  • Difficulty: Easy
  • Application: X
If you share your Linux box with someone and you are sick of continually logging in and out, you may be relieved to know that this is not really needed. Assuming that your computer starts in graphical mode (runlevel 5), press Ctrl+Alt+F1 and you will get a login prompt. Enter your username and password and then execute:
startx -- :1
to get into your graphical environment. To go back to the previous user session, press Ctrl+Alt+F7, while to get yours back press Ctrl+Alt+F8.
You can repeat this trick: the keys F1 to F6 identify six console sessions, while F7 to F12 identify six X sessions. Caveat: although this is true in most cases, different distributions can implement this feature in a different way.

#6: Faster browsing

  • Difficulty: Easy
  • Application: KDE
In KDE, a little-known but useful option exists to speed up your web browsing experience. Start the KDE Control Center and choose System > KDE performance from the sidebar. You can now select to preload Konqueror instances. Effectively, this means that Konqueror is run on startup, but kept hidden until you try to use it. When you do, it pops up almost instantaneously. Bonus! And if you're looking for more KDE tips, make sure you check out our article, 20 all-new KDE 4.2 tips.

#7: Backup your website easily

  • Difficulty: Easy
  • Application: Backups
If you want to back up a directory on a computer and only copy changed files to the backup computer instead of everything with each backup, you can use the rsync tool to do this. You will need an account on the remote computer that you are backing up from. Here is the command:
rsync -vare ssh jono@remotehost:/home/jono/importantfiles/* /home/jono/backup/
Here we are backing up all of the files in /home/jono/importantfiles/ on remotehost (a placeholder - substitute the name of the machine you are backing up from) to /home/jono/backup on the current machine.

#8: Keeping your clock in time

  • Difficulty: Easy
  • Application: NTP
If you find that the clock on your computer seems to wander off the time, you can make use of a special NTP tool to ensure that you are always synchronised with the kind of accuracy that only people that wear white coats get excited about. You will need to install the ntpdate tool that is often included in the NTP package, and then you can synchronise with an NTP server, for example one from the public NTP pool:
A list of suitable NTP servers is available from the NTP Pool Project. If you modify your boot process and scripts to include this command you can ensure that you are perfectly in time whenever you boot your computer. You could also run a cron job to update the time.

#9: Finding the biggest files

  • Difficulty: Easy
  • Application: Shell
A common problem with computers is when you have a number of large files (such as audio/video clips) that you may want to get rid of. You can find the biggest files in the current directory with:
ls -lSrh
The "r" causes the large files to be listed at the end and the "h" gives human readable output (MB and such). You could also search for the biggest MP3/MPEGs:
ls -lSrh *.mp*
You can also look for the largest directories with:
du -kx | egrep -v "\./.+/" | sort -n

#10: Nautilus shortcuts

  • Difficulty: Easy
  • Application: Nautilus
Although most file managers these days are designed to be used with the mouse, it's also useful to be able to use the keyboard sometimes. Nautilus has a few keyboard shortcuts that can have you flying through files:
  • Open a location - Ctrl+L
  • Open Parent folder - Ctrl+Up
  • Arrow keys navigate around current folder.
You can also customise the file icons with 'emblems'. These are little graphical overlays that can be applied to individual files or groups. Open the Edit > Backgrounds and Emblems menu item, and drag-and-drop the images you want.

#11: Defrag your databases

  • Difficulty: Easy
  • Application: MySQL
Whenever you change the structure of a MySQL database, or remove a lot of data from it, the files can become fragmented resulting in a loss of performance, particularly when running queries. Just remember any time you change the database to run the optimiser:
mysqlcheck -o <databasename>
You may also find it worth your while to defragment your database tables regularly if you are using VARCHAR fields: these variable-length columns are particularly prone to fragmentation.

#12: Quicker emails

  • Difficulty: Easy
  • Application: KMail
Can't afford to waste three seconds locating your email client? Can't be bothered finding the mouse under all those gently rotting mountains of clutter on your desk? Whatever you are doing in KDE, you are only a few keypresses away from sending a mail. Press Alt+F2 to bring up the 'Run command' dialog. Type a mailto: address (this one is only an example):
Press return and KMail will automatically fire up, ready for your words of wisdom. You don't even need to fill in the entire email address. This also works for Internet addresses: try typing a URL such as to launch Konqueror.

#13: Parallelise your build

  • Difficulty: Easy
  • Application: GCC
If you're running a multiprocessor system (SMP) with a moderate amount of RAM, you can usually see significant benefits by performing a parallel make when building code. Compared to doing serial builds when running make (as is the default), a parallel build is a vast improvement. To tell make to allow more than one child at a time while building, use the -j switch:
make -j4; make -j4 modules
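Rather than hard-coding the job count, a reasonable rule of thumb is one job per processor core; a sketch that queries the core count with nproc (shipped with recent GNU coreutils):

```shell
# Scale the job count to the number of CPU cores.
jobs=$(nproc)
echo "building with $jobs parallel jobs"
# make -j"$jobs" && make -j"$jobs" modules
```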

#14: Save battery power

  • Difficulty: Intermediate
  • Application: hdparm
You are probably familiar with using hdparm for tuning a hard drive, but it can also save battery life on your laptop, or make life quieter for you by spinning down drives.
hdparm -y /dev/hdb
hdparm -Y /dev/hdb
hdparm -S 36 /dev/hdb
In order, these commands will: cause the drive to switch to Standby mode, switch to Sleep mode, and finally set the Automatic spindown timeout. This last includes a numeric variable, whose units are blocks of 5 seconds (for example, a value of 12 would equal one minute).
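Since the unit is blocks of 5 seconds, converting a desired timeout is simple arithmetic; a sketch (valid for -S values from 1 to 240, which encode multiples of 5 seconds up to 20 minutes):

```shell
# Convert a wanted spindown delay in seconds into an hdparm -S value.
seconds=60
value=$((seconds / 5))
echo "hdparm -S $value /dev/hdb   # spin down after $seconds seconds"
```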
Incidentally, this habit of specifying spindown time in blocks of 5 seconds should really be a contender for a special user-friendliness award - there's probably some historical reason for it, but we're stumped. Write in and tell us if you happen to know where it came from!

#15: Wireless speed management

  • Difficulty: Intermediate
  • Application: iwconfig
The speed at which a piece of radio transmission/receiver equipment can communicate with another depends on how much signal is available. In order to maintain communications as the available signal fades, the radios need to transmit data at a slower rate. Normally, the radios attempt to work out the available signal on their own and automatically select the fastest possible speed.
In fringe areas with a barely adequate signal, packets may be needlessly lost while the radios continually renegotiate the link speed. If you can't add more antenna gain, or reposition your equipment to achieve a good enough signal, consider forcing your card to sync at a lower rate. This will mean fewer retries, and can be substantially faster than using a continually flip-flopping link. Each driver has its own method for setting the link speed. In Linux, set the link speed with iwconfig:
iwconfig eth0 rate 2M
This forces the radio to always sync at 2Mbps, even if other speeds are available. You can also set a particular speed as a ceiling, and allow the card to automatically scale to any slower speed, but go no faster. For example, you might use this on the example link above:
iwconfig eth0 rate 5.5M auto
Using the auto directive this way tells the driver to allow speeds up to 5.5Mbps, and to run slower if necessary, but will never try to sync at anything faster. To restore the card to full auto scaling, just specify auto by itself:
iwconfig eth0 rate auto
Cards can generally reach much further at 1Mbps than they can at 11Mbps. There is a difference of 12dB between the 1Mbps and 11Mbps ratings of the Orinoco card - that's four times the potential distance just by dropping the data rate!

#16: Unclog open ports

  • Difficulty: Intermediate
  • Application: netstat
Generating a list of network ports that are in the Listen state on a Linux server is simple with netstat:
root@catlin:~# netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name 
tcp 0 0* LISTEN 698/perl 
tcp 0 0* LISTEN 217/httpd 
tcp 0 0* LISTEN 220/named 
tcp 0 0* LISTEN 220/named 
tcp 0 0* LISTEN 220/named 
tcp 0 0* LISTEN 200/sshd 
udp 0 0* 220/named 
udp 0 0* 220/named 
udp 0 0* 220/named 
udp 0 0* 220/named 
udp 0 0* 222/dhcpd 
raw 0 0* 7 222/dhcpd
That shows you that PID 698 is a Perl process that is bound to port 5280. If you're not root, the system won't disclose which programs are running on which ports.

#17: Faster Hard drives

  • Difficulty: Expert
  • Application: hdparm
You may know that the hdparm tool can be used to speed test your disk and change a few settings. It can also be used to optimise drive performance, and turn on some features that may not be enabled by default. Before we start though, be warned that changing drive options can cause data corruption, so back up all your important data first. Testing speed is done with:
hdparm -Tt /dev/hda
You'll see something like:
Timing buffer-cache reads:   128 MB in  1.64 seconds = 78.05 MB/sec
Timing buffered disk reads:   64 MB in 18.56 seconds = 3.45 MB/sec
Now we can try speeding it up. To find out which options your drive is currently set to use, just pass hdparm the device name:
hdparm /dev/hda
 multcount    =  16 (on)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  0 (off)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 40395/16/63, sectors = 40718160, start = 0
This is a fairly default setting. Most distros will opt for safe options that will work with most hardware. To get more speed, you may want to enable dma mode, and certainly adjust I/O support. Most modern computers support mode 3, which is a 32-bit transfer mode that can nearly double throughput. You might want to try
hdparm -c3 -d1 /dev/hda
Then rerun the speed check to see the difference. Check out the modes your hardware will support, and the hdparm man pages for how to set them.

#18: Uptime on your hands

  • Difficulty: Expert
  • Application: Perl
In computing, wasted resources are resources that could be better spent helping you. Why not run a process that updates the titlebar of your terminal with the current load average in real-time, regardless of what else you're running?
Save the following as a script called tl in your ~/bin directory:
#!/usr/bin/perl -w

use strict;

my $host=`/bin/hostname`;
chomp $host;

while(1) {

open(LOAD,"/proc/loadavg") || die "Couldn't open /proc/loadavg: $!\n";

my @load=split(/ /,<LOAD>);
close(LOAD);

# The \033]0;...\007 escape sequence sets the terminal titlebar
print "\033]0;$host: $load[0] $load[1] $load[2] at ", scalar(localtime), "\007";

sleep 2;
}
When you'd like to have your titlebar replaced with the name, load average, and current time of the machine you're logged into, just run tl&. It will happily go on running in the background, even if you're running an interactive program like Vim.

#19: Grabbing a screenshot without X

  • Difficulty: Easy
  • Application: Shell
There are plenty of screen-capture tools, but a lot of them are based on X. This leads to a problem when running an X application would interfere with the application you wanted to grab - perhaps a game or even a Linux installer. If you use the venerable ImageMagick import command though, you can grab from an X session via the console. Simply go to a virtual terminal (Ctrl+Alt+F1 for example) and enter the following:
chvt 7; sleep 2; import -display :0.0 -window root sshot1.png; chvt 1;
The chvt command changes the virtual terminal, and the sleep command gives it a while to redraw the screen. The import command then captures the whole display and saves it to a file before the final chvt command sticks you back in the virtual terminal again. Make sure you type the whole command on one line.
This can even work on Linux installers, many of which leave a console running in the background - just load up a floppy/CD with import and the few libraries it requires for a first-rate run-anywhere screen grabber.

#20: Access your programs remotely

  • Difficulty: Easy
  • Application: X
If you would like to lie in bed with your Linux laptop and access your applications from your Windows machine, you can do this with SSH. You first need to enable the following setting in /etc/ssh/sshd_config:
X11Forwarding yes
We can now run The GIMP on the remote machine (here called remotehost - substitute your own) with:
ssh -X remotehost gimp

#21: Making man pages useful

  • Difficulty: Easy
  • Application: man
If you are looking for some help on a particular subject or command, man pages are a good place to start. You normally access a man page with man <command>, but you can also search the man page descriptions for a particular keyword. As an example, search for man pages that discuss logins:
man -k login
When you access a man page, you can also use the forward slash key to search for a particular word within the man page itself. Simply press / on your keyboard and then type in the search term.

#22: Talk to your doctor!

  • Difficulty: Easy
  • Application: Emacs
To say that Emacs is just a text editor is like saying that a Triumph is just a motorcycle, or the World Cup is just some four-yearly football event. True, but simplified juuuust a little bit. An example? Open the editor, press the Esc key followed by X and then type doctor: you will be engaged in a surreal conversation by an imaginary and underskilled psychotherapist. And if you want to waste your time in a better way,
Esc-X tetris
will transform your 'editor' into the old favourite arcade game.
Does the madness stop there? No! Check out your distro's package list to see what else they've bundled for Emacs: we've got chess, Perl integration, IRC chat, French translation, HTML conversion, a Java development environment, smart compilation, and even something called a "semantic bovinator". We really haven't the first clue what that last one does, but we dare you to try it out anyway! (Please read the disclaimer first!)

#23: Generating package relationship diagrams

  • Difficulty: Easy
  • Application: Debian
The most critical part of the Debian system is the ability to install a package and have the dependencies satisfied automatically. If you would like a graphical representation of the relationships between these packages (this can be useful for seeing how the system fits together), you can use the Graphviz package from Debian non-free (apt-get install graphviz) and the following command:
apt-cache dotty >
This generates a graph file (the name is an arbitrary choice) which can then be loaded into the dotty viewer from the Graphviz package:

#24: Unmount busy drives

  • Difficulty: Easy
  • Application: bash
You are probably all too familiar with the situation - you are trying to unmount a drive, but keep getting told by your system that it's busy. But what application is tying it up? A quick one-liner will tell you:
lsof +D /mnt/windows
This will return the command and process ID of any tasks currently accessing the /mnt/windows directory. You can then locate them, or use the kill command to finish them off.

#25: Text file conversion

  • Difficulty: Easy
  • Application: recode
recode is a small utility that will save you loads of effort when using text files created on different platforms. The primary source of discontent is line breaks. In some systems, these are denoted with a line-feed character. In others, a carriage return is used. In still more systems, both are used. The end result is that if you are swapping text from one platform to another, you end up with too many or too few line breaks, and lots of strange characters besides.
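You can make the invisible difference visible by dumping the bytes; a quick sketch using od:

```shell
# A Unix line ends in a line feed (\n); a DOS line adds a
# carriage return (\r) before it. od -c shows both.
printf 'unix line\n'  | od -c | head -1
printf 'dos line\r\n' | od -c | head -1
```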
However, the command parameters of recode are a little arcane, so why not combine this hack with HACK 27 in this feature, and set up some useful aliases:
alias dos2unix='recode dos/CR-LF..l1'
alias unix2win='recode'
alias unix2dos='recode l1..dos/CR-LF'
There are plenty more options for recode - it can actually convert between a whole range of character sets. Check out the man pages for more information.

#26: Listing today's files only

  • Difficulty: Easy
  • Application: Various
You are probably familiar with the problem. Sometime earlier in the day, you created a text file, which now is urgently required. However, you can't remember what ridiculous name you gave it, and being a typical geek, your home folder is full of 836 different files. How can you find it? Well, there are various ways, but this little tip shows you the power of pipes and joining together two powerful shell commands:
ls -al --time-style=+%D | grep `date +%D`
The parameters to the ls command here cause the datestamp to be output in a particular format. The cunning bit is that the output is then passed to grep. The grep parameter is itself a command (executed because of the backticks), which substitutes the current date into the string to be matched. You could easily modify it to search specifically for other dates, times, filesizes or whatever. Combine it with HACK 27 to save typing!

#27: Avoid common mistypes and long commands

  • Difficulty: Easy
  • Application: Shell
The alias command is useful for setting up shortcuts for long commands, or even more clever things. From HACK 26, we could make a new command, lsnew, by doing this:
alias lsnew=" ls -al --time-style=+%D | grep `date +%D` "
But there are other uses of alias. For example, common mistyping mistakes. How many times have you accidentally left out the space when changing to the parent directory? Worry no more!
alias cd..="cd .."
Alternatively, how about rewriting some existing commands?
alias ls="ls -al"
saves a few keypresses if, like us, you always want the complete list.
To have these shortcuts enabled for every session, just add the alias commands to your user .bashrc file in your home directory.

#28: Alter Mozilla's secret settings

  • Difficulty: Easy
  • Application: Mozilla
If you find that you would like to change how Mozilla works but the preferences offer nothing by way of clickable options that can help you, there is a special mode that you can enable in Mozilla so that you can change anything. To access it, type this into the address bar:
You can then change each setting that you are interested in by changing the Value field in the table.
Other interesting modes include general information (about:), details about plugins (about:plugins), credits information (about:credits) and some general wisdom (about:mozilla).

#29: A backdrop of stars

  • Difficulty: Easy
  • Application: KStars
You may already have played with KStars, but how about creating a KStars backdrop image that's updated every time you start up?
KStars can be run with the --dump switch, which dumps out an image from your startup settings, but doesn't load the GUI at all. You can create a script to run this and generate a desktop image, which will change every day (or you can just use this method to generate images).
Run KStars like this:
kstars --dump --width 1024 --height 768 --filename ~/kstarsback.png
You can add this to a script in your ~/.kde/Autostart folder to be run at startup. Find the file in Konqueror, drag it to the desktop and select 'Set as wallpaper' to use it as a randomly generated backdrop.

#30: Open an SVG directly

  • Difficulty: Easy
  • Application: Inkscape
You can run Inkscape from a shell and immediately edit a graphic directly from a URL: just type inkscape followed by the address of the SVG file.
Remember to save it as something else though!

#31: Editing without an editor

  • Difficulty: Intermediate
  • Application: Various
Very long files are often hard to manipulate with a text editor. If you need to do it regularly, chances are you'll find it much faster to use some handy command-line tools instead, like in the following examples.
To print selected columns (e.g. 1 and 3) from a file file1 into file2, we can use awk:
awk '{print $1, $3}' file1 > file2
To output only characters from column 8 to column 15 of file1, we can use cut:
cut -c 8-15 file1 > file2

To replace the word word1 with the word word2 in the file file1, we can use the sed command:
sed "s/word1/word2/g" file1 > file2
This is often a quicker way to get results than even opening a text editor.

#32: Backup selected files only

  • Difficulty: Intermediate
  • Application: tar
Want to use tar to backup only certain files in a directory? Then you'll want to use the -T flag as follows. First, create a file listing the files you want to back up (type the names, then press Ctrl+D to finish):
cat >> /etc/backup.conf
Then run tar with the -T flag pointing to the file just created:
tar -cjf bck-etc-`date +%Y-%m-%d`.tar.bz2 -T /etc/backup.conf
Now you have your backup.

#33: Merging columns in files

  • Difficulty: Intermediate
  • Application: bash
While splitting columns in files is easy enough, merging them can be complicated. Below is a simple shell script that does the job:
length=`wc -l $1 | awk '{print $1}'`
count=1
[ -f $3 ] && echo "Optionally removing $3" && rm -i $3
while [ "$count" -le "$length" ] ; do
      a=`head -$count $1 | tail -1`
      b=`head -$count $2 | tail -1`
      echo "$a      $b" >> $3
      count=`expr $count + 1`
Give this script a name (we'll call it here; any name will do) and make it executable with:
chmod u+x
Now, if you want to merge the columns of file1 and file2 into file3, it's just a matter of executing
/path/to/ file1 file2 file3
where /path/to has to be replaced with the location of in your filesystem.
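For simple line-by-line merging, the standard paste command does the same job in one line, and is worth knowing before reaching for a script:

```shell
# paste joins corresponding lines of its inputs, tab-separated by default.
printf 'a\nb\n' > /tmp/file1
printf '1\n2\n' > /tmp/file2
paste /tmp/file1 /tmp/file2 > /tmp/file3
cat /tmp/file3
```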

#34: Case sensitivity

  • Difficulty: Intermediate
  • Application: bash
While the case of a filename makes no difference to some other operating systems, in Linux "Command" and "command" are different things. This can cause trouble when moving files from Windows to Linux. tr is a little shell utility that can be used to change the case of a bunch of files.
for i in `ls -1`; do
        file1=`echo $i | tr 'A-Z' 'a-z'`
        mv $i $file1 2>/dev/null
By executing it, FILE1 and fiLe2 will be renamed file1 and file2 respectively.

#35: Macros in Emacs

  • Difficulty: Intermediate
  • Application: Emacs
When editing files, you will often find that the tasks are tedious and repetitive, so to spare your time you should record a macro. In Emacs, you will have to go through the following steps:
  1. Press Ctrl+X ( to start recording.
  2. Insert all the keystrokes and commands that you want.
  3. Press Ctrl+X ) to stop when you're done.
Now, you can execute that with
Ctrl-u <number> Ctrl-x e

where <number> is the number of times you want to execute the macro. If you enter a value of 0, the macro will be executed until the end of the file is reached. Ctrl-x e is equivalent to Ctrl-u 1 Ctrl-x e.

#36: Simple spam killing

  • Difficulty: Intermediate
  • Application: KMail
Spam, or unsolicited bulk email, is such a widespread problem that almost everyone has some sort of spam protection now, out of necessity. Most ISPs include spam filtering, but it isn't set to be too aggressive, and most often simply labels the spam, but lets it through (ISPs don't want to be blamed for losing your mails).
The result is that, while you may have anti-spam stuff set up on the client-side, you can make its job easier by writing a few filters to remove the spam that's already labelled as such. The label is included as a header. In KMail, you can just create a quick filter to bin your mail, or direct it to a junk folder. The exact header used will depend on the software your ISP is using, but it's usually something like X-Spam-Flag = YES for systems like SpamAssassin.
Simply create a filter in KMail, choose Match Any of the Following and type in the header details and the action you require. Apply the filter to incoming mail, and you need never be troubled by about half the volume of your spam ever again.

#37: Read OOo docs without OOo

  • Difficulty: Intermediate
  • Application:
Have you ever been left with an OOo document, but no installation in which to read it? Thought you saved it out as plain text (.txt), but used the StarOffice .sxw format instead? The text can be rescued. Firstly, the sxw file is a zip archive, so unzip it:
unzip myfile.sxw
The file you want is called 'content.xml'. Unfortunately, it's so full of XML tags that it's fairly illegible, so filter them out with some Perl magic:
cat content.xml | perl -p -e  "s/<[^>]*>/ /g;s/\n/ /g;s/ +/ /;"
It may have lost lots of formatting, but at least it is now readable.
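If you want to see what the filter does before letting it loose on a real document, try it on a scrap of sample XML first (the sample string and output file name below are illustrative):

```shell
# Strip the tags from a sample string, just as the filter above
# does for content.xml (sample text is made up for illustration)
sample='<text:p>Hello <text:span>world</text:span></text:p>'
echo "$sample" | perl -p -e 's/<[^>]*>/ /g; s/ +/ /g;' > stripped.txt
cat stripped.txt   # the tags are gone, the text remains
```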

#38: Find and execute

  • Difficulty: Intermediate
  • Application: find
The find command is not only useful for finding files, but is also useful for processing the ones it finds too. Here is a quick example.
Suppose we have a lot of tarballs, and we want to find them all:
find . -name '*.gz'
will locate all the gzip archives in the current path. But suppose we want to check they are valid archives? The gunzip -vt option will do this for us, but we can cunningly combine both operations, using xargs:
find . -name '*.gz' | xargs gunzip -vt
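A quick way to convince yourself this works is to build a couple of throwaway archives and run the check against them (directory and file names below are illustrative):

```shell
# Create a couple of gzip archives to test against
mkdir -p demo_gz/sub
echo "first"  > demo_gz/a.txt
echo "second" > demo_gz/sub/b.txt
gzip demo_gz/a.txt demo_gz/sub/b.txt

# locate every .gz below demo_gz and verify each archive's integrity
find demo_gz -name '*.gz' | xargs gunzip -vt
```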

#39: Use the correct whois server

  • Difficulty: Intermediate
  • Application: whois
The whois command is very useful for tracking down Internet miscreants and the ISPs that are supplying them with service. Unfortunately, there are many whois servers, and if you are querying against a domain name, you often have to use one which is specific to the TLD it is under. However, there are some whois proxies that will automatically forward your query to the correct server; point whois at one with the -h option:
whois -h <proxy-server>

#40: Where did that drive mount?

  • Difficulty: Intermediate
  • Application: bash
A common problem for people who have lots of mountable devices (USB drives, flash memory cards, USB key drives) is working out where the drive you just plugged in has ended up.
Practically all devices that invoke a driver - such as usb-storage - will dump some useful information in the logs. Try
dmesg | grep SCSI
This will filter out recognised drive specs from the dmesg output. You'll probably turn up some text like:
SCSI device sda: 125952 512-byte hdwr sectors (64 MB)
So your device is at sda.

#41: Autorun USB devices

  • Difficulty: Expert
  • Application: hotplug scripts
Want to run a specific application whenever a particular device is added? The USB hotplug daemon can help you! This service is notified when USB devices are added to the system. For devices that require kernel drivers, the hotplug daemon will call a script by the same name in /etc/hotplug/usb/, for example, a script called usb-storage exists there. You can simply add your own commands to the end of this script (or better still, tag a line at the end of it to execute a script elsewhere). Then you can play a sound, autosync files, search for pictures or whatever.
For devices that don't rely on kernel drivers, a lookup table is used, matching on the USB product and manufacturer ID. Many distros already set this up to do something, and you can customise these scripts pretty easily.

#42: Rename and resize images

  • Difficulty: Expert
  • Application: bash
Fond of your new camera but can't put up with the terrible names? Do you want also to prepare them for publishing on the web? No problem, a simple bash script is what you need:
resolution=800x600; root=mypict; counter=1
for i in "$1"/*.jpg; do
        echo "Now working on $i"
        convert -resize $resolution "$i" ${root}_${counter}.jpg
        counter=`expr $counter + 1`
done
Save the script in a file, make it executable with
chmod u+x <scriptname>

and store it somewhere in your path. Now, if you have a bunch of .jpg files in the directory /path/to/pictdir, all you have to do is run the script with /path/to/pictdir as its argument, and in the current directory you'll find mypict_1.jpg, mypict_2.jpg and so on, which are the resized versions of your originals. You can change the script according to your needs, or, if you're just looking for super-simple image resizing, try the mogrify command with its -geometry parameter.
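If ImageMagick isn't installed, you can still try out the renaming half of the script; this sketch swaps cp in for convert, with all names purely illustrative:

```shell
# Rename-only version of the loop above: cp stands in for convert,
# so no ImageMagick is needed (all names are illustrative)
mkdir -p demo_pics out_pics
touch demo_pics/dsc0001.jpg demo_pics/dsc0002.jpg
counter=1
root=out_pics/mypict
for i in demo_pics/*.jpg; do
    cp "$i" "${root}_${counter}.jpg"
    counter=`expr $counter + 1`
done
ls out_pics
```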

#43: Secure logout

  • Difficulty: Easy
  • Application: bash
When you are using a console on a shared machine, or indeed just on your own desktop, you may find that when you log out, the screen still shows a trace of who was logged in and what you were doing. A lot of distros will clear the screen for you, but some don't. You can solve this by editing your ~/.bash_logout file and adding the command:
You can add any other useful commands here too.

#44: Transferring files without ftp or scp

  • Difficulty: Easy
  • Application: netcat
Need to transfer a directory to another server but do not have FTP or SCP access? Well this little trick will help out using the netcat utility. On the destination server run:
nc -l -p 1234 | uncompress -c | tar xvfp -
And on the sending server run:
tar cfp - /some/dir | compress -c | nc -w 3 [destination] 1234
Now you can transfer directories without FTP and without needing root access.
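You can rehearse the pipeline locally before involving two machines; here an ordinary shell pipe stands in for netcat, and gzip is substituted for the older compress utility (directory names are illustrative):

```shell
# Local dry run of the tar transfer: a shell pipe replaces nc, and
# gzip replaces compress (directory names are illustrative)
mkdir -p srcdir destdir
echo "payload" > srcdir/file.txt
# sender side | receiver side
tar cf - srcdir | gzip -c | ( cd destdir && gzip -dc | tar xf - )
ls destdir/srcdir
```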

#45: Backing up a Debian package list

  • Difficulty: Easy
  • Application: Debian
If you are running Debian and have lost track of which packages you are running, it could be useful to get a backup of your currently installed packages. You can get a list by running:
dpkg --get-selections > debianlist.txt
This will put the entire list in debianlist.txt. You could then install the same packages on a different computer with:
dpkg --set-selections < debianlist.txt
You should bear in mind that you would also need to copy over configuration files from /etc when copying your system to a new computer.
To actually install the selections, use:
apt-get -u dselect-upgrade

#46: Hardening ssh

  • Difficulty: Easy
  • Application: ssh
Although SSH is a pretty secure way to connect to your server, there are two simple changes you can make that will boost its security even further. First, you almost certainly don't want people logging in directly as root: instead, they should log in as a normal user, then use the su command to switch over. You can change this simply in the /etc/ssh/sshd_config file by adding the line:
PermitRootLogin no

Now the only way to get root privileges is through su, which means crackers need to break two passwords to get full access. While you are editing that file, find the line which says:
Protocol 2,1
And change it to:
Protocol 2
This removes the option of falling back to the original SSH protocol, which is now considered very vulnerable. Restart the SSH daemon for the changes to take effect.

#47: Stop replying to pings

  • Difficulty: Easy
  • Application: sysctl
While ping is a very useful command for discovering network topology, the disadvantage is that it does just that, and makes it easier for hackers on the network to target live servers. But you can tell Linux to ignore all pings - the server simply won't respond. There are a number of ways to achieve this, but the best is to use sysctl. To turn off ping replies:
sysctl -w net.ipv4.icmp_echo_ignore_all=1
To turn replies back on, use:
sysctl -w net.ipv4.icmp_echo_ignore_all=0
If turning off ping is too severe for you, take a look at the next hack.

#48: Slow down ping rates

  • Difficulty: Easy
  • Application: sysctl
You may want to keep the ability to reply to pings, but protect yourself from a form of attack known as a 'ping flood'. So how can you manage such a feat? The easiest way is to slow down the rate at which the server replies to pings. They are still valid, but won't overload the server:
sysctl -w net.ipv4.icmp_echoreply_rate=10
This slows the rate at which replies are sent to a single address.

#49: Clean up KDE on logout

  • Difficulty: Easy
  • Application: bash
On Windows there are plenty of programs that do stuff like clean out your web cache, remove temporary files and all sorts of other stuff when you logout. Wouldn't it be cool to do this on Linux too? With KDE, you don't need to even install any new software, as the startkde script will automatically run scripts you put in special places.
First, you need to create a directory called shutdown in your .kde directory:
mkdir /home/username/.kde/shutdown
Now create a script to do any stuff you like on shutdown. Here is an example:
#!/bin/sh
# clear up temp folder
rm -rf ~/tmp/*
#clear out caches
rm -rf ~/.ee/minis/*
rm -rf ~/.kde/share/cache/http/*
# delete konqueror form completions
rm ~/.kde/share/apps/khtml/formcompletions
Now make sure you set the correct permissions:
chmod ug+x ~/.kde/shutdown/<scriptname>

(or whatever you called it). As well as cleaning up sensitive files, you can also have global shutdown scripts for all users, by placing the script in your default KDE folder, in a subfolder called shutdown. To find out your default KDE directory, try:
kde-config --path exe

#50: Password-less ssh

  • Difficulty: Intermediate
  • Application: ssh
Tired of typing your password every time you log into the server? ssh also supports keys, so you'll only have to type in your password when you log in to the desktop. Generate a keypair on your desktop machine:
ssh-keygen -t dsa -C "<comment>"
Enter a passphrase for your key. This puts the secret key in ~/.ssh/id_dsa and the public key in ~/.ssh/id_dsa.pub. Now see whether you have an ssh-agent running at present; ssh-add -l will list the keys held by a running agent:
Most window managers will run it automatically if it's installed. If not, start one up:
eval $(ssh-agent)
Now, tell the agent about your key:
and enter your passphrase. You'll need to do this each time you log in; if you're using X, try adding
SSH_ASKPASS=ssh-askpass ssh-add
to your .xsession file. (You may need to install ssh-askpass.) Now, for each server you log into, create the directory ~/.ssh and copy the file ~/.ssh/id_dsa.pub into it as ~/.ssh/authorized_keys. If you started the ssh-agent by hand, kill it with
ssh-agent -k
when you log out.

#51: Using rsync over ssh

  • Difficulty: Intermediate
  • Application: Shell
Keep large directory structures in sync quickly with rsync. While tar over SSH is ideal for making remote copies of parts of a filesystem, rsync is even better suited for keeping the filesystem in sync between two machines. To run an rsync over SSH, pass it the -e switch, like this:
rsync -ave ssh greendome:/home/ftp/pub/ /home/ftp/pub/
Note the trailing / on the file spec from the source side (on greendome.) On the source spec, a trailing / tells rsync to copy the contents of the directory, but not the directory itself. To include the directory as the top level of what's being copied, leave off the /:
rsync -ave ssh bcnu:/home/six .
This will keep a copy of the ~/six directory on the local machine in sync with whatever is present in bcnu:/home/six/. By default, rsync will only copy files and directories, but not remove them from the destination copy when they are removed from the source. To keep the copies exact, include the --delete flag:
rsync -ave ssh  --delete greendome:~one/reports .
Now when old reports are removed from ~one/reports/ on greendome, they're also removed from the local ./reports/ copy every time this command is run. If you run a command like this from cron, leave off the v switch; this keeps the output quiet (unless rsync has a problem running, in which case you'll receive an email with the error output). Using SSH as your transport for rsync traffic has the advantage of encrypting the data over the network, and it also takes advantage of any trust relationships you have already established using SSH client keys.
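The trailing-slash and --delete behaviour is easy to test locally, assuming rsync is installed (no SSH is needed for local paths; directory names are illustrative):

```shell
# Local demonstration of trailing-slash and --delete semantics
# (directory names are illustrative; requires rsync, no ssh)
mkdir -p sync_src sync_dst
echo one > sync_src/one.txt
echo two > sync_src/two.txt

# trailing / on the source: copy the *contents* of sync_src into sync_dst
rsync -a sync_src/ sync_dst/

# remove a file at the source, then sync again with --delete
rm sync_src/two.txt
rsync -a --delete sync_src/ sync_dst/
ls sync_dst
```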

#52: Asset scanning

  • Difficulty: Intermediate
  • Application: nmap
Normally, when people think of using nmap, they assume it's used to conduct some sort of nefarious network reconnaissance in preparation for an attack. But as with all powerful tools, nmap can be made to wear a white hat, as it's useful for far more than breaking into networks. For example, simple TCP connect scans can be conducted without needing root privileges:
nmap rigel
nmap can also scan ranges of IP addresses, specified either as an explicit range or in CIDR notation, for example:
nmap 192.168.1.1-254
nmap 192.168.1.0/24
nmap can provide much more information if it is run as root. When run as root, it can use special packets to determine the operating system of the remote machine via the -O flag. Additionally, you can do half-open TCP scanning by using the -sS flag. When doing a half-open scan, nmap sends a SYN packet to the remote host and waits for the reply; if it receives a SYN/ACK, it knows the port is open, but it never completes the handshake.
This is different from a normal three-way TCP handshake, where the client sends a SYN, receives the server's SYN/ACK, and then completes the connection with a final ACK. Attackers typically use this option to avoid having their scans logged on the remote machine.
nmap -sS -O rigel
Starting nmap V. 3.00 ( )
Interesting ports on rigel.nnc (
(The 1578 ports scanned but not shown below are in state: filtered)
Port       State       Service
7/tcp      open     echo 
9/tcp      open     discard 
13/tcp     open     daytime 
19/tcp     open     chargen 
21/tcp     open     ftp 
22/tcp     open     ssh 
23/tcp     open     telnet 
25/tcp     open     smtp 
37/tcp     open     time 
79/tcp     open     finger 
111/tcp    open     sunrpc 
512/tcp    open     exec 
513/tcp    open     login 
514/tcp    open     shell 
587/tcp    open     submission 
7100/tcp   open     font-service 
32771/tcp  open     sometimes-rpc5 
32772/tcp  open     sometimes-rpc7 
32773/tcp  open     sometimes-rpc9 
32774/tcp  open     sometimes-rpc11 
32777/tcp  open     sometimes-rpc17 
Remote operating system guess: Solaris 9 Beta through Release on SPARC
Uptime 44.051 days (since Sat Nov  1 16:41:50 2003)
Nmap run completed -- 1 IP address (1 host up) scanned in 166 seconds
With OS detection enabled, nmap has confirmed that the OS is Solaris, but now you also know that it's probably Version 9 running on a SPARC processor.
One powerful feature that can be used to help keep track of your network is nmap's XML output capabilities. This is activated by using the -oX command-line switch, like this:
nmap -sS -O -oX scandata.xml rigel
This is especially useful when scanning a range of IP addresses or your whole network, because you can put all the information gathered from the scan into a single XML file that can be parsed and inserted into a database. Here's what an XML entry for an open port looks like:
<port protocol="tcp" portid="22">
<state state="open" />
<service name="ssh" method="table" conf="3" />
</port>
nmap is a powerful tool. By using its XML output capabilities, a little bit of scripting, and a database, you can create an even more powerful tool that can monitor your network for unauthorized services and machines.
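As a minimal sketch of that parsing step, here's one way to pull the port numbers out of an XML scan file with standard text tools (the file and its contents below are made up for illustration):

```shell
# Build a tiny illustrative scan file of the shape shown above
cat > scandata.xml <<'EOF'
<port protocol="tcp" portid="22">
<state state="open" />
<service name="ssh" method="table" conf="3" />
</port>
<port protocol="tcp" portid="80">
<state state="open" />
</port>
EOF

# list every scanned port number, one per line
grep -o 'portid="[0-9]*"' scandata.xml | sed 's/[^0-9]//g' > ports.txt
cat ports.txt
```

For anything beyond a quick grep, a real XML parser is the safer choice.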

#53: Backup your bootsector

  • Difficulty: Expert
  • Application: Shell
Messing with bootloaders, dual-booting and various other scary processes can leave you with a messed up bootsector. Why not create a backup of it while you can:
dd if=/dev/hda of=bootsector.img bs=512 count=1
Obviously you should change the device to reflect your boot drive (it may be sda for SCSI). Also, be very careful not to get things the wrong way around - you can easily damage your drive! To restore use:
dd if=bootsector.img of=/dev/hda 
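If you'd like to practise the bs/count arithmetic safely first, run the same command against an ordinary file instead of a real disk (file names are illustrative):

```shell
# Practise on a file, not a drive: same bs/count arithmetic, zero risk
dd if=/dev/zero of=fakedisk.img bs=512 count=4 2>/dev/null   # a 2048-byte "disk"
dd if=fakedisk.img of=bootsector.img bs=512 count=1 2>/dev/null
ls -l bootsector.img   # exactly 512 bytes: one sector
```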

#54: Protect log files

  • Difficulty: Expert
  • Application: Various
During an intrusion, an attacker will more than likely leave telltale signs of his actions in various system logs: a valuable audit trail that should be protected. Without reliable logs, it can be very difficult to figure out how the attacker got in, or where the attack came from. This info is crucial in analysing the incident and then responding to it by contacting the appropriate parties involved. But, if the break-in is successful, what's to stop him from removing the traces of his misbehaviour?
This is where file attributes come in to save the day (or at least make it a little better). Both Linux and the BSDs have the ability to assign extra attributes to files and directories. This is different from the standard Unix permissions scheme in that the attributes set on a file apply universally to all users of the system, and they affect file accesses at a much deeper level than file permissions or ACLs.
In Linux, you can see and modify the attributes that are set for a given file by using the lsattr and chattr commands, respectively. At the time of this writing, file attributes in Linux are available only when using the ext2 and ext3 filesystems. There are also kernel patches available for attribute support in XFS and ReiserFS. One useful attribute for protecting log files is append-only. When this attribute is set, the file cannot be deleted, and writes are only allowed to append to the end of the file.
To set the append-only flag under Linux, run this command:
chattr +a  filename
See how the +a attribute works: create a file and set its append-only attribute:
touch /var/log/logfile
echo "append-only not set" > /var/log/logfile
chattr +a /var/log/logfile
echo "append-only set" > /var/log/logfile
bash: /var/log/logfile: Operation not permitted

The second write attempt failed, since it would overwrite the file. However, appending to the end of the file is still permitted:
echo "appending to file" >> /var/log/logfile
cat /var/log/logfile
append-only not set
appending to file
Obviously, an intruder who has gained root privileges could realise that file attributes are being used and just remove the append-only flag from our logs by running chattr -a. To prevent this, we need to disable the ability to remove the append-only attribute. To accomplish this under Linux, use its capabilities mechanism.
The Linux capabilities model divides up the privileges given to the all-powerful root account and allows you to selectively disable them. In order to prevent a user from removing the append-only attribute from a file, we need to remove the CAP_LINUX_IMMUTABLE capability, which, when present in the running system, allows the append-only attribute to be modified. To modify the set of capabilities available to the system, we will use a simple utility called lcap.
To unpack and compile the tool, run this command:
tar xvfj lcap-0.0.3.tar.bz2 && cd lcap-0.0.3 && make

Then, to disallow modification of the append-only flag, run:
lcap CAP_LINUX_IMMUTABLE
lcap CAP_SYS_RAWIO
The first command removes the ability to change the append-only flag, and the second removes the ability to do raw I/O. This is needed so that the protected files cannot be modified by accessing the block device they reside on. It also prevents access to /dev/mem and /dev/kmem, which would provide a loophole for an intruder to reinstate the CAP_LINUX_IMMUTABLE capability. To remove these capabilities at boot, add the previous two commands to your system startup scripts (eg /etc/rc.local). You should ensure that capabilities are removed late in the boot order, to prevent problems with other startup scripts. Once lcap has removed kernel capabilities, they can be reinstated only by rebooting the system.
Before doing this, you should be aware that adding append-only flags to your log files will most likely cause log rotation scripts to fail. However, doing this will greatly enhance the security of your audit trail, which will prove invaluable in the event of an incident.

#55: Automatically encrypted connections

  • Difficulty: Expert
  • Application: FreeS/WAN
One particularly cool feature supported by FreeS/WAN is opportunistic encryption with other hosts running FreeS/WAN. This allows FreeS/WAN to transparently encrypt traffic between all hosts that also support opportunistic encryption. To do this, each host must have a public key generated to use with FreeS/WAN. This key can then be stored in a DNS TXT record for that host. When a host that is set up for opportunistic encryption wishes to initiate an encrypted connection with another host, it will look up the host's public key through DNS and use it to initiate the connection.
To begin, you'll need to generate a key for each host that you want to use this feature with. You can do that by running the following command:
ipsec newhostkey --output /tmp/`hostname`.key
Now you'll need to add the contents of the file that was created by that command to /etc/ipsec.secrets:
cat /tmp/`hostname`.key >> /etc/ipsec.secrets
Next, you'll need to generate a TXT record to put into your DNS zone. You can do this by running a command similar to this one:
ipsec showhostkey --txt @colossus.nnc
Now add this record to your zone and reload it. You can verify that DNS is working correctly by running this command:
ipsec verify
Checking your system to see if IPsec got installed and started correctly
Version check and ipsec on-path  [OK]
Checking for KLIPS support in kernel   [OK]
Checking for RSA private key (/etc/ipsec.secrets)   [OK]
Checking that pluto is running   [OK]
DNS checks. 
Looking for TXT in forward map: colossus   [OK]
Does the machine have at least one non-private address   [OK]
Now just restart FreeS/WAN - you should now be able to connect to any other host that supports opportunistic encryption. But what if other hosts want to connect to you? To allow this, you'll need to create a TXT record for your machine in your reverse DNS zone.
You can generate the record by running a command similar to this:
ipsec showhostkey --txt
Add this record to the reverse zone for your subnet, and other machines will be able to initiate opportunistic encryption with your machine. With opportunistic encryption in use, all traffic between the hosts will be automatically encrypted, protecting all services simultaneously.

#56: Eliminate suid binaries

  • Difficulty: Intermediate
  • Application: find
If your server has more shell users than yourself, you should regularly audit the setuid and setgid binaries on your system. Chances are you'll be surprised at just how many you'll find. Here's one command for finding all of the files with a setuid or setgid bit set:
find / -perm /6000 -type f -exec ls -ld {} \; > setuid.txt &
This will create a file called setuid.txt that contains the details of all of the matching files present on your system. To remove the s bits of any tools that you don't use, type:
chmod a-s program
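Here's a safe way to watch the find command work in a scratch directory before running it against the whole filesystem; note that modern GNU find spells the "any of these bits" test -perm /6000, while older versions used +6000 (all names below are illustrative):

```shell
# Scratch-directory demonstration of finding setuid files
mkdir -p suid_demo
touch suid_demo/plain suid_demo/marked
chmod u+s suid_demo/marked        # set the setuid bit on one file only
find suid_demo -perm /6000 -type f -exec ls -ld {} \; > setuid.txt
cat setuid.txt                    # only suid_demo/marked is listed
```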

#57: Mac filtering Host AP

  • Difficulty: Expert
  • Application: iwpriv
While you can certainly perform MAC filtering at the link layer using iptables or ebtables, it is far safer to let Host AP do it for you. This not only blocks traffic that is destined for your network, but also prevents miscreants from even associating with your station. This helps to preclude the possibility that someone could still cause trouble for your other associated wireless clients, even if they don't have further network access.
When using MAC filtering, most people make a list of wireless devices that they wish to allow, and then deny all others. This is done using the iwpriv command.
iwpriv wlan0 addmac 00:30:65:23:17:05
iwpriv wlan0 addmac 00:40:96:aa:99:fd
iwpriv wlan0 maccmd 1
iwpriv wlan0 maccmd 4
The addmac directive adds a MAC address to the internal table. You can add as many MAC addresses as you like to the table by issuing more addmac commands. You then need to tell Host AP what to do with the table you've built. The maccmd 1 command tells Host AP to use the table as an "allowed" list, and to deny all other MAC addresses from associating. Finally, the maccmd 4 command boots off all associated clients, forcing them to reassociate. This happens automatically for clients listed in the table, but everyone else attempting to associate will be denied.
Sometimes, you only need to ban a troublemaker or two, rather than set an explicit policy of permitted devices. If you need to ban a couple of specific MAC addresses but allow all others, try this:
iwpriv wlan0 addmac 00:30:65:fa:ca:de
iwpriv wlan0 maccmd 2
iwpriv wlan0 kickmac 00:30:65:fa:ca:de
As before, you can use addmac as many times as you like. The maccmd 2 command sets the policy to "deny," and kickmac boots the specified MAC immediately, if it happens to be associated. This is probably nicer than booting everybody and making them reassociate just to ban one troublemaker. Incidentally, if you'd like to remove MAC filtering altogether, try maccmd 0.
If you make a mistake typing in a MAC address, you can use the delmac command just as you would addmac, and it (predictably) deletes the given MAC address from the table. Should you ever need to flush the current MAC table entirely but keep the current policy, use this command:
iwpriv wlan0 maccmd 3
Finally, you can view the running MAC table by using /proc:
cat /proc/net/hostap/wlan0/ap_control 
The iwpriv program manipulates the running Host AP driver, but doesn't preserve settings across reboots. Once you are happy with the contents of your MAC filtering table, be sure to put the relevant commands in an rc script to run at boot time.
Note that even unassociated clients can still listen to network traffic, so MAC filtering actually does very little to prevent eavesdropping. To combat passive listening techniques, you will need to encrypt your data.