Not Really a Blog

October 15, 2009

Little surprises in HTTP Headers

Filed under: Internet, System Administration — jesus @ 22:59

Last week I moved a blog I’ve got in Spanish to wordpress.com. I really like wordpress.com, and I believe it’s well worth it in terms of freeing up my time from administering a WordPress installation, keeping up with the security fixes, etc. And today, having a little bit of time, I was tweaking my old website to redirect to the new site using an HTTP permanent redirect. This is what I found in the HTTP headers:

[golan@mars ~] % HEAD http://roncero.org/blog/
200 OK
Cache-Control: max-age=260, must-revalidate
Connection: close
Date: Thu, 15 Oct 2009 21:35:09 GMT
Server: nginx
Vary: Cookie
Content-Type: text/html; charset=UTF-8
Last-Modified: Thu, 15 Oct 2009 21:34:29 +0000
Client-Date: Thu, 15 Oct 2009 21:35:09 GMT
Client-Peer: 76.74.254.123:80
Client-Response-Num: 1
Link: ; rel=shortlink
X-Hacker: If you're reading this, you should visit automattic.com/jobs and apply to join the fun, mention this header.
X-Nananana: Batcache
X-Pingback: http://blog.roncero.org/xmlrpc.php

So, apart from various bits of information (nginx), what I really really liked was the X-Hacker header :-). Fancy a job?
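
Incidentally, the HEAD command above comes with libwww-perl; if you only have curl around, curl -sI does the same job, and it is a handy way to check the redirect once it is in place:

# -s silences the progress meter, -I asks for the headers only (a HEAD request)
curl -sI http://roncero.org/blog/

# once the permanent redirect is set up on the old site, the status line
# should read "301 Moved Permanently" instead of "200 OK"
curl -sI http://roncero.org/blog/ | head -1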

February 10, 2008

Tricks to diagnose processes blocked on heavy I/O in Linux

Filed under: Linux, System Administration — jesus @ 23:59

There’s one area where the Linux kernel, the GNU operating system and related tools might be lagging behind, especially in the kernel 2.4 series. I’m talking about I/O accounting: knowing what’s going on with the hard disk or the other devices used to read and write data.

The thing is that Linux provides you with a few tools with which you can tell what’s going on with the box and its set of disks: vmstat gives you a lot of information, and there are various other files scattered around the /proc filesystem. But that information only describes the system globally, so it’s good for diagnosing whether a high load on a box is due to some process chewing CPU cycles away or due to the hard disk being hammered and becoming painfully slow. But what if you want to know exactly what is going on, and which process or processes are responsible for the situation? The answer is that Linux doesn’t provide you with tools for that, as far as I know (if you know of any, please leave a comment). There’s no such thing as a top utility for per-process I/O accounting. The situation is better in Linux 2.6, provided you activate the taskstats accounting module, with which you can query information about processes. The user-space utilities are somewhat scarce, but at least there’s something to start playing with.
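
As an aside, later 2.6 kernels can also expose per-process I/O counters directly in /proc. Here is a minimal sketch, assuming a kernel built with CONFIG_TASK_IO_ACCOUNTING (which provides /proc/<pid>/io); it lists the processes that have written the most since they started:

# rank processes by write_bytes, the amount of data each one has caused
# to be sent to the storage layer (needs CONFIG_TASK_IO_ACCOUNTING)
for pid in /proc/[0-9]*; do
    [ -r "$pid/io" ] || continue
    wb=$(awk '/^write_bytes:/ { print $2 }' "$pid/io")
    cmd=$(tr '\0' ' ' < "$pid/cmdline")
    echo "$wb ${pid#/proc/} $cmd"
done | sort -rn | head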

However, there are some tricks you can use to try to find out which process is the culprit when things go wrong. As usual, many of these tricks come from work, where I keep learning from my colleagues (who, by the way, are much more intelligent than I am ;-)) whenever some problem arises that needs immediate action.

So, let’s define the typical scenario in which we could apply these tricks. You’ve got a Linux box with a high load average: say 15, 20, etc. As you may know, on Linux the load average counts the processes that are either runnable or in uninterruptible sleep. That’s why it doesn’t necessarily mean the CPU is loaded: when processes are blocked on I/O, say reads from a disk that is being slow, the CPU can just sit there mostly idle while the load climbs. The number only makes sense when you know how many CPUs the box has; ideally, a load average of 2 on a two-CPU box is just fine.
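
To put the number in context, compare it with the CPU count:

# the first three figures are the 1, 5 and 15 minute load averages
cat /proc/loadavg
# number of CPUs the kernel knows about
grep -c ^processor /proc/cpuinfo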

The number one tool for identifying what’s going on is vmstat, which tells you a lot about what’s happening on the box, especially when you run it periodically, as in vmstat 1. If you read the man page (and I do recommend you read it), you’ll get an idea of all the information that scrolls by on the screen :-). Almost all of its output is useful for diagnosis, except the last column (wa, time spent waiting for I/O) on a Linux 2.4 box, where that value is simply added to the idle column.

With this tool we can find out whether the system is busy doing I/O, and how, for example by looking at the bi and bo columns (blocks read from and written to block devices). Swapping, when it’s happening, can also mean the hard disk is being hammered, but it additionally means there isn’t enough memory in the system for all the processes running at that moment.
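
For instance, a first look usually goes like this:

# one sample per second, five samples; watch b (processes blocked on I/O),
# bi/bo (blocks received from / sent to a block device per second) and,
# on 2.6 kernels, wa (CPU time spent waiting for I/O)
vmstat 1 5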

OK, so back to our problem: how do we start? Well, the first thing to do is to try to find out what’s running that could be causing this. Who are the usual suspects? By looking at ps output we can get an idea of which processes and/or applications could be causing the disk I/O. The problem is that sometimes an application runs tens or hundreds of processes, each of which serves a remote client (say, Apache prefork), and maybe only some of them are causing the havoc; killing all of them is not an option in a production environment (killing just the processes causing the problem may be feasible, because they might be wedged or something).

Finding the suspects

One way to find the processes causing the writes and reads is to have a look at the processes in uninterruptible sleep. As the box has a high load average because of I/O, surely there must be processes in that state, waiting for the disk to return data so their system calls can complete, and these processes are likely to be involved in the high load of the system. If you think that processes in uninterruptible sleep cannot be killed, you are right; but we are assuming they enter that state briefly, again and again, because they are reading from and writing to the disk non-stop. If you have read the vmstat man page, you will have noticed that the b column tells us the number of processes in that state.

golan@kore:~$ ps aux | grep " D"
root     27351  2.9  0.2 11992 9160 ?        DN   23:06   0:08 /usr/bin/python /usr/bin/rdiff-backup -v 5 --restrict-read-only /disk --server
mail     28652  0.5  0.0  4948 1840 ?        D    23:11   0:00 exim -bd -q5m
golan    28670  0.0  0.0  2684  804 pts/23   S+   23:11   0:00 grep --color  D

Here we can see two processes in that state (denoted by D in the ps output; the third line is just our own grep). Normally we don’t get to see many of these at the same time, and if we issue the same command again we probably won’t see them again, unless there is a problem, which is why I’m writing this in the first place :-).
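
Since a process usually sits in D state only for an instant, it helps to sample repeatedly. A crude loop such as this one does the trick (using ps’s state format specifier instead of grepping the whole line, so our own grep doesn’t show up in the output):

# print, once a second, every process in uninterruptible sleep
while true; do
    ps -eo state,pid,user,args | awk '$1 ~ /^D/'
    sleep 1
done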

Examining the suspects

Well, we now need to examine the suspects and filter them, because there might be perfectly valid processes in uninterruptible sleep that are not responsible for the high load, so we need to find out which is which. One thing we can do is attach strace to a specific process and see what it’s doing. This is easily achieved:

golan@kore:~$ strace -p 12766
Process 12766 attached - interrupt to quit
write(1, "y\n", 2)                      = 2

Here we see the output of a process executing yes. So, what does this output tell us? It shows all the system calls the process is making, so we can effectively see whether it is reading or writing.
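
strace itself can help filter the noise. Two standard options are handy here: -e trace= limits the output to the system calls we care about, and -c prints a summary table of call counts when we detach:

# only show read/write activity on the live process
strace -p 12766 -e trace=read,write

# or attach for a while, hit Ctrl-C, and get per-syscall counts and times
strace -c -p 12766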

But all this can be very time-consuming if we have quite a few processes to examine. What we can do instead is strace all of them, saving the output to a different file for each, and check them later.

If the processes we are examining belong to a program called command, we could do it this way:

# mkdir /tmp/strace
# cd /tmp/strace
# for i in `ps axuwf | grep command | grep -v grep | awk '{ print $2 }'`; do (strace -p $i > command-$i.strace 2>&1) &  done

What this does is create a series of files called command-PID.strace, one for each process that matches the regular expression in the grep command (the grep -v grep keeps the grep itself out of the list). If we let this run for a while, we can then examine the contents of all the files. Even better, if we list the files ordered by size, we have a pointer to the processes doing the most system calls. All we need to do then is verify that those system calls are actually reads and writes. And don’t forget to kill all the strace processes we sent to the background, by issuing killall strace :-)
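
To rank them, something along these lines works (a sketch, assuming the traces live in /tmp/strace as above):

# count the read/write calls in each trace and list the busiest first
cd /tmp/strace
for f in *.strace; do
    printf '%8d %s\n' "$(grep -cE '^(read|write)\(' "$f")" "$f"
done | sort -rn | head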

So now we have a list of processes causing lots of reads and writes to the hard disk. What to do next depends on the situation and what you want to achieve. You might want to kill the processes, or find the person who started them (in case they were started by someone), or the network connection, IP address, etc. There are a bunch of utilities you can use, including strace, netstat, lsof, etc. It’s up to you what to do next.

And…

Well, this is me learning from my colleagues and from problems that arise when you don’t expect them. My understanding of the Linux kernel is not that good, but now many of the things I studied in the Operating Systems class are starting to make a little more sense. So please, if you have experience with this or know of other ways to get this kind of information, share it with me (as a comment or otherwise). I’m still learning :)

December 19, 2007

Modifying a live linux kernel

Filed under: Linux, Programming, System Administration — jesus @ 17:22

Before reading this, I just need to say something:

I’ve no idea of linux, I’ve no idea of programming, I’ve no idea of computers… Everything you read here might have been invented, so, please, do not reproduce what I write here. If you do, bear in mind that you do it under your own responsibility. In fact, what is a computer anyway?

The other day we were having issues with a box that was used as an NFS server, among other things. These issues appeared on upgrading the box from kernel 2.4 to kernel 2.6.22.1, and were related to locking on the NFS server, because the behaviour of the flock() system call changed in Linux 2.6.12 (the 2.6.11-18 patch). From the NFS FAQ:

The NFS client in 2.6.12 provides support for flock()/BSD locks on NFS files by emulating the BSD-style locks in terms of POSIX byte range locks. Other NFS clients that use the same emulation mechanism, or that use fcntl()/POSIX locks, will then see the same locks that the Linux NFS client sees.

The problems we had were related to exporting the same file system over both NFS and Samba, with locking not working properly.

SMB supports two types of locks – file-wide locks and byte-range locks.

  • File-wide locks
    • Called ‘share modes’ in SMB parlance
    • Also known as ‘BSD-style locks’
    • provided by flock() in Linux
    • provided by a ‘share mode’ flag when opening a file under Win32
    • Supported by samba itself, which stores them in a TDB; they get listed under ‘Locked Files’ at the bottom of smbstatus
    • May also be enforced in the kernel using flock() if HAVE_KERNEL_SHARE_MODES is 1.
  • Byte-range locks
    • Called ‘POSIX-style’ locks.
    • provided by fcntl(fd, F_SETLK, ...) in POSIX.
    • provided by _locking() in Win32
    • lockf() is a wrapper around fcntl() locking on Linux.
    • Supported by samba via a ‘Windows to POSIX byte-range overlap conversion layer’ and then fcntl().

Windows applications appear to use both share modes and byte-range locks.

In Linux, flock() and fcntl() locks are oblivious to each other, as per http://lxr.linux.no/source/Documentation/locks.txt.

NFSv3 (as a protocol) only supports byte-range locks. However, nfsd does honour flock() locks taken out by other processes on files on the server, although clients cannot set them themselves. See http://nfs.sourceforge.net/#faq_d10

Unfortunately, Linux 2.6.12 adds flock() emulation to the Linux NFS client by translating it into a file-wide fcntl(). This means that flock()s and fcntl()s *do collide* on remote NFS shares, which introduces all the potential application race conditions that Linux avoided by keeping them oblivious to each other locally. The practical upshot is that if you re-share an NFS share via samba, and a Windows client (e.g. Outlook opening a PST file) opens a file with a share mode, subsequent byte-range locking operations will fail because the lock has already been acquired. (The fact that NFS doesn’t realise the same PID holds both locks, and so could allow them both, is probably an even bigger problem.) The solution is to revert bits of the patch responsible: http://www.linux-nfs.org/Linux-2.6.x/2.6.11/linux-2.6.11-18-flock.dif. Disabling share modes in samba is not an option, as it also disables the application-layer TDB support for them; and disabling HAVE_KERNEL_SHARE_MODES would stop other programs (e.g. nfsd) from being aware of what’s been flock()ed.

So our solution for our server was reverting that bit of the 2.6.11-18 patch, by applying this patch:

--- fs/nfs/file.c       2007-07-10 19:56:30.000000000 +0100
+++ fs/nfs/file.c.nfs_flock_fix 2007-11-13 13:40:06.000000000 +0000
@@ -543,10 +543,24 @@
         * Not sure whether that would be unique, though, or whether
         * that would break in other places.
         */
-       if (!(fl->fl_flags & FL_FLOCK))
+
+       /**
+        * Don't simulate flock() using posix locks, as they appear to collide with
+        * legitimate posix locks from the same process.
+        */
+       if (fl->fl_flags & FL_FLOCK)
                return -ENOLCK;

        /* We're simulating flock() locks using posix locks on the server */
+       /* ...except we shouldn't get here, due to the above patch. */
        fl->fl_owner = (fl_owner_t)filp;
        fl->fl_start = 0;
        fl->fl_end = OFFSET_MAX;

So, for us, recompiling the kernel with this patch fixed all our problems on the production server. But what if we wanted to do this live? Could such a subtle change be made without rebooting? You might be thinking right now of the various options /proc gives you for changing the kernel’s behaviour on a live system, but what if there is no such option? What if we wanted to change something and there was no way to do it, because it is not implemented or simply not possible?

Let’s see.

So, from an academic point of view, we wanted to see if this could really be done, whether the Linux kernel would let us do it, whether it was feasible. So we set up a testing box on which we would try to modify the running kernel. How hard would it be?

It seems there’s a way to do it, which my colleague Matthew came up with (all credit to him, I’m just telling the story). Let’s examine the piece of code we want to change. The offending code lives in the file fs/nfs/file.c of the Linux kernel:

/*
 * Lock a (portion of) a file
 */
static int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
{
        dprintk("NFS: nfs_flock(f=%s/%ld, t=%x, fl=%x)\n",
                        filp->f_path.dentry->d_inode->i_sb->s_id,
                        filp->f_path.dentry->d_inode->i_ino,
                        fl->fl_type, fl->fl_flags);
         /*
         * No BSD flocks over NFS allowed.
         * Note: we could try to fake a POSIX lock request here by
         * using ((u32) filp | 0x80000000) or some such as the pid.
         * Not sure whether that would be unique, though, or whether
         * that would break in other places.
         */

        if (!(fl->fl_flags & FL_FLOCK))
                return -ENOLCK;

        /* We're simulating flock() locks using posix locks on the server */
        fl->fl_owner = (fl_owner_t)filp;
        fl->fl_start = 0;
        fl->fl_end = OFFSET_MAX;

        if (fl->fl_type == F_UNLCK)
                return do_unlk(filp, cmd, fl);

        return do_setlk(filp, cmd, fl);

}

What we wanted to do is change the behaviour of that function so that the if condition becomes:

        if (fl->fl_flags & FL_FLOCK)
                return -ENOLCK;

That means the change is fairly trivial: it amounts to inverting the branch, i.e. changing the way the branching (in the machine code) is done for the if statement. If we disassemble the object file, file.o, we get something like:

# objdump -d fs/nfs/file.o
...
00000885 <nfs_flock>:
 885:   57                      push   %edi
 886:   89 d7                   mov    %edx,%edi
 888:   56                      push   %esi
 889:   89 c6                   mov    %eax,%esi
 88b:   53                      push   %ebx
 88c:   89 cb                   mov    %ecx,%ebx
 88e:   83 ec 14                sub    $0x14,%esp
 891:   f6 05 00 00 00 00 40    testb  $0x40,0x0
 898:   74 3b                   je     8d5 <nfs_flock+0x50>
 89a:   0f b6 41 2c             movzbl 0x2c(%ecx),%eax
 89e:   89 44 24 10             mov    %eax,0x10(%esp)
 8a2:   0f b6 41 2d             movzbl 0x2d(%ecx),%eax
 8a6:   89 44 24 0c             mov    %eax,0xc(%esp)
 8aa:   8b 56 0c                mov    0xc(%esi),%edx
 8ad:   8b 42 0c                mov    0xc(%edx),%eax
 8b0:   8b 40 20                mov    0x20(%eax),%eax
 8b3:   89 44 24 08             mov    %eax,0x8(%esp)
 8b7:   8b 42 0c                mov    0xc(%edx),%eax
 8ba:   8b 80 9c 00 00 00       mov    0x9c(%eax),%eax
 8c0:   c7 04 24 20 01 00 00    movl   $0x120,(%esp)
 8c7:   05 40 01 00 00          add    $0x140,%eax
 8cc:   89 44 24 04             mov    %eax,0x4(%esp)
 8d0:   e8 fc ff ff ff          call   8d1 <nfs_flock+0x4c>
 8d5:   f6 43 2c 02             testb  $0x2,0x2c(%ebx)
 8d9:   74 47                   je     922 <nfs_flock+0x9d>
 8db:   80 7b 2d 02             cmpb   $0x2,0x2d(%ebx)
 8df:   89 73 14                mov    %esi,0x14(%ebx)
 8e2:   c7 43 30 00 00 00 00    movl   $0x0,0x30(%ebx)
 8e9:   c7 43 34 00 00 00 00    movl   $0x0,0x34(%ebx)
 8f0:   c7 43 38 ff ff ff ff    movl   $0xffffffff,0x38(%ebx)
 8f7:   c7 43 3c ff ff ff 7f    movl   $0x7fffffff,0x3c(%ebx)
 8fe:   75 11                   jne    911 <nfs_flock+0x8c>
 900:   83 c4 14                add    $0x14,%esp
 903:   89 d9                   mov    %ebx,%ecx
 905:   89 f0                   mov    %esi,%eax
...

We can actually have a look at the disassembled code of the function nfs_flock(). If you look at address 0x8d9, there’s an instruction that looks suspiciously like the test carried out by the if statement. If you know assembly and know how a compiler works, you can figure out that this jump instruction (JE) is exactly the one we want to change into a JNE instruction (I am not going to expand here on assembly, compilers and the like; I guess that if you wanted to do this, you would already know about them. And, by the way, I’ve no idea about those concepts either, seriously).

If you are not sure that’s the right instruction, you can recompile the kernel with the patch applied, get the assembly out of the same file.o object, and compare the two to see what changed.
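
A quick way to make that comparison is to disassemble both versions of the object and diff them:

# disassemble the original object...
objdump -d fs/nfs/file.o > /tmp/file-old.asm
# ...then apply the patch, rebuild fs/nfs/file.o, and disassemble again
objdump -d fs/nfs/file.o > /tmp/file-new.asm
diff -u /tmp/file-old.asm /tmp/file-new.asm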

Also, if you look at address 0x898, there’s another JE instruction which may look like the one we are looking for, but it belongs to the dprintk call, as we have debugging enabled on that kernel.

If we have a look at the IA-32 manual, we see that the opcodes for the two interesting instructions are:

  • JE: 74
  • JNE: 75

OK, so right now we know that we want to change a JE instruction (opcode 74) to a JNE instruction (opcode 75) at address 0x8d9 of the object file file.o.

The problem now is finding out where in kernel memory this piece of code lives. One approach you might think of is grepping the whole memory for a particular sequence of instructions. This is not recommended, and I will explain why later on. First, let’s see how we can get access to the kernel memory, where we can then modify the data…

If you have a look at your unix system, you’ll see that you have a /dev/kmem special file, which gives you access to memory from the kernel’s point of view. This is quite useful, as you can access it in read mode and, more interestingly, in write mode. However, doing stuff with it can be a bit dangerous, as you might have guessed. It even seems that some vendors disable this special file.

Anyway, as I said before, you cannot, and don’t want to, read or grep the whole of memory through /dev/kmem. The reason seems to be write-only hardware registers mapped into memory, so a read from them can crash the system. (You might think that warning is very old, from a distant time, but believe me, it still crashes Linux. We tried reading the whole of /dev/mem and the network driver crashed, among other little pieces, so don’t bother.)

So basically the thing boils down to:

  1. Finding out exactly where we have to change the kernel (i.e., at which memory address)
  2. Opening /dev/kmem and changing it with the right tool
  3. Hoping everything went fine :-)

Finding out where

We need to find out where in the kernel memory we want to make the change. This can easily be done using tools and information the kernel provides us with.

First, we need to find out where the kernel function nfs_flock starts, and that can be done by having a look at the System.map file that is generated every time we compile a kernel. System.map contains the kernel symbol table: it maps every kernel symbol to its memory address, which helps kernel developers debug their code and makes finding things much easier.

Ours looks like:

...
c01a9b3f t do_unlk
c01a9b97 t do_setlk
c01a9c2b t nfs_lock
c01a9d21 t nfs_flock
c01a9dcc T nfs_get_root
c01a9f3c T nfs_write_inode
...

So now, we know that nfs_flock is located at 0xc01a9d21. We’ll use this in a minute.
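
If you don’t fancy scrolling through the whole file, grep finds the symbol directly (the path is an assumption: System.map usually lives in /boot, named after the kernel release):

grep ' nfs_flock$' /boot/System.map-$(uname -r)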

We saw that the instruction we want is located at 0x8d9 in the object file file.o (found using objdump above). We also know that, in that object file, nfs_flock starts at 0x885. That means the byte we want to change is located exactly

0x8d9 - 0x885 = 0x54

bytes in, i.e. at nfs_flock + 0x54.

Now, as you might know, those addresses in the object file are relative to that file, and when it is linked into the actual kernel the addresses are all relocated and recalculated. So the right spot is at:

0xc01a9d21 + 0x54
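
which the shell can compute for us:

# the absolute address of the JE opcode in the running kernel
printf '0x%x\n' $(( 0xc01a9d21 + 0x54 ))    # 0xc01a9d75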

Opening /dev/kmem and modifying it

Opening /dev/kmem needs to be done with the right tools. This is basically because we need LARGE_FILE support in whichever tool we use to modify it: /dev/kmem is a representation of the kernel’s memory, and a kernel address such as 0xc01a9d21 is above the 2 GB mark, so seeking to it overflows a 32-bit file offset unless the tool supports large files.

On our system, the easiest way to do it is with perl. First, we double-check that perl was compiled with LARGE_FILE support:

root@devbox:~# perl -V  | grep LARGE
    cc='cc', ccflags ='-fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
  Compile-time options: USE_LARGE_FILES
root@devbox:~#

and now we can modify the kernel with a one-liner such as:

root@devbox:~# perl -e 'open (KERNEL, "+</dev/kmem") || die $!; sysseek(KERNEL, 0xc01a9d21 + 0x54, 0); syswrite(KERNEL, chr(0x75)); close(KERNEL);'
root@devbox:~#

where we are writing 0x75 (the JNE opcode) at the position calculated previously.
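
If you are feeling careful (and you should be), you can also read the byte back to check you are patching the right spot: before the write it should be 74 (JE), and afterwards 75 (JNE). A sketch using dd, which, like perl, needs large-file support for an offset this big:

# dump the single byte at nfs_flock + 0x54 as hex
dd if=/dev/kmem bs=1 skip=$(( 0xc01a9d21 + 0x54 )) count=1 2>/dev/null | od -An -t x1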

And that’s all there is to it. If everything went fine, the kernel’s behaviour has been modified and nothing has crashed.

Again, only do this on a dev box, under your own responsibility, and only if you really, really know what you are doing. And only, only, only if you want to play with a live kernel. Remember, I know nothing about computers.

November 9, 2006

How to become a unix guru

Filed under: Fun, Internet — jesus @ 16:34

Just read this on the IRC channel #netbsd on freenode. Found it funny :)

<alphawaves> how can I become a unix guru?
<alphawaves> is it only possible?
<sqweek> do you really want to?
<alphawaves> uhm y eah i try to several years
* AccessExcess has quit (Remote closed the connection)
<sqweek> do you still want to after i tell you that aside from general understanding it requires rote learning of countless quirks and inconsistencies between tools/distros/systems?
<alphawaves> uhm yeah i try since several years
* arnito has quit ("bye")
<marcello_> write your own unix kernel
<sqweek> well, it's the same as learning about everything else
<sqweek> strive for understanding
