Saturday, March 23, 2013

mkstatic: join a binary and its libraries together

You've just built your program on your own notebook, and then noticed that many of its libraries are missing on the other host, the one where you want to try out your work. Nothing unusual. The first approach is to install them (lucky you, if you have root access). Another approach is to recompile your program with all the libraries linked statically, and good luck finding static counterparts for all your dynamic libraries. So neither approach works everywhere, especially on a production server.

But what if you just want to launch your program on another machine, without bothering with the tons of libraries it depends on? To do this job quickly you can use the mkstatic Perl script. It creates a .static package which contains your binary and all of its libraries.

No magic, again :-(

Surely, it doesn't do anything you couldn't do manually (if you're familiar with ld.so). But I doubt you would do the job as accurately. So, here is what mkstatic does for you:

  • it collects all the dependencies of your binary_file (and remembers symlinks to libraries)
  • it creates a .tgz which is actually wrapped in a shell file; this archive includes the binary, the libraries and bootstrapping code
  • you can use binary.static like a usual binary file
I believe the last point is the most important, because all the mess is hidden from you: everything works just like your original binary, with exactly the same usage, and it requires nothing from the target host.

Known limitations

A keen reader may guess: "Hey, so this will work for any binary!" Well, no: you can't [easily] copy your Chromium installation to an empty machine this way, simply because it depends not only on libraries, but on lots of data (drivers for your Xorg, font configs), and mkstatic knows nothing about them. But you can use mkstatic, for instance, with Midnight Commander :-). Or with any executable of yours which uses "data-free" libraries (libogg, libboost, libstdc++, etc.).

Let's test it

Surely, it's better to test mkstatic on two machines: one which has the whole bunch of libraries, and a fresh one. But I'll show you how I've tested this thing.

First of all, you have to build the .static package. Let's use the Midnight Commander binary as an example:
$ ./mkstatic -o /tmp/mc.static `which mc`
executable package is ready: /tmp/mc.static
I'm using xubuntu-12.04 (Precise Pangolin). Like any Debian-like distribution, it provides the debootstrap utility. So, launch:
$ sudo debootstrap precise precise-chroot http://mirror.yandex.ru/ubuntu/
$ sudo cp /tmp/mc.static precise-chroot/tmp/
$ sudo chroot precise-chroot /bin/bash # now you're in test environment
# /tmp/mc.static

Voilà! Midnight Commander works in your chroot environment, even though there is no libgpm.so inside. You may say that Midnight Commander is pretty simple. Sure! But you can use mkstatic with much heavier binaries like mencoder, which requires about 100 libraries. Or even Skype! Any program consisting of one executable binary file is mkstatic-friendly. Try it!

As usual, there is a manual page in the package. See mkstatic --man for details.

P.S. If you're just interested in the self-extracting archive approach, take a look at makeself. It's widely used for binary installers in the Unix world (Nvidia drivers, VirtualBox, etc.).

Monday, December 17, 2012

crxprof: handy profiler

Some weeks ago we faced a strange situation: the program that performs indexing for our search engine started to lag. And it was very interesting to know what exactly was going on right then. We launched `gdb` and tried to `finish` particular stack frames, but saw nothing unusual: stack frames finished and started again. So, there was nothing stalling the process. It was almost OK, just very slow, much slower than usual.
After that I tried to recall a profiler which is able to work with an already launched executable. There are some (popular) Linux profilers:
  • gprof
  • Callgrind
  • OProfile
  • Google perftools

gprof

UNIX gprof is a pretty old profiler, written by Bill Joy while he was checking the performance of BSD. It requires recompiling your source code to inject checkpoints at the beginning and at the end of every function, so your code will look as follows:
void fn()
{
  enter_fn();
  
  ... /* actual code of fn() */

  leave_fn();
}
Surely, the time difference between enter_fn() and leave_fn() is the time used by fn(). And gprof knows exactly how many times you called fn(). But the drawbacks are obvious: it has to be integrated at compile time, and it gives appreciable overhead: the less your fn() contains, the larger the share checkpointing takes. And surely it doesn't work with an already launched process.

Callgrind

Callgrind is a part of Valgrind, a great instrumentation framework for building dynamic analysis tools. Callgrind profiles by instrumenting instructions such as function calls and returns. It slows the launched program down significantly, 5x to 20x. So it's usually hard to use on big data sets, not to mention at runtime. But it has a simple call-graph format, and there is a nice program to visualize it: KCachegrind.

OProfile

OProfile is a system-wide profiler for Linux systems, capable of profiling all running code at low overhead. Before Linux 2.6.31 it consisted of a kernel driver and a user-space daemon for gathering sample data. Now (since 0.9.8) it profiles via Linux Kernel Performance Events. Performing system-wide profiling requires root privileges. OProfile is a sampling profiler (it gathers the Program Counter at a specific frequency). It is really low-cost for a flat profile, but requires more for a call graph (see the notes about unwinding below).

Google perftools

The Google profiler is a part of the Google perftools set, which also contains tcmalloc (an allocator designed specifically for multithreaded environments), a heap checker and the CPU profiler itself. It works by collecting samples using ITIMER_PROF as the timer. Using ITIMER_PROF gives the ability to collect samples only while the program is actually executing, because you're usually not interested in sleep(3) or epoll_wait(2) usages.
Each time SIGPROF occurs, it collects a backtrace using libunwind. After your program has successfully finished (via exit(3)), you get the raw profile data, which is convertible to many formats using google-pprof.
The Google profiler, just like any other tool from perftools, can be linked in explicitly or used at runtime via the LD_PRELOAD facility. So it can be used with any program, but it's still not suitable for already launched ones due to its design.
There are some more disadvantages here: google perftools doesn't survive fork(2), and your program can't finish abnormally (via a signal). That makes it hard to profile daemons: they are usually built upon a master-workers schema and assume an endless event loop.
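To illustrate the idea, here is a minimal sketch of ITIMER_PROF-based sampling. It's not the actual perftools code (which unwinds with libunwind and folds samples into a call tree); this hypothetical version just counts samples and grabs a backtrace with glibc's backtrace(3):

#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t nsamples;

static void on_sigprof(int sig)
{
    void *frames[64];
    (void)sig;
    /* NB: backtrace(3) is not strictly async-signal-safe;
     * a real profiler unwinds more carefully */
    backtrace(frames, 64);
    nsamples++; /* a real profiler would fold the frames into a call tree */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigprof;
    sigaction(SIGPROF, &sa, NULL);

    /* 100 Hz; ITIMER_PROF ticks only while the process is on the CPU */
    struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_PROF, &tv, NULL);

    for (volatile long i = 0; i < 300000000L; i++)
        ; /* busy loop: something to sample */

    printf("%d samples collected\n", (int)nsamples);
    return 0;
}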

Crxprof

crxprof is a simple profiler designed to profile already launched programs. It collects call chains and can visualize them on request (press ENTER) or after the traced program completes. It also saves the call graph in callgrind format, making it easy to examine with KCachegrind. It works extremely fast and doesn't require any additional commands to convert raw data, simply because it doesn't write any internal format :-).
It works much like the Google CPU profiler, but performs profiling externally via ptrace(2). Like the Google profiler, it uses libunwind to unroll the stack. To avoid extra work on the raw data (for example, the heavy addr2line(1) pass the Google profiler does), it also uses libbfd.
No special support is required: you can use crxprof with any program you are able to (s)trace.
You can download crxprof from github. Since it's been made for me and my colleagues, I suppose there may be some features missing for your particular use case. Feel free to ask.
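For the curious, here is what the external-sampling core might look like. This is a hypothetical sketch of the bare ptrace(2) mechanics (x86_64 only, error handling omitted), not crxprof's actual source: it prints a flat sample of instruction pointers, while crxprof additionally unwinds the whole stack with libunwind at every stop:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    pid_t pid = (pid_t)atoi(argv[1]);

    ptrace(PTRACE_ATTACH, pid, NULL, NULL);
    waitpid(pid, NULL, 0); /* the target is stopped now */

    for (int i = 0; i < 100; i++) { /* 100 samples at ~100 Hz */
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
        printf("sample: rip=%#llx\n", (unsigned long long)regs.rip);

        ptrace(PTRACE_CONT, pid, NULL, NULL); /* resume the target */
        usleep(10000);                        /* let it run for 10 ms */
        kill(pid, SIGSTOP);                   /* and stop it again */
        waitpid(pid, NULL, 0);
    }

    ptrace(PTRACE_DETACH, pid, NULL, NULL); /* target stopped, detach is fine */
    return 0;
}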

Building

To build crxprof you may follow the usual Unix build sequence:
autoreconf -fiv
./configure
make
sudo make install
If you have libunwind installed in a special place, point to it via:
./configure --with-libunwind=/path/to/libunwind
You may also skip installation, since ./crxprof is the only file you need. Also, I recommend using static linkage if you want to copy this file to "fresh" servers.

Profiling

To get the job done, you need to launch crxprof like this:
crxprof pid
That's all! Press ENTER to print the profile, ^C to exit. crxprof will also exit (showing profile info) when the program dies.

Options

As with most UNIX programs, you can get up-to-date help using
$ crxprof --help
But I'll post this usage() here anyway. It's very compact:
Usage: ./crxprof [options] pid
Options are:
 -t|--threshold N:  visualize nodes that takes at least N% of time (default: 5)
 -d|--dump FILE:    save callgrind dump to given FILE
 -f|--freq FREQ:    set profile frequency to FREQ Hz (default: 100)
 -m|--max-depth N:  show at most N levels while visualising (default: no limit)
 -r|--realtime:     use realtime profile instead of CPU
 -h|--help:         show this help

 --full-stack:      print full stack while visualising
 --print-symbols:   just print funcs and addrs (and quit)

Real example

To make a real but not complicated example, I will use this program. Just run crxprof, asking it to dump the call graph to a file (assuming 32366 is the PID of the test program):
$ crxprof --dump /tmp/test.calls 32366
Reading symbols (list of function)
reading symbols from /home/dkrot/test/a.out (exe)
reading symbols from /lib/x86_64-linux-gnu/libc-2.15.so (dynlib)
reading symbols from /lib/x86_64-linux-gnu/ld-2.15.so (dynlib)
Attaching to process: 32366
Starting profile (interval 10ms)
Press ENTER to show profile, ^C to quit
2248 snapshot interrputs got (0 dropped)
main (100% | 0% self)
 \_ strong_function (75% | 49% self)
   \_ a (25% | 25% self)
 \_ a (24% | 24% self)
Profile saved to /tmp/test.calls (Callgrind format)
^C--- Exit since ^C pressed

Using this visualisation we can easily see what's going on:
  • main() calls strong_function() (and this is the most consuming path)
  • strong_function() calls a()
  • main() also calls a()
  • strong_function() takes half of the CPU time itself
  • a() consumes the rest of the CPU time, being called from 2 different places
  • main() doesn't consume anything by itself

This visualisation is made by the principle of "biggest subtrees first", so it's handy to use crxprof right in the terminal. But for a GUI representation and deeper analysis you can use the saved dump file (/tmp/test.calls):

$ kcachegrind /tmp/test.calls
And get something like this picture. KCachegrind summarises the information and shows that a() consumes 50% self-time. It differs from the terminal visualisation: I found separate accounting more appropriate for compact text output.

Unwinding stack

Unwinding the stack is needed to collect backtraces; without backtraces it's impossible to show a call graph. And usually a flat profile is not that interesting to look at: you can't eliminate all the malloc-s even if they take significant time, and that's not what you're interested in. Usually you want to know "who called malloc", to work around that particular call chain. That's why a flat profile is mostly useless here.

Pretty old mechanism

Basically, the stack consists of arguments, the instruction pointer to return to (the caller's IP) and local variables. To make addressing easier, a special register BP (base pointer) is used.
In this schema it's easy to unroll the stack using the previous base pointer saved on the stack. But the problem is that making a stack frame is sometimes wasteful: if your function consists of just 10 instructions, the overhead is large. Therefore, some distributions compile their core libraries without frame pointers (gcc -fomit-frame-pointer); eglibc in the Debian distribution, for example, is built this way. Local variables and parameters can still be accessed via the stack pointer (SP), saving one more register for general use.
But the interesting thing is that frame pointers themselves are not used by debuggers: they use exception handling frames.
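To make it concrete, here is a hypothetical sketch of frame-pointer unwinding that walks its own BP chain (x86_64 assumed; compile with gcc -fno-omit-frame-pointer, otherwise the chain is broken):

#include <stdio.h>

/* Frame layout on x86_64 when frame pointers are kept:
 * [rbp] holds the caller's saved rbp, [rbp+8] the return address. */
struct frame {
    struct frame *prev; /* saved base pointer of the caller */
    void *ret;          /* caller's instruction pointer */
};

static void __attribute__((noinline)) show_backtrace(void)
{
    struct frame *fp = __builtin_frame_address(0);
    /* follow the chain of saved base pointers (bounded, just in case) */
    for (int depth = 0; fp && depth < 16; depth++) {
        printf("#%d return address %p\n", depth, fp->ret);
        fp = fp->prev;
    }
}

static void __attribute__((noinline)) a(void) { show_backtrace(); }

int main(void)
{
    a(); /* expect frames for a() and main() in the output */
    return 0;
}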

Exception handling frames

Exception handling frames were introduced for languages that support exceptions, such as C++. They consist of records describing the relative positions of the IP and parameters. Each record covers a specified region of code, telling "where the stack frame is located when you are here". So, to extract the caller's IP you have to unpack these uncommon records depending on where exactly you are now (the current IP). It's one of the reasons why exception handling in C++ is slow: I mean, it should be used exactly for exception handling, not for things that happen 100000 times per second.
On Linux, exception frames are represented within an ELF file by two sections:
  • .eh_frame: the exception frames themselves
  • .eh_frame_hdr: an index over .eh_frame suitable for lookup
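These are exactly the records libunwind parses, which is why it can unwind even without frame pointers. A minimal local-unwinding sketch (build with -lunwind; the output format here is my own, not any tool's):

#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>

static void show_stack(void)
{
    unw_context_t ctx;
    unw_cursor_t cursor;
    unw_getcontext(&ctx);
    unw_init_local(&cursor, &ctx);

    while (unw_step(&cursor) > 0) { /* step towards outer frames */
        unw_word_t ip, off;
        char name[128];
        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        if (unw_get_proc_name(&cursor, name, sizeof name, &off) == 0)
            printf("%#lx %s+%#lx\n", (unsigned long)ip, name, (unsigned long)off);
        else
            printf("%#lx ??\n", (unsigned long)ip);
    }
}

int main(void)
{
    show_stack(); /* expect frames for main and the libc startup code */
    return 0;
}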

Thursday, December 6, 2012

HBase: finding balance

The disadvantage of abstractions

The interesting thing about abstractions: they are good for making independent parts of a system, and it's fine while everything works as you expect. But suddenly it breaks, and you start to realize you should dive into the problem to formulate your expectations precisely. Because, just out of the blue, you have to split your problem into "micro-expectations", expectations of a lower level than just "make this world happy". Sometimes abstractions just don't work. Sometimes you have to unfold the black box and start hurling bricks.

Hey, this is not a lecture on gnosiology! It's just a discourse about Java. And about Hadoop. Adherents of Java always try to create abstract things with the statement "it should just work". But when it breaks, you are left with huge expectations and no idea how to deal with them. And the Java style worsens the situation: it's just harder to "unfold" this black box because of the sophistication of its creator. All details are carefully hidden; otherwise the books would not look so tidy, and the idea itself would look unclean.

Still, it's not a lecture :-) Just look at the following problem in HBase.

Data locality in HBase

The whole idea of map-reduce (in terms of performance) is data locality and independence. Jobs work with their own data. You gain maximum performance when your data is spread evenly across your cluster and each job works with local data, because access to local HDDs is much cheaper than remote HDDs plus network transmission.

The strong side of abstraction is that HBase itself is just an idea built upon HDFS. And therefore, it has to play by HDFS' rules.

When HBase starts, its regions are balanced across region servers (RS). But how does data locality work in this case? Regions are just a couple of files in HDFS, and HBase has no secret interfaces to HDFS. It simply works by the following rule used when new blocks are created:

  • Try to put the initial replica of a block on the requesting server
  • Put the second replica as near as possible: in the same cluster, even in the same rack
  • Put the third replica as far as possible, just for backup: in another rack, or even another cluster

And it really works. But then you restart your HBase cluster: because of an error, or just for prevention in the end! Anyway, your cluster starts working slower than before. Why? There is a "law of Windows": everything works perfectly right after a restart/re-install! Why doesn't portable Java follow this rule?!

The problem is that by default HBase doesn't store the block map. It simply starts with a completely different distribution of regions: the RSes have no notion of the previous state. And you can see higher network load in your monitoring while Hadoop slowly rearranges your blocks. So slowly, in fact, that it's better to rewrite all the data to recreate data locality artificially.

I really don't know how to force HBase to memorize this state. But here is a simple script to measure locality. Just launch:

./count_locality.sh tablename

Its output is the data locality of each RS in your cluster. Locality of 98% - 100% is perfect. Locality lower than 50 percent is certainly bad.

Sunday, August 12, 2012

Speedup file reading on linux

(Actually, this is a pretty old post from my previous address.)

It's about how to speed up reading a pile of files from disk drives. Evidently, such an operation is needed not so rarely: parsing a couple of files, copying them over a fast ethernet connection, or moving them from one partition to another.

There are several ways to speed this up, and I'm sure you know about the dependence on IO-buffer size, but one of the main gains is achieved by "prereading" data and doing it in the right order, the things you usually can't control from userspace. Since the main obstacle while reading files is non-linear movement of the disk drive heads, we should follow the native physical order as closely as possible.

This native order may be retrieved by using ioctl(FIBMAP) on an opened file descriptor, but there are some limits: the third argument of the `ioctl' call is a pointer to an integer, the logical block number being translated to a physical one on output, so obviously the number of physical blocks able to be mapped is not very large. It may hit the limit on XFS and other huge file systems. There is also a big disadvantage of FIBMAP: it requires superuser privileges (I don't know why). Instead of the old ioctl, newer Linux kernels provide another one: FS_IOC_FIEMAP. This variant is much more flexible, universal and limit-safe. It also requires no superuser privileges. This call provides a view of the file as physical extents (even for filesystems allocating data by bitmaps, see the flags). You can find much more information in the kernel documentation.

Here is a sample of how to retrieve the first physical block by the methods mentioned above:

#include <sys/ioctl.h>
#include <fcntl.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

uint64_t
get_physblock(const char *f)
{
    int fd = open(f, O_RDONLY);
    uint64_t block = ~0ULL;
    if (fd >= 0) {
#ifdef FS_IOC_FIEMAP
         /* room for the fiemap header plus one extent record */
         union {
           struct fiemap fm;
           char buf[sizeof(struct fiemap) + sizeof(struct fiemap_extent)];
         } u;
         memset(&u, 0, sizeof u);
         u.fm.fm_length = 1;       /* one byte mapping from logical offset=0 */
         u.fm.fm_extent_count = 1; /* buffer for one extent provided */
         if (ioctl(fd, FS_IOC_FIEMAP, &u.fm) != -1 && u.fm.fm_mapped_extents == 1)
           block = u.fm.fm_extents[0].fe_physical; /* physical byte offset */
#else
         int blk = 0; /* logical block 0 in, physical block out */
         if (-1 != ioctl(fd, FIBMAP, &blk))
           block = (uint64_t)blk;
#endif
        close(fd);
    }

    return block;
}

Test

It's clearly right that, relying on the physical block ID, you may reorder the files to read. In addition, there is the readahead(2) syscall, which can be used to "preread" file data into the VFS cache. It differs from reading via read(2) in that it has no "copy to userspace" overhead. Indeed, there is not much more to talk about, so let's get to the test results. The testing principle is quite simple: read the Linux sources file by file. At first read them `as is', then apply readahead, and finally preordering. The FS cache between these cases may be purged by:

$ echo 2 >/proc/sys/vm/drop_caches
The test results are as follows:
Method used          Time elapsed (sec)
as-is                33
readahead            26
reorder + readahead  14
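The "reorder + readahead" pass from the last row boils down to something like this sketch. get_physblock() is the function above; struct entry and prefetch_files() are my hypothetical names, and error handling is omitted:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

struct entry {
    const char *path;
    uint64_t phys;
};

static int by_phys(const void *a, const void *b)
{
    const struct entry *x = a, *y = b;
    return (x->phys > y->phys) - (x->phys < y->phys);
}

void prefetch_files(struct entry *files, size_t n)
{
    for (size_t i = 0; i < n; i++)
        files[i].phys = get_physblock(files[i].path);

    /* read in disk order, not in directory order */
    qsort(files, n, sizeof *files, by_phys);

    for (size_t i = 0; i < n; i++) {
        int fd = open(files[i].path, O_RDONLY);
        if (fd >= 0) {
            off_t len = lseek(fd, 0, SEEK_END);
            readahead(fd, 0, (size_t)len); /* fill page cache, no copy to userspace */
            close(fd);
        }
    }
}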

It's not difficult to see that applying readahead, especially with preordering, gives a big benefit. So this method may be used for caching: there are several implementations engaged in popular Linux distributions, for example the readahead package, used by default in Fedora and Ubuntu.

You can read more details about fiemap on the LWN page.

I hope this short note will convince you to use these tricky calls when FS reading speed is valuable. Surely, this method is only worthwhile for reading lots of files, not for a couple of huge files, since those are already self-ordered. For myself, I used this approach when developing a library which creates dictionaries for classifying phrases: there were over 60000 input files, and until then reading these files consumed most of the time.

Wednesday, July 25, 2012

pstack for amd64

If you have ever printed a stack with gdb(1), you may have noticed it's slow. That's OK while debugging, but surely not suitable for anything near-realtime. It's because gdb extracts every symbol of all the files the observed executable is linked against.

There is a nice replacement for this, pstack(1), but it works only for x86 binaries. Here is an attempt to do the same for x86_64 too. It uses libunwind to unroll stack frames, and then a Perl script (omfg) to extract symbols and debug info and to produce pretty output.

Here is a nice use case on servers: automatically print the backtrace of monitored processes which are starved, right before killing them. In most cases it's enough to dig into the problem, even without debug symbols. Or... just to see how to unroll a remote stack with libunwind, since I didn't find any example :-)
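Since examples are scarce, here is a minimal sketch of remote unwinding with libunwind-ptrace (link with -lunwind-ptrace -lunwind-generic; error handling omitted). It prints raw instruction pointers only; pstack64 additionally resolves them to symbols:

#include <libunwind.h>
#include <libunwind-ptrace.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    pid_t pid = (pid_t)atoi(argv[1]);

    ptrace(PTRACE_ATTACH, pid, NULL, NULL);
    waitpid(pid, NULL, 0); /* wait until the target actually stops */

    unw_addr_space_t as = unw_create_addr_space(&_UPT_accessors, 0);
    void *upt = _UPT_create(pid); /* per-process state for the accessors */

    unw_cursor_t cursor;
    unw_init_remote(&cursor, as, upt);
    do {
        unw_word_t ip;
        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        printf("%#lx\n", (unsigned long)ip);
    } while (unw_step(&cursor) > 0); /* step towards outer frames */

    _UPT_destroy(upt);
    unw_destroy_addr_space(as);
    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return 0;
}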

Example

Anyway, here is an example of its output:
$./pstack64 20794

20794:./a.out
#0 0x00007fb66ee42020 in /lib/x86_64-linux-gnu/libc-2.15.so: nanosleep@@GLIBC_2.2.5
#1 0x00007fb66ee41edc in /lib/x86_64-linux-gnu/libc-2.15.so: __sleep (/build/buildd/eglibc-2.15/posix/../sysdeps/unix/sysv/linux/sleep.c:138)
#2 0x0000000000400561 in /tmp/a.out: fn
#3 0x0000000000400571 in /tmp/a.out: a
#4 0x0000000000400581 in /tmp/a.out: main
#5 0x00007fb66eda576d in /lib/x86_64-linux-gnu/libc-2.15.so: __libc_start_main (/build/buildd/eglibc-2.15/csu/libc-start.c:258)
#6 0x0000000000400489 in /tmp/a.out: _start

This program doesn't use any dynamic libraries but libc. Here is an example with another program, written in C++ and using Qt:

./pstack64 9190
9190:keepassx
#0 0x00007f9c71a3eb03 in /lib/x86_64-linux-gnu/libc-2.15.so: __GI___poll (/build/buildd/eglibc-2.15/io/../sysdeps/unix/sysv/linux/poll.c:87)
#1 0x00007f9c70c2a036 in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.3: -
#2 0x00007f9c70c2a164 in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.3: g_main_context_iteration
#3 0x00007f9c726cf3bf in /usr/lib/x86_64-linux-gnu/libQtCore.so.4.8.1: QEventDispatcherGlib::processEvents(QFlags)
#4 0x00007f9c72c6ad5e in /usr/lib/x86_64-linux-gnu/libQtGui.so.4.8.1: -
#5 0x00007f9c7269ec82 in /usr/lib/x86_64-linux-gnu/libQtCore.so.4.8.1: QEventLoop::processEvents(QFlags)
#6 0x00007f9c7269eed7 in /usr/lib/x86_64-linux-gnu/libQtCore.so.4.8.1: QEventLoop::exec(QFlags)
#7 0x00007f9c726a3f67 in /usr/lib/x86_64-linux-gnu/libQtCore.so.4.8.1: QCoreApplication::exec()
#8 0x000000000041be9e in /usr/bin/keepassx: -
#9 0x00007f9c7197976d in /lib/x86_64-linux-gnu/libc-2.15.so: __libc_start_main (/build/buildd/eglibc-2.15/csu/libc-start.c:258)
#10 0x000000000041caa1 in /usr/bin/keepassx: -
The keepassx binary is stripped, so we can't see its own procedures; dashes are printed instead. As you can see, C++ names are demangled. BTW, I was surprised how straightforward this is with c++filt coming with GNU binutils. To make a short story longer ( :-) ), I'll put a piece of the README here:

Benefits

It's easy to read :-) It shows symbols much faster than `gdb -batch` since it performs a lazy lookup. It works well with executables and shared objects, falling back to dynamic symbol lookup if nothing is found in the (debug) symbol table.

Drawbacks

It strongly depends on GNU binutils and is therefore Linux-only. It doesn't support threads (even if you pick the right LWP).

Permissions to trace

Since unwinding uses ptrace(2), it's worth noting that in the latest Linux distros it's forbidden to trace "foreign" processes by default. For example, see /etc/sysctl.d/10-ptrace.conf in *Ubuntu, or simply run pstack64 with sudo.

Separated debug-info

It's worth saying that many Linux distributions provide so-called "separated debug info": dynamic libraries or even executables containing DWARF records. Since debug info in DWARF doesn't affect other sections and requires no transformation of executable code, it can easily be excluded (stripped) from an object file. And since size is critical, shared libraries are usually stripped after being compiled, and /usr/lib/ contains nothing but symbols for the dynamic loader (which pstack64 extracts too, anyway). But the original ones may be installed as well.

For example, there is libc6-dbg in Ubuntu, which provides /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.15.so. The thing is, it can easily be used instead of the runtime libc, since all virtual addresses (and section offsets) are also valid for the debug version.

BTW, it's interesting to glance at this short introduction to DWARF.

Sunday, July 1, 2012

memcached: dump to disk

Preface


Memcached is a well-known, excellent memory storage. But what if you need to dump its content to disk? This may be needed, for example, in the following case: you have memcached with around a 50% hit rate, and your service's average load is about 70% in the rush hour. So, if your cache server reboots, you'll lose your cache, requests will suddenly double, and your users will suffer from timeouts for a while.
Sounds realistic to you? Then try this fork of memcached: memcached-dd.
As said in the README, usage is straightforward: just add the `-F file' option to the command line. Memcached will read this `file' at start and write to file.tmp when SIGUSR2 is received. Then (after a successful write and sync), it renames file.tmp -> file, so `file' should never appear truncated. For example:
$ memcached -F /tmp/memcache.dump -m 64 -p 11211 -l 127.0.0.1
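The write-then-rename trick is worth copying in your own tools. Here is a hypothetical sketch of the pattern (save_atomically() is my name, not memcached-dd's):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int save_atomically(const char *path, const void *buf, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    write(fd, buf, len); /* real code must handle short writes */
    fsync(fd);           /* make sure the data hit the disk... */
    close(fd);
    return rename(tmp, path); /* ...then atomically replace the old dump */
}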

Some notes to be clear

  • The dump is performed in a separate thread and doesn't block memcached itself
  • If you use TTLs for your data, the restored data will have the same TTL as at the time of the dump
  • All expired and flushed (flush_all command) content is left behind
  • There is no schedule-like maintenance for dumps; it's better done with crontab and/or your own scripts

Example

I'll show the usage with a Perl script. Assume you have downloaded and built memcached-dd; see the INSTALLATION section in the README if in doubt. This Perl scenario will load fake data into memcached:
use Cache::Memcached;

$memd = new Cache::Memcached {
    'servers' => [ "127.0.0.1:11211" ]
};

$| = 1;

for (my $i = 0; $i <= 10000; $i++) {
    $memd->set( "key_$i", "x"x100 . " [$i]" );
    # my $val = $memd->get( "xkey_$i");

    if ($i % 1000 == 0) {
        print "\r$i...";
    }
}
Having this load.pl script, launch memcached-dd:
$ ./memcached -P /tmp/memcached.pid -F /tmp/memcached.dump -m 128 -p 11211 -l 127.0.0.1
# now load the data into memcached:
$ perl ./load.pl
# and now, assuming memcached has this data, dump it:
$ kill -USR2 `cat /tmp/memcached.pid`
1Mb dumped: 10001 items (0 expired during dump, 0 nuked by flush)
Moving temprorary /tmp/memcached.dump.tmp -> /tmp/memcached.dump

OK, now you have the file /tmp/memcached.dump with all the records dumped. You may reload it anytime by launching memcached-dd with the same -F (assuming you killed the previous memcached):

$ ./memcached -P /tmp/memcached.pid -F /tmp/memcached.dump -m 128 -p 11211 -l 127.0.0.1
Now check these keys with netcat:
$ echo get key_1 | nc 127.0.0.1 11211
VALUE key_1 0 104
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx [1]
END

As you can see, the content was successfully restored from the dump. I hope this helps with your particular use case. If you have any problems with memcached-dd, feel free to post an issue.

Thursday, February 16, 2012

Nice C-static-assert

A buddy gave me this example of a C static assert:
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
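How it works: if e is true, -!!(e) equals -1, and a bit-field of negative width is a compile-time error; if e is false, the width is 0 and the whole sizeof is a harmless constant. A hypothetical usage sketch (GCC accepts such anonymous structs; strict ISO C is pickier):

#include <stddef.h>

#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))

/* hypothetical example: guard a wire-format structure's size */
struct packet {
    unsigned int hdr;
    char payload[60];
};

/* compiles only while sizeof(struct packet) == 64; otherwise you
 * get "negative width in bit-field" at compile time */
static const size_t packet_size_check =
    BUILD_BUG_ON_ZERO(sizeof(struct packet) != 64);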