# Developer’s Guide¶

This guide is targeted at people who want to write new features or fix bugs in rmlint.

## Bugs¶

Please use the issue tracker to post and discuss bugs and features:

## Philosophy¶

We try to adhere to some principles when adding features:

• Try to stay compatible with standard Unix tools and ideas.
• Try to stay out of the user's way and never be interactive.
• Try to make scripting as easy as possible.
• Never make rmlint modify the filesystem itself; only produce output that lets the user do it easily.

Also keep these principles in mind if you want to make a feature request.

## Making contributions¶

The code is hosted on GitHub, therefore our preferred way of receiving patches is using GitHub’s pull requests (normal git pull requests are okay too of course).

Note

origin/master should always contain working software. Base your patches and pull requests always on origin/develop.

Here’s a short step-by-step:

1. Fork it.
2. Create a branch from develop. (git checkout develop && git checkout -b my_feature)
3. Commit your changes. (git commit -am "Fixed it all.")
4. Check if your commit message is good. (If not: git commit --amend)
5. Push to the branch (git push origin my_feature)
6. Open a Pull Request.
7. Enjoy a refreshing tea and wait until we get back to you.

Here are some other things to check before submitting your contribution:

• Does your code look alien next to the rest of the code? Is the style consistent? You can run this command to make sure it is:

$ clang-format -style=file -i $(find lib src -iname '*.[ch]')

• Do all tests pass? See the test documentation for more info. After opening the pull request, your code will also be checked via TravisCI.

• Is your commit message descriptive? whatthecommit.com has some good examples of how they should not look.

• Is rmlint running okay inside of valgrind (i.e. no leaks and no memory violations)?
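A quick check could look like this (a sketch; the testdir path is just a placeholder, and the flags shown are standard valgrind options, not a project requirement):

```shell
$ valgrind --leak-check=full --error-exitcode=1 ./rmlint /tmp/testdir
```

With `--error-exitcode=1`, valgrind returns a nonzero exit status if it found errors, which makes the check easy to script.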

For translation additions or updates it is also okay to send the .po files via mail to sahib@online.de, since not every translator is necessarily a software developer.

## Testsuite¶

rmlint has a quite powerful, though not yet complete, testsuite. It will probably never be complete, but it already provides a valuable boost of confidence in rmlint's correctness.

The tests are based on nosetest and are written in Python (>= 3.0). Every testcase runs the (previously built) rmlint binary and parses its JSON output, so they are technically black-box tests.
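A minimal sketch of what such a black-box check might look like. The JSON below is a hand-made stand-in modeled loosely after rmlint's output, not the exact real format; in a real testcase the string would come from running the rmlint binary via `subprocess`:

```python
import json

# Hand-crafted stand-in for rmlint's JSON output (normally obtained by
# running something like `rmlint -o json:stdout <testdir>` in a subprocess).
FAKE_OUTPUT = """[
  {"description": "rmlint json-dump of lint files"},
  {"type": "duplicate_file", "path": "/tmp/a", "size": 4, "is_original": true},
  {"type": "duplicate_file", "path": "/tmp/b", "size": 4, "is_original": false},
  {"aborted": false, "total_files": 2}
]"""

def duplicates(raw_json):
    """Return the paths of all duplicate_file entries, originals first."""
    entries = [e for e in json.loads(raw_json)
               if e.get("type") == "duplicate_file"]
    # Stable sort: entries flagged as original come first.
    entries.sort(key=lambda e: not e.get("is_original", False))
    return [e["path"] for e in entries]

print(duplicates(FAKE_OUTPUT))  # ['/tmp/a', '/tmp/b']
```

A testcase would then assert on the parsed result (number of duplicates found, which file was kept as the original, and so on) rather than on rmlint's human-readable output.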

On every commit, those tests are additionally run on TravisCI.

### Control Variables¶

The behaviour of the testsuite can be controlled by certain environment variables which are:

• RM_TS_DIR: Directory to create test files in. It can grow very large with some tests, so a tmpfs mount might slow down your computer. By default /tmp is used.
• RM_TS_USE_VALGRIND: Run each test inside of valgrind's memcheck. (slow)
• RM_TS_CHECK_LEAKS: Fail a test if valgrind indicates a (definite) memory leak.
• RM_TS_USE_GDB: Run tests inside of gdb. Fatal signals will trigger a backtrace.
• RM_TS_PEDANTIC: Run each test several times with different optimization options and check for errors between the runs. (slow)
• RM_TS_SLEEP: Wait a long time before executing a command. Useful for starting the testcase and manually running rmlint on the previously generated testdir.
• RM_TS_PRINT_CMD: Print the command that is currently run.
• RM_TS_KEEP_TESTDIR: If a test fails, keep its test files.

Additionally, slow tests can be skipped by appending -a '!slow' to the command line. More information on this syntax can be found in the nosetest documentation.

Before each release we call the testsuite (at least) like this:

$ sudo RM_TS_USE_VALGRIND=1 RM_TS_PRINT_CMD=1 RM_TS_PEDANTIC=1 nosetests-3.4 -s -a '!slow !known_issue'

The sudo is needed for some tests that require root access (like creating bad user and group ids). Most tests will work without it.

### Coverage¶

To see which functions need more testcases, we use gcov to detect which lines were executed (and how often) by the testsuite. Here's a short quickstart using lcov:

$ CFLAGS="-fprofile-arcs -ftest-coverage" LDFLAGS="-fprofile-arcs -ftest-coverage" scons -j4 DEBUG=1
$ sudo RM_TS_USE_VALGRIND=1 RM_TS_PRINT_CMD=1 RM_TS_PEDANTIC=1 nosetests-3.4 -s -a '!slow !known_issue'
$ lcov --capture --directory . --output-file coverage.info
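To browse the results, lcov's genhtml tool can render the captured data as HTML (the output directory name here is arbitrary):

```shell
$ genhtml coverage.info --output-directory coverage-html
```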

## Sourcecode layout¶

• All C source lives in lib; the file names should be self-explanatory.
• As an exception, the main function lives in src/rmlint.c.
• All documentation is inside docs.
• All translation stuff should go to po.
• All packaging should be done in pkg/<distribution>.
• Tests are written in Python and live in tests.

## Hashfunctions¶

Here is a short comparison of the existing hashfunctions in rmlint (linear scale). For reference: Those plots were rendered with these sources - which are very ugly, sorry.

If you want to add new hashfunctions, you should have some arguments for why they are valuable, and possibly even benchmark them with the above scripts to see if they are really that much faster.

Also keep in mind that most of the time the hashfunction is not the bottleneck.

## Optimizations¶

For sake of overview, here is a short list of optimizations implemented in rmlint:

### Obvious ones¶

• Do not compare each file with each other by content, use a hashfunction to reduce comparison overhead drastically (introduces possibility of collisions though).
• Only compare files of same size with each other.
• Use incremental hashing, i.e. hash each size group block-wise and stop as soon as a difference occurs or the file is fully read.
• Create one reading thread for each physical disk. This gives a big speedup if files are roughly evenly spread over multiple physical disks [note: currently using 2 reading threads per disk as a workaround for a speed regression but hoping to fix this for rmlint 2.5].
• Disk traversal is similarly multi-threaded, one thread per disk.
• Create separate hashing threads (one for each file) so that the reader threads don’t have to wait for hashing to catch up.
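The first three points can be sketched like this. This is a pure-Python toy, not rmlint's C implementation: file contents are passed as in-memory bytes instead of being read from disk, and the block size is unrealistically small for demonstration:

```python
import hashlib
from collections import defaultdict

def find_duplicates(files):
    """files: dict mapping path -> bytes content (stands in for the filesystem).

    Mirrors the optimizations above: only same-sized files are compared,
    and each size group is hashed block-wise so that groups split apart
    as soon as a block differs.
    """
    BLOCK = 4  # unrealistically small block size, for demonstration

    # Optimization 2: only files of the same size can be duplicates.
    by_size = defaultdict(list)
    for path, data in files.items():
        by_size[len(data)].append(path)

    duplicates = []
    for size, group in by_size.items():
        if len(group) < 2:
            continue  # unique size: cannot have a duplicate
        # Optimization 3: incremental hashing, refining groups block by block.
        groups = [group]
        for offset in range(0, size, BLOCK):
            next_groups = []
            for g in groups:
                by_block = defaultdict(list)
                for path in g:
                    block = files[path][offset:offset + BLOCK]
                    by_block[hashlib.sha256(block).hexdigest()].append(path)
                # Drop singletons early: a lone hash means no duplicate.
                next_groups.extend(g2 for g2 in by_block.values() if len(g2) > 1)
            groups = next_groups
        duplicates.extend(sorted(g) for g in groups)
    return duplicates
```

For example, `find_duplicates({"a": b"abcdefgh", "b": b"abcdefgh", "c": b"abcdXYZ!", "d": b"xyz"})` returns `[["a", "b"]]`: file `c` drops out of the group after its second block differs, and `d` is never hashed at all because its size is unique.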

### Subtle ones¶

• Check only executable files for being non-stripped binaries.
• Use preadv(2)-based reading for small speedups.
• Threads in rmlint are shared and reused, so only a few calls to pthread_create are made.

### Insane ones¶

• Use the fiemap ioctl(2) to analyze the on-disk layout of each file, so that its blocks can be read in perfect order on a rotational device.
• Check each file's device ID to see whether it is on a rotational device (a normal hard disk) or a non-rotational one (like an SSD). On the latter, the fiemap optimisation is bypassed.
• Use a common buffer pool for IO buffers and recycle used buffers to reduce memory allocation overheads.
• Use only one hashsum per group of same-sized files.
• Implement paranoia check using the same algorithm as the incremental hash. The difference is that large chunks of the file are read and kept in memory instead of just keeping the hash in memory. This avoids the need for a two-pass algorithm (find matches using hashes then confirm via bytewise comparison). Each file is read once only. This achieves bytewise comparison in O(N) time, even if there are large clusters of same-size files. The downside is that it is somewhat memory-intensive (can be configured by --limit-mem option).
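The core idea of the paranoia check, using the raw chunk bytes themselves as the grouping key instead of a digest, can be sketched like this. Again a pure-Python toy with in-memory contents and a tiny chunk size, not rmlint's actual implementation:

```python
from collections import defaultdict

def bytewise_groups(files, chunk_size=4):
    """files: non-empty dict mapping path -> bytes; all files are assumed
    to be the same size (they come from one size group).

    Splits the group chunk by chunk, comparing raw bytes instead of
    hashes; every file's data is walked exactly once, so the whole
    comparison is O(N) in the total bytes read.
    """
    groups = [list(files)]
    size = len(next(iter(files.values())))
    for offset in range(0, size, chunk_size):
        next_groups = []
        for g in groups:
            # The chunk's bytes are the dict key: a bytewise comparison,
            # with no hash (and hence no collision risk) involved.
            by_chunk = defaultdict(list)
            for path in g:
                by_chunk[files[path][offset:offset + chunk_size]].append(path)
            next_groups.extend(g2 for g2 in by_chunk.values() if len(g2) > 1)
        groups = next_groups
    return [sorted(g) for g in groups]
```

For example, `bytewise_groups({"a": b"11112222", "b": b"11112222", "c": b"11113333"})` returns `[["a", "b"]]`. The memory cost comes from the chunks kept in flight for each group member, which is what the --limit-mem option bounds in the real implementation.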