Search pcap files for IP fragmentation

The following shell script searches a folder for pcap files that contain IP fragments:

#!/bin/sh

for file in ./*.pcap
do
    frag_packets=$(tshark -r "$file" -Y "ip.flags.mf==1 || ip.frag_offset>0")
    if [ -n "${frag_packets}" ]
    then
        echo "$file"
    fi
done

Pay attention to the -Y option, which takes a display filter; if what you want is a capture filter, -f is the right choice.

P.S., the code can be downloaded here.
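
By the way, the same check can also be done directly with libpcap. The following is a minimal sketch (for illustration only, not the script above) that inspects the flags/fragment-offset field of each IPv4 header; it assumes plain Ethernet framing without VLAN tags:

/* frag_check.c - minimal sketch: report whether a pcap file contains
 * IPv4 fragments.  Assumes Ethernet framing without VLAN tags. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char errbuf[PCAP_ERRBUF_SIZE];

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <file.pcap>\n", argv[0]);
        return 1;
    }

    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;

    while (pcap_next_ex(p, &hdr, &data) == 1) {
        if (hdr->caplen < 14 + 20)
            continue;                     /* too short for Ethernet + IPv4 */
        if (data[12] != 0x08 || data[13] != 0x00)
            continue;                     /* EtherType is not IPv4 */
        /* Flags + fragment offset: 16-bit field at offset 6 of the IP header.
         * MF flag is 0x2000; the low 13 bits are the fragment offset. */
        unsigned frag = ((unsigned)data[14 + 6] << 8) | data[14 + 7];
        if ((frag & 0x2000) || (frag & 0x1fff)) {
            printf("%s contains IP fragments\n", argv[1]);
            break;
        }
    }

    pcap_close(p);
    return 0;
}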

Handle IP fragmentation in a pcap file

Wireshark has a handy feature that can follow a TCP stream, but sometimes it may not work as you expect. Check the following diagram:

The IP packet carries a GTP payload, but since it is fragmented and only the first fragment is captured, Wireshark won’t dissect it, and if you try to follow the TCP stream of this session, this packet will be ignored.

stripe is a cool tool that can peel away encapsulating headers. But from my testing, you should add the -f option, otherwise the fragmented IP packet mentioned above will be skipped; even with this option, though, stripe will not remove the headers. So I wrote a simple program that just removes the headers of a specified packet (the code is here for reference).
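
As a rough illustration of the idea, here is a minimal libpcap sketch. The packet index and the number of bytes to strip are placeholder assumptions (here: outer IPv4 + UDP + GTP right after the Ethernet header) that you would adjust for your own capture:

/* strip_headers.c - sketch: copy a pcap file, but for one chosen packet
 * remove a fixed-length run of encapsulating headers that sits right
 * after the Ethernet header. */
#include <pcap/pcap.h>
#include <stdio.h>
#include <string.h>

#define TARGET_INDEX 4   /* 1-based index of the packet to modify (assumption) */
#define ETH_LEN      14  /* Ethernet header length */
#define STRIP_LEN    36  /* outer IPv4(20) + UDP(8) + GTP(8) headers (assumption) */

int main(int argc, char *argv[])
{
    char errbuf[PCAP_ERRBUF_SIZE];

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <in.pcap> <out.pcap>\n", argv[0]);
        return 1;
    }

    pcap_t *in = pcap_open_offline(argv[1], errbuf);
    if (in == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    pcap_dumper_t *out = pcap_dump_open(in, argv[2]);
    if (out == NULL) {
        fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(in));
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    u_char buf[65536];
    int index = 0;

    while (pcap_next_ex(in, &hdr, &data) == 1) {
        index++;
        if (index == TARGET_INDEX &&
            hdr->caplen > ETH_LEN + STRIP_LEN && hdr->caplen <= sizeof(buf)) {
            /* Keep the Ethernet header, drop the encapsulating headers,
             * then append the rest of the packet unchanged. */
            struct pcap_pkthdr new_hdr = *hdr;
            memcpy(buf, data, ETH_LEN);
            memcpy(buf + ETH_LEN, data + ETH_LEN + STRIP_LEN,
                   hdr->caplen - ETH_LEN - STRIP_LEN);
            new_hdr.caplen -= STRIP_LEN;
            new_hdr.len    -= STRIP_LEN;
            pcap_dump((u_char *)out, &new_hdr, buf);
        } else {
            pcap_dump((u_char *)out, hdr, data);
        }
    }

    pcap_dump_close(out);
    pcap_close(in);
    return 0;
}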

Reassemble packets in a pcap file

In the TCP protocol, because of the MSS limitation, one endpoint sometimes needs to split a TCP segment into multiple packets and send them. Today I met a case which required reassembling them into one.

First, I used Wireshark to “Hex Dump” the first packet that needed reassembling:

0000   18 cf 24 4c 71 4b 54 89 98 76 b8 30 08 00 45 00
......

I modified the total length in the IP header, appended the remaining TCP payload, and then used colrm to remove the offset column:

# colrm 1 4 < data > data.txt

Next I used awk to prepend 0x and append a comma to every value:

awk '{ for(i = 1; i <= NF; i++) {$i="0x"$i","} print}' data.txt

Then I added the array definition:

const u_char new_packet_4[] = {
    0x18, 0xcf, ......
    .......
};

Lastly, I wrote a small program to insert the new packet 4 and remove the original packets 4 and 5; the code is here (don’t forget to modify the header of packet 4).
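
For reference, a sketch of such a program with libpcap might look like the following. The packet indexes are hard-coded for this particular capture, and only a stub of new_packet_4 is shown; paste in the full array generated above:

/* rebuild_pcap.c - sketch: copy the capture, write the reassembled packet
 * in place of the original packet 4, and drop the original packet 5. */
#include <pcap/pcap.h>
#include <stdio.h>

/* Stub: replace with the full array produced by the awk step. */
static const u_char new_packet_4[] = {
    0x18, 0xcf, /* ...... */
};

int main(int argc, char *argv[])
{
    char errbuf[PCAP_ERRBUF_SIZE];

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <in.pcap> <out.pcap>\n", argv[0]);
        return 1;
    }

    pcap_t *in = pcap_open_offline(argv[1], errbuf);
    if (in == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    pcap_dumper_t *out = pcap_dump_open(in, argv[2]);
    if (out == NULL) {
        fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(in));
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    int index = 0;

    while (pcap_next_ex(in, &hdr, &data) == 1) {
        index++;
        if (index == 4) {
            /* Replace the original packet 4 with the reassembled one,
             * reusing its timestamp. */
            struct pcap_pkthdr new_hdr = *hdr;
            new_hdr.caplen = sizeof(new_packet_4);
            new_hdr.len    = sizeof(new_packet_4);
            pcap_dump((u_char *)out, &new_hdr, new_packet_4);
        } else if (index == 5) {
            continue;        /* drop the original packet 5 */
        } else {
            pcap_dump((u_char *)out, hdr, data);
        }
    }

    pcap_dump_close(out);
    pcap_close(in);
    return 0;
}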

Be aware of huge pages in Linux

On a freshly installed Linux machine, I found my application crashed unexpectedly:

==5611==AddressSanitizer's allocator is terminating the process instead of returning 0
==5611==If you don't like this behavior set allocator_may_return_null=1
==5611==AddressSanitizer CHECK failed: ../../../../libsanitizer/sanitizer_common/sanitizer_allocator.cc:216 "((0)) != (0)" (0x0, 0x0)
    #0 0x7f7dc13b94a2  (/lib64/libasan.so.5+0xf94a2)
    #1 0x7f7dc13d60a9 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) (/lib64/libasan.so.5+0x1160a9)
    #2 0x7f7dc13bf3d6  (/lib64/libasan.so.5+0xff3d6)
    #3 0x7f7dc13bf43a  (/lib64/libasan.so.5+0xff43a)
    #4 0x7f7dc12e9319  (/lib64/libasan.so.5+0x29319)
    #5 0x7f7dc12e6f56  (/lib64/libasan.so.5+0x26f56)
    #6 0x7f7dc13adeba in malloc (/lib64/libasan.so.5+0xedeba)
......

After debugging, the root cause turned out to be insufficient memory. This is the memory usage in the idle state:

The reason the memory usage is so high even in the idle state is the huge pages configuration:

$ cat /etc/default/grub
......
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=100"

With hugepagesz=1G and hugepages=100, the kernel reserves 100 GiB of memory for huge pages at boot, which leaves little for ordinary allocations. After shrinking the huge pages reservation, the application runs smoothly. About huge pages, this post is a good reference.
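
To see how much memory the reservation actually takes on a running system, you can check the huge pages counters in /proc/meminfo. The following tiny sketch does the equivalent of grep Huge /proc/meminfo:

/* hugepages_info.c - sketch: print the huge pages accounting lines
 * (HugePages_Total, HugePages_Free, Hugepagesize, ...) from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    char line[256];

    if (fp == NULL) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (strstr(line, "Huge") != NULL)
            fputs(line, stdout);
    }
    fclose(fp);
    return 0;
}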