gRPC will terminate the process when it can’t allocate memory

My program uses gRPC, and it crashed after a stress test. I used gdb to examine the core dump file:

[Current thread is 1 (Thread 0x7f73ef5f1780 (LWP 147393))]
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007f73edee542a in __GI_abort () at abort.c:89
#2  0x00005580f74da93a in gpr_malloc ()
#3  0x00005580f74e4b65 in ?? ()
#4  0x00005580f74e28d4 in grpc_exec_ctx_flush ()
#5  0x00005580f758aee1 in ?? ()
#6  0x00005580f75061b7 in grpc_pollset_work ()
#7  0x00005580f74f1c78 in ?? ()
#8  0x00005580f72ab85e in grpc::CompletionQueue::Pluck (this=0x5580f96b3698, tag=0x7fff9cafbe90)
......

I suspected the root cause was running out of memory, but I wasn’t sure. Check the source code of gpr_malloc:

void* gpr_malloc(size_t size) {
  void* p;
  if (size == 0) return nullptr;
  GPR_TIMER_BEGIN("gpr_malloc", 0);
  p = g_alloc_functions.malloc_fn(size);
  if (!p) {
    abort();
  }
  GPR_TIMER_END("gpr_malloc", 0);
  return p;
}

We can see that if p is NULL, abort() is invoked. This confirms my guess. Besides gpr_malloc, the other memory allocation functions (such as gpr_zalloc, gpr_realloc, etc.) behave the same way: they abort the whole process instead of returning NULL.
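As a side note, the gRPC release quoted above routes every allocation through g_alloc_functions, and that era of gRPC exposes gpr_get_allocation_functions()/gpr_set_allocation_functions() in grpc/support/alloc.h to replace them (this is an assumption about the version you build against; newer releases have removed custom allocation functions). The abort() itself cannot be avoided from the outside, but a minimal sketch like the following can at least log the failing size right before the process dies:

#include <grpc/support/alloc.h>
#include <cstdio>

static gpr_allocation_functions g_prev;

/* Delegate to the previous allocator, but log the size on failure so the
   subsequent abort() inside gpr_malloc() leaves a trace. */
static void* logging_malloc(size_t size) {
    void* p = g_prev.malloc_fn(size);
    if (p == nullptr)
        std::fprintf(stderr, "gpr malloc_fn failed, size=%zu\n", size);
    return p;
}

static void* logging_realloc(void* ptr, size_t size) {
    void* p = g_prev.realloc_fn(ptr, size);
    if (p == nullptr)
        std::fprintf(stderr, "gpr realloc_fn failed, size=%zu\n", size);
    return p;
}

int main(void) {
    g_prev = gpr_get_allocation_functions();
    gpr_allocation_functions fns = g_prev;
    fns.malloc_fn = logging_malloc;
    fns.realloc_fn = logging_realloc;
    gpr_set_allocation_functions(fns);

    /* ... create gRPC channels/servers as usual; an allocation failure will
       now be logged right before the process aborts. */
    return 0;
}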

Learn socket programming tips from netcat

Since netcat is honored as the “TCP/IP Swiss army knife”, I read its OpenBSD source code to summarize some socket programming tips:

(1) Client connects in non-blocking mode:

......
s = socket(res->ai_family, res->ai_socktype |
            SOCK_NONBLOCK, res->ai_protocol);


......  
if ((ret = connect(s, name, namelen)) != 0 && errno == EINPROGRESS) {
        pfd.fd = s;
        pfd.events = POLLOUT;
        ret = poll(&pfd, 1, timeout);
}
......

Create the socket with the SOCK_NONBLOCK flag set, then call connect(). If the return value is 0, the connection was established immediately; if it fails with errno set to EINPROGRESS, the connection attempt is still in progress and poll() with a timeout controls how long to wait for it; otherwise the connection cannot be established.
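Putting the pieces together, here is a minimal, self-contained sketch of the same pattern (error handling trimmed; the host and port in main() are just placeholders). Note that passing SOCK_NONBLOCK to socket() is a Linux/OpenBSD extension, and that the real result of an in-progress connect has to be fetched with getsockopt(SO_ERROR) after poll() reports the socket writable, which is also what netcat does:

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <poll.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

/* Connect with a timeout: non-blocking socket + poll() + SO_ERROR check.
   Returns a connected fd, or -1 on failure. */
int connect_nonblock(const char* host, const char* port, int timeout_ms) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    /* SOCK_NONBLOCK makes the socket non-blocking at creation time. */
    int s = socket(res->ai_family, res->ai_socktype | SOCK_NONBLOCK,
                   res->ai_protocol);
    if (s == -1) {
        freeaddrinfo(res);
        return -1;
    }

    int ret = connect(s, res->ai_addr, res->ai_addrlen);
    if (ret != 0 && errno == EINPROGRESS) {
        struct pollfd pfd;
        pfd.fd = s;
        pfd.events = POLLOUT;          /* writable once the connect completes */
        if (poll(&pfd, 1, timeout_ms) == 1) {
            int err = 0;
            socklen_t len = sizeof(err);
            /* The result of the in-progress connect is reported via SO_ERROR. */
            getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
            ret = (err == 0) ? 0 : -1;
        } else {
            ret = -1;                  /* timeout or poll error */
        }
    }
    freeaddrinfo(res);
    if (ret != 0) {
        close(s);
        return -1;
    }
    return s;
}

int main(void) {
    int fd = connect_nonblock("example.com", "80", 3000);  /* placeholders */
    printf(fd >= 0 ? "connected\n" : "failed\n");
    if (fd >= 0)
        close(fd);
    return 0;
}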

(2) The usage of poll():

......
/* stdin */
pfd[POLL_STDIN].fd = stdin_fd;
pfd[POLL_STDIN].events = POLLIN;

/* network out */
pfd[POLL_NETOUT].fd = net_fd;
pfd[POLL_NETOUT].events = 0;

/* network in */
pfd[POLL_NETIN].fd = net_fd;
pfd[POLL_NETIN].events = POLLIN;

/* stdout */
pfd[POLL_STDOUT].fd = stdout_fd;
pfd[POLL_STDOUT].events = 0;

......
/* poll */
num_fds = poll(pfd, 4, timeout);

/* treat poll errors */
if (num_fds == -1)
    err(1, "polling error");

/* timeout happened */
if (num_fds == 0)
    return;

/* treat socket error conditions */
for (n = 0; n < 4; n++) {
    if (pfd[n].revents & (POLLERR|POLLNVAL)) {
        pfd[n].fd = -1;
    }
}
/* reading is possible after HUP */
if (pfd[POLL_STDIN].events & POLLIN &&
    pfd[POLL_STDIN].revents & POLLHUP &&
    !(pfd[POLL_STDIN].revents & POLLIN))
    pfd[POLL_STDIN].fd = -1;

Usually, we only need to register read interest on the file descriptors we consume data from:

pfd[POLL_STDIN].fd = stdin_fd;
pfd[POLL_STDIN].events = POLLIN;

while the write-side descriptors start with no events at all; netcat only switches their events to POLLOUT later, once it has buffered data that needs to be flushed:

/* network out */
pfd[POLL_NETOUT].fd = net_fd;
pfd[POLL_NETOUT].events = 0;

According to the OpenBSD poll() manual, if there is no need for “high-priority” (e.g., out-of-band) data, POLLIN is enough; otherwise the monitored events should be POLLIN|POLLPRI. The same applies to POLLOUT and POLLWRBAND.

Three flags (POLLERR, POLLNVAL and POLLHUP) are only ever returned in struct pollfd‘s revents; they are ignored in events. If POLLERR or POLLNVAL is reported, there is no point in polling that file descriptor any further:

if (pfd[n].revents & (POLLERR|POLLNVAL)) {
    pfd[n].fd = -1;
}

We should pay more attention to POLLHUP:
(a)

POLLHUP

The device or socket has been disconnected. This event and POLLOUT are mutually-exclusive; a descriptor can never be writable if a hangup has occurred. However, this event and POLLIN, POLLRDNORM, POLLRDBAND, or POLLPRI are not mutually-exclusive. This flag is only valid in the revents bitmask; it is ignored in the events member.

(b)

The second difference is that on EOF there is no guarantee that POLLIN will be set in revents, the caller must also check for POLLHUP.

So if both POLLHUP and POLLIN are set in revents, there is still data to read (possibly just the EOF); if only POLLHUP is set, there is nothing left to read from that descriptor.
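Here is a minimal sketch (not taken from netcat) of that rule on the read side: keep reading while POLLIN is reported, and treat a bare POLLHUP, POLLERR, or POLLNVAL as the signal to stop polling the descriptor. It simply copies stdin to stdout:

#include <poll.h>
#include <unistd.h>

int main(void) {
    struct pollfd pfd;
    pfd.fd = STDIN_FILENO;
    pfd.events = POLLIN;

    char buf[4096];
    for (;;) {
        if (poll(&pfd, 1, -1) == -1)
            break;
        if (pfd.revents & (POLLERR | POLLNVAL))
            break;                        /* stop polling a broken descriptor */
        if (pfd.revents & POLLIN) {
            /* POLLIN may be set together with POLLHUP: drain the data first. */
            ssize_t n = read(pfd.fd, buf, sizeof(buf));
            if (n <= 0)
                break;                    /* EOF or read error */
            if (write(STDOUT_FILENO, buf, n) == -1)
                break;
        } else if (pfd.revents & POLLHUP) {
            break;                        /* hang-up and nothing left to read */
        }
    }
    return 0;
}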

 

Test of freeing two-dimensional vector memory in C++

C++ Vector Memory Release introduces how to release vector memory in C++, but its example only involves a one-dimensional vector. I wrote a small program to verify freeing a two-dimensional vector (the OS is OpenBSD):

#include <unistd.h>
#include <vector>
#include <iostream>

using namespace std;

int main(void) {
    vector<vector<char>> vec(1024 * 1024, vector<char>(1024));

    cout << "Before freeing memory, sleep 30 seconds ..." << endl;
    sleep(30);
    vector<vector<char>>().swap(vec);

    cout << "Sleep now ..." << endl;
    sleep(300);
    return 0;
}

Use clang++ to build and run it:

# clang++ -std=c++11 free.cpp
# ./a.out

(1) When “Before freeing memory, sleep 30 seconds ...” was printed, I checked the memory usage of the program:

[screenshot: memory usage before freeing]

We can see the program occupied more than 1G of memory.

(2) After “Sleep now ...” was printed, the memory usage began to drop, and once it stabilized, the program consumed only about 19K:

[screenshot: memory usage after freeing]

P.S., the full code is here.
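As a side note (not covered in the original post), since C++11 the same effect can usually be achieved with clear() followed by shrink_to_fit(). The standard treats shrink_to_fit() as a non-binding request, so whether memory is actually returned to the OS still depends on the implementation, just as with the swap trick. A minimal variant of the test above:

#include <unistd.h>
#include <vector>
#include <iostream>

using namespace std;

int main(void) {
    vector<vector<char>> vec(1024 * 1024, vector<char>(1024));

    cout << "Before freeing memory, sleep 30 seconds ..." << endl;
    sleep(30);

    vec.clear();           // destroys the inner vectors
    vec.shrink_to_fit();   // non-binding request to release the outer buffer

    cout << "Sleep now ..." << endl;
    sleep(300);
    return 0;
}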

Be careful of FHEcontext’s shallow copy behavior in HElib

Check the following code, which uses HElib:

class A
{
    FHEcontext context;
public:
    FHEcontext& getContext()
    {
        return context;
    }
};

A a;

void func()
{
    auto context = a.getContext();
    ......
}

int main(void)
{
    ......
    func();
    ......
    return 0;
}

In func():

......
auto context = a.getContext();
......

This declares a local variable context whose deduced type is FHEcontext, not FHEcontext&, so a copy is made; and the catch is that this copy is a shallow copy of FHEcontext:

class FHEcontext {

......
  //! @breif A default EncryptedArray
  const EncryptedArray* ea;
......
};

FHEcontext::~FHEcontext()
{
  delete ea;
}

So when the local variable context is destroyed, the memory pointed to by ea is released as well; as a result, the context member of class A now references already-freed memory, and will delete it a second time in its own destructor. That will be a disaster!
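To make the mechanism concrete, here is a self-contained sketch (it does not use HElib; Context is just a stand-in for FHEcontext) showing how auto drops the reference and produces a shallow copy, while auto& avoids the copy entirely:

#include <iostream>

// Stand-in for FHEcontext: owns a raw pointer and frees it in the destructor,
// but has no user-defined copy constructor, so the compiler-generated one
// copies the pointer itself (a shallow copy).
struct Context {
    int* ea;
    Context() : ea(new int(42)) {}
    ~Context() { delete ea; }
};

struct A {
    Context context;
    Context& getContext() { return context; }
};

int main(void) {
    A a;

    // Correct: 'auto&' deduces Context&, so no copy is made.
    auto& ref = a.getContext();
    std::cout << *ref.ea << std::endl;

    // Buggy: 'auto' deduces Context (the reference is dropped), so a shallow
    // copy is created. When 'copy' goes out of scope it deletes 'ea', leaving
    // a.context.ea dangling and causing a double free when 'a' is destroyed.
    // auto copy = a.getContext();

    return 0;
}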

References:
auto specifier type deduction for references;
The issue about FHEcontext’s copy constructor/assignment operator.

 

Enable generating core dump file on Debian Linux

The default core dump file size is 0 on Debian Linux:

$ ulimit -c
0

To enable generating core dump files, I need to run the following command:

$ ulimit -c unlimited  

But if you log in again, the core dump file size limit reverts from unlimited back to 0, so “ulimit -c unlimited” needs to be executed at every login. E.g., if you use zsh, append it to your .zshrc file.