Fix the "please format Go code with 'gofmt -s'" error when building SwarmKit

When you modify the SwarmKit source code (e.g., doc.go), running "make" sometimes fails with the following error:

# make
 fmt
doc.go
 please format Go code with 'gofmt -s'

But running "gofmt -s" on doc.go does not seem to take effect:

# gofmt -s doc.go
// Package swarmkit implements a framework for task orchestration.
package swarmkit
# make
 fmt
doc.go
 please format Go code with 'gofmt -s'
make: *** [fmt] Error 1

The reason is that gofmt -s only prints the formatted result to standard output; it does not rewrite the file unless you pass the -w flag. Running "go fmt" on the file (which invokes gofmt -l -w) modifies it in place, and the build then succeeds:

# go fmt doc.go
doc.go
# make
 fmt
 bin/swarmd
......
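
For context, the -s flag asks gofmt to apply code simplifications on top of standard formatting, which is why the Makefile's fmt target insists on it. The following snippet is a small illustration (not taken from SwarmKit) of the kind of rewrites gofmt -s performs; the original forms are shown in comments and the code itself is already in the simplified form:

package main

import "fmt"

type point struct{ x, y int }

func main() {
        s := []int{1, 2, 3}

        // Before: m := map[string]point{"origin": point{0, 0}}
        m := map[string]point{"origin": {0, 0}} // element type elided in the literal

        // Before: t := s[1:len(s)]
        t := s[1:] // upper bound equal to len(s) is dropped

        // Before: for _ = range s { ... }
        for range s { // unused loop variable removed
        }

        fmt.Println(m, t)
}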

Wget only recognizes the lowercase http_proxy and https_proxy, not the uppercase variants

My Ubuntu 16.04 LTS machine sits behind a proxy, and I have set the HTTP_PROXY and HTTPS_PROXY environment variables:

HTTP_PROXY=http://web-proxy.corp.xxxxxx.com:8080/
HTTPS_PROXY=https://web-proxy.corp.xxxxx.com:8080/

But wget does not work:

# wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc.3/nvidia-docker_1.0.0.rc.3-1_amd64.deb
--2016-07-14 22:51:12--  https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc.3/nvidia-docker_1.0.0.rc.3-1_amd64.deb
Resolving github.com (github.com)... 192.30.253.112
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
ERROR: cannot verify github.com's certificate, issued by ‘O=Fortinet Ltd.,CN=FG3K6C3A15800021’:
  Self-signed certificate encountered.
    ERROR: certificate common name ‘FG3K6C3A15800021’ doesn't match requested host name ‘github.com’.
To connect to github.com insecurely, use `--no-check-certificate'.
root@ubuntu:~# wget --no-check-certificate -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc.3/nvidia-docker_1.0.0.rc.3-1_amd64.deb

After also setting the lowercase http_proxy and https_proxy:

http_proxy=http://web-proxy.corp.xxxxxx.com:8080/
https_proxy=https://web-proxy.corp.xxxxx.com:8080/
HTTP_PROXY=http://web-proxy.corp.xxxxxx.com:8080/
HTTPS_PROXY=https://web-proxy.corp.xxxxx.com:8080/

Now wget works:

# wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc.3/nvidia-docker_1.0.0.rc.3-1_amd64.deb
--2016-07-14 22:57:30--  https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc.3/nvidia-docker_1.0.0.rc.3-1_amd64.deb
Resolving web-proxy.xxxxxx.hp.com (web-proxy.xxxxxx.hp.com)... xxx.xxx.xxx.xxx
Connecting to web-proxy.xxxxxx.hp.com (web-proxy.xxxxxx.hp.com)|xxx.xxx.xxx.xxx|:8080... connected.
Proxy request sent, awaiting response... 302 Found
......

So we can conclude that wget is case-sensitive about these variables: it honors the lowercase http_proxy and https_proxy but ignores the uppercase forms.


Build Docker from source behind a proxy

If you want to build Docker from source like this:

# git clone https://github.com/docker/docker.git
# cd docker
# make

But if your build server sits behind a proxy, you may run into errors like the following:

# make
mkdir bundles
docker build  -t "docker-dev:master" -f "Dockerfile" .
Sending build context to Docker daemon 145.8 MB
Step 1 : FROM debian:jessie
 ---> f854eed3f31f
Step 2 : RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61  || apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
 ---> Running in fede03b56767
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.MjO7kIEOm8 --no-auto-check-trustdb --trust-model always --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-stable.gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
gpg: requesting key F6B0FC61 from hkp server p80.pool.sks-keyservers.net
gpgkeys: key E871F18B51E0147C77796AC81196BA81F6B0FC61 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.6clUfj4AwL --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-stable.gpg --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
gpg: requesting key F6B0FC61 from hkp server pgp.mit.edu
gpgkeys: key E871F18B51E0147C77796AC81196BA81F6B0FC61 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
The command '/bin/sh -c apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61 || apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61' returned a non-zero code: 2
Makefile:70: recipe for target 'build' failed
make: *** [build] Error 1

Or:

......
RUN apt-get update && apt-get install -y       apparmor        apt-utils       aufs-tools      automake        bash-completion   binutils-mingw-w64       bsdmainutils    btrfs-tools     build-essential         clang   createrepo      curl    dpkg-sig        gcc-mingw-w64      git     iptables        jq      libapparmor-dev         libcap-dev      libltdl-dev     libsqlite3-dev  libsystemd-journal-dev  libtool    mercurial       net-tools       pkg-config      python-dev      python-mock     python-pip      python-websocket        ubuntu-zfs xfsprogs        libzfs-dev      tar     zip     --no-install-recommends         && pip install awscli==1.10.15
 ---> Running in 37080c364862
Get:1 http://ppa.launchpad.net trusty InRelease [8127 B]
Get:2 http://httpredir.debian.org jessie InRelease [8127 B]
Get:3 http://security.debian.org jessie/updates InRelease [8127 B]
Splitting up /var/lib/apt/lists/partial/ppa.launchpad.net_zfs-native_stable_ubuntu_dists_trusty_InRelease into data and signature failedIgn http://ppa.launchpad.net trusty InRelease
E: GPG error: http://ppa.launchpad.net trusty InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
The command '/bin/sh -c apt-get update && apt-get install -y    apparmor        apt-utils       aufs-tools      automake        bash-completion    binutils-mingw-w64      bsdmainutils    btrfs-tools     build-essential         clang   createrepo      curl    dpkg-sig        gcc-mingw-w64      git     iptables        jq      libapparmor-dev         libcap-dev      libltdl-dev     libsqlite3-dev  libsystemd-journal-dev     libtool         mercurial       net-tools       pkg-config      python-dev      python-mock     python-pip      python-websocket  ubuntu-zfs       xfsprogs        libzfs-dev      tar     zip     --no-install-recommends         && pip install awscli==1.10.15' returned a non-zero code: 100
make: *** [build] Error 1

These errors can drive you crazy!

The solution is to add the proxy settings to the Dockerfile that resides in the root directory of the Docker source tree:

......
FROM debian:jessie
ENV http_proxy http://web-proxy.corp.xxxxxx.com:8080/
ENV https_proxy https://web-proxy.corp.xxxxxx.com:8080/
......

Then the make process goes smoothly!

P.S., after a discussion on reddit, the correct and idiomatic method is to pass the proxy as build arguments instead of modifying the Dockerfile:

make DOCKER_BUILD_ARGS="--build-arg http_proxy=http://web-proxy.corp.xxxxxx.com:8080/ --build-arg https_proxy=https://web-proxy.corp.xxxxxx.com:8080/"

A brief intro to TCP keep-alive in Go's HTTP implementation

Let's look at a simple Go HTTP client program:

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "os"
        "time"
)

func main() {
        for {
                resp, err := http.Get("https://www.google.com/")
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }
                _, err = io.Copy(ioutil.Discard, resp.Body)
                resp.Body.Close()
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: reading: %v\n", err)
                        os.Exit(1)
                }
                time.Sleep(45 * time.Second)
        }
}

The logic of the above program is simple: it just fetches the specified URL in a loop. As a layman in web development, I assumed HTTP communication would be "short-lived": when the HTTP client issues a request, it opens a TCP connection to the HTTP server, and once the client receives the response, the session ends and the TCP connection is torn down. But is that really the case? I used the lsof command to check my guess:

# lsof -P -n -p 907
......
fetch   907 root    3u    IPv4 0xfffff80013677810              0t0     TCP 192.168.80.129:32618->xxx.xxx.xxx.xxx:8080 (ESTABLISHED)
......

Oh! My assumption was wrong: there is a "long-lived" TCP connection. Why does this happen? Whenever I run into network-related trouble, I turn to tcpdump and Wireshark and capture packets for analysis. This time is no exception, and the communication flow is shown in the following capture:

[Packet capture of the HTTP sessions and the TCP keep-alive packets]

(1) The capture starts at packet 4 because the first 3 packets are the TCP handshake, which we can safely ignore;
(2) Packets 4 ~ 43 are the first HTTP GET flow; this exchange lasts about 2 seconds and ends at 19:20:37;
(3) Then, half a minute later at 19:21:07, a TCP keep-alive packet appears on the wire. Oh dear! The root cause has been found: although the HTTP session is over, the TCP connection still exists, and the keep-alive mechanism keeps this TCP path open so it can be reused by subsequent HTTP messages;
(4) As expected, 15 seconds later a new HTTP session begins at packet 46, exactly 45 seconds after the first HTTP conversation finished (a programmatic cross-check of this connection reuse is sketched after this list).
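
Besides packet capture, connection reuse can also be verified from inside the program. Below is a minimal sketch, assuming Go 1.7+ (which added the net/http/httptrace package): the GotConnInfo.Reused field reports whether a request went over an already-established TCP connection, so with the default client it should typically print "reused connection: true" from the second request on:

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "net/http/httptrace"
        "os"
        "time"
)

func main() {
        // Report whether each request is served over a reused TCP connection.
        trace := &httptrace.ClientTrace{
                GotConn: func(info httptrace.GotConnInfo) {
                        fmt.Printf("reused connection: %v, was idle: %v\n", info.Reused, info.WasIdle)
                },
        }

        for {
                req, err := http.NewRequest("GET", "https://www.google.com/", nil)
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }
                req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

                resp, err := http.DefaultClient.Do(req)
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }
                // Drain and close the body so the connection goes back to the idle pool.
                io.Copy(ioutil.Discard, resp.Body)
                resp.Body.Close()

                time.Sleep(45 * time.Second)
        }
}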

That's the effect of TCP keep-alive: the TCP connection stays open and can be reused by subsequent HTTP sessions. Another thing you must pay attention to for connection reuse is that Response.Body has to be read to completion and closed (quoted below from the net/http package documentation):

type Response struct {
    ......

    // Body represents the response body.
    //
    // The http Client and Transport guarantee that Body is always
    // non-nil, even on responses without a body or responses with
    // a zero-length body. It is the caller's responsibility to
    // close Body. The default HTTP client's Transport does not
    // attempt to reuse HTTP/1.0 or HTTP/1.1 TCP connections
    // ("keep-alive") unless the Body is read to completion and is
    // closed.
    //
    // The Body is automatically dechunked if the server replied
    // with a "chunked" Transfer-Encoding.
    Body io.ReadCloser

    ......
}

As an example, modify the above program as follows:

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "os"
        "time"
)

func closeResp(resp *http.Response) {
        _, err := io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
        if err != nil {
                fmt.Fprintf(os.Stderr, "fetch: reading: %v\n", err)
                os.Exit(1)
        }
}

func main() {
        for {
                resp1, err := http.Get("https://www.google.com/")
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }

                resp2, err := http.Get("https://www.facebook.com/")
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }

                time.Sleep(45 * time.Second)
                for _, v := range []*http.Response{resp1, resp2} {
                        closeResp(v)
                }
        }
}

While it is running, you will see 2 TCP connections, not 1:

# lsof -P -n -p 1982
......
fetch   1982 root    3u    IPv4 0xfffff80013677810              0t0     TCP 192.168.80.129:43793->xxx.xxx.xxx.xxx:8080 (ESTABLISHED)
......
fetch   1982 root    6u    IPv4 0xfffff80013677000              0t0     TCP 192.168.80.129:12105->xxx.xxx.xxx.xxx:8080 (ESTABLISHED)

If you call the closeResp function before issuing the next HTTP request, the TCP connection can be reused:

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "os"
        "time"
)

func closeResp(resp *http.Response) {
        _, err := io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
        if err != nil {
                fmt.Fprintf(os.Stderr, "fetch: reading: %v\n", err)
                os.Exit(1)
        }
}

func main() {
        for {
                resp1, err := http.Get("https://www.google.com/")
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }
                closeResp(resp1)


                time.Sleep(45 * time.Second)

                resp2, err := http.Get("https://www.facebook.com/")
                if err != nil {
                        fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                        os.Exit(1)
                }
                closeResp(resp2)
        }
}
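
Finally, the reuse behavior itself lives in http.Transport, so you can tune or disable it per client instead of relying on the defaults. The following is a minimal sketch, not part of the original programs (IdleConnTimeout needs Go 1.7+; MaxIdleConnsPerHost, DisableKeepAlives, and Proxy are older):

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "os"
        "time"
)

func main() {
        // A client that keeps at most 2 idle connections per host and closes
        // idle connections after 30 seconds. Setting DisableKeepAlives: true
        // instead would force a new TCP connection for every request, i.e. the
        // "short-lived" behavior assumed at the beginning of this article.
        client := &http.Client{
                Transport: &http.Transport{
                        Proxy:               http.ProxyFromEnvironment, // honor http_proxy/https_proxy
                        MaxIdleConnsPerHost: 2,
                        IdleConnTimeout:     30 * time.Second, // Go 1.7+
                },
        }

        resp, err := client.Get("https://www.google.com/")
        if err != nil {
                fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
                os.Exit(1)
        }
        io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
}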

P.S., the full code is here.

References:
Package http;
Reusing http connections in Golang;
Is HTTP Shortlived?.

A trick for using "Find usages" in IntelliJ IDEA as a Go IDE

Recently, I have been using IntelliJ IDEA as a Go IDE to browse the Docker Swarm code. When I wanted to find where the Discovery.Watch method in the token package is called, the "Find usages" (Alt+F7) function of IntelliJ IDEA confused me:

[Screenshot: "Find usages" result for Discovery.Watch, showing a single occurrence]

It prints just one occurrence: the test code of the token package. That doesn't make sense; where on earth is the Discovery.Watch method called? When I happened to search for the usages of Watch in the Watcher interface, which token.Discovery satisfies, I found where the Discovery.Watch method is actually used:

[Screenshot: "Find usages" result for the Watcher interface's Watch method, showing the real call sites]

The lesson here is that if you can't find where the methods of a package are called, try searching for usages on the interfaces those types satisfy; that may give you the answer.
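
The underlying reason is that call sites often go through an interface value rather than the concrete type, so a search scoped to the concrete method finds nothing. A minimal sketch with hypothetical names (not the actual Swarm code) that shows the pattern:

package main

import "fmt"

// Watcher is the interface that callers program against.
type Watcher interface {
        Watch() string
}

// Discovery is a concrete type that satisfies Watcher.
type Discovery struct{}

func (d *Discovery) Watch() string { return "watching" }

// run only knows about the Watcher interface, so this call site references
// Watcher.Watch rather than Discovery.Watch. Searching for usages of
// Discovery.Watch alone will not surface it.
func run(w Watcher) {
        fmt.Println(w.Watch())
}

func main() {
        run(&Discovery{})
}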