My Site

Random jottings of a systems software engineer

Tag: cpu (Page 1 of 3)

How to find out which CPU a process (thread) is running on in Linux

This post shows how to find out which CPU a process (thread) is running on under Linux. Before that, two basic concepts need to be clear:

(1) On Linux there is no fundamental difference between processes and threads; to the kernel, both are tasks. Threads belonging to the same process share certain resources. Every thread has its own ID, and the "main thread"'s thread ID is the same as the process ID, i.e. the PID.

(2) The lscpu command reports how many CPUs the current system has:

$ lscpu
......
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
......

The system has 2 physical CPUs (Socket(s): 2); each CPU has 6 cores (Core(s) per socket: 6), and each core has 2 hardware threads (Thread(s) per core: 2). The system therefore has 2 × 6 × 2 = 24 (CPU(s): 24) logical CPUs, which are what actually run programs.

The htop command can show which CPU a process (thread) is running on, but htop does not display this information by default:

To enable it:
(1) After starting htop, press F2 (Setup):

(2) In Setup, select Columns, then under Available Columns choose PROCESSOR - ID of the CPU the process last executed, and press F5 (Add) followed by F10 (Done):


Now htop displays the CPU information. Note that what htop really shows is the CPU the process (thread) last ran on, not the CPU it is currently running on: while htop is displaying the value, the OS may already have scheduled the task onto another CPU.

Here is a program that runs 4 threads:

#include <omp.h>

int main(void){

        /* Spawn 4 OpenMP threads; each spins forever. */
        #pragma omp parallel num_threads(4)
        for (;;)
        {
        }

        return 0;
}

Compile and run the code:

$ gcc -fopenmp thread.c
$ ./a.out &
[1] 17235

htop then shows each thread's ID and the CPU it is running on:


References:
How to find out which CPU core a process is running on
闲侃CPU(一)

CUDA Programming Notes (4): CPU Thread vs GPU Thread

These notes are excerpted from Professional CUDA C Programming:

Threads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches are slow and expensive.
Threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work. If the GPU must wait on one group of threads, it simply begins executing work on another.
CPU cores are designed to minimize latency for one or two threads at a time, whereas GPU cores are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput.
Today, a CPU with four quad core processors can run only 16 threads concurrently, or 32 if the CPUs support hyper-threading.
Modern NVIDIA GPUs can support up to 1,536 active threads concurrently per multiprocessor. On GPUs with 16 multiprocessors, this leads to more than 24,000 concurrently active threads.

CUDA Programming Notes (3): Heterogeneous Architecture

These notes are excerpted from Professional CUDA C Programming:

A typical heterogeneous compute node nowadays consists of two multicore CPU sockets and two or more many-core GPUs. A GPU is currently not a standalone platform but a co-processor to a CPU. Therefore, GPUs must operate in conjunction with a CPU-based host through a PCI-Express bus. That is why, in GPU computing terms, the CPU is called the host and the GPU is called the device.

[figure: heterogeneous CPU + GPU architecture]

A heterogeneous application consists of two parts:
➤ Host code
➤ Device code
Host code runs on CPUs and device code runs on GPUs. An application executing on a heterogeneous platform is typically initialized by the CPU. The CPU code is responsible for managing the environment, code, and data for the device before loading compute-intensive tasks on the device.

There are two important features that describe GPU capability:
➤ Number of CUDA cores
➤ Memory size
Accordingly, there are two different metrics for describing GPU performance:
➤ Peak computational performance
➤ Memory bandwidth
Peak computational performance is a measure of computational capability, usually defined as how many single-precision or double-precision floating point calculations can be processed per second. Peak performance is usually expressed in gflops (billion floating-point operations per second) or tflops (trillion floating-point calculations per second). Memory bandwidth is a measure of the ratio at which data can be read from or stored to memory. Memory bandwidth is usually expressed in gigabytes per second, GB/s.

CUDA Programming Notes (2): GPU Core vs CPU Core

These notes are excerpted from Professional CUDA C Programming:

Even though many-core and multicore are used to label GPU and CPU architectures, a GPU core is quite different than a CPU core.
A CPU core, relatively heavy-weight, is designed for very complex control logic, seeking to optimize the execution of sequential programs.
A GPU core, relatively light-weight, is optimized for data-parallel tasks with simpler control logic, focusing on the throughput of parallel programs.

 

Getting the system's CPU usage with vmstat on Linux

This post is a follow-up to "Monitoring CPU usage with the vmstat command".

On Linux, the vmstat command reports the system's CPU usage:

# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 1860352    948 131040    0    0  2433   137  252  897  2  7 90  1  0

The last 5 columns describe the CPU state:

------cpu-----
us sy id wa st
2  7 90  1  0

Note that these numbers are percentages: the CPU spent 2% of its time running user-space code, and so on.

The columns mean the following:

us (user time): time the CPU spends running user-space code;
sy (system time): time the CPU spends running kernel code, for example executing system calls;
id (idle time): time the CPU spends idle;
wa (IO-wait time): time the CPU spends idle because every runnable process is waiting for I/O to complete, so there is nothing to schedule;
st (stolen time): time stolen from this virtual machine, i.e. spent by the hypervisor running other guests on the host.

References:
The precise meaning of I/O wait time in Linux
Linux Performance Analysis in 60,000 Milliseconds

Swarmkit Notes (12): Specifying resource limits when creating a service with swarmctl

When creating a service with swarmctl, you can specify CPU and memory resource limits:

# swarmctl service create --help
Create a service

Usage:
  swarmctl service create [flags]

Flags:
......
  --cpu-limit string            CPU cores limit (e.g. 0.5)
  --cpu-reservation string      number of CPU cores reserved (e.g. 0.5)
......
  --memory-limit string         memory limit (e.g. 512m)
  --memory-reservation string   amount of reserved memory (e.g. 512m)
    ......

*-reservation allocates and holds the corresponding resources for the container, so they are guaranteed to be available to it; *-limit caps the resources the container's processes may consume. The code that parses these resource flags is in cmd/swarmctl/service/flagparser/resource.go.

References:
Docker service Limits and Reservations

Docker Notes (16): Specifying CPU resources for a container

The --cpuset-cpus option of docker run restricts a container to particular CPU cores. For example:

# docker run -ti --rm --cpuset-cpus=1,6 redis

There is also a --cpu-shares option, a relative weight that defaults to 1024: if two running containers both have --cpu-shares set to 1024, they receive equal shares of CPU time under contention.

What is an idle CPU doing?

The article What Does an Idle CPU Do? gives a very clear explanation of what a CPU does while idle. To summarize:

When the operating system has "nothing to do", it runs the idle task. Take Linux's idle loop as an example:

while (1) {
    while(!need_resched()) {
        cpuidle_idle_call();
    }

    /*
      [Note: Switch to a different task. We will return to this loop when the
      idle task is again selected to run.]
    */
    schedule_preempt_disabled();
}

As long as there is no rescheduling request (need_resched), the loop keeps calling cpuidle_idle_call. For Intel processors, staying idle ultimately means executing the hlt instruction.

 

Monitoring CPU usage with the vmstat command

The vmstat command can be used to monitor CPU usage. For example:

# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 5201924   1328 5578060    0    0     0     0 1582 6952  2  1 98  0  0
 1  0      0 5200984   1328 5577996    0    0     0     0 2020 20567  9  1 90  0  0
 0  0      0 5198668   1328 5577952    0    0     0     0 1568 7617  5  1 94  0  0
 0  0      0 5194844   1328 5578000    0    0     0   187 1249 7057  1  1 98  0  0
 0  0      0 5199956   1328 5578232    0    0     0     0 1496 7306  4  1 95  0  0

The command above prints the system status every second; the last 5 columns describe the CPU. The man page explains their meaning clearly:

CPU
       These are percentages of total CPU time.
       us: Time spent running non-kernel code.  (user time, including nice time)
       sy: Time spent running kernel code.  (system time)
       id: Time spent idle.  Prior to Linux 2.5.41, this includes IO-wait time.
       wa: Time spent waiting for IO.  Prior to Linux 2.5.41, included in idle.
       st: Time stolen from a virtual machine.  Prior to Linux 2.6.11, unknown.

vmstat actually obtains the system state from /proc/stat:

# cat /proc/stat
cpu  381584 711 299364 1398303520 429839 0 251 0 0 0
cpu0 90740 58 44641 174627550 131209 0 120 0 0 0
cpu1 43141 26 22925 174746812 108219 0 10 0 0 0
cpu2 41308 35 25097 174831161 25877 0 40 0 0 0
cpu3 39301 70 27514 174836084 27792 0 4 0 0 0
cpu4 39187 78 46191 174750027 109013 0 0 0 0 0
......

Note that the unit of these numbers is jiffies.

Also, vmstat rounds half up when computing the CPU time percentages (vmstat.c):

static void new_format(void){
    ......
    duse = *cpu_use + *cpu_nic;
    dsys = *cpu_sys + *cpu_xxx + *cpu_yyy;
    didl = *cpu_idl;
    diow = *cpu_iow;
    dstl = *cpu_zzz;
    Div = duse + dsys + didl + diow + dstl;
    if (!Div) Div = 1, didl = 1;
    divo2 = Div / 2UL;
    printf(w_option ? wide_format : format,
           running, blocked,
           unitConvert(kb_swap_used), unitConvert(kb_main_free),
           unitConvert(a_option?kb_inactive:kb_main_buffers),
           unitConvert(a_option?kb_active:kb_main_cached),
           (unsigned)( (unitConvert(*pswpin  * kb_per_page) * hz + divo2) / Div ),
           (unsigned)( (unitConvert(*pswpout * kb_per_page) * hz + divo2) / Div ),
           (unsigned)( (*pgpgin        * hz + divo2) / Div ),
           (unsigned)( (*pgpgout           * hz + divo2) / Div ),
           (unsigned)( (*intr          * hz + divo2) / Div ),
           (unsigned)( (*ctxt          * hz + divo2) / Div ),
           (unsigned)( (100*duse            + divo2) / Div ),
           (unsigned)( (100*dsys            + divo2) / Div ),
           (unsigned)( (100*didl            + divo2) / Div ),
           (unsigned)( (100*diow            + divo2) / Div ),
           (unsigned)( (100*dstl            + divo2) / Div )
    );
    ......
}

So the CPU utilization percentages can add up to more than 100: 2 + 1 + 98 = 101.

Also note that on Linux, the r column is the total number of tasks that are running plus those waiting to run.

 

References:
/proc/stat explained
procps

 

Profiling CPU usage

This material is taken from Systems Performance: Enterprise and the Cloud.

CPU profiling samples the CPU's state at regular intervals and then analyzes the samples. It involves 5 steps:

1. Select the type of profile data to capture, and the rate.
2. Begin sampling at a timed interval.
3. Wait while the activity of interest occurs.
4. End sampling and collect sample data.
5. Process the data.

The sampled CPU data is determined by two choices:

a. User level, kernel level, or both
b. Function and offset (program-counter-based), function only, partial stack trace, or full stack trace

Capturing full user-level and kernel-level stack traces gives a complete CPU profile, but it produces a large amount of data. It is therefore usually enough to sample partial user-level and kernel-level stacks, and sometimes only the function names.

Here is an example of sampling the CPU with DTrace:

 # dtrace -qn 'profile-997 /arg1/ {@[execname, ufunc(arg1)] = count();} tick-10s{exit(0)}'

 top                                                 libc.so.7`0x801154fec                                             1
 top                                                 libc.so.7`0x8011e5f28                                             1
 top                                                 libc.so.7`0x8011f18a9                                             1

 

