An introduction to the go-events package

go-events implements a mechanism for handling events. Its core concept is the Sink (defined in event.go):

// Event marks items that can be sent as events.
type Event interface{}

// Sink accepts and sends events.
type Sink interface {
    // Write an event to the Sink. If no error is returned, the caller will
    // assume that all events have been committed to the sink. If an error is
    // received, the caller may retry sending the event.
    Write(event Event) error

    // Close the sink, possibly waiting for pending events to flush.
    Close() error
}

You can think of a Sink as a "pool" that provides two methods: Write sends an event into the pool, and Close shuts the pool down when it is no longer needed. The package's other files are all built around Sink, composing it into various higher-level features. For example:

package main

import (
    "fmt"
    "time"

    "github.com/docker/go-events"
)

// eventRecv is a trivial Sink that prints each event it receives.
type eventRecv struct {
    name string
}

func (e *eventRecv) Write(event events.Event) error {
    fmt.Printf("%s receives %d\n", e.name, event.(int))
    return nil
}

func (e *eventRecv) Close() error {
    return nil
}

func createEventRecv(name string) *eventRecv {
    return &eventRecv{name}
}

func main() {
    e1 := createEventRecv("Foo")
    e2 := createEventRecv("Bar")

    // Broadcast every event written to bc to both Sinks.
    bc := events.NewBroadcaster(e1, e2)
    bc.Write(1)
    bc.Write(2)
    // Delivery happens on a separate goroutine, so give it time to finish.
    time.Sleep(time.Second)
}

The output is:

Foo receives 1
Bar receives 1
Foo receives 2
Bar receives 2

NewBroadcaster returns a Sink that forwards each event written to it to all of the destination Sinks.

Here is another example, this time using NewQueue. NewQueue wraps a destination Sink in an in-memory queue: Write enqueues the event and returns immediately, while a background goroutine delivers the queued events to the destination in order:

package main

import (
    "fmt"
    "time"

    "github.com/docker/go-events"
)

type eventRecv struct {
    name string
}

func (e *eventRecv) Write(event events.Event) error {
    fmt.Printf("%s receives %d\n", e.name, event)
    return nil
}

func (e *eventRecv) Close() error {
    return nil
}

func createEventRecv(name string) *eventRecv {
    return &eventRecv{name}
}

func main() {
    q := events.NewQueue(createEventRecv("Foo"))
    q.Write(1)
    q.Write(2)
    // The queue delivers events asynchronously, so give it time to drain.
    time.Sleep(time.Second)
}

The output is:

Foo receives 1
Foo receives 2

FreeBSD kernel notes (10): mutex

The FreeBSD kernel provides two kinds of mutexes: spin mutexes and sleep mutexes (the following excerpts are from FreeBSD Device Drivers):

Spin Mutexes
Spin mutexes are simple spin locks. If a thread attempts to acquire a spin lock that is being held by another thread, it will “spin” and wait for the lock to be released. Spin, in this case, means to loop infinitely on the CPU. This spinning can result in deadlock if a thread that is holding a spin lock is interrupted or if it context switches, and all subsequent threads attempt to acquire that lock. Consequently, while holding a spin mutex all interrupts are blocked on the local processor and a context switch cannot be performed.

Spin mutexes should be held only for short periods of time and should be used only to protect objects related to nonpreemptive interrupts and low-level scheduling code (McKusick and Neville-Neil, 2005). Ordinarily, you’ll never use spin mutexes.

Sleep Mutexes
Sleep mutexes are the most commonly used lock. If a thread attempts to acquire a sleep mutex that is being held by another thread, it will context switch (that is, sleep) and wait for the mutex to be released. Because of this behavior, sleep mutexes are not susceptible to the deadlock described above.

Sleep mutexes support priority propagation. When a thread sleeps on a sleep mutex and its priority is higher than the sleep mutex’s current owner, the current owner will inherit the priority of this thread (Baldwin, 2002). This characteristic prevents a lower priority thread from blocking a higher priority thread.

NOTE Sleeping (for example, calling a *sleep function) while holding a mutex is never safe and must be avoided; otherwise, there are numerous assertions that will fail and the kernel will panic.

While a spin mutex is held, all interrupts on the local CPU are blocked and no context switch can occur, which prevents the deadlock described above. In most situations you should use a sleep mutex. Also note that a thread holding a mutex must not sleep; otherwise the kernel will panic.
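The distinction shows up directly in the mutex(9) API: the MTX_DEF and MTX_SPIN flags passed to mtx_init select the mutex type, and spin mutexes are locked with the _spin variants. A minimal sketch (the lock names and the counter variable are made up for illustration):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx example_sleep_mtx;  /* sleep mutex: the common case */
static struct mtx example_spin_mtx;   /* spin mutex: rarely needed */
static int counter;

static void
example_init(void)
{
    mtx_init(&example_sleep_mtx, "example sleep", NULL, MTX_DEF);
    mtx_init(&example_spin_mtx, "example spin", NULL, MTX_SPIN);
}

static void
example_increment(void)
{
    /* Contending threads context switch (sleep) until the lock is
     * released. Do not call a *sleep function while holding it. */
    mtx_lock(&example_sleep_mtx);
    counter++;
    mtx_unlock(&example_sleep_mtx);

    /* Spin mutexes are acquired with the _spin variants, which also
     * block interrupts on the local CPU while the lock is held. */
    mtx_lock_spin(&example_spin_mtx);
    counter++;
    mtx_unlock_spin(&example_spin_mtx);
}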

In addition, there are shared/exclusive locks:

Shared/exclusive locks (sx locks) are locks that threads can hold while asleep. As the name implies, multiple threads can have a shared hold on an sx lock, but only one thread can have an exclusive hold on an sx lock. When a thread has an exclusive hold on an sx lock, other threads cannot have a shared hold on that lock.

sx locks do not support priority propagation and are inefficient compared to mutexes. The main reason for using sx locks is that threads can sleep while holding one.
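In the sx(9) API, shared and exclusive holds are acquired with separate functions. A minimal sketch (the lock and variable names are made up for illustration):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/sx.h>

static struct sx example_sx;
static int shared_data;

static void
sx_example(void)
{
    sx_init(&example_sx, "example sx");

    /* Shared hold: many threads may hold the lock at once, and a
     * holder is allowed to sleep. */
    sx_slock(&example_sx);
    (void)shared_data;
    sx_sunlock(&example_sx);

    /* Exclusive hold: excludes all other shared and exclusive holders. */
    sx_xlock(&example_sx);
    shared_data++;
    sx_xunlock(&example_sx);

    sx_destroy(&example_sx);
}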

And reader/writer locks:

Reader/writer locks (rw locks) are basically mutexes with sx lock semantics. Like sx locks, threads can hold rw locks as a reader, which is identical to a shared hold, or as a writer, which is identical to an exclusive hold. Like mutexes, rw locks support priority propagation and threads cannot hold them while sleeping (or the kernel will panic).

rw locks are used when you need to protect an object that is mostly going to be read from instead of written to.
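The rwlock(9) API mirrors this read-mostly usage. A minimal sketch (the names are again made up for illustration):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/rwlock.h>

static struct rwlock example_rw;
static int stats;

static void
rw_example(void)
{
    rw_init(&example_rw, "example rw");

    /* Read (shared) hold: the frequent case. */
    rw_rlock(&example_rw);
    (void)stats;
    rw_runlock(&example_rw);

    /* Write (exclusive) hold: the occasional update. Do not sleep
     * while holding an rw lock. */
    rw_wlock(&example_rw);
    stats++;
    rw_wunlock(&example_rw);

    rw_destroy(&example_rw);
}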

Shared/exclusive locks and reader/writer locks have similar semantics, but they differ as follows: a thread holding a shared/exclusive lock may sleep, but these locks do not support priority propagation; a thread holding a reader/writer lock must not sleep, but these locks do support priority propagation.