mirror of https://github.com/Luzifer/gcr-clean.git synced 2024-12-23 03:41:19 +00:00

Add vendored code

Signed-off-by: Knut Ahlers <knut@ahlers.me>
Knut Ahlers 2019-02-04 16:00:25 +01:00
parent ed6a6f52c3
commit 04cd59c534
Signed by: luzifer
GPG key ID: DC2729FDD34BE99E
950 changed files with 302643 additions and 0 deletions

21
vendor/github.com/Azure/go-ansiterm/LICENSE generated vendored Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

12
vendor/github.com/Azure/go-ansiterm/README.md generated vendored Normal file
@@ -0,0 +1,12 @@
# go-ansiterm
This is a cross-platform ANSI terminal emulation library. It reads a stream of ANSI characters and produces the appropriate function calls. The results of the function calls are platform dependent.
For example, the parser might receive "ESC, [, A" as a stream of three characters. This is the code for Cursor Up (http://www.vt100.net/docs/vt510-rm/CUU). The parser then calls the cursor up function (CUU()) on an event handler. The event handler determines what platform-specific work must be done to cause the cursor to move up one position.
The parser (parser.go) is a partial implementation of this state machine (http://vt100.net/emu/vt500_parser.png). There are also two event handler implementations: one for tests (test_event_handler.go) that validates the expected events are being produced and called, and a Windows implementation (winterm/win_event_handler.go).
See parser_test.go for examples exercising the state machine and generating appropriate function calls.
-----
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
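To make the flow the README describes concrete, here is a minimal usage sketch; it is not part of this commit. The file name ansiterm_example.go and the stubHandler type are hypothetical, everything else (CreateParser, Parse, the AnsiEventHandler methods, the "Ground" state name, the import path) comes from the vendored sources below. It stubs out the event handler and feeds the parser an ESC [ 2 A cursor-up sequence so that CUU is dispatched.

// ansiterm_example.go -- hypothetical consumer code, illustrative only.
package main

import (
	"fmt"

	ansiterm "github.com/Azure/go-ansiterm"
)

// stubHandler satisfies ansiterm.AnsiEventHandler; only Print and CUU report anything,
// everything else is a no-op, similar in spirit to the test handler the README mentions.
type stubHandler struct{}

func (stubHandler) Print(b byte) error     { fmt.Printf("print %q\n", b); return nil }
func (stubHandler) Execute(b byte) error   { return nil }
func (stubHandler) CUU(n int) error        { fmt.Printf("cursor up %d\n", n); return nil }
func (stubHandler) CUD(int) error          { return nil }
func (stubHandler) CUF(int) error          { return nil }
func (stubHandler) CUB(int) error          { return nil }
func (stubHandler) CNL(int) error          { return nil }
func (stubHandler) CPL(int) error          { return nil }
func (stubHandler) CHA(int) error          { return nil }
func (stubHandler) VPA(int) error          { return nil }
func (stubHandler) CUP(int, int) error     { return nil }
func (stubHandler) HVP(int, int) error     { return nil }
func (stubHandler) DECTCEM(bool) error     { return nil }
func (stubHandler) DECOM(bool) error       { return nil }
func (stubHandler) DECCOLM(bool) error     { return nil }
func (stubHandler) ED(int) error           { return nil }
func (stubHandler) EL(int) error           { return nil }
func (stubHandler) IL(int) error           { return nil }
func (stubHandler) DL(int) error           { return nil }
func (stubHandler) ICH(int) error          { return nil }
func (stubHandler) DCH(int) error          { return nil }
func (stubHandler) SGR([]int) error        { return nil }
func (stubHandler) SU(int) error           { return nil }
func (stubHandler) SD(int) error           { return nil }
func (stubHandler) DA([]string) error      { return nil }
func (stubHandler) DECSTBM(int, int) error { return nil }
func (stubHandler) IND() error             { return nil }
func (stubHandler) RI() error              { return nil }
func (stubHandler) Flush() error           { return nil }

func main() {
	// State names match the baseState names set up in parser.go ("Ground", "CsiEntry", ...).
	parser := ansiterm.CreateParser("Ground", stubHandler{})

	// "\x1b[2A" is ESC [ 2 A: the CUU (cursor up) sequence from the README,
	// with the parameter 2 collected in the CsiParam state before dispatch.
	if _, err := parser.Parse([]byte("hi\x1b[2A")); err != nil {
		fmt.Println("parse error:", err)
	}
}

In the real package, test_event_handler.go plays this role for the tests, and winterm/win_event_handler.go is the Windows implementation.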

188
vendor/github.com/Azure/go-ansiterm/constants.go generated vendored Normal file
@@ -0,0 +1,188 @@
package ansiterm
const LogEnv = "DEBUG_TERMINAL"
// ANSI constants
// References:
// -- http://www.ecma-international.org/publications/standards/Ecma-048.htm
// -- http://man7.org/linux/man-pages/man4/console_codes.4.html
// -- http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html
// -- http://en.wikipedia.org/wiki/ANSI_escape_code
// -- http://vt100.net/emu/dec_ansi_parser
// -- http://vt100.net/emu/vt500_parser.svg
// -- http://invisible-island.net/xterm/ctlseqs/ctlseqs.html
// -- http://www.inwap.com/pdp10/ansicode.txt
const (
// ECMA-48 Set Graphics Rendition
// Note:
// -- Constants leading with an underscore (e.g., _ANSI_xxx) are unsupported or reserved
// -- Fonts could possibly be supported via SetCurrentConsoleFontEx
// -- Windows does not expose the per-window cursor (i.e., caret) blink times
ANSI_SGR_RESET = 0
ANSI_SGR_BOLD = 1
ANSI_SGR_DIM = 2
_ANSI_SGR_ITALIC = 3
ANSI_SGR_UNDERLINE = 4
_ANSI_SGR_BLINKSLOW = 5
_ANSI_SGR_BLINKFAST = 6
ANSI_SGR_REVERSE = 7
_ANSI_SGR_INVISIBLE = 8
_ANSI_SGR_LINETHROUGH = 9
_ANSI_SGR_FONT_00 = 10
_ANSI_SGR_FONT_01 = 11
_ANSI_SGR_FONT_02 = 12
_ANSI_SGR_FONT_03 = 13
_ANSI_SGR_FONT_04 = 14
_ANSI_SGR_FONT_05 = 15
_ANSI_SGR_FONT_06 = 16
_ANSI_SGR_FONT_07 = 17
_ANSI_SGR_FONT_08 = 18
_ANSI_SGR_FONT_09 = 19
_ANSI_SGR_FONT_10 = 20
_ANSI_SGR_DOUBLEUNDERLINE = 21
ANSI_SGR_BOLD_DIM_OFF = 22
_ANSI_SGR_ITALIC_OFF = 23
ANSI_SGR_UNDERLINE_OFF = 24
_ANSI_SGR_BLINK_OFF = 25
_ANSI_SGR_RESERVED_00 = 26
ANSI_SGR_REVERSE_OFF = 27
_ANSI_SGR_INVISIBLE_OFF = 28
_ANSI_SGR_LINETHROUGH_OFF = 29
ANSI_SGR_FOREGROUND_BLACK = 30
ANSI_SGR_FOREGROUND_RED = 31
ANSI_SGR_FOREGROUND_GREEN = 32
ANSI_SGR_FOREGROUND_YELLOW = 33
ANSI_SGR_FOREGROUND_BLUE = 34
ANSI_SGR_FOREGROUND_MAGENTA = 35
ANSI_SGR_FOREGROUND_CYAN = 36
ANSI_SGR_FOREGROUND_WHITE = 37
_ANSI_SGR_RESERVED_01 = 38
ANSI_SGR_FOREGROUND_DEFAULT = 39
ANSI_SGR_BACKGROUND_BLACK = 40
ANSI_SGR_BACKGROUND_RED = 41
ANSI_SGR_BACKGROUND_GREEN = 42
ANSI_SGR_BACKGROUND_YELLOW = 43
ANSI_SGR_BACKGROUND_BLUE = 44
ANSI_SGR_BACKGROUND_MAGENTA = 45
ANSI_SGR_BACKGROUND_CYAN = 46
ANSI_SGR_BACKGROUND_WHITE = 47
_ANSI_SGR_RESERVED_02 = 48
ANSI_SGR_BACKGROUND_DEFAULT = 49
// 50 - 65: Unsupported
ANSI_MAX_CMD_LENGTH = 4096
MAX_INPUT_EVENTS = 128
DEFAULT_WIDTH = 80
DEFAULT_HEIGHT = 24
ANSI_BEL = 0x07
ANSI_BACKSPACE = 0x08
ANSI_TAB = 0x09
ANSI_LINE_FEED = 0x0A
ANSI_VERTICAL_TAB = 0x0B
ANSI_FORM_FEED = 0x0C
ANSI_CARRIAGE_RETURN = 0x0D
ANSI_ESCAPE_PRIMARY = 0x1B
ANSI_ESCAPE_SECONDARY = 0x5B
ANSI_OSC_STRING_ENTRY = 0x5D
ANSI_COMMAND_FIRST = 0x40
ANSI_COMMAND_LAST = 0x7E
DCS_ENTRY = 0x90
CSI_ENTRY = 0x9B
OSC_STRING = 0x9D
ANSI_PARAMETER_SEP = ";"
ANSI_CMD_G0 = '('
ANSI_CMD_G1 = ')'
ANSI_CMD_G2 = '*'
ANSI_CMD_G3 = '+'
ANSI_CMD_DECPNM = '>'
ANSI_CMD_DECPAM = '='
ANSI_CMD_OSC = ']'
ANSI_CMD_STR_TERM = '\\'
KEY_CONTROL_PARAM_2 = ";2"
KEY_CONTROL_PARAM_3 = ";3"
KEY_CONTROL_PARAM_4 = ";4"
KEY_CONTROL_PARAM_5 = ";5"
KEY_CONTROL_PARAM_6 = ";6"
KEY_CONTROL_PARAM_7 = ";7"
KEY_CONTROL_PARAM_8 = ";8"
KEY_ESC_CSI = "\x1B["
KEY_ESC_N = "\x1BN"
KEY_ESC_O = "\x1BO"
FILL_CHARACTER = ' '
)
func getByteRange(start byte, end byte) []byte {
bytes := make([]byte, 0, 32)
for i := start; i <= end; i++ {
bytes = append(bytes, byte(i))
}
return bytes
}
var toGroundBytes = getToGroundBytes()
var executors = getExecuteBytes()
// SPACE 20+A0 hex Always and everywhere a blank space
// Intermediate 20-2F hex !"#$%&'()*+,-./
var intermeds = getByteRange(0x20, 0x2F)
// Parameters 30-3F hex 0123456789:;<=>?
// CSI Parameters 30-39, 3B hex 0123456789;
var csiParams = getByteRange(0x30, 0x3F)
var csiCollectables = append(getByteRange(0x30, 0x39), getByteRange(0x3B, 0x3F)...)
// Uppercase 40-5F hex @ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_
var upperCase = getByteRange(0x40, 0x5F)
// Lowercase 60-7E hex `abcdefghijklmnopqrstuvwxyz{|}~
var lowerCase = getByteRange(0x60, 0x7E)
// Alphabetics 40-7E hex (all of upper and lower case)
var alphabetics = append(upperCase, lowerCase...)
var printables = getByteRange(0x20, 0x7F)
var escapeIntermediateToGroundBytes = getByteRange(0x30, 0x7E)
var escapeToGroundBytes = getEscapeToGroundBytes()
// See http://www.vt100.net/emu/vt500_parser.png for description of the complex
// byte ranges below
func getEscapeToGroundBytes() []byte {
escapeToGroundBytes := getByteRange(0x30, 0x4F)
escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x51, 0x57)...)
escapeToGroundBytes = append(escapeToGroundBytes, 0x59)
escapeToGroundBytes = append(escapeToGroundBytes, 0x5A)
escapeToGroundBytes = append(escapeToGroundBytes, 0x5C)
escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x60, 0x7E)...)
return escapeToGroundBytes
}
func getExecuteBytes() []byte {
executeBytes := getByteRange(0x00, 0x17)
executeBytes = append(executeBytes, 0x19)
executeBytes = append(executeBytes, getByteRange(0x1C, 0x1F)...)
return executeBytes
}
func getToGroundBytes() []byte {
groundBytes := []byte{0x18}
groundBytes = append(groundBytes, 0x1A)
groundBytes = append(groundBytes, getByteRange(0x80, 0x8F)...)
groundBytes = append(groundBytes, getByteRange(0x91, 0x97)...)
groundBytes = append(groundBytes, 0x99)
groundBytes = append(groundBytes, 0x9A)
groundBytes = append(groundBytes, 0x9C)
return groundBytes
}
// Delete 7F hex Always and everywhere ignored
// C1 Control 80-9F hex 32 additional control characters
// G1 Displayable A1-FE hex 94 additional displayable characters
// Special A0+FF hex Same as SPACE and DELETE

7
vendor/github.com/Azure/go-ansiterm/context.go generated vendored Normal file
@@ -0,0 +1,7 @@
package ansiterm
type ansiContext struct {
currentChar byte
paramBuffer []byte
interBuffer []byte
}

49
vendor/github.com/Azure/go-ansiterm/csi_entry_state.go generated vendored Normal file
@@ -0,0 +1,49 @@
package ansiterm
type csiEntryState struct {
baseState
}
func (csiState csiEntryState) Handle(b byte) (s state, e error) {
csiState.parser.logf("CsiEntry::Handle %#x", b)
nextState, err := csiState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(alphabetics, b):
return csiState.parser.ground, nil
case sliceContains(csiCollectables, b):
return csiState.parser.csiParam, nil
case sliceContains(executors, b):
return csiState, csiState.parser.execute()
}
return csiState, nil
}
func (csiState csiEntryState) Transition(s state) error {
csiState.parser.logf("CsiEntry::Transition %s --> %s", csiState.Name(), s.Name())
csiState.baseState.Transition(s)
switch s {
case csiState.parser.ground:
return csiState.parser.csiDispatch()
case csiState.parser.csiParam:
switch {
case sliceContains(csiParams, csiState.parser.context.currentChar):
csiState.parser.collectParam()
case sliceContains(intermeds, csiState.parser.context.currentChar):
csiState.parser.collectInter()
}
}
return nil
}
func (csiState csiEntryState) Enter() error {
csiState.parser.clear()
return nil
}

38
vendor/github.com/Azure/go-ansiterm/csi_param_state.go generated vendored Normal file
@@ -0,0 +1,38 @@
package ansiterm
type csiParamState struct {
baseState
}
func (csiState csiParamState) Handle(b byte) (s state, e error) {
csiState.parser.logf("CsiParam::Handle %#x", b)
nextState, err := csiState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(alphabetics, b):
return csiState.parser.ground, nil
case sliceContains(csiCollectables, b):
csiState.parser.collectParam()
return csiState, nil
case sliceContains(executors, b):
return csiState, csiState.parser.execute()
}
return csiState, nil
}
func (csiState csiParamState) Transition(s state) error {
csiState.parser.logf("CsiParam::Transition %s --> %s", csiState.Name(), s.Name())
csiState.baseState.Transition(s)
switch s {
case csiState.parser.ground:
return csiState.parser.csiDispatch()
}
return nil
}

36
vendor/github.com/Azure/go-ansiterm/escape_intermediate_state.go generated vendored Normal file
@@ -0,0 +1,36 @@
package ansiterm
type escapeIntermediateState struct {
baseState
}
func (escState escapeIntermediateState) Handle(b byte) (s state, e error) {
escState.parser.logf("escapeIntermediateState::Handle %#x", b)
nextState, err := escState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(intermeds, b):
return escState, escState.parser.collectInter()
case sliceContains(executors, b):
return escState, escState.parser.execute()
case sliceContains(escapeIntermediateToGroundBytes, b):
return escState.parser.ground, nil
}
return escState, nil
}
func (escState escapeIntermediateState) Transition(s state) error {
escState.parser.logf("escapeIntermediateState::Transition %s --> %s", escState.Name(), s.Name())
escState.baseState.Transition(s)
switch s {
case escState.parser.ground:
return escState.parser.escDispatch()
}
return nil
}

47
vendor/github.com/Azure/go-ansiterm/escape_state.go generated vendored Normal file
@@ -0,0 +1,47 @@
package ansiterm
type escapeState struct {
baseState
}
func (escState escapeState) Handle(b byte) (s state, e error) {
escState.parser.logf("escapeState::Handle %#x", b)
nextState, err := escState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case b == ANSI_ESCAPE_SECONDARY:
return escState.parser.csiEntry, nil
case b == ANSI_OSC_STRING_ENTRY:
return escState.parser.oscString, nil
case sliceContains(executors, b):
return escState, escState.parser.execute()
case sliceContains(escapeToGroundBytes, b):
return escState.parser.ground, nil
case sliceContains(intermeds, b):
return escState.parser.escapeIntermediate, nil
}
return escState, nil
}
func (escState escapeState) Transition(s state) error {
escState.parser.logf("Escape::Transition %s --> %s", escState.Name(), s.Name())
escState.baseState.Transition(s)
switch s {
case escState.parser.ground:
return escState.parser.escDispatch()
case escState.parser.escapeIntermediate:
return escState.parser.collectInter()
}
return nil
}
func (escState escapeState) Enter() error {
escState.parser.clear()
return nil
}

90
vendor/github.com/Azure/go-ansiterm/event_handler.go generated vendored Normal file
@@ -0,0 +1,90 @@
package ansiterm
type AnsiEventHandler interface {
// Print
Print(b byte) error
// Execute C0 commands
Execute(b byte) error
// CUrsor Up
CUU(int) error
// CUrsor Down
CUD(int) error
// CUrsor Forward
CUF(int) error
// CUrsor Backward
CUB(int) error
// Cursor to Next Line
CNL(int) error
// Cursor to Previous Line
CPL(int) error
// Cursor Horizontal position Absolute
CHA(int) error
// Vertical line Position Absolute
VPA(int) error
// CUrsor Position
CUP(int, int) error
// Horizontal and Vertical Position (depends on PUM)
HVP(int, int) error
// Text Cursor Enable Mode
DECTCEM(bool) error
// Origin Mode
DECOM(bool) error
// 132 Column Mode
DECCOLM(bool) error
// Erase in Display
ED(int) error
// Erase in Line
EL(int) error
// Insert Line
IL(int) error
// Delete Line
DL(int) error
// Insert Character
ICH(int) error
// Delete Character
DCH(int) error
// Set Graphics Rendition
SGR([]int) error
// Pan Down
SU(int) error
// Pan Up
SD(int) error
// Device Attributes
DA([]string) error
// Set Top and Bottom Margins
DECSTBM(int, int) error
// Index
IND() error
// Reverse Index
RI() error
// Flush updates from previous commands
Flush() error
}

24
vendor/github.com/Azure/go-ansiterm/ground_state.go generated vendored Normal file
@@ -0,0 +1,24 @@
package ansiterm
type groundState struct {
baseState
}
func (gs groundState) Handle(b byte) (s state, e error) {
gs.parser.context.currentChar = b
nextState, err := gs.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(printables, b):
return gs, gs.parser.print()
case sliceContains(executors, b):
return gs, gs.parser.execute()
}
return gs, nil
}

31
vendor/github.com/Azure/go-ansiterm/osc_string_state.go generated vendored Normal file
@@ -0,0 +1,31 @@
package ansiterm
type oscStringState struct {
baseState
}
func (oscState oscStringState) Handle(b byte) (s state, e error) {
oscState.parser.logf("OscString::Handle %#x", b)
nextState, err := oscState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case isOscStringTerminator(b):
return oscState.parser.ground, nil
}
return oscState, nil
}
// See below for OSC string terminators for linux
// http://man7.org/linux/man-pages/man4/console_codes.4.html
func isOscStringTerminator(b byte) bool {
if b == ANSI_BEL || b == 0x5C {
return true
}
return false
}

151
vendor/github.com/Azure/go-ansiterm/parser.go generated vendored Normal file
@@ -0,0 +1,151 @@
package ansiterm
import (
"errors"
"log"
"os"
)
type AnsiParser struct {
currState state
eventHandler AnsiEventHandler
context *ansiContext
csiEntry state
csiParam state
dcsEntry state
escape state
escapeIntermediate state
error state
ground state
oscString state
stateMap []state
logf func(string, ...interface{})
}
type Option func(*AnsiParser)
func WithLogf(f func(string, ...interface{})) Option {
return func(ap *AnsiParser) {
ap.logf = f
}
}
func CreateParser(initialState string, evtHandler AnsiEventHandler, opts ...Option) *AnsiParser {
ap := &AnsiParser{
eventHandler: evtHandler,
context: &ansiContext{},
}
for _, o := range opts {
o(ap)
}
if isDebugEnv := os.Getenv(LogEnv); isDebugEnv == "1" {
logFile, _ := os.Create("ansiParser.log")
logger := log.New(logFile, "", log.LstdFlags)
if ap.logf != nil {
l := ap.logf
ap.logf = func(s string, v ...interface{}) {
l(s, v...)
logger.Printf(s, v...)
}
} else {
ap.logf = logger.Printf
}
}
if ap.logf == nil {
ap.logf = func(string, ...interface{}) {}
}
ap.csiEntry = csiEntryState{baseState{name: "CsiEntry", parser: ap}}
ap.csiParam = csiParamState{baseState{name: "CsiParam", parser: ap}}
ap.dcsEntry = dcsEntryState{baseState{name: "DcsEntry", parser: ap}}
ap.escape = escapeState{baseState{name: "Escape", parser: ap}}
ap.escapeIntermediate = escapeIntermediateState{baseState{name: "EscapeIntermediate", parser: ap}}
ap.error = errorState{baseState{name: "Error", parser: ap}}
ap.ground = groundState{baseState{name: "Ground", parser: ap}}
ap.oscString = oscStringState{baseState{name: "OscString", parser: ap}}
ap.stateMap = []state{
ap.csiEntry,
ap.csiParam,
ap.dcsEntry,
ap.escape,
ap.escapeIntermediate,
ap.error,
ap.ground,
ap.oscString,
}
ap.currState = getState(initialState, ap.stateMap)
ap.logf("CreateParser: parser %p", ap)
return ap
}
func getState(name string, states []state) state {
for _, el := range states {
if el.Name() == name {
return el
}
}
return nil
}
func (ap *AnsiParser) Parse(bytes []byte) (int, error) {
for i, b := range bytes {
if err := ap.handle(b); err != nil {
return i, err
}
}
return len(bytes), ap.eventHandler.Flush()
}
func (ap *AnsiParser) handle(b byte) error {
ap.context.currentChar = b
newState, err := ap.currState.Handle(b)
if err != nil {
return err
}
if newState == nil {
ap.logf("WARNING: newState is nil")
return errors.New("New state of 'nil' is invalid.")
}
if newState != ap.currState {
if err := ap.changeState(newState); err != nil {
return err
}
}
return nil
}
func (ap *AnsiParser) changeState(newState state) error {
ap.logf("ChangeState %s --> %s", ap.currState.Name(), newState.Name())
// Exit old state
if err := ap.currState.Exit(); err != nil {
ap.logf("Exit state '%s' failed with : '%v'", ap.currState.Name(), err)
return err
}
// Perform transition action
if err := ap.currState.Transition(newState); err != nil {
ap.logf("Transition from '%s' to '%s' failed with: '%v'", ap.currState.Name(), newState.Name, err)
return err
}
// Enter new state
if err := newState.Enter(); err != nil {
ap.logf("Enter state '%s' failed with: '%v'", newState.Name(), err)
return err
}
ap.currState = newState
return nil
}

99
vendor/github.com/Azure/go-ansiterm/parser_action_helpers.go generated vendored Normal file
@@ -0,0 +1,99 @@
package ansiterm
import (
"strconv"
)
func parseParams(bytes []byte) ([]string, error) {
paramBuff := make([]byte, 0, 0)
params := []string{}
for _, v := range bytes {
if v == ';' {
if len(paramBuff) > 0 {
// Completed parameter, append it to the list
s := string(paramBuff)
params = append(params, s)
paramBuff = make([]byte, 0, 0)
}
} else {
paramBuff = append(paramBuff, v)
}
}
// Last parameter may not be terminated with ';'
if len(paramBuff) > 0 {
s := string(paramBuff)
params = append(params, s)
}
return params, nil
}
func parseCmd(context ansiContext) (string, error) {
return string(context.currentChar), nil
}
func getInt(params []string, dflt int) int {
i := getInts(params, 1, dflt)[0]
return i
}
func getInts(params []string, minCount int, dflt int) []int {
ints := []int{}
for _, v := range params {
i, _ := strconv.Atoi(v)
// Zero is mapped to the default value in VT100.
if i == 0 {
i = dflt
}
ints = append(ints, i)
}
if len(ints) < minCount {
remaining := minCount - len(ints)
for i := 0; i < remaining; i++ {
ints = append(ints, dflt)
}
}
return ints
}
func (ap *AnsiParser) modeDispatch(param string, set bool) error {
switch param {
case "?3":
return ap.eventHandler.DECCOLM(set)
case "?6":
return ap.eventHandler.DECOM(set)
case "?25":
return ap.eventHandler.DECTCEM(set)
}
return nil
}
func (ap *AnsiParser) hDispatch(params []string) error {
if len(params) == 1 {
return ap.modeDispatch(params[0], true)
}
return nil
}
func (ap *AnsiParser) lDispatch(params []string) error {
if len(params) == 1 {
return ap.modeDispatch(params[0], false)
}
return nil
}
func getEraseParam(params []string) int {
param := getInt(params, 0)
if param < 0 || 3 < param {
param = 0
}
return param
}
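As a rough illustration of how these unexported helpers behave, here is a hypothetical test-style sketch; it is not part of this commit, the file and test names are invented, and it would have to live in the ansiterm package itself since the helpers are unexported. It checks the ';' splitting, the padding of missing parameters with the default, and the 0..3 clamping of erase parameters.

// parser_action_helpers_example_test.go -- hypothetical, illustrative only.
package ansiterm

import "testing"

func TestParamHelpersSketch(t *testing.T) {
	// Parameters are split on ';'.
	params, _ := parseParams([]byte("4;12"))
	if len(params) != 2 || params[0] != "4" || params[1] != "12" {
		t.Fatalf("unexpected params: %v", params)
	}

	// Missing values are padded with the default (1 here), mirroring the VT100
	// convention that an absent or zero parameter means "use the default".
	if ints := getInts(nil, 2, 1); ints[0] != 1 || ints[1] != 1 {
		t.Fatalf("unexpected defaults: %v", ints)
	}

	// Erase parameters outside 0..3 are clamped to 0.
	if got := getEraseParam([]string{"5"}); got != 0 {
		t.Fatalf("expected 0, got %d", got)
	}
}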

119
vendor/github.com/Azure/go-ansiterm/parser_actions.go generated vendored Normal file
@@ -0,0 +1,119 @@
package ansiterm
func (ap *AnsiParser) collectParam() error {
currChar := ap.context.currentChar
ap.logf("collectParam %#x", currChar)
ap.context.paramBuffer = append(ap.context.paramBuffer, currChar)
return nil
}
func (ap *AnsiParser) collectInter() error {
currChar := ap.context.currentChar
ap.logf("collectInter %#x", currChar)
ap.context.interBuffer = append(ap.context.interBuffer, currChar)
return nil
}
func (ap *AnsiParser) escDispatch() error {
cmd, _ := parseCmd(*ap.context)
intermeds := ap.context.interBuffer
ap.logf("escDispatch currentChar: %#x", ap.context.currentChar)
ap.logf("escDispatch: %v(%v)", cmd, intermeds)
switch cmd {
case "D": // IND
return ap.eventHandler.IND()
case "E": // NEL, equivalent to CRLF
err := ap.eventHandler.Execute(ANSI_CARRIAGE_RETURN)
if err == nil {
err = ap.eventHandler.Execute(ANSI_LINE_FEED)
}
return err
case "M": // RI
return ap.eventHandler.RI()
}
return nil
}
func (ap *AnsiParser) csiDispatch() error {
cmd, _ := parseCmd(*ap.context)
params, _ := parseParams(ap.context.paramBuffer)
ap.logf("Parsed params: %v with length: %d", params, len(params))
ap.logf("csiDispatch: %v(%v)", cmd, params)
switch cmd {
case "@":
return ap.eventHandler.ICH(getInt(params, 1))
case "A":
return ap.eventHandler.CUU(getInt(params, 1))
case "B":
return ap.eventHandler.CUD(getInt(params, 1))
case "C":
return ap.eventHandler.CUF(getInt(params, 1))
case "D":
return ap.eventHandler.CUB(getInt(params, 1))
case "E":
return ap.eventHandler.CNL(getInt(params, 1))
case "F":
return ap.eventHandler.CPL(getInt(params, 1))
case "G":
return ap.eventHandler.CHA(getInt(params, 1))
case "H":
ints := getInts(params, 2, 1)
x, y := ints[0], ints[1]
return ap.eventHandler.CUP(x, y)
case "J":
param := getEraseParam(params)
return ap.eventHandler.ED(param)
case "K":
param := getEraseParam(params)
return ap.eventHandler.EL(param)
case "L":
return ap.eventHandler.IL(getInt(params, 1))
case "M":
return ap.eventHandler.DL(getInt(params, 1))
case "P":
return ap.eventHandler.DCH(getInt(params, 1))
case "S":
return ap.eventHandler.SU(getInt(params, 1))
case "T":
return ap.eventHandler.SD(getInt(params, 1))
case "c":
return ap.eventHandler.DA(params)
case "d":
return ap.eventHandler.VPA(getInt(params, 1))
case "f":
ints := getInts(params, 2, 1)
x, y := ints[0], ints[1]
return ap.eventHandler.HVP(x, y)
case "h":
return ap.hDispatch(params)
case "l":
return ap.lDispatch(params)
case "m":
return ap.eventHandler.SGR(getInts(params, 1, 0))
case "r":
ints := getInts(params, 2, 1)
top, bottom := ints[0], ints[1]
return ap.eventHandler.DECSTBM(top, bottom)
default:
ap.logf("ERROR: Unsupported CSI command: '%s', with full context: %v", cmd, ap.context)
return nil
}
}
func (ap *AnsiParser) print() error {
return ap.eventHandler.Print(ap.context.currentChar)
}
func (ap *AnsiParser) clear() error {
ap.context = &ansiContext{}
return nil
}
func (ap *AnsiParser) execute() error {
return ap.eventHandler.Execute(ap.context.currentChar)
}

71
vendor/github.com/Azure/go-ansiterm/states.go generated vendored Normal file
@@ -0,0 +1,71 @@
package ansiterm
type stateID int
type state interface {
Enter() error
Exit() error
Handle(byte) (state, error)
Name() string
Transition(state) error
}
type baseState struct {
name string
parser *AnsiParser
}
func (base baseState) Enter() error {
return nil
}
func (base baseState) Exit() error {
return nil
}
func (base baseState) Handle(b byte) (s state, e error) {
switch {
case b == CSI_ENTRY:
return base.parser.csiEntry, nil
case b == DCS_ENTRY:
return base.parser.dcsEntry, nil
case b == ANSI_ESCAPE_PRIMARY:
return base.parser.escape, nil
case b == OSC_STRING:
return base.parser.oscString, nil
case sliceContains(toGroundBytes, b):
return base.parser.ground, nil
}
return nil, nil
}
func (base baseState) Name() string {
return base.name
}
func (base baseState) Transition(s state) error {
if s == base.parser.ground {
execBytes := []byte{0x18}
execBytes = append(execBytes, 0x1A)
execBytes = append(execBytes, getByteRange(0x80, 0x8F)...)
execBytes = append(execBytes, getByteRange(0x91, 0x97)...)
execBytes = append(execBytes, 0x99)
execBytes = append(execBytes, 0x9A)
if sliceContains(execBytes, base.parser.context.currentChar) {
return base.parser.execute()
}
}
return nil
}
type dcsEntryState struct {
baseState
}
type errorState struct {
baseState
}

21
vendor/github.com/Azure/go-ansiterm/utilities.go generated vendored Normal file
@@ -0,0 +1,21 @@
package ansiterm
import (
"strconv"
)
func sliceContains(bytes []byte, b byte) bool {
for _, v := range bytes {
if v == b {
return true
}
}
return false
}
func convertBytesToInteger(bytes []byte) int {
s := string(bytes)
i, _ := strconv.Atoi(s)
return i
}

182
vendor/github.com/Azure/go-ansiterm/winterm/ansi.go generated vendored Normal file
@@ -0,0 +1,182 @@
// +build windows
package winterm
import (
"fmt"
"os"
"strconv"
"strings"
"syscall"
"github.com/Azure/go-ansiterm"
)
// Windows keyboard constants
// See https://msdn.microsoft.com/en-us/library/windows/desktop/dd375731(v=vs.85).aspx.
const (
VK_PRIOR = 0x21 // PAGE UP key
VK_NEXT = 0x22 // PAGE DOWN key
VK_END = 0x23 // END key
VK_HOME = 0x24 // HOME key
VK_LEFT = 0x25 // LEFT ARROW key
VK_UP = 0x26 // UP ARROW key
VK_RIGHT = 0x27 // RIGHT ARROW key
VK_DOWN = 0x28 // DOWN ARROW key
VK_SELECT = 0x29 // SELECT key
VK_PRINT = 0x2A // PRINT key
VK_EXECUTE = 0x2B // EXECUTE key
VK_SNAPSHOT = 0x2C // PRINT SCREEN key
VK_INSERT = 0x2D // INS key
VK_DELETE = 0x2E // DEL key
VK_HELP = 0x2F // HELP key
VK_F1 = 0x70 // F1 key
VK_F2 = 0x71 // F2 key
VK_F3 = 0x72 // F3 key
VK_F4 = 0x73 // F4 key
VK_F5 = 0x74 // F5 key
VK_F6 = 0x75 // F6 key
VK_F7 = 0x76 // F7 key
VK_F8 = 0x77 // F8 key
VK_F9 = 0x78 // F9 key
VK_F10 = 0x79 // F10 key
VK_F11 = 0x7A // F11 key
VK_F12 = 0x7B // F12 key
RIGHT_ALT_PRESSED = 0x0001
LEFT_ALT_PRESSED = 0x0002
RIGHT_CTRL_PRESSED = 0x0004
LEFT_CTRL_PRESSED = 0x0008
SHIFT_PRESSED = 0x0010
NUMLOCK_ON = 0x0020
SCROLLLOCK_ON = 0x0040
CAPSLOCK_ON = 0x0080
ENHANCED_KEY = 0x0100
)
type ansiCommand struct {
CommandBytes []byte
Command string
Parameters []string
IsSpecial bool
}
func newAnsiCommand(command []byte) *ansiCommand {
if isCharacterSelectionCmdChar(command[1]) {
// Is Character Set Selection commands
return &ansiCommand{
CommandBytes: command,
Command: string(command),
IsSpecial: true,
}
}
// last char is command character
lastCharIndex := len(command) - 1
ac := &ansiCommand{
CommandBytes: command,
Command: string(command[lastCharIndex]),
IsSpecial: false,
}
// more than a single escape
if lastCharIndex != 0 {
start := 1
// skip if double char escape sequence
if command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_ESCAPE_SECONDARY {
start++
}
// convert this to GetNextParam method
ac.Parameters = strings.Split(string(command[start:lastCharIndex]), ansiterm.ANSI_PARAMETER_SEP)
}
return ac
}
func (ac *ansiCommand) paramAsSHORT(index int, defaultValue int16) int16 {
if index < 0 || index >= len(ac.Parameters) {
return defaultValue
}
param, err := strconv.ParseInt(ac.Parameters[index], 10, 16)
if err != nil {
return defaultValue
}
return int16(param)
}
func (ac *ansiCommand) String() string {
return fmt.Sprintf("0x%v \"%v\" (\"%v\")",
bytesToHex(ac.CommandBytes),
ac.Command,
strings.Join(ac.Parameters, "\",\""))
}
// isAnsiCommandChar returns true if the passed byte falls within the range of ANSI commands.
// See http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html.
func isAnsiCommandChar(b byte) bool {
switch {
case ansiterm.ANSI_COMMAND_FIRST <= b && b <= ansiterm.ANSI_COMMAND_LAST && b != ansiterm.ANSI_ESCAPE_SECONDARY:
return true
case b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_OSC || b == ansiterm.ANSI_CMD_DECPAM || b == ansiterm.ANSI_CMD_DECPNM:
// non-CSI escape sequence terminator
return true
case b == ansiterm.ANSI_CMD_STR_TERM || b == ansiterm.ANSI_BEL:
// String escape sequence terminator
return true
}
return false
}
func isXtermOscSequence(command []byte, current byte) bool {
return (len(command) >= 2 && command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_CMD_OSC && current != ansiterm.ANSI_BEL)
}
func isCharacterSelectionCmdChar(b byte) bool {
return (b == ansiterm.ANSI_CMD_G0 || b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_G2 || b == ansiterm.ANSI_CMD_G3)
}
// bytesToHex converts a slice of bytes to a human-readable string.
func bytesToHex(b []byte) string {
hex := make([]string, len(b))
for i, ch := range b {
hex[i] = fmt.Sprintf("%X", ch)
}
return strings.Join(hex, "")
}
// ensureInRange adjusts the passed value, if necessary, to ensure it is within
// the passed min / max range.
func ensureInRange(n int16, min int16, max int16) int16 {
if n < min {
return min
} else if n > max {
return max
} else {
return n
}
}
func GetStdFile(nFile int) (*os.File, uintptr) {
var file *os.File
switch nFile {
case syscall.STD_INPUT_HANDLE:
file = os.Stdin
case syscall.STD_OUTPUT_HANDLE:
file = os.Stdout
case syscall.STD_ERROR_HANDLE:
file = os.Stderr
default:
panic(fmt.Errorf("Invalid standard handle identifier: %v", nFile))
}
fd, err := syscall.GetStdHandle(nFile)
if err != nil {
panic(fmt.Errorf("Invalid standard handle identifier: %v -- %v", nFile, err))
}
return file, uintptr(fd)
}

327
vendor/github.com/Azure/go-ansiterm/winterm/api.go generated vendored Normal file
@@ -0,0 +1,327 @@
// +build windows
package winterm
import (
"fmt"
"syscall"
"unsafe"
)
//===========================================================================================================
// IMPORTANT NOTE:
//
// The methods below make extensive use of the "unsafe" package to obtain the required pointers.
// Beginning in Go 1.3, the garbage collector may release local variables (e.g., incoming arguments, stack
// variables) the pointers reference *before* the API completes.
//
// As a result, in those cases, the code must hint that the variables remain in use by invoking the
// dummy method "use" (see below). Newer versions of Go are planned to change the mechanism to no longer
// require unsafe pointers.
//
// If you add or modify methods, ENSURE protection of local variables through the "use" builtin to inform
// the garbage collector the variables remain in use if:
//
// -- The value is not a pointer (e.g., int32, struct)
// -- The value is not referenced by the method after passing the pointer to Windows
//
// See http://golang.org/doc/go1.3.
//===========================================================================================================
var (
kernel32DLL = syscall.NewLazyDLL("kernel32.dll")
getConsoleCursorInfoProc = kernel32DLL.NewProc("GetConsoleCursorInfo")
setConsoleCursorInfoProc = kernel32DLL.NewProc("SetConsoleCursorInfo")
setConsoleCursorPositionProc = kernel32DLL.NewProc("SetConsoleCursorPosition")
setConsoleModeProc = kernel32DLL.NewProc("SetConsoleMode")
getConsoleScreenBufferInfoProc = kernel32DLL.NewProc("GetConsoleScreenBufferInfo")
setConsoleScreenBufferSizeProc = kernel32DLL.NewProc("SetConsoleScreenBufferSize")
scrollConsoleScreenBufferProc = kernel32DLL.NewProc("ScrollConsoleScreenBufferA")
setConsoleTextAttributeProc = kernel32DLL.NewProc("SetConsoleTextAttribute")
setConsoleWindowInfoProc = kernel32DLL.NewProc("SetConsoleWindowInfo")
writeConsoleOutputProc = kernel32DLL.NewProc("WriteConsoleOutputW")
readConsoleInputProc = kernel32DLL.NewProc("ReadConsoleInputW")
waitForSingleObjectProc = kernel32DLL.NewProc("WaitForSingleObject")
)
// Windows Console constants
const (
// Console modes
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx.
ENABLE_PROCESSED_INPUT = 0x0001
ENABLE_LINE_INPUT = 0x0002
ENABLE_ECHO_INPUT = 0x0004
ENABLE_WINDOW_INPUT = 0x0008
ENABLE_MOUSE_INPUT = 0x0010
ENABLE_INSERT_MODE = 0x0020
ENABLE_QUICK_EDIT_MODE = 0x0040
ENABLE_EXTENDED_FLAGS = 0x0080
ENABLE_AUTO_POSITION = 0x0100
ENABLE_VIRTUAL_TERMINAL_INPUT = 0x0200
ENABLE_PROCESSED_OUTPUT = 0x0001
ENABLE_WRAP_AT_EOL_OUTPUT = 0x0002
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
DISABLE_NEWLINE_AUTO_RETURN = 0x0008
ENABLE_LVB_GRID_WORLDWIDE = 0x0010
// Character attributes
// Note:
// -- The attributes are combined to produce various colors (e.g., Blue + Green will create Cyan).
// Clearing all foreground or background colors results in black; setting all creates white.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682088(v=vs.85).aspx#_win32_character_attributes.
FOREGROUND_BLUE uint16 = 0x0001
FOREGROUND_GREEN uint16 = 0x0002
FOREGROUND_RED uint16 = 0x0004
FOREGROUND_INTENSITY uint16 = 0x0008
FOREGROUND_MASK uint16 = 0x000F
BACKGROUND_BLUE uint16 = 0x0010
BACKGROUND_GREEN uint16 = 0x0020
BACKGROUND_RED uint16 = 0x0040
BACKGROUND_INTENSITY uint16 = 0x0080
BACKGROUND_MASK uint16 = 0x00F0
COMMON_LVB_MASK uint16 = 0xFF00
COMMON_LVB_REVERSE_VIDEO uint16 = 0x4000
COMMON_LVB_UNDERSCORE uint16 = 0x8000
// Input event types
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx.
KEY_EVENT = 0x0001
MOUSE_EVENT = 0x0002
WINDOW_BUFFER_SIZE_EVENT = 0x0004
MENU_EVENT = 0x0008
FOCUS_EVENT = 0x0010
// WaitForSingleObject return codes
WAIT_ABANDONED = 0x00000080
WAIT_FAILED = 0xFFFFFFFF
WAIT_SIGNALED = 0x0000000
WAIT_TIMEOUT = 0x00000102
// WaitForSingleObject wait duration
WAIT_INFINITE = 0xFFFFFFFF
WAIT_ONE_SECOND = 1000
WAIT_HALF_SECOND = 500
WAIT_QUARTER_SECOND = 250
)
// Windows API Console types
// -- See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682101(v=vs.85).aspx for Console specific types (e.g., COORD)
// -- See https://msdn.microsoft.com/en-us/library/aa296569(v=vs.60).aspx for comments on alignment
type (
CHAR_INFO struct {
UnicodeChar uint16
Attributes uint16
}
CONSOLE_CURSOR_INFO struct {
Size uint32
Visible int32
}
CONSOLE_SCREEN_BUFFER_INFO struct {
Size COORD
CursorPosition COORD
Attributes uint16
Window SMALL_RECT
MaximumWindowSize COORD
}
COORD struct {
X int16
Y int16
}
SMALL_RECT struct {
Left int16
Top int16
Right int16
Bottom int16
}
// INPUT_RECORD is a C/C++ union of which KEY_EVENT_RECORD is one case, it is also the largest
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx.
INPUT_RECORD struct {
EventType uint16
KeyEvent KEY_EVENT_RECORD
}
KEY_EVENT_RECORD struct {
KeyDown int32
RepeatCount uint16
VirtualKeyCode uint16
VirtualScanCode uint16
UnicodeChar uint16
ControlKeyState uint32
}
WINDOW_BUFFER_SIZE struct {
Size COORD
}
)
// boolToBOOL converts a Go bool into a Windows int32.
func boolToBOOL(f bool) int32 {
if f {
return int32(1)
} else {
return int32(0)
}
}
// GetConsoleCursorInfo retrieves information about the size and visibility of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683163(v=vs.85).aspx.
func GetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error {
r1, r2, err := getConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0)
return checkError(r1, r2, err)
}
// SetConsoleCursorInfo sets the size and visibility of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686019(v=vs.85).aspx.
func SetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error {
r1, r2, err := setConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0)
return checkError(r1, r2, err)
}
// SetConsoleCursorPosition sets the location of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686025(v=vs.85).aspx.
func SetConsoleCursorPosition(handle uintptr, coord COORD) error {
r1, r2, err := setConsoleCursorPositionProc.Call(handle, coordToPointer(coord))
use(coord)
return checkError(r1, r2, err)
}
// GetConsoleMode gets the console mode for given file descriptor
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683167(v=vs.85).aspx.
func GetConsoleMode(handle uintptr) (mode uint32, err error) {
err = syscall.GetConsoleMode(syscall.Handle(handle), &mode)
return mode, err
}
// SetConsoleMode sets the console mode for given file descriptor
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx.
func SetConsoleMode(handle uintptr, mode uint32) error {
r1, r2, err := setConsoleModeProc.Call(handle, uintptr(mode), 0)
use(mode)
return checkError(r1, r2, err)
}
// GetConsoleScreenBufferInfo retrieves information about the specified console screen buffer.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683171(v=vs.85).aspx.
func GetConsoleScreenBufferInfo(handle uintptr) (*CONSOLE_SCREEN_BUFFER_INFO, error) {
info := CONSOLE_SCREEN_BUFFER_INFO{}
err := checkError(getConsoleScreenBufferInfoProc.Call(handle, uintptr(unsafe.Pointer(&info)), 0))
if err != nil {
return nil, err
}
return &info, nil
}
func ScrollConsoleScreenBuffer(handle uintptr, scrollRect SMALL_RECT, clipRect SMALL_RECT, destOrigin COORD, char CHAR_INFO) error {
r1, r2, err := scrollConsoleScreenBufferProc.Call(handle, uintptr(unsafe.Pointer(&scrollRect)), uintptr(unsafe.Pointer(&clipRect)), coordToPointer(destOrigin), uintptr(unsafe.Pointer(&char)))
use(scrollRect)
use(clipRect)
use(destOrigin)
use(char)
return checkError(r1, r2, err)
}
// SetConsoleScreenBufferSize sets the size of the console screen buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686044(v=vs.85).aspx.
func SetConsoleScreenBufferSize(handle uintptr, coord COORD) error {
r1, r2, err := setConsoleScreenBufferSizeProc.Call(handle, coordToPointer(coord))
use(coord)
return checkError(r1, r2, err)
}
// SetConsoleTextAttribute sets the attributes of characters written to the
// console screen buffer by the WriteFile or WriteConsole function.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686047(v=vs.85).aspx.
func SetConsoleTextAttribute(handle uintptr, attribute uint16) error {
r1, r2, err := setConsoleTextAttributeProc.Call(handle, uintptr(attribute), 0)
use(attribute)
return checkError(r1, r2, err)
}
// SetConsoleWindowInfo sets the size and position of the console screen buffer's window.
// Note that the size and location must be within and no larger than the backing console screen buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686125(v=vs.85).aspx.
func SetConsoleWindowInfo(handle uintptr, isAbsolute bool, rect SMALL_RECT) error {
r1, r2, err := setConsoleWindowInfoProc.Call(handle, uintptr(boolToBOOL(isAbsolute)), uintptr(unsafe.Pointer(&rect)))
use(isAbsolute)
use(rect)
return checkError(r1, r2, err)
}
// WriteConsoleOutput writes the CHAR_INFOs from the provided buffer to the active console buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687404(v=vs.85).aspx.
func WriteConsoleOutput(handle uintptr, buffer []CHAR_INFO, bufferSize COORD, bufferCoord COORD, writeRegion *SMALL_RECT) error {
r1, r2, err := writeConsoleOutputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), coordToPointer(bufferSize), coordToPointer(bufferCoord), uintptr(unsafe.Pointer(writeRegion)))
use(buffer)
use(bufferSize)
use(bufferCoord)
return checkError(r1, r2, err)
}
// ReadConsoleInput reads (and removes) data from the console input buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684961(v=vs.85).aspx.
func ReadConsoleInput(handle uintptr, buffer []INPUT_RECORD, count *uint32) error {
r1, r2, err := readConsoleInputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), uintptr(len(buffer)), uintptr(unsafe.Pointer(count)))
use(buffer)
return checkError(r1, r2, err)
}
// WaitForSingleObject waits for the passed handle to be signaled.
// It returns true if the handle was signaled; false otherwise.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687032(v=vs.85).aspx.
func WaitForSingleObject(handle uintptr, msWait uint32) (bool, error) {
r1, _, err := waitForSingleObjectProc.Call(handle, uintptr(uint32(msWait)))
switch r1 {
case WAIT_ABANDONED, WAIT_TIMEOUT:
return false, nil
case WAIT_SIGNALED:
return true, nil
}
use(msWait)
return false, err
}
// String helpers
func (info CONSOLE_SCREEN_BUFFER_INFO) String() string {
return fmt.Sprintf("Size(%v) Cursor(%v) Window(%v) Max(%v)", info.Size, info.CursorPosition, info.Window, info.MaximumWindowSize)
}
func (coord COORD) String() string {
return fmt.Sprintf("%v,%v", coord.X, coord.Y)
}
func (rect SMALL_RECT) String() string {
return fmt.Sprintf("(%v,%v),(%v,%v)", rect.Left, rect.Top, rect.Right, rect.Bottom)
}
// checkError evaluates the results of a Windows API call and returns the error if it failed.
func checkError(r1, r2 uintptr, err error) error {
// Windows APIs return non-zero to indicate success
if r1 != 0 {
return nil
}
// Return the error if provided, otherwise default to EINVAL
if err != nil {
return err
}
return syscall.EINVAL
}
// coordToPointer converts a COORD into a uintptr (by fooling the type system).
func coordToPointer(c COORD) uintptr {
// Note: This code assumes the two SHORTs are correctly laid out; the "cast" to uint32 is just to get a pointer to pass.
return uintptr(*((*uint32)(unsafe.Pointer(&c))))
}
// use is a no-op, but the compiler cannot see that it is.
// Calling use(p) ensures that p is kept live until that point.
func use(p interface{}) {}
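For orientation only, here is a hypothetical, Windows-only sketch that strings a few of the exported wrappers above together; it is not part of this commit, and the file name is invented. It resolves the stdout handle with GetStdFile from ansi.go, reads the screen buffer info (printed via the String() helper above), and repositions the cursor with SetConsoleCursorPosition.

// winterm_api_example.go -- hypothetical, illustrative only; builds on Windows only.
package main

import (
	"fmt"
	"syscall"

	"github.com/Azure/go-ansiterm/winterm"
)

func main() {
	// GetStdFile (ansi.go) maps the standard-handle constant to *os.File plus the raw handle.
	_, fd := winterm.GetStdFile(syscall.STD_OUTPUT_HANDLE)

	// Query the screen buffer; the CONSOLE_SCREEN_BUFFER_INFO String() method gives a readable summary.
	info, err := winterm.GetConsoleScreenBufferInfo(fd)
	if err != nil {
		panic(err)
	}
	fmt.Println("screen buffer:", info)

	// Move the cursor to the top-left corner of the buffer.
	if err := winterm.SetConsoleCursorPosition(fd, winterm.COORD{X: 0, Y: 0}); err != nil {
		panic(err)
	}
}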

100
vendor/github.com/Azure/go-ansiterm/winterm/attr_translation.go generated vendored Normal file
@@ -0,0 +1,100 @@
// +build windows
package winterm
import "github.com/Azure/go-ansiterm"
const (
FOREGROUND_COLOR_MASK = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE
BACKGROUND_COLOR_MASK = BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE
)
// collectAnsiIntoWindowsAttributes modifies the passed Windows text mode flags to reflect the
// request represented by the passed ANSI mode.
func collectAnsiIntoWindowsAttributes(windowsMode uint16, inverted bool, baseMode uint16, ansiMode int16) (uint16, bool) {
switch ansiMode {
// Mode styles
case ansiterm.ANSI_SGR_BOLD:
windowsMode = windowsMode | FOREGROUND_INTENSITY
case ansiterm.ANSI_SGR_DIM, ansiterm.ANSI_SGR_BOLD_DIM_OFF:
windowsMode &^= FOREGROUND_INTENSITY
case ansiterm.ANSI_SGR_UNDERLINE:
windowsMode = windowsMode | COMMON_LVB_UNDERSCORE
case ansiterm.ANSI_SGR_REVERSE:
inverted = true
case ansiterm.ANSI_SGR_REVERSE_OFF:
inverted = false
case ansiterm.ANSI_SGR_UNDERLINE_OFF:
windowsMode &^= COMMON_LVB_UNDERSCORE
// Foreground colors
case ansiterm.ANSI_SGR_FOREGROUND_DEFAULT:
windowsMode = (windowsMode &^ FOREGROUND_MASK) | (baseMode & FOREGROUND_MASK)
case ansiterm.ANSI_SGR_FOREGROUND_BLACK:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK)
case ansiterm.ANSI_SGR_FOREGROUND_RED:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED
case ansiterm.ANSI_SGR_FOREGROUND_GREEN:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN
case ansiterm.ANSI_SGR_FOREGROUND_YELLOW:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN
case ansiterm.ANSI_SGR_FOREGROUND_BLUE:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_MAGENTA:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_CYAN:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_WHITE:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE
// Background colors
case ansiterm.ANSI_SGR_BACKGROUND_DEFAULT:
// Black with no intensity
windowsMode = (windowsMode &^ BACKGROUND_MASK) | (baseMode & BACKGROUND_MASK)
case ansiterm.ANSI_SGR_BACKGROUND_BLACK:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK)
case ansiterm.ANSI_SGR_BACKGROUND_RED:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED
case ansiterm.ANSI_SGR_BACKGROUND_GREEN:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN
case ansiterm.ANSI_SGR_BACKGROUND_YELLOW:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN
case ansiterm.ANSI_SGR_BACKGROUND_BLUE:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_MAGENTA:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_CYAN:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_WHITE:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE
}
return windowsMode, inverted
}
// invertAttributes inverts the foreground and background colors of a Windows attributes value
func invertAttributes(windowsMode uint16) uint16 {
return (COMMON_LVB_MASK & windowsMode) | ((FOREGROUND_MASK & windowsMode) << 4) | ((BACKGROUND_MASK & windowsMode) >> 4)
}
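A hypothetical test-style sketch of how an SGR sequence for bold red text folds into Windows attributes; it is not part of this commit, the file and test names are invented, and it would need to sit inside the winterm package on Windows since the helper and constants are unexported and build-tagged.

// attr_translation_example_test.go -- hypothetical, illustrative only; Windows-only like the rest of winterm.
package winterm

import (
	"testing"

	"github.com/Azure/go-ansiterm"
)

func TestCollectAnsiIntoWindowsAttributesSketch(t *testing.T) {
	base := FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE // default: white text
	mode, inverted := base, false

	// SGR 31 replaces the foreground color bits with red.
	mode, inverted = collectAnsiIntoWindowsAttributes(mode, inverted, base, ansiterm.ANSI_SGR_FOREGROUND_RED)
	// SGR 1 (bold) adds FOREGROUND_INTENSITY on top.
	mode, inverted = collectAnsiIntoWindowsAttributes(mode, inverted, base, ansiterm.ANSI_SGR_BOLD)

	if mode != FOREGROUND_RED|FOREGROUND_INTENSITY || inverted {
		t.Fatalf("got mode %#x, inverted %v", mode, inverted)
	}
}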

101
vendor/github.com/Azure/go-ansiterm/winterm/cursor_helpers.go generated vendored Normal file
@@ -0,0 +1,101 @@
// +build windows
package winterm
const (
horizontal = iota
vertical
)
func (h *windowsAnsiEventHandler) getCursorWindow(info *CONSOLE_SCREEN_BUFFER_INFO) SMALL_RECT {
if h.originMode {
sr := h.effectiveSr(info.Window)
return SMALL_RECT{
Top: sr.top,
Bottom: sr.bottom,
Left: 0,
Right: info.Size.X - 1,
}
} else {
return SMALL_RECT{
Top: info.Window.Top,
Bottom: info.Window.Bottom,
Left: 0,
Right: info.Size.X - 1,
}
}
}
// setCursorPosition sets the cursor to the specified position, bounded to the screen size
func (h *windowsAnsiEventHandler) setCursorPosition(position COORD, window SMALL_RECT) error {
position.X = ensureInRange(position.X, window.Left, window.Right)
position.Y = ensureInRange(position.Y, window.Top, window.Bottom)
err := SetConsoleCursorPosition(h.fd, position)
if err != nil {
return err
}
h.logf("Cursor position set: (%d, %d)", position.X, position.Y)
return err
}
func (h *windowsAnsiEventHandler) moveCursorVertical(param int) error {
return h.moveCursor(vertical, param)
}
func (h *windowsAnsiEventHandler) moveCursorHorizontal(param int) error {
return h.moveCursor(horizontal, param)
}
func (h *windowsAnsiEventHandler) moveCursor(moveMode int, param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
switch moveMode {
case horizontal:
position.X += int16(param)
case vertical:
position.Y += int16(param)
}
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) moveCursorLine(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
position.X = 0
position.Y += int16(param)
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) moveCursorColumn(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
position.X = int16(param) - 1
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}

84
vendor/github.com/Azure/go-ansiterm/winterm/erase_helpers.go generated vendored Normal file
@@ -0,0 +1,84 @@
// +build windows
package winterm
import "github.com/Azure/go-ansiterm"
func (h *windowsAnsiEventHandler) clearRange(attributes uint16, fromCoord COORD, toCoord COORD) error {
// Ignore an invalid (negative area) request
if toCoord.Y < fromCoord.Y {
return nil
}
var err error
var coordStart = COORD{}
var coordEnd = COORD{}
xCurrent, yCurrent := fromCoord.X, fromCoord.Y
xEnd, yEnd := toCoord.X, toCoord.Y
// Clear any partial initial line
if xCurrent > 0 {
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yCurrent
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
xCurrent = 0
yCurrent += 1
}
// Clear intervening rectangular section
if yCurrent < yEnd {
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yEnd-1
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
xCurrent = 0
yCurrent = yEnd
}
// Clear remaining partial ending line
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yEnd
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) clearRect(attributes uint16, fromCoord COORD, toCoord COORD) error {
region := SMALL_RECT{Top: fromCoord.Y, Left: fromCoord.X, Bottom: toCoord.Y, Right: toCoord.X}
width := toCoord.X - fromCoord.X + 1
height := toCoord.Y - fromCoord.Y + 1
size := uint32(width) * uint32(height)
if size <= 0 {
return nil
}
buffer := make([]CHAR_INFO, size)
char := CHAR_INFO{ansiterm.FILL_CHARACTER, attributes}
for i := 0; i < int(size); i++ {
buffer[i] = char
}
err := WriteConsoleOutput(h.fd, buffer, COORD{X: width, Y: height}, COORD{X: 0, Y: 0}, &region)
if err != nil {
return err
}
return nil
}

118
vendor/github.com/Azure/go-ansiterm/winterm/scroll_helper.go generated vendored Normal file
@@ -0,0 +1,118 @@
// +build windows
package winterm
// effectiveSr gets the current effective scroll region in buffer coordinates
func (h *windowsAnsiEventHandler) effectiveSr(window SMALL_RECT) scrollRegion {
top := addInRange(window.Top, h.sr.top, window.Top, window.Bottom)
bottom := addInRange(window.Top, h.sr.bottom, window.Top, window.Bottom)
if top >= bottom {
top = window.Top
bottom = window.Bottom
}
return scrollRegion{top: top, bottom: bottom}
}
func (h *windowsAnsiEventHandler) scrollUp(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
sr := h.effectiveSr(info.Window)
return h.scroll(param, sr, info)
}
func (h *windowsAnsiEventHandler) scrollDown(param int) error {
return h.scrollUp(-param)
}
func (h *windowsAnsiEventHandler) deleteLines(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
start := info.CursorPosition.Y
sr := h.effectiveSr(info.Window)
// Lines cannot be inserted or deleted outside the scrolling region.
if start >= sr.top && start <= sr.bottom {
sr.top = start
return h.scroll(param, sr, info)
} else {
return nil
}
}
func (h *windowsAnsiEventHandler) insertLines(param int) error {
return h.deleteLines(-param)
}
// scroll scrolls the provided scroll region by param lines. The scroll region is in buffer coordinates.
func (h *windowsAnsiEventHandler) scroll(param int, sr scrollRegion, info *CONSOLE_SCREEN_BUFFER_INFO) error {
h.logf("scroll: scrollTop: %d, scrollBottom: %d", sr.top, sr.bottom)
h.logf("scroll: windowTop: %d, windowBottom: %d", info.Window.Top, info.Window.Bottom)
// Copy from and clip to the scroll region (full buffer width)
scrollRect := SMALL_RECT{
Top: sr.top,
Bottom: sr.bottom,
Left: 0,
Right: info.Size.X - 1,
}
// Origin to which area should be copied
destOrigin := COORD{
X: 0,
Y: sr.top - int16(param),
}
char := CHAR_INFO{
UnicodeChar: ' ',
Attributes: h.attributes,
}
if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) deleteCharacters(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
return h.scrollLine(param, info.CursorPosition, info)
}
func (h *windowsAnsiEventHandler) insertCharacters(param int) error {
return h.deleteCharacters(-param)
}
// scrollLine scrolls a line horizontally starting at the provided position by a number of columns.
func (h *windowsAnsiEventHandler) scrollLine(columns int, position COORD, info *CONSOLE_SCREEN_BUFFER_INFO) error {
// Copy from and clip to the scroll region (full buffer width)
scrollRect := SMALL_RECT{
Top: position.Y,
Bottom: position.Y,
Left: position.X,
Right: info.Size.X - 1,
}
// Origin to which area should be copied
destOrigin := COORD{
X: position.X - int16(columns),
Y: position.Y,
}
char := CHAR_INFO{
UnicodeChar: ' ',
Attributes: h.attributes,
}
if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil {
return err
}
return nil
}

9
vendor/github.com/Azure/go-ansiterm/winterm/utilities.go generated vendored Normal file
@@ -0,0 +1,9 @@
// +build windows
package winterm
// addInRange increments a value by the passed quantity while ensuring the values
// always remain within the supplied min / max range.
func addInRange(n int16, increment int16, min int16, max int16) int16 {
return ensureInRange(n+increment, min, max)
}

743
vendor/github.com/Azure/go-ansiterm/winterm/win_event_handler.go generated vendored Normal file
@@ -0,0 +1,743 @@
// +build windows
package winterm
import (
"bytes"
"log"
"os"
"strconv"
"github.com/Azure/go-ansiterm"
)
type windowsAnsiEventHandler struct {
fd uintptr
file *os.File
infoReset *CONSOLE_SCREEN_BUFFER_INFO
sr scrollRegion
buffer bytes.Buffer
attributes uint16
inverted bool
wrapNext bool
drewMarginByte bool
originMode bool
marginByte byte
curInfo *CONSOLE_SCREEN_BUFFER_INFO
curPos COORD
logf func(string, ...interface{})
}
type Option func(*windowsAnsiEventHandler)
func WithLogf(f func(string, ...interface{})) Option {
return func(w *windowsAnsiEventHandler) {
w.logf = f
}
}
func CreateWinEventHandler(fd uintptr, file *os.File, opts ...Option) ansiterm.AnsiEventHandler {
infoReset, err := GetConsoleScreenBufferInfo(fd)
if err != nil {
return nil
}
h := &windowsAnsiEventHandler{
fd: fd,
file: file,
infoReset: infoReset,
attributes: infoReset.Attributes,
}
for _, o := range opts {
o(h)
}
if isDebugEnv := os.Getenv(ansiterm.LogEnv); isDebugEnv == "1" {
logFile, _ := os.Create("winEventHandler.log")
logger := log.New(logFile, "", log.LstdFlags)
if h.logf != nil {
l := h.logf
h.logf = func(s string, v ...interface{}) {
l(s, v...)
logger.Printf(s, v...)
}
} else {
h.logf = logger.Printf
}
}
if h.logf == nil {
h.logf = func(string, ...interface{}) {}
}
return h
}
type scrollRegion struct {
top int16
bottom int16
}
// simulateLF simulates a LF or CR+LF by scrolling if necessary to handle the
// current cursor position and scroll region settings, in which case it returns
// true. If no special handling is necessary, then it does nothing and returns
// false.
//
// In the false case, the caller should ensure that a carriage return
// and line feed are inserted or that the text is otherwise wrapped.
func (h *windowsAnsiEventHandler) simulateLF(includeCR bool) (bool, error) {
if h.wrapNext {
if err := h.Flush(); err != nil {
return false, err
}
h.clearWrap()
}
pos, info, err := h.getCurrentInfo()
if err != nil {
return false, err
}
sr := h.effectiveSr(info.Window)
if pos.Y == sr.bottom {
// Scrolling is necessary. Let Windows automatically scroll if the scrolling region
// is the full window.
if sr.top == info.Window.Top && sr.bottom == info.Window.Bottom {
if includeCR {
pos.X = 0
h.updatePos(pos)
}
return false, nil
}
// A custom scroll region is active. Scroll the window manually to simulate
// the LF.
if err := h.Flush(); err != nil {
return false, err
}
h.logf("Simulating LF inside scroll region")
if err := h.scrollUp(1); err != nil {
return false, err
}
if includeCR {
pos.X = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return false, err
}
}
return true, nil
} else if pos.Y < info.Window.Bottom {
// Let Windows handle the LF.
pos.Y++
if includeCR {
pos.X = 0
}
h.updatePos(pos)
return false, nil
} else {
// The cursor is at the bottom of the screen but outside the scroll
// region. Skip the LF.
h.logf("Simulating LF outside scroll region")
if includeCR {
if err := h.Flush(); err != nil {
return false, err
}
pos.X = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return false, err
}
}
return true, nil
}
}
// executeLF executes a LF without a CR.
func (h *windowsAnsiEventHandler) executeLF() error {
handled, err := h.simulateLF(false)
if err != nil {
return err
}
if !handled {
// Windows LF will reset the cursor column position. Write the LF
// and restore the cursor position.
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED)
if pos.X != 0 {
if err := h.Flush(); err != nil {
return err
}
h.logf("Resetting cursor position for LF without CR")
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
}
}
return nil
}
func (h *windowsAnsiEventHandler) Print(b byte) error {
if h.wrapNext {
h.buffer.WriteByte(h.marginByte)
h.clearWrap()
if _, err := h.simulateLF(true); err != nil {
return err
}
}
pos, info, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X == info.Size.X-1 {
h.wrapNext = true
h.marginByte = b
} else {
pos.X++
h.updatePos(pos)
h.buffer.WriteByte(b)
}
return nil
}
func (h *windowsAnsiEventHandler) Execute(b byte) error {
switch b {
case ansiterm.ANSI_TAB:
h.logf("Execute(TAB)")
// Move to the next tab stop, but preserve auto-wrap if already set.
if !h.wrapNext {
pos, info, err := h.getCurrentInfo()
if err != nil {
return err
}
pos.X = (pos.X + 8) - pos.X%8
if pos.X >= info.Size.X {
pos.X = info.Size.X - 1
}
if err := h.Flush(); err != nil {
return err
}
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
}
return nil
case ansiterm.ANSI_BEL:
h.buffer.WriteByte(ansiterm.ANSI_BEL)
return nil
case ansiterm.ANSI_BACKSPACE:
if h.wrapNext {
if err := h.Flush(); err != nil {
return err
}
h.clearWrap()
}
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X > 0 {
pos.X--
h.updatePos(pos)
h.buffer.WriteByte(ansiterm.ANSI_BACKSPACE)
}
return nil
case ansiterm.ANSI_VERTICAL_TAB, ansiterm.ANSI_FORM_FEED:
// Treat as true LF.
return h.executeLF()
case ansiterm.ANSI_LINE_FEED:
// Simulate a CR and LF for now since there is no way in go-ansiterm
// to tell if the LF should include CR (and more things break when it's
// missing than when it's incorrectly added).
handled, err := h.simulateLF(true)
if handled || err != nil {
return err
}
return h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED)
case ansiterm.ANSI_CARRIAGE_RETURN:
if h.wrapNext {
if err := h.Flush(); err != nil {
return err
}
h.clearWrap()
}
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X != 0 {
pos.X = 0
h.updatePos(pos)
h.buffer.WriteByte(ansiterm.ANSI_CARRIAGE_RETURN)
}
return nil
default:
return nil
}
}
func (h *windowsAnsiEventHandler) CUU(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CUU: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorVertical(-param)
}
func (h *windowsAnsiEventHandler) CUD(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CUD: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorVertical(param)
}
func (h *windowsAnsiEventHandler) CUF(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CUF: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorHorizontal(param)
}
func (h *windowsAnsiEventHandler) CUB(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CUB: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorHorizontal(-param)
}
func (h *windowsAnsiEventHandler) CNL(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CNL: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorLine(param)
}
func (h *windowsAnsiEventHandler) CPL(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CPL: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorLine(-param)
}
func (h *windowsAnsiEventHandler) CHA(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CHA: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorColumn(param)
}
func (h *windowsAnsiEventHandler) VPA(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("VPA: [[%d]]", param)
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
window := h.getCursorWindow(info)
position := info.CursorPosition
position.Y = window.Top + int16(param) - 1
return h.setCursorPosition(position, window)
}
func (h *windowsAnsiEventHandler) CUP(row int, col int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("CUP: [[%d %d]]", row, col)
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
window := h.getCursorWindow(info)
position := COORD{window.Left + int16(col) - 1, window.Top + int16(row) - 1}
return h.setCursorPosition(position, window)
}
func (h *windowsAnsiEventHandler) HVP(row int, col int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("HVP: [[%d %d]]", row, col)
h.clearWrap()
return h.CUP(row, col)
}
func (h *windowsAnsiEventHandler) DECTCEM(visible bool) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DECTCEM: [%v]", []string{strconv.FormatBool(visible)})
h.clearWrap()
return nil
}
func (h *windowsAnsiEventHandler) DECOM(enable bool) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DECOM: [%v]", []string{strconv.FormatBool(enable)})
h.clearWrap()
h.originMode = enable
return h.CUP(1, 1)
}
func (h *windowsAnsiEventHandler) DECCOLM(use132 bool) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DECCOLM: [%v]", []string{strconv.FormatBool(use132)})
h.clearWrap()
if err := h.ED(2); err != nil {
return err
}
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
targetWidth := int16(80)
if use132 {
targetWidth = 132
}
if info.Size.X < targetWidth {
if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil {
h.logf("set buffer failed: %v", err)
return err
}
}
window := info.Window
window.Left = 0
window.Right = targetWidth - 1
if err := SetConsoleWindowInfo(h.fd, true, window); err != nil {
h.logf("set window failed: %v", err)
return err
}
if info.Size.X > targetWidth {
if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil {
h.logf("set buffer failed: %v", err)
return err
}
}
return SetConsoleCursorPosition(h.fd, COORD{0, 0})
}
func (h *windowsAnsiEventHandler) ED(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("ED: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
// [J -- Erases from the cursor to the end of the screen, including the cursor position.
// [1J -- Erases from the beginning of the screen to the cursor, including the cursor position.
// [2J -- Erases the complete display. The cursor does not move.
// Notes:
// -- Clearing the entire buffer, versus just the Window, works best for Windows Consoles
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
var start COORD
var end COORD
switch param {
case 0:
start = info.CursorPosition
end = COORD{info.Size.X - 1, info.Size.Y - 1}
case 1:
start = COORD{0, 0}
end = info.CursorPosition
case 2:
start = COORD{0, 0}
end = COORD{info.Size.X - 1, info.Size.Y - 1}
}
err = h.clearRange(h.attributes, start, end)
if err != nil {
return err
}
// If the whole buffer was cleared, move the window to the top while preserving
// the window-relative cursor position.
if param == 2 {
pos := info.CursorPosition
window := info.Window
pos.Y -= window.Top
window.Bottom -= window.Top
window.Top = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
if err := SetConsoleWindowInfo(h.fd, true, window); err != nil {
return err
}
}
return nil
}
func (h *windowsAnsiEventHandler) EL(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("EL: [%v]", strconv.Itoa(param))
h.clearWrap()
// [K -- Erases from the cursor to the end of the line, including the cursor position.
// [1K -- Erases from the beginning of the line to the cursor, including the cursor position.
// [2K -- Erases the complete line.
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
var start COORD
var end COORD
switch param {
case 0:
start = info.CursorPosition
end = COORD{info.Size.X, info.CursorPosition.Y}
case 1:
start = COORD{0, info.CursorPosition.Y}
end = info.CursorPosition
case 2:
start = COORD{0, info.CursorPosition.Y}
end = COORD{info.Size.X, info.CursorPosition.Y}
}
err = h.clearRange(h.attributes, start, end)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) IL(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("IL: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.insertLines(param)
}
func (h *windowsAnsiEventHandler) DL(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DL: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.deleteLines(param)
}
func (h *windowsAnsiEventHandler) ICH(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("ICH: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.insertCharacters(param)
}
func (h *windowsAnsiEventHandler) DCH(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DCH: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.deleteCharacters(param)
}
func (h *windowsAnsiEventHandler) SGR(params []int) error {
if err := h.Flush(); err != nil {
return err
}
strings := []string{}
for _, v := range params {
strings = append(strings, strconv.Itoa(v))
}
h.logf("SGR: [%v]", strings)
if len(params) <= 0 {
h.attributes = h.infoReset.Attributes
h.inverted = false
} else {
for _, attr := range params {
if attr == ansiterm.ANSI_SGR_RESET {
h.attributes = h.infoReset.Attributes
h.inverted = false
continue
}
h.attributes, h.inverted = collectAnsiIntoWindowsAttributes(h.attributes, h.inverted, h.infoReset.Attributes, int16(attr))
}
}
attributes := h.attributes
if h.inverted {
attributes = invertAttributes(attributes)
}
err := SetConsoleTextAttribute(h.fd, attributes)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) SU(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("SU: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.scrollUp(param)
}
func (h *windowsAnsiEventHandler) SD(param int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("SD: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.scrollDown(param)
}
func (h *windowsAnsiEventHandler) DA(params []string) error {
h.logf("DA: [%v]", params)
// DA cannot be implemented because it must send data on the VT100 input stream,
// which is not available to go-ansiterm.
return nil
}
func (h *windowsAnsiEventHandler) DECSTBM(top int, bottom int) error {
if err := h.Flush(); err != nil {
return err
}
h.logf("DECSTBM: [%d, %d]", top, bottom)
// Windows is 0 indexed, Linux is 1 indexed
h.sr.top = int16(top - 1)
h.sr.bottom = int16(bottom - 1)
// This command also moves the cursor to the origin.
h.clearWrap()
return h.CUP(1, 1)
}
func (h *windowsAnsiEventHandler) RI() error {
if err := h.Flush(); err != nil {
return err
}
h.logf("RI: []")
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
sr := h.effectiveSr(info.Window)
if info.CursorPosition.Y == sr.top {
return h.scrollDown(1)
}
return h.moveCursorVertical(-1)
}
func (h *windowsAnsiEventHandler) IND() error {
h.logf("IND: []")
return h.executeLF()
}
func (h *windowsAnsiEventHandler) Flush() error {
h.curInfo = nil
if h.buffer.Len() > 0 {
h.logf("Flush: [%s]", h.buffer.Bytes())
if _, err := h.buffer.WriteTo(h.file); err != nil {
return err
}
}
if h.wrapNext && !h.drewMarginByte {
h.logf("Flush: drawing margin byte '%c'", h.marginByte)
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
charInfo := []CHAR_INFO{{UnicodeChar: uint16(h.marginByte), Attributes: info.Attributes}}
size := COORD{1, 1}
position := COORD{0, 0}
region := SMALL_RECT{Left: info.CursorPosition.X, Top: info.CursorPosition.Y, Right: info.CursorPosition.X, Bottom: info.CursorPosition.Y}
if err := WriteConsoleOutput(h.fd, charInfo, size, position, &region); err != nil {
return err
}
h.drewMarginByte = true
}
return nil
}
// getCurrentInfo ensures that the current console screen information has been queried
// since the last call to Flush(). It must be called before accessing h.curInfo or h.curPos.
func (h *windowsAnsiEventHandler) getCurrentInfo() (COORD, *CONSOLE_SCREEN_BUFFER_INFO, error) {
if h.curInfo == nil {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return COORD{}, nil, err
}
h.curInfo = info
h.curPos = info.CursorPosition
}
return h.curPos, h.curInfo, nil
}
func (h *windowsAnsiEventHandler) updatePos(pos COORD) {
if h.curInfo == nil {
panic("failed to call getCurrentInfo before calling updatePos")
}
h.curPos = pos
}
// clearWrap clears the state where the cursor is in the margin
// waiting for the next character before wrapping the line. This must
// be done before most operations that act on the cursor.
func (h *windowsAnsiEventHandler) clearWrap() {
h.wrapNext = false
h.drewMarginByte = false
}

12
vendor/github.com/Luzifer/rconfig/.travis.yml generated vendored Normal file
View file

@ -0,0 +1,12 @@
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- tip
script: go test -v -race -cover ./...

22
vendor/github.com/Luzifer/rconfig/History.md generated vendored Normal file
View file

@ -0,0 +1,22 @@
# 2.2.0 / 2018-09-18
* Add support for time.Time flags
# 2.1.0 / 2018-08-02
* Add AutoEnv feature
# 2.0.0 / 2018-08-02
* Breaking: Ensure an empty default string does not yield a slice with 1 element
Though this is just a tiny change, it does change the default behaviour, so I'm marking this as a breaking change. You should ensure your code is fine with the changes.
# 1.2.0 / 2017-06-19
* Add ParseAndValidate method
# 1.1.0 / 2016-06-28
* Support time.Duration config parameters
* Added goreportcard badge
* Added testcase for using bool with ENV and default

202
vendor/github.com/Luzifer/rconfig/LICENSE generated vendored Normal file
View file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015- Knut Ahlers <knut@ahlers.me>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

88
vendor/github.com/Luzifer/rconfig/README.md generated vendored Normal file
View file

@ -0,0 +1,88 @@
[![Build Status](https://travis-ci.org/Luzifer/rconfig.svg?branch=master)](https://travis-ci.org/Luzifer/rconfig)
[![Go Report Card](https://goreportcard.com/badge/github.com/Luzifer/rconfig)](https://goreportcard.com/report/github.com/Luzifer/rconfig)
[![Documentation](https://badges.fyi/static/godoc/reference/5272B4)](https://godoc.org/github.com/Luzifer/rconfig)
![](https://badges.fyi/github/license/Luzifer/rconfig)
[![](https://badges.fyi/github/latest-tag/Luzifer/rconfig)](https://gopkg.in/Luzifer/rconfig.v2)
## Description
> Package rconfig implements a CLI configuration reader with struct-embedded defaults, environment variables and posix compatible flag parsing using the [pflag](https://github.com/spf13/pflag) library.
## Installation
Install by running:
```
go get -u github.com/Luzifer/rconfig
```
OR fetch a specific version:
```
go get -u gopkg.in/luzifer/rconfig.v2
```
Run tests by running:
```
go test -v -race -cover github.com/Luzifer/rconfig
```
## Usage
A very simple use case is to just configure a struct inside the vars section of your `main.go` and to parse the command line flags from the `main()` function:
```go
package main
import (
"fmt"
"github.com/Luzifer/rconfig"
)
var (
cfg = struct {
Username string `default:"unknown" flag:"user" description:"Your name"`
Details struct {
Age int `default:"25" flag:"age" env:"age" description:"Your age"`
}
}{}
)
func main() {
rconfig.Parse(&cfg)
fmt.Printf("Hello %s, happy birthday for your %dth birthday.",
cfg.Username,
cfg.Details.Age)
}
```
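With the struct above, a hypothetical invocation such as `./app --user Jane --age 30` overrides both defaults, while setting only the `age` environment variable overrides just the age; flags take precedence over environment variables, which in turn take precedence over the `default` tag.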
### Provide variable defaults by using a file
Given you have a file `~/.myapp.yml` containing some secrets or usernames (for the example below the username is assumed to be "luzifer") as a default configuration for your application, you can use this source code to load the defaults from that file using the `vardefault` tag in your configuration struct.
The order of the directives (lower number = higher precedence):
1. Flags provided in command line
1. Environment variables
1. Variable defaults (`vardefault` tag in the struct)
1. `default` tag in the struct
```go
var cfg = struct {
Username string `vardefault:"username" flag:"username" description:"Your username"`
}{}
func main() {
rconfig.SetVariableDefaults(rconfig.VarDefaultsFromYAMLFile("~/.myapp.yml"))
rconfig.Parse(&cfg)
fmt.Printf("Username = %s", cfg.Username)
// Output: Username = luzifer
}
```
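For reference, a minimal `~/.myapp.yml` matching this sketch needs nothing more than the `username` key (the value "luzifer" mirrors the assumption above):
```
username: luzifer
```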
## More info
You can see the full reference documentation of the rconfig package [at godoc.org](https://godoc.org/github.com/Luzifer/rconfig), or through go's standard documentation system by running `godoc -http=:6060` and browsing to [http://localhost:6060/pkg/github.com/Luzifer/rconfig](http://localhost:6060/pkg/github.com/Luzifer/rconfig) after installation.

64
vendor/github.com/Luzifer/rconfig/autoenv.go generated vendored Normal file
View file

@ -0,0 +1,64 @@
package rconfig
import "strings"
type characterClass [2]rune
func (c characterClass) Contains(r rune) bool {
return c[0] <= r && c[1] >= r
}
type characterClasses []characterClass
func (c characterClasses) Contains(r rune) bool {
for _, cc := range c {
if cc.Contains(r) {
return true
}
}
return false
}
var (
charGroupUpperLetter = characterClass{'A', 'Z'}
charGroupLowerLetter = characterClass{'a', 'z'}
charGroupNumber = characterClass{'0', '9'}
charGroupLowerNumber = characterClasses{charGroupLowerLetter, charGroupNumber}
)
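// deriveEnvVarName splits a camel-case field name into words and joins them
// with underscores in upper case, e.g. "MyFieldName" becomes "MY_FIELD_NAME".
// It is used by the AutoEnv feature when no explicit `env` tag is present.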
func deriveEnvVarName(s string) string {
var (
words []string
word []rune
)
for _, l := range s {
switch {
case charGroupUpperLetter.Contains(l):
if len(word) > 0 && charGroupLowerNumber.Contains(word[len(word)-1]) {
words = append(words, string(word))
word = []rune{}
}
word = append(word, l)
case charGroupLowerLetter.Contains(l):
if len(word) > 1 && charGroupUpperLetter.Contains(word[len(word)-1]) {
words = append(words, string(word[0:len(word)-1]))
word = word[len(word)-1:]
}
word = append(word, l)
case charGroupNumber.Contains(l):
word = append(word, l)
default:
if len(word) > 0 {
words = append(words, string(word))
}
word = []rune{}
}
}
words = append(words, string(word))
return strings.ToUpper(strings.Join(words, "_"))
}

456
vendor/github.com/Luzifer/rconfig/config.go generated vendored Normal file
View file

@ -0,0 +1,456 @@
// Package rconfig implements a CLI configuration reader with struct-embedded
// defaults, environment variables and posix compatible flag parsing using
// the pflag library.
package rconfig
import (
"errors"
"fmt"
"os"
"reflect"
"strconv"
"strings"
"time"
"github.com/spf13/pflag"
validator "gopkg.in/validator.v2"
)
type afterFunc func() error
var (
autoEnv bool
fs *pflag.FlagSet
variableDefaults map[string]string
timeParserFormats = []string{
// Default constants
time.RFC3339Nano, time.RFC3339,
time.RFC1123Z, time.RFC1123,
time.RFC822Z, time.RFC822,
time.RFC850, time.RubyDate, time.UnixDate, time.ANSIC,
"2006-01-02 15:04:05.999999999 -0700 MST",
// More uncommon time formats
"2006-01-02 15:04:05", "2006-01-02 15:04:05Z07:00", // Simplified ISO time format
"01/02/2006 15:04:05", "01/02/2006 15:04:05Z07:00", // US time format
"02.01.2006 15:04:05", "02.01.2006 15:04:05Z07:00", // DE time format
}
)
func init() {
variableDefaults = make(map[string]string)
}
// Parse takes the pointer to a struct filled with variables which should be read
// from ENV, default or flag. The precedence in this is flag > ENV > default. So
// if a flag is specified on the CLI it will overwrite the ENV and otherwise ENV
// overwrites the default specified.
//
// For your configuration struct you can use the following struct-tags to control
// the behavior of rconfig:
//
// default: Set a default value
// vardefault: Read the default value from the variable defaults
// env: Read the value from this environment variable
// flag: Flag to read in format "long,short" (for example "listen,l")
// description: A help text for Usage output to guide your users
//
// The format in which you need to specify those values can be seen in the example
// for this function.
//
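// As an illustration (not taken from the package's own example file), a struct
// using these tags might look like:
//
//	var cfg = struct {
//		Listen string `default:":8080" flag:"listen,l" env:"LISTEN" description:"Address to listen on"`
//	}{}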
func Parse(config interface{}) error {
return parse(config, nil)
}
// ParseAndValidate works exactly like Parse but implements an additional run of
// the go-validator package on the configuration struct. Therefore additional struct
// tags are supported as described in the readme file of the go-validator package:
//
// https://github.com/go-validator/validator/tree/v2#usage
func ParseAndValidate(config interface{}) error {
return parseAndValidate(config, nil)
}
// Args returns the non-flag command-line arguments.
func Args() []string {
return fs.Args()
}
// AddTimeParserFormats adds custom formats to parse time.Time fields
func AddTimeParserFormats(f ...string) {
timeParserFormats = append(timeParserFormats, f...)
}
// AutoEnv enables or disables automated env variable guessing. If no `env` struct
// tag was set and AutoEnv is enabled, the env variable name is derived from the
// name of the field: `MyFieldName` will get `MY_FIELD_NAME`
func AutoEnv(enable bool) {
autoEnv = enable
}
// Usage prints a basic usage with the corresponding defaults for the flags to
// os.Stderr. The defaults are derived from the `default` struct-tag and the ENV.
func Usage() {
if fs != nil && fs.Parsed() {
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
fs.PrintDefaults()
}
}
// SetVariableDefaults presets the parser with a map of default values to be used
// when specifying the vardefault tag
func SetVariableDefaults(defaults map[string]string) {
variableDefaults = defaults
}
func parseAndValidate(in interface{}, args []string) error {
if err := parse(in, args); err != nil {
return err
}
return validator.Validate(in)
}
func parse(in interface{}, args []string) error {
if args == nil {
args = os.Args
}
fs = pflag.NewFlagSet(os.Args[0], pflag.ExitOnError)
afterFuncs, err := execTags(in, fs)
if err != nil {
return err
}
if err := fs.Parse(args); err != nil {
return err
}
if afterFuncs != nil {
for _, f := range afterFuncs {
if err := f(); err != nil {
return err
}
}
}
return nil
}
func execTags(in interface{}, fs *pflag.FlagSet) ([]afterFunc, error) {
if reflect.TypeOf(in).Kind() != reflect.Ptr {
return nil, errors.New("Calling parser with non-pointer")
}
if reflect.ValueOf(in).Elem().Kind() != reflect.Struct {
return nil, errors.New("Calling parser with pointer to non-struct")
}
afterFuncs := []afterFunc{}
st := reflect.ValueOf(in).Elem()
for i := 0; i < st.NumField(); i++ {
valField := st.Field(i)
typeField := st.Type().Field(i)
if typeField.Tag.Get("default") == "" && typeField.Tag.Get("env") == "" && typeField.Tag.Get("flag") == "" && typeField.Type.Kind() != reflect.Struct {
// None of our supported tags is present and it's not a sub-struct
continue
}
value := varDefault(typeField.Tag.Get("vardefault"), typeField.Tag.Get("default"))
value = envDefault(typeField, value)
parts := strings.Split(typeField.Tag.Get("flag"), ",")
switch typeField.Type {
case reflect.TypeOf(time.Duration(0)):
v, err := time.ParseDuration(value)
if err != nil {
if value == "" {
v = time.Duration(0)
} else {
return nil, err
}
}
if typeField.Tag.Get("flag") != "" {
if len(parts) == 1 {
fs.DurationVar(valField.Addr().Interface().(*time.Duration), parts[0], v, typeField.Tag.Get("description"))
} else {
fs.DurationVarP(valField.Addr().Interface().(*time.Duration), parts[0], parts[1], v, typeField.Tag.Get("description"))
}
} else {
valField.Set(reflect.ValueOf(v))
}
continue
case reflect.TypeOf(time.Time{}):
var sVar string
if typeField.Tag.Get("flag") != "" {
if len(parts) == 1 {
fs.StringVar(&sVar, parts[0], value, typeField.Tag.Get("description"))
} else {
fs.StringVarP(&sVar, parts[0], parts[1], value, typeField.Tag.Get("description"))
}
} else {
sVar = value
}
afterFuncs = append(afterFuncs, func(valField reflect.Value, sVar *string) func() error {
return func() error {
if *sVar == "" {
// No time, no problem
return nil
}
// Check whether we could have a timestamp
if ts, err := strconv.ParseInt(*sVar, 10, 64); err == nil {
t := time.Unix(ts, 0)
valField.Set(reflect.ValueOf(t))
return nil
}
// We haven't, so let's walk through the possible time formats
matched := false
for _, tf := range timeParserFormats {
if t, err := time.Parse(tf, *sVar); err == nil {
matched = true
valField.Set(reflect.ValueOf(t))
return nil
}
}
if !matched {
return fmt.Errorf("Value %q did not match expected time formats", *sVar)
}
return nil
}
}(valField, &sVar))
continue
}
switch typeField.Type.Kind() {
case reflect.String:
if typeField.Tag.Get("flag") != "" {
if len(parts) == 1 {
fs.StringVar(valField.Addr().Interface().(*string), parts[0], value, typeField.Tag.Get("description"))
} else {
fs.StringVarP(valField.Addr().Interface().(*string), parts[0], parts[1], value, typeField.Tag.Get("description"))
}
} else {
valField.SetString(value)
}
case reflect.Bool:
v := value == "true"
if typeField.Tag.Get("flag") != "" {
if len(parts) == 1 {
fs.BoolVar(valField.Addr().Interface().(*bool), parts[0], v, typeField.Tag.Get("description"))
} else {
fs.BoolVarP(valField.Addr().Interface().(*bool), parts[0], parts[1], v, typeField.Tag.Get("description"))
}
} else {
valField.SetBool(v)
}
case reflect.Int, reflect.Int8, reflect.Int32, reflect.Int64:
vt, err := strconv.ParseInt(value, 10, 64)
if err != nil {
if value == "" {
vt = 0
} else {
return nil, err
}
}
if typeField.Tag.Get("flag") != "" {
registerFlagInt(typeField.Type.Kind(), fs, valField.Addr().Interface(), parts, vt, typeField.Tag.Get("description"))
} else {
valField.SetInt(vt)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
vt, err := strconv.ParseUint(value, 10, 64)
if err != nil {
if value == "" {
vt = 0
} else {
return nil, err
}
}
if typeField.Tag.Get("flag") != "" {
registerFlagUint(typeField.Type.Kind(), fs, valField.Addr().Interface(), parts, vt, typeField.Tag.Get("description"))
} else {
valField.SetUint(vt)
}
case reflect.Float32, reflect.Float64:
vt, err := strconv.ParseFloat(value, 64)
if err != nil {
if value == "" {
vt = 0.0
} else {
return nil, err
}
}
if typeField.Tag.Get("flag") != "" {
registerFlagFloat(typeField.Type.Kind(), fs, valField.Addr().Interface(), parts, vt, typeField.Tag.Get("description"))
} else {
valField.SetFloat(vt)
}
case reflect.Struct:
afs, err := execTags(valField.Addr().Interface(), fs)
if err != nil {
return nil, err
}
afterFuncs = append(afterFuncs, afs...)
case reflect.Slice:
switch typeField.Type.Elem().Kind() {
case reflect.Int:
def := []int{}
for _, v := range strings.Split(value, ",") {
it, err := strconv.ParseInt(strings.TrimSpace(v), 10, 64)
if err != nil {
return nil, err
}
def = append(def, int(it))
}
if len(parts) == 1 {
fs.IntSliceVar(valField.Addr().Interface().(*[]int), parts[0], def, typeField.Tag.Get("description"))
} else {
fs.IntSliceVarP(valField.Addr().Interface().(*[]int), parts[0], parts[1], def, typeField.Tag.Get("description"))
}
case reflect.String:
del := typeField.Tag.Get("delimiter")
if len(del) == 0 {
del = ","
}
var def = []string{}
if value != "" {
def = strings.Split(value, del)
}
if len(parts) == 1 {
fs.StringSliceVar(valField.Addr().Interface().(*[]string), parts[0], def, typeField.Tag.Get("description"))
} else {
fs.StringSliceVarP(valField.Addr().Interface().(*[]string), parts[0], parts[1], def, typeField.Tag.Get("description"))
}
}
}
}
return afterFuncs, nil
}
func registerFlagFloat(t reflect.Kind, fs *pflag.FlagSet, field interface{}, parts []string, vt float64, desc string) {
switch t {
case reflect.Float32:
if len(parts) == 1 {
fs.Float32Var(field.(*float32), parts[0], float32(vt), desc)
} else {
fs.Float32VarP(field.(*float32), parts[0], parts[1], float32(vt), desc)
}
case reflect.Float64:
if len(parts) == 1 {
fs.Float64Var(field.(*float64), parts[0], float64(vt), desc)
} else {
fs.Float64VarP(field.(*float64), parts[0], parts[1], float64(vt), desc)
}
}
}
func registerFlagInt(t reflect.Kind, fs *pflag.FlagSet, field interface{}, parts []string, vt int64, desc string) {
switch t {
case reflect.Int:
if len(parts) == 1 {
fs.IntVar(field.(*int), parts[0], int(vt), desc)
} else {
fs.IntVarP(field.(*int), parts[0], parts[1], int(vt), desc)
}
case reflect.Int8:
if len(parts) == 1 {
fs.Int8Var(field.(*int8), parts[0], int8(vt), desc)
} else {
fs.Int8VarP(field.(*int8), parts[0], parts[1], int8(vt), desc)
}
case reflect.Int32:
if len(parts) == 1 {
fs.Int32Var(field.(*int32), parts[0], int32(vt), desc)
} else {
fs.Int32VarP(field.(*int32), parts[0], parts[1], int32(vt), desc)
}
case reflect.Int64:
if len(parts) == 1 {
fs.Int64Var(field.(*int64), parts[0], int64(vt), desc)
} else {
fs.Int64VarP(field.(*int64), parts[0], parts[1], int64(vt), desc)
}
}
}
func registerFlagUint(t reflect.Kind, fs *pflag.FlagSet, field interface{}, parts []string, vt uint64, desc string) {
switch t {
case reflect.Uint:
if len(parts) == 1 {
fs.UintVar(field.(*uint), parts[0], uint(vt), desc)
} else {
fs.UintVarP(field.(*uint), parts[0], parts[1], uint(vt), desc)
}
case reflect.Uint8:
if len(parts) == 1 {
fs.Uint8Var(field.(*uint8), parts[0], uint8(vt), desc)
} else {
fs.Uint8VarP(field.(*uint8), parts[0], parts[1], uint8(vt), desc)
}
case reflect.Uint16:
if len(parts) == 1 {
fs.Uint16Var(field.(*uint16), parts[0], uint16(vt), desc)
} else {
fs.Uint16VarP(field.(*uint16), parts[0], parts[1], uint16(vt), desc)
}
case reflect.Uint32:
if len(parts) == 1 {
fs.Uint32Var(field.(*uint32), parts[0], uint32(vt), desc)
} else {
fs.Uint32VarP(field.(*uint32), parts[0], parts[1], uint32(vt), desc)
}
case reflect.Uint64:
if len(parts) == 1 {
fs.Uint64Var(field.(*uint64), parts[0], uint64(vt), desc)
} else {
fs.Uint64VarP(field.(*uint64), parts[0], parts[1], uint64(vt), desc)
}
}
}
func envDefault(field reflect.StructField, def string) string {
value := def
env := field.Tag.Get("env")
if env == "" && autoEnv {
env = deriveEnvVarName(field.Name)
}
if env != "" {
if e := os.Getenv(env); e != "" {
value = e
}
}
return value
}
func varDefault(name, def string) string {
value := def
if name != "" {
if v, ok := variableDefaults[name]; ok {
value = v
}
}
return value
}

View file

@ -0,0 +1,27 @@
package rconfig
import (
"io/ioutil"
"gopkg.in/yaml.v2"
)
// VarDefaultsFromYAMLFile reads contents of a file and calls VarDefaultsFromYAML
func VarDefaultsFromYAMLFile(filename string) map[string]string {
data, err := ioutil.ReadFile(filename)
if err != nil {
return make(map[string]string)
}
return VarDefaultsFromYAML(data)
}
// VarDefaultsFromYAML creates a vardefaults map from YAML raw data
func VarDefaultsFromYAML(in []byte) map[string]string {
out := make(map[string]string)
err := yaml.Unmarshal(in, &out)
if err != nil {
return make(map[string]string)
}
return out
}

1
vendor/github.com/Microsoft/go-winio/.gitignore generated vendored Normal file
View file

@ -0,0 +1 @@
*.exe

22
vendor/github.com/Microsoft/go-winio/LICENSE generated vendored Normal file
View file

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Microsoft
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

22
vendor/github.com/Microsoft/go-winio/README.md generated vendored Normal file
View file

@ -0,0 +1,22 @@
# go-winio
This repository contains utilities for efficiently performing Win32 IO operations in
Go. Currently, this is focused on accessing named pipes and other file handles, and
for using named pipes as a net transport.
This code relies on IO completion ports to avoid blocking IO on system threads, allowing Go
to reuse the thread to schedule another goroutine. This limits support to Windows Vista and
newer operating systems. This is similar to the implementation of network sockets in Go's net
package.
Please see the LICENSE file for licensing information.
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information
see the [Code of Conduct
FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional
questions or comments.
Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
for another named pipe implementation.
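As a rough, Windows-only sketch of how the vendored backup-stream helpers in `backup.go` fit together (the path and access flags below are placeholders, not part of the upstream README):
```go
package main

import (
	"fmt"
	"io"
	"syscall"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Open the file with backup semantics (placeholder path, standard syscall flags).
	f, err := winio.OpenForBackup(`C:\temp\example.txt`, syscall.GENERIC_READ, syscall.FILE_SHARE_READ, syscall.OPEN_EXISTING)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Wrap the handle in a BackupRead-based reader and walk its backup streams.
	br := winio.NewBackupFileReader(f, false)
	defer br.Close()

	sr := winio.NewBackupStreamReader(br)
	for {
		hdr, err := sr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("stream id=%d size=%d name=%q\n", hdr.Id, hdr.Size, hdr.Name)
	}
}
```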

280
vendor/github.com/Microsoft/go-winio/backup.go generated vendored Normal file
View file

@ -0,0 +1,280 @@
// +build windows
package winio
import (
"encoding/binary"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"runtime"
"syscall"
"unicode/utf16"
)
//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
//sys backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupWrite
const (
BackupData = uint32(iota + 1)
BackupEaData
BackupSecurity
BackupAlternateData
BackupLink
BackupPropertyData
BackupObjectId
BackupReparseData
BackupSparseBlock
BackupTxfsData
)
const (
StreamSparseAttributes = uint32(8)
)
const (
WRITE_DAC = 0x40000
WRITE_OWNER = 0x80000
ACCESS_SYSTEM_SECURITY = 0x1000000
)
// BackupHeader represents a backup stream of a file.
type BackupHeader struct {
Id uint32 // The backup stream ID
Attributes uint32 // Stream attributes
Size int64 // The size of the stream in bytes
Name string // The name of the stream (for BackupAlternateData only).
Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
}
type win32StreamId struct {
StreamId uint32
Attributes uint32
Size uint64
NameSize uint32
}
// BackupStreamReader reads from a stream produced by the BackupRead Win32 API and produces a series
// of BackupHeader values.
type BackupStreamReader struct {
r io.Reader
bytesLeft int64
}
// NewBackupStreamReader produces a BackupStreamReader from any io.Reader.
func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
return &BackupStreamReader{r, 0}
}
// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
// it was not completely read.
func (r *BackupStreamReader) Next() (*BackupHeader, error) {
if r.bytesLeft > 0 {
if s, ok := r.r.(io.Seeker); ok {
// Make sure a Seek with io.SeekCurrent actually succeeds on this
// reader before attempting the real seek.
if _, err := s.Seek(0, io.SeekCurrent); err == nil {
if _, err = s.Seek(r.bytesLeft, io.SeekCurrent); err != nil {
return nil, err
}
r.bytesLeft = 0
}
}
if _, err := io.Copy(ioutil.Discard, r); err != nil {
return nil, err
}
}
var wsi win32StreamId
if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
return nil, err
}
hdr := &BackupHeader{
Id: wsi.StreamId,
Attributes: wsi.Attributes,
Size: int64(wsi.Size),
}
if wsi.NameSize != 0 {
name := make([]uint16, int(wsi.NameSize/2))
if err := binary.Read(r.r, binary.LittleEndian, name); err != nil {
return nil, err
}
hdr.Name = syscall.UTF16ToString(name)
}
if wsi.StreamId == BackupSparseBlock {
if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
return nil, err
}
hdr.Size -= 8
}
r.bytesLeft = hdr.Size
return hdr, nil
}
// Read reads from the current backup stream.
func (r *BackupStreamReader) Read(b []byte) (int, error) {
if r.bytesLeft == 0 {
return 0, io.EOF
}
if int64(len(b)) > r.bytesLeft {
b = b[:r.bytesLeft]
}
n, err := r.r.Read(b)
r.bytesLeft -= int64(n)
if err == io.EOF {
err = io.ErrUnexpectedEOF
} else if r.bytesLeft == 0 && err == nil {
err = io.EOF
}
return n, err
}
// BackupStreamWriter writes a stream compatible with the BackupWrite Win32 API.
type BackupStreamWriter struct {
w io.Writer
bytesLeft int64
}
// NewBackupStreamWriter produces a BackupStreamWriter on top of an io.Writer.
func NewBackupStreamWriter(w io.Writer) *BackupStreamWriter {
return &BackupStreamWriter{w, 0}
}
// WriteHeader writes the next backup stream header and prepares for calls to Write().
func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
if w.bytesLeft != 0 {
return fmt.Errorf("missing %d bytes", w.bytesLeft)
}
name := utf16.Encode([]rune(hdr.Name))
wsi := win32StreamId{
StreamId: hdr.Id,
Attributes: hdr.Attributes,
Size: uint64(hdr.Size),
NameSize: uint32(len(name) * 2),
}
if hdr.Id == BackupSparseBlock {
// Include space for the int64 block offset
wsi.Size += 8
}
if err := binary.Write(w.w, binary.LittleEndian, &wsi); err != nil {
return err
}
if len(name) != 0 {
if err := binary.Write(w.w, binary.LittleEndian, name); err != nil {
return err
}
}
if hdr.Id == BackupSparseBlock {
if err := binary.Write(w.w, binary.LittleEndian, hdr.Offset); err != nil {
return err
}
}
w.bytesLeft = hdr.Size
return nil
}
// Write writes to the current backup stream.
func (w *BackupStreamWriter) Write(b []byte) (int, error) {
if w.bytesLeft < int64(len(b)) {
return 0, fmt.Errorf("too many bytes by %d", int64(len(b))-w.bytesLeft)
}
n, err := w.w.Write(b)
w.bytesLeft -= int64(n)
return n, err
}
// BackupFileReader provides an io.ReadCloser interface on top of the BackupRead Win32 API.
type BackupFileReader struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileReader returns a new BackupFileReader from a file handle. If includeSecurity is true,
// Read will attempt to read the security descriptor of the file.
func NewBackupFileReader(f *os.File, includeSecurity bool) *BackupFileReader {
r := &BackupFileReader{f, includeSecurity, 0}
return r
}
// Read reads a backup stream from the file by calling the Win32 API BackupRead().
func (r *BackupFileReader) Read(b []byte) (int, error) {
var bytesRead uint32
err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
if err != nil {
return 0, &os.PathError{"BackupRead", r.f.Name(), err}
}
runtime.KeepAlive(r.f)
if bytesRead == 0 {
return 0, io.EOF
}
return int(bytesRead), nil
}
// Close frees Win32 resources associated with the BackupFileReader. It does not close
// the underlying file.
func (r *BackupFileReader) Close() error {
if r.ctx != 0 {
backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
runtime.KeepAlive(r.f)
r.ctx = 0
}
return nil
}
// BackupFileWriter provides an io.WriteCloser interface on top of the BackupWrite Win32 API.
type BackupFileWriter struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileWriter returns a new BackupFileWriter from a file handle. If includeSecurity is true,
// Write() will attempt to restore the security descriptor from the stream.
func NewBackupFileWriter(f *os.File, includeSecurity bool) *BackupFileWriter {
w := &BackupFileWriter{f, includeSecurity, 0}
return w
}
// Write restores a portion of the file using the provided backup stream.
func (w *BackupFileWriter) Write(b []byte) (int, error) {
var bytesWritten uint32
err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
if err != nil {
return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
}
runtime.KeepAlive(w.f)
if int(bytesWritten) != len(b) {
return int(bytesWritten), errors.New("not all bytes could be written")
}
return len(b), nil
}
// Close frees Win32 resources associated with the BackupFileWriter. It does not
// close the underlying file.
func (w *BackupFileWriter) Close() error {
if w.ctx != 0 {
backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
runtime.KeepAlive(w.f)
w.ctx = 0
}
return nil
}
// OpenForBackup opens a file or directory, potentially skipping access checks if the backup
// or restore privileges have been acquired.
//
// If the file opened was a directory, it cannot be used with Readdir().
func OpenForBackup(path string, access uint32, share uint32, createmode uint32) (*os.File, error) {
winPath, err := syscall.UTF16FromString(path)
if err != nil {
return nil, err
}
h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
if err != nil {
err = &os.PathError{Op: "open", Path: path, Err: err}
return nil, err
}
return os.NewFile(uintptr(h), path), nil
}

137
vendor/github.com/Microsoft/go-winio/ea.go generated vendored Normal file
View file

@ -0,0 +1,137 @@
package winio
import (
"bytes"
"encoding/binary"
"errors"
)
type fileFullEaInformation struct {
NextEntryOffset uint32
Flags uint8
NameLength uint8
ValueLength uint16
}
var (
fileFullEaInformationSize = binary.Size(&fileFullEaInformation{})
errInvalidEaBuffer = errors.New("invalid extended attribute buffer")
errEaNameTooLarge = errors.New("extended attribute name too large")
errEaValueTooLarge = errors.New("extended attribute value too large")
)
// ExtendedAttribute represents a single Windows EA.
type ExtendedAttribute struct {
Name string
Value []byte
Flags uint8
}
func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
var info fileFullEaInformation
err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
if err != nil {
err = errInvalidEaBuffer
return
}
nameOffset := fileFullEaInformationSize
nameLen := int(info.NameLength)
valueOffset := nameOffset + int(info.NameLength) + 1
valueLen := int(info.ValueLength)
nextOffset := int(info.NextEntryOffset)
if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
err = errInvalidEaBuffer
return
}
ea.Name = string(b[nameOffset : nameOffset+nameLen])
ea.Value = b[valueOffset : valueOffset+valueLen]
ea.Flags = info.Flags
if info.NextEntryOffset != 0 {
nb = b[info.NextEntryOffset:]
}
return
}
// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
// buffer retrieved from BackupRead, ZwQueryEaFile, etc.
func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
for len(b) != 0 {
ea, nb, err := parseEa(b)
if err != nil {
return nil, err
}
eas = append(eas, ea)
b = nb
}
return
}
func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {
if int(uint8(len(ea.Name))) != len(ea.Name) {
return errEaNameTooLarge
}
if int(uint16(len(ea.Value))) != len(ea.Value) {
return errEaValueTooLarge
}
entrySize := uint32(fileFullEaInformationSize + len(ea.Name) + 1 + len(ea.Value))
withPadding := (entrySize + 3) &^ 3
nextOffset := uint32(0)
if !last {
nextOffset = withPadding
}
info := fileFullEaInformation{
NextEntryOffset: nextOffset,
Flags: ea.Flags,
NameLength: uint8(len(ea.Name)),
ValueLength: uint16(len(ea.Value)),
}
err := binary.Write(buf, binary.LittleEndian, &info)
if err != nil {
return err
}
_, err = buf.Write([]byte(ea.Name))
if err != nil {
return err
}
err = buf.WriteByte(0)
if err != nil {
return err
}
_, err = buf.Write(ea.Value)
if err != nil {
return err
}
_, err = buf.Write([]byte{0, 0, 0}[0 : withPadding-entrySize])
if err != nil {
return err
}
return nil
}
// EncodeExtendedAttributes encodes a list of EAs into a FILE_FULL_EA_INFORMATION
// buffer for use with BackupWrite, ZwSetEaFile, etc.
func EncodeExtendedAttributes(eas []ExtendedAttribute) ([]byte, error) {
var buf bytes.Buffer
for i := range eas {
last := false
if i == len(eas)-1 {
last = true
}
err := writeEa(&buf, &eas[i], last)
if err != nil {
return nil, err
}
}
return buf.Bytes(), nil
}

307
vendor/github.com/Microsoft/go-winio/file.go generated vendored Normal file
View file

@ -0,0 +1,307 @@
// +build windows
package winio
import (
"errors"
"io"
"runtime"
"sync"
"sync/atomic"
"syscall"
"time"
)
//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
//sys createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) = CreateIoCompletionPort
//sys getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) = GetQueuedCompletionStatus
//sys setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) = SetFileCompletionNotificationModes
type atomicBool int32
func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
func (b *atomicBool) swap(new bool) bool {
var newInt int32
if new {
newInt = 1
}
return atomic.SwapInt32((*int32)(b), newInt) == 1
}
const (
cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
)
var (
ErrFileClosed = errors.New("file has already been closed")
ErrTimeout = &timeoutError{}
)
type timeoutError struct{}
func (e *timeoutError) Error() string { return "i/o timeout" }
func (e *timeoutError) Timeout() bool { return true }
func (e *timeoutError) Temporary() bool { return true }
type timeoutChan chan struct{}
var ioInitOnce sync.Once
var ioCompletionPort syscall.Handle
// ioResult contains the result of an asynchronous IO operation
type ioResult struct {
bytes uint32
err error
}
// ioOperation represents an outstanding asynchronous Win32 IO
type ioOperation struct {
o syscall.Overlapped
ch chan ioResult
}
func initIo() {
h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
if err != nil {
panic(err)
}
ioCompletionPort = h
go ioCompletionProcessor(h)
}
// win32File implements Reader, Writer, and Closer on a Win32 handle without blocking in a syscall.
// It takes ownership of this handle and will close it if it is garbage collected.
type win32File struct {
handle syscall.Handle
wg sync.WaitGroup
wgLock sync.RWMutex
closing atomicBool
readDeadline deadlineHandler
writeDeadline deadlineHandler
}
type deadlineHandler struct {
setLock sync.Mutex
channel timeoutChan
channelLock sync.RWMutex
timer *time.Timer
timedout atomicBool
}
// makeWin32File makes a new win32File from an existing file handle
func makeWin32File(h syscall.Handle) (*win32File, error) {
f := &win32File{handle: h}
ioInitOnce.Do(initIo)
_, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
if err != nil {
return nil, err
}
err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
if err != nil {
return nil, err
}
f.readDeadline.channel = make(timeoutChan)
f.writeDeadline.channel = make(timeoutChan)
return f, nil
}
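// MakeOpenFile wraps an existing Win32 file handle in a win32File, exposing it
// as an io.ReadWriteCloser with non-blocking, deadline-aware IO. As with
// makeWin32File, ownership of the handle passes to the returned value.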
func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
return makeWin32File(h)
}
// closeHandle closes the resources associated with a Win32 handle
func (f *win32File) closeHandle() {
f.wgLock.Lock()
// Atomically set that we are closing, releasing the resources only once.
if !f.closing.swap(true) {
f.wgLock.Unlock()
// cancel all IO and wait for it to complete
cancelIoEx(f.handle, nil)
f.wg.Wait()
// at this point, no new IO can start
syscall.Close(f.handle)
f.handle = 0
} else {
f.wgLock.Unlock()
}
}
// Close closes a win32File.
func (f *win32File) Close() error {
f.closeHandle()
return nil
}
// prepareIo prepares for a new IO operation.
// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
func (f *win32File) prepareIo() (*ioOperation, error) {
f.wgLock.RLock()
if f.closing.isSet() {
f.wgLock.RUnlock()
return nil, ErrFileClosed
}
f.wg.Add(1)
f.wgLock.RUnlock()
c := &ioOperation{}
c.ch = make(chan ioResult)
return c, nil
}
// ioCompletionProcessor processes completed async IOs forever
func ioCompletionProcessor(h syscall.Handle) {
for {
var bytes uint32
var key uintptr
var op *ioOperation
err := getQueuedCompletionStatus(h, &bytes, &key, &op, syscall.INFINITE)
if op == nil {
panic(err)
}
op.ch <- ioResult{bytes, err}
}
}
// asyncIo processes the return value from ReadFile or WriteFile, blocking until
// the operation has actually completed.
func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
if err != syscall.ERROR_IO_PENDING {
return int(bytes), err
}
if f.closing.isSet() {
cancelIoEx(f.handle, &c.o)
}
var timeout timeoutChan
if d != nil {
d.channelLock.Lock()
timeout = d.channel
d.channelLock.Unlock()
}
var r ioResult
select {
case r = <-c.ch:
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
if f.closing.isSet() {
err = ErrFileClosed
}
}
case <-timeout:
cancelIoEx(f.handle, &c.o)
r = <-c.ch
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
err = ErrTimeout
}
}
	// runtime.KeepAlive is needed: c is passed via native
	// code to ioCompletionProcessor, so it must remain alive
	// until the channel read is complete.
runtime.KeepAlive(c)
return int(r.bytes), err
}
// Read reads from a file handle.
func (f *win32File) Read(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
defer f.wg.Done()
if f.readDeadline.timedout.isSet() {
return 0, ErrTimeout
}
var bytes uint32
err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.readDeadline, bytes, err)
runtime.KeepAlive(b)
// Handle EOF conditions.
if err == nil && n == 0 && len(b) != 0 {
return 0, io.EOF
} else if err == syscall.ERROR_BROKEN_PIPE {
return 0, io.EOF
} else {
return n, err
}
}
// Write writes to a file handle.
func (f *win32File) Write(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
defer f.wg.Done()
if f.writeDeadline.timedout.isSet() {
return 0, ErrTimeout
}
var bytes uint32
err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.writeDeadline, bytes, err)
runtime.KeepAlive(b)
return n, err
}
func (f *win32File) SetReadDeadline(deadline time.Time) error {
return f.readDeadline.set(deadline)
}
func (f *win32File) SetWriteDeadline(deadline time.Time) error {
return f.writeDeadline.set(deadline)
}
func (f *win32File) Flush() error {
return syscall.FlushFileBuffers(f.handle)
}
func (d *deadlineHandler) set(deadline time.Time) error {
d.setLock.Lock()
defer d.setLock.Unlock()
if d.timer != nil {
if !d.timer.Stop() {
<-d.channel
}
d.timer = nil
}
d.timedout.setFalse()
select {
case <-d.channel:
d.channelLock.Lock()
d.channel = make(chan struct{})
d.channelLock.Unlock()
default:
}
if deadline.IsZero() {
return nil
}
timeoutIO := func() {
d.timedout.setTrue()
close(d.channel)
}
now := time.Now()
duration := deadline.Sub(now)
if deadline.After(now) {
// Deadline is in the future, set a timer to wait
d.timer = time.AfterFunc(duration, timeoutIO)
} else {
// Deadline is in the past. Cancel all pending IO now.
timeoutIO()
}
return nil
}
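// Editorial note: the following is an illustrative sketch, not part of the vendored
// go-winio source. It shows one way the exported MakeOpenFile wrapper above might be
// used. The pipe name `\\.\pipe\demo` is an assumption for the example, and the handle
// must be opened with FILE_FLAG_OVERLAPPED because makeWin32File attaches it to the
// IO completion port. Windows-only, like the rest of this package.
package main

import (
	"fmt"
	"syscall"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	name, err := syscall.UTF16PtrFromString(`\\.\pipe\demo`)
	if err != nil {
		panic(err)
	}
	h, err := syscall.CreateFile(name,
		syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil,
		syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	f, err := winio.MakeOpenFile(h) // f takes ownership of h and closes it on Close()
	if err != nil {
		syscall.Close(h)
		return
	}
	defer f.Close()

	// Reads are serviced through the completion port rather than a blocking syscall.
	buf := make([]byte, 512)
	n, err := f.Read(buf)
	fmt.Println(n, err)
}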

61
vendor/github.com/Microsoft/go-winio/fileinfo.go generated vendored Normal file
View file

@ -0,0 +1,61 @@
// +build windows
package winio
import (
"os"
"runtime"
"syscall"
"unsafe"
)
//sys getFileInformationByHandleEx(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = GetFileInformationByHandleEx
//sys setFileInformationByHandle(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = SetFileInformationByHandle
const (
fileBasicInfo = 0
fileIDInfo = 0x12
)
// FileBasicInfo contains file access time and file attributes information.
type FileBasicInfo struct {
CreationTime, LastAccessTime, LastWriteTime, ChangeTime syscall.Filetime
FileAttributes uint32
pad uint32 // padding
}
// GetFileBasicInfo retrieves times and attributes for a file.
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
bi := &FileBasicInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return bi, nil
}
// SetFileBasicInfo sets times and attributes for a file.
func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
if err := setFileInformationByHandle(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return nil
}
// FileIDInfo contains the volume serial number and file ID for a file. This pair should be
// unique on a system.
type FileIDInfo struct {
VolumeSerialNumber uint64
FileID [16]byte
}
// GetFileID retrieves the unique (volume, file ID) pair for a file.
func GetFileID(f *os.File) (*FileIDInfo, error) {
fileID := &FileIDInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileIDInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return fileID, nil
}
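// Editorial note: an illustrative sketch, not part of the vendored file, showing how
// GetFileBasicInfo and GetFileID might be called on an *os.File. The path below is an
// assumption; any readable file works.
package main

import (
	"fmt"
	"os"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	f, err := os.Open(`C:\Windows\notepad.exe`)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	bi, err := winio.GetFileBasicInfo(f)
	if err != nil {
		panic(err)
	}
	fmt.Printf("attributes: %#x\n", bi.FileAttributes)

	id, err := winio.GetFileID(f)
	if err != nil {
		panic(err)
	}
	fmt.Printf("volume %d, file ID %x\n", id.VolumeSerialNumber, id.FileID)
}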

421
vendor/github.com/Microsoft/go-winio/pipe.go generated vendored Normal file
View file

@ -0,0 +1,421 @@
// +build windows
package winio
import (
"errors"
"io"
"net"
"os"
"syscall"
"time"
"unsafe"
)
//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
//sys createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateNamedPipeW
//sys createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateFileW
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
const (
cERROR_PIPE_BUSY = syscall.Errno(231)
cERROR_NO_DATA = syscall.Errno(232)
cERROR_PIPE_CONNECTED = syscall.Errno(535)
cERROR_SEM_TIMEOUT = syscall.Errno(121)
cPIPE_ACCESS_DUPLEX = 0x3
cFILE_FLAG_FIRST_PIPE_INSTANCE = 0x80000
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_REJECT_REMOTE_CLIENTS = 0x8
cPIPE_UNLIMITED_INSTANCES = 255
cNMPWAIT_USE_DEFAULT_WAIT = 0
cNMPWAIT_NOWAIT = 1
cPIPE_TYPE_MESSAGE = 4
cPIPE_READMODE_MESSAGE = 2
)
var (
// ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
// This error should match net.errClosing since docker takes a dependency on its text.
ErrPipeListenerClosed = errors.New("use of closed network connection")
errPipeWriteClosed = errors.New("pipe has been closed for write")
)
type win32Pipe struct {
*win32File
path string
}
type win32MessageBytePipe struct {
win32Pipe
writeClosed bool
readEOF bool
}
type pipeAddress string
func (f *win32Pipe) LocalAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) RemoteAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) SetDeadline(t time.Time) error {
f.SetReadDeadline(t)
f.SetWriteDeadline(t)
return nil
}
// CloseWrite closes the write side of a message pipe in byte mode.
func (f *win32MessageBytePipe) CloseWrite() error {
if f.writeClosed {
return errPipeWriteClosed
}
err := f.win32File.Flush()
if err != nil {
return err
}
_, err = f.win32File.Write(nil)
if err != nil {
return err
}
f.writeClosed = true
return nil
}
// Write writes bytes to a message pipe in byte mode. Zero-byte writes are ignored, since
// they are used to implement CloseWrite().
func (f *win32MessageBytePipe) Write(b []byte) (int, error) {
if f.writeClosed {
return 0, errPipeWriteClosed
}
if len(b) == 0 {
return 0, nil
}
return f.win32File.Write(b)
}
// Read reads bytes from a message pipe in byte mode. A read of a zero-byte message on a message
// mode pipe will return io.EOF, as will all subsequent reads.
func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
if f.readEOF {
return 0, io.EOF
}
n, err := f.win32File.Read(b)
if err == io.EOF {
// If this was the result of a zero-byte read, then
// it is possible that the read was due to a zero-size
// message. Since we are simulating CloseWrite with a
// zero-byte message, ensure that all future Read() calls
// also return EOF.
f.readEOF = true
} else if err == syscall.ERROR_MORE_DATA {
// ERROR_MORE_DATA indicates that the pipe's read mode is message mode
// and the message still has more bytes. Treat this as a success, since
// this package presents all named pipes as byte streams.
err = nil
}
return n, err
}
func (s pipeAddress) Network() string {
return "pipe"
}
func (s pipeAddress) String() string {
return string(s)
}
// DialPipe connects to a named pipe by path, timing out if the connection
// takes longer than the specified duration. If timeout is nil, then we use
// a default timeout of 2 seconds. (We do not use WaitNamedPipe.)
func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
var absTimeout time.Time
if timeout != nil {
absTimeout = time.Now().Add(*timeout)
} else {
absTimeout = time.Now().Add(time.Second * 2)
}
var err error
var h syscall.Handle
for {
h, err = createFile(path, syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != cERROR_PIPE_BUSY {
break
}
if time.Now().After(absTimeout) {
return nil, ErrTimeout
}
		// Wait 10 msec and try again. This is a rather simplistic
		// approach, as we always retry every 10 milliseconds.
time.Sleep(time.Millisecond * 10)
}
if err != nil {
return nil, &os.PathError{Op: "open", Path: path, Err: err}
}
var flags uint32
err = getNamedPipeInfo(h, &flags, nil, nil, nil)
if err != nil {
return nil, err
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
// If the pipe is in message mode, return a message byte pipe, which
// supports CloseWrite().
if flags&cPIPE_TYPE_MESSAGE != 0 {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: f, path: path},
}, nil
}
return &win32Pipe{win32File: f, path: path}, nil
}
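// Editorial note: an illustrative client-side sketch, not part of the vendored file.
// It dials the (assumed) pipe `\\.\pipe\demo` with an explicit timeout; because
// DialPipe returns a net.Conn, the usual deadline methods are available.
package main

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	timeout := 2 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\demo`, &timeout)
	if err != nil {
		fmt.Println("dial:", err) // winio.ErrTimeout if the pipe stayed busy past the timeout
		return
	}
	defer conn.Close()

	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	if _, err := conn.Write([]byte("ping\n")); err != nil {
		fmt.Println("write:", err)
		return
	}
	buf := make([]byte, 512)
	n, err := conn.Read(buf)
	fmt.Printf("got %q, err=%v\n", buf[:n], err)
}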
type acceptResponse struct {
f *win32File
err error
}
type win32PipeListener struct {
firstHandle syscall.Handle
path string
securityDescriptor []byte
config PipeConfig
acceptCh chan (chan acceptResponse)
closeCh chan int
doneCh chan int
}
func makeServerPipeHandle(path string, securityDescriptor []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
var flags uint32 = cPIPE_ACCESS_DUPLEX | syscall.FILE_FLAG_OVERLAPPED
if first {
flags |= cFILE_FLAG_FIRST_PIPE_INSTANCE
}
var mode uint32 = cPIPE_REJECT_REMOTE_CLIENTS
if c.MessageMode {
mode |= cPIPE_TYPE_MESSAGE
}
sa := &syscall.SecurityAttributes{}
sa.Length = uint32(unsafe.Sizeof(*sa))
if securityDescriptor != nil {
len := uint32(len(securityDescriptor))
sa.SecurityDescriptor = localAlloc(0, len)
defer localFree(sa.SecurityDescriptor)
copy((*[0xffff]byte)(unsafe.Pointer(sa.SecurityDescriptor))[:], securityDescriptor)
}
h, err := createNamedPipe(path, flags, mode, cPIPE_UNLIMITED_INSTANCES, uint32(c.OutputBufferSize), uint32(c.InputBufferSize), 0, sa)
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
return h, nil
}
func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
h, err := makeServerPipeHandle(l.path, l.securityDescriptor, &l.config, false)
if err != nil {
return nil, err
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
return f, nil
}
func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
p, err := l.makeServerPipe()
if err != nil {
return nil, err
}
// Wait for the client to connect.
ch := make(chan error)
go func(p *win32File) {
ch <- connectPipe(p)
}(p)
select {
case err = <-ch:
if err != nil {
p.Close()
p = nil
}
case <-l.closeCh:
// Abort the connect request by closing the handle.
p.Close()
p = nil
err = <-ch
if err == nil || err == ErrFileClosed {
err = ErrPipeListenerClosed
}
}
return p, err
}
func (l *win32PipeListener) listenerRoutine() {
closed := false
for !closed {
select {
case <-l.closeCh:
closed = true
case responseCh := <-l.acceptCh:
var (
p *win32File
err error
)
for {
p, err = l.makeConnectedServerPipe()
// If the connection was immediately closed by the client, try
// again.
if err != cERROR_NO_DATA {
break
}
}
responseCh <- acceptResponse{p, err}
closed = err == ErrPipeListenerClosed
}
}
syscall.Close(l.firstHandle)
l.firstHandle = 0
// Notify Close() and Accept() callers that the handle has been closed.
close(l.doneCh)
}
// PipeConfig contains configuration for the pipe listener.
type PipeConfig struct {
// SecurityDescriptor contains a Windows security descriptor in SDDL format.
SecurityDescriptor string
// MessageMode determines whether the pipe is in byte or message mode. In either
// case the pipe is read in byte mode by default. The only practical difference in
// this implementation is that CloseWrite() is only supported for message mode pipes;
// CloseWrite() is implemented as a zero-byte write, but zero-byte writes are only
// transferred to the reader (and returned as io.EOF in this implementation)
// when the pipe is in message mode.
MessageMode bool
// InputBufferSize specifies the size of the input buffer, in bytes.
InputBufferSize int32
// OutputBufferSize specifies the size of the output buffer, in bytes.
OutputBufferSize int32
}
// ListenPipe creates a listener on a Windows named pipe path, e.g. \\.\pipe\mypipe.
// The pipe must not already exist.
func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
var (
sd []byte
err error
)
if c == nil {
c = &PipeConfig{}
}
if c.SecurityDescriptor != "" {
sd, err = SddlToSecurityDescriptor(c.SecurityDescriptor)
if err != nil {
return nil, err
}
}
h, err := makeServerPipeHandle(path, sd, c, true)
if err != nil {
return nil, err
}
// Create a client handle and connect it. This results in the pipe
// instance always existing, so that clients see ERROR_PIPE_BUSY
// rather than ERROR_FILE_NOT_FOUND. This ties the first instance
// up so that no other instances can be used. This would have been
// cleaner if the Win32 API matched CreateFile with ConnectNamedPipe
// instead of CreateNamedPipe. (Apparently created named pipes are
// considered to be in listening state regardless of whether any
// active calls to ConnectNamedPipe are outstanding.)
h2, err := createFile(path, 0, 0, nil, syscall.OPEN_EXISTING, cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != nil {
syscall.Close(h)
return nil, err
}
// Close the client handle. The server side of the instance will
// still be busy, leading to ERROR_PIPE_BUSY instead of
	// ERROR_FILE_NOT_FOUND, as long as we don't close the server handle,
// or disconnect the client with DisconnectNamedPipe.
syscall.Close(h2)
l := &win32PipeListener{
firstHandle: h,
path: path,
securityDescriptor: sd,
config: *c,
acceptCh: make(chan (chan acceptResponse)),
closeCh: make(chan int),
doneCh: make(chan int),
}
go l.listenerRoutine()
return l, nil
}
func connectPipe(p *win32File) error {
c, err := p.prepareIo()
if err != nil {
return err
}
defer p.wg.Done()
err = connectNamedPipe(p.handle, &c.o)
_, err = p.asyncIo(c, nil, 0, err)
if err != nil && err != cERROR_PIPE_CONNECTED {
return err
}
return nil
}
func (l *win32PipeListener) Accept() (net.Conn, error) {
ch := make(chan acceptResponse)
select {
case l.acceptCh <- ch:
response := <-ch
err := response.err
if err != nil {
return nil, err
}
if l.config.MessageMode {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: response.f, path: l.path},
}, nil
}
return &win32Pipe{win32File: response.f, path: l.path}, nil
case <-l.doneCh:
return nil, ErrPipeListenerClosed
}
}
func (l *win32PipeListener) Close() error {
select {
case l.closeCh <- 1:
<-l.doneCh
case <-l.doneCh:
}
return nil
}
func (l *win32PipeListener) Addr() net.Addr {
return pipeAddress(l.path)
}
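// Editorial note: an illustrative server-side sketch, not part of the vendored file,
// pairing ListenPipe with an Accept loop. The pipe name is an assumption; passing a
// nil *PipeConfig yields a byte-mode pipe with default buffer sizes.
package main

import (
	"fmt"
	"net"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	l, err := winio.ListenPipe(`\\.\pipe\demo`, nil)
	if err != nil {
		panic(err)
	}
	defer l.Close()

	for {
		conn, err := l.Accept()
		if err != nil {
			// After Close(), Accept returns ErrPipeListenerClosed.
			fmt.Println("accept:", err)
			return
		}
		go func(c net.Conn) {
			defer c.Close()
			buf := make([]byte, 512)
			n, err := c.Read(buf)
			if err != nil {
				return
			}
			c.Write(buf[:n]) // echo the request back to the client
		}(conn)
	}
}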

202
vendor/github.com/Microsoft/go-winio/privilege.go generated vendored Normal file
View file

@ -0,0 +1,202 @@
// +build windows
package winio
import (
"bytes"
"encoding/binary"
"fmt"
"runtime"
"sync"
"syscall"
"unicode/utf16"
"golang.org/x/sys/windows"
)
//sys adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) [true] = advapi32.AdjustTokenPrivileges
//sys impersonateSelf(level uint32) (err error) = advapi32.ImpersonateSelf
//sys revertToSelf() (err error) = advapi32.RevertToSelf
//sys openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) = advapi32.OpenThreadToken
//sys getCurrentThread() (h syscall.Handle) = GetCurrentThread
//sys lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) = advapi32.LookupPrivilegeValueW
//sys lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) = advapi32.LookupPrivilegeNameW
//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
const (
SE_PRIVILEGE_ENABLED = 2
ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300
SeBackupPrivilege = "SeBackupPrivilege"
SeRestorePrivilege = "SeRestorePrivilege"
)
const (
securityAnonymous = iota
securityIdentification
securityImpersonation
securityDelegation
)
var (
privNames = make(map[string]uint64)
privNameMutex sync.Mutex
)
// PrivilegeError represents an error enabling privileges.
type PrivilegeError struct {
privileges []uint64
}
func (e *PrivilegeError) Error() string {
s := ""
if len(e.privileges) > 1 {
s = "Could not enable privileges "
} else {
s = "Could not enable privilege "
}
for i, p := range e.privileges {
if i != 0 {
s += ", "
}
s += `"`
s += getPrivilegeName(p)
s += `"`
}
return s
}
// RunWithPrivilege enables a single privilege for a function call.
func RunWithPrivilege(name string, fn func() error) error {
return RunWithPrivileges([]string{name}, fn)
}
// RunWithPrivileges enables privileges for a function call.
func RunWithPrivileges(names []string, fn func() error) error {
privileges, err := mapPrivileges(names)
if err != nil {
return err
}
runtime.LockOSThread()
defer runtime.UnlockOSThread()
token, err := newThreadToken()
if err != nil {
return err
}
defer releaseThreadToken(token)
err = adjustPrivileges(token, privileges, SE_PRIVILEGE_ENABLED)
if err != nil {
return err
}
return fn()
}
func mapPrivileges(names []string) ([]uint64, error) {
var privileges []uint64
privNameMutex.Lock()
defer privNameMutex.Unlock()
for _, name := range names {
p, ok := privNames[name]
if !ok {
err := lookupPrivilegeValue("", name, &p)
if err != nil {
return nil, err
}
privNames[name] = p
}
privileges = append(privileges, p)
}
return privileges, nil
}
// EnableProcessPrivileges enables privileges globally for the process.
func EnableProcessPrivileges(names []string) error {
return enableDisableProcessPrivilege(names, SE_PRIVILEGE_ENABLED)
}
// DisableProcessPrivileges disables privileges globally for the process.
func DisableProcessPrivileges(names []string) error {
return enableDisableProcessPrivilege(names, 0)
}
func enableDisableProcessPrivilege(names []string, action uint32) error {
privileges, err := mapPrivileges(names)
if err != nil {
return err
}
p, _ := windows.GetCurrentProcess()
var token windows.Token
err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
if err != nil {
return err
}
defer token.Close()
return adjustPrivileges(token, privileges, action)
}
func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
var b bytes.Buffer
binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
for _, p := range privileges {
binary.Write(&b, binary.LittleEndian, p)
binary.Write(&b, binary.LittleEndian, action)
}
prevState := make([]byte, b.Len())
reqSize := uint32(0)
success, err := adjustTokenPrivileges(token, false, &b.Bytes()[0], uint32(len(prevState)), &prevState[0], &reqSize)
if !success {
return err
}
if err == ERROR_NOT_ALL_ASSIGNED {
return &PrivilegeError{privileges}
}
return nil
}
func getPrivilegeName(luid uint64) string {
var nameBuffer [256]uint16
bufSize := uint32(len(nameBuffer))
err := lookupPrivilegeName("", &luid, &nameBuffer[0], &bufSize)
if err != nil {
return fmt.Sprintf("<unknown privilege %d>", luid)
}
var displayNameBuffer [256]uint16
displayBufSize := uint32(len(displayNameBuffer))
var langID uint32
err = lookupPrivilegeDisplayName("", &nameBuffer[0], &displayNameBuffer[0], &displayBufSize, &langID)
if err != nil {
return fmt.Sprintf("<unknown privilege %s>", string(utf16.Decode(nameBuffer[:bufSize])))
}
return string(utf16.Decode(displayNameBuffer[:displayBufSize]))
}
func newThreadToken() (windows.Token, error) {
err := impersonateSelf(securityImpersonation)
if err != nil {
return 0, err
}
var token windows.Token
err = openThreadToken(getCurrentThread(), syscall.TOKEN_ADJUST_PRIVILEGES|syscall.TOKEN_QUERY, false, &token)
if err != nil {
rerr := revertToSelf()
if rerr != nil {
panic(rerr)
}
return 0, err
}
return token, nil
}
func releaseThreadToken(h windows.Token) {
err := revertToSelf()
if err != nil {
panic(err)
}
h.Close()
}
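// Editorial note: an illustrative sketch, not part of the vendored file, showing
// RunWithPrivilege with the SeBackupPrivilege constant defined above. The privilege
// must already be present (though disabled) in the process token, e.g. in an
// elevated process; the body of the callback is a placeholder.
package main

import (
	"fmt"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	err := winio.RunWithPrivilege(winio.SeBackupPrivilege, func() error {
		// ... privileged work, e.g. opening files with backup semantics ...
		return nil
	})
	if err != nil {
		// A *winio.PrivilegeError is returned when the privilege could not be enabled.
		fmt.Println("privilege:", err)
	}
}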

128
vendor/github.com/Microsoft/go-winio/reparse.go generated vendored Normal file
View file

@ -0,0 +1,128 @@
package winio
import (
"bytes"
"encoding/binary"
"fmt"
"strings"
"unicode/utf16"
"unsafe"
)
const (
reparseTagMountPoint = 0xA0000003
reparseTagSymlink = 0xA000000C
)
type reparseDataBuffer struct {
ReparseTag uint32
ReparseDataLength uint16
Reserved uint16
SubstituteNameOffset uint16
SubstituteNameLength uint16
PrintNameOffset uint16
PrintNameLength uint16
}
// ReparsePoint describes a Win32 symlink or mount point.
type ReparsePoint struct {
Target string
IsMountPoint bool
}
// UnsupportedReparsePointError is returned when trying to decode a reparse point
// that is neither a symlink nor a mount point.
type UnsupportedReparsePointError struct {
Tag uint32
}
func (e *UnsupportedReparsePointError) Error() string {
return fmt.Sprintf("unsupported reparse point %x", e.Tag)
}
// DecodeReparsePoint decodes a Win32 REPARSE_DATA_BUFFER structure containing either a symlink
// or a mount point.
func DecodeReparsePoint(b []byte) (*ReparsePoint, error) {
tag := binary.LittleEndian.Uint32(b[0:4])
return DecodeReparsePointData(tag, b[8:])
}
func DecodeReparsePointData(tag uint32, b []byte) (*ReparsePoint, error) {
isMountPoint := false
switch tag {
case reparseTagMountPoint:
isMountPoint = true
case reparseTagSymlink:
default:
return nil, &UnsupportedReparsePointError{tag}
}
nameOffset := 8 + binary.LittleEndian.Uint16(b[4:6])
if !isMountPoint {
nameOffset += 4
}
nameLength := binary.LittleEndian.Uint16(b[6:8])
name := make([]uint16, nameLength/2)
err := binary.Read(bytes.NewReader(b[nameOffset:nameOffset+nameLength]), binary.LittleEndian, &name)
if err != nil {
return nil, err
}
return &ReparsePoint{string(utf16.Decode(name)), isMountPoint}, nil
}
func isDriveLetter(c byte) bool {
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}
// EncodeReparsePoint encodes a Win32 REPARSE_DATA_BUFFER structure describing a symlink or
// mount point.
func EncodeReparsePoint(rp *ReparsePoint) []byte {
// Generate an NT path and determine if this is a relative path.
var ntTarget string
relative := false
if strings.HasPrefix(rp.Target, `\\?\`) {
ntTarget = `\??\` + rp.Target[4:]
} else if strings.HasPrefix(rp.Target, `\\`) {
ntTarget = `\??\UNC\` + rp.Target[2:]
} else if len(rp.Target) >= 2 && isDriveLetter(rp.Target[0]) && rp.Target[1] == ':' {
ntTarget = `\??\` + rp.Target
} else {
ntTarget = rp.Target
relative = true
}
// The paths must be NUL-terminated even though they are counted strings.
target16 := utf16.Encode([]rune(rp.Target + "\x00"))
ntTarget16 := utf16.Encode([]rune(ntTarget + "\x00"))
size := int(unsafe.Sizeof(reparseDataBuffer{})) - 8
size += len(ntTarget16)*2 + len(target16)*2
tag := uint32(reparseTagMountPoint)
if !rp.IsMountPoint {
tag = reparseTagSymlink
size += 4 // Add room for symlink flags
}
data := reparseDataBuffer{
ReparseTag: tag,
ReparseDataLength: uint16(size),
SubstituteNameOffset: 0,
SubstituteNameLength: uint16((len(ntTarget16) - 1) * 2),
PrintNameOffset: uint16(len(ntTarget16) * 2),
PrintNameLength: uint16((len(target16) - 1) * 2),
}
var b bytes.Buffer
binary.Write(&b, binary.LittleEndian, &data)
if !rp.IsMountPoint {
flags := uint32(0)
if relative {
flags |= 1
}
binary.Write(&b, binary.LittleEndian, flags)
}
binary.Write(&b, binary.LittleEndian, ntTarget16)
binary.Write(&b, binary.LittleEndian, target16)
return b.Bytes()
}
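// Editorial note: an illustrative round-trip sketch, not part of the vendored file.
// EncodeReparsePoint emits a complete REPARSE_DATA_BUFFER (tag included), which
// DecodeReparsePoint accepts directly; the target path is an assumption.
package main

import (
	"fmt"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	rp := &winio.ReparsePoint{Target: `C:\some\target`, IsMountPoint: false}
	buf := winio.EncodeReparsePoint(rp)

	decoded, err := winio.DecodeReparsePoint(buf)
	if err != nil {
		// Non-symlink, non-mount-point tags yield *winio.UnsupportedReparsePointError.
		panic(err)
	}
	fmt.Println(decoded.Target, decoded.IsMountPoint)
}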

98
vendor/github.com/Microsoft/go-winio/sd.go generated vendored Normal file
View file

@ -0,0 +1,98 @@
// +build windows
package winio
import (
"syscall"
"unsafe"
)
//sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW
//sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW
//sys convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW
//sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW
//sys localFree(mem uintptr) = LocalFree
//sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength
const (
cERROR_NONE_MAPPED = syscall.Errno(1332)
)
type AccountLookupError struct {
Name string
Err error
}
func (e *AccountLookupError) Error() string {
if e.Name == "" {
return "lookup account: empty account name specified"
}
var s string
switch e.Err {
case cERROR_NONE_MAPPED:
s = "not found"
default:
s = e.Err.Error()
}
return "lookup account " + e.Name + ": " + s
}
type SddlConversionError struct {
Sddl string
Err error
}
func (e *SddlConversionError) Error() string {
return "convert " + e.Sddl + ": " + e.Err.Error()
}
// LookupSidByName looks up the SID of an account by name
func LookupSidByName(name string) (sid string, err error) {
if name == "" {
return "", &AccountLookupError{name, cERROR_NONE_MAPPED}
}
var sidSize, sidNameUse, refDomainSize uint32
err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse)
if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER {
return "", &AccountLookupError{name, err}
}
sidBuffer := make([]byte, sidSize)
refDomainBuffer := make([]uint16, refDomainSize)
err = lookupAccountName(nil, name, &sidBuffer[0], &sidSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
if err != nil {
return "", &AccountLookupError{name, err}
}
var strBuffer *uint16
err = convertSidToStringSid(&sidBuffer[0], &strBuffer)
if err != nil {
return "", &AccountLookupError{name, err}
}
sid = syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(strBuffer))[:])
localFree(uintptr(unsafe.Pointer(strBuffer)))
return sid, nil
}
func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
var sdBuffer uintptr
err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil)
if err != nil {
return nil, &SddlConversionError{sddl, err}
}
defer localFree(sdBuffer)
sd := make([]byte, getSecurityDescriptorLength(sdBuffer))
copy(sd, (*[0xffff]byte)(unsafe.Pointer(sdBuffer))[:len(sd)])
return sd, nil
}
func SecurityDescriptorToSddl(sd []byte) (string, error) {
var sddl *uint16
	// The returned string length seems to include an arbitrary number of terminating NULs.
// Don't use it.
err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil)
if err != nil {
return "", err
}
defer localFree(uintptr(unsafe.Pointer(sddl)))
return syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(sddl))[:]), nil
}
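// Editorial note: an illustrative sketch, not part of the vendored file. The account
// name "SYSTEM" and the SDDL string below are assumptions chosen for the example.
package main

import (
	"fmt"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	sid, err := winio.LookupSidByName("SYSTEM")
	if err != nil {
		panic(err) // *winio.AccountLookupError when the name cannot be resolved
	}
	fmt.Println("SID:", sid)

	// Round-trip an SDDL string through its binary security descriptor form.
	sd, err := winio.SddlToSecurityDescriptor("D:P(A;;GA;;;SY)")
	if err != nil {
		panic(err)
	}
	sddl, err := winio.SecurityDescriptorToSddl(sd)
	if err != nil {
		panic(err)
	}
	fmt.Println("SDDL:", sddl)
}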

3
vendor/github.com/Microsoft/go-winio/syscall.go generated vendored Normal file
View file

@ -0,0 +1,3 @@
package winio
//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go

520
vendor/github.com/Microsoft/go-winio/zsyscall_windows.go generated vendored Normal file
View file

@ -0,0 +1,520 @@
// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT
package winio
import (
"syscall"
"unsafe"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
// Do the interface allocations only once for common
// Errno values.
const (
errnoERROR_IO_PENDING = 997
)
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
)
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return nil
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
// TODO: add more here, after collecting data on the common
// error values seen on Windows. (perhaps when running
// all.bat?)
return e
}
var (
modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
procCancelIoEx = modkernel32.NewProc("CancelIoEx")
procCreateIoCompletionPort = modkernel32.NewProc("CreateIoCompletionPort")
procGetQueuedCompletionStatus = modkernel32.NewProc("GetQueuedCompletionStatus")
procSetFileCompletionNotificationModes = modkernel32.NewProc("SetFileCompletionNotificationModes")
procConnectNamedPipe = modkernel32.NewProc("ConnectNamedPipe")
procCreateNamedPipeW = modkernel32.NewProc("CreateNamedPipeW")
procCreateFileW = modkernel32.NewProc("CreateFileW")
procWaitNamedPipeW = modkernel32.NewProc("WaitNamedPipeW")
procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo")
procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW")
procLocalAlloc = modkernel32.NewProc("LocalAlloc")
procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
procLocalFree = modkernel32.NewProc("LocalFree")
procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
procGetFileInformationByHandleEx = modkernel32.NewProc("GetFileInformationByHandleEx")
procSetFileInformationByHandle = modkernel32.NewProc("SetFileInformationByHandle")
procAdjustTokenPrivileges = modadvapi32.NewProc("AdjustTokenPrivileges")
procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
procRevertToSelf = modadvapi32.NewProc("RevertToSelf")
procOpenThreadToken = modadvapi32.NewProc("OpenThreadToken")
procGetCurrentThread = modkernel32.NewProc("GetCurrentThread")
procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
procBackupRead = modkernel32.NewProc("BackupRead")
procBackupWrite = modkernel32.NewProc("BackupWrite")
)
func cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) {
r1, _, e1 := syscall.Syscall(procCancelIoEx.Addr(), 2, uintptr(file), uintptr(unsafe.Pointer(o)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall6(procCreateIoCompletionPort.Addr(), 4, uintptr(file), uintptr(port), uintptr(key), uintptr(threadCount), 0, 0)
newport = syscall.Handle(r0)
if newport == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetQueuedCompletionStatus.Addr(), 5, uintptr(port), uintptr(unsafe.Pointer(bytes)), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(o)), uintptr(timeout), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) {
r1, _, e1 := syscall.Syscall(procSetFileCompletionNotificationModes.Addr(), 2, uintptr(h), uintptr(flags), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) {
r1, _, e1 := syscall.Syscall(procConnectNamedPipe.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(o)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _createNamedPipe(_p0, flags, pipeMode, maxInstances, outSize, inSize, defaultTimeout, sa)
}
func _createNamedPipe(name *uint16, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall9(procCreateNamedPipeW.Addr(), 8, uintptr(unsafe.Pointer(name)), uintptr(flags), uintptr(pipeMode), uintptr(maxInstances), uintptr(outSize), uintptr(inSize), uintptr(defaultTimeout), uintptr(unsafe.Pointer(sa)), 0)
handle = syscall.Handle(r0)
if handle == syscall.InvalidHandle {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _createFile(_p0, access, mode, sa, createmode, attrs, templatefile)
}
func _createFile(name *uint16, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall9(procCreateFileW.Addr(), 7, uintptr(unsafe.Pointer(name)), uintptr(access), uintptr(mode), uintptr(unsafe.Pointer(sa)), uintptr(createmode), uintptr(attrs), uintptr(templatefile), 0, 0)
handle = syscall.Handle(r0)
if handle == syscall.InvalidHandle {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func waitNamedPipe(name string, timeout uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _waitNamedPipe(_p0, timeout)
}
func _waitNamedPipe(name *uint16, timeout uint32) (err error) {
r1, _, e1 := syscall.Syscall(procWaitNamedPipeW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(timeout), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetNamedPipeInfo.Addr(), 5, uintptr(pipe), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(outSize)), uintptr(unsafe.Pointer(inSize)), uintptr(unsafe.Pointer(maxInstances)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) {
r1, _, e1 := syscall.Syscall9(procGetNamedPipeHandleStateW.Addr(), 7, uintptr(pipe), uintptr(unsafe.Pointer(state)), uintptr(unsafe.Pointer(curInstances)), uintptr(unsafe.Pointer(maxCollectionCount)), uintptr(unsafe.Pointer(collectDataTimeout)), uintptr(unsafe.Pointer(userName)), uintptr(maxUserNameSize), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func localAlloc(uFlags uint32, length uint32) (ptr uintptr) {
r0, _, _ := syscall.Syscall(procLocalAlloc.Addr(), 2, uintptr(uFlags), uintptr(length), 0)
ptr = uintptr(r0)
return
}
func lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(accountName)
if err != nil {
return
}
return _lookupAccountName(systemName, _p0, sid, sidSize, refDomain, refDomainSize, sidNameUse)
}
func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
r1, _, e1 := syscall.Syscall9(procLookupAccountNameW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(accountName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(sidSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertSidToStringSid(sid *byte, str **uint16) (err error) {
r1, _, e1 := syscall.Syscall(procConvertSidToStringSidW.Addr(), 2, uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(str)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(str)
if err != nil {
return
}
return _convertStringSecurityDescriptorToSecurityDescriptor(_p0, revision, sd, size)
}
func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision uint32, sd *uintptr, size *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procConvertStringSecurityDescriptorToSecurityDescriptorW.Addr(), 4, uintptr(unsafe.Pointer(str)), uintptr(revision), uintptr(unsafe.Pointer(sd)), uintptr(unsafe.Pointer(size)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procConvertSecurityDescriptorToStringSecurityDescriptorW.Addr(), 5, uintptr(unsafe.Pointer(sd)), uintptr(revision), uintptr(secInfo), uintptr(unsafe.Pointer(sddl)), uintptr(unsafe.Pointer(sddlSize)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func localFree(mem uintptr) {
syscall.Syscall(procLocalFree.Addr(), 1, uintptr(mem), 0, 0)
return
}
func getSecurityDescriptorLength(sd uintptr) (len uint32) {
r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
len = uint32(r0)
return
}
func getFileInformationByHandleEx(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetFileInformationByHandleEx.Addr(), 4, uintptr(h), uintptr(class), uintptr(unsafe.Pointer(buffer)), uintptr(size), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func setFileInformationByHandle(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procSetFileInformationByHandle.Addr(), 4, uintptr(h), uintptr(class), uintptr(unsafe.Pointer(buffer)), uintptr(size), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
var _p0 uint32
if releaseAll {
_p0 = 1
} else {
_p0 = 0
}
r0, _, e1 := syscall.Syscall6(procAdjustTokenPrivileges.Addr(), 6, uintptr(token), uintptr(_p0), uintptr(unsafe.Pointer(input)), uintptr(outputSize), uintptr(unsafe.Pointer(output)), uintptr(unsafe.Pointer(requiredSize)))
success = r0 != 0
if true {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func impersonateSelf(level uint32) (err error) {
r1, _, e1 := syscall.Syscall(procImpersonateSelf.Addr(), 1, uintptr(level), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func revertToSelf() (err error) {
r1, _, e1 := syscall.Syscall(procRevertToSelf.Addr(), 0, 0, 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) {
var _p0 uint32
if openAsSelf {
_p0 = 1
} else {
_p0 = 0
}
r1, _, e1 := syscall.Syscall6(procOpenThreadToken.Addr(), 4, uintptr(thread), uintptr(accessMask), uintptr(_p0), uintptr(unsafe.Pointer(token)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getCurrentThread() (h syscall.Handle) {
r0, _, _ := syscall.Syscall(procGetCurrentThread.Addr(), 0, 0, 0, 0)
h = syscall.Handle(r0)
return
}
func lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
var _p1 *uint16
_p1, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _lookupPrivilegeValue(_p0, _p1, luid)
}
func _lookupPrivilegeValue(systemName *uint16, name *uint16, luid *uint64) (err error) {
r1, _, e1 := syscall.Syscall(procLookupPrivilegeValueW.Addr(), 3, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(luid)))
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeName(_p0, luid, buffer, size)
}
func _lookupPrivilegeName(systemName *uint16, luid *uint64, buffer *uint16, size *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procLookupPrivilegeNameW.Addr(), 4, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(luid)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeDisplayName(_p0, name, buffer, size, languageId)
}
func _lookupPrivilegeDisplayName(systemName *uint16, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procLookupPrivilegeDisplayNameW.Addr(), 5, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(languageId)), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
var _p0 *byte
if len(b) > 0 {
_p0 = &b[0]
}
var _p1 uint32
if abort {
_p1 = 1
} else {
_p1 = 0
}
var _p2 uint32
if processSecurity {
_p2 = 1
} else {
_p2 = 0
}
r1, _, e1 := syscall.Syscall9(procBackupRead.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesRead)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
var _p0 *byte
if len(b) > 0 {
_p0 = &b[0]
}
var _p1 uint32
if abort {
_p1 = 1
} else {
_p1 = 0
}
var _p2 uint32
if processSecurity {
_p2 = 1
} else {
_p2 = 0
}
r1, _, e1 := syscall.Syscall9(procBackupWrite.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesWritten)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}

26
vendor/github.com/Nvveen/Gotty/LICENSE generated vendored Normal file
View file

@ -0,0 +1,26 @@
Copyright (c) 2012, Neal van Veen (nealvanveen@gmail.com)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those
of the authors and should not be interpreted as representing official policies,
either expressed or implied, of the FreeBSD Project.

5
vendor/github.com/Nvveen/Gotty/README generated vendored Normal file
View file

@ -0,0 +1,5 @@
Gotty is a library written in Go that determines and reads termcap database
files to produce an interface for interacting with the capabilities of a
terminal.
See the godoc documentation or the source code for more information about
function usage.

3
vendor/github.com/Nvveen/Gotty/TODO generated vendored Normal file
View file

@ -0,0 +1,3 @@
gotty.go:// TODO add more concurrency to name lookup, look for more opportunities.
all:// TODO add more documentation, with function usage in a doc.go file.
all:// TODO add more testing/benchmarking with go test.

514
vendor/github.com/Nvveen/Gotty/attributes.go generated vendored Normal file
View file

@ -0,0 +1,514 @@
// Copyright 2012 Neal van Veen. All rights reserved.
// Usage of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package gotty
// Boolean capabilities
var BoolAttr = [...]string{
"auto_left_margin", "bw",
"auto_right_margin", "am",
"no_esc_ctlc", "xsb",
"ceol_standout_glitch", "xhp",
"eat_newline_glitch", "xenl",
"erase_overstrike", "eo",
"generic_type", "gn",
"hard_copy", "hc",
"has_meta_key", "km",
"has_status_line", "hs",
"insert_null_glitch", "in",
"memory_above", "da",
"memory_below", "db",
"move_insert_mode", "mir",
"move_standout_mode", "msgr",
"over_strike", "os",
"status_line_esc_ok", "eslok",
"dest_tabs_magic_smso", "xt",
"tilde_glitch", "hz",
"transparent_underline", "ul",
"xon_xoff", "nxon",
"needs_xon_xoff", "nxon",
"prtr_silent", "mc5i",
"hard_cursor", "chts",
"non_rev_rmcup", "nrrmc",
"no_pad_char", "npc",
"non_dest_scroll_region", "ndscr",
"can_change", "ccc",
"back_color_erase", "bce",
"hue_lightness_saturation", "hls",
"col_addr_glitch", "xhpa",
"cr_cancels_micro_mode", "crxm",
"has_print_wheel", "daisy",
"row_addr_glitch", "xvpa",
"semi_auto_right_margin", "sam",
"cpi_changes_res", "cpix",
"lpi_changes_res", "lpix",
"backspaces_with_bs", "",
"crt_no_scrolling", "",
"no_correctly_working_cr", "",
"gnu_has_meta_key", "",
"linefeed_is_newline", "",
"has_hardware_tabs", "",
"return_does_clr_eol", "",
}
// Numerical capabilities
var NumAttr = [...]string{
"columns", "cols",
"init_tabs", "it",
"lines", "lines",
"lines_of_memory", "lm",
"magic_cookie_glitch", "xmc",
"padding_baud_rate", "pb",
"virtual_terminal", "vt",
"width_status_line", "wsl",
"num_labels", "nlab",
"label_height", "lh",
"label_width", "lw",
"max_attributes", "ma",
"maximum_windows", "wnum",
"max_colors", "colors",
"max_pairs", "pairs",
"no_color_video", "ncv",
"buffer_capacity", "bufsz",
"dot_vert_spacing", "spinv",
"dot_horz_spacing", "spinh",
"max_micro_address", "maddr",
"max_micro_jump", "mjump",
"micro_col_size", "mcs",
"micro_line_size", "mls",
"number_of_pins", "npins",
"output_res_char", "orc",
"output_res_line", "orl",
"output_res_horz_inch", "orhi",
"output_res_vert_inch", "orvi",
"print_rate", "cps",
"wide_char_size", "widcs",
"buttons", "btns",
"bit_image_entwining", "bitwin",
"bit_image_type", "bitype",
"magic_cookie_glitch_ul", "",
"carriage_return_delay", "",
"new_line_delay", "",
"backspace_delay", "",
"horizontal_tab_delay", "",
"number_of_function_keys", "",
}
// String capabilities
var StrAttr = [...]string{
"back_tab", "cbt",
"bell", "bel",
"carriage_return", "cr",
"change_scroll_region", "csr",
"clear_all_tabs", "tbc",
"clear_screen", "clear",
"clr_eol", "el",
"clr_eos", "ed",
"column_address", "hpa",
"command_character", "cmdch",
"cursor_address", "cup",
"cursor_down", "cud1",
"cursor_home", "home",
"cursor_invisible", "civis",
"cursor_left", "cub1",
"cursor_mem_address", "mrcup",
"cursor_normal", "cnorm",
"cursor_right", "cuf1",
"cursor_to_ll", "ll",
"cursor_up", "cuu1",
"cursor_visible", "cvvis",
"delete_character", "dch1",
"delete_line", "dl1",
"dis_status_line", "dsl",
"down_half_line", "hd",
"enter_alt_charset_mode", "smacs",
"enter_blink_mode", "blink",
"enter_bold_mode", "bold",
"enter_ca_mode", "smcup",
"enter_delete_mode", "smdc",
"enter_dim_mode", "dim",
"enter_insert_mode", "smir",
"enter_secure_mode", "invis",
"enter_protected_mode", "prot",
"enter_reverse_mode", "rev",
"enter_standout_mode", "smso",
"enter_underline_mode", "smul",
"erase_chars", "ech",
"exit_alt_charset_mode", "rmacs",
"exit_attribute_mode", "sgr0",
"exit_ca_mode", "rmcup",
"exit_delete_mode", "rmdc",
"exit_insert_mode", "rmir",
"exit_standout_mode", "rmso",
"exit_underline_mode", "rmul",
"flash_screen", "flash",
"form_feed", "ff",
"from_status_line", "fsl",
"init_1string", "is1",
"init_2string", "is2",
"init_3string", "is3",
"init_file", "if",
"insert_character", "ich1",
"insert_line", "il1",
"insert_padding", "ip",
"key_backspace", "kbs",
"key_catab", "ktbc",
"key_clear", "kclr",
"key_ctab", "kctab",
"key_dc", "kdch1",
"key_dl", "kdl1",
"key_down", "kcud1",
"key_eic", "krmir",
"key_eol", "kel",
"key_eos", "ked",
"key_f0", "kf0",
"key_f1", "kf1",
"key_f10", "kf10",
"key_f2", "kf2",
"key_f3", "kf3",
"key_f4", "kf4",
"key_f5", "kf5",
"key_f6", "kf6",
"key_f7", "kf7",
"key_f8", "kf8",
"key_f9", "kf9",
"key_home", "khome",
"key_ic", "kich1",
"key_il", "kil1",
"key_left", "kcub1",
"key_ll", "kll",
"key_npage", "knp",
"key_ppage", "kpp",
"key_right", "kcuf1",
"key_sf", "kind",
"key_sr", "kri",
"key_stab", "khts",
"key_up", "kcuu1",
"keypad_local", "rmkx",
"keypad_xmit", "smkx",
"lab_f0", "lf0",
"lab_f1", "lf1",
"lab_f10", "lf10",
"lab_f2", "lf2",
"lab_f3", "lf3",
"lab_f4", "lf4",
"lab_f5", "lf5",
"lab_f6", "lf6",
"lab_f7", "lf7",
"lab_f8", "lf8",
"lab_f9", "lf9",
"meta_off", "rmm",
"meta_on", "smm",
"newline", "_glitch",
"pad_char", "npc",
"parm_dch", "dch",
"parm_delete_line", "dl",
"parm_down_cursor", "cud",
"parm_ich", "ich",
"parm_index", "indn",
"parm_insert_line", "il",
"parm_left_cursor", "cub",
"parm_right_cursor", "cuf",
"parm_rindex", "rin",
"parm_up_cursor", "cuu",
"pkey_key", "pfkey",
"pkey_local", "pfloc",
"pkey_xmit", "pfx",
"print_screen", "mc0",
"prtr_off", "mc4",
"prtr_on", "mc5",
"repeat_char", "rep",
"reset_1string", "rs1",
"reset_2string", "rs2",
"reset_3string", "rs3",
"reset_file", "rf",
"restore_cursor", "rc",
"row_address", "mvpa",
"save_cursor", "row_address",
"scroll_forward", "ind",
"scroll_reverse", "ri",
"set_attributes", "sgr",
"set_tab", "hts",
"set_window", "wind",
"tab", "s_magic_smso",
"to_status_line", "tsl",
"underline_char", "uc",
"up_half_line", "hu",
"init_prog", "iprog",
"key_a1", "ka1",
"key_a3", "ka3",
"key_b2", "kb2",
"key_c1", "kc1",
"key_c3", "kc3",
"prtr_non", "mc5p",
"char_padding", "rmp",
"acs_chars", "acsc",
"plab_norm", "pln",
"key_btab", "kcbt",
"enter_xon_mode", "smxon",
"exit_xon_mode", "rmxon",
"enter_am_mode", "smam",
"exit_am_mode", "rmam",
"xon_character", "xonc",
"xoff_character", "xoffc",
"ena_acs", "enacs",
"label_on", "smln",
"label_off", "rmln",
"key_beg", "kbeg",
"key_cancel", "kcan",
"key_close", "kclo",
"key_command", "kcmd",
"key_copy", "kcpy",
"key_create", "kcrt",
"key_end", "kend",
"key_enter", "kent",
"key_exit", "kext",
"key_find", "kfnd",
"key_help", "khlp",
"key_mark", "kmrk",
"key_message", "kmsg",
"key_move", "kmov",
"key_next", "knxt",
"key_open", "kopn",
"key_options", "kopt",
"key_previous", "kprv",
"key_print", "kprt",
"key_redo", "krdo",
"key_reference", "kref",
"key_refresh", "krfr",
"key_replace", "krpl",
"key_restart", "krst",
"key_resume", "kres",
"key_save", "ksav",
"key_suspend", "kspd",
"key_undo", "kund",
"key_sbeg", "kBEG",
"key_scancel", "kCAN",
"key_scommand", "kCMD",
"key_scopy", "kCPY",
"key_screate", "kCRT",
"key_sdc", "kDC",
"key_sdl", "kDL",
"key_select", "kslt",
"key_send", "kEND",
"key_seol", "kEOL",
"key_sexit", "kEXT",
"key_sfind", "kFND",
"key_shelp", "kHLP",
"key_shome", "kHOM",
"key_sic", "kIC",
"key_sleft", "kLFT",
"key_smessage", "kMSG",
"key_smove", "kMOV",
"key_snext", "kNXT",
"key_soptions", "kOPT",
"key_sprevious", "kPRV",
"key_sprint", "kPRT",
"key_sredo", "kRDO",
"key_sreplace", "kRPL",
"key_sright", "kRIT",
"key_srsume", "kRES",
"key_ssave", "kSAV",
"key_ssuspend", "kSPD",
"key_sundo", "kUND",
"req_for_input", "rfi",
"key_f11", "kf11",
"key_f12", "kf12",
"key_f13", "kf13",
"key_f14", "kf14",
"key_f15", "kf15",
"key_f16", "kf16",
"key_f17", "kf17",
"key_f18", "kf18",
"key_f19", "kf19",
"key_f20", "kf20",
"key_f21", "kf21",
"key_f22", "kf22",
"key_f23", "kf23",
"key_f24", "kf24",
"key_f25", "kf25",
"key_f26", "kf26",
"key_f27", "kf27",
"key_f28", "kf28",
"key_f29", "kf29",
"key_f30", "kf30",
"key_f31", "kf31",
"key_f32", "kf32",
"key_f33", "kf33",
"key_f34", "kf34",
"key_f35", "kf35",
"key_f36", "kf36",
"key_f37", "kf37",
"key_f38", "kf38",
"key_f39", "kf39",
"key_f40", "kf40",
"key_f41", "kf41",
"key_f42", "kf42",
"key_f43", "kf43",
"key_f44", "kf44",
"key_f45", "kf45",
"key_f46", "kf46",
"key_f47", "kf47",
"key_f48", "kf48",
"key_f49", "kf49",
"key_f50", "kf50",
"key_f51", "kf51",
"key_f52", "kf52",
"key_f53", "kf53",
"key_f54", "kf54",
"key_f55", "kf55",
"key_f56", "kf56",
"key_f57", "kf57",
"key_f58", "kf58",
"key_f59", "kf59",
"key_f60", "kf60",
"key_f61", "kf61",
"key_f62", "kf62",
"key_f63", "kf63",
"clr_bol", "el1",
"clear_margins", "mgc",
"set_left_margin", "smgl",
"set_right_margin", "smgr",
"label_format", "fln",
"set_clock", "sclk",
"display_clock", "dclk",
"remove_clock", "rmclk",
"create_window", "cwin",
"goto_window", "wingo",
"hangup", "hup",
"dial_phone", "dial",
"quick_dial", "qdial",
"tone", "tone",
"pulse", "pulse",
"flash_hook", "hook",
"fixed_pause", "pause",
"wait_tone", "wait",
"user0", "u0",
"user1", "u1",
"user2", "u2",
"user3", "u3",
"user4", "u4",
"user5", "u5",
"user6", "u6",
"user7", "u7",
"user8", "u8",
"user9", "u9",
"orig_pair", "op",
"orig_colors", "oc",
"initialize_color", "initc",
"initialize_pair", "initp",
"set_color_pair", "scp",
"set_foreground", "setf",
"set_background", "setb",
"change_char_pitch", "cpi",
"change_line_pitch", "lpi",
"change_res_horz", "chr",
"change_res_vert", "cvr",
"define_char", "defc",
"enter_doublewide_mode", "swidm",
"enter_draft_quality", "sdrfq",
"enter_italics_mode", "sitm",
"enter_leftward_mode", "slm",
"enter_micro_mode", "smicm",
"enter_near_letter_quality", "snlq",
"enter_normal_quality", "snrmq",
"enter_shadow_mode", "sshm",
"enter_subscript_mode", "ssubm",
"enter_superscript_mode", "ssupm",
"enter_upward_mode", "sum",
"exit_doublewide_mode", "rwidm",
"exit_italics_mode", "ritm",
"exit_leftward_mode", "rlm",
"exit_micro_mode", "rmicm",
"exit_shadow_mode", "rshm",
"exit_subscript_mode", "rsubm",
"exit_superscript_mode", "rsupm",
"exit_upward_mode", "rum",
"micro_column_address", "mhpa",
"micro_down", "mcud1",
"micro_left", "mcub1",
"micro_right", "mcuf1",
"micro_row_address", "mvpa",
"micro_up", "mcuu1",
"order_of_pins", "porder",
"parm_down_micro", "mcud",
"parm_left_micro", "mcub",
"parm_right_micro", "mcuf",
"parm_up_micro", "mcuu",
"select_char_set", "scs",
"set_bottom_margin", "smgb",
"set_bottom_margin_parm", "smgbp",
"set_left_margin_parm", "smglp",
"set_right_margin_parm", "smgrp",
"set_top_margin", "smgt",
"set_top_margin_parm", "smgtp",
"start_bit_image", "sbim",
"start_char_set_def", "scsd",
"stop_bit_image", "rbim",
"stop_char_set_def", "rcsd",
"subscript_characters", "subcs",
"superscript_characters", "supcs",
"these_cause_cr", "docr",
"zero_motion", "zerom",
"char_set_names", "csnm",
"key_mouse", "kmous",
"mouse_info", "minfo",
"req_mouse_pos", "reqmp",
"get_mouse", "getm",
"set_a_foreground", "setaf",
"set_a_background", "setab",
"pkey_plab", "pfxl",
"device_type", "devt",
"code_set_init", "csin",
"set0_des_seq", "s0ds",
"set1_des_seq", "s1ds",
"set2_des_seq", "s2ds",
"set3_des_seq", "s3ds",
"set_lr_margin", "smglr",
"set_tb_margin", "smgtb",
"bit_image_repeat", "birep",
"bit_image_newline", "binel",
"bit_image_carriage_return", "bicr",
"color_names", "colornm",
"define_bit_image_region", "defbi",
"end_bit_image_region", "endbi",
"set_color_band", "setcolor",
"set_page_length", "slines",
"display_pc_char", "dispc",
"enter_pc_charset_mode", "smpch",
"exit_pc_charset_mode", "rmpch",
"enter_scancode_mode", "smsc",
"exit_scancode_mode", "rmsc",
"pc_term_options", "pctrm",
"scancode_escape", "scesc",
"alt_scancode_esc", "scesa",
"enter_horizontal_hl_mode", "ehhlm",
"enter_left_hl_mode", "elhlm",
"enter_low_hl_mode", "elohlm",
"enter_right_hl_mode", "erhlm",
"enter_top_hl_mode", "ethlm",
"enter_vertical_hl_mode", "evhlm",
"set_a_attributes", "sgr1",
"set_pglen_inch", "slength",
"termcap_init2", "",
"termcap_reset", "",
"linefeed_if_not_lf", "",
"backspace_if_not_bs", "",
"other_non_function_keys", "",
"arrow_key_map", "",
"acs_ulcorner", "",
"acs_llcorner", "",
"acs_urcorner", "",
"acs_lrcorner", "",
"acs_ltee", "",
"acs_rtee", "",
"acs_btee", "",
"acs_ttee", "",
"acs_hline", "",
"acs_vline", "",
"acs_plus", "",
"memory_lock", "",
"memory_unlock", "",
"box_chars_1", "",
}

238
vendor/github.com/Nvveen/Gotty/gotty.go generated vendored Normal file
View file

@ -0,0 +1,238 @@
// Copyright 2012 Neal van Veen. All rights reserved.
// Usage of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// Gotty is a Go-package for reading and parsing the terminfo database
package gotty
// TODO add more concurrency to name lookup, look for more opportunities.
import (
"encoding/binary"
"errors"
"fmt"
"os"
"reflect"
"strings"
"sync"
)
// Open a terminfo file by the name given and construct a TermInfo object.
// If something went wrong reading the terminfo database file, an error is
// returned.
func OpenTermInfo(termName string) (*TermInfo, error) {
var term *TermInfo
var err error
// Find the environment variables
termloc := os.Getenv("TERMINFO")
if len(termloc) == 0 {
// Search like ncurses
locations := []string{os.Getenv("HOME") + "/.terminfo/", "/etc/terminfo/",
"/lib/terminfo/", "/usr/share/terminfo/"}
var path string
for _, str := range locations {
// Construct path
path = str + string(termName[0]) + "/" + termName
// Check if path can be opened
file, _ := os.Open(path)
if file != nil {
// Path can open, fall out and use current path
file.Close()
break
}
}
if len(path) > 0 {
term, err = readTermInfo(path)
} else {
err = errors.New(fmt.Sprintf("No terminfo file(-location) found"))
}
}
return term, err
}
// Open a terminfo file from the environment variable containing the current
// terminal name and construct a TermInfo object. If something went wrong
// reading the terminfo database file, an error is returned.
func OpenTermInfoEnv() (*TermInfo, error) {
termenv := os.Getenv("TERM")
return OpenTermInfo(termenv)
}
// Return an attribute by the name attr provided. If none can be found,
// an error is returned.
func (term *TermInfo) GetAttribute(attr string) (stacker, error) {
// Channel to store the main value in.
var value stacker
// Add a blocking WaitGroup
var block sync.WaitGroup
// Keep track of variable being written.
written := false
// Function to put into goroutine.
f := func(ats interface{}) {
var ok bool
var v stacker
// Switch on type of map to use and assign value to it.
switch reflect.TypeOf(ats).Elem().Kind() {
case reflect.Bool:
v, ok = ats.(map[string]bool)[attr]
case reflect.Int16:
v, ok = ats.(map[string]int16)[attr]
case reflect.String:
v, ok = ats.(map[string]string)[attr]
}
// If ok, a value is found, so we can write.
if ok {
value = v
written = true
}
// Goroutine is done
block.Done()
}
block.Add(3)
// Go for all 3 attribute lists.
go f(term.boolAttributes)
go f(term.numAttributes)
go f(term.strAttributes)
// Wait until every goroutine is done.
block.Wait()
// If a value has been written, return it.
if written {
return value, nil
}
// Otherwise, error.
return nil, fmt.Errorf("Erorr finding attribute")
}
// Return an attribute by the name attr provided. If none can be found,
// an error is returned. A name is first converted to its termcap value.
func (term *TermInfo) GetAttributeName(name string) (stacker, error) {
tc := GetTermcapName(name)
return term.GetAttribute(tc)
}
// A utility function that finds and returns the termcap equivalent of a
// variable name.
func GetTermcapName(name string) string {
// Termcap name
var tc string
// Blocking group
var wait sync.WaitGroup
// Function to put into a goroutine
f := func(attrs []string) {
// Find the string corresponding to the name
for i, s := range attrs {
if s == name {
tc = attrs[i+1]
}
}
// Goroutine is finished
wait.Done()
}
wait.Add(3)
// Go for all 3 attribute lists
go f(BoolAttr[:])
go f(NumAttr[:])
go f(StrAttr[:])
// Wait until every goroutine is done
wait.Wait()
// Return the termcap name
return tc
}
// This function takes a path to a terminfo file and reads it in binary
// form to construct the actual TermInfo file.
func readTermInfo(path string) (*TermInfo, error) {
// Open the terminfo file
file, err := os.Open(path)
defer file.Close()
if err != nil {
return nil, err
}
// magic, nameSize, boolSize, nrSNum, nrOffsetsStr, strSize
// Header is composed of the magic 0432 octal number, size of the name
// section, size of the boolean section, the amount of number values,
// the number of offsets of strings, and the size of the string section.
var header [6]int16
// Byte array is used to read in byte values
var byteArray []byte
// Short array is used to read in short values
var shArray []int16
// TermInfo object to store values
var term TermInfo
// Read in the header
err = binary.Read(file, binary.LittleEndian, &header)
if err != nil {
return nil, err
}
// If magic number isn't there or isn't correct, we have the wrong filetype
if header[0] != 0432 {
return nil, errors.New(fmt.Sprintf("Wrong filetype"))
}
// Read in the names
byteArray = make([]byte, header[1])
err = binary.Read(file, binary.LittleEndian, &byteArray)
if err != nil {
return nil, err
}
term.Names = strings.Split(string(byteArray), "|")
// Read in the booleans
byteArray = make([]byte, header[2])
err = binary.Read(file, binary.LittleEndian, &byteArray)
if err != nil {
return nil, err
}
term.boolAttributes = make(map[string]bool)
for i, b := range byteArray {
if b == 1 {
term.boolAttributes[BoolAttr[i*2+1]] = true
}
}
// If the number of bytes read is not even, a byte for alignment is added
if len(byteArray)%2 != 0 {
err = binary.Read(file, binary.LittleEndian, make([]byte, 1))
if err != nil {
return nil, err
}
}
// Read in shorts
shArray = make([]int16, header[3])
err = binary.Read(file, binary.LittleEndian, &shArray)
if err != nil {
return nil, err
}
term.numAttributes = make(map[string]int16)
for i, n := range shArray {
if n != 0377 && n > -1 {
term.numAttributes[NumAttr[i*2+1]] = n
}
}
// Read the offsets into the short array
shArray = make([]int16, header[4])
err = binary.Read(file, binary.LittleEndian, &shArray)
if err != nil {
return nil, err
}
// Read the actual strings in the byte array
byteArray = make([]byte, header[5])
err = binary.Read(file, binary.LittleEndian, &byteArray)
if err != nil {
return nil, err
}
term.strAttributes = make(map[string]string)
// We get an offset, and then iterate until the string is null-terminated
for i, offset := range shArray {
if offset > -1 {
r := offset
for ; byteArray[r] != 0; r++ {
}
term.strAttributes[StrAttr[i*2+1]] = string(byteArray[offset:r])
}
}
return &term, nil
}
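For orientation, here is a minimal sketch of how the exported API above might be used from outside the package. The set_a_foreground capability name comes from the attribute table earlier in this file set; whether the running terminal's terminfo entry actually defines it is an assumption.

package main

import (
    "fmt"
    "log"

    gotty "github.com/Nvveen/Gotty"
)

func main() {
    // Open the terminfo entry named by the TERM environment variable.
    ti, err := gotty.OpenTermInfoEnv()
    if err != nil {
        log.Fatal(err)
    }
    // Look up a string capability by its long name; for string
    // capabilities the returned interface value holds a string.
    val, err := ti.GetAttributeName("set_a_foreground")
    if err != nil {
        log.Fatal(err)
    }
    if s, ok := val.(string); ok {
        fmt.Printf("set_a_foreground: %q\n", s)
    }
}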

362
vendor/github.com/Nvveen/Gotty/parser.go generated vendored Normal file
View file

@ -0,0 +1,362 @@
// Copyright 2012 Neal van Veen. All rights reserved.
// Usage of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package gotty
import (
"bytes"
"errors"
"fmt"
"regexp"
"strconv"
"strings"
)
var exp = [...]string{
"%%",
"%c",
"%s",
"%p(\\d)",
"%P([A-z])",
"%g([A-z])",
"%'(.)'",
"%{([0-9]+)}",
"%l",
"%\\+|%-|%\\*|%/|%m",
"%&|%\\||%\\^",
"%=|%>|%<",
"%A|%O",
"%!|%~",
"%i",
"%(:[\\ #\\-\\+]{0,4})?(\\d+\\.\\d+|\\d+)?[doxXs]",
"%\\?(.*?);",
}
var regex *regexp.Regexp
var staticVar map[byte]stacker
// Parses the attribute that is received with name attr and parameters params.
func (term *TermInfo) Parse(attr string, params ...interface{}) (string, error) {
// Get the attribute name first.
iface, err := term.GetAttribute(attr)
str, ok := iface.(string)
if err != nil {
return "", err
}
if !ok {
return str, errors.New("Only string capabilities can be parsed.")
}
// Construct the hidden parser struct so we can use a recursive stack based
// parser.
ps := &parser{}
// Dynamic variables only exist in this context.
ps.dynamicVar = make(map[byte]stacker, 26)
ps.parameters = make([]stacker, len(params))
// Convert the parameters to insert them into the parser struct.
for i, x := range params {
ps.parameters[i] = x
}
// Recursively walk and return.
result, err := ps.walk(str)
return result, err
}
// Parses the attribute that is received with name attr and parameters params.
// Only works on full name of a capability that is given, which it uses to
// search for the termcap name.
func (term *TermInfo) ParseName(attr string, params ...interface{}) (string, error) {
tc := GetTermcapName(attr)
return term.Parse(tc, params...)
}
// Identify each token in a stack based manner and do the actual parsing.
func (ps *parser) walk(attr string) (string, error) {
// We use a buffer to get the modified string.
var buf bytes.Buffer
// Next, find and identify all tokens by their indices and strings.
tokens := regex.FindAllStringSubmatch(attr, -1)
if len(tokens) == 0 {
return attr, nil
}
indices := regex.FindAllStringIndex(attr, -1)
q := 0 // q counts the matches of one token
// Iterate through the string per character.
for i := 0; i < len(attr); i++ {
// If the current position is an identified token, execute the following
// steps.
if q < len(indices) && i >= indices[q][0] && i < indices[q][1] {
// Switch on token.
switch {
case tokens[q][0][:2] == "%%":
// Literal percentage character.
buf.WriteByte('%')
case tokens[q][0][:2] == "%c":
// Pop a character.
c, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
buf.WriteByte(c.(byte))
case tokens[q][0][:2] == "%s":
// Pop a string.
str, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
if _, ok := str.(string); !ok {
return buf.String(), errors.New("Stack head is not a string")
}
buf.WriteString(str.(string))
case tokens[q][0][:2] == "%p":
// Push a parameter on the stack.
index, err := strconv.ParseInt(tokens[q][1], 10, 8)
index--
if err != nil {
return buf.String(), err
}
if int(index) >= len(ps.parameters) {
return buf.String(), errors.New("Parameters index out of bound")
}
ps.st.push(ps.parameters[index])
case tokens[q][0][:2] == "%P":
// Pop a variable from the stack as a dynamic or static variable.
val, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
index := tokens[q][2]
if len(index) > 1 {
errorStr := fmt.Sprintf("%s is not a valid dynamic variables index",
index)
return buf.String(), errors.New(errorStr)
}
// Specify either dynamic or static.
if index[0] >= 'a' && index[0] <= 'z' {
ps.dynamicVar[index[0]] = val
} else if index[0] >= 'A' && index[0] <= 'Z' {
staticVar[index[0]] = val
}
case tokens[q][0][:2] == "%g":
// Push a variable from the stack as a dynamic or static variable.
index := tokens[q][3]
if len(index) > 1 {
errorStr := fmt.Sprintf("%s is not a valid static variables index",
index)
return buf.String(), errors.New(errorStr)
}
var val stacker
if index[0] >= 'a' && index[0] <= 'z' {
val = ps.dynamicVar[index[0]]
} else if index[0] >= 'A' && index[0] <= 'Z' {
val = staticVar[index[0]]
}
ps.st.push(val)
case tokens[q][0][:2] == "%'":
// Push a character constant.
con := tokens[q][4]
if len(con) > 1 {
errorStr := fmt.Sprintf("%s is not a valid character constant", con)
return buf.String(), errors.New(errorStr)
}
ps.st.push(con[0])
case tokens[q][0][:2] == "%{":
// Push an integer constant.
con, err := strconv.ParseInt(tokens[q][5], 10, 32)
if err != nil {
return buf.String(), err
}
ps.st.push(con)
case tokens[q][0][:2] == "%l":
// Push the length of the string that is popped from the stack.
popStr, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
if _, ok := popStr.(string); !ok {
errStr := fmt.Sprintf("Stack head is not a string")
return buf.String(), errors.New(errStr)
}
ps.st.push(len(popStr.(string)))
case tokens[q][0][:2] == "%?":
// If-then-else construct. First, the whole string is identified and
// then inside this substring, we can specify which parts to switch on.
ifReg, _ := regexp.Compile("%\\?(.*)%t(.*)%e(.*);|%\\?(.*)%t(.*);")
ifTokens := ifReg.FindStringSubmatch(tokens[q][0])
var (
ifStr string
err error
)
// Parse the if-part to determine if-else.
if len(ifTokens[1]) > 0 {
ifStr, err = ps.walk(ifTokens[1])
} else { // else
ifStr, err = ps.walk(ifTokens[4])
}
// Return any errors
if err != nil {
return buf.String(), err
} else if len(ifStr) > 0 {
// Self-defined limitation, not sure if this is correct, but didn't
// seem like it.
return buf.String(), errors.New("If-clause cannot print statements")
}
var thenStr string
// Pop the first value that is set by parsing the if-clause.
choose, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
// Switch to if or else.
if choose.(int) == 0 && len(ifTokens[1]) > 0 {
thenStr, err = ps.walk(ifTokens[3])
} else if choose.(int) != 0 {
if len(ifTokens[1]) > 0 {
thenStr, err = ps.walk(ifTokens[2])
} else {
thenStr, err = ps.walk(ifTokens[5])
}
}
if err != nil {
return buf.String(), err
}
buf.WriteString(thenStr)
case tokens[q][0][len(tokens[q][0])-1] == 'd': // Fallthrough for printing
fallthrough
case tokens[q][0][len(tokens[q][0])-1] == 'o': // digits.
fallthrough
case tokens[q][0][len(tokens[q][0])-1] == 'x':
fallthrough
case tokens[q][0][len(tokens[q][0])-1] == 'X':
fallthrough
case tokens[q][0][len(tokens[q][0])-1] == 's':
token := tokens[q][0]
// Remove the : that comes before a flag.
if token[1] == ':' {
token = token[:1] + token[2:]
}
digit, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
// The rest is determined like the normal formatted prints.
digitStr := fmt.Sprintf(token, digit.(int))
buf.WriteString(digitStr)
case tokens[q][0][:2] == "%i":
// Increment the parameters by one.
if len(ps.parameters) < 2 {
return buf.String(), errors.New("Not enough parameters to increment.")
}
val1, val2 := ps.parameters[0].(int), ps.parameters[1].(int)
val1++
val2++
ps.parameters[0], ps.parameters[1] = val1, val2
default:
// The rest of the tokens is a special case, where two values are
// popped and then operated on by the token that comes after them.
op1, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
op2, err := ps.st.pop()
if err != nil {
return buf.String(), err
}
var result stacker
switch tokens[q][0][:2] {
case "%+":
// Addition
result = op2.(int) + op1.(int)
case "%-":
// Subtraction
result = op2.(int) - op1.(int)
case "%*":
// Multiplication
result = op2.(int) * op1.(int)
case "%/":
// Division
result = op2.(int) / op1.(int)
case "%m":
// Modulo
result = op2.(int) % op1.(int)
case "%&":
// Bitwise AND
result = op2.(int) & op1.(int)
case "%|":
// Bitwise OR
result = op2.(int) | op1.(int)
case "%^":
// Bitwise XOR
result = op2.(int) ^ op1.(int)
case "%=":
// Equals
result = op2 == op1
case "%>":
// Greater-than
result = op2.(int) > op1.(int)
case "%<":
// Lesser-than
result = op2.(int) < op1.(int)
case "%A":
// Logical AND
result = op2.(bool) && op1.(bool)
case "%O":
// Logical OR
result = op2.(bool) || op1.(bool)
case "%!":
// Logical complement
result = !op1.(bool)
case "%~":
// Bitwise complement
result = ^(op1.(int))
}
ps.st.push(result)
}
i = indices[q][1] - 1
q++
} else {
// We are not "inside" a token, so just skip until the end or the next
// token, and add all characters to the buffer.
j := i
if q != len(indices) {
for !(j >= indices[q][0] && j < indices[q][1]) {
j++
}
} else {
j = len(attr)
}
buf.WriteString(string(attr[i:j]))
i = j
}
}
// Return the buffer as a string.
return buf.String(), nil
}
// Push a stacker-value onto the stack.
func (st *stack) push(s stacker) {
*st = append(*st, s)
}
// Pop a stacker-value from the stack.
func (st *stack) pop() (stacker, error) {
if len(*st) == 0 {
return nil, errors.New("Stack is empty.")
}
newStack := make(stack, len(*st)-1)
val := (*st)[len(*st)-1]
copy(newStack, (*st)[:len(*st)-1])
*st = newStack
return val, nil
}
// Initialize regexes and the static vars (that don't get changed between
// calls).
func init() {
// Initialize the main regex.
expStr := strings.Join(exp[:], "|")
regex, _ = regexp.Compile(expStr)
// Initialize the static variables.
staticVar = make(map[byte]stacker, 26)
}
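A small sketch of driving the parser above: it expands a parameterized capability by its termcap name. The "cup" (cursor_address) name is an assumption about a typical terminfo entry and is not defined in this commit.

package main

import (
    "fmt"
    "log"

    gotty "github.com/Nvveen/Gotty"
)

func main() {
    ti, err := gotty.OpenTermInfoEnv()
    if err != nil {
        log.Fatal(err)
    }
    // Expand cursor_address ("cup") for row 5, column 10. Parse takes the
    // termcap name plus the parameters that %p1, %p2, ... refer to.
    seq, err := ti.Parse("cup", 5, 10)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("cup(5, 10) = %q\n", seq)
}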

23
vendor/github.com/Nvveen/Gotty/types.go generated vendored Normal file
View file

@ -0,0 +1,23 @@
// Copyright 2012 Neal van Veen. All rights reserved.
// Usage of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package gotty
type TermInfo struct {
boolAttributes map[string]bool
numAttributes map[string]int16
strAttributes map[string]string
// The various names of the TermInfo file.
Names []string
}
type stacker interface {
}
type stack []stacker
type parser struct {
st stack
parameters []stacker
dynamicVar map[byte]stacker
}

20
vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file
View file

@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt generated vendored Normal file

File diff suppressed because it is too large

316
vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file
View file

@ -0,0 +1,316 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentile value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}
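As a quick reference, a minimal sketch of the targeted-quantile API defined above; the specific quantile/error targets and the random input are arbitrary illustration values.

package main

import (
    "fmt"
    "math/rand"

    "github.com/beorn7/perks/quantile"
)

func main() {
    // Track the 50th, 90th and 99th percentiles, each with its own
    // absolute error bound.
    q := quantile.NewTargeted(map[float64]float64{
        0.50: 0.005,
        0.90: 0.001,
        0.99: 0.0001,
    })
    for i := 0; i < 10000; i++ {
        q.Insert(rand.Float64() * 100)
    }
    fmt.Printf("p50=%.2f p90=%.2f p99=%.2f (n=%d)\n",
        q.Query(0.50), q.Query(0.90), q.Query(0.99), q.Count())
}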

16
vendor/github.com/containerd/continuity/AUTHORS generated vendored Normal file
View file

@ -0,0 +1,16 @@
Aaron Lehmann <aaron.lehmann@docker.com>
Akash Gupta <akagup@microsoft.com>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Andrew Pennebaker <apennebaker@datapipe.com>
Brandon Philips <brandon.philips@coreos.com>
Christopher Jones <tophj@linux.vnet.ibm.com>
Daniel, Dao Quang Minh <dqminh89@gmail.com>
Derek McGowan <derek@mcgstyle.net>
Edward Pilatowicz <edward.pilatowicz@oracle.com>
Ian Campbell <ijc@docker.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cummins <sul3n3t@gmail.com>
Phil Estes <estesp@gmail.com>
Stephen J Day <stephen.day@docker.com>
Tobias Klauser <tklauser@distanz.ch>
Tonis Tiigi <tonistiigi@gmail.com>

202
vendor/github.com/containerd/continuity/LICENSE generated vendored Normal file
View file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

101
vendor/github.com/containerd/continuity/pathdriver/path_driver.go generated vendored Normal file
View file

@ -0,0 +1,101 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package pathdriver
import (
"path/filepath"
)
// PathDriver provides all of the path manipulation functions in a common
// interface. The context should call these and never use the `filepath`
// package or any other package to manipulate paths.
type PathDriver interface {
Join(paths ...string) string
IsAbs(path string) bool
Rel(base, target string) (string, error)
Base(path string) string
Dir(path string) string
Clean(path string) string
Split(path string) (dir, file string)
Separator() byte
Abs(path string) (string, error)
Walk(string, filepath.WalkFunc) error
FromSlash(path string) string
ToSlash(path string) string
Match(pattern, name string) (matched bool, err error)
}
// pathDriver is a simple default implementation that calls the filepath package.
type pathDriver struct{}
// LocalPathDriver is the exported pathDriver struct for convenience.
var LocalPathDriver PathDriver = &pathDriver{}
func (*pathDriver) Join(paths ...string) string {
return filepath.Join(paths...)
}
func (*pathDriver) IsAbs(path string) bool {
return filepath.IsAbs(path)
}
func (*pathDriver) Rel(base, target string) (string, error) {
return filepath.Rel(base, target)
}
func (*pathDriver) Base(path string) string {
return filepath.Base(path)
}
func (*pathDriver) Dir(path string) string {
return filepath.Dir(path)
}
func (*pathDriver) Clean(path string) string {
return filepath.Clean(path)
}
func (*pathDriver) Split(path string) (dir, file string) {
return filepath.Split(path)
}
func (*pathDriver) Separator() byte {
return filepath.Separator
}
func (*pathDriver) Abs(path string) (string, error) {
return filepath.Abs(path)
}
// Note that filepath.Walk calls os.Stat, so if the context wants to
// call Driver.Stat() for Walk, they need to create a new struct that
// overrides this method.
func (*pathDriver) Walk(root string, walkFn filepath.WalkFunc) error {
return filepath.Walk(root, walkFn)
}
func (*pathDriver) FromSlash(path string) string {
return filepath.FromSlash(path)
}
func (*pathDriver) ToSlash(path string) string {
return filepath.ToSlash(path)
}
func (*pathDriver) Match(pattern, name string) (bool, error) {
return filepath.Match(pattern, name)
}
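A short sketch of using the driver above; the import path is assumed to match this vendored location (github.com/containerd/continuity/pathdriver).

package main

import (
    "fmt"

    "github.com/containerd/continuity/pathdriver"
)

func main() {
    // LocalPathDriver delegates every call to the standard filepath package.
    var d pathdriver.PathDriver = pathdriver.LocalPathDriver
    p := d.Join("/var", "lib", "docker")
    fmt.Println(p, d.IsAbs(p), d.Base(p))
}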

648
vendor/github.com/docker/cli/AUTHORS generated vendored Normal file
View file

@ -0,0 +1,648 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `scripts/docs/generate-authors.sh`.
Aanand Prasad <aanand.prasad@gmail.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron Lehmann <aaron.lehmann@docker.com>
Aaron.L.Xu <likexu@harmonycloud.cn>
Abdur Rehman <abdur_rehman@mentor.com>
Abhinandan Prativadi <abhi@docker.com>
Abin Shahab <ashahab@altiscale.com>
Addam Hardy <addam.hardy@gmail.com>
Adolfo Ochagavía <aochagavia92@gmail.com>
Adrien Duermael <adrien@duermael.com>
Adrien Folie <folie.adrien@gmail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com>
Aidan Feldman <aidan.feldman@gmail.com>
Aidan Hobson Sayers <aidanhs@cantab.net>
AJ Bowen <aj@gandi.net>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Akim Demaille <akim.demaille@docker.com>
Alan Thompson <cloojure@gmail.com>
Albert Callarisa <shark234@gmail.com>
Aleksa Sarai <asarai@suse.de>
Alessandro Boch <aboch@tetrationanalytics.com>
Alex Mavrogiannis <alex.mavrogiannis@docker.com>
Alexander Boyd <alex@opengroove.org>
Alexander Larsson <alexl@redhat.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Ryabov <i@sepa.spb.ru>
Alexandre González <agonzalezro@gmail.com>
Alfred Landrum <alfred.landrum@docker.com>
Alicia Lauerman <alicia@eta.im>
Allen Sun <allensun.shl@alibaba-inc.com>
Alvin Deng <alvin.q.deng@utexas.edu>
Amen Belayneh <amenbelayneh@gmail.com>
Amir Goldstein <amir73il@aquasec.com>
Amit Krishnan <amit.krishnan@oracle.com>
Amit Shukla <amit.shukla@docker.com>
Amy Lindburg <amy.lindburg@docker.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Köhler <andi5.py@gmx.net>
Andrew France <andrew@avito.co.uk>
Andrew Hsu <andrewhsu@docker.com>
Andrew Macpherson <hopscotch23@gmail.com>
Andrew McDonnell <bugs@andrewmcdonnell.net>
Andrew Po <absourd.noise@gmail.com>
Andrey Petrov <andrey.petrov@shazow.net>
André Martins <aanm90@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Andy Rothfusz <github@developersupport.net>
Anil Madhavapeddy <anil@recoil.org>
Ankush Agarwal <ankushagarwal11@gmail.com>
Anton Polonskiy <anton.polonskiy@gmail.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonis Kalipetis <akalipetis@gmail.com>
Anusha Ragunathan <anusha.ragunathan@docker.com>
Arash Deshmeh <adeshmeh@ca.ibm.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Ashwini Oruganti <ashwini.oruganti@gmail.com>
Azat Khuyiyakhmetov <shadow_uz@mail.ru>
Bardia Keyoumarsi <bkeyouma@ucsc.edu>
Barnaby Gray <barnaby@pickle.me.uk>
Bastiaan Bakker <bbakker@xebia.com>
BastianHofmann <bastianhofmann@me.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Firshman <ben@firshman.co.uk>
Benjamin Boudreau <boudreau.benjamin@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com>
Bill Wang <ozbillwang@gmail.com>
Bin Liu <liubin0329@gmail.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Bogdan Anton <contact@bogdananton.ro>
Boris Pruessmann <boris@pruessmann.org>
Bradley Cicenas <bradley.cicenas@gmail.com>
Brandon Philips <brandon.philips@coreos.com>
Brent Salisbury <brent.salisbury@docker.com>
Bret Fisher <bret@bretfisher.com>
Brian (bex) Exelbierd <bexelbie@redhat.com>
Brian Goff <cpuguy83@gmail.com>
Bryan Bess <squarejaw@bsbess.com>
Bryan Boreham <bjboreham@gmail.com>
Bryan Murphy <bmurphy1976@gmail.com>
bryfry <bryon.fryer@gmail.com>
Cameron Spear <cameronspear@gmail.com>
Cao Weiwei <cao.weiwei30@zte.com.cn>
Carlo Mion <mion00@gmail.com>
Carlos Alexandro Becker <caarlos0@gmail.com>
Ce Gao <ce.gao@outlook.com>
Cedric Davies <cedricda@microsoft.com>
Cezar Sa Espinola <cezarsa@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com>
Charles Chan <charleswhchan@users.noreply.github.com>
Charles Law <claw@conduce.com>
Charles Smith <charles.smith@docker.com>
Charlie Drage <charlie@charliedrage.com>
ChaYoung You <yousbe@gmail.com>
Chen Chuanliang <chen.chuanliang@zte.com.cn>
Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chris Gavin <chris@chrisgavin.me>
Chris Gibson <chris@chrisg.io>
Chris McKinnel <chrismckinnel@gmail.com>
Chris Snow <chsnow123@gmail.com>
Chris Weyl <cweyl@alumni.drew.edu>
Christian Persson <saser@live.se>
Christian Stefanescu <st.chris@gmail.com>
Christophe Robin <crobin@nekoo.com>
Christophe Vidal <kriss@krizalys.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Jones <tophj@linux.vnet.ibm.com>
Christy Perez <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com>
Clinton Kitson <clintonskitson@gmail.com>
Coenraad Loubser <coenraad@wish.org.za>
Colin Hebert <hebert.colin@gmail.com>
Collin Guarino <collin.guarino@gmail.com>
Colm Hally <colmhally@gmail.com>
Corey Farrell <git@cfware.com>
Cristian Staretu <cristian.staretu@gmail.com>
Daehyeok Mun <daehyeok@gmail.com>
Dafydd Crosby <dtcrsby@gmail.com>
dalanlan <dalanlan925@gmail.com>
Damien Nadé <github@livna.org>
Dan Cotora <dan@bluevision.ro>
Daniel Dao <dqminh@cloudflare.com>
Daniel Farrell <dfarrell@redhat.com>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Goosen <daniel.goosen@surveysampling.com>
Daniel Hiltgen <daniel.hiltgen@docker.com>
Daniel J Walsh <dwalsh@redhat.com>
Daniel Nephin <dnephin@docker.com>
Daniel Norberg <dano@spotify.com>
Daniel Watkins <daniel@daniel-watkins.co.uk>
Daniel Zhang <jmzwcn@gmail.com>
Danny Berger <dpb587@gmail.com>
Darren Shepherd <darren.s.shepherd@gmail.com>
Darren Stahl <darst@microsoft.com>
Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com>
Dave Tucker <dt@docker.com>
David Beitey <david@davidjb.com>
David Calavera <david.calavera@gmail.com>
David Cramer <davcrame@cisco.com>
David Dooling <dooling@gmail.com>
David Gageot <david@gageot.net>
David Lechner <david@lechnology.com>
David Sheets <dsheets@docker.com>
David Williamson <david.williamson@docker.com>
David Xia <dxia@spotify.com>
David Young <yangboh@cn.ibm.com>
Deng Guangxing <dengguangxing@huawei.com>
Denis Defreyne <denis@soundcloud.com>
Denis Gladkikh <denis@gladkikh.email>
Denis Ollier <larchunix@users.noreply.github.com>
Dennis Docter <dennis@d23.nl>
Derek McGowan <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com>
Dharmit Shah <shahdharmit@gmail.com>
Dhawal Yogesh Bhanushali <dbhanushali@vmware.com>
Dieter Reuter <dieter.reuter@me.com>
Dima Stopel <dima@twistlock.com>
Dimitry Andric <d.andric@activevideo.com>
Ding Fei <dingfei@stars.org.cn>
Diogo Monica <diogo@docker.com>
Dmitry Gusev <dmitry.gusev@gmail.com>
Dmitry Smirnov <onlyjob@member.fsf.org>
Dmitry V. Krivenok <krivenok.dmitry@gmail.com>
Don Kjer <don.kjer@gmail.com>
Dong Chen <dongluo.chen@docker.com>
Doug Davis <dug@us.ibm.com>
Drew Erny <drew.erny@docker.com>
Ed Costello <epc@epcostello.com>
Eli Uriegas <eli.uriegas@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Elias Faxö <elias.faxo@tre.se>
Eric G. Noriega <enoriega@vizuri.com>
Eric Rosenberg <ehaydenr@gmail.com>
Eric Sage <eric.david.sage@gmail.com>
Eric-Olivier Lamey <eo@lamey.me>
Erica Windisch <erica@windisch.us>
Erik Hollensbe <github@hollensbe.org>
Erik St. Martin <alakriti@gmail.com>
Ethan Haynes <ethanhaynes@alumni.harvard.edu>
Eugene Yakubovich <eugene.yakubovich@coreos.com>
Evan Allrich <evan@unguku.com>
Evan Hazlett <ejhazlett@gmail.com>
Evan Krall <krall@yelp.com>
Evelyn Xu <evelynhsu21@gmail.com>
Everett Toews <everett.toews@rackspace.com>
Fabio Falci <fabiofalci@gmail.com>
Fabrizio Soppelsa <fsoppelsa@mirantis.com>
Felix Hupfeld <felix@quobyte.com>
Felix Rabe <felix@rabe.io>
Flavio Crisciani <flavio.crisciani@docker.com>
Florian Klein <florian.klein@free.fr>
Foysal Iqbal <foysal.iqbal.fb@gmail.com>
Fred Lifton <fred.lifton@docker.com>
Frederick F. Kautz IV <fkautz@redhat.com>
Frederik Nordahl Jul Sabroe <frederikns@gmail.com>
Frieder Bluemle <frieder.bluemle@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com>
Gary Schaetz <gary@schaetzkc.com>
Genki Takiuchi <genki@s21g.com>
George MacRorie <gmacr31@gmail.com>
George Xie <georgexsh@gmail.com>
Gianluca Borello <g.borello@gmail.com>
Gildas Cuisinier <gildas.cuisinier@gcuisinier.net>
Gou Rao <gou@portworx.com>
Grant Reaber <grant.reaber@gmail.com>
Greg Pflaum <gpflaum@users.noreply.github.com>
Guilhem Lettron <guilhem+github@lettron.fr>
Guillaume J. Charmes <guillaume.charmes@docker.com>
gwx296173 <gaojing3@huawei.com>
Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com>
Hao Zhang <21521210@zju.edu.cn>
Harald Albers <github@albersweb.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn>
Henning Sprang <henning.sprang@gmail.com>
Henry N <henrynmail-github@yahoo.de>
Hernan Garcia <hernandanielg@gmail.com>
Hongbin Lu <hongbin034@gmail.com>
Hu Keping <hukeping@huawei.com>
Huayi Zhang <irachex@gmail.com>
huqun <huqun@zju.edu.cn>
Huu Nguyen <huu@prismskylabs.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Ian Campbell <ian.campbell@docker.com>
Ian Philpot <ian.philpot@microsoft.com>
Ignacio Capurro <icapurrofagian@gmail.com>
Ilya Dmitrichenko <errordeveloper@gmail.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Ilya Sotkov <ilya@sotkov.com>
Isabel Jimenez <contact.isabeljimenez@gmail.com>
Ivan Grcic <igrcic@gmail.com>
Ivan Markin <sw@nogoegst.net>
Jacob Atzen <jacob@jacobatzen.dk>
Jacob Tomlinson <jacob@tom.linson.uk>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Sanders <jsand@google.com>
James Nesbitt <james.nesbitt@wunderkraut.com>
James Turnbull <james@lovedthanlost.net>
Jamie Hannaford <jamie@limetree.org>
Jan Koprowski <jan.koprowski@gmail.com>
Jan Pazdziora <jpazdziora@redhat.com>
Jan-Jaap Driessen <janjaapdriessen@gmail.com>
Jana Radhakrishnan <mrjana@docker.com>
Jared Hocutt <jaredh@netapp.com>
Jasmine Hegman <jasmine@jhegman.com>
Jason Heiss <jheiss@aput.net>
Jason Plum <jplum@devonit.com>
Jay Kamat <github@jgkamat.33mail.com>
Jean-Pierre Huynh <jean-pierre.huynh@ounet.fr>
Jean-Pierre Huynh <jp@moogsoft.com>
Jeff Lindsay <progrium@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com>
Jeff Silberman <jsilberm@gmail.com>
Jeremy Chambers <jeremy@thehipbot.com>
Jeremy Unruh <jeremybunruh@gmail.com>
Jeremy Yallop <yallop@docker.com>
Jeroen Franse <jeroenfranse@gmail.com>
Jesse Adametz <jesseadametz@gmail.com>
Jessica Frazelle <jessfraz@google.com>
Jezeniel Zapanta <jpzapanta22@gmail.com>
Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
Jie Luo <luo612@zju.edu.cn>
Jilles Oldenbeuving <ojilles@gmail.com>
Jim Galasyn <jim.galasyn@docker.com>
Jimmy Leger <jimmy.leger@gmail.com>
Jimmy Song <rootsongjc@gmail.com>
jimmyxian <jimmyxian2004@yahoo.com.cn>
Joao Fernandes <joao.fernandes@docker.com>
Joe Doliner <jdoliner@pachyderm.io>
Joe Gordon <joe.gordon0@gmail.com>
Joel Handwell <joelhandwell@gmail.com>
Joey Geiger <jgeiger@gmail.com>
Joffrey F <joffrey@docker.com>
Johan Euphrosine <proppy@google.com>
Johannes 'fish' Ziemke <github@freigeist.org>
John Feminella <jxf@jxf.me>
John Harris <john@johnharris.io>
John Howard (VM) <John.Howard@microsoft.com>
John Laswell <john.n.laswell@gmail.com>
John Maguire <jmaguire@duosecurity.com>
John Mulhausen <john@docker.com>
John Starks <jostarks@microsoft.com>
John Stephens <johnstep@docker.com>
John Tims <john.k.tims@gmail.com>
John V. Martinez <jvmatl@gmail.com>
John Willis <john.willis@docker.com>
Jonathan Boulle <jonathanboulle@gmail.com>
Jonathan Lee <jonjohn1232009@gmail.com>
Jonathan Lomas <jonathan@floatinglomas.ca>
Jonathan McCrohan <jmccrohan@gmail.com>
Jonh Wendell <jonh.wendell@redhat.com>
Jordan Jennings <jjn2009@gmail.com>
Joseph Kern <jkern@semafour.net>
Josh Bodah <jb3689@yahoo.com>
Josh Chorlton <jchorlton@gmail.com>
Josh Hawn <josh.hawn@docker.com>
Josh Horwitz <horwitz@addthis.com>
Josh Soref <jsoref@gmail.com>
Julien Barbier <write0@gmail.com>
Julien Kassar <github@kassisol.com>
Julien Maitrehenry <julien.maitrehenry@me.com>
Justas Brazauskas <brazauskasjustas@gmail.com>
Justin Cormack <justin.cormack@docker.com>
Justin Simonelis <justin.p.simonelis@gmail.com>
Jyrki Puttonen <jyrkiput@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com>
Jörg Thalheim <joerg@higgsboson.tk>
Kai Blin <kai@samba.org>
Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
Kara Alexandra <kalexandra@us.ibm.com>
Kareem Khazem <karkhaz@karkhaz.com>
Karthik Nayak <Karthik.188@gmail.com>
Kat Samperi <kat.samperi@gmail.com>
Katie McLaughlin <katie@glasnt.com>
Ke Xu <leonhartx.k@gmail.com>
Kei Ohmura <ohmura.kei@gmail.com>
Keith Hudgins <greenman@greenman.org>
Ken Cochrane <kencochrane@gmail.com>
Ken ICHIKAWA <ichikawa.ken@jp.fujitsu.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Burke <kev@inburke.com>
Kevin Feyrer <kevin.feyrer@btinternet.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Kirsche <Kev.Kirsche+GitHub@gmail.com>
Kevin Meredith <kevin.m.meredith@gmail.com>
Kevin Richardson <kevin@kevinrichardson.co>
khaled souf <khaled.souf@gmail.com>
Kim Eik <kim@heldig.org>
Kir Kolyshkin <kolyshkin@gmail.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Krasi Georgiev <krasi@vip-consult.solutions>
Kris-Mikael Krister <krismikael@protonmail.com>
Kun Zhang <zkazure@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kyle Spiers <kyle@spiers.me>
Lachlan Cooper <lachlancooper@gmail.com>
Lai Jiangshan <jiangshanlai@gmail.com>
Lars Kellogg-Stedman <lars@redhat.com>
Laura Frank <ljfrank@gmail.com>
Laurent Erignoux <lerignoux@gmail.com>
Lei Jitang <leijitang@huawei.com>
Lennie <github@consolejunkie.net>
Leo Gallucci <elgalu3@gmail.com>
Lewis Daly <lewisdaly@me.com>
Li Yi <denverdino@gmail.com>
Li Yi <weiyuan.yl@alibaba-inc.com>
Liang-Chi Hsieh <viirya@gmail.com>
Lily Guo <lily.guo@docker.com>
Lin Lu <doraalin@163.com>
Linus Heckemann <lheckemann@twig-world.com>
Liping Xue <lipingxue@gmail.com>
Liron Levin <liron@twistlock.com>
liwenqi <vikilwq@zju.edu.cn>
lixiaobing10051267 <li.xiaobing1@zte.com.cn>
Lloyd Dewolf <foolswisdom@gmail.com>
Lorenzo Fontana <lo@linux.com>
Louis Opter <kalessin@kalessin.fr>
Luca Favatella <luca.favatella@erlang-solutions.com>
Luca Marturana <lucamarturana@gmail.com>
Lucas Chan <lucas-github@lucaschan.com>
Luka Hartwig <mail@lukahartwig.de>
Lukasz Zajaczkowski <Lukasz.Zajaczkowski@ts.fujitsu.com>
Lydell Manganti <LydellManganti@users.noreply.github.com>
Lénaïc Huard <lhuard@amadeus.com>
Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>
Mabin <bin.ma@huawei.com>
Madhav Puri <madhav.puri@gmail.com>
Madhu Venugopal <madhu@socketplane.io>
Malte Janduda <mail@janduda.net>
Manjunath A Kumatagi <mkumatag@in.ibm.com>
Mansi Nahar <mmn4185@rit.edu>
mapk0y <mapk0y@gmail.com>
Marc Bihlmaier <marc.bihlmaier@reddoxx.com>
Marco Mariani <marco.mariani@alterway.fr>
Marcus Martins <marcus@docker.com>
Marianna Tessel <mtesselh@gmail.com>
Marius Sturm <marius@graylog.com>
Mark Oates <fl0yd@me.com>
Martin Mosegaard Amdisen <martin.amdisen@praqma.com>
Mary Anthony <mary.anthony@docker.com>
Mason Malone <mason.malone@gmail.com>
Mateusz Major <apkd@users.noreply.github.com>
Matt Gucci <matt9ucci@gmail.com>
Matt Robenolt <matt@ydekproductions.com>
Matthew Heon <mheon@redhat.com>
Matthieu Hauglustaine <matt.hauglustaine@gmail.com>
Max Shytikov <mshytikov@gmail.com>
Maxime Petazzoni <max@signalfuse.com>
Mei ChunTao <mei.chuntao@zte.com.cn>
Micah Zoltu <micah@newrelic.com>
Michael A. Smith <michael@smith-li.com>
Michael Bridgen <mikeb@squaremobius.net>
Michael Crosby <michael@docker.com>
Michael Friis <friism@gmail.com>
Michael Irwin <mikesir87@gmail.com>
Michael Käufl <docker@c.michael-kaeufl.de>
Michael Prokop <github@michael-prokop.at>
Michael Scharf <github@scharf.gr>
Michael Spetsiotis <michael_spets@hotmail.com>
Michael Steinert <mike.steinert@gmail.com>
Michael West <mwest@mdsol.com>
Michal Minář <miminar@redhat.com>
Michał Czeraszkiewicz <czerasz@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com>
Mihuleacc Sergiu <mihuleac.sergiu@gmail.com>
Mike Brown <brownwm@us.ibm.com>
Mike Casas <mkcsas0@gmail.com>
Mike Danese <mikedanese@google.com>
Mike Dillon <mike@embody.org>
Mike Goelzer <mike.goelzer@docker.com>
Mike MacCana <mike.maccana@gmail.com>
mikelinjie <294893458@qq.com>
Mikhail Vasin <vasin@cloud-tv.ru>
Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com>
Mohammad Banikazemi <mb@us.ibm.com>
Mohammed Aaqib Ansari <maaquib@gmail.com>
Moorthy RS <rsmoorthy@gmail.com>
Morgan Bauer <mbauer@us.ibm.com>
Moysés Borges <moysesb@gmail.com>
Mrunal Patel <mrunalp@gmail.com>
muicoder <muicoder@gmail.com>
Muthukumar R <muthur@gmail.com>
Máximo Cuadros <mcuadros@gmail.com>
Nace Oroz <orkica@gmail.com>
Nahum Shalman <nshalman@omniti.com>
Nalin Dahyabhai <nalin@redhat.com>
Nassim 'Nass' Eddequiouaq <eddequiouaq.nassim@gmail.com>
Natalie Parker <nparker@omnifone.com>
Nate Brennand <nate.brennand@clever.com>
Nathan Hsieh <hsieh.nathan@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com>
Nathan McCauley <nathan.mccauley@docker.com>
Neil Peterson <neilpeterson@outlook.com>
Nicola Kabar <nicolaka@gmail.com>
Nicolas Borboën <ponsfrilus@gmail.com>
Nicolas De Loof <nicolas.deloof@gmail.com>
Nikhil Chawla <chawlanikhil24@gmail.com>
Nikolas Garofil <nikolas.garofil@uantwerpen.be>
Nikolay Milovanov <nmil@itransformers.net>
Nishant Totla <nishanttotla@gmail.com>
NIWA Hideyuki <niwa.niwa@nifty.ne.jp>
Noah Treuhaft <noah.treuhaft@docker.com>
O.S. Tezer <ostezer@gmail.com>
ohmystack <jun.jiang02@ele.me>
Olle Jonsson <olle.jonsson@gmail.com>
Otto Kekäläinen <otto@seravo.fi>
Ovidio Mallo <ovidio.mallo@gmail.com>
Pascal Borreli <pascal@borreli.com>
Patrick Böänziger <patrick.baenziger@bsi-software.com>
Patrick Hemmer <patrick.hemmer@gmail.com>
Patrick Lang <plang@microsoft.com>
Paul <paul9869@gmail.com>
Paul Kehrer <paul.l.kehrer@gmail.com>
Paul Lietar <paul@lietar.net>
Paul Weaver <pauweave@cisco.com>
Pavel Pospisil <pospispa@gmail.com>
Paweł Szczekutowicz <pszczekutowicz@gmail.com>
Peeyush Gupta <gpeeyush@linux.vnet.ibm.com>
Per Lundberg <per.lundberg@ecraft.com>
Peter Edge <peter.edge@gmail.com>
Peter Hsu <shhsu@microsoft.com>
Peter Jaffe <pjaffe@nevo.com>
Peter Nagy <xificurC@gmail.com>
Peter Salvatore <peter@psftw.com>
Peter Waller <p@pwaller.net>
Phil Estes <estesp@linux.vnet.ibm.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com>
pidster <pid@pidster.com>
pixelistik <pixelistik@users.noreply.github.com>
Pratik Karki <prertik@outlook.com>
Prayag Verma <prayag.verma@gmail.com>
Preston Cowley <preston.cowley@sony.com>
Pure White <daniel48@126.com>
Qiang Huang <h.huangqiang@huawei.com>
Qinglan Peng <qinglanpeng@zju.edu.cn>
qudongfang <qudongfang@gmail.com>
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Ray Tsang <rayt@google.com>
Reficul <xuzhenglun@gmail.com>
Remy Suen <remy.suen@gmail.com>
Renaud Gaubert <rgaubert@nvidia.com>
Ricardo N Feliciano <FelicianoTech@gmail.com>
Rich Moyse <rich@moyse.us>
Richard Mathie <richard.mathie@amey.co.uk>
Richard Scothern <richard.scothern@gmail.com>
Rick Wieman <git@rickw.nl>
Ritesh H Shukla <sritesh@vmware.com>
Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
Robert Wallis <smilingrob@gmail.com>
Robin Naundorf <r.naundorf@fh-muenster.de>
Robin Speekenbrink <robin@kingsquare.nl>
Rodolfo Ortiz <rodolfo.ortiz@definityfirst.com>
Rogelio Canedo <rcanedo@mappy.priv>
Roland Kammerer <roland.kammerer@linbit.com>
Roman Dudin <katrmr@gmail.com>
Rory Hunter <roryhunter2@gmail.com>
Ross Boucher <rboucher@gmail.com>
Rubens Figueiredo <r.figueiredo.52@gmail.com>
Ryan Belgrave <rmb1993@gmail.com>
Ryan Detzel <ryan.detzel@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sakeven Jiang <jc5930@sina.cn>
Sally O'Malley <somalley@redhat.com>
Sam Neirinck <sam@samneirinck.com>
Sambuddha Basu <sambuddhabasu1@gmail.com>
Samuel Karp <skarp@amazon.com>
Santhosh Manohar <santhosh@docker.com>
Scott Collier <emailscottcollier@gmail.com>
Sean Christopherson <sean.j.christopherson@intel.com>
Sean Rodman <srodman7689@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Sergey Tryuber <Sergeant007@users.noreply.github.com>
Serhat Gülçiçek <serhat25@gmail.com>
Sevki Hasirci <s@sevki.org>
Shaun Kaasten <shaunk@gmail.com>
Sheng Yang <sheng@yasker.org>
Shijiang Wei <mountkin@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com>
Shoubhik Bose <sbose78@gmail.com>
Shukui Yang <yangshukui@huawei.com>
Sian Lerk Lau <kiawin@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com>
sidharthamani <sid@rancher.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Simei He <hesimei@zju.edu.cn>
Simon Ferquel <simon.ferquel@docker.com>
Sindhu S <sindhus@live.in>
Slava Semushin <semushin@redhat.com>
Solomon Hykes <solomon@docker.com>
Song Gao <song@gao.io>
Spencer Brown <spencer@spencerbrown.org>
squeegels <1674195+squeegels@users.noreply.github.com>
Srini Brahmaroutu <srbrahma@us.ibm.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <scherer_stefan@icloud.com>
Stefan Weil <sw@weilnetz.de>
Stephen Day <stevvooe@gmail.com>
Stephen Rust <srust@blockbridge.com>
Steve Durrheimer <s.durrheimer@gmail.com>
Steven Burgess <steven.a.burgess@hotmail.com>
Subhajit Ghosh <isubuz.g@gmail.com>
Sun Jianbo <wonderflow.sun@gmail.com>
Sungwon Han <sungwon.han@navercorp.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
Sébastien HOUZÉ <cto@verylastroom.com>
T K Sourabh <sourabhtk37@gmail.com>
TAGOMORI Satoshi <tagomoris@gmail.com>
Taylor Jones <monitorjbl@gmail.com>
Thatcher Peskens <thatcher@docker.com>
Thomas Gazagnaire <thomas@gazagnaire.org>
Thomas Krzero <thomas.kovatchitch@gmail.com>
Thomas Leonard <thomas.leonard@docker.com>
Thomas Léveil <thomasleveil@gmail.com>
Thomas Riccardi <riccardi@systran.fr>
Thomas Swift <tgs242@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tianyi Wang <capkurmagati@gmail.com>
Tibor Vass <teabee89@gmail.com>
Tim Dettrick <t.dettrick@uq.edu.au>
Tim Hockin <thockin@google.com>
Tim Smith <timbot@google.com>
Tim Waugh <twaugh@redhat.com>
Tim Wraight <tim.wraight@tangentlabs.co.uk>
timfeirg <kkcocogogo@gmail.com>
Timothy Hobbs <timothyhobbs@seznam.cz>
Tobias Bradtke <webwurst@gmail.com>
Tobias Gesellchen <tobias@gesellix.de>
Todd Whiteman <todd.whiteman@joyent.com>
Tom Denham <tom@tomdee.co.uk>
Tom Fotherby <tom+github@peopleperhour.com>
Tom X. Tobin <tomxtobin@tomxtobin.com>
Tomas Tomecek <ttomecek@redhat.com>
Tomasz Kopczynski <tomek@kopczynski.net.pl>
Tomáš Hrčka <thrcka@redhat.com>
Tony Abboud <tdabboud@hotmail.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Trapier Marshall <trapier.marshall@docker.com>
Travis Cline <travis.cline@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tycho Andersen <tycho@docker.com>
Tycho Andersen <tycho@tycho.ws>
uhayate <uhayate.gong@daocloud.io>
Umesh Yadav <umesh4257@gmail.com>
Valentin Lorentz <progval+git@progval.net>
Veres Lajos <vlajos@gmail.com>
Victor Vieux <victor.vieux@docker.com>
Victoria Bialas <victoria.bialas@docker.com>
Viktor Stanchev <me@viktorstanchev.com>
Vincent Batts <vbatts@redhat.com>
Vincent Bernat <Vincent.Bernat@exoscale.ch>
Vincent Demeester <vincent.demeester@docker.com>
Vincent Woo <me@vincentwoo.com>
Vishnu Kannan <vishnuk@google.com>
Vivek Goyal <vgoyal@redhat.com>
Wang Jie <wangjie5@chinaskycloud.com>
Wang Long <long.wanglong@huawei.com>
Wang Ping <present.wp@icloud.com>
Wang Xing <hzwangxing@corp.netease.com>
Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wataru Ishida <ishida.wataru@lab.ntt.co.jp>
Wayne Song <wsong@docker.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenzhi Liang <wenzhi.liang@gmail.com>
Wes Morgan <cap10morgan@gmail.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
William Henry <whenry@redhat.com>
Xianglin Gao <xlgao@zju.edu.cn>
Xinbo Weng <xihuanbo_0521@zju.edu.cn>
Xuecong Liao <satorulogic@gmail.com>
Yan Feng <yanfeng2@huawei.com>
Yanqiang Miao <miao.yanqiang@zte.com.cn>
Yassine Tijani <yasstij11@gmail.com>
Yi EungJun <eungjun.yi@navercorp.com>
Ying Li <ying.li@docker.com>
Yong Tang <yong.tang.github@outlook.com>
Yosef Fertel <yfertel@gmail.com>
Yu Peng <yu.peng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yunxiang Huang <hyxqshk@vip.qq.com>
zebrilee <zebrilee@gmail.com>
Zhang Kun <zkazure@gmail.com>
Zhang Wei <zhangwei555@huawei.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com>
zhenghenghuo <zhenghenghuo@zju.edu.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Álex González <agonzalezro@gmail.com>
Álvaro Lázaro <alvaro.lazaro.g@gmail.com>
Átila Camurça Alves <camurca.home@gmail.com>
徐俊杰 <paco.xu@daocloud.io>

191
vendor/github.com/docker/cli/LICENSE generated vendored Normal file
View file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

19
vendor/github.com/docker/cli/NOTICE generated vendored Normal file
View file

@ -0,0 +1,19 @@
Docker
Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

View file

@ -0,0 +1,341 @@
package configfile
import (
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/docker/cli/cli/config/credentials"
"github.com/docker/cli/opts"
"github.com/docker/docker/api/types"
"github.com/pkg/errors"
)
const (
// This constant is only used for really old config files when the
// URL wasn't saved as part of the config file and it was just
// assumed to be this value.
defaultIndexServer = "https://index.docker.io/v1/"
)
// ConfigFile ~/.docker/config.json file info
type ConfigFile struct {
AuthConfigs map[string]types.AuthConfig `json:"auths"`
HTTPHeaders map[string]string `json:"HttpHeaders,omitempty"`
PsFormat string `json:"psFormat,omitempty"`
ImagesFormat string `json:"imagesFormat,omitempty"`
NetworksFormat string `json:"networksFormat,omitempty"`
PluginsFormat string `json:"pluginsFormat,omitempty"`
VolumesFormat string `json:"volumesFormat,omitempty"`
StatsFormat string `json:"statsFormat,omitempty"`
DetachKeys string `json:"detachKeys,omitempty"`
CredentialsStore string `json:"credsStore,omitempty"`
CredentialHelpers map[string]string `json:"credHelpers,omitempty"`
Filename string `json:"-"` // Note: for internal use only
ServiceInspectFormat string `json:"serviceInspectFormat,omitempty"`
ServicesFormat string `json:"servicesFormat,omitempty"`
TasksFormat string `json:"tasksFormat,omitempty"`
SecretFormat string `json:"secretFormat,omitempty"`
ConfigFormat string `json:"configFormat,omitempty"`
NodesFormat string `json:"nodesFormat,omitempty"`
PruneFilters []string `json:"pruneFilters,omitempty"`
Proxies map[string]ProxyConfig `json:"proxies,omitempty"`
Experimental string `json:"experimental,omitempty"`
StackOrchestrator string `json:"stackOrchestrator,omitempty"`
Kubernetes *KubernetesConfig `json:"kubernetes,omitempty"`
}
// ProxyConfig contains proxy configuration settings
type ProxyConfig struct {
HTTPProxy string `json:"httpProxy,omitempty"`
HTTPSProxy string `json:"httpsProxy,omitempty"`
NoProxy string `json:"noProxy,omitempty"`
FTPProxy string `json:"ftpProxy,omitempty"`
}
// KubernetesConfig contains Kubernetes orchestrator settings
type KubernetesConfig struct {
AllNamespaces string `json:"allNamespaces,omitempty"`
}
// New initializes an empty configuration file for the given filename 'fn'
func New(fn string) *ConfigFile {
return &ConfigFile{
AuthConfigs: make(map[string]types.AuthConfig),
HTTPHeaders: make(map[string]string),
Filename: fn,
}
}
// LegacyLoadFromReader reads the non-nested configuration data given and sets up the
// auth config information with given directory and populates the receiver object
func (configFile *ConfigFile) LegacyLoadFromReader(configData io.Reader) error {
b, err := ioutil.ReadAll(configData)
if err != nil {
return err
}
if err := json.Unmarshal(b, &configFile.AuthConfigs); err != nil {
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return errors.Errorf("The Auth config file is empty")
}
authConfig := types.AuthConfig{}
origAuth := strings.Split(arr[0], " = ")
if len(origAuth) != 2 {
return errors.Errorf("Invalid Auth config file")
}
authConfig.Username, authConfig.Password, err = decodeAuth(origAuth[1])
if err != nil {
return err
}
authConfig.ServerAddress = defaultIndexServer
configFile.AuthConfigs[defaultIndexServer] = authConfig
} else {
for k, authConfig := range configFile.AuthConfigs {
authConfig.Username, authConfig.Password, err = decodeAuth(authConfig.Auth)
if err != nil {
return err
}
authConfig.Auth = ""
authConfig.ServerAddress = k
configFile.AuthConfigs[k] = authConfig
}
}
return nil
}
// LoadFromReader reads the configuration data given and sets up the auth config
// information with given directory and populates the receiver object
func (configFile *ConfigFile) LoadFromReader(configData io.Reader) error {
if err := json.NewDecoder(configData).Decode(&configFile); err != nil {
return err
}
var err error
for addr, ac := range configFile.AuthConfigs {
ac.Username, ac.Password, err = decodeAuth(ac.Auth)
if err != nil {
return err
}
ac.Auth = ""
ac.ServerAddress = addr
configFile.AuthConfigs[addr] = ac
}
return checkKubernetesConfiguration(configFile.Kubernetes)
}
// ContainsAuth returns whether there is authentication configured
// in this file or not.
func (configFile *ConfigFile) ContainsAuth() bool {
return configFile.CredentialsStore != "" ||
len(configFile.CredentialHelpers) > 0 ||
len(configFile.AuthConfigs) > 0
}
// GetAuthConfigs returns the mapping of repo to auth configuration
func (configFile *ConfigFile) GetAuthConfigs() map[string]types.AuthConfig {
return configFile.AuthConfigs
}
// SaveToWriter encodes and writes out all the authorization information to
// the given writer
func (configFile *ConfigFile) SaveToWriter(writer io.Writer) error {
// Encode sensitive data into a new/temp struct
tmpAuthConfigs := make(map[string]types.AuthConfig, len(configFile.AuthConfigs))
for k, authConfig := range configFile.AuthConfigs {
authCopy := authConfig
// encode and save the authstring, while blanking out the original fields
authCopy.Auth = encodeAuth(&authCopy)
authCopy.Username = ""
authCopy.Password = ""
authCopy.ServerAddress = ""
tmpAuthConfigs[k] = authCopy
}
saveAuthConfigs := configFile.AuthConfigs
configFile.AuthConfigs = tmpAuthConfigs
defer func() { configFile.AuthConfigs = saveAuthConfigs }()
data, err := json.MarshalIndent(configFile, "", "\t")
if err != nil {
return err
}
_, err = writer.Write(data)
return err
}
// Save encodes and writes out all the authorization information
func (configFile *ConfigFile) Save() error {
if configFile.Filename == "" {
return errors.Errorf("Can't save config with empty filename")
}
dir := filepath.Dir(configFile.Filename)
if err := os.MkdirAll(dir, 0700); err != nil {
return err
}
temp, err := ioutil.TempFile(dir, filepath.Base(configFile.Filename))
if err != nil {
return err
}
err = configFile.SaveToWriter(temp)
temp.Close()
if err != nil {
os.Remove(temp.Name())
return err
}
return os.Rename(temp.Name(), configFile.Filename)
}
// ParseProxyConfig computes proxy configuration by retrieving the config for the provided host and
// then checking this against any environment variables provided to the container
func (configFile *ConfigFile) ParseProxyConfig(host string, runOpts []string) map[string]*string {
var cfgKey string
if _, ok := configFile.Proxies[host]; !ok {
cfgKey = "default"
} else {
cfgKey = host
}
config := configFile.Proxies[cfgKey]
permitted := map[string]*string{
"HTTP_PROXY": &config.HTTPProxy,
"HTTPS_PROXY": &config.HTTPSProxy,
"NO_PROXY": &config.NoProxy,
"FTP_PROXY": &config.FTPProxy,
}
m := opts.ConvertKVStringsToMapWithNil(runOpts)
for k := range permitted {
if *permitted[k] == "" {
continue
}
if _, ok := m[k]; !ok {
m[k] = permitted[k]
}
if _, ok := m[strings.ToLower(k)]; !ok {
m[strings.ToLower(k)] = permitted[k]
}
}
return m
}
// encodeAuth creates a base64 encoded string containing authorization information
func encodeAuth(authConfig *types.AuthConfig) string {
if authConfig.Username == "" && authConfig.Password == "" {
return ""
}
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
base64.StdEncoding.Encode(encoded, msg)
return string(encoded)
}
// decodeAuth decodes a base64 encoded string and returns username and password
func decodeAuth(authStr string) (string, string, error) {
if authStr == "" {
return "", "", nil
}
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
n, err := base64.StdEncoding.Decode(decoded, authByte)
if err != nil {
return "", "", err
}
if n > decLen {
return "", "", errors.Errorf("Something went wrong decoding auth config")
}
arr := strings.SplitN(string(decoded), ":", 2)
if len(arr) != 2 {
return "", "", errors.Errorf("Invalid auth configuration file")
}
password := strings.Trim(arr[1], "\x00")
return arr[0], password, nil
}
// GetCredentialsStore returns a new credentials store from the settings in the
// configuration file
func (configFile *ConfigFile) GetCredentialsStore(registryHostname string) credentials.Store {
if helper := getConfiguredCredentialStore(configFile, registryHostname); helper != "" {
return newNativeStore(configFile, helper)
}
return credentials.NewFileStore(configFile)
}
// var for unit testing.
var newNativeStore = func(configFile *ConfigFile, helperSuffix string) credentials.Store {
return credentials.NewNativeStore(configFile, helperSuffix)
}
// GetAuthConfig for a repository from the credential store
func (configFile *ConfigFile) GetAuthConfig(registryHostname string) (types.AuthConfig, error) {
return configFile.GetCredentialsStore(registryHostname).Get(registryHostname)
}
// getConfiguredCredentialStore returns the credential helper configured for the
// given registry, the default credsStore, or the empty string if neither are
// configured.
func getConfiguredCredentialStore(c *ConfigFile, registryHostname string) string {
if c.CredentialHelpers != nil && registryHostname != "" {
if helper, exists := c.CredentialHelpers[registryHostname]; exists {
return helper
}
}
return c.CredentialsStore
}
// GetAllCredentials returns all of the credentials stored in all of the
// configured credential stores.
func (configFile *ConfigFile) GetAllCredentials() (map[string]types.AuthConfig, error) {
auths := make(map[string]types.AuthConfig)
addAll := func(from map[string]types.AuthConfig) {
for reg, ac := range from {
auths[reg] = ac
}
}
defaultStore := configFile.GetCredentialsStore("")
newAuths, err := defaultStore.GetAll()
if err != nil {
return nil, err
}
addAll(newAuths)
// Auth configs from a registry-specific helper should override those from the default store.
for registryHostname := range configFile.CredentialHelpers {
newAuth, err := configFile.GetAuthConfig(registryHostname)
if err != nil {
return nil, err
}
auths[registryHostname] = newAuth
}
return auths, nil
}
// GetFilename returns the file name that this config file is based on.
func (configFile *ConfigFile) GetFilename() string {
return configFile.Filename
}
func checkKubernetesConfiguration(kubeConfig *KubernetesConfig) error {
if kubeConfig == nil {
return nil
}
switch kubeConfig.AllNamespaces {
case "":
case "enabled":
case "disabled":
default:
return fmt.Errorf("invalid 'kubernetes.allNamespaces' value, should be 'enabled' or 'disabled': %s", kubeConfig.AllNamespaces)
}
return nil
}
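Note: the sketch below is not part of the vendored file; it only illustrates how a caller might drive the configfile API above. The config path and registry host are made-up values.
package main
import (
	"fmt"
	"os"
	"github.com/docker/cli/cli/config/configfile"
)
func main() {
	// Hypothetical path; the real CLI derives it from the user's home directory.
	path := "/home/user/.docker/config.json"
	cfg := configfile.New(path)
	if f, err := os.Open(path); err == nil {
		// LoadFromReader decodes the JSON and base64-decodes the stored auth entries.
		if err := cfg.LoadFromReader(f); err != nil {
			fmt.Fprintln(os.Stderr, "load:", err)
		}
		f.Close()
	}
	// GetAuthConfig consults a configured credential helper if one exists,
	// otherwise it falls back to the plain-text file store.
	auth, err := cfg.GetAuthConfig("registry.example.com")
	if err != nil {
		fmt.Fprintln(os.Stderr, "auth:", err)
		return
	}
	fmt.Println("username:", auth.Username)
}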

View file

@ -0,0 +1,17 @@
package credentials
import (
"github.com/docker/docker/api/types"
)
// Store is the interface that any credentials store must implement.
type Store interface {
// Erase removes credentials from the store for a given server.
Erase(serverAddress string) error
// Get retrieves credentials from the store for a given server.
Get(serverAddress string) (types.AuthConfig, error)
// GetAll retrieves all the credentials from the store.
GetAll() (map[string]types.AuthConfig, error)
// Store saves credentials in the store.
Store(authConfig types.AuthConfig) error
}

View file

@ -0,0 +1,21 @@
package credentials
import (
"os/exec"
)
// DetectDefaultStore returns the default credentials store for the platform if
// the store executable is available.
func DetectDefaultStore(store string) string {
platformDefault := defaultCredentialsStore()
// user defined or no default for platform
if store != "" || platformDefault == "" {
return store
}
if _, err := exec.LookPath(remoteCredentialsPrefix + platformDefault); err == nil {
return platformDefault
}
return ""
}

View file

@ -0,0 +1,5 @@
package credentials
func defaultCredentialsStore() string {
return "osxkeychain"
}

View file

@ -0,0 +1,13 @@
package credentials
import (
"os/exec"
)
func defaultCredentialsStore() string {
if _, err := exec.LookPath("pass"); err == nil {
return "pass"
}
return "secretservice"
}

View file

@ -0,0 +1,7 @@
// +build !windows,!darwin,!linux
package credentials
func defaultCredentialsStore() string {
return ""
}

View file

@ -0,0 +1,5 @@
package credentials
func defaultCredentialsStore() string {
return "wincred"
}

View file

@ -0,0 +1,64 @@
package credentials
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/registry"
)
type store interface {
Save() error
GetAuthConfigs() map[string]types.AuthConfig
GetFilename() string
}
// fileStore implements a credentials store using
// the docker configuration file to keep the credentials in plain text.
type fileStore struct {
file store
}
// NewFileStore creates a new file credentials store.
func NewFileStore(file store) Store {
return &fileStore{file: file}
}
// Erase removes the given credentials from the file store.
func (c *fileStore) Erase(serverAddress string) error {
delete(c.file.GetAuthConfigs(), serverAddress)
return c.file.Save()
}
// Get retrieves credentials for a specific server from the file store.
func (c *fileStore) Get(serverAddress string) (types.AuthConfig, error) {
authConfig, ok := c.file.GetAuthConfigs()[serverAddress]
if !ok {
// Maybe they have a legacy config file, we will iterate the keys converting
// them to the new format and testing
for r, ac := range c.file.GetAuthConfigs() {
if serverAddress == registry.ConvertToHostname(r) {
return ac, nil
}
}
authConfig = types.AuthConfig{}
}
return authConfig, nil
}
func (c *fileStore) GetAll() (map[string]types.AuthConfig, error) {
return c.file.GetAuthConfigs(), nil
}
// Store saves the given credentials in the file store.
func (c *fileStore) Store(authConfig types.AuthConfig) error {
c.file.GetAuthConfigs()[authConfig.ServerAddress] = authConfig
return c.file.Save()
}
func (c *fileStore) GetFilename() string {
return c.file.GetFilename()
}
func (c *fileStore) IsFileStore() bool {
return true
}

View file

@ -0,0 +1,143 @@
package credentials
import (
"github.com/docker/docker-credential-helpers/client"
"github.com/docker/docker-credential-helpers/credentials"
"github.com/docker/docker/api/types"
)
const (
remoteCredentialsPrefix = "docker-credential-"
tokenUsername = "<token>"
)
// nativeStore implements a credentials store
// using native keychain to keep credentials secure.
// It piggybacks into a file store to keep users' emails.
type nativeStore struct {
programFunc client.ProgramFunc
fileStore Store
}
// NewNativeStore creates a new native store that
// uses a remote helper program to manage credentials.
func NewNativeStore(file store, helperSuffix string) Store {
name := remoteCredentialsPrefix + helperSuffix
return &nativeStore{
programFunc: client.NewShellProgramFunc(name),
fileStore: NewFileStore(file),
}
}
// Erase removes the given credentials from the native store.
func (c *nativeStore) Erase(serverAddress string) error {
if err := client.Erase(c.programFunc, serverAddress); err != nil {
return err
}
// Fallback to plain text store to remove email
return c.fileStore.Erase(serverAddress)
}
// Get retrieves credentials for a specific server from the native store.
func (c *nativeStore) Get(serverAddress string) (types.AuthConfig, error) {
// load the user email if it exists, or an empty auth config.
auth, _ := c.fileStore.Get(serverAddress)
creds, err := c.getCredentialsFromStore(serverAddress)
if err != nil {
return auth, err
}
auth.Username = creds.Username
auth.IdentityToken = creds.IdentityToken
auth.Password = creds.Password
return auth, nil
}
// GetAll retrieves all the credentials from the native store.
func (c *nativeStore) GetAll() (map[string]types.AuthConfig, error) {
auths, err := c.listCredentialsInStore()
if err != nil {
return nil, err
}
// Emails are only stored in the file store.
// This call can be safely eliminated when emails are removed.
fileConfigs, _ := c.fileStore.GetAll()
authConfigs := make(map[string]types.AuthConfig)
for registry := range auths {
creds, err := c.getCredentialsFromStore(registry)
if err != nil {
return nil, err
}
ac := fileConfigs[registry] // might contain Email
ac.Username = creds.Username
ac.Password = creds.Password
ac.IdentityToken = creds.IdentityToken
authConfigs[registry] = ac
}
return authConfigs, nil
}
// Store saves the given credentials in the file store.
func (c *nativeStore) Store(authConfig types.AuthConfig) error {
if err := c.storeCredentialsInStore(authConfig); err != nil {
return err
}
authConfig.Username = ""
authConfig.Password = ""
authConfig.IdentityToken = ""
// Fallback to old credential in plain text to save only the email
return c.fileStore.Store(authConfig)
}
// storeCredentialsInStore executes the command to store the credentials in the native store.
func (c *nativeStore) storeCredentialsInStore(config types.AuthConfig) error {
creds := &credentials.Credentials{
ServerURL: config.ServerAddress,
Username: config.Username,
Secret: config.Password,
}
if config.IdentityToken != "" {
creds.Username = tokenUsername
creds.Secret = config.IdentityToken
}
return client.Store(c.programFunc, creds)
}
// getCredentialsFromStore executes the command to get the credentials from the native store.
func (c *nativeStore) getCredentialsFromStore(serverAddress string) (types.AuthConfig, error) {
var ret types.AuthConfig
creds, err := client.Get(c.programFunc, serverAddress)
if err != nil {
if credentials.IsErrCredentialsNotFound(err) {
// do not return an error if the credentials are not
// in the keychain. Let docker ask for new credentials.
return ret, nil
}
return ret, err
}
if creds.Username == tokenUsername {
ret.IdentityToken = creds.Secret
} else {
ret.Password = creds.Secret
ret.Username = creds.Username
}
ret.ServerAddress = serverAddress
return ret, nil
}
// listCredentialsInStore returns a listing of stored credentials as a map of
// URL -> username.
func (c *nativeStore) listCredentialsInStore() (map[string]string, error) {
return client.List(c.programFunc)
}
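Note: the following is an illustrative sketch, not part of the vendored code. It shows how the store selection described above plays out for a made-up registry; the config path and credentials are placeholders.
package main
import (
	"log"
	"github.com/docker/cli/cli/config/configfile"
	"github.com/docker/docker/api/types"
)
func main() {
	cfg := configfile.New("/tmp/config.json") // placeholder path
	// GetCredentialsStore prefers a per-registry credHelpers entry, then the
	// global credsStore setting, and finally the plain-text file store.
	store := cfg.GetCredentialsStore("registry.example.com")
	err := store.Store(types.AuthConfig{
		ServerAddress: "registry.example.com",
		Username:      "alice",
		Password:      "s3cret",
	})
	if err != nil {
		// With a native helper configured this shells out to a
		// docker-credential-<suffix> binary; without one, the credentials are
		// base64-encoded into config.json by the file store.
		log.Fatal(err)
	}
}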

98
vendor/github.com/docker/cli/opts/config.go generated vendored Normal file
View file

@ -0,0 +1,98 @@
package opts
import (
"encoding/csv"
"fmt"
"os"
"strconv"
"strings"
swarmtypes "github.com/docker/docker/api/types/swarm"
)
// ConfigOpt is a Value type for parsing configs
type ConfigOpt struct {
values []*swarmtypes.ConfigReference
}
// Set a new config value
func (o *ConfigOpt) Set(value string) error {
csvReader := csv.NewReader(strings.NewReader(value))
fields, err := csvReader.Read()
if err != nil {
return err
}
options := &swarmtypes.ConfigReference{
File: &swarmtypes.ConfigReferenceFileTarget{
UID: "0",
GID: "0",
Mode: 0444,
},
}
// support a simple syntax of --config foo
if len(fields) == 1 {
options.File.Name = fields[0]
options.ConfigName = fields[0]
o.values = append(o.values, options)
return nil
}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
key := strings.ToLower(parts[0])
if len(parts) != 2 {
return fmt.Errorf("invalid field '%s' must be a key=value pair", field)
}
value := parts[1]
switch key {
case "source", "src":
options.ConfigName = value
case "target":
options.File.Name = value
case "uid":
options.File.UID = value
case "gid":
options.File.GID = value
case "mode":
m, err := strconv.ParseUint(value, 0, 32)
if err != nil {
return fmt.Errorf("invalid mode specified: %v", err)
}
options.File.Mode = os.FileMode(m)
default:
return fmt.Errorf("invalid field in config request: %s", key)
}
}
if options.ConfigName == "" {
return fmt.Errorf("source is required")
}
o.values = append(o.values, options)
return nil
}
// Type returns the type of this option
func (o *ConfigOpt) Type() string {
return "config"
}
// String returns a string repr of this option
func (o *ConfigOpt) String() string {
configs := []string{}
for _, config := range o.values {
repr := fmt.Sprintf("%s -> %s", config.ConfigName, config.File.Name)
configs = append(configs, repr)
}
return strings.Join(configs, ", ")
}
// Value returns the config requests
func (o *ConfigOpt) Value() []*swarmtypes.ConfigReference {
return o.values
}

64
vendor/github.com/docker/cli/opts/duration.go generated vendored Normal file
View file

@ -0,0 +1,64 @@
package opts
import (
"time"
"github.com/pkg/errors"
)
// PositiveDurationOpt is an option type for time.Duration that uses a pointer.
// It behaves similarly to DurationOpt but only allows positive duration values.
type PositiveDurationOpt struct {
DurationOpt
}
// Set a new value on the option. Setting a negative duration value will cause
// an error to be returned.
func (d *PositiveDurationOpt) Set(s string) error {
err := d.DurationOpt.Set(s)
if err != nil {
return err
}
if *d.DurationOpt.value < 0 {
return errors.Errorf("duration cannot be negative")
}
return nil
}
// DurationOpt is an option type for time.Duration that uses a pointer. This
// allows us to get nil values outside, instead of defaulting to 0
type DurationOpt struct {
value *time.Duration
}
// NewDurationOpt creates a DurationOpt with the specified duration
func NewDurationOpt(value *time.Duration) *DurationOpt {
return &DurationOpt{
value: value,
}
}
// Set a new value on the option
func (d *DurationOpt) Set(s string) error {
v, err := time.ParseDuration(s)
d.value = &v
return err
}
// Type returns the type of this option, which will be displayed in `--help` output
func (d *DurationOpt) Type() string {
return "duration"
}
// String returns a string repr of this option
func (d *DurationOpt) String() string {
if d.value != nil {
return d.value.String()
}
return ""
}
// Value returns the time.Duration
func (d *DurationOpt) Value() *time.Duration {
return d.value
}

46
vendor/github.com/docker/cli/opts/env.go generated vendored Normal file
View file

@ -0,0 +1,46 @@
package opts
import (
"fmt"
"os"
"runtime"
"strings"
)
// ValidateEnv validates an environment variable and returns it.
// If no value is specified, it returns the current value using os.Getenv.
//
// As with ParseEnvFile and related to #16585, environment variable names
// are not validated whatsoever; it's up to the application inside docker
// to validate them or not.
//
// The only validation here is to check if name is empty, per #25099
func ValidateEnv(val string) (string, error) {
arr := strings.Split(val, "=")
if arr[0] == "" {
return "", fmt.Errorf("invalid environment variable: %s", val)
}
if len(arr) > 1 {
return val, nil
}
if !doesEnvExist(val) {
return val, nil
}
return fmt.Sprintf("%s=%s", val, os.Getenv(val)), nil
}
func doesEnvExist(name string) bool {
for _, entry := range os.Environ() {
parts := strings.SplitN(entry, "=", 2)
if runtime.GOOS == "windows" {
// Environment variables are case-insensitive on Windows. PaTh, path and PATH are equivalent.
if strings.EqualFold(parts[0], name) {
return true
}
}
if parts[0] == name {
return true
}
}
return false
}
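Note: a small sketch (not part of the vendored file) of the behaviour documented above; the variable names and values are examples only.
package main
import (
	"fmt"
	"os"
	"github.com/docker/cli/opts"
)
func main() {
	os.Setenv("TERM", "xterm") // example value, just so the lookup below succeeds
	for _, v := range []string{"FOO=bar", "TERM", "SOME_UNSET_NAME"} {
		out, err := opts.ValidateEnv(v)
		// "FOO=bar" passes through, "TERM" expands to "TERM=xterm",
		// and an unset bare name is returned unchanged.
		fmt.Println(out, err)
	}
}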

22
vendor/github.com/docker/cli/opts/envfile.go generated vendored Normal file
View file

@ -0,0 +1,22 @@
package opts
import (
"os"
)
// ParseEnvFile reads a file with environment variables enumerated by lines
//
// ``Environment variable names used by the utilities in the Shell and
// Utilities volume of IEEE Std 1003.1-2001 consist solely of uppercase
// letters, digits, and the '_' (underscore) from the characters defined in
// Portable Character Set and do not begin with a digit. *But*, other
// characters may be permitted by an implementation; applications shall
// tolerate the presence of such names.''
// -- http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap08.html
//
// As of #16585, it's up to the application inside docker to validate
// environment variables or not; that's why we just strip leading whitespace
// and nothing more.
func ParseEnvFile(filename string) ([]string, error) {
return parseKeyValueFile(filename, os.LookupEnv)
}

77
vendor/github.com/docker/cli/opts/file.go generated vendored Normal file
View file

@ -0,0 +1,77 @@
package opts
import (
"bufio"
"bytes"
"fmt"
"os"
"strings"
"unicode"
"unicode/utf8"
)
var whiteSpaces = " \t"
// ErrBadKey typed error for bad environment variable
type ErrBadKey struct {
msg string
}
func (e ErrBadKey) Error() string {
return fmt.Sprintf("poorly formatted environment: %s", e.msg)
}
func parseKeyValueFile(filename string, emptyFn func(string) (string, bool)) ([]string, error) {
fh, err := os.Open(filename)
if err != nil {
return []string{}, err
}
defer fh.Close()
lines := []string{}
scanner := bufio.NewScanner(fh)
currentLine := 0
utf8bom := []byte{0xEF, 0xBB, 0xBF}
for scanner.Scan() {
scannedBytes := scanner.Bytes()
if !utf8.Valid(scannedBytes) {
return []string{}, fmt.Errorf("env file %s contains invalid utf8 bytes at line %d: %v", filename, currentLine+1, scannedBytes)
}
// We trim UTF8 BOM
if currentLine == 0 {
scannedBytes = bytes.TrimPrefix(scannedBytes, utf8bom)
}
// trim the line from all leading whitespace first
line := strings.TrimLeftFunc(string(scannedBytes), unicode.IsSpace)
currentLine++
// line is not empty, and not starting with '#'
if len(line) > 0 && !strings.HasPrefix(line, "#") {
data := strings.SplitN(line, "=", 2)
// trim the front of a variable, but nothing else
variable := strings.TrimLeft(data[0], whiteSpaces)
if strings.ContainsAny(variable, whiteSpaces) {
return []string{}, ErrBadKey{fmt.Sprintf("variable '%s' has white spaces", variable)}
}
if len(variable) == 0 {
return []string{}, ErrBadKey{fmt.Sprintf("no variable name on line '%s'", line)}
}
if len(data) > 1 {
// pass the value through, no trimming
lines = append(lines, fmt.Sprintf("%s=%s", variable, data[1]))
} else {
var value string
var present bool
if emptyFn != nil {
value, present = emptyFn(line)
}
if present {
// if only a pass-through variable is given, clean it up.
lines = append(lines, fmt.Sprintf("%s=%s", strings.TrimSpace(line), value))
}
}
}
}
return lines, scanner.Err()
}
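Note: the sketch below is not part of the vendored file; it exercises ParseEnvFile (which wraps parseKeyValueFile above) against a throwaway file with made-up contents.
package main
import (
	"fmt"
	"io/ioutil"
	"os"
	"github.com/docker/cli/opts"
)
func main() {
	tmp, err := ioutil.TempFile("", "envfile")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	tmp.WriteString("# a comment\nFOO=bar\nPATH\n")
	tmp.Close()
	// The comment is skipped, FOO=bar passes through with its value untrimmed,
	// and the bare PATH line is resolved from the caller's environment.
	vars, err := opts.ParseEnvFile(tmp.Name())
	fmt.Println(vars, err)
}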

165
vendor/github.com/docker/cli/opts/hosts.go generated vendored Normal file
View file

@ -0,0 +1,165 @@
package opts
import (
"fmt"
"net"
"net/url"
"strconv"
"strings"
)
var (
// DefaultHTTPPort Default HTTP Port used if only the protocol is provided to -H flag e.g. dockerd -H tcp://
// These are the IANA registered port numbers for use with Docker
// see http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=docker
DefaultHTTPPort = 2375 // Default HTTP Port
// DefaultTLSHTTPPort Default HTTP Port used when TLS enabled
DefaultTLSHTTPPort = 2376 // Default TLS encrypted HTTP Port
// DefaultUnixSocket Path for the unix socket.
// Docker daemon by default always listens on the default unix socket
DefaultUnixSocket = "/var/run/docker.sock"
// DefaultTCPHost constant defines the default host string used by docker on Windows
DefaultTCPHost = fmt.Sprintf("tcp://%s:%d", DefaultHTTPHost, DefaultHTTPPort)
// DefaultTLSHost constant defines the default host string used by docker for TLS sockets
DefaultTLSHost = fmt.Sprintf("tcp://%s:%d", DefaultHTTPHost, DefaultTLSHTTPPort)
// DefaultNamedPipe defines the default named pipe used by docker on Windows
DefaultNamedPipe = `//./pipe/docker_engine`
)
// ValidateHost validates that the specified string is a valid host and returns it.
func ValidateHost(val string) (string, error) {
host := strings.TrimSpace(val)
// The empty string means default and is not handled by parseDockerDaemonHost
if host != "" {
_, err := parseDockerDaemonHost(host)
if err != nil {
return val, err
}
}
// Note: unlike most flag validators, we don't return the mutated value here
// we need to know what the user entered later (using ParseHost) to adjust for TLS
return val, nil
}
// ParseHost and set defaults for a Daemon host string
func ParseHost(defaultToTLS bool, val string) (string, error) {
host := strings.TrimSpace(val)
if host == "" {
if defaultToTLS {
host = DefaultTLSHost
} else {
host = DefaultHost
}
} else {
var err error
host, err = parseDockerDaemonHost(host)
if err != nil {
return val, err
}
}
return host, nil
}
// parseDockerDaemonHost parses the specified address and returns an address that will be used as the host.
// Depending on the address specified, this may return one of the global Default* strings defined in hosts.go.
func parseDockerDaemonHost(addr string) (string, error) {
addrParts := strings.SplitN(addr, "://", 2)
if len(addrParts) == 1 && addrParts[0] != "" {
addrParts = []string{"tcp", addrParts[0]}
}
switch addrParts[0] {
case "tcp":
return ParseTCPAddr(addrParts[1], DefaultTCPHost)
case "unix":
return parseSimpleProtoAddr("unix", addrParts[1], DefaultUnixSocket)
case "npipe":
return parseSimpleProtoAddr("npipe", addrParts[1], DefaultNamedPipe)
case "fd":
return addr, nil
default:
return "", fmt.Errorf("Invalid bind address format: %s", addr)
}
}
// parseSimpleProtoAddr parses and validates that the specified address is a valid
// socket address for simple protocols like unix and npipe. It returns a formatted
// socket address, either using the address parsed from addr, or the contents of
// defaultAddr if addr is a blank string.
func parseSimpleProtoAddr(proto, addr, defaultAddr string) (string, error) {
addr = strings.TrimPrefix(addr, proto+"://")
if strings.Contains(addr, "://") {
return "", fmt.Errorf("Invalid proto, expected %s: %s", proto, addr)
}
if addr == "" {
addr = defaultAddr
}
return fmt.Sprintf("%s://%s", proto, addr), nil
}
// ParseTCPAddr parses and validates that the specified address is a valid TCP
// address. It returns a formatted TCP address, either using the address parsed
// from tryAddr, or the contents of defaultAddr if tryAddr is a blank string.
// tryAddr is expected to have already been Trim()'d
// defaultAddr must be in the full `tcp://host:port` form
func ParseTCPAddr(tryAddr string, defaultAddr string) (string, error) {
if tryAddr == "" || tryAddr == "tcp://" {
return defaultAddr, nil
}
addr := strings.TrimPrefix(tryAddr, "tcp://")
if strings.Contains(addr, "://") || addr == "" {
return "", fmt.Errorf("Invalid proto, expected tcp: %s", tryAddr)
}
defaultAddr = strings.TrimPrefix(defaultAddr, "tcp://")
defaultHost, defaultPort, err := net.SplitHostPort(defaultAddr)
if err != nil {
return "", err
}
// url.Parse fails for trailing colon on IPv6 brackets on Go 1.5, but
// not 1.4. See https://github.com/golang/go/issues/12200 and
// https://github.com/golang/go/issues/6530.
if strings.HasSuffix(addr, "]:") {
addr += defaultPort
}
u, err := url.Parse("tcp://" + addr)
if err != nil {
return "", err
}
host, port, err := net.SplitHostPort(u.Host)
if err != nil {
// try port addition once
host, port, err = net.SplitHostPort(net.JoinHostPort(u.Host, defaultPort))
}
if err != nil {
return "", fmt.Errorf("Invalid bind address format: %s", tryAddr)
}
if host == "" {
host = defaultHost
}
if port == "" {
port = defaultPort
}
p, err := strconv.Atoi(port)
if err != nil && p == 0 {
return "", fmt.Errorf("Invalid bind address format: %s", tryAddr)
}
return fmt.Sprintf("tcp://%s%s", net.JoinHostPort(host, port), u.Path), nil
}
// ValidateExtraHost validates that the specified string is a valid extrahost and returns it.
// ExtraHost is in the form of name:ip where the ip has to be a valid ip (IPv4 or IPv6).
func ValidateExtraHost(val string) (string, error) {
// allow for IPv6 addresses in extra hosts by only splitting on first ":"
arr := strings.SplitN(val, ":", 2)
if len(arr) != 2 || len(arr[0]) == 0 {
return "", fmt.Errorf("bad format for add-host: %q", val)
}
if _, err := ValidateIPAddress(arr[1]); err != nil {
return "", fmt.Errorf("invalid IP address in add-host: %q", arr[1])
}
return val, nil
}
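Note: an illustrative sketch, not part of the vendored file; the host strings are arbitrary examples, and the resolved defaults depend on the platform constants above.
package main
import (
	"fmt"
	"github.com/docker/cli/opts"
)
func main() {
	for _, in := range []string{"", "tcp://", "tcp://127.0.0.1", "unix://", "myhost:2376"} {
		out, err := opts.ParseHost(false, in) // false: don't default to the TLS host
		fmt.Printf("%-18q -> %q err=%v\n", in, out, err)
	}
}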

8
vendor/github.com/docker/cli/opts/hosts_unix.go generated vendored Normal file
View file

@ -0,0 +1,8 @@
// +build !windows
package opts
import "fmt"
// DefaultHost constant defines the default host string used by docker on hosts other than Windows
var DefaultHost = fmt.Sprintf("unix://%s", DefaultUnixSocket)

6
vendor/github.com/docker/cli/opts/hosts_windows.go generated vendored Normal file
View file

@ -0,0 +1,6 @@
// +build windows
package opts
// DefaultHost constant defines the default host string used by docker on Windows
var DefaultHost = "npipe://" + DefaultNamedPipe

47
vendor/github.com/docker/cli/opts/ip.go generated vendored Normal file
View file

@ -0,0 +1,47 @@
package opts
import (
"fmt"
"net"
)
// IPOpt holds an IP. It is used to store values from CLI flags.
type IPOpt struct {
*net.IP
}
// NewIPOpt creates a new IPOpt from a reference net.IP and a
// string representation of an IP. If the string is not a valid
// IP it will fallback to the specified reference.
func NewIPOpt(ref *net.IP, defaultVal string) *IPOpt {
o := &IPOpt{
IP: ref,
}
o.Set(defaultVal)
return o
}
// Set sets an IPv4 or IPv6 address from a given string. If the given
// string is not parseable as an IP address it returns an error.
func (o *IPOpt) Set(val string) error {
ip := net.ParseIP(val)
if ip == nil {
return fmt.Errorf("%s is not an ip address", val)
}
*o.IP = ip
return nil
}
// String returns the IP address stored in the IPOpt. If stored IP is a
// nil pointer, it returns an empty string.
func (o *IPOpt) String() string {
if *o.IP == nil {
return ""
}
return o.IP.String()
}
// Type returns the type of the option
func (o *IPOpt) Type() string {
return "ip"
}

174
vendor/github.com/docker/cli/opts/mount.go generated vendored Normal file
View file

@ -0,0 +1,174 @@
package opts
import (
"encoding/csv"
"fmt"
"os"
"strconv"
"strings"
mounttypes "github.com/docker/docker/api/types/mount"
"github.com/docker/go-units"
)
// MountOpt is a Value type for parsing mounts
type MountOpt struct {
values []mounttypes.Mount
}
// Set a new mount value
// nolint: gocyclo
func (m *MountOpt) Set(value string) error {
csvReader := csv.NewReader(strings.NewReader(value))
fields, err := csvReader.Read()
if err != nil {
return err
}
mount := mounttypes.Mount{}
volumeOptions := func() *mounttypes.VolumeOptions {
if mount.VolumeOptions == nil {
mount.VolumeOptions = &mounttypes.VolumeOptions{
Labels: make(map[string]string),
}
}
if mount.VolumeOptions.DriverConfig == nil {
mount.VolumeOptions.DriverConfig = &mounttypes.Driver{}
}
return mount.VolumeOptions
}
bindOptions := func() *mounttypes.BindOptions {
if mount.BindOptions == nil {
mount.BindOptions = new(mounttypes.BindOptions)
}
return mount.BindOptions
}
tmpfsOptions := func() *mounttypes.TmpfsOptions {
if mount.TmpfsOptions == nil {
mount.TmpfsOptions = new(mounttypes.TmpfsOptions)
}
return mount.TmpfsOptions
}
setValueOnMap := func(target map[string]string, value string) {
parts := strings.SplitN(value, "=", 2)
if len(parts) == 1 {
target[value] = ""
} else {
target[parts[0]] = parts[1]
}
}
mount.Type = mounttypes.TypeVolume // default to volume mounts
// Set writable as the default
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
key := strings.ToLower(parts[0])
if len(parts) == 1 {
switch key {
case "readonly", "ro":
mount.ReadOnly = true
continue
case "volume-nocopy":
volumeOptions().NoCopy = true
continue
}
}
if len(parts) != 2 {
return fmt.Errorf("invalid field '%s' must be a key=value pair", field)
}
value := parts[1]
switch key {
case "type":
mount.Type = mounttypes.Type(strings.ToLower(value))
case "source", "src":
mount.Source = value
case "target", "dst", "destination":
mount.Target = value
case "readonly", "ro":
mount.ReadOnly, err = strconv.ParseBool(value)
if err != nil {
return fmt.Errorf("invalid value for %s: %s", key, value)
}
case "consistency":
mount.Consistency = mounttypes.Consistency(strings.ToLower(value))
case "bind-propagation":
bindOptions().Propagation = mounttypes.Propagation(strings.ToLower(value))
case "volume-nocopy":
volumeOptions().NoCopy, err = strconv.ParseBool(value)
if err != nil {
return fmt.Errorf("invalid value for volume-nocopy: %s", value)
}
case "volume-label":
setValueOnMap(volumeOptions().Labels, value)
case "volume-driver":
volumeOptions().DriverConfig.Name = value
case "volume-opt":
if volumeOptions().DriverConfig.Options == nil {
volumeOptions().DriverConfig.Options = make(map[string]string)
}
setValueOnMap(volumeOptions().DriverConfig.Options, value)
case "tmpfs-size":
sizeBytes, err := units.RAMInBytes(value)
if err != nil {
return fmt.Errorf("invalid value for %s: %s", key, value)
}
tmpfsOptions().SizeBytes = sizeBytes
case "tmpfs-mode":
ui64, err := strconv.ParseUint(value, 8, 32)
if err != nil {
return fmt.Errorf("invalid value for %s: %s", key, value)
}
tmpfsOptions().Mode = os.FileMode(ui64)
default:
return fmt.Errorf("unexpected key '%s' in '%s'", key, field)
}
}
if mount.Type == "" {
return fmt.Errorf("type is required")
}
if mount.Target == "" {
return fmt.Errorf("target is required")
}
if mount.VolumeOptions != nil && mount.Type != mounttypes.TypeVolume {
return fmt.Errorf("cannot mix 'volume-*' options with mount type '%s'", mount.Type)
}
if mount.BindOptions != nil && mount.Type != mounttypes.TypeBind {
return fmt.Errorf("cannot mix 'bind-*' options with mount type '%s'", mount.Type)
}
if mount.TmpfsOptions != nil && mount.Type != mounttypes.TypeTmpfs {
return fmt.Errorf("cannot mix 'tmpfs-*' options with mount type '%s'", mount.Type)
}
m.values = append(m.values, mount)
return nil
}
// Type returns the type of this option
func (m *MountOpt) Type() string {
return "mount"
}
// String returns a string repr of this option
func (m *MountOpt) String() string {
mounts := []string{}
for _, mount := range m.values {
repr := fmt.Sprintf("%s %s %s", mount.Type, mount.Source, mount.Target)
mounts = append(mounts, repr)
}
return strings.Join(mounts, ", ")
}
// Value returns the mounts
func (m *MountOpt) Value() []mounttypes.Mount {
return m.values
}
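// A minimal usage sketch for MountOpt (sketch only, not part of the vendored
// file): it feeds one CSV-style --mount value through Set, the same way a
// pflag-based command would. The import path matches this vendored location;
// the sample value is illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    var mount opts.MountOpt
    if err := mount.Set("type=bind,source=/tmp,target=/data,readonly"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(mount.String()) // bind /tmp /data
}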

106
vendor/github.com/docker/cli/opts/network.go generated vendored Normal file
View file

@ -0,0 +1,106 @@
package opts
import (
"encoding/csv"
"fmt"
"regexp"
"strings"
)
const (
networkOptName = "name"
networkOptAlias = "alias"
driverOpt = "driver-opt"
)
// NetworkAttachmentOpts represents the network options for endpoint creation
type NetworkAttachmentOpts struct {
Target string
Aliases []string
DriverOpts map[string]string
}
// NetworkOpt represents a network config in swarm mode.
type NetworkOpt struct {
options []NetworkAttachmentOpts
}
// Set networkopts value
func (n *NetworkOpt) Set(value string) error {
longSyntax, err := regexp.MatchString(`\w+=\w+(,\w+=\w+)*`, value)
if err != nil {
return err
}
var netOpt NetworkAttachmentOpts
if longSyntax {
csvReader := csv.NewReader(strings.NewReader(value))
fields, err := csvReader.Read()
if err != nil {
return err
}
netOpt.Aliases = []string{}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) < 2 {
return fmt.Errorf("invalid field %s", field)
}
key := strings.TrimSpace(strings.ToLower(parts[0]))
value := strings.TrimSpace(strings.ToLower(parts[1]))
switch key {
case networkOptName:
netOpt.Target = value
case networkOptAlias:
netOpt.Aliases = append(netOpt.Aliases, value)
case driverOpt:
key, value, err = parseDriverOpt(value)
if err == nil {
if netOpt.DriverOpts == nil {
netOpt.DriverOpts = make(map[string]string)
}
netOpt.DriverOpts[key] = value
} else {
return err
}
default:
return fmt.Errorf("invalid field key %s", key)
}
}
if len(netOpt.Target) == 0 {
return fmt.Errorf("network name/id is not specified")
}
} else {
netOpt.Target = value
}
n.options = append(n.options, netOpt)
return nil
}
// Type returns the type of this option
func (n *NetworkOpt) Type() string {
return "network"
}
// Value returns the networkopts
func (n *NetworkOpt) Value() []NetworkAttachmentOpts {
return n.options
}
// String returns the network opts as a string
func (n *NetworkOpt) String() string {
return ""
}
func parseDriverOpt(driverOpt string) (string, string, error) {
parts := strings.SplitN(driverOpt, "=", 2)
if len(parts) != 2 {
return "", "", fmt.Errorf("invalid key value pair format in driver options")
}
key := strings.TrimSpace(strings.ToLower(parts[0]))
value := strings.TrimSpace(strings.ToLower(parts[1]))
return key, value, nil
}
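// A usage sketch for NetworkOpt (sketch only): one long-syntax value and one
// short-syntax value, mirroring what a --network flag would accept. The
// network names are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    var nw opts.NetworkOpt
    if err := nw.Set("name=backend,alias=db"); err != nil { // long syntax
        log.Fatal(err)
    }
    if err := nw.Set("frontend"); err != nil { // short syntax: just the network name
        log.Fatal(err)
    }
    for _, att := range nw.Value() {
        fmt.Println(att.Target, att.Aliases) // backend [db], then frontend []
    }
}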

509
vendor/github.com/docker/cli/opts/opts.go generated vendored Normal file
View file

@ -0,0 +1,509 @@
package opts
import (
"fmt"
"math/big"
"net"
"path"
"regexp"
"strings"
"github.com/docker/docker/api/types/filters"
units "github.com/docker/go-units"
"github.com/pkg/errors"
)
var (
alphaRegexp = regexp.MustCompile(`[a-zA-Z]`)
domainRegexp = regexp.MustCompile(`^(:?(:?[a-zA-Z0-9]|(:?[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9]))(:?\.(:?[a-zA-Z0-9]|(:?[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])))*)\.?\s*$`)
)
// ListOpts holds a list of values and a validation function.
type ListOpts struct {
values *[]string
validator ValidatorFctType
}
// NewListOpts creates a new ListOpts with the specified validator.
func NewListOpts(validator ValidatorFctType) ListOpts {
var values []string
return *NewListOptsRef(&values, validator)
}
// NewListOptsRef creates a new ListOpts with the specified values and validator.
func NewListOptsRef(values *[]string, validator ValidatorFctType) *ListOpts {
return &ListOpts{
values: values,
validator: validator,
}
}
func (opts *ListOpts) String() string {
if len(*opts.values) == 0 {
return ""
}
return fmt.Sprintf("%v", *opts.values)
}
// Set validates the input value, if a validator is set, and adds it to the
// internal slice.
func (opts *ListOpts) Set(value string) error {
if opts.validator != nil {
v, err := opts.validator(value)
if err != nil {
return err
}
value = v
}
(*opts.values) = append((*opts.values), value)
return nil
}
// Delete removes the specified element from the slice.
func (opts *ListOpts) Delete(key string) {
for i, k := range *opts.values {
if k == key {
(*opts.values) = append((*opts.values)[:i], (*opts.values)[i+1:]...)
return
}
}
}
// GetMap returns the content of values in a map in order to avoid
// duplicates.
func (opts *ListOpts) GetMap() map[string]struct{} {
ret := make(map[string]struct{})
for _, k := range *opts.values {
ret[k] = struct{}{}
}
return ret
}
// GetAll returns the values of slice.
func (opts *ListOpts) GetAll() []string {
return (*opts.values)
}
// GetAllOrEmpty returns the values of the slice
// or an empty slice when there are no values.
func (opts *ListOpts) GetAllOrEmpty() []string {
v := *opts.values
if v == nil {
return make([]string, 0)
}
return v
}
// Get checks the existence of the specified key.
func (opts *ListOpts) Get(key string) bool {
for _, k := range *opts.values {
if k == key {
return true
}
}
return false
}
// Len returns the number of elements in the slice.
func (opts *ListOpts) Len() int {
return len((*opts.values))
}
// Type returns a string name for this Option type
func (opts *ListOpts) Type() string {
return "list"
}
// WithValidator returns the ListOpts with validator set.
func (opts *ListOpts) WithValidator(validator ValidatorFctType) *ListOpts {
opts.validator = validator
return opts
}
// NamedOption is an interface that list and map options
// with names implement.
type NamedOption interface {
Name() string
}
// NamedListOpts is a ListOpts with a configuration name.
// This struct is useful to keep reference to the assigned
// field name in the internal configuration struct.
type NamedListOpts struct {
name string
ListOpts
}
var _ NamedOption = &NamedListOpts{}
// NewNamedListOptsRef creates a reference to a new NamedListOpts struct.
func NewNamedListOptsRef(name string, values *[]string, validator ValidatorFctType) *NamedListOpts {
return &NamedListOpts{
name: name,
ListOpts: *NewListOptsRef(values, validator),
}
}
// Name returns the name of the NamedListOpts in the configuration.
func (o *NamedListOpts) Name() string {
return o.name
}
// MapOpts holds a map of values and a validation function.
type MapOpts struct {
values map[string]string
validator ValidatorFctType
}
// Set validates the input value, if a validator is set, and adds it to the
// internal map, splitting on '='.
func (opts *MapOpts) Set(value string) error {
if opts.validator != nil {
v, err := opts.validator(value)
if err != nil {
return err
}
value = v
}
vals := strings.SplitN(value, "=", 2)
if len(vals) == 1 {
(opts.values)[vals[0]] = ""
} else {
(opts.values)[vals[0]] = vals[1]
}
return nil
}
// GetAll returns the values of MapOpts as a map.
func (opts *MapOpts) GetAll() map[string]string {
return opts.values
}
func (opts *MapOpts) String() string {
return fmt.Sprintf("%v", opts.values)
}
// Type returns a string name for this Option type
func (opts *MapOpts) Type() string {
return "map"
}
// NewMapOpts creates a new MapOpts with the specified map of values and a validator.
func NewMapOpts(values map[string]string, validator ValidatorFctType) *MapOpts {
if values == nil {
values = make(map[string]string)
}
return &MapOpts{
values: values,
validator: validator,
}
}
// NamedMapOpts is a MapOpts struct with a configuration name.
// This struct is useful to keep reference to the assigned
// field name in the internal configuration struct.
type NamedMapOpts struct {
name string
MapOpts
}
var _ NamedOption = &NamedMapOpts{}
// NewNamedMapOpts creates a reference to a new NamedMapOpts struct.
func NewNamedMapOpts(name string, values map[string]string, validator ValidatorFctType) *NamedMapOpts {
return &NamedMapOpts{
name: name,
MapOpts: *NewMapOpts(values, validator),
}
}
// Name returns the name of the NamedMapOpts in the configuration.
func (o *NamedMapOpts) Name() string {
return o.name
}
// ValidatorFctType defines a validator function that returns a validated string and/or an error.
type ValidatorFctType func(val string) (string, error)
// ValidatorFctListType defines a validator function that returns a validated list of strings and/or an error.
type ValidatorFctListType func(val string) ([]string, error)
// ValidateIPAddress validates an IP address.
func ValidateIPAddress(val string) (string, error) {
var ip = net.ParseIP(strings.TrimSpace(val))
if ip != nil {
return ip.String(), nil
}
return "", fmt.Errorf("%s is not an ip address", val)
}
// ValidateMACAddress validates a MAC address.
func ValidateMACAddress(val string) (string, error) {
_, err := net.ParseMAC(strings.TrimSpace(val))
if err != nil {
return "", err
}
return val, nil
}
// ValidateDNSSearch validates domain for resolvconf search configuration.
// A zero length domain is represented by a dot (.).
func ValidateDNSSearch(val string) (string, error) {
if val = strings.Trim(val, " "); val == "." {
return val, nil
}
return validateDomain(val)
}
func validateDomain(val string) (string, error) {
if alphaRegexp.FindString(val) == "" {
return "", fmt.Errorf("%s is not a valid domain", val)
}
ns := domainRegexp.FindSubmatch([]byte(val))
if len(ns) > 0 && len(ns[1]) < 255 {
return string(ns[1]), nil
}
return "", fmt.Errorf("%s is not a valid domain", val)
}
// ValidateLabel validates that the specified string is a valid label, and returns it.
// Labels are in the form of key=value.
func ValidateLabel(val string) (string, error) {
if strings.Count(val, "=") < 1 {
return "", fmt.Errorf("bad attribute format: %s", val)
}
return val, nil
}
// ValidateSysctl validates a sysctl and returns it.
func ValidateSysctl(val string) (string, error) {
validSysctlMap := map[string]bool{
"kernel.msgmax": true,
"kernel.msgmnb": true,
"kernel.msgmni": true,
"kernel.sem": true,
"kernel.shmall": true,
"kernel.shmmax": true,
"kernel.shmmni": true,
"kernel.shm_rmid_forced": true,
}
validSysctlPrefixes := []string{
"net.",
"fs.mqueue.",
}
arr := strings.Split(val, "=")
if len(arr) < 2 {
return "", fmt.Errorf("sysctl '%s' is not whitelisted", val)
}
if validSysctlMap[arr[0]] {
return val, nil
}
for _, vp := range validSysctlPrefixes {
if strings.HasPrefix(arr[0], vp) {
return val, nil
}
}
return "", fmt.Errorf("sysctl '%s' is not whitelisted", val)
}
// ValidateProgressOutput errors out if an invalid value is passed to --progress
func ValidateProgressOutput(val string) error {
valid := []string{"auto", "plain", "tty"}
for _, s := range valid {
if s == val {
return nil
}
}
return fmt.Errorf("invalid value %q passed to --progress, valid values are: %s", val, strings.Join(valid, ", "))
}
// FilterOpt is a flag type for validating filters
type FilterOpt struct {
filter filters.Args
}
// NewFilterOpt returns a new FilterOpt
func NewFilterOpt() FilterOpt {
return FilterOpt{filter: filters.NewArgs()}
}
func (o *FilterOpt) String() string {
repr, err := filters.ToJSON(o.filter)
if err != nil {
return "invalid filters"
}
return repr
}
// Set sets the value of the opt by parsing the command line value
func (o *FilterOpt) Set(value string) error {
if value == "" {
return nil
}
if !strings.Contains(value, "=") {
return errors.New("bad format of filter (expected name=value)")
}
f := strings.SplitN(value, "=", 2)
name := strings.ToLower(strings.TrimSpace(f[0]))
value = strings.TrimSpace(f[1])
o.filter.Add(name, value)
return nil
}
// Type returns the option type
func (o *FilterOpt) Type() string {
return "filter"
}
// Value returns the value of this option
func (o *FilterOpt) Value() filters.Args {
return o.filter
}
// NanoCPUs is a type for fixed point fractional number.
type NanoCPUs int64
// String returns the string format of the number
func (c *NanoCPUs) String() string {
if *c == 0 {
return ""
}
return big.NewRat(c.Value(), 1e9).FloatString(3)
}
// Set sets the value of the NanoCPU by passing a string
func (c *NanoCPUs) Set(value string) error {
cpus, err := ParseCPUs(value)
*c = NanoCPUs(cpus)
return err
}
// Type returns the type
func (c *NanoCPUs) Type() string {
return "decimal"
}
// Value returns the value in int64
func (c *NanoCPUs) Value() int64 {
return int64(*c)
}
// ParseCPUs takes a string ratio and returns an integer value of nano cpus
func ParseCPUs(value string) (int64, error) {
cpu, ok := new(big.Rat).SetString(value)
if !ok {
return 0, fmt.Errorf("failed to parse %v as a rational number", value)
}
nano := cpu.Mul(cpu, big.NewRat(1e9, 1))
if !nano.IsInt() {
return 0, fmt.Errorf("value is too precise")
}
return nano.Num().Int64(), nil
}
// ParseLink parses and validates the specified string as a link format (name:alias)
func ParseLink(val string) (string, string, error) {
if val == "" {
return "", "", fmt.Errorf("empty string specified for links")
}
arr := strings.Split(val, ":")
if len(arr) > 2 {
return "", "", fmt.Errorf("bad format for links: %s", val)
}
if len(arr) == 1 {
return val, val, nil
}
// This is kept because we can actually get a HostConfig with links
// from an already created container and the format is not `foo:bar`
// but `/foo:/c1/bar`
if strings.HasPrefix(arr[0], "/") {
_, alias := path.Split(arr[1])
return arr[0][1:], alias, nil
}
return arr[0], arr[1], nil
}
// ValidateLink validates that the specified string has a valid link format (containerName:alias).
func ValidateLink(val string) (string, error) {
_, _, err := ParseLink(val)
return val, err
}
// MemBytes is a type for human readable memory bytes (like 128M, 2g, etc)
type MemBytes int64
// String returns the string format of the human readable memory bytes
func (m *MemBytes) String() string {
// NOTE: In spf13/pflag/flag.go, "0" is considered a "zero value" while "0 B" is not.
// We return "0" in case value is 0 here so that the default value is hidden.
// (Sometimes "default 0 B" is actually misleading)
if m.Value() != 0 {
return units.BytesSize(float64(m.Value()))
}
return "0"
}
// Set sets the value of the MemBytes by passing a string
func (m *MemBytes) Set(value string) error {
val, err := units.RAMInBytes(value)
*m = MemBytes(val)
return err
}
// Type returns the type
func (m *MemBytes) Type() string {
return "bytes"
}
// Value returns the value in int64
func (m *MemBytes) Value() int64 {
return int64(*m)
}
// UnmarshalJSON is the customized unmarshaler for MemBytes
func (m *MemBytes) UnmarshalJSON(s []byte) error {
if len(s) <= 2 || s[0] != '"' || s[len(s)-1] != '"' {
return fmt.Errorf("invalid size: %q", s)
}
val, err := units.RAMInBytes(string(s[1 : len(s)-1]))
*m = MemBytes(val)
return err
}
// MemSwapBytes is a type for human readable memory bytes (like 128M, 2g, etc).
// It differs from MemBytes in that -1 is valid and the default.
type MemSwapBytes int64
// Set sets the value of the MemSwapBytes by passing a string
func (m *MemSwapBytes) Set(value string) error {
if value == "-1" {
*m = MemSwapBytes(-1)
return nil
}
val, err := units.RAMInBytes(value)
*m = MemSwapBytes(val)
return err
}
// Type returns the type
func (m *MemSwapBytes) Type() string {
return "bytes"
}
// Value returns the value in int64
func (m *MemSwapBytes) Value() int64 {
return int64(*m)
}
func (m *MemSwapBytes) String() string {
b := MemBytes(*m)
return b.String()
}
// UnmarshalJSON is the customized unmarshaler for MemSwapBytes
func (m *MemSwapBytes) UnmarshalJSON(s []byte) error {
b := MemBytes(*m)
return b.UnmarshalJSON(s)
}
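// A usage sketch for two of the option types above (sketch only): a validated
// ListOpts for labels and a MemBytes flag value. The sample inputs are
// illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    labels := opts.NewListOpts(opts.ValidateLabel)
    if err := labels.Set("env=prod"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(labels.GetAll()) // [env=prod]

    var mem opts.MemBytes
    if err := mem.Set("512m"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(mem.Value(), mem.String()) // 536870912 512MiB
}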

6
vendor/github.com/docker/cli/opts/opts_unix.go generated vendored Normal file
View file

@ -0,0 +1,6 @@
// +build !windows
package opts
// DefaultHTTPHost Default HTTP Host used if only port is provided to -H flag e.g. dockerd -H tcp://:8080
const DefaultHTTPHost = "localhost"

56
vendor/github.com/docker/cli/opts/opts_windows.go generated vendored Normal file
View file

@ -0,0 +1,56 @@
package opts
// TODO Windows. Identify bug in GOLang 1.5.1+ and/or Windows Server 2016 TP5.
// @jhowardmsft, @swernli.
//
// On Windows, this mitigates a problem with the default options of running
// a docker client against a local docker daemon on TP5.
//
// What was found is that if the default host is "localhost", even if the client
// (and daemon as this is local) is not physically on a network, and the DNS
// cache is flushed (ipconfig /flushdns), then the client will pause for
// exactly one second when connecting to the daemon for calls. For example
// using docker run windowsservercore cmd, the CLI will send a create followed
// by an attach. You see the delay between the attach finishing and the attach
// being seen by the daemon.
//
// Here's some daemon debug logs with additional debug spew put in. The
// AfterWriteJSON log is the very last thing the daemon does as part of the
// create call. The POST /attach is the second CLI call. Notice the second
// time gap.
//
// time="2015-11-06T13:38:37.259627400-08:00" level=debug msg="After createRootfs"
// time="2015-11-06T13:38:37.263626300-08:00" level=debug msg="After setHostConfig"
// time="2015-11-06T13:38:37.267631200-08:00" level=debug msg="before createContainerPl...."
// time="2015-11-06T13:38:37.271629500-08:00" level=debug msg=ToDiskLocking....
// time="2015-11-06T13:38:37.275643200-08:00" level=debug msg="loggin event...."
// time="2015-11-06T13:38:37.277627600-08:00" level=debug msg="logged event...."
// time="2015-11-06T13:38:37.279631800-08:00" level=debug msg="In defer func"
// time="2015-11-06T13:38:37.282628100-08:00" level=debug msg="After daemon.create"
// time="2015-11-06T13:38:37.286651700-08:00" level=debug msg="return 2"
// time="2015-11-06T13:38:37.289629500-08:00" level=debug msg="Returned from daemon.ContainerCreate"
// time="2015-11-06T13:38:37.311629100-08:00" level=debug msg="After WriteJSON"
// ... 1 second gap here....
// time="2015-11-06T13:38:38.317866200-08:00" level=debug msg="Calling POST /v1.22/containers/984758282b842f779e805664b2c95d563adc9a979c8a3973e68c807843ee4757/attach"
// time="2015-11-06T13:38:38.326882500-08:00" level=info msg="POST /v1.22/containers/984758282b842f779e805664b2c95d563adc9a979c8a3973e68c807843ee4757/attach?stderr=1&stdin=1&stdout=1&stream=1"
//
// We suspect this is either a bug introduced in GOLang 1.5.1, or that a change
// in GOLang 1.5.1 (from 1.4.3) is exposing a bug in Windows. In theory,
// the Windows networking stack is supposed to resolve "localhost" internally,
// without hitting DNS, or even reading the hosts file (which is why localhost
// is commented out in the hosts file on Windows).
//
// We have validated that working around this using the actual IPv4 localhost
// address does not cause the delay.
//
// This does not occur with the docker client built with 1.4.3 on the same
// Windows build, regardless of whether the daemon is built using 1.5.1
// or 1.4.3. It does not occur on Linux. We also verified we see the same thing
// on a cross-compiled Windows binary (from Linux).
//
// Final note: This is a mitigation, not a 'real' fix. It is still susceptible
// to the delay if a user were to do 'docker run -H=tcp://localhost:2375...'
// explicitly.
// DefaultHTTPHost Default HTTP Host used if only port is provided to -H flag e.g. dockerd -H tcp://:8080
const DefaultHTTPHost = "127.0.0.1"

99
vendor/github.com/docker/cli/opts/parse.go generated vendored Normal file
View file

@ -0,0 +1,99 @@
package opts
import (
"fmt"
"os"
"strconv"
"strings"
"github.com/docker/docker/api/types/container"
)
// ReadKVStrings reads a file of line terminated key=value pairs, and overrides any keys
// present in the file with additional pairs specified in the override parameter
func ReadKVStrings(files []string, override []string) ([]string, error) {
return readKVStrings(files, override, nil)
}
// ReadKVEnvStrings reads a file of line terminated key=value pairs, and overrides any keys
// present in the file with additional pairs specified in the override parameter.
// If a key has no value, it will get the value from the environment.
func ReadKVEnvStrings(files []string, override []string) ([]string, error) {
return readKVStrings(files, override, os.LookupEnv)
}
func readKVStrings(files []string, override []string, emptyFn func(string) (string, bool)) ([]string, error) {
variables := []string{}
for _, ef := range files {
parsedVars, err := parseKeyValueFile(ef, emptyFn)
if err != nil {
return nil, err
}
variables = append(variables, parsedVars...)
}
// parse the '-e' and '--env' after, to allow override
variables = append(variables, override...)
return variables, nil
}
// ConvertKVStringsToMap converts ["key=value"] to {"key":"value"}
func ConvertKVStringsToMap(values []string) map[string]string {
result := make(map[string]string, len(values))
for _, value := range values {
kv := strings.SplitN(value, "=", 2)
if len(kv) == 1 {
result[kv[0]] = ""
} else {
result[kv[0]] = kv[1]
}
}
return result
}
// ConvertKVStringsToMapWithNil converts ["key=value"] to {"key":"value"}
// but sets unset keys to nil - meaning the ones with no "=" in them.
// We use this in cases where we need to distinguish between
// FOO= and FOO
// where the latter case just means FOO was mentioned but not given a value
func ConvertKVStringsToMapWithNil(values []string) map[string]*string {
result := make(map[string]*string, len(values))
for _, value := range values {
kv := strings.SplitN(value, "=", 2)
if len(kv) == 1 {
result[kv[0]] = nil
} else {
result[kv[0]] = &kv[1]
}
}
return result
}
// ParseRestartPolicy returns the parsed policy or an error indicating what is incorrect
func ParseRestartPolicy(policy string) (container.RestartPolicy, error) {
p := container.RestartPolicy{}
if policy == "" {
return p, nil
}
parts := strings.Split(policy, ":")
if len(parts) > 2 {
return p, fmt.Errorf("invalid restart policy format")
}
if len(parts) == 2 {
count, err := strconv.Atoi(parts[1])
if err != nil {
return p, fmt.Errorf("maximum retry count must be an integer")
}
p.MaximumRetryCount = count
}
p.Name = parts[0]
return p, nil
}
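// A usage sketch for the helpers above (sketch only): converting key=value
// pairs to a map and parsing a restart policy. Inputs are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    env := opts.ConvertKVStringsToMap([]string{"FOO=bar", "DEBUG"})
    fmt.Println(env["FOO"], env["DEBUG"] == "") // bar true

    policy, err := opts.ParseRestartPolicy("on-failure:3")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(policy.Name, policy.MaximumRetryCount) // on-failure 3
}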

172
vendor/github.com/docker/cli/opts/port.go generated vendored Normal file
View file

@ -0,0 +1,172 @@
package opts
import (
"encoding/csv"
"fmt"
"regexp"
"strconv"
"strings"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/go-connections/nat"
"github.com/sirupsen/logrus"
)
const (
portOptTargetPort = "target"
portOptPublishedPort = "published"
portOptProtocol = "protocol"
portOptMode = "mode"
)
// PortOpt represents a port config in swarm mode.
type PortOpt struct {
ports []swarm.PortConfig
}
// Set a new port value
// nolint: gocyclo
func (p *PortOpt) Set(value string) error {
longSyntax, err := regexp.MatchString(`\w+=\w+(,\w+=\w+)*`, value)
if err != nil {
return err
}
if longSyntax {
csvReader := csv.NewReader(strings.NewReader(value))
fields, err := csvReader.Read()
if err != nil {
return err
}
pConfig := swarm.PortConfig{}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) != 2 {
return fmt.Errorf("invalid field %s", field)
}
key := strings.ToLower(parts[0])
value := strings.ToLower(parts[1])
switch key {
case portOptProtocol:
if value != string(swarm.PortConfigProtocolTCP) && value != string(swarm.PortConfigProtocolUDP) && value != string(swarm.PortConfigProtocolSCTP) {
return fmt.Errorf("invalid protocol value %s", value)
}
pConfig.Protocol = swarm.PortConfigProtocol(value)
case portOptMode:
if value != string(swarm.PortConfigPublishModeIngress) && value != string(swarm.PortConfigPublishModeHost) {
return fmt.Errorf("invalid publish mode value %s", value)
}
pConfig.PublishMode = swarm.PortConfigPublishMode(value)
case portOptTargetPort:
tPort, err := strconv.ParseUint(value, 10, 16)
if err != nil {
return err
}
pConfig.TargetPort = uint32(tPort)
case portOptPublishedPort:
pPort, err := strconv.ParseUint(value, 10, 16)
if err != nil {
return err
}
pConfig.PublishedPort = uint32(pPort)
default:
return fmt.Errorf("invalid field key %s", key)
}
}
if pConfig.TargetPort == 0 {
return fmt.Errorf("missing mandatory field %q", portOptTargetPort)
}
if pConfig.PublishMode == "" {
pConfig.PublishMode = swarm.PortConfigPublishModeIngress
}
if pConfig.Protocol == "" {
pConfig.Protocol = swarm.PortConfigProtocolTCP
}
p.ports = append(p.ports, pConfig)
} else {
// short syntax
portConfigs := []swarm.PortConfig{}
ports, portBindingMap, err := nat.ParsePortSpecs([]string{value})
if err != nil {
return err
}
for _, portBindings := range portBindingMap {
for _, portBinding := range portBindings {
if portBinding.HostIP != "" {
return fmt.Errorf("hostip is not supported")
}
}
}
for port := range ports {
portConfig, err := ConvertPortToPortConfig(port, portBindingMap)
if err != nil {
return err
}
portConfigs = append(portConfigs, portConfig...)
}
p.ports = append(p.ports, portConfigs...)
}
return nil
}
// Type returns the type of this option
func (p *PortOpt) Type() string {
return "port"
}
// String returns a string repr of this option
func (p *PortOpt) String() string {
ports := []string{}
for _, port := range p.ports {
repr := fmt.Sprintf("%v:%v/%s/%s", port.PublishedPort, port.TargetPort, port.Protocol, port.PublishMode)
ports = append(ports, repr)
}
return strings.Join(ports, ", ")
}
// Value returns the ports
func (p *PortOpt) Value() []swarm.PortConfig {
return p.ports
}
// ConvertPortToPortConfig converts ports to the swarm type
func ConvertPortToPortConfig(
port nat.Port,
portBindings map[nat.Port][]nat.PortBinding,
) ([]swarm.PortConfig, error) {
ports := []swarm.PortConfig{}
for _, binding := range portBindings[port] {
if binding.HostIP != "" && binding.HostIP != "0.0.0.0" {
logrus.Warnf("ignoring IP-address (%s:%s:%s) service will listen on '0.0.0.0'", binding.HostIP, binding.HostPort, port)
}
startHostPort, endHostPort, err := nat.ParsePortRange(binding.HostPort)
if err != nil && binding.HostPort != "" {
return nil, fmt.Errorf("invalid hostport binding (%s) for port (%s)", binding.HostPort, port.Port())
}
for i := startHostPort; i <= endHostPort; i++ {
ports = append(ports, swarm.PortConfig{
//TODO Name: ?
Protocol: swarm.PortConfigProtocol(strings.ToLower(port.Proto())),
TargetPort: uint32(port.Int()),
PublishedPort: uint32(i),
PublishMode: swarm.PortConfigPublishModeIngress,
})
}
}
return ports, nil
}
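// A usage sketch for PortOpt (sketch only): one long-syntax and one
// short-syntax publish value. The port numbers are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    var publish opts.PortOpt
    if err := publish.Set("target=80,published=8080,protocol=tcp,mode=ingress"); err != nil {
        log.Fatal(err)
    }
    if err := publish.Set("9090:9090"); err != nil { // short syntax
        log.Fatal(err)
    }
    fmt.Println(publish.String()) // 8080:80/tcp/ingress, 9090:9090/tcp/ingress
}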

37
vendor/github.com/docker/cli/opts/quotedstring.go generated vendored Normal file
View file

@ -0,0 +1,37 @@
package opts
// QuotedString is a string that may have extra quotes around the value. The
// quotes are stripped from the value.
type QuotedString struct {
value *string
}
// Set sets a new value
func (s *QuotedString) Set(val string) error {
*s.value = trimQuotes(val)
return nil
}
// Type returns the type of the value
func (s *QuotedString) Type() string {
return "string"
}
func (s *QuotedString) String() string {
return *s.value
}
func trimQuotes(value string) string {
if len(value) < 2 {
// nothing to strip; this also guards against an out-of-range index on empty input
return value
}
lastIndex := len(value) - 1
for _, char := range []byte{'\'', '"'} {
if value[0] == char && value[lastIndex] == char {
return value[1:lastIndex]
}
}
return value
}
// NewQuotedString returns a new quoted string option
func NewQuotedString(value *string) *QuotedString {
return &QuotedString{value: value}
}
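// A usage sketch for QuotedString (sketch only): wrapping quotes, if present,
// are stripped when the flag value is set. The sample value is illustrative.
package main

import (
    "fmt"

    "github.com/docker/cli/opts"
)

func main() {
    var password string
    qs := opts.NewQuotedString(&password)
    _ = qs.Set(`"s3cr3t"`) // Set always returns nil
    fmt.Println(password)  // s3cr3t
}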

79
vendor/github.com/docker/cli/opts/runtime.go generated vendored Normal file
View file

@ -0,0 +1,79 @@
package opts
import (
"fmt"
"strings"
"github.com/docker/docker/api/types"
)
// RuntimeOpt defines a map of Runtimes
type RuntimeOpt struct {
name string
stockRuntimeName string
values *map[string]types.Runtime
}
// NewNamedRuntimeOpt creates a new RuntimeOpt
func NewNamedRuntimeOpt(name string, ref *map[string]types.Runtime, stockRuntime string) *RuntimeOpt {
if ref == nil {
ref = &map[string]types.Runtime{}
}
return &RuntimeOpt{name: name, values: ref, stockRuntimeName: stockRuntime}
}
// Name returns the name of the RuntimeOpt in the configuration.
func (o *RuntimeOpt) Name() string {
return o.name
}
// Set validates and updates the list of Runtimes
func (o *RuntimeOpt) Set(val string) error {
parts := strings.SplitN(val, "=", 2)
if len(parts) != 2 {
return fmt.Errorf("invalid runtime argument: %s", val)
}
parts[0] = strings.TrimSpace(parts[0])
parts[1] = strings.TrimSpace(parts[1])
if parts[0] == "" || parts[1] == "" {
return fmt.Errorf("invalid runtime argument: %s", val)
}
parts[0] = strings.ToLower(parts[0])
if parts[0] == o.stockRuntimeName {
return fmt.Errorf("runtime name '%s' is reserved", o.stockRuntimeName)
}
if _, ok := (*o.values)[parts[0]]; ok {
return fmt.Errorf("runtime '%s' was already defined", parts[0])
}
(*o.values)[parts[0]] = types.Runtime{Path: parts[1]}
return nil
}
// String returns Runtime values as a string.
func (o *RuntimeOpt) String() string {
var out []string
for k := range *o.values {
out = append(out, k)
}
return fmt.Sprintf("%v", out)
}
// GetMap returns a map of Runtimes (name: path)
func (o *RuntimeOpt) GetMap() map[string]types.Runtime {
if o.values != nil {
return *o.values
}
return map[string]types.Runtime{}
}
// Type returns the type of the option
func (o *RuntimeOpt) Type() string {
return "runtime"
}
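// A usage sketch for RuntimeOpt (sketch only). The runtime name and binary
// path below are illustrative, and "runc" is assumed as the stock runtime.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
    "github.com/docker/docker/api/types"
)

func main() {
    runtimes := map[string]types.Runtime{}
    rt := opts.NewNamedRuntimeOpt("runtimes", &runtimes, "runc")
    if err := rt.Set("custom=/usr/local/bin/custom-runc"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(runtimes["custom"].Path) // /usr/local/bin/custom-runc
}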

98
vendor/github.com/docker/cli/opts/secret.go generated vendored Normal file
View file

@ -0,0 +1,98 @@
package opts
import (
"encoding/csv"
"fmt"
"os"
"strconv"
"strings"
swarmtypes "github.com/docker/docker/api/types/swarm"
)
// SecretOpt is a Value type for parsing secrets
type SecretOpt struct {
values []*swarmtypes.SecretReference
}
// Set a new secret value
func (o *SecretOpt) Set(value string) error {
csvReader := csv.NewReader(strings.NewReader(value))
fields, err := csvReader.Read()
if err != nil {
return err
}
options := &swarmtypes.SecretReference{
File: &swarmtypes.SecretReferenceFileTarget{
UID: "0",
GID: "0",
Mode: 0444,
},
}
// support a simple syntax of --secret foo
if len(fields) == 1 {
options.File.Name = fields[0]
options.SecretName = fields[0]
o.values = append(o.values, options)
return nil
}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
key := strings.ToLower(parts[0])
if len(parts) != 2 {
return fmt.Errorf("invalid field '%s' must be a key=value pair", field)
}
value := parts[1]
switch key {
case "source", "src":
options.SecretName = value
case "target":
options.File.Name = value
case "uid":
options.File.UID = value
case "gid":
options.File.GID = value
case "mode":
m, err := strconv.ParseUint(value, 0, 32)
if err != nil {
return fmt.Errorf("invalid mode specified: %v", err)
}
options.File.Mode = os.FileMode(m)
default:
return fmt.Errorf("invalid field in secret request: %s", key)
}
}
if options.SecretName == "" {
return fmt.Errorf("source is required")
}
o.values = append(o.values, options)
return nil
}
// Type returns the type of this option
func (o *SecretOpt) Type() string {
return "secret"
}
// String returns a string repr of this option
func (o *SecretOpt) String() string {
secrets := []string{}
for _, secret := range o.values {
repr := fmt.Sprintf("%s -> %s", secret.SecretName, secret.File.Name)
secrets = append(secrets, repr)
}
return strings.Join(secrets, ", ")
}
// Value returns the secret requests
func (o *SecretOpt) Value() []*swarmtypes.SecretReference {
return o.values
}
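// A usage sketch for SecretOpt (sketch only): a long-syntax secret reference
// with an explicit target file name and mode. The names are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    var secrets opts.SecretOpt
    if err := secrets.Set("source=app-cert,target=cert.pem,mode=0400"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(secrets.String()) // app-cert -> cert.pem
}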

108
vendor/github.com/docker/cli/opts/throttledevice.go generated vendored Normal file
View file

@ -0,0 +1,108 @@
package opts
import (
"fmt"
"strconv"
"strings"
"github.com/docker/docker/api/types/blkiodev"
"github.com/docker/go-units"
)
// ValidatorThrottleFctType defines a validator function that returns a validated struct and/or an error.
type ValidatorThrottleFctType func(val string) (*blkiodev.ThrottleDevice, error)
// ValidateThrottleBpsDevice validates that the specified string has a valid device-rate format.
func ValidateThrottleBpsDevice(val string) (*blkiodev.ThrottleDevice, error) {
split := strings.SplitN(val, ":", 2)
if len(split) != 2 {
return nil, fmt.Errorf("bad format: %s", val)
}
if !strings.HasPrefix(split[0], "/dev/") {
return nil, fmt.Errorf("bad format for device path: %s", val)
}
rate, err := units.RAMInBytes(split[1])
if err != nil {
return nil, fmt.Errorf("invalid rate for device: %s. The correct format is <device-path>:<number>[<unit>]. Number must be a positive integer. Unit is optional and can be kb, mb, or gb", val)
}
if rate < 0 {
return nil, fmt.Errorf("invalid rate for device: %s. The correct format is <device-path>:<number>[<unit>]. Number must be a positive integer. Unit is optional and can be kb, mb, or gb", val)
}
return &blkiodev.ThrottleDevice{
Path: split[0],
Rate: uint64(rate),
}, nil
}
// ValidateThrottleIOpsDevice validates that the specified string has a valid device-rate format.
func ValidateThrottleIOpsDevice(val string) (*blkiodev.ThrottleDevice, error) {
split := strings.SplitN(val, ":", 2)
if len(split) != 2 {
return nil, fmt.Errorf("bad format: %s", val)
}
if !strings.HasPrefix(split[0], "/dev/") {
return nil, fmt.Errorf("bad format for device path: %s", val)
}
rate, err := strconv.ParseUint(split[1], 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid rate for device: %s. The correct format is <device-path>:<number>. Number must be a positive integer", val)
}
// rate is parsed as an unsigned integer, so it can never be negative and needs no further range check
return &blkiodev.ThrottleDevice{Path: split[0], Rate: rate}, nil
}
// ThrottledeviceOpt defines a list of ThrottleDevices
type ThrottledeviceOpt struct {
values []*blkiodev.ThrottleDevice
validator ValidatorThrottleFctType
}
// NewThrottledeviceOpt creates a new ThrottledeviceOpt
func NewThrottledeviceOpt(validator ValidatorThrottleFctType) ThrottledeviceOpt {
values := []*blkiodev.ThrottleDevice{}
return ThrottledeviceOpt{
values: values,
validator: validator,
}
}
// Set validates a ThrottleDevice and appends it to the ThrottledeviceOpt list
func (opt *ThrottledeviceOpt) Set(val string) error {
var value *blkiodev.ThrottleDevice
if opt.validator != nil {
v, err := opt.validator(val)
if err != nil {
return err
}
value = v
}
(opt.values) = append((opt.values), value)
return nil
}
// String returns ThrottledeviceOpt values as a string.
func (opt *ThrottledeviceOpt) String() string {
var out []string
for _, v := range opt.values {
out = append(out, v.String())
}
return fmt.Sprintf("%v", out)
}
// GetList returns a slice of pointers to ThrottleDevices.
func (opt *ThrottledeviceOpt) GetList() []*blkiodev.ThrottleDevice {
var throttledevice []*blkiodev.ThrottleDevice
throttledevice = append(throttledevice, opt.values...)
return throttledevice
}
// Type returns the option type
func (opt *ThrottledeviceOpt) Type() string {
return "list"
}
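// A usage sketch for ThrottledeviceOpt (sketch only), using the
// bytes-per-second validator defined above. The device path and rate are
// illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    bps := opts.NewThrottledeviceOpt(opts.ValidateThrottleBpsDevice)
    if err := bps.Set("/dev/sda:1mb"); err != nil {
        log.Fatal(err)
    }
    dev := bps.GetList()[0]
    fmt.Println(dev.Path, dev.Rate) // /dev/sda 1048576
}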

57
vendor/github.com/docker/cli/opts/ulimit.go generated vendored Normal file
View file

@ -0,0 +1,57 @@
package opts
import (
"fmt"
"github.com/docker/go-units"
)
// UlimitOpt defines a map of Ulimits
type UlimitOpt struct {
values *map[string]*units.Ulimit
}
// NewUlimitOpt creates a new UlimitOpt
func NewUlimitOpt(ref *map[string]*units.Ulimit) *UlimitOpt {
if ref == nil {
ref = &map[string]*units.Ulimit{}
}
return &UlimitOpt{ref}
}
// Set validates a Ulimit and sets its name as a key in UlimitOpt
func (o *UlimitOpt) Set(val string) error {
l, err := units.ParseUlimit(val)
if err != nil {
return err
}
(*o.values)[l.Name] = l
return nil
}
// String returns Ulimit values as a string.
func (o *UlimitOpt) String() string {
var out []string
for _, v := range *o.values {
out = append(out, v.String())
}
return fmt.Sprintf("%v", out)
}
// GetList returns a slice of pointers to Ulimits.
func (o *UlimitOpt) GetList() []*units.Ulimit {
var ulimits []*units.Ulimit
for _, v := range *o.values {
ulimits = append(ulimits, v)
}
return ulimits
}
// Type returns the option type
func (o *UlimitOpt) Type() string {
return "ulimit"
}
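// A usage sketch for UlimitOpt (sketch only): one nofile limit in the
// name=soft:hard form accepted by go-units. The numbers are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    ulimits := opts.NewUlimitOpt(nil)
    if err := ulimits.Set("nofile=1024:2048"); err != nil {
        log.Fatal(err)
    }
    fmt.Println(ulimits.String()) // [nofile=1024:2048]
}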

84
vendor/github.com/docker/cli/opts/weightdevice.go generated vendored Normal file
View file

@ -0,0 +1,84 @@
package opts
import (
"fmt"
"strconv"
"strings"
"github.com/docker/docker/api/types/blkiodev"
)
// ValidatorWeightFctType defines a validator function that returns a validated struct and/or an error.
type ValidatorWeightFctType func(val string) (*blkiodev.WeightDevice, error)
// ValidateWeightDevice validates that the specified string has a valid device-weight format.
func ValidateWeightDevice(val string) (*blkiodev.WeightDevice, error) {
split := strings.SplitN(val, ":", 2)
if len(split) != 2 {
return nil, fmt.Errorf("bad format: %s", val)
}
if !strings.HasPrefix(split[0], "/dev/") {
return nil, fmt.Errorf("bad format for device path: %s", val)
}
weight, err := strconv.ParseUint(split[1], 10, 0)
if err != nil {
return nil, fmt.Errorf("invalid weight for device: %s", val)
}
if weight > 0 && (weight < 10 || weight > 1000) {
return nil, fmt.Errorf("invalid weight for device: %s", val)
}
return &blkiodev.WeightDevice{
Path: split[0],
Weight: uint16(weight),
}, nil
}
// WeightdeviceOpt defines a list of WeightDevices
type WeightdeviceOpt struct {
values []*blkiodev.WeightDevice
validator ValidatorWeightFctType
}
// NewWeightdeviceOpt creates a new WeightdeviceOpt
func NewWeightdeviceOpt(validator ValidatorWeightFctType) WeightdeviceOpt {
values := []*blkiodev.WeightDevice{}
return WeightdeviceOpt{
values: values,
validator: validator,
}
}
// Set validates a WeightDevice and appends it to the WeightdeviceOpt list
func (opt *WeightdeviceOpt) Set(val string) error {
var value *blkiodev.WeightDevice
if opt.validator != nil {
v, err := opt.validator(val)
if err != nil {
return err
}
value = v
}
(opt.values) = append((opt.values), value)
return nil
}
// String returns WeightdeviceOpt values as a string.
func (opt *WeightdeviceOpt) String() string {
var out []string
for _, v := range opt.values {
out = append(out, v.String())
}
return fmt.Sprintf("%v", out)
}
// GetList returns a slice of pointers to WeightDevices.
func (opt *WeightdeviceOpt) GetList() []*blkiodev.WeightDevice {
return opt.values
}
// Type returns the option type
func (opt *WeightdeviceOpt) Type() string {
return "list"
}
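// A usage sketch for WeightdeviceOpt (sketch only), using the validator
// defined above. The device path and weight are illustrative.
package main

import (
    "fmt"
    "log"

    "github.com/docker/cli/opts"
)

func main() {
    weights := opts.NewWeightdeviceOpt(opts.ValidateWeightDevice)
    if err := weights.Set("/dev/sda:500"); err != nil {
        log.Fatal(err)
    }
    dev := weights.GetList()[0]
    fmt.Println(dev.Path, dev.Weight) // /dev/sda 500
}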

38
vendor/github.com/docker/distribution/.gitignore generated vendored Normal file
View file

@ -0,0 +1,38 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
# never checkin from the bin file (for now)
bin/*
# Test key files
*.pem
# Cover profiles
*.out
# Editor/IDE specific files.
*.sublime-project
*.sublime-workspace
.idea/*

View file

@ -0,0 +1,16 @@
{
"Vendor": true,
"Deadline": "2m",
"Sort": ["linter", "severity", "path", "line"],
"EnableGC": true,
"Enable": [
"structcheck",
"staticcheck",
"unconvert",
"gofmt",
"goimports",
"golint",
"vet"
]
}

18
vendor/github.com/docker/distribution/.mailmap generated vendored Normal file
View file

@ -0,0 +1,18 @@
Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@users.noreply.github.com>
Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@gmail.com>
Olivier Gambier <olivier@docker.com> Olivier Gambier <dmp42@users.noreply.github.com>
Brian Bland <brian.bland@docker.com> Brian Bland <r4nd0m1n4t0r@gmail.com>
Brian Bland <brian.bland@docker.com> Brian Bland <brian.t.bland@gmail.com>
Josh Hawn <josh.hawn@docker.com> Josh Hawn <jlhawn@berkeley.edu>
Richard Scothern <richard.scothern@docker.com> Richard <richard.scothern@gmail.com>
Richard Scothern <richard.scothern@docker.com> Richard Scothern <richard.scothern@gmail.com>
Andrew Meredith <andymeredith@gmail.com> Andrew Meredith <kendru@users.noreply.github.com>
harche <p.harshal@gmail.com> harche <harche@users.noreply.github.com>
Jessie Frazelle <jessie@docker.com> <jfrazelle@users.noreply.github.com>
Sharif Nassar <sharif@mrwacky.com> Sharif Nassar <mrwacky42@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> Sven Dowideit <SvenDowideit@users.noreply.github.com>
Vincent Giersch <vincent.giersch@ovh.net> Vincent Giersch <vincent@giersch.fr>
davidli <wenquan.li@hp.com> davidli <wenquan.li@hpe.com>
Omer Cohen <git@omer.io> Omer Cohen <git@omerc.net>
Eric Yang <windfarer@gmail.com> Eric Yang <Windfarer@users.noreply.github.com>
Nikita Tarasov <nikita@mygento.ru> Nikita <luckyraul@users.noreply.github.com>

51
vendor/github.com/docker/distribution/.travis.yml generated vendored Normal file
View file

@ -0,0 +1,51 @@
dist: trusty
sudo: required
# setup travis so that we can run containers for integration tests
services:
- docker
language: go
go:
- "1.11.x"
go_import_path: github.com/docker/distribution
addons:
apt:
packages:
- python-minimal
env:
- TRAVIS_GOOS=linux DOCKER_BUILDTAGS="include_oss include_gcs" TRAVIS_CGO_ENABLED=1
before_install:
- uname -r
- sudo apt-get -q update
install:
- go get -u github.com/vbatts/git-validation
# TODO: Add enforcement of license
# - go get -u github.com/kunalkushwaha/ltag
- cd $TRAVIS_BUILD_DIR
script:
- export GOOS=$TRAVIS_GOOS
- export CGO_ENABLED=$TRAVIS_CGO_ENABLED
- DCO_VERBOSITY=-q script/validate/dco
- GOOS=linux script/setup/install-dev-tools
- script/validate/vendor
- go build -i .
- make check
- make build
- make binaries
# Currently takes too long
#- if [ "$GOOS" = "linux" ]; then make test-race ; fi
- if [ "$GOOS" = "linux" ]; then make coverage ; fi
after_success:
- bash <(curl -s https://codecov.io/bash) -F linux
before_deploy:
# Run tests with storage driver configurations

182
vendor/github.com/docker/distribution/AUTHORS generated vendored Normal file
View file

@ -0,0 +1,182 @@
a-palchikov <deemok@gmail.com>
Aaron Lehmann <aaron.lehmann@docker.com>
Aaron Schlesinger <aschlesinger@deis.com>
Aaron Vinson <avinson.public@gmail.com>
Adam Duke <adam.v.duke@gmail.com>
Adam Enger <adamenger@gmail.com>
Adrian Mouat <adrian.mouat@gmail.com>
Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Alex Chan <alex.chan@metaswitch.com>
Alex Elman <aelman@indeed.com>
Alexey Gladkov <gladkov.alexey@gmail.com>
allencloud <allen.sun@daocloud.io>
amitshukla <ashukla73@hotmail.com>
Amy Lindburg <amy.lindburg@docker.com>
Andrew Hsu <andrewhsu@acm.org>
Andrew Meredith <andymeredith@gmail.com>
Andrew T Nguyen <andrew.nguyen@docker.com>
Andrey Kostov <kostov.andrey@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Anis Elleuch <vadmeste@gmail.com>
Anton Tiurin <noxiouz@yandex.ru>
Antonio Mercado <amercado@thinknode.com>
Antonio Murdaca <runcom@redhat.com>
Anusha Ragunathan <anusha@docker.com>
Arien Holthuizen <aholthuizen@schubergphilis.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arthur Baars <arthur@semmle.com>
Asuka Suzuki <hello@tanksuzuki.com>
Avi Miller <avi.miller@oracle.com>
Ayose Cazorla <ayosec@gmail.com>
BadZen <dave.trombley@gmail.com>
Ben Bodenmiller <bbodenmiller@hotmail.com>
Ben Firshman <ben@firshman.co.uk>
bin liu <liubin0329@gmail.com>
Brian Bland <brian.bland@docker.com>
burnettk <burnettk@gmail.com>
Carson A <ca@carsonoid.net>
Cezar Sa Espinola <cezarsa@gmail.com>
Charles Smith <charles.smith@docker.com>
Chris Dillon <squarism@gmail.com>
cuiwei13 <cuiwei13@pku.edu.cn>
cyli <cyli@twistedmatrix.com>
Daisuke Fujita <dtanshi45@gmail.com>
Daniel Huhn <daniel@danielhuhn.de>
Darren Shepherd <darren@rancher.com>
Dave Trombley <dave.trombley@gmail.com>
Dave Tucker <dt@docker.com>
David Lawrence <david.lawrence@docker.com>
David Verhasselt <david@crowdway.com>
David Xia <dxia@spotify.com>
davidli <wenquan.li@hp.com>
Dejan Golja <dejan@golja.org>
Derek McGowan <derek@mcgstyle.net>
Diogo Mónica <diogo.monica@gmail.com>
DJ Enriquez <dj.enriquez@infospace.com>
Donald Huang <don.hcd@gmail.com>
Doug Davis <dug@us.ibm.com>
Edgar Lee <edgar.lee@docker.com>
Eric Yang <windfarer@gmail.com>
Fabio Berchtold <jamesclonk@jamesclonk.ch>
Fabio Huser <fabio@fh1.ch>
farmerworking <farmerworking@gmail.com>
Felix Yan <felixonmars@archlinux.org>
Florentin Raud <florentin.raud@gmail.com>
Frank Chen <frankchn@gmail.com>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
gabriell nascimento <gabriell@bluesoft.com.br>
Gleb Schukin <gschukin@ptsecurity.com>
harche <p.harshal@gmail.com>
Henri Gomez <henri.gomez@gmail.com>
Hu Keping <hukeping@huawei.com>
Hua Wang <wanghua.humble@gmail.com>
HuKeping <hukeping@huawei.com>
Ian Babrou <ibobrik@gmail.com>
igayoso <igayoso@gmail.com>
Jack Griffin <jackpg14@gmail.com>
James Findley <jfindley@fastmail.com>
Jason Freidman <jason.freidman@gmail.com>
Jason Heiss <jheiss@aput.net>
Jeff Nickoloff <jeff@allingeek.com>
Jess Frazelle <acidburn@google.com>
Jessie Frazelle <jessie@docker.com>
jhaohai <jhaohai@foxmail.com>
Jianqing Wang <tsing@jianqing.org>
Jihoon Chung <jihoon@gmail.com>
Joao Fernandes <joao.fernandes@docker.com>
John Mulhausen <john@docker.com>
John Starks <jostarks@microsoft.com>
Jon Johnson <jonjohnson@google.com>
Jon Poler <jonathan.poler@apcera.com>
Jonathan Boulle <jonathanboulle@gmail.com>
Jordan Liggitt <jliggitt@redhat.com>
Josh Chorlton <josh.chorlton@docker.com>
Josh Hawn <josh.hawn@docker.com>
Julien Fernandez <julien.fernandez@gmail.com>
Ke Xu <leonhartx.k@gmail.com>
Keerthan Mala <kmala@engineyard.com>
Kelsey Hightower <kelsey.hightower@gmail.com>
Kenneth Lim <kennethlimcp@gmail.com>
Kenny Leung <kleung@google.com>
Li Yi <denverdino@gmail.com>
Liu Hua <sdu.liu@huawei.com>
liuchang0812 <liuchang0812@gmail.com>
Lloyd Ramey <lnr0626@gmail.com>
Louis Kottmann <louis.kottmann@gmail.com>
Luke Carpenter <x@rubynerd.net>
Marcus Martins <marcus@docker.com>
Mary Anthony <mary@docker.com>
Matt Bentley <mbentley@mbentley.net>
Matt Duch <matt@learnmetrics.com>
Matt Moore <mattmoor@google.com>
Matt Robenolt <matt@ydekproductions.com>
Matthew Green <greenmr@live.co.uk>
Michael Prokop <mika@grml.org>
Michal Minar <miminar@redhat.com>
Michal Minář <miminar@redhat.com>
Mike Brown <brownwm@us.ibm.com>
Miquel Sabaté <msabate@suse.com>
Misty Stanley-Jones <misty@apache.org>
Misty Stanley-Jones <misty@docker.com>
Morgan Bauer <mbauer@us.ibm.com>
moxiegirl <mary@docker.com>
Nathan Sullivan <nathan@nightsys.net>
nevermosby <robolwq@qq.com>
Nghia Tran <tcnghia@gmail.com>
Nikita Tarasov <nikita@mygento.ru>
Noah Treuhaft <noah.treuhaft@docker.com>
Nuutti Kotivuori <nuutti.kotivuori@poplatek.fi>
Oilbeater <liumengxinfly@gmail.com>
Olivier Gambier <olivier@docker.com>
Olivier Jacques <olivier.jacques@hp.com>
Omer Cohen <git@omer.io>
Patrick Devine <patrick.devine@docker.com>
Phil Estes <estesp@linux.vnet.ibm.com>
Philip Misiowiec <philip@atlashealth.com>
Pierre-Yves Ritschard <pyr@spootnik.org>
Qiao Anran <qiaoanran@gmail.com>
Randy Barlow <randy@electronsweatshop.com>
Richard Scothern <richard.scothern@docker.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Rusty Conover <rusty@luckydinosaur.com>
Sean Boran <Boran@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastien Coavoux <s.coavoux@free.fr>
Serge Dubrouski <sergeyfd@gmail.com>
Sharif Nassar <sharif@mrwacky.com>
Shawn Falkner-Horine <dreadpirateshawn@gmail.com>
Shreyas Karnik <karnik.shreyas@gmail.com>
Simon Thulbourn <simon+github@thulbourn.com>
spacexnice <yaoyao.xyy@alibaba-inc.com>
Spencer Rinehart <anubis@overthemonkey.com>
Stan Hu <stanhu@gmail.com>
Stefan Majewsky <stefan.majewsky@sap.com>
Stefan Weil <sw@weilnetz.de>
Stephen J Day <stephen.day@docker.com>
Sungho Moon <sungho.moon@navercorp.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
Ted Reed <ted.reed@gmail.com>
tgic <farmer1992@gmail.com>
Thomas Sjögren <konstruktoid@users.noreply.github.com>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <teabee89@gmail.com>
Tonis Tiigi <tonistiigi@gmail.com>
Tony Holdstock-Brown <tony@docker.com>
Trevor Pounds <trevor.pounds@gmail.com>
Troels Thomsen <troels@thomsen.io>
Victor Vieux <vieux@docker.com>
Victoria Bialas <victoria.bialas@docker.com>
Vincent Batts <vbatts@redhat.com>
Vincent Demeester <vincent@sbr.pm>
Vincent Giersch <vincent.giersch@ovh.net>
W. Trevor King <wking@tremily.us>
weiyuan.yl <weiyuan.yl@alibaba-inc.com>
xg.song <xg.song@venusource.com>
xiekeyang <xiekeyang@huawei.com>
Yann ROBERT <yann.robert@anantaplex.fr>
yaoyao.xyy <yaoyao.xyy@alibaba-inc.com>
yuexiao-wang <wang.yuexiao@zte.com.cn>
yuzou <zouyu7@huawei.com>
zhouhaibing089 <zhouhaibing089@gmail.com>
姜继忠 <jizhong.jiangjz@alibaba-inc.com>

117
vendor/github.com/docker/distribution/BUILDING.md generated vendored Normal file
View file

@ -0,0 +1,117 @@
# Building the registry source
## Use-case
This is useful if you intend to actively work on the registry.
### Alternatives
Most people should use the [official Registry docker image](https://hub.docker.com/r/library/registry/).
People looking for advanced operational use cases might consider rolling their own image with a custom Dockerfile inheriting `FROM registry:2`.
OS X users who want to run natively can do so following [the instructions here](https://github.com/docker/docker.github.io/blob/master/registry/recipes/osx-setup-guide.md).
### Gotchas
You are expected to know your way around Go and git.
If you are a casual user with no development experience and no preliminary knowledge of Go, building from source is probably not a good solution for you.
## Build the development environment
The first prerequisite of properly building distribution targets is to have a Go
development environment set up. Please follow [How to Write Go Code](https://golang.org/doc/code.html)
for proper setup. If done correctly, you should have a GOROOT and GOPATH set in the
environment.
If a Go development environment is set up, one can use `go get` to install the
`registry` command from the current latest:
go get github.com/docker/distribution/cmd/registry
The above will install the source repository into the `GOPATH`.
Now create the directory for the registry data (this might require you to set permissions properly)
mkdir -p /var/lib/registry
... or alternatively `export REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere` if you want to store data in another location.
The `registry`
binary can then be run with the following:
$ $GOPATH/bin/registry --version
$GOPATH/bin/registry github.com/docker/distribution v2.0.0-alpha.1+unknown
> __NOTE:__ While you do not need to use `go get` to checkout the distribution
> project, for these build instructions to work, the project must be checked
> out in the correct location in the `GOPATH`. This should almost always be
> `$GOPATH/src/github.com/docker/distribution`.
The registry can be run with the default config using the following
incantation:
$ $GOPATH/bin/registry serve $GOPATH/src/github.com/docker/distribution/cmd/registry/config-example.yml
INFO[0000] endpoint local-5003 disabled, skipping app.id=34bbec38-a91a-494a-9a3f-b72f9010081f version=v2.0.0-alpha.1+unknown
INFO[0000] endpoint local-8083 disabled, skipping app.id=34bbec38-a91a-494a-9a3f-b72f9010081f version=v2.0.0-alpha.1+unknown
INFO[0000] listening on :5000 app.id=34bbec38-a91a-494a-9a3f-b72f9010081f version=v2.0.0-alpha.1+unknown
INFO[0000] debug server listening localhost:5001
If it is working, one should see the above log messages.
### Repeatable Builds
For the full development experience, one should `cd` into
`$GOPATH/src/github.com/docker/distribution`. From there, the regular `go`
commands, such as `go test`, should work per package (please see
[Developing](#developing) if they don't work).
A `Makefile` has been provided as a convenience to support repeatable builds.
Please install the following into `GOPATH` for it to work:
go get github.com/golang/lint/golint
Once these commands are available in the `GOPATH`, run `make` to get a full
build:
$ make
+ clean
+ fmt
+ vet
+ lint
+ build
github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar
github.com/sirupsen/logrus
github.com/docker/libtrust
...
github.com/yvasiyarov/gorelic
github.com/docker/distribution/registry/handlers
github.com/docker/distribution/cmd/registry
+ test
...
ok github.com/docker/distribution/digest 7.875s
ok github.com/docker/distribution/manifest 0.028s
ok github.com/docker/distribution/notifications 17.322s
? github.com/docker/distribution/registry [no test files]
ok github.com/docker/distribution/registry/api/v2 0.101s
? github.com/docker/distribution/registry/auth [no test files]
ok github.com/docker/distribution/registry/auth/silly 0.011s
...
+ /Users/sday/go/src/github.com/docker/distribution/bin/registry
+ /Users/sday/go/src/github.com/docker/distribution/bin/registry-api-descriptor-template
+ binaries
The above provides a repeatable build using the contents of the vendor
directory. This includes formatting, vetting, linting, building,
testing and generating tagged binaries. We can verify this worked by running
the registry binary generated in the "./bin" directory:
$ ./bin/registry --version
./bin/registry github.com/docker/distribution v2.0.0-alpha.2-80-g16d8b2c.m
### Optional build tags
Optional [build tags](http://golang.org/pkg/go/build/) can be provided using
the environment variable `DOCKER_BUILDTAGS`.

108
vendor/github.com/docker/distribution/CHANGELOG.md generated vendored Normal file
View file

@ -0,0 +1,108 @@
# Changelog
## 2.6.0 (2017-01-18)
#### Storage
- S3: fixed bug in delete due to read-after-write inconsistency
- S3: allow EC2 IAM roles to be used when authorizing region endpoints
- S3: add Object ACL Support
- S3: fix delete method's notion of subpaths
- S3: use multipart upload API in `Move` method for performance
- S3: add v2 signature signing for legacy S3 clones
- Swift: add simple heuristic to detect incomplete DLOs during read ops
- Swift: support different user and tenant domains
- Swift: bulk deletes in chunks
- Aliyun OSS: fix delete method's notion of subpaths
- Aliyun OSS: optimize data copy after upload finishes
- Azure: close leaking response body
- Fix storage drivers dropping non-EOF errors when listing repositories
- Compare path properly when listing repositories in catalog
- Add a foreign layer URL host whitelist
- Improve catalog enumerate runtime
#### Registry
- Export `storage.CreateOptions` in top-level package
- Enable notifications to endpoints that use self-signed certificates
- Properly validate multi-URL foreign layers
- Add control over validation of URLs in pushed manifests
- Proxy mode: fix socket leak when pull is cancelled
- Tag service: properly handle error responses on HEAD request
- Support for custom authentication URL in proxying registry
- Add configuration option to disable access logging
- Add notification filtering by target media type
- Manifest: `References()` returns all children
- Honor `X-Forwarded-Port` and Forwarded headers
- Reference: Preserve tag and digest in With* functions
- Add policy configuration for enforcing repository classes
#### Client
- Changes the client Tags `All()` method to follow links
- Allow registry clients to connect via HTTP2
- Better handling of OAuth errors in client
#### Spec
- Manifest: clarify relationship between urls and foreign layers
- Authorization: add support for repository classes
#### Manifest
- Override media type returned from `Stat()` for existing manifests
- Add plugin mediatype to distribution manifest
#### Docs
- Document `TOOMANYREQUESTS` error code
- Document required Let's Encrypt port
- Improve documentation around implementation of OAuth2
- Improve documentation for configuration
#### Auth
- Add support for registry type in scope
- Add support for using v2 ping challenges for v1
- Add leeway to JWT `nbf` and `exp` checking
- htpasswd: dynamically parse htpasswd file
- Fix missing auth headers with PATCH HTTP request when pushing to default port
#### Dockerfile
- Update to go1.7
- Reorder Dockerfile steps for better layer caching
#### Notes
Documentation has moved to the documentation repository at
`github.com/docker/docker.github.io/tree/master/registry`
The registry is Go 1.7 compliant, and passes newer, more restrictive `lint` and `vet` checks.
## 2.5.0 (2016-06-14)
#### Storage
- Ensure uploads directory is cleaned after upload is committed
- Add ability to cap concurrent operations in filesystem driver
- S3: Add 'us-gov-west-1' to the valid region list
- Swift: Handle ceph not returning Last-Modified header for HEAD requests
- Add redirect middleware
#### Registry
- Add support for blobAccessController middleware
- Add support for layers from foreign sources
- Remove signature store
- Add support for Let's Encrypt
- Correct yaml key names in configuration
#### Client
- Add option to get content digest from manifest get
#### Spec
- Update the auth spec scope grammar to reflect the fact that hostnames are optionally supported
- Clarify API documentation around catalog fetch behavior
#### API
- Support returning HTTP 429 (Too Many Requests)
#### Documentation
- Update auth documentation examples to show "expires in" as int
#### Docker Image
- Use Alpine Linux as base image

148
vendor/github.com/docker/distribution/CONTRIBUTING.md generated vendored Normal file
View file

@ -0,0 +1,148 @@
# Contributing to the registry
## Before reporting an issue...
### If your problem is with...
- automated builds
- your account on the [Docker Hub](https://hub.docker.com/)
- any other [Docker Hub](https://hub.docker.com/) issue
Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)
### If you...
- need help setting up your registry
- can't figure out something
- are not sure what's going on or what your problem is
Then please do not open an issue here yet - you should first try one of the following support forums:
- irc: #docker-distribution on freenode
- mailing-list: <distribution@dockerproject.org> or https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution
### Reporting security issues
The Docker maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
## Reporting an issue properly
By following these simple rules you will get better and faster feedback on your issue.
- search the bugtracker for an already reported issue
### If you found an issue that describes your problem:
- please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
- please refrain from adding "same thing here" or "+1" comments
- you don't need to comment on an issue to get notified of updates: just hit the "subscribe" button
- comment if you have some new, technical and relevant information to add to the case
- __DO NOT__ comment on closed issues or merged PRs. If you think you have a related problem, open up a new issue and reference the PR or issue.
### If you have not found an existing issue that describes your problem:
1. create a new issue, with a succinct title that describes your issue:
- bad title: "It doesn't work with my docker"
- good title: "Private registry push fail: 400 error with E_INVALID_DIGEST"
2. copy the output of the following commands (a small helper sketch for gathering these appears after this list):
- `docker version`
- `docker info`
- `docker exec <registry-container> registry --version`
3. copy the command line you used to launch your Registry
4. restart your docker daemon in debug mode (add `-D` to the daemon launch arguments)
5. reproduce your problem and get your docker daemon logs showing the error
6. if relevant, copy your registry logs that show the error
7. provide any relevant detail about your specific Registry configuration (e.g., storage backend used)
8. indicate if you are using an enterprise proxy, Nginx, or anything else between you and your Registry
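If it helps, the command outputs requested in step 2 can be gathered in one go. The following is a minimal, hypothetical Go sketch; it is not shipped with this repository, and the container name `my-registry` is a placeholder you would replace with your own.

```go
// collect_diagnostics.go -- a hypothetical helper (not part of this project)
// that runs the commands requested in step 2 and prints their output so it
// can be pasted into an issue.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"docker", "version"},
		{"docker", "info"},
		// "my-registry" is a placeholder for your registry container name.
		{"docker", "exec", "my-registry", "registry", "--version"},
	}

	for _, args := range cmds {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %s\n%s\n", args, out)
		if err != nil {
			fmt.Printf("(command failed: %v)\n", err)
		}
	}
}
```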
## Contributing a patch for a known bug, or a small correction
You should follow the basic GitHub workflow:
1. fork
2. commit a change
3. make sure the tests pass
4. PR
Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:
- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
- sign your commits using `-s`: `git commit -s -m "My commit"`
Some simple rules to ensure quick merge:
- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
- if you need to amend your PR following comments, please squash instead of adding more commits
## Contributing new features
You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.
If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning.
If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.
Then you should submit your implementation, clearly linking to the issue (and possible proposal).
Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.
It's mandatory to:
- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
- address maintainers' comments and modify your submission accordingly
- write tests for any new code
Complying with these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience in contributing code to the Registry.
Have a look at a great, successful contribution: the [Swift driver PR](https://github.com/docker/distribution/pull/493)
## Coding Style
Unless explicitly stated, we follow all coding guidelines from the Go
community. While some of these standards may seem arbitrary, they tend to
result in a solid, consistent codebase.
It is possible that the code base does not currently comply with these
guidelines. We are not looking for a massive PR that fixes this, since that
goes against the spirit of the guidelines. All new contributions should make a
best effort to clean up and leave the code base better than they found it.
Obviously, apply your best judgement. Remember, the goal here is to make the
code base easier for humans to navigate and understand. Always keep that in
mind when nudging others to comply.
The rules (a brief illustrative sketch follows the list):
1. All code should be formatted with `gofmt -s`.
2. All code should pass the default levels of
[`golint`](https://github.com/golang/lint).
3. All code should follow the guidelines covered in [Effective
Go](http://golang.org/doc/effective_go.html) and [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
4. Comment the code. Tell us the why, the history and the context.
5. Document _all_ declarations and methods, even private ones. Declare
expectations, caveats and anything else that may be important. If a type
gets exported, having the comments already there will ensure it's ready.
6. Variable name length should be proportional to its context and no longer.
`noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
In practice, short methods will have short variable names and globals will
have longer names.
7. No underscores in package names. If you need a compound name, step back,
and re-examine why you need a compound name. If you still think you need a
compound name, lose the underscore.
8. No utils or helpers packages. If a function is not general enough to
warrant its own package, it has not been written generally enough to be a
part of a util package. Just leave it unexported and well-documented.
9. All tests should run with `go test` and outside tooling should not be
required. No, we don't need another unit testing framework. Assertion
packages are acceptable if they provide _real_ incremental value.
10. Even though we call these "rules" above, they are actually just
guidelines. Since you've read all the rules, you now know that.
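To make the rules above a little more concrete, here is a brief, hypothetical sketch of a small exported helper and a matching test that runs with plain `go test`. The package name `reference`, the function `ParseTag`, and the file names are illustrative only and are not taken from the registry code base; real reference parsing in this project is considerably more involved.

```go
// parsetag.go (hypothetical) -- formatted with `gofmt -s`, exported
// declarations documented, short local names in a short function.
package reference

import (
	"fmt"
	"strings"
)

// ParseTag splits a "name:tag" string into its two parts. It exists only to
// illustrate the commenting and naming conventions above; it does not handle
// digests or registry ports.
func ParseTag(s string) (name, tag string, err error) {
	i := strings.LastIndex(s, ":")
	if i < 0 {
		return "", "", fmt.Errorf("reference %q has no tag", s)
	}
	return s[:i], s[i+1:], nil
}
```

```go
// parsetag_test.go (hypothetical) -- runs with plain `go test`; no outside
// tooling or assertion framework required.
package reference

import "testing"

func TestParseTag(t *testing.T) {
	name, tag, err := ParseTag("registry:2.6.0")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if name != "registry" || tag != "2.6.0" {
		t.Errorf("got %q:%q, want %q:%q", name, tag, "registry", "2.6.0")
	}
}
```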
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](http://golang.org/doc/effective_go.html). The
[Go Blog](http://blog.golang.org/) is also a great resource. Drinking the
kool-aid is a lot easier than going thirsty.

21
vendor/github.com/docker/distribution/Dockerfile generated vendored Normal file
View file

@ -0,0 +1,21 @@
FROM golang:1.10-alpine
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV DOCKER_BUILDTAGS include_oss include_gcs
ARG GOOS=linux
ARG GOARCH=amd64
RUN set -ex \
&& apk add --no-cache make git
WORKDIR $DISTRIBUTION_DIR
COPY . $DISTRIBUTION_DIR
COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
RUN make PREFIX=/go clean binaries
VOLUME ["/var/lib/registry"]
EXPOSE 5000
ENTRYPOINT ["registry"]
CMD ["serve", "/etc/docker/registry/config.yml"]

Some files were not shown because too many files have changed in this diff