diff --git a/vendor/github.com/flosch/pongo2/.gitignore b/vendor/github.com/flosch/pongo2/.gitignore
deleted file mode 100644
index 37eaf44..0000000
--- a/vendor/github.com/flosch/pongo2/.gitignore
+++ /dev/null
@@ -1,40 +0,0 @@
-# Compiled Object files, Static and Dynamic libs (Shared Objects)
-*.o
-*.a
-*.so
-
-# Folders
-_obj
-_test
-.idea
-
-# Architecture specific extensions/prefixes
-*.[568vq]
-[568vq].out
-
-*.cgo1.go
-*.cgo2.c
-_cgo_defun.c
-_cgo_gotypes.go
-_cgo_export.*
-
-_testmain.go
-
-*.exe
-
-.project
-EBNF.txt
-test1.tpl
-pongo2_internal_test.go
-tpl-error.out
-/count.out
-/cover.out
-*.swp
-*.iml
-/cpu.out
-/mem.out
-/pongo2.test
-*.error
-/profile
-/coverage.out
-/pongo2_internal_test.ignore
diff --git a/vendor/github.com/flosch/pongo2/.travis.yml b/vendor/github.com/flosch/pongo2/.travis.yml
deleted file mode 100644
index a22ad21..0000000
--- a/vendor/github.com/flosch/pongo2/.travis.yml
+++ /dev/null
@@ -1,12 +0,0 @@
-language: go
-
-go:
- - 1.3
- - tip
-install:
- - go get code.google.com/p/go.tools/cmd/cover
- - go get github.com/mattn/goveralls
- - go get gopkg.in/check.v1
-script:
- - go test -v -covermode=count -coverprofile=coverage.out -bench . -cpu 1,4
- - '[ "${TRAVIS_PULL_REQUEST}" = "false" ] && $HOME/gopath/bin/goveralls -coverprofile=coverage.out -service=travis-ci -repotoken $COVERALLS_TOKEN || true'
diff --git a/vendor/github.com/flosch/pongo2/README.md b/vendor/github.com/flosch/pongo2/README.md
index 7c61e9e..33def30 100644
--- a/vendor/github.com/flosch/pongo2/README.md
+++ b/vendor/github.com/flosch/pongo2/README.md
@@ -1,8 +1,9 @@
# [pongo](https://en.wikipedia.org/wiki/Pongo_%28genus%29)2
-[![GoDoc](https://godoc.org/github.com/flosch/pongo2?status.png)](https://godoc.org/github.com/flosch/pongo2)
+[![Join the chat at https://gitter.im/flosch/pongo2](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/flosch/pongo2)
+[![GoDoc](https://godoc.org/github.com/flosch/pongo2?status.svg)](https://godoc.org/github.com/flosch/pongo2)
[![Build Status](https://travis-ci.org/flosch/pongo2.svg?branch=master)](https://travis-ci.org/flosch/pongo2)
-[![Coverage Status](https://coveralls.io/repos/flosch/pongo2/badge.png?branch=master)](https://coveralls.io/r/flosch/pongo2?branch=master)
+[![Coverage Status](https://coveralls.io/repos/flosch/pongo2/badge.svg?branch=master)](https://coveralls.io/r/flosch/pongo2?branch=master)
[![gratipay](http://img.shields.io/badge/gratipay-support%20pongo-brightgreen.svg)](https://gratipay.com/flosch/)
[![Bountysource](https://www.bountysource.com/badge/tracker?tracker_id=3654947)](https://www.bountysource.com/trackers/3654947-pongo2?utm_source=3654947&utm_medium=shield&utm_campaign=TRACKER_BADGE)
@@ -95,6 +96,7 @@ Please also have a look on the [caveats](https://github.com/flosch/pongo2#caveat
If you're using the `master`-branch of pongo2, you might be interested in this section. Since pongo2 is still in development (even though there is a first stable release!), there could be (backwards-incompatible) API changes over time. To keep track of these and therefore make it painless for you to adapt your codebase, I'll list them here.
+ * Function signature for tag execution changed: not taking a `bytes.Buffer` anymore; instead `Execute()`-functions are now taking a `TemplateWriter` interface.
* Function signature for tag and filter parsing/execution changed (`error` return type changed to `*Error`).
* `INodeEvaluator` has been removed and got replaced by `IEvaluator`. You can change your existing tags/filters by simply replacing the interface.
* Two new helper functions: [`RenderTemplateFile()`](https://godoc.org/github.com/flosch/pongo2#RenderTemplateFile) and [`RenderTemplateString()`](https://godoc.org/github.com/flosch/pongo2#RenderTemplateString).
@@ -104,7 +106,7 @@ If you're using the `master`-branch of pongo2, you might be interested in this s
## How you can help
* Write [filters](https://github.com/flosch/pongo2/blob/master/filters_builtin.go#L3) / [tags](https://github.com/flosch/pongo2/blob/master/tags.go#L4) (see [tutorial](https://www.florian-schlachter.de/post/pongo2/)) by forking pongo2 and sending pull requests
- * Write/improve code tests (use the following command to see what tests are missing: `go test -v -cover -covermode=count -coverprofile=cover.out && go tool cover -html=cover.out`)
+ * Write/improve code tests (use the following command to see what tests are missing: `go test -v -cover -covermode=count -coverprofile=cover.out && go tool cover -html=cover.out` or have a look on [gocover.io/github.com/flosch/pongo2](http://gocover.io/github.com/flosch/pongo2))
* Write/improve template tests (see the `template_tests/` directory)
* Write middleware, libraries and websites using pongo2. :-)
@@ -115,7 +117,8 @@ For a documentation on how the templating language works you can [head over to t
You can access pongo2's API documentation on [godoc](https://godoc.org/github.com/flosch/pongo2).
## Blog post series
-
+
+ * [pongo2 v3 released](https://www.florian-schlachter.de/post/pongo2-v3/)
* [pongo2 v2 released](https://www.florian-schlachter.de/post/pongo2-v2/)
* [pongo2 1.0 released](https://www.florian-schlachter.de/post/pongo2-10/) [August 8th 2014]
* [pongo2 playground](https://www.florian-schlachter.de/post/pongo2-playground/) [August 1st 2014]
@@ -154,8 +157,12 @@ You can access pongo2's API documentation on [godoc](https://godoc.org/github.co
* [beego-pongo2.v2](https://github.com/ipfans/beego-pongo2.v2) - Same as `beego-pongo2`, but for pongo2 v2.
* [macaron-pongo2](https://github.com/macaron-contrib/pongo2) - pongo2 support for [Macaron](https://github.com/Unknwon/macaron), a modular web framework.
* [ginpongo2](https://github.com/ngerakines/ginpongo2) - middleware for [gin](github.com/gin-gonic/gin) to use pongo2 templates
- * [pongo2-trans](https://github.com/fromYukki/pongo2trans) - `trans`-tag implementation for internationalization
-
+ * [Built-in support for Iris' template engine](https://github.com/kataras/iris)
+ * [pongo2gin](https://github.com/robvdl/pongo2gin) - alternative renderer for [gin](https://github.com/gin-gonic/gin) to use pongo2 templates
+ * [pongo2-trans](https://github.com/digitalcrab/pongo2trans) - `trans`-tag implementation for internationalization
+ * [tpongo2](https://github.com/tango-contrib/tpongo2) - pongo2 support for [Tango](https://github.com/lunny/tango), a micro-kernel & pluggable web framework.
+ * [p2cli](https://github.com/wrouesnel/p2cli) - command line templating utility based on pongo2
+
Please add your project to this list and send me a pull request when you've developed something nice for pongo2.
# API-usage examples
diff --git a/vendor/github.com/flosch/pongo2/context.go b/vendor/github.com/flosch/pongo2/context.go
index df587c8..6e3c166 100644
--- a/vendor/github.com/flosch/pongo2/context.go
+++ b/vendor/github.com/flosch/pongo2/context.go
@@ -1,13 +1,14 @@
package pongo2
import (
- "fmt"
"regexp"
+
+ "github.com/juju/errors"
)
var reIdentifiers = regexp.MustCompile("^[a-zA-Z0-9_]+$")
-// Use this Context type to provide constants, variables, instances or functions to your template.
+// A Context type provides constants, variables, instances or functions to a template.
//
// pongo2 automatically provides meta-information or functions through the "pongo2"-key.
// Currently, context["pongo2"] contains the following keys:
@@ -24,14 +25,15 @@ func (c Context) checkForValidIdentifiers() *Error {
for k, v := range c {
if !reIdentifiers.MatchString(k) {
return &Error{
- Sender: "checkForValidIdentifiers",
- ErrorMsg: fmt.Sprintf("Context-key '%s' (value: '%+v') is not a valid identifier.", k, v),
+ Sender: "checkForValidIdentifiers",
+ OrigError: errors.Errorf("context-key '%s' (value: '%+v') is not a valid identifier", k, v),
}
}
}
return nil
}
+// Update updates this context with the key/value-pairs from another context.
func (c Context) Update(other Context) Context {
for k, v := range other {
c[k] = v
@@ -39,6 +41,8 @@ func (c Context) Update(other Context) Context {
return c
}
+// ExecutionContext contains all data important for the current rendering state.
+//
// If you're writing a custom tag, your tag's Execute()-function will
// have access to the ExecutionContext. This struct stores anything
// about the current rendering process's Context including
@@ -97,6 +101,10 @@ func NewChildExecutionContext(parent *ExecutionContext) *ExecutionContext {
}
func (ctx *ExecutionContext) Error(msg string, token *Token) *Error {
+ return ctx.OrigError(errors.New(msg), token)
+}
+
+func (ctx *ExecutionContext) OrigError(err error, token *Token) *Error {
filename := ctx.template.name
var line, col int
if token != nil {
@@ -107,13 +115,13 @@ func (ctx *ExecutionContext) Error(msg string, token *Token) *Error {
col = token.Col
}
return &Error{
- Template: ctx.template,
- Filename: filename,
- Line: line,
- Column: col,
- Token: token,
- Sender: "execution",
- ErrorMsg: msg,
+ Template: ctx.template,
+ Filename: filename,
+ Line: line,
+ Column: col,
+ Token: token,
+ Sender: "execution",
+ OrigError: err,
}
}
diff --git a/vendor/github.com/flosch/pongo2/error.go b/vendor/github.com/flosch/pongo2/error.go
index c1ee86e..8aec8c1 100644
--- a/vendor/github.com/flosch/pongo2/error.go
+++ b/vendor/github.com/flosch/pongo2/error.go
@@ -6,20 +6,20 @@ import (
"os"
)
-// This Error type is being used to address an error during lexing, parsing or
+// The Error type is being used to address an error during lexing, parsing or
// execution. If you want to return an error object (for example in your own
// tag or filter) fill this object with as much information as you have.
// Make sure "Sender" is always given (if you're returning an error within
// a filter, make Sender equals 'filter:yourfilter'; same goes for tags: 'tag:mytag').
// It's okay if you only fill in ErrorMsg if you don't have any other details at hand.
type Error struct {
- Template *Template
- Filename string
- Line int
- Column int
- Token *Token
- Sender string
- ErrorMsg string
+ Template *Template
+ Filename string
+ Line int
+ Column int
+ Token *Token
+ Sender string
+ OrigError error
}
func (e *Error) updateFromTokenIfNeeded(template *Template, t *Token) *Error {
@@ -54,14 +54,14 @@ func (e *Error) Error() string {
}
}
s += "] "
- s += e.ErrorMsg
+ s += e.OrigError.Error()
return s
}
-// Returns the affected line from the original template, if available.
-func (e *Error) RawLine() (line string, available bool) {
+// RawLine returns the affected line from the original template, if available.
+func (e *Error) RawLine() (line string, available bool, outErr error) {
if e.Line <= 0 || e.Filename == "" {
- return "", false
+ return "", false, nil
}
filename := e.Filename
@@ -70,17 +70,22 @@ func (e *Error) RawLine() (line string, available bool) {
}
file, err := os.Open(filename)
if err != nil {
- panic(err)
+ return "", false, err
}
- defer file.Close()
+ defer func() {
+ err := file.Close()
+ if err != nil && outErr == nil {
+ outErr = err
+ }
+ }()
scanner := bufio.NewScanner(file)
l := 0
for scanner.Scan() {
l++
if l == e.Line {
- return scanner.Text(), true
+ return scanner.Text(), true, nil
}
}
- return "", false
+ return "", false, nil
}
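The `RawLine()` change above is worth spelling out: instead of panicking when the template file cannot be opened, it now returns the error, and a failure from the deferred `Close` is surfaced through the named return value. A standalone sketch of the same pattern:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// rawLine mirrors the reworked Error.RawLine (sketch only; the real method
// hangs off pongo2's *Error). An open failure becomes a returned error, and
// the deferred Close propagates its error via outErr when nothing else failed.
func rawLine(filename string, lineNo int) (line string, available bool, outErr error) {
	if lineNo <= 0 || filename == "" {
		return "", false, nil
	}
	file, err := os.Open(filename)
	if err != nil {
		return "", false, err
	}
	defer func() {
		if cerr := file.Close(); cerr != nil && outErr == nil {
			outErr = cerr
		}
	}()
	scanner := bufio.NewScanner(file)
	l := 0
	for scanner.Scan() {
		l++
		if l == lineNo {
			return scanner.Text(), true, nil
		}
	}
	return "", false, scanner.Err()
}

func main() {
	_, ok, err := rawLine("does-not-exist.tpl", 1)
	fmt.Println(ok, err != nil) // the open failure is now an error, not a panic
}
```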
diff --git a/vendor/github.com/flosch/pongo2/filters.go b/vendor/github.com/flosch/pongo2/filters.go
index 229f7fe..1092705 100644
--- a/vendor/github.com/flosch/pongo2/filters.go
+++ b/vendor/github.com/flosch/pongo2/filters.go
@@ -2,8 +2,11 @@ package pongo2
import (
"fmt"
+
+ "github.com/juju/errors"
)
+// FilterFunction is the type filter functions must fulfil
type FilterFunction func(in *Value, param *Value) (out *Value, err *Error)
var filters map[string]FilterFunction
@@ -12,32 +15,38 @@ func init() {
filters = make(map[string]FilterFunction)
}
-// Registers a new filter. If there's already a filter with the same
+// FilterExists returns true if the given filter is already registered
+func FilterExists(name string) bool {
+ _, existing := filters[name]
+ return existing
+}
+
+// RegisterFilter registers a new filter. If there's already a filter with the same
// name, RegisterFilter will panic. You usually want to call this
// function in the filter's init() function:
// http://golang.org/doc/effective_go.html#init
//
// See http://www.florian-schlachter.de/post/pongo2/ for more about
// writing filters and tags.
-func RegisterFilter(name string, fn FilterFunction) {
- _, existing := filters[name]
- if existing {
- panic(fmt.Sprintf("Filter with name '%s' is already registered.", name))
+func RegisterFilter(name string, fn FilterFunction) error {
+ if FilterExists(name) {
+ return errors.Errorf("filter with name '%s' is already registered", name)
}
filters[name] = fn
+ return nil
}
-// Replaces an already registered filter with a new implementation. Use this
+// ReplaceFilter replaces an already registered filter with a new implementation. Use this
// function with caution since it allows you to change existing filter behaviour.
-func ReplaceFilter(name string, fn FilterFunction) {
- _, existing := filters[name]
- if !existing {
- panic(fmt.Sprintf("Filter with name '%s' does not exist (therefore cannot be overridden).", name))
+func ReplaceFilter(name string, fn FilterFunction) error {
+ if !FilterExists(name) {
+ return errors.Errorf("filter with name '%s' does not exist (therefore cannot be overridden)", name)
}
filters[name] = fn
+ return nil
}
-// Like ApplyFilter, but panics on an error
+// MustApplyFilter behaves like ApplyFilter, but panics on an error.
func MustApplyFilter(name string, value *Value, param *Value) *Value {
val, err := ApplyFilter(name, value, param)
if err != nil {
@@ -46,13 +55,14 @@ func MustApplyFilter(name string, value *Value, param *Value) *Value {
return val
}
-// Applies a filter to a given value using the given parameters. Returns a *pongo2.Value or an error.
+// ApplyFilter applies a filter to a given value using the given parameters.
+// Returns a *pongo2.Value or an error.
func ApplyFilter(name string, value *Value, param *Value) (*Value, *Error) {
fn, existing := filters[name]
if !existing {
return nil, &Error{
- Sender: "applyfilter",
- ErrorMsg: fmt.Sprintf("Filter with name '%s' not found.", name),
+ Sender: "applyfilter",
+ OrigError: errors.Errorf("filter with name '%s' not found", name),
}
}
@@ -86,31 +96,31 @@ func (fc *filterCall) Execute(v *Value, ctx *ExecutionContext) (*Value, *Error)
param = AsValue(nil)
}
- filtered_value, err := fc.filterFunc(v, param)
+ filteredValue, err := fc.filterFunc(v, param)
if err != nil {
return nil, err.updateFromTokenIfNeeded(ctx.template, fc.token)
}
- return filtered_value, nil
+ return filteredValue, nil
}
// Filter = IDENT | IDENT ":" FilterArg | IDENT "|" Filter
func (p *Parser) parseFilter() (*filterCall, *Error) {
- ident_token := p.MatchType(TokenIdentifier)
+ identToken := p.MatchType(TokenIdentifier)
// Check filter ident
- if ident_token == nil {
+ if identToken == nil {
return nil, p.Error("Filter name must be an identifier.", nil)
}
filter := &filterCall{
- token: ident_token,
- name: ident_token.Val,
+ token: identToken,
+ name: identToken.Val,
}
// Get the appropriate filter function and bind it
- filterFn, exists := filters[ident_token.Val]
+ filterFn, exists := filters[identToken.Val]
if !exists {
- return nil, p.Error(fmt.Sprintf("Filter '%s' does not exist.", ident_token.Val), ident_token)
+ return nil, p.Error(fmt.Sprintf("Filter '%s' does not exist.", identToken.Val), identToken)
}
filter.filterFunc = filterFn
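The filters.go hunk changes the registration contract: duplicate registration now returns an error instead of panicking, and the new `FilterExists` lets callers probe first. A self-contained sketch of that contract (the names mirror pongo2's API, but the real registry is package-level state inside the library):

```go
package main

import "fmt"

// Simplified filter signature for the sketch; pongo2's real FilterFunction
// operates on *pongo2.Value and returns *pongo2.Error.
type FilterFunction func(in, param string) (string, error)

var filters = map[string]FilterFunction{}

// FilterExists reports whether a filter is already registered.
func FilterExists(name string) bool {
	_, ok := filters[name]
	return ok
}

// RegisterFilter returns an error on duplicates rather than panicking,
// so callers can decide how to react.
func RegisterFilter(name string, fn FilterFunction) error {
	if FilterExists(name) {
		return fmt.Errorf("filter with name '%s' is already registered", name)
	}
	filters[name] = fn
	return nil
}

func main() {
	noop := func(in, _ string) (string, error) { return in, nil }
	fmt.Println(RegisterFilter("noop", noop)) // first registration succeeds
	fmt.Println(RegisterFilter("noop", noop)) // duplicate now yields an error
}
```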
diff --git a/vendor/github.com/flosch/pongo2/filters_builtin.go b/vendor/github.com/flosch/pongo2/filters_builtin.go
index aaa68b1..f02b491 100644
--- a/vendor/github.com/flosch/pongo2/filters_builtin.go
+++ b/vendor/github.com/flosch/pongo2/filters_builtin.go
@@ -35,6 +35,8 @@ import (
"strings"
"time"
"unicode/utf8"
+
+ "github.com/juju/errors"
)
func init() {
@@ -73,14 +75,15 @@ func init() {
RegisterFilter("removetags", filterRemovetags)
RegisterFilter("rjust", filterRjust)
RegisterFilter("slice", filterSlice)
+ RegisterFilter("split", filterSplit)
RegisterFilter("stringformat", filterStringformat)
RegisterFilter("striptags", filterStriptags)
RegisterFilter("time", filterDate) // time uses filterDate (same golang-format)
RegisterFilter("title", filterTitle)
RegisterFilter("truncatechars", filterTruncatechars)
- RegisterFilter("truncatechars_html", filterTruncatecharsHtml)
+ RegisterFilter("truncatechars_html", filterTruncatecharsHTML)
RegisterFilter("truncatewords", filterTruncatewords)
- RegisterFilter("truncatewords_html", filterTruncatewordsHtml)
+ RegisterFilter("truncatewords_html", filterTruncatewordsHTML)
RegisterFilter("upper", filterUpper)
RegisterFilter("urlencode", filterUrlencode)
RegisterFilter("urlize", filterUrlize)
@@ -105,9 +108,9 @@ func filterTruncatecharsHelper(s string, newLen int) string {
return string(runes)
}
-func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func() bool, fn func(c rune, s int, idx int) int, finalize func()) {
+func filterTruncateHTMLHelper(value string, newOutput *bytes.Buffer, cond func() bool, fn func(c rune, s int, idx int) int, finalize func()) {
vLen := len(value)
- tag_stack := make([]string, 0)
+ var tagStack []string
idx := 0
for idx < vLen && !cond() {
@@ -118,17 +121,17 @@ func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func(
}
if c == '<' {
- new_output.WriteRune(c)
+ newOutput.WriteRune(c)
idx += s // consume "<"
if idx+1 < vLen {
if value[idx] == '/' {
// Close tag
- new_output.WriteString("/")
+ newOutput.WriteString("/")
tag := ""
- idx += 1 // consume "/"
+ idx++ // consume "/"
for idx < vLen {
c2, size2 := utf8.DecodeRuneInString(value[idx:])
@@ -146,21 +149,21 @@ func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func(
idx += size2
}
- if len(tag_stack) > 0 {
+ if len(tagStack) > 0 {
// Ideally, the close tag is TOP of tag stack
// In malformed HTML, it must not be, so iterate through the stack and remove the tag
- for i := len(tag_stack) - 1; i >= 0; i-- {
- if tag_stack[i] == tag {
+ for i := len(tagStack) - 1; i >= 0; i-- {
+ if tagStack[i] == tag {
// Found the tag
- tag_stack[i] = tag_stack[len(tag_stack)-1]
- tag_stack = tag_stack[:len(tag_stack)-1]
+ tagStack[i] = tagStack[len(tagStack)-1]
+ tagStack = tagStack[:len(tagStack)-1]
break
}
}
}
- new_output.WriteString(tag)
- new_output.WriteString(">")
+ newOutput.WriteString(tag)
+ newOutput.WriteString(">")
} else {
// Open tag
@@ -174,7 +177,7 @@ func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func(
continue
}
- new_output.WriteRune(c2)
+ newOutput.WriteRune(c2)
// End of tag found
if c2 == '>' {
@@ -194,7 +197,7 @@ func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func(
}
// Add tag to stack
- tag_stack = append(tag_stack, tag)
+ tagStack = append(tagStack, tag)
}
}
} else {
@@ -204,10 +207,10 @@ func filterTruncateHtmlHelper(value string, new_output *bytes.Buffer, cond func(
finalize()
- for i := len(tag_stack) - 1; i >= 0; i-- {
- tag := tag_stack[i]
+ for i := len(tagStack) - 1; i >= 0; i-- {
+ tag := tagStack[i]
// Close everything from the regular tag stack
- new_output.WriteString(fmt.Sprintf("</%s>", tag))
+ newOutput.WriteString(fmt.Sprintf("</%s>", tag))
}
}
@@ -217,28 +220,28 @@ func filterTruncatechars(in *Value, param *Value) (*Value, *Error) {
return AsValue(filterTruncatecharsHelper(s, newLen)), nil
}
-func filterTruncatecharsHtml(in *Value, param *Value) (*Value, *Error) {
+func filterTruncatecharsHTML(in *Value, param *Value) (*Value, *Error) {
value := in.String()
newLen := max(param.Integer()-3, 0)
- new_output := bytes.NewBuffer(nil)
+ newOutput := bytes.NewBuffer(nil)
textcounter := 0
- filterTruncateHtmlHelper(value, new_output, func() bool {
+ filterTruncateHTMLHelper(value, newOutput, func() bool {
return textcounter >= newLen
}, func(c rune, s int, idx int) int {
textcounter++
- new_output.WriteRune(c)
+ newOutput.WriteRune(c)
return idx + s
}, func() {
if textcounter >= newLen && textcounter < len(value) {
- new_output.WriteString("...")
+ newOutput.WriteString("...")
}
})
- return AsSafeValue(new_output.String()), nil
+ return AsSafeValue(newOutput.String()), nil
}
func filterTruncatewords(in *Value, param *Value) (*Value, *Error) {
@@ -260,19 +263,19 @@ func filterTruncatewords(in *Value, param *Value) (*Value, *Error) {
return AsValue(strings.Join(out, " ")), nil
}
-func filterTruncatewordsHtml(in *Value, param *Value) (*Value, *Error) {
+func filterTruncatewordsHTML(in *Value, param *Value) (*Value, *Error) {
value := in.String()
newLen := max(param.Integer(), 0)
- new_output := bytes.NewBuffer(nil)
+ newOutput := bytes.NewBuffer(nil)
wordcounter := 0
- filterTruncateHtmlHelper(value, new_output, func() bool {
+ filterTruncateHTMLHelper(value, newOutput, func() bool {
return wordcounter >= newLen
}, func(_ rune, _ int, idx int) int {
// Get next word
- word_found := false
+ wordFound := false
for idx < len(value) {
c2, size2 := utf8.DecodeRuneInString(value[idx:])
@@ -286,29 +289,29 @@ func filterTruncatewordsHtml(in *Value, param *Value) (*Value, *Error) {
return idx
}
- new_output.WriteRune(c2)
+ newOutput.WriteRune(c2)
idx += size2
if c2 == ' ' || c2 == '.' || c2 == ',' || c2 == ';' {
// Word ends here, stop capturing it now
break
} else {
- word_found = true
+ wordFound = true
}
}
- if word_found {
+ if wordFound {
wordcounter++
}
return idx
}, func() {
if wordcounter >= newLen {
- new_output.WriteString("...")
+ newOutput.WriteString("...")
}
})
- return AsSafeValue(new_output.String()), nil
+ return AsSafeValue(newOutput.String()), nil
}
func filterEscape(in *Value, param *Value) (*Value, *Error) {
@@ -377,12 +380,11 @@ func filterAdd(in *Value, param *Value) (*Value, *Error) {
if in.IsNumber() && param.IsNumber() {
if in.IsFloat() || param.IsFloat() {
return AsValue(in.Float() + param.Float()), nil
- } else {
- return AsValue(in.Integer() + param.Integer()), nil
}
+ return AsValue(in.Integer() + param.Integer()), nil
}
// If in/param is not a number, we're relying on the
- // Value's String() convertion and just add them both together
+ // Value's String() conversion and just add them both together
return AsValue(in.String() + param.String()), nil
}
@@ -550,11 +552,11 @@ func filterCenter(in *Value, param *Value) (*Value, *Error) {
}
func filterDate(in *Value, param *Value) (*Value, *Error) {
- t, is_time := in.Interface().(time.Time)
- if !is_time {
+ t, isTime := in.Interface().(time.Time)
+ if !isTime {
return nil, &Error{
- Sender: "filter:date",
- ErrorMsg: "Filter input argument must be of type 'time.Time'.",
+ Sender: "filter:date",
+ OrigError: errors.New("filter input argument must be of type 'time.Time'"),
}
}
return AsValue(t.Format(param.String())), nil
@@ -612,6 +614,12 @@ func filterLinebreaks(in *Value, param *Value) (*Value, *Error) {
return AsValue(b.String()), nil
}
+func filterSplit(in *Value, param *Value) (*Value, *Error) {
+ chunks := strings.Split(in.String(), param.String())
+
+ return AsValue(chunks), nil
+}
+
func filterLinebreaksbr(in *Value, param *Value) (*Value, *Error) {
return AsValue(strings.Replace(in.String(), "\n", "<br />", -1)), nil
}
@@ -641,7 +649,8 @@ func filterUrlencode(in *Value, param *Value) (*Value, *Error) {
var filterUrlizeURLRegexp = regexp.MustCompile(`((((http|https)://)|www\.|((^|[ ])[0-9A-Za-z_\-]+(\.com|\.net|\.org|\.info|\.biz|\.de))))(?U:.*)([ ]+|$)`)
var filterUrlizeEmailRegexp = regexp.MustCompile(`(\w+@\w+\.\w{2,4})`)
-func filterUrlizeHelper(input string, autoescape bool, trunc int) string {
+func filterUrlizeHelper(input string, autoescape bool, trunc int) (string, error) {
+ var soutErr error
sout := filterUrlizeURLRegexp.ReplaceAllStringFunc(input, func(raw_url string) string {
var prefix string
var suffix string
@@ -656,7 +665,8 @@ func filterUrlizeHelper(input string, autoescape bool, trunc int) string {
t, err := ApplyFilter("iriencode", AsValue(raw_url), nil)
if err != nil {
- panic(err)
+ soutErr = err
+ return ""
}
url := t.String()
@@ -673,16 +683,19 @@ func filterUrlizeHelper(input string, autoescape bool, trunc int) string {
if autoescape {
t, err := ApplyFilter("escape", AsValue(title), nil)
if err != nil {
- panic(err)
+ soutErr = err
+ return ""
}
title = t.String()
}
return fmt.Sprintf(`%s<a href="%s" rel="nofollow">%s</a>%s`, prefix, url, title, suffix)
})
+ if soutErr != nil {
+ return "", soutErr
+ }
sout = filterUrlizeEmailRegexp.ReplaceAllStringFunc(sout, func(mail string) string {
-
title := mail
if trunc > 3 && len(title) > trunc {
@@ -692,7 +705,7 @@ func filterUrlizeHelper(input string, autoescape bool, trunc int) string {
return fmt.Sprintf(`<a href="mailto:%s">%s</a>`, mail, title)
})
- return sout
+ return sout, nil
}
func filterUrlize(in *Value, param *Value) (*Value, *Error) {
@@ -701,24 +714,36 @@ func filterUrlize(in *Value, param *Value) (*Value, *Error) {
autoescape = param.Bool()
}
- return AsValue(filterUrlizeHelper(in.String(), autoescape, -1)), nil
+ s, err := filterUrlizeHelper(in.String(), autoescape, -1)
+ if err != nil {
+ return nil, &Error{
+ Sender: "filter:urlize",
+ OrigError: err,
+ }
+ }
+
+ return AsValue(s), nil
}
func filterUrlizetrunc(in *Value, param *Value) (*Value, *Error) {
- return AsValue(filterUrlizeHelper(in.String(), true, param.Integer())), nil
+ s, err := filterUrlizeHelper(in.String(), true, param.Integer())
+ if err != nil {
+ return nil, &Error{
+ Sender: "filter:urlizetrunc",
+ OrigError: err,
+ }
+ }
+ return AsValue(s), nil
}
func filterStringformat(in *Value, param *Value) (*Value, *Error) {
return AsValue(fmt.Sprintf(param.String(), in.Interface())), nil
}
-var re_striptags = regexp.MustCompile("<[^>]*?>")
+var reStriptags = regexp.MustCompile("<[^>]*?>")
func filterStriptags(in *Value, param *Value) (*Value, *Error) {
s := in.String()
// Strip all tags
- s = re_striptags.ReplaceAllString(s, "")
+ s = reStriptags.ReplaceAllString(s, "")
return AsValue(strings.TrimSpace(s)), nil
}
@@ -746,8 +771,8 @@ func filterPluralize(in *Value, param *Value) (*Value, *Error) {
endings := strings.Split(param.String(), ",")
if len(endings) > 2 {
return nil, &Error{
- Sender: "filter:pluralize",
- ErrorMsg: "You cannot pass more than 2 arguments to filter 'pluralize'.",
+ Sender: "filter:pluralize",
+ OrigError: errors.New("you cannot pass more than 2 arguments to filter 'pluralize'"),
}
}
if len(endings) == 1 {
@@ -770,11 +795,10 @@ func filterPluralize(in *Value, param *Value) (*Value, *Error) {
}
return AsValue(""), nil
- } else {
- return nil, &Error{
- Sender: "filter:pluralize",
- ErrorMsg: "Filter 'pluralize' does only work on numbers.",
- }
+ }
+ return nil, &Error{
+ Sender: "filter:pluralize",
+ OrigError: errors.New("filter 'pluralize' does only work on numbers"),
}
}
@@ -807,8 +831,8 @@ func filterSlice(in *Value, param *Value) (*Value, *Error) {
comp := strings.Split(param.String(), ":")
if len(comp) != 2 {
return nil, &Error{
- Sender: "filter:slice",
- ErrorMsg: "Slice string must have the format 'from:to' [from/to can be omitted, but the ':' is required]",
+ Sender: "filter:slice",
+ OrigError: errors.New("Slice string must have the format 'from:to' [from/to can be omitted, but the ':' is required]"),
}
}
@@ -844,16 +868,16 @@ func filterWordcount(in *Value, param *Value) (*Value, *Error) {
func filterWordwrap(in *Value, param *Value) (*Value, *Error) {
words := strings.Fields(in.String())
- words_len := len(words)
- wrap_at := param.Integer()
- if wrap_at <= 0 {
+ wordsLen := len(words)
+ wrapAt := param.Integer()
+ if wrapAt <= 0 {
return in, nil
}
- linecount := words_len/wrap_at + words_len%wrap_at
+ linecount := wordsLen/wrapAt + wordsLen%wrapAt
lines := make([]string, 0, linecount)
for i := 0; i < linecount; i++ {
- lines = append(lines, strings.Join(words[wrap_at*i:min(wrap_at*(i+1), words_len)], " "))
+ lines = append(lines, strings.Join(words[wrapAt*i:min(wrapAt*(i+1), wordsLen)], " "))
}
return AsValue(strings.Join(lines, "\n")), nil
}
@@ -864,27 +888,27 @@ func filterYesno(in *Value, param *Value) (*Value, *Error) {
1: "no",
2: "maybe",
}
- param_string := param.String()
- custom_choices := strings.Split(param_string, ",")
- if len(param_string) > 0 {
- if len(custom_choices) > 3 {
+ paramString := param.String()
+ customChoices := strings.Split(paramString, ",")
+ if len(paramString) > 0 {
+ if len(customChoices) > 3 {
return nil, &Error{
- Sender: "filter:yesno",
- ErrorMsg: fmt.Sprintf("You cannot pass more than 3 options to the 'yesno'-filter (got: '%s').", param_string),
+ Sender: "filter:yesno",
+ OrigError: errors.Errorf("You cannot pass more than 3 options to the 'yesno'-filter (got: '%s').", paramString),
}
}
- if len(custom_choices) < 2 {
+ if len(customChoices) < 2 {
return nil, &Error{
- Sender: "filter:yesno",
- ErrorMsg: fmt.Sprintf("You must pass either no or at least 2 arguments to the 'yesno'-filter (got: '%s').", param_string),
+ Sender: "filter:yesno",
+ OrigError: errors.Errorf("You must pass either no or at least 2 arguments to the 'yesno'-filter (got: '%s').", paramString),
}
}
// Map to the options now
- choices[0] = custom_choices[0]
- choices[1] = custom_choices[1]
- if len(custom_choices) == 3 {
- choices[2] = custom_choices[2]
+ choices[0] = customChoices[0]
+ choices[1] = customChoices[1]
+ if len(customChoices) == 3 {
+ choices[2] = customChoices[2]
}
}
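Among the filters_builtin.go changes, the newly registered `split` filter is the only functional addition; per the hunk above it is a thin wrapper around `strings.Split`, with the filter input as the string and the parameter as the separator. A standalone sketch (in a template it would presumably be used as `{{ somevar|split:"," }}`):

```go
package main

import (
	"fmt"
	"strings"
)

// filterSplit mirrors the new built-in: split the input on the separator
// and return a slice the template can range over. (Sketch on plain strings;
// the real filter works on *pongo2.Value.)
func filterSplit(in, sep string) []string {
	return strings.Split(in, sep)
}

func main() {
	fmt.Println(filterSplit("a,b,c", ","))
}
```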
diff --git a/vendor/github.com/flosch/pongo2/lexer.go b/vendor/github.com/flosch/pongo2/lexer.go
index 8956f9c..36280de 100644
--- a/vendor/github.com/flosch/pongo2/lexer.go
+++ b/vendor/github.com/flosch/pongo2/lexer.go
@@ -4,6 +4,8 @@ import (
"fmt"
"strings"
"unicode/utf8"
+
+ "github.com/juju/errors"
)
const (
@@ -63,8 +65,8 @@ type lexer struct {
line int
col int
- in_verbatim bool
- verbatim_name string
+ inVerbatim bool
+ verbatimName string
}
func (t *Token) String() string {
@@ -111,11 +113,11 @@ func lex(name string, input string) ([]*Token, *Error) {
if l.errored {
errtoken := l.tokens[len(l.tokens)-1]
return nil, &Error{
- Filename: name,
- Line: errtoken.Line,
- Column: errtoken.Col,
- Sender: "lexer",
- ErrorMsg: errtoken.Val,
+ Filename: name,
+ Line: errtoken.Line,
+ Column: errtoken.Col,
+ Sender: "lexer",
+ OrigError: errors.New(errtoken.Val),
}
}
return l.tokens, nil
@@ -216,8 +218,8 @@ func (l *lexer) run() {
for {
// TODO: Support verbatim tag names
// https://docs.djangoproject.com/en/dev/ref/templates/builtins/#verbatim
- if l.in_verbatim {
- name := l.verbatim_name
+ if l.inVerbatim {
+ name := l.verbatimName
if name != "" {
name += " "
}
@@ -229,20 +231,20 @@ func (l *lexer) run() {
l.pos += w
l.col += w
l.ignore()
- l.in_verbatim = false
+ l.inVerbatim = false
}
} else if strings.HasPrefix(l.input[l.pos:], "{% verbatim %}") { // tag
if l.pos > l.start {
l.emit(TokenHTML)
}
- l.in_verbatim = true
+ l.inVerbatim = true
w := len("{% verbatim %}")
l.pos += w
l.col += w
l.ignore()
}
- if !l.in_verbatim {
+ if !l.inVerbatim {
// Ignore single-line comments {# ... #}
if strings.HasPrefix(l.input[l.pos:], "{#") {
if l.pos > l.start {
@@ -303,7 +305,7 @@ func (l *lexer) run() {
l.emit(TokenHTML)
}
- if l.in_verbatim {
+ if l.inVerbatim {
l.errorf("verbatim-tag not closed, got EOF.")
}
}
@@ -328,7 +330,7 @@ outer_loop:
return l.stateIdentifier
case l.accept(tokenDigits):
return l.stateNumber
- case l.accept(`"`):
+ case l.accept(`"'`):
return l.stateString
}
@@ -348,10 +350,6 @@ outer_loop:
}
}
- if l.pos < len(l.input) {
- return l.errorf("Unknown character: %q (%d)", l.peek(), l.peek())
- }
-
break
}
@@ -374,6 +372,11 @@ func (l *lexer) stateIdentifier() lexerStateFn {
func (l *lexer) stateNumber() lexerStateFn {
l.acceptRun(tokenDigits)
+ if l.accept(tokenIdentifierCharsWithDigits) {
+ // This seems to be an identifier starting with a number.
+ // See https://github.com/flosch/pongo2/issues/151
+ return l.stateIdentifier()
+ }
/*
Maybe context-sensitive number lexing?
* comments.0.Text // first comment
@@ -393,9 +396,10 @@ func (l *lexer) stateNumber() lexerStateFn {
}
func (l *lexer) stateString() lexerStateFn {
+ quotationMark := l.value()
l.ignore()
- l.startcol -= 1 // we're starting the position at the first "
- for !l.accept(`"`) {
+ l.startcol-- // we're starting the position at the first "
+ for !l.accept(quotationMark) {
switch l.next() {
case '\\':
// escape sequence
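The `stateString` change above records the opening quotation mark and scans until the *same* mark closes the literal, so both `"…"` and `'…'` strings now lex. A minimal, dependency-free sketch of that quote-aware scan (`lexString` and its signature are illustrative, not pongo2's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// lexString consumes a string literal starting at input[0], honoring
// whichever quotation mark opened it ('"' or '\''). It returns the
// literal's contents and the number of bytes consumed, or ok=false
// for an unterminated literal.
func lexString(input string) (val string, n int, ok bool) {
	if len(input) == 0 || (input[0] != '"' && input[0] != '\'') {
		return "", 0, false
	}
	quote := input[0]
	var sb strings.Builder
	for i := 1; i < len(input); i++ {
		c := input[i]
		if c == '\\' && i+1 < len(input) {
			// escape sequence: keep the escaped character verbatim
			i++
			sb.WriteByte(input[i])
			continue
		}
		if c == quote {
			// closing mark must match the opening one
			return sb.String(), i + 1, true
		}
		sb.WriteByte(c)
	}
	return "", 0, false
}

func main() {
	v, n, ok := lexString(`'it\'s'`)
	fmt.Println(v, n, ok) // it's 7 true
}
```

Tracking the opening mark also means a `"` inside a single-quoted literal (or vice versa) needs no escaping, matching common template-language behavior.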
diff --git a/vendor/github.com/flosch/pongo2/nodes.go b/vendor/github.com/flosch/pongo2/nodes.go
index 5fd5a6c..5b039cd 100644
--- a/vendor/github.com/flosch/pongo2/nodes.go
+++ b/vendor/github.com/flosch/pongo2/nodes.go
@@ -1,17 +1,13 @@
package pongo2
-import (
- "bytes"
-)
-
// The root document
type nodeDocument struct {
Nodes []INode
}
-func (doc *nodeDocument) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (doc *nodeDocument) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
for _, n := range doc.Nodes {
- err := n.Execute(ctx, buffer)
+ err := n.Execute(ctx, writer)
if err != nil {
return err
}
diff --git a/vendor/github.com/flosch/pongo2/nodes_html.go b/vendor/github.com/flosch/pongo2/nodes_html.go
index 9aa630c..9680285 100644
--- a/vendor/github.com/flosch/pongo2/nodes_html.go
+++ b/vendor/github.com/flosch/pongo2/nodes_html.go
@@ -1,14 +1,10 @@
package pongo2
-import (
- "bytes"
-)
-
type nodeHTML struct {
token *Token
}
-func (n *nodeHTML) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- buffer.WriteString(n.token.Val)
+func (n *nodeHTML) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ writer.WriteString(n.token.Val)
return nil
}
diff --git a/vendor/github.com/flosch/pongo2/nodes_wrapper.go b/vendor/github.com/flosch/pongo2/nodes_wrapper.go
index 9180dc7..d1bcb8d 100644
--- a/vendor/github.com/flosch/pongo2/nodes_wrapper.go
+++ b/vendor/github.com/flosch/pongo2/nodes_wrapper.go
@@ -1,17 +1,13 @@
package pongo2
-import (
- "bytes"
-)
-
type NodeWrapper struct {
Endtag string
nodes []INode
}
-func (wrapper *NodeWrapper) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (wrapper *NodeWrapper) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
for _, n := range wrapper.nodes {
- err := n.Execute(ctx, buffer)
+ err := n.Execute(ctx, writer)
if err != nil {
return err
}
diff --git a/vendor/github.com/flosch/pongo2/parser.go b/vendor/github.com/flosch/pongo2/parser.go
index c85e2d2..d0e08b5 100644
--- a/vendor/github.com/flosch/pongo2/parser.go
+++ b/vendor/github.com/flosch/pongo2/parser.go
@@ -1,13 +1,14 @@
package pongo2
import (
- "bytes"
"fmt"
"strings"
+
+ "github.com/juju/errors"
)
type INode interface {
- Execute(*ExecutionContext, *bytes.Buffer) *Error
+ Execute(*ExecutionContext, TemplateWriter) *Error
}
type IEvaluator interface {
@@ -27,10 +28,10 @@ type IEvaluator interface {
//
// (See Token's documentation for more about tokens)
type Parser struct {
- name string
- idx int
- tokens []*Token
- last_token *Token
+ name string
+ idx int
+ tokens []*Token
+ lastToken *Token
// if the parser parses a template document, here will be
// a reference to it (needed to access the template through Tags)
@@ -47,7 +48,7 @@ func newParser(name string, tokens []*Token, template *Template) *Parser {
template: template,
}
if len(tokens) > 0 {
- p.last_token = tokens[len(tokens)-1]
+ p.lastToken = tokens[len(tokens)-1]
}
return p
}
@@ -175,7 +176,7 @@ func (p *Parser) GetR(shift int) *Token {
return p.Get(i)
}
-// Produces a nice error message and returns an error-object.
+// Error produces a nice error message and returns an error-object.
// The 'token'-argument is optional. If provided, it will take
// the token's position information. If not provided, it will
// automatically use the CURRENT token's position information.
@@ -196,13 +197,13 @@ func (p *Parser) Error(msg string, token *Token) *Error {
col = token.Col
}
return &Error{
- Template: p.template,
- Filename: p.name,
- Sender: "parser",
- Line: line,
- Column: col,
- Token: token,
- ErrorMsg: msg,
+ Template: p.template,
+ Filename: p.name,
+ Sender: "parser",
+ Line: line,
+ Column: col,
+ Token: token,
+ OrigError: errors.New(msg),
}
}
@@ -212,19 +213,19 @@ func (p *Parser) Error(msg string, token *Token) *Error {
func (p *Parser) WrapUntilTag(names ...string) (*NodeWrapper, *Parser, *Error) {
wrapper := &NodeWrapper{}
- tagArgs := make([]*Token, 0)
+ var tagArgs []*Token
for p.Remaining() > 0 {
// New tag, check whether we have to stop wrapping here
if p.Peek(TokenSymbol, "{%") != nil {
- tag_ident := p.PeekTypeN(1, TokenIdentifier)
+ tagIdent := p.PeekTypeN(1, TokenIdentifier)
- if tag_ident != nil {
+ if tagIdent != nil {
// We've found a (!) end-tag
found := false
for _, n := range names {
- if tag_ident.Val == n {
+ if tagIdent.Val == n {
found = true
break
}
@@ -238,16 +239,15 @@ func (p *Parser) WrapUntilTag(names ...string) (*NodeWrapper, *Parser, *Error) {
for {
if p.Match(TokenSymbol, "%}") != nil {
// Okay, end the wrapping here
- wrapper.Endtag = tag_ident.Val
+ wrapper.Endtag = tagIdent.Val
return wrapper, newParser(p.template.name, tagArgs, p.template), nil
- } else {
- t := p.Current()
- p.Consume()
- if t == nil {
- return nil, nil, p.Error("Unexpected EOF.", p.last_token)
- }
- tagArgs = append(tagArgs, t)
}
+ t := p.Current()
+ p.Consume()
+ if t == nil {
+ return nil, nil, p.Error("Unexpected EOF.", p.lastToken)
+ }
+ tagArgs = append(tagArgs, t)
}
}
}
@@ -263,5 +263,47 @@ func (p *Parser) WrapUntilTag(names ...string) (*NodeWrapper, *Parser, *Error) {
}
return nil, nil, p.Error(fmt.Sprintf("Unexpected EOF, expected tag %s.", strings.Join(names, " or ")),
- p.last_token)
+ p.lastToken)
+}
+
+// SkipUntilTag skips all nodes between the starting tag and "{% endtag %}".
+func (p *Parser) SkipUntilTag(names ...string) *Error {
+ for p.Remaining() > 0 {
+ // New tag, check whether we have to stop skipping here
+ if p.Peek(TokenSymbol, "{%") != nil {
+ tagIdent := p.PeekTypeN(1, TokenIdentifier)
+
+ if tagIdent != nil {
+ // We've found an end-tag (!)
+
+ found := false
+ for _, n := range names {
+ if tagIdent.Val == n {
+ found = true
+ break
+ }
+ }
+
+ // We only process the tag if we've found an end tag
+ if found {
+ // Okay, endtag found.
+ p.ConsumeN(2) // '{%' tagname
+
+ for {
+ if p.Match(TokenSymbol, "%}") != nil {
+ // Done skipping, exit.
+ return nil
+ }
+ }
+ }
+ }
+ }
+ t := p.Current()
+ p.Consume()
+ if t == nil {
+ return p.Error("Unexpected EOF.", p.lastToken)
+ }
+ }
+
+ return p.Error(fmt.Sprintf("Unexpected EOF, expected tag %s.", strings.Join(names, " or ")), p.lastToken)
}
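The new `SkipUntilTag` above discards tokens until it reaches a `{% name %}` whose name is one of the given end tags. The core scan can be sketched over a plain token slice (`token` and `skipUntilTag` are simplified stand-ins, not pongo2's types):

```go
package main

import "fmt"

// token is a minimal stand-in for pongo2's *Token for this sketch.
type token struct{ typ, val string }

// skipUntilTag scans forward from start, looking for the first
// "{% name %}" whose name is in names. It returns the index just
// past the closing "%}", or -1 if no such end tag is found (EOF).
func skipUntilTag(tokens []token, start int, names ...string) int {
	for i := start; i < len(tokens); i++ {
		if tokens[i].val != "{%" || i+1 >= len(tokens) {
			continue
		}
		for _, n := range names {
			if tokens[i+1].val == n {
				// end tag found: consume '{%' name, then
				// everything up to and including '%}'
				for j := i + 2; j < len(tokens); j++ {
					if tokens[j].val == "%}" {
						return j + 1
					}
				}
				return -1 // tag never closed
			}
		}
	}
	return -1
}

func main() {
	toks := []token{{"html", "x"}, {"sym", "{%"},
		{"ident", "endif"}, {"sym", "%}"}, {"html", "y"}}
	fmt.Println(skipUntilTag(toks, 0, "endif")) // 4
}
```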
diff --git a/vendor/github.com/flosch/pongo2/parser_expression.go b/vendor/github.com/flosch/pongo2/parser_expression.go
index c1002de..988468e 100644
--- a/vendor/github.com/flosch/pongo2/parser_expression.go
+++ b/vendor/github.com/flosch/pongo2/parser_expression.go
@@ -1,38 +1,37 @@
package pongo2
import (
- "bytes"
"fmt"
"math"
)
type Expression struct {
// TODO: Add location token?
- expr1 IEvaluator
- expr2 IEvaluator
- op_token *Token
+ expr1 IEvaluator
+ expr2 IEvaluator
+ opToken *Token
}
type relationalExpression struct {
// TODO: Add location token?
- expr1 IEvaluator
- expr2 IEvaluator
- op_token *Token
+ expr1 IEvaluator
+ expr2 IEvaluator
+ opToken *Token
}
type simpleExpression struct {
- negate bool
- negative_sign bool
- term1 IEvaluator
- term2 IEvaluator
- op_token *Token
+ negate bool
+ negativeSign bool
+ term1 IEvaluator
+ term2 IEvaluator
+ opToken *Token
}
type term struct {
// TODO: Add location token?
- factor1 IEvaluator
- factor2 IEvaluator
- op_token *Token
+ factor1 IEvaluator
+ factor2 IEvaluator
+ opToken *Token
}
type power struct {
@@ -56,14 +55,14 @@ func (expr *simpleExpression) FilterApplied(name string) bool {
(expr.term2 != nil && expr.term2.FilterApplied(name)))
}
-func (t *term) FilterApplied(name string) bool {
- return t.factor1.FilterApplied(name) && (t.factor2 == nil ||
- (t.factor2 != nil && t.factor2.FilterApplied(name)))
+func (expr *term) FilterApplied(name string) bool {
+ return expr.factor1.FilterApplied(name) && (expr.factor2 == nil ||
+ (expr.factor2 != nil && expr.factor2.FilterApplied(name)))
}
-func (p *power) FilterApplied(name string) bool {
- return p.power1.FilterApplied(name) && (p.power2 == nil ||
- (p.power2 != nil && p.power2.FilterApplied(name)))
+func (expr *power) FilterApplied(name string) bool {
+ return expr.power1.FilterApplied(name) && (expr.power2 == nil ||
+ (expr.power2 != nil && expr.power2.FilterApplied(name)))
}
func (expr *Expression) GetPositionToken() *Token {
@@ -86,48 +85,48 @@ func (expr *power) GetPositionToken() *Token {
return expr.power1.GetPositionToken()
}
-func (expr *Expression) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (expr *Expression) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *relationalExpression) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (expr *relationalExpression) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *simpleExpression) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (expr *simpleExpression) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *term) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (expr *term) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *power) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (expr *power) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
@@ -141,13 +140,13 @@ func (expr *Expression) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
if err != nil {
return nil, err
}
- switch expr.op_token.Val {
+ switch expr.opToken.Val {
case "and", "&&":
return AsValue(v1.IsTrue() && v2.IsTrue()), nil
case "or", "||":
return AsValue(v1.IsTrue() || v2.IsTrue()), nil
default:
- panic(fmt.Sprintf("unimplemented: %s", expr.op_token.Val))
+ return nil, ctx.Error(fmt.Sprintf("unimplemented: %s", expr.opToken.Val), expr.opToken)
}
} else {
return v1, nil
@@ -164,39 +163,35 @@ func (expr *relationalExpression) Evaluate(ctx *ExecutionContext) (*Value, *Erro
if err != nil {
return nil, err
}
- switch expr.op_token.Val {
+ switch expr.opToken.Val {
case "<=":
if v1.IsFloat() || v2.IsFloat() {
return AsValue(v1.Float() <= v2.Float()), nil
- } else {
- return AsValue(v1.Integer() <= v2.Integer()), nil
}
+ return AsValue(v1.Integer() <= v2.Integer()), nil
case ">=":
if v1.IsFloat() || v2.IsFloat() {
return AsValue(v1.Float() >= v2.Float()), nil
- } else {
- return AsValue(v1.Integer() >= v2.Integer()), nil
}
+ return AsValue(v1.Integer() >= v2.Integer()), nil
case "==":
return AsValue(v1.EqualValueTo(v2)), nil
case ">":
if v1.IsFloat() || v2.IsFloat() {
return AsValue(v1.Float() > v2.Float()), nil
- } else {
- return AsValue(v1.Integer() > v2.Integer()), nil
}
+ return AsValue(v1.Integer() > v2.Integer()), nil
case "<":
if v1.IsFloat() || v2.IsFloat() {
return AsValue(v1.Float() < v2.Float()), nil
- } else {
- return AsValue(v1.Integer() < v2.Integer()), nil
}
+ return AsValue(v1.Integer() < v2.Integer()), nil
case "!=", "<>":
return AsValue(!v1.EqualValueTo(v2)), nil
case "in":
return AsValue(v2.Contains(v1)), nil
default:
- panic(fmt.Sprintf("unimplemented: %s", expr.op_token.Val))
+ return nil, ctx.Error(fmt.Sprintf("unimplemented: %s", expr.opToken.Val), expr.opToken)
}
} else {
return v1, nil
@@ -214,7 +209,7 @@ func (expr *simpleExpression) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
result = result.Negate()
}
- if expr.negative_sign {
+ if expr.negativeSign {
if result.IsNumber() {
switch {
case result.IsFloat():
@@ -222,7 +217,7 @@ func (expr *simpleExpression) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
case result.IsInteger():
result = AsValue(-1 * result.Integer())
default:
- panic("not possible")
+ return nil, ctx.Error("Operation between a number and a non-(float/integer) is not possible", nil)
}
} else {
return nil, ctx.Error("Negative sign on a non-number expression", expr.GetPositionToken())
@@ -234,42 +229,40 @@ func (expr *simpleExpression) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
if err != nil {
return nil, err
}
- switch expr.op_token.Val {
+ switch expr.opToken.Val {
case "+":
if result.IsFloat() || t2.IsFloat() {
// Result will be a float
return AsValue(result.Float() + t2.Float()), nil
- } else {
- // Result will be an integer
- return AsValue(result.Integer() + t2.Integer()), nil
}
+ // Result will be an integer
+ return AsValue(result.Integer() + t2.Integer()), nil
case "-":
if result.IsFloat() || t2.IsFloat() {
// Result will be a float
return AsValue(result.Float() - t2.Float()), nil
- } else {
- // Result will be an integer
- return AsValue(result.Integer() - t2.Integer()), nil
}
+ // Result will be an integer
+ return AsValue(result.Integer() - t2.Integer()), nil
default:
- panic("unimplemented")
+ return nil, ctx.Error("Unimplemented", expr.GetPositionToken())
}
}
return result, nil
}
-func (t *term) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
- f1, err := t.factor1.Evaluate(ctx)
+func (expr *term) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
+ f1, err := expr.factor1.Evaluate(ctx)
if err != nil {
return nil, err
}
- if t.factor2 != nil {
- f2, err := t.factor2.Evaluate(ctx)
+ if expr.factor2 != nil {
+ f2, err := expr.factor2.Evaluate(ctx)
if err != nil {
return nil, err
}
- switch t.op_token.Val {
+ switch expr.opToken.Val {
case "*":
if f1.IsFloat() || f2.IsFloat() {
// Result will be float
@@ -288,27 +281,26 @@ func (t *term) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
// Result will be int
return AsValue(f1.Integer() % f2.Integer()), nil
default:
- panic("unimplemented")
+ return nil, ctx.Error("unimplemented", expr.opToken)
}
} else {
return f1, nil
}
}
-func (pw *power) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
- p1, err := pw.power1.Evaluate(ctx)
+func (expr *power) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
+ p1, err := expr.power1.Evaluate(ctx)
if err != nil {
return nil, err
}
- if pw.power2 != nil {
- p2, err := pw.power2.Evaluate(ctx)
+ if expr.power2 != nil {
+ p2, err := expr.power2.Evaluate(ctx)
if err != nil {
return nil, err
}
return AsValue(math.Pow(p1.Float(), p2.Float())), nil
- } else {
- return p1, nil
}
+ return p1, nil
}
func (p *Parser) parseFactor() (IEvaluator, *Error) {
@@ -352,19 +344,19 @@ func (p *Parser) parsePower() (IEvaluator, *Error) {
}
func (p *Parser) parseTerm() (IEvaluator, *Error) {
- return_term := new(term)
+ returnTerm := new(term)
factor1, err := p.parsePower()
if err != nil {
return nil, err
}
- return_term.factor1 = factor1
+ returnTerm.factor1 = factor1
for p.PeekOne(TokenSymbol, "*", "/", "%") != nil {
- if return_term.op_token != nil {
+ if returnTerm.opToken != nil {
// Create new sub-term
- return_term = &term{
- factor1: return_term,
+ returnTerm = &term{
+ factor1: returnTerm,
}
}
@@ -376,16 +368,16 @@ func (p *Parser) parseTerm() (IEvaluator, *Error) {
return nil, err
}
- return_term.op_token = op
- return_term.factor2 = factor2
+ returnTerm.opToken = op
+ returnTerm.factor2 = factor2
}
- if return_term.op_token == nil {
+ if returnTerm.opToken == nil {
// Shortcut for faster evaluation
- return return_term.factor1, nil
+ return returnTerm.factor1, nil
}
- return return_term, nil
+ return returnTerm, nil
}
func (p *Parser) parseSimpleExpression() (IEvaluator, *Error) {
@@ -393,7 +385,7 @@ func (p *Parser) parseSimpleExpression() (IEvaluator, *Error) {
if sign := p.MatchOne(TokenSymbol, "+", "-"); sign != nil {
if sign.Val == "-" {
- expr.negative_sign = true
+ expr.negativeSign = true
}
}
@@ -408,7 +400,7 @@ func (p *Parser) parseSimpleExpression() (IEvaluator, *Error) {
expr.term1 = term1
for p.PeekOne(TokenSymbol, "+", "-") != nil {
- if expr.op_token != nil {
+ if expr.opToken != nil {
// New sub expr
expr = &simpleExpression{
term1: expr,
@@ -424,10 +416,10 @@ func (p *Parser) parseSimpleExpression() (IEvaluator, *Error) {
}
expr.term2 = term2
- expr.op_token = op
+ expr.opToken = op
}
- if expr.negate == false && expr.negative_sign == false && expr.term2 == nil {
+ if expr.negate == false && expr.negativeSign == false && expr.term2 == nil {
// Shortcut for faster evaluation
return expr.term1, nil
}
@@ -450,14 +442,14 @@ func (p *Parser) parseRelationalExpression() (IEvaluator, *Error) {
if err != nil {
return nil, err
}
- expr.op_token = t
+ expr.opToken = t
expr.expr2 = expr2
} else if t := p.MatchOne(TokenKeyword, "in"); t != nil {
expr2, err := p.parseSimpleExpression()
if err != nil {
return nil, err
}
- expr.op_token = t
+ expr.opToken = t
expr.expr2 = expr2
}
@@ -487,7 +479,7 @@ func (p *Parser) ParseExpression() (IEvaluator, *Error) {
return nil, err
}
exp.expr2 = expr2
- exp.op_token = op
+ exp.opToken = op
}
if exp.expr2 == nil {
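The `Evaluate` methods in this file all share one numeric-promotion rule: if either operand is a float, the operation runs in floating point; otherwise both operands are treated as integers. A minimal sketch of that rule for `+` (`addValues` is a hypothetical helper, not part of pongo2):

```go
package main

import "fmt"

// addValues mirrors the promotion rule used by the expression
// evaluators above: float wins over int; pure-int input stays int.
// Only int and float64 operands are handled in this sketch.
func addValues(a, b interface{}) interface{} {
	af, aIsF := a.(float64)
	bf, bIsF := b.(float64)
	if aIsF || bIsF {
		// promote the non-float side, then add as float64
		if !aIsF {
			af = float64(a.(int))
		}
		if !bIsF {
			bf = float64(b.(int))
		}
		return af + bf
	}
	// both integers: integer arithmetic, no promotion
	return a.(int) + b.(int)
}

func main() {
	fmt.Println(addValues(1, 2))   // 3
	fmt.Println(addValues(1, 2.5)) // 3.5
}
```

The same shape repeats for `-`, `*`, `/`, `%`, and the relational operators, which is why each `case` in the diff checks `IsFloat()` on both sides before choosing `Float()` or `Integer()`.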
diff --git a/vendor/github.com/flosch/pongo2/pongo2.go b/vendor/github.com/flosch/pongo2/pongo2.go
index e61faa4..eda3aa0 100644
--- a/vendor/github.com/flosch/pongo2/pongo2.go
+++ b/vendor/github.com/flosch/pongo2/pongo2.go
@@ -1,10 +1,10 @@
package pongo2
// Version string
-const Version = "v3"
+const Version = "dev"
-// Helper function which panics, if a Template couldn't
-// successfully parsed. This is how you would use it:
+// Must panics if a Template couldn't be parsed successfully. This is how
+// you would use it:
// var baseTemplate = pongo2.Must(pongo2.FromFile("templates/base.html"))
func Must(tpl *Template, err error) *Template {
if err != nil {
diff --git a/vendor/github.com/flosch/pongo2/pongo2_issues_test.go b/vendor/github.com/flosch/pongo2/pongo2_issues_test.go
index 731a290..725ab41 100644
--- a/vendor/github.com/flosch/pongo2/pongo2_issues_test.go
+++ b/vendor/github.com/flosch/pongo2/pongo2_issues_test.go
@@ -1,20 +1,29 @@
-package pongo2
+package pongo2_test
import (
"testing"
- . "gopkg.in/check.v1"
+ "github.com/flosch/pongo2"
)
-// Hook up gocheck into the "go test" runner.
+func TestIssue151(t *testing.T) {
+ tpl, err := pongo2.FromString("{{ mydict.51232_3 }}{{ 12345_123}}{{ 995189baz }}")
+ if err != nil {
+ t.Fatal(err)
+ }
-func TestIssues(t *testing.T) { TestingT(t) }
+ str, err := tpl.Execute(pongo2.Context{
+ "mydict": map[string]string{
+ "51232_3": "foo",
+ },
+ "12345_123": "bar",
+ "995189baz": "baz",
+ })
+ if err != nil {
+ t.Fatal(err)
+ }
-type IssueTestSuite struct{}
-
-var _ = Suite(&IssueTestSuite{})
-
-func (s *TestSuite) TestIssues(c *C) {
- // Add a test for any issue
- c.Check(42, Equals, 42)
+ if str != "foobarbaz" {
+ t.Fatalf("Expected output 'foobarbaz', but got '%s'.", str)
+ }
}
diff --git a/vendor/github.com/flosch/pongo2/pongo2_template_test.go b/vendor/github.com/flosch/pongo2/pongo2_template_test.go
index b6dc8fa..4b7d8fa 100644
--- a/vendor/github.com/flosch/pongo2/pongo2_template_test.go
+++ b/vendor/github.com/flosch/pongo2/pongo2_template_test.go
@@ -1,4 +1,4 @@
-package pongo2
+package pongo2_test
import (
"bytes"
@@ -9,9 +9,11 @@ import (
"strings"
"testing"
"time"
+
+ "github.com/flosch/pongo2"
)
-var admin_list = []string{"user2"}
+var adminList = []string{"user2"}
var time1 = time.Date(2014, 06, 10, 15, 30, 15, 0, time.UTC)
var time2 = time.Date(2011, 03, 21, 8, 37, 56, 12, time.UTC)
@@ -32,8 +34,8 @@ type comment struct {
Text string
}
-func is_admin(u *user) bool {
- for _, a := range admin_list {
+func isAdmin(u *user) bool {
+ for _, a := range adminList {
if a == u.Name {
return true
}
@@ -41,12 +43,12 @@ func is_admin(u *user) bool {
return false
}
-func (u *user) Is_admin() *Value {
- return AsValue(is_admin(u))
+func (u *user) Is_admin() *pongo2.Value {
+ return pongo2.AsValue(isAdmin(u))
}
func (u *user) Is_admin2() bool {
- return is_admin(u)
+ return isAdmin(u)
}
func (p *post) String() string {
@@ -60,74 +62,53 @@ func (p *post) String() string {
type tagSandboxDemoTag struct {
}
-func (node *tagSandboxDemoTag) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- buffer.WriteString("hello")
+func (node *tagSandboxDemoTag) Execute(ctx *pongo2.ExecutionContext, writer pongo2.TemplateWriter) *pongo2.Error {
+ writer.WriteString("hello")
return nil
}
-func tagSandboxDemoTagParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
+func tagSandboxDemoTagParser(doc *pongo2.Parser, start *pongo2.Token, arguments *pongo2.Parser) (pongo2.INodeTag, *pongo2.Error) {
return &tagSandboxDemoTag{}, nil
}
-func BannedFilterFn(in *Value, params *Value) (*Value, *Error) {
+func BannedFilterFn(in *pongo2.Value, params *pongo2.Value) (*pongo2.Value, *pongo2.Error) {
return in, nil
}
func init() {
- DefaultSet.Debug = true
+ pongo2.DefaultSet.Debug = true
- RegisterFilter("banned_filter", BannedFilterFn)
- RegisterFilter("unbanned_filter", BannedFilterFn)
- RegisterTag("banned_tag", tagSandboxDemoTagParser)
- RegisterTag("unbanned_tag", tagSandboxDemoTagParser)
+ pongo2.RegisterFilter("banned_filter", BannedFilterFn)
+ pongo2.RegisterFilter("unbanned_filter", BannedFilterFn)
+ pongo2.RegisterTag("banned_tag", tagSandboxDemoTagParser)
+ pongo2.RegisterTag("unbanned_tag", tagSandboxDemoTagParser)
- DefaultSet.BanFilter("banned_filter")
- DefaultSet.BanTag("banned_tag")
-
- // Allow different kind of levels inside template_tests/
- abs_path, err := filepath.Abs("./template_tests/*")
- if err != nil {
- panic(err)
- }
- DefaultSet.SandboxDirectories = append(DefaultSet.SandboxDirectories, abs_path)
-
- abs_path, err = filepath.Abs("./template_tests/*/*")
- if err != nil {
- panic(err)
- }
- DefaultSet.SandboxDirectories = append(DefaultSet.SandboxDirectories, abs_path)
-
- abs_path, err = filepath.Abs("./template_tests/*/*/*")
- if err != nil {
- panic(err)
- }
- DefaultSet.SandboxDirectories = append(DefaultSet.SandboxDirectories, abs_path)
-
- // Allow pongo2 temp files
- DefaultSet.SandboxDirectories = append(DefaultSet.SandboxDirectories, "/tmp/pongo2_*")
+ pongo2.DefaultSet.BanFilter("banned_filter")
+ pongo2.DefaultSet.BanTag("banned_tag")
f, err := ioutil.TempFile("/tmp/", "pongo2_")
if err != nil {
panic("cannot write to /tmp/")
}
f.Write([]byte("Hello from pongo2"))
- DefaultSet.Globals["temp_file"] = f.Name()
+ pongo2.DefaultSet.Globals["temp_file"] = f.Name()
}
/*
* End setup sandbox
*/
-var tplContext = Context{
+var tplContext = pongo2.Context{
"number": 11,
"simple": map[string]interface{}{
- "number": 42,
- "name": "john doe",
- "included_file": "INCLUDES.helper",
- "nil": nil,
- "uint": uint(8),
- "float": float64(3.1415),
- "str": "string",
+ "number": 42,
+ "name": "john doe",
+ "included_file": "INCLUDES.helper",
+ "included_file_not_exists": "INCLUDES.helper.not_exists",
+ "nil": nil,
+ "uint": uint(8),
+ "float": float64(3.1415),
+ "str": "string",
"chinese_hello_world": "你好世界",
"bool_true": true,
"bool_false": false,
@@ -142,13 +123,23 @@ Yep!`,
"escape_js_test": `escape sequences \r\n\'\" special chars "?!=$<>`,
"one_item_list": []int{99},
"multiple_item_list": []int{1, 1, 2, 3, 5, 8, 13, 21, 34, 55},
+ "unsorted_int_list": []int{192, 581, 22, 1, 249, 9999, 1828591, 8271},
+ "fixed_item_list": [...]int{1, 2, 3, 4},
"misc_list": []interface{}{"Hello", 99, 3.14, "good"},
"escape_text": "This is \\a Test. \"Yep\". 'Yep'.",
"xss": "",
"intmap": map[int]string{
1: "one",
- 2: "two",
5: "five",
+ 2: "two",
+ },
+ "strmap": map[string]string{
+ "abc": "def",
+ "bcd": "efg",
+ "zab": "cde",
+ "gh": "kqm",
+ "ukq": "qqa",
+ "aab": "aba",
},
"func_add": func(a, b int) int {
return a + b
@@ -167,17 +158,17 @@ Yep!`,
}
return s
},
- "func_variadic_sum_int2": func(args ...*Value) *Value {
+ "func_variadic_sum_int2": func(args ...*pongo2.Value) *pongo2.Value {
// Create a sum
s := 0
for _, i := range args {
s += i.Integer()
}
- return AsValue(s)
+ return pongo2.AsValue(s)
},
},
"complex": map[string]interface{}{
- "is_admin": is_admin,
+ "is_admin": isAdmin,
"post": post{
Text: "Hello!
Welcome to my new blog page. I'm using pongo2 which supports {{ variables }} and {% tags %}.
",
Created: time2,
@@ -238,10 +229,8 @@ Yep!`,
}
func TestTemplates(t *testing.T) {
- debug = true
-
// Add a global to the default set
- Globals["this_is_a_global_variable"] = "this is a global text"
+ pongo2.Globals["this_is_a_global_variable"] = "this is a global text"
matches, err := filepath.Glob("./template_tests/*.tpl")
if err != nil {
@@ -249,34 +238,34 @@ func TestTemplates(t *testing.T) {
}
for idx, match := range matches {
t.Logf("[Template %3d] Testing '%s'", idx+1, match)
- tpl, err := FromFile(match)
+ tpl, err := pongo2.FromFile(match)
if err != nil {
t.Fatalf("Error on FromFile('%s'): %s", match, err.Error())
}
- test_filename := fmt.Sprintf("%s.out", match)
- test_out, rerr := ioutil.ReadFile(test_filename)
+ testFilename := fmt.Sprintf("%s.out", match)
+ testOut, rerr := ioutil.ReadFile(testFilename)
if rerr != nil {
- t.Fatalf("Error on ReadFile('%s'): %s", test_filename, rerr.Error())
+ t.Fatalf("Error on ReadFile('%s'): %s", testFilename, rerr.Error())
}
- tpl_out, err := tpl.ExecuteBytes(tplContext)
+ tplOut, err := tpl.ExecuteBytes(tplContext)
if err != nil {
t.Fatalf("Error on Execute('%s'): %s", match, err.Error())
}
- if bytes.Compare(test_out, tpl_out) != 0 {
- t.Logf("Template (rendered) '%s': '%s'", match, tpl_out)
- err_filename := filepath.Base(fmt.Sprintf("%s.error", match))
- err := ioutil.WriteFile(err_filename, []byte(tpl_out), 0600)
+ if bytes.Compare(testOut, tplOut) != 0 {
+ t.Logf("Template (rendered) '%s': '%s'", match, tplOut)
+ errFilename := filepath.Base(fmt.Sprintf("%s.error", match))
+ err := ioutil.WriteFile(errFilename, []byte(tplOut), 0600)
if err != nil {
t.Fatalf(err.Error())
}
- t.Logf("get a complete diff with command: 'diff -ya %s %s'", test_filename, err_filename)
+ t.Logf("get a complete diff with command: 'diff -ya %s %s'", testFilename, errFilename)
t.Errorf("Failed: test_out != tpl_out for %s", match)
}
}
}
func TestExecutionErrors(t *testing.T) {
- debug = true
+ //debug = true
matches, err := filepath.Glob("./template_tests/*-execution.err")
if err != nil {
@@ -285,15 +274,15 @@ func TestExecutionErrors(t *testing.T) {
for idx, match := range matches {
t.Logf("[Errors %3d] Testing '%s'", idx+1, match)
- test_data, err := ioutil.ReadFile(match)
- tests := strings.Split(string(test_data), "\n")
+ testData, err := ioutil.ReadFile(match)
+ tests := strings.Split(string(testData), "\n")
- check_filename := fmt.Sprintf("%s.out", match)
- check_data, err := ioutil.ReadFile(check_filename)
+ checkFilename := fmt.Sprintf("%s.out", match)
+ checkData, err := ioutil.ReadFile(checkFilename)
if err != nil {
- t.Fatalf("Error on ReadFile('%s'): %s", check_filename, err.Error())
+ t.Fatalf("Error on ReadFile('%s'): %s", checkFilename, err.Error())
}
- checks := strings.Split(string(check_data), "\n")
+ checks := strings.Split(string(checkData), "\n")
if len(checks) != len(tests) {
t.Fatal("Template lines != Checks lines")
@@ -308,11 +297,16 @@ func TestExecutionErrors(t *testing.T) {
match, idx+1)
}
- tpl, err := FromString(test)
+ tpl, err := pongo2.FromString(test)
if err != nil {
t.Fatalf("Error on FromString('%s'): %s", test, err.Error())
}
+ tpl, err = pongo2.FromBytes([]byte(test))
+ if err != nil {
+ t.Fatalf("Error on FromBytes('%s'): %s", test, err.Error())
+ }
+
_, err = tpl.ExecuteBytes(tplContext)
if err == nil {
t.Fatalf("[%s Line %d] Expected error for (got none): %s",
@@ -329,7 +323,7 @@ func TestExecutionErrors(t *testing.T) {
}
func TestCompilationErrors(t *testing.T) {
- debug = true
+ //debug = true
matches, err := filepath.Glob("./template_tests/*-compilation.err")
if err != nil {
@@ -338,15 +332,15 @@ func TestCompilationErrors(t *testing.T) {
for idx, match := range matches {
t.Logf("[Errors %3d] Testing '%s'", idx+1, match)
- test_data, err := ioutil.ReadFile(match)
- tests := strings.Split(string(test_data), "\n")
+ testData, err := ioutil.ReadFile(match)
+ tests := strings.Split(string(testData), "\n")
- check_filename := fmt.Sprintf("%s.out", match)
- check_data, err := ioutil.ReadFile(check_filename)
+ checkFilename := fmt.Sprintf("%s.out", match)
+ checkData, err := ioutil.ReadFile(checkFilename)
if err != nil {
- t.Fatalf("Error on ReadFile('%s'): %s", check_filename, err.Error())
+ t.Fatalf("Error on ReadFile('%s'): %s", checkFilename, err.Error())
}
- checks := strings.Split(string(check_data), "\n")
+ checks := strings.Split(string(checkData), "\n")
if len(checks) != len(tests) {
t.Fatal("Template lines != Checks lines")
@@ -361,7 +355,7 @@ func TestCompilationErrors(t *testing.T) {
match, idx+1)
}
- _, err = FromString(test)
+ _, err = pongo2.FromString(test)
if err == nil {
t.Fatalf("[%s | Line %d] Expected error for (got none): %s", match, idx+1, tests[idx])
}
@@ -377,9 +371,10 @@ func TestCompilationErrors(t *testing.T) {
func TestBaseDirectory(t *testing.T) {
mustStr := "Hello from template_tests/base_dir_test/"
- s := NewSet("test set with base directory")
+ fs := pongo2.MustNewLocalFileSystemLoader("")
+ s := pongo2.NewSet("test set with base directory", fs)
s.Globals["base_directory"] = "template_tests/base_dir_test/"
- if err := s.SetBaseDirectory(s.Globals["base_directory"].(string)); err != nil {
+ if err := fs.SetBaseDir(s.Globals["base_directory"].(string)); err != nil {
t.Fatal(err)
}
@@ -405,13 +400,13 @@ func TestBaseDirectory(t *testing.T) {
}
func BenchmarkCache(b *testing.B) {
- cache_set := NewSet("cache set")
+ cacheSet := pongo2.NewSet("cache set", pongo2.MustNewLocalFileSystemLoader(""))
for i := 0; i < b.N; i++ {
- tpl, err := cache_set.FromCache("template_tests/complex.tpl")
+ tpl, err := cacheSet.FromCache("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
}
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -419,14 +414,14 @@ func BenchmarkCache(b *testing.B) {
}
func BenchmarkCacheDebugOn(b *testing.B) {
- cache_debug_set := NewSet("cache set")
- cache_debug_set.Debug = true
+ cacheDebugSet := pongo2.NewSet("cache set", pongo2.MustNewLocalFileSystemLoader(""))
+ cacheDebugSet.Debug = true
for i := 0; i < b.N; i++ {
- tpl, err := cache_debug_set.FromFile("template_tests/complex.tpl")
+ tpl, err := cacheDebugSet.FromFile("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
}
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -434,13 +429,13 @@ func BenchmarkCacheDebugOn(b *testing.B) {
}
func BenchmarkExecuteComplexWithSandboxActive(b *testing.B) {
- tpl, err := FromFile("template_tests/complex.tpl")
+ tpl, err := pongo2.FromFile("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -455,12 +450,12 @@ func BenchmarkCompileAndExecuteComplexWithSandboxActive(b *testing.B) {
preloadedTpl := string(buf)
b.ResetTimer()
for i := 0; i < b.N; i++ {
- tpl, err := FromString(preloadedTpl)
+ tpl, err := pongo2.FromString(preloadedTpl)
if err != nil {
b.Fatal(err)
}
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -468,14 +463,14 @@ func BenchmarkCompileAndExecuteComplexWithSandboxActive(b *testing.B) {
}
func BenchmarkParallelExecuteComplexWithSandboxActive(b *testing.B) {
- tpl, err := FromFile("template_tests/complex.tpl")
+ tpl, err := pongo2.FromFile("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
- _, err := tpl.ExecuteBytes(tplContext)
+ err := tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -484,14 +479,14 @@ func BenchmarkParallelExecuteComplexWithSandboxActive(b *testing.B) {
}
func BenchmarkExecuteComplexWithoutSandbox(b *testing.B) {
- s := NewSet("set without sandbox")
+ s := pongo2.NewSet("set without sandbox", pongo2.MustNewLocalFileSystemLoader(""))
tpl, err := s.FromFile("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -505,7 +500,7 @@ func BenchmarkCompileAndExecuteComplexWithoutSandbox(b *testing.B) {
}
preloadedTpl := string(buf)
- s := NewSet("set without sandbox")
+ s := pongo2.NewSet("set without sandbox", pongo2.MustNewLocalFileSystemLoader(""))
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -514,7 +509,7 @@ func BenchmarkCompileAndExecuteComplexWithoutSandbox(b *testing.B) {
b.Fatal(err)
}
- _, err = tpl.ExecuteBytes(tplContext)
+ err = tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
@@ -522,7 +517,7 @@ func BenchmarkCompileAndExecuteComplexWithoutSandbox(b *testing.B) {
}
func BenchmarkParallelExecuteComplexWithoutSandbox(b *testing.B) {
- s := NewSet("set without sandbox")
+ s := pongo2.NewSet("set without sandbox", pongo2.MustNewLocalFileSystemLoader(""))
tpl, err := s.FromFile("template_tests/complex.tpl")
if err != nil {
b.Fatal(err)
@@ -530,7 +525,7 @@ func BenchmarkParallelExecuteComplexWithoutSandbox(b *testing.B) {
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
- _, err := tpl.ExecuteBytes(tplContext)
+ err := tpl.ExecuteWriterUnbuffered(tplContext, ioutil.Discard)
if err != nil {
b.Fatal(err)
}
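The benchmark hunks above migrate every call from `Template.ExecuteBytes` (which returned a rendered `[]byte`) to `Template.ExecuteWriterUnbuffered`, which streams output into an `io.Writer` and returns only an error. A minimal sketch of the new call shape (hedged; assumes the post-change pongo2 API):

```go
package main

import (
	"io/ioutil"
	"log"

	"github.com/flosch/pongo2"
)

func main() {
	tpl, err := pongo2.FromString("Hello {{ name }}!")
	if err != nil {
		log.Fatal(err)
	}
	// No byte slice is returned any more; output goes straight to the writer.
	err = tpl.ExecuteWriterUnbuffered(pongo2.Context{"name": "pongo2"}, ioutil.Discard)
	if err != nil {
		log.Fatal(err)
	}
}
```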
diff --git a/vendor/github.com/flosch/pongo2/pongo2_test.go b/vendor/github.com/flosch/pongo2/pongo2_test.go
index 5f54584..3a4f6b7 100644
--- a/vendor/github.com/flosch/pongo2/pongo2_test.go
+++ b/vendor/github.com/flosch/pongo2/pongo2_test.go
@@ -1,26 +1,26 @@
-package pongo2
+package pongo2_test
import (
"testing"
+ "github.com/flosch/pongo2"
. "gopkg.in/check.v1"
)
// Hook up gocheck into the "go test" runner.
-
func Test(t *testing.T) { TestingT(t) }
type TestSuite struct {
- tpl *Template
+ tpl *pongo2.Template
}
var (
- _ = Suite(&TestSuite{})
- test_suite2 = NewSet("test suite 2")
+ _ = Suite(&TestSuite{})
+ testSuite2 = pongo2.NewSet("test suite 2", pongo2.MustNewLocalFileSystemLoader(""))
)
-func parseTemplate(s string, c Context) string {
- t, err := test_suite2.FromString(s)
+func parseTemplate(s string, c pongo2.Context) string {
+ t, err := testSuite2.FromString(s)
if err != nil {
panic(err)
}
@@ -31,7 +31,7 @@ func parseTemplate(s string, c Context) string {
return out
}
-func parseTemplateFn(s string, c Context) func() {
+func parseTemplateFn(s string, c pongo2.Context) func() {
return func() {
parseTemplate(s, c)
}
@@ -40,27 +40,64 @@ func parseTemplateFn(s string, c Context) func() {
func (s *TestSuite) TestMisc(c *C) {
// Must
// TODO: Add better error message (see issue #18)
- c.Check(func() { Must(test_suite2.FromFile("template_tests/inheritance/base2.tpl")) },
+ c.Check(
+ func() { pongo2.Must(testSuite2.FromFile("template_tests/inheritance/base2.tpl")) },
PanicMatches,
- `\[Error \(where: fromfile\) in template_tests/inheritance/doesnotexist.tpl | Line 1 Col 12 near 'doesnotexist.tpl'\] open template_tests/inheritance/doesnotexist.tpl: no such file or directory`)
+ `\[Error \(where: fromfile\) in .*template_tests/inheritance/doesnotexist.tpl | Line 1 Col 12 near 'doesnotexist.tpl'\] open .*template_tests/inheritance/doesnotexist.tpl: no such file or directory`,
+ )
// Context
- c.Check(parseTemplateFn("", Context{"'illegal": nil}), PanicMatches, ".*not a valid identifier.*")
+ c.Check(parseTemplateFn("", pongo2.Context{"'illegal": nil}), PanicMatches, ".*not a valid identifier.*")
// Registers
- c.Check(func() { RegisterFilter("escape", nil) }, PanicMatches, ".*is already registered.*")
- c.Check(func() { RegisterTag("for", nil) }, PanicMatches, ".*is already registered.*")
+ c.Check(pongo2.RegisterFilter("escape", nil).Error(), Matches, ".*is already registered")
+ c.Check(pongo2.RegisterTag("for", nil).Error(), Matches, ".*is already registered")
// ApplyFilter
- v, err := ApplyFilter("title", AsValue("this is a title"), nil)
+ v, err := pongo2.ApplyFilter("title", pongo2.AsValue("this is a title"), nil)
if err != nil {
c.Fatal(err)
}
c.Check(v.String(), Equals, "This Is A Title")
c.Check(func() {
- _, err := ApplyFilter("doesnotexist", nil, nil)
+ _, err := pongo2.ApplyFilter("doesnotexist", nil, nil)
if err != nil {
panic(err)
}
}, PanicMatches, `\[Error \(where: applyfilter\)\] Filter with name 'doesnotexist' not found.`)
}
+
+func (s *TestSuite) TestImplicitExecCtx(c *C) {
+ tpl, err := pongo2.FromString("{{ ImplicitExec }}")
+ if err != nil {
+ c.Fatalf("Error in FromString: %v", err)
+ }
+
+ val := "a stringy thing"
+
+ res, err := tpl.Execute(pongo2.Context{
+ "Value": val,
+ "ImplicitExec": func(ctx *pongo2.ExecutionContext) string {
+ return ctx.Public["Value"].(string)
+ },
+ })
+
+ if err != nil {
+ c.Fatalf("Error executing template: %v", err)
+ }
+
+ c.Check(res, Equals, val)
+
+ // The implicit ctx should not be persisted from call-to-call
+ res, err = tpl.Execute(pongo2.Context{
+ "ImplicitExec": func() string {
+ return val
+ },
+ })
+
+ if err != nil {
+ c.Fatalf("Error executing template: %v", err)
+ }
+
+ c.Check(res, Equals, val)
+}
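The new TestImplicitExecCtx above exercises implicit execution contexts: when a context value is a function whose first parameter is `*pongo2.ExecutionContext`, pongo2 supplies the live context at call time. A hedged sketch from the caller's side (assumes the post-change API; names other than `ExecutionContext`, `Context`, and `Execute` are illustrative):

```go
tpl, err := pongo2.FromString("{{ Greet }}")
if err != nil {
	panic(err)
}
out, err := tpl.Execute(pongo2.Context{
	"Name": "World",
	// The *ExecutionContext parameter is filled in implicitly by pongo2,
	// giving the function access to the public context of this execution.
	"Greet": func(ctx *pongo2.ExecutionContext) string {
		return "Hello, " + ctx.Public["Name"].(string)
	},
})
```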
diff --git a/vendor/github.com/flosch/pongo2/tags.go b/vendor/github.com/flosch/pongo2/tags.go
index 292c30d..3668b06 100644
--- a/vendor/github.com/flosch/pongo2/tags.go
+++ b/vendor/github.com/flosch/pongo2/tags.go
@@ -21,6 +21,8 @@ package pongo2
import (
"fmt"
+
+ "github.com/juju/errors"
)
type INodeTag interface {
@@ -53,80 +55,81 @@ func init() {
tags = make(map[string]*tag)
}
-// Registers a new tag. If there's already a tag with the same
-// name, RegisterTag will panic. You usually want to call this
+// Registers a new tag. You usually want to call this
// function in the tag's init() function:
// http://golang.org/doc/effective_go.html#init
//
// See http://www.florian-schlachter.de/post/pongo2/ for more about
// writing filters and tags.
-func RegisterTag(name string, parserFn TagParser) {
+func RegisterTag(name string, parserFn TagParser) error {
_, existing := tags[name]
if existing {
- panic(fmt.Sprintf("Tag with name '%s' is already registered.", name))
+ return errors.Errorf("tag with name '%s' is already registered", name)
}
tags[name] = &tag{
name: name,
parser: parserFn,
}
+ return nil
}
// Replaces an already registered tag with a new implementation. Use this
// function with caution since it allows you to change existing tag behaviour.
-func ReplaceTag(name string, parserFn TagParser) {
+func ReplaceTag(name string, parserFn TagParser) error {
_, existing := tags[name]
if !existing {
- panic(fmt.Sprintf("Tag with name '%s' does not exist (therefore cannot be overridden).", name))
+ return errors.Errorf("tag with name '%s' does not exist (therefore cannot be overridden)", name)
}
tags[name] = &tag{
name: name,
parser: parserFn,
}
+ return nil
}
// Tag = "{%" IDENT ARGS "%}"
func (p *Parser) parseTagElement() (INodeTag, *Error) {
p.Consume() // consume "{%"
- token_name := p.MatchType(TokenIdentifier)
+ tokenName := p.MatchType(TokenIdentifier)
// Check for identifier
- if token_name == nil {
+ if tokenName == nil {
return nil, p.Error("Tag name must be an identifier.", nil)
}
// Check for the existing tag
- tag, exists := tags[token_name.Val]
+ tag, exists := tags[tokenName.Val]
if !exists {
// Does not exist
- return nil, p.Error(fmt.Sprintf("Tag '%s' not found (or beginning tag not provided)", token_name.Val), token_name)
+ return nil, p.Error(fmt.Sprintf("Tag '%s' not found (or beginning tag not provided)", tokenName.Val), tokenName)
}
// Check sandbox tag restriction
- if _, is_banned := p.template.set.bannedTags[token_name.Val]; is_banned {
- return nil, p.Error(fmt.Sprintf("Usage of tag '%s' is not allowed (sandbox restriction active).", token_name.Val), token_name)
+ if _, isBanned := p.template.set.bannedTags[tokenName.Val]; isBanned {
+ return nil, p.Error(fmt.Sprintf("Usage of tag '%s' is not allowed (sandbox restriction active).", tokenName.Val), tokenName)
}
- args_token := make([]*Token, 0)
+ var argsToken []*Token
for p.Peek(TokenSymbol, "%}") == nil && p.Remaining() > 0 {
// Add token to args
- args_token = append(args_token, p.Current())
+ argsToken = append(argsToken, p.Current())
p.Consume() // next token
}
// EOF?
if p.Remaining() == 0 {
- return nil, p.Error("Unexpectedly reached EOF, no tag end found.", p.last_token)
+ return nil, p.Error("Unexpectedly reached EOF, no tag end found.", p.lastToken)
}
p.Match(TokenSymbol, "%}")
- arg_parser := newParser(p.name, args_token, p.template)
- if len(args_token) == 0 {
+ argParser := newParser(p.name, argsToken, p.template)
+ if len(argsToken) == 0 {
// This is done to have nice EOF error messages
- arg_parser.last_token = token_name
+ argParser.lastToken = tokenName
}
p.template.level++
defer func() { p.template.level-- }()
- return tag.parser(p, token_name, arg_parser)
+ return tag.parser(p, tokenName, argParser)
}
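With the change above, RegisterTag and ReplaceTag report a duplicate or unknown name as an error instead of panicking, so callers decide how to react. A hedged sketch (`myTagParser` is a hypothetical TagParser):

```go
// Register a custom tag; on a name collision, explicitly opt in to replacing it.
if err := pongo2.RegisterTag("mytag", myTagParser); err != nil {
	// "already registered": override the existing implementation deliberately.
	if err := pongo2.ReplaceTag("mytag", myTagParser); err != nil {
		panic(err)
	}
}
```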
diff --git a/vendor/github.com/flosch/pongo2/tags_autoescape.go b/vendor/github.com/flosch/pongo2/tags_autoescape.go
index ec30438..590a1db 100644
--- a/vendor/github.com/flosch/pongo2/tags_autoescape.go
+++ b/vendor/github.com/flosch/pongo2/tags_autoescape.go
@@ -1,19 +1,15 @@
package pongo2
-import (
- "bytes"
-)
-
type tagAutoescapeNode struct {
wrapper *NodeWrapper
autoescape bool
}
-func (node *tagAutoescapeNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagAutoescapeNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
old := ctx.Autoescape
ctx.Autoescape = node.autoescape
- err := node.wrapper.Execute(ctx, buffer)
+ err := node.wrapper.Execute(ctx, writer)
if err != nil {
return err
}
@@ -24,22 +20,22 @@ func (node *tagAutoescapeNode) Execute(ctx *ExecutionContext, buffer *bytes.Buff
}
func tagAutoescapeParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- autoescape_node := &tagAutoescapeNode{}
+ autoescapeNode := &tagAutoescapeNode{}
wrapper, _, err := doc.WrapUntilTag("endautoescape")
if err != nil {
return nil, err
}
- autoescape_node.wrapper = wrapper
+ autoescapeNode.wrapper = wrapper
- mode_token := arguments.MatchType(TokenIdentifier)
- if mode_token == nil {
+ modeToken := arguments.MatchType(TokenIdentifier)
+ if modeToken == nil {
return nil, arguments.Error("A mode is required for autoescape-tag.", nil)
}
- if mode_token.Val == "on" {
- autoescape_node.autoescape = true
- } else if mode_token.Val == "off" {
- autoescape_node.autoescape = false
+ if modeToken.Val == "on" {
+ autoescapeNode.autoescape = true
+ } else if modeToken.Val == "off" {
+ autoescapeNode.autoescape = false
} else {
return nil, arguments.Error("Only 'on' or 'off' is valid as an autoescape-mode.", nil)
}
@@ -48,7 +44,7 @@ func tagAutoescapeParser(doc *Parser, start *Token, arguments *Parser) (INodeTag
return nil, arguments.Error("Malformed autoescape-tag arguments.", nil)
}
- return autoescape_node, nil
+ return autoescapeNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_block.go b/vendor/github.com/flosch/pongo2/tags_block.go
index 30e205a..86145f3 100644
--- a/vendor/github.com/flosch/pongo2/tags_block.go
+++ b/vendor/github.com/flosch/pongo2/tags_block.go
@@ -9,47 +9,82 @@ type tagBlockNode struct {
name string
}
-func (node *tagBlockNode) getBlockWrapperByName(tpl *Template) *NodeWrapper {
+func (node *tagBlockNode) getBlockWrappers(tpl *Template) []*NodeWrapper {
+ nodeWrappers := make([]*NodeWrapper, 0)
var t *NodeWrapper
- if tpl.child != nil {
- // First ask the child for the block
- t = node.getBlockWrapperByName(tpl.child)
- }
- if t == nil {
- // Child has no block, lets look up here at parent
+
+ for tpl != nil {
t = tpl.blocks[node.name]
+ if t != nil {
+ nodeWrappers = append(nodeWrappers, t)
+ }
+ tpl = tpl.child
}
- return t
+
+ return nodeWrappers
}
-func (node *tagBlockNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagBlockNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
tpl := ctx.template
if tpl == nil {
panic("internal error: tpl == nil")
}
+
// Determine the block to execute
- block_wrapper := node.getBlockWrapperByName(tpl)
- if block_wrapper == nil {
- // fmt.Printf("could not find: %s\n", node.name)
- return ctx.Error("internal error: block_wrapper == nil in tagBlockNode.Execute()", nil)
+ blockWrappers := node.getBlockWrappers(tpl)
+ lenBlockWrappers := len(blockWrappers)
+
+ if lenBlockWrappers == 0 {
+ return ctx.Error("internal error: len(block_wrappers) == 0 in tagBlockNode.Execute()", nil)
}
- err := block_wrapper.Execute(ctx, buffer)
+
+ blockWrapper := blockWrappers[lenBlockWrappers-1]
+ ctx.Private["block"] = tagBlockInformation{
+ ctx: ctx,
+ wrappers: blockWrappers[0 : lenBlockWrappers-1],
+ }
+ err := blockWrapper.Execute(ctx, writer)
if err != nil {
return err
}
- // TODO: Add support for {{ block.super }}
-
return nil
}
+type tagBlockInformation struct {
+ ctx *ExecutionContext
+ wrappers []*NodeWrapper
+}
+
+func (t tagBlockInformation) Super() string {
+ lenWrappers := len(t.wrappers)
+
+ if lenWrappers == 0 {
+ return ""
+ }
+
+ superCtx := NewChildExecutionContext(t.ctx)
+ superCtx.Private["block"] = tagBlockInformation{
+ ctx: t.ctx,
+ wrappers: t.wrappers[0 : lenWrappers-1],
+ }
+
+ blockWrapper := t.wrappers[lenWrappers-1]
+ buf := bytes.NewBufferString("")
+ err := blockWrapper.Execute(superCtx, &templateWriter{buf})
+ if err != nil {
+ return ""
+ }
+ return buf.String()
+}
+
func tagBlockParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
if arguments.Count() == 0 {
return nil, arguments.Error("Tag 'block' requires an identifier.", nil)
}
- name_token := arguments.MatchType(TokenIdentifier)
- if name_token == nil {
+ nameToken := arguments.MatchType(TokenIdentifier)
+ if nameToken == nil {
return nil, arguments.Error("First argument for tag 'block' must be an identifier.", nil)
}
@@ -62,15 +97,15 @@ func tagBlockParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
return nil, err
}
if endtagargs.Remaining() > 0 {
- endtagname_token := endtagargs.MatchType(TokenIdentifier)
- if endtagname_token != nil {
- if endtagname_token.Val != name_token.Val {
+ endtagnameToken := endtagargs.MatchType(TokenIdentifier)
+ if endtagnameToken != nil {
+ if endtagnameToken.Val != nameToken.Val {
return nil, endtagargs.Error(fmt.Sprintf("Name for 'endblock' must equal to 'block'-tag's name ('%s' != '%s').",
- name_token.Val, endtagname_token.Val), nil)
+ nameToken.Val, endtagnameToken.Val), nil)
}
}
- if endtagname_token == nil || endtagargs.Remaining() > 0 {
+ if endtagnameToken == nil || endtagargs.Remaining() > 0 {
return nil, endtagargs.Error("Either no or only one argument (identifier) allowed for 'endblock'.", nil)
}
}
@@ -79,14 +114,14 @@ func tagBlockParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
if tpl == nil {
panic("internal error: tpl == nil")
}
- _, has_block := tpl.blocks[name_token.Val]
- if !has_block {
- tpl.blocks[name_token.Val] = wrapper
+ _, hasBlock := tpl.blocks[nameToken.Val]
+ if !hasBlock {
+ tpl.blocks[nameToken.Val] = wrapper
} else {
- return nil, arguments.Error(fmt.Sprintf("Block named '%s' already defined", name_token.Val), nil)
+ return nil, arguments.Error(fmt.Sprintf("Block named '%s' already defined", nameToken.Val), nil)
}
- return &tagBlockNode{name: name_token.Val}, nil
+ return &tagBlockNode{name: nameToken.Val}, nil
}
func init() {
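The tagBlockInformation value stored under the private "block" key is what enables Django-style access to the parent block's content from an overriding block. A sketch in template syntax (hypothetical file names):

```django
{# base.tpl #}
{% block content %}base content{% endblock %}

{# child.tpl #}
{% extends "base.tpl" %}
{% block content %}{{ block.Super }} plus child content{% endblock %}
```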
diff --git a/vendor/github.com/flosch/pongo2/tags_comment.go b/vendor/github.com/flosch/pongo2/tags_comment.go
index 8c22496..56a02ed 100644
--- a/vendor/github.com/flosch/pongo2/tags_comment.go
+++ b/vendor/github.com/flosch/pongo2/tags_comment.go
@@ -1,20 +1,16 @@
package pongo2
-import (
- "bytes"
-)
-
type tagCommentNode struct{}
-func (node *tagCommentNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagCommentNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
return nil
}
func tagCommentParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- comment_node := &tagCommentNode{}
+ commentNode := &tagCommentNode{}
// TODO: Process the endtag's arguments (see django 'comment'-tag documentation)
- _, _, err := doc.WrapUntilTag("endcomment")
+ err := doc.SkipUntilTag("endcomment")
if err != nil {
return nil, err
}
@@ -23,7 +19,7 @@ func tagCommentParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
return nil, arguments.Error("Tag 'comment' does not take any argument.", nil)
}
- return comment_node, nil
+ return commentNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_cycle.go b/vendor/github.com/flosch/pongo2/tags_cycle.go
index 6a6830e..ffbd254 100644
--- a/vendor/github.com/flosch/pongo2/tags_cycle.go
+++ b/vendor/github.com/flosch/pongo2/tags_cycle.go
@@ -1,9 +1,5 @@
package pongo2
-import (
- "bytes"
-)
-
type tagCycleValue struct {
node *tagCycleNode
value *Value
@@ -13,7 +9,7 @@ type tagCycleNode struct {
position *Token
args []IEvaluator
idx int
- as_name string
+ asName string
silent bool
}
@@ -21,7 +17,7 @@ func (cv *tagCycleValue) String() string {
return cv.value.String()
}
-func (node *tagCycleNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagCycleNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
item := node.args[node.idx%len(node.args)]
node.idx++
@@ -46,30 +42,30 @@ func (node *tagCycleNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *
t.value = val
if !t.node.silent {
- buffer.WriteString(val.String())
+ writer.WriteString(val.String())
}
} else {
// Regular call
- cycle_value := &tagCycleValue{
+ cycleValue := &tagCycleValue{
node: node,
value: val,
}
- if node.as_name != "" {
- ctx.Private[node.as_name] = cycle_value
+ if node.asName != "" {
+ ctx.Private[node.asName] = cycleValue
}
if !node.silent {
- buffer.WriteString(val.String())
+ writer.WriteString(val.String())
}
}
return nil
}
-// HINT: We're not supporting the old comma-seperated list of expresions argument-style
+// HINT: We're not supporting the old comma-separated list of expressions argument-style
func tagCycleParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- cycle_node := &tagCycleNode{
+ cycleNode := &tagCycleNode{
position: start,
}
@@ -78,19 +74,19 @@ func tagCycleParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
if err != nil {
return nil, err
}
- cycle_node.args = append(cycle_node.args, node)
+ cycleNode.args = append(cycleNode.args, node)
if arguments.MatchOne(TokenKeyword, "as") != nil {
// as
- name_token := arguments.MatchType(TokenIdentifier)
- if name_token == nil {
+ nameToken := arguments.MatchType(TokenIdentifier)
+ if nameToken == nil {
return nil, arguments.Error("Name (identifier) expected after 'as'.", nil)
}
- cycle_node.as_name = name_token.Val
+ cycleNode.asName = nameToken.Val
if arguments.MatchOne(TokenIdentifier, "silent") != nil {
- cycle_node.silent = true
+ cycleNode.silent = true
}
// Now we're finished
@@ -102,7 +98,7 @@ func tagCycleParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
return nil, arguments.Error("Malformed cycle-tag.", nil)
}
- return cycle_node, nil
+ return cycleNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_extends.go b/vendor/github.com/flosch/pongo2/tags_extends.go
index 6abbb6b..5771020 100644
--- a/vendor/github.com/flosch/pongo2/tags_extends.go
+++ b/vendor/github.com/flosch/pongo2/tags_extends.go
@@ -1,19 +1,15 @@
package pongo2
-import (
- "bytes"
-)
-
type tagExtendsNode struct {
filename string
}
-func (node *tagExtendsNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagExtendsNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
return nil
}
func tagExtendsParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- extends_node := &tagExtendsNode{}
+ extendsNode := &tagExtendsNode{}
if doc.template.level > 1 {
return nil, arguments.Error("The 'extends' tag can only be defined on root level.", start)
@@ -24,22 +20,22 @@ func tagExtendsParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
return nil, arguments.Error("This template already has one parent.", start)
}
- if filename_token := arguments.MatchType(TokenString); filename_token != nil {
+ if filenameToken := arguments.MatchType(TokenString); filenameToken != nil {
// prepared, static template
// Get parent's filename
- parent_filename := doc.template.set.resolveFilename(doc.template, filename_token.Val)
+ parentFilename := doc.template.set.resolveFilename(doc.template, filenameToken.Val)
// Parse the parent
- parent_template, err := doc.template.set.FromFile(parent_filename)
+ parentTemplate, err := doc.template.set.FromFile(parentFilename)
if err != nil {
return nil, err.(*Error)
}
// Keep track of things
- parent_template.child = doc.template
- doc.template.parent = parent_template
- extends_node.filename = parent_filename
+ parentTemplate.child = doc.template
+ doc.template.parent = parentTemplate
+ extendsNode.filename = parentFilename
} else {
return nil, arguments.Error("Tag 'extends' requires a template filename as string.", nil)
}
@@ -48,7 +44,7 @@ func tagExtendsParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
return nil, arguments.Error("Tag 'extends' does only take 1 argument.", nil)
}
- return extends_node, nil
+ return extendsNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_filter.go b/vendor/github.com/flosch/pongo2/tags_filter.go
index f421e5a..b38fd92 100644
--- a/vendor/github.com/flosch/pongo2/tags_filter.go
+++ b/vendor/github.com/flosch/pongo2/tags_filter.go
@@ -5,8 +5,8 @@ import (
)
type nodeFilterCall struct {
- name string
- param_expr IEvaluator
+ name string
+ paramExpr IEvaluator
}
type tagFilterNode struct {
@@ -15,7 +15,7 @@ type tagFilterNode struct {
filterChain []*nodeFilterCall
}
-func (node *tagFilterNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagFilterNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
temp := bytes.NewBuffer(make([]byte, 0, 1024)) // 1 KiB size
err := node.bodyWrapper.Execute(ctx, temp)
@@ -27,8 +27,8 @@ func (node *tagFilterNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
for _, call := range node.filterChain {
var param *Value
- if call.param_expr != nil {
- param, err = call.param_expr.Evaluate(ctx)
+ if call.paramExpr != nil {
+ param, err = call.paramExpr.Evaluate(ctx)
if err != nil {
return err
}
@@ -41,13 +41,13 @@ func (node *tagFilterNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
}
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
func tagFilterParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- filter_node := &tagFilterNode{
+ filterNode := &tagFilterNode{
position: start,
}
@@ -55,16 +55,16 @@ func tagFilterParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *E
if err != nil {
return nil, err
}
- filter_node.bodyWrapper = wrapper
+ filterNode.bodyWrapper = wrapper
for arguments.Remaining() > 0 {
filterCall := &nodeFilterCall{}
- name_token := arguments.MatchType(TokenIdentifier)
- if name_token == nil {
+ nameToken := arguments.MatchType(TokenIdentifier)
+ if nameToken == nil {
return nil, arguments.Error("Expected a filter name (identifier).", nil)
}
- filterCall.name = name_token.Val
+ filterCall.name = nameToken.Val
if arguments.MatchOne(TokenSymbol, ":") != nil {
// Filter parameter
@@ -73,10 +73,10 @@ func tagFilterParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *E
if err != nil {
return nil, err
}
- filterCall.param_expr = expr
+ filterCall.paramExpr = expr
}
- filter_node.filterChain = append(filter_node.filterChain, filterCall)
+ filterNode.filterChain = append(filterNode.filterChain, filterCall)
if arguments.MatchOne(TokenSymbol, "|") == nil {
break
@@ -87,7 +87,7 @@ func tagFilterParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *E
return nil, arguments.Error("Malformed filter-tag arguments.", nil)
}
- return filter_node, nil
+ return filterNode, nil
}
func init() {
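For reference, the filter tag whose parser is renamed above pipes its body through one or more filters, each optionally parameterized after a `:`. A sketch in template syntax (filter names assumed from Django-style conventions):

```django
{% filter lower|truncatechars:15 %}
    Some Body Of TEXT
{% endfilter %}
```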
diff --git a/vendor/github.com/flosch/pongo2/tags_firstof.go b/vendor/github.com/flosch/pongo2/tags_firstof.go
index b677979..5b2888e 100644
--- a/vendor/github.com/flosch/pongo2/tags_firstof.go
+++ b/vendor/github.com/flosch/pongo2/tags_firstof.go
@@ -1,15 +1,11 @@
package pongo2
-import (
- "bytes"
-)
-
type tagFirstofNode struct {
position *Token
args []IEvaluator
}
-func (node *tagFirstofNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagFirstofNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
for _, arg := range node.args {
val, err := arg.Evaluate(ctx)
if err != nil {
@@ -24,7 +20,7 @@ func (node *tagFirstofNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
}
}
- buffer.WriteString(val.String())
+ writer.WriteString(val.String())
return nil
}
}
@@ -33,7 +29,7 @@ func (node *tagFirstofNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
}
func tagFirstofParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- firstof_node := &tagFirstofNode{
+ firstofNode := &tagFirstofNode{
position: start,
}
@@ -42,10 +38,10 @@ func tagFirstofParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
if err != nil {
return nil, err
}
- firstof_node.args = append(firstof_node.args, node)
+ firstofNode.args = append(firstofNode.args, node)
}
- return firstof_node, nil
+ return firstofNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_for.go b/vendor/github.com/flosch/pongo2/tags_for.go
index de56699..5b0b555 100644
--- a/vendor/github.com/flosch/pongo2/tags_for.go
+++ b/vendor/github.com/flosch/pongo2/tags_for.go
@@ -1,14 +1,11 @@
package pongo2
-import (
- "bytes"
-)
-
type tagForNode struct {
- key string
- value string // only for maps: for key, value in map
- object_evaluator IEvaluator
- reversed bool
+ key string
+ value string // only for maps: for key, value in map
+ objectEvaluator IEvaluator
+ reversed bool
+ sorted bool
bodyWrapper *NodeWrapper
emptyWrapper *NodeWrapper
@@ -24,7 +21,7 @@ type tagForLoopInformation struct {
Parentloop *tagForLoopInformation
}
-func (node *tagForNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) (forError *Error) {
+func (node *tagForNode) Execute(ctx *ExecutionContext, writer TemplateWriter) (forError *Error) {
// Backup forloop (as parentloop in public context), key-name and value-name
forCtx := NewChildExecutionContext(ctx)
parentloop := forCtx.Private["forloop"]
@@ -42,7 +39,7 @@ func (node *tagForNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) (fo
// Register loopInfo in public context
forCtx.Private["forloop"] = loopInfo
- obj, err := node.object_evaluator.Evaluate(forCtx)
+ obj, err := node.objectEvaluator.Evaluate(forCtx)
if err != nil {
return err
}
@@ -67,7 +64,7 @@ func (node *tagForNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) (fo
loopInfo.Revcounter0 = count - (idx + 1) // TODO: Not sure about this, have to look it up
// Render elements with updated context
- err := node.bodyWrapper.Execute(forCtx, buffer)
+ err := node.bodyWrapper.Execute(forCtx, writer)
if err != nil {
forError = err
return false
@@ -76,30 +73,30 @@ func (node *tagForNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) (fo
}, func() {
// Nothing to iterate over (maybe wrong type or no items)
if node.emptyWrapper != nil {
- err := node.emptyWrapper.Execute(forCtx, buffer)
+ err := node.emptyWrapper.Execute(forCtx, writer)
if err != nil {
forError = err
}
}
- }, node.reversed)
+ }, node.reversed, node.sorted)
- return nil
+ return forError
}
func tagForParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- for_node := &tagForNode{}
+ forNode := &tagForNode{}
// Arguments parsing
- var value_token *Token
- key_token := arguments.MatchType(TokenIdentifier)
- if key_token == nil {
+ var valueToken *Token
+ keyToken := arguments.MatchType(TokenIdentifier)
+ if keyToken == nil {
return nil, arguments.Error("Expected a key identifier as first argument for 'for'-tag", nil)
}
if arguments.Match(TokenSymbol, ",") != nil {
// Value name is provided
- value_token = arguments.MatchType(TokenIdentifier)
- if value_token == nil {
+ valueToken = arguments.MatchType(TokenIdentifier)
+ if valueToken == nil {
return nil, arguments.Error("Value name must be an identifier.", nil)
}
}
@@ -108,18 +105,22 @@ func tagForParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Erro
return nil, arguments.Error("Expected keyword 'in'.", nil)
}
- object_evaluator, err := arguments.ParseExpression()
+ objectEvaluator, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- for_node.object_evaluator = object_evaluator
- for_node.key = key_token.Val
- if value_token != nil {
- for_node.value = value_token.Val
+ forNode.objectEvaluator = objectEvaluator
+ forNode.key = keyToken.Val
+ if valueToken != nil {
+ forNode.value = valueToken.Val
}
if arguments.MatchOne(TokenIdentifier, "reversed") != nil {
- for_node.reversed = true
+ forNode.reversed = true
+ }
+
+ if arguments.MatchOne(TokenIdentifier, "sorted") != nil {
+ forNode.sorted = true
}
if arguments.Remaining() > 0 {
@@ -131,7 +132,7 @@ func tagForParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Erro
if err != nil {
return nil, err
}
- for_node.bodyWrapper = wrapper
+ forNode.bodyWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
@@ -143,14 +144,14 @@ func tagForParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Erro
if err != nil {
return nil, err
}
- for_node.emptyWrapper = wrapper
+ forNode.emptyWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
}
}
- return for_node, nil
+ return forNode, nil
}
func init() {
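The for-tag grows a `sorted` flag alongside `reversed`, which is mainly useful for deterministic map iteration. A template-syntax sketch (hedged; `data` is an assumed map in the context). Note the parser matches `reversed` before `sorted`, so when both flags are used they are written in that order:

```django
{% for key, value in data sorted %}{{ key }}={{ value }} {% endfor %}
{% for key in data reversed sorted %}{{ key }} {% endfor %}
```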
diff --git a/vendor/github.com/flosch/pongo2/tags_if.go b/vendor/github.com/flosch/pongo2/tags_if.go
index 2515c44..3eeaf3b 100644
--- a/vendor/github.com/flosch/pongo2/tags_if.go
+++ b/vendor/github.com/flosch/pongo2/tags_if.go
@@ -1,15 +1,11 @@
package pongo2
-import (
- "bytes"
-)
-
type tagIfNode struct {
conditions []IEvaluator
wrappers []*NodeWrapper
}
-func (node *tagIfNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagIfNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
for i, condition := range node.conditions {
result, err := condition.Evaluate(ctx)
if err != nil {
@@ -17,26 +13,25 @@ func (node *tagIfNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Err
}
if result.IsTrue() {
- return node.wrappers[i].Execute(ctx, buffer)
- } else {
- // Last condition?
- if len(node.conditions) == i+1 && len(node.wrappers) > i+1 {
- return node.wrappers[i+1].Execute(ctx, buffer)
- }
+ return node.wrappers[i].Execute(ctx, writer)
+ }
+ // Last condition?
+ if len(node.conditions) == i+1 && len(node.wrappers) > i+1 {
+ return node.wrappers[i+1].Execute(ctx, writer)
}
}
return nil
}
func tagIfParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- if_node := &tagIfNode{}
+ ifNode := &tagIfNode{}
// Parse first and main IF condition
condition, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- if_node.conditions = append(if_node.conditions, condition)
+ ifNode.conditions = append(ifNode.conditions, condition)
if arguments.Remaining() > 0 {
return nil, arguments.Error("If-condition is malformed.", nil)
@@ -44,27 +39,27 @@ func tagIfParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error
// Check the rest
for {
- wrapper, tag_args, err := doc.WrapUntilTag("elif", "else", "endif")
+ wrapper, tagArgs, err := doc.WrapUntilTag("elif", "else", "endif")
if err != nil {
return nil, err
}
- if_node.wrappers = append(if_node.wrappers, wrapper)
+ ifNode.wrappers = append(ifNode.wrappers, wrapper)
if wrapper.Endtag == "elif" {
// elif can take a condition
- condition, err := tag_args.ParseExpression()
+ condition, err = tagArgs.ParseExpression()
if err != nil {
return nil, err
}
- if_node.conditions = append(if_node.conditions, condition)
+ ifNode.conditions = append(ifNode.conditions, condition)
- if tag_args.Remaining() > 0 {
- return nil, tag_args.Error("Elif-condition is malformed.", nil)
+ if tagArgs.Remaining() > 0 {
+ return nil, tagArgs.Error("Elif-condition is malformed.", nil)
}
} else {
- if tag_args.Count() > 0 {
+ if tagArgs.Count() > 0 {
// else/endif can't take any conditions
- return nil, tag_args.Error("Arguments not allowed here.", nil)
+ return nil, tagArgs.Error("Arguments not allowed here.", nil)
}
}
@@ -73,7 +68,7 @@ func tagIfParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error
}
}
- return if_node, nil
+ return ifNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_ifchanged.go b/vendor/github.com/flosch/pongo2/tags_ifchanged.go
index 4412ace..45296a0 100644
--- a/vendor/github.com/flosch/pongo2/tags_ifchanged.go
+++ b/vendor/github.com/flosch/pongo2/tags_ifchanged.go
@@ -5,16 +5,15 @@ import (
)
type tagIfchangedNode struct {
- watched_expr []IEvaluator
- last_values []*Value
- last_content []byte
- thenWrapper *NodeWrapper
- elseWrapper *NodeWrapper
+ watchedExpr []IEvaluator
+ lastValues []*Value
+ lastContent []byte
+ thenWrapper *NodeWrapper
+ elseWrapper *NodeWrapper
}
-func (node *tagIfchangedNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
-
- if len(node.watched_expr) == 0 {
+func (node *tagIfchangedNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ if len(node.watchedExpr) == 0 {
// Check against own rendered body
buf := bytes.NewBuffer(make([]byte, 0, 1024)) // 1 KiB
@@ -23,43 +22,43 @@ func (node *tagIfchangedNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffe
return err
}
- buf_bytes := buf.Bytes()
- if !bytes.Equal(node.last_content, buf_bytes) {
+ bufBytes := buf.Bytes()
+ if !bytes.Equal(node.lastContent, bufBytes) {
// Rendered content changed, output it
- buffer.Write(buf_bytes)
- node.last_content = buf_bytes
+ writer.Write(bufBytes)
+ node.lastContent = bufBytes
}
} else {
- now_values := make([]*Value, 0, len(node.watched_expr))
- for _, expr := range node.watched_expr {
+ nowValues := make([]*Value, 0, len(node.watchedExpr))
+ for _, expr := range node.watchedExpr {
val, err := expr.Evaluate(ctx)
if err != nil {
return err
}
- now_values = append(now_values, val)
+ nowValues = append(nowValues, val)
}
// Compare old to new values now
- changed := len(node.last_values) == 0
+ changed := len(node.lastValues) == 0
- for idx, old_val := range node.last_values {
- if !old_val.EqualValueTo(now_values[idx]) {
+ for idx, oldVal := range node.lastValues {
+ if !oldVal.EqualValueTo(nowValues[idx]) {
changed = true
break // we can stop here because ONE value changed
}
}
- node.last_values = now_values
+ node.lastValues = nowValues
if changed {
// Render thenWrapper
- err := node.thenWrapper.Execute(ctx, buffer)
+ err := node.thenWrapper.Execute(ctx, writer)
if err != nil {
return err
}
} else {
// Render elseWrapper
- err := node.elseWrapper.Execute(ctx, buffer)
+ err := node.elseWrapper.Execute(ctx, writer)
if err != nil {
return err
}
@@ -70,7 +69,7 @@ func (node *tagIfchangedNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffe
}
func tagIfchangedParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- ifchanged_node := &tagIfchangedNode{}
+ ifchangedNode := &tagIfchangedNode{}
for arguments.Remaining() > 0 {
// Parse condition
@@ -78,7 +77,7 @@ func tagIfchangedParser(doc *Parser, start *Token, arguments *Parser) (INodeTag,
if err != nil {
return nil, err
}
- ifchanged_node.watched_expr = append(ifchanged_node.watched_expr, expr)
+ ifchangedNode.watchedExpr = append(ifchangedNode.watchedExpr, expr)
}
if arguments.Remaining() > 0 {
@@ -90,7 +89,7 @@ func tagIfchangedParser(doc *Parser, start *Token, arguments *Parser) (INodeTag,
if err != nil {
return nil, err
}
- ifchanged_node.thenWrapper = wrapper
+ ifchangedNode.thenWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
@@ -102,14 +101,14 @@ func tagIfchangedParser(doc *Parser, start *Token, arguments *Parser) (INodeTag,
if err != nil {
return nil, err
}
- ifchanged_node.elseWrapper = wrapper
+ ifchangedNode.elseWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
}
}
- return ifchanged_node, nil
+ return ifchangedNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_ifequal.go b/vendor/github.com/flosch/pongo2/tags_ifequal.go
index 035b8fd..103f1c7 100644
--- a/vendor/github.com/flosch/pongo2/tags_ifequal.go
+++ b/vendor/github.com/flosch/pongo2/tags_ifequal.go
@@ -1,16 +1,12 @@
package pongo2
-import (
- "bytes"
-)
-
type tagIfEqualNode struct {
var1, var2 IEvaluator
thenWrapper *NodeWrapper
elseWrapper *NodeWrapper
}
-func (node *tagIfEqualNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagIfEqualNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
r1, err := node.var1.Evaluate(ctx)
if err != nil {
return err
@@ -23,17 +19,16 @@ func (node *tagIfEqualNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
result := r1.EqualValueTo(r2)
if result {
- return node.thenWrapper.Execute(ctx, buffer)
- } else {
- if node.elseWrapper != nil {
- return node.elseWrapper.Execute(ctx, buffer)
- }
+ return node.thenWrapper.Execute(ctx, writer)
+ }
+ if node.elseWrapper != nil {
+ return node.elseWrapper.Execute(ctx, writer)
}
return nil
}
func tagIfEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- ifequal_node := &tagIfEqualNode{}
+ ifequalNode := &tagIfEqualNode{}
// Parse two expressions
var1, err := arguments.ParseExpression()
@@ -44,8 +39,8 @@ func tagIfEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
if err != nil {
return nil, err
}
- ifequal_node.var1 = var1
- ifequal_node.var2 = var2
+ ifequalNode.var1 = var1
+ ifequalNode.var2 = var2
if arguments.Remaining() > 0 {
return nil, arguments.Error("ifequal only takes 2 arguments.", nil)
@@ -56,7 +51,7 @@ func tagIfEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
if err != nil {
return nil, err
}
- ifequal_node.thenWrapper = wrapper
+ ifequalNode.thenWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
@@ -68,14 +63,14 @@ func tagIfEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
if err != nil {
return nil, err
}
- ifequal_node.elseWrapper = wrapper
+ ifequalNode.elseWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
}
}
- return ifequal_node, nil
+ return ifequalNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_ifnotequal.go b/vendor/github.com/flosch/pongo2/tags_ifnotequal.go
index 1c1ba53..0d287d3 100644
--- a/vendor/github.com/flosch/pongo2/tags_ifnotequal.go
+++ b/vendor/github.com/flosch/pongo2/tags_ifnotequal.go
@@ -1,16 +1,12 @@
package pongo2
-import (
- "bytes"
-)
-
type tagIfNotEqualNode struct {
var1, var2 IEvaluator
thenWrapper *NodeWrapper
elseWrapper *NodeWrapper
}
-func (node *tagIfNotEqualNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagIfNotEqualNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
r1, err := node.var1.Evaluate(ctx)
if err != nil {
return err
@@ -23,17 +19,16 @@ func (node *tagIfNotEqualNode) Execute(ctx *ExecutionContext, buffer *bytes.Buff
result := !r1.EqualValueTo(r2)
if result {
- return node.thenWrapper.Execute(ctx, buffer)
- } else {
- if node.elseWrapper != nil {
- return node.elseWrapper.Execute(ctx, buffer)
- }
+ return node.thenWrapper.Execute(ctx, writer)
+ }
+ if node.elseWrapper != nil {
+ return node.elseWrapper.Execute(ctx, writer)
}
return nil
}
func tagIfNotEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- ifnotequal_node := &tagIfNotEqualNode{}
+ ifnotequalNode := &tagIfNotEqualNode{}
// Parse two expressions
var1, err := arguments.ParseExpression()
@@ -44,19 +39,19 @@ func tagIfNotEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag
if err != nil {
return nil, err
}
- ifnotequal_node.var1 = var1
- ifnotequal_node.var2 = var2
+ ifnotequalNode.var1 = var1
+ ifnotequalNode.var2 = var2
if arguments.Remaining() > 0 {
return nil, arguments.Error("ifequal only takes 2 arguments.", nil)
}
// Wrap then/else-blocks
- wrapper, endargs, err := doc.WrapUntilTag("else", "endifequal")
+ wrapper, endargs, err := doc.WrapUntilTag("else", "endifnotequal")
if err != nil {
return nil, err
}
- ifnotequal_node.thenWrapper = wrapper
+ ifnotequalNode.thenWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
@@ -64,18 +59,18 @@ func tagIfNotEqualParser(doc *Parser, start *Token, arguments *Parser) (INodeTag
if wrapper.Endtag == "else" {
// if there's an else in the if-statement, we need the else-Block as well
- wrapper, endargs, err = doc.WrapUntilTag("endifequal")
+ wrapper, endargs, err = doc.WrapUntilTag("endifnotequal")
if err != nil {
return nil, err
}
- ifnotequal_node.elseWrapper = wrapper
+ ifnotequalNode.elseWrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
}
}
- return ifnotequal_node, nil
+ return ifnotequalNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_import.go b/vendor/github.com/flosch/pongo2/tags_import.go
index 2abeccd..7e0d6a0 100644
--- a/vendor/github.com/flosch/pongo2/tags_import.go
+++ b/vendor/github.com/flosch/pongo2/tags_import.go
@@ -1,18 +1,16 @@
package pongo2
import (
- "bytes"
"fmt"
)
type tagImportNode struct {
position *Token
filename string
- template *Template
macros map[string]*tagMacroNode // alias/name -> macro instance
}
-func (node *tagImportNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagImportNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
for name, macro := range node.macros {
func(name string, macro *tagMacroNode) {
ctx.Private[name] = func(args ...*Value) *Value {
@@ -24,50 +22,50 @@ func (node *tagImportNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
}
func tagImportParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- import_node := &tagImportNode{
+ importNode := &tagImportNode{
position: start,
macros: make(map[string]*tagMacroNode),
}
- filename_token := arguments.MatchType(TokenString)
- if filename_token == nil {
+ filenameToken := arguments.MatchType(TokenString)
+ if filenameToken == nil {
return nil, arguments.Error("Import-tag needs a filename as string.", nil)
}
- import_node.filename = doc.template.set.resolveFilename(doc.template, filename_token.Val)
+ importNode.filename = doc.template.set.resolveFilename(doc.template, filenameToken.Val)
if arguments.Remaining() == 0 {
return nil, arguments.Error("You must at least specify one macro to import.", nil)
}
// Compile the given template
- tpl, err := doc.template.set.FromFile(import_node.filename)
+ tpl, err := doc.template.set.FromFile(importNode.filename)
if err != nil {
return nil, err.(*Error).updateFromTokenIfNeeded(doc.template, start)
}
for arguments.Remaining() > 0 {
- macro_name_token := arguments.MatchType(TokenIdentifier)
- if macro_name_token == nil {
+ macroNameToken := arguments.MatchType(TokenIdentifier)
+ if macroNameToken == nil {
return nil, arguments.Error("Expected macro name (identifier).", nil)
}
- as_name := macro_name_token.Val
+ asName := macroNameToken.Val
if arguments.Match(TokenKeyword, "as") != nil {
- alias_token := arguments.MatchType(TokenIdentifier)
- if alias_token == nil {
+ aliasToken := arguments.MatchType(TokenIdentifier)
+ if aliasToken == nil {
return nil, arguments.Error("Expected macro alias name (identifier).", nil)
}
- as_name = alias_token.Val
+ asName = aliasToken.Val
}
- macro_instance, has := tpl.exported_macros[macro_name_token.Val]
+ macroInstance, has := tpl.exportedMacros[macroNameToken.Val]
if !has {
- return nil, arguments.Error(fmt.Sprintf("Macro '%s' not found (or not exported) in '%s'.", macro_name_token.Val,
- import_node.filename), macro_name_token)
+ return nil, arguments.Error(fmt.Sprintf("Macro '%s' not found (or not exported) in '%s'.", macroNameToken.Val,
+ importNode.filename), macroNameToken)
}
- import_node.macros[as_name] = macro_instance
+ importNode.macros[asName] = macroInstance
if arguments.Remaining() == 0 {
break
@@ -78,7 +76,7 @@ func tagImportParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *E
}
}
- return import_node, nil
+ return importNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_include.go b/vendor/github.com/flosch/pongo2/tags_include.go
index 7a7cce2..6d619fd 100644
--- a/vendor/github.com/flosch/pongo2/tags_include.go
+++ b/vendor/github.com/flosch/pongo2/tags_include.go
@@ -1,41 +1,38 @@
package pongo2
-import (
- "bytes"
-)
-
type tagIncludeNode struct {
- tpl *Template
- filename_evaluator IEvaluator
- lazy bool
- only bool
- filename string
- with_pairs map[string]IEvaluator
+ tpl *Template
+ filenameEvaluator IEvaluator
+ lazy bool
+ only bool
+ filename string
+ withPairs map[string]IEvaluator
+ ifExists bool
}
-func (node *tagIncludeNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagIncludeNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
// Building the context for the template
- include_ctx := make(Context)
+ includeCtx := make(Context)
// Fill the context with all data from the parent
if !node.only {
- include_ctx.Update(ctx.Public)
- include_ctx.Update(ctx.Private)
+ includeCtx.Update(ctx.Public)
+ includeCtx.Update(ctx.Private)
}
// Put all custom with-pairs into the context
- for key, value := range node.with_pairs {
+ for key, value := range node.withPairs {
val, err := value.Evaluate(ctx)
if err != nil {
return err
}
- include_ctx[key] = val
+ includeCtx[key] = val
}
// Execute the template
if node.lazy {
// Evaluate the filename
- filename, err := node.filename_evaluator.Evaluate(ctx)
+ filename, err := node.filenameEvaluator.Evaluate(ctx)
if err != nil {
return err
}
@@ -45,76 +42,93 @@ func (node *tagIncludeNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer)
}
// Get include-filename
- included_filename := ctx.template.set.resolveFilename(ctx.template, filename.String())
+ includedFilename := ctx.template.set.resolveFilename(ctx.template, filename.String())
- included_tpl, err2 := ctx.template.set.FromFile(included_filename)
+ includedTpl, err2 := ctx.template.set.FromFile(includedFilename)
if err2 != nil {
+ // if this is a ReadFile error and the "if_exists" flag is enabled
+ if node.ifExists && err2.(*Error).Sender == "fromfile" {
+ return nil
+ }
return err2.(*Error)
}
- err2 = included_tpl.ExecuteWriter(include_ctx, buffer)
+ err2 = includedTpl.ExecuteWriter(includeCtx, writer)
if err2 != nil {
return err2.(*Error)
}
return nil
- } else {
- // Template is already parsed with static filename
- err := node.tpl.ExecuteWriter(include_ctx, buffer)
- if err != nil {
- return err.(*Error)
- }
- return nil
}
+ // Template is already parsed with static filename
+ err := node.tpl.ExecuteWriter(includeCtx, writer)
+ if err != nil {
+ return err.(*Error)
+ }
+ return nil
+}
+
+type tagIncludeEmptyNode struct{}
+
+func (node *tagIncludeEmptyNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ return nil
}
func tagIncludeParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- include_node := &tagIncludeNode{
- with_pairs: make(map[string]IEvaluator),
+ includeNode := &tagIncludeNode{
+ withPairs: make(map[string]IEvaluator),
}
- if filename_token := arguments.MatchType(TokenString); filename_token != nil {
+ if filenameToken := arguments.MatchType(TokenString); filenameToken != nil {
// prepared, static template
+ // "if_exists" flag
+ ifExists := arguments.Match(TokenIdentifier, "if_exists") != nil
+
// Get include-filename
- included_filename := doc.template.set.resolveFilename(doc.template, filename_token.Val)
+ includedFilename := doc.template.set.resolveFilename(doc.template, filenameToken.Val)
// Parse the parent
- include_node.filename = included_filename
- included_tpl, err := doc.template.set.FromFile(included_filename)
+ includeNode.filename = includedFilename
+ includedTpl, err := doc.template.set.FromFile(includedFilename)
if err != nil {
- return nil, err.(*Error).updateFromTokenIfNeeded(doc.template, filename_token)
+ // if this is a ReadFile error and the "if_exists" token is present, create an empty node
+ if err.(*Error).Sender == "fromfile" && ifExists {
+ return &tagIncludeEmptyNode{}, nil
+ }
+ return nil, err.(*Error).updateFromTokenIfNeeded(doc.template, filenameToken)
}
- include_node.tpl = included_tpl
+ includeNode.tpl = includedTpl
} else {
// No String, then the user wants to use lazy-evaluation (slower, but possible)
- filename_evaluator, err := arguments.ParseExpression()
+ filenameEvaluator, err := arguments.ParseExpression()
if err != nil {
- return nil, err.updateFromTokenIfNeeded(doc.template, filename_token)
+ return nil, err.updateFromTokenIfNeeded(doc.template, filenameToken)
}
- include_node.filename_evaluator = filename_evaluator
- include_node.lazy = true
+ includeNode.filenameEvaluator = filenameEvaluator
+ includeNode.lazy = true
+ includeNode.ifExists = arguments.Match(TokenIdentifier, "if_exists") != nil // "if_exists" flag
}
// After having parsed the filename we're gonna parse the with+only options
if arguments.Match(TokenIdentifier, "with") != nil {
for arguments.Remaining() > 0 {
// We have at least one key=expr pair (because of starting "with")
- key_token := arguments.MatchType(TokenIdentifier)
- if key_token == nil {
+ keyToken := arguments.MatchType(TokenIdentifier)
+ if keyToken == nil {
return nil, arguments.Error("Expected an identifier", nil)
}
if arguments.Match(TokenSymbol, "=") == nil {
return nil, arguments.Error("Expected '='.", nil)
}
- value_expr, err := arguments.ParseExpression()
+ valueExpr, err := arguments.ParseExpression()
if err != nil {
- return nil, err.updateFromTokenIfNeeded(doc.template, key_token)
+ return nil, err.updateFromTokenIfNeeded(doc.template, keyToken)
}
- include_node.with_pairs[key_token.Val] = value_expr
+ includeNode.withPairs[keyToken.Val] = valueExpr
// Only?
if arguments.Match(TokenIdentifier, "only") != nil {
- include_node.only = true
+ includeNode.only = true
break // stop parsing arguments because it's the last option
}
}
@@ -124,7 +138,7 @@ func tagIncludeParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *
return nil, arguments.Error("Malformed 'include'-tag arguments.", nil)
}
- return include_node, nil
+ return includeNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_lorem.go b/vendor/github.com/flosch/pongo2/tags_lorem.go
index 16b018c..1d353f2 100644
--- a/vendor/github.com/flosch/pongo2/tags_lorem.go
+++ b/vendor/github.com/flosch/pongo2/tags_lorem.go
@@ -1,10 +1,11 @@
package pongo2
import (
- "bytes"
"math/rand"
"strings"
"time"
+
+ "github.com/juju/errors"
)
var (
@@ -19,102 +20,102 @@ type tagLoremNode struct {
random bool // does not use the default paragraph "Lorem ipsum dolor sit amet, ..."
}
-func (node *tagLoremNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagLoremNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
switch node.method {
case "b":
if node.random {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString("\n")
+ writer.WriteString("\n")
}
par := tagLoremParagraphs[rand.Intn(len(tagLoremParagraphs))]
- buffer.WriteString(par)
+ writer.WriteString(par)
}
} else {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString("\n")
+ writer.WriteString("\n")
}
par := tagLoremParagraphs[i%len(tagLoremParagraphs)]
- buffer.WriteString(par)
+ writer.WriteString(par)
}
}
case "w":
if node.random {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString(" ")
+ writer.WriteString(" ")
}
word := tagLoremWords[rand.Intn(len(tagLoremWords))]
- buffer.WriteString(word)
+ writer.WriteString(word)
}
} else {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString(" ")
+ writer.WriteString(" ")
}
word := tagLoremWords[i%len(tagLoremWords)]
- buffer.WriteString(word)
+ writer.WriteString(word)
}
}
case "p":
if node.random {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString("\n")
+ writer.WriteString("\n")
}
- buffer.WriteString("<p>")
+ writer.WriteString("<p>")
par := tagLoremParagraphs[rand.Intn(len(tagLoremParagraphs))]
- buffer.WriteString(par)
- buffer.WriteString("</p>")
+ writer.WriteString(par)
+ writer.WriteString("</p>")
}
} else {
for i := 0; i < node.count; i++ {
if i > 0 {
- buffer.WriteString("\n")
+ writer.WriteString("\n")
}
- buffer.WriteString("<p>")
+ writer.WriteString("<p>")
par := tagLoremParagraphs[i%len(tagLoremParagraphs)]
- buffer.WriteString(par)
- buffer.WriteString("</p>")
+ writer.WriteString(par)
+ writer.WriteString("</p>")
}
}
default:
- panic("unsupported method")
+ return ctx.OrigError(errors.Errorf("unsupported method: %s", node.method), nil)
}
return nil
}
func tagLoremParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- lorem_node := &tagLoremNode{
+ loremNode := &tagLoremNode{
position: start,
count: 1,
method: "b",
}
- if count_token := arguments.MatchType(TokenNumber); count_token != nil {
- lorem_node.count = AsValue(count_token.Val).Integer()
+ if countToken := arguments.MatchType(TokenNumber); countToken != nil {
+ loremNode.count = AsValue(countToken.Val).Integer()
}
- if method_token := arguments.MatchType(TokenIdentifier); method_token != nil {
- if method_token.Val != "w" && method_token.Val != "p" && method_token.Val != "b" {
+ if methodToken := arguments.MatchType(TokenIdentifier); methodToken != nil {
+ if methodToken.Val != "w" && methodToken.Val != "p" && methodToken.Val != "b" {
return nil, arguments.Error("lorem-method must be either 'w', 'p' or 'b'.", nil)
}
- lorem_node.method = method_token.Val
+ loremNode.method = methodToken.Val
}
if arguments.MatchOne(TokenIdentifier, "random") != nil {
- lorem_node.random = true
+ loremNode.random = true
}
if arguments.Remaining() > 0 {
return nil, arguments.Error("Malformed lorem-tag arguments.", nil)
}
- return lorem_node, nil
+ return loremNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_macro.go b/vendor/github.com/flosch/pongo2/tags_macro.go
index 41cba99..dd3e0bf 100644
--- a/vendor/github.com/flosch/pongo2/tags_macro.go
+++ b/vendor/github.com/flosch/pongo2/tags_macro.go
@@ -6,16 +6,16 @@ import (
)
type tagMacroNode struct {
- position *Token
- name string
- args_order []string
- args map[string]IEvaluator
- exported bool
+ position *Token
+ name string
+ argsOrder []string
+ args map[string]IEvaluator
+ exported bool
wrapper *NodeWrapper
}
-func (node *tagMacroNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagMacroNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
ctx.Private[node.name] = func(args ...*Value) *Value {
return node.call(ctx, args...)
}
@@ -24,28 +24,28 @@ func (node *tagMacroNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *
}
func (node *tagMacroNode) call(ctx *ExecutionContext, args ...*Value) *Value {
- args_ctx := make(Context)
+ argsCtx := make(Context)
for k, v := range node.args {
if v == nil {
// User did not provide a default value
- args_ctx[k] = nil
+ argsCtx[k] = nil
} else {
// Evaluate the default value
- value_expr, err := v.Evaluate(ctx)
+ valueExpr, err := v.Evaluate(ctx)
if err != nil {
ctx.Logf(err.Error())
return AsSafeValue(err.Error())
}
- args_ctx[k] = value_expr
+ argsCtx[k] = valueExpr
}
}
- if len(args) > len(node.args_order) {
+ if len(args) > len(node.argsOrder) {
// Too many arguments, we're ignoring them and just logging into debug mode.
err := ctx.Error(fmt.Sprintf("Macro '%s' called with too many arguments (%d instead of %d).",
- node.name, len(args), len(node.args_order)), nil).updateFromTokenIfNeeded(ctx.template, node.position)
+ node.name, len(args), len(node.argsOrder)), nil).updateFromTokenIfNeeded(ctx.template, node.position)
ctx.Logf(err.Error()) // TODO: This is a workaround, because the error is not returned yet to the Execution()-methods
return AsSafeValue(err.Error())
@@ -55,10 +55,10 @@ func (node *tagMacroNode) call(ctx *ExecutionContext, args ...*Value) *Value {
macroCtx := NewChildExecutionContext(ctx)
// Register all arguments in the private context
- macroCtx.Private.Update(args_ctx)
+ macroCtx.Private.Update(argsCtx)
- for idx, arg_value := range args {
- macroCtx.Private[node.args_order[idx]] = arg_value.Interface()
+ for idx, argValue := range args {
+ macroCtx.Private[node.argsOrder[idx]] = argValue.Interface()
}
var b bytes.Buffer
@@ -71,38 +71,38 @@ func (node *tagMacroNode) call(ctx *ExecutionContext, args ...*Value) *Value {
}
func tagMacroParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- macro_node := &tagMacroNode{
+ macroNode := &tagMacroNode{
position: start,
args: make(map[string]IEvaluator),
}
- name_token := arguments.MatchType(TokenIdentifier)
- if name_token == nil {
+ nameToken := arguments.MatchType(TokenIdentifier)
+ if nameToken == nil {
return nil, arguments.Error("Macro-tag needs at least an identifier as name.", nil)
}
- macro_node.name = name_token.Val
+ macroNode.name = nameToken.Val
if arguments.MatchOne(TokenSymbol, "(") == nil {
return nil, arguments.Error("Expected '('.", nil)
}
for arguments.Match(TokenSymbol, ")") == nil {
- arg_name_token := arguments.MatchType(TokenIdentifier)
- if arg_name_token == nil {
+ argNameToken := arguments.MatchType(TokenIdentifier)
+ if argNameToken == nil {
return nil, arguments.Error("Expected argument name as identifier.", nil)
}
- macro_node.args_order = append(macro_node.args_order, arg_name_token.Val)
+ macroNode.argsOrder = append(macroNode.argsOrder, argNameToken.Val)
if arguments.Match(TokenSymbol, "=") != nil {
// Default expression follows
- arg_default_expr, err := arguments.ParseExpression()
+ argDefaultExpr, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- macro_node.args[arg_name_token.Val] = arg_default_expr
+ macroNode.args[argNameToken.Val] = argDefaultExpr
} else {
// No default expression
- macro_node.args[arg_name_token.Val] = nil
+ macroNode.args[argNameToken.Val] = nil
}
if arguments.Match(TokenSymbol, ")") != nil {
@@ -114,7 +114,7 @@ func tagMacroParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
}
if arguments.Match(TokenKeyword, "export") != nil {
- macro_node.exported = true
+ macroNode.exported = true
}
if arguments.Remaining() > 0 {
@@ -126,22 +126,22 @@ func tagMacroParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Er
if err != nil {
return nil, err
}
- macro_node.wrapper = wrapper
+ macroNode.wrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
}
- if macro_node.exported {
+ if macroNode.exported {
// Now register the macro if it wants to be exported
- _, has := doc.template.exported_macros[macro_node.name]
+ _, has := doc.template.exportedMacros[macroNode.name]
if has {
- return nil, doc.Error(fmt.Sprintf("Another macro with name '%s' already exported.", macro_node.name), start)
+ return nil, doc.Error(fmt.Sprintf("another macro with name '%s' already exported", macroNode.name), start)
}
- doc.template.exported_macros[macro_node.name] = macro_node
+ doc.template.exportedMacros[macroNode.name] = macroNode
}
- return macro_node, nil
+ return macroNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_now.go b/vendor/github.com/flosch/pongo2/tags_now.go
index 0f4320f..d9fa4a3 100644
--- a/vendor/github.com/flosch/pongo2/tags_now.go
+++ b/vendor/github.com/flosch/pongo2/tags_now.go
@@ -1,7 +1,6 @@
package pongo2
import (
- "bytes"
"time"
)
@@ -11,7 +10,7 @@ type tagNowNode struct {
fake bool
}
-func (node *tagNowNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagNowNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
var t time.Time
if node.fake {
t = time.Date(2014, time.February, 05, 18, 31, 45, 00, time.UTC)
@@ -19,31 +18,31 @@ func (node *tagNowNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Er
t = time.Now()
}
- buffer.WriteString(t.Format(node.format))
+ writer.WriteString(t.Format(node.format))
return nil
}
func tagNowParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- now_node := &tagNowNode{
+ nowNode := &tagNowNode{
position: start,
}
- format_token := arguments.MatchType(TokenString)
- if format_token == nil {
+ formatToken := arguments.MatchType(TokenString)
+ if formatToken == nil {
return nil, arguments.Error("Expected a format string.", nil)
}
- now_node.format = format_token.Val
+ nowNode.format = formatToken.Val
if arguments.MatchOne(TokenIdentifier, "fake") != nil {
- now_node.fake = true
+ nowNode.fake = true
}
if arguments.Remaining() > 0 {
return nil, arguments.Error("Malformed now-tag arguments.", nil)
}
- return now_node, nil
+ return nowNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_set.go b/vendor/github.com/flosch/pongo2/tags_set.go
index 2729f44..be121c1 100644
--- a/vendor/github.com/flosch/pongo2/tags_set.go
+++ b/vendor/github.com/flosch/pongo2/tags_set.go
@@ -1,13 +1,11 @@
package pongo2
-import "bytes"
-
type tagSetNode struct {
name string
expression IEvaluator
}
-func (node *tagSetNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagSetNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
// Evaluate expression
value, err := node.expression.Evaluate(ctx)
if err != nil {
diff --git a/vendor/github.com/flosch/pongo2/tags_spaceless.go b/vendor/github.com/flosch/pongo2/tags_spaceless.go
index a4b3003..4fa851b 100644
--- a/vendor/github.com/flosch/pongo2/tags_spaceless.go
+++ b/vendor/github.com/flosch/pongo2/tags_spaceless.go
@@ -11,7 +11,7 @@ type tagSpacelessNode struct {
var tagSpacelessRegexp = regexp.MustCompile(`(?U:(<.*>))([\t\n\v\f\r ]+)(?U:(<.*>))`)
-func (node *tagSpacelessNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagSpacelessNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
b := bytes.NewBuffer(make([]byte, 0, 1024)) // 1 KiB
err := node.wrapper.Execute(ctx, b)
@@ -28,25 +28,25 @@ func (node *tagSpacelessNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffe
s = s2
}
- buffer.WriteString(s)
+ writer.WriteString(s)
return nil
}
func tagSpacelessParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- spaceless_node := &tagSpacelessNode{}
+ spacelessNode := &tagSpacelessNode{}
wrapper, _, err := doc.WrapUntilTag("endspaceless")
if err != nil {
return nil, err
}
- spaceless_node.wrapper = wrapper
+ spacelessNode.wrapper = wrapper
if arguments.Remaining() > 0 {
return nil, arguments.Error("Malformed spaceless-tag arguments.", nil)
}
- return spaceless_node, nil
+ return spacelessNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_ssi.go b/vendor/github.com/flosch/pongo2/tags_ssi.go
index 3c3894f..c33858d 100644
--- a/vendor/github.com/flosch/pongo2/tags_ssi.go
+++ b/vendor/github.com/flosch/pongo2/tags_ssi.go
@@ -1,7 +1,6 @@
package pongo2
import (
- "bytes"
"io/ioutil"
)
@@ -11,47 +10,47 @@ type tagSSINode struct {
template *Template
}
-func (node *tagSSINode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagSSINode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
if node.template != nil {
// Execute the template within the current context
includeCtx := make(Context)
includeCtx.Update(ctx.Public)
includeCtx.Update(ctx.Private)
- err := node.template.ExecuteWriter(includeCtx, buffer)
+ err := node.template.execute(includeCtx, writer)
if err != nil {
return err.(*Error)
}
} else {
// Just print out the content
- buffer.WriteString(node.content)
+ writer.WriteString(node.content)
}
return nil
}
func tagSSIParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- ssi_node := &tagSSINode{}
+ SSINode := &tagSSINode{}
- if file_token := arguments.MatchType(TokenString); file_token != nil {
- ssi_node.filename = file_token.Val
+ if fileToken := arguments.MatchType(TokenString); fileToken != nil {
+ SSINode.filename = fileToken.Val
if arguments.Match(TokenIdentifier, "parsed") != nil {
// parsed
- temporary_tpl, err := doc.template.set.FromFile(doc.template.set.resolveFilename(doc.template, file_token.Val))
+ temporaryTpl, err := doc.template.set.FromFile(doc.template.set.resolveFilename(doc.template, fileToken.Val))
if err != nil {
- return nil, err.(*Error).updateFromTokenIfNeeded(doc.template, file_token)
+ return nil, err.(*Error).updateFromTokenIfNeeded(doc.template, fileToken)
}
- ssi_node.template = temporary_tpl
+ SSINode.template = temporaryTpl
} else {
// plaintext
- buf, err := ioutil.ReadFile(doc.template.set.resolveFilename(doc.template, file_token.Val))
+ buf, err := ioutil.ReadFile(doc.template.set.resolveFilename(doc.template, fileToken.Val))
if err != nil {
return nil, (&Error{
- Sender: "tag:ssi",
- ErrorMsg: err.Error(),
- }).updateFromTokenIfNeeded(doc.template, file_token)
+ Sender: "tag:ssi",
+ OrigError: err,
+ }).updateFromTokenIfNeeded(doc.template, fileToken)
}
- ssi_node.content = string(buf)
+ SSINode.content = string(buf)
}
} else {
return nil, arguments.Error("First argument must be a string.", nil)
@@ -61,7 +60,7 @@ func tagSSIParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Erro
return nil, arguments.Error("Malformed SSI-tag argument.", nil)
}
- return ssi_node, nil
+ return SSINode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_templatetag.go b/vendor/github.com/flosch/pongo2/tags_templatetag.go
index ffd3d9d..164b4dc 100644
--- a/vendor/github.com/flosch/pongo2/tags_templatetag.go
+++ b/vendor/github.com/flosch/pongo2/tags_templatetag.go
@@ -1,9 +1,5 @@
package pongo2
-import (
- "bytes"
-)
-
type tagTemplateTagNode struct {
content string
}
@@ -19,20 +15,20 @@ var templateTagMapping = map[string]string{
"closecomment": "#}",
}
-func (node *tagTemplateTagNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- buffer.WriteString(node.content)
+func (node *tagTemplateTagNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ writer.WriteString(node.content)
return nil
}
func tagTemplateTagParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- tt_node := &tagTemplateTagNode{}
+ ttNode := &tagTemplateTagNode{}
- if arg_token := arguments.MatchType(TokenIdentifier); arg_token != nil {
- output, found := templateTagMapping[arg_token.Val]
+ if argToken := arguments.MatchType(TokenIdentifier); argToken != nil {
+ output, found := templateTagMapping[argToken.Val]
if !found {
- return nil, arguments.Error("Argument not found", arg_token)
+ return nil, arguments.Error("Argument not found", argToken)
}
- tt_node.content = output
+ ttNode.content = output
} else {
return nil, arguments.Error("Identifier expected.", nil)
}
@@ -41,7 +37,7 @@ func tagTemplateTagParser(doc *Parser, start *Token, arguments *Parser) (INodeTa
return nil, arguments.Error("Malformed templatetag-tag argument.", nil)
}
- return tt_node, nil
+ return ttNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/tags_widthratio.go b/vendor/github.com/flosch/pongo2/tags_widthratio.go
index d7d7141..70c9c3e 100644
--- a/vendor/github.com/flosch/pongo2/tags_widthratio.go
+++ b/vendor/github.com/flosch/pongo2/tags_widthratio.go
@@ -1,7 +1,6 @@
package pongo2
import (
- "bytes"
"fmt"
"math"
)
@@ -10,10 +9,10 @@ type tagWidthratioNode struct {
position *Token
current, max IEvaluator
width IEvaluator
- ctx_name string
+ ctxName string
}
-func (node *tagWidthratioNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagWidthratioNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
current, err := node.current.Evaluate(ctx)
if err != nil {
return err
@@ -31,17 +30,17 @@ func (node *tagWidthratioNode) Execute(ctx *ExecutionContext, buffer *bytes.Buff
value := int(math.Ceil(current.Float()/max.Float()*width.Float() + 0.5))
- if node.ctx_name == "" {
- buffer.WriteString(fmt.Sprintf("%d", value))
+ if node.ctxName == "" {
+ writer.WriteString(fmt.Sprintf("%d", value))
} else {
- ctx.Private[node.ctx_name] = value
+ ctx.Private[node.ctxName] = value
}
return nil
}
func tagWidthratioParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- widthratio_node := &tagWidthratioNode{
+ widthratioNode := &tagWidthratioNode{
position: start,
}
@@ -49,34 +48,34 @@ func tagWidthratioParser(doc *Parser, start *Token, arguments *Parser) (INodeTag
if err != nil {
return nil, err
}
- widthratio_node.current = current
+ widthratioNode.current = current
max, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- widthratio_node.max = max
+ widthratioNode.max = max
width, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- widthratio_node.width = width
+ widthratioNode.width = width
if arguments.MatchOne(TokenKeyword, "as") != nil {
// Name follows
- name_token := arguments.MatchType(TokenIdentifier)
- if name_token == nil {
+ nameToken := arguments.MatchType(TokenIdentifier)
+ if nameToken == nil {
return nil, arguments.Error("Expected name (identifier).", nil)
}
- widthratio_node.ctx_name = name_token.Val
+ widthratioNode.ctxName = nameToken.Val
}
if arguments.Remaining() > 0 {
return nil, arguments.Error("Malformed widthratio-tag arguments.", nil)
}
- return widthratio_node, nil
+ return widthratioNode, nil
}
func init() {
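The value computed in `tagWidthratioNode.Execute` is the ratio `current/max` scaled to `width`; note that upstream uses `Ceil(x + 0.5)`, which always rounds upward rather than to the nearest integer. Pulled out as a standalone function for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// widthratio reproduces the tag's arithmetic exactly:
// scale current/max to width, then round upward via Ceil(x + 0.5).
func widthratio(current, max, width float64) int {
	return int(math.Ceil(current/max*width + 0.5))
}

func main() {
	fmt.Println(widthratio(175, 200, 100)) // 175/200*100 = 87.5 → prints "88"
}
```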
diff --git a/vendor/github.com/flosch/pongo2/tags_with.go b/vendor/github.com/flosch/pongo2/tags_with.go
index 5bf4af0..32b3c1c 100644
--- a/vendor/github.com/flosch/pongo2/tags_with.go
+++ b/vendor/github.com/flosch/pongo2/tags_with.go
@@ -1,20 +1,16 @@
package pongo2
-import (
- "bytes"
-)
-
type tagWithNode struct {
- with_pairs map[string]IEvaluator
- wrapper *NodeWrapper
+ withPairs map[string]IEvaluator
+ wrapper *NodeWrapper
}
-func (node *tagWithNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (node *tagWithNode) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
//new context for block
withctx := NewChildExecutionContext(ctx)
// Put all custom with-pairs into the context
- for key, value := range node.with_pairs {
+ for key, value := range node.withPairs {
val, err := value.Evaluate(ctx)
if err != nil {
return err
@@ -22,12 +18,12 @@ func (node *tagWithNode) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *E
withctx.Private[key] = val
}
- return node.wrapper.Execute(withctx, buffer)
+ return node.wrapper.Execute(withctx, writer)
}
func tagWithParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Error) {
- with_node := &tagWithNode{
- with_pairs: make(map[string]IEvaluator),
+ withNode := &tagWithNode{
+ withPairs: make(map[string]IEvaluator),
}
if arguments.Count() == 0 {
@@ -38,7 +34,7 @@ func tagWithParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Err
if err != nil {
return nil, err
}
- with_node.wrapper = wrapper
+ withNode.wrapper = wrapper
if endargs.Count() > 0 {
return nil, endargs.Error("Arguments not allowed here.", nil)
@@ -46,45 +42,45 @@ func tagWithParser(doc *Parser, start *Token, arguments *Parser) (INodeTag, *Err
// Scan through all arguments to see which style the user uses (old or new style).
// If we find any "as" keyword we will enforce old style; otherwise we will use new style.
- old_style := false // by default we're using the new_style
+ oldStyle := false // by default we're using the new_style
for i := 0; i < arguments.Count(); i++ {
if arguments.PeekN(i, TokenKeyword, "as") != nil {
- old_style = true
+ oldStyle = true
break
}
}
for arguments.Remaining() > 0 {
- if old_style {
- value_expr, err := arguments.ParseExpression()
+ if oldStyle {
+ valueExpr, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
if arguments.Match(TokenKeyword, "as") == nil {
return nil, arguments.Error("Expected 'as' keyword.", nil)
}
- key_token := arguments.MatchType(TokenIdentifier)
- if key_token == nil {
+ keyToken := arguments.MatchType(TokenIdentifier)
+ if keyToken == nil {
return nil, arguments.Error("Expected an identifier", nil)
}
- with_node.with_pairs[key_token.Val] = value_expr
+ withNode.withPairs[keyToken.Val] = valueExpr
} else {
- key_token := arguments.MatchType(TokenIdentifier)
- if key_token == nil {
+ keyToken := arguments.MatchType(TokenIdentifier)
+ if keyToken == nil {
return nil, arguments.Error("Expected an identifier", nil)
}
if arguments.Match(TokenSymbol, "=") == nil {
return nil, arguments.Error("Expected '='.", nil)
}
- value_expr, err := arguments.ParseExpression()
+ valueExpr, err := arguments.ParseExpression()
if err != nil {
return nil, err
}
- with_node.with_pairs[key_token.Val] = value_expr
+ withNode.withPairs[keyToken.Val] = valueExpr
}
}
- return with_node, nil
+ return withNode, nil
}
func init() {
diff --git a/vendor/github.com/flosch/pongo2/template.go b/vendor/github.com/flosch/pongo2/template.go
index c7fe98b..869adce 100644
--- a/vendor/github.com/flosch/pongo2/template.go
+++ b/vendor/github.com/flosch/pongo2/template.go
@@ -2,52 +2,72 @@ package pongo2
import (
"bytes"
- "fmt"
"io"
+
+ "github.com/juju/errors"
)
+type TemplateWriter interface {
+ io.Writer
+ WriteString(string) (int, error)
+}
+
+type templateWriter struct {
+ w io.Writer
+}
+
+func (tw *templateWriter) WriteString(s string) (int, error) {
+ return tw.w.Write([]byte(s))
+}
+
+func (tw *templateWriter) Write(b []byte) (int, error) {
+ return tw.w.Write(b)
+}
+
type Template struct {
set *TemplateSet
// Input
- is_tpl_string bool
- name string
- tpl string
- size int
+ isTplString bool
+ name string
+ tpl string
+ size int
// Calculation
tokens []*Token
parser *Parser
// first come, first serve (it's important to not override existing entries in here)
- level int
- parent *Template
- child *Template
- blocks map[string]*NodeWrapper
- exported_macros map[string]*tagMacroNode
+ level int
+ parent *Template
+ child *Template
+ blocks map[string]*NodeWrapper
+ exportedMacros map[string]*tagMacroNode
// Output
root *nodeDocument
}
-func newTemplateString(set *TemplateSet, tpl string) (*Template, error) {
+func newTemplateString(set *TemplateSet, tpl []byte) (*Template, error) {
return newTemplate(set, "", true, tpl)
}
-func newTemplate(set *TemplateSet, name string, is_tpl_string bool, tpl string) (*Template, error) {
+func newTemplate(set *TemplateSet, name string, isTplString bool, tpl []byte) (*Template, error) {
+ strTpl := string(tpl)
+
// Create the template
t := &Template{
- set: set,
- is_tpl_string: is_tpl_string,
- name: name,
- tpl: tpl,
- size: len(tpl),
- blocks: make(map[string]*NodeWrapper),
- exported_macros: make(map[string]*tagMacroNode),
+ set: set,
+ isTplString: isTplString,
+ name: name,
+ tpl: strTpl,
+ size: len(strTpl),
+ blocks: make(map[string]*NodeWrapper),
+ exportedMacros: make(map[string]*tagMacroNode),
}
// Tokenize it
- tokens, err := lex(name, tpl)
+ tokens, err := lex(name, strTpl)
if err != nil {
return nil, err
}
@@ -67,11 +87,7 @@ func newTemplate(set *TemplateSet, name string, is_tpl_string bool, tpl string)
return t, nil
}
-func (tpl *Template) execute(context Context) (*bytes.Buffer, error) {
- // Create output buffer
- // We assume that the rendered template will be 30% larger
- buffer := bytes.NewBuffer(make([]byte, 0, int(float64(tpl.size)*1.3)))
-
+func (tpl *Template) execute(context Context, writer TemplateWriter) error {
// Determine the parent to be executed (for template inheritance)
parent := tpl
for parent.parent != nil {
@@ -89,17 +105,17 @@ func (tpl *Template) execute(context Context) (*bytes.Buffer, error) {
// Check for context name syntax
err := newContext.checkForValidIdentifiers()
if err != nil {
- return nil, err
+ return err
}
// Check for clashes with macro names
- for k, _ := range newContext {
- _, has := tpl.exported_macros[k]
+ for k := range newContext {
+ _, has := tpl.exportedMacros[k]
if has {
- return nil, &Error{
- Filename: tpl.name,
- Sender: "execution",
- ErrorMsg: fmt.Sprintf("Context key name '%s' clashes with macro '%s'.", k, k),
+ return &Error{
+ Filename: tpl.name,
+ Sender: "execution",
+ OrigError: errors.Errorf("context key name '%s' clashes with macro '%s'", k, k),
}
}
}
@@ -110,8 +126,22 @@ func (tpl *Template) execute(context Context) (*bytes.Buffer, error) {
ctx := newExecutionContext(parent, newContext)
// Run the selected document
- err := parent.root.Execute(ctx, buffer)
- if err != nil {
+ if err := parent.root.Execute(ctx, writer); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (tpl *Template) newTemplateWriterAndExecute(context Context, writer io.Writer) error {
+ return tpl.execute(context, &templateWriter{w: writer})
+}
+
+func (tpl *Template) newBufferAndExecute(context Context) (*bytes.Buffer, error) {
+ // Create output buffer
+ // We assume that the rendered template will be 30% larger
+ buffer := bytes.NewBuffer(make([]byte, 0, int(float64(tpl.size)*1.3)))
+ if err := tpl.execute(context, buffer); err != nil {
return nil, err
}
return buffer, nil
@@ -121,30 +151,30 @@ func (tpl *Template) execute(context Context) (*bytes.Buffer, error) {
// on success. Context can be nil. Nothing is written on error; instead the error
// is being returned.
func (tpl *Template) ExecuteWriter(context Context, writer io.Writer) error {
- buffer, err := tpl.execute(context)
+ buf, err := tpl.newBufferAndExecute(context)
if err != nil {
return err
}
-
- l := buffer.Len()
- n, werr := buffer.WriteTo(writer)
- if int(n) != l {
- panic(fmt.Sprintf("error on writing template: n(%d) != buffer.Len(%d)", n, l))
- }
- if werr != nil {
- return &Error{
- Filename: tpl.name,
- Sender: "execution",
- ErrorMsg: werr.Error(),
- }
+ _, err = buf.WriteTo(writer)
+ if err != nil {
+ return err
}
return nil
}
+// ExecuteWriterUnbuffered works like ExecuteWriter. The only difference is that,
+// since no intermediate buffer is involved (for performance reasons), this
+// function might already have written parts of the generated template when an
+// execution error occurs. This is handy if you need high-performance template
+// generation or if you want to manage your own pool of buffers.

+func (tpl *Template) ExecuteWriterUnbuffered(context Context, writer io.Writer) error {
+ return tpl.newTemplateWriterAndExecute(context, writer)
+}
+
// Executes the template and returns the rendered template as a []byte
func (tpl *Template) ExecuteBytes(context Context) ([]byte, error) {
// Execute template
- buffer, err := tpl.execute(context)
+ buffer, err := tpl.newBufferAndExecute(context)
if err != nil {
return nil, err
}
@@ -154,7 +184,7 @@ func (tpl *Template) ExecuteBytes(context Context) ([]byte, error) {
// Executes the template and returns the rendered template as a string
func (tpl *Template) Execute(context Context) (string, error) {
// Execute template
- buffer, err := tpl.execute(context)
+ buffer, err := tpl.newBufferAndExecute(context)
if err != nil {
return "", err
}
diff --git a/vendor/github.com/flosch/pongo2/template_loader.go b/vendor/github.com/flosch/pongo2/template_loader.go
new file mode 100644
index 0000000..bc80f4a
--- /dev/null
+++ b/vendor/github.com/flosch/pongo2/template_loader.go
@@ -0,0 +1,157 @@
+package pongo2
+
+import (
+ "bytes"
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+ "path/filepath"
+
+ "github.com/juju/errors"
+)
+
+// LocalFilesystemLoader represents a local filesystem loader with basic
+// BaseDirectory capabilities. The access to the local filesystem is unrestricted.
+type LocalFilesystemLoader struct {
+ baseDir string
+}
+
+// MustNewLocalFileSystemLoader creates a new LocalFilesystemLoader instance
+// and panics if there's any error during instantiation. The parameters
+// are the same as for NewLocalFileSystemLoader.
+func MustNewLocalFileSystemLoader(baseDir string) *LocalFilesystemLoader {
+ fs, err := NewLocalFileSystemLoader(baseDir)
+ if err != nil {
+ log.Panic(err)
+ }
+ return fs
+}
+
+// NewLocalFileSystemLoader creates a new LocalFilesystemLoader and allows
+// templates to be loaded from disk (unrestricted). If a base directory
+// is given (or set later using SetBaseDir), it is used
+// for path calculation in template inclusions/imports. Otherwise the path
+// is calculated relative to the including template's path.
+func NewLocalFileSystemLoader(baseDir string) (*LocalFilesystemLoader, error) {
+ fs := &LocalFilesystemLoader{}
+ if baseDir != "" {
+ if err := fs.SetBaseDir(baseDir); err != nil {
+ return nil, err
+ }
+ }
+ return fs, nil
+}
+
+// SetBaseDir sets the template's base directory. This directory will
+// be used for any relative path in filters, tags and From*-functions to locate
+// your template. See the comment for NewLocalFileSystemLoader as well.
+func (fs *LocalFilesystemLoader) SetBaseDir(path string) error {
+ // Make the path absolute
+ if !filepath.IsAbs(path) {
+ abs, err := filepath.Abs(path)
+ if err != nil {
+ return err
+ }
+ path = abs
+ }
+
+ // Check for existence
+ fi, err := os.Stat(path)
+ if err != nil {
+ return err
+ }
+ if !fi.IsDir() {
+ return errors.Errorf("The given path '%s' is not a directory.", path)
+ }
+
+ fs.baseDir = path
+ return nil
+}
+
+// Get reads the path's content from your local filesystem.
+func (fs *LocalFilesystemLoader) Get(path string) (io.Reader, error) {
+ buf, err := ioutil.ReadFile(path)
+ if err != nil {
+ return nil, err
+ }
+ return bytes.NewReader(buf), nil
+}
+
+// Abs resolves a filename relative to the base directory. Absolute paths are allowed.
+// When no base directory is set, the path is resolved against either the
+// provided base argument (which may be the path of a template that
+// includes another template) or, if that is empty as well,
+// the current working directory.
+func (fs *LocalFilesystemLoader) Abs(base, name string) string {
+ if filepath.IsAbs(name) {
+ return name
+ }
+
+ // Our own base dir has always priority; if there's none
+ // we use the path provided in base.
+ var err error
+ if fs.baseDir == "" {
+ if base == "" {
+ base, err = os.Getwd()
+ if err != nil {
+ panic(err)
+ }
+ return filepath.Join(base, name)
+ }
+
+ return filepath.Join(filepath.Dir(base), name)
+ }
+
+ return filepath.Join(fs.baseDir, name)
+}
+
+// SandboxedFilesystemLoader is still WIP.
+type SandboxedFilesystemLoader struct {
+ *LocalFilesystemLoader
+}
+
+// NewSandboxedFilesystemLoader creates a new sandboxed local file system instance.
+func NewSandboxedFilesystemLoader(baseDir string) (*SandboxedFilesystemLoader, error) {
+ fs, err := NewLocalFileSystemLoader(baseDir)
+ if err != nil {
+ return nil, err
+ }
+ return &SandboxedFilesystemLoader{
+ LocalFilesystemLoader: fs,
+ }, nil
+}
+
+// Move sandbox to a virtual fs
+
+/*
+if len(set.SandboxDirectories) > 0 {
+ defer func() {
+ // Remove any ".." or other crap
+ resolvedPath = filepath.Clean(resolvedPath)
+
+ // Make the path absolute
+ absPath, err := filepath.Abs(resolvedPath)
+ if err != nil {
+ panic(err)
+ }
+ resolvedPath = absPath
+
+ // Check against the sandbox directories (once one pattern matches, we're done and can allow it)
+ for _, pattern := range set.SandboxDirectories {
+ matched, err := filepath.Match(pattern, resolvedPath)
+ if err != nil {
+ panic("Wrong sandbox directory match pattern (see http://golang.org/pkg/path/filepath/#Match).")
+ }
+ if matched {
+ // OK!
+ return
+ }
+ }
+
+ // No pattern matched, we have to log+deny the request
+ set.logf("Access attempt outside of the sandbox directories (blocked): '%s'", resolvedPath)
+ resolvedPath = ""
+ }()
+}
+*/
diff --git a/vendor/github.com/flosch/pongo2/template_sets.go b/vendor/github.com/flosch/pongo2/template_sets.go
index c582c5d..6b4533c 100644
--- a/vendor/github.com/flosch/pongo2/template_sets.go
+++ b/vendor/github.com/flosch/pongo2/template_sets.go
@@ -2,48 +2,49 @@ package pongo2
import (
"fmt"
+ "io"
"io/ioutil"
"log"
"os"
- "path/filepath"
"sync"
+
+ "github.com/juju/errors"
)
-// A template set allows you to create your own group of templates with their own global context (which is shared
-// among all members of the set), their own configuration (like a specific base directory) and their own sandbox.
-// It's useful for a separation of different kind of templates (e. g. web templates vs. mail templates).
+// TemplateLoader allows implementing a virtual file system.
+type TemplateLoader interface {
+ // Abs calculates the path to a given template. Whenever a path must be resolved
+ // due to an import from another template, the base equals the parent template's path.
+ Abs(base, name string) string
+
+ // Get returns an io.Reader where the template's content can be read from.
+ Get(path string) (io.Reader, error)
+}
+
+// TemplateSet allows you to create your own group of templates with their own
+// global context (which is shared among all members of the set) and their own
+// configuration.
+// It's useful for separating different kinds of templates
+// (e. g. web templates vs. mail templates).
type TemplateSet struct {
- name string
+ name string
+ loader TemplateLoader
// Globals will be provided to all templates created within this template set
Globals Context
- // If debug is true (default false), ExecutionContext.Logf() will work and output to STDOUT. Furthermore,
- // FromCache() won't cache the templates. Make sure to synchronize the access to it in case you're changing this
+ // If debug is true (default false), ExecutionContext.Logf() will work and output
+ // to STDOUT. Furthermore, FromCache() won't cache the templates.
+ // Make sure to synchronize the access to it in case you're changing this
// variable during program execution (and template compilation/execution).
Debug bool
- // Base directory: If you set the base directory (string is non-empty), all filename lookups in tags/filters are
- // relative to this directory. If it's empty, all lookups are relative to the current filename which is importing.
- baseDirectory string
-
// Sandbox features
- // - Limit access to directories (using SandboxDirectories)
// - Disallow access to specific tags and/or filters (using BanTag() and BanFilter())
//
- // You can limit file accesses (for all tags/filters which are using pongo2's file resolver technique)
- // to these sandbox directories. All default pongo2 filters/tags are respecting these restrictions.
- // For example, if you only have your base directory in the list, a {% ssi "/etc/passwd" %} will not work.
- // No items in SandboxDirectories means no restrictions at all.
- //
- // For efficiency reasons you can ban tags/filters only *before* you have added your first
- // template to the set (restrictions are statically checked). After you added one, it's not possible anymore
- // (for your personal security).
- //
- // SandboxDirectories can be changed at runtime. Please synchronize the access to it if you need to change it
- // after you've added your first template to the set. You *must* use this match pattern for your directories:
- // http://golang.org/pkg/path/filepath/#Match
- SandboxDirectories []string
+ // For efficiency reasons you can ban tags/filters only *before* you have
+ // added your first template to the set (restrictions are statically checked).
+ // After you added one, it's not possible anymore (for your personal security).
firstTemplateCreated bool
bannedTags map[string]bool
bannedFilters map[string]bool
@@ -53,11 +54,13 @@ type TemplateSet struct {
templateCacheMutex sync.Mutex
}
-// Create your own template sets to separate different kind of templates (e. g. web from mail templates) with
-// different globals or other configurations (like base directories).
-func NewSet(name string) *TemplateSet {
+// NewSet can be used to create sets with different kinds of templates
+// (e. g. web vs. mail templates), with different globals or
+// other configurations.
+func NewSet(name string, loader TemplateLoader) *TemplateSet {
return &TemplateSet{
name: name,
+ loader: loader,
Globals: make(Context),
bannedTags: make(map[string]bool),
bannedFilters: make(map[string]bool),
@@ -65,151 +68,157 @@ func NewSet(name string) *TemplateSet {
}
}
-// Use this function to set your template set's base directory. This directory will be used for any relative
-// path in filters, tags and From*-functions to determine your template.
-func (set *TemplateSet) SetBaseDirectory(name string) error {
- // Make the path absolute
- if !filepath.IsAbs(name) {
- abs, err := filepath.Abs(name)
- if err != nil {
- return err
- }
- name = abs
+func (set *TemplateSet) resolveFilename(tpl *Template, path string) string {
+ name := ""
+ if tpl != nil && tpl.isTplString {
+ return path
}
-
- // Check for existence
- fi, err := os.Stat(name)
- if err != nil {
- return err
+ if tpl != nil {
+ name = tpl.name
}
- if !fi.IsDir() {
- return fmt.Errorf("The given path '%s' is not a directory.")
- }
-
- set.baseDirectory = name
- return nil
+ return set.loader.Abs(name, path)
}
-func (set *TemplateSet) BaseDirectory() string {
- return set.baseDirectory
-}
-
-// Ban a specific tag for this template set. See more in the documentation for TemplateSet.
-func (set *TemplateSet) BanTag(name string) {
+// BanTag bans a specific tag for this template set. See more in the documentation for TemplateSet.
+func (set *TemplateSet) BanTag(name string) error {
_, has := tags[name]
if !has {
- panic(fmt.Sprintf("Tag '%s' not found.", name))
+ return errors.Errorf("tag '%s' not found", name)
}
if set.firstTemplateCreated {
- panic("You cannot ban any tags after you've added your first template to your template set.")
+ return errors.New("you cannot ban any tags after you've added your first template to your template set")
}
_, has = set.bannedTags[name]
if has {
- panic(fmt.Sprintf("Tag '%s' is already banned.", name))
+ return errors.Errorf("tag '%s' is already banned", name)
}
set.bannedTags[name] = true
+
+ return nil
}
-// Ban a specific filter for this template set. See more in the documentation for TemplateSet.
-func (set *TemplateSet) BanFilter(name string) {
+// BanFilter bans a specific filter for this template set. See more in the documentation for TemplateSet.
+func (set *TemplateSet) BanFilter(name string) error {
_, has := filters[name]
if !has {
- panic(fmt.Sprintf("Filter '%s' not found.", name))
+ return errors.Errorf("filter '%s' not found", name)
}
if set.firstTemplateCreated {
- panic("You cannot ban any filters after you've added your first template to your template set.")
+ return errors.New("you cannot ban any filters after you've added your first template to your template set")
}
_, has = set.bannedFilters[name]
if has {
- panic(fmt.Sprintf("Filter '%s' is already banned.", name))
+ return errors.Errorf("filter '%s' is already banned", name)
}
set.bannedFilters[name] = true
+
+ return nil
}
-// FromCache() is a convenient method to cache templates. It is thread-safe
+// FromCache is a convenience method for caching templates. It is thread-safe
// and will only compile the template associated with a filename once.
// If TemplateSet.Debug is true (for example during development phase),
// FromCache() will not cache the template and instead recompile it on any
// call (to make changes to a template live instantaneously).
-// Like FromFile(), FromCache() takes a relative path to a set base directory.
-// Sandbox restrictions apply (if given).
func (set *TemplateSet) FromCache(filename string) (*Template, error) {
if set.Debug {
// Recompile on any request
return set.FromFile(filename)
- } else {
- // Cache the template
- cleaned_filename := set.resolveFilename(nil, filename)
+ }
+ // Cache the template
+ cleanedFilename := set.resolveFilename(nil, filename)
- set.templateCacheMutex.Lock()
- defer set.templateCacheMutex.Unlock()
+ set.templateCacheMutex.Lock()
+ defer set.templateCacheMutex.Unlock()
- tpl, has := set.templateCache[cleaned_filename]
+ tpl, has := set.templateCache[cleanedFilename]
- // Cache miss
- if !has {
- tpl, err := set.FromFile(cleaned_filename)
- if err != nil {
- return nil, err
- }
- set.templateCache[cleaned_filename] = tpl
- return tpl, nil
+ // Cache miss
+ if !has {
+ tpl, err := set.FromFile(cleanedFilename)
+ if err != nil {
+ return nil, err
}
-
- // Cache hit
+ set.templateCache[cleanedFilename] = tpl
return tpl, nil
}
+
+ // Cache hit
+ return tpl, nil
}
-// Loads a template from string and returns a Template instance.
+// FromString loads a template from string and returns a Template instance.
func (set *TemplateSet) FromString(tpl string) (*Template, error) {
set.firstTemplateCreated = true
+ return newTemplateString(set, []byte(tpl))
+}
+
+// FromBytes loads a template from bytes and returns a Template instance.
+func (set *TemplateSet) FromBytes(tpl []byte) (*Template, error) {
+ set.firstTemplateCreated = true
+
return newTemplateString(set, tpl)
}
-// Loads a template from a filename and returns a Template instance.
-// If a base directory is set, the filename must be either relative to it
-// or be an absolute path. Sandbox restrictions (SandboxDirectories) apply
-// if given.
+// FromFile loads a template from a filename and returns a Template instance.
func (set *TemplateSet) FromFile(filename string) (*Template, error) {
set.firstTemplateCreated = true
- buf, err := ioutil.ReadFile(set.resolveFilename(nil, filename))
+ fd, err := set.loader.Get(set.resolveFilename(nil, filename))
if err != nil {
return nil, &Error{
- Filename: filename,
- Sender: "fromfile",
- ErrorMsg: err.Error(),
+ Filename: filename,
+ Sender: "fromfile",
+ OrigError: err,
}
}
- return newTemplate(set, filename, false, string(buf))
+ buf, err := ioutil.ReadAll(fd)
+ if err != nil {
+ return nil, &Error{
+ Filename: filename,
+ Sender: "fromfile",
+ OrigError: err,
+ }
+ }
+
+ return newTemplate(set, filename, false, buf)
}
-// Shortcut; renders a template string directly. Panics when providing a
-// malformed template or an error occurs during execution.
-func (set *TemplateSet) RenderTemplateString(s string, ctx Context) string {
+// RenderTemplateString is a shortcut and renders a template string directly.
+func (set *TemplateSet) RenderTemplateString(s string, ctx Context) (string, error) {
set.firstTemplateCreated = true
tpl := Must(set.FromString(s))
result, err := tpl.Execute(ctx)
if err != nil {
- panic(err)
+ return "", err
}
- return result
+ return result, nil
}
-// Shortcut; renders a template file directly. Panics when providing a
-// malformed template or an error occurs during execution.
-func (set *TemplateSet) RenderTemplateFile(fn string, ctx Context) string {
+// RenderTemplateBytes is a shortcut and renders template bytes directly.
+func (set *TemplateSet) RenderTemplateBytes(b []byte, ctx Context) (string, error) {
+ set.firstTemplateCreated = true
+
+ tpl := Must(set.FromBytes(b))
+ result, err := tpl.Execute(ctx)
+ if err != nil {
+ return "", err
+ }
+ return result, nil
+}
+
+// RenderTemplateFile is a shortcut and renders a template file directly.
+func (set *TemplateSet) RenderTemplateFile(fn string, ctx Context) (string, error) {
set.firstTemplateCreated = true
tpl := Must(set.FromFile(fn))
result, err := tpl.Execute(ctx)
if err != nil {
- panic(err)
+ return "", err
}
- return result
+ return result, nil
}
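The three shortcuts above now return `(string, error)` instead of panicking, so callers move from `recover()`-style guards to plain error handling. A minimal sketch of the new calling pattern, using a made-up `render` stub rather than pongo2's real API:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// render stands in for the new RenderTemplateString shape:
// it returns (string, error) instead of panicking on failure.
func render(tpl string, ctx map[string]interface{}) (string, error) {
	if tpl == "" {
		return "", errors.New("empty template")
	}
	// Real pongo2 would parse and execute tpl against ctx here;
	// this stub just returns the input unchanged.
	_ = ctx
	return tpl, nil
}

func main() {
	out, err := render("Hello {{ name }}!", map[string]interface{}{"name": "pongo2"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```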
func (set *TemplateSet) logf(format string, args ...interface{}) {
@@ -218,58 +227,6 @@ func (set *TemplateSet) logf(format string, args ...interface{}) {
}
}
-// Resolves a filename relative to the base directory. Absolute paths are allowed.
-// If sandbox restrictions are given (SandboxDirectories), they will be respected and checked.
-// On sandbox restriction violation, resolveFilename() panics.
-func (set *TemplateSet) resolveFilename(tpl *Template, filename string) (resolved_path string) {
- if len(set.SandboxDirectories) > 0 {
- defer func() {
- // Remove any ".." or other crap
- resolved_path = filepath.Clean(resolved_path)
-
- // Make the path absolute
- abs_path, err := filepath.Abs(resolved_path)
- if err != nil {
- panic(err)
- }
- resolved_path = abs_path
-
- // Check against the sandbox directories (once one pattern matches, we're done and can allow it)
- for _, pattern := range set.SandboxDirectories {
- matched, err := filepath.Match(pattern, resolved_path)
- if err != nil {
- panic("Wrong sandbox directory match pattern (see http://golang.org/pkg/path/filepath/#Match).")
- }
- if matched {
- // OK!
- return
- }
- }
-
- // No pattern matched, we have to log+deny the request
- set.logf("Access attempt outside of the sandbox directories (blocked): '%s'", resolved_path)
- resolved_path = ""
- }()
- }
-
- if filepath.IsAbs(filename) {
- return filename
- }
-
- if set.baseDirectory == "" {
- if tpl != nil {
- if tpl.is_tpl_string {
- return filename
- }
- base := filepath.Dir(tpl.name)
- return filepath.Join(base, filename)
- }
- return filename
- } else {
- return filepath.Join(set.baseDirectory, filename)
- }
-}
-
// Logging function (internally used)
func logf(format string, items ...interface{}) {
if debug {
@@ -279,13 +236,18 @@ func logf(format string, items ...interface{}) {
var (
debug bool // internal debugging
- logger = log.New(os.Stdout, "[pongo2] ", log.LstdFlags)
+ logger = log.New(os.Stdout, "[pongo2] ", log.LstdFlags|log.Lshortfile)
- // Creating a default set
- DefaultSet = NewSet("default")
+ // DefaultLoader allows the default un-sandboxed access to the local file
+ // system and is used by the DefaultSet.
+ DefaultLoader = MustNewLocalFileSystemLoader("")
+
+ // DefaultSet is a set created for you for convenience reasons.
+ DefaultSet = NewSet("default", DefaultLoader)
// Methods on the default set
FromString = DefaultSet.FromString
+ FromBytes = DefaultSet.FromBytes
FromFile = DefaultSet.FromFile
FromCache = DefaultSet.FromCache
RenderTemplateString = DefaultSet.RenderTemplateString
diff --git a/vendor/github.com/flosch/pongo2/value.go b/vendor/github.com/flosch/pongo2/value.go
index 8cf8552..df70bbc 100644
--- a/vendor/github.com/flosch/pongo2/value.go
+++ b/vendor/github.com/flosch/pongo2/value.go
@@ -3,6 +3,7 @@ package pongo2
import (
"fmt"
"reflect"
+ "sort"
"strconv"
"strings"
)
@@ -12,7 +13,7 @@ type Value struct {
safe bool // used to indicate whether a Value needs explicit escaping in the template
}
-// Converts any given value to a pongo2.Value
+// AsValue converts any given value to a pongo2.Value
// Usually being used within own functions passed to a template
// through a Context or within filter functions.
//
@@ -24,7 +25,7 @@ func AsValue(i interface{}) *Value {
}
}
-// Like AsValue, but does not apply the 'escape' filter.
+// AsSafeValue works like AsValue, but does not apply the 'escape' filter.
func AsSafeValue(i interface{}) *Value {
return &Value{
val: reflect.ValueOf(i),
@@ -39,23 +40,23 @@ func (v *Value) getResolvedValue() reflect.Value {
return v.val
}
-// Checks whether the underlying value is a string
+// IsString checks whether the underlying value is a string
func (v *Value) IsString() bool {
return v.getResolvedValue().Kind() == reflect.String
}
-// Checks whether the underlying value is a bool
+// IsBool checks whether the underlying value is a bool
func (v *Value) IsBool() bool {
return v.getResolvedValue().Kind() == reflect.Bool
}
-// Checks whether the underlying value is a float
+// IsFloat checks whether the underlying value is a float
func (v *Value) IsFloat() bool {
return v.getResolvedValue().Kind() == reflect.Float32 ||
v.getResolvedValue().Kind() == reflect.Float64
}
-// Checks whether the underlying value is an integer
+// IsInteger checks whether the underlying value is an integer
func (v *Value) IsInteger() bool {
return v.getResolvedValue().Kind() == reflect.Int ||
v.getResolvedValue().Kind() == reflect.Int8 ||
@@ -69,19 +70,19 @@ func (v *Value) IsInteger() bool {
v.getResolvedValue().Kind() == reflect.Uint64
}
-// Checks whether the underlying value is either an integer
+// IsNumber checks whether the underlying value is either an integer
// or a float.
func (v *Value) IsNumber() bool {
return v.IsInteger() || v.IsFloat()
}
-// Checks whether the underlying value is NIL
+// IsNil checks whether the underlying value is NIL
func (v *Value) IsNil() bool {
//fmt.Printf("%+v\n", v.getResolvedValue().Type().String())
return !v.getResolvedValue().IsValid()
}
-// Returns a string for the underlying value. If this value is not
+// String returns a string for the underlying value. If this value is not
// of type string, pongo2 tries to convert it. Currently the following
// types for underlying values are supported:
//
@@ -111,9 +112,8 @@ func (v *Value) String() string {
case reflect.Bool:
if v.Bool() {
return "True"
- } else {
- return "False"
}
+ return "False"
case reflect.Struct:
if t, ok := v.Interface().(fmt.Stringer); ok {
return t.String()
@@ -124,7 +124,7 @@ func (v *Value) String() string {
return v.getResolvedValue().String()
}
-// Returns the underlying value as an integer (converts the underlying
+// Integer returns the underlying value as an integer (converts the underlying
// value, if necessary). If it's not possible to convert the underlying value,
// it will return 0.
func (v *Value) Integer() int {
@@ -148,7 +148,7 @@ func (v *Value) Integer() int {
}
}
-// Returns the underlying value as a float (converts the underlying
+// Float returns the underlying value as a float (converts the underlying
// value, if necessary). If it's not possible to convert the underlying value,
// it will return 0.0.
func (v *Value) Float() float64 {
@@ -172,7 +172,7 @@ func (v *Value) Float() float64 {
}
}
-// Returns the underlying value as bool. If the value is not bool, false
+// Bool returns the underlying value as bool. If the value is not bool, false
// will always be returned. If you're looking for true/false-evaluation of the
// underlying value, have a look on the IsTrue()-function.
func (v *Value) Bool() bool {
@@ -185,7 +185,7 @@ func (v *Value) Bool() bool {
}
}
-// Tries to evaluate the underlying value the Pythonic-way:
+// IsTrue tries to evaluate the underlying value the Pythonic-way:
//
// Returns TRUE in one the following cases:
//
@@ -217,7 +217,7 @@ func (v *Value) IsTrue() bool {
}
}
-// Tries to negate the underlying value. It's mainly used for
+// Negate tries to negate the underlying value. It's mainly used for
// the NOT-operator and in conjunction with a call to
// return_value.IsTrue() afterwards.
//
@@ -229,26 +229,26 @@ func (v *Value) Negate() *Value {
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
if v.Integer() != 0 {
return AsValue(0)
- } else {
- return AsValue(1)
}
+ return AsValue(1)
case reflect.Float32, reflect.Float64:
if v.Float() != 0.0 {
return AsValue(float64(0.0))
- } else {
- return AsValue(float64(1.1))
}
+ return AsValue(float64(1.1))
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return AsValue(v.getResolvedValue().Len() == 0)
case reflect.Bool:
return AsValue(!v.getResolvedValue().Bool())
+ case reflect.Struct:
+ return AsValue(false)
default:
logf("Value.IsTrue() not available for type: %s\n", v.getResolvedValue().Kind().String())
return AsValue(true)
}
}
-// Returns the length for an array, chan, map, slice or string.
+// Len returns the length for an array, chan, map, slice or string.
// Otherwise it will return 0.
func (v *Value) Len() int {
switch v.getResolvedValue().Kind() {
@@ -263,7 +263,7 @@ func (v *Value) Len() int {
}
}
-// Slices an array, slice or string. Otherwise it will
+// Slice slices an array, slice or string. Otherwise it will
// return an empty []int.
func (v *Value) Slice(i, j int) *Value {
switch v.getResolvedValue().Kind() {
@@ -278,7 +278,7 @@ func (v *Value) Slice(i, j int) *Value {
}
}
-// Get the i-th item of an array, slice or string. Otherwise
+// Index gets the i-th item of an array, slice or string. Otherwise
// it will return NIL.
func (v *Value) Index(i int) *Value {
switch v.getResolvedValue().Kind() {
@@ -301,7 +301,7 @@ func (v *Value) Index(i int) *Value {
}
}
-// Checks whether the underlying value (which must be of type struct, map,
+// Contains checks whether the underlying value (which must be of type struct, map,
// string, array or slice) contains of another Value (e. g. used to check
// whether a struct contains of a specific field or a map contains a specific key).
//
@@ -310,25 +310,32 @@ func (v *Value) Index(i int) *Value {
func (v *Value) Contains(other *Value) bool {
switch v.getResolvedValue().Kind() {
case reflect.Struct:
- field_value := v.getResolvedValue().FieldByName(other.String())
- return field_value.IsValid()
+ fieldValue := v.getResolvedValue().FieldByName(other.String())
+ return fieldValue.IsValid()
case reflect.Map:
- var map_value reflect.Value
+ var mapValue reflect.Value
switch other.Interface().(type) {
case int:
- map_value = v.getResolvedValue().MapIndex(other.getResolvedValue())
+ mapValue = v.getResolvedValue().MapIndex(other.getResolvedValue())
case string:
- map_value = v.getResolvedValue().MapIndex(other.getResolvedValue())
+ mapValue = v.getResolvedValue().MapIndex(other.getResolvedValue())
default:
logf("Value.Contains() does not support lookup type '%s'\n", other.getResolvedValue().Kind().String())
return false
}
- return map_value.IsValid()
+ return mapValue.IsValid()
case reflect.String:
return strings.Contains(v.getResolvedValue().String(), other.String())
- // TODO: reflect.Array, reflect.Slice
+ case reflect.Slice, reflect.Array:
+ for i := 0; i < v.getResolvedValue().Len(); i++ {
+ item := v.getResolvedValue().Index(i)
+ if other.Interface() == item.Interface() {
+ return true
+ }
+ }
+ return false
default:
logf("Value.Contains() not available for type: %s\n", v.getResolvedValue().Kind().String())
@@ -336,7 +343,7 @@ func (v *Value) Contains(other *Value) bool {
}
}
-// Checks whether the underlying value is of type array, slice or string.
+// CanSlice checks whether the underlying value is of type array, slice or string.
// You normally would use CanSlice() before using the Slice() operation.
func (v *Value) CanSlice() bool {
switch v.getResolvedValue().Kind() {
@@ -346,7 +353,7 @@ func (v *Value) CanSlice() bool {
return false
}
-// Iterates over a map, array, slice or a string. It calls the
+// Iterate iterates over a map, array, slice or a string. It calls the
// function's first argument for every value with the following arguments:
//
// idx current 0-index
@@ -357,16 +364,23 @@ func (v *Value) CanSlice() bool {
// If the underlying value has no items or is not one of the types above,
// the empty function (function's second argument) will be called.
func (v *Value) Iterate(fn func(idx, count int, key, value *Value) bool, empty func()) {
- v.IterateOrder(fn, empty, false)
+ v.IterateOrder(fn, empty, false, false)
}
-// Like Value.Iterate, but can iterate through an array/slice/string in reverse. Does
+// IterateOrder behaves like Value.Iterate, but can iterate through an array/slice/string in reverse. Does
// not affect the iteration through a map because maps don't have any particular order.
-func (v *Value) IterateOrder(fn func(idx, count int, key, value *Value) bool, empty func(), reverse bool) {
+// However, you can force an order using the `sorted` keyword (and even use `reversed sorted`).
+func (v *Value) IterateOrder(fn func(idx, count int, key, value *Value) bool, empty func(), reverse bool, sorted bool) {
switch v.getResolvedValue().Kind() {
case reflect.Map:
- // Reverse not needed for maps, since they are not ordered
- keys := v.getResolvedValue().MapKeys()
+ keys := sortedKeys(v.getResolvedValue().MapKeys())
+ if sorted {
+ if reverse {
+ sort.Sort(sort.Reverse(keys))
+ } else {
+ sort.Sort(keys)
+ }
+ }
keyLen := len(keys)
for idx, key := range keys {
value := v.getResolvedValue().MapIndex(key)
@@ -379,19 +393,31 @@ func (v *Value) IterateOrder(fn func(idx, count int, key, value *Value) bool, em
}
return // done
case reflect.Array, reflect.Slice:
+ var items valuesList
+
itemCount := v.getResolvedValue().Len()
- if itemCount > 0 {
+ for i := 0; i < itemCount; i++ {
+ items = append(items, &Value{val: v.getResolvedValue().Index(i)})
+ }
+
+ if sorted {
if reverse {
- for i := itemCount - 1; i >= 0; i-- {
- if !fn(i, itemCount, &Value{val: v.getResolvedValue().Index(i)}, nil) {
- return
- }
- }
+ sort.Sort(sort.Reverse(items))
} else {
- for i := 0; i < itemCount; i++ {
- if !fn(i, itemCount, &Value{val: v.getResolvedValue().Index(i)}, nil) {
- return
- }
+ sort.Sort(items)
+ }
+ } else {
+ if reverse {
+ for i := 0; i < itemCount/2; i++ {
+ items[i], items[itemCount-1-i] = items[itemCount-1-i], items[i]
+ }
+ }
+ }
+
+ if len(items) > 0 {
+ for idx, item := range items {
+ if !fn(idx, itemCount, item, nil) {
+ return
}
}
} else {
@@ -399,7 +425,12 @@ func (v *Value) IterateOrder(fn func(idx, count int, key, value *Value) bool, em
}
return // done
case reflect.String:
- // TODO: Not utf8-compatible (utf8-decoding neccessary)
+ if sorted {
+ // TODO(flosch): Handle sorted
+ panic("TODO: handle sort for type string")
+ }
+
+ // TODO(flosch): Not utf8-compatible (utf8-decoding necessary)
charCount := v.getResolvedValue().Len()
if charCount > 0 {
if reverse {
@@ -425,7 +456,7 @@ func (v *Value) IterateOrder(fn func(idx, count int, key, value *Value) bool, em
empty()
}
-// Gives you access to the underlying value.
+// Interface gives you access to the underlying value.
func (v *Value) Interface() interface{} {
if v.val.IsValid() {
return v.val.Interface()
@@ -433,7 +464,57 @@ func (v *Value) Interface() interface{} {
return nil
}
-// Checks whether two values are containing the same value or object.
+// EqualValueTo checks whether two values are containing the same value or object.
func (v *Value) EqualValueTo(other *Value) bool {
+ // comparison of uint with int fails using .Interface()-comparison (see issue #64)
+ if v.IsInteger() && other.IsInteger() {
+ return v.Integer() == other.Integer()
+ }
return v.Interface() == other.Interface()
}
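The integer special case in EqualValueTo above exists because Go's interface equality compares dynamic types before values: an int-backed and a uint-backed `interface{}` are never `==`, even when they hold the same number (the issue #64 symptom). A standalone illustration:

```go
package main

import "fmt"

func main() {
	var a interface{} = int(5)
	var b interface{} = uint(5)
	// Interface equality checks dynamic types first, so an int-backed
	// and a uint-backed value are unequal despite the same number.
	fmt.Println(a == b)                   // false
	fmt.Println(a.(int) == int(b.(uint))) // true once both sides are int
}
```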
+
+type sortedKeys []reflect.Value
+
+func (sk sortedKeys) Len() int {
+ return len(sk)
+}
+
+func (sk sortedKeys) Less(i, j int) bool {
+ vi := &Value{val: sk[i]}
+ vj := &Value{val: sk[j]}
+ switch {
+ case vi.IsInteger() && vj.IsInteger():
+ return vi.Integer() < vj.Integer()
+ case vi.IsFloat() && vj.IsFloat():
+ return vi.Float() < vj.Float()
+ default:
+ return vi.String() < vj.String()
+ }
+}
+
+func (sk sortedKeys) Swap(i, j int) {
+ sk[i], sk[j] = sk[j], sk[i]
+}
+
+type valuesList []*Value
+
+func (vl valuesList) Len() int {
+ return len(vl)
+}
+
+func (vl valuesList) Less(i, j int) bool {
+ vi := vl[i]
+ vj := vl[j]
+ switch {
+ case vi.IsInteger() && vj.IsInteger():
+ return vi.Integer() < vj.Integer()
+ case vi.IsFloat() && vj.IsFloat():
+ return vi.Float() < vj.Float()
+ default:
+ return vi.String() < vj.String()
+ }
+}
+
+func (vl valuesList) Swap(i, j int) {
+ vl[i], vl[j] = vl[j], vl[i]
+}
diff --git a/vendor/github.com/flosch/pongo2/variable.go b/vendor/github.com/flosch/pongo2/variable.go
index 9ec6a59..4a1ee69 100644
--- a/vendor/github.com/flosch/pongo2/variable.go
+++ b/vendor/github.com/flosch/pongo2/variable.go
@@ -1,11 +1,12 @@
package pongo2
import (
- "bytes"
"fmt"
"reflect"
"strconv"
"strings"
+
+ "github.com/juju/errors"
)
const (
@@ -13,13 +14,18 @@ const (
varTypeIdent
)
+var (
+ typeOfValuePtr = reflect.TypeOf(new(Value))
+ typeOfExecCtxPtr = reflect.TypeOf(new(ExecutionContext))
+)
+
type variablePart struct {
typ int
s string
i int
- is_function_call bool
- calling_args []functionCallArgument // needed for a function call, represents all argument nodes (INode supports nested function calls)
+ isFunctionCall bool
+ callingArgs []functionCallArgument // needed for a function call, represents all argument nodes (INode supports nested function calls)
}
type functionCallArgument interface {
@@ -28,119 +34,121 @@ type functionCallArgument interface {
// TODO: Add location tokens
type stringResolver struct {
- location_token *Token
- val string
+ locationToken *Token
+ val string
}
type intResolver struct {
- location_token *Token
- val int
+ locationToken *Token
+ val int
}
type floatResolver struct {
- location_token *Token
- val float64
+ locationToken *Token
+ val float64
}
type boolResolver struct {
- location_token *Token
- val bool
+ locationToken *Token
+ val bool
}
type variableResolver struct {
- location_token *Token
+ locationToken *Token
parts []*variablePart
}
type nodeFilteredVariable struct {
- location_token *Token
+ locationToken *Token
resolver IEvaluator
filterChain []*filterCall
}
type nodeVariable struct {
- location_token *Token
- expr IEvaluator
+ locationToken *Token
+ expr IEvaluator
}
-func (expr *nodeFilteredVariable) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+type executionCtxEval struct{}
+
+func (v *nodeFilteredVariable) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := v.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *variableResolver) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+func (vr *variableResolver) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := vr.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *stringResolver) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+func (s *stringResolver) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := s.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *intResolver) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+func (i *intResolver) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := i.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *floatResolver) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+func (f *floatResolver) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := f.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
-func (expr *boolResolver) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
- value, err := expr.Evaluate(ctx)
+func (b *boolResolver) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
+ value, err := b.Evaluate(ctx)
if err != nil {
return err
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
func (v *nodeFilteredVariable) GetPositionToken() *Token {
- return v.location_token
+ return v.locationToken
}
-func (v *variableResolver) GetPositionToken() *Token {
- return v.location_token
+func (vr *variableResolver) GetPositionToken() *Token {
+ return vr.locationToken
}
-func (v *stringResolver) GetPositionToken() *Token {
- return v.location_token
+func (s *stringResolver) GetPositionToken() *Token {
+ return s.locationToken
}
-func (v *intResolver) GetPositionToken() *Token {
- return v.location_token
+func (i *intResolver) GetPositionToken() *Token {
+ return i.locationToken
}
-func (v *floatResolver) GetPositionToken() *Token {
- return v.location_token
+func (f *floatResolver) GetPositionToken() *Token {
+ return f.locationToken
}
-func (v *boolResolver) GetPositionToken() *Token {
- return v.location_token
+func (b *boolResolver) GetPositionToken() *Token {
+ return b.locationToken
}
func (s *stringResolver) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
@@ -179,7 +187,7 @@ func (nv *nodeVariable) FilterApplied(name string) bool {
return nv.expr.FilterApplied(name)
}
-func (nv *nodeVariable) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Error {
+func (nv *nodeVariable) Execute(ctx *ExecutionContext, writer TemplateWriter) *Error {
value, err := nv.expr.Evaluate(ctx)
if err != nil {
return err
@@ -193,10 +201,14 @@ func (nv *nodeVariable) Execute(ctx *ExecutionContext, buffer *bytes.Buffer) *Er
}
}
- buffer.WriteString(value.String())
+ writer.WriteString(value.String())
return nil
}
+func (executionCtxEval) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
+ return AsValue(ctx), nil
+}
+
func (vr *variableResolver) FilterApplied(name string) bool {
return false
}
@@ -218,15 +230,15 @@ func (vr *variableResolver) String() string {
func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
var current reflect.Value
- var is_safe bool
+ var isSafe bool
for idx, part := range vr.parts {
if idx == 0 {
// We're looking up the first part of the variable.
// First we're having a look in our private
// context (e. g. information provided by tags, like the forloop)
- val, in_private := ctx.Private[vr.parts[0].s]
- if !in_private {
+ val, inPrivate := ctx.Private[vr.parts[0].s]
+ if !inPrivate {
// Nothing found? Then have a final lookup in the public context
val = ctx.Public[vr.parts[0].s]
}
@@ -236,16 +248,16 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
// Before resolving the pointer, let's see if we have a method to call
// Problem with resolving the pointer is we're changing the receiver
- is_func := false
+ isFunc := false
if part.typ == varTypeIdent {
- func_value := current.MethodByName(part.s)
- if func_value.IsValid() {
- current = func_value
- is_func = true
+ funcValue := current.MethodByName(part.s)
+ if funcValue.IsValid() {
+ current = funcValue
+ isFunc = true
}
}
- if !is_func {
+ if !isFunc {
// If current a pointer, resolve it
if current.Kind() == reflect.Ptr {
current = current.Elem()
@@ -262,9 +274,14 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
// * slices/arrays/strings
switch current.Kind() {
case reflect.String, reflect.Array, reflect.Slice:
- current = current.Index(part.i)
+ if part.i >= 0 && current.Len() > part.i {
+ current = current.Index(part.i)
+ } else {
+ // In Django, exceeding the length of a list is just empty.
+ return AsValue(nil), nil
+ }
default:
- return nil, fmt.Errorf("Can't access an index on type %s (variable %s)",
+ return nil, errors.Errorf("Can't access an index on type %s (variable %s)",
current.Kind().String(), vr.String())
}
case varTypeIdent:
@@ -278,7 +295,7 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
case reflect.Map:
current = current.MapIndex(reflect.ValueOf(part.s))
default:
- return nil, fmt.Errorf("Can't access a field by name on type %s (variable %s)",
+ return nil, errors.Errorf("Can't access a field by name on type %s (variable %s)",
current.Kind().String(), vr.String())
}
default:
@@ -295,10 +312,10 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
// If current is a reflect.ValueOf(pongo2.Value), then unpack it
// Happens in function calls (as a return value) or by injecting
// into the execution context (e.g. in a for-loop)
- if current.Type() == reflect.TypeOf(&Value{}) {
- tmp_value := current.Interface().(*Value)
- current = tmp_value.val
- is_safe = tmp_value.safe
+ if current.Type() == typeOfValuePtr {
+ tmpValue := current.Interface().(*Value)
+ current = tmpValue.val
+ isSafe = tmpValue.safe
}
// Check whether this is an interface and resolve it where required
@@ -307,69 +324,73 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
}
// Check if the part is a function call
- if part.is_function_call || current.Kind() == reflect.Func {
+ if part.isFunctionCall || current.Kind() == reflect.Func {
// Check for callable
if current.Kind() != reflect.Func {
- return nil, fmt.Errorf("'%s' is not a function (it is %s).", vr.String(), current.Kind().String())
+ return nil, errors.Errorf("'%s' is not a function (it is %s)", vr.String(), current.Kind().String())
}
// Check for correct function syntax and types
// func(*Value, ...) *Value
t := current.Type()
+ currArgs := part.callingArgs
+
+ // If an implicit ExecCtx is needed
+ if t.NumIn() > 0 && t.In(0) == typeOfExecCtxPtr {
+ currArgs = append([]functionCallArgument{executionCtxEval{}}, currArgs...)
+ }
// Input arguments
- if len(part.calling_args) != t.NumIn() && !(len(part.calling_args) >= t.NumIn()-1 && t.IsVariadic()) {
+ if len(currArgs) != t.NumIn() && !(len(currArgs) >= t.NumIn()-1 && t.IsVariadic()) {
return nil,
- fmt.Errorf("Function input argument count (%d) of '%s' must be equal to the calling argument count (%d).",
- t.NumIn(), vr.String(), len(part.calling_args))
+ errors.Errorf("Function input argument count (%d) of '%s' must be equal to the calling argument count (%d).",
+ t.NumIn(), vr.String(), len(currArgs))
}
// Output arguments
if t.NumOut() != 1 {
- return nil, fmt.Errorf("'%s' must have exactly 1 output argument.", vr.String())
+ return nil, errors.Errorf("'%s' must have exactly 1 output argument", vr.String())
}
// Evaluate all parameters
- parameters := make([]reflect.Value, 0)
+ var parameters []reflect.Value
- num_args := t.NumIn()
- is_variadic := t.IsVariadic()
- var fn_arg reflect.Type
+ numArgs := t.NumIn()
+ isVariadic := t.IsVariadic()
+ var fnArg reflect.Type
- for idx, arg := range part.calling_args {
+ for idx, arg := range currArgs {
pv, err := arg.Evaluate(ctx)
if err != nil {
return nil, err
}
- if is_variadic {
+ if isVariadic {
if idx >= t.NumIn()-1 {
- fn_arg = t.In(num_args - 1).Elem()
+ fnArg = t.In(numArgs - 1).Elem()
} else {
- fn_arg = t.In(idx)
+ fnArg = t.In(idx)
}
} else {
- fn_arg = t.In(idx)
+ fnArg = t.In(idx)
}
- if fn_arg != reflect.TypeOf(new(Value)) {
+ if fnArg != typeOfValuePtr {
// Function's argument is not a *pongo2.Value, then we have to check whether input argument is of the same type as the function's argument
- if !is_variadic {
- if fn_arg != reflect.TypeOf(pv.Interface()) && fn_arg.Kind() != reflect.Interface {
- return nil, fmt.Errorf("Function input argument %d of '%s' must be of type %s or *pongo2.Value (not %T).",
- idx, vr.String(), fn_arg.String(), pv.Interface())
- } else {
- // Function's argument has another type, using the interface-value
- parameters = append(parameters, reflect.ValueOf(pv.Interface()))
+ if !isVariadic {
+ if fnArg != reflect.TypeOf(pv.Interface()) && fnArg.Kind() != reflect.Interface {
+ return nil, errors.Errorf("Function input argument %d of '%s' must be of type %s or *pongo2.Value (not %T).",
+ idx, vr.String(), fnArg.String(), pv.Interface())
}
+ // Function's argument has another type, using the interface-value
+ parameters = append(parameters, reflect.ValueOf(pv.Interface()))
} else {
- if fn_arg != reflect.TypeOf(pv.Interface()) && fn_arg.Kind() != reflect.Interface {
- return nil, fmt.Errorf("Function variadic input argument of '%s' must be of type %s or *pongo2.Value (not %T).",
- vr.String(), fn_arg.String(), pv.Interface())
- } else {
- // Function's argument has another type, using the interface-value
- parameters = append(parameters, reflect.ValueOf(pv.Interface()))
+ if fnArg != reflect.TypeOf(pv.Interface()) && fnArg.Kind() != reflect.Interface {
+ return nil, errors.Errorf("Function variadic input argument of '%s' must be of type %s or *pongo2.Value (not %T).",
+ vr.String(), fnArg.String(), pv.Interface())
}
+ // Function's argument has another type, using the interface-value
+ parameters = append(parameters, reflect.ValueOf(pv.Interface()))
}
} else {
// Function's argument is a *pongo2.Value
@@ -377,31 +398,38 @@ func (vr *variableResolver) resolve(ctx *ExecutionContext) (*Value, error) {
}
}
+ // Check if any of the values are invalid
+ for _, p := range parameters {
+ if p.Kind() == reflect.Invalid {
+ return nil, errors.Errorf("Calling a function using an invalid parameter")
+ }
+ }
+
// Call it and get first return parameter back
rv := current.Call(parameters)[0]
- if rv.Type() != reflect.TypeOf(new(Value)) {
+ if rv.Type() != typeOfValuePtr {
current = reflect.ValueOf(rv.Interface())
} else {
// Return the function call value
current = rv.Interface().(*Value).val
- is_safe = rv.Interface().(*Value).safe
+ isSafe = rv.Interface().(*Value).safe
}
}
+
+ if !current.IsValid() {
+ // Value is not valid (e. g. NIL value)
+ return AsValue(nil), nil
+ }
}
- if !current.IsValid() {
- // Value is not valid (e. g. NIL value)
- return AsValue(nil), nil
- }
-
- return &Value{val: current, safe: is_safe}, nil
+ return &Value{val: current, safe: isSafe}, nil
}
func (vr *variableResolver) Evaluate(ctx *ExecutionContext) (*Value, *Error) {
value, err := vr.resolve(ctx)
if err != nil {
- return AsValue(nil), ctx.Error(err.Error(), vr.location_token)
+ return AsValue(nil), ctx.Error(err.Error(), vr.locationToken)
}
return value, nil
}
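The implicit-context injection added to resolve() above hinges on one reflect check: if the callee's first input parameter is `*ExecutionContext`, the resolver prepends an `executionCtxEval` argument. A hedged standalone sketch of that check (the `wantsCtx` helper and the stub `ExecutionContext` are illustrative, not pongo2's actual code):

```go
package main

import (
	"fmt"
	"reflect"
)

// ExecutionContext is a stand-in for pongo2's execution context type.
type ExecutionContext struct{}

var typeOfExecCtxPtr = reflect.TypeOf(new(ExecutionContext))

// wantsCtx reports whether fn is a function whose first input
// parameter is *ExecutionContext, the same test resolve() applies
// before prepending the implicit context argument.
func wantsCtx(fn interface{}) bool {
	t := reflect.TypeOf(fn)
	return t.Kind() == reflect.Func && t.NumIn() > 0 && t.In(0) == typeOfExecCtxPtr
}

func main() {
	fmt.Println(wantsCtx(func(*ExecutionContext, string) string { return "" })) // true
	fmt.Println(wantsCtx(func(string) string { return "" }))                    // false
}
```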
@@ -436,7 +464,7 @@ func (p *Parser) parseVariableOrLiteral() (IEvaluator, *Error) {
t := p.Current()
if t == nil {
- return nil, p.Error("Unexpected EOF, expected a number, string, keyword or identifier.", p.last_token)
+ return nil, p.Error("Unexpected EOF, expected a number, string, keyword or identifier.", p.lastToken)
}
// Is first part a number or a string, there's nothing to resolve (because there's only to return the value then)
@@ -460,26 +488,26 @@ func (p *Parser) parseVariableOrLiteral() (IEvaluator, *Error) {
return nil, p.Error(err.Error(), t)
}
fr := &floatResolver{
- location_token: t,
- val: f,
+ locationToken: t,
+ val: f,
}
return fr, nil
- } else {
- i, err := strconv.Atoi(t.Val)
- if err != nil {
- return nil, p.Error(err.Error(), t)
- }
- nr := &intResolver{
- location_token: t,
- val: i,
- }
- return nr, nil
}
+ i, err := strconv.Atoi(t.Val)
+ if err != nil {
+ return nil, p.Error(err.Error(), t)
+ }
+ nr := &intResolver{
+ locationToken: t,
+ val: i,
+ }
+ return nr, nil
+
case TokenString:
p.Consume()
sr := &stringResolver{
- location_token: t,
- val: t.Val,
+ locationToken: t,
+ val: t.Val,
}
return sr, nil
case TokenKeyword:
@@ -487,14 +515,14 @@ func (p *Parser) parseVariableOrLiteral() (IEvaluator, *Error) {
switch t.Val {
case "true":
br := &boolResolver{
- location_token: t,
- val: true,
+ locationToken: t,
+ val: true,
}
return br, nil
case "false":
br := &boolResolver{
- location_token: t,
- val: false,
+ locationToken: t,
+ val: false,
}
return br, nil
default:
@@ -503,7 +531,7 @@ func (p *Parser) parseVariableOrLiteral() (IEvaluator, *Error) {
}
resolver := &variableResolver{
- location_token: t,
+ locationToken: t,
}
// First part of a variable MUST be an identifier
@@ -551,26 +579,26 @@ variableLoop:
} else {
// EOF
return nil, p.Error("Unexpected EOF, expected either IDENTIFIER or NUMBER after DOT.",
- p.last_token)
+ p.lastToken)
}
} else if p.Match(TokenSymbol, "(") != nil {
// Function call
// FunctionName '(' Comma-separated list of expressions ')'
part := resolver.parts[len(resolver.parts)-1]
- part.is_function_call = true
+ part.isFunctionCall = true
argumentLoop:
for {
if p.Remaining() == 0 {
- return nil, p.Error("Unexpected EOF, expected function call argument list.", p.last_token)
+ return nil, p.Error("Unexpected EOF, expected function call argument list.", p.lastToken)
}
if p.Peek(TokenSymbol, ")") == nil {
// No closing bracket, so we're parsing an expression
- expr_arg, err := p.ParseExpression()
+ exprArg, err := p.ParseExpression()
if err != nil {
return nil, err
}
- part.calling_args = append(part.calling_args, expr_arg)
+ part.callingArgs = append(part.callingArgs, exprArg)
if p.Match(TokenSymbol, ")") != nil {
// If there's a closing bracket after an expression, we will stop parsing the arguments
@@ -601,7 +629,7 @@ variableLoop:
func (p *Parser) parseVariableOrLiteralWithFilter() (*nodeFilteredVariable, *Error) {
v := &nodeFilteredVariable{
- location_token: p.Current(),
+ locationToken: p.Current(),
}
// Parse the variable name
@@ -621,15 +649,13 @@ filterLoop:
}
// Check sandbox filter restriction
- if _, is_banned := p.template.set.bannedFilters[filter.name]; is_banned {
+ if _, isBanned := p.template.set.bannedFilters[filter.name]; isBanned {
return nil, p.Error(fmt.Sprintf("Usage of filter '%s' is not allowed (sandbox restriction active).", filter.name), nil)
}
v.filterChain = append(v.filterChain, filter)
continue filterLoop
-
- return nil, p.Error("This token is not allowed within a variable.", nil)
}
return v, nil
@@ -637,7 +663,7 @@ filterLoop:
func (p *Parser) parseVariableElement() (INode, *Error) {
node := &nodeVariable{
- location_token: p.Current(),
+ locationToken: p.Current(),
}
p.Consume() // consume '{{'
diff --git a/vendor/github.com/juju/errors/LICENSE b/vendor/github.com/juju/errors/LICENSE
new file mode 100644
index 0000000..ade9307
--- /dev/null
+++ b/vendor/github.com/juju/errors/LICENSE
@@ -0,0 +1,191 @@
+All files in this repository are licensed as follows. If you contribute
+to this repository, it is assumed that you license your contribution
+under the same license unless you state otherwise.
+
+All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
+
+This software is licensed under the LGPLv3, included below.
+
+As a special exception to the GNU Lesser General Public License version 3
+("LGPL3"), the copyright holders of this Library give you permission to
+convey to a third party a Combined Work that links statically or dynamically
+to this Library without providing any Minimal Corresponding Source or
+Minimal Application Code as set out in 4d or providing the installation
+information set out in section 4e, provided that you comply with the other
+provisions of LGPL3 and provided that you meet, for the Application the
+terms and conditions of the license(s) which apply to the Application.
+
+Except as stated in this special exception, the provisions of LGPL3 will
+continue to comply in full to this Library. If you modify this Library, you
+may apply this exception to your version of this Library, but you are not
+obliged to do so. If you do not wish to do so, delete this exception
+statement from your version. This exception does not (and cannot) modify any
+license terms which apply to the Application, with which you must still
+comply.
+
+
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+ This version of the GNU Lesser General Public License incorporates
+the terms and conditions of version 3 of the GNU General Public
+License, supplemented by the additional permissions listed below.
+
+ 0. Additional Definitions.
+
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+General Public License, and the "GNU GPL" refers to version 3 of the GNU
+General Public License.
+
+ "The Library" refers to a covered work governed by this License,
+other than an Application or a Combined Work as defined below.
+
+ An "Application" is any work that makes use of an interface provided
+by the Library, but which is not otherwise based on the Library.
+Defining a subclass of a class defined by the Library is deemed a mode
+of using an interface provided by the Library.
+
+ A "Combined Work" is a work produced by combining or linking an
+Application with the Library. The particular version of the Library
+with which the Combined Work was made is also called the "Linked
+Version".
+
+ The "Minimal Corresponding Source" for a Combined Work means the
+Corresponding Source for the Combined Work, excluding any source code
+for portions of the Combined Work that, considered in isolation, are
+based on the Application, and not on the Linked Version.
+
+ The "Corresponding Application Code" for a Combined Work means the
+object code and/or source code for the Application, including any data
+and utility programs needed for reproducing the Combined Work from the
+Application, but excluding the System Libraries of the Combined Work.
+
+ 1. Exception to Section 3 of the GNU GPL.
+
+ You may convey a covered work under sections 3 and 4 of this License
+without being bound by section 3 of the GNU GPL.
+
+ 2. Conveying Modified Versions.
+
+ If you modify a copy of the Library, and, in your modifications, a
+facility refers to a function or data to be supplied by an Application
+that uses the facility (other than as an argument passed when the
+facility is invoked), then you may convey a copy of the modified
+version:
+
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+
+ 3. Object Code Incorporating Material from Library Header Files.
+
+ The object code form of an Application may incorporate material from
+a header file that is part of the Library. You may convey such object
+code under terms of your choice, provided that, if the incorporated
+material is not limited to numerical parameters, data structure
+layouts and accessors, or small macros, inline functions and templates
+(ten or fewer lines in length), you do both of the following:
+
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+
+ 4. Combined Works.
+
+ You may convey a Combined Work under terms of your choice that,
+taken together, effectively do not restrict modification of the
+portions of the Library contained in the Combined Work and reverse
+engineering for debugging such modifications, if you also do each of
+the following:
+
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+
+ d) Do one of the following:
+
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+
+ 5. Combined Libraries.
+
+ You may place library facilities that are a work based on the
+Library side by side in a single library together with other library
+facilities that are not Applications and are not covered by this
+License, and convey such a combined library under terms of your
+choice, if you do both of the following:
+
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+
+ 6. Revised Versions of the GNU Lesser General Public License.
+
+ The Free Software Foundation may publish revised and/or new versions
+of the GNU Lesser General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Library as you received it specifies that a certain numbered version
+of the GNU Lesser General Public License "or any later version"
+applies to it, you have the option of following the terms and
+conditions either of that published version or of any later version
+published by the Free Software Foundation. If the Library as you
+received it does not specify a version number of the GNU Lesser
+General Public License, you may choose any version of the GNU Lesser
+General Public License ever published by the Free Software Foundation.
+
+ If the Library as you received it specifies that a proxy can decide
+whether future versions of the GNU Lesser General Public License shall
+apply, that proxy's public statement of acceptance of any version is
+permanent authorization for you to choose that version for the
+Library.
diff --git a/vendor/github.com/juju/errors/Makefile b/vendor/github.com/juju/errors/Makefile
new file mode 100644
index 0000000..ab7c2e6
--- /dev/null
+++ b/vendor/github.com/juju/errors/Makefile
@@ -0,0 +1,11 @@
+default: check
+
+check:
+ go test && go test -compiler gccgo
+
+docs:
+ godoc2md github.com/juju/errors > README.md
+ sed -i 's|\[godoc-link-here\]|[![GoDoc](https://godoc.org/github.com/juju/errors?status.svg)](https://godoc.org/github.com/juju/errors)|' README.md
+
+
+.PHONY: default check docs
diff --git a/vendor/github.com/juju/errors/README.md b/vendor/github.com/juju/errors/README.md
new file mode 100644
index 0000000..782a6f4
--- /dev/null
+++ b/vendor/github.com/juju/errors/README.md
@@ -0,0 +1,543 @@
+
+# errors
+ import "github.com/juju/errors"
+
+[![GoDoc](https://godoc.org/github.com/juju/errors?status.svg)](https://godoc.org/github.com/juju/errors)
+
+The juju/errors package provides an easy way to annotate errors without
+losing the original error context.
+
+The exported `New` and `Errorf` functions are designed to replace the
+`errors.New` and `fmt.Errorf` functions respectively. The same underlying
+error is there, but the package also records the location at which the error
+was created.
+
+A primary use case for this library is to add extra context any time an
+error is returned from a function.
+
+
+ if err := SomeFunc(); err != nil {
+ return err
+ }
+
+This instead becomes:
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Trace(err)
+ }
+
+which just records the file and line number of the Trace call, or
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Annotate(err, "more context")
+ }
+
+which also adds an annotation to the error.
+
+When you want to check to see if an error is of a particular type, a helper
+function is normally exported by the package that returned the error, like the
+`os` package does. The underlying cause of the error is available using the
+`Cause` function.
+
+
+ os.IsNotExist(errors.Cause(err))
+
+The result of the `Error()` call on an annotated error is the annotations joined
+with colons, then the result of the `Error()` method for the underlying error
+that was the cause.
+
+
+ err := errors.Errorf("original")
+ err = errors.Annotatef(err, "context")
+ err = errors.Annotatef(err, "more context")
+ err.Error() -> "more context: context: original"
+
+Obviously recording the file, line, and function is not very useful if you
+cannot get them back out again.
+
+
+ errors.ErrorStack(err)
+
+will return something like:
+
+
+ first error
+ github.com/juju/errors/annotation_test.go:193:
+ github.com/juju/errors/annotation_test.go:194: annotation
+ github.com/juju/errors/annotation_test.go:195:
+ github.com/juju/errors/annotation_test.go:196: more context
+ github.com/juju/errors/annotation_test.go:197:
+
+The first error was generated by an external system, so there was no location
+associated. The second, fourth, and last lines were generated with Trace calls,
+and the other two through Annotate.
+
+Sometimes when responding to an error you want to return a more specific error
+for the situation.
+
+
+ if err := FindField(field); err != nil {
+ return errors.Wrap(err, errors.NotFoundf(field))
+ }
+
+This returns an error where the complete error stack is still available, and
+`errors.Cause()` will return the `NotFound` error.
+
+
+
+
+
+
+## func AlreadyExistsf
+``` go
+func AlreadyExistsf(format string, args ...interface{}) error
+```
+AlreadyExistsf returns an error which satisfies IsAlreadyExists().
+
+
+## func Annotate
+``` go
+func Annotate(other error, message string) error
+```
+Annotate is used to add extra context to an existing error. The location of
+the Annotate call is recorded with the annotations. The file, line and
+function are also recorded.
+
+For example:
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Annotate(err, "failed to frombulate")
+ }
+
+
+## func Annotatef
+``` go
+func Annotatef(other error, format string, args ...interface{}) error
+```
+Annotatef is used to add extra context to an existing error. The location of
+the Annotate call is recorded with the annotations. The file, line and
+function are also recorded.
+
+For example:
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Annotatef(err, "failed to frombulate the %s", arg)
+ }
+
+
+## func Cause
+``` go
+func Cause(err error) error
+```
+Cause returns the cause of the given error. This will be either the
+original error, or the result of a Wrap or Mask call.
+
+Cause is the usual way to diagnose errors that may have been wrapped by
+the other errors functions.
+
+
+## func DeferredAnnotatef
+``` go
+func DeferredAnnotatef(err *error, format string, args ...interface{})
+```
+DeferredAnnotatef annotates the given error (when it is not nil) with the given
+format string and arguments (like fmt.Sprintf). If *err is nil, DeferredAnnotatef
+does nothing. This method is used in a defer statement in order to annotate any
+resulting error with the same message.
+
+For example:
+
+
+ defer DeferredAnnotatef(&err, "failed to frombulate the %s", arg)
+
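The documented behavior of DeferredAnnotatef (annotate on every return path, do nothing when *err is nil) can be approximated with the standard library's `%w` wrapping. This is a hedged sketch, not the package's implementation, and `frombulate` is a made-up example function:

```go
package main

import (
	"errors"
	"fmt"
)

// deferredAnnotatef mimics the documented behavior: annotate *err in a
// defer so every return path gets the same context; a nil *err is left
// untouched.
func deferredAnnotatef(err *error, format string, args ...interface{}) {
	if *err == nil {
		return
	}
	*err = fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), *err)
}

// frombulate is a hypothetical function with a named return, so the
// deferred annotation applies to whatever error it returns.
func frombulate(name string) (err error) {
	defer deferredAnnotatef(&err, "failed to frombulate the %s", name)
	return errors.New("boom")
}

func main() {
	fmt.Println(frombulate("widget")) // failed to frombulate the widget: boom
}
```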
+
+## func Details
+``` go
+func Details(err error) string
+```
+Details returns information about the stack of errors wrapped by err, in
+the format:
+
+
+ [{filename:99: error one} {otherfile:55: cause of error one}]
+
+This is a terse alternative to ErrorStack as it returns a single line.
+
+
+## func ErrorStack
+``` go
+func ErrorStack(err error) string
+```
+ErrorStack returns a string representation of the annotated error. If the
+error passed as the parameter is not an annotated error, the result is
+simply the result of the Error() method on that error.
+
+If the error is an annotated error, a multi-line string is returned where
+each line represents one entry in the annotation stack. The full filename
+from the call stack is used in the output.
+
+
+ first error
+ github.com/juju/errors/annotation_test.go:193:
+ github.com/juju/errors/annotation_test.go:194: annotation
+ github.com/juju/errors/annotation_test.go:195:
+ github.com/juju/errors/annotation_test.go:196: more context
+ github.com/juju/errors/annotation_test.go:197:
+
+
+## func Errorf
+``` go
+func Errorf(format string, args ...interface{}) error
+```
+Errorf creates a new annotated error and records the location that the
+error is created. This should be a drop-in replacement for fmt.Errorf.
+
+For example:
+
+
+ return errors.Errorf("validation failed: %s", message)
+
+
+## func IsAlreadyExists
+``` go
+func IsAlreadyExists(err error) bool
+```
+IsAlreadyExists reports whether the error was created with
+AlreadyExistsf() or NewAlreadyExists().
+
+
+## func IsNotFound
+``` go
+func IsNotFound(err error) bool
+```
+IsNotFound reports whether err was created with NotFoundf() or
+NewNotFound().
+
+
+## func IsNotImplemented
+``` go
+func IsNotImplemented(err error) bool
+```
+IsNotImplemented reports whether err was created with
+NotImplementedf() or NewNotImplemented().
+
+
+## func IsNotSupported
+``` go
+func IsNotSupported(err error) bool
+```
+IsNotSupported reports whether the error was created with
+NotSupportedf() or NewNotSupported().
+
+
+## func IsNotValid
+``` go
+func IsNotValid(err error) bool
+```
+IsNotValid reports whether the error was created with NotValidf() or
+NewNotValid().
+
+
+## func IsUnauthorized
+``` go
+func IsUnauthorized(err error) bool
+```
+IsUnauthorized reports whether err was created with Unauthorizedf() or
+NewUnauthorized().
+
+
+## func Mask
+``` go
+func Mask(other error) error
+```
+Mask hides the underlying error type, and records the location of the masking.
+
+
+## func Maskf
+``` go
+func Maskf(other error, format string, args ...interface{}) error
+```
+Maskf masks the given error with the given format string and arguments (like
+fmt.Sprintf), returning a new error that maintains the error stack, but
+hides the underlying error type. The error string still contains the full
+annotations. If you want to hide the annotations, call Wrap.
+
+
+## func New
+``` go
+func New(message string) error
+```
+New is a drop-in replacement for the standard library errors package that
+records the location at which the error is created.
+
+For example:
+
+
+ return errors.New("validation failed")
+
+
+## func NewAlreadyExists
+``` go
+func NewAlreadyExists(err error, msg string) error
+```
+NewAlreadyExists returns an error which wraps err and satisfies
+IsAlreadyExists().
+
+
+## func NewNotFound
+``` go
+func NewNotFound(err error, msg string) error
+```
+NewNotFound returns an error which wraps err and satisfies
+IsNotFound().
+
+
+## func NewNotImplemented
+``` go
+func NewNotImplemented(err error, msg string) error
+```
+NewNotImplemented returns an error which wraps err and satisfies
+IsNotImplemented().
+
+
+## func NewNotSupported
+``` go
+func NewNotSupported(err error, msg string) error
+```
+NewNotSupported returns an error which wraps err and satisfies
+IsNotSupported().
+
+
+## func NewNotValid
+``` go
+func NewNotValid(err error, msg string) error
+```
+NewNotValid returns an error which wraps err and satisfies IsNotValid().
+
+
+## func NewUnauthorized
+``` go
+func NewUnauthorized(err error, msg string) error
+```
+NewUnauthorized returns an error which wraps err and satisfies
+IsUnauthorized().
+
+
+## func NotFoundf
+``` go
+func NotFoundf(format string, args ...interface{}) error
+```
+NotFoundf returns an error which satisfies IsNotFound().
+
+
+## func NotImplementedf
+``` go
+func NotImplementedf(format string, args ...interface{}) error
+```
+NotImplementedf returns an error which satisfies IsNotImplemented().
+
+
+## func NotSupportedf
+``` go
+func NotSupportedf(format string, args ...interface{}) error
+```
+NotSupportedf returns an error which satisfies IsNotSupported().
+
+
+## func NotValidf
+``` go
+func NotValidf(format string, args ...interface{}) error
+```
+NotValidf returns an error which satisfies IsNotValid().
+
+
+## func Trace
+``` go
+func Trace(other error) error
+```
+Trace adds the location of the Trace call to the stack. The Cause of the
+resulting error is the same as the error parameter. If the other error is
+nil, the result will be nil.
+
+For example:
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Trace(err)
+ }
+
+
+## func Unauthorizedf
+``` go
+func Unauthorizedf(format string, args ...interface{}) error
+```
+Unauthorizedf returns an error which satisfies IsUnauthorized().
+
+
+## func Forbiddenf
+``` go
+func Forbiddenf(format string, args ...interface{}) error
+```
+Forbiddenf returns an error which satisfies IsForbidden().
+
+
+## func Wrap
+``` go
+func Wrap(other, newDescriptive error) error
+```
+Wrap changes the Cause of the error. The location of the Wrap call is also
+stored in the error stack.
+
+For example:
+
+
+ if err := SomeFunc(); err != nil {
+ newErr := &packageError{"more context", private_value}
+ return errors.Wrap(err, newErr)
+ }
+
+
+## func Wrapf
+``` go
+func Wrapf(other, newDescriptive error, format string, args ...interface{}) error
+```
+Wrapf changes the Cause of the error, and adds an annotation. The location
+of the Wrap call is also stored in the error stack.
+
+For example:
+
+
+ if err := SomeFunc(); err != nil {
+ return errors.Wrapf(err, simpleErrorType, "invalid value %q", value)
+ }
+
+
+
+## type Err
+``` go
+type Err struct {
+ // contains filtered or unexported fields
+}
+```
+Err holds a description of an error along with information about
+where the error was created.
+
+It may be embedded in custom error types to add extra information that
+this errors package can understand.
+
+
+
+
+
+
+
+
+
+### func NewErr
+``` go
+func NewErr(format string, args ...interface{}) Err
+```
+NewErr is used to return an Err for the purpose of embedding in other
+structures. The location is not specified, and needs to be set with a call
+to SetLocation.
+
+For example:
+
+
+ type FooError struct {
+ errors.Err
+ code int
+ }
+
+ func NewFooError(code int) error {
+ err := &FooError{errors.NewErr("foo"), code}
+ err.SetLocation(1)
+ return err
+ }
+
+
+
+
+### func (\*Err) Cause
+``` go
+func (e *Err) Cause() error
+```
+The Cause of an error is the most recent error in the error stack that
+meets one of these criteria: the original error that was raised; the new
+error that was passed into the Wrap function; the most recently masked
+error; or nil if the error itself is considered the Cause. Normally this
+method is not invoked directly, but instead through the standalone Cause
+function.
+
+
+
+### func (\*Err) Error
+``` go
+func (e *Err) Error() string
+```
+Error implements error.Error.
+
+
+
+### func (\*Err) Location
+``` go
+func (e *Err) Location() (filename string, line int)
+```
+Location is the file and line of where the error was most recently
+created or annotated.
+
+
+
+### func (\*Err) Message
+``` go
+func (e *Err) Message() string
+```
+Message returns the message stored with the most recent location. This is
+the empty string if the most recent call was Trace, or the message stored
+with Annotate or Mask.
+
+
+
+### func (\*Err) SetLocation
+``` go
+func (e *Err) SetLocation(callDepth int)
+```
+SetLocation records the source location of the error at callDepth stack
+frames above the call.
+
+
+
+### func (\*Err) StackTrace
+``` go
+func (e *Err) StackTrace() []string
+```
+StackTrace returns one string for each location recorded in the stack of
+errors. The first value is the originating error, with a line for each
+other annotation or tracing of the error.
+
+
+
+### func (\*Err) Underlying
+``` go
+func (e *Err) Underlying() error
+```
+Underlying returns the previous error in the error stack, if any. A client
+should not ever really call this method. It is used to build the error
+stack and should not be introspected by client calls. Or more
+specifically, clients should not depend on anything but the `Cause` of an
+error.
+
+
+
+
+
+
+
+
+
+- - -
+Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
\ No newline at end of file
diff --git a/vendor/github.com/juju/errors/doc.go b/vendor/github.com/juju/errors/doc.go
new file mode 100644
index 0000000..35b119a
--- /dev/null
+++ b/vendor/github.com/juju/errors/doc.go
@@ -0,0 +1,81 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+/*
+[godoc-link-here]
+
+The juju/errors package provides an easy way to annotate errors without
+losing the original error context.
+
+The exported `New` and `Errorf` functions are designed to replace the
+`errors.New` and `fmt.Errorf` functions respectively. The same underlying
+error is there, but the package also records the location at which the error
+was created.
+
+A primary use case for this library is to add extra context any time an
+error is returned from a function.
+
+ if err := SomeFunc(); err != nil {
+ return err
+ }
+
+This instead becomes:
+
+ if err := SomeFunc(); err != nil {
+ return errors.Trace(err)
+ }
+
+which just records the file and line number of the Trace call, or
+
+ if err := SomeFunc(); err != nil {
+ return errors.Annotate(err, "more context")
+ }
+
+which also adds an annotation to the error.
+
+When you want to check to see if an error is of a particular type, a helper
+function is normally exported by the package that returned the error, like the
+`os` package does. The underlying cause of the error is available using the
+`Cause` function.
+
+ os.IsNotExist(errors.Cause(err))
+
+The result of the `Error()` call on an annotated error is the annotations joined
+with colons, then the result of the `Error()` method for the underlying error
+that was the cause.
+
+ err := errors.Errorf("original")
+ err = errors.Annotatef(err, "context")
+ err = errors.Annotatef(err, "more context")
+ err.Error() -> "more context: context: original"
+
+Obviously recording the file, line, and function is not very useful if you
+cannot get them back out again.
+
+ errors.ErrorStack(err)
+
+will return something like:
+
+ first error
+ github.com/juju/errors/annotation_test.go:193:
+ github.com/juju/errors/annotation_test.go:194: annotation
+ github.com/juju/errors/annotation_test.go:195:
+ github.com/juju/errors/annotation_test.go:196: more context
+ github.com/juju/errors/annotation_test.go:197:
+
+The first error was generated by an external system, so there was no location
+associated. The second, fourth, and last lines were generated with Trace calls,
+and the other two through Annotate.
+
+Sometimes when responding to an error you want to return a more specific error
+for the situation.
+
+ if err := FindField(field); err != nil {
+ return errors.Wrap(err, errors.NotFoundf(field))
+ }
+
+This returns an error where the complete error stack is still available, and
+`errors.Cause()` will return the `NotFound` error.
+
+*/
+package errors
diff --git a/vendor/github.com/juju/errors/error.go b/vendor/github.com/juju/errors/error.go
new file mode 100644
index 0000000..b7df735
--- /dev/null
+++ b/vendor/github.com/juju/errors/error.go
@@ -0,0 +1,172 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors
+
+import (
+ "fmt"
+ "reflect"
+ "runtime"
+)
+
+// Err holds a description of an error along with information about
+// where the error was created.
+//
+// It may be embedded in custom error types to add extra information that
+// this errors package can understand.
+type Err struct {
+ // message holds an annotation of the error.
+ message string
+
+ // cause holds the cause of the error as returned
+ // by the Cause method.
+ cause error
+
+ // previous holds the previous error in the error stack, if any.
+ previous error
+
+ // file and line hold the source code location where the error was
+ // created.
+ file string
+ line int
+}
+
+// NewErr is used to return an Err for the purpose of embedding in other
+// structures. The location is not specified, and needs to be set with a call
+// to SetLocation.
+//
+// For example:
+// type FooError struct {
+// errors.Err
+// code int
+// }
+//
+// func NewFooError(code int) error {
+// err := &FooError{errors.NewErr("foo"), code}
+// err.SetLocation(1)
+// return err
+// }
+func NewErr(format string, args ...interface{}) Err {
+ return Err{
+ message: fmt.Sprintf(format, args...),
+ }
+}
+
+// NewErrWithCause is used to return an Err with a given cause, for the purpose of embedding in other
+// structures. The location is not specified, and needs to be set with a call
+// to SetLocation.
+//
+// For example:
+// type FooError struct {
+// errors.Err
+// code int
+// }
+//
+// func (e *FooError) Annotate(format string, args ...interface{}) error {
+// err := &FooError{errors.NewErrWithCause(e.Err, format, args...), e.code}
+// err.SetLocation(1)
+// return err
+// })
+func NewErrWithCause(other error, format string, args ...interface{}) Err {
+ return Err{
+ message: fmt.Sprintf(format, args...),
+ cause: Cause(other),
+ previous: other,
+ }
+}
+
+// Location is the file and line of where the error was most recently
+// created or annotated.
+func (e *Err) Location() (filename string, line int) {
+ return e.file, e.line
+}
+
+// Underlying returns the previous error in the error stack, if any. A client
+// should not ever really call this method. It is used to build the error
+// stack and should not be introspected by client calls. Or more
+// specifically, clients should not depend on anything but the `Cause` of an
+// error.
+func (e *Err) Underlying() error {
+ return e.previous
+}
+
+// The Cause of an error is the most recent error in the error stack that
+// meets one of these criteria: the original error that was raised; the new
+// error that was passed into the Wrap function; the most recently masked
+// error; or nil if the error itself is considered the Cause. Normally this
+// method is not invoked directly, but instead through the standalone Cause
+// function.
+func (e *Err) Cause() error {
+ return e.cause
+}
+
+// Message returns the message stored with the most recent location. This is
+// the empty string if the most recent call was Trace, or the message stored
+// with Annotate or Mask.
+func (e *Err) Message() string {
+ return e.message
+}
+
+// Error implements error.Error.
+func (e *Err) Error() string {
+ // We want to walk up the stack of errors showing the annotations
+ // as long as the cause is the same.
+ err := e.previous
+ if !sameError(Cause(err), e.cause) && e.cause != nil {
+ err = e.cause
+ }
+ switch {
+ case err == nil:
+ return e.message
+ case e.message == "":
+ return err.Error()
+ }
+ return fmt.Sprintf("%s: %v", e.message, err)
+}
+
+// Format implements fmt.Formatter
+// When printing errors with %+v it also prints the stack trace.
+// %#v unsurprisingly will print the real underlying type.
+func (e *Err) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ switch {
+ case s.Flag('+'):
+ fmt.Fprintf(s, "%s", ErrorStack(e))
+ return
+ case s.Flag('#'):
+ // avoid infinite recursion by wrapping e into a type
+ // that doesn't implement Formatter.
+ fmt.Fprintf(s, "%#v", (*unformatter)(e))
+ return
+ }
+ fallthrough
+ case 's':
+ fmt.Fprintf(s, "%s", e.Error())
+ }
+}
+
+// helper for Format
+type unformatter Err
+
+func (unformatter) Format() { /* break the fmt.Formatter interface */ }
+
+// SetLocation records the source location of the error at callDepth stack
+// frames above the call.
+func (e *Err) SetLocation(callDepth int) {
+ _, file, line, _ := runtime.Caller(callDepth + 1)
+ e.file = trimGoPath(file)
+ e.line = line
+}
+
+// StackTrace returns one string for each location recorded in the stack of
+// errors. The first value is the originating error, with a line for each
+// other annotation or tracing of the error.
+func (e *Err) StackTrace() []string {
+ return errorStack(e)
+}
+
+// Ideally we'd have a way to check identity, but deep equals will do.
+func sameError(e1, e2 error) bool {
+ return reflect.DeepEqual(e1, e2)
+}
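The `Err.Error()` method above builds the "annotation: previous" chain by falling through empty messages. A minimal standalone sketch of that joining rule (simplified: it omits the cause-comparison step the real method performs, and `errNode` is an illustrative name, not part of the package):

```go
package main

import "fmt"

// errNode mirrors the shape of Err.Error(): show this node's message,
// then the previous error, joined by ": "; an empty message (as left
// by Trace) falls through to the previous error's string.
type errNode struct {
	message  string
	previous error
}

func (e *errNode) Error() string {
	switch {
	case e.previous == nil:
		return e.message
	case e.message == "":
		return e.previous.Error()
	}
	return fmt.Sprintf("%s: %v", e.message, e.previous)
}

func main() {
	first := &errNode{message: "first error"}
	annotated := &errNode{message: "annotation", previous: first}
	traced := &errNode{previous: annotated} // a trace adds no message
	fmt.Println(traced.Error())             // annotation: first error
}
```

This is why repeated `Trace` calls in the tests below leave the error string unchanged while each `Annotate` prepends its message.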
diff --git a/vendor/github.com/juju/errors/error_test.go b/vendor/github.com/juju/errors/error_test.go
new file mode 100644
index 0000000..ba9b718
--- /dev/null
+++ b/vendor/github.com/juju/errors/error_test.go
@@ -0,0 +1,178 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ "fmt"
+ "runtime"
+
+ jc "github.com/juju/testing/checkers"
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/errors"
+)
+
+type errorsSuite struct{}
+
+var _ = gc.Suite(&errorsSuite{})
+
+var someErr = errors.New("some error") //err varSomeErr
+
+func (*errorsSuite) TestErrorString(c *gc.C) {
+ for i, test := range []struct {
+ message string
+ generator func() error
+ expected string
+ }{
+ {
+ message: "uncomparable errors",
+ generator: func() error {
+ err := errors.Annotatef(newNonComparableError("uncomparable"), "annotation")
+ return errors.Annotatef(err, "another")
+ },
+ expected: "another: annotation: uncomparable",
+ }, {
+ message: "Errorf",
+ generator: func() error {
+ return errors.Errorf("first error")
+ },
+ expected: "first error",
+ }, {
+ message: "annotated error",
+ generator: func() error {
+ err := errors.Errorf("first error")
+ return errors.Annotatef(err, "annotation")
+ },
+ expected: "annotation: first error",
+ }, {
+ message: "test annotation format",
+ generator: func() error {
+ err := errors.Errorf("first %s", "error")
+ return errors.Annotatef(err, "%s", "annotation")
+ },
+ expected: "annotation: first error",
+ }, {
+ message: "wrapped error",
+ generator: func() error {
+ err := newError("first error")
+ return errors.Wrap(err, newError("detailed error"))
+ },
+ expected: "detailed error",
+ }, {
+ message: "wrapped annotated error",
+ generator: func() error {
+ err := errors.Errorf("first error")
+ err = errors.Annotatef(err, "annotated")
+ return errors.Wrap(err, fmt.Errorf("detailed error"))
+ },
+ expected: "detailed error",
+ }, {
+ message: "annotated wrapped error",
+ generator: func() error {
+ err := errors.Errorf("first error")
+ err = errors.Wrap(err, fmt.Errorf("detailed error"))
+ return errors.Annotatef(err, "annotated")
+ },
+ expected: "annotated: detailed error",
+ }, {
+ message: "traced, and annotated",
+ generator: func() error {
+ err := errors.New("first error")
+ err = errors.Trace(err)
+ err = errors.Annotate(err, "some context")
+ err = errors.Trace(err)
+ err = errors.Annotate(err, "more context")
+ return errors.Trace(err)
+ },
+ expected: "more context: some context: first error",
+ }, {
+ message: "traced, and annotated, masked and annotated",
+ generator: func() error {
+ err := errors.New("first error")
+ err = errors.Trace(err)
+ err = errors.Annotate(err, "some context")
+ err = errors.Maskf(err, "masked")
+ err = errors.Annotate(err, "more context")
+ return errors.Trace(err)
+ },
+ expected: "more context: masked: some context: first error",
+ },
+ } {
+ c.Logf("%v: %s", i, test.message)
+ err := test.generator()
+ ok := c.Check(err.Error(), gc.Equals, test.expected)
+ if !ok {
+ c.Logf("%#v", test.generator())
+ }
+ }
+}
+
+type embed struct {
+ errors.Err
+}
+
+func newEmbed(format string, args ...interface{}) *embed {
+ err := &embed{errors.NewErr(format, args...)}
+ err.SetLocation(1)
+ return err
+}
+
+func (*errorsSuite) TestNewErr(c *gc.C) {
+ if runtime.Compiler == "gccgo" {
+ c.Skip("gccgo can't determine the location")
+ }
+ err := newEmbed("testing %d", 42) //err embedErr
+ c.Assert(err.Error(), gc.Equals, "testing 42")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["embedErr"].String())
+}
+
+func newEmbedWithCause(other error, format string, args ...interface{}) *embed {
+ err := &embed{errors.NewErrWithCause(other, format, args...)}
+ err.SetLocation(1)
+ return err
+}
+
+func (*errorsSuite) TestNewErrWithCause(c *gc.C) {
+ if runtime.Compiler == "gccgo" {
+ c.Skip("gccgo can't determine the location")
+ }
+ causeErr := fmt.Errorf("external error")
+ err := newEmbedWithCause(causeErr, "testing %d", 43) //err embedCause
+ c.Assert(err.Error(), gc.Equals, "testing 43: external error")
+ c.Assert(errors.Cause(err), gc.Equals, causeErr)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["embedCause"].String())
+}
+
+var _ error = (*embed)(nil)
+
+// This is an uncomparable error type: a struct value (rather than a pointer)
+// whose slice field makes its values unusable with ==.
+type error_ struct {
+ info string
+ slice []string
+}
+
+// Create a non-comparable error
+func newNonComparableError(message string) error {
+ return error_{info: message}
+}
+
+func (e error_) Error() string {
+ return e.info
+}
+
+func newError(message string) error {
+ return testError{message}
+}
+
+// The testError is a value type error for ease of seeing results
+// when the test fails.
+type testError struct {
+ message string
+}
+
+func (e testError) Error() string {
+ return e.message
+}
diff --git a/vendor/github.com/juju/errors/errortypes.go b/vendor/github.com/juju/errors/errortypes.go
new file mode 100644
index 0000000..9b731c4
--- /dev/null
+++ b/vendor/github.com/juju/errors/errortypes.go
@@ -0,0 +1,309 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors
+
+import (
+ "fmt"
+)
+
+// wrap is a helper to construct an Err.
+func wrap(err error, format, suffix string, args ...interface{}) Err {
+ newErr := Err{
+ message: fmt.Sprintf(format+suffix, args...),
+ previous: err,
+ }
+ newErr.SetLocation(2)
+ return newErr
+}
+
+// notFound represents an error when something has not been found.
+type notFound struct {
+ Err
+}
+
+// NotFoundf returns an error which satisfies IsNotFound().
+func NotFoundf(format string, args ...interface{}) error {
+ return ¬Found{wrap(nil, format, " not found", args...)}
+}
+
+// NewNotFound returns an error which wraps err that satisfies
+// IsNotFound().
+func NewNotFound(err error, msg string) error {
+ return ¬Found{wrap(err, msg, "")}
+}
+
+// IsNotFound reports whether err was created with NotFoundf() or
+// NewNotFound().
+func IsNotFound(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notFound)
+ return ok
+}
+
+// userNotFound represents an error when a nonexistent user is looked up.
+type userNotFound struct {
+ Err
+}
+
+// UserNotFoundf returns an error which satisfies IsUserNotFound().
+func UserNotFoundf(format string, args ...interface{}) error {
+ return &userNotFound{wrap(nil, format, " user not found", args...)}
+}
+
+// NewUserNotFound returns an error which wraps err and satisfies
+// IsUserNotFound().
+func NewUserNotFound(err error, msg string) error {
+ return &userNotFound{wrap(err, msg, "")}
+}
+
+// IsUserNotFound reports whether err was created with UserNotFoundf() or
+// NewUserNotFound().
+func IsUserNotFound(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*userNotFound)
+ return ok
+}
+
+// unauthorized represents an error when an operation is unauthorized.
+type unauthorized struct {
+ Err
+}
+
+// Unauthorizedf returns an error which satisfies IsUnauthorized().
+func Unauthorizedf(format string, args ...interface{}) error {
+ return &unauthorized{wrap(nil, format, "", args...)}
+}
+
+// NewUnauthorized returns an error which wraps err and satisfies
+// IsUnauthorized().
+func NewUnauthorized(err error, msg string) error {
+ return &unauthorized{wrap(err, msg, "")}
+}
+
+// IsUnauthorized reports whether err was created with Unauthorizedf() or
+// NewUnauthorized().
+func IsUnauthorized(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*unauthorized)
+ return ok
+}
+
+// notImplemented represents an error when something is not
+// implemented.
+type notImplemented struct {
+ Err
+}
+
+// NotImplementedf returns an error which satisfies IsNotImplemented().
+func NotImplementedf(format string, args ...interface{}) error {
+ return ¬Implemented{wrap(nil, format, " not implemented", args...)}
+}
+
+// NewNotImplemented returns an error which wraps err and satisfies
+// IsNotImplemented().
+func NewNotImplemented(err error, msg string) error {
+ return ¬Implemented{wrap(err, msg, "")}
+}
+
+// IsNotImplemented reports whether err was created with
+// NotImplementedf() or NewNotImplemented().
+func IsNotImplemented(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notImplemented)
+ return ok
+}
+
+// alreadyExists represents an error when something already exists.
+type alreadyExists struct {
+ Err
+}
+
+// AlreadyExistsf returns an error which satisfies IsAlreadyExists().
+func AlreadyExistsf(format string, args ...interface{}) error {
+ return &alreadyExists{wrap(nil, format, " already exists", args...)}
+}
+
+// NewAlreadyExists returns an error which wraps err and satisfies
+// IsAlreadyExists().
+func NewAlreadyExists(err error, msg string) error {
+ return &alreadyExists{wrap(err, msg, "")}
+}
+
+// IsAlreadyExists reports whether the error was created with
+// AlreadyExistsf() or NewAlreadyExists().
+func IsAlreadyExists(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*alreadyExists)
+ return ok
+}
+
+// notSupported represents an error when something is not supported.
+type notSupported struct {
+ Err
+}
+
+// NotSupportedf returns an error which satisfies IsNotSupported().
+func NotSupportedf(format string, args ...interface{}) error {
+ return ¬Supported{wrap(nil, format, " not supported", args...)}
+}
+
+// NewNotSupported returns an error which wraps err and satisfies
+// IsNotSupported().
+func NewNotSupported(err error, msg string) error {
+ return ¬Supported{wrap(err, msg, "")}
+}
+
+// IsNotSupported reports whether the error was created with
+// NotSupportedf() or NewNotSupported().
+func IsNotSupported(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notSupported)
+ return ok
+}
+
+// notValid represents an error when something is not valid.
+type notValid struct {
+ Err
+}
+
+// NotValidf returns an error which satisfies IsNotValid().
+func NotValidf(format string, args ...interface{}) error {
+ return ¬Valid{wrap(nil, format, " not valid", args...)}
+}
+
+// NewNotValid returns an error which wraps err and satisfies IsNotValid().
+func NewNotValid(err error, msg string) error {
+ return ¬Valid{wrap(err, msg, "")}
+}
+
+// IsNotValid reports whether the error was created with NotValidf() or
+// NewNotValid().
+func IsNotValid(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notValid)
+ return ok
+}
+
+// notProvisioned represents an error when something is not yet provisioned.
+type notProvisioned struct {
+ Err
+}
+
+// NotProvisionedf returns an error which satisfies IsNotProvisioned().
+func NotProvisionedf(format string, args ...interface{}) error {
+ return ¬Provisioned{wrap(nil, format, " not provisioned", args...)}
+}
+
+// NewNotProvisioned returns an error which wraps err that satisfies
+// IsNotProvisioned().
+func NewNotProvisioned(err error, msg string) error {
+ return ¬Provisioned{wrap(err, msg, "")}
+}
+
+// IsNotProvisioned reports whether err was created with NotProvisionedf() or
+// NewNotProvisioned().
+func IsNotProvisioned(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notProvisioned)
+ return ok
+}
+
+// notAssigned represents an error when something is not yet assigned to
+// something else.
+type notAssigned struct {
+ Err
+}
+
+// NotAssignedf returns an error which satisfies IsNotAssigned().
+func NotAssignedf(format string, args ...interface{}) error {
+ return ¬Assigned{wrap(nil, format, " not assigned", args...)}
+}
+
+// NewNotAssigned returns an error which wraps err that satisfies
+// IsNotAssigned().
+func NewNotAssigned(err error, msg string) error {
+ return ¬Assigned{wrap(err, msg, "")}
+}
+
+// IsNotAssigned reports whether err was created with NotAssignedf() or
+// NewNotAssigned().
+func IsNotAssigned(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*notAssigned)
+ return ok
+}
+
+// badRequest represents an error when a request has bad parameters.
+type badRequest struct {
+ Err
+}
+
+// BadRequestf returns an error which satisfies IsBadRequest().
+func BadRequestf(format string, args ...interface{}) error {
+ return &badRequest{wrap(nil, format, "", args...)}
+}
+
+// NewBadRequest returns an error which wraps err that satisfies
+// IsBadRequest().
+func NewBadRequest(err error, msg string) error {
+ return &badRequest{wrap(err, msg, "")}
+}
+
+// IsBadRequest reports whether err was created with BadRequestf() or
+// NewBadRequest().
+func IsBadRequest(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*badRequest)
+ return ok
+}
+
+// methodNotAllowed represents an error when an HTTP request
+// is made with an inappropriate method.
+type methodNotAllowed struct {
+ Err
+}
+
+// MethodNotAllowedf returns an error which satisfies IsMethodNotAllowed().
+func MethodNotAllowedf(format string, args ...interface{}) error {
+ return &methodNotAllowed{wrap(nil, format, "", args...)}
+}
+
+// NewMethodNotAllowed returns an error which wraps err that satisfies
+// IsMethodNotAllowed().
+func NewMethodNotAllowed(err error, msg string) error {
+ return &methodNotAllowed{wrap(err, msg, "")}
+}
+
+// IsMethodNotAllowed reports whether err was created with MethodNotAllowedf() or
+// NewMethodNotAllowed().
+func IsMethodNotAllowed(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*methodNotAllowed)
+ return ok
+}
+
+// forbidden represents an error when a request cannot be completed because of
+// missing privileges.
+type forbidden struct {
+ Err
+}
+
+// Forbiddenf returns an error which satisfies IsForbidden().
+func Forbiddenf(format string, args ...interface{}) error {
+ return &forbidden{wrap(nil, format, "", args...)}
+}
+
+// NewForbidden returns an error which wraps err that satisfies
+// IsForbidden().
+func NewForbidden(err error, msg string) error {
+ return &forbidden{wrap(err, msg, "")}
+}
+
+// IsForbidden reports whether err was created with Forbiddenf() or
+// NewForbidden().
+func IsForbidden(err error) bool {
+ err = Cause(err)
+ _, ok := err.(*forbidden)
+ return ok
+}
diff --git a/vendor/github.com/juju/errors/errortypes_test.go b/vendor/github.com/juju/errors/errortypes_test.go
new file mode 100644
index 0000000..bb0ed36
--- /dev/null
+++ b/vendor/github.com/juju/errors/errortypes_test.go
@@ -0,0 +1,174 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ stderrors "errors"
+ "fmt"
+ "reflect"
+ "runtime"
+
+ "github.com/juju/errors"
+ jc "github.com/juju/testing/checkers"
+ gc "gopkg.in/check.v1"
+)
+
+// errorInfo holds information about a single error type: a satisfier
+// function, wrapping and variable arguments constructors and message
+// suffix.
+type errorInfo struct {
+ satisfier func(error) bool
+ argsConstructor func(string, ...interface{}) error
+ wrapConstructor func(error, string) error
+ suffix string
+}
+
+// allErrors holds information for all defined errors. When adding new
+// errors, add them here as well to include them in tests.
+var allErrors = []*errorInfo{
+ {errors.IsNotFound, errors.NotFoundf, errors.NewNotFound, " not found"},
+ {errors.IsUserNotFound, errors.UserNotFoundf, errors.NewUserNotFound, " user not found"},
+ {errors.IsUnauthorized, errors.Unauthorizedf, errors.NewUnauthorized, ""},
+ {errors.IsNotImplemented, errors.NotImplementedf, errors.NewNotImplemented, " not implemented"},
+ {errors.IsAlreadyExists, errors.AlreadyExistsf, errors.NewAlreadyExists, " already exists"},
+ {errors.IsNotSupported, errors.NotSupportedf, errors.NewNotSupported, " not supported"},
+ {errors.IsNotValid, errors.NotValidf, errors.NewNotValid, " not valid"},
+ {errors.IsNotProvisioned, errors.NotProvisionedf, errors.NewNotProvisioned, " not provisioned"},
+ {errors.IsNotAssigned, errors.NotAssignedf, errors.NewNotAssigned, " not assigned"},
+ {errors.IsMethodNotAllowed, errors.MethodNotAllowedf, errors.NewMethodNotAllowed, ""},
+ {errors.IsBadRequest, errors.BadRequestf, errors.NewBadRequest, ""},
+ {errors.IsForbidden, errors.Forbiddenf, errors.NewForbidden, ""},
+}
+
+type errorTypeSuite struct{}
+
+var _ = gc.Suite(&errorTypeSuite{})
+
+func (t *errorInfo) satisfierName() string {
+ value := reflect.ValueOf(t.satisfier)
+ f := runtime.FuncForPC(value.Pointer())
+ return f.Name()
+}
+
+func (t *errorInfo) equal(t0 *errorInfo) bool {
+ if t0 == nil {
+ return false
+ }
+ return t.satisfierName() == t0.satisfierName()
+}
+
+type errorTest struct {
+ err error
+ message string
+ errInfo *errorInfo
+}
+
+func deferredAnnotatef(err error, format string, args ...interface{}) error {
+ errors.DeferredAnnotatef(&err, format, args...)
+ return err
+}
+
+func mustSatisfy(c *gc.C, err error, errInfo *errorInfo) {
+ if errInfo != nil {
+ msg := fmt.Sprintf("%#v must satisfy %v", err, errInfo.satisfierName())
+ c.Check(err, jc.Satisfies, errInfo.satisfier, gc.Commentf(msg))
+ }
+}
+
+func mustNotSatisfy(c *gc.C, err error, errInfo *errorInfo) {
+ if errInfo != nil {
+ msg := fmt.Sprintf("%#v must not satisfy %v", err, errInfo.satisfierName())
+ c.Check(err, gc.Not(jc.Satisfies), errInfo.satisfier, gc.Commentf(msg))
+ }
+}
+
+func checkErrorMatches(c *gc.C, err error, message string, errInfo *errorInfo) {
+ if message == "" {
+ c.Check(err, gc.IsNil)
+ c.Check(errInfo, gc.IsNil)
+ } else {
+ c.Check(err, gc.ErrorMatches, message)
+ }
+}
+
+func runErrorTests(c *gc.C, errorTests []errorTest, checkMustSatisfy bool) {
+ for i, t := range errorTests {
+ c.Logf("test %d: %T: %v", i, t.err, t.err)
+ checkErrorMatches(c, t.err, t.message, t.errInfo)
+ if checkMustSatisfy {
+ mustSatisfy(c, t.err, t.errInfo)
+ }
+
+ // Check all other satisfiers to make sure none match.
+ for _, otherErrInfo := range allErrors {
+ if checkMustSatisfy && otherErrInfo.equal(t.errInfo) {
+ continue
+ }
+ mustNotSatisfy(c, t.err, otherErrInfo)
+ }
+ }
+}
+
+func (*errorTypeSuite) TestDeferredAnnotatef(c *gc.C) {
+ // Ensure DeferredAnnotatef annotates the errors.
+ errorTests := []errorTest{}
+ for _, errInfo := range allErrors {
+ errorTests = append(errorTests, []errorTest{{
+ deferredAnnotatef(nil, "comment"),
+ "",
+ nil,
+ }, {
+ deferredAnnotatef(stderrors.New("blast"), "comment"),
+ "comment: blast",
+ nil,
+ }, {
+ deferredAnnotatef(errInfo.argsConstructor("foo %d", 42), "comment %d", 69),
+ "comment 69: foo 42" + errInfo.suffix,
+ errInfo,
+ }, {
+ deferredAnnotatef(errInfo.argsConstructor(""), "comment"),
+ "comment: " + errInfo.suffix,
+ errInfo,
+ }, {
+ deferredAnnotatef(errInfo.wrapConstructor(stderrors.New("pow!"), "woo"), "comment"),
+ "comment: woo: pow!",
+ errInfo,
+ }}...)
+ }
+
+ runErrorTests(c, errorTests, true)
+}
+
+func (*errorTypeSuite) TestAllErrors(c *gc.C) {
+ errorTests := []errorTest{}
+ for _, errInfo := range allErrors {
+ errorTests = append(errorTests, []errorTest{{
+ nil,
+ "",
+ nil,
+ }, {
+ errInfo.argsConstructor("foo %d", 42),
+ "foo 42" + errInfo.suffix,
+ errInfo,
+ }, {
+ errInfo.argsConstructor(""),
+ errInfo.suffix,
+ errInfo,
+ }, {
+ errInfo.wrapConstructor(stderrors.New("pow!"), "prefix"),
+ "prefix: pow!",
+ errInfo,
+ }, {
+ errInfo.wrapConstructor(stderrors.New("pow!"), ""),
+ "pow!",
+ errInfo,
+ }, {
+ errInfo.wrapConstructor(nil, "prefix"),
+ "prefix",
+ errInfo,
+ }}...)
+ }
+
+ runErrorTests(c, errorTests, true)
+}
diff --git a/vendor/github.com/juju/errors/example_test.go b/vendor/github.com/juju/errors/example_test.go
new file mode 100644
index 0000000..2a79cf4
--- /dev/null
+++ b/vendor/github.com/juju/errors/example_test.go
@@ -0,0 +1,23 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ "fmt"
+
+ "github.com/juju/errors"
+)
+
+func ExampleTrace() {
+ var err1 error = fmt.Errorf("something wicked this way comes")
+ var err2 error = nil
+
+	// Tracing a non-nil error will return an error
+ fmt.Println(errors.Trace(err1))
+ // Tracing nil will return nil
+ fmt.Println(errors.Trace(err2))
+
+ // Output: something wicked this way comes
+ //
+}
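The `DeferredAnnotatef` helper tested above relies on a pointer-to-error parameter so a single `defer` can annotate whatever error the function ends up returning. A standalone sketch of that mechanism using only the standard library (`deferredAnnotate` and `do` are illustrative names):

```go
package main

import (
	"errors"
	"fmt"
)

// deferredAnnotate mirrors DeferredAnnotatef: annotate *err in a defer,
// but only when the function is returning a non-nil error.
func deferredAnnotate(err *error, msg string) {
	if *err == nil {
		return
	}
	*err = fmt.Errorf("%s: %w", msg, *err)
}

// do must use a named return value so the defer sees the final error.
func do(fail bool) (err error) {
	defer deferredAnnotate(&err, "do failed")
	if fail {
		return errors.New("boom")
	}
	return nil
}

func main() {
	fmt.Println(do(true))  // do failed: boom
	fmt.Println(do(false)) // <nil>
}
```

The named return value is essential: with an unnamed return the defer would mutate a local copy and the annotation would be lost.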
diff --git a/vendor/github.com/juju/errors/export_test.go b/vendor/github.com/juju/errors/export_test.go
new file mode 100644
index 0000000..db57ec8
--- /dev/null
+++ b/vendor/github.com/juju/errors/export_test.go
@@ -0,0 +1,12 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors
+
+// Since variables are declared before the init block, in order to get the goPath
+// we need to return it rather than just reference it.
+func GoPath() string {
+ return goPath
+}
+
+var TrimGoPath = trimGoPath
diff --git a/vendor/github.com/juju/errors/functions.go b/vendor/github.com/juju/errors/functions.go
new file mode 100644
index 0000000..f86b09b
--- /dev/null
+++ b/vendor/github.com/juju/errors/functions.go
@@ -0,0 +1,330 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors
+
+import (
+ "fmt"
+ "strings"
+)
+
+// New is a drop-in replacement for the standard library errors.New function
+// that also records the location at which the error is created.
+//
+// For example:
+// return errors.New("validation failed")
+//
+func New(message string) error {
+ err := &Err{message: message}
+ err.SetLocation(1)
+ return err
+}
+
+// Errorf creates a new annotated error and records the location at which the
+// error is created. It is intended as a drop-in replacement for fmt.Errorf.
+//
+// For example:
+// return errors.Errorf("validation failed: %s", message)
+//
+func Errorf(format string, args ...interface{}) error {
+ err := &Err{message: fmt.Sprintf(format, args...)}
+ err.SetLocation(1)
+ return err
+}
+
+// Trace adds the location of the Trace call to the stack. The Cause of the
+// resulting error is the same as the error parameter. If the other error is
+// nil, the result will be nil.
+//
+// For example:
+// if err := SomeFunc(); err != nil {
+// return errors.Trace(err)
+// }
+//
+func Trace(other error) error {
+ if other == nil {
+ return nil
+ }
+ err := &Err{previous: other, cause: Cause(other)}
+ err.SetLocation(1)
+ return err
+}
+
+// Annotate is used to add extra context to an existing error. The location of
+// the Annotate call is recorded with the annotations. The file, line and
+// function are also recorded.
+//
+// For example:
+// if err := SomeFunc(); err != nil {
+// return errors.Annotate(err, "failed to frombulate")
+// }
+//
+func Annotate(other error, message string) error {
+ if other == nil {
+ return nil
+ }
+ err := &Err{
+ previous: other,
+ cause: Cause(other),
+ message: message,
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// Annotatef is used to add extra context to an existing error. The location of
+// the Annotate call is recorded with the annotations. The file, line and
+// function are also recorded.
+//
+// For example:
+// if err := SomeFunc(); err != nil {
+// return errors.Annotatef(err, "failed to frombulate the %s", arg)
+// }
+//
+func Annotatef(other error, format string, args ...interface{}) error {
+ if other == nil {
+ return nil
+ }
+ err := &Err{
+ previous: other,
+ cause: Cause(other),
+ message: fmt.Sprintf(format, args...),
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// DeferredAnnotatef annotates the given error (when it is not nil) with the given
+// format string and arguments (like fmt.Sprintf). If *err is nil, DeferredAnnotatef
+// does nothing. This function is typically used in a defer statement to annotate
+// any resulting error with the same message.
+//
+// For example:
+//
+// defer DeferredAnnotatef(&err, "failed to frombulate the %s", arg)
+//
+func DeferredAnnotatef(err *error, format string, args ...interface{}) {
+ if *err == nil {
+ return
+ }
+ newErr := &Err{
+ message: fmt.Sprintf(format, args...),
+ cause: Cause(*err),
+ previous: *err,
+ }
+ newErr.SetLocation(1)
+ *err = newErr
+}
+
+// Wrap changes the Cause of the error. The location of the Wrap call is also
+// stored in the error stack.
+//
+// For example:
+// if err := SomeFunc(); err != nil {
+// newErr := &packageError{"more context", private_value}
+// return errors.Wrap(err, newErr)
+// }
+//
+func Wrap(other, newDescriptive error) error {
+ err := &Err{
+ previous: other,
+ cause: newDescriptive,
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// Wrapf changes the Cause of the error, and adds an annotation. The location
+// of the Wrap call is also stored in the error stack.
+//
+// For example:
+// if err := SomeFunc(); err != nil {
+// return errors.Wrapf(err, simpleErrorType, "invalid value %q", value)
+// }
+//
+func Wrapf(other, newDescriptive error, format string, args ...interface{}) error {
+ err := &Err{
+ message: fmt.Sprintf(format, args...),
+ previous: other,
+ cause: newDescriptive,
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// Maskf masks the given error with the given format string and arguments (like
+// fmt.Sprintf), returning a new error that maintains the error stack, but
+// hides the underlying error type. The error string still contains the full
+// annotations. If you want to hide the annotations, call Wrap.
+func Maskf(other error, format string, args ...interface{}) error {
+ if other == nil {
+ return nil
+ }
+ err := &Err{
+ message: fmt.Sprintf(format, args...),
+ previous: other,
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// Mask hides the underlying error type, and records the location of the masking.
+func Mask(other error) error {
+ if other == nil {
+ return nil
+ }
+ err := &Err{
+ previous: other,
+ }
+ err.SetLocation(1)
+ return err
+}
+
+// Cause returns the cause of the given error. This will be either the
+// original error, or the result of a Wrap or Mask call.
+//
+// Cause is the usual way to diagnose errors that may have been wrapped by
+// the other errors functions.
+func Cause(err error) error {
+ var diag error
+ if err, ok := err.(causer); ok {
+ diag = err.Cause()
+ }
+ if diag != nil {
+ return diag
+ }
+ return err
+}
+
+type causer interface {
+ Cause() error
+}
+
+type wrapper interface {
+ // Message returns the top level error message,
+ // not including the message from the Previous
+ // error.
+ Message() string
+
+ // Underlying returns the Previous error, or nil
+ // if there is none.
+ Underlying() error
+}
+
+type locationer interface {
+ Location() (string, int)
+}
+
+var (
+ _ wrapper = (*Err)(nil)
+ _ locationer = (*Err)(nil)
+ _ causer = (*Err)(nil)
+)
+
+// Details returns information about the stack of errors wrapped by err, in
+// the format:
+//
+// [{filename:99: error one} {otherfile:55: cause of error one}]
+//
+// This is a terse alternative to ErrorStack as it returns a single line.
+func Details(err error) string {
+ if err == nil {
+ return "[]"
+ }
+ var s []byte
+ s = append(s, '[')
+ for {
+ s = append(s, '{')
+ if err, ok := err.(locationer); ok {
+ file, line := err.Location()
+ if file != "" {
+ s = append(s, fmt.Sprintf("%s:%d", file, line)...)
+ s = append(s, ": "...)
+ }
+ }
+ if cerr, ok := err.(wrapper); ok {
+ s = append(s, cerr.Message()...)
+ err = cerr.Underlying()
+ } else {
+ s = append(s, err.Error()...)
+ err = nil
+ }
+ s = append(s, '}')
+ if err == nil {
+ break
+ }
+ s = append(s, ' ')
+ }
+ s = append(s, ']')
+ return string(s)
+}
+
+// ErrorStack returns a string representation of the annotated error. If the
+// error passed as the parameter is not an annotated error, the result is
+// simply the result of the Error() method on that error.
+//
+// If the error is an annotated error, a multi-line string is returned where
+// each line represents one entry in the annotation stack. The full filename
+// from the call stack is used in the output.
+//
+// first error
+// github.com/juju/errors/annotation_test.go:193:
+// github.com/juju/errors/annotation_test.go:194: annotation
+// github.com/juju/errors/annotation_test.go:195:
+// github.com/juju/errors/annotation_test.go:196: more context
+// github.com/juju/errors/annotation_test.go:197:
+func ErrorStack(err error) string {
+ return strings.Join(errorStack(err), "\n")
+}
+
+func errorStack(err error) []string {
+ if err == nil {
+ return nil
+ }
+
+ // We want the first error first
+ var lines []string
+ for {
+ var buff []byte
+ if err, ok := err.(locationer); ok {
+ file, line := err.Location()
+ // Strip off the leading GOPATH/src path elements.
+ file = trimGoPath(file)
+ if file != "" {
+ buff = append(buff, fmt.Sprintf("%s:%d", file, line)...)
+ buff = append(buff, ": "...)
+ }
+ }
+ if cerr, ok := err.(wrapper); ok {
+ message := cerr.Message()
+ buff = append(buff, message...)
+ // If there is a cause for this error, and it is different to the cause
+ // of the underlying error, then output the error string in the stack trace.
+ var cause error
+ if err1, ok := err.(causer); ok {
+ cause = err1.Cause()
+ }
+ err = cerr.Underlying()
+ if cause != nil && !sameError(Cause(err), cause) {
+ if message != "" {
+ buff = append(buff, ": "...)
+ }
+ buff = append(buff, cause.Error()...)
+ }
+ } else {
+ buff = append(buff, err.Error()...)
+ err = nil
+ }
+ lines = append(lines, string(buff))
+ if err == nil {
+ break
+ }
+ }
+ // reverse the lines to get the original error, which was at the end of
+ // the list, back to the start.
+ var result []string
+ for i := len(lines); i > 0; i-- {
+ result = append(result, lines[i-1])
+ }
+ return result
+}
diff --git a/vendor/github.com/juju/errors/functions_test.go b/vendor/github.com/juju/errors/functions_test.go
new file mode 100644
index 0000000..9f5a063
--- /dev/null
+++ b/vendor/github.com/juju/errors/functions_test.go
@@ -0,0 +1,305 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "runtime"
+ "strings"
+
+ jc "github.com/juju/testing/checkers"
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/errors"
+)
+
+type functionSuite struct {
+}
+
+var _ = gc.Suite(&functionSuite{})
+
+func (*functionSuite) TestNew(c *gc.C) {
+ err := errors.New("testing") //err newTest
+ c.Assert(err.Error(), gc.Equals, "testing")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["newTest"].String())
+}
+
+func (*functionSuite) TestErrorf(c *gc.C) {
+ err := errors.Errorf("testing %d", 42) //err errorfTest
+ c.Assert(err.Error(), gc.Equals, "testing 42")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["errorfTest"].String())
+}
+
+func (*functionSuite) TestTrace(c *gc.C) {
+ first := errors.New("first")
+ err := errors.Trace(first) //err traceTest
+ c.Assert(err.Error(), gc.Equals, "first")
+ c.Assert(errors.Cause(err), gc.Equals, first)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["traceTest"].String())
+
+ c.Assert(errors.Trace(nil), gc.IsNil)
+}
+
+func (*functionSuite) TestAnnotate(c *gc.C) {
+ first := errors.New("first")
+ err := errors.Annotate(first, "annotation") //err annotateTest
+ c.Assert(err.Error(), gc.Equals, "annotation: first")
+ c.Assert(errors.Cause(err), gc.Equals, first)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["annotateTest"].String())
+
+ c.Assert(errors.Annotate(nil, "annotate"), gc.IsNil)
+}
+
+func (*functionSuite) TestAnnotatef(c *gc.C) {
+ first := errors.New("first")
+ err := errors.Annotatef(first, "annotation %d", 2) //err annotatefTest
+ c.Assert(err.Error(), gc.Equals, "annotation 2: first")
+ c.Assert(errors.Cause(err), gc.Equals, first)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["annotatefTest"].String())
+
+ c.Assert(errors.Annotatef(nil, "annotate"), gc.IsNil)
+}
+
+func (*functionSuite) TestDeferredAnnotatef(c *gc.C) {
+ // NOTE: this test fails with gccgo
+ if runtime.Compiler == "gccgo" {
+ c.Skip("gccgo can't determine the location")
+ }
+ first := errors.New("first")
+ test := func() (err error) {
+ defer errors.DeferredAnnotatef(&err, "deferred %s", "annotate")
+ return first //err deferredAnnotate
+ }
+ err := test()
+ c.Assert(err.Error(), gc.Equals, "deferred annotate: first")
+ c.Assert(errors.Cause(err), gc.Equals, first)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["deferredAnnotate"].String())
+
+ err = nil
+ errors.DeferredAnnotatef(&err, "deferred %s", "annotate")
+ c.Assert(err, gc.IsNil)
+}
+
+func (*functionSuite) TestWrap(c *gc.C) {
+ first := errors.New("first") //err wrapFirst
+ detailed := errors.New("detailed")
+ err := errors.Wrap(first, detailed) //err wrapTest
+ c.Assert(err.Error(), gc.Equals, "detailed")
+ c.Assert(errors.Cause(err), gc.Equals, detailed)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapFirst"].String())
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapTest"].String())
+}
+
+func (*functionSuite) TestWrapOfNil(c *gc.C) {
+ detailed := errors.New("detailed")
+ err := errors.Wrap(nil, detailed) //err nilWrapTest
+ c.Assert(err.Error(), gc.Equals, "detailed")
+ c.Assert(errors.Cause(err), gc.Equals, detailed)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["nilWrapTest"].String())
+}
+
+func (*functionSuite) TestWrapf(c *gc.C) {
+ first := errors.New("first") //err wrapfFirst
+ detailed := errors.New("detailed")
+ err := errors.Wrapf(first, detailed, "value %d", 42) //err wrapfTest
+ c.Assert(err.Error(), gc.Equals, "value 42: detailed")
+ c.Assert(errors.Cause(err), gc.Equals, detailed)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapfFirst"].String())
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapfTest"].String())
+}
+
+func (*functionSuite) TestWrapfOfNil(c *gc.C) {
+ detailed := errors.New("detailed")
+ err := errors.Wrapf(nil, detailed, "value %d", 42) //err nilWrapfTest
+ c.Assert(err.Error(), gc.Equals, "value 42: detailed")
+ c.Assert(errors.Cause(err), gc.Equals, detailed)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["nilWrapfTest"].String())
+}
+
+func (*functionSuite) TestMask(c *gc.C) {
+ first := errors.New("first")
+ err := errors.Mask(first) //err maskTest
+ c.Assert(err.Error(), gc.Equals, "first")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["maskTest"].String())
+
+ c.Assert(errors.Mask(nil), gc.IsNil)
+}
+
+func (*functionSuite) TestMaskf(c *gc.C) {
+ first := errors.New("first")
+ err := errors.Maskf(first, "masked %d", 42) //err maskfTest
+ c.Assert(err.Error(), gc.Equals, "masked 42: first")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+ c.Assert(errors.Details(err), jc.Contains, tagToLocation["maskfTest"].String())
+
+ c.Assert(errors.Maskf(nil, "mask"), gc.IsNil)
+}
+
+func (*functionSuite) TestCause(c *gc.C) {
+ c.Assert(errors.Cause(nil), gc.IsNil)
+ c.Assert(errors.Cause(someErr), gc.Equals, someErr)
+
+ fmtErr := fmt.Errorf("simple")
+ c.Assert(errors.Cause(fmtErr), gc.Equals, fmtErr)
+
+ err := errors.Wrap(someErr, fmtErr)
+ c.Assert(errors.Cause(err), gc.Equals, fmtErr)
+
+ err = errors.Annotate(err, "annotated")
+ c.Assert(errors.Cause(err), gc.Equals, fmtErr)
+
+	err = errors.Maskf(err, "masked")
+ c.Assert(errors.Cause(err), gc.Equals, err)
+
+ // Look for a file that we know isn't there.
+ dir := c.MkDir()
+ _, err = os.Stat(filepath.Join(dir, "not-there"))
+ c.Assert(os.IsNotExist(err), jc.IsTrue)
+
+ err = errors.Annotatef(err, "wrap it")
+ // Now the error itself isn't a 'IsNotExist'.
+ c.Assert(os.IsNotExist(err), jc.IsFalse)
+ // However if we use the Check method, it is.
+ c.Assert(os.IsNotExist(errors.Cause(err)), jc.IsTrue)
+}
+
+func (s *functionSuite) TestDetails(c *gc.C) {
+ if runtime.Compiler == "gccgo" {
+ c.Skip("gccgo can't determine the location")
+ }
+ c.Assert(errors.Details(nil), gc.Equals, "[]")
+
+ otherErr := fmt.Errorf("other")
+ checkDetails(c, otherErr, "[{other}]")
+
+ err0 := newEmbed("foo") //err TestStack#0
+ checkDetails(c, err0, "[{$TestStack#0$: foo}]")
+
+ err1 := errors.Annotate(err0, "bar") //err TestStack#1
+ checkDetails(c, err1, "[{$TestStack#1$: bar} {$TestStack#0$: foo}]")
+
+ err2 := errors.Trace(err1) //err TestStack#2
+ checkDetails(c, err2, "[{$TestStack#2$: } {$TestStack#1$: bar} {$TestStack#0$: foo}]")
+}
+
+type tracer interface {
+ StackTrace() []string
+}
+
+func (*functionSuite) TestErrorStack(c *gc.C) {
+ for i, test := range []struct {
+ message string
+ generator func() error
+ expected string
+ tracer bool
+ }{
+ {
+ message: "nil",
+ generator: func() error {
+ return nil
+ },
+ }, {
+ message: "raw error",
+ generator: func() error {
+ return fmt.Errorf("raw")
+ },
+ expected: "raw",
+ }, {
+ message: "single error stack",
+ generator: func() error {
+ return errors.New("first error") //err single
+ },
+ expected: "$single$: first error",
+ tracer: true,
+ }, {
+ message: "annotated error",
+ generator: func() error {
+ err := errors.New("first error") //err annotated-0
+ return errors.Annotate(err, "annotation") //err annotated-1
+ },
+ expected: "" +
+ "$annotated-0$: first error\n" +
+ "$annotated-1$: annotation",
+ tracer: true,
+ }, {
+ message: "wrapped error",
+ generator: func() error {
+ err := errors.New("first error") //err wrapped-0
+ return errors.Wrap(err, newError("detailed error")) //err wrapped-1
+ },
+ expected: "" +
+ "$wrapped-0$: first error\n" +
+ "$wrapped-1$: detailed error",
+ tracer: true,
+ }, {
+ message: "annotated wrapped error",
+ generator: func() error {
+ err := errors.Errorf("first error") //err ann-wrap-0
+ err = errors.Wrap(err, fmt.Errorf("detailed error")) //err ann-wrap-1
+ return errors.Annotatef(err, "annotated") //err ann-wrap-2
+ },
+ expected: "" +
+ "$ann-wrap-0$: first error\n" +
+ "$ann-wrap-1$: detailed error\n" +
+ "$ann-wrap-2$: annotated",
+ tracer: true,
+ }, {
+ message: "traced, and annotated",
+ generator: func() error {
+ err := errors.New("first error") //err stack-0
+ err = errors.Trace(err) //err stack-1
+ err = errors.Annotate(err, "some context") //err stack-2
+ err = errors.Trace(err) //err stack-3
+ err = errors.Annotate(err, "more context") //err stack-4
+ return errors.Trace(err) //err stack-5
+ },
+ expected: "" +
+ "$stack-0$: first error\n" +
+ "$stack-1$: \n" +
+ "$stack-2$: some context\n" +
+ "$stack-3$: \n" +
+ "$stack-4$: more context\n" +
+ "$stack-5$: ",
+ tracer: true,
+ }, {
+ message: "uncomparable, wrapped with a value error",
+ generator: func() error {
+ err := newNonComparableError("first error") //err mixed-0
+ err = errors.Trace(err) //err mixed-1
+ err = errors.Wrap(err, newError("value error")) //err mixed-2
+ err = errors.Maskf(err, "masked") //err mixed-3
+ err = errors.Annotate(err, "more context") //err mixed-4
+ return errors.Trace(err) //err mixed-5
+ },
+ expected: "" +
+ "first error\n" +
+ "$mixed-1$: \n" +
+ "$mixed-2$: value error\n" +
+ "$mixed-3$: masked\n" +
+ "$mixed-4$: more context\n" +
+ "$mixed-5$: ",
+ tracer: true,
+ },
+ } {
+ c.Logf("%v: %s", i, test.message)
+ err := test.generator()
+ expected := replaceLocations(test.expected)
+ stack := errors.ErrorStack(err)
+ ok := c.Check(stack, gc.Equals, expected)
+ if !ok {
+ c.Logf("%#v", err)
+ }
+ tracer, ok := err.(tracer)
+ c.Check(ok, gc.Equals, test.tracer)
+ if ok {
+ stackTrace := tracer.StackTrace()
+ c.Check(stackTrace, gc.DeepEquals, strings.Split(stack, "\n"))
+ }
+ }
+}
diff --git a/vendor/github.com/juju/errors/package_test.go b/vendor/github.com/juju/errors/package_test.go
new file mode 100644
index 0000000..5bbb8f0
--- /dev/null
+++ b/vendor/github.com/juju/errors/package_test.go
@@ -0,0 +1,95 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ "fmt"
+ "io/ioutil"
+ "strings"
+ "testing"
+
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/errors"
+)
+
+func Test(t *testing.T) {
+ gc.TestingT(t)
+}
+
+func checkDetails(c *gc.C, err error, details string) {
+ c.Assert(err, gc.NotNil)
+ expectedDetails := replaceLocations(details)
+ c.Assert(errors.Details(err), gc.Equals, expectedDetails)
+}
+
+func checkErr(c *gc.C, err, cause error, msg string, details string) {
+ c.Assert(err, gc.NotNil)
+ c.Assert(err.Error(), gc.Equals, msg)
+ c.Assert(errors.Cause(err), gc.Equals, cause)
+ expectedDetails := replaceLocations(details)
+ c.Assert(errors.Details(err), gc.Equals, expectedDetails)
+}
+
+func replaceLocations(line string) string {
+ result := ""
+ for {
+ i := strings.Index(line, "$")
+ if i == -1 {
+ break
+ }
+ result += line[0:i]
+ line = line[i+1:]
+ i = strings.Index(line, "$")
+ if i == -1 {
+ panic("no second $")
+ }
+ result += location(line[0:i]).String()
+ line = line[i+1:]
+ }
+ result += line
+ return result
+}
+
+func location(tag string) Location {
+ loc, ok := tagToLocation[tag]
+ if !ok {
+ panic(fmt.Sprintf("tag %q not found", tag))
+ }
+ return loc
+}
+
+type Location struct {
+ file string
+ line int
+}
+
+func (loc Location) String() string {
+ return fmt.Sprintf("%s:%d", loc.file, loc.line)
+}
+
+var tagToLocation = make(map[string]Location)
+
+func setLocationsForErrorTags(filename string) {
+ data, err := ioutil.ReadFile(filename)
+ if err != nil {
+ panic(err)
+ }
+ filename = "github.com/juju/errors/" + filename
+ lines := strings.Split(string(data), "\n")
+ for i, line := range lines {
+ if j := strings.Index(line, "//err "); j >= 0 {
+ tag := line[j+len("//err "):]
+ if _, found := tagToLocation[tag]; found {
+ panic(fmt.Sprintf("tag %q already processed previously", tag))
+ }
+ tagToLocation[tag] = Location{file: filename, line: i + 1}
+ }
+ }
+}
+
+func init() {
+ setLocationsForErrorTags("error_test.go")
+ setLocationsForErrorTags("functions_test.go")
+}
diff --git a/vendor/github.com/juju/errors/path.go b/vendor/github.com/juju/errors/path.go
new file mode 100644
index 0000000..a7b726a
--- /dev/null
+++ b/vendor/github.com/juju/errors/path.go
@@ -0,0 +1,38 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors
+
+import (
+ "runtime"
+ "strings"
+)
+
+// prefixSize is used internally to trim the user specific path from the
+// front of the returned filenames from the runtime call stack.
+var prefixSize int
+
+// goPath is the deduced path based on the location of this file as compiled.
+var goPath string
+
+func init() {
+ _, file, _, ok := runtime.Caller(0)
+ if file == "?" {
+ return
+ }
+ if ok {
+ // We know that the end of the file should be:
+ // github.com/juju/errors/path.go
+ size := len(file)
+ suffix := len("github.com/juju/errors/path.go")
+ goPath = file[:size-suffix]
+ prefixSize = len(goPath)
+ }
+}
+
+func trimGoPath(filename string) string {
+ if strings.HasPrefix(filename, goPath) {
+ return filename[prefixSize:]
+ }
+ return filename
+}
diff --git a/vendor/github.com/juju/errors/path_test.go b/vendor/github.com/juju/errors/path_test.go
new file mode 100644
index 0000000..ef4f34f
--- /dev/null
+++ b/vendor/github.com/juju/errors/path_test.go
@@ -0,0 +1,29 @@
+// Copyright 2013, 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package errors_test
+
+import (
+ "path"
+
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/errors"
+)
+
+type pathSuite struct{}
+
+var _ = gc.Suite(&pathSuite{})
+
+func (*pathSuite) TestGoPathSet(c *gc.C) {
+ c.Assert(errors.GoPath(), gc.Not(gc.Equals), "")
+}
+
+func (*pathSuite) TestTrimGoPath(c *gc.C) {
+ relativeImport := "github.com/foo/bar/baz.go"
+ filename := path.Join(errors.GoPath(), relativeImport)
+ c.Assert(errors.TrimGoPath(filename), gc.Equals, relativeImport)
+
+ absoluteImport := "/usr/share/foo/bar/baz.go"
+ c.Assert(errors.TrimGoPath(absoluteImport), gc.Equals, absoluteImport)
+}
diff --git a/vendor/github.com/juju/loggo/LICENSE b/vendor/github.com/juju/loggo/LICENSE
new file mode 100644
index 0000000..ade9307
--- /dev/null
+++ b/vendor/github.com/juju/loggo/LICENSE
@@ -0,0 +1,191 @@
+All files in this repository are licensed as follows. If you contribute
+to this repository, it is assumed that you license your contribution
+under the same license unless you state otherwise.
+
+All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
+
+This software is licensed under the LGPLv3, included below.
+
+As a special exception to the GNU Lesser General Public License version 3
+("LGPL3"), the copyright holders of this Library give you permission to
+convey to a third party a Combined Work that links statically or dynamically
+to this Library without providing any Minimal Corresponding Source or
+Minimal Application Code as set out in 4d or providing the installation
+information set out in section 4e, provided that you comply with the other
+provisions of LGPL3 and provided that you meet, for the Application the
+terms and conditions of the license(s) which apply to the Application.
+
+Except as stated in this special exception, the provisions of LGPL3 will
+continue to comply in full to this Library. If you modify this Library, you
+may apply this exception to your version of this Library, but you are not
+obliged to do so. If you do not wish to do so, delete this exception
+statement from your version. This exception does not (and cannot) modify any
+license terms which apply to the Application, with which you must still
+comply.
+
+
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+ This version of the GNU Lesser General Public License incorporates
+the terms and conditions of version 3 of the GNU General Public
+License, supplemented by the additional permissions listed below.
+
+ 0. Additional Definitions.
+
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+General Public License, and the "GNU GPL" refers to version 3 of the GNU
+General Public License.
+
+ "The Library" refers to a covered work governed by this License,
+other than an Application or a Combined Work as defined below.
+
+ An "Application" is any work that makes use of an interface provided
+by the Library, but which is not otherwise based on the Library.
+Defining a subclass of a class defined by the Library is deemed a mode
+of using an interface provided by the Library.
+
+ A "Combined Work" is a work produced by combining or linking an
+Application with the Library. The particular version of the Library
+with which the Combined Work was made is also called the "Linked
+Version".
+
+ The "Minimal Corresponding Source" for a Combined Work means the
+Corresponding Source for the Combined Work, excluding any source code
+for portions of the Combined Work that, considered in isolation, are
+based on the Application, and not on the Linked Version.
+
+ The "Corresponding Application Code" for a Combined Work means the
+object code and/or source code for the Application, including any data
+and utility programs needed for reproducing the Combined Work from the
+Application, but excluding the System Libraries of the Combined Work.
+
+ 1. Exception to Section 3 of the GNU GPL.
+
+ You may convey a covered work under sections 3 and 4 of this License
+without being bound by section 3 of the GNU GPL.
+
+ 2. Conveying Modified Versions.
+
+ If you modify a copy of the Library, and, in your modifications, a
+facility refers to a function or data to be supplied by an Application
+that uses the facility (other than as an argument passed when the
+facility is invoked), then you may convey a copy of the modified
+version:
+
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+
+ 3. Object Code Incorporating Material from Library Header Files.
+
+ The object code form of an Application may incorporate material from
+a header file that is part of the Library. You may convey such object
+code under terms of your choice, provided that, if the incorporated
+material is not limited to numerical parameters, data structure
+layouts and accessors, or small macros, inline functions and templates
+(ten or fewer lines in length), you do both of the following:
+
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+
+ 4. Combined Works.
+
+ You may convey a Combined Work under terms of your choice that,
+taken together, effectively do not restrict modification of the
+portions of the Library contained in the Combined Work and reverse
+engineering for debugging such modifications, if you also do each of
+the following:
+
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+
+ d) Do one of the following:
+
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+
+ 5. Combined Libraries.
+
+ You may place library facilities that are a work based on the
+Library side by side in a single library together with other library
+facilities that are not Applications and are not covered by this
+License, and convey such a combined library under terms of your
+choice, if you do both of the following:
+
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+
+ 6. Revised Versions of the GNU Lesser General Public License.
+
+ The Free Software Foundation may publish revised and/or new versions
+of the GNU Lesser General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Library as you received it specifies that a certain numbered version
+of the GNU Lesser General Public License "or any later version"
+applies to it, you have the option of following the terms and
+conditions either of that published version or of any later version
+published by the Free Software Foundation. If the Library as you
+received it does not specify a version number of the GNU Lesser
+General Public License, you may choose any version of the GNU Lesser
+General Public License ever published by the Free Software Foundation.
+
+ If the Library as you received it specifies that a proxy can decide
+whether future versions of the GNU Lesser General Public License shall
+apply, that proxy's public statement of acceptance of any version is
+permanent authorization for you to choose that version for the
+Library.
diff --git a/vendor/github.com/juju/loggo/Makefile b/vendor/github.com/juju/loggo/Makefile
new file mode 100644
index 0000000..89afa49
--- /dev/null
+++ b/vendor/github.com/juju/loggo/Makefile
@@ -0,0 +1,11 @@
+default: check
+
+check:
+ go test
+
+docs:
+ godoc2md github.com/juju/loggo > README.md
+ sed -i 's|\[godoc-link-here\]|[![GoDoc](https://godoc.org/github.com/juju/loggo?status.svg)](https://godoc.org/github.com/juju/loggo)|' README.md
+
+
+.PHONY: default check docs
diff --git a/vendor/github.com/juju/loggo/README.md b/vendor/github.com/juju/loggo/README.md
new file mode 100644
index 0000000..a73a9db
--- /dev/null
+++ b/vendor/github.com/juju/loggo/README.md
@@ -0,0 +1,711 @@
+
+# loggo
+ import "github.com/juju/loggo"
+
+[![GoDoc](https://godoc.org/github.com/juju/loggo?status.svg)](https://godoc.org/github.com/juju/loggo)
+
+### Module level logging for Go
+This package provides an alternative to the standard library log package.
+
+The actual logging functions never return errors. If you are logging
+something, you really don't want to be worried about the logging
+having trouble.
+
+Modules have names that are defined by dotted strings.
+
+
+ "first.second.third"
+
+There is a root module that has the name `""`. Each module
+(except the root module) has a parent, identified by the part of
+the name without the last dotted value.
+* the parent of "first.second.third" is "first.second"
+* the parent of "first.second" is "first"
+* the parent of "first" is "" (the root module)
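The parent relationship above is a simple operation on the dotted name. A minimal sketch in plain Go (an illustration of the naming rule only, not loggo's actual implementation; `parentModule` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// parentModule returns the name of a module's parent: everything before
// the last dot, or "" (the root module) when there is no dot left.
func parentModule(name string) string {
	if i := strings.LastIndex(name, "."); i >= 0 {
		return name[:i]
	}
	return ""
}

func main() {
	fmt.Println(parentModule("first.second.third")) // first.second
	fmt.Println(parentModule("first.second"))       // first
	fmt.Printf("%q\n", parentModule("first"))       // ""
}
```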
+
+Each module can specify its own severity level. Logging calls that are of
+a lower severity than the module's effective severity level are not written
+out.
+
+Loggers are created using the GetLogger function.
+
+
+ logger := loggo.GetLogger("foo.bar")
+
+By default there is one writer registered, which will write to Stderr,
+and the root module is set to emit only warnings and above.
+If you want to continue using the default
+logger, but have it emit all logging levels, you need to do the following.
+
+
+ writer, _, err := loggo.RemoveWriter("default")
+ // err is non-nil if and only if the name isn't found.
+ loggo.RegisterWriter("default", writer, loggo.TRACE)
+
+
+
+
+## Constants
+``` go
+const DefaultWriterName = "default"
+```
+DefaultWriterName is the name of the default writer for
+a Context.
+
+
+## Variables
+``` go
+var (
+ // SeverityColor defines the colors for the levels output by the ColorWriter.
+ SeverityColor = map[Level]*ansiterm.Context{
+ TRACE: ansiterm.Foreground(ansiterm.Default),
+ DEBUG: ansiterm.Foreground(ansiterm.Green),
+ INFO: ansiterm.Foreground(ansiterm.BrightBlue),
+ WARNING: ansiterm.Foreground(ansiterm.Yellow),
+ ERROR: ansiterm.Foreground(ansiterm.BrightRed),
+ CRITICAL: &ansiterm.Context{
+ Foreground: ansiterm.White,
+ Background: ansiterm.Red,
+ },
+ }
+ // LocationColor defines the colors for the location output by the ColorWriter.
+ LocationColor = ansiterm.Foreground(ansiterm.BrightBlue)
+)
+```
+``` go
+var TimeFormat = initTimeFormat()
+```
+TimeFormat is the time format used for the default writer.
+This can be set with the environment variable LOGGO_TIME_FORMAT.
+
+
+## func ConfigureLoggers
+``` go
+func ConfigureLoggers(specification string) error
+```
+ConfigureLoggers configures loggers according to the given string
+specification, which specifies a set of modules and their associated
+logging levels. Loggers are colon- or semicolon-separated; each
+module is specified as `<module>=<level>`. White space outside of
+module names and levels is ignored. The root module is specified
+with the name "".
+
+An example specification:
+
+
+ `=ERROR; foo.bar=WARNING`
+
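The shape of such a specification can be illustrated with a short parser sketch. This is a simplified stand-in for what ConfigureLoggers accepts (`parseSpec` is a hypothetical helper; real loggo also validates the level names and accepts a bare level for the root module):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSpec splits a specification such as `=ERROR; foo.bar=WARNING`
// into a module-name -> level-name map. Entries are separated by ';'
// or ':'; each entry is module=level, with surrounding space ignored.
func parseSpec(spec string) map[string]string {
	config := make(map[string]string)
	entries := strings.FieldsFunc(spec, func(r rune) bool {
		return r == ';' || r == ':'
	})
	for _, entry := range entries {
		parts := strings.SplitN(entry, "=", 2)
		if len(parts) != 2 {
			continue // a real parser would report an error here
		}
		module := strings.TrimSpace(parts[0])
		level := strings.TrimSpace(parts[1])
		config[module] = level
	}
	return config
}

func main() {
	cfg := parseSpec("=ERROR; foo.bar=WARNING")
	fmt.Println(cfg[""], cfg["foo.bar"]) // ERROR WARNING
}
```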
+
+## func DefaultFormatter
+``` go
+func DefaultFormatter(entry Entry) string
+```
+DefaultFormatter returns the parameters separated by spaces except for
+filename and line which are separated by a colon. The timestamp is shown
+to second resolution in UTC. For example:
+
+
+ 2016-07-02 15:04:05
+
+
+## func LoggerInfo
+``` go
+func LoggerInfo() string
+```
+LoggerInfo returns information about the configured loggers and their
+logging levels. The information is returned in the format expected by
+ConfigureLoggers. Loggers with UNSPECIFIED level will not
+be included.
+
+
+## func RegisterWriter
+``` go
+func RegisterWriter(name string, writer Writer) error
+```
+RegisterWriter adds the writer to the list of writers in the DefaultContext
+that get notified when logging. If there is already a registered writer
+with that name, an error is returned.
+
+
+## func ResetLogging
+``` go
+func ResetLogging()
+```
+ResetLogging iterates through the known modules and sets the levels of all
+to UNSPECIFIED, except for the root module, which is set to WARNING. The call
+also removes all writers in the DefaultContext and puts the original default
+writer back as the only writer.
+
+
+## func ResetWriters
+``` go
+func ResetWriters()
+```
+ResetWriters puts the list of writers back into the initial state.
+
+
+
+## type Config
+``` go
+type Config map[string]Level
+```
+Config is a mapping of logger module names to logging severity levels.
+
+
+
+
+
+
+
+
+
+### func ParseConfigString
+``` go
+func ParseConfigString(specification string) (Config, error)
+```
+ParseConfigString parses a logger configuration string into a map of logger
+names and their associated log level. This method is provided to allow
+other programs to pre-validate a configuration string rather than just
+calling ConfigureLoggers.
+
+Logging modules are colon- or semicolon-separated; each module is specified
+as `<module>=<level>`. White space outside of module names and levels is
+ignored. The root module is specified with the name "".
+
+As a special case, a log level may be specified on its own.
+This is equivalent to specifying the level of the root module,
+so "DEBUG" is equivalent to `=DEBUG`.
+
+An example specification:
+
+
+ `=ERROR; foo.bar=WARNING`
+
+
+
+
+### func (Config) String
+``` go
+func (c Config) String() string
+```
+String returns a logger configuration string that may be parsed
+using ParseConfigString.
+
+
+
+## type Context
+``` go
+type Context struct {
+ // contains filtered or unexported fields
+}
+```
+Context produces loggers for a hierarchy of modules. The context holds
+a collection of hierarchical loggers and their writers.
+
+
+
+
+
+
+
+
+
+### func DefaultContext
+``` go
+func DefaultContext() *Context
+```
+DefaultContext returns the global default logging context.
+
+
+### func NewContext
+``` go
+func NewContext(rootLevel Level) *Context
+```
+NewContext returns a new Context with no writers set.
+If the root level is UNSPECIFIED, WARNING is used.
+
+
+
+
+### func (\*Context) AddWriter
+``` go
+func (c *Context) AddWriter(name string, writer Writer) error
+```
+AddWriter adds a writer to the list to be called for each logging call.
+The name cannot be empty, and the writer cannot be nil. If an existing
+writer exists with the specified name, an error is returned.
+
+
+
+### func (\*Context) ApplyConfig
+``` go
+func (c *Context) ApplyConfig(config Config)
+```
+ApplyConfig configures the logging modules according to the provided config.
+
+
+
+### func (\*Context) CompleteConfig
+``` go
+func (c *Context) CompleteConfig() Config
+```
+CompleteConfig returns all the loggers and their defined levels,
+even if that level is UNSPECIFIED.
+
+
+
+### func (\*Context) Config
+``` go
+func (c *Context) Config() Config
+```
+Config returns the current configuration of the Loggers. Loggers
+with UNSPECIFIED level will not be included.
+
+
+
+### func (\*Context) GetLogger
+``` go
+func (c *Context) GetLogger(name string) Logger
+```
+GetLogger returns a Logger for the given module name, creating it and
+its parents if necessary.
+
+
+
+### func (\*Context) RemoveWriter
+``` go
+func (c *Context) RemoveWriter(name string) (Writer, error)
+```
+RemoveWriter removes the specified writer. If a writer is not found with
+the specified name an error is returned. The writer that was removed is also
+returned.
+
+
+
+### func (\*Context) ReplaceWriter
+``` go
+func (c *Context) ReplaceWriter(name string, writer Writer) (Writer, error)
+```
+ReplaceWriter is a convenience method that does the equivalent of RemoveWriter
+followed by AddWriter with the same name. The replaced writer is returned.
+
+
+
+### func (\*Context) ResetLoggerLevels
+``` go
+func (c *Context) ResetLoggerLevels()
+```
+ResetLoggerLevels iterates through the known logging modules and sets the
+levels of all to UNSPECIFIED, except for the root module, which is set to WARNING.
+
+
+
+### func (\*Context) ResetWriters
+``` go
+func (c *Context) ResetWriters()
+```
+ResetWriters is generally only used in testing and removes all the writers.
+
+
+
+## type Entry
+``` go
+type Entry struct {
+ // Level is the severity of the log message.
+ Level Level
+ // Module is the dotted module name from the logger.
+ Module string
+	// Filename is the full path of the file that logged the message.
+	Filename string
+	// Line is the line number of the Filename.
+	Line int
+	// Timestamp is when the log message was created.
+	Timestamp time.Time
+	// Message is the formatted string from the log call.
+ Message string
+}
+```
+Entry represents a single log message.
+
+
+
+
+
+
+
+
+
+
+
+## type Level
+``` go
+type Level uint32
+```
+Level holds a severity level.
+
+
+
+``` go
+const (
+ UNSPECIFIED Level = iota
+ TRACE
+ DEBUG
+ INFO
+ WARNING
+ ERROR
+ CRITICAL
+)
+```
+The severity levels. Higher values are considered more important.
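Because the levels are ordered integers, deciding whether a message is emitted reduces to a comparison against the module's effective level. A sketch of the parent-fallback lookup (assumed behaviour based on the EffectiveLogLevel description below; not loggo's actual code, and `effectiveLevel` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// Level mirrors the ordering above: higher values are more important.
type Level uint32

const (
	UNSPECIFIED Level = iota
	TRACE
	DEBUG
	INFO
	WARNING
	ERROR
	CRITICAL
)

// effectiveLevel returns a module's own level if specified, otherwise
// walks up the dotted parent chain; the root defaults to WARNING.
func effectiveLevel(levels map[string]Level, module string) Level {
	for {
		if lvl, ok := levels[module]; ok && lvl != UNSPECIFIED {
			return lvl
		}
		if module == "" {
			return WARNING // loggo's default root level
		}
		if i := strings.LastIndex(module, "."); i >= 0 {
			module = module[:i]
		} else {
			module = ""
		}
	}
}

func main() {
	levels := map[string]Level{"foo": DEBUG}
	// "foo.bar" has no level of its own, so it inherits DEBUG from "foo".
	fmt.Println(effectiveLevel(levels, "foo.bar") == DEBUG) // true
	// An unrelated module falls back to the root default.
	fmt.Println(effectiveLevel(levels, "other") == WARNING) // true
}
```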
+
+
+
+
+
+
+
+### func ParseLevel
+``` go
+func ParseLevel(level string) (Level, bool)
+```
+ParseLevel converts a string representation of a logging level to a
+Level. It returns the level and whether it was valid or not.
+
+
+
+
+### func (Level) Short
+``` go
+func (level Level) Short() string
+```
+Short returns a five character string to use in
+aligned logging output.
+
+
+
+### func (Level) String
+``` go
+func (level Level) String() string
+```
+String implements Stringer.
+
+
+
+## type Logger
+``` go
+type Logger struct {
+ // contains filtered or unexported fields
+}
+```
+A Logger represents a logging module. It has an associated logging
+level which can be changed; messages of lesser severity will
+be dropped. Loggers have a hierarchical relationship - see
+the package documentation.
+
+The zero Logger value is usable - any messages logged
+to it will be sent to the root Logger.
+
+
+
+
+
+
+
+
+
+### func GetLogger
+``` go
+func GetLogger(name string) Logger
+```
+GetLogger returns a Logger for the given module name,
+creating it and its parents if necessary.
+
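The dotted-name hierarchy means `a.b.c` implicitly has parents `a.b`, `a`, and the root `""`. A minimal sketch of the parent derivation (the helper name here is assumed, not part of the API):

```go
package main

import (
	"fmt"
	"strings"
)

// parentName returns the name of a module's parent in the dotted
// hierarchy; the root module is named "".
func parentName(name string) string {
	if i := strings.LastIndex(name, "."); i >= 0 {
		return name[:i]
	}
	return ""
}

func main() {
	fmt.Println(parentName("first.second.third")) // first.second
	fmt.Println(parentName("first") == "")        // true: parent is the root
}
```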
+
+
+
+### func (Logger) Criticalf
+``` go
+func (logger Logger) Criticalf(message string, args ...interface{})
+```
+Criticalf logs the printf-formatted message at critical level.
+
+
+
+### func (Logger) Debugf
+``` go
+func (logger Logger) Debugf(message string, args ...interface{})
+```
+Debugf logs the printf-formatted message at debug level.
+
+
+
+### func (Logger) EffectiveLogLevel
+``` go
+func (logger Logger) EffectiveLogLevel() Level
+```
+EffectiveLogLevel returns the effective min log level of
+the receiver - that is, messages with a lesser severity
+level will be discarded.
+
+If the log level of the receiver is unspecified,
+it will be taken from the effective log level of its
+parent.
+
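That inheritance rule can be sketched as follows; `module` and its fields are illustrative stand-ins for the package internals:

```go
package main

import "fmt"

type Level uint32

const (
	UNSPECIFIED Level = iota
	TRACE
	DEBUG
	INFO
	WARNING
)

// module stands in for the package's internal logger node.
type module struct {
	level  Level
	parent *module
}

// effectiveLevel walks towards the root until it finds a module
// whose level is specified; the root always has a concrete level.
func (m *module) effectiveLevel() Level {
	for ; m != nil; m = m.parent {
		if m.level != UNSPECIFIED {
			return m.level
		}
	}
	return WARNING // defensive default; the root level is always set
}

func main() {
	root := &module{level: WARNING}
	child := &module{parent: root} // level UNSPECIFIED
	grandchild := &module{level: DEBUG, parent: child}
	fmt.Println(child.effectiveLevel() == WARNING)    // true: inherited
	fmt.Println(grandchild.effectiveLevel() == DEBUG) // true: own level
}
```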
+
+
+### func (Logger) Errorf
+``` go
+func (logger Logger) Errorf(message string, args ...interface{})
+```
+Errorf logs the printf-formatted message at error level.
+
+
+
+### func (Logger) Infof
+``` go
+func (logger Logger) Infof(message string, args ...interface{})
+```
+Infof logs the printf-formatted message at info level.
+
+
+
+### func (Logger) IsDebugEnabled
+``` go
+func (logger Logger) IsDebugEnabled() bool
+```
+IsDebugEnabled returns whether logging is enabled
+at debug level.
+
+
+
+### func (Logger) IsErrorEnabled
+``` go
+func (logger Logger) IsErrorEnabled() bool
+```
+IsErrorEnabled returns whether logging is enabled
+at error level.
+
+
+
+### func (Logger) IsInfoEnabled
+``` go
+func (logger Logger) IsInfoEnabled() bool
+```
+IsInfoEnabled returns whether logging is enabled
+at info level.
+
+
+
+### func (Logger) IsLevelEnabled
+``` go
+func (logger Logger) IsLevelEnabled(level Level) bool
+```
+IsLevelEnabled returns whether logging is enabled
+for the given log level.
+
+
+
+### func (Logger) IsTraceEnabled
+``` go
+func (logger Logger) IsTraceEnabled() bool
+```
+IsTraceEnabled returns whether logging is enabled
+at trace level.
+
+
+
+### func (Logger) IsWarningEnabled
+``` go
+func (logger Logger) IsWarningEnabled() bool
+```
+IsWarningEnabled returns whether logging is enabled
+at warning level.
+
+
+
+### func (Logger) LogCallf
+``` go
+func (logger Logger) LogCallf(calldepth int, level Level, message string, args ...interface{})
+```
+LogCallf logs a printf-formatted message at the given level.
+The location of the call is indicated by the calldepth argument.
+A calldepth of 1 means the function that called this function.
+A message will be discarded if level is less than the
+effective log level of the logger.
+Note that the writers may also filter out messages that
+are less than their registered minimum severity level.
+
+
+
+### func (Logger) LogLevel
+``` go
+func (logger Logger) LogLevel() Level
+```
+LogLevel returns the configured min log level of the logger.
+
+
+
+### func (Logger) Logf
+``` go
+func (logger Logger) Logf(level Level, message string, args ...interface{})
+```
+Logf logs a printf-formatted message at the given level.
+A message will be discarded if level is less than the
+effective log level of the logger.
+Note that the writers may also filter out messages that
+are less than their registered minimum severity level.
+
+
+
+### func (Logger) Name
+``` go
+func (logger Logger) Name() string
+```
+Name returns the logger's module name.
+
+
+
+### func (Logger) SetLogLevel
+``` go
+func (logger Logger) SetLogLevel(level Level)
+```
+SetLogLevel sets the severity level of the given logger.
+The root logger cannot be set to UNSPECIFIED level.
+See EffectiveLogLevel for how this affects the
+actual messages logged.
+
+
+
+### func (Logger) Tracef
+``` go
+func (logger Logger) Tracef(message string, args ...interface{})
+```
+Tracef logs the printf-formatted message at trace level.
+
+
+
+### func (Logger) Warningf
+``` go
+func (logger Logger) Warningf(message string, args ...interface{})
+```
+Warningf logs the printf-formatted message at warning level.
+
+
+
+## type TestWriter
+``` go
+type TestWriter struct {
+ // contains filtered or unexported fields
+}
+```
+TestWriter is a useful Writer for testing purposes. Each component of the
+logging message is stored in the Log array.
+
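The pattern can be sketched with a simplified recorder; `testWriter` and the one-field `Entry` here are illustrative only, not the vendored types:

```go
package main

import "fmt"

// Entry is trimmed to a single field for this sketch.
type Entry struct {
	Message string
}

// testWriter records entries so a test can assert on them,
// in the spirit of the documented TestWriter.
type testWriter struct {
	log []Entry
}

func (w *testWriter) Write(entry Entry) { w.log = append(w.log, entry) }

// Log returns a copy so callers cannot mutate the internal slice.
func (w *testWriter) Log() []Entry {
	v := make([]Entry, len(w.log))
	copy(v, w.log)
	return v
}

// Clear removes any saved entries.
func (w *testWriter) Clear() { w.log = nil }

func main() {
	w := &testWriter{}
	w.Write(Entry{Message: "hello"})
	fmt.Println(len(w.Log())) // 1
	w.Clear()
	fmt.Println(len(w.Log())) // 0
}
```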
+
+
+
+
+
+
+
+
+
+
+### func (\*TestWriter) Clear
+``` go
+func (writer *TestWriter) Clear()
+```
+Clear removes any saved log messages.
+
+
+
+### func (\*TestWriter) Log
+``` go
+func (writer *TestWriter) Log() []Entry
+```
+Log returns a copy of the current logged values.
+
+
+
+### func (\*TestWriter) Write
+``` go
+func (writer *TestWriter) Write(entry Entry)
+```
+Write appends the given entry to the writer's Log array.
+
+
+
+## type Writer
+``` go
+type Writer interface {
+ // Write writes a message to the Writer. The entry holds the severity
+ // level, the module name, the file name and line number of the code
+ // that generated the message, the time stamp of when it was generated,
+ // and the message itself.
+ Write(entry Entry)
+}
+```
+Writer is implemented by any recipient of log messages.
+
+
+
+
+
+
+
+
+
+### func NewColorWriter
+``` go
+func NewColorWriter(writer io.Writer) Writer
+```
+NewColorWriter will write out colored severity levels if the writer is
+outputting to a terminal.
+
+
+### func NewMinimumLevelWriter
+``` go
+func NewMinimumLevelWriter(writer Writer, minLevel Level) Writer
+```
+NewMinimumLevelWriter returns a Writer that will only pass on the Write calls
+to the provided writer if the log level is at or above the specified
+minimum level.
+
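A sketch of the filtering idea, with `Entry`, `Writer`, and both writer types trimmed down to what the example needs (none of these are the vendored implementations):

```go
package main

import "fmt"

type Level uint32

const (
	UNSPECIFIED Level = iota
	TRACE
	DEBUG
	INFO
	WARNING
)

// Entry and Writer are trimmed to what the sketch needs.
type Entry struct {
	Level   Level
	Message string
}

type Writer interface {
	Write(entry Entry)
}

// minLevelWriter drops entries below its minimum severity before
// delegating to the wrapped writer.
type minLevelWriter struct {
	writer   Writer
	minLevel Level
}

func (w minLevelWriter) Write(entry Entry) {
	if entry.Level < w.minLevel {
		return
	}
	w.writer.Write(entry)
}

// sliceWriter records whatever reaches it.
type sliceWriter struct{ entries []Entry }

func (w *sliceWriter) Write(entry Entry) { w.entries = append(w.entries, entry) }

func main() {
	rec := &sliceWriter{}
	w := minLevelWriter{writer: rec, minLevel: WARNING}
	w.Write(Entry{Level: DEBUG, Message: "dropped"})
	w.Write(Entry{Level: WARNING, Message: "kept"})
	fmt.Println(len(rec.entries), rec.entries[0].Message) // 1 kept
}
```

Because severity levels are ordered integers, the filter is a single comparison before delegation.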
+
+### func NewSimpleWriter
+``` go
+func NewSimpleWriter(writer io.Writer, formatter func(entry Entry) string) Writer
+```
+NewSimpleWriter returns a new writer that writes log messages to the given
+io.Writer formatting the messages with the given formatter.
+
+
+### func RemoveWriter
+``` go
+func RemoveWriter(name string) (Writer, error)
+```
+RemoveWriter removes the Writer identified by 'name' and returns it.
+If the Writer is not found, an error is returned.
+
+
+### func ReplaceDefaultWriter
+``` go
+func ReplaceDefaultWriter(writer Writer) (Writer, error)
+```
+ReplaceDefaultWriter is a convenience method that does the equivalent of
+RemoveWriter and then RegisterWriter with the name "default". The previous
+default writer, if any, is returned.
+
+
+
+
+
+
+
+
+
+
+- - -
+Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
\ No newline at end of file
diff --git a/vendor/github.com/juju/loggo/benchmarks_test.go b/vendor/github.com/juju/loggo/benchmarks_test.go
new file mode 100644
index 0000000..e0b55bd
--- /dev/null
+++ b/vendor/github.com/juju/loggo/benchmarks_test.go
@@ -0,0 +1,105 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "io/ioutil"
+ "os"
+
+ "github.com/juju/loggo"
+ gc "gopkg.in/check.v1"
+)
+
+type BenchmarksSuite struct {
+ logger loggo.Logger
+ writer *writer
+}
+
+var _ = gc.Suite(&BenchmarksSuite{})
+
+func (s *BenchmarksSuite) SetUpTest(c *gc.C) {
+ loggo.ResetLogging()
+ s.logger = loggo.GetLogger("test.writer")
+ s.writer = &writer{}
+ err := loggo.RegisterWriter("test", s.writer)
+ c.Assert(err, gc.IsNil)
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingNoWriters(c *gc.C) {
+ // No writers
+ loggo.RemoveWriter("test")
+ for i := 0; i < c.N; i++ {
+ s.logger.Warningf("just a simple warning for %d", i)
+ }
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingNoWritersNoFormat(c *gc.C) {
+ // No writers
+ loggo.RemoveWriter("test")
+ for i := 0; i < c.N; i++ {
+ s.logger.Warningf("just a simple warning")
+ }
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingTestWriters(c *gc.C) {
+ for i := 0; i < c.N; i++ {
+ s.logger.Warningf("just a simple warning for %d", i)
+ }
+ c.Assert(s.writer.Log(), gc.HasLen, c.N)
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingDiskWriter(c *gc.C) {
+ logFile := s.setupTempFileWriter(c)
+ defer logFile.Close()
+ msg := "just a simple warning for %d"
+ for i := 0; i < c.N; i++ {
+ s.logger.Warningf(msg, i)
+ }
+ offset, err := logFile.Seek(0, os.SEEK_CUR)
+ c.Assert(err, gc.IsNil)
+ c.Assert((offset > int64(len(msg))*int64(c.N)), gc.Equals, true,
+ gc.Commentf("Not enough data was written to the log file."))
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingDiskWriterNoMessages(c *gc.C) {
+ logFile := s.setupTempFileWriter(c)
+ defer logFile.Close()
+ // Change the log level
+ writer, err := loggo.RemoveWriter("testfile")
+ c.Assert(err, gc.IsNil)
+ loggo.RegisterWriter("testfile", loggo.NewMinimumLevelWriter(writer, loggo.WARNING))
+ msg := "just a simple warning for %d"
+ for i := 0; i < c.N; i++ {
+ s.logger.Debugf(msg, i)
+ }
+ offset, err := logFile.Seek(0, os.SEEK_CUR)
+ c.Assert(err, gc.IsNil)
+ c.Assert(offset, gc.Equals, int64(0),
+ gc.Commentf("Data was written to the log file."))
+}
+
+func (s *BenchmarksSuite) BenchmarkLoggingDiskWriterNoMessagesLogLevel(c *gc.C) {
+ logFile := s.setupTempFileWriter(c)
+ defer logFile.Close()
+ // Change the log level
+ s.logger.SetLogLevel(loggo.WARNING)
+ msg := "just a simple warning for %d"
+ for i := 0; i < c.N; i++ {
+ s.logger.Debugf(msg, i)
+ }
+ offset, err := logFile.Seek(0, os.SEEK_CUR)
+ c.Assert(err, gc.IsNil)
+ c.Assert(offset, gc.Equals, int64(0),
+ gc.Commentf("Data was written to the log file."))
+}
+
+func (s *BenchmarksSuite) setupTempFileWriter(c *gc.C) *os.File {
+ loggo.RemoveWriter("test")
+ logFile, err := ioutil.TempFile(c.MkDir(), "loggo-test")
+ c.Assert(err, gc.IsNil)
+ writer := loggo.NewSimpleWriter(logFile, loggo.DefaultFormatter)
+ err = loggo.RegisterWriter("testfile", writer)
+ c.Assert(err, gc.IsNil)
+ return logFile
+}
diff --git a/vendor/github.com/juju/loggo/checkers_test.go b/vendor/github.com/juju/loggo/checkers_test.go
new file mode 100644
index 0000000..0f5b931
--- /dev/null
+++ b/vendor/github.com/juju/loggo/checkers_test.go
@@ -0,0 +1,44 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "fmt"
+ "time"
+
+ gc "gopkg.in/check.v1"
+)
+
+func Between(start, end time.Time) gc.Checker {
+ if end.Before(start) {
+ return &betweenChecker{end, start}
+ }
+ return &betweenChecker{start, end}
+}
+
+type betweenChecker struct {
+ start, end time.Time
+}
+
+func (checker *betweenChecker) Info() *gc.CheckerInfo {
+ info := gc.CheckerInfo{
+ Name: "Between",
+ Params: []string{"obtained"},
+ }
+ return &info
+}
+
+func (checker *betweenChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ when, ok := params[0].(time.Time)
+ if !ok {
+ return false, "obtained value type must be time.Time"
+ }
+ if when.Before(checker.start) {
+ return false, fmt.Sprintf("obtained time %q is before start time %q", when, checker.start)
+ }
+ if when.After(checker.end) {
+ return false, fmt.Sprintf("obtained time %q is after end time %q", when, checker.end)
+ }
+ return true, ""
+}
diff --git a/vendor/github.com/juju/loggo/config.go b/vendor/github.com/juju/loggo/config.go
new file mode 100644
index 0000000..1b3eaa5
--- /dev/null
+++ b/vendor/github.com/juju/loggo/config.go
@@ -0,0 +1,96 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "fmt"
+ "sort"
+ "strings"
+)
+
+// Config is a mapping of logger module names to logging severity levels.
+type Config map[string]Level
+
+// String returns a logger configuration string that may be parsed
+// using ParseConfigString.
+func (c Config) String() string {
+ if c == nil {
+ return ""
+ }
+ // output in alphabetical order.
+ names := []string{}
+ for name := range c {
+ names = append(names, name)
+ }
+ sort.Strings(names)
+
+ var entries []string
+ for _, name := range names {
+ level := c[name]
+ if name == "" {
+ name = rootString
+ }
+ entry := fmt.Sprintf("%s=%s", name, level)
+ entries = append(entries, entry)
+ }
+ return strings.Join(entries, ";")
+}
+
+func parseConfigValue(value string) (string, Level, error) {
+ pair := strings.SplitN(value, "=", 2)
+ if len(pair) < 2 {
+ return "", UNSPECIFIED, fmt.Errorf("config value expected '=', found %q", value)
+ }
+ name := strings.TrimSpace(pair[0])
+ if name == "" {
+ return "", UNSPECIFIED, fmt.Errorf("config value %q has missing module name", value)
+ }
+
+ levelStr := strings.TrimSpace(pair[1])
+ level, ok := ParseLevel(levelStr)
+ if !ok {
+ return "", UNSPECIFIED, fmt.Errorf("unknown severity level %q", levelStr)
+ }
+ if name == rootString {
+ name = ""
+ }
+ return name, level, nil
+}
+
+// ParseConfigString parses a logger configuration string into a map of logger
+// names and their associated log level. This function is provided to allow
+// other programs to pre-validate a configuration string rather than just
+// calling ConfigureLoggers.
+//
+// Logging modules are colon- or semicolon-separated; each module is specified
+// as <modulename>=<level>. White space outside of module names and levels is
+// ignored. The root module is specified with the name "<root>".
+//
+// As a special case, a log level may be specified on its own.
+// This is equivalent to specifying the level of the root module,
+// so "DEBUG" is equivalent to `<root>=DEBUG`
+//
+// An example specification:
+// `<root>=ERROR; foo.bar=WARNING`
+func ParseConfigString(specification string) (Config, error) {
+ specification = strings.TrimSpace(specification)
+ if specification == "" {
+ return nil, nil
+ }
+ cfg := make(Config)
+ if level, ok := ParseLevel(specification); ok {
+ cfg[""] = level
+ return cfg, nil
+ }
+
+ values := strings.FieldsFunc(specification, func(r rune) bool { return r == ';' || r == ':' })
+ for _, value := range values {
+ name, level, err := parseConfigValue(value)
+ if err != nil {
+ return nil, err
+ }
+ cfg[name] = level
+ }
+ return cfg, nil
+}
diff --git a/vendor/github.com/juju/loggo/config_test.go b/vendor/github.com/juju/loggo/config_test.go
new file mode 100644
index 0000000..fa64d50
--- /dev/null
+++ b/vendor/github.com/juju/loggo/config_test.go
@@ -0,0 +1,152 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import gc "gopkg.in/check.v1"
+
+type ConfigSuite struct{}
+
+var _ = gc.Suite(&ConfigSuite{})
+
+func (*ConfigSuite) TestParseConfigValue(c *gc.C) {
+ for i, test := range []struct {
+ value string
+ module string
+ level Level
+ err string
+ }{{
+ err: `config value expected '=', found ""`,
+ }, {
+ value: "WARNING",
+ err: `config value expected '=', found "WARNING"`,
+ }, {
+ value: "=WARNING",
+ err: `config value "=WARNING" has missing module name`,
+ }, {
+ value: " name = WARNING ",
+ module: "name",
+ level: WARNING,
+ }, {
+ value: "name = foo",
+ err: `unknown severity level "foo"`,
+ }, {
+ value: "name=DEBUG=INFO",
+ err: `unknown severity level "DEBUG=INFO"`,
+ }, {
+ value: " = info",
+ module: "",
+ level: INFO,
+ }} {
+ c.Logf("%d: %s", i, test.value)
+ module, level, err := parseConfigValue(test.value)
+ if test.err == "" {
+ c.Check(err, gc.IsNil)
+ c.Check(module, gc.Equals, test.module)
+ c.Check(level, gc.Equals, test.level)
+ } else {
+ c.Check(module, gc.Equals, "")
+ c.Check(level, gc.Equals, UNSPECIFIED)
+ c.Check(err.Error(), gc.Equals, test.err)
+ }
+ }
+}
+
+func (*ConfigSuite) TestParseConfigurationString(c *gc.C) {
+ for i, test := range []struct {
+ configuration string
+ expected Config
+ err string
+ }{{
+ configuration: "",
+ // nil Config, no error
+ }, {
+ configuration: "INFO",
+ expected: Config{"": INFO},
+ }, {
+ configuration: "=INFO",
+ err: `config value "=INFO" has missing module name`,
+ }, {
+ configuration: "<root>=UNSPECIFIED",
+ expected: Config{"": UNSPECIFIED},
+ }, {
+ configuration: "<root>=DEBUG",
+ expected: Config{"": DEBUG},
+ }, {
+ configuration: "test.module=debug",
+ expected: Config{"test.module": DEBUG},
+ }, {
+ configuration: "module=info; sub.module=debug; other.module=warning",
+ expected: Config{
+ "module": INFO,
+ "sub.module": DEBUG,
+ "other.module": WARNING,
+ },
+ }, {
+ // colons not semicolons
+ configuration: "module=info: sub.module=debug: other.module=warning",
+ expected: Config{
+ "module": INFO,
+ "sub.module": DEBUG,
+ "other.module": WARNING,
+ },
+ }, {
+ configuration: " foo.bar \t\r\n= \t\r\nCRITICAL \t\r\n; \t\r\nfoo \r\t\n = DEBUG",
+ expected: Config{
+ "foo": DEBUG,
+ "foo.bar": CRITICAL,
+ },
+ }, {
+ configuration: "foo;bar",
+ err: `config value expected '=', found "foo"`,
+ }, {
+ configuration: "foo=",
+ err: `unknown severity level ""`,
+ }, {
+ configuration: "foo=unknown",
+ err: `unknown severity level "unknown"`,
+ }} {
+ c.Logf("%d: %q", i, test.configuration)
+ config, err := ParseConfigString(test.configuration)
+ if test.err == "" {
+ c.Check(err, gc.IsNil)
+ c.Check(config, gc.DeepEquals, test.expected)
+ } else {
+ c.Check(config, gc.IsNil)
+ c.Check(err.Error(), gc.Equals, test.err)
+ }
+ }
+}
+
+func (*ConfigSuite) TestConfigString(c *gc.C) {
+ for i, test := range []struct {
+ config Config
+ expected string
+ }{{
+ config: nil,
+ expected: "",
+ }, {
+ config: Config{"": INFO},
+ expected: "<root>=INFO",
+ }, {
+ config: Config{"": UNSPECIFIED},
+ expected: "<root>=UNSPECIFIED",
+ }, {
+ config: Config{"": DEBUG},
+ expected: "<root>=DEBUG",
+ }, {
+ config: Config{"test.module": DEBUG},
+ expected: "test.module=DEBUG",
+ }, {
+ config: Config{
+ "": WARNING,
+ "module": INFO,
+ "sub.module": DEBUG,
+ "other.module": WARNING,
+ },
+ expected: "<root>=WARNING;module=INFO;other.module=WARNING;sub.module=DEBUG",
+ }} {
+ c.Logf("%d: %q", i, test.expected)
+ c.Check(test.config.String(), gc.Equals, test.expected)
+ }
+}
diff --git a/vendor/github.com/juju/loggo/context.go b/vendor/github.com/juju/loggo/context.go
new file mode 100644
index 0000000..f5739d9
--- /dev/null
+++ b/vendor/github.com/juju/loggo/context.go
@@ -0,0 +1,198 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "fmt"
+ "strings"
+ "sync"
+)
+
+// Context produces loggers for a hierarchy of modules. The context holds
+// a collection of hierarchical loggers and their writers.
+type Context struct {
+ root *module
+
+ // Perhaps have one mutex?
+ modulesMutex sync.Mutex
+ modules map[string]*module
+
+ writersMutex sync.Mutex
+ writers map[string]Writer
+
+ // writeMutex is used to serialise write operations.
+ writeMutex sync.Mutex
+}
+
+// NewContext returns a new Context with no writers set.
+// If the root level is UNSPECIFIED, WARNING is used.
+func NewContext(rootLevel Level) *Context {
+ if rootLevel < TRACE || rootLevel > CRITICAL {
+ rootLevel = WARNING
+ }
+ context := &Context{
+ modules: make(map[string]*module),
+ writers: make(map[string]Writer),
+ }
+ context.root = &module{
+ level: rootLevel,
+ context: context,
+ }
+ context.modules[""] = context.root
+ return context
+}
+
+// GetLogger returns a Logger for the given module name, creating it and
+// its parents if necessary.
+func (c *Context) GetLogger(name string) Logger {
+ name = strings.TrimSpace(strings.ToLower(name))
+ c.modulesMutex.Lock()
+ defer c.modulesMutex.Unlock()
+ return Logger{c.getLoggerModule(name)}
+}
+
+func (c *Context) getLoggerModule(name string) *module {
+ if name == rootString {
+ name = ""
+ }
+ impl, found := c.modules[name]
+ if found {
+ return impl
+ }
+ parentName := ""
+ if i := strings.LastIndex(name, "."); i >= 0 {
+ parentName = name[0:i]
+ }
+ parent := c.getLoggerModule(parentName)
+ impl = &module{name, UNSPECIFIED, parent, c}
+ c.modules[name] = impl
+ return impl
+}
+
+// Config returns the current configuration of the Loggers. Loggers
+// with UNSPECIFIED level will not be included.
+func (c *Context) Config() Config {
+ result := make(Config)
+ c.modulesMutex.Lock()
+ defer c.modulesMutex.Unlock()
+
+ for name, module := range c.modules {
+ if module.level != UNSPECIFIED {
+ result[name] = module.level
+ }
+ }
+ return result
+}
+
+// CompleteConfig returns all the loggers and their defined levels,
+// even if that level is UNSPECIFIED.
+func (c *Context) CompleteConfig() Config {
+ result := make(Config)
+ c.modulesMutex.Lock()
+ defer c.modulesMutex.Unlock()
+
+ for name, module := range c.modules {
+ result[name] = module.level
+ }
+ return result
+}
+
+// ApplyConfig configures the logging modules according to the provided config.
+func (c *Context) ApplyConfig(config Config) {
+ c.modulesMutex.Lock()
+ defer c.modulesMutex.Unlock()
+ for name, level := range config {
+ module := c.getLoggerModule(name)
+ module.setLevel(level)
+ }
+}
+
+// ResetLoggerLevels iterates through the known logging modules and sets the
+// levels of all to UNSPECIFIED, except for <root> which is set to WARNING.
+func (c *Context) ResetLoggerLevels() {
+ c.modulesMutex.Lock()
+ defer c.modulesMutex.Unlock()
+ // Setting the root module to UNSPECIFIED will set it to WARNING.
+ for _, module := range c.modules {
+ module.setLevel(UNSPECIFIED)
+ }
+}
+
+func (c *Context) write(entry Entry) {
+ c.writeMutex.Lock()
+ defer c.writeMutex.Unlock()
+ for _, writer := range c.getWriters() {
+ writer.Write(entry)
+ }
+}
+
+func (c *Context) getWriters() []Writer {
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ var result []Writer
+ for _, writer := range c.writers {
+ result = append(result, writer)
+ }
+ return result
+}
+
+// AddWriter adds a writer to the list to be called for each logging call.
+// The name cannot be empty, and the writer cannot be nil. If an existing
+// writer exists with the specified name, an error is returned.
+func (c *Context) AddWriter(name string, writer Writer) error {
+ if name == "" {
+ return fmt.Errorf("name cannot be empty")
+ }
+ if writer == nil {
+ return fmt.Errorf("writer cannot be nil")
+ }
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ if _, found := c.writers[name]; found {
+ return fmt.Errorf("context already has a writer named %q", name)
+ }
+ c.writers[name] = writer
+ return nil
+}
+
+// RemoveWriter removes the specified writer. If a writer is not found with
+// the specified name an error is returned. The writer that was removed is also
+// returned.
+func (c *Context) RemoveWriter(name string) (Writer, error) {
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ reg, found := c.writers[name]
+ if !found {
+ return nil, fmt.Errorf("context has no writer named %q", name)
+ }
+ delete(c.writers, name)
+ return reg, nil
+}
+
+// ReplaceWriter is a convenience method that does the equivalent of RemoveWriter
+// followed by AddWriter with the same name. The replaced writer is returned.
+func (c *Context) ReplaceWriter(name string, writer Writer) (Writer, error) {
+ if name == "" {
+ return nil, fmt.Errorf("name cannot be empty")
+ }
+ if writer == nil {
+ return nil, fmt.Errorf("writer cannot be nil")
+ }
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ reg, found := c.writers[name]
+ if !found {
+ return nil, fmt.Errorf("context has no writer named %q", name)
+ }
+ oldWriter := reg
+ c.writers[name] = writer
+ return oldWriter, nil
+}
+
+// ResetWriters is generally only used in testing and removes all the writers.
+func (c *Context) ResetWriters() {
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ c.writers = make(map[string]Writer)
+}
diff --git a/vendor/github.com/juju/loggo/context_test.go b/vendor/github.com/juju/loggo/context_test.go
new file mode 100644
index 0000000..68d5a91
--- /dev/null
+++ b/vendor/github.com/juju/loggo/context_test.go
@@ -0,0 +1,328 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "github.com/juju/loggo"
+ gc "gopkg.in/check.v1"
+)
+
+type ContextSuite struct{}
+
+var _ = gc.Suite(&ContextSuite{})
+
+func (*ContextSuite) TestNewContextRootLevel(c *gc.C) {
+ for i, test := range []struct {
+ level loggo.Level
+ expected loggo.Level
+ }{{
+ level: loggo.UNSPECIFIED,
+ expected: loggo.WARNING,
+ }, {
+ level: loggo.DEBUG,
+ expected: loggo.DEBUG,
+ }, {
+ level: loggo.INFO,
+ expected: loggo.INFO,
+ }, {
+ level: loggo.WARNING,
+ expected: loggo.WARNING,
+ }, {
+ level: loggo.ERROR,
+ expected: loggo.ERROR,
+ }, {
+ level: loggo.CRITICAL,
+ expected: loggo.CRITICAL,
+ }, {
+ level: loggo.Level(42),
+ expected: loggo.WARNING,
+ }} {
+ c.Logf("%d: %s", i, test.level)
+ context := loggo.NewContext(test.level)
+ cfg := context.Config()
+ c.Check(cfg, gc.HasLen, 1)
+ value, found := cfg[""]
+ c.Check(found, gc.Equals, true)
+ c.Check(value, gc.Equals, test.expected)
+ }
+}
+
+func logAllSeverities(logger loggo.Logger) {
+ logger.Criticalf("something critical")
+ logger.Errorf("an error")
+ logger.Warningf("a warning message")
+ logger.Infof("an info message")
+ logger.Debugf("a debug message")
+ logger.Tracef("a trace message")
+}
+
+func checkLogEntry(c *gc.C, entry, expected loggo.Entry) {
+ c.Check(entry.Level, gc.Equals, expected.Level)
+ c.Check(entry.Module, gc.Equals, expected.Module)
+ c.Check(entry.Message, gc.Equals, expected.Message)
+}
+
+func checkLogEntries(c *gc.C, obtained, expected []loggo.Entry) {
+ if c.Check(len(obtained), gc.Equals, len(expected)) {
+ for i := range obtained {
+ checkLogEntry(c, obtained[i], expected[i])
+ }
+ }
+}
+
+func (*ContextSuite) TestGetLoggerRoot(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ blank := context.GetLogger("")
+ root := context.GetLogger("")
+ c.Assert(blank, gc.Equals, root)
+}
+
+func (*ContextSuite) TestGetLoggerCase(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ upper := context.GetLogger("TEST")
+ lower := context.GetLogger("test")
+ c.Assert(upper, gc.Equals, lower)
+ c.Assert(upper.Name(), gc.Equals, "test")
+}
+
+func (*ContextSuite) TestGetLoggerSpace(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ space := context.GetLogger(" test ")
+ lower := context.GetLogger("test")
+ c.Assert(space, gc.Equals, lower)
+ c.Assert(space.Name(), gc.Equals, "test")
+}
+
+func (*ContextSuite) TestNewContextNoWriter(c *gc.C) {
+ // Should be no output.
+ context := loggo.NewContext(loggo.DEBUG)
+ logger := context.GetLogger("test")
+ logAllSeverities(logger)
+}
+
+func (*ContextSuite) newContextWithTestWriter(c *gc.C, level loggo.Level) (*loggo.Context, *loggo.TestWriter) {
+ writer := &loggo.TestWriter{}
+ context := loggo.NewContext(level)
+ context.AddWriter("test", writer)
+ return context, writer
+}
+
+func (s *ContextSuite) TestNewContextRootSeverityWarning(c *gc.C) {
+ context, writer := s.newContextWithTestWriter(c, loggo.WARNING)
+ logger := context.GetLogger("test")
+ logAllSeverities(logger)
+ checkLogEntries(c, writer.Log(), []loggo.Entry{
+ {Level: loggo.CRITICAL, Module: "test", Message: "something critical"},
+ {Level: loggo.ERROR, Module: "test", Message: "an error"},
+ {Level: loggo.WARNING, Module: "test", Message: "a warning message"},
+ })
+}
+
+func (s *ContextSuite) TestNewContextRootSeverityTrace(c *gc.C) {
+ context, writer := s.newContextWithTestWriter(c, loggo.TRACE)
+ logger := context.GetLogger("test")
+ logAllSeverities(logger)
+ checkLogEntries(c, writer.Log(), []loggo.Entry{
+ {Level: loggo.CRITICAL, Module: "test", Message: "something critical"},
+ {Level: loggo.ERROR, Module: "test", Message: "an error"},
+ {Level: loggo.WARNING, Module: "test", Message: "a warning message"},
+ {Level: loggo.INFO, Module: "test", Message: "an info message"},
+ {Level: loggo.DEBUG, Module: "test", Message: "a debug message"},
+ {Level: loggo.TRACE, Module: "test", Message: "a trace message"},
+ })
+}
+
+func (*ContextSuite) TestNewContextConfig(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ config := context.Config()
+ c.Assert(config, gc.DeepEquals, loggo.Config{"": loggo.DEBUG})
+}
+
+func (*ContextSuite) TestNewLoggerAddsConfig(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ _ = context.GetLogger("test.module")
+ c.Assert(context.Config(), gc.DeepEquals, loggo.Config{
+ "": loggo.DEBUG,
+ })
+ c.Assert(context.CompleteConfig(), gc.DeepEquals, loggo.Config{
+ "": loggo.DEBUG,
+ "test": loggo.UNSPECIFIED,
+ "test.module": loggo.UNSPECIFIED,
+ })
+}
+
+func (*ContextSuite) TestApplyNilConfig(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ context.ApplyConfig(nil)
+ c.Assert(context.Config(), gc.DeepEquals, loggo.Config{"": loggo.DEBUG})
+}
+
+func (*ContextSuite) TestApplyConfigRootUnspecified(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ context.ApplyConfig(loggo.Config{"": loggo.UNSPECIFIED})
+ c.Assert(context.Config(), gc.DeepEquals, loggo.Config{"": loggo.WARNING})
+}
+
+func (*ContextSuite) TestApplyConfigRootTrace(c *gc.C) {
+ context := loggo.NewContext(loggo.WARNING)
+ context.ApplyConfig(loggo.Config{"": loggo.TRACE})
+ c.Assert(context.Config(), gc.DeepEquals, loggo.Config{"": loggo.TRACE})
+}
+
+func (*ContextSuite) TestApplyConfigCreatesModules(c *gc.C) {
+ context := loggo.NewContext(loggo.WARNING)
+ context.ApplyConfig(loggo.Config{"first.second": loggo.TRACE})
+ c.Assert(context.Config(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ "first.second": loggo.TRACE,
+ })
+ c.Assert(context.CompleteConfig(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ "first": loggo.UNSPECIFIED,
+ "first.second": loggo.TRACE,
+ })
+}
+
+func (*ContextSuite) TestApplyConfigAdditive(c *gc.C) {
+ context := loggo.NewContext(loggo.WARNING)
+ context.ApplyConfig(loggo.Config{"first.second": loggo.TRACE})
+ context.ApplyConfig(loggo.Config{"other.module": loggo.DEBUG})
+ c.Assert(context.Config(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ "first.second": loggo.TRACE,
+ "other.module": loggo.DEBUG,
+ })
+ c.Assert(context.CompleteConfig(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ "first": loggo.UNSPECIFIED,
+ "first.second": loggo.TRACE,
+ "other": loggo.UNSPECIFIED,
+ "other.module": loggo.DEBUG,
+ })
+}
+
+func (*ContextSuite) TestResetLoggerLevels(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ context.ApplyConfig(loggo.Config{"first.second": loggo.TRACE})
+ context.ResetLoggerLevels()
+ c.Assert(context.Config(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ })
+ c.Assert(context.CompleteConfig(), gc.DeepEquals,
+ loggo.Config{
+ "": loggo.WARNING,
+ "first": loggo.UNSPECIFIED,
+ "first.second": loggo.UNSPECIFIED,
+ })
+}
+
+func (*ContextSuite) TestWriterNamesNone(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ writers := context.WriterNames()
+ c.Assert(writers, gc.HasLen, 0)
+}
+
+func (*ContextSuite) TestAddWriterNoName(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ err := context.AddWriter("", nil)
+ c.Assert(err.Error(), gc.Equals, "name cannot be empty")
+}
+
+func (*ContextSuite) TestAddWriterNil(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ err := context.AddWriter("foo", nil)
+ c.Assert(err.Error(), gc.Equals, "writer cannot be nil")
+}
+
+func (*ContextSuite) TestNamedAddWriter(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ err := context.AddWriter("foo", &writer{name: "foo"})
+ c.Assert(err, gc.IsNil)
+ err = context.AddWriter("foo", &writer{name: "foo"})
+ c.Assert(err.Error(), gc.Equals, `context already has a writer named "foo"`)
+
+ writers := context.WriterNames()
+ c.Assert(writers, gc.DeepEquals, []string{"foo"})
+}
+
+func (*ContextSuite) TestRemoveWriter(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ w, err := context.RemoveWriter("unknown")
+ c.Assert(err.Error(), gc.Equals, `context has no writer named "unknown"`)
+ c.Assert(w, gc.IsNil)
+}
+
+func (*ContextSuite) TestRemoveWriterFound(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ original := &writer{name: "foo"}
+ err := context.AddWriter("foo", original)
+ c.Assert(err, gc.IsNil)
+ existing, err := context.RemoveWriter("foo")
+ c.Assert(err, gc.IsNil)
+ c.Assert(existing, gc.Equals, original)
+
+ writers := context.WriterNames()
+ c.Assert(writers, gc.HasLen, 0)
+}
+
+func (*ContextSuite) TestReplaceWriterNoName(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ existing, err := context.ReplaceWriter("", nil)
+ c.Assert(err.Error(), gc.Equals, "name cannot be empty")
+ c.Assert(existing, gc.IsNil)
+}
+
+func (*ContextSuite) TestReplaceWriterNil(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ existing, err := context.ReplaceWriter("foo", nil)
+ c.Assert(err.Error(), gc.Equals, "writer cannot be nil")
+ c.Assert(existing, gc.IsNil)
+}
+
+func (*ContextSuite) TestReplaceWriterNotFound(c *gc.C) {
+ context := loggo.NewContext(loggo.DEBUG)
+ existing, err := context.ReplaceWriter("foo", &writer{})
+ c.Assert(err.Error(), gc.Equals, `context has no writer named "foo"`)
+ c.Assert(existing, gc.IsNil)
+}
+
+func (*ContextSuite) TestMultipleWriters(c *gc.C) {
+ first := &writer{}
+ second := &writer{}
+ third := &writer{}
+ context := loggo.NewContext(loggo.TRACE)
+ err := context.AddWriter("first", first)
+ c.Assert(err, gc.IsNil)
+ err = context.AddWriter("second", second)
+ c.Assert(err, gc.IsNil)
+ err = context.AddWriter("third", third)
+ c.Assert(err, gc.IsNil)
+
+ logger := context.GetLogger("test")
+ logAllSeverities(logger)
+
+ expected := []loggo.Entry{
+ {Level: loggo.CRITICAL, Module: "test", Message: "something critical"},
+ {Level: loggo.ERROR, Module: "test", Message: "an error"},
+ {Level: loggo.WARNING, Module: "test", Message: "a warning message"},
+ {Level: loggo.INFO, Module: "test", Message: "an info message"},
+ {Level: loggo.DEBUG, Module: "test", Message: "a debug message"},
+ {Level: loggo.TRACE, Module: "test", Message: "a trace message"},
+ }
+
+ checkLogEntries(c, first.Log(), expected)
+ checkLogEntries(c, second.Log(), expected)
+ checkLogEntries(c, third.Log(), expected)
+}
+
+type writer struct {
+ loggo.TestWriter
+	// The name exists to distinguish writers in equality checks.
+ name string
+}
diff --git a/vendor/github.com/juju/loggo/dependencies.tsv b/vendor/github.com/juju/loggo/dependencies.tsv
new file mode 100644
index 0000000..2b7e4d0
--- /dev/null
+++ b/vendor/github.com/juju/loggo/dependencies.tsv
@@ -0,0 +1,6 @@
+github.com/juju/ansiterm git b99631de12cf04a906c1d4e4ec54fb86eae5863d 2016-09-07T23:45:32Z
+github.com/lunixbochs/vtclean git 4fbf7632a2c6d3fbdb9931439bdbbeded02cbe36 2016-01-25T03:51:06Z
+github.com/mattn/go-colorable git ed8eb9e318d7a84ce5915b495b7d35e0cfe7b5a8 2016-07-31T23:54:17Z
+github.com/mattn/go-isatty git 66b8e73f3f5cda9f96b69efd03dd3d7fc4a5cdb8 2016-08-06T12:27:52Z
+golang.org/x/sys git 9bb9f0998d48b31547d975974935ae9b48c7a03c 2016-10-12T00:19:20Z
+gopkg.in/check.v1 git 4f90aeace3a26ad7021961c297b22c42160c7b25 2016-01-05T16:49:36Z
diff --git a/vendor/github.com/juju/loggo/doc.go b/vendor/github.com/juju/loggo/doc.go
new file mode 100644
index 0000000..754733c
--- /dev/null
+++ b/vendor/github.com/juju/loggo/doc.go
@@ -0,0 +1,47 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+/*
+[godoc-link-here]
+
+Module level logging for Go
+
+This package provides an alternative to the standard library log package.
+
+The actual logging functions never return errors. If you are logging
+something, you really don't want to be worried about the logging
+having trouble.
+
+Modules have names that are defined by dotted strings.
+ "first.second.third"
+
+There is a root module that has the name `""`. Each module
+(except the root module) has a parent, identified by the part of
+the name without the last dotted value.
+* the parent of "first.second.third" is "first.second"
+* the parent of "first.second" is "first"
+* the parent of "first" is "" (the root module)
+
+Each module can specify its own severity level. Logging calls that are of
+a lower severity than the module's effective severity level are not written
+out.
+
+Loggers are created using the GetLogger function.
+ logger := loggo.GetLogger("foo.bar")
+
+By default there is one writer registered, which will write to Stderr,
+and the root module, which will only emit warnings and above.
+If you want to continue using the default writer, but have it
+emit all logging levels, set the root module's logging level:
+
+	logger := loggo.GetLogger("")
+	logger.SetLogLevel(loggo.TRACE)
+
+To make loggo produce colored output, you can do the following,
+having imported github.com/juju/loggo/loggocolor:
+
+ loggo.RemoveWriter("default")
+ loggo.RegisterWriter("default", loggocolor.NewWriter(os.Stderr))
+*/
+package loggo
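Reviewer's note on the module hierarchy described in the doc comment above: the parent relationship is purely lexical. A minimal stdlib-only sketch (not part of loggo; `parentName` is a hypothetical helper) of the dotted-name rule:

```go
package main

import (
	"fmt"
	"strings"
)

// parentName applies the dotted-name rule from the doc comment:
// a module's parent is everything before the last dot, and a
// single-segment name's parent is the root module "".
func parentName(module string) string {
	if idx := strings.LastIndex(module, "."); idx >= 0 {
		return module[:idx]
	}
	return "" // root module
}

func main() {
	fmt.Println(parentName("first.second.third")) // first.second
	fmt.Println(parentName("first.second"))       // first
	fmt.Println(parentName("first") == "")        // true (root module)
}
```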
diff --git a/vendor/github.com/juju/loggo/entry.go b/vendor/github.com/juju/loggo/entry.go
new file mode 100644
index 0000000..b61aa54
--- /dev/null
+++ b/vendor/github.com/juju/loggo/entry.go
@@ -0,0 +1,22 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import "time"
+
+// Entry represents a single log message.
+type Entry struct {
+ // Level is the severity of the log message.
+ Level Level
+ // Module is the dotted module name from the logger.
+ Module string
+	// Filename is the full path of the file that logged the message.
+ Filename string
+ // Line is the line number of the Filename.
+ Line int
+	// Timestamp is when the log message was created.
+	Timestamp time.Time
+	// Message is the formatted string from the log call.
+ Message string
+}
diff --git a/vendor/github.com/juju/loggo/export_test.go b/vendor/github.com/juju/loggo/export_test.go
new file mode 100644
index 0000000..8810064
--- /dev/null
+++ b/vendor/github.com/juju/loggo/export_test.go
@@ -0,0 +1,20 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+// WriterNames returns the names of the context's writers for testing purposes.
+func (c *Context) WriterNames() []string {
+ c.writersMutex.Lock()
+ defer c.writersMutex.Unlock()
+ var result []string
+ for name := range c.writers {
+ result = append(result, name)
+ }
+ return result
+}
+
+func ResetDefaultContext() {
+ ResetLogging()
+ DefaultContext().AddWriter(DefaultWriterName, defaultWriter())
+}
diff --git a/vendor/github.com/juju/loggo/formatter.go b/vendor/github.com/juju/loggo/formatter.go
new file mode 100644
index 0000000..ef8aa7a
--- /dev/null
+++ b/vendor/github.com/juju/loggo/formatter.go
@@ -0,0 +1,38 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "time"
+)
+
+// DefaultFormatter returns the parameters separated by spaces except for
+// filename and line which are separated by a colon. The timestamp is shown
+// to second resolution in UTC. For example:
+// 2016-07-02 15:04:05
+func DefaultFormatter(entry Entry) string {
+ ts := entry.Timestamp.In(time.UTC).Format("2006-01-02 15:04:05")
+ // Just get the basename from the filename
+ filename := filepath.Base(entry.Filename)
+ return fmt.Sprintf("%s %s %s %s:%d %s", ts, entry.Level, entry.Module, filename, entry.Line, entry.Message)
+}
+
+// TimeFormat is the time format used for the default writer.
+// This can be set with the environment variable LOGGO_TIME_FORMAT.
+var TimeFormat = initTimeFormat()
+
+func initTimeFormat() string {
+ format := os.Getenv("LOGGO_TIME_FORMAT")
+ if format != "" {
+ return format
+ }
+ return "15:04:05"
+}
+
+func formatTime(ts time.Time) string {
+ return ts.Format(TimeFormat)
+}
diff --git a/vendor/github.com/juju/loggo/formatter_test.go b/vendor/github.com/juju/loggo/formatter_test.go
new file mode 100644
index 0000000..d66379d
--- /dev/null
+++ b/vendor/github.com/juju/loggo/formatter_test.go
@@ -0,0 +1,32 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "time"
+
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/loggo"
+)
+
+type formatterSuite struct{}
+
+var _ = gc.Suite(&formatterSuite{})
+
+func (*formatterSuite) TestDefaultFormat(c *gc.C) {
+	location, err := time.LoadLocation("UTC")
+	c.Assert(err, gc.IsNil)
+	testTime := time.Date(2013, 5, 3, 10, 53, 24, 123456, location)
+ entry := loggo.Entry{
+ Level: loggo.WARNING,
+ Module: "test.module",
+ Filename: "some/deep/filename",
+ Line: 42,
+ Timestamp: testTime,
+ Message: "hello world!",
+ }
+ formatted := loggo.DefaultFormatter(entry)
+ c.Assert(formatted, gc.Equals, "2013-05-03 10:53:24 WARNING test.module filename:42 hello world!")
+}
diff --git a/vendor/github.com/juju/loggo/global.go b/vendor/github.com/juju/loggo/global.go
new file mode 100644
index 0000000..7cf95ca
--- /dev/null
+++ b/vendor/github.com/juju/loggo/global.go
@@ -0,0 +1,85 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+var (
+	defaultContext = newDefaultContext()
+)
+
+func newDefaultContext() *Context {
+ ctx := NewContext(WARNING)
+ ctx.AddWriter(DefaultWriterName, defaultWriter())
+ return ctx
+}
+
+// DefaultContext returns the global default logging context.
+func DefaultContext() *Context {
+ return defaultContext
+}
+
+// LoggerInfo returns information about the configured loggers and their
+// logging levels. The information is returned in the format expected by
+// ConfigureLoggers. Loggers with UNSPECIFIED level will not
+// be included.
+func LoggerInfo() string {
+ return defaultContext.Config().String()
+}
+
+// GetLogger returns a Logger for the given module name,
+// creating it and its parents if necessary.
+func GetLogger(name string) Logger {
+ return defaultContext.GetLogger(name)
+}
+
+// ResetLogging iterates through the known modules and sets the levels of all
+// to UNSPECIFIED, except for the root module, which is set to WARNING. The
+// call also removes all writers in the DefaultContext and puts the original
+// default writer back as the only writer.
+func ResetLogging() {
+ defaultContext.ResetLoggerLevels()
+ defaultContext.ResetWriters()
+}
+
+// ResetWriters puts the list of writers back into the initial state.
+func ResetWriters() {
+ defaultContext.ResetWriters()
+}
+
+// ReplaceDefaultWriter is a convenience method that does the equivalent of
+// RemoveWriter and then RegisterWriter with the name "default". The previous
+// default writer, if any, is returned.
+func ReplaceDefaultWriter(writer Writer) (Writer, error) {
+ return defaultContext.ReplaceWriter(DefaultWriterName, writer)
+}
+
+// RegisterWriter adds the writer to the list of writers in the DefaultContext
+// that get notified when logging. If there is already a registered writer
+// with that name, an error is returned.
+func RegisterWriter(name string, writer Writer) error {
+ return defaultContext.AddWriter(name, writer)
+}
+
+// RemoveWriter removes the Writer identified by 'name' and returns it.
+// If the Writer is not found, an error is returned.
+func RemoveWriter(name string) (Writer, error) {
+ return defaultContext.RemoveWriter(name)
+}
+
+// ConfigureLoggers configures loggers according to the given string
+// specification, which specifies a set of modules and their associated
+// logging levels. Loggers are colon- or semicolon-separated; each
+// module is specified as <modulename>=<level>. White space outside of
+// module names and levels is ignored. The root module is specified
+// with the name "".
+//
+// An example specification:
+// `=ERROR; foo.bar=WARNING`
+func ConfigureLoggers(specification string) error {
+ config, err := ParseConfigString(specification)
+ if err != nil {
+ return err
+ }
+ defaultContext.ApplyConfig(config)
+ return nil
+}
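Reviewer's note on the `ConfigureLoggers` specification format above: the accepted string can be sketched with plain stdlib splitting. This is an illustrative sketch only (`parseSpec` is a hypothetical helper); loggo's real `ParseConfigString` additionally validates level names and rejects malformed entries.

```go
package main

import (
	"fmt"
	"strings"
)

// parseSpec splits a "<modulename>=<level>" specification of the
// kind ConfigureLoggers accepts into a map. Illustrative only: no
// level validation is performed here.
func parseSpec(spec string) map[string]string {
	config := make(map[string]string)
	// Entries may be separated by ':' or ';'.
	entries := strings.FieldsFunc(spec, func(r rune) bool {
		return r == ':' || r == ';'
	})
	for _, entry := range entries {
		parts := strings.SplitN(entry, "=", 2)
		if len(parts) != 2 {
			continue
		}
		module := strings.TrimSpace(parts[0]) // "" names the root module
		level := strings.ToUpper(strings.TrimSpace(parts[1]))
		config[module] = level
	}
	return config
}

func main() {
	fmt.Println(parseSpec("=ERROR; foo.bar=WARNING"))
}
```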
diff --git a/vendor/github.com/juju/loggo/global_test.go b/vendor/github.com/juju/loggo/global_test.go
new file mode 100644
index 0000000..60e6df2
--- /dev/null
+++ b/vendor/github.com/juju/loggo/global_test.go
@@ -0,0 +1,87 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "github.com/juju/loggo"
+ gc "gopkg.in/check.v1"
+)
+
+type GlobalSuite struct{}
+
+var _ = gc.Suite(&GlobalSuite{})
+
+func (*GlobalSuite) SetUpTest(c *gc.C) {
+ loggo.ResetDefaultContext()
+}
+
+func (*GlobalSuite) TestRootLogger(c *gc.C) {
+ var root loggo.Logger
+
+ got := loggo.GetLogger("")
+
+ c.Check(got.Name(), gc.Equals, root.Name())
+ c.Check(got.LogLevel(), gc.Equals, root.LogLevel())
+}
+
+func (*GlobalSuite) TestModuleName(c *gc.C) {
+ logger := loggo.GetLogger("loggo.testing")
+ c.Check(logger.Name(), gc.Equals, "loggo.testing")
+}
+
+func (*GlobalSuite) TestLevel(c *gc.C) {
+ logger := loggo.GetLogger("testing")
+ level := logger.LogLevel()
+ c.Check(level, gc.Equals, loggo.UNSPECIFIED)
+}
+
+func (*GlobalSuite) TestEffectiveLevel(c *gc.C) {
+ logger := loggo.GetLogger("testing")
+ level := logger.EffectiveLogLevel()
+ c.Check(level, gc.Equals, loggo.WARNING)
+}
+
+func (*GlobalSuite) TestLevelsSharedForSameModule(c *gc.C) {
+ logger1 := loggo.GetLogger("testing.module")
+ logger2 := loggo.GetLogger("testing.module")
+
+ logger1.SetLogLevel(loggo.INFO)
+ c.Assert(logger1.IsInfoEnabled(), gc.Equals, true)
+ c.Assert(logger2.IsInfoEnabled(), gc.Equals, true)
+}
+
+func (*GlobalSuite) TestModuleLowered(c *gc.C) {
+ logger1 := loggo.GetLogger("TESTING.MODULE")
+ logger2 := loggo.GetLogger("Testing")
+
+ c.Assert(logger1.Name(), gc.Equals, "testing.module")
+ c.Assert(logger2.Name(), gc.Equals, "testing")
+}
+
+func (s *GlobalSuite) TestConfigureLoggers(c *gc.C) {
+ err := loggo.ConfigureLoggers("testing.module=debug")
+ c.Assert(err, gc.IsNil)
+ expected := "=WARNING;testing.module=DEBUG"
+ c.Assert(loggo.DefaultContext().Config().String(), gc.Equals, expected)
+ c.Assert(loggo.LoggerInfo(), gc.Equals, expected)
+}
+
+func (*GlobalSuite) TestRegisterWriterExistingName(c *gc.C) {
+ err := loggo.RegisterWriter("default", &writer{})
+ c.Assert(err, gc.ErrorMatches, `context already has a writer named "default"`)
+}
+
+func (*GlobalSuite) TestReplaceDefaultWriter(c *gc.C) {
+ oldWriter, err := loggo.ReplaceDefaultWriter(&writer{})
+ c.Assert(oldWriter, gc.NotNil)
+ c.Assert(err, gc.IsNil)
+ c.Assert(loggo.DefaultContext().WriterNames(), gc.DeepEquals, []string{"default"})
+}
+
+func (*GlobalSuite) TestRemoveWriter(c *gc.C) {
+ oldWriter, err := loggo.RemoveWriter("default")
+ c.Assert(oldWriter, gc.NotNil)
+ c.Assert(err, gc.IsNil)
+ c.Assert(loggo.DefaultContext().WriterNames(), gc.HasLen, 0)
+}
diff --git a/vendor/github.com/juju/loggo/level.go b/vendor/github.com/juju/loggo/level.go
new file mode 100644
index 0000000..f6a5c4f
--- /dev/null
+++ b/vendor/github.com/juju/loggo/level.go
@@ -0,0 +1,102 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "strings"
+ "sync/atomic"
+)
+
+// The severity levels. Higher values are considered
+// more important.
+const (
+ UNSPECIFIED Level = iota
+ TRACE
+ DEBUG
+ INFO
+ WARNING
+ ERROR
+ CRITICAL
+)
+
+// Level holds a severity level.
+type Level uint32
+
+// ParseLevel converts a string representation of a logging level to a
+// Level. It returns the level and whether it was valid or not.
+func ParseLevel(level string) (Level, bool) {
+ level = strings.ToUpper(level)
+ switch level {
+ case "UNSPECIFIED":
+ return UNSPECIFIED, true
+ case "TRACE":
+ return TRACE, true
+ case "DEBUG":
+ return DEBUG, true
+ case "INFO":
+ return INFO, true
+ case "WARN", "WARNING":
+ return WARNING, true
+ case "ERROR":
+ return ERROR, true
+ case "CRITICAL":
+ return CRITICAL, true
+ default:
+ return UNSPECIFIED, false
+ }
+}
+
+// String implements Stringer.
+func (level Level) String() string {
+ switch level {
+ case UNSPECIFIED:
+ return "UNSPECIFIED"
+ case TRACE:
+ return "TRACE"
+ case DEBUG:
+ return "DEBUG"
+ case INFO:
+ return "INFO"
+ case WARNING:
+ return "WARNING"
+ case ERROR:
+ return "ERROR"
+ case CRITICAL:
+ return "CRITICAL"
+ default:
+ return ""
+ }
+}
+
+// Short returns a five-character string to use in
+// aligned logging output.
+func (level Level) Short() string {
+ switch level {
+ case TRACE:
+ return "TRACE"
+ case DEBUG:
+ return "DEBUG"
+ case INFO:
+ return "INFO "
+ case WARNING:
+ return "WARN "
+ case ERROR:
+ return "ERROR"
+ case CRITICAL:
+ return "CRITC"
+ default:
+ return " "
+ }
+}
+
+// get atomically gets the value of the receiver.
+func (level *Level) get() Level {
+ return Level(atomic.LoadUint32((*uint32)(level)))
+}
+
+// set atomically sets the value of the receiver
+// to the given level.
+func (level *Level) set(newLevel Level) {
+ atomic.StoreUint32((*uint32)(level), uint32(newLevel))
+}
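Reviewer's note on the atomic `get`/`set` pair above: backing `Level` with a `uint32` is what makes level changes safe to perform concurrently with logging, without a mutex. A self-contained sketch of the same pattern (the `level` type here is a stand-in, not loggo's):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// level mirrors loggo's uint32-backed Level so reads and writes
// can go through sync/atomic instead of a mutex.
type level uint32

func (l *level) get() level  { return level(atomic.LoadUint32((*uint32)(l))) }
func (l *level) set(v level) { atomic.StoreUint32((*uint32)(l), uint32(v)) }

func main() {
	var lv level
	var wg sync.WaitGroup
	// Concurrent writers and readers: safe because every access
	// goes through the atomic helpers, never a plain load/store.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(v level) {
			defer wg.Done()
			lv.set(v)
			_ = lv.get()
		}(level(i))
	}
	wg.Wait()
	fmt.Println(lv.get() < 4) // true: the last write was one of 0..3
}
```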
diff --git a/vendor/github.com/juju/loggo/level_test.go b/vendor/github.com/juju/loggo/level_test.go
new file mode 100644
index 0000000..084cf0c
--- /dev/null
+++ b/vendor/github.com/juju/loggo/level_test.go
@@ -0,0 +1,96 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/loggo"
+)
+
+type LevelSuite struct{}
+
+var _ = gc.Suite(&LevelSuite{})
+
+var parseLevelTests = []struct {
+ str string
+ level loggo.Level
+ fail bool
+}{{
+ str: "trace",
+ level: loggo.TRACE,
+}, {
+ str: "TrAce",
+ level: loggo.TRACE,
+}, {
+ str: "TRACE",
+ level: loggo.TRACE,
+}, {
+ str: "debug",
+ level: loggo.DEBUG,
+}, {
+ str: "DEBUG",
+ level: loggo.DEBUG,
+}, {
+ str: "info",
+ level: loggo.INFO,
+}, {
+ str: "INFO",
+ level: loggo.INFO,
+}, {
+ str: "warn",
+ level: loggo.WARNING,
+}, {
+ str: "WARN",
+ level: loggo.WARNING,
+}, {
+ str: "warning",
+ level: loggo.WARNING,
+}, {
+ str: "WARNING",
+ level: loggo.WARNING,
+}, {
+ str: "error",
+ level: loggo.ERROR,
+}, {
+ str: "ERROR",
+ level: loggo.ERROR,
+}, {
+ str: "critical",
+ level: loggo.CRITICAL,
+}, {
+ str: "not_specified",
+ fail: true,
+}, {
+ str: "other",
+ fail: true,
+}, {
+ str: "",
+ fail: true,
+}}
+
+func (s *LevelSuite) TestParseLevel(c *gc.C) {
+ for _, test := range parseLevelTests {
+ level, ok := loggo.ParseLevel(test.str)
+ c.Assert(level, gc.Equals, test.level)
+ c.Assert(ok, gc.Equals, !test.fail)
+ }
+}
+
+var levelStringValueTests = map[loggo.Level]string{
+ loggo.UNSPECIFIED: "UNSPECIFIED",
+ loggo.DEBUG: "DEBUG",
+ loggo.TRACE: "TRACE",
+ loggo.INFO: "INFO",
+ loggo.WARNING: "WARNING",
+ loggo.ERROR: "ERROR",
+ loggo.CRITICAL: "CRITICAL",
+ loggo.Level(42): "", // other values are unknown
+}
+
+func (s *LevelSuite) TestLevelStringValue(c *gc.C) {
+ for level, str := range levelStringValueTests {
+ c.Assert(level.String(), gc.Equals, str)
+ }
+}
diff --git a/vendor/github.com/juju/loggo/logger.go b/vendor/github.com/juju/loggo/logger.go
new file mode 100644
index 0000000..fbdfd9e
--- /dev/null
+++ b/vendor/github.com/juju/loggo/logger.go
@@ -0,0 +1,176 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "fmt"
+ "runtime"
+ "time"
+)
+
+// A Logger represents a logging module. It has an associated logging
+// level which can be changed; messages of lesser severity will
+// be dropped. Loggers have a hierarchical relationship - see
+// the package documentation.
+//
+// The zero Logger value is usable - any messages logged
+// to it will be sent to the root Logger.
+type Logger struct {
+ impl *module
+}
+
+func (logger Logger) getModule() *module {
+ if logger.impl == nil {
+ return defaultContext.root
+ }
+ return logger.impl
+}
+
+// Name returns the logger's module name.
+func (logger Logger) Name() string {
+ return logger.getModule().Name()
+}
+
+// LogLevel returns the configured min log level of the logger.
+func (logger Logger) LogLevel() Level {
+ return logger.getModule().level
+}
+
+// EffectiveLogLevel returns the effective min log level of
+// the receiver - that is, messages with a lesser severity
+// level will be discarded.
+//
+// If the log level of the receiver is unspecified,
+// it will be taken from the effective log level of its
+// parent.
+func (logger Logger) EffectiveLogLevel() Level {
+ return logger.getModule().getEffectiveLogLevel()
+}
+
+// SetLogLevel sets the severity level of the given logger.
+// The root logger cannot be set to UNSPECIFIED level.
+// See EffectiveLogLevel for how this affects the
+// actual messages logged.
+func (logger Logger) SetLogLevel(level Level) {
+ logger.getModule().setLevel(level)
+}
+
+// Logf logs a printf-formatted message at the given level.
+// A message will be discarded if level is less than the
+// effective log level of the logger.
+// Note that the writers may also filter out messages that
+// are less than their registered minimum severity level.
+func (logger Logger) Logf(level Level, message string, args ...interface{}) {
+ logger.LogCallf(2, level, message, args...)
+}
+
+// LogCallf logs a printf-formatted message at the given level.
+// The location of the call is indicated by the calldepth argument.
+// A calldepth of 1 means the function that called this function.
+// A message will be discarded if level is less than the
+// effective log level of the logger.
+// Note that the writers may also filter out messages that
+// are less than their registered minimum severity level.
+func (logger Logger) LogCallf(calldepth int, level Level, message string, args ...interface{}) {
+ module := logger.getModule()
+ if !module.willWrite(level) {
+ return
+ }
+ // Gather time, and filename, line number.
+ now := time.Now() // get this early.
+ // Param to Caller is the call depth. Since this method is called from
+ // the Logger methods, we want the place that those were called from.
+ _, file, line, ok := runtime.Caller(calldepth + 1)
+ if !ok {
+ file = "???"
+ line = 0
+ }
+ // Trim newline off format string, following usual
+ // Go logging conventions.
+ if len(message) > 0 && message[len(message)-1] == '\n' {
+ message = message[0 : len(message)-1]
+ }
+
+ // To avoid having a proliferation of Info/Infof methods,
+ // only use Sprintf if there are any args, and rely on the
+ // `go vet` tool for the obvious cases where someone has forgotten
+ // to provide an arg.
+ formattedMessage := message
+ if len(args) > 0 {
+ formattedMessage = fmt.Sprintf(message, args...)
+ }
+ module.write(Entry{
+ Level: level,
+ Filename: file,
+ Line: line,
+ Timestamp: now,
+ Message: formattedMessage,
+ })
+}
+
+// Criticalf logs the printf-formatted message at critical level.
+func (logger Logger) Criticalf(message string, args ...interface{}) {
+ logger.Logf(CRITICAL, message, args...)
+}
+
+// Errorf logs the printf-formatted message at error level.
+func (logger Logger) Errorf(message string, args ...interface{}) {
+ logger.Logf(ERROR, message, args...)
+}
+
+// Warningf logs the printf-formatted message at warning level.
+func (logger Logger) Warningf(message string, args ...interface{}) {
+ logger.Logf(WARNING, message, args...)
+}
+
+// Infof logs the printf-formatted message at info level.
+func (logger Logger) Infof(message string, args ...interface{}) {
+ logger.Logf(INFO, message, args...)
+}
+
+// Debugf logs the printf-formatted message at debug level.
+func (logger Logger) Debugf(message string, args ...interface{}) {
+ logger.Logf(DEBUG, message, args...)
+}
+
+// Tracef logs the printf-formatted message at trace level.
+func (logger Logger) Tracef(message string, args ...interface{}) {
+ logger.Logf(TRACE, message, args...)
+}
+
+// IsLevelEnabled reports whether logging is enabled
+// for the given log level.
+func (logger Logger) IsLevelEnabled(level Level) bool {
+	return logger.getModule().willWrite(level)
+}
+
+// IsErrorEnabled reports whether logging is enabled
+// at error level.
+func (logger Logger) IsErrorEnabled() bool {
+	return logger.IsLevelEnabled(ERROR)
+}
+
+// IsWarningEnabled reports whether logging is enabled
+// at warning level.
+func (logger Logger) IsWarningEnabled() bool {
+	return logger.IsLevelEnabled(WARNING)
+}
+
+// IsInfoEnabled reports whether logging is enabled
+// at info level.
+func (logger Logger) IsInfoEnabled() bool {
+	return logger.IsLevelEnabled(INFO)
+}
+
+// IsDebugEnabled reports whether logging is enabled
+// at debug level.
+func (logger Logger) IsDebugEnabled() bool {
+	return logger.IsLevelEnabled(DEBUG)
+}
+
+// IsTraceEnabled reports whether logging is enabled
+// at trace level.
+func (logger Logger) IsTraceEnabled() bool {
+	return logger.IsLevelEnabled(TRACE)
+}
diff --git a/vendor/github.com/juju/loggo/logger_test.go b/vendor/github.com/juju/loggo/logger_test.go
new file mode 100644
index 0000000..353c229
--- /dev/null
+++ b/vendor/github.com/juju/loggo/logger_test.go
@@ -0,0 +1,139 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/loggo"
+)
+
+type LoggerSuite struct{}
+
+var _ = gc.Suite(&LoggerSuite{})
+
+func (*LoggerSuite) SetUpTest(c *gc.C) {
+ loggo.ResetDefaultContext()
+}
+
+func (s *LoggerSuite) TestRootLogger(c *gc.C) {
+ root := loggo.Logger{}
+ c.Check(root.Name(), gc.Equals, "")
+ c.Check(root.LogLevel(), gc.Equals, loggo.WARNING)
+ c.Check(root.IsErrorEnabled(), gc.Equals, true)
+ c.Check(root.IsWarningEnabled(), gc.Equals, true)
+ c.Check(root.IsInfoEnabled(), gc.Equals, false)
+ c.Check(root.IsDebugEnabled(), gc.Equals, false)
+ c.Check(root.IsTraceEnabled(), gc.Equals, false)
+}
+
+func (s *LoggerSuite) TestSetLevel(c *gc.C) {
+ logger := loggo.GetLogger("testing")
+
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.WARNING)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, true)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, false)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, false)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ logger.SetLogLevel(loggo.TRACE)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.TRACE)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.TRACE)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, true)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, true)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, true)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, true)
+ logger.SetLogLevel(loggo.DEBUG)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, true)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, true)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, true)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ logger.SetLogLevel(loggo.INFO)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.INFO)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.INFO)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, true)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, true)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, false)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ logger.SetLogLevel(loggo.WARNING)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.WARNING)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.WARNING)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, true)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, false)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, false)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ logger.SetLogLevel(loggo.ERROR)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, true)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, false)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, false)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, false)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ // This is added for completeness, but not really expected to be used.
+ logger.SetLogLevel(loggo.CRITICAL)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.CRITICAL)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.CRITICAL)
+ c.Assert(logger.IsErrorEnabled(), gc.Equals, false)
+ c.Assert(logger.IsWarningEnabled(), gc.Equals, false)
+ c.Assert(logger.IsInfoEnabled(), gc.Equals, false)
+ c.Assert(logger.IsDebugEnabled(), gc.Equals, false)
+ c.Assert(logger.IsTraceEnabled(), gc.Equals, false)
+ logger.SetLogLevel(loggo.UNSPECIFIED)
+ c.Assert(logger.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(logger.EffectiveLogLevel(), gc.Equals, loggo.WARNING)
+}
+
+func (s *LoggerSuite) TestModuleLowered(c *gc.C) {
+ logger1 := loggo.GetLogger("TESTING.MODULE")
+ logger2 := loggo.GetLogger("Testing")
+
+ c.Assert(logger1.Name(), gc.Equals, "testing.module")
+ c.Assert(logger2.Name(), gc.Equals, "testing")
+}
+
+func (s *LoggerSuite) TestLevelsInherited(c *gc.C) {
+ root := loggo.GetLogger("")
+ first := loggo.GetLogger("first")
+ second := loggo.GetLogger("first.second")
+
+ root.SetLogLevel(loggo.ERROR)
+ c.Assert(root.LogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(root.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(first.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(first.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(second.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(second.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+
+ first.SetLogLevel(loggo.DEBUG)
+ c.Assert(root.LogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(root.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(first.LogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(first.EffectiveLogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(second.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(second.EffectiveLogLevel(), gc.Equals, loggo.DEBUG)
+
+ second.SetLogLevel(loggo.INFO)
+ c.Assert(root.LogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(root.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(first.LogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(first.EffectiveLogLevel(), gc.Equals, loggo.DEBUG)
+ c.Assert(second.LogLevel(), gc.Equals, loggo.INFO)
+ c.Assert(second.EffectiveLogLevel(), gc.Equals, loggo.INFO)
+
+ first.SetLogLevel(loggo.UNSPECIFIED)
+ c.Assert(root.LogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(root.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(first.LogLevel(), gc.Equals, loggo.UNSPECIFIED)
+ c.Assert(first.EffectiveLogLevel(), gc.Equals, loggo.ERROR)
+ c.Assert(second.LogLevel(), gc.Equals, loggo.INFO)
+ c.Assert(second.EffectiveLogLevel(), gc.Equals, loggo.INFO)
+}
diff --git a/vendor/github.com/juju/loggo/logging_test.go b/vendor/github.com/juju/loggo/logging_test.go
new file mode 100644
index 0000000..fa62cb4
--- /dev/null
+++ b/vendor/github.com/juju/loggo/logging_test.go
@@ -0,0 +1,92 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "time"
+
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/loggo"
+)
+
+type LoggingSuite struct {
+ context *loggo.Context
+ writer *writer
+ logger loggo.Logger
+}
+
+var _ = gc.Suite(&LoggingSuite{})
+
+func (s *LoggingSuite) SetUpTest(c *gc.C) {
+ s.writer = &writer{}
+ s.context = loggo.NewContext(loggo.TRACE)
+ s.context.AddWriter("test", s.writer)
+ s.logger = s.context.GetLogger("test")
+}
+
+func (s *LoggingSuite) TestLoggingStrings(c *gc.C) {
+ s.logger.Infof("simple")
+ s.logger.Infof("with args %d", 42)
+ s.logger.Infof("working 100%")
+ s.logger.Infof("missing %s")
+
+ checkLogEntries(c, s.writer.Log(), []loggo.Entry{
+ {Level: loggo.INFO, Module: "test", Message: "simple"},
+ {Level: loggo.INFO, Module: "test", Message: "with args 42"},
+ {Level: loggo.INFO, Module: "test", Message: "working 100%"},
+ {Level: loggo.INFO, Module: "test", Message: "missing %s"},
+ })
+}
+
+func (s *LoggingSuite) TestLoggingLimitWarning(c *gc.C) {
+ s.logger.SetLogLevel(loggo.WARNING)
+ start := time.Now()
+ logAllSeverities(s.logger)
+ end := time.Now()
+ entries := s.writer.Log()
+ checkLogEntries(c, entries, []loggo.Entry{
+ {Level: loggo.CRITICAL, Module: "test", Message: "something critical"},
+ {Level: loggo.ERROR, Module: "test", Message: "an error"},
+ {Level: loggo.WARNING, Module: "test", Message: "a warning message"},
+ })
+
+ for _, entry := range entries {
+ c.Check(entry.Timestamp, Between(start, end))
+ }
+}
+
+func (s *LoggingSuite) TestLocationCapture(c *gc.C) {
+ s.logger.Criticalf("critical message") //tag critical-location
+ s.logger.Errorf("error message") //tag error-location
+ s.logger.Warningf("warning message") //tag warning-location
+ s.logger.Infof("info message") //tag info-location
+ s.logger.Debugf("debug message") //tag debug-location
+ s.logger.Tracef("trace message") //tag trace-location
+
+ log := s.writer.Log()
+ tags := []string{
+ "critical-location",
+ "error-location",
+ "warning-location",
+ "info-location",
+ "debug-location",
+ "trace-location",
+ }
+ c.Assert(log, gc.HasLen, len(tags))
+ for x := range tags {
+ assertLocation(c, log[x], tags[x])
+ }
+}
+
+func (s *LoggingSuite) TestLogDoesntLogWeirdLevels(c *gc.C) {
+ s.logger.Logf(loggo.UNSPECIFIED, "message")
+ c.Assert(s.writer.Log(), gc.HasLen, 0)
+
+ s.logger.Logf(loggo.Level(42), "message")
+ c.Assert(s.writer.Log(), gc.HasLen, 0)
+
+ s.logger.Logf(loggo.CRITICAL+loggo.Level(1), "message")
+ c.Assert(s.writer.Log(), gc.HasLen, 0)
+}
diff --git a/vendor/github.com/juju/loggo/module.go b/vendor/github.com/juju/loggo/module.go
new file mode 100644
index 0000000..8153be5
--- /dev/null
+++ b/vendor/github.com/juju/loggo/module.go
@@ -0,0 +1,61 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+// Do not change rootString: modules.resolve() will misbehave if it isn't "".
+const (
+ rootString = ""
+ defaultRootLevel = WARNING
+ defaultLevel = UNSPECIFIED
+)
+
+type module struct {
+ name string
+ level Level
+ parent *module
+ context *Context
+}
+
+// Name returns the module's name.
+func (module *module) Name() string {
+ if module.name == "" {
+ return rootString
+ }
+ return module.name
+}
+
+func (m *module) willWrite(level Level) bool {
+ if level < TRACE || level > CRITICAL {
+ return false
+ }
+ return level >= m.getEffectiveLogLevel()
+}
+
+func (module *module) getEffectiveLogLevel() Level {
+ // Note: the root module is guaranteed to have a
+ // specified logging level, so acts as a suitable sentinel
+ // for this loop.
+ for {
+ if level := module.level.get(); level != UNSPECIFIED {
+ return level
+ }
+ module = module.parent
+ }
+ panic("unreachable")
+}
+
+// setLevel sets the severity level of the given module.
+// The root module cannot be set to UNSPECIFIED level.
+func (module *module) setLevel(level Level) {
+ // The root module can't be unspecified.
+ if module.name == "" && level == UNSPECIFIED {
+ level = WARNING
+ }
+ module.level.set(level)
+}
+
+func (m *module) write(entry Entry) {
+ entry.Module = m.name
+ m.context.write(entry)
+}
diff --git a/vendor/github.com/juju/loggo/package_test.go b/vendor/github.com/juju/loggo/package_test.go
new file mode 100644
index 0000000..791d65e
--- /dev/null
+++ b/vendor/github.com/juju/loggo/package_test.go
@@ -0,0 +1,14 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "testing"
+
+ gc "gopkg.in/check.v1"
+)
+
+func Test(t *testing.T) {
+ gc.TestingT(t)
+}
diff --git a/vendor/github.com/juju/loggo/testwriter.go b/vendor/github.com/juju/loggo/testwriter.go
new file mode 100644
index 0000000..b20e470
--- /dev/null
+++ b/vendor/github.com/juju/loggo/testwriter.go
@@ -0,0 +1,40 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "path"
+ "sync"
+)
+
+// TestWriter is a useful Writer for testing purposes. Each component of the
+// logging message is stored in the Log array.
+type TestWriter struct {
+ mu sync.Mutex
+ log []Entry
+}
+
+// Write appends the entry to the TestWriter's log slice, trimming the
+// filename down to its base name.
+func (writer *TestWriter) Write(entry Entry) {
+ writer.mu.Lock()
+ defer writer.mu.Unlock()
+ entry.Filename = path.Base(entry.Filename)
+ writer.log = append(writer.log, entry)
+}
+
+// Clear removes any saved log messages.
+func (writer *TestWriter) Clear() {
+ writer.mu.Lock()
+ defer writer.mu.Unlock()
+ writer.log = nil
+}
+
+// Log returns a copy of the current logged values.
+func (writer *TestWriter) Log() []Entry {
+ writer.mu.Lock()
+ defer writer.mu.Unlock()
+ v := make([]Entry, len(writer.log))
+ copy(v, writer.log)
+ return v
+}
diff --git a/vendor/github.com/juju/loggo/util_test.go b/vendor/github.com/juju/loggo/util_test.go
new file mode 100644
index 0000000..0230ec9
--- /dev/null
+++ b/vendor/github.com/juju/loggo/util_test.go
@@ -0,0 +1,68 @@
+// Copyright 2016 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "fmt"
+ "io/ioutil"
+ "strings"
+
+ "github.com/juju/loggo"
+
+ gc "gopkg.in/check.v1"
+)
+
+func init() {
+ setLocationsForTags("logging_test.go")
+ setLocationsForTags("writer_test.go")
+}
+
+func assertLocation(c *gc.C, msg loggo.Entry, tag string) {
+ loc := location(tag)
+ c.Assert(msg.Filename, gc.Equals, loc.file)
+ c.Assert(msg.Line, gc.Equals, loc.line)
+}
+
+// All this location machinery avoids hard-coding line numbers in the
+// tests. On any line whose file and line number you want to capture,
+// end the line with a comment of the form `//tag name`. The name must
+// be unique across all the tests; setLocationsForTags panics on a
+// duplicate. The tag is then used to look up the actual file and line
+// numbers.
+
+func location(tag string) Location {
+ loc, ok := tagToLocation[tag]
+ if !ok {
+ panic(fmt.Errorf("tag %q not found", tag))
+ }
+ return loc
+}
+
+type Location struct {
+ file string
+ line int
+}
+
+func (loc Location) String() string {
+ return fmt.Sprintf("%s:%d", loc.file, loc.line)
+}
+
+var tagToLocation = make(map[string]Location)
+
+func setLocationsForTags(filename string) {
+ data, err := ioutil.ReadFile(filename)
+ if err != nil {
+ panic(err)
+ }
+ lines := strings.Split(string(data), "\n")
+ for i, line := range lines {
+ if j := strings.Index(line, "//tag "); j >= 0 {
+ tag := line[j+len("//tag "):]
+ if _, found := tagToLocation[tag]; found {
+ panic(fmt.Errorf("tag %q already processed previously", tag))
+ }
+ tagToLocation[tag] = Location{file: filename, line: i + 1}
+ }
+ }
+}
diff --git a/vendor/github.com/juju/loggo/writer.go b/vendor/github.com/juju/loggo/writer.go
new file mode 100644
index 0000000..d6b9c23
--- /dev/null
+++ b/vendor/github.com/juju/loggo/writer.go
@@ -0,0 +1,70 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo
+
+import (
+ "fmt"
+ "io"
+ "os"
+)
+
+// DefaultWriterName is the name of the default writer for
+// a Context.
+const DefaultWriterName = "default"
+
+// Writer is implemented by any recipient of log messages.
+type Writer interface {
+ // Write writes a message to the Writer. The entry carries the log
+ // level, the module name, the filename and line number of the code
+ // that generated the message, the timestamp, and the message itself.
+ Write(entry Entry)
+}
+
+// NewMinimumLevelWriter returns a Writer that will only pass on the Write
+// calls to the provided writer if the log level is at or above the
+// specified minimum level.
+func NewMinimumLevelWriter(writer Writer, minLevel Level) Writer {
+ return &minLevelWriter{
+ writer: writer,
+ level: minLevel,
+ }
+}
+
+type minLevelWriter struct {
+ writer Writer
+ level Level
+}
+
+// Write writes the log record.
+func (w minLevelWriter) Write(entry Entry) {
+ if entry.Level < w.level {
+ return
+ }
+ w.writer.Write(entry)
+}
+
+type simpleWriter struct {
+ writer io.Writer
+ formatter func(entry Entry) string
+}
+
+// NewSimpleWriter returns a new writer that writes log messages to the given
+// io.Writer formatting the messages with the given formatter.
+func NewSimpleWriter(writer io.Writer, formatter func(entry Entry) string) Writer {
+ if formatter == nil {
+ formatter = DefaultFormatter
+ }
+ return &simpleWriter{writer, formatter}
+}
+
+func (simple *simpleWriter) Write(entry Entry) {
+ logLine := simple.formatter(entry)
+ fmt.Fprintln(simple.writer, logLine)
+}
+
+func defaultWriter() Writer {
+ return NewSimpleWriter(os.Stderr, DefaultFormatter)
+}
diff --git a/vendor/github.com/juju/loggo/writer_test.go b/vendor/github.com/juju/loggo/writer_test.go
new file mode 100644
index 0000000..8be9924
--- /dev/null
+++ b/vendor/github.com/juju/loggo/writer_test.go
@@ -0,0 +1,37 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package loggo_test
+
+import (
+ "bytes"
+ "time"
+
+ gc "gopkg.in/check.v1"
+
+ "github.com/juju/loggo"
+)
+
+type SimpleWriterSuite struct{}
+
+var _ = gc.Suite(&SimpleWriterSuite{})
+
+func (s *SimpleWriterSuite) TestNewSimpleWriter(c *gc.C) {
+ now := time.Now()
+ formatter := func(entry loggo.Entry) string {
+ return "<< " + entry.Message + " >>"
+ }
+ buf := &bytes.Buffer{}
+
+ writer := loggo.NewSimpleWriter(buf, formatter)
+ writer.Write(loggo.Entry{
+ Level: loggo.INFO,
+ Module: "test",
+ Filename: "somefile.go",
+ Line: 12,
+ Timestamp: now,
+ Message: "a message",
+ })
+
+ c.Check(buf.String(), gc.Equals, "<< a message >>\n")
+}
diff --git a/vendor/github.com/juju/testing/checkers/LICENSE-golang b/vendor/github.com/juju/testing/checkers/LICENSE-golang
new file mode 100644
index 0000000..7448756
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/LICENSE-golang
@@ -0,0 +1,27 @@
+Copyright (c) 2012 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/juju/testing/checkers/bool.go b/vendor/github.com/juju/testing/checkers/bool.go
new file mode 100644
index 0000000..02e3eec
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/bool.go
@@ -0,0 +1,117 @@
+// Copyright 2011 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "fmt"
+ "reflect"
+
+ gc "gopkg.in/check.v1"
+)
+
+type isTrueChecker struct {
+ *gc.CheckerInfo
+}
+
+// IsTrue checks whether a value has an underlying
+// boolean type and is true.
+var IsTrue gc.Checker = &isTrueChecker{
+ &gc.CheckerInfo{Name: "IsTrue", Params: []string{"obtained"}},
+}
+
+// IsFalse checks whether a value has an underlying
+// boolean type and is false.
+var IsFalse gc.Checker = gc.Not(IsTrue)
+
+func (checker *isTrueChecker) Check(params []interface{}, names []string) (result bool, error string) {
+
+ value := reflect.ValueOf(params[0])
+ if !value.IsValid() {
+ return false, fmt.Sprintf("expected type bool, received %s", value)
+ }
+ switch value.Kind() {
+ case reflect.Bool:
+ return value.Bool(), ""
+ }
+
+ return false, fmt.Sprintf("expected type bool, received type %s", value.Type())
+}
+
+type satisfiesChecker struct {
+ *gc.CheckerInfo
+}
+
+// Satisfies checks whether a value causes the argument
+// function to return true. The function must be of
+// type func(T) bool where the value being checked
+// is assignable to T.
+var Satisfies gc.Checker = &satisfiesChecker{
+ &gc.CheckerInfo{
+ Name: "Satisfies",
+ Params: []string{"obtained", "func(T) bool"},
+ },
+}
+
+func (checker *satisfiesChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ f := reflect.ValueOf(params[1])
+ ft := f.Type()
+ if ft.Kind() != reflect.Func ||
+ ft.NumIn() != 1 ||
+ ft.NumOut() != 1 ||
+ ft.Out(0) != reflect.TypeOf(true) {
+ return false, fmt.Sprintf("expected func(T) bool, got %s", ft)
+ }
+ v := reflect.ValueOf(params[0])
+ if !v.IsValid() {
+ if !canBeNil(ft.In(0)) {
+ return false, fmt.Sprintf("cannot assign nil to argument %T", ft.In(0))
+ }
+ v = reflect.Zero(ft.In(0))
+ }
+ if !v.Type().AssignableTo(ft.In(0)) {
+ return false, fmt.Sprintf("wrong argument type %s for %s", v.Type(), ft)
+ }
+ return f.Call([]reflect.Value{v})[0].Interface().(bool), ""
+}
+
+func canBeNil(t reflect.Type) bool {
+ switch t.Kind() {
+ case reflect.Chan,
+ reflect.Func,
+ reflect.Interface,
+ reflect.Map,
+ reflect.Ptr,
+ reflect.Slice:
+ return true
+ }
+ return false
+}
+
+type deepEqualsChecker struct {
+ *gc.CheckerInfo
+}
+
+// The DeepEquals checker verifies that the obtained value is deep-equal to
+// the expected value. The check will work correctly even when facing
+// slices, interfaces, and values of different types (which always fail
+// the test).
+//
+// For example:
+//
+// c.Assert(value, DeepEquals, 42)
+// c.Assert(array, DeepEquals, []string{"hi", "there"})
+//
+// This checker differs from gocheck.DeepEquals in that
+// it will compare a nil slice equal to an empty slice,
+// and a nil map equal to an empty map.
+var DeepEquals gc.Checker = &deepEqualsChecker{
+ &gc.CheckerInfo{Name: "DeepEquals", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *deepEqualsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ if ok, err := DeepEqual(params[0], params[1]); !ok {
+ return false, err.Error()
+ }
+ return true, ""
+}
diff --git a/vendor/github.com/juju/testing/checkers/bool_test.go b/vendor/github.com/juju/testing/checkers/bool_test.go
new file mode 100644
index 0000000..887a265
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/bool_test.go
@@ -0,0 +1,125 @@
+// Copyright 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ "errors"
+ "os"
+
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+type BoolSuite struct{}
+
+var _ = gc.Suite(&BoolSuite{})
+
+func (s *BoolSuite) TestIsTrue(c *gc.C) {
+ c.Assert(true, jc.IsTrue)
+ c.Assert(false, gc.Not(jc.IsTrue))
+
+ result, msg := jc.IsTrue.Check([]interface{}{false}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(msg, gc.Equals, "")
+
+ result, msg = jc.IsTrue.Check([]interface{}{"foo"}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Check(msg, gc.Equals, `expected type bool, received type string`)
+
+ result, msg = jc.IsTrue.Check([]interface{}{42}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(msg, gc.Equals, `expected type bool, received type int`)
+
+ result, msg = jc.IsTrue.Check([]interface{}{nil}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(msg, gc.Matches, `expected type bool, received `)
+}
+
+func (s *BoolSuite) TestIsFalse(c *gc.C) {
+ c.Check(false, jc.IsFalse)
+ c.Check(true, gc.Not(jc.IsFalse))
+}
+
+func is42(i int) bool {
+ return i == 42
+}
+
+var satisfiesTests = []struct {
+ f interface{}
+ arg interface{}
+ result bool
+ msg string
+}{{
+ f: is42,
+ arg: 42,
+ result: true,
+}, {
+ f: is42,
+ arg: 41,
+ result: false,
+}, {
+ f: is42,
+ arg: "",
+ result: false,
+ msg: "wrong argument type string for func(int) bool",
+}, {
+ f: os.IsNotExist,
+ arg: errors.New("foo"),
+ result: false,
+}, {
+ f: os.IsNotExist,
+ arg: os.ErrNotExist,
+ result: true,
+}, {
+ f: os.IsNotExist,
+ arg: nil,
+ result: false,
+}, {
+ f: func(chan int) bool { return true },
+ arg: nil,
+ result: true,
+}, {
+ f: func(func()) bool { return true },
+ arg: nil,
+ result: true,
+}, {
+ f: func(interface{}) bool { return true },
+ arg: nil,
+ result: true,
+}, {
+ f: func(map[string]bool) bool { return true },
+ arg: nil,
+ result: true,
+}, {
+ f: func(*int) bool { return true },
+ arg: nil,
+ result: true,
+}, {
+ f: func([]string) bool { return true },
+ arg: nil,
+ result: true,
+}}
+
+func (s *BoolSuite) TestSatisfies(c *gc.C) {
+ for i, test := range satisfiesTests {
+ c.Logf("test %d. %T %T", i, test.f, test.arg)
+ result, msg := jc.Satisfies.Check([]interface{}{test.arg, test.f}, nil)
+ c.Check(result, gc.Equals, test.result)
+ c.Check(msg, gc.Equals, test.msg)
+ }
+}
+
+func (s *BoolSuite) TestDeepEquals(c *gc.C) {
+ for i, test := range deepEqualTests {
+ c.Logf("test %d. %v == %v is %v", i, test.a, test.b, test.eq)
+ result, msg := jc.DeepEquals.Check([]interface{}{test.a, test.b}, nil)
+ c.Check(result, gc.Equals, test.eq)
+ if test.eq {
+ c.Check(msg, gc.Equals, "")
+ } else {
+ c.Check(msg, gc.Not(gc.Equals), "")
+ }
+ }
+}
diff --git a/vendor/github.com/juju/testing/checkers/checker.go b/vendor/github.com/juju/testing/checkers/checker.go
new file mode 100644
index 0000000..2be481a
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/checker.go
@@ -0,0 +1,255 @@
+// Copyright 2012, 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "fmt"
+ "reflect"
+ "strings"
+ "time"
+
+ gc "gopkg.in/check.v1"
+)
+
+func TimeBetween(start, end time.Time) gc.Checker {
+ if end.Before(start) {
+ return &timeBetweenChecker{end, start}
+ }
+ return &timeBetweenChecker{start, end}
+}
+
+type timeBetweenChecker struct {
+ start, end time.Time
+}
+
+func (checker *timeBetweenChecker) Info() *gc.CheckerInfo {
+ info := gc.CheckerInfo{
+ Name: "TimeBetween",
+ Params: []string{"obtained"},
+ }
+ return &info
+}
+
+func (checker *timeBetweenChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ when, ok := params[0].(time.Time)
+ if !ok {
+ return false, "obtained value type must be time.Time"
+ }
+ if when.Before(checker.start) {
+ return false, fmt.Sprintf("obtained time %q is before start time %q", when, checker.start)
+ }
+ if when.After(checker.end) {
+ return false, fmt.Sprintf("obtained time %q is after end time %q", when, checker.end)
+ }
+ return true, ""
+}
+
+// DurationLessThan checker
+
+type durationLessThanChecker struct {
+ *gc.CheckerInfo
+}
+
+var DurationLessThan gc.Checker = &durationLessThanChecker{
+ &gc.CheckerInfo{Name: "DurationLessThan", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *durationLessThanChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ obtained, ok := params[0].(time.Duration)
+ if !ok {
+ return false, "obtained value type must be time.Duration"
+ }
+ expected, ok := params[1].(time.Duration)
+ if !ok {
+ return false, "expected value type must be time.Duration"
+ }
+ return obtained.Nanoseconds() < expected.Nanoseconds(), ""
+}
+
+// HasPrefix checker for checking strings
+
+func stringOrStringer(value interface{}) (string, bool) {
+ result, isString := value.(string)
+ if !isString {
+ if stringer, isStringer := value.(fmt.Stringer); isStringer {
+ result, isString = stringer.String(), true
+ }
+ }
+ return result, isString
+}
+
+type hasPrefixChecker struct {
+ *gc.CheckerInfo
+}
+
+var HasPrefix gc.Checker = &hasPrefixChecker{
+ &gc.CheckerInfo{Name: "HasPrefix", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *hasPrefixChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ expected, ok := params[1].(string)
+ if !ok {
+ return false, "expected must be a string"
+ }
+
+ obtained, isString := stringOrStringer(params[0])
+ if isString {
+ return strings.HasPrefix(obtained, expected), ""
+ }
+
+ return false, "Obtained value is not a string and has no .String()"
+}
+
+type hasSuffixChecker struct {
+ *gc.CheckerInfo
+}
+
+var HasSuffix gc.Checker = &hasSuffixChecker{
+ &gc.CheckerInfo{Name: "HasSuffix", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *hasSuffixChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ expected, ok := params[1].(string)
+ if !ok {
+ return false, "expected must be a string"
+ }
+
+ obtained, isString := stringOrStringer(params[0])
+ if isString {
+ return strings.HasSuffix(obtained, expected), ""
+ }
+
+ return false, "Obtained value is not a string and has no .String()"
+}
+
+type containsChecker struct {
+ *gc.CheckerInfo
+}
+
+var Contains gc.Checker = &containsChecker{
+ &gc.CheckerInfo{Name: "Contains", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *containsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ expected, ok := params[1].(string)
+ if !ok {
+ return false, "expected must be a string"
+ }
+
+ obtained, isString := stringOrStringer(params[0])
+ if isString {
+ return strings.Contains(obtained, expected), ""
+ }
+
+ return false, "Obtained value is not a string and has no .String()"
+}
+
+type sameContents struct {
+ *gc.CheckerInfo
+}
+
+// SameContents checks that the obtained slice contains all the values (and
+// the same number of values) of the expected slice and vice versa, without
+// regard to order. Each slice is reduced to a map of element counts, and
+// the two maps are compared with reflect.DeepEqual.
+var SameContents gc.Checker = &sameContents{
+ &gc.CheckerInfo{Name: "SameContents", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *sameContents) Check(params []interface{}, names []string) (result bool, error string) {
+ if len(params) != 2 {
+ return false, "SameContents expects two slice arguments"
+ }
+ obtained := params[0]
+ expected := params[1]
+
+ tob := reflect.TypeOf(obtained)
+ if tob.Kind() != reflect.Slice {
+ return false, fmt.Sprintf("SameContents expects the obtained value to be a slice, got %q",
+ tob.Kind())
+ }
+
+ texp := reflect.TypeOf(expected)
+ if texp.Kind() != reflect.Slice {
+ return false, fmt.Sprintf("SameContents expects the expected value to be a slice, got %q",
+ texp.Kind())
+ }
+
+ if texp != tob {
+ return false, fmt.Sprintf(
+ "SameContents expects two slices of the same type, expected: %q, got: %q",
+ texp, tob)
+ }
+
+ vexp := reflect.ValueOf(expected)
+ vob := reflect.ValueOf(obtained)
+ length := vexp.Len()
+
+ if vob.Len() != length {
+ // Slice has incorrect number of elements
+ return false, ""
+ }
+
+ // spin up maps with the entries as keys and the counts as values
+ mob := make(map[interface{}]int, length)
+ mexp := make(map[interface{}]int, length)
+
+ for i := 0; i < length; i++ {
+ mexp[reflect.Indirect(vexp.Index(i)).Interface()]++
+ mob[reflect.Indirect(vob.Index(i)).Interface()]++
+ }
+
+ return reflect.DeepEqual(mob, mexp), ""
+}
+
+type errorIsNilChecker struct {
+ *gc.CheckerInfo
+}
+
+// The ErrorIsNil checker tests whether the obtained value is nil.
+// It succeeds only for an untyped nil; a typed nil fails the check.
+//
+// For example:
+//
+// c.Assert(err, ErrorIsNil)
+//
+var ErrorIsNil gc.Checker = &errorIsNilChecker{
+ &gc.CheckerInfo{Name: "ErrorIsNil", Params: []string{"value"}},
+}
+
+type ErrorStacker interface {
+ error
+ StackTrace() []string
+}
+
+func (checker *errorIsNilChecker) Check(params []interface{}, names []string) (bool, string) {
+ result, message := errorIsNil(params[0])
+ if !result {
+ if stacker, ok := params[0].(ErrorStacker); ok && message == "" {
+ stack := stacker.StackTrace()
+ if stack != nil {
+ message = "error stack:\n\t" + strings.Join(stack, "\n\t")
+ }
+ }
+ }
+ return result, message
+}
+
+func errorIsNil(obtained interface{}) (result bool, message string) {
+ if obtained == nil {
+ return true, ""
+ }
+
+ if _, ok := obtained.(error); !ok {
+ return false, fmt.Sprintf("obtained type (%T) is not an error", obtained)
+ }
+
+ switch v := reflect.ValueOf(obtained); v.Kind() {
+ case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
+ if v.IsNil() {
+ return false, fmt.Sprintf("value of (%T) is nil, but a typed nil", obtained)
+ }
+ }
+
+ return false, ""
+}
diff --git a/vendor/github.com/juju/testing/checkers/checker_test.go b/vendor/github.com/juju/testing/checkers/checker_test.go
new file mode 100644
index 0000000..987035d
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/checker_test.go
@@ -0,0 +1,268 @@
+// Copyright 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ "fmt"
+ "testing"
+ "time"
+
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+func Test(t *testing.T) { gc.TestingT(t) }
+
+type CheckerSuite struct{}
+
+var _ = gc.Suite(&CheckerSuite{})
+
+func (s *CheckerSuite) TestHasPrefix(c *gc.C) {
+ c.Assert("foo bar", jc.HasPrefix, "foo")
+ c.Assert("foo bar", gc.Not(jc.HasPrefix), "omg")
+}
+
+func (s *CheckerSuite) TestHasSuffix(c *gc.C) {
+ c.Assert("foo bar", jc.HasSuffix, "bar")
+ c.Assert("foo bar", gc.Not(jc.HasSuffix), "omg")
+}
+
+func (s *CheckerSuite) TestContains(c *gc.C) {
+ c.Assert("foo bar baz", jc.Contains, "foo")
+ c.Assert("foo bar baz", jc.Contains, "bar")
+ c.Assert("foo bar baz", jc.Contains, "baz")
+ c.Assert("foo bar baz", gc.Not(jc.Contains), "omg")
+}
+
+func (s *CheckerSuite) TestTimeBetween(c *gc.C) {
+ now := time.Now()
+ earlier := now.Add(-1 * time.Second)
+ later := now.Add(time.Second)
+
+ checkOK := func(value interface{}, start, end time.Time) {
+ checker := jc.TimeBetween(start, end)
+ value, msg := checker.Check([]interface{}{value}, nil)
+ c.Check(value, jc.IsTrue)
+ c.Check(msg, gc.Equals, "")
+ }
+
+ checkFails := func(value interface{}, start, end time.Time, match string) {
+ checker := jc.TimeBetween(start, end)
+ value, msg := checker.Check([]interface{}{value}, nil)
+ c.Check(value, jc.IsFalse)
+ c.Check(msg, gc.Matches, match)
+ }
+
+ checkOK(now, earlier, later)
+ // Later can be before earlier...
+ checkOK(now, later, earlier)
+ // check at bounds
+ checkOK(earlier, earlier, later)
+ checkOK(later, earlier, later)
+
+ checkFails(earlier, now, later, `obtained time .* is before start time .*`)
+ checkFails(later, now, earlier, `obtained time .* is after end time .*`)
+ checkFails(42, now, earlier, `obtained value type must be time.Time`)
+}
+
+type someStruct struct {
+ a uint
+}
+
+func (s *CheckerSuite) TestSameContents(c *gc.C) {
+ //// positive cases ////
+
+ // same
+ c.Check(
+ []int{1, 2, 3}, jc.SameContents,
+ []int{1, 2, 3})
+
+ // empty
+ c.Check(
+ []int{}, jc.SameContents,
+ []int{})
+
+ // single
+ c.Check(
+ []int{1}, jc.SameContents,
+ []int{1})
+
+ // different order
+ c.Check(
+ []int{1, 2, 3}, jc.SameContents,
+ []int{3, 2, 1})
+
+ // multiple copies of same
+ c.Check(
+ []int{1, 1, 2}, jc.SameContents,
+ []int{2, 1, 1})
+
+ type test struct {
+ s string
+ i int
+ }
+
+ // test structs
+ c.Check(
+ []test{{"a", 1}, {"b", 2}}, jc.SameContents,
+ []test{{"b", 2}, {"a", 1}})
+
+ //// negative cases ////
+
+ // different contents
+ c.Check(
+ []int{1, 3, 2, 5}, gc.Not(jc.SameContents),
+ []int{5, 2, 3, 4})
+
+ // different size slices
+ c.Check(
+ []int{1, 2, 3}, gc.Not(jc.SameContents),
+ []int{1, 2})
+
+ // different counts of same items
+ c.Check(
+ []int{1, 1, 2}, gc.Not(jc.SameContents),
+ []int{1, 2, 2})
+
+ // Tests that check that we compare the contents of structs,
+ // that we point to, not just the pointers to them.
+ a1 := someStruct{1}
+ a2 := someStruct{2}
+ a3 := someStruct{3}
+ b1 := someStruct{1}
+ b2 := someStruct{2}
+ // Same order, same contents
+ c.Check(
+ []*someStruct{&a1, &a2}, jc.SameContents,
+ []*someStruct{&b1, &b2})
+
+ // Empty vs not
+ c.Check(
+ []*someStruct{&a1, &a2}, gc.Not(jc.SameContents),
+ []*someStruct{})
+
+ // Empty vs empty
+ // Same order, same contents
+ c.Check(
+ []*someStruct{}, jc.SameContents,
+ []*someStruct{})
+
+ // Different order, same contents
+ c.Check(
+ []*someStruct{&a1, &a2}, jc.SameContents,
+ []*someStruct{&b2, &b1})
+
+ // different contents
+ c.Check(
+ []*someStruct{&a3, &a2}, gc.Not(jc.SameContents),
+ []*someStruct{&b2, &b1})
+
+ // Different sizes, same contents (duplicate item)
+ c.Check(
+ []*someStruct{&a1, &a2, &a1}, gc.Not(jc.SameContents),
+ []*someStruct{&b2, &b1})
+
+ // Different sizes, same contents
+ c.Check(
+ []*someStruct{&a1, &a1, &a2}, gc.Not(jc.SameContents),
+ []*someStruct{&b2, &b1})
+
+ // Same sizes, same contents, different quantities
+ c.Check(
+ []*someStruct{&a1, &a2, &a2}, gc.Not(jc.SameContents),
+ []*someStruct{&b1, &b1, &b2})
+
+ /// Error cases ///
+ // Note: for these tests we can't use gc.Not, since Not passes the error
+ // message through, and a check with a non-empty error message always
+ // counts as failed. Oddly, there doesn't seem to be a way to check for
+ // an error from a Checker.
+
+ // different type
+ res, err := jc.SameContents.Check([]interface{}{
+ []string{"1", "2"},
+ []int{1, 2},
+ }, []string{})
+ c.Check(res, jc.IsFalse)
+ c.Check(err, gc.Not(gc.Equals), "")
+
+ // obtained not a slice
+ res, err = jc.SameContents.Check([]interface{}{
+ "test",
+ []int{1},
+ }, []string{})
+ c.Check(res, jc.IsFalse)
+ c.Check(err, gc.Not(gc.Equals), "")
+
+ // expected not a slice
+ res, err = jc.SameContents.Check([]interface{}{
+ []int{1},
+ "test",
+ }, []string{})
+ c.Check(res, jc.IsFalse)
+ c.Check(err, gc.Not(gc.Equals), "")
+}
+
+type stack_error struct {
+ message string
+ stack []string
+}
+
+type embedded struct {
+ typed *stack_error
+ err error
+}
+
+func (s *stack_error) Error() string {
+ return s.message
+}
+func (s *stack_error) StackTrace() []string {
+ return s.stack
+}
+
+type value_error string
+
+func (e value_error) Error() string {
+ return string(e)
+}
+
+func (s *CheckerSuite) TestErrorIsNil(c *gc.C) {
+ checkOK := func(value interface{}) {
+ value, msg := jc.ErrorIsNil.Check([]interface{}{value}, nil)
+ c.Check(value, jc.IsTrue)
+ c.Check(msg, gc.Equals, "")
+ }
+
+ checkFails := func(value interface{}, match string) {
+ value, msg := jc.ErrorIsNil.Check([]interface{}{value}, nil)
+ c.Check(value, jc.IsFalse)
+ c.Check(msg, gc.Matches, match)
+ }
+
+ var typedNil *stack_error
+ var typedNilAsInterface error = typedNil
+ var nilError error
+ var value value_error
+ var emptyValueErrorAsInterface error = value
+ var embed embedded
+
+ checkOK(nil)
+ checkOK(nilError)
+ checkOK(embed.err)
+
+ checkFails([]string{}, `obtained type \(.*\) is not an error`)
+ checkFails("", `obtained type \(.*\) is not an error`)
+ checkFails(embed.typed, `value of \(.*\) is nil, but a typed nil`)
+ checkFails(typedNilAsInterface, `value of \(.*\) is nil, but a typed nil`)
+ checkFails(fmt.Errorf("an error"), "")
+ checkFails(value, "")
+ checkFails(emptyValueErrorAsInterface, "")
+
+ emptyStack := &stack_error{"message", nil}
+ checkFails(emptyStack, "")
+
+ withStack := &stack_error{"message", []string{
+ "filename:line", "filename2:line2"}}
+ checkFails(withStack, "error stack:\n\tfilename:line\n\tfilename2:line2")
+}
diff --git a/vendor/github.com/juju/testing/checkers/codec.go b/vendor/github.com/juju/testing/checkers/codec.go
new file mode 100644
index 0000000..844effd
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/codec.go
@@ -0,0 +1,87 @@
+// Copyright 2012-2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "encoding/json"
+ "fmt"
+
+ gc "gopkg.in/check.v1"
+ "gopkg.in/mgo.v2/bson"
+ "gopkg.in/yaml.v2"
+)
+
+type codecEqualChecker struct {
+ name string
+ marshal func(interface{}) ([]byte, error)
+ unmarshal func([]byte, interface{}) error
+}
+
+// BSONEquals defines a checker that checks whether a byte slice, when
+// unmarshaled as BSON, is equal to the given value. Rather than
+// unmarshaling into something of the expected body type, we reform
+// the expected body in BSON and back to interface{} so we can check
+// the whole content. Otherwise we lose information when unmarshaling.
+var BSONEquals = &codecEqualChecker{
+ name: "BSONEquals",
+ marshal: bson.Marshal,
+ unmarshal: bson.Unmarshal,
+}
+
+// JSONEquals defines a checker that checks whether a byte slice, when
+// unmarshaled as JSON, is equal to the given value.
+// Rather than unmarshaling into something of the expected
+// body type, we reform the expected body in JSON and
+// back to interface{}, so we can check the whole content.
+// Otherwise we lose information when unmarshaling.
+var JSONEquals = &codecEqualChecker{
+ name: "JSONEquals",
+ marshal: json.Marshal,
+ unmarshal: json.Unmarshal,
+}
+
+// YAMLEquals defines a checker that checks whether a byte slice, when
+// unmarshaled as YAML, is equal to the given value.
+// Rather than unmarshaling into something of the expected
+// body type, we reform the expected body in YAML and
+// back to interface{}, so we can check the whole content.
+// Otherwise we lose information when unmarshaling.
+var YAMLEquals = &codecEqualChecker{
+ name: "YAMLEquals",
+ marshal: yaml.Marshal,
+ unmarshal: yaml.Unmarshal,
+}
+
+func (checker *codecEqualChecker) Info() *gc.CheckerInfo {
+ return &gc.CheckerInfo{
+ Name: checker.name,
+ Params: []string{"obtained", "expected"},
+ }
+}
+
+func (checker *codecEqualChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ gotContent, ok := params[0].(string)
+ if !ok {
+ return false, fmt.Sprintf("expected string, got %T", params[0])
+ }
+ expectContent := params[1]
+ expectContentBytes, err := checker.marshal(expectContent)
+ if err != nil {
+ return false, fmt.Sprintf("cannot marshal expected contents: %v", err)
+ }
+ var expectContentVal interface{}
+ if err := checker.unmarshal(expectContentBytes, &expectContentVal); err != nil {
+ return false, fmt.Sprintf("cannot unmarshal expected contents: %v", err)
+ }
+
+ var gotContentVal interface{}
+ if err := checker.unmarshal([]byte(gotContent), &gotContentVal); err != nil {
+ return false, fmt.Sprintf("cannot unmarshal obtained contents: %v; %q", err, gotContent)
+ }
+
+ if ok, err := DeepEqual(gotContentVal, expectContentVal); !ok {
+ return false, err.Error()
+ }
+ return true, ""
+}
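The round-trip strategy the comments above describe (marshal the expected value, then unmarshal both sides to `interface{}` before comparing, so no information is lost to the concrete type) can be sketched with the standard library alone. `jsonEqual` below is a hypothetical helper, not part of the checkers package, and it uses `reflect.DeepEqual` where the real checker uses the package's own `DeepEqual`:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonEqual reports whether obtained (a JSON string) encodes the same
// content as expected (any marshalable value). Like codecEqualChecker,
// it round-trips the expected value through the codec and compares both
// sides as interface{}, so struct tags and omitted fields are respected.
func jsonEqual(obtained string, expected interface{}) (bool, error) {
	expBytes, err := json.Marshal(expected)
	if err != nil {
		return false, fmt.Errorf("cannot marshal expected contents: %v", err)
	}
	var expVal, gotVal interface{}
	if err := json.Unmarshal(expBytes, &expVal); err != nil {
		return false, fmt.Errorf("cannot unmarshal expected contents: %v", err)
	}
	if err := json.Unmarshal([]byte(obtained), &gotVal); err != nil {
		return false, fmt.Errorf("cannot unmarshal obtained contents: %v", err)
	}
	return reflect.DeepEqual(gotVal, expVal), nil
}

func main() {
	type point struct {
		X int `json:"x"`
		Y int `json:"y,omitempty"`
	}
	ok, _ := jsonEqual(`{"x": 1}`, point{X: 1})
	fmt.Println(ok) // true: omitempty drops Y on both sides
}
```

The same shape works for YAML or BSON by swapping the marshal/unmarshal pair, which is exactly what `codecEqualChecker` parameterizes.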
diff --git a/vendor/github.com/juju/testing/checkers/codec_test.go b/vendor/github.com/juju/testing/checkers/codec_test.go
new file mode 100644
index 0000000..a061cca
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/codec_test.go
@@ -0,0 +1,157 @@
+// Copyright 2014 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+type Inner struct {
+ First string
+ Second int `json:",omitempty" yaml:",omitempty"`
+ Third map[string]bool `json:",omitempty" yaml:",omitempty"`
+}
+
+type Outer struct {
+ First float64
+ Second []*Inner `json:"Last,omitempty" yaml:"last,omitempty"`
+}
+
+func (s *CheckerSuite) TestJSONEquals(c *gc.C) {
+ tests := []struct {
+ descr string
+ obtained string
+ expected *Outer
+ result bool
+ msg string
+ }{
+ {
+ descr: "very simple",
+ obtained: `{"First": 47.11}`,
+ expected: &Outer{
+ First: 47.11,
+ },
+ result: true,
+ }, {
+ descr: "nested",
+ obtained: `{"First": 47.11, "Last": [{"First": "Hello", "Second": 42}]}`,
+ expected: &Outer{
+ First: 47.11,
+ Second: []*Inner{
+ {First: "Hello", Second: 42},
+ },
+ },
+ result: true,
+ }, {
+ descr: "nested with newline",
+ obtained: `{"First": 47.11, "Last": [{"First": "Hello", "Second": 42},
+ {"First": "World", "Third": {"T": true, "F": false}}]}`,
+ expected: &Outer{
+ First: 47.11,
+ Second: []*Inner{
+ {First: "Hello", Second: 42},
+ {First: "World", Third: map[string]bool{
+ "F": false,
+ "T": true,
+ }},
+ },
+ },
+ result: true,
+ }, {
+ descr: "illegal field",
+ obtained: `{"NotThere": 47.11}`,
+ expected: &Outer{
+ First: 47.11,
+ },
+ result: false,
+ msg: `mismatch at .*: validity mismatch; .*`,
+ }, {
+ descr: "illegal obtained content",
+ obtained: `{"NotThere": `,
+ result: false,
+ msg: `cannot unmarshal obtained contents: unexpected end of JSON input; .*`,
+ },
+ }
+ for i, test := range tests {
+ c.Logf("test #%d) %s", i, test.descr)
+ result, msg := jc.JSONEquals.Check([]interface{}{test.obtained, test.expected}, nil)
+ c.Check(result, gc.Equals, test.result)
+ c.Check(msg, gc.Matches, test.msg)
+ }
+
+ // Test non-string input.
+ result, msg := jc.JSONEquals.Check([]interface{}{true, true}, nil)
+ c.Check(result, gc.Equals, false)
+ c.Check(msg, gc.Matches, "expected string, got bool")
+}
+
+func (s *CheckerSuite) TestYAMLEquals(c *gc.C) {
+ tests := []struct {
+ descr string
+ obtained string
+ expected *Outer
+ result bool
+ msg string
+ }{
+ {
+ descr: "very simple",
+ obtained: `first: 47.11`,
+ expected: &Outer{
+ First: 47.11,
+ },
+ result: true,
+ }, {
+ descr: "nested",
+ obtained: `{first: 47.11, last: [{first: 'Hello', second: 42}]}`,
+ expected: &Outer{
+ First: 47.11,
+ Second: []*Inner{
+ {First: "Hello", Second: 42},
+ },
+ },
+ result: true,
+ }, {
+ descr: "nested with newline",
+ obtained: `{first: 47.11, last: [{first: 'Hello', second: 42},
+ {first: 'World', third: {t: true, f: false}}]}`,
+ expected: &Outer{
+ First: 47.11,
+ Second: []*Inner{
+ {First: "Hello", Second: 42},
+ {First: "World", Third: map[string]bool{
+ "f": false,
+ "t": true,
+ }},
+ },
+ },
+ result: true,
+ }, {
+ descr: "illegal field",
+ obtained: `{"NotThere": 47.11}`,
+ expected: &Outer{
+ First: 47.11,
+ },
+ result: false,
+ msg: `mismatch at .*: validity mismatch; .*`,
+ }, {
+ descr: "illegal obtained content",
+ obtained: `{"NotThere": `,
+ result: false,
+ msg: `cannot unmarshal obtained contents: yaml: line 1: .*`,
+ },
+ }
+ for i, test := range tests {
+ c.Logf("test #%d) %s", i, test.descr)
+ result, msg := jc.YAMLEquals.Check([]interface{}{test.obtained, test.expected}, nil)
+ c.Check(result, gc.Equals, test.result)
+ c.Check(msg, gc.Matches, test.msg)
+ }
+
+ // Test non-string input.
+ result, msg := jc.YAMLEquals.Check([]interface{}{true, true}, nil)
+ c.Check(result, gc.Equals, false)
+ c.Check(msg, gc.Matches, "expected string, got bool")
+}
diff --git a/vendor/github.com/juju/testing/checkers/deepequal.go b/vendor/github.com/juju/testing/checkers/deepequal.go
new file mode 100644
index 0000000..43567fa
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/deepequal.go
@@ -0,0 +1,341 @@
+// Copied with small adaptations from the reflect package in the
+// Go source tree.
+
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE-golang file.
+
+package checkers
+
+import (
+ "fmt"
+ "reflect"
+ "time"
+ "unsafe"
+)
+
+var timeType = reflect.TypeOf(time.Time{})
+
+// During deepValueEqual, must keep track of checks that are
+// in progress. The comparison algorithm assumes that all
+// checks in progress are true when it reencounters them.
+// Visited comparisons are stored in a map indexed by visit.
+type visit struct {
+ a1 uintptr
+ a2 uintptr
+ typ reflect.Type
+}
+
+type mismatchError struct {
+ v1, v2 reflect.Value
+ path string
+ how string
+}
+
+func (err *mismatchError) Error() string {
+ path := err.path
+ if path == "" {
+ path = "top level"
+ }
+ return fmt.Sprintf("mismatch at %s: %s; obtained %#v; expected %#v", path, err.how, printable(err.v1), printable(err.v2))
+}
+
+func printable(v reflect.Value) interface{} {
+ vi := interfaceOf(v)
+ switch vi := vi.(type) {
+ case time.Time:
+ return vi.UTC().Format(time.RFC3339Nano)
+ default:
+ return vi
+ }
+}
+
+// Tests for deep equality using reflected types. The map argument tracks
+// comparisons that have already been seen, which allows short circuiting on
+// recursive types.
+func deepValueEqual(path string, v1, v2 reflect.Value, visited map[visit]bool, depth int) (ok bool, err error) {
+ errorf := func(f string, a ...interface{}) error {
+ return &mismatchError{
+ v1: v1,
+ v2: v2,
+ path: path,
+ how: fmt.Sprintf(f, a...),
+ }
+ }
+ if !v1.IsValid() || !v2.IsValid() {
+ if v1.IsValid() == v2.IsValid() {
+ return true, nil
+ }
+ return false, errorf("validity mismatch")
+ }
+ if v1.Type() != v2.Type() {
+ return false, errorf("type mismatch %s vs %s", v1.Type(), v2.Type())
+ }
+
+ // if depth > 10 { panic("deepValueEqual") } // for debugging
+ hard := func(k reflect.Kind) bool {
+ switch k {
+ case reflect.Array, reflect.Map, reflect.Slice, reflect.Struct:
+ return true
+ }
+ return false
+ }
+
+ if v1.CanAddr() && v2.CanAddr() && hard(v1.Kind()) {
+ addr1 := v1.UnsafeAddr()
+ addr2 := v2.UnsafeAddr()
+ if addr1 > addr2 {
+ // Canonicalize order to reduce number of entries in visited.
+ addr1, addr2 = addr2, addr1
+ }
+
+ // Short circuit if references are identical ...
+ if addr1 == addr2 {
+ return true, nil
+ }
+
+ // ... or already seen
+ typ := v1.Type()
+ v := visit{addr1, addr2, typ}
+ if visited[v] {
+ return true, nil
+ }
+
+ // Remember for later.
+ visited[v] = true
+ }
+
+ switch v1.Kind() {
+ case reflect.Array:
+ if v1.Len() != v2.Len() {
+ // can't happen!
+ return false, errorf("length mismatch, %d vs %d", v1.Len(), v2.Len())
+ }
+ for i := 0; i < v1.Len(); i++ {
+ if ok, err := deepValueEqual(
+ fmt.Sprintf("%s[%d]", path, i),
+ v1.Index(i), v2.Index(i), visited, depth+1); !ok {
+ return false, err
+ }
+ }
+ return true, nil
+ case reflect.Slice:
+ // We treat a nil slice the same as an empty slice.
+ if v1.Len() != v2.Len() {
+ return false, errorf("length mismatch, %d vs %d", v1.Len(), v2.Len())
+ }
+ if v1.Pointer() == v2.Pointer() {
+ return true, nil
+ }
+ for i := 0; i < v1.Len(); i++ {
+ if ok, err := deepValueEqual(
+ fmt.Sprintf("%s[%d]", path, i),
+ v1.Index(i), v2.Index(i), visited, depth+1); !ok {
+ return false, err
+ }
+ }
+ return true, nil
+ case reflect.Interface:
+ if v1.IsNil() || v2.IsNil() {
+ if v1.IsNil() != v2.IsNil() {
+ return false, errorf("nil vs non-nil interface mismatch")
+ }
+ return true, nil
+ }
+ return deepValueEqual(path, v1.Elem(), v2.Elem(), visited, depth+1)
+ case reflect.Ptr:
+ return deepValueEqual("(*"+path+")", v1.Elem(), v2.Elem(), visited, depth+1)
+ case reflect.Struct:
+ if v1.Type() == timeType {
+ // Special case for time - we ignore the time zone.
+ t1 := interfaceOf(v1).(time.Time)
+ t2 := interfaceOf(v2).(time.Time)
+ if t1.Equal(t2) {
+ return true, nil
+ }
+ return false, errorf("unequal")
+ }
+ for i, n := 0, v1.NumField(); i < n; i++ {
+ path := path + "." + v1.Type().Field(i).Name
+ if ok, err := deepValueEqual(path, v1.Field(i), v2.Field(i), visited, depth+1); !ok {
+ return false, err
+ }
+ }
+ return true, nil
+ case reflect.Map:
+ if v1.IsNil() != v2.IsNil() {
+ return false, errorf("nil vs non-nil mismatch")
+ }
+ if v1.Len() != v2.Len() {
+ return false, errorf("length mismatch, %d vs %d", v1.Len(), v2.Len())
+ }
+ if v1.Pointer() == v2.Pointer() {
+ return true, nil
+ }
+ for _, k := range v1.MapKeys() {
+ var p string
+ if k.CanInterface() {
+ p = path + "[" + fmt.Sprintf("%#v", k.Interface()) + "]"
+ } else {
+ p = path + "[someKey]"
+ }
+ if ok, err := deepValueEqual(p, v1.MapIndex(k), v2.MapIndex(k), visited, depth+1); !ok {
+ return false, err
+ }
+ }
+ return true, nil
+ case reflect.Func:
+ if v1.IsNil() && v2.IsNil() {
+ return true, nil
+ }
+ // Can't do better than this:
+ return false, errorf("non-nil functions")
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ if v1.Int() != v2.Int() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.Uint, reflect.Uintptr, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ if v1.Uint() != v2.Uint() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.Float32, reflect.Float64:
+ if v1.Float() != v2.Float() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.Complex64, reflect.Complex128:
+ if v1.Complex() != v2.Complex() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.Bool:
+ if v1.Bool() != v2.Bool() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.String:
+ if v1.String() != v2.String() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ case reflect.Chan, reflect.UnsafePointer:
+ if v1.Pointer() != v2.Pointer() {
+ return false, errorf("unequal")
+ }
+ return true, nil
+ default:
+ panic("unexpected type " + v1.Type().String())
+ }
+}
+
+// DeepEqual tests for deep equality. It uses normal == equality where
+// possible but will scan elements of arrays, slices, maps, and fields
+// of structs. In maps, keys are compared with == but elements use deep
+// equality. DeepEqual correctly handles recursive types. Functions are
+// equal only if they are both nil.
+//
+// DeepEqual differs from reflect.DeepEqual in two ways:
+// - an empty slice is considered equal to a nil slice.
+// - two time.Time values that represent the same instant
+// but with different time zones are considered equal.
+//
+// If the two values compare unequal, the resulting error holds the
+// first difference encountered.
+func DeepEqual(a1, a2 interface{}) (bool, error) {
+ errorf := func(f string, a ...interface{}) error {
+ return &mismatchError{
+ v1: reflect.ValueOf(a1),
+ v2: reflect.ValueOf(a2),
+ path: "",
+ how: fmt.Sprintf(f, a...),
+ }
+ }
+ if a1 == nil || a2 == nil {
+ if a1 == a2 {
+ return true, nil
+ }
+ return false, errorf("nil vs non-nil mismatch")
+ }
+ v1 := reflect.ValueOf(a1)
+ v2 := reflect.ValueOf(a2)
+ if v1.Type() != v2.Type() {
+ return false, errorf("type mismatch %s vs %s", v1.Type(), v2.Type())
+ }
+ return deepValueEqual("", v1, v2, make(map[visit]bool), 0)
+}
+
+// interfaceOf returns v.Interface() even if v.CanInterface() == false.
+// This enables us to call fmt.Printf on a value even if it's derived
+// from inside an unexported field.
+// See https://code.google.com/p/go/issues/detail?id=8965
+// for a possible future alternative to this hack.
+func interfaceOf(v reflect.Value) interface{} {
+ if !v.IsValid() {
+ return nil
+ }
+ return bypassCanInterface(v).Interface()
+}
+
+type flag uintptr
+
+var flagRO flag
+
+// constants copied from reflect/value.go
+const (
+ // The value of flagRO up to and including Go 1.3.
+ flagRO1p3 = 1 << 0
+
+ // The value of flagRO from Go 1.4.
+ flagRO1p4 = 1 << 5
+)
+
+var flagValOffset = func() uintptr {
+ field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
+ if !ok {
+ panic("reflect.Value has no flag field")
+ }
+ return field.Offset
+}()
+
+func flagField(v *reflect.Value) *flag {
+ return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
+}
+
+// bypassCanInterface returns a version of v that
+// bypasses the CanInterface check.
+func bypassCanInterface(v reflect.Value) reflect.Value {
+ if !v.IsValid() || v.CanInterface() {
+ return v
+ }
+ *flagField(&v) &^= flagRO
+ return v
+}
+
+// Sanity checks against future reflect package changes
+// to the type or semantics of the Value.flag field.
+func init() {
+ field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
+ if !ok {
+ panic("reflect.Value has no flag field")
+ }
+ if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
+ panic("reflect.Value flag field has changed kind")
+ }
+ var t struct {
+ a int
+ A int
+ }
+ vA := reflect.ValueOf(t).FieldByName("A")
+ va := reflect.ValueOf(t).FieldByName("a")
+ flagA := *flagField(&vA)
+ flaga := *flagField(&va)
+
+ // Infer flagRO from the difference between the flags
+ // for the (otherwise identical) fields in t.
+ flagRO = flagA ^ flaga
+ if flagRO != flagRO1p3 && flagRO != flagRO1p4 {
+ panic("reflect.Value read-only flag has changed semantics")
+ }
+}
diff --git a/vendor/github.com/juju/testing/checkers/deepequal_test.go b/vendor/github.com/juju/testing/checkers/deepequal_test.go
new file mode 100644
index 0000000..3fbd4d1
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/deepequal_test.go
@@ -0,0 +1,188 @@
+// Copied with small adaptations from the reflect package in the
+// Go source tree. We use testing rather than gocheck to preserve
+// as much source equivalence as possible.
+
+// TODO tests for error messages
+
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE-golang file.
+
+package checkers_test
+
+import (
+ "regexp"
+ "testing"
+ "time"
+
+ "github.com/juju/testing/checkers"
+)
+
+func deepEqual(a1, a2 interface{}) bool {
+ ok, _ := checkers.DeepEqual(a1, a2)
+ return ok
+}
+
+type Basic struct {
+ x int
+ y float32
+}
+
+type NotBasic Basic
+
+type DeepEqualTest struct {
+ a, b interface{}
+ eq bool
+ msg string
+}
+
+// Simple functions for DeepEqual tests.
+var (
+ fn1 func() // nil.
+ fn2 func() // nil.
+ fn3 = func() { fn1() } // Not nil.
+)
+
+var deepEqualTests = []DeepEqualTest{
+ // Equalities
+ {nil, nil, true, ""},
+ {1, 1, true, ""},
+ {int32(1), int32(1), true, ""},
+ {0.5, 0.5, true, ""},
+ {float32(0.5), float32(0.5), true, ""},
+ {"hello", "hello", true, ""},
+ {make([]int, 10), make([]int, 10), true, ""},
+ {&[3]int{1, 2, 3}, &[3]int{1, 2, 3}, true, ""},
+ {Basic{1, 0.5}, Basic{1, 0.5}, true, ""},
+ {error(nil), error(nil), true, ""},
+ {map[int]string{1: "one", 2: "two"}, map[int]string{2: "two", 1: "one"}, true, ""},
+ {fn1, fn2, true, ""},
+ {time.Unix(0, 0), time.Unix(0, 0), true, ""},
+ // Same time from different zones (difference from normal DeepEqual)
+ {time.Unix(0, 0).UTC(), time.Unix(0, 0).In(time.FixedZone("FOO", 60*60)), true, ""},
+
+ // Inequalities
+ {1, 2, false, `mismatch at top level: unequal; obtained 1; expected 2`},
+ {int32(1), int32(2), false, `mismatch at top level: unequal; obtained 1; expected 2`},
+ {0.5, 0.6, false, `mismatch at top level: unequal; obtained 0\.5; expected 0\.6`},
+ {float32(0.5), float32(0.6), false, `mismatch at top level: unequal; obtained 0\.5; expected 0\.6`},
+ {"hello", "hey", false, `mismatch at top level: unequal; obtained "hello"; expected "hey"`},
+ {make([]int, 10), make([]int, 11), false, `mismatch at top level: length mismatch, 10 vs 11; obtained \[\]int\{0, 0, 0, 0, 0, 0, 0, 0, 0, 0\}; expected \[\]int\{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\}`},
+ {&[3]int{1, 2, 3}, &[3]int{1, 2, 4}, false, `mismatch at \(\*\)\[2\]: unequal; obtained 3; expected 4`},
+ {Basic{1, 0.5}, Basic{1, 0.6}, false, `mismatch at \.y: unequal; obtained 0\.5; expected 0\.6`},
+ {Basic{1, 0}, Basic{2, 0}, false, `mismatch at \.x: unequal; obtained 1; expected 2`},
+ {map[int]string{1: "one", 3: "two"}, map[int]string{2: "two", 1: "one"}, false, `mismatch at \[3\]: validity mismatch; obtained "two"; expected `},
+ {map[int]string{1: "one", 2: "txo"}, map[int]string{2: "two", 1: "one"}, false, `mismatch at \[2\]: unequal; obtained "txo"; expected "two"`},
+ {map[int]string{1: "one"}, map[int]string{2: "two", 1: "one"}, false, `mismatch at top level: length mismatch, 1 vs 2; obtained map\[int\]string\{1:"one"\}; expected map\[int\]string\{.*\}`},
+ {map[int]string{2: "two", 1: "one"}, map[int]string{1: "one"}, false, `mismatch at top level: length mismatch, 2 vs 1; obtained map\[int\]string\{.*\}; expected map\[int\]string\{1:"one"\}`},
+ {nil, 1, false, `mismatch at top level: nil vs non-nil mismatch; obtained ; expected 1`},
+ {1, nil, false, `mismatch at top level: nil vs non-nil mismatch; obtained 1; expected `},
+ {fn1, fn3, false, `mismatch at top level: non-nil functions; obtained \(func\(\)\)\(nil\); expected \(func\(\)\)\(0x[0-9a-f]+\)`},
+ {fn3, fn3, false, `mismatch at top level: non-nil functions; obtained \(func\(\)\)\(0x[0-9a-f]+\); expected \(func\(\)\)\(0x[0-9a-f]+\)`},
+ {[]interface{}{nil}, []interface{}{"a"}, false, `mismatch at \[0\]: nil vs non-nil interface mismatch`},
+
+ // Nil vs empty: they're the same (difference from normal DeepEqual)
+ {[]int{}, []int(nil), true, ""},
+ {[]int{}, []int{}, true, ""},
+ {[]int(nil), []int(nil), true, ""},
+
+ // Mismatched types
+ {1, 1.0, false, `mismatch at top level: type mismatch int vs float64; obtained 1; expected 1`},
+ {int32(1), int64(1), false, `mismatch at top level: type mismatch int32 vs int64; obtained 1; expected 1`},
+ {0.5, "hello", false, `mismatch at top level: type mismatch float64 vs string; obtained 0\.5; expected "hello"`},
+ {[]int{1, 2, 3}, [3]int{1, 2, 3}, false, `mismatch at top level: type mismatch \[\]int vs \[3\]int; obtained \[\]int\{1, 2, 3\}; expected \[3\]int\{1, 2, 3\}`},
+ {&[3]interface{}{1, 2, 4}, &[3]interface{}{1, 2, "s"}, false, `mismatch at \(\*\)\[2\]: type mismatch int vs string; obtained 4; expected "s"`},
+ {Basic{1, 0.5}, NotBasic{1, 0.5}, false, `mismatch at top level: type mismatch checkers_test\.Basic vs checkers_test\.NotBasic; obtained checkers_test\.Basic\{x:1, y:0\.5\}; expected checkers_test\.NotBasic\{x:1, y:0\.5\}`},
+ {time.Unix(0, 0).UTC(), time.Unix(0, 0).In(time.FixedZone("FOO", 60*60)).Add(1), false, `mismatch at top level: unequal; obtained "1970-01-01T00:00:00Z"; expected "1970-01-01T00:00:00.000000001Z"`},
+ {time.Unix(0, 0).UTC(), time.Unix(0, 0).Add(1), false, `mismatch at top level: unequal; obtained "1970-01-01T00:00:00Z"; expected "1970-01-01T00:00:00.000000001Z"`},
+
+ {
+ map[uint]string{1: "one", 2: "two"},
+ map[int]string{2: "two", 1: "one"},
+ false,
+ `mismatch at top level: type mismatch map\[uint\]string vs map\[int\]string; obtained map\[uint\]string\{.*\}; expected map\[int\]string\{.*\}`,
+ },
+}
+
+func TestDeepEqual(t *testing.T) {
+ for _, test := range deepEqualTests {
+ r, err := checkers.DeepEqual(test.a, test.b)
+ if r != test.eq {
+ t.Errorf("deepEqual(%v, %v) = %v, want %v", test.a, test.b, r, test.eq)
+ }
+ if test.eq {
+ if err != nil {
+ t.Errorf("deepEqual(%v, %v): unexpected error message %q when equal", test.a, test.b, err)
+ }
+ continue
+ }
+ if err == nil {
+ t.Errorf("deepEqual(%v, %v); mismatch but nil error", test.a, test.b)
+ continue
+ }
+ if ok, _ := regexp.MatchString(test.msg, err.Error()); !ok {
+ t.Errorf("deepEqual(%v, %v); unexpected error %q, want %q", test.a, test.b, err.Error(), test.msg)
+ }
+ }
+}
+
+type Recursive struct {
+ x int
+ r *Recursive
+}
+
+func TestDeepEqualRecursiveStruct(t *testing.T) {
+ a, b := new(Recursive), new(Recursive)
+ *a = Recursive{12, a}
+ *b = Recursive{12, b}
+ if !deepEqual(a, b) {
+ t.Error("deepEqual(recursive same) = false, want true")
+ }
+}
+
+type _Complex struct {
+ a int
+ b [3]*_Complex
+ c *string
+ d map[float64]float64
+}
+
+func TestDeepEqualComplexStruct(t *testing.T) {
+ m := make(map[float64]float64)
+ stra, strb := "hello", "hello"
+ a, b := new(_Complex), new(_Complex)
+ *a = _Complex{5, [3]*_Complex{a, b, a}, &stra, m}
+ *b = _Complex{5, [3]*_Complex{b, a, a}, &strb, m}
+ if !deepEqual(a, b) {
+ t.Error("deepEqual(complex same) = false, want true")
+ }
+}
+
+func TestDeepEqualComplexStructInequality(t *testing.T) {
+ m := make(map[float64]float64)
+ stra, strb := "hello", "helloo" // Difference is here
+ a, b := new(_Complex), new(_Complex)
+ *a = _Complex{5, [3]*_Complex{a, b, a}, &stra, m}
+ *b = _Complex{5, [3]*_Complex{b, a, a}, &strb, m}
+ if deepEqual(a, b) {
+ t.Error("deepEqual(complex different) = true, want false")
+ }
+}
+
+type UnexpT struct {
+ m map[int]int
+}
+
+func TestDeepEqualUnexportedMap(t *testing.T) {
+ // Check that DeepEqual can look at unexported fields.
+ x1 := UnexpT{map[int]int{1: 2}}
+ x2 := UnexpT{map[int]int{1: 2}}
+ if !deepEqual(&x1, &x2) {
+ t.Error("deepEqual(x1, x2) = false, want true")
+ }
+
+ y1 := UnexpT{map[int]int{2: 3}}
+ if deepEqual(&x1, &y1) {
+ t.Error("deepEqual(x1, y1) = true, want false")
+ }
+}
diff --git a/vendor/github.com/juju/testing/checkers/file.go b/vendor/github.com/juju/testing/checkers/file.go
new file mode 100644
index 0000000..9c62013
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/file.go
@@ -0,0 +1,224 @@
+// Copyright 2013 Canonical Ltd.
+// Copyright 2014 Cloudbase Solutions SRL
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "reflect"
+ "runtime"
+ "strings"
+
+ gc "gopkg.in/check.v1"
+)
+
+// IsNonEmptyFile checker
+
+type isNonEmptyFileChecker struct {
+ *gc.CheckerInfo
+}
+
+var IsNonEmptyFile gc.Checker = &isNonEmptyFileChecker{
+ &gc.CheckerInfo{Name: "IsNonEmptyFile", Params: []string{"obtained"}},
+}
+
+func (checker *isNonEmptyFileChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ filename, isString := stringOrStringer(params[0])
+ if isString {
+ fileInfo, err := os.Stat(filename)
+ if os.IsNotExist(err) {
+ return false, fmt.Sprintf("%s does not exist", filename)
+ } else if err != nil {
+ return false, fmt.Sprintf("other stat error: %v", err)
+ }
+ if fileInfo.Size() > 0 {
+ return true, ""
+ } else {
+ return false, fmt.Sprintf("%s is empty", filename)
+ }
+ }
+
+ value := reflect.ValueOf(params[0])
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %s:%#v", value.Kind(), params[0])
+}
+
+// IsDirectory checker
+
+type isDirectoryChecker struct {
+ *gc.CheckerInfo
+}
+
+var IsDirectory gc.Checker = &isDirectoryChecker{
+ &gc.CheckerInfo{Name: "IsDirectory", Params: []string{"obtained"}},
+}
+
+func (checker *isDirectoryChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ path, isString := stringOrStringer(params[0])
+ if isString {
+ fileInfo, err := os.Stat(path)
+ if os.IsNotExist(err) {
+ return false, fmt.Sprintf("%s does not exist", path)
+ } else if err != nil {
+ return false, fmt.Sprintf("other stat error: %v", err)
+ }
+ if fileInfo.IsDir() {
+ return true, ""
+ } else {
+ return false, fmt.Sprintf("%s is not a directory", path)
+ }
+ }
+
+ value := reflect.ValueOf(params[0])
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %s:%#v", value.Kind(), params[0])
+}
+
+// IsSymlink checker
+
+type isSymlinkChecker struct {
+ *gc.CheckerInfo
+}
+
+var IsSymlink gc.Checker = &isSymlinkChecker{
+ &gc.CheckerInfo{Name: "IsSymlink", Params: []string{"obtained"}},
+}
+
+func (checker *isSymlinkChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ path, isString := stringOrStringer(params[0])
+ if isString {
+ fileInfo, err := os.Lstat(path)
+ if os.IsNotExist(err) {
+ return false, fmt.Sprintf("%s does not exist", path)
+ } else if err != nil {
+ return false, fmt.Sprintf("other stat error: %v", err)
+ }
+ if fileInfo.Mode()&os.ModeSymlink != 0 {
+ return true, ""
+ } else {
+ return false, fmt.Sprintf("%s is not a symlink: %+v", path, fileInfo)
+ }
+ }
+
+ value := reflect.ValueOf(params[0])
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %s:%#v", value.Kind(), params[0])
+}
+
+// DoesNotExist checker makes sure the path specified doesn't exist.
+
+type doesNotExistChecker struct {
+ *gc.CheckerInfo
+}
+
+var DoesNotExist gc.Checker = &doesNotExistChecker{
+ &gc.CheckerInfo{Name: "DoesNotExist", Params: []string{"obtained"}},
+}
+
+func (checker *doesNotExistChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ path, isString := stringOrStringer(params[0])
+ if isString {
+ _, err := os.Stat(path)
+ if os.IsNotExist(err) {
+ return true, ""
+ } else if err != nil {
+ return false, fmt.Sprintf("other stat error: %v", err)
+ }
+ return false, fmt.Sprintf("%s exists", path)
+ }
+
+ value := reflect.ValueOf(params[0])
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %s:%#v", value.Kind(), params[0])
+}
+
+// SymlinkDoesNotExist checker makes sure the path specified doesn't exist; it uses Lstat, so the symlink itself is checked rather than its target.
+
+type symlinkDoesNotExistChecker struct {
+ *gc.CheckerInfo
+}
+
+var SymlinkDoesNotExist gc.Checker = &symlinkDoesNotExistChecker{
+ &gc.CheckerInfo{Name: "SymlinkDoesNotExist", Params: []string{"obtained"}},
+}
+
+func (checker *symlinkDoesNotExistChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ path, isString := stringOrStringer(params[0])
+ if isString {
+ _, err := os.Lstat(path)
+ if os.IsNotExist(err) {
+ return true, ""
+ } else if err != nil {
+ return false, fmt.Sprintf("other stat error: %v", err)
+ }
+ return false, fmt.Sprintf("%s exists", path)
+ }
+
+ value := reflect.ValueOf(params[0])
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %s:%#v", value.Kind(), params[0])
+}
+
+// SamePath checker -- checks that two paths are the same, OS independent
+
+type samePathChecker struct {
+ *gc.CheckerInfo
+}
+
+// SamePath checks whether two paths are the same; it follows symlinks and is OS independent.
+var SamePath gc.Checker = &samePathChecker{
+ &gc.CheckerInfo{Name: "SamePath", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *samePathChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ // Check for panics
+ defer func() {
+ if panicked := recover(); panicked != nil {
+ result = false
+ error = fmt.Sprint(panicked)
+ }
+ }()
+
+ // Convert input
+ obtained, isStr := stringOrStringer(params[0])
+ if !isStr {
+ return false, fmt.Sprintf("obtained value is not a string and has no .String(), %T:%#v", params[0], params[0])
+ }
+ expected, isStr := stringOrStringer(params[1])
+ if !isStr {
+ return false, fmt.Sprintf("expected value is not a string and has no .String(), %T:%#v", params[1], params[1])
+ }
+
+ // Convert paths to proper format
+ obtained = filepath.FromSlash(obtained)
+ expected = filepath.FromSlash(expected)
+
+ // If running on Windows, paths will be case-insensitive and thus we
+ // normalize the inputs to a default of all upper-case
+ if runtime.GOOS == "windows" {
+ obtained = strings.ToUpper(obtained)
+ expected = strings.ToUpper(expected)
+ }
+
+ // Same path; no need to check further.
+ if obtained == expected {
+ return true, ""
+ }
+
+ // If the paths differ, check whether they point to the same file;
+ // this accounts for Windows-shortened (8.3) paths.
+ // os.Stat fails if the path does not exist.
+ ob, err := os.Stat(obtained)
+ if err != nil {
+ return false, err.Error()
+ }
+
+ ex, err := os.Stat(expected)
+ if err != nil {
+ return false, err.Error()
+ }
+
+ res := os.SameFile(ob, ex)
+ if res {
+ return true, ""
+ }
+ return false, "Not the same file"
+}
diff --git a/vendor/github.com/juju/testing/checkers/file_test.go b/vendor/github.com/juju/testing/checkers/file_test.go
new file mode 100644
index 0000000..ea1a753
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/file_test.go
@@ -0,0 +1,288 @@
+// Copyright 2013 Canonical Ltd.
+// Copyright 2014 Cloudbase Solutions SRL
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "runtime"
+ "strings"
+
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+type FileSuite struct{}
+
+var _ = gc.Suite(&FileSuite{})
+
+func (s *FileSuite) TestIsNonEmptyFile(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ fmt.Fprintf(file, "something")
+ file.Close()
+
+ c.Assert(file.Name(), jc.IsNonEmptyFile)
+}
+
+func (s *FileSuite) TestIsNonEmptyFileWithEmptyFile(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ file.Close()
+
+ result, message := jc.IsNonEmptyFile.Check([]interface{}{file.Name()}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, file.Name()+" is empty")
+}
+
+func (s *FileSuite) TestIsNonEmptyFileWithMissingFile(c *gc.C) {
+ name := filepath.Join(c.MkDir(), "missing")
+
+ result, message := jc.IsNonEmptyFile.Check([]interface{}{name}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, name+" does not exist")
+}
+
+func (s *FileSuite) TestIsNonEmptyFileWithNumber(c *gc.C) {
+ result, message := jc.IsNonEmptyFile.Check([]interface{}{42}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "obtained value is not a string and has no .String(), int:42")
+}
+
+func (s *FileSuite) TestIsDirectory(c *gc.C) {
+ dir := c.MkDir()
+ c.Assert(dir, jc.IsDirectory)
+}
+
+func (s *FileSuite) TestIsDirectoryMissing(c *gc.C) {
+ absentDir := filepath.Join(c.MkDir(), "foo")
+
+ result, message := jc.IsDirectory.Check([]interface{}{absentDir}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, absentDir+" does not exist")
+}
+
+func (s *FileSuite) TestIsDirectoryWithFile(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ file.Close()
+
+ result, message := jc.IsDirectory.Check([]interface{}{file.Name()}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, file.Name()+" is not a directory")
+}
+
+func (s *FileSuite) TestIsDirectoryWithNumber(c *gc.C) {
+ result, message := jc.IsDirectory.Check([]interface{}{42}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "obtained value is not a string and has no .String(), int:42")
+}
+
+func (s *FileSuite) TestDoesNotExist(c *gc.C) {
+ absentDir := filepath.Join(c.MkDir(), "foo")
+ c.Assert(absentDir, jc.DoesNotExist)
+}
+
+func (s *FileSuite) TestDoesNotExistWithPath(c *gc.C) {
+ dir := c.MkDir()
+ result, message := jc.DoesNotExist.Check([]interface{}{dir}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, dir+" exists")
+}
+
+func (s *FileSuite) TestDoesNotExistWithSymlink(c *gc.C) {
+ dir := c.MkDir()
+ deadPath := filepath.Join(dir, "dead")
+ symlinkPath := filepath.Join(dir, "a-symlink")
+ err := os.Symlink(deadPath, symlinkPath)
+ c.Assert(err, gc.IsNil)
+ // A valid symlink pointing to something that doesn't exist passes.
+ // Use SymlinkDoesNotExist to check for the non-existence of the link itself.
+ c.Assert(symlinkPath, jc.DoesNotExist)
+}
+
+func (s *FileSuite) TestDoesNotExistWithNumber(c *gc.C) {
+ result, message := jc.DoesNotExist.Check([]interface{}{42}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "obtained value is not a string and has no .String(), int:42")
+}
+
+func (s *FileSuite) TestSymlinkDoesNotExist(c *gc.C) {
+ absentDir := filepath.Join(c.MkDir(), "foo")
+ c.Assert(absentDir, jc.SymlinkDoesNotExist)
+}
+
+func (s *FileSuite) TestSymlinkDoesNotExistWithPath(c *gc.C) {
+ dir := c.MkDir()
+ result, message := jc.SymlinkDoesNotExist.Check([]interface{}{dir}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, dir+" exists")
+}
+
+func (s *FileSuite) TestSymlinkDoesNotExistWithSymlink(c *gc.C) {
+ dir := c.MkDir()
+ deadPath := filepath.Join(dir, "dead")
+ symlinkPath := filepath.Join(dir, "a-symlink")
+ err := os.Symlink(deadPath, symlinkPath)
+ c.Assert(err, gc.IsNil)
+
+ result, message := jc.SymlinkDoesNotExist.Check([]interface{}{symlinkPath}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, symlinkPath+" exists")
+}
+
+func (s *FileSuite) TestSymlinkDoesNotExistWithNumber(c *gc.C) {
+ result, message := jc.SymlinkDoesNotExist.Check([]interface{}{42}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "obtained value is not a string and has no .String(), int:42")
+}
+
+func (s *FileSuite) TestIsSymlink(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ c.Log(file.Name())
+ c.Log(filepath.Dir(file.Name()))
+ symlinkPath := filepath.Join(filepath.Dir(file.Name()), "a-symlink")
+ err = os.Symlink(file.Name(), symlinkPath)
+ c.Assert(err, gc.IsNil)
+
+ c.Assert(symlinkPath, jc.IsSymlink)
+}
+
+func (s *FileSuite) TestIsSymlinkWithFile(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ result, message := jc.IsSymlink.Check([]interface{}{file.Name()}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, jc.Contains, " is not a symlink")
+}
+
+func (s *FileSuite) TestIsSymlinkWithDir(c *gc.C) {
+ result, message := jc.IsSymlink.Check([]interface{}{c.MkDir()}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, jc.Contains, " is not a symlink")
+}
+
+func (s *FileSuite) TestSamePathWithNumber(c *gc.C) {
+ result, message := jc.SamePath.Check([]interface{}{42, 52}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "obtained value is not a string and has no .String(), int:42")
+}
+
+func (s *FileSuite) TestSamePathBasic(c *gc.C) {
+ dir := c.MkDir()
+
+ result, message := jc.SamePath.Check([]interface{}{dir, dir}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
+
+type SamePathLinuxSuite struct{}
+
+var _ = gc.Suite(&SamePathLinuxSuite{})
+
+func (s *SamePathLinuxSuite) SetUpSuite(c *gc.C) {
+ if runtime.GOOS == "windows" {
+		c.Skip("Skipped Linux-intended SamePath tests on Windows.")
+ }
+}
+
+func (s *SamePathLinuxSuite) TestNotSamePathLinuxBasic(c *gc.C) {
+ dir := c.MkDir()
+ path1 := filepath.Join(dir, "Test")
+ path2 := filepath.Join(dir, "test")
+
+ result, message := jc.SamePath.Check([]interface{}{path1, path2}, nil)
+
+ c.Assert(result, jc.IsFalse)
+ c.Assert(message, gc.Equals, "stat "+path1+": no such file or directory")
+}
+
+func (s *SamePathLinuxSuite) TestSamePathLinuxSymlinks(c *gc.C) {
+ file, err := ioutil.TempFile(c.MkDir(), "")
+ c.Assert(err, gc.IsNil)
+ symlinkPath := filepath.Join(filepath.Dir(file.Name()), "a-symlink")
+ err = os.Symlink(file.Name(), symlinkPath)
+
+ result, message := jc.SamePath.Check([]interface{}{file.Name(), symlinkPath}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
+
+type SamePathWindowsSuite struct{}
+
+var _ = gc.Suite(&SamePathWindowsSuite{})
+
+func (s *SamePathWindowsSuite) SetUpSuite(c *gc.C) {
+ if runtime.GOOS != "windows" {
+		c.Skip("Skipped Windows-intended SamePath tests.")
+ }
+}
+
+func (s *SamePathWindowsSuite) TestNotSamePathBasic(c *gc.C) {
+ dir := c.MkDir()
+ path1 := filepath.Join(dir, "notTest")
+ path2 := filepath.Join(dir, "test")
+
+ result, message := jc.SamePath.Check([]interface{}{path1, path2}, nil)
+
+ c.Assert(result, jc.IsFalse)
+ path1 = strings.ToUpper(path1)
+ c.Assert(message, gc.Equals, "GetFileAttributesEx "+path1+": The system cannot find the file specified.")
+}
+
+func (s *SamePathWindowsSuite) TestSamePathWindowsCaseInsensitive(c *gc.C) {
+ dir := c.MkDir()
+ path1 := filepath.Join(dir, "Test")
+ path2 := filepath.Join(dir, "test")
+
+ result, message := jc.SamePath.Check([]interface{}{path1, path2}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
+
+func (s *SamePathWindowsSuite) TestSamePathWindowsFixSlashes(c *gc.C) {
+ result, message := jc.SamePath.Check([]interface{}{"C:/Users", "C:\\Users"}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
+
+func (s *SamePathWindowsSuite) TestSamePathShortenedPaths(c *gc.C) {
+ dir := c.MkDir()
+ dir1, err := ioutil.TempDir(dir, "Programming")
+ defer os.Remove(dir1)
+ c.Assert(err, gc.IsNil)
+ result, message := jc.SamePath.Check([]interface{}{dir + "\\PROGRA~1", dir1}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
+
+func (s *SamePathWindowsSuite) TestSamePathShortenedPathsConsistent(c *gc.C) {
+ dir := c.MkDir()
+ dir1, err := ioutil.TempDir(dir, "Programming")
+ defer os.Remove(dir1)
+ c.Assert(err, gc.IsNil)
+ dir2, err := ioutil.TempDir(dir, "Program Files")
+ defer os.Remove(dir2)
+ c.Assert(err, gc.IsNil)
+
+ result, message := jc.SamePath.Check([]interface{}{dir + "\\PROGRA~1", dir2}, nil)
+
+ c.Assert(result, gc.Not(jc.IsTrue))
+ c.Assert(message, gc.Equals, "Not the same file")
+
+ result, message = jc.SamePath.Check([]interface{}{"C:/PROGRA~2", "C:/Program Files (x86)"}, nil)
+
+ c.Assert(result, jc.IsTrue)
+ c.Assert(message, gc.Equals, "")
+}
diff --git a/vendor/github.com/juju/testing/checkers/log.go b/vendor/github.com/juju/testing/checkers/log.go
new file mode 100644
index 0000000..39206c8
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/log.go
@@ -0,0 +1,108 @@
+// Copyright 2012, 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "fmt"
+ "regexp"
+ "strings"
+
+ "github.com/juju/loggo"
+ gc "gopkg.in/check.v1"
+)
+
+type SimpleMessage struct {
+ Level loggo.Level
+ Message string
+}
+
+type SimpleMessages []SimpleMessage
+
+func (s SimpleMessage) String() string {
+ return fmt.Sprintf("%s %s", s.Level, s.Message)
+}
+
+func (s SimpleMessages) GoString() string {
+ out := make([]string, len(s))
+ for i, m := range s {
+ out[i] = m.String()
+ }
+ return fmt.Sprintf("SimpleMessages{\n%s\n}", strings.Join(out, "\n"))
+}
+
+func logToSimpleMessages(log []loggo.Entry) SimpleMessages {
+ out := make(SimpleMessages, len(log))
+ for i, val := range log {
+ out[i].Level = val.Level
+ out[i].Message = val.Message
+ }
+ return out
+}
+
+type logMatches struct {
+ *gc.CheckerInfo
+}
+
+func (checker *logMatches) Check(params []interface{}, _ []string) (result bool, error string) {
+ var obtained SimpleMessages
+ switch params[0].(type) {
+ case []loggo.Entry:
+ obtained = logToSimpleMessages(params[0].([]loggo.Entry))
+ default:
+ return false, "Obtained value must be of type []loggo.Entry or SimpleMessage"
+ }
+
+ var expected SimpleMessages
+ switch param := params[1].(type) {
+ case []SimpleMessage:
+ expected = SimpleMessages(param)
+ case SimpleMessages:
+ expected = param
+ case []string:
+ expected = make(SimpleMessages, len(param))
+ for i, s := range param {
+ expected[i] = SimpleMessage{
+ Message: s,
+ Level: loggo.UNSPECIFIED,
+ }
+ }
+ default:
+ return false, "Expected value must be of type []string or []SimpleMessage"
+ }
+
+ obtainedSinceLastMatch := obtained
+ for len(expected) > 0 && len(obtained) >= len(expected) {
+ var msg SimpleMessage
+ msg, obtained = obtained[0], obtained[1:]
+ expect := expected[0]
+ if expect.Level != loggo.UNSPECIFIED && msg.Level != expect.Level {
+ continue
+ }
+ if matched, err := regexp.MatchString(expect.Message, msg.Message); err != nil {
+ return false, fmt.Sprintf("bad message regexp %q: %v", expect.Message, err)
+ } else if !matched {
+ continue
+ }
+ expected = expected[1:]
+ obtainedSinceLastMatch = obtained
+ }
+ if len(obtained) < len(expected) {
+ params[0] = obtainedSinceLastMatch
+ params[1] = expected
+ return false, ""
+ }
+ return true, ""
+}
+
+// LogMatches checks whether a given TestLogValues actually contains the log
+// messages we expected. If you compare it against a list of strings, we only
+// compare that the strings in the messages are correct. You can alternatively
+// pass a slice of SimpleMessage and we will check that the log levels are
+// also correct.
+//
+// The log may contain additional messages before and after each of the specified
+// expected messages.
+var LogMatches gc.Checker = &logMatches{
+ &gc.CheckerInfo{Name: "LogMatches", Params: []string{"obtained", "expected"}},
+}
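The matching loop in `logMatches.Check` above implements ordered-subsequence matching: each expected pattern must match some later obtained message, in order, with each message consumed at most once. The kernel of that idea, stripped of the loggo and gocheck plumbing and using plain strings, is a sketch only:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesInOrder reports whether every pattern matches some message,
// with the matches occurring in order and each message consumed at
// most once — the same strategy as the LogMatches loop.
func matchesInOrder(messages, patterns []string) bool {
	for len(patterns) > 0 && len(messages) >= len(patterns) {
		var msg string
		msg, messages = messages[0], messages[1:]
		// Like the checker, this is an unanchored regexp match.
		if regexp.MustCompile(patterns[0]).MatchString(msg) {
			patterns = patterns[1:]
		}
	}
	return len(patterns) == 0
}

func main() {
	log := []string{"foo bar", "baz", "12345", "67890"}
	fmt.Println(matchesInOrder(log, []string{"foo .*", "12345"})) // true
	fmt.Println(matchesInOrder(log, []string{"12345", "foo bar"})) // false: out of order
}
```

The `len(messages) >= len(patterns)` loop condition is the early exit: once fewer messages remain than patterns, no match is possible.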
diff --git a/vendor/github.com/juju/testing/checkers/log_test.go b/vendor/github.com/juju/testing/checkers/log_test.go
new file mode 100644
index 0000000..3c11a12
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/log_test.go
@@ -0,0 +1,194 @@
+// Copyright 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ "github.com/juju/loggo"
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+type SimpleMessageSuite struct{}
+
+var _ = gc.Suite(&SimpleMessageSuite{})
+
+func (s *SimpleMessageSuite) TestSimpleMessageString(c *gc.C) {
+ m := jc.SimpleMessage{
+ Level: loggo.INFO,
+ Message: `hello
+world
+`,
+ }
+ c.Check(m.String(), gc.Matches, "INFO hello\nworld\n")
+}
+
+func (s *SimpleMessageSuite) TestSimpleMessagesGoString(c *gc.C) {
+ m := jc.SimpleMessages{{
+ Level: loggo.DEBUG,
+ Message: "debug",
+ }, {
+ Level: loggo.ERROR,
+ Message: "Error",
+ }}
+ c.Check(m.GoString(), gc.Matches, `SimpleMessages{
+DEBUG debug
+ERROR Error
+}`)
+}
+
+type LogMatchesSuite struct{}
+
+var _ = gc.Suite(&LogMatchesSuite{})
+
+func (s *LogMatchesSuite) TestMatchSimpleMessage(c *gc.C) {
+ log := []loggo.Entry{
+ {Level: loggo.INFO, Message: "foo bar"},
+ {Level: loggo.INFO, Message: "12345"},
+ }
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.INFO, "foo bar"},
+ {loggo.INFO, "12345"},
+ })
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.INFO, "foo .*"},
+ {loggo.INFO, "12345"},
+ })
+ // UNSPECIFIED means we don't care what the level is,
+ // just check the message string matches.
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.UNSPECIFIED, "foo .*"},
+ {loggo.INFO, "12345"},
+ })
+ c.Check(log, gc.Not(jc.LogMatches), []jc.SimpleMessage{
+ {loggo.INFO, "foo bar"},
+ {loggo.DEBUG, "12345"},
+ })
+}
+
+func (s *LogMatchesSuite) TestMatchSimpleMessages(c *gc.C) {
+ log := []loggo.Entry{
+ {Level: loggo.INFO, Message: "foo bar"},
+ {Level: loggo.INFO, Message: "12345"},
+ }
+ c.Check(log, jc.LogMatches, jc.SimpleMessages{
+ {loggo.INFO, "foo bar"},
+ {loggo.INFO, "12345"},
+ })
+ c.Check(log, jc.LogMatches, jc.SimpleMessages{
+ {loggo.INFO, "foo .*"},
+ {loggo.INFO, "12345"},
+ })
+ // UNSPECIFIED means we don't care what the level is,
+ // just check the message string matches.
+ c.Check(log, jc.LogMatches, jc.SimpleMessages{
+ {loggo.UNSPECIFIED, "foo .*"},
+ {loggo.INFO, "12345"},
+ })
+ c.Check(log, gc.Not(jc.LogMatches), jc.SimpleMessages{
+ {loggo.INFO, "foo bar"},
+ {loggo.DEBUG, "12345"},
+ })
+}
+
+func (s *LogMatchesSuite) TestMatchStrings(c *gc.C) {
+ log := []loggo.Entry{
+ {Level: loggo.INFO, Message: "foo bar"},
+ {Level: loggo.INFO, Message: "12345"},
+ }
+ c.Check(log, jc.LogMatches, []string{"foo bar", "12345"})
+ c.Check(log, jc.LogMatches, []string{"foo .*", "12345"})
+ c.Check(log, gc.Not(jc.LogMatches), []string{"baz", "bing"})
+}
+
+func (s *LogMatchesSuite) TestMatchInexact(c *gc.C) {
+ log := []loggo.Entry{
+ {Level: loggo.INFO, Message: "foo bar"},
+ {Level: loggo.INFO, Message: "baz"},
+ {Level: loggo.DEBUG, Message: "12345"},
+ {Level: loggo.ERROR, Message: "12345"},
+ {Level: loggo.INFO, Message: "67890"},
+ }
+ c.Check(log, jc.LogMatches, []string{"foo bar", "12345"})
+ c.Check(log, jc.LogMatches, []string{"foo .*", "12345"})
+ c.Check(log, jc.LogMatches, []string{"foo .*", "67890"})
+ c.Check(log, jc.LogMatches, []string{"67890"})
+
+ // Matches are always left-most after the previous match.
+ c.Check(log, jc.LogMatches, []string{".*", "baz"})
+ c.Check(log, jc.LogMatches, []string{"foo bar", ".*", "12345"})
+ c.Check(log, jc.LogMatches, []string{"foo bar", ".*", "67890"})
+
+ // Order is important: 67890 advances to the last item in obtained,
+ // and so there's nothing after to match against ".*".
+ c.Check(log, gc.Not(jc.LogMatches), []string{"67890", ".*"})
+ // ALL specified patterns MUST match in the order given.
+ c.Check(log, gc.Not(jc.LogMatches), []string{".*", "foo bar"})
+
+ // Check that levels are matched.
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.UNSPECIFIED, "12345"},
+ {loggo.UNSPECIFIED, "12345"},
+ })
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.DEBUG, "12345"},
+ {loggo.ERROR, "12345"},
+ })
+ c.Check(log, jc.LogMatches, []jc.SimpleMessage{
+ {loggo.DEBUG, "12345"},
+ {loggo.INFO, ".*"},
+ })
+ c.Check(log, gc.Not(jc.LogMatches), []jc.SimpleMessage{
+ {loggo.DEBUG, "12345"},
+ {loggo.INFO, ".*"},
+ {loggo.UNSPECIFIED, ".*"},
+ })
+}
+
+func (s *LogMatchesSuite) TestFromLogMatches(c *gc.C) {
+ tw := &loggo.TestWriter{}
+ _, err := loggo.ReplaceDefaultWriter(tw)
+ c.Assert(err, gc.IsNil)
+ defer loggo.ResetWriters()
+ logger := loggo.GetLogger("test")
+ logger.SetLogLevel(loggo.DEBUG)
+ logger.Infof("foo")
+ logger.Debugf("bar")
+ logger.Tracef("hidden")
+ c.Check(tw.Log(), jc.LogMatches, []string{"foo", "bar"})
+ c.Check(tw.Log(), gc.Not(jc.LogMatches), []string{"foo", "bad"})
+ c.Check(tw.Log(), gc.Not(jc.LogMatches), []jc.SimpleMessage{
+ {loggo.INFO, "foo"},
+ {loggo.INFO, "bar"},
+ })
+}
+
+func (s *LogMatchesSuite) TestLogMatchesOnlyAcceptsSliceTestLogValues(c *gc.C) {
+ obtained := []string{"banana"} // specifically not []loggo.TestLogValues
+ expected := jc.SimpleMessages{}
+ result, err := jc.LogMatches.Check([]interface{}{obtained, expected}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(err, gc.Equals, "Obtained value must be of type []loggo.Entry or SimpleMessage")
+}
+
+func (s *LogMatchesSuite) TestLogMatchesOnlyAcceptsStringOrSimpleMessages(c *gc.C) {
+ obtained := []loggo.Entry{
+ {Level: loggo.INFO, Message: "foo bar"},
+ {Level: loggo.INFO, Message: "baz"},
+ {Level: loggo.DEBUG, Message: "12345"},
+ }
+ expected := "totally wrong"
+ result, err := jc.LogMatches.Check([]interface{}{obtained, expected}, nil)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(err, gc.Equals, "Expected value must be of type []string or []SimpleMessage")
+}
+
+func (s *LogMatchesSuite) TestLogMatchesFailsOnInvalidRegex(c *gc.C) {
+ var obtained interface{} = []loggo.Entry{{Level: loggo.INFO, Message: "foo bar"}}
+ var expected interface{} = []string{"[]foo"}
+
+ result, err := jc.LogMatches.Check([]interface{}{obtained, expected}, nil /* unused */)
+ c.Assert(result, gc.Equals, false)
+ c.Assert(err, gc.Equals, "bad message regexp \"[]foo\": error parsing regexp: missing closing ]: `[]foo`")
+}
diff --git a/vendor/github.com/juju/testing/checkers/relop.go b/vendor/github.com/juju/testing/checkers/relop.go
new file mode 100644
index 0000000..ada94ff
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/relop.go
@@ -0,0 +1,93 @@
+// Copyright 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers
+
+import (
+ "fmt"
+ "reflect"
+
+ gc "gopkg.in/check.v1"
+)
+
+// GreaterThan checker
+
+type greaterThanChecker struct {
+ *gc.CheckerInfo
+}
+
+var GreaterThan gc.Checker = &greaterThanChecker{
+ &gc.CheckerInfo{Name: "GreaterThan", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *greaterThanChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ defer func() {
+ if v := recover(); v != nil {
+ result = false
+ error = fmt.Sprint(v)
+ }
+ }()
+
+ p0value := reflect.ValueOf(params[0])
+ p1value := reflect.ValueOf(params[1])
+ switch p0value.Kind() {
+ case reflect.Int,
+ reflect.Int8,
+ reflect.Int16,
+ reflect.Int32,
+ reflect.Int64:
+ return p0value.Int() > p1value.Int(), ""
+ case reflect.Uint,
+ reflect.Uint8,
+ reflect.Uint16,
+ reflect.Uint32,
+ reflect.Uint64:
+ return p0value.Uint() > p1value.Uint(), ""
+ case reflect.Float32,
+ reflect.Float64:
+ return p0value.Float() > p1value.Float(), ""
+ default:
+ }
+ return false, fmt.Sprintf("obtained value %s:%#v not supported", p0value.Kind(), params[0])
+}
+
+// LessThan checker
+
+type lessThanChecker struct {
+ *gc.CheckerInfo
+}
+
+var LessThan gc.Checker = &lessThanChecker{
+ &gc.CheckerInfo{Name: "LessThan", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *lessThanChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ defer func() {
+ if v := recover(); v != nil {
+ result = false
+ error = fmt.Sprint(v)
+ }
+ }()
+
+ p0value := reflect.ValueOf(params[0])
+ p1value := reflect.ValueOf(params[1])
+ switch p0value.Kind() {
+ case reflect.Int,
+ reflect.Int8,
+ reflect.Int16,
+ reflect.Int32,
+ reflect.Int64:
+ return p0value.Int() < p1value.Int(), ""
+ case reflect.Uint,
+ reflect.Uint8,
+ reflect.Uint16,
+ reflect.Uint32,
+ reflect.Uint64:
+ return p0value.Uint() < p1value.Uint(), ""
+ case reflect.Float32,
+ reflect.Float64:
+ return p0value.Float() < p1value.Float(), ""
+ default:
+ }
+ return false, fmt.Sprintf("obtained value %s:%#v not supported", p0value.Kind(), params[0])
+}
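The two checkers above share the same reflect-based dispatch on the obtained value's kind. Its core, detached from the gocheck `Checker` interface, might look like this (a sketch that, like the checkers, assumes both operands have the same numeric kind):

```go
package main

import (
	"fmt"
	"reflect"
)

// numericGreater compares two values of the same numeric kind via
// reflection, the way greaterThanChecker does, returning an error
// string for unsupported kinds.
func numericGreater(a, b interface{}) (bool, string) {
	av := reflect.ValueOf(a)
	bv := reflect.ValueOf(b)
	switch av.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return av.Int() > bv.Int(), ""
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		return av.Uint() > bv.Uint(), ""
	case reflect.Float32, reflect.Float64:
		return av.Float() > bv.Float(), ""
	}
	return false, fmt.Sprintf("obtained value %s:%#v not supported", av.Kind(), a)
}

func main() {
	ok, _ := numericGreater(45, 42)
	fmt.Println(ok) // true
	_, msg := numericGreater("Hello", "World")
	fmt.Println(msg) // obtained value string:"Hello" not supported
}
```

The deferred `recover` in the real checkers exists because mismatched kinds (say, an int obtained against a string expected) make `p1value.Int()` panic; the recover converts that panic into a checker failure message instead of crashing the test run.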
diff --git a/vendor/github.com/juju/testing/checkers/relop_test.go b/vendor/github.com/juju/testing/checkers/relop_test.go
new file mode 100644
index 0000000..a8eb757
--- /dev/null
+++ b/vendor/github.com/juju/testing/checkers/relop_test.go
@@ -0,0 +1,36 @@
+// Copyright 2013 Canonical Ltd.
+// Licensed under the LGPLv3, see LICENCE file for details.
+
+package checkers_test
+
+import (
+ gc "gopkg.in/check.v1"
+
+ jc "github.com/juju/testing/checkers"
+)
+
+type RelopSuite struct{}
+
+var _ = gc.Suite(&RelopSuite{})
+
+func (s *RelopSuite) TestGreaterThan(c *gc.C) {
+ c.Assert(45, jc.GreaterThan, 42)
+ c.Assert(2.25, jc.GreaterThan, 1.0)
+ c.Assert(42, gc.Not(jc.GreaterThan), 42)
+ c.Assert(10, gc.Not(jc.GreaterThan), 42)
+
+ result, msg := jc.GreaterThan.Check([]interface{}{"Hello", "World"}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(msg, gc.Equals, `obtained value string:"Hello" not supported`)
+}
+
+func (s *RelopSuite) TestLessThan(c *gc.C) {
+ c.Assert(42, jc.LessThan, 45)
+ c.Assert(1.0, jc.LessThan, 2.25)
+ c.Assert(42, gc.Not(jc.LessThan), 42)
+ c.Assert(42, gc.Not(jc.LessThan), 10)
+
+ result, msg := jc.LessThan.Check([]interface{}{"Hello", "World"}, nil)
+ c.Assert(result, jc.IsFalse)
+ c.Assert(msg, gc.Equals, `obtained value string:"Hello" not supported`)
+}
diff --git a/vendor/gopkg.in/check.v1/LICENSE b/vendor/gopkg.in/check.v1/LICENSE
new file mode 100644
index 0000000..545cf2d
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/LICENSE
@@ -0,0 +1,25 @@
+Gocheck - A rich testing framework for Go
+
+Copyright (c) 2010-2013 Gustavo Niemeyer
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/gopkg.in/check.v1/README.md b/vendor/gopkg.in/check.v1/README.md
new file mode 100644
index 0000000..0ca9e57
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/README.md
@@ -0,0 +1,20 @@
+Instructions
+============
+
+Install the package with:
+
+ go get gopkg.in/check.v1
+
+Import it with:
+
+ import "gopkg.in/check.v1"
+
+and use _check_ as the package name inside the code.
+
+For more details, visit the project page:
+
+* http://labix.org/gocheck
+
+and the API documentation:
+
+* https://gopkg.in/check.v1
diff --git a/vendor/gopkg.in/check.v1/TODO b/vendor/gopkg.in/check.v1/TODO
new file mode 100644
index 0000000..3349827
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/TODO
@@ -0,0 +1,2 @@
+- Assert(slice, Contains, item)
+- Parallel test support
diff --git a/vendor/gopkg.in/check.v1/benchmark.go b/vendor/gopkg.in/check.v1/benchmark.go
new file mode 100644
index 0000000..46ea9dc
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/benchmark.go
@@ -0,0 +1,187 @@
+// Copyright (c) 2012 The Go Authors. All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package check
+
+import (
+ "fmt"
+ "runtime"
+ "time"
+)
+
+var memStats runtime.MemStats
+
+// timer holds benchmark state: it manages benchmark timing and
+// records the number of iterations to run, mirroring testing.B.
+type timer struct {
+ start time.Time // Time test or benchmark started
+ duration time.Duration
+ N int
+ bytes int64
+ timerOn bool
+ benchTime time.Duration
+ // The initial states of memStats.Mallocs and memStats.TotalAlloc.
+ startAllocs uint64
+ startBytes uint64
+ // The net total of this test after being run.
+ netAllocs uint64
+ netBytes uint64
+}
+
+// StartTimer starts timing a test. This function is called automatically
+// before a benchmark starts, but it can also be used to resume timing after
+// a call to StopTimer.
+func (c *C) StartTimer() {
+ if !c.timerOn {
+ c.start = time.Now()
+ c.timerOn = true
+
+ runtime.ReadMemStats(&memStats)
+ c.startAllocs = memStats.Mallocs
+ c.startBytes = memStats.TotalAlloc
+ }
+}
+
+// StopTimer stops timing a test. This can be used to pause the timer
+// while performing complex initialization that you don't
+// want to measure.
+func (c *C) StopTimer() {
+ if c.timerOn {
+		c.duration += time.Since(c.start)
+ c.timerOn = false
+ runtime.ReadMemStats(&memStats)
+ c.netAllocs += memStats.Mallocs - c.startAllocs
+ c.netBytes += memStats.TotalAlloc - c.startBytes
+ }
+}
+
+// ResetTimer sets the elapsed benchmark time to zero.
+// It does not affect whether the timer is running.
+func (c *C) ResetTimer() {
+ if c.timerOn {
+ c.start = time.Now()
+ runtime.ReadMemStats(&memStats)
+ c.startAllocs = memStats.Mallocs
+ c.startBytes = memStats.TotalAlloc
+ }
+ c.duration = 0
+ c.netAllocs = 0
+ c.netBytes = 0
+}
+
+// SetBytes records the number of bytes that the benchmark processes
+// on each iteration. If it is called during a benchmark, MB/s will
+// also be reported.
+func (c *C) SetBytes(n int64) {
+ c.bytes = n
+}
+
+func (c *C) nsPerOp() int64 {
+ if c.N <= 0 {
+ return 0
+ }
+ return c.duration.Nanoseconds() / int64(c.N)
+}
+
+func (c *C) mbPerSec() float64 {
+ if c.bytes <= 0 || c.duration <= 0 || c.N <= 0 {
+ return 0
+ }
+ return (float64(c.bytes) * float64(c.N) / 1e6) / c.duration.Seconds()
+}
+
+func (c *C) timerString() string {
+ if c.N <= 0 {
+ return fmt.Sprintf("%3.3fs", float64(c.duration.Nanoseconds())/1e9)
+ }
+ mbs := c.mbPerSec()
+ mb := ""
+ if mbs != 0 {
+ mb = fmt.Sprintf("\t%7.2f MB/s", mbs)
+ }
+ nsop := c.nsPerOp()
+ ns := fmt.Sprintf("%10d ns/op", nsop)
+ if c.N > 0 && nsop < 100 {
+ // The format specifiers here make sure that
+ // the ones digits line up for all three possible formats.
+ if nsop < 10 {
+ ns = fmt.Sprintf("%13.2f ns/op", float64(c.duration.Nanoseconds())/float64(c.N))
+ } else {
+ ns = fmt.Sprintf("%12.1f ns/op", float64(c.duration.Nanoseconds())/float64(c.N))
+ }
+ }
+ memStats := ""
+ if c.benchMem {
+ allocedBytes := fmt.Sprintf("%8d B/op", int64(c.netBytes)/int64(c.N))
+ allocs := fmt.Sprintf("%8d allocs/op", int64(c.netAllocs)/int64(c.N))
+ memStats = fmt.Sprintf("\t%s\t%s", allocedBytes, allocs)
+ }
+ return fmt.Sprintf("%8d\t%s%s%s", c.N, ns, mb, memStats)
+}
+
+func min(x, y int) int {
+ if x > y {
+ return y
+ }
+ return x
+}
+
+func max(x, y int) int {
+ if x < y {
+ return y
+ }
+ return x
+}
+
+// roundDown10 rounds a number down to the nearest power of 10.
+func roundDown10(n int) int {
+ var tens = 0
+ // tens = floor(log_10(n))
+ for n > 10 {
+ n = n / 10
+ tens++
+ }
+ // result = 10^tens
+ result := 1
+ for i := 0; i < tens; i++ {
+ result *= 10
+ }
+ return result
+}
+
+// roundUp rounds x up to a number of the form [1eX, 2eX, 5eX].
+func roundUp(n int) int {
+ base := roundDown10(n)
+ if n < (2 * base) {
+ return 2 * base
+ }
+ if n < (5 * base) {
+ return 5 * base
+ }
+ return 10 * base
+}
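The two rounding helpers above pick benchmark iteration counts from the 1-2-5 sequence (10, 20, 50, 100, ...). Their behavior on a few inputs can be checked with a standalone copy of the functions:

```go
package main

import "fmt"

// roundDown10 is copied from the benchmark code above: it finds a
// power of 10 no greater than n to use as the rounding base.
func roundDown10(n int) int {
	tens := 0
	for n > 10 {
		n = n / 10
		tens++
	}
	result := 1
	for i := 0; i < tens; i++ {
		result *= 10
	}
	return result
}

// roundUp rounds n up to the next number of the form 1eX, 2eX or 5eX.
func roundUp(n int) int {
	base := roundDown10(n)
	if n < (2 * base) {
		return 2 * base
	}
	if n < (5 * base) {
		return 5 * base
	}
	return 10 * base
}

func main() {
	fmt.Println(roundUp(13), roundUp(35), roundUp(85)) // 20 50 100
}
```

Growing the iteration count along 1-2-5 steps keeps successive benchmark runs short while converging quickly on a count that fills the configured benchmark time.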
diff --git a/vendor/gopkg.in/check.v1/benchmark_test.go b/vendor/gopkg.in/check.v1/benchmark_test.go
new file mode 100644
index 0000000..8b6a8a6
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/benchmark_test.go
@@ -0,0 +1,91 @@
+// These tests verify the test running logic.
+
+package check_test
+
+import (
+ "time"
+ . "gopkg.in/check.v1"
+)
+
+var benchmarkS = Suite(&BenchmarkS{})
+
+type BenchmarkS struct{}
+
+func (s *BenchmarkS) TestCountSuite(c *C) {
+ suitesRun += 1
+}
+
+func (s *BenchmarkS) TestBasicTestTiming(c *C) {
+ helper := FixtureHelper{sleepOn: "Test1", sleep: 1000000 * time.Nanosecond}
+ output := String{}
+ runConf := RunConf{Output: &output, Verbose: true}
+ Run(&helper, &runConf)
+
+ expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test1\t0\\.0[0-9]+s\n" +
+ "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t0\\.0[0-9]+s\n"
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *BenchmarkS) TestStreamTestTiming(c *C) {
+ helper := FixtureHelper{sleepOn: "SetUpSuite", sleep: 1000000 * time.Nanosecond}
+ output := String{}
+ runConf := RunConf{Output: &output, Stream: true}
+ Run(&helper, &runConf)
+
+ expected := "(?s).*\nPASS: check_test\\.go:[0-9]+: FixtureHelper\\.SetUpSuite\t[0-9]+\\.[0-9]+s\n.*"
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *BenchmarkS) TestBenchmark(c *C) {
+ helper := FixtureHelper{sleep: 100000}
+ output := String{}
+ runConf := RunConf{
+ Output: &output,
+ Benchmark: true,
+ BenchmarkTime: 10000000,
+ Filter: "Benchmark1",
+ }
+ Run(&helper, &runConf)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Benchmark1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Benchmark1")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ // ... and more.
+
+ expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark1\t\\s+[0-9]+\t\\s+[0-9]+ ns/op\n"
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *BenchmarkS) TestBenchmarkBytes(c *C) {
+ helper := FixtureHelper{sleep: 100000}
+ output := String{}
+ runConf := RunConf{
+ Output: &output,
+ Benchmark: true,
+ BenchmarkTime: 10000000,
+ Filter: "Benchmark2",
+ }
+ Run(&helper, &runConf)
+
+ expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark2\t\\s+[0-9]+\t\\s+[0-9]+ ns/op\t\\s+ *[1-9]\\.[0-9]{2} MB/s\n"
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *BenchmarkS) TestBenchmarkMem(c *C) {
+ helper := FixtureHelper{sleep: 100000}
+ output := String{}
+ runConf := RunConf{
+ Output: &output,
+ Benchmark: true,
+ BenchmarkMem: true,
+ BenchmarkTime: 10000000,
+ Filter: "Benchmark3",
+ }
+ Run(&helper, &runConf)
+
+ expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark3\t\\s+ [0-9]+\t\\s+ *[0-9]+ ns/op\t\\s+ [0-9]+ B/op\t\\s+ [1-9]+ allocs/op\n"
+ c.Assert(output.value, Matches, expected)
+}
diff --git a/vendor/gopkg.in/check.v1/bootstrap_test.go b/vendor/gopkg.in/check.v1/bootstrap_test.go
new file mode 100644
index 0000000..e55f327
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/bootstrap_test.go
@@ -0,0 +1,82 @@
+// These initial tests are for bootstrapping. They verify that we can
+// basically use the testing infrastructure itself to check if the test
+// system is working.
+//
+// These tests will break the test runner badly in case of errors,
+// because if they simply failed we couldn't be sure the developer
+// would ever see anything (a failure here means the failure-reporting
+// system itself somehow isn't working! :-)
+//
+// Do not assume *any* internal functionality works as expected besides
+// what's actually tested here.
+
+package check_test
+
+import (
+ "fmt"
+ "gopkg.in/check.v1"
+ "strings"
+)
+
+type BootstrapS struct{}
+
+var bootstrapS = check.Suite(&BootstrapS{})
+
+func (s *BootstrapS) TestCountSuite(c *check.C) {
+ suitesRun += 1
+}
+
+func (s *BootstrapS) TestFailedAndFail(c *check.C) {
+ if c.Failed() {
+ critical("c.Failed() must be false first!")
+ }
+ c.Fail()
+ if !c.Failed() {
+ critical("c.Fail() didn't put the test in a failed state!")
+ }
+ c.Succeed()
+}
+
+func (s *BootstrapS) TestFailedAndSucceed(c *check.C) {
+ c.Fail()
+ c.Succeed()
+ if c.Failed() {
+ critical("c.Succeed() didn't put the test back in a non-failed state")
+ }
+}
+
+func (s *BootstrapS) TestLogAndGetTestLog(c *check.C) {
+ c.Log("Hello there!")
+ log := c.GetTestLog()
+ if log != "Hello there!\n" {
+ critical(fmt.Sprintf("Log() or GetTestLog() is not working! Got: %#v", log))
+ }
+}
+
+func (s *BootstrapS) TestLogfAndGetTestLog(c *check.C) {
+ c.Logf("Hello %v", "there!")
+ log := c.GetTestLog()
+ if log != "Hello there!\n" {
+ critical(fmt.Sprintf("Logf() or GetTestLog() is not working! Got: %#v", log))
+ }
+}
+
+func (s *BootstrapS) TestRunShowsErrors(c *check.C) {
+ output := String{}
+ check.Run(&FailHelper{}, &check.RunConf{Output: &output})
+ if strings.Index(output.value, "Expected failure!") == -1 {
+ critical(fmt.Sprintf("RunWithWriter() output did not contain the "+
+ "expected failure! Got: %#v",
+ output.value))
+ }
+}
+
+func (s *BootstrapS) TestRunDoesntShowSuccesses(c *check.C) {
+ output := String{}
+ check.Run(&SuccessHelper{}, &check.RunConf{Output: &output})
+ if strings.Index(output.value, "Expected success!") != -1 {
+ critical(fmt.Sprintf("RunWithWriter() output contained a successful "+
+ "test! Got: %#v",
+ output.value))
+ }
+}
diff --git a/vendor/gopkg.in/check.v1/check.go b/vendor/gopkg.in/check.v1/check.go
new file mode 100644
index 0000000..137a274
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/check.go
@@ -0,0 +1,873 @@
+// Package check is a rich testing extension for Go's testing package.
+//
+// For details about the project, see:
+//
+// http://labix.org/gocheck
+//
+package check
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ "io"
+ "math/rand"
+ "os"
+ "path"
+ "path/filepath"
+ "reflect"
+ "regexp"
+ "runtime"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+// -----------------------------------------------------------------------
+// Internal type which deals with suite method calling.
+
+const (
+ fixtureKd = iota
+ testKd
+)
+
+type funcKind int
+
+const (
+ succeededSt = iota
+ failedSt
+ skippedSt
+ panickedSt
+ fixturePanickedSt
+ missedSt
+)
+
+type funcStatus uint32
+
+// A method value can't reach its own Method structure.
+type methodType struct {
+ reflect.Value
+ Info reflect.Method
+}
+
+func newMethod(receiver reflect.Value, i int) *methodType {
+ return &methodType{receiver.Method(i), receiver.Type().Method(i)}
+}
+
+func (method *methodType) PC() uintptr {
+ return method.Info.Func.Pointer()
+}
+
+func (method *methodType) suiteName() string {
+ t := method.Info.Type.In(0)
+ if t.Kind() == reflect.Ptr {
+ t = t.Elem()
+ }
+ return t.Name()
+}
+
+func (method *methodType) String() string {
+ return method.suiteName() + "." + method.Info.Name
+}
+
+func (method *methodType) matches(re *regexp.Regexp) bool {
+ return (re.MatchString(method.Info.Name) ||
+ re.MatchString(method.suiteName()) ||
+ re.MatchString(method.String()))
+}
+
+type C struct {
+ method *methodType
+ kind funcKind
+ testName string
+ _status funcStatus
+ logb *logger
+ logw io.Writer
+ done chan *C
+ reason string
+ mustFail bool
+ tempDir *tempDir
+ benchMem bool
+ startTime time.Time
+ timer
+}
+
+func (c *C) status() funcStatus {
+ return funcStatus(atomic.LoadUint32((*uint32)(&c._status)))
+}
+
+func (c *C) setStatus(s funcStatus) {
+ atomic.StoreUint32((*uint32)(&c._status), uint32(s))
+}
+
+func (c *C) stopNow() {
+ runtime.Goexit()
+}
+
+// logger is a concurrency-safe bytes.Buffer
+type logger struct {
+ sync.Mutex
+ writer bytes.Buffer
+}
+
+func (l *logger) Write(buf []byte) (int, error) {
+ l.Lock()
+ defer l.Unlock()
+ return l.writer.Write(buf)
+}
+
+func (l *logger) WriteTo(w io.Writer) (int64, error) {
+ l.Lock()
+ defer l.Unlock()
+ return l.writer.WriteTo(w)
+}
+
+func (l *logger) String() string {
+ l.Lock()
+ defer l.Unlock()
+ return l.writer.String()
+}
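The `logger` type above is the standard Go pattern for a concurrency-safe buffer: embed a `sync.Mutex` alongside a `bytes.Buffer` and take the lock in every method. The same pattern in isolation, as a minimal standalone sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// safeBuffer mirrors the logger pattern above: a bytes.Buffer
// guarded by an embedded mutex so concurrent writers don't race.
type safeBuffer struct {
	sync.Mutex
	buf bytes.Buffer
}

func (b *safeBuffer) Write(p []byte) (int, error) {
	b.Lock()
	defer b.Unlock()
	return b.buf.Write(p)
}

func (b *safeBuffer) String() string {
	b.Lock()
	defer b.Unlock()
	return b.buf.String()
}

func main() {
	var b safeBuffer
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.Write([]byte("x"))
		}()
	}
	wg.Wait()
	fmt.Println(len(b.String())) // 10 writes of one byte each
}
```

This matters here because test goroutines and the output writer may touch the log concurrently; an unguarded `bytes.Buffer` would be a data race.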
+
+// -----------------------------------------------------------------------
+// Handling of temporary files and directories.
+
+type tempDir struct {
+ sync.Mutex
+ path string
+ counter int
+}
+
+func (td *tempDir) newPath() string {
+ td.Lock()
+ defer td.Unlock()
+ if td.path == "" {
+ var err error
+ for i := 0; i != 100; i++ {
+ path := fmt.Sprintf("%s%ccheck-%d", os.TempDir(), os.PathSeparator, rand.Int())
+ if err = os.Mkdir(path, 0700); err == nil {
+ td.path = path
+ break
+ }
+ }
+ if td.path == "" {
+ panic("Couldn't create temporary directory: " + err.Error())
+ }
+ }
+ result := filepath.Join(td.path, strconv.Itoa(td.counter))
+ td.counter++
+ return result
+}
+
+func (td *tempDir) removeAll() {
+ td.Lock()
+ defer td.Unlock()
+ if td.path != "" {
+ err := os.RemoveAll(td.path)
+ if err != nil {
+			fmt.Fprintf(os.Stderr, "WARNING: Error cleaning up temporaries: %s", err.Error())
+ }
+ }
+}
+
+// Create a new temporary directory which is automatically removed after
+// the suite finishes running.
+func (c *C) MkDir() string {
+ path := c.tempDir.newPath()
+ if err := os.Mkdir(path, 0700); err != nil {
+ panic(fmt.Sprintf("Couldn't create temporary directory %s: %s", path, err.Error()))
+ }
+ return path
+}
+
+// -----------------------------------------------------------------------
+// Low-level logging functions.
+
+func (c *C) log(args ...interface{}) {
+ c.writeLog([]byte(fmt.Sprint(args...) + "\n"))
+}
+
+func (c *C) logf(format string, args ...interface{}) {
+ c.writeLog([]byte(fmt.Sprintf(format+"\n", args...)))
+}
+
+func (c *C) logNewLine() {
+ c.writeLog([]byte{'\n'})
+}
+
+func (c *C) writeLog(buf []byte) {
+ c.logb.Write(buf)
+ if c.logw != nil {
+ c.logw.Write(buf)
+ }
+}
+
+func hasStringOrError(x interface{}) (ok bool) {
+ _, ok = x.(fmt.Stringer)
+ if ok {
+ return
+ }
+ _, ok = x.(error)
+ return
+}
+
+func (c *C) logValue(label string, value interface{}) {
+ if label == "" {
+ if hasStringOrError(value) {
+ c.logf("... %#v (%q)", value, value)
+ } else {
+ c.logf("... %#v", value)
+ }
+ } else if value == nil {
+ c.logf("... %s = nil", label)
+ } else {
+ if hasStringOrError(value) {
+ fv := fmt.Sprintf("%#v", value)
+ qv := fmt.Sprintf("%q", value)
+ if fv != qv {
+ c.logf("... %s %s = %s (%s)", label, reflect.TypeOf(value), fv, qv)
+ return
+ }
+ }
+ if s, ok := value.(string); ok && isMultiLine(s) {
+ c.logf(`... %s %s = "" +`, label, reflect.TypeOf(value))
+ c.logMultiLine(s)
+ } else {
+ c.logf("... %s %s = %#v", label, reflect.TypeOf(value), value)
+ }
+ }
+}
+
+func (c *C) logMultiLine(s string) {
+ b := make([]byte, 0, len(s)*2)
+ i := 0
+ n := len(s)
+ for i < n {
+ j := i + 1
+ for j < n && s[j-1] != '\n' {
+ j++
+ }
+ b = append(b, "... "...)
+ b = strconv.AppendQuote(b, s[i:j])
+ if j < n {
+ b = append(b, " +"...)
+ }
+ b = append(b, '\n')
+ i = j
+ }
+ c.writeLog(b)
+}
+
+func isMultiLine(s string) bool {
+ for i := 0; i+1 < len(s); i++ {
+ if s[i] == '\n' {
+ return true
+ }
+ }
+ return false
+}
+
+func (c *C) logString(issue string) {
+ c.log("... ", issue)
+}
+
+func (c *C) logCaller(skip int) {
+ // This is a bit heavier than it ought to be.
+ skip++ // Our own frame.
+ pc, callerFile, callerLine, ok := runtime.Caller(skip)
+ if !ok {
+ return
+ }
+ var testFile string
+ var testLine int
+ testFunc := runtime.FuncForPC(c.method.PC())
+ if runtime.FuncForPC(pc) != testFunc {
+ for {
+ skip++
+ if pc, file, line, ok := runtime.Caller(skip); ok {
+ // Note that the test line may be different on
+ // distinct calls for the same test. Showing
+ // the "internal" line is helpful when debugging.
+ if runtime.FuncForPC(pc) == testFunc {
+ testFile, testLine = file, line
+ break
+ }
+ } else {
+ break
+ }
+ }
+ }
+ if testFile != "" && (testFile != callerFile || testLine != callerLine) {
+ c.logCode(testFile, testLine)
+ }
+ c.logCode(callerFile, callerLine)
+}
+
+func (c *C) logCode(path string, line int) {
+ c.logf("%s:%d:", nicePath(path), line)
+ code, err := printLine(path, line)
+ if code == "" {
+ code = "..." // XXX Open the file and take the raw line.
+ if err != nil {
+ code += err.Error()
+ }
+ }
+ c.log(indent(code, " "))
+}
+
+var valueGo = filepath.Join("reflect", "value.go")
+var asmGo = filepath.Join("runtime", "asm_")
+
+func (c *C) logPanic(skip int, value interface{}) {
+ skip++ // Our own frame.
+ initialSkip := skip
+ for ; ; skip++ {
+ if pc, file, line, ok := runtime.Caller(skip); ok {
+ if skip == initialSkip {
+ c.logf("... Panic: %s (PC=0x%X)\n", value, pc)
+ }
+ name := niceFuncName(pc)
+ path := nicePath(file)
+ if strings.Contains(path, "/gopkg.in/check.v") {
+ continue
+ }
+ if name == "Value.call" && strings.HasSuffix(path, valueGo) {
+ continue
+ }
+ if (name == "call16" || name == "call32") && strings.Contains(path, asmGo) {
+ continue
+ }
+ c.logf("%s:%d\n in %s", nicePath(file), line, name)
+ } else {
+ break
+ }
+ }
+}
+
+func (c *C) logSoftPanic(issue string) {
+ c.log("... Panic: ", issue)
+}
+
+func (c *C) logArgPanic(method *methodType, expectedType string) {
+ c.logf("... Panic: %s argument should be %s",
+ niceFuncName(method.PC()), expectedType)
+}
+
+// -----------------------------------------------------------------------
+// Some simple formatting helpers.
+
+var initWD, initWDErr = os.Getwd()
+
+func init() {
+ if initWDErr == nil {
+ initWD = strings.Replace(initWD, "\\", "/", -1) + "/"
+ }
+}
+
+func nicePath(path string) string {
+ if initWDErr == nil {
+ if strings.HasPrefix(path, initWD) {
+ return path[len(initWD):]
+ }
+ }
+ return path
+}
+
+func niceFuncPath(pc uintptr) string {
+ function := runtime.FuncForPC(pc)
+ if function != nil {
+ filename, line := function.FileLine(pc)
+ return fmt.Sprintf("%s:%d", nicePath(filename), line)
+ }
+ return ""
+}
+
+func niceFuncName(pc uintptr) string {
+ function := runtime.FuncForPC(pc)
+ if function != nil {
+ name := path.Base(function.Name())
+ if i := strings.Index(name, "."); i > 0 {
+ name = name[i+1:]
+ }
+ if strings.HasPrefix(name, "(*") {
+ if i := strings.Index(name, ")"); i > 0 {
+ name = name[2:i] + name[i+1:]
+ }
+ }
+ if i := strings.LastIndex(name, ".*"); i != -1 {
+ name = name[:i] + "." + name[i+2:]
+ }
+ if i := strings.LastIndex(name, "·"); i != -1 {
+ name = name[:i] + "." + name[i+2:]
+ }
+ return name
+ }
+ return ""
+}
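niceFuncName turns a fully qualified symbol such as `github.com/me/mypkg.(*Suite).TestFoo` into the short `Suite.TestFoo` used in reports. A standalone sketch of the same trimming steps, operating on a plain string rather than a `runtime.Func` (the sample input is hypothetical; the closure `.*` and gc `·` rewrites from the original are omitted for brevity):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// trimName applies the same steps as niceFuncName above.
func trimName(name string) string {
	name = path.Base(name) // drop the import path
	if i := strings.Index(name, "."); i > 0 {
		name = name[i+1:] // drop the package name
	}
	if strings.HasPrefix(name, "(*") {
		if i := strings.Index(name, ")"); i > 0 {
			name = name[2:i] + name[i+1:] // unwrap the pointer receiver
		}
	}
	return name
}

func main() {
	fmt.Println(trimName("github.com/me/mypkg.(*Suite).TestFoo"))
}
```

Note that `path.Base` (not `filepath.Base`) is the right call here: symbol names always use forward slashes regardless of OS.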
+
+// -----------------------------------------------------------------------
+// Result tracker to aggregate call results.
+
+type Result struct {
+ Succeeded int
+ Failed int
+ Skipped int
+ Panicked int
+ FixturePanicked int
+ ExpectedFailures int
+ Missed int // Not even tried to run, related to a panic in the fixture.
+ RunError error // Houston, we've got a problem.
+ WorkDir string // If KeepWorkDir is true
+}
+
+type resultTracker struct {
+ result Result
+ _lastWasProblem bool
+ _waiting int
+ _missed int
+ _expectChan chan *C
+ _doneChan chan *C
+ _stopChan chan bool
+}
+
+func newResultTracker() *resultTracker {
+ return &resultTracker{_expectChan: make(chan *C), // Synchronous
+ _doneChan: make(chan *C, 32), // Asynchronous
+ _stopChan: make(chan bool)} // Synchronous
+}
+
+func (tracker *resultTracker) start() {
+ go tracker._loopRoutine()
+}
+
+func (tracker *resultTracker) waitAndStop() {
+ <-tracker._stopChan
+}
+
+func (tracker *resultTracker) expectCall(c *C) {
+ tracker._expectChan <- c
+}
+
+func (tracker *resultTracker) callDone(c *C) {
+ tracker._doneChan <- c
+}
+
+func (tracker *resultTracker) _loopRoutine() {
+ for {
+ var c *C
+ if tracker._waiting > 0 {
+ // Calls still running. Can't stop.
+ select {
+ // XXX Reindent this (not now to make diff clear)
+ case <-tracker._expectChan:
+ tracker._waiting++
+ case c = <-tracker._doneChan:
+ tracker._waiting--
+ switch c.status() {
+ case succeededSt:
+ if c.kind == testKd {
+ if c.mustFail {
+ tracker.result.ExpectedFailures++
+ } else {
+ tracker.result.Succeeded++
+ }
+ }
+ case failedSt:
+ tracker.result.Failed++
+ case panickedSt:
+ if c.kind == fixtureKd {
+ tracker.result.FixturePanicked++
+ } else {
+ tracker.result.Panicked++
+ }
+ case fixturePanickedSt:
+ // Track it as missed, since the panic
+ // was on the fixture, not on the test.
+ tracker.result.Missed++
+ case missedSt:
+ tracker.result.Missed++
+ case skippedSt:
+ if c.kind == testKd {
+ tracker.result.Skipped++
+ }
+ }
+ }
+ } else {
+ // No calls. Can stop, but no done calls here.
+ select {
+ case tracker._stopChan <- true:
+ return
+ case <-tracker._expectChan:
+ tracker._waiting++
+ case <-tracker._doneChan:
+ panic("Tracker got an unexpected done call.")
+ }
+ }
+ }
+}
+
+// -----------------------------------------------------------------------
+// The underlying suite runner.
+
+type suiteRunner struct {
+ suite interface{}
+ setUpSuite, tearDownSuite *methodType
+ setUpTest, tearDownTest *methodType
+ tests []*methodType
+ tracker *resultTracker
+ tempDir *tempDir
+ keepDir bool
+ output *outputWriter
+ reportedProblemLast bool
+ benchTime time.Duration
+ benchMem bool
+}
+
+type RunConf struct {
+ Output io.Writer
+ Stream bool
+ Verbose bool
+ Filter string
+ Benchmark bool
+ BenchmarkTime time.Duration // Defaults to 1 second
+ BenchmarkMem bool
+ KeepWorkDir bool
+}
+
+// Create a new suiteRunner able to run all methods in the given suite.
+func newSuiteRunner(suite interface{}, runConf *RunConf) *suiteRunner {
+ var conf RunConf
+ if runConf != nil {
+ conf = *runConf
+ }
+ if conf.Output == nil {
+ conf.Output = os.Stdout
+ }
+ if conf.Benchmark {
+ conf.Verbose = true
+ }
+
+ suiteType := reflect.TypeOf(suite)
+ suiteNumMethods := suiteType.NumMethod()
+ suiteValue := reflect.ValueOf(suite)
+
+ runner := &suiteRunner{
+ suite: suite,
+ output: newOutputWriter(conf.Output, conf.Stream, conf.Verbose),
+ tracker: newResultTracker(),
+ benchTime: conf.BenchmarkTime,
+ benchMem: conf.BenchmarkMem,
+ tempDir: &tempDir{},
+ keepDir: conf.KeepWorkDir,
+ tests: make([]*methodType, 0, suiteNumMethods),
+ }
+ if runner.benchTime == 0 {
+ runner.benchTime = 1 * time.Second
+ }
+
+ var filterRegexp *regexp.Regexp
+ if conf.Filter != "" {
+ regexp, err := regexp.Compile(conf.Filter)
+ if err != nil {
+ msg := "Bad filter expression: " + err.Error()
+ runner.tracker.result.RunError = errors.New(msg)
+ return runner
+ }
+ filterRegexp = regexp
+ }
+
+ for i := 0; i != suiteNumMethods; i++ {
+ method := newMethod(suiteValue, i)
+ switch method.Info.Name {
+ case "SetUpSuite":
+ runner.setUpSuite = method
+ case "TearDownSuite":
+ runner.tearDownSuite = method
+ case "SetUpTest":
+ runner.setUpTest = method
+ case "TearDownTest":
+ runner.tearDownTest = method
+ default:
+ prefix := "Test"
+ if conf.Benchmark {
+ prefix = "Benchmark"
+ }
+ if !strings.HasPrefix(method.Info.Name, prefix) {
+ continue
+ }
+ if filterRegexp == nil || method.matches(filterRegexp) {
+ runner.tests = append(runner.tests, method)
+ }
+ }
+ }
+ return runner
+}
+
+// Run all methods in the given suite.
+func (runner *suiteRunner) run() *Result {
+ if runner.tracker.result.RunError == nil && len(runner.tests) > 0 {
+ runner.tracker.start()
+ if runner.checkFixtureArgs() {
+ c := runner.runFixture(runner.setUpSuite, "", nil)
+ if c == nil || c.status() == succeededSt {
+ for i := 0; i != len(runner.tests); i++ {
+ c := runner.runTest(runner.tests[i])
+ if c.status() == fixturePanickedSt {
+ runner.skipTests(missedSt, runner.tests[i+1:])
+ break
+ }
+ }
+ } else if c != nil && c.status() == skippedSt {
+ runner.skipTests(skippedSt, runner.tests)
+ } else {
+ runner.skipTests(missedSt, runner.tests)
+ }
+ runner.runFixture(runner.tearDownSuite, "", nil)
+ } else {
+ runner.skipTests(missedSt, runner.tests)
+ }
+ runner.tracker.waitAndStop()
+ if runner.keepDir {
+ runner.tracker.result.WorkDir = runner.tempDir.path
+ } else {
+ runner.tempDir.removeAll()
+ }
+ }
+ return &runner.tracker.result
+}
+
+// Create a call object with the given suite method, and fork a
+// goroutine with the provided dispatcher for running it.
+func (runner *suiteRunner) forkCall(method *methodType, kind funcKind, testName string, logb *logger, dispatcher func(c *C)) *C {
+ var logw io.Writer
+ if runner.output.Stream {
+ logw = runner.output
+ }
+ if logb == nil {
+ logb = new(logger)
+ }
+ c := &C{
+ method: method,
+ kind: kind,
+ testName: testName,
+ logb: logb,
+ logw: logw,
+ tempDir: runner.tempDir,
+ done: make(chan *C, 1),
+ timer: timer{benchTime: runner.benchTime},
+ startTime: time.Now(),
+ benchMem: runner.benchMem,
+ }
+ runner.tracker.expectCall(c)
+ go (func() {
+ runner.reportCallStarted(c)
+ defer runner.callDone(c)
+ dispatcher(c)
+ })()
+ return c
+}
+
+// Same as forkCall(), but wait for call to finish before returning.
+func (runner *suiteRunner) runFunc(method *methodType, kind funcKind, testName string, logb *logger, dispatcher func(c *C)) *C {
+ c := runner.forkCall(method, kind, testName, logb, dispatcher)
+ <-c.done
+ return c
+}
+
+// Handle a finished call. If there were any panics, update the call status
+// accordingly. Then, mark the call as done and report to the tracker.
+func (runner *suiteRunner) callDone(c *C) {
+ value := recover()
+ if value != nil {
+ switch v := value.(type) {
+ case *fixturePanic:
+ if v.status == skippedSt {
+ c.setStatus(skippedSt)
+ } else {
+ c.logSoftPanic("Fixture has panicked (see related PANIC)")
+ c.setStatus(fixturePanickedSt)
+ }
+ default:
+ c.logPanic(1, value)
+ c.setStatus(panickedSt)
+ }
+ }
+ if c.mustFail {
+ switch c.status() {
+ case failedSt:
+ c.setStatus(succeededSt)
+ case succeededSt:
+ c.setStatus(failedSt)
+ c.logString("Error: Test succeeded, but was expected to fail")
+ c.logString("Reason: " + c.reason)
+ }
+ }
+
+ runner.reportCallDone(c)
+ c.done <- c
+}
+
+// Runs a fixture call synchronously. The fixture will still be run in a
+// goroutine like all suite methods, but this method will not return
+// until the fixture goroutine has finished, because fixtures must be
+// run in a well-defined order.
+func (runner *suiteRunner) runFixture(method *methodType, testName string, logb *logger) *C {
+ if method != nil {
+ c := runner.runFunc(method, fixtureKd, testName, logb, func(c *C) {
+ c.ResetTimer()
+ c.StartTimer()
+ defer c.StopTimer()
+ c.method.Call([]reflect.Value{reflect.ValueOf(c)})
+ })
+ return c
+ }
+ return nil
+}
+
+// Run the fixture method with runFixture(), but panic with a fixturePanic{}
+// in case the fixture method panics. This makes it easier to track the
+// fixture panic together with other call panics within forkTest().
+func (runner *suiteRunner) runFixtureWithPanic(method *methodType, testName string, logb *logger, skipped *bool) *C {
+ if skipped != nil && *skipped {
+ return nil
+ }
+ c := runner.runFixture(method, testName, logb)
+ if c != nil && c.status() != succeededSt {
+ if skipped != nil {
+ *skipped = c.status() == skippedSt
+ }
+ panic(&fixturePanic{c.status(), method})
+ }
+ return c
+}
+
+type fixturePanic struct {
+ status funcStatus
+ method *methodType
+}
+
+// Run the suite test method, together with the test-specific fixture,
+// asynchronously.
+func (runner *suiteRunner) forkTest(method *methodType) *C {
+ testName := method.String()
+ return runner.forkCall(method, testKd, testName, nil, func(c *C) {
+ var skipped bool
+ defer runner.runFixtureWithPanic(runner.tearDownTest, testName, nil, &skipped)
+ defer c.StopTimer()
+ benchN := 1
+ for {
+ runner.runFixtureWithPanic(runner.setUpTest, testName, c.logb, &skipped)
+ mt := c.method.Type()
+ if mt.NumIn() != 1 || mt.In(0) != reflect.TypeOf(c) {
+ // Rather than a plain panic, provide a more helpful message when
+ // the argument type is incorrect.
+ c.setStatus(panickedSt)
+ c.logArgPanic(c.method, "*check.C")
+ return
+ }
+ if strings.HasPrefix(c.method.Info.Name, "Test") {
+ c.ResetTimer()
+ c.StartTimer()
+ c.method.Call([]reflect.Value{reflect.ValueOf(c)})
+ return
+ }
+ if !strings.HasPrefix(c.method.Info.Name, "Benchmark") {
+ panic("unexpected method prefix: " + c.method.Info.Name)
+ }
+
+ runtime.GC()
+ c.N = benchN
+ c.ResetTimer()
+ c.StartTimer()
+ c.method.Call([]reflect.Value{reflect.ValueOf(c)})
+ c.StopTimer()
+ if c.status() != succeededSt || c.duration >= c.benchTime || benchN >= 1e9 {
+ return
+ }
+ perOpN := int(1e9)
+ if c.nsPerOp() != 0 {
+ perOpN = int(c.benchTime.Nanoseconds() / c.nsPerOp())
+ }
+
+ // Logic taken from the stock testing package:
+ // - Run more iterations than we think we'll need for a second (1.5x).
+ // - Don't grow too fast in case we had timing errors previously.
+ // - Be sure to run at least one more than last time.
+ benchN = max(min(perOpN+perOpN/2, 100*benchN), benchN+1)
+ benchN = roundUp(benchN)
+
+ skipped = true // Don't run the deferred one if this panics.
+ runner.runFixtureWithPanic(runner.tearDownTest, testName, nil, nil)
+ skipped = false
+ }
+ })
+}
+
+// Same as forkTest(), but wait for the test to finish before returning.
+func (runner *suiteRunner) runTest(method *methodType) *C {
+ c := runner.forkTest(method)
+ <-c.done
+ return c
+}
+
+// Helper to mark tests as skipped or missed. A bit heavy for what
+// it does, but it enables homogeneous handling of tracking, including
+// nice verbose output.
+func (runner *suiteRunner) skipTests(status funcStatus, methods []*methodType) {
+ for _, method := range methods {
+ runner.runFunc(method, testKd, "", nil, func(c *C) {
+ c.setStatus(status)
+ })
+ }
+}
+
+// Verify if the fixture arguments are *check.C. In case of errors,
+// log the error as a panic in the fixture method call, and return false.
+func (runner *suiteRunner) checkFixtureArgs() bool {
+ succeeded := true
+ argType := reflect.TypeOf(&C{})
+ for _, method := range []*methodType{runner.setUpSuite, runner.tearDownSuite, runner.setUpTest, runner.tearDownTest} {
+ if method != nil {
+ mt := method.Type()
+ if mt.NumIn() != 1 || mt.In(0) != argType {
+ succeeded = false
+ runner.runFunc(method, fixtureKd, "", nil, func(c *C) {
+ c.logArgPanic(method, "*check.C")
+ c.setStatus(panickedSt)
+ })
+ }
+ }
+ }
+ return succeeded
+}
+
+func (runner *suiteRunner) reportCallStarted(c *C) {
+ runner.output.WriteCallStarted("START", c)
+}
+
+func (runner *suiteRunner) reportCallDone(c *C) {
+ runner.tracker.callDone(c)
+ switch c.status() {
+ case succeededSt:
+ if c.mustFail {
+ runner.output.WriteCallSuccess("FAIL EXPECTED", c)
+ } else {
+ runner.output.WriteCallSuccess("PASS", c)
+ }
+ case skippedSt:
+ runner.output.WriteCallSuccess("SKIP", c)
+ case failedSt:
+ runner.output.WriteCallProblem("FAIL", c)
+ case panickedSt:
+ runner.output.WriteCallProblem("PANIC", c)
+ case fixturePanickedSt:
+ // That's a testKd call reporting that its fixture
+ // has panicked. The fixture call which caused the
+ // panic itself was tracked above. We'll report to
+ // aid debugging.
+ runner.output.WriteCallProblem("PANIC", c)
+ case missedSt:
+ runner.output.WriteCallSuccess("MISS", c)
+ }
+}
diff --git a/vendor/gopkg.in/check.v1/check_test.go b/vendor/gopkg.in/check.v1/check_test.go
new file mode 100644
index 0000000..871b325
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/check_test.go
@@ -0,0 +1,207 @@
+// This file contains just a few generic helpers which are used by the
+// other test files.
+
+package check_test
+
+import (
+ "flag"
+ "fmt"
+ "os"
+ "regexp"
+ "runtime"
+ "testing"
+ "time"
+
+ "gopkg.in/check.v1"
+)
+
+// We count the number of suites run at least to get a vague hint that the
+// test suite is behaving as it should. Otherwise a bug introduced at the
+// very core of the system could go unperceived.
+const suitesRunExpected = 8
+
+var suitesRun int = 0
+
+func Test(t *testing.T) {
+ check.TestingT(t)
+ if suitesRun != suitesRunExpected && flag.Lookup("check.f").Value.String() == "" {
+ critical(fmt.Sprintf("Expected %d suites to run rather than %d",
+ suitesRunExpected, suitesRun))
+ }
+}
+
+// -----------------------------------------------------------------------
+// Helper functions.
+
+// Break down badly. This is used in test cases which can't yet assume
+// that the fundamental bits are working.
+func critical(error string) {
+ fmt.Fprintln(os.Stderr, "CRITICAL: "+error)
+ os.Exit(1)
+}
+
+// Return the file line where it's called.
+func getMyLine() int {
+ if _, _, line, ok := runtime.Caller(1); ok {
+ return line
+ }
+ return -1
+}
+
+// -----------------------------------------------------------------------
+// Helper type implementing a basic io.Writer for testing output.
+
+// Type implementing the io.Writer interface for analyzing output.
+type String struct {
+ value string
+}
+
+// The only function required by the io.Writer interface. Will append
+// written data to the String.value string.
+func (s *String) Write(p []byte) (n int, err error) {
+ s.value += string(p)
+ return len(p), nil
+}
+
+// Trivial wrapper to test errors happening on a different file
+// than the test itself.
+func checkEqualWrapper(c *check.C, obtained, expected interface{}) (result bool, line int) {
+ return c.Check(obtained, check.Equals, expected), getMyLine()
+}
+
+// -----------------------------------------------------------------------
+// Helper suite for testing basic fail behavior.
+
+type FailHelper struct {
+ testLine int
+}
+
+func (s *FailHelper) TestLogAndFail(c *check.C) {
+ s.testLine = getMyLine() - 1
+ c.Log("Expected failure!")
+ c.Fail()
+}
+
+// -----------------------------------------------------------------------
+// Helper suite for testing basic success behavior.
+
+type SuccessHelper struct{}
+
+func (s *SuccessHelper) TestLogAndSucceed(c *check.C) {
+ c.Log("Expected success!")
+}
+
+// -----------------------------------------------------------------------
+// Helper suite for testing ordering and behavior of fixture.
+
+type FixtureHelper struct {
+ calls []string
+ panicOn string
+ skip bool
+ skipOnN int
+ sleepOn string
+ sleep time.Duration
+ bytes int64
+}
+
+func (s *FixtureHelper) trace(name string, c *check.C) {
+ s.calls = append(s.calls, name)
+ if name == s.panicOn {
+ panic(name)
+ }
+ if s.sleep > 0 && s.sleepOn == name {
+ time.Sleep(s.sleep)
+ }
+ if s.skip && s.skipOnN == len(s.calls)-1 {
+ c.Skip("skipOnN == n")
+ }
+}
+
+func (s *FixtureHelper) SetUpSuite(c *check.C) {
+ s.trace("SetUpSuite", c)
+}
+
+func (s *FixtureHelper) TearDownSuite(c *check.C) {
+ s.trace("TearDownSuite", c)
+}
+
+func (s *FixtureHelper) SetUpTest(c *check.C) {
+ s.trace("SetUpTest", c)
+}
+
+func (s *FixtureHelper) TearDownTest(c *check.C) {
+ s.trace("TearDownTest", c)
+}
+
+func (s *FixtureHelper) Test1(c *check.C) {
+ s.trace("Test1", c)
+}
+
+func (s *FixtureHelper) Test2(c *check.C) {
+ s.trace("Test2", c)
+}
+
+func (s *FixtureHelper) Benchmark1(c *check.C) {
+ s.trace("Benchmark1", c)
+ for i := 0; i < c.N; i++ {
+ time.Sleep(s.sleep)
+ }
+}
+
+func (s *FixtureHelper) Benchmark2(c *check.C) {
+ s.trace("Benchmark2", c)
+ c.SetBytes(1024)
+ for i := 0; i < c.N; i++ {
+ time.Sleep(s.sleep)
+ }
+}
+
+func (s *FixtureHelper) Benchmark3(c *check.C) {
+ var x []int64
+ s.trace("Benchmark3", c)
+ for i := 0; i < c.N; i++ {
+ time.Sleep(s.sleep)
+ x = make([]int64, 5)
+ _ = x
+ }
+}
+
+// -----------------------------------------------------------------------
+// Helper which checks the state of the test and ensures that it matches
+// the given expectations. Depends on c.Errorf() working, so shouldn't
+// be used to test this one function.
+
+type expectedState struct {
+ name string
+ result interface{}
+ failed bool
+ log string
+}
+
+// Verify the state of the test. Note that since this also verifies if
+// the test is supposed to be in a failed state, no other checks should
+// be done in addition to what is being tested.
+func checkState(c *check.C, result interface{}, expected *expectedState) {
+ failed := c.Failed()
+ c.Succeed()
+ log := c.GetTestLog()
+ matched, matchError := regexp.MatchString("^"+expected.log+"$", log)
+ if matchError != nil {
+ c.Errorf("Error in matching expression used in testing %s",
+ expected.name)
+ } else if !matched {
+ c.Errorf("%s logged:\n----------\n%s----------\n\nExpected:\n----------\n%s\n----------",
+ expected.name, log, expected.log)
+ }
+ if result != expected.result {
+ c.Errorf("%s returned %#v rather than %#v",
+ expected.name, result, expected.result)
+ }
+ if failed != expected.failed {
+ if failed {
+ c.Errorf("%s has failed when it shouldn't", expected.name)
+ } else {
+ c.Errorf("%s has not failed when it should", expected.name)
+ }
+ }
+}
diff --git a/vendor/gopkg.in/check.v1/checkers.go b/vendor/gopkg.in/check.v1/checkers.go
new file mode 100644
index 0000000..3749545
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/checkers.go
@@ -0,0 +1,458 @@
+package check
+
+import (
+ "fmt"
+ "reflect"
+ "regexp"
+)
+
+// -----------------------------------------------------------------------
+// CommentInterface and Commentf helper, to attach extra information to checks.
+
+type comment struct {
+ format string
+ args []interface{}
+}
+
+// Commentf returns an informational value to use with Assert or Check calls.
+// If the checker test fails, the provided arguments will be passed to
+// fmt.Sprintf, and will be presented next to the logged failure.
+//
+// For example:
+//
+// c.Assert(v, Equals, 42, Commentf("Iteration #%d failed.", i))
+//
+// Note that if the comment is constant, a better option is to
+// simply use a normal comment right above or next to the line, as
+// it will also get printed with any errors:
+//
+// c.Assert(l, Equals, 8192) // Ensure buffer size is correct (bug #123)
+//
+func Commentf(format string, args ...interface{}) CommentInterface {
+ return &comment{format, args}
+}
+
+// CommentInterface must be implemented by types that attach extra
+// information to failed checks. See the Commentf function for details.
+type CommentInterface interface {
+ CheckCommentString() string
+}
+
+func (c *comment) CheckCommentString() string {
+ return fmt.Sprintf(c.format, c.args...)
+}
+
+// -----------------------------------------------------------------------
+// The Checker interface.
+
+// The Checker interface must be provided by checkers used with
+// the Assert and Check verification methods.
+type Checker interface {
+ Info() *CheckerInfo
+ Check(params []interface{}, names []string) (result bool, error string)
+}
+
+// See the Checker interface.
+type CheckerInfo struct {
+ Name string
+ Params []string
+}
+
+func (info *CheckerInfo) Info() *CheckerInfo {
+ return info
+}
+
+// -----------------------------------------------------------------------
+// Not checker logic inverter.
+
+// The Not checker inverts the logic of the provided checker. The
+// resulting checker will succeed where the original one failed, and
+// vice-versa.
+//
+// For example:
+//
+// c.Assert(a, Not(Equals), b)
+//
+func Not(checker Checker) Checker {
+ return &notChecker{checker}
+}
+
+type notChecker struct {
+ sub Checker
+}
+
+func (checker *notChecker) Info() *CheckerInfo {
+ info := *checker.sub.Info()
+ info.Name = "Not(" + info.Name + ")"
+ return &info
+}
+
+func (checker *notChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ result, error = checker.sub.Check(params, names)
+ result = !result
+ return
+}
+
+// -----------------------------------------------------------------------
+// IsNil checker.
+
+type isNilChecker struct {
+ *CheckerInfo
+}
+
+// The IsNil checker tests whether the obtained value is nil.
+//
+// For example:
+//
+// c.Assert(err, IsNil)
+//
+var IsNil Checker = &isNilChecker{
+ &CheckerInfo{Name: "IsNil", Params: []string{"value"}},
+}
+
+func (checker *isNilChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ return isNil(params[0]), ""
+}
+
+func isNil(obtained interface{}) (result bool) {
+ if obtained == nil {
+ result = true
+ } else {
+ switch v := reflect.ValueOf(obtained); v.Kind() {
+ case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
+ return v.IsNil()
+ }
+ }
+ return
+}
+
+// -----------------------------------------------------------------------
+// NotNil checker. Alias for Not(IsNil), since it's so common.
+
+type notNilChecker struct {
+ *CheckerInfo
+}
+
+// The NotNil checker verifies that the obtained value is not nil.
+//
+// For example:
+//
+// c.Assert(iface, NotNil)
+//
+// This is an alias for Not(IsNil), made available since it's a
+// fairly common check.
+//
+var NotNil Checker = &notNilChecker{
+ &CheckerInfo{Name: "NotNil", Params: []string{"value"}},
+}
+
+func (checker *notNilChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ return !isNil(params[0]), ""
+}
+
+// -----------------------------------------------------------------------
+// Equals checker.
+
+type equalsChecker struct {
+ *CheckerInfo
+}
+
+// The Equals checker verifies that the obtained value is equal to
+// the expected value, according to usual Go semantics for ==.
+//
+// For example:
+//
+// c.Assert(value, Equals, 42)
+//
+var Equals Checker = &equalsChecker{
+ &CheckerInfo{Name: "Equals", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *equalsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ defer func() {
+ if v := recover(); v != nil {
+ result = false
+ error = fmt.Sprint(v)
+ }
+ }()
+ return params[0] == params[1], ""
+}
+
+// -----------------------------------------------------------------------
+// DeepEquals checker.
+
+type deepEqualsChecker struct {
+ *CheckerInfo
+}
+
+// The DeepEquals checker verifies that the obtained value is deep-equal to
+// the expected value. The check will work correctly even when facing
+// slices, interfaces, and values of different types (which always fail
+// the test).
+//
+// For example:
+//
+// c.Assert(value, DeepEquals, 42)
+// c.Assert(array, DeepEquals, []string{"hi", "there"})
+//
+var DeepEquals Checker = &deepEqualsChecker{
+ &CheckerInfo{Name: "DeepEquals", Params: []string{"obtained", "expected"}},
+}
+
+func (checker *deepEqualsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ return reflect.DeepEqual(params[0], params[1]), ""
+}
+
+// -----------------------------------------------------------------------
+// HasLen checker.
+
+type hasLenChecker struct {
+ *CheckerInfo
+}
+
+// The HasLen checker verifies that the obtained value has the
+// provided length. In many cases this is superior to using Equals
+// in conjunction with the len function because in case the check
+// fails the value itself will be printed, instead of its length,
+// providing more details for figuring out the problem.
+//
+// For example:
+//
+// c.Assert(list, HasLen, 5)
+//
+var HasLen Checker = &hasLenChecker{
+ &CheckerInfo{Name: "HasLen", Params: []string{"obtained", "n"}},
+}
+
+func (checker *hasLenChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ n, ok := params[1].(int)
+ if !ok {
+ return false, "n must be an int"
+ }
+ value := reflect.ValueOf(params[0])
+ switch value.Kind() {
+ case reflect.Map, reflect.Array, reflect.Slice, reflect.Chan, reflect.String:
+ default:
+ return false, "obtained value type has no length"
+ }
+ return value.Len() == n, ""
+}
+
+// -----------------------------------------------------------------------
+// ErrorMatches checker.
+
+type errorMatchesChecker struct {
+ *CheckerInfo
+}
+
+// The ErrorMatches checker verifies that the error value
+// is non-nil and matches the regular expression provided.
+//
+// For example:
+//
+// c.Assert(err, ErrorMatches, "perm.*denied")
+//
+var ErrorMatches Checker = errorMatchesChecker{
+ &CheckerInfo{Name: "ErrorMatches", Params: []string{"value", "regex"}},
+}
+
+func (checker errorMatchesChecker) Check(params []interface{}, names []string) (result bool, errStr string) {
+ if params[0] == nil {
+ return false, "Error value is nil"
+ }
+ err, ok := params[0].(error)
+ if !ok {
+ return false, "Value is not an error"
+ }
+ params[0] = err.Error()
+ names[0] = "error"
+ return matches(params[0], params[1])
+}
+
+// -----------------------------------------------------------------------
+// Matches checker.
+
+type matchesChecker struct {
+ *CheckerInfo
+}
+
+// The Matches checker verifies that the string provided as the obtained
+// value (or the string resulting from obtained.String()) matches the
+// regular expression provided.
+//
+// For example:
+//
+// c.Assert(err, Matches, "perm.*denied")
+//
+var Matches Checker = &matchesChecker{
+ &CheckerInfo{Name: "Matches", Params: []string{"value", "regex"}},
+}
+
+func (checker *matchesChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ return matches(params[0], params[1])
+}
+
+func matches(value, regex interface{}) (result bool, error string) {
+ reStr, ok := regex.(string)
+ if !ok {
+ return false, "Regex must be a string"
+ }
+ valueStr, valueIsStr := value.(string)
+ if !valueIsStr {
+ if valueWithStr, valueHasStr := value.(fmt.Stringer); valueHasStr {
+ valueStr, valueIsStr = valueWithStr.String(), true
+ }
+ }
+ if valueIsStr {
+ matches, err := regexp.MatchString("^"+reStr+"$", valueStr)
+ if err != nil {
+ return false, "Can't compile regex: " + err.Error()
+ }
+ return matches, ""
+ }
+ return false, "Obtained value is not a string and has no .String()"
+}
+
+// -----------------------------------------------------------------------
+// Panics checker.
+
+type panicsChecker struct {
+ *CheckerInfo
+}
+
+// The Panics checker verifies that calling the provided zero-argument
+// function will cause a panic which is deep-equal to the provided value.
+//
+// For example:
+//
+// c.Assert(func() { f(1, 2) }, Panics, &SomeErrorType{"BOOM"}).
+//
+//
+var Panics Checker = &panicsChecker{
+ &CheckerInfo{Name: "Panics", Params: []string{"function", "expected"}},
+}
+
+func (checker *panicsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ f := reflect.ValueOf(params[0])
+ if f.Kind() != reflect.Func || f.Type().NumIn() != 0 {
+ return false, "Function must take zero arguments"
+ }
+ defer func() {
+ // If the function has not panicked, then don't do the check.
+ if error != "" {
+ return
+ }
+ params[0] = recover()
+ names[0] = "panic"
+ result = reflect.DeepEqual(params[0], params[1])
+ }()
+ f.Call(nil)
+ return false, "Function has not panicked"
+}
+
+type panicMatchesChecker struct {
+ *CheckerInfo
+}
+
+// The PanicMatches checker verifies that calling the provided zero-argument
+// function will cause a panic with an error value matching
+// the regular expression provided.
+//
+// For example:
+//
+// c.Assert(func() { f(1, 2) }, PanicMatches, `open.*: no such file or directory`).
+//
+//
+var PanicMatches Checker = &panicMatchesChecker{
+ &CheckerInfo{Name: "PanicMatches", Params: []string{"function", "expected"}},
+}
+
+func (checker *panicMatchesChecker) Check(params []interface{}, names []string) (result bool, errmsg string) {
+ f := reflect.ValueOf(params[0])
+ if f.Kind() != reflect.Func || f.Type().NumIn() != 0 {
+ return false, "Function must take zero arguments"
+ }
+ defer func() {
+ // If the function has not panicked, then don't do the check.
+ if errmsg != "" {
+ return
+ }
+ obtained := recover()
+ names[0] = "panic"
+ if e, ok := obtained.(error); ok {
+ params[0] = e.Error()
+ } else if _, ok := obtained.(string); ok {
+ params[0] = obtained
+ } else {
+ errmsg = "Panic value is not a string or an error"
+ return
+ }
+ result, errmsg = matches(params[0], params[1])
+ }()
+ f.Call(nil)
+ return false, "Function has not panicked"
+}
+
+// -----------------------------------------------------------------------
+// FitsTypeOf checker.
+
+type fitsTypeChecker struct {
+ *CheckerInfo
+}
+
+// The FitsTypeOf checker verifies that the obtained value is
+// assignable to a variable with the same type as the provided
+// sample value.
+//
+// For example:
+//
+// c.Assert(value, FitsTypeOf, int64(0))
+// c.Assert(value, FitsTypeOf, os.Error(nil))
+//
+var FitsTypeOf Checker = &fitsTypeChecker{
+ &CheckerInfo{Name: "FitsTypeOf", Params: []string{"obtained", "sample"}},
+}
+
+func (checker *fitsTypeChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ obtained := reflect.ValueOf(params[0])
+ sample := reflect.ValueOf(params[1])
+ if !obtained.IsValid() {
+ return false, ""
+ }
+ if !sample.IsValid() {
+ return false, "Invalid sample value"
+ }
+ return obtained.Type().AssignableTo(sample.Type()), ""
+}
+
+// -----------------------------------------------------------------------
+// Implements checker.
+
+type implementsChecker struct {
+ *CheckerInfo
+}
+
+// The Implements checker verifies that the obtained value
+// implements the interface specified via a pointer to an interface
+// variable.
+//
+// For example:
+//
+// var e os.Error
+// c.Assert(err, Implements, &e)
+//
+var Implements Checker = &implementsChecker{
+ &CheckerInfo{Name: "Implements", Params: []string{"obtained", "ifaceptr"}},
+}
+
+func (checker *implementsChecker) Check(params []interface{}, names []string) (result bool, error string) {
+ obtained := reflect.ValueOf(params[0])
+ ifaceptr := reflect.ValueOf(params[1])
+ if !obtained.IsValid() {
+ return false, ""
+ }
+ if !ifaceptr.IsValid() || ifaceptr.Kind() != reflect.Ptr || ifaceptr.Elem().Kind() != reflect.Interface {
+ return false, "ifaceptr should be a pointer to an interface variable"
+ }
+ return obtained.Type().Implements(ifaceptr.Elem().Type()), ""
+}
diff --git a/vendor/gopkg.in/check.v1/checkers_test.go b/vendor/gopkg.in/check.v1/checkers_test.go
new file mode 100644
index 0000000..5c69747
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/checkers_test.go
@@ -0,0 +1,272 @@
+package check_test
+
+import (
+ "errors"
+ "gopkg.in/check.v1"
+ "reflect"
+ "runtime"
+)
+
+type CheckersS struct{}
+
+var _ = check.Suite(&CheckersS{})
+
+func testInfo(c *check.C, checker check.Checker, name string, paramNames []string) {
+ info := checker.Info()
+ if info.Name != name {
+ c.Fatalf("Got name %s, expected %s", info.Name, name)
+ }
+ if !reflect.DeepEqual(info.Params, paramNames) {
+ c.Fatalf("Got param names %#v, expected %#v", info.Params, paramNames)
+ }
+}
+
+func testCheck(c *check.C, checker check.Checker, result bool, error string, params ...interface{}) ([]interface{}, []string) {
+ info := checker.Info()
+ if len(params) != len(info.Params) {
+ c.Fatalf("unexpected param count in test; expected %d got %d", len(info.Params), len(params))
+ }
+ names := append([]string{}, info.Params...)
+ result_, error_ := checker.Check(params, names)
+ if result_ != result || error_ != error {
+ c.Fatalf("%s.Check(%#v) returned (%#v, %#v) rather than (%#v, %#v)",
+ info.Name, params, result_, error_, result, error)
+ }
+ return params, names
+}
+
+func (s *CheckersS) TestComment(c *check.C) {
+ bug := check.Commentf("a %d bc", 42)
+ comment := bug.CheckCommentString()
+ if comment != "a 42 bc" {
+ c.Fatalf("Commentf returned %#v", comment)
+ }
+}
+
+func (s *CheckersS) TestIsNil(c *check.C) {
+ testInfo(c, check.IsNil, "IsNil", []string{"value"})
+
+ testCheck(c, check.IsNil, true, "", nil)
+ testCheck(c, check.IsNil, false, "", "a")
+
+ testCheck(c, check.IsNil, true, "", (chan int)(nil))
+ testCheck(c, check.IsNil, false, "", make(chan int))
+ testCheck(c, check.IsNil, true, "", (error)(nil))
+ testCheck(c, check.IsNil, false, "", errors.New(""))
+ testCheck(c, check.IsNil, true, "", ([]int)(nil))
+ testCheck(c, check.IsNil, false, "", make([]int, 1))
+ testCheck(c, check.IsNil, false, "", int(0))
+}
+
+func (s *CheckersS) TestNotNil(c *check.C) {
+ testInfo(c, check.NotNil, "NotNil", []string{"value"})
+
+ testCheck(c, check.NotNil, false, "", nil)
+ testCheck(c, check.NotNil, true, "", "a")
+
+ testCheck(c, check.NotNil, false, "", (chan int)(nil))
+ testCheck(c, check.NotNil, true, "", make(chan int))
+ testCheck(c, check.NotNil, false, "", (error)(nil))
+ testCheck(c, check.NotNil, true, "", errors.New(""))
+ testCheck(c, check.NotNil, false, "", ([]int)(nil))
+ testCheck(c, check.NotNil, true, "", make([]int, 1))
+}
+
+func (s *CheckersS) TestNot(c *check.C) {
+ testInfo(c, check.Not(check.IsNil), "Not(IsNil)", []string{"value"})
+
+ testCheck(c, check.Not(check.IsNil), false, "", nil)
+ testCheck(c, check.Not(check.IsNil), true, "", "a")
+}
+
+type simpleStruct struct {
+ i int
+}
+
+func (s *CheckersS) TestEquals(c *check.C) {
+ testInfo(c, check.Equals, "Equals", []string{"obtained", "expected"})
+
+ // The simplest.
+ testCheck(c, check.Equals, true, "", 42, 42)
+ testCheck(c, check.Equals, false, "", 42, 43)
+
+ // Different native types.
+ testCheck(c, check.Equals, false, "", int32(42), int64(42))
+
+ // With nil.
+ testCheck(c, check.Equals, false, "", 42, nil)
+
+ // Slices
+ testCheck(c, check.Equals, false, "runtime error: comparing uncomparable type []uint8", []byte{1, 2}, []byte{1, 2})
+
+ // Struct values
+ testCheck(c, check.Equals, true, "", simpleStruct{1}, simpleStruct{1})
+ testCheck(c, check.Equals, false, "", simpleStruct{1}, simpleStruct{2})
+
+ // Struct pointers
+ testCheck(c, check.Equals, false, "", &simpleStruct{1}, &simpleStruct{1})
+ testCheck(c, check.Equals, false, "", &simpleStruct{1}, &simpleStruct{2})
+}
+
+func (s *CheckersS) TestDeepEquals(c *check.C) {
+ testInfo(c, check.DeepEquals, "DeepEquals", []string{"obtained", "expected"})
+
+ // The simplest.
+ testCheck(c, check.DeepEquals, true, "", 42, 42)
+ testCheck(c, check.DeepEquals, false, "", 42, 43)
+
+ // Different native types.
+ testCheck(c, check.DeepEquals, false, "", int32(42), int64(42))
+
+ // With nil.
+ testCheck(c, check.DeepEquals, false, "", 42, nil)
+
+ // Slices
+ testCheck(c, check.DeepEquals, true, "", []byte{1, 2}, []byte{1, 2})
+ testCheck(c, check.DeepEquals, false, "", []byte{1, 2}, []byte{1, 3})
+
+ // Struct values
+ testCheck(c, check.DeepEquals, true, "", simpleStruct{1}, simpleStruct{1})
+ testCheck(c, check.DeepEquals, false, "", simpleStruct{1}, simpleStruct{2})
+
+ // Struct pointers
+ testCheck(c, check.DeepEquals, true, "", &simpleStruct{1}, &simpleStruct{1})
+ testCheck(c, check.DeepEquals, false, "", &simpleStruct{1}, &simpleStruct{2})
+}
+
+func (s *CheckersS) TestHasLen(c *check.C) {
+ testInfo(c, check.HasLen, "HasLen", []string{"obtained", "n"})
+
+ testCheck(c, check.HasLen, true, "", "abcd", 4)
+ testCheck(c, check.HasLen, true, "", []int{1, 2}, 2)
+ testCheck(c, check.HasLen, false, "", []int{1, 2}, 3)
+
+ testCheck(c, check.HasLen, false, "n must be an int", []int{1, 2}, "2")
+ testCheck(c, check.HasLen, false, "obtained value type has no length", nil, 2)
+}
+
+func (s *CheckersS) TestErrorMatches(c *check.C) {
+ testInfo(c, check.ErrorMatches, "ErrorMatches", []string{"value", "regex"})
+
+ testCheck(c, check.ErrorMatches, false, "Error value is nil", nil, "some error")
+ testCheck(c, check.ErrorMatches, false, "Value is not an error", 1, "some error")
+ testCheck(c, check.ErrorMatches, true, "", errors.New("some error"), "some error")
+ testCheck(c, check.ErrorMatches, true, "", errors.New("some error"), "so.*or")
+
+ // Verify params mutation
+ params, names := testCheck(c, check.ErrorMatches, false, "", errors.New("some error"), "other error")
+ c.Assert(params[0], check.Equals, "some error")
+ c.Assert(names[0], check.Equals, "error")
+}
+
+func (s *CheckersS) TestMatches(c *check.C) {
+ testInfo(c, check.Matches, "Matches", []string{"value", "regex"})
+
+ // Simple matching
+ testCheck(c, check.Matches, true, "", "abc", "abc")
+ testCheck(c, check.Matches, true, "", "abc", "a.c")
+
+ // Must match fully
+ testCheck(c, check.Matches, false, "", "abc", "ab")
+ testCheck(c, check.Matches, false, "", "abc", "bc")
+
+ // String()-enabled values accepted
+ testCheck(c, check.Matches, true, "", reflect.ValueOf("abc"), "a.c")
+ testCheck(c, check.Matches, false, "", reflect.ValueOf("abc"), "a.d")
+
+ // Some error conditions.
+ testCheck(c, check.Matches, false, "Obtained value is not a string and has no .String()", 1, "a.c")
+ testCheck(c, check.Matches, false, "Can't compile regex: error parsing regexp: missing closing ]: `[c$`", "abc", "a[c")
+}
+
+func (s *CheckersS) TestPanics(c *check.C) {
+ testInfo(c, check.Panics, "Panics", []string{"function", "expected"})
+
+ // Some errors.
+ testCheck(c, check.Panics, false, "Function has not panicked", func() bool { return false }, "BOOM")
+ testCheck(c, check.Panics, false, "Function must take zero arguments", 1, "BOOM")
+
+ // Plain strings.
+ testCheck(c, check.Panics, true, "", func() { panic("BOOM") }, "BOOM")
+ testCheck(c, check.Panics, false, "", func() { panic("KABOOM") }, "BOOM")
+ testCheck(c, check.Panics, true, "", func() bool { panic("BOOM") }, "BOOM")
+
+ // Error values.
+ testCheck(c, check.Panics, true, "", func() { panic(errors.New("BOOM")) }, errors.New("BOOM"))
+ testCheck(c, check.Panics, false, "", func() { panic(errors.New("KABOOM")) }, errors.New("BOOM"))
+
+ type deep struct{ i int }
+ // Deep value
+ testCheck(c, check.Panics, true, "", func() { panic(&deep{99}) }, &deep{99})
+
+ // Verify params/names mutation
+ params, names := testCheck(c, check.Panics, false, "", func() { panic(errors.New("KABOOM")) }, errors.New("BOOM"))
+ c.Assert(params[0], check.ErrorMatches, "KABOOM")
+ c.Assert(names[0], check.Equals, "panic")
+
+ // Verify a nil panic
+ testCheck(c, check.Panics, true, "", func() { panic(nil) }, nil)
+ testCheck(c, check.Panics, false, "", func() { panic(nil) }, "NOPE")
+}
+
+func (s *CheckersS) TestPanicMatches(c *check.C) {
+ testInfo(c, check.PanicMatches, "PanicMatches", []string{"function", "expected"})
+
+ // Error matching.
+ testCheck(c, check.PanicMatches, true, "", func() { panic(errors.New("BOOM")) }, "BO.M")
+ testCheck(c, check.PanicMatches, false, "", func() { panic(errors.New("KABOOM")) }, "BO.M")
+
+ // Some errors.
+ testCheck(c, check.PanicMatches, false, "Function has not panicked", func() bool { return false }, "BOOM")
+ testCheck(c, check.PanicMatches, false, "Function must take zero arguments", 1, "BOOM")
+
+ // Plain strings.
+ testCheck(c, check.PanicMatches, true, "", func() { panic("BOOM") }, "BO.M")
+ testCheck(c, check.PanicMatches, false, "", func() { panic("KABOOM") }, "BOOM")
+ testCheck(c, check.PanicMatches, true, "", func() bool { panic("BOOM") }, "BO.M")
+
+ // Verify params/names mutation
+ params, names := testCheck(c, check.PanicMatches, false, "", func() { panic(errors.New("KABOOM")) }, "BOOM")
+ c.Assert(params[0], check.Equals, "KABOOM")
+ c.Assert(names[0], check.Equals, "panic")
+
+ // Verify a nil panic
+ testCheck(c, check.PanicMatches, false, "Panic value is not a string or an error", func() { panic(nil) }, "")
+}
+
+func (s *CheckersS) TestFitsTypeOf(c *check.C) {
+ testInfo(c, check.FitsTypeOf, "FitsTypeOf", []string{"obtained", "sample"})
+
+ // Basic types
+ testCheck(c, check.FitsTypeOf, true, "", 1, 0)
+ testCheck(c, check.FitsTypeOf, false, "", 1, int64(0))
+
+ // Aliases
+ testCheck(c, check.FitsTypeOf, false, "", 1, errors.New(""))
+ testCheck(c, check.FitsTypeOf, false, "", "error", errors.New(""))
+ testCheck(c, check.FitsTypeOf, true, "", errors.New("error"), errors.New(""))
+
+ // Structures
+ testCheck(c, check.FitsTypeOf, false, "", 1, simpleStruct{})
+ testCheck(c, check.FitsTypeOf, false, "", simpleStruct{42}, &simpleStruct{})
+ testCheck(c, check.FitsTypeOf, true, "", simpleStruct{42}, simpleStruct{})
+ testCheck(c, check.FitsTypeOf, true, "", &simpleStruct{42}, &simpleStruct{})
+
+ // Some bad values
+ testCheck(c, check.FitsTypeOf, false, "Invalid sample value", 1, interface{}(nil))
+ testCheck(c, check.FitsTypeOf, false, "", interface{}(nil), 0)
+}
+
+func (s *CheckersS) TestImplements(c *check.C) {
+ testInfo(c, check.Implements, "Implements", []string{"obtained", "ifaceptr"})
+
+ var e error
+ var re runtime.Error
+ testCheck(c, check.Implements, true, "", errors.New(""), &e)
+ testCheck(c, check.Implements, false, "", errors.New(""), &re)
+
+ // Some bad values
+ testCheck(c, check.Implements, false, "ifaceptr should be a pointer to an interface variable", 0, errors.New(""))
+ testCheck(c, check.Implements, false, "ifaceptr should be a pointer to an interface variable", 0, interface{}(nil))
+ testCheck(c, check.Implements, false, "", interface{}(nil), &e)
+}
diff --git a/vendor/gopkg.in/check.v1/export_test.go b/vendor/gopkg.in/check.v1/export_test.go
new file mode 100644
index 0000000..abb89a2
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/export_test.go
@@ -0,0 +1,19 @@
+package check
+
+import "io"
+
+func PrintLine(filename string, line int) (string, error) {
+ return printLine(filename, line)
+}
+
+func Indent(s, with string) string {
+ return indent(s, with)
+}
+
+func NewOutputWriter(writer io.Writer, stream, verbose bool) *outputWriter {
+ return newOutputWriter(writer, stream, verbose)
+}
+
+func (c *C) FakeSkip(reason string) {
+ c.reason = reason
+}
diff --git a/vendor/gopkg.in/check.v1/fixture_test.go b/vendor/gopkg.in/check.v1/fixture_test.go
new file mode 100644
index 0000000..2bff9e1
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/fixture_test.go
@@ -0,0 +1,484 @@
+// Tests for the behavior of the test fixture system.
+
+package check_test
+
+import (
+ . "gopkg.in/check.v1"
+)
+
+// -----------------------------------------------------------------------
+// Fixture test suite.
+
+type FixtureS struct{}
+
+var fixtureS = Suite(&FixtureS{})
+
+func (s *FixtureS) TestCountSuite(c *C) {
+ suitesRun += 1
+}
+
+// -----------------------------------------------------------------------
+// Basic fixture ordering verification.
+
+func (s *FixtureS) TestOrder(c *C) {
+ helper := FixtureHelper{}
+ Run(&helper, nil)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Test2")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ c.Check(helper.calls[7], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 8)
+}
+
+// -----------------------------------------------------------------------
+// Check the behavior when panics occur within tests and fixtures.
+
+func (s *FixtureS) TestPanicOnTest(c *C) {
+ helper := FixtureHelper{panicOn: "Test1"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Test2")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ c.Check(helper.calls[7], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 8)
+
+ expected := "^\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: FixtureHelper.Test1\n\n" +
+ "\\.\\.\\. Panic: Test1 \\(PC=[xA-F0-9]+\\)\n\n" +
+ ".+:[0-9]+\n" +
+ " in (go)?panic\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.trace\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.Test1\n" +
+ "(.|\n)*$"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnSetUpTest(c *C) {
+ helper := FixtureHelper{panicOn: "SetUpTest"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "TearDownTest")
+ c.Check(helper.calls[3], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 4)
+
+ expected := "^\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper\\.SetUpTest\n\n" +
+ "\\.\\.\\. Panic: SetUpTest \\(PC=[xA-F0-9]+\\)\n\n" +
+ ".+:[0-9]+\n" +
+ " in (go)?panic\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.trace\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.SetUpTest\n" +
+ "(.|\n)*" +
+ "\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper\\.Test1\n\n" +
+ "\\.\\.\\. Panic: Fixture has panicked " +
+ "\\(see related PANIC\\)\n$"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnTearDownTest(c *C) {
+ helper := FixtureHelper{panicOn: "TearDownTest"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 5)
+
+ expected := "^\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper.TearDownTest\n\n" +
+ "\\.\\.\\. Panic: TearDownTest \\(PC=[xA-F0-9]+\\)\n\n" +
+ ".+:[0-9]+\n" +
+ " in (go)?panic\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.trace\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.TearDownTest\n" +
+ "(.|\n)*" +
+ "\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper\\.Test1\n\n" +
+ "\\.\\.\\. Panic: Fixture has panicked " +
+ "\\(see related PANIC\\)\n$"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnSetUpSuite(c *C) {
+ helper := FixtureHelper{panicOn: "SetUpSuite"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 2)
+
+ expected := "^\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper.SetUpSuite\n\n" +
+ "\\.\\.\\. Panic: SetUpSuite \\(PC=[xA-F0-9]+\\)\n\n" +
+ ".+:[0-9]+\n" +
+ " in (go)?panic\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.trace\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.SetUpSuite\n" +
+ "(.|\n)*$"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnTearDownSuite(c *C) {
+ helper := FixtureHelper{panicOn: "TearDownSuite"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Test2")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ c.Check(helper.calls[7], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 8)
+
+ expected := "^\n-+\n" +
+ "PANIC: check_test\\.go:[0-9]+: " +
+ "FixtureHelper.TearDownSuite\n\n" +
+ "\\.\\.\\. Panic: TearDownSuite \\(PC=[xA-F0-9]+\\)\n\n" +
+ ".+:[0-9]+\n" +
+ " in (go)?panic\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.trace\n" +
+ ".*check_test.go:[0-9]+\n" +
+ " in FixtureHelper.TearDownSuite\n" +
+ "(.|\n)*$"
+
+ c.Check(output.value, Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// A wrong argument on a test or fixture will produce a nice error.
+
+func (s *FixtureS) TestPanicOnWrongTestArg(c *C) {
+ helper := WrongTestArgHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "TearDownTest")
+ c.Check(helper.calls[3], Equals, "SetUpTest")
+ c.Check(helper.calls[4], Equals, "Test2")
+ c.Check(helper.calls[5], Equals, "TearDownTest")
+ c.Check(helper.calls[6], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 7)
+
+ expected := "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongTestArgHelper\\.Test1\n\n" +
+ "\\.\\.\\. Panic: WrongTestArgHelper\\.Test1 argument " +
+ "should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnWrongSetUpTestArg(c *C) {
+ helper := WrongSetUpTestArgHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(len(helper.calls), Equals, 0)
+
+ expected :=
+ "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongSetUpTestArgHelper\\.SetUpTest\n\n" +
+ "\\.\\.\\. Panic: WrongSetUpTestArgHelper\\.SetUpTest argument " +
+ "should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnWrongSetUpSuiteArg(c *C) {
+ helper := WrongSetUpSuiteArgHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(len(helper.calls), Equals, 0)
+
+ expected :=
+ "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongSetUpSuiteArgHelper\\.SetUpSuite\n\n" +
+ "\\.\\.\\. Panic: WrongSetUpSuiteArgHelper\\.SetUpSuite argument " +
+ "should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// Nice errors also when tests or fixture have wrong arg count.
+
+func (s *FixtureS) TestPanicOnWrongTestArgCount(c *C) {
+ helper := WrongTestArgCountHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "TearDownTest")
+ c.Check(helper.calls[3], Equals, "SetUpTest")
+ c.Check(helper.calls[4], Equals, "Test2")
+ c.Check(helper.calls[5], Equals, "TearDownTest")
+ c.Check(helper.calls[6], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 7)
+
+ expected := "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongTestArgCountHelper\\.Test1\n\n" +
+ "\\.\\.\\. Panic: WrongTestArgCountHelper\\.Test1 argument " +
+ "should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnWrongSetUpTestArgCount(c *C) {
+ helper := WrongSetUpTestArgCountHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(len(helper.calls), Equals, 0)
+
+ expected :=
+ "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongSetUpTestArgCountHelper\\.SetUpTest\n\n" +
+ "\\.\\.\\. Panic: WrongSetUpTestArgCountHelper\\.SetUpTest argument " +
+ "should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+func (s *FixtureS) TestPanicOnWrongSetUpSuiteArgCount(c *C) {
+ helper := WrongSetUpSuiteArgCountHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(len(helper.calls), Equals, 0)
+
+ expected :=
+ "^\n-+\n" +
+ "PANIC: fixture_test\\.go:[0-9]+: " +
+ "WrongSetUpSuiteArgCountHelper\\.SetUpSuite\n\n" +
+ "\\.\\.\\. Panic: WrongSetUpSuiteArgCountHelper" +
+ "\\.SetUpSuite argument should be \\*check\\.C\n"
+
+ c.Check(output.value, Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// Helper test suites with wrong function arguments.
+
+type WrongTestArgHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongTestArgHelper) Test1(t int) {
+}
+
+type WrongSetUpTestArgHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongSetUpTestArgHelper) SetUpTest(t int) {
+}
+
+type WrongSetUpSuiteArgHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongSetUpSuiteArgHelper) SetUpSuite(t int) {
+}
+
+type WrongTestArgCountHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongTestArgCountHelper) Test1(c *C, i int) {
+}
+
+type WrongSetUpTestArgCountHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongSetUpTestArgCountHelper) SetUpTest(c *C, i int) {
+}
+
+type WrongSetUpSuiteArgCountHelper struct {
+ FixtureHelper
+}
+
+func (s *WrongSetUpSuiteArgCountHelper) SetUpSuite(c *C, i int) {
+}
+
+// -----------------------------------------------------------------------
+// Ensure fixture doesn't run without tests.
+
+type NoTestsHelper struct {
+ hasRun bool
+}
+
+func (s *NoTestsHelper) SetUpSuite(c *C) {
+ s.hasRun = true
+}
+
+func (s *NoTestsHelper) TearDownSuite(c *C) {
+ s.hasRun = true
+}
+
+func (s *FixtureS) TestFixtureDoesntRunWithoutTests(c *C) {
+ helper := NoTestsHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Check(helper.hasRun, Equals, false)
+}
+
+// -----------------------------------------------------------------------
+// Verify that checks and assertions work correctly inside the fixture.
+
+type FixtureCheckHelper struct {
+ fail string
+ completed bool
+}
+
+func (s *FixtureCheckHelper) SetUpSuite(c *C) {
+ switch s.fail {
+ case "SetUpSuiteAssert":
+ c.Assert(false, Equals, true)
+ case "SetUpSuiteCheck":
+ c.Check(false, Equals, true)
+ }
+ s.completed = true
+}
+
+func (s *FixtureCheckHelper) SetUpTest(c *C) {
+ switch s.fail {
+ case "SetUpTestAssert":
+ c.Assert(false, Equals, true)
+ case "SetUpTestCheck":
+ c.Check(false, Equals, true)
+ }
+ s.completed = true
+}
+
+func (s *FixtureCheckHelper) Test(c *C) {
+ // Do nothing.
+}
+
+func (s *FixtureS) TestSetUpSuiteCheck(c *C) {
+ helper := FixtureCheckHelper{fail: "SetUpSuiteCheck"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Assert(output.value, Matches,
+ "\n---+\n"+
+ "FAIL: fixture_test\\.go:[0-9]+: "+
+ "FixtureCheckHelper\\.SetUpSuite\n\n"+
+ "fixture_test\\.go:[0-9]+:\n"+
+ " c\\.Check\\(false, Equals, true\\)\n"+
+ "\\.+ obtained bool = false\n"+
+ "\\.+ expected bool = true\n\n")
+ c.Assert(helper.completed, Equals, true)
+}
+
+func (s *FixtureS) TestSetUpSuiteAssert(c *C) {
+ helper := FixtureCheckHelper{fail: "SetUpSuiteAssert"}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Assert(output.value, Matches,
+ "\n---+\n"+
+ "FAIL: fixture_test\\.go:[0-9]+: "+
+ "FixtureCheckHelper\\.SetUpSuite\n\n"+
+ "fixture_test\\.go:[0-9]+:\n"+
+ " c\\.Assert\\(false, Equals, true\\)\n"+
+ "\\.+ obtained bool = false\n"+
+ "\\.+ expected bool = true\n\n")
+ c.Assert(helper.completed, Equals, false)
+}
+
+// -----------------------------------------------------------------------
+// Verify that logging within SetUpTest() persists within the test log itself.
+
+type FixtureLogHelper struct {
+ c *C
+}
+
+func (s *FixtureLogHelper) SetUpTest(c *C) {
+ s.c = c
+ c.Log("1")
+}
+
+func (s *FixtureLogHelper) Test(c *C) {
+ c.Log("2")
+ s.c.Log("3")
+ c.Log("4")
+ c.Fail()
+}
+
+func (s *FixtureLogHelper) TearDownTest(c *C) {
+ s.c.Log("5")
+}
+
+func (s *FixtureS) TestFixtureLogging(c *C) {
+ helper := FixtureLogHelper{}
+ output := String{}
+ Run(&helper, &RunConf{Output: &output})
+ c.Assert(output.value, Matches,
+ "\n---+\n"+
+ "FAIL: fixture_test\\.go:[0-9]+: "+
+ "FixtureLogHelper\\.Test\n\n"+
+ "1\n2\n3\n4\n5\n")
+}
+
+// -----------------------------------------------------------------------
+// Skip() within fixture methods.
+
+func (s *FixtureS) TestSkipSuite(c *C) {
+ helper := FixtureHelper{skip: true, skipOnN: 0}
+ output := String{}
+ result := Run(&helper, &RunConf{Output: &output})
+ c.Assert(output.value, Equals, "")
+ c.Assert(helper.calls[0], Equals, "SetUpSuite")
+ c.Assert(helper.calls[1], Equals, "TearDownSuite")
+ c.Assert(len(helper.calls), Equals, 2)
+ c.Assert(result.Skipped, Equals, 2)
+}
+
+func (s *FixtureS) TestSkipTest(c *C) {
+ helper := FixtureHelper{skip: true, skipOnN: 1}
+ output := String{}
+ result := Run(&helper, &RunConf{Output: &output})
+ c.Assert(helper.calls[0], Equals, "SetUpSuite")
+ c.Assert(helper.calls[1], Equals, "SetUpTest")
+ c.Assert(helper.calls[2], Equals, "SetUpTest")
+ c.Assert(helper.calls[3], Equals, "Test2")
+ c.Assert(helper.calls[4], Equals, "TearDownTest")
+ c.Assert(helper.calls[5], Equals, "TearDownSuite")
+ c.Assert(len(helper.calls), Equals, 6)
+ c.Assert(result.Skipped, Equals, 1)
+}
diff --git a/vendor/gopkg.in/check.v1/foundation_test.go b/vendor/gopkg.in/check.v1/foundation_test.go
new file mode 100644
index 0000000..8ecf791
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/foundation_test.go
@@ -0,0 +1,335 @@
+// These tests check that the foundations of gocheck are working properly.
+// They assume that fundamental failing is already working, since that was
+// tested in bootstrap_test.go. Even then, some care may still have to be
+// taken when using external functions, since they should of course not
+// rely on functionality tested here.
+
+package check_test
+
+import (
+ "fmt"
+ "gopkg.in/check.v1"
+ "log"
+ "os"
+ "regexp"
+ "strings"
+)
+
+// -----------------------------------------------------------------------
+// Foundation test suite.
+
+type FoundationS struct{}
+
+var foundationS = check.Suite(&FoundationS{})
+
+func (s *FoundationS) TestCountSuite(c *check.C) {
+ suitesRun += 1
+}
+
+func (s *FoundationS) TestErrorf(c *check.C) {
+ // Do not use checkState() here. It depends on Errorf() working.
+ expectedLog := fmt.Sprintf("foundation_test.go:%d:\n"+
+ " c.Errorf(\"Error %%v!\", \"message\")\n"+
+ "... Error: Error message!\n\n",
+ getMyLine()+1)
+ c.Errorf("Error %v!", "message")
+ failed := c.Failed()
+ c.Succeed()
+ if log := c.GetTestLog(); log != expectedLog {
+ c.Logf("Errorf() logged %#v rather than %#v", log, expectedLog)
+ c.Fail()
+ }
+ if !failed {
+ c.Logf("Errorf() didn't put the test in a failed state")
+ c.Fail()
+ }
+}
+
+func (s *FoundationS) TestError(c *check.C) {
+ expectedLog := fmt.Sprintf("foundation_test.go:%d:\n"+
+ " c\\.Error\\(\"Error \", \"message!\"\\)\n"+
+ "\\.\\.\\. Error: Error message!\n\n",
+ getMyLine()+1)
+ c.Error("Error ", "message!")
+ checkState(c, nil,
+ &expectedState{
+ name: "Error(`Error `, `message!`)",
+ failed: true,
+ log: expectedLog,
+ })
+}
+
+func (s *FoundationS) TestFailNow(c *check.C) {
+ defer (func() {
+ if !c.Failed() {
+ c.Error("FailNow() didn't fail the test")
+ } else {
+ c.Succeed()
+ if c.GetTestLog() != "" {
+ c.Error("Something got logged:\n" + c.GetTestLog())
+ }
+ }
+ })()
+
+ c.FailNow()
+ c.Log("FailNow() didn't stop the test")
+}
+
+func (s *FoundationS) TestSucceedNow(c *check.C) {
+ defer (func() {
+ if c.Failed() {
+ c.Error("SucceedNow() didn't succeed the test")
+ }
+ if c.GetTestLog() != "" {
+ c.Error("Something got logged:\n" + c.GetTestLog())
+ }
+ })()
+
+ c.Fail()
+ c.SucceedNow()
+ c.Log("SucceedNow() didn't stop the test")
+}
+
+func (s *FoundationS) TestFailureHeader(c *check.C) {
+ output := String{}
+ failHelper := FailHelper{}
+ check.Run(&failHelper, &check.RunConf{Output: &output})
+ header := fmt.Sprintf(""+
+ "\n-----------------------------------"+
+ "-----------------------------------\n"+
+ "FAIL: check_test.go:%d: FailHelper.TestLogAndFail\n",
+ failHelper.testLine)
+ if !strings.Contains(output.value, header) {
+ c.Errorf(""+
+ "Failure didn't print a proper header.\n"+
+ "... Got:\n%s... Expected something with:\n%s",
+ output.value, header)
+ }
+}
+
+func (s *FoundationS) TestFatal(c *check.C) {
+ var line int
+ defer (func() {
+ if !c.Failed() {
+ c.Error("Fatal() didn't fail the test")
+ } else {
+ c.Succeed()
+ expected := fmt.Sprintf("foundation_test.go:%d:\n"+
+ " c.Fatal(\"Die \", \"now!\")\n"+
+ "... Error: Die now!\n\n",
+ line)
+ if c.GetTestLog() != expected {
+ c.Error("Incorrect log:", c.GetTestLog())
+ }
+ }
+ })()
+
+ line = getMyLine() + 1
+ c.Fatal("Die ", "now!")
+ c.Log("Fatal() didn't stop the test")
+}
+
+func (s *FoundationS) TestFatalf(c *check.C) {
+ var line int
+ defer (func() {
+ if !c.Failed() {
+ c.Error("Fatalf() didn't fail the test")
+ } else {
+ c.Succeed()
+ expected := fmt.Sprintf("foundation_test.go:%d:\n"+
+ " c.Fatalf(\"Die %%s!\", \"now\")\n"+
+ "... Error: Die now!\n\n",
+ line)
+ if c.GetTestLog() != expected {
+ c.Error("Incorrect log:", c.GetTestLog())
+ }
+ }
+ })()
+
+ line = getMyLine() + 1
+ c.Fatalf("Die %s!", "now")
+ c.Log("Fatalf() didn't stop the test")
+}
+
+func (s *FoundationS) TestCallerLoggingInsideTest(c *check.C) {
+ log := fmt.Sprintf(""+
+ "foundation_test.go:%d:\n"+
+ " result := c.Check\\(10, check.Equals, 20\\)\n"+
+ "\\.\\.\\. obtained int = 10\n"+
+ "\\.\\.\\. expected int = 20\n\n",
+ getMyLine()+1)
+ result := c.Check(10, check.Equals, 20)
+ checkState(c, result,
+ &expectedState{
+ name: "Check(10, Equals, 20)",
+ result: false,
+ failed: true,
+ log: log,
+ })
+}
+
+func (s *FoundationS) TestCallerLoggingInDifferentFile(c *check.C) {
+ result, line := checkEqualWrapper(c, 10, 20)
+ testLine := getMyLine() - 1
+ log := fmt.Sprintf(""+
+ "foundation_test.go:%d:\n"+
+ " result, line := checkEqualWrapper\\(c, 10, 20\\)\n"+
+ "check_test.go:%d:\n"+
+ " return c.Check\\(obtained, check.Equals, expected\\), getMyLine\\(\\)\n"+
+ "\\.\\.\\. obtained int = 10\n"+
+ "\\.\\.\\. expected int = 20\n\n",
+ testLine, line)
+ checkState(c, result,
+ &expectedState{
+ name: "Check(10, Equals, 20)",
+ result: false,
+ failed: true,
+ log: log,
+ })
+}
+
+// -----------------------------------------------------------------------
+// ExpectFailure() inverts the logic of failure.
+
+type ExpectFailureSucceedHelper struct{}
+
+func (s *ExpectFailureSucceedHelper) TestSucceed(c *check.C) {
+ c.ExpectFailure("It booms!")
+ c.Error("Boom!")
+}
+
+type ExpectFailureFailHelper struct{}
+
+func (s *ExpectFailureFailHelper) TestFail(c *check.C) {
+ c.ExpectFailure("Bug #XYZ")
+}
+
+func (s *FoundationS) TestExpectFailureFail(c *check.C) {
+ helper := ExpectFailureFailHelper{}
+ output := String{}
+ result := check.Run(&helper, &check.RunConf{Output: &output})
+
+ expected := "" +
+ "^\n-+\n" +
+ "FAIL: foundation_test\\.go:[0-9]+:" +
+ " ExpectFailureFailHelper\\.TestFail\n\n" +
+ "\\.\\.\\. Error: Test succeeded, but was expected to fail\n" +
+ "\\.\\.\\. Reason: Bug #XYZ\n$"
+
+ matched, err := regexp.MatchString(expected, output.value)
+ if err != nil {
+ c.Error("Bad expression: ", expected)
+ } else if !matched {
+ c.Error("ExpectFailure() didn't log properly:\n", output.value)
+ }
+
+ c.Assert(result.ExpectedFailures, check.Equals, 0)
+}
+
+func (s *FoundationS) TestExpectFailureSucceed(c *check.C) {
+ helper := ExpectFailureSucceedHelper{}
+ output := String{}
+ result := check.Run(&helper, &check.RunConf{Output: &output})
+
+ c.Assert(output.value, check.Equals, "")
+ c.Assert(result.ExpectedFailures, check.Equals, 1)
+}
+
+func (s *FoundationS) TestExpectFailureSucceedVerbose(c *check.C) {
+ helper := ExpectFailureSucceedHelper{}
+ output := String{}
+ result := check.Run(&helper, &check.RunConf{Output: &output, Verbose: true})
+
+ expected := "" +
+ "FAIL EXPECTED: foundation_test\\.go:[0-9]+:" +
+ " ExpectFailureSucceedHelper\\.TestSucceed \\(It booms!\\)\t *[.0-9]+s\n"
+
+ matched, err := regexp.MatchString(expected, output.value)
+ if err != nil {
+ c.Error("Bad expression: ", expected)
+ } else if !matched {
+ c.Error("ExpectFailure() didn't log properly:\n", output.value)
+ }
+
+ c.Assert(result.ExpectedFailures, check.Equals, 1)
+}
+
+// -----------------------------------------------------------------------
+// Skip() allows stopping a test without positive/negative results.
+
+type SkipTestHelper struct{}
+
+func (s *SkipTestHelper) TestFail(c *check.C) {
+ c.Skip("Wrong platform or whatever")
+ c.Error("Boom!")
+}
+
+func (s *FoundationS) TestSkip(c *check.C) {
+ helper := SkipTestHelper{}
+ output := String{}
+ check.Run(&helper, &check.RunConf{Output: &output})
+
+ if output.value != "" {
+ c.Error("Skip() logged something:\n", output.value)
+ }
+}
+
+func (s *FoundationS) TestSkipVerbose(c *check.C) {
+ helper := SkipTestHelper{}
+ output := String{}
+ check.Run(&helper, &check.RunConf{Output: &output, Verbose: true})
+
+ expected := "SKIP: foundation_test\\.go:[0-9]+: SkipTestHelper\\.TestFail" +
+ " \\(Wrong platform or whatever\\)"
+ matched, err := regexp.MatchString(expected, output.value)
+ if err != nil {
+ c.Error("Bad expression: ", expected)
+ } else if !matched {
+ c.Error("Skip() didn't log properly:\n", output.value)
+ }
+}
+
+// -----------------------------------------------------------------------
+// Check minimum *log.Logger interface provided by *check.C.
+
+type minLogger interface {
+ Output(calldepth int, s string) error
+}
+
+func (s *BootstrapS) TestMinLogger(c *check.C) {
+ var logger minLogger
+ logger = log.New(os.Stderr, "", 0)
+ logger = c
+ logger.Output(0, "Hello there")
+ expected := `\[LOG\] [0-9]+:[0-9][0-9]\.[0-9][0-9][0-9] +Hello there\n`
+ output := c.GetTestLog()
+ c.Assert(output, check.Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// Ensure that suites with embedded types are working fine, including
+// the workaround for issue 906.
+
+type EmbeddedInternalS struct {
+ called bool
+}
+
+type EmbeddedS struct {
+ EmbeddedInternalS
+}
+
+var embeddedS = check.Suite(&EmbeddedS{})
+
+func (s *EmbeddedS) TestCountSuite(c *check.C) {
+ suitesRun += 1
+}
+
+func (s *EmbeddedInternalS) TestMethod(c *check.C) {
+ c.Error("TestMethod() of the embedded type was called!?")
+}
+
+func (s *EmbeddedS) TestMethod(c *check.C) {
+ // http://code.google.com/p/go/issues/detail?id=906
+ c.Check(s.called, check.Equals, false) // Go issue 906 is affecting the runner?
+ s.called = true
+}
diff --git a/vendor/gopkg.in/check.v1/helpers.go b/vendor/gopkg.in/check.v1/helpers.go
new file mode 100644
index 0000000..58a733b
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/helpers.go
@@ -0,0 +1,231 @@
+package check
+
+import (
+ "fmt"
+ "strings"
+ "time"
+)
+
+// TestName returns the current test name in the form "SuiteName.TestName"
+func (c *C) TestName() string {
+ return c.testName
+}
+
+// -----------------------------------------------------------------------
+// Basic succeeding/failing logic.
+
+// Failed returns whether the currently running test has already failed.
+func (c *C) Failed() bool {
+ return c.status() == failedSt
+}
+
+// Fail marks the currently running test as failed.
+//
+// Something ought to have been previously logged so the developer can tell
+// what went wrong. The higher level helper functions will fail the test
+// and do the logging properly.
+func (c *C) Fail() {
+ c.setStatus(failedSt)
+}
+
+// FailNow marks the currently running test as failed and stops running it.
+// Something ought to have been previously logged so the developer can tell
+// what went wrong. The higher level helper functions will fail the test
+// and do the logging properly.
+func (c *C) FailNow() {
+ c.Fail()
+ c.stopNow()
+}
+
+// Succeed marks the currently running test as succeeded, undoing any
+// previous failures.
+func (c *C) Succeed() {
+ c.setStatus(succeededSt)
+}
+
+// SucceedNow marks the currently running test as succeeded, undoing any
+// previous failures, and stops running the test.
+func (c *C) SucceedNow() {
+ c.Succeed()
+ c.stopNow()
+}
+
+// ExpectFailure informs that the running test is knowingly broken for
+// the provided reason. If the test does not fail, an error will be reported
+// to raise attention to this fact. This method is useful to temporarily
+// disable tests which cover well known problems until a better time to
+// fix the problem is found, without forgetting about the fact that a
+// failure still exists.
+func (c *C) ExpectFailure(reason string) {
+ if reason == "" {
+ panic("Missing reason why the test is expected to fail")
+ }
+ c.mustFail = true
+ c.reason = reason
+}
+
+// Skip skips the running test for the provided reason. If run from within
+// SetUpTest, the individual test being set up will be skipped, and if run
+// from within SetUpSuite, the whole suite is skipped.
+func (c *C) Skip(reason string) {
+ if reason == "" {
+ panic("Missing reason why the test is being skipped")
+ }
+ c.reason = reason
+ c.setStatus(skippedSt)
+ c.stopNow()
+}
+
+// -----------------------------------------------------------------------
+// Basic logging.
+
+// GetTestLog returns the current test error output.
+func (c *C) GetTestLog() string {
+ return c.logb.String()
+}
+
+// Log logs some information into the test error output.
+// The provided arguments are assembled together into a string with fmt.Sprint.
+func (c *C) Log(args ...interface{}) {
+ c.log(args...)
+}
+
+// Logf logs some information into the test error output.
+// The provided arguments are assembled together into a string with fmt.Sprintf.
+func (c *C) Logf(format string, args ...interface{}) {
+ c.logf(format, args...)
+}
+
+// Output enables *C to be used as a logger in functions that require only
+// the minimum interface of *log.Logger.
+func (c *C) Output(calldepth int, s string) error {
+ d := time.Since(c.startTime)
+ msec := d / time.Millisecond
+ sec := d / time.Second
+ min := d / time.Minute
+
+ c.Logf("[LOG] %d:%02d.%03d %s", min, sec%60, msec%1000, s)
+ return nil
+}
+
+// Error logs an error into the test error output and marks the test as failed.
+// The provided arguments are assembled together into a string with fmt.Sprint.
+func (c *C) Error(args ...interface{}) {
+ c.logCaller(1)
+ c.logString(fmt.Sprint("Error: ", fmt.Sprint(args...)))
+ c.logNewLine()
+ c.Fail()
+}
+
+// Errorf logs an error into the test error output and marks the test as failed.
+// The provided arguments are assembled together into a string with fmt.Sprintf.
+func (c *C) Errorf(format string, args ...interface{}) {
+ c.logCaller(1)
+ c.logString(fmt.Sprintf("Error: "+format, args...))
+ c.logNewLine()
+ c.Fail()
+}
+
+// Fatal logs an error into the test error output, marks the test as failed, and
+// stops the test execution. The provided arguments are assembled together into
+// a string with fmt.Sprint.
+func (c *C) Fatal(args ...interface{}) {
+ c.logCaller(1)
+ c.logString(fmt.Sprint("Error: ", fmt.Sprint(args...)))
+ c.logNewLine()
+ c.FailNow()
+}
+
+// Fatalf logs an error into the test error output, marks the test as failed, and
+// stops the test execution. The provided arguments are assembled together into
+// a string with fmt.Sprintf.
+func (c *C) Fatalf(format string, args ...interface{}) {
+ c.logCaller(1)
+ c.logString(fmt.Sprint("Error: ", fmt.Sprintf(format, args...)))
+ c.logNewLine()
+ c.FailNow()
+}
+
+// -----------------------------------------------------------------------
+// Generic checks and assertions based on checkers.
+
+// Check verifies if the first value matches the expected value according
+// to the provided checker. If they do not match, an error is logged, the
+// test is marked as failed, and the test execution continues.
+//
+// Some checkers may not need the expected argument (e.g. IsNil).
+//
+// Extra arguments provided to the function are logged next to the reported
+// problem when the matching fails.
+func (c *C) Check(obtained interface{}, checker Checker, args ...interface{}) bool {
+ return c.internalCheck("Check", obtained, checker, args...)
+}
+
+// Assert ensures that the first value matches the expected value according
+// to the provided checker. If they do not match, an error is logged, the
+// test is marked as failed, and the test execution stops.
+//
+// Some checkers may not need the expected argument (e.g. IsNil).
+//
+// Extra arguments provided to the function are logged next to the reported
+// problem when the matching fails.
+func (c *C) Assert(obtained interface{}, checker Checker, args ...interface{}) {
+ if !c.internalCheck("Assert", obtained, checker, args...) {
+ c.stopNow()
+ }
+}
+
+func (c *C) internalCheck(funcName string, obtained interface{}, checker Checker, args ...interface{}) bool {
+ if checker == nil {
+ c.logCaller(2)
+ c.logString(fmt.Sprintf("%s(obtained, nil!?, ...):", funcName))
+ c.logString("Oops.. you've provided a nil checker!")
+ c.logNewLine()
+ c.Fail()
+ return false
+ }
+
+ // If the last argument is a bug info, extract it out.
+ var comment CommentInterface
+ if len(args) > 0 {
+ if c, ok := args[len(args)-1].(CommentInterface); ok {
+ comment = c
+ args = args[:len(args)-1]
+ }
+ }
+
+ params := append([]interface{}{obtained}, args...)
+ info := checker.Info()
+
+ if len(params) != len(info.Params) {
+ names := append([]string{info.Params[0], info.Name}, info.Params[1:]...)
+ c.logCaller(2)
+ c.logString(fmt.Sprintf("%s(%s):", funcName, strings.Join(names, ", ")))
+ c.logString(fmt.Sprintf("Wrong number of parameters for %s: want %d, got %d", info.Name, len(names), len(params)+1))
+ c.logNewLine()
+ c.Fail()
+ return false
+ }
+
+ // Copy since it may be mutated by Check.
+ names := append([]string{}, info.Params...)
+
+ // Do the actual check.
+ result, error := checker.Check(params, names)
+ if !result || error != "" {
+ c.logCaller(2)
+ for i := 0; i != len(params); i++ {
+ c.logValue(names[i], params[i])
+ }
+ if comment != nil {
+ c.logString(comment.CheckCommentString())
+ }
+ if error != "" {
+ c.logString(error)
+ }
+ c.logNewLine()
+ c.Fail()
+ return false
+ }
+ return true
+}
diff --git a/vendor/gopkg.in/check.v1/helpers_test.go b/vendor/gopkg.in/check.v1/helpers_test.go
new file mode 100644
index 0000000..4baa656
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/helpers_test.go
@@ -0,0 +1,519 @@
+// These tests verify the inner workings of the helper methods associated
+// with check.C.
+
+package check_test
+
+import (
+ "gopkg.in/check.v1"
+ "os"
+ "reflect"
+ "runtime"
+ "sync"
+)
+
+var helpersS = check.Suite(&HelpersS{})
+
+type HelpersS struct{}
+
+func (s *HelpersS) TestCountSuite(c *check.C) {
+ suitesRun += 1
+}
+
+// -----------------------------------------------------------------------
+// Fake checker and bug info to verify the behavior of Assert() and Check().
+
+type MyChecker struct {
+ info *check.CheckerInfo
+ params []interface{}
+ names []string
+ result bool
+ error string
+}
+
+func (checker *MyChecker) Info() *check.CheckerInfo {
+ if checker.info == nil {
+ return &check.CheckerInfo{Name: "MyChecker", Params: []string{"myobtained", "myexpected"}}
+ }
+ return checker.info
+}
+
+func (checker *MyChecker) Check(params []interface{}, names []string) (bool, string) {
+ rparams := checker.params
+ rnames := checker.names
+ checker.params = append([]interface{}{}, params...)
+ checker.names = append([]string{}, names...)
+ if rparams != nil {
+ copy(params, rparams)
+ }
+ if rnames != nil {
+ copy(names, rnames)
+ }
+ return checker.result, checker.error
+}
+
+type myCommentType string
+
+func (c myCommentType) CheckCommentString() string {
+ return string(c)
+}
+
+func myComment(s string) myCommentType {
+ return myCommentType(s)
+}
+
+// -----------------------------------------------------------------------
+// Ensure a real checker actually works fine.
+
+func (s *HelpersS) TestCheckerInterface(c *check.C) {
+ testHelperSuccess(c, "Check(1, Equals, 1)", true, func() interface{} {
+ return c.Check(1, check.Equals, 1)
+ })
+}
+
+// -----------------------------------------------------------------------
+// Tests for Check(), mostly the same as for Assert() following these.
+
+func (s *HelpersS) TestCheckSucceedWithExpected(c *check.C) {
+ checker := &MyChecker{result: true}
+ testHelperSuccess(c, "Check(1, checker, 2)", true, func() interface{} {
+ return c.Check(1, checker, 2)
+ })
+ if !reflect.DeepEqual(checker.params, []interface{}{1, 2}) {
+ c.Fatalf("Bad params for check: %#v", checker.params)
+ }
+}
+
+func (s *HelpersS) TestCheckSucceedWithoutExpected(c *check.C) {
+ checker := &MyChecker{result: true, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ testHelperSuccess(c, "Check(1, checker)", true, func() interface{} {
+ return c.Check(1, checker)
+ })
+ if !reflect.DeepEqual(checker.params, []interface{}{1}) {
+ c.Fatalf("Bad params for check: %#v", checker.params)
+ }
+}
+
+func (s *HelpersS) TestCheckFailWithExpected(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, 2\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n\n"
+ testHelperFailure(c, "Check(1, checker, 2)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, 2)
+ })
+}
+
+func (s *HelpersS) TestCheckFailWithExpectedAndComment(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, 2, myComment\\(\"Hello world!\"\\)\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n" +
+ "\\.+ Hello world!\n\n"
+ testHelperFailure(c, "Check(1, checker, 2, msg)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, 2, myComment("Hello world!"))
+ })
+}
+
+func (s *HelpersS) TestCheckFailWithExpectedAndStaticComment(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " // Nice leading comment\\.\n" +
+ " return c\\.Check\\(1, checker, 2\\) // Hello there\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n\n"
+ testHelperFailure(c, "Check(1, checker, 2, msg)", false, false, log,
+ func() interface{} {
+ // Nice leading comment.
+ return c.Check(1, checker, 2) // Hello there
+ })
+}
+
+func (s *HelpersS) TestCheckFailWithoutExpected(c *check.C) {
+ checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker\\)\n" +
+ "\\.+ myvalue int = 1\n\n"
+ testHelperFailure(c, "Check(1, checker)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker)
+ })
+}
+
+func (s *HelpersS) TestCheckFailWithoutExpectedAndMessage(c *check.C) {
+ checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, myComment\\(\"Hello world!\"\\)\\)\n" +
+ "\\.+ myvalue int = 1\n" +
+ "\\.+ Hello world!\n\n"
+ testHelperFailure(c, "Check(1, checker, msg)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, myComment("Hello world!"))
+ })
+}
+
+func (s *HelpersS) TestCheckWithMissingExpected(c *check.C) {
+ checker := &MyChecker{result: true}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker\\)\n" +
+ "\\.+ Check\\(myobtained, MyChecker, myexpected\\):\n" +
+ "\\.+ Wrong number of parameters for MyChecker: " +
+ "want 3, got 2\n\n"
+ testHelperFailure(c, "Check(1, checker, !?)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker)
+ })
+}
+
+func (s *HelpersS) TestCheckWithTooManyExpected(c *check.C) {
+ checker := &MyChecker{result: true}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, 2, 3\\)\n" +
+ "\\.+ Check\\(myobtained, MyChecker, myexpected\\):\n" +
+ "\\.+ Wrong number of parameters for MyChecker: " +
+ "want 3, got 4\n\n"
+ testHelperFailure(c, "Check(1, checker, 2, 3)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, 2, 3)
+ })
+}
+
+func (s *HelpersS) TestCheckWithError(c *check.C) {
+ checker := &MyChecker{result: false, error: "Some not so cool data provided!"}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, 2\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n" +
+ "\\.+ Some not so cool data provided!\n\n"
+ testHelperFailure(c, "Check(1, checker, 2)", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, 2)
+ })
+}
+
+func (s *HelpersS) TestCheckWithNilChecker(c *check.C) {
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, nil\\)\n" +
+ "\\.+ Check\\(obtained, nil!\\?, \\.\\.\\.\\):\n" +
+ "\\.+ Oops\\.\\. you've provided a nil checker!\n\n"
+ testHelperFailure(c, "Check(obtained, nil)", false, false, log,
+ func() interface{} {
+ return c.Check(1, nil)
+ })
+}
+
+func (s *HelpersS) TestCheckWithParamsAndNamesMutation(c *check.C) {
+ checker := &MyChecker{result: false, params: []interface{}{3, 4}, names: []string{"newobtained", "newexpected"}}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " return c\\.Check\\(1, checker, 2\\)\n" +
+ "\\.+ newobtained int = 3\n" +
+ "\\.+ newexpected int = 4\n\n"
+ testHelperFailure(c, "Check(1, checker, 2) with mutation", false, false, log,
+ func() interface{} {
+ return c.Check(1, checker, 2)
+ })
+}
+
+// -----------------------------------------------------------------------
+// Tests for Assert(), mostly the same as for Check() above.
+
+func (s *HelpersS) TestAssertSucceedWithExpected(c *check.C) {
+ checker := &MyChecker{result: true}
+ testHelperSuccess(c, "Assert(1, checker, 2)", nil, func() interface{} {
+ c.Assert(1, checker, 2)
+ return nil
+ })
+ if !reflect.DeepEqual(checker.params, []interface{}{1, 2}) {
+ c.Fatalf("Bad params for check: %#v", checker.params)
+ }
+}
+
+func (s *HelpersS) TestAssertSucceedWithoutExpected(c *check.C) {
+ checker := &MyChecker{result: true, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ testHelperSuccess(c, "Assert(1, checker)", nil, func() interface{} {
+ c.Assert(1, checker)
+ return nil
+ })
+ if !reflect.DeepEqual(checker.params, []interface{}{1}) {
+ c.Fatalf("Bad params for check: %#v", checker.params)
+ }
+}
+
+func (s *HelpersS) TestAssertFailWithExpected(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker, 2\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n\n"
+ testHelperFailure(c, "Assert(1, checker, 2)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker, 2)
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertFailWithExpectedAndMessage(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker, 2, myComment\\(\"Hello world!\"\\)\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n" +
+ "\\.+ Hello world!\n\n"
+ testHelperFailure(c, "Assert(1, checker, 2, msg)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker, 2, myComment("Hello world!"))
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertFailWithoutExpected(c *check.C) {
+ checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker\\)\n" +
+ "\\.+ myvalue int = 1\n\n"
+ testHelperFailure(c, "Assert(1, checker)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker)
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertFailWithoutExpectedAndMessage(c *check.C) {
+ checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker, myComment\\(\"Hello world!\"\\)\\)\n" +
+ "\\.+ myvalue int = 1\n" +
+ "\\.+ Hello world!\n\n"
+ testHelperFailure(c, "Assert(1, checker, msg)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker, myComment("Hello world!"))
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertWithMissingExpected(c *check.C) {
+ checker := &MyChecker{result: true}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker\\)\n" +
+ "\\.+ Assert\\(myobtained, MyChecker, myexpected\\):\n" +
+ "\\.+ Wrong number of parameters for MyChecker: " +
+ "want 3, got 2\n\n"
+ testHelperFailure(c, "Assert(1, checker, !?)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker)
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertWithError(c *check.C) {
+ checker := &MyChecker{result: false, error: "Some not so cool data provided!"}
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, checker, 2\\)\n" +
+ "\\.+ myobtained int = 1\n" +
+ "\\.+ myexpected int = 2\n" +
+ "\\.+ Some not so cool data provided!\n\n"
+ testHelperFailure(c, "Assert(1, checker, 2)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, checker, 2)
+ return nil
+ })
+}
+
+func (s *HelpersS) TestAssertWithNilChecker(c *check.C) {
+ log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
+ " c\\.Assert\\(1, nil\\)\n" +
+ "\\.+ Assert\\(obtained, nil!\\?, \\.\\.\\.\\):\n" +
+ "\\.+ Oops\\.\\. you've provided a nil checker!\n\n"
+ testHelperFailure(c, "Assert(obtained, nil)", nil, true, log,
+ func() interface{} {
+ c.Assert(1, nil)
+ return nil
+ })
+}
+
+// -----------------------------------------------------------------------
+// Ensure that values logged work properly in some interesting cases.
+
+func (s *HelpersS) TestValueLoggingWithArrays(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
+ " return c\\.Check\\(\\[\\]byte{1, 2}, checker, \\[\\]byte{1, 3}\\)\n" +
+ "\\.+ myobtained \\[\\]uint8 = \\[\\]byte{0x1, 0x2}\n" +
+ "\\.+ myexpected \\[\\]uint8 = \\[\\]byte{0x1, 0x3}\n\n"
+ testHelperFailure(c, "Check([]byte{1}, chk, []byte{3})", false, false, log,
+ func() interface{} {
+ return c.Check([]byte{1, 2}, checker, []byte{1, 3})
+ })
+}
+
+func (s *HelpersS) TestValueLoggingWithMultiLine(c *check.C) {
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
+ " return c\\.Check\\(\"a\\\\nb\\\\n\", checker, \"a\\\\nb\\\\nc\"\\)\n" +
+ "\\.+ myobtained string = \"\" \\+\n" +
+ "\\.+ \"a\\\\n\" \\+\n" +
+ "\\.+ \"b\\\\n\"\n" +
+ "\\.+ myexpected string = \"\" \\+\n" +
+ "\\.+ \"a\\\\n\" \\+\n" +
+ "\\.+ \"b\\\\n\" \\+\n" +
+ "\\.+ \"c\"\n\n"
+ testHelperFailure(c, `Check("a\nb\n", chk, "a\nb\nc")`, false, false, log,
+ func() interface{} {
+ return c.Check("a\nb\n", checker, "a\nb\nc")
+ })
+}
+
+func (s *HelpersS) TestValueLoggingWithMultiLineException(c *check.C) {
+ // If the newline is at the end of the string, don't log as multi-line.
+ checker := &MyChecker{result: false}
+ log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
+ " return c\\.Check\\(\"a b\\\\n\", checker, \"a\\\\nb\"\\)\n" +
+ "\\.+ myobtained string = \"a b\\\\n\"\n" +
+ "\\.+ myexpected string = \"\" \\+\n" +
+ "\\.+ \"a\\\\n\" \\+\n" +
+ "\\.+ \"b\"\n\n"
+ testHelperFailure(c, `Check("a b\n", chk, "a\nb")`, false, false, log,
+ func() interface{} {
+ return c.Check("a b\n", checker, "a\nb")
+ })
+}
+
+// -----------------------------------------------------------------------
+// MkDir() tests.
+
+type MkDirHelper struct {
+ path1 string
+ path2 string
+ isDir1 bool
+ isDir2 bool
+ isDir3 bool
+ isDir4 bool
+}
+
+func (s *MkDirHelper) SetUpSuite(c *check.C) {
+ s.path1 = c.MkDir()
+ s.isDir1 = isDir(s.path1)
+}
+
+func (s *MkDirHelper) Test(c *check.C) {
+ s.path2 = c.MkDir()
+ s.isDir2 = isDir(s.path2)
+}
+
+func (s *MkDirHelper) TearDownSuite(c *check.C) {
+ s.isDir3 = isDir(s.path1)
+ s.isDir4 = isDir(s.path2)
+}
+
+func (s *HelpersS) TestMkDir(c *check.C) {
+ helper := MkDirHelper{}
+ output := String{}
+ check.Run(&helper, &check.RunConf{Output: &output})
+ c.Assert(output.value, check.Equals, "")
+ c.Check(helper.isDir1, check.Equals, true)
+ c.Check(helper.isDir2, check.Equals, true)
+ c.Check(helper.isDir3, check.Equals, true)
+ c.Check(helper.isDir4, check.Equals, true)
+ c.Check(helper.path1, check.Not(check.Equals),
+ helper.path2)
+ c.Check(isDir(helper.path1), check.Equals, false)
+ c.Check(isDir(helper.path2), check.Equals, false)
+}
+
+func isDir(path string) bool {
+ if stat, err := os.Stat(path); err == nil {
+ return stat.IsDir()
+ }
+ return false
+}
+
+// Concurrent logging should not corrupt the underlying buffer.
+// Use go test -race to detect the race in this test.
+func (s *HelpersS) TestConcurrentLogging(c *check.C) {
+ defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(runtime.NumCPU()))
+ var start, stop sync.WaitGroup
+ start.Add(1)
+ for i, n := 0, runtime.NumCPU()*2; i < n; i++ {
+ stop.Add(1)
+ go func(i int) {
+ start.Wait()
+ for j := 0; j < 30; j++ {
+ c.Logf("Worker %d: line %d", i, j)
+ }
+ stop.Done()
+ }(i)
+ }
+ start.Done()
+ stop.Wait()
+}
+
+// -----------------------------------------------------------------------
+// Test the TestName function
+
+type TestNameHelper struct {
+ name1 string
+ name2 string
+ name3 string
+ name4 string
+ name5 string
+}
+
+func (s *TestNameHelper) SetUpSuite(c *check.C) { s.name1 = c.TestName() }
+func (s *TestNameHelper) SetUpTest(c *check.C) { s.name2 = c.TestName() }
+func (s *TestNameHelper) Test(c *check.C) { s.name3 = c.TestName() }
+func (s *TestNameHelper) TearDownTest(c *check.C) { s.name4 = c.TestName() }
+func (s *TestNameHelper) TearDownSuite(c *check.C) { s.name5 = c.TestName() }
+
+func (s *HelpersS) TestTestName(c *check.C) {
+ helper := TestNameHelper{}
+ output := String{}
+ check.Run(&helper, &check.RunConf{Output: &output})
+ c.Check(helper.name1, check.Equals, "")
+ c.Check(helper.name2, check.Equals, "TestNameHelper.Test")
+ c.Check(helper.name3, check.Equals, "TestNameHelper.Test")
+ c.Check(helper.name4, check.Equals, "TestNameHelper.Test")
+ c.Check(helper.name5, check.Equals, "")
+}
+
+// -----------------------------------------------------------------------
+// A couple of helper functions to test helper functions. :-)
+
+func testHelperSuccess(c *check.C, name string, expectedResult interface{}, closure func() interface{}) {
+ var result interface{}
+ defer (func() {
+ if err := recover(); err != nil {
+ panic(err)
+ }
+ checkState(c, result,
+ &expectedState{
+ name: name,
+ result: expectedResult,
+ failed: false,
+ log: "",
+ })
+ })()
+ result = closure()
+}
+
+func testHelperFailure(c *check.C, name string, expectedResult interface{}, shouldStop bool, log string, closure func() interface{}) {
+ var result interface{}
+ defer (func() {
+ if err := recover(); err != nil {
+ panic(err)
+ }
+ checkState(c, result,
+ &expectedState{
+ name: name,
+ result: expectedResult,
+ failed: true,
+ log: log,
+ })
+ })()
+ result = closure()
+ if shouldStop {
+ c.Logf("%s didn't stop when it should", name)
+ }
+}
diff --git a/vendor/gopkg.in/check.v1/printer.go b/vendor/gopkg.in/check.v1/printer.go
new file mode 100644
index 0000000..e0f7557
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/printer.go
@@ -0,0 +1,168 @@
+package check
+
+import (
+ "bytes"
+ "go/ast"
+ "go/parser"
+ "go/printer"
+ "go/token"
+ "os"
+)
+
+func indent(s, with string) (r string) {
+ eol := true
+ for i := 0; i != len(s); i++ {
+ c := s[i]
+ switch {
+ case eol && c == '\n' || c == '\r':
+ case c == '\n' || c == '\r':
+ eol = true
+ case eol:
+ eol = false
+ s = s[:i] + with + s[i:]
+ i += len(with)
+ }
+ }
+ return s
+}
+
+func printLine(filename string, line int) (string, error) {
+ fset := token.NewFileSet()
+ file, err := os.Open(filename)
+ if err != nil {
+ return "", err
+ }
+ fnode, err := parser.ParseFile(fset, filename, file, parser.ParseComments)
+ if err != nil {
+ return "", err
+ }
+ config := &printer.Config{Mode: printer.UseSpaces, Tabwidth: 4}
+ lp := &linePrinter{fset: fset, fnode: fnode, line: line, config: config}
+ ast.Walk(lp, fnode)
+ result := lp.output.Bytes()
+ // Comments leave \n at the end.
+ n := len(result)
+ for n > 0 && result[n-1] == '\n' {
+ n--
+ }
+ return string(result[:n]), nil
+}
+
+type linePrinter struct {
+ config *printer.Config
+ fset *token.FileSet
+ fnode *ast.File
+ line int
+ output bytes.Buffer
+ stmt ast.Stmt
+}
+
+func (lp *linePrinter) emit() bool {
+ if lp.stmt != nil {
+ lp.trim(lp.stmt)
+ lp.printWithComments(lp.stmt)
+ lp.stmt = nil
+ return true
+ }
+ return false
+}
+
+func (lp *linePrinter) printWithComments(n ast.Node) {
+ nfirst := lp.fset.Position(n.Pos()).Line
+ nlast := lp.fset.Position(n.End()).Line
+ for _, g := range lp.fnode.Comments {
+ cfirst := lp.fset.Position(g.Pos()).Line
+ clast := lp.fset.Position(g.End()).Line
+ if clast == nfirst-1 && lp.fset.Position(n.Pos()).Column == lp.fset.Position(g.Pos()).Column {
+ for _, c := range g.List {
+ lp.output.WriteString(c.Text)
+ lp.output.WriteByte('\n')
+ }
+ }
+ if cfirst >= nfirst && cfirst <= nlast && n.End() <= g.List[0].Slash {
+ // The printer will not include the comment if it starts past
+ // the node itself. Trick it into printing by overlapping the
+ // slash with the end of the statement.
+ g.List[0].Slash = n.End() - 1
+ }
+ }
+ node := &printer.CommentedNode{n, lp.fnode.Comments}
+ lp.config.Fprint(&lp.output, lp.fset, node)
+}
+
+func (lp *linePrinter) Visit(n ast.Node) (w ast.Visitor) {
+ if n == nil {
+ if lp.output.Len() == 0 {
+ lp.emit()
+ }
+ return nil
+ }
+ first := lp.fset.Position(n.Pos()).Line
+ last := lp.fset.Position(n.End()).Line
+ if first <= lp.line && last >= lp.line {
+ // Print the innermost statement containing the line.
+ if stmt, ok := n.(ast.Stmt); ok {
+ if _, ok := n.(*ast.BlockStmt); !ok {
+ lp.stmt = stmt
+ }
+ }
+ if first == lp.line && lp.emit() {
+ return nil
+ }
+ return lp
+ }
+ return nil
+}
+
+func (lp *linePrinter) trim(n ast.Node) bool {
+ stmt, ok := n.(ast.Stmt)
+ if !ok {
+ return true
+ }
+ line := lp.fset.Position(n.Pos()).Line
+ if line != lp.line {
+ return false
+ }
+ switch stmt := stmt.(type) {
+ case *ast.IfStmt:
+ stmt.Body = lp.trimBlock(stmt.Body)
+ case *ast.SwitchStmt:
+ stmt.Body = lp.trimBlock(stmt.Body)
+ case *ast.TypeSwitchStmt:
+ stmt.Body = lp.trimBlock(stmt.Body)
+ case *ast.CaseClause:
+ stmt.Body = lp.trimList(stmt.Body)
+ case *ast.CommClause:
+ stmt.Body = lp.trimList(stmt.Body)
+ case *ast.BlockStmt:
+ stmt.List = lp.trimList(stmt.List)
+ }
+ return true
+}
+
+func (lp *linePrinter) trimBlock(stmt *ast.BlockStmt) *ast.BlockStmt {
+ if !lp.trim(stmt) {
+ return lp.emptyBlock(stmt)
+ }
+ stmt.Rbrace = stmt.Lbrace
+ return stmt
+}
+
+func (lp *linePrinter) trimList(stmts []ast.Stmt) []ast.Stmt {
+ for i := 0; i != len(stmts); i++ {
+ if !lp.trim(stmts[i]) {
+ stmts[i] = lp.emptyStmt(stmts[i])
+ break
+ }
+ }
+ return stmts
+}
+
+func (lp *linePrinter) emptyStmt(n ast.Node) *ast.ExprStmt {
+ return &ast.ExprStmt{&ast.Ellipsis{n.Pos(), nil}}
+}
+
+func (lp *linePrinter) emptyBlock(n ast.Node) *ast.BlockStmt {
+ p := n.Pos()
+ return &ast.BlockStmt{p, []ast.Stmt{lp.emptyStmt(n)}, p}
+}
diff --git a/vendor/gopkg.in/check.v1/printer_test.go b/vendor/gopkg.in/check.v1/printer_test.go
new file mode 100644
index 0000000..538b2d5
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/printer_test.go
@@ -0,0 +1,104 @@
+package check_test
+
+import (
+ . "gopkg.in/check.v1"
+)
+
+var _ = Suite(&PrinterS{})
+
+type PrinterS struct{}
+
+func (s *PrinterS) TestCountSuite(c *C) {
+ suitesRun += 1
+}
+
+var printTestFuncLine int
+
+func init() {
+ printTestFuncLine = getMyLine() + 3
+}
+
+func printTestFunc() {
+ println(1) // Comment1
+ if 2 == 2 { // Comment2
+ println(3) // Comment3
+ }
+ switch 5 {
+ case 6: println(6) // Comment6
+ println(7)
+ }
+ switch interface{}(9).(type) {// Comment9
+ case int: println(10)
+ println(11)
+ }
+ select {
+ case <-(chan bool)(nil): println(14)
+ println(15)
+ default: println(16)
+ println(17)
+ }
+ println(19,
+ 20)
+ _ = func() { println(21)
+ println(22)
+ }
+ println(24, func() {
+ println(25)
+ })
+ // Leading comment
+ // with multiple lines.
+ println(29) // Comment29
+}
+
+var printLineTests = []struct {
+ line int
+ output string
+}{
+ {1, "println(1) // Comment1"},
+ {2, "if 2 == 2 { // Comment2\n ...\n}"},
+ {3, "println(3) // Comment3"},
+ {5, "switch 5 {\n...\n}"},
+ {6, "case 6:\n println(6) // Comment6\n ..."},
+ {7, "println(7)"},
+ {9, "switch interface{}(9).(type) { // Comment9\n...\n}"},
+ {10, "case int:\n println(10)\n ..."},
+ {14, "case <-(chan bool)(nil):\n println(14)\n ..."},
+ {15, "println(15)"},
+ {16, "default:\n println(16)\n ..."},
+ {17, "println(17)"},
+ {19, "println(19,\n 20)"},
+ {20, "println(19,\n 20)"},
+ {21, "_ = func() {\n println(21)\n println(22)\n}"},
+ {22, "println(22)"},
+ {24, "println(24, func() {\n println(25)\n})"},
+ {25, "println(25)"},
+ {26, "println(24, func() {\n println(25)\n})"},
+ {29, "// Leading comment\n// with multiple lines.\nprintln(29) // Comment29"},
+}
+
+func (s *PrinterS) TestPrintLine(c *C) {
+ for _, test := range printLineTests {
+ output, err := PrintLine("printer_test.go", printTestFuncLine+test.line)
+ c.Assert(err, IsNil)
+ c.Assert(output, Equals, test.output)
+ }
+}
+
+var indentTests = []struct {
+ in, out string
+}{
+ {"", ""},
+ {"\n", "\n"},
+ {"a", ">>>a"},
+ {"a\n", ">>>a\n"},
+ {"a\nb", ">>>a\n>>>b"},
+ {" ", ">>> "},
+}
+
+func (s *PrinterS) TestIndent(c *C) {
+ for _, test := range indentTests {
+ out := Indent(test.in, ">>>")
+ c.Assert(out, Equals, test.out)
+ }
+
+}
diff --git a/vendor/gopkg.in/check.v1/reporter.go b/vendor/gopkg.in/check.v1/reporter.go
new file mode 100644
index 0000000..fb04f76
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/reporter.go
@@ -0,0 +1,88 @@
+package check
+
+import (
+ "fmt"
+ "io"
+ "sync"
+)
+
+// -----------------------------------------------------------------------
+// Output writer manages atomic output writing according to settings.
+
+type outputWriter struct {
+ m sync.Mutex
+ writer io.Writer
+ wroteCallProblemLast bool
+ Stream bool
+ Verbose bool
+}
+
+func newOutputWriter(writer io.Writer, stream, verbose bool) *outputWriter {
+ return &outputWriter{writer: writer, Stream: stream, Verbose: verbose}
+}
+
+func (ow *outputWriter) Write(content []byte) (n int, err error) {
+ ow.m.Lock()
+ n, err = ow.writer.Write(content)
+ ow.m.Unlock()
+ return
+}
+
+func (ow *outputWriter) WriteCallStarted(label string, c *C) {
+ if ow.Stream {
+ header := renderCallHeader(label, c, "", "\n")
+ ow.m.Lock()
+ ow.writer.Write([]byte(header))
+ ow.m.Unlock()
+ }
+}
+
+func (ow *outputWriter) WriteCallProblem(label string, c *C) {
+ var prefix string
+ if !ow.Stream {
+ prefix = "\n-----------------------------------" +
+ "-----------------------------------\n"
+ }
+ header := renderCallHeader(label, c, prefix, "\n\n")
+ ow.m.Lock()
+ ow.wroteCallProblemLast = true
+ ow.writer.Write([]byte(header))
+ if !ow.Stream {
+ c.logb.WriteTo(ow.writer)
+ }
+ ow.m.Unlock()
+}
+
+func (ow *outputWriter) WriteCallSuccess(label string, c *C) {
+ if ow.Stream || (ow.Verbose && c.kind == testKd) {
+ // TODO Use a buffer here.
+ var suffix string
+ if c.reason != "" {
+ suffix = " (" + c.reason + ")"
+ }
+ if c.status() == succeededSt {
+ suffix += "\t" + c.timerString()
+ }
+ suffix += "\n"
+ if ow.Stream {
+ suffix += "\n"
+ }
+ header := renderCallHeader(label, c, "", suffix)
+ ow.m.Lock()
+ // Resist temptation of using line as prefix above due to race.
+ if !ow.Stream && ow.wroteCallProblemLast {
+ header = "\n-----------------------------------" +
+ "-----------------------------------\n" +
+ header
+ }
+ ow.wroteCallProblemLast = false
+ ow.writer.Write([]byte(header))
+ ow.m.Unlock()
+ }
+}
+
+func renderCallHeader(label string, c *C, prefix, suffix string) string {
+ pc := c.method.PC()
+ return fmt.Sprintf("%s%s: %s: %s%s", prefix, label, niceFuncPath(pc),
+ niceFuncName(pc), suffix)
+}
diff --git a/vendor/gopkg.in/check.v1/reporter_test.go b/vendor/gopkg.in/check.v1/reporter_test.go
new file mode 100644
index 0000000..0b7ed76
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/reporter_test.go
@@ -0,0 +1,159 @@
+package check_test
+
+import (
+ "fmt"
+ "path/filepath"
+ "runtime"
+
+ . "gopkg.in/check.v1"
+)
+
+var _ = Suite(&reporterS{})
+
+type reporterS struct {
+ testFile string
+}
+
+func (s *reporterS) SetUpSuite(c *C) {
+ _, fileName, _, ok := runtime.Caller(0)
+ c.Assert(ok, Equals, true)
+ s.testFile = filepath.Base(fileName)
+}
+
+func (s *reporterS) TestWrite(c *C) {
+ testString := "test string"
+ output := String{}
+
+ dummyStream := true
+ dummyVerbose := true
+ o := NewOutputWriter(&output, dummyStream, dummyVerbose)
+
+ o.Write([]byte(testString))
+ c.Assert(output.value, Equals, testString)
+}
+
+func (s *reporterS) TestWriteCallStartedWithStreamFlag(c *C) {
+ testLabel := "test started label"
+ stream := true
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ o.WriteCallStarted(testLabel, c)
+ expected := fmt.Sprintf("%s: %s:\\d+: %s\n", testLabel, s.testFile, c.TestName())
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallStartedWithoutStreamFlag(c *C) {
+ stream := false
+ output := String{}
+
+ dummyLabel := "dummy"
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ o.WriteCallStarted(dummyLabel, c)
+ c.Assert(output.value, Equals, "")
+}
+
+func (s *reporterS) TestWriteCallProblemWithStreamFlag(c *C) {
+ testLabel := "test problem label"
+ stream := true
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ o.WriteCallProblem(testLabel, c)
+ expected := fmt.Sprintf("%s: %s:\\d+: %s\n\n", testLabel, s.testFile, c.TestName())
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallProblemWithoutStreamFlag(c *C) {
+ testLabel := "test problem label"
+ stream := false
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ o.WriteCallProblem(testLabel, c)
+ expected := fmt.Sprintf(""+
+ "\n"+
+ "----------------------------------------------------------------------\n"+
+ "%s: %s:\\d+: %s\n\n", testLabel, s.testFile, c.TestName())
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallProblemWithoutStreamFlagWithLog(c *C) {
+ testLabel := "test problem label"
+ testLog := "test log"
+ stream := false
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ c.Log(testLog)
+ o.WriteCallProblem(testLabel, c)
+ expected := fmt.Sprintf(""+
+ "\n"+
+ "----------------------------------------------------------------------\n"+
+ "%s: %s:\\d+: %s\n\n%s\n", testLabel, s.testFile, c.TestName(), testLog)
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallSuccessWithStreamFlag(c *C) {
+ testLabel := "test success label"
+ stream := true
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+
+ o.WriteCallSuccess(testLabel, c)
+ expected := fmt.Sprintf("%s: %s:\\d+: %s\t\\d\\.\\d+s\n\n", testLabel, s.testFile, c.TestName())
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallSuccessWithStreamFlagAndReason(c *C) {
+ testLabel := "test success label"
+ testReason := "test skip reason"
+ stream := true
+ output := String{}
+
+ dummyVerbose := true
+ o := NewOutputWriter(&output, stream, dummyVerbose)
+ c.FakeSkip(testReason)
+
+ o.WriteCallSuccess(testLabel, c)
+ expected := fmt.Sprintf("%s: %s:\\d+: %s \\(%s\\)\t\\d\\.\\d+s\n\n",
+ testLabel, s.testFile, c.TestName(), testReason)
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallSuccessWithoutStreamFlagWithVerboseFlag(c *C) {
+ testLabel := "test success label"
+ stream := false
+ verbose := true
+ output := String{}
+
+ o := NewOutputWriter(&output, stream, verbose)
+
+ o.WriteCallSuccess(testLabel, c)
+ expected := fmt.Sprintf("%s: %s:\\d+: %s\t\\d\\.\\d+s\n", testLabel, s.testFile, c.TestName())
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *reporterS) TestWriteCallSuccessWithoutStreamFlagWithoutVerboseFlag(c *C) {
+ testLabel := "test success label"
+ stream := false
+ verbose := false
+ output := String{}
+
+ o := NewOutputWriter(&output, stream, verbose)
+
+ o.WriteCallSuccess(testLabel, c)
+ c.Assert(output.value, Equals, "")
+}
diff --git a/vendor/gopkg.in/check.v1/run.go b/vendor/gopkg.in/check.v1/run.go
new file mode 100644
index 0000000..da8fd79
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/run.go
@@ -0,0 +1,175 @@
+package check
+
+import (
+ "bufio"
+ "flag"
+ "fmt"
+ "os"
+ "testing"
+ "time"
+)
+
+// -----------------------------------------------------------------------
+// Test suite registry.
+
+var allSuites []interface{}
+
+// Suite registers the given value as a test suite to be run. Any methods
+// starting with the Test prefix in the given value will be considered
+// test methods.
+func Suite(suite interface{}) interface{} {
+ allSuites = append(allSuites, suite)
+ return suite
+}
+
+// -----------------------------------------------------------------------
+// Public running interface.
+
+var (
+ oldFilterFlag = flag.String("gocheck.f", "", "Regular expression selecting which tests and/or suites to run")
+ oldVerboseFlag = flag.Bool("gocheck.v", false, "Verbose mode")
+ oldStreamFlag = flag.Bool("gocheck.vv", false, "Super verbose mode (disables output caching)")
+ oldBenchFlag = flag.Bool("gocheck.b", false, "Run benchmarks")
+ oldBenchTime = flag.Duration("gocheck.btime", 1*time.Second, "approximate run time for each benchmark")
+ oldListFlag = flag.Bool("gocheck.list", false, "List the names of all tests that will be run")
+ oldWorkFlag = flag.Bool("gocheck.work", false, "Display and do not remove the test working directory")
+
+ newFilterFlag = flag.String("check.f", "", "Regular expression selecting which tests and/or suites to run")
+ newVerboseFlag = flag.Bool("check.v", false, "Verbose mode")
+ newStreamFlag = flag.Bool("check.vv", false, "Super verbose mode (disables output caching)")
+ newBenchFlag = flag.Bool("check.b", false, "Run benchmarks")
+ newBenchTime = flag.Duration("check.btime", 1*time.Second, "approximate run time for each benchmark")
+ newBenchMem = flag.Bool("check.bmem", false, "Report memory benchmarks")
+ newListFlag = flag.Bool("check.list", false, "List the names of all tests that will be run")
+ newWorkFlag = flag.Bool("check.work", false, "Display and do not remove the test working directory")
+)
+
+// TestingT runs all test suites registered with the Suite function,
+// printing results to stdout, and reporting any failures back to
+// the "testing" package.
+func TestingT(testingT *testing.T) {
+ benchTime := *newBenchTime
+ if benchTime == 1*time.Second {
+ benchTime = *oldBenchTime
+ }
+ conf := &RunConf{
+ Filter: *oldFilterFlag + *newFilterFlag,
+ Verbose: *oldVerboseFlag || *newVerboseFlag,
+ Stream: *oldStreamFlag || *newStreamFlag,
+ Benchmark: *oldBenchFlag || *newBenchFlag,
+ BenchmarkTime: benchTime,
+ BenchmarkMem: *newBenchMem,
+ KeepWorkDir: *oldWorkFlag || *newWorkFlag,
+ }
+ if *oldListFlag || *newListFlag {
+ w := bufio.NewWriter(os.Stdout)
+ for _, name := range ListAll(conf) {
+ fmt.Fprintln(w, name)
+ }
+ w.Flush()
+ return
+ }
+ result := RunAll(conf)
+ println(result.String())
+ if !result.Passed() {
+ testingT.Fail()
+ }
+}
+
+// RunAll runs all test suites registered with the Suite function, using the
+// provided run configuration.
+func RunAll(runConf *RunConf) *Result {
+ result := Result{}
+ for _, suite := range allSuites {
+ result.Add(Run(suite, runConf))
+ }
+ return &result
+}
+
+// Run runs the provided test suite using the provided run configuration.
+func Run(suite interface{}, runConf *RunConf) *Result {
+ runner := newSuiteRunner(suite, runConf)
+ return runner.run()
+}
+
+// ListAll returns the names of all the test functions registered with the
+// Suite function that will be run with the provided run configuration.
+func ListAll(runConf *RunConf) []string {
+ var names []string
+ for _, suite := range allSuites {
+ names = append(names, List(suite, runConf)...)
+ }
+ return names
+}
+
+// List returns the names of the test functions in the given
+// suite that will be run with the provided run configuration.
+func List(suite interface{}, runConf *RunConf) []string {
+ var names []string
+ runner := newSuiteRunner(suite, runConf)
+ for _, t := range runner.tests {
+ names = append(names, t.String())
+ }
+ return names
+}
+
+// -----------------------------------------------------------------------
+// Result methods.
+
+func (r *Result) Add(other *Result) {
+ r.Succeeded += other.Succeeded
+ r.Skipped += other.Skipped
+ r.Failed += other.Failed
+ r.Panicked += other.Panicked
+ r.FixturePanicked += other.FixturePanicked
+ r.ExpectedFailures += other.ExpectedFailures
+ r.Missed += other.Missed
+ if r.WorkDir != "" && other.WorkDir != "" {
+ r.WorkDir += ":" + other.WorkDir
+ } else if other.WorkDir != "" {
+ r.WorkDir = other.WorkDir
+ }
+}
+
+func (r *Result) Passed() bool {
+ return (r.Failed == 0 && r.Panicked == 0 &&
+ r.FixturePanicked == 0 && r.Missed == 0 &&
+ r.RunError == nil)
+}
+
+func (r *Result) String() string {
+ if r.RunError != nil {
+ return "ERROR: " + r.RunError.Error()
+ }
+
+ var value string
+ if r.Failed == 0 && r.Panicked == 0 && r.FixturePanicked == 0 &&
+ r.Missed == 0 {
+ value = "OK: "
+ } else {
+ value = "OOPS: "
+ }
+ value += fmt.Sprintf("%d passed", r.Succeeded)
+ if r.Skipped != 0 {
+ value += fmt.Sprintf(", %d skipped", r.Skipped)
+ }
+ if r.ExpectedFailures != 0 {
+ value += fmt.Sprintf(", %d expected failures", r.ExpectedFailures)
+ }
+ if r.Failed != 0 {
+ value += fmt.Sprintf(", %d FAILED", r.Failed)
+ }
+ if r.Panicked != 0 {
+ value += fmt.Sprintf(", %d PANICKED", r.Panicked)
+ }
+ if r.FixturePanicked != 0 {
+ value += fmt.Sprintf(", %d FIXTURE-PANICKED", r.FixturePanicked)
+ }
+ if r.Missed != 0 {
+ value += fmt.Sprintf(", %d MISSED", r.Missed)
+ }
+ if r.WorkDir != "" {
+ value += "\nWORK=" + r.WorkDir
+ }
+ return value
+}
diff --git a/vendor/gopkg.in/check.v1/run_test.go b/vendor/gopkg.in/check.v1/run_test.go
new file mode 100644
index 0000000..f41fffc
--- /dev/null
+++ b/vendor/gopkg.in/check.v1/run_test.go
@@ -0,0 +1,419 @@
+// These tests verify the test running logic.
+
+package check_test
+
+import (
+ "errors"
+ . "gopkg.in/check.v1"
+ "os"
+ "sync"
+)
+
+var runnerS = Suite(&RunS{})
+
+type RunS struct{}
+
+func (s *RunS) TestCountSuite(c *C) {
+ suitesRun += 1
+}
+
+// -----------------------------------------------------------------------
+// Tests ensuring result counting works properly.
+
+func (s *RunS) TestSuccess(c *C) {
+ output := String{}
+ result := Run(&SuccessHelper{}, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 1)
+ c.Check(result.Failed, Equals, 0)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 0)
+ c.Check(result.FixturePanicked, Equals, 0)
+ c.Check(result.Missed, Equals, 0)
+ c.Check(result.RunError, IsNil)
+}
+
+func (s *RunS) TestFailure(c *C) {
+ output := String{}
+ result := Run(&FailHelper{}, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 0)
+ c.Check(result.Failed, Equals, 1)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 0)
+ c.Check(result.FixturePanicked, Equals, 0)
+ c.Check(result.Missed, Equals, 0)
+ c.Check(result.RunError, IsNil)
+}
+
+func (s *RunS) TestFixture(c *C) {
+ output := String{}
+ result := Run(&FixtureHelper{}, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 2)
+ c.Check(result.Failed, Equals, 0)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 0)
+ c.Check(result.FixturePanicked, Equals, 0)
+ c.Check(result.Missed, Equals, 0)
+ c.Check(result.RunError, IsNil)
+}
+
+func (s *RunS) TestPanicOnTest(c *C) {
+ output := String{}
+ helper := &FixtureHelper{panicOn: "Test1"}
+ result := Run(helper, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 1)
+ c.Check(result.Failed, Equals, 0)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 1)
+ c.Check(result.FixturePanicked, Equals, 0)
+ c.Check(result.Missed, Equals, 0)
+ c.Check(result.RunError, IsNil)
+}
+
+func (s *RunS) TestPanicOnSetUpTest(c *C) {
+ output := String{}
+ helper := &FixtureHelper{panicOn: "SetUpTest"}
+ result := Run(helper, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 0)
+ c.Check(result.Failed, Equals, 0)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 0)
+ c.Check(result.FixturePanicked, Equals, 1)
+ c.Check(result.Missed, Equals, 2)
+ c.Check(result.RunError, IsNil)
+}
+
+func (s *RunS) TestPanicOnSetUpSuite(c *C) {
+ output := String{}
+ helper := &FixtureHelper{panicOn: "SetUpSuite"}
+ result := Run(helper, &RunConf{Output: &output})
+ c.Check(result.Succeeded, Equals, 0)
+ c.Check(result.Failed, Equals, 0)
+ c.Check(result.Skipped, Equals, 0)
+ c.Check(result.Panicked, Equals, 0)
+ c.Check(result.FixturePanicked, Equals, 1)
+ c.Check(result.Missed, Equals, 2)
+ c.Check(result.RunError, IsNil)
+}
+
+// -----------------------------------------------------------------------
+// Check result aggregation.
+
+func (s *RunS) TestAdd(c *C) {
+ result := &Result{
+ Succeeded: 1,
+ Skipped: 2,
+ Failed: 3,
+ Panicked: 4,
+ FixturePanicked: 5,
+ Missed: 6,
+ ExpectedFailures: 7,
+ }
+ result.Add(&Result{
+ Succeeded: 10,
+ Skipped: 20,
+ Failed: 30,
+ Panicked: 40,
+ FixturePanicked: 50,
+ Missed: 60,
+ ExpectedFailures: 70,
+ })
+ c.Check(result.Succeeded, Equals, 11)
+ c.Check(result.Skipped, Equals, 22)
+ c.Check(result.Failed, Equals, 33)
+ c.Check(result.Panicked, Equals, 44)
+ c.Check(result.FixturePanicked, Equals, 55)
+ c.Check(result.Missed, Equals, 66)
+ c.Check(result.ExpectedFailures, Equals, 77)
+ c.Check(result.RunError, IsNil)
+}
+
+// -----------------------------------------------------------------------
+// Check the Passed() method.
+
+func (s *RunS) TestPassed(c *C) {
+ c.Assert((&Result{}).Passed(), Equals, true)
+ c.Assert((&Result{Succeeded: 1}).Passed(), Equals, true)
+ c.Assert((&Result{Skipped: 1}).Passed(), Equals, true)
+ c.Assert((&Result{Failed: 1}).Passed(), Equals, false)
+ c.Assert((&Result{Panicked: 1}).Passed(), Equals, false)
+ c.Assert((&Result{FixturePanicked: 1}).Passed(), Equals, false)
+ c.Assert((&Result{Missed: 1}).Passed(), Equals, false)
+ c.Assert((&Result{RunError: errors.New("!")}).Passed(), Equals, false)
+}
+
+// -----------------------------------------------------------------------
+// Check that result printing is working correctly.
+
+func (s *RunS) TestPrintSuccess(c *C) {
+ result := &Result{Succeeded: 5}
+ c.Check(result.String(), Equals, "OK: 5 passed")
+}
+
+func (s *RunS) TestPrintFailure(c *C) {
+ result := &Result{Failed: 5}
+ c.Check(result.String(), Equals, "OOPS: 0 passed, 5 FAILED")
+}
+
+func (s *RunS) TestPrintSkipped(c *C) {
+ result := &Result{Skipped: 5}
+ c.Check(result.String(), Equals, "OK: 0 passed, 5 skipped")
+}
+
+func (s *RunS) TestPrintExpectedFailures(c *C) {
+ result := &Result{ExpectedFailures: 5}
+ c.Check(result.String(), Equals, "OK: 0 passed, 5 expected failures")
+}
+
+func (s *RunS) TestPrintPanicked(c *C) {
+ result := &Result{Panicked: 5}
+ c.Check(result.String(), Equals, "OOPS: 0 passed, 5 PANICKED")
+}
+
+func (s *RunS) TestPrintFixturePanicked(c *C) {
+ result := &Result{FixturePanicked: 5}
+ c.Check(result.String(), Equals, "OOPS: 0 passed, 5 FIXTURE-PANICKED")
+}
+
+func (s *RunS) TestPrintMissed(c *C) {
+ result := &Result{Missed: 5}
+ c.Check(result.String(), Equals, "OOPS: 0 passed, 5 MISSED")
+}
+
+func (s *RunS) TestPrintAll(c *C) {
+ result := &Result{Succeeded: 1, Skipped: 2, ExpectedFailures: 3,
+ Panicked: 4, FixturePanicked: 5, Missed: 6}
+ c.Check(result.String(), Equals,
+ "OOPS: 1 passed, 2 skipped, 3 expected failures, 4 PANICKED, "+
+ "5 FIXTURE-PANICKED, 6 MISSED")
+}
+
+func (s *RunS) TestPrintRunError(c *C) {
+ result := &Result{Succeeded: 1, Failed: 1,
+ RunError: errors.New("Kaboom!")}
+ c.Check(result.String(), Equals, "ERROR: Kaboom!")
+}
+
+// -----------------------------------------------------------------------
+// Verify that the method pattern flag works correctly.
+
+func (s *RunS) TestFilterTestName(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "Test[91]"}
+ Run(&helper, &runConf)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 5)
+}
+
+func (s *RunS) TestFilterTestNameWithAll(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: ".*"}
+ Run(&helper, &runConf)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Test2")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ c.Check(helper.calls[7], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 8)
+}
+
+func (s *RunS) TestFilterSuiteName(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "FixtureHelper"}
+ Run(&helper, &runConf)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test1")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "SetUpTest")
+ c.Check(helper.calls[5], Equals, "Test2")
+ c.Check(helper.calls[6], Equals, "TearDownTest")
+ c.Check(helper.calls[7], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 8)
+}
+
+func (s *RunS) TestFilterSuiteNameAndTestName(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "FixtureHelper\\.Test2"}
+ Run(&helper, &runConf)
+ c.Check(helper.calls[0], Equals, "SetUpSuite")
+ c.Check(helper.calls[1], Equals, "SetUpTest")
+ c.Check(helper.calls[2], Equals, "Test2")
+ c.Check(helper.calls[3], Equals, "TearDownTest")
+ c.Check(helper.calls[4], Equals, "TearDownSuite")
+ c.Check(len(helper.calls), Equals, 5)
+}
+
+func (s *RunS) TestFilterAllOut(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "NotFound"}
+ Run(&helper, &runConf)
+ c.Check(len(helper.calls), Equals, 0)
+}
+
+func (s *RunS) TestRequirePartialMatch(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "est"}
+ Run(&helper, &runConf)
+ c.Check(len(helper.calls), Equals, 8)
+}
+
+func (s *RunS) TestFilterError(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Filter: "]["}
+ result := Run(&helper, &runConf)
+ c.Check(result.String(), Equals,
+ "ERROR: Bad filter expression: error parsing regexp: missing closing ]: `[`")
+ c.Check(len(helper.calls), Equals, 0)
+}
+
+// -----------------------------------------------------------------------
+// Verify that List works correctly.
+
+func (s *RunS) TestListFiltered(c *C) {
+ names := List(&FixtureHelper{}, &RunConf{Filter: "1"})
+ c.Assert(names, DeepEquals, []string{
+ "FixtureHelper.Test1",
+ })
+}
+
+func (s *RunS) TestList(c *C) {
+ names := List(&FixtureHelper{}, &RunConf{})
+ c.Assert(names, DeepEquals, []string{
+ "FixtureHelper.Test1",
+ "FixtureHelper.Test2",
+ })
+}
+
+// -----------------------------------------------------------------------
+// Verify that verbose mode prints tests which pass as well.
+
+func (s *RunS) TestVerboseMode(c *C) {
+ helper := FixtureHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Verbose: true}
+ Run(&helper, &runConf)
+
+ expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test1\t *[.0-9]+s\n" +
+ "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t *[.0-9]+s\n"
+
+ c.Assert(output.value, Matches, expected)
+}
+
+func (s *RunS) TestVerboseModeWithFailBeforePass(c *C) {
+ helper := FixtureHelper{panicOn: "Test1"}
+ output := String{}
+ runConf := RunConf{Output: &output, Verbose: true}
+ Run(&helper, &runConf)
+
+ expected := "(?s).*PANIC.*\n-+\n" + // Should have an extra line.
+ "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t *[.0-9]+s\n"
+
+ c.Assert(output.value, Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// Verify the stream output mode. In this mode there's no output caching.
+
+type StreamHelper struct {
+ l2 sync.Mutex
+ l3 sync.Mutex
+}
+
+func (s *StreamHelper) SetUpSuite(c *C) {
+ c.Log("0")
+}
+
+func (s *StreamHelper) Test1(c *C) {
+ c.Log("1")
+ s.l2.Lock()
+ s.l3.Lock()
+ go func() {
+ s.l2.Lock() // Wait for "2".
+ c.Log("3")
+ s.l3.Unlock()
+ }()
+}
+
+func (s *StreamHelper) Test2(c *C) {
+ c.Log("2")
+ s.l2.Unlock()
+ s.l3.Lock() // Wait for "3".
+ c.Fail()
+ c.Log("4")
+}
+
+func (s *RunS) TestStreamMode(c *C) {
+ helper := &StreamHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Stream: true}
+ Run(helper, &runConf)
+
+ expected := "START: run_test\\.go:[0-9]+: StreamHelper\\.SetUpSuite\n0\n" +
+ "PASS: run_test\\.go:[0-9]+: StreamHelper\\.SetUpSuite\t *[.0-9]+s\n\n" +
+ "START: run_test\\.go:[0-9]+: StreamHelper\\.Test1\n1\n" +
+ "PASS: run_test\\.go:[0-9]+: StreamHelper\\.Test1\t *[.0-9]+s\n\n" +
+ "START: run_test\\.go:[0-9]+: StreamHelper\\.Test2\n2\n3\n4\n" +
+ "FAIL: run_test\\.go:[0-9]+: StreamHelper\\.Test2\n\n"
+
+ c.Assert(output.value, Matches, expected)
+}
+
+type StreamMissHelper struct{}
+
+func (s *StreamMissHelper) SetUpSuite(c *C) {
+ c.Log("0")
+ c.Fail()
+}
+
+func (s *StreamMissHelper) Test1(c *C) {
+ c.Log("1")
+}
+
+func (s *RunS) TestStreamModeWithMiss(c *C) {
+ helper := &StreamMissHelper{}
+ output := String{}
+ runConf := RunConf{Output: &output, Stream: true}
+ Run(helper, &runConf)
+
+ expected := "START: run_test\\.go:[0-9]+: StreamMissHelper\\.SetUpSuite\n0\n" +
+ "FAIL: run_test\\.go:[0-9]+: StreamMissHelper\\.SetUpSuite\n\n" +
+ "START: run_test\\.go:[0-9]+: StreamMissHelper\\.Test1\n" +
+ "MISS: run_test\\.go:[0-9]+: StreamMissHelper\\.Test1\n\n"
+
+ c.Assert(output.value, Matches, expected)
+}
+
+// -----------------------------------------------------------------------
+// Verify that the keep work dir request indeed does so.
+
+type WorkDirSuite struct{}
+
+func (s *WorkDirSuite) Test(c *C) {
+ c.MkDir()
+}
+
+func (s *RunS) TestKeepWorkDir(c *C) {
+ output := String{}
+ runConf := RunConf{Output: &output, Verbose: true, KeepWorkDir: true}
+ result := Run(&WorkDirSuite{}, &runConf)
+
+ c.Assert(result.String(), Matches, ".*\nWORK=" + result.WorkDir)
+
+ stat, err := os.Stat(result.WorkDir)
+ c.Assert(err, IsNil)
+ c.Assert(stat.IsDir(), Equals, true)
+}
diff --git a/vendor/gopkg.in/mgo.v2/LICENSE b/vendor/gopkg.in/mgo.v2/LICENSE
new file mode 100644
index 0000000..770c767
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/LICENSE
@@ -0,0 +1,25 @@
+mgo - MongoDB driver for Go
+
+Copyright (c) 2010-2013 - Gustavo Niemeyer
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/gopkg.in/mgo.v2/bson/LICENSE b/vendor/gopkg.in/mgo.v2/bson/LICENSE
new file mode 100644
index 0000000..8903260
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/LICENSE
@@ -0,0 +1,25 @@
+BSON library for Go
+
+Copyright (c) 2010-2012 - Gustavo Niemeyer
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/gopkg.in/mgo.v2/bson/bson.go b/vendor/gopkg.in/mgo.v2/bson/bson.go
new file mode 100644
index 0000000..7fb7f8c
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/bson.go
@@ -0,0 +1,738 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Package bson is an implementation of the BSON specification for Go:
+//
+// http://bsonspec.org
+//
+// It was created as part of the mgo MongoDB driver for Go, but is standalone
+// and may be used on its own without the driver.
+package bson
+
+import (
+ "bytes"
+ "crypto/md5"
+ "crypto/rand"
+ "encoding/binary"
+ "encoding/hex"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "reflect"
+ "runtime"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+// --------------------------------------------------------------------------
+// The public API.
+
+// A value implementing the bson.Getter interface will have its GetBSON
+// method called when the given value has to be marshalled, and the result
+// of this method will be marshaled in place of the actual object.
+//
+// If GetBSON returns a non-nil error, the marshalling procedure
+// will stop and error out with the provided value.
+type Getter interface {
+ GetBSON() (interface{}, error)
+}
+
+// A value implementing the bson.Setter interface will receive the BSON
+// value via the SetBSON method during unmarshaling, and the object
+// itself will not be changed as usual.
+//
+// If setting the value works, the method should return nil or alternatively
+// bson.SetZero to set the respective field to its zero value (nil for
+// pointer types). If SetBSON returns a value of type bson.TypeError, the
+// BSON value will be omitted from a map or slice being decoded and the
+// unmarshalling will continue. If it returns any other non-nil error, the
+// unmarshalling procedure will stop and error out with the provided value.
+//
+// This interface is generally useful in pointer receivers, since the method
+// will want to change the receiver. A type field that implements the Setter
+// interface doesn't have to be a pointer, though.
+//
+// Unlike the usual behavior, unmarshalling onto a value that implements a
+// Setter interface will NOT reset the value to its zero state. This allows
+// the value to decide by itself how to be unmarshalled.
+//
+// For example:
+//
+// type MyString string
+//
+// func (s *MyString) SetBSON(raw bson.Raw) error {
+// return raw.Unmarshal(s)
+// }
+//
+type Setter interface {
+ SetBSON(raw Raw) error
+}
+
+// SetZero may be returned from a SetBSON method to have the value set to
+// its respective zero value. When used in pointer values, this will set the
+// field to nil rather than to the pre-allocated value.
+var SetZero = errors.New("set to zero")
+
+// M is a convenient alias for a map[string]interface{} map, useful for
+// dealing with BSON in a native way. For instance:
+//
+// bson.M{"a": 1, "b": true}
+//
+// There's no special handling for this type in addition to what's done anyway
+// for an equivalent map type. Elements in the map will be dumped in an
+// undefined order. See also the bson.D type for an ordered alternative.
+type M map[string]interface{}
+
+// D represents a BSON document containing ordered elements. For example:
+//
+// bson.D{{"a", 1}, {"b", true}}
+//
+// In some situations, such as when creating indexes for MongoDB, the order in
+// which the elements are defined is important. If the order is not important,
+// using a map is generally more comfortable. See bson.M and bson.RawD.
+type D []DocElem
+
+// DocElem is an element of the bson.D document representation.
+type DocElem struct {
+ Name string
+ Value interface{}
+}
+
+// Map returns a map out of the ordered element name/value pairs in d.
+func (d D) Map() (m M) {
+ m = make(M, len(d))
+ for _, item := range d {
+ m[item.Name] = item.Value
+ }
+ return m
+}
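The ordered-to-map conversion above can be sketched without depending on the vendored package. This is a minimal stand-in mirroring the D/DocElem types (local names `doc`/`docElem` are illustrative, not part of the real API); note that when two elements share a name, the later one wins in the resulting map, since map assignment simply overwrites:

```go
package main

import "fmt"

// docElem mirrors bson.DocElem: one named element of an ordered document.
type docElem struct {
	Name  string
	Value interface{}
}

// doc mirrors bson.D: an ordered sequence of elements.
type doc []docElem

// asMap mirrors D.Map: order is discarded, later duplicates overwrite.
func (d doc) asMap() map[string]interface{} {
	m := make(map[string]interface{}, len(d))
	for _, item := range d {
		m[item.Name] = item.Value
	}
	return m
}

func main() {
	d := doc{{"a", 1}, {"b", true}}
	fmt.Println(d.asMap()["a"]) // the ordered form flattens to a plain map
}
```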
+
+// The Raw type represents raw unprocessed BSON documents and elements.
+// Kind is the kind of element as defined per the BSON specification, and
+// Data is the raw unprocessed data for the respective element.
+// Using this type it is possible to unmarshal or marshal values partially.
+//
+// Relevant documentation:
+//
+// http://bsonspec.org/#/specification
+//
+type Raw struct {
+ Kind byte
+ Data []byte
+}
+
+// RawD represents a BSON document containing raw unprocessed elements.
+// This low-level representation may be useful when lazily processing
+// documents of uncertain content, or when manipulating the raw content
+// documents in general.
+type RawD []RawDocElem
+
+// See the RawD type.
+type RawDocElem struct {
+ Name string
+ Value Raw
+}
+
+// ObjectId is a unique ID identifying a BSON value. It must be exactly 12 bytes
+// long. MongoDB objects by default have such a property set in their "_id"
+// property.
+//
+// http://www.mongodb.org/display/DOCS/Object+IDs
+type ObjectId string
+
+// ObjectIdHex returns an ObjectId from the provided hex representation.
+// Calling this function with an invalid hex representation will
+// cause a runtime panic. See the IsObjectIdHex function.
+func ObjectIdHex(s string) ObjectId {
+ d, err := hex.DecodeString(s)
+ if err != nil || len(d) != 12 {
+ panic(fmt.Sprintf("invalid input to ObjectIdHex: %q", s))
+ }
+ return ObjectId(d)
+}
+
+// IsObjectIdHex returns whether s is a valid hex representation of
+// an ObjectId. See the ObjectIdHex function.
+func IsObjectIdHex(s string) bool {
+ if len(s) != 24 {
+ return false
+ }
+ _, err := hex.DecodeString(s)
+ return err == nil
+}
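The validity check above amounts to "24 characters that decode as hex". A self-contained re-implementation (a sketch, not the vendored function itself) makes the two failure modes explicit:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// isObjectIdHex mirrors the check above: a valid ObjectId hex string is
// exactly 24 characters that decode to 12 bytes.
func isObjectIdHex(s string) bool {
	if len(s) != 24 {
		return false // wrong length: cannot decode to 12 bytes
	}
	_, err := hex.DecodeString(s)
	return err == nil // non-hex characters make decoding fail
}

func main() {
	fmt.Println(isObjectIdHex("4d88e15b60f486e428412dc9")) // true
	fmt.Println(isObjectIdHex("zz88e15b60f486e428412dc9")) // false: not hex
}
```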
+
+// objectIdCounter is atomically incremented when generating a new ObjectId
+// with the NewObjectId function. It's used as the counter part of an id.
+var objectIdCounter uint32 = readRandomUint32()
+
+// readRandomUint32 returns a random objectIdCounter.
+func readRandomUint32() uint32 {
+ var b [4]byte
+ _, err := io.ReadFull(rand.Reader, b[:])
+ if err != nil {
+ panic(fmt.Errorf("cannot read random object id: %v", err))
+ }
+ return uint32((uint32(b[0]) << 0) | (uint32(b[1]) << 8) | (uint32(b[2]) << 16) | (uint32(b[3]) << 24))
+}
+
+// machineId stores machine id generated once and used in subsequent calls
+// to NewObjectId function.
+var machineId = readMachineId()
+var processId = os.Getpid()
+
+// readMachineId generates and returns a machine id.
+// If this function fails to get the hostname it will cause a runtime error.
+func readMachineId() []byte {
+ var sum [3]byte
+ id := sum[:]
+ hostname, err1 := os.Hostname()
+ if err1 != nil {
+ _, err2 := io.ReadFull(rand.Reader, id)
+ if err2 != nil {
+ panic(fmt.Errorf("cannot get hostname: %v; %v", err1, err2))
+ }
+ return id
+ }
+ hw := md5.New()
+ hw.Write([]byte(hostname))
+ copy(id, hw.Sum(nil))
+ return id
+}
+
+// NewObjectId returns a new unique ObjectId.
+func NewObjectId() ObjectId {
+ var b [12]byte
+ // Timestamp, 4 bytes, big endian
+ binary.BigEndian.PutUint32(b[:], uint32(time.Now().Unix()))
+ // Machine, first 3 bytes of md5(hostname)
+ b[4] = machineId[0]
+ b[5] = machineId[1]
+ b[6] = machineId[2]
+ // Pid, 2 bytes, specs don't specify endianness, but we use big endian.
+ b[7] = byte(processId >> 8)
+ b[8] = byte(processId)
+ // Increment, 3 bytes, big endian
+ i := atomic.AddUint32(&objectIdCounter, 1)
+ b[9] = byte(i >> 16)
+ b[10] = byte(i >> 8)
+ b[11] = byte(i)
+ return ObjectId(b[:])
+}
+
+// NewObjectIdWithTime returns a dummy ObjectId with the timestamp part filled
+// with the provided number of seconds from epoch UTC, and all other parts
+// filled with zeroes. It's not safe to insert a document with an id generated
+// by this method; it is useful only for queries to find documents with ids
+// generated before or after the specified timestamp.
+func NewObjectIdWithTime(t time.Time) ObjectId {
+ var b [12]byte
+ binary.BigEndian.PutUint32(b[:4], uint32(t.Unix()))
+ return ObjectId(string(b[:]))
+}
+
+// String returns a hex string representation of the id.
+// Example: ObjectIdHex("4d88e15b60f486e428412dc9").
+func (id ObjectId) String() string {
+ return fmt.Sprintf(`ObjectIdHex("%x")`, string(id))
+}
+
+// Hex returns a hex representation of the ObjectId.
+func (id ObjectId) Hex() string {
+ return hex.EncodeToString([]byte(id))
+}
+
+// MarshalJSON turns a bson.ObjectId into a json.Marshaller.
+func (id ObjectId) MarshalJSON() ([]byte, error) {
+ return []byte(fmt.Sprintf(`"%x"`, string(id))), nil
+}
+
+var nullBytes = []byte("null")
+
+// UnmarshalJSON turns *bson.ObjectId into a json.Unmarshaller.
+func (id *ObjectId) UnmarshalJSON(data []byte) error {
+ if len(data) > 0 && (data[0] == '{' || data[0] == 'O') {
+ var v struct {
+ Id json.RawMessage `json:"$oid"`
+ Func struct {
+ Id json.RawMessage
+ } `json:"$oidFunc"`
+ }
+ err := jdec(data, &v)
+ if err == nil {
+ if len(v.Id) > 0 {
+ data = []byte(v.Id)
+ } else {
+ data = []byte(v.Func.Id)
+ }
+ }
+ }
+ if len(data) == 2 && data[0] == '"' && data[1] == '"' || bytes.Equal(data, nullBytes) {
+ *id = ""
+ return nil
+ }
+ if len(data) != 26 || data[0] != '"' || data[25] != '"' {
+ return fmt.Errorf("invalid ObjectId in JSON: %s", string(data))
+ }
+ var buf [12]byte
+ _, err := hex.Decode(buf[:], data[1:25])
+ if err != nil {
+ return fmt.Errorf("invalid ObjectId in JSON: %s (%s)", string(data), err)
+ }
+ *id = ObjectId(string(buf[:]))
+ return nil
+}
+
+// MarshalText turns bson.ObjectId into an encoding.TextMarshaler.
+func (id ObjectId) MarshalText() ([]byte, error) {
+ return []byte(fmt.Sprintf("%x", string(id))), nil
+}
+
+// UnmarshalText turns *bson.ObjectId into an encoding.TextUnmarshaler.
+func (id *ObjectId) UnmarshalText(data []byte) error {
+ if len(data) == 1 && data[0] == ' ' || len(data) == 0 {
+ *id = ""
+ return nil
+ }
+ if len(data) != 24 {
+ return fmt.Errorf("invalid ObjectId: %s", data)
+ }
+ var buf [12]byte
+ _, err := hex.Decode(buf[:], data[:])
+ if err != nil {
+ return fmt.Errorf("invalid ObjectId: %s (%s)", data, err)
+ }
+ *id = ObjectId(string(buf[:]))
+ return nil
+}
+
+// Valid returns true if id is valid. A valid id must contain exactly 12 bytes.
+func (id ObjectId) Valid() bool {
+ return len(id) == 12
+}
+
+// byteSlice returns byte slice of id from start to end.
+// Calling this function with an invalid id will cause a runtime panic.
+func (id ObjectId) byteSlice(start, end int) []byte {
+ if len(id) != 12 {
+ panic(fmt.Sprintf("invalid ObjectId: %q", string(id)))
+ }
+ return []byte(string(id)[start:end])
+}
+
+// Time returns the timestamp part of the id.
+// It's a runtime error to call this method with an invalid id.
+func (id ObjectId) Time() time.Time {
+ // First 4 bytes of ObjectId is 32-bit big-endian seconds from epoch.
+ secs := int64(binary.BigEndian.Uint32(id.byteSlice(0, 4)))
+ return time.Unix(secs, 0)
+}
+
+// Machine returns the 3-byte machine id part of the id.
+// It's a runtime error to call this method with an invalid id.
+func (id ObjectId) Machine() []byte {
+ return id.byteSlice(4, 7)
+}
+
+// Pid returns the process id part of the id.
+// It's a runtime error to call this method with an invalid id.
+func (id ObjectId) Pid() uint16 {
+ return binary.BigEndian.Uint16(id.byteSlice(7, 9))
+}
+
+// Counter returns the incrementing value part of the id.
+// It's a runtime error to call this method with an invalid id.
+func (id ObjectId) Counter() int32 {
+ b := id.byteSlice(9, 12)
+ // Counter is stored as big-endian 3-byte value
+ return int32(uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]))
+}
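The accessor methods above (Time, Machine, Pid, Counter) all read fixed slices of the 12-byte layout that NewObjectId builds: 4-byte big-endian timestamp, 3-byte machine id, 2-byte pid, 3-byte counter. A standalone sketch of that decoding, using a hand-built byte slice in place of a real id:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeObjectId splits a 12-byte id into its four documented fields.
func decodeObjectId(b []byte) (secs int64, machine []byte, pid uint16, counter int32) {
	secs = int64(binary.BigEndian.Uint32(b[0:4])) // seconds since epoch
	machine = b[4:7]                              // first 3 bytes of md5(hostname)
	pid = binary.BigEndian.Uint16(b[7:9])         // process id, big endian
	// counter is a big-endian 3-byte value, as in the Counter method above
	counter = int32(uint32(b[9])<<16 | uint32(b[10])<<8 | uint32(b[11]))
	return
}

func main() {
	id := []byte{0x4d, 0x88, 0xe1, 0x5b, 1, 2, 3, 0x12, 0x34, 0, 0, 7}
	secs, machine, pid, counter := decodeObjectId(id)
	fmt.Println(secs, machine, pid, counter)
}
```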
+
+// The Symbol type is similar to a string and is used in languages with a
+// distinct symbol type.
+type Symbol string
+
+// Now returns the current time with millisecond precision. MongoDB stores
+// timestamps with the same precision, so a Time returned from this method
+// will not change after a roundtrip to the database. That's the only reason
+// why this function exists. Using the time.Now function also works fine
+// otherwise.
+func Now() time.Time {
+ return time.Unix(0, time.Now().UnixNano()/1e6*1e6)
+}
+
+// MongoTimestamp is a special internal type used by MongoDB that for some
+// strange reason has its own datatype defined in BSON.
+type MongoTimestamp int64
+
+type orderKey int64
+
+// MaxKey is a special value that compares higher than all other possible BSON
+// values in a MongoDB database.
+var MaxKey = orderKey(1<<63 - 1)
+
+// MinKey is a special value that compares lower than all other possible BSON
+// values in a MongoDB database.
+var MinKey = orderKey(-1 << 63)
+
+type undefined struct{}
+
+// Undefined represents the undefined BSON value.
+var Undefined undefined
+
+// Binary is a representation for non-standard binary values. Any kind should
+// work, but the following are known as of this writing:
+//
+// 0x00 - Generic. This is decoded as []byte(data), not Binary{0x00, data}.
+// 0x01 - Function (!?)
+// 0x02 - Obsolete generic.
+// 0x03 - UUID
+// 0x05 - MD5
+// 0x80 - User defined.
+//
+type Binary struct {
+ Kind byte
+ Data []byte
+}
+
+// RegEx represents a regular expression. The Options field may contain
+// individual characters defining the way in which the pattern should be
+// applied, and must be sorted. Valid options as of this writing are 'i' for
+// case insensitive matching, 'm' for multi-line matching, 'x' for verbose
+// mode, 'l' to make \w, \W, and similar be locale-dependent, 's' for dot-all
+// mode (a '.' matches everything), and 'u' to make \w, \W, and similar match
+// unicode. The value of the Options parameter is not verified before being
+// marshaled into the BSON format.
+type RegEx struct {
+ Pattern string
+ Options string
+}
+
+// JavaScript is a type that holds JavaScript code. If Scope is non-nil, it
+// will be marshaled as a mapping from identifiers to values that may be
+// used when evaluating the provided Code.
+type JavaScript struct {
+ Code string
+ Scope interface{}
+}
+
+// DBPointer refers to a document id in a namespace.
+//
+// This type is deprecated in the BSON specification and should not be used
+// except for backwards compatibility with ancient applications.
+type DBPointer struct {
+ Namespace string
+ Id ObjectId
+}
+
+const initialBufferSize = 64
+
+func handleErr(err *error) {
+ if r := recover(); r != nil {
+ if _, ok := r.(runtime.Error); ok {
+ panic(r)
+ } else if _, ok := r.(externalPanic); ok {
+ panic(r)
+ } else if s, ok := r.(string); ok {
+ *err = errors.New(s)
+ } else if e, ok := r.(error); ok {
+ *err = e
+ } else {
+ panic(r)
+ }
+ }
+}
+
+// Marshal serializes the in value, which may be a map or a struct value.
+// In the case of struct values, only exported fields will be serialized,
+// and the order of serialized fields will match that of the struct itself.
+// The lowercased field name is used as the key for each exported field,
+// but this behavior may be changed using the respective field tag.
+// The tag may also contain flags to tweak the marshalling behavior for
+// the field. The tag formats accepted are:
+//
+//     "[<key>][,<flag1>[,<flag2>]]"
+//
+//     `(...) bson:"[<key>][,<flag1>[,<flag2>]]" (...)`
+//
+// The following flags are currently supported:
+//
+// omitempty Only include the field if it's not set to the zero
+// value for the type or to empty slices or maps.
+//
+// minsize Marshal an int64 value as an int32, if that's feasible
+// while preserving the numeric value.
+//
+// inline Inline the field, which must be a struct or a map,
+// causing all of its fields or keys to be processed as if
+// they were part of the outer struct. For maps, keys must
+// not conflict with the bson keys of other struct fields.
+//
+// Some examples:
+//
+// type T struct {
+// A bool
+// B int "myb"
+// C string "myc,omitempty"
+// D string `bson:",omitempty" json:"jsonkey"`
+// E int64 ",minsize"
+// F int64 "myf,omitempty,minsize"
+// }
+//
+func Marshal(in interface{}) (out []byte, err error) {
+ defer handleErr(&err)
+ e := &encoder{make([]byte, 0, initialBufferSize)}
+ e.addDoc(reflect.ValueOf(in))
+ return e.out, nil
+}
+
+// Unmarshal deserializes data from in into the out value. The out value
+// must be a map, a pointer to a struct, or a pointer to a bson.D value.
+// In the case of struct values, only exported fields will be deserialized.
+// The lowercased field name is used as the key for each exported field,
+// but this behavior may be changed using the respective field tag.
+// The tag may also contain flags to tweak the marshalling behavior for
+// the field. The tag formats accepted are:
+//
+//     "[<key>][,<flag1>[,<flag2>]]"
+//
+//     `(...) bson:"[<key>][,<flag1>[,<flag2>]]" (...)`
+//
+// The following flags are currently supported during unmarshal (see the
+// Marshal method for other flags):
+//
+// inline Inline the field, which must be a struct or a map.
+// Inlined structs are handled as if its fields were part
+// of the outer struct. An inlined map causes keys that do
+// not match any other struct field to be inserted in the
+// map rather than being discarded as usual.
+//
+// The target field or element types of out may not necessarily match
+// the BSON values of the provided data. The following conversions are
+// made automatically:
+//
+// - Numeric types are converted if at least the integer part of the
+// value would be preserved correctly
+// - Bools are converted to numeric types as 1 or 0
+// - Numeric types are converted to bools as true if not 0 or false otherwise
+// - Binary and string BSON data is converted to a string, array or byte slice
+//
+// If the value would not fit the type and cannot be converted, it's
+// silently skipped.
+//
+// Pointer values are initialized when necessary.
+func Unmarshal(in []byte, out interface{}) (err error) {
+ if raw, ok := out.(*Raw); ok {
+ raw.Kind = 3
+ raw.Data = in
+ return nil
+ }
+ defer handleErr(&err)
+ v := reflect.ValueOf(out)
+ switch v.Kind() {
+ case reflect.Ptr:
+ fallthrough
+ case reflect.Map:
+ d := newDecoder(in)
+ d.readDocTo(v)
+ case reflect.Struct:
+ return errors.New("Unmarshal can't deal with struct values. Use a pointer.")
+ default:
+ return errors.New("Unmarshal needs a map or a pointer to a struct.")
+ }
+ return nil
+}
+
+// Unmarshal deserializes raw into the out value. If the out value type
+// is not compatible with raw, a *bson.TypeError is returned.
+//
+// See the Unmarshal function documentation for more details on the
+// unmarshalling process.
+func (raw Raw) Unmarshal(out interface{}) (err error) {
+ defer handleErr(&err)
+ v := reflect.ValueOf(out)
+ switch v.Kind() {
+ case reflect.Ptr:
+ v = v.Elem()
+ fallthrough
+ case reflect.Map:
+ d := newDecoder(raw.Data)
+ good := d.readElemTo(v, raw.Kind)
+ if !good {
+ return &TypeError{v.Type(), raw.Kind}
+ }
+ case reflect.Struct:
+ return errors.New("Raw Unmarshal can't deal with struct values. Use a pointer.")
+ default:
+ return errors.New("Raw Unmarshal needs a map or a valid pointer.")
+ }
+ return nil
+}
+
+type TypeError struct {
+ Type reflect.Type
+ Kind byte
+}
+
+func (e *TypeError) Error() string {
+ return fmt.Sprintf("BSON kind 0x%02x isn't compatible with type %s", e.Kind, e.Type.String())
+}
+
+// --------------------------------------------------------------------------
+// Maintain a mapping of keys to structure field indexes
+
+type structInfo struct {
+ FieldsMap map[string]fieldInfo
+ FieldsList []fieldInfo
+ InlineMap int
+ Zero reflect.Value
+}
+
+type fieldInfo struct {
+ Key string
+ Num int
+ OmitEmpty bool
+ MinSize bool
+ Inline []int
+}
+
+var structMap = make(map[reflect.Type]*structInfo)
+var structMapMutex sync.RWMutex
+
+type externalPanic string
+
+func (e externalPanic) String() string {
+ return string(e)
+}
+
+func getStructInfo(st reflect.Type) (*structInfo, error) {
+ structMapMutex.RLock()
+ sinfo, found := structMap[st]
+ structMapMutex.RUnlock()
+ if found {
+ return sinfo, nil
+ }
+ n := st.NumField()
+ fieldsMap := make(map[string]fieldInfo)
+ fieldsList := make([]fieldInfo, 0, n)
+ inlineMap := -1
+ for i := 0; i != n; i++ {
+ field := st.Field(i)
+ if field.PkgPath != "" && !field.Anonymous {
+ continue // Private field
+ }
+
+ info := fieldInfo{Num: i}
+
+ tag := field.Tag.Get("bson")
+ if tag == "" && strings.Index(string(field.Tag), ":") < 0 {
+ tag = string(field.Tag)
+ }
+ if tag == "-" {
+ continue
+ }
+
+ inline := false
+ fields := strings.Split(tag, ",")
+ if len(fields) > 1 {
+ for _, flag := range fields[1:] {
+ switch flag {
+ case "omitempty":
+ info.OmitEmpty = true
+ case "minsize":
+ info.MinSize = true
+ case "inline":
+ inline = true
+ default:
+ msg := fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st)
+ panic(externalPanic(msg))
+ }
+ }
+ tag = fields[0]
+ }
+
+ if inline {
+ switch field.Type.Kind() {
+ case reflect.Map:
+ if inlineMap >= 0 {
+ return nil, errors.New("Multiple ,inline maps in struct " + st.String())
+ }
+ if field.Type.Key() != reflect.TypeOf("") {
+ return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String())
+ }
+ inlineMap = info.Num
+ case reflect.Struct:
+ sinfo, err := getStructInfo(field.Type)
+ if err != nil {
+ return nil, err
+ }
+ for _, finfo := range sinfo.FieldsList {
+ if _, found := fieldsMap[finfo.Key]; found {
+ msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String()
+ return nil, errors.New(msg)
+ }
+ if finfo.Inline == nil {
+ finfo.Inline = []int{i, finfo.Num}
+ } else {
+ finfo.Inline = append([]int{i}, finfo.Inline...)
+ }
+ fieldsMap[finfo.Key] = finfo
+ fieldsList = append(fieldsList, finfo)
+ }
+ default:
+ panic("Option ,inline needs a struct value or map field")
+ }
+ continue
+ }
+
+ if tag != "" {
+ info.Key = tag
+ } else {
+ info.Key = strings.ToLower(field.Name)
+ }
+
+ if _, found = fieldsMap[info.Key]; found {
+ msg := "Duplicated key '" + info.Key + "' in struct " + st.String()
+ return nil, errors.New(msg)
+ }
+
+ fieldsList = append(fieldsList, info)
+ fieldsMap[info.Key] = info
+ }
+ sinfo = &structInfo{
+ fieldsMap,
+ fieldsList,
+ inlineMap,
+ reflect.New(st).Elem(),
+ }
+ structMapMutex.Lock()
+ structMap[st] = sinfo
+ structMapMutex.Unlock()
+ return sinfo, nil
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/bson_test.go b/vendor/gopkg.in/mgo.v2/bson/bson_test.go
new file mode 100644
index 0000000..37451f9
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/bson_test.go
@@ -0,0 +1,1832 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// gobson - BSON library for Go.
+
+package bson_test
+
+import (
+ "encoding/binary"
+ "encoding/hex"
+ "encoding/json"
+ "encoding/xml"
+ "errors"
+ "net/url"
+ "reflect"
+ "strings"
+ "testing"
+ "time"
+
+ . "gopkg.in/check.v1"
+ "gopkg.in/mgo.v2/bson"
+ "gopkg.in/yaml.v2"
+)
+
+func TestAll(t *testing.T) {
+ TestingT(t)
+}
+
+type S struct{}
+
+var _ = Suite(&S{})
+
+// Wrap up the document elements contained in data, prepending the int32
+// length of the data, and appending the '\x00' value closing the document.
+func wrapInDoc(data string) string {
+ result := make([]byte, len(data)+5)
+ binary.LittleEndian.PutUint32(result, uint32(len(result)))
+ copy(result[4:], []byte(data))
+ return string(result)
+}
+
+func makeZeroDoc(value interface{}) (zero interface{}) {
+ v := reflect.ValueOf(value)
+ t := v.Type()
+ switch t.Kind() {
+ case reflect.Map:
+ mv := reflect.MakeMap(t)
+ zero = mv.Interface()
+ case reflect.Ptr:
+ pv := reflect.New(v.Type().Elem())
+ zero = pv.Interface()
+ case reflect.Slice, reflect.Int, reflect.Int64, reflect.Struct:
+ zero = reflect.New(t).Interface()
+ default:
+ panic("unsupported doc type: " + t.Name())
+ }
+ return zero
+}
+
+func testUnmarshal(c *C, data string, obj interface{}) {
+ zero := makeZeroDoc(obj)
+ err := bson.Unmarshal([]byte(data), zero)
+ c.Assert(err, IsNil)
+ c.Assert(zero, DeepEquals, obj)
+}
+
+type testItemType struct {
+ obj interface{}
+ data string
+}
+
+// --------------------------------------------------------------------------
+// Samples from bsonspec.org:
+
+var sampleItems = []testItemType{
+ {bson.M{"hello": "world"},
+ "\x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00"},
+ {bson.M{"BSON": []interface{}{"awesome", float64(5.05), 1986}},
+ "1\x00\x00\x00\x04BSON\x00&\x00\x00\x00\x020\x00\x08\x00\x00\x00" +
+ "awesome\x00\x011\x00333333\x14@\x102\x00\xc2\x07\x00\x00\x00\x00"},
+}
+
+func (s *S) TestMarshalSampleItems(c *C) {
+ for i, item := range sampleItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, item.data, Commentf("Failed on item %d", i))
+ }
+}
+
+func (s *S) TestUnmarshalSampleItems(c *C) {
+ for i, item := range sampleItems {
+ value := bson.M{}
+ err := bson.Unmarshal([]byte(item.data), value)
+ c.Assert(err, IsNil)
+ c.Assert(value, DeepEquals, item.obj, Commentf("Failed on item %d", i))
+ }
+}
+
+// --------------------------------------------------------------------------
+// Every type, ordered by the type flag. These are not wrapped with the
+// length and last \x00 from the document. wrapInDoc() computes them.
+// Note that all of them should be supported as two-way conversions.
+
+var allItems = []testItemType{
+ {bson.M{},
+ ""},
+ {bson.M{"_": float64(5.05)},
+ "\x01_\x00333333\x14@"},
+ {bson.M{"_": "yo"},
+ "\x02_\x00\x03\x00\x00\x00yo\x00"},
+ {bson.M{"_": bson.M{"a": true}},
+ "\x03_\x00\x09\x00\x00\x00\x08a\x00\x01\x00"},
+ {bson.M{"_": []interface{}{true, false}},
+ "\x04_\x00\r\x00\x00\x00\x080\x00\x01\x081\x00\x00\x00"},
+ {bson.M{"_": []byte("yo")},
+ "\x05_\x00\x02\x00\x00\x00\x00yo"},
+ {bson.M{"_": bson.Binary{0x80, []byte("udef")}},
+ "\x05_\x00\x04\x00\x00\x00\x80udef"},
+ {bson.M{"_": bson.Undefined}, // Obsolete, but still seen in the wild.
+ "\x06_\x00"},
+ {bson.M{"_": bson.ObjectId("0123456789ab")},
+ "\x07_\x000123456789ab"},
+ {bson.M{"_": bson.DBPointer{"testnamespace", bson.ObjectId("0123456789ab")}},
+ "\x0C_\x00\x0e\x00\x00\x00testnamespace\x000123456789ab"},
+ {bson.M{"_": false},
+ "\x08_\x00\x00"},
+ {bson.M{"_": true},
+ "\x08_\x00\x01"},
+ {bson.M{"_": time.Unix(0, 258e6)}, // Note the NS <=> MS conversion.
+ "\x09_\x00\x02\x01\x00\x00\x00\x00\x00\x00"},
+ {bson.M{"_": nil},
+ "\x0A_\x00"},
+ {bson.M{"_": bson.RegEx{"ab", "cd"}},
+ "\x0B_\x00ab\x00cd\x00"},
+ {bson.M{"_": bson.JavaScript{"code", nil}},
+ "\x0D_\x00\x05\x00\x00\x00code\x00"},
+ {bson.M{"_": bson.Symbol("sym")},
+ "\x0E_\x00\x04\x00\x00\x00sym\x00"},
+ {bson.M{"_": bson.JavaScript{"code", bson.M{"": nil}}},
+ "\x0F_\x00\x14\x00\x00\x00\x05\x00\x00\x00code\x00" +
+ "\x07\x00\x00\x00\x0A\x00\x00"},
+ {bson.M{"_": 258},
+ "\x10_\x00\x02\x01\x00\x00"},
+ {bson.M{"_": bson.MongoTimestamp(258)},
+ "\x11_\x00\x02\x01\x00\x00\x00\x00\x00\x00"},
+ {bson.M{"_": int64(258)},
+ "\x12_\x00\x02\x01\x00\x00\x00\x00\x00\x00"},
+ {bson.M{"_": int64(258 << 32)},
+ "\x12_\x00\x00\x00\x00\x00\x02\x01\x00\x00"},
+ {bson.M{"_": bson.MaxKey},
+ "\x7F_\x00"},
+ {bson.M{"_": bson.MinKey},
+ "\xFF_\x00"},
+}
+
+func (s *S) TestMarshalAllItems(c *C) {
+ for i, item := range allItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc(item.data), Commentf("Failed on item %d: %#v", i, item))
+ }
+}
+
+func (s *S) TestUnmarshalAllItems(c *C) {
+ for i, item := range allItems {
+ value := bson.M{}
+ err := bson.Unmarshal([]byte(wrapInDoc(item.data)), value)
+ c.Assert(err, IsNil)
+ c.Assert(value, DeepEquals, item.obj, Commentf("Failed on item %d: %#v", i, item))
+ }
+}
+
+func (s *S) TestUnmarshalRawAllItems(c *C) {
+ for i, item := range allItems {
+ if len(item.data) == 0 {
+ continue
+ }
+ value := item.obj.(bson.M)["_"]
+ if value == nil {
+ continue
+ }
+ pv := reflect.New(reflect.ValueOf(value).Type())
+ raw := bson.Raw{item.data[0], []byte(item.data[3:])}
+ c.Logf("Unmarshal raw: %#v, %#v", raw, pv.Interface())
+ err := raw.Unmarshal(pv.Interface())
+ c.Assert(err, IsNil)
+ c.Assert(pv.Elem().Interface(), DeepEquals, value, Commentf("Failed on item %d: %#v", i, item))
+ }
+}
+
+func (s *S) TestUnmarshalRawIncompatible(c *C) {
+ raw := bson.Raw{0x08, []byte{0x01}} // true
+ err := raw.Unmarshal(&struct{}{})
+ c.Assert(err, ErrorMatches, "BSON kind 0x08 isn't compatible with type struct \\{\\}")
+}
+
+func (s *S) TestUnmarshalZeroesStruct(c *C) {
+ data, err := bson.Marshal(bson.M{"b": 2})
+ c.Assert(err, IsNil)
+ type T struct{ A, B int }
+ v := T{A: 1}
+ err = bson.Unmarshal(data, &v)
+ c.Assert(err, IsNil)
+ c.Assert(v.A, Equals, 0)
+ c.Assert(v.B, Equals, 2)
+}
+
+func (s *S) TestUnmarshalZeroesMap(c *C) {
+ data, err := bson.Marshal(bson.M{"b": 2})
+ c.Assert(err, IsNil)
+ m := bson.M{"a": 1}
+ err = bson.Unmarshal(data, &m)
+ c.Assert(err, IsNil)
+ c.Assert(m, DeepEquals, bson.M{"b": 2})
+}
+
+func (s *S) TestUnmarshalNonNilInterface(c *C) {
+ data, err := bson.Marshal(bson.M{"b": 2})
+ c.Assert(err, IsNil)
+ m := bson.M{"a": 1}
+ var i interface{}
+ i = m
+ err = bson.Unmarshal(data, &i)
+ c.Assert(err, IsNil)
+ c.Assert(i, DeepEquals, bson.M{"b": 2})
+ c.Assert(m, DeepEquals, bson.M{"a": 1})
+}
+
+// --------------------------------------------------------------------------
+// Some one-way marshaling operations which would unmarshal differently.
+
+var oneWayMarshalItems = []testItemType{
+ // These are being passed as pointers, and will unmarshal as values.
+ {bson.M{"": &bson.Binary{0x02, []byte("old")}},
+ "\x05\x00\x07\x00\x00\x00\x02\x03\x00\x00\x00old"},
+ {bson.M{"": &bson.Binary{0x80, []byte("udef")}},
+ "\x05\x00\x04\x00\x00\x00\x80udef"},
+ {bson.M{"": &bson.RegEx{"ab", "cd"}},
+ "\x0B\x00ab\x00cd\x00"},
+ {bson.M{"": &bson.JavaScript{"code", nil}},
+ "\x0D\x00\x05\x00\x00\x00code\x00"},
+ {bson.M{"": &bson.JavaScript{"code", bson.M{"": nil}}},
+ "\x0F\x00\x14\x00\x00\x00\x05\x00\x00\x00code\x00" +
+ "\x07\x00\x00\x00\x0A\x00\x00"},
+
+ // There's no float32 type in BSON. Will encode as a float64.
+ {bson.M{"": float32(5.05)},
+ "\x01\x00\x00\x00\x00@33\x14@"},
+
+ // The array will be unmarshaled as a slice instead.
+ {bson.M{"": [2]bool{true, false}},
+ "\x04\x00\r\x00\x00\x00\x080\x00\x01\x081\x00\x00\x00"},
+
+ // The typed slice will be unmarshaled as []interface{}.
+ {bson.M{"": []bool{true, false}},
+ "\x04\x00\r\x00\x00\x00\x080\x00\x01\x081\x00\x00\x00"},
+
+ // Will unmarshal as a []byte.
+ {bson.M{"": bson.Binary{0x00, []byte("yo")}},
+ "\x05\x00\x02\x00\x00\x00\x00yo"},
+ {bson.M{"": bson.Binary{0x02, []byte("old")}},
+ "\x05\x00\x07\x00\x00\x00\x02\x03\x00\x00\x00old"},
+
+ // No way to preserve the type information here. We might encode as a zero
+ // value, but this would mean that pointer values in structs wouldn't be
+ // able to correctly distinguish between unset and set to the zero value.
+ {bson.M{"": (*byte)(nil)},
+ "\x0A\x00"},
+
+ // No int types smaller than int32 in BSON. Could encode this as a char,
+// but it would still be ambiguous, take more space, and be awkward in Go when
+ // loaded without typing information.
+ {bson.M{"": byte(8)},
+ "\x10\x00\x08\x00\x00\x00"},
+
+ // There are no unsigned types in BSON. Will unmarshal as int32 or int64.
+ {bson.M{"": uint32(258)},
+ "\x10\x00\x02\x01\x00\x00"},
+ {bson.M{"": uint64(258)},
+ "\x12\x00\x02\x01\x00\x00\x00\x00\x00\x00"},
+ {bson.M{"": uint64(258 << 32)},
+ "\x12\x00\x00\x00\x00\x00\x02\x01\x00\x00"},
+
+ // This will unmarshal as int.
+ {bson.M{"": int32(258)},
+ "\x10\x00\x02\x01\x00\x00"},
+
+ // That's a special case. The unsigned value is too large for an int32,
+ // so an int64 is used instead.
+ {bson.M{"": uint32(1<<32 - 1)},
+ "\x12\x00\xFF\xFF\xFF\xFF\x00\x00\x00\x00"},
+ {bson.M{"": uint(1<<32 - 1)},
+ "\x12\x00\xFF\xFF\xFF\xFF\x00\x00\x00\x00"},
+}
+
+func (s *S) TestOneWayMarshalItems(c *C) {
+ for i, item := range oneWayMarshalItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc(item.data),
+ Commentf("Failed on item %d", i))
+ }
+}
+
+// --------------------------------------------------------------------------
+// Two-way tests for user-defined structures using the samples
+// from bsonspec.org.
+
+type specSample1 struct {
+ Hello string
+}
+
+type specSample2 struct {
+ BSON []interface{} "BSON"
+}
+
+var structSampleItems = []testItemType{
+ {&specSample1{"world"},
+ "\x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00"},
+ {&specSample2{[]interface{}{"awesome", float64(5.05), 1986}},
+ "1\x00\x00\x00\x04BSON\x00&\x00\x00\x00\x020\x00\x08\x00\x00\x00" +
+ "awesome\x00\x011\x00333333\x14@\x102\x00\xc2\x07\x00\x00\x00\x00"},
+}
+
+func (s *S) TestMarshalStructSampleItems(c *C) {
+ for i, item := range structSampleItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, item.data,
+ Commentf("Failed on item %d", i))
+ }
+}
+
+func (s *S) TestUnmarshalStructSampleItems(c *C) {
+ for _, item := range structSampleItems {
+ testUnmarshal(c, item.data, item.obj)
+ }
+}
+
+func (s *S) Test64bitInt(c *C) {
+ var i int64 = (1 << 31)
+ if int(i) > 0 {
+ data, err := bson.Marshal(bson.M{"i": int(i)})
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc("\x12i\x00\x00\x00\x00\x80\x00\x00\x00\x00"))
+
+ var result struct{ I int }
+ err = bson.Unmarshal(data, &result)
+ c.Assert(err, IsNil)
+ c.Assert(int64(result.I), Equals, i)
+ }
+}
+
+// --------------------------------------------------------------------------
+// Generic two-way struct marshaling tests.
+
+var bytevar = byte(8)
+var byteptr = &bytevar
+
+var structItems = []testItemType{
+ {&struct{ Ptr *byte }{nil},
+ "\x0Aptr\x00"},
+ {&struct{ Ptr *byte }{&bytevar},
+ "\x10ptr\x00\x08\x00\x00\x00"},
+ {&struct{ Ptr **byte }{&byteptr},
+ "\x10ptr\x00\x08\x00\x00\x00"},
+ {&struct{ Byte byte }{8},
+ "\x10byte\x00\x08\x00\x00\x00"},
+ {&struct{ Byte byte }{0},
+ "\x10byte\x00\x00\x00\x00\x00"},
+ {&struct {
+ V byte "Tag"
+ }{8},
+ "\x10Tag\x00\x08\x00\x00\x00"},
+ {&struct {
+ V *struct {
+ Byte byte
+ }
+ }{&struct{ Byte byte }{8}},
+ "\x03v\x00" + "\x0f\x00\x00\x00\x10byte\x00\b\x00\x00\x00\x00"},
+ {&struct{ priv byte }{}, ""},
+
+	// The order of the dumped fields should match their order in the struct.
+ {&struct{ A, C, B, D, F, E *byte }{},
+ "\x0Aa\x00\x0Ac\x00\x0Ab\x00\x0Ad\x00\x0Af\x00\x0Ae\x00"},
+
+ {&struct{ V bson.Raw }{bson.Raw{0x03, []byte("\x0f\x00\x00\x00\x10byte\x00\b\x00\x00\x00\x00")}},
+ "\x03v\x00" + "\x0f\x00\x00\x00\x10byte\x00\b\x00\x00\x00\x00"},
+ {&struct{ V bson.Raw }{bson.Raw{0x10, []byte("\x00\x00\x00\x00")}},
+ "\x10v\x00" + "\x00\x00\x00\x00"},
+
+ // Byte arrays.
+ {&struct{ V [2]byte }{[2]byte{'y', 'o'}},
+ "\x05v\x00\x02\x00\x00\x00\x00yo"},
+}
+
+func (s *S) TestMarshalStructItems(c *C) {
+ for i, item := range structItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc(item.data),
+ Commentf("Failed on item %d", i))
+ }
+}
+
+func (s *S) TestUnmarshalStructItems(c *C) {
+ for _, item := range structItems {
+ testUnmarshal(c, wrapInDoc(item.data), item.obj)
+ }
+}
+
+func (s *S) TestUnmarshalRawStructItems(c *C) {
+ for i, item := range structItems {
+ raw := bson.Raw{0x03, []byte(wrapInDoc(item.data))}
+ zero := makeZeroDoc(item.obj)
+ err := raw.Unmarshal(zero)
+ c.Assert(err, IsNil)
+ c.Assert(zero, DeepEquals, item.obj, Commentf("Failed on item %d: %#v", i, item))
+ }
+}
+
+func (s *S) TestUnmarshalRawNil(c *C) {
+ // Regression test: shouldn't try to nil out the pointer itself,
+ // as it's not settable.
+ raw := bson.Raw{0x0A, []byte{}}
+ err := raw.Unmarshal(&struct{}{})
+ c.Assert(err, IsNil)
+}
+
+// --------------------------------------------------------------------------
+// One-way marshaling tests.
+
+type dOnIface struct {
+ D interface{}
+}
+
+type ignoreField struct {
+ Before string
+ Ignore string `bson:"-"`
+ After string
+}
+
+var marshalItems = []testItemType{
+ // Ordered document dump. Will unmarshal as a dictionary by default.
+ {bson.D{{"a", nil}, {"c", nil}, {"b", nil}, {"d", nil}, {"f", nil}, {"e", true}},
+ "\x0Aa\x00\x0Ac\x00\x0Ab\x00\x0Ad\x00\x0Af\x00\x08e\x00\x01"},
+ {MyD{{"a", nil}, {"c", nil}, {"b", nil}, {"d", nil}, {"f", nil}, {"e", true}},
+ "\x0Aa\x00\x0Ac\x00\x0Ab\x00\x0Ad\x00\x0Af\x00\x08e\x00\x01"},
+ {&dOnIface{bson.D{{"a", nil}, {"c", nil}, {"b", nil}, {"d", true}}},
+ "\x03d\x00" + wrapInDoc("\x0Aa\x00\x0Ac\x00\x0Ab\x00\x08d\x00\x01")},
+
+ {bson.RawD{{"a", bson.Raw{0x0A, nil}}, {"c", bson.Raw{0x0A, nil}}, {"b", bson.Raw{0x08, []byte{0x01}}}},
+ "\x0Aa\x00" + "\x0Ac\x00" + "\x08b\x00\x01"},
+ {MyRawD{{"a", bson.Raw{0x0A, nil}}, {"c", bson.Raw{0x0A, nil}}, {"b", bson.Raw{0x08, []byte{0x01}}}},
+ "\x0Aa\x00" + "\x0Ac\x00" + "\x08b\x00\x01"},
+ {&dOnIface{bson.RawD{{"a", bson.Raw{0x0A, nil}}, {"c", bson.Raw{0x0A, nil}}, {"b", bson.Raw{0x08, []byte{0x01}}}}},
+ "\x03d\x00" + wrapInDoc("\x0Aa\x00"+"\x0Ac\x00"+"\x08b\x00\x01")},
+
+ {&ignoreField{"before", "ignore", "after"},
+ "\x02before\x00\a\x00\x00\x00before\x00\x02after\x00\x06\x00\x00\x00after\x00"},
+
+ // Marshalling a Raw document does nothing.
+ {bson.Raw{0x03, []byte(wrapInDoc("anything"))},
+ "anything"},
+ {bson.Raw{Data: []byte(wrapInDoc("anything"))},
+ "anything"},
+}
+
+func (s *S) TestMarshalOneWayItems(c *C) {
+ for _, item := range marshalItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc(item.data))
+ }
+}
+
+// --------------------------------------------------------------------------
+// One-way unmarshaling tests.
+
+var unmarshalItems = []testItemType{
+ // Field is private. Should not attempt to unmarshal it.
+ {&struct{ priv byte }{},
+ "\x10priv\x00\x08\x00\x00\x00"},
+
+ // Wrong casing. Field names are lowercased.
+ {&struct{ Byte byte }{},
+ "\x10Byte\x00\x08\x00\x00\x00"},
+
+ // Ignore non-existing field.
+ {&struct{ Byte byte }{9},
+ "\x10boot\x00\x08\x00\x00\x00" + "\x10byte\x00\x09\x00\x00\x00"},
+
+ // Do not unmarshal on ignored field.
+ {&ignoreField{"before", "", "after"},
+ "\x02before\x00\a\x00\x00\x00before\x00" +
+ "\x02-\x00\a\x00\x00\x00ignore\x00" +
+ "\x02after\x00\x06\x00\x00\x00after\x00"},
+
+ // Ignore unsuitable types silently.
+ {map[string]string{"str": "s"},
+ "\x02str\x00\x02\x00\x00\x00s\x00" + "\x10int\x00\x01\x00\x00\x00"},
+ {map[string][]int{"array": []int{5, 9}},
+ "\x04array\x00" + wrapInDoc("\x100\x00\x05\x00\x00\x00"+"\x021\x00\x02\x00\x00\x00s\x00"+"\x102\x00\x09\x00\x00\x00")},
+
+ // Wrong type. Shouldn't init pointer.
+ {&struct{ Str *byte }{},
+ "\x02str\x00\x02\x00\x00\x00s\x00"},
+ {&struct{ Str *struct{ Str string } }{},
+ "\x02str\x00\x02\x00\x00\x00s\x00"},
+
+ // Ordered document.
+ {&struct{ bson.D }{bson.D{{"a", nil}, {"c", nil}, {"b", nil}, {"d", true}}},
+ "\x03d\x00" + wrapInDoc("\x0Aa\x00\x0Ac\x00\x0Ab\x00\x08d\x00\x01")},
+
+ // Raw document.
+ {&bson.Raw{0x03, []byte(wrapInDoc("\x10byte\x00\x08\x00\x00\x00"))},
+ "\x10byte\x00\x08\x00\x00\x00"},
+
+ // RawD document.
+ {&struct{ bson.RawD }{bson.RawD{{"a", bson.Raw{0x0A, []byte{}}}, {"c", bson.Raw{0x0A, []byte{}}}, {"b", bson.Raw{0x08, []byte{0x01}}}}},
+ "\x03rawd\x00" + wrapInDoc("\x0Aa\x00\x0Ac\x00\x08b\x00\x01")},
+
+ // Decode old binary.
+ {bson.M{"_": []byte("old")},
+ "\x05_\x00\x07\x00\x00\x00\x02\x03\x00\x00\x00old"},
+
+ // Decode old binary without length. According to the spec, this shouldn't happen.
+ {bson.M{"_": []byte("old")},
+ "\x05_\x00\x03\x00\x00\x00\x02old"},
+
+	// Decode a doc within a doc into a slice within a doc; shouldn't error.
+ {&struct{ Foo []string }{},
+ "\x03\x66\x6f\x6f\x00\x05\x00\x00\x00\x00"},
+}
+
+func (s *S) TestUnmarshalOneWayItems(c *C) {
+ for _, item := range unmarshalItems {
+ testUnmarshal(c, wrapInDoc(item.data), item.obj)
+ }
+}
+
+func (s *S) TestUnmarshalNilInStruct(c *C) {
+ // Nil is the default value, so we need to ensure it's indeed being set.
+ b := byte(1)
+ v := &struct{ Ptr *byte }{&b}
+ err := bson.Unmarshal([]byte(wrapInDoc("\x0Aptr\x00")), v)
+ c.Assert(err, IsNil)
+ c.Assert(v, DeepEquals, &struct{ Ptr *byte }{nil})
+}
+
+// --------------------------------------------------------------------------
+// Marshalling error cases.
+
+type structWithDupKeys struct {
+ Name byte
+ Other byte "name" // Tag should precede.
+}
+
+var marshalErrorItems = []testItemType{
+ {bson.M{"": uint64(1 << 63)},
+ "BSON has no uint64 type, and value is too large to fit correctly in an int64"},
+ {bson.M{"": bson.ObjectId("tooshort")},
+ "ObjectIDs must be exactly 12 bytes long \\(got 8\\)"},
+ {int64(123),
+ "Can't marshal int64 as a BSON document"},
+ {bson.M{"": 1i},
+ "Can't marshal complex128 in a BSON document"},
+ {&structWithDupKeys{},
+ "Duplicated key 'name' in struct bson_test.structWithDupKeys"},
+ {bson.Raw{0xA, []byte{}},
+ "Attempted to marshal Raw kind 10 as a document"},
+ {bson.Raw{0x3, []byte{}},
+ "Attempted to marshal empty Raw document"},
+ {bson.M{"w": bson.Raw{0x3, []byte{}}},
+ "Attempted to marshal empty Raw document"},
+ {&inlineCantPtr{&struct{ A, B int }{1, 2}},
+ "Option ,inline needs a struct value or map field"},
+ {&inlineDupName{1, struct{ A, B int }{2, 3}},
+ "Duplicated key 'a' in struct bson_test.inlineDupName"},
+ {&inlineDupMap{},
+ "Multiple ,inline maps in struct bson_test.inlineDupMap"},
+ {&inlineBadKeyMap{},
+ "Option ,inline needs a map with string keys in struct bson_test.inlineBadKeyMap"},
+ {&inlineMap{A: 1, M: map[string]interface{}{"a": 1}},
+ `Can't have key "a" in inlined map; conflicts with struct field`},
+}
+
+func (s *S) TestMarshalErrorItems(c *C) {
+ for _, item := range marshalErrorItems {
+ data, err := bson.Marshal(item.obj)
+ c.Assert(err, ErrorMatches, item.data)
+ c.Assert(data, IsNil)
+ }
+}
+
+// --------------------------------------------------------------------------
+// Unmarshalling error cases.
+
+type unmarshalErrorType struct {
+ obj interface{}
+ data string
+ error string
+}
+
+var unmarshalErrorItems = []unmarshalErrorType{
+ // Tag name conflicts with existing parameter.
+ {&structWithDupKeys{},
+ "\x10name\x00\x08\x00\x00\x00",
+ "Duplicated key 'name' in struct bson_test.structWithDupKeys"},
+
+ // Non-string map key.
+ {map[int]interface{}{},
+ "\x10name\x00\x08\x00\x00\x00",
+ "BSON map must have string keys. Got: map\\[int\\]interface \\{\\}"},
+
+ {nil,
+ "\xEEname\x00",
+ "Unknown element kind \\(0xEE\\)"},
+
+ {struct{ Name bool }{},
+ "\x10name\x00\x08\x00\x00\x00",
+ "Unmarshal can't deal with struct values. Use a pointer."},
+
+ {123,
+ "\x10name\x00\x08\x00\x00\x00",
+ "Unmarshal needs a map or a pointer to a struct."},
+
+ {nil,
+ "\x08\x62\x00\x02",
+ "encoded boolean must be 1 or 0, found 2"},
+}
+
+func (s *S) TestUnmarshalErrorItems(c *C) {
+ for _, item := range unmarshalErrorItems {
+ data := []byte(wrapInDoc(item.data))
+ var value interface{}
+ switch reflect.ValueOf(item.obj).Kind() {
+ case reflect.Map, reflect.Ptr:
+ value = makeZeroDoc(item.obj)
+ case reflect.Invalid:
+ value = bson.M{}
+ default:
+ value = item.obj
+ }
+ err := bson.Unmarshal(data, value)
+ c.Assert(err, ErrorMatches, item.error)
+ }
+}
+
+type unmarshalRawErrorType struct {
+ obj interface{}
+ raw bson.Raw
+ error string
+}
+
+var unmarshalRawErrorItems = []unmarshalRawErrorType{
+ // Tag name conflicts with existing parameter.
+ {&structWithDupKeys{},
+ bson.Raw{0x03, []byte("\x10byte\x00\x08\x00\x00\x00")},
+ "Duplicated key 'name' in struct bson_test.structWithDupKeys"},
+
+ {&struct{}{},
+ bson.Raw{0xEE, []byte{}},
+ "Unknown element kind \\(0xEE\\)"},
+
+ {struct{ Name bool }{},
+ bson.Raw{0x10, []byte("\x08\x00\x00\x00")},
+ "Raw Unmarshal can't deal with struct values. Use a pointer."},
+
+ {123,
+ bson.Raw{0x10, []byte("\x08\x00\x00\x00")},
+ "Raw Unmarshal needs a map or a valid pointer."},
+}
+
+func (s *S) TestUnmarshalRawErrorItems(c *C) {
+ for i, item := range unmarshalRawErrorItems {
+ err := item.raw.Unmarshal(item.obj)
+ c.Assert(err, ErrorMatches, item.error, Commentf("Failed on item %d: %#v\n", i, item))
+ }
+}
+
+var corruptedData = []string{
+ "\x04\x00\x00\x00\x00", // Document shorter than minimum
+ "\x06\x00\x00\x00\x00", // Not enough data
+ "\x05\x00\x00", // Broken length
+ "\x05\x00\x00\x00\xff", // Corrupted termination
+ "\x0A\x00\x00\x00\x0Aooop\x00", // Unfinished C string
+
+ // Array end past end of string (s[2]=0x07 is correct)
+ wrapInDoc("\x04\x00\x09\x00\x00\x00\x0A\x00\x00"),
+
+ // Array end within string, but past acceptable.
+ wrapInDoc("\x04\x00\x08\x00\x00\x00\x0A\x00\x00"),
+
+ // Document end within string, but past acceptable.
+ wrapInDoc("\x03\x00\x08\x00\x00\x00\x0A\x00\x00"),
+
+ // String with corrupted end.
+ wrapInDoc("\x02\x00\x03\x00\x00\x00yo\xFF"),
+
+ // String with negative length (issue #116).
+ "\x0c\x00\x00\x00\x02x\x00\xff\xff\xff\xff\x00",
+
+ // String with zero length (must include trailing '\x00')
+ "\x0c\x00\x00\x00\x02x\x00\x00\x00\x00\x00\x00",
+
+ // Binary with negative length.
+ "\r\x00\x00\x00\x05x\x00\xff\xff\xff\xff\x00\x00",
+}
+
+func (s *S) TestUnmarshalMapDocumentTooShort(c *C) {
+ for _, data := range corruptedData {
+ err := bson.Unmarshal([]byte(data), bson.M{})
+ c.Assert(err, ErrorMatches, "Document is corrupted")
+
+ err = bson.Unmarshal([]byte(data), &struct{}{})
+ c.Assert(err, ErrorMatches, "Document is corrupted")
+ }
+}
+
+// --------------------------------------------------------------------------
+// Setter test cases.
+
+var setterResult = map[string]error{}
+
+type setterType struct {
+ received interface{}
+}
+
+func (o *setterType) SetBSON(raw bson.Raw) error {
+ err := raw.Unmarshal(&o.received)
+ if err != nil {
+		panic("The panic: " + err.Error())
+ }
+ if s, ok := o.received.(string); ok {
+ if result, ok := setterResult[s]; ok {
+ return result
+ }
+ }
+ return nil
+}
+
+type ptrSetterDoc struct {
+ Field *setterType "_"
+}
+
+type valSetterDoc struct {
+ Field setterType "_"
+}
+
+func (s *S) TestUnmarshalAllItemsWithPtrSetter(c *C) {
+ for _, item := range allItems {
+ for i := 0; i != 2; i++ {
+ var field *setterType
+ if i == 0 {
+ obj := &ptrSetterDoc{}
+ err := bson.Unmarshal([]byte(wrapInDoc(item.data)), obj)
+ c.Assert(err, IsNil)
+ field = obj.Field
+ } else {
+ obj := &valSetterDoc{}
+ err := bson.Unmarshal([]byte(wrapInDoc(item.data)), obj)
+ c.Assert(err, IsNil)
+ field = &obj.Field
+ }
+ if item.data == "" {
+ // Nothing to unmarshal. Should be untouched.
+ if i == 0 {
+ c.Assert(field, IsNil)
+ } else {
+ c.Assert(field.received, IsNil)
+ }
+ } else {
+ expected := item.obj.(bson.M)["_"]
+ c.Assert(field, NotNil, Commentf("Pointer not initialized (%#v)", expected))
+ c.Assert(field.received, DeepEquals, expected)
+ }
+ }
+ }
+}
+
+func (s *S) TestUnmarshalWholeDocumentWithSetter(c *C) {
+ obj := &setterType{}
+ err := bson.Unmarshal([]byte(sampleItems[0].data), obj)
+ c.Assert(err, IsNil)
+ c.Assert(obj.received, DeepEquals, bson.M{"hello": "world"})
+}
+
+func (s *S) TestUnmarshalSetterOmits(c *C) {
+ setterResult["2"] = &bson.TypeError{}
+ setterResult["4"] = &bson.TypeError{}
+ defer func() {
+ delete(setterResult, "2")
+ delete(setterResult, "4")
+ }()
+
+ m := map[string]*setterType{}
+ data := wrapInDoc("\x02abc\x00\x02\x00\x00\x001\x00" +
+ "\x02def\x00\x02\x00\x00\x002\x00" +
+ "\x02ghi\x00\x02\x00\x00\x003\x00" +
+ "\x02jkl\x00\x02\x00\x00\x004\x00")
+ err := bson.Unmarshal([]byte(data), m)
+ c.Assert(err, IsNil)
+ c.Assert(m["abc"], NotNil)
+ c.Assert(m["def"], IsNil)
+ c.Assert(m["ghi"], NotNil)
+ c.Assert(m["jkl"], IsNil)
+
+ c.Assert(m["abc"].received, Equals, "1")
+ c.Assert(m["ghi"].received, Equals, "3")
+}
+
+func (s *S) TestUnmarshalSetterErrors(c *C) {
+ boom := errors.New("BOOM")
+ setterResult["2"] = boom
+ defer delete(setterResult, "2")
+
+ m := map[string]*setterType{}
+ data := wrapInDoc("\x02abc\x00\x02\x00\x00\x001\x00" +
+ "\x02def\x00\x02\x00\x00\x002\x00" +
+ "\x02ghi\x00\x02\x00\x00\x003\x00")
+ err := bson.Unmarshal([]byte(data), m)
+ c.Assert(err, Equals, boom)
+ c.Assert(m["abc"], NotNil)
+ c.Assert(m["def"], IsNil)
+ c.Assert(m["ghi"], IsNil)
+
+ c.Assert(m["abc"].received, Equals, "1")
+}
+
+func (s *S) TestDMap(c *C) {
+ d := bson.D{{"a", 1}, {"b", 2}}
+ c.Assert(d.Map(), DeepEquals, bson.M{"a": 1, "b": 2})
+}
+
+func (s *S) TestUnmarshalSetterSetZero(c *C) {
+ setterResult["foo"] = bson.SetZero
+	defer delete(setterResult, "foo")
+
+ data, err := bson.Marshal(bson.M{"field": "foo"})
+ c.Assert(err, IsNil)
+
+ m := map[string]*setterType{}
+ err = bson.Unmarshal([]byte(data), m)
+ c.Assert(err, IsNil)
+
+ value, ok := m["field"]
+ c.Assert(ok, Equals, true)
+ c.Assert(value, IsNil)
+}
+
+// --------------------------------------------------------------------------
+// Getter test cases.
+
+type typeWithGetter struct {
+ result interface{}
+ err error
+}
+
+func (t *typeWithGetter) GetBSON() (interface{}, error) {
+ if t == nil {
+ return "", nil
+ }
+ return t.result, t.err
+}
+
+type docWithGetterField struct {
+ Field *typeWithGetter "_"
+}
+
+func (s *S) TestMarshalAllItemsWithGetter(c *C) {
+ for i, item := range allItems {
+ if item.data == "" {
+ continue
+ }
+ obj := &docWithGetterField{}
+ obj.Field = &typeWithGetter{result: item.obj.(bson.M)["_"]}
+ data, err := bson.Marshal(obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, wrapInDoc(item.data),
+ Commentf("Failed on item #%d", i))
+ }
+}
+
+func (s *S) TestMarshalWholeDocumentWithGetter(c *C) {
+ obj := &typeWithGetter{result: sampleItems[0].obj}
+ data, err := bson.Marshal(obj)
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, sampleItems[0].data)
+}
+
+func (s *S) TestGetterErrors(c *C) {
+ e := errors.New("oops")
+
+ obj1 := &docWithGetterField{}
+ obj1.Field = &typeWithGetter{sampleItems[0].obj, e}
+ data, err := bson.Marshal(obj1)
+ c.Assert(err, ErrorMatches, "oops")
+ c.Assert(data, IsNil)
+
+ obj2 := &typeWithGetter{sampleItems[0].obj, e}
+ data, err = bson.Marshal(obj2)
+ c.Assert(err, ErrorMatches, "oops")
+ c.Assert(data, IsNil)
+}
+
+type intGetter int64
+
+func (t intGetter) GetBSON() (interface{}, error) {
+ return int64(t), nil
+}
+
+type typeWithIntGetter struct {
+ V intGetter ",minsize"
+}
+
+func (s *S) TestMarshalShortWithGetter(c *C) {
+ obj := typeWithIntGetter{42}
+ data, err := bson.Marshal(obj)
+ c.Assert(err, IsNil)
+ m := bson.M{}
+ err = bson.Unmarshal(data, m)
+ c.Assert(err, IsNil)
+ c.Assert(m["v"], Equals, 42)
+}
+
+func (s *S) TestMarshalWithGetterNil(c *C) {
+ obj := docWithGetterField{}
+ data, err := bson.Marshal(obj)
+ c.Assert(err, IsNil)
+ m := bson.M{}
+ err = bson.Unmarshal(data, m)
+ c.Assert(err, IsNil)
+ c.Assert(m, DeepEquals, bson.M{"_": ""})
+}
+
+// --------------------------------------------------------------------------
+// Cross-type conversion tests.
+
+type crossTypeItem struct {
+ obj1 interface{}
+ obj2 interface{}
+}
+
+type condStr struct {
+ V string ",omitempty"
+}
+type condStrNS struct {
+ V string `a:"A" bson:",omitempty" b:"B"`
+}
+type condBool struct {
+ V bool ",omitempty"
+}
+type condInt struct {
+ V int ",omitempty"
+}
+type condUInt struct {
+ V uint ",omitempty"
+}
+type condFloat struct {
+ V float64 ",omitempty"
+}
+type condIface struct {
+ V interface{} ",omitempty"
+}
+type condPtr struct {
+ V *bool ",omitempty"
+}
+type condSlice struct {
+ V []string ",omitempty"
+}
+type condMap struct {
+ V map[string]int ",omitempty"
+}
+type namedCondStr struct {
+ V string "myv,omitempty"
+}
+type condTime struct {
+ V time.Time ",omitempty"
+}
+type condStruct struct {
+ V struct{ A []int } ",omitempty"
+}
+type condRaw struct {
+ V bson.Raw ",omitempty"
+}
+
+type shortInt struct {
+ V int64 ",minsize"
+}
+type shortUint struct {
+ V uint64 ",minsize"
+}
+type shortIface struct {
+ V interface{} ",minsize"
+}
+type shortPtr struct {
+ V *int64 ",minsize"
+}
+type shortNonEmptyInt struct {
+ V int64 ",minsize,omitempty"
+}
+
+type inlineInt struct {
+ V struct{ A, B int } ",inline"
+}
+type inlineCantPtr struct {
+ V *struct{ A, B int } ",inline"
+}
+type inlineDupName struct {
+ A int
+ V struct{ A, B int } ",inline"
+}
+type inlineMap struct {
+ A int
+ M map[string]interface{} ",inline"
+}
+type inlineMapInt struct {
+ A int
+ M map[string]int ",inline"
+}
+type inlineMapMyM struct {
+ A int
+ M MyM ",inline"
+}
+type inlineDupMap struct {
+ M1 map[string]interface{} ",inline"
+ M2 map[string]interface{} ",inline"
+}
+type inlineBadKeyMap struct {
+ M map[int]int ",inline"
+}
+type inlineUnexported struct {
+ M map[string]interface{} ",inline"
+ unexported ",inline"
+}
+type unexported struct {
+ A int
+}
+
+type getterSetterD bson.D
+
+func (s getterSetterD) GetBSON() (interface{}, error) {
+ if len(s) == 0 {
+ return bson.D{}, nil
+ }
+ return bson.D(s[:len(s)-1]), nil
+}
+
+func (s *getterSetterD) SetBSON(raw bson.Raw) error {
+ var doc bson.D
+ err := raw.Unmarshal(&doc)
+ doc = append(doc, bson.DocElem{"suffix", true})
+ *s = getterSetterD(doc)
+ return err
+}
+
+type getterSetterInt int
+
+func (i getterSetterInt) GetBSON() (interface{}, error) {
+ return bson.D{{"a", int(i)}}, nil
+}
+
+func (i *getterSetterInt) SetBSON(raw bson.Raw) error {
+ var doc struct{ A int }
+ err := raw.Unmarshal(&doc)
+ *i = getterSetterInt(doc.A)
+ return err
+}
+
+type ifaceType interface {
+ Hello()
+}
+
+type ifaceSlice []ifaceType
+
+func (s *ifaceSlice) SetBSON(raw bson.Raw) error {
+ var ns []int
+ if err := raw.Unmarshal(&ns); err != nil {
+ return err
+ }
+ *s = make(ifaceSlice, ns[0])
+ return nil
+}
+
+func (s ifaceSlice) GetBSON() (interface{}, error) {
+ return []int{len(s)}, nil
+}
+
+type (
+ MyString string
+ MyBytes []byte
+ MyBool bool
+ MyD []bson.DocElem
+ MyRawD []bson.RawDocElem
+ MyM map[string]interface{}
+)
+
+var (
+ truevar = true
+ falsevar = false
+
+ int64var = int64(42)
+ int64ptr = &int64var
+ intvar = int(42)
+ intptr = &intvar
+
+ gsintvar = getterSetterInt(42)
+)
+
+func parseURL(s string) *url.URL {
+ u, err := url.Parse(s)
+ if err != nil {
+ panic(err)
+ }
+ return u
+}
+
+// That's a pretty fun test. It will dump the first item, generate a zero
+// value equivalent to the second one, load the dumped data onto it, and then
+// verify that the resulting value is deep-equal to the untouched second value.
+// Then, it will do the same in the *opposite* direction!
+var twoWayCrossItems = []crossTypeItem{
+ // int<=>int
+ {&struct{ I int }{42}, &struct{ I int8 }{42}},
+ {&struct{ I int }{42}, &struct{ I int32 }{42}},
+ {&struct{ I int }{42}, &struct{ I int64 }{42}},
+ {&struct{ I int8 }{42}, &struct{ I int32 }{42}},
+ {&struct{ I int8 }{42}, &struct{ I int64 }{42}},
+ {&struct{ I int32 }{42}, &struct{ I int64 }{42}},
+
+ // uint<=>uint
+ {&struct{ I uint }{42}, &struct{ I uint8 }{42}},
+ {&struct{ I uint }{42}, &struct{ I uint32 }{42}},
+ {&struct{ I uint }{42}, &struct{ I uint64 }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I uint32 }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I uint64 }{42}},
+ {&struct{ I uint32 }{42}, &struct{ I uint64 }{42}},
+
+ // float32<=>float64
+ {&struct{ I float32 }{42}, &struct{ I float64 }{42}},
+
+ // int<=>uint
+ {&struct{ I uint }{42}, &struct{ I int }{42}},
+ {&struct{ I uint }{42}, &struct{ I int8 }{42}},
+ {&struct{ I uint }{42}, &struct{ I int32 }{42}},
+ {&struct{ I uint }{42}, &struct{ I int64 }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I int }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I int8 }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I int32 }{42}},
+ {&struct{ I uint8 }{42}, &struct{ I int64 }{42}},
+ {&struct{ I uint32 }{42}, &struct{ I int }{42}},
+ {&struct{ I uint32 }{42}, &struct{ I int8 }{42}},
+ {&struct{ I uint32 }{42}, &struct{ I int32 }{42}},
+ {&struct{ I uint32 }{42}, &struct{ I int64 }{42}},
+ {&struct{ I uint64 }{42}, &struct{ I int }{42}},
+ {&struct{ I uint64 }{42}, &struct{ I int8 }{42}},
+ {&struct{ I uint64 }{42}, &struct{ I int32 }{42}},
+ {&struct{ I uint64 }{42}, &struct{ I int64 }{42}},
+
+ // int <=> float
+ {&struct{ I int }{42}, &struct{ I float64 }{42}},
+
+ // int <=> bool
+ {&struct{ I int }{1}, &struct{ I bool }{true}},
+ {&struct{ I int }{0}, &struct{ I bool }{false}},
+
+ // uint <=> float64
+ {&struct{ I uint }{42}, &struct{ I float64 }{42}},
+
+ // uint <=> bool
+ {&struct{ I uint }{1}, &struct{ I bool }{true}},
+ {&struct{ I uint }{0}, &struct{ I bool }{false}},
+
+ // float64 <=> bool
+ {&struct{ I float64 }{1}, &struct{ I bool }{true}},
+ {&struct{ I float64 }{0}, &struct{ I bool }{false}},
+
+ // string <=> string and string <=> []byte
+ {&struct{ S []byte }{[]byte("abc")}, &struct{ S string }{"abc"}},
+ {&struct{ S []byte }{[]byte("def")}, &struct{ S bson.Symbol }{"def"}},
+ {&struct{ S string }{"ghi"}, &struct{ S bson.Symbol }{"ghi"}},
+
+ // map <=> struct
+ {&struct {
+ A struct {
+ B, C int
+ }
+ }{struct{ B, C int }{1, 2}},
+ map[string]map[string]int{"a": map[string]int{"b": 1, "c": 2}}},
+
+ {&struct{ A bson.Symbol }{"abc"}, map[string]string{"a": "abc"}},
+ {&struct{ A bson.Symbol }{"abc"}, map[string][]byte{"a": []byte("abc")}},
+ {&struct{ A []byte }{[]byte("abc")}, map[string]string{"a": "abc"}},
+ {&struct{ A uint }{42}, map[string]int{"a": 42}},
+ {&struct{ A uint }{42}, map[string]float64{"a": 42}},
+ {&struct{ A uint }{1}, map[string]bool{"a": true}},
+ {&struct{ A int }{42}, map[string]uint{"a": 42}},
+ {&struct{ A int }{42}, map[string]float64{"a": 42}},
+ {&struct{ A int }{1}, map[string]bool{"a": true}},
+ {&struct{ A float64 }{42}, map[string]float32{"a": 42}},
+ {&struct{ A float64 }{42}, map[string]int{"a": 42}},
+ {&struct{ A float64 }{42}, map[string]uint{"a": 42}},
+ {&struct{ A float64 }{1}, map[string]bool{"a": true}},
+ {&struct{ A bool }{true}, map[string]int{"a": 1}},
+ {&struct{ A bool }{true}, map[string]uint{"a": 1}},
+ {&struct{ A bool }{true}, map[string]float64{"a": 1}},
+ {&struct{ A **byte }{&byteptr}, map[string]byte{"a": 8}},
+
+ // url.URL <=> string
+ {&struct{ URL *url.URL }{parseURL("h://e.c/p")}, map[string]string{"url": "h://e.c/p"}},
+ {&struct{ URL url.URL }{*parseURL("h://e.c/p")}, map[string]string{"url": "h://e.c/p"}},
+
+ // Slices
+ {&struct{ S []int }{[]int{1, 2, 3}}, map[string][]int{"s": []int{1, 2, 3}}},
+ {&struct{ S *[]int }{&[]int{1, 2, 3}}, map[string][]int{"s": []int{1, 2, 3}}},
+
+ // Conditionals
+ {&condBool{true}, map[string]bool{"v": true}},
+ {&condBool{}, map[string]bool{}},
+ {&condInt{1}, map[string]int{"v": 1}},
+ {&condInt{}, map[string]int{}},
+ {&condUInt{1}, map[string]uint{"v": 1}},
+ {&condUInt{}, map[string]uint{}},
+ {&condFloat{}, map[string]int{}},
+ {&condStr{"yo"}, map[string]string{"v": "yo"}},
+ {&condStr{}, map[string]string{}},
+ {&condStrNS{"yo"}, map[string]string{"v": "yo"}},
+ {&condStrNS{}, map[string]string{}},
+ {&condSlice{[]string{"yo"}}, map[string][]string{"v": []string{"yo"}}},
+ {&condSlice{}, map[string][]string{}},
+ {&condMap{map[string]int{"k": 1}}, bson.M{"v": bson.M{"k": 1}}},
+ {&condMap{}, map[string][]string{}},
+ {&condIface{"yo"}, map[string]string{"v": "yo"}},
+ {&condIface{""}, map[string]string{"v": ""}},
+ {&condIface{}, map[string]string{}},
+ {&condPtr{&truevar}, map[string]bool{"v": true}},
+ {&condPtr{&falsevar}, map[string]bool{"v": false}},
+ {&condPtr{}, map[string]string{}},
+
+ {&condTime{time.Unix(123456789, 123e6)}, map[string]time.Time{"v": time.Unix(123456789, 123e6)}},
+ {&condTime{}, map[string]string{}},
+
+ {&condStruct{struct{ A []int }{[]int{1}}}, bson.M{"v": bson.M{"a": []interface{}{1}}}},
+ {&condStruct{struct{ A []int }{}}, bson.M{}},
+
+ {&condRaw{bson.Raw{Kind: 0x0A, Data: []byte{}}}, bson.M{"v": nil}},
+ {&condRaw{bson.Raw{Kind: 0x00}}, bson.M{}},
+
+ {&namedCondStr{"yo"}, map[string]string{"myv": "yo"}},
+ {&namedCondStr{}, map[string]string{}},
+
+ {&shortInt{1}, map[string]interface{}{"v": 1}},
+ {&shortInt{1 << 30}, map[string]interface{}{"v": 1 << 30}},
+ {&shortInt{1 << 31}, map[string]interface{}{"v": int64(1 << 31)}},
+ {&shortUint{1 << 30}, map[string]interface{}{"v": 1 << 30}},
+ {&shortUint{1 << 31}, map[string]interface{}{"v": int64(1 << 31)}},
+ {&shortIface{int64(1) << 31}, map[string]interface{}{"v": int64(1 << 31)}},
+ {&shortPtr{int64ptr}, map[string]interface{}{"v": intvar}},
+
+ {&shortNonEmptyInt{1}, map[string]interface{}{"v": 1}},
+ {&shortNonEmptyInt{1 << 31}, map[string]interface{}{"v": int64(1 << 31)}},
+ {&shortNonEmptyInt{}, map[string]interface{}{}},
+
+ {&inlineInt{struct{ A, B int }{1, 2}}, map[string]interface{}{"a": 1, "b": 2}},
+ {&inlineMap{A: 1, M: map[string]interface{}{"b": 2}}, map[string]interface{}{"a": 1, "b": 2}},
+ {&inlineMap{A: 1, M: nil}, map[string]interface{}{"a": 1}},
+ {&inlineMapInt{A: 1, M: map[string]int{"b": 2}}, map[string]int{"a": 1, "b": 2}},
+ {&inlineMapInt{A: 1, M: nil}, map[string]int{"a": 1}},
+ {&inlineMapMyM{A: 1, M: MyM{"b": MyM{"c": 3}}}, map[string]interface{}{"a": 1, "b": map[string]interface{}{"c": 3}}},
+ {&inlineUnexported{M: map[string]interface{}{"b": 1}, unexported: unexported{A: 2}}, map[string]interface{}{"b": 1, "a": 2}},
+
+ // []byte <=> Binary
+ {&struct{ B []byte }{[]byte("abc")}, map[string]bson.Binary{"b": bson.Binary{Data: []byte("abc")}}},
+
+ // []byte <=> MyBytes
+ {&struct{ B MyBytes }{[]byte("abc")}, map[string]string{"b": "abc"}},
+ {&struct{ B MyBytes }{[]byte{}}, map[string]string{"b": ""}},
+ {&struct{ B MyBytes }{}, map[string]bool{}},
+ {&struct{ B []byte }{[]byte("abc")}, map[string]MyBytes{"b": []byte("abc")}},
+
+ // bool <=> MyBool
+ {&struct{ B MyBool }{true}, map[string]bool{"b": true}},
+ {&struct{ B MyBool }{}, map[string]bool{"b": false}},
+ {&struct{ B MyBool }{}, map[string]string{}},
+ {&struct{ B bool }{}, map[string]MyBool{"b": false}},
+
+ // arrays
+ {&struct{ V [2]int }{[...]int{1, 2}}, map[string][2]int{"v": [2]int{1, 2}}},
+ {&struct{ V [2]byte }{[...]byte{1, 2}}, map[string][2]byte{"v": [2]byte{1, 2}}},
+
+ // zero time
+ {&struct{ V time.Time }{}, map[string]interface{}{"v": time.Time{}}},
+
+ // zero time + 1 second + 1 millisecond; overflows int64 as nanoseconds
+ {&struct{ V time.Time }{time.Unix(-62135596799, 1e6).Local()},
+ map[string]interface{}{"v": time.Unix(-62135596799, 1e6).Local()}},
+
+ // bson.D <=> []DocElem
+ {&bson.D{{"a", bson.D{{"b", 1}, {"c", 2}}}}, &bson.D{{"a", bson.D{{"b", 1}, {"c", 2}}}}},
+ {&bson.D{{"a", bson.D{{"b", 1}, {"c", 2}}}}, &MyD{{"a", MyD{{"b", 1}, {"c", 2}}}}},
+ {&struct{ V MyD }{MyD{{"a", 1}}}, &bson.D{{"v", bson.D{{"a", 1}}}}},
+
+ // bson.RawD <=> []RawDocElem
+ {&bson.RawD{{"a", bson.Raw{0x08, []byte{0x01}}}}, &bson.RawD{{"a", bson.Raw{0x08, []byte{0x01}}}}},
+ {&bson.RawD{{"a", bson.Raw{0x08, []byte{0x01}}}}, &MyRawD{{"a", bson.Raw{0x08, []byte{0x01}}}}},
+
+ // bson.M <=> map
+ {bson.M{"a": bson.M{"b": 1, "c": 2}}, MyM{"a": MyM{"b": 1, "c": 2}}},
+ {bson.M{"a": bson.M{"b": 1, "c": 2}}, map[string]interface{}{"a": map[string]interface{}{"b": 1, "c": 2}}},
+
+ // bson.M <=> map[MyString]
+ {bson.M{"a": bson.M{"b": 1, "c": 2}}, map[MyString]interface{}{"a": map[MyString]interface{}{"b": 1, "c": 2}}},
+
+ // json.Number <=> int64, float64
+ {&struct{ N json.Number }{"5"}, map[string]interface{}{"n": int64(5)}},
+ {&struct{ N json.Number }{"5.05"}, map[string]interface{}{"n": 5.05}},
+ {&struct{ N json.Number }{"9223372036854776000"}, map[string]interface{}{"n": float64(1 << 63)}},
+
+ // bson.D <=> non-struct getter/setter
+ {&bson.D{{"a", 1}}, &getterSetterD{{"a", 1}, {"suffix", true}}},
+ {&bson.D{{"a", 42}}, &gsintvar},
+
+ // Interface slice setter.
+ {&struct{ V ifaceSlice }{ifaceSlice{nil, nil, nil}}, bson.M{"v": []interface{}{3}}},
+}
+
+// Same thing, but only one way (obj1 => obj2).
+var oneWayCrossItems = []crossTypeItem{
+ // map <=> struct
+ {map[string]interface{}{"a": 1, "b": "2", "c": 3}, map[string]int{"a": 1, "c": 3}},
+
+ // inline map elides badly typed values
+ {map[string]interface{}{"a": 1, "b": "2", "c": 3}, &inlineMapInt{A: 1, M: map[string]int{"c": 3}}},
+
+ // Can't decode int into struct.
+ {bson.M{"a": bson.M{"b": 2}}, &struct{ A bool }{}},
+
+	// Would get decoded into an int32 too in the opposite direction.
+ {&shortIface{int64(1) << 30}, map[string]interface{}{"v": 1 << 30}},
+
+ // Ensure omitempty on struct with private fields works properly.
+ {&struct {
+ V struct{ v time.Time } ",omitempty"
+ }{}, map[string]interface{}{}},
+
+ // Attempt to marshal slice into RawD (issue #120).
+ {bson.M{"x": []int{1, 2, 3}}, &struct{ X bson.RawD }{}},
+}
+
+func testCrossPair(c *C, dump interface{}, load interface{}) {
+ c.Logf("Dump: %#v", dump)
+ c.Logf("Load: %#v", load)
+ zero := makeZeroDoc(load)
+ data, err := bson.Marshal(dump)
+ c.Assert(err, IsNil)
+ c.Logf("Dumped: %#v", string(data))
+ err = bson.Unmarshal(data, zero)
+ c.Assert(err, IsNil)
+ c.Logf("Loaded: %#v", zero)
+ c.Assert(zero, DeepEquals, load)
+}
+
+func (s *S) TestTwoWayCrossPairs(c *C) {
+ for _, item := range twoWayCrossItems {
+ testCrossPair(c, item.obj1, item.obj2)
+ testCrossPair(c, item.obj2, item.obj1)
+ }
+}
+
+func (s *S) TestOneWayCrossPairs(c *C) {
+ for _, item := range oneWayCrossItems {
+ testCrossPair(c, item.obj1, item.obj2)
+ }
+}
+
+// --------------------------------------------------------------------------
+// ObjectId hex representation test.
+
+func (s *S) TestObjectIdHex(c *C) {
+ id := bson.ObjectIdHex("4d88e15b60f486e428412dc9")
+ c.Assert(id.String(), Equals, `ObjectIdHex("4d88e15b60f486e428412dc9")`)
+ c.Assert(id.Hex(), Equals, "4d88e15b60f486e428412dc9")
+}
+
+func (s *S) TestIsObjectIdHex(c *C) {
+ test := []struct {
+ id string
+ valid bool
+ }{
+ {"4d88e15b60f486e428412dc9", true},
+ {"4d88e15b60f486e428412dc", false},
+ {"4d88e15b60f486e428412dc9e", false},
+ {"4d88e15b60f486e428412dcx", false},
+ }
+ for _, t := range test {
+ c.Assert(bson.IsObjectIdHex(t.id), Equals, t.valid)
+ }
+}
+
+// --------------------------------------------------------------------------
+// ObjectId parts extraction tests.
+
+type objectIdParts struct {
+ id bson.ObjectId
+ timestamp int64
+ machine []byte
+ pid uint16
+ counter int32
+}
+
+var objectIds = []objectIdParts{
+ objectIdParts{
+ bson.ObjectIdHex("4d88e15b60f486e428412dc9"),
+ 1300816219,
+ []byte{0x60, 0xf4, 0x86},
+ 0xe428,
+ 4271561,
+ },
+ objectIdParts{
+ bson.ObjectIdHex("000000000000000000000000"),
+ 0,
+ []byte{0x00, 0x00, 0x00},
+ 0x0000,
+ 0,
+ },
+ objectIdParts{
+ bson.ObjectIdHex("00000000aabbccddee000001"),
+ 0,
+ []byte{0xaa, 0xbb, 0xcc},
+ 0xddee,
+ 1,
+ },
+}
+
+func (s *S) TestObjectIdPartsExtraction(c *C) {
+ for i, v := range objectIds {
+ t := time.Unix(v.timestamp, 0)
+ c.Assert(v.id.Time(), Equals, t, Commentf("#%d Wrong timestamp value", i))
+ c.Assert(v.id.Machine(), DeepEquals, v.machine, Commentf("#%d Wrong machine id value", i))
+ c.Assert(v.id.Pid(), Equals, v.pid, Commentf("#%d Wrong pid value", i))
+ c.Assert(v.id.Counter(), Equals, v.counter, Commentf("#%d Wrong counter value", i))
+ }
+}
+
+func (s *S) TestNow(c *C) {
+ before := time.Now()
+ time.Sleep(1e6)
+ now := bson.Now()
+ time.Sleep(1e6)
+ after := time.Now()
+ c.Assert(now.After(before) && now.Before(after), Equals, true, Commentf("now=%s, before=%s, after=%s", now, before, after))
+}
+
+// --------------------------------------------------------------------------
+// ObjectId generation tests.
+
+func (s *S) TestNewObjectId(c *C) {
+ // Generate 10 ids
+ ids := make([]bson.ObjectId, 10)
+ for i := 0; i < 10; i++ {
+ ids[i] = bson.NewObjectId()
+ }
+ for i := 1; i < 10; i++ {
+ prevId := ids[i-1]
+ id := ids[i]
+ // Test for uniqueness among all other 9 generated ids
+ for j, tid := range ids {
+ if j != i {
+ c.Assert(id, Not(Equals), tid, Commentf("Generated ObjectId is not unique"))
+ }
+ }
+ // Check that timestamp was incremented and is within 30 seconds of the previous one
+ secs := id.Time().Sub(prevId.Time()).Seconds()
+ c.Assert((secs >= 0 && secs <= 30), Equals, true, Commentf("Wrong timestamp in generated ObjectId"))
+ // Check that machine ids are the same
+ c.Assert(id.Machine(), DeepEquals, prevId.Machine())
+ // Check that pids are the same
+ c.Assert(id.Pid(), Equals, prevId.Pid())
+ // Test for proper increment
+ delta := int(id.Counter() - prevId.Counter())
+ c.Assert(delta, Equals, 1, Commentf("Wrong increment in generated ObjectId"))
+ }
+}
+
+func (s *S) TestNewObjectIdWithTime(c *C) {
+ t := time.Unix(12345678, 0)
+ id := bson.NewObjectIdWithTime(t)
+ c.Assert(id.Time(), Equals, t)
+ c.Assert(id.Machine(), DeepEquals, []byte{0x00, 0x00, 0x00})
+ c.Assert(int(id.Pid()), Equals, 0)
+ c.Assert(int(id.Counter()), Equals, 0)
+}
+
+// --------------------------------------------------------------------------
+// ObjectId JSON marshalling.
+
+type jsonType struct {
+ Id bson.ObjectId
+}
+
+var jsonIdTests = []struct {
+ value jsonType
+ json string
+ marshal bool
+ unmarshal bool
+ error string
+}{{
+ value: jsonType{Id: bson.ObjectIdHex("4d88e15b60f486e428412dc9")},
+ json: `{"Id":"4d88e15b60f486e428412dc9"}`,
+ marshal: true,
+ unmarshal: true,
+}, {
+ value: jsonType{},
+ json: `{"Id":""}`,
+ marshal: true,
+ unmarshal: true,
+}, {
+ value: jsonType{},
+ json: `{"Id":null}`,
+ marshal: false,
+ unmarshal: true,
+}, {
+ json: `{"Id":"4d88e15b60f486e428412dc9A"}`,
+ error: `invalid ObjectId in JSON: "4d88e15b60f486e428412dc9A"`,
+ marshal: false,
+ unmarshal: true,
+}, {
+ json: `{"Id":"4d88e15b60f486e428412dcZ"}`,
+ error: `invalid ObjectId in JSON: "4d88e15b60f486e428412dcZ" .*`,
+ marshal: false,
+ unmarshal: true,
+}}
+
+func (s *S) TestObjectIdJSONMarshaling(c *C) {
+ for _, test := range jsonIdTests {
+ if test.marshal {
+ data, err := json.Marshal(&test.value)
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, test.json)
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+
+ if test.unmarshal {
+ var value jsonType
+ err := json.Unmarshal([]byte(test.json), &value)
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ c.Assert(value, DeepEquals, test.value)
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+ }
+}
+
+// --------------------------------------------------------------------------
+// Spec tests
+
+type specTest struct {
+ Description string
+ Documents []struct {
+ Decoded map[string]interface{}
+ Encoded string
+ DecodeOnly bool `yaml:"decodeOnly"`
+ Error interface{}
+ }
+}
+
+func (s *S) TestSpecTests(c *C) {
+ for _, data := range specTests {
+ var test specTest
+ err := yaml.Unmarshal([]byte(data), &test)
+ c.Assert(err, IsNil)
+
+ c.Logf("Running spec test set %q", test.Description)
+
+ for _, doc := range test.Documents {
+ if doc.Error != nil {
+ continue
+ }
+ c.Logf("Ensuring %q decodes as %v", doc.Encoded, doc.Decoded)
+ var decoded map[string]interface{}
+ encoded, err := hex.DecodeString(doc.Encoded)
+ c.Assert(err, IsNil)
+ err = bson.Unmarshal(encoded, &decoded)
+ c.Assert(err, IsNil)
+ c.Assert(decoded, DeepEquals, doc.Decoded)
+ }
+
+ for _, doc := range test.Documents {
+ if doc.DecodeOnly || doc.Error != nil {
+ continue
+ }
+ c.Logf("Ensuring %v encodes as %q", doc.Decoded, doc.Encoded)
+ encoded, err := bson.Marshal(doc.Decoded)
+ c.Assert(err, IsNil)
+ c.Assert(strings.ToUpper(hex.EncodeToString(encoded)), Equals, doc.Encoded)
+ }
+
+ for _, doc := range test.Documents {
+ if doc.Error == nil {
+ continue
+ }
+ c.Logf("Ensuring %q errors when decoded: %s", doc.Encoded, doc.Error)
+ var decoded map[string]interface{}
+ encoded, err := hex.DecodeString(doc.Encoded)
+ c.Assert(err, IsNil)
+ err = bson.Unmarshal(encoded, &decoded)
+ c.Assert(err, NotNil)
+ c.Logf("Failed with: %v", err)
+ }
+ }
+}
+
+// --------------------------------------------------------------------------
+// ObjectId text marshalling via encoding.TextMarshaler/encoding.TextUnmarshaler.
+
+var textIdTests = []struct {
+ value bson.ObjectId
+ text string
+ marshal bool
+ unmarshal bool
+ error string
+}{{
+ value: bson.ObjectIdHex("4d88e15b60f486e428412dc9"),
+ text: "4d88e15b60f486e428412dc9",
+ marshal: true,
+ unmarshal: true,
+}, {
+ text: "",
+ marshal: true,
+ unmarshal: true,
+}, {
+ text: "4d88e15b60f486e428412dc9A",
+ marshal: false,
+ unmarshal: true,
+ error: `invalid ObjectId: 4d88e15b60f486e428412dc9A`,
+}, {
+ text: "4d88e15b60f486e428412dcZ",
+ marshal: false,
+ unmarshal: true,
+ error: `invalid ObjectId: 4d88e15b60f486e428412dcZ .*`,
+}}
+
+func (s *S) TestObjectIdTextMarshaling(c *C) {
+ for _, test := range textIdTests {
+ if test.marshal {
+ data, err := test.value.MarshalText()
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, test.text)
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+
+ if test.unmarshal {
+ err := test.value.UnmarshalText([]byte(test.text))
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ if test.value != "" {
+ value := bson.ObjectIdHex(test.text)
+ c.Assert(value, DeepEquals, test.value)
+ }
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+ }
+}
+
+// --------------------------------------------------------------------------
+// ObjectId XML marshalling.
+
+type xmlType struct {
+ Id bson.ObjectId
+}
+
+var xmlIdTests = []struct {
+ value xmlType
+ xml string
+ marshal bool
+ unmarshal bool
+ error string
+}{{
+ value: xmlType{Id: bson.ObjectIdHex("4d88e15b60f486e428412dc9")},
+ xml: "4d88e15b60f486e428412dc9",
+ marshal: true,
+ unmarshal: true,
+}, {
+ value: xmlType{},
+ xml: "",
+ marshal: true,
+ unmarshal: true,
+}, {
+ xml: "4d88e15b60f486e428412dc9A",
+ marshal: false,
+ unmarshal: true,
+ error: `invalid ObjectId: 4d88e15b60f486e428412dc9A`,
+}, {
+ xml: "4d88e15b60f486e428412dcZ",
+ marshal: false,
+ unmarshal: true,
+ error: `invalid ObjectId: 4d88e15b60f486e428412dcZ .*`,
+}}
+
+func (s *S) TestObjectIdXMLMarshaling(c *C) {
+ for _, test := range xmlIdTests {
+ if test.marshal {
+ data, err := xml.Marshal(&test.value)
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ c.Assert(string(data), Equals, test.xml)
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+
+ if test.unmarshal {
+ var value xmlType
+ err := xml.Unmarshal([]byte(test.xml), &value)
+ if test.error == "" {
+ c.Assert(err, IsNil)
+ c.Assert(value, DeepEquals, test.value)
+ } else {
+ c.Assert(err, ErrorMatches, test.error)
+ }
+ }
+ }
+}
+
+// --------------------------------------------------------------------------
+// Some simple benchmarks.
+
+type BenchT struct {
+ A, B, C, D, E, F string
+}
+
+type BenchRawT struct {
+ A string
+ B int
+ C bson.M
+ D []float64
+}
+
+func (s *S) BenchmarkUnmarshalStruct(c *C) {
+ v := BenchT{A: "A", D: "D", E: "E"}
+ data, err := bson.Marshal(&v)
+ if err != nil {
+ panic(err)
+ }
+ c.ResetTimer()
+ for i := 0; i < c.N; i++ {
+ err = bson.Unmarshal(data, &v)
+ }
+ if err != nil {
+ panic(err)
+ }
+}
+
+func (s *S) BenchmarkUnmarshalMap(c *C) {
+ m := bson.M{"a": "a", "d": "d", "e": "e"}
+ data, err := bson.Marshal(&m)
+ if err != nil {
+ panic(err)
+ }
+ c.ResetTimer()
+ for i := 0; i < c.N; i++ {
+ err = bson.Unmarshal(data, &m)
+ }
+ if err != nil {
+ panic(err)
+ }
+}
+
+func (s *S) BenchmarkUnmarshalRaw(c *C) {
+ var err error
+ m := BenchRawT{
+ A: "test_string",
+ B: 123,
+ C: bson.M{
+ "subdoc_int": 12312,
+ "subdoc_doc": bson.M{"1": 1},
+ },
+ D: []float64{0.0, 1.3333, -99.9997, 3.1415},
+ }
+ data, err := bson.Marshal(&m)
+ if err != nil {
+ panic(err)
+ }
+ raw := bson.Raw{}
+ c.ResetTimer()
+ for i := 0; i < c.N; i++ {
+ err = bson.Unmarshal(data, &raw)
+ }
+ if err != nil {
+ panic(err)
+ }
+}
+
+func (s *S) BenchmarkNewObjectId(c *C) {
+ for i := 0; i < c.N; i++ {
+ bson.NewObjectId()
+ }
+}
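The getter tests above revolve around mgo's `GetBSON` hook: when a value implements it, the encoder serializes the substituted value instead of the original, and a hook error aborts marshalling (as `TestGetterErrors` checks). A minimal, dependency-free sketch of that substitution pattern follows; the `marshal` helper and its JSON backend are illustrative stand-ins, not mgo's actual encoder:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Getter mirrors the GetBSON hook exercised by the tests above: when a
// value implements it, the encoder serializes the returned substitute
// instead of the value itself.
type Getter interface {
	GetBSON() (interface{}, error)
}

// intGetter mimics the test type above: it presents itself as a plain int64.
type intGetter int64

func (t intGetter) GetBSON() (interface{}, error) {
	return int64(t), nil
}

// marshal is a toy stand-in for bson.Marshal that honors the hook.
// (JSON is used as the backend only to keep the sketch dependency-free.)
func marshal(v interface{}) ([]byte, error) {
	if g, ok := v.(Getter); ok {
		sub, err := g.GetBSON()
		if err != nil {
			return nil, err // hook errors abort marshalling, as in TestGetterErrors
		}
		v = sub
	}
	return json.Marshal(v)
}

func main() {
	data, err := marshal(intGetter(42))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // prints 42
}
```

The same dispatch shape (interface check before the generic encoding path) is what lets the tests exchange `bson.D` values with custom getter/setter types.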
diff --git a/vendor/gopkg.in/mgo.v2/bson/decimal.go b/vendor/gopkg.in/mgo.v2/bson/decimal.go
new file mode 100644
index 0000000..3d2f700
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/decimal.go
@@ -0,0 +1,310 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package bson
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// Decimal128 holds decimal128 BSON values.
+type Decimal128 struct {
+ h, l uint64
+}
+
+func (d Decimal128) String() string {
+ var pos int // positive sign
+ var e int // exponent
+ var h, l uint64 // significand high/low
+
+ if d.h>>63&1 == 0 {
+ pos = 1
+ }
+
+ switch d.h >> 58 & (1<<5 - 1) {
+ case 0x1F:
+ return "NaN"
+ case 0x1E:
+ return "-Inf"[pos:]
+ }
+
+ l = d.l
+ if d.h>>61&3 == 3 {
+ // Bits: 1*sign 2*ignored 14*exponent 111*significand.
+ // Implicit 0b100 prefix in significand.
+ e = int(d.h>>47&(1<<14-1)) - 6176
+ //h = 4<<47 | d.h&(1<<47-1)
+ // Spec says all of these values are out of range.
+ h, l = 0, 0
+ } else {
+ // Bits: 1*sign 14*exponent 113*significand
+ e = int(d.h>>49&(1<<14-1)) - 6176
+ h = d.h & (1<<49 - 1)
+ }
+
+	// This case would be handled by the loop below, but it is trivial and common enough to special-case.
+ if h == 0 && l == 0 && e == 0 {
+ return "-0"[pos:]
+ }
+
+ var repr [48]byte // Loop 5 times over 9 digits plus dot, negative sign, and leading zero.
+ var last = len(repr)
+ var i = len(repr)
+ var dot = len(repr) + e
+ var rem uint32
+Loop:
+ for d9 := 0; d9 < 5; d9++ {
+ h, l, rem = divmod(h, l, 1e9)
+ for d1 := 0; d1 < 9; d1++ {
+ // Handle "-0.0", "0.00123400", "-1.00E-6", "1.050E+3", etc.
+ if i < len(repr) && (dot == i || l == 0 && h == 0 && rem > 0 && rem < 10 && (dot < i-6 || e > 0)) {
+ e += len(repr) - i
+ i--
+ repr[i] = '.'
+ last = i - 1
+ dot = len(repr) // Unmark.
+ }
+ c := '0' + byte(rem%10)
+ rem /= 10
+ i--
+ repr[i] = c
+ // Handle "0E+3", "1E+3", etc.
+ if l == 0 && h == 0 && rem == 0 && i == len(repr)-1 && (dot < i-5 || e > 0) {
+ last = i
+ break Loop
+ }
+ if c != '0' {
+ last = i
+ }
+			// Break early. The loop works without it, but there's no reason to keep going once done.
+ if dot > i && l == 0 && h == 0 && rem == 0 {
+ break Loop
+ }
+ }
+ }
+ repr[last-1] = '-'
+ last--
+
+ if e > 0 {
+ return string(repr[last+pos:]) + "E+" + strconv.Itoa(e)
+ }
+ if e < 0 {
+ return string(repr[last+pos:]) + "E" + strconv.Itoa(e)
+ }
+ return string(repr[last+pos:])
+}
+
+// divmod divides the 128-bit value h<<64|l by div, returning the
+// 128-bit quotient as qh<<64|ql along with the remainder.
+func divmod(h, l uint64, div uint32) (qh, ql uint64, rem uint32) {
+ div64 := uint64(div)
+ a := h >> 32
+ aq := a / div64
+ ar := a % div64
+ b := ar<<32 + h&(1<<32-1)
+ bq := b / div64
+ br := b % div64
+ c := br<<32 + l>>32
+ cq := c / div64
+ cr := c % div64
+ d := cr<<32 + l&(1<<32-1)
+ dq := d / div64
+ dr := d % div64
+ return (aq<<32 | bq), (cq<<32 | dq), uint32(dr)
+}
+
+var dNaN = Decimal128{0x1F << 58, 0}
+var dPosInf = Decimal128{0x1E << 58, 0}
+var dNegInf = Decimal128{0x3E << 58, 0}
+
+func dErr(s string) (Decimal128, error) {
+ return dNaN, fmt.Errorf("cannot parse %q as a decimal128", s)
+}
+
+// ParseDecimal128 parses s as a decimal128 value. It accepts plain and
+// scientific notation as well as NaN and (signed) Inf/Infinity.
+func ParseDecimal128(s string) (Decimal128, error) {
+ orig := s
+ if s == "" {
+ return dErr(orig)
+ }
+ neg := s[0] == '-'
+ if neg || s[0] == '+' {
+ s = s[1:]
+ }
+
+ if (len(s) == 3 || len(s) == 8) && (s[0] == 'N' || s[0] == 'n' || s[0] == 'I' || s[0] == 'i') {
+ if s == "NaN" || s == "nan" || strings.EqualFold(s, "nan") {
+ return dNaN, nil
+ }
+ if s == "Inf" || s == "inf" || strings.EqualFold(s, "inf") || strings.EqualFold(s, "infinity") {
+ if neg {
+ return dNegInf, nil
+ }
+ return dPosInf, nil
+ }
+ return dErr(orig)
+ }
+
+ var h, l uint64
+ var e int
+
+ var add, ovr uint32
+ var mul uint32 = 1
+ var dot = -1
+ var digits = 0
+ var i = 0
+ for i < len(s) {
+ c := s[i]
+ if mul == 1e9 {
+ h, l, ovr = muladd(h, l, mul, add)
+ mul, add = 1, 0
+ if ovr > 0 || h&((1<<15-1)<<49) > 0 {
+ return dErr(orig)
+ }
+ }
+ if c >= '0' && c <= '9' {
+ i++
+ if c > '0' || digits > 0 {
+ digits++
+ }
+ if digits > 34 {
+ if c == '0' {
+ // Exact rounding.
+ e++
+ continue
+ }
+ return dErr(orig)
+ }
+ mul *= 10
+ add *= 10
+ add += uint32(c - '0')
+ continue
+ }
+ if c == '.' {
+ i++
+ if dot >= 0 || i == 1 && len(s) == 1 {
+ return dErr(orig)
+ }
+ if i == len(s) {
+ break
+ }
+ if s[i] < '0' || s[i] > '9' || e > 0 {
+ return dErr(orig)
+ }
+ dot = i
+ continue
+ }
+ break
+ }
+ if i == 0 {
+ return dErr(orig)
+ }
+ if mul > 1 {
+ h, l, ovr = muladd(h, l, mul, add)
+ if ovr > 0 || h&((1<<15-1)<<49) > 0 {
+ return dErr(orig)
+ }
+ }
+ if dot >= 0 {
+ e += dot - i
+ }
+ if i+1 < len(s) && (s[i] == 'E' || s[i] == 'e') {
+ i++
+ eneg := s[i] == '-'
+ if eneg || s[i] == '+' {
+ i++
+ if i == len(s) {
+ return dErr(orig)
+ }
+ }
+ n := 0
+ for i < len(s) && n < 1e4 {
+ c := s[i]
+ i++
+ if c < '0' || c > '9' {
+ return dErr(orig)
+ }
+ n *= 10
+ n += int(c - '0')
+ }
+ if eneg {
+ n = -n
+ }
+ e += n
+ for e < -6176 {
+ // Subnormal.
+ var div uint32 = 1
+ for div < 1e9 && e < -6176 {
+ div *= 10
+ e++
+ }
+ var rem uint32
+ h, l, rem = divmod(h, l, div)
+ if rem > 0 {
+ return dErr(orig)
+ }
+ }
+ for e > 6111 {
+ // Clamped.
+ var mul uint32 = 1
+ for mul < 1e9 && e > 6111 {
+ mul *= 10
+ e--
+ }
+ h, l, ovr = muladd(h, l, mul, 0)
+ if ovr > 0 || h&((1<<15-1)<<49) > 0 {
+ return dErr(orig)
+ }
+ }
+ if e < -6176 || e > 6111 {
+ return dErr(orig)
+ }
+ }
+
+ if i < len(s) {
+ return dErr(orig)
+ }
+
+ h |= uint64(e+6176) & uint64(1<<14-1) << 49
+ if neg {
+ h |= 1 << 63
+ }
+ return Decimal128{h, l}, nil
+}
+
+// muladd computes (h<<64|l)*mul + add, returning the 128-bit result and
+// any overflow beyond 128 bits.
+func muladd(h, l uint64, mul uint32, add uint32) (resh, resl uint64, overflow uint32) {
+ mul64 := uint64(mul)
+ a := mul64 * (l & (1<<32 - 1))
+ b := a>>32 + mul64*(l>>32)
+ c := b>>32 + mul64*(h&(1<<32-1))
+ d := c>>32 + mul64*(h>>32)
+
+ a = a&(1<<32-1) + uint64(add)
+ b = b&(1<<32-1) + a>>32
+ c = c&(1<<32-1) + b>>32
+ d = d&(1<<32-1) + c>>32
+
+ return (d<<32 | c&(1<<32-1)), (b<<32 | a&(1<<32-1)), uint32(d >> 32)
+}
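The `divmod` helper above performs 128-bit-by-32-bit long division using four 64-bit divisions, carrying each limb's remainder down into the next 32-bit limb. A standalone sketch of the same schoolbook technique (the name `divmod128` is illustrative; this mirrors, but is not, the package's helper):

```go
package main

import "fmt"

// divmod128 divides the 128-bit value h<<64|l by a 32-bit divisor using
// schoolbook long division over four 32-bit limbs: each limb is divided
// and its remainder is carried into the next, lower limb.
func divmod128(h, l uint64, div uint32) (qh, ql uint64, rem uint32) {
	div64 := uint64(div)
	a := h >> 32 // most significant limb
	aq, ar := a/div64, a%div64
	b := ar<<32 + h&(1<<32-1) // carry remainder down
	bq, br := b/div64, b%div64
	c := br<<32 + l>>32
	cq, cr := c/div64, c%div64
	d := cr<<32 + l&(1<<32-1) // least significant limb
	dq, dr := d/div64, d%div64
	return aq<<32 | bq, cq<<32 | dq, uint32(dr)
}

func main() {
	// 10^18 / 10^9 = 10^9 exactly, with no remainder.
	qh, ql, rem := divmod128(0, 1000000000000000000, 1000000000)
	fmt.Println(qh, ql, rem) // prints 0 1000000000 0
}
```

Because each partial dividend is at most `div<<32 + (1<<32 - 1)`, every intermediate quotient fits in 32 bits, which is why recombining them with shifts is safe. `String` above loops this helper with `div = 1e9` to peel nine decimal digits per pass.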
diff --git a/vendor/gopkg.in/mgo.v2/bson/decimal_test.go b/vendor/gopkg.in/mgo.v2/bson/decimal_test.go
new file mode 100644
index 0000000..a297280
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/decimal_test.go
@@ -0,0 +1,4109 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package bson_test
+
+import (
+ "encoding/hex"
+ "encoding/json"
+ "fmt"
+ "regexp"
+ "strings"
+
+ "gopkg.in/mgo.v2/bson"
+
+ . "gopkg.in/check.v1"
+)
+
+// --------------------------------------------------------------------------
+// Decimal tests
+
+type decimalTests struct {
+ Valid []struct {
+ Description string `json:"description"`
+ BSON string `json:"bson"`
+ CanonicalBSON string `json:"canonical_bson"`
+ ExtJSON string `json:"extjson"`
+ CanonicalExtJSON string `json:"canonical_extjson"`
+ Lossy bool `json:"lossy"`
+ } `json:"valid"`
+
+ ParseErrors []struct {
+ Description string `json:"description"`
+ String string `json:"string"`
+ } `json:"parseErrors"`
+}
+
+func extJSONRepr(s string) string {
+ var value struct {
+ D struct {
+ Repr string `json:"$numberDecimal"`
+ } `json:"d"`
+ }
+ err := json.Unmarshal([]byte(s), &value)
+ if err != nil {
+ panic(err)
+ }
+ return value.D.Repr
+}
+
+func (s *S) TestDecimalTests(c *C) {
+ // These also conform to the spec and are used by Go elsewhere.
+ // (e.g. math/big won't parse "Infinity").
+ goStr := func(s string) string {
+ switch s {
+ case "Infinity":
+ return "Inf"
+ case "-Infinity":
+ return "-Inf"
+ }
+ return s
+ }
+
+ for _, testEntry := range decimalTestsJSON {
+ testFile := testEntry.file
+
+ var tests decimalTests
+ err := json.Unmarshal([]byte(testEntry.json), &tests)
+ c.Assert(err, IsNil)
+
+ for _, test := range tests.Valid {
+ c.Logf("Running %s test: %s", testFile, test.Description)
+
+ test.BSON = strings.ToLower(test.BSON)
+
+ // Unmarshal value from BSON data.
+			bsonData, err := hex.DecodeString(test.BSON)
+			c.Assert(err, IsNil)
+			var bsonValue struct{ D interface{} }
+			err = bson.Unmarshal(bsonData, &bsonValue)
+ c.Assert(err, IsNil)
+ dec128, ok := bsonValue.D.(bson.Decimal128)
+ c.Assert(ok, Equals, true)
+
+ // Extract ExtJSON representations (canonical and not).
+ extjRepr := extJSONRepr(test.ExtJSON)
+ cextjRepr := extjRepr
+ if test.CanonicalExtJSON != "" {
+ cextjRepr = extJSONRepr(test.CanonicalExtJSON)
+ }
+
+ wantRepr := goStr(cextjRepr)
+
+ // Generate canonical representation.
+ c.Assert(dec128.String(), Equals, wantRepr)
+
+ // Parse original canonical representation.
+ parsed, err := bson.ParseDecimal128(cextjRepr)
+ c.Assert(err, IsNil)
+ c.Assert(parsed.String(), Equals, wantRepr)
+
+ // Parse non-canonical representation.
+ parsed, err = bson.ParseDecimal128(extjRepr)
+ c.Assert(err, IsNil)
+ c.Assert(parsed.String(), Equals, wantRepr)
+
+ // Parse Go canonical representation (Inf vs. Infinity).
+ parsed, err = bson.ParseDecimal128(wantRepr)
+ c.Assert(err, IsNil)
+ c.Assert(parsed.String(), Equals, wantRepr)
+
+ // Marshal original value back into BSON data.
+ data, err := bson.Marshal(bsonValue)
+ c.Assert(err, IsNil)
+ c.Assert(hex.EncodeToString(data), Equals, test.BSON)
+
+ if test.Lossy {
+ continue
+ }
+
+ // Marshal the parsed canonical representation.
+ var parsedValue struct{ D interface{} }
+ parsedValue.D = parsed
+ data, err = bson.Marshal(parsedValue)
+ c.Assert(err, IsNil)
+ c.Assert(hex.EncodeToString(data), Equals, test.BSON)
+ }
+
+ for _, test := range tests.ParseErrors {
+ c.Logf("Running %s parse error test: %s (string %q)", testFile, test.Description, test.String)
+
+ _, err := bson.ParseDecimal128(test.String)
+ quoted := regexp.QuoteMeta(fmt.Sprintf("%q", test.String))
+ c.Assert(err, ErrorMatches, `cannot parse `+quoted+` as a decimal128`)
+ }
+ }
+}
+
+const decBenchNum = "9.999999999999999999999999999999999E+6144"
+
+func (s *S) BenchmarkDecimal128String(c *C) {
+ d, err := bson.ParseDecimal128(decBenchNum)
+ c.Assert(err, IsNil)
+ c.Assert(d.String(), Equals, decBenchNum)
+
+ c.ResetTimer()
+ for i := 0; i < c.N; i++ {
+ d.String()
+ }
+}
+
+func (s *S) BenchmarkDecimal128Parse(c *C) {
+ var err error
+ c.ResetTimer()
+ for i := 0; i < c.N; i++ {
+ _, err = bson.ParseDecimal128(decBenchNum)
+ }
+ if err != nil {
+ panic(err)
+ }
+}
+
+var decimalTestsJSON = []struct{ file, json string }{
+ {"decimal128-1.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "valid": [
+ {
+ "description": "Special - Canonical NaN",
+ "bson": "180000001364000000000000000000000000000000007C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}"
+ },
+ {
+ "description": "Special - Negative NaN",
+ "bson": "18000000136400000000000000000000000000000000FC00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Negative NaN",
+ "bson": "18000000136400000000000000000000000000000000FC00",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-NaN\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Canonical SNaN",
+ "bson": "180000001364000000000000000000000000000000007E00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Negative SNaN",
+ "bson": "18000000136400000000000000000000000000000000FE00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - NaN with a payload",
+ "bson": "180000001364001200000000000000000000000000007E00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Canonical Positive Infinity",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Special - Canonical Negative Infinity",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Special - Invalid representation treated as 0",
+ "bson": "180000001364000000000000000000000000000000106C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Invalid representation treated as -0",
+ "bson": "18000000136400DCBA9876543210DEADBEEF00000010EC00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Special - Invalid representation treated as 0E3",
+ "bson": "18000000136400FFFFFFFFFFFFFFFFFFFFFFFFFFFF116C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+3\"}}",
+ "lossy": true
+ },
+ {
+ "description": "Regular - Adjusted Exponent Limit",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3CF22F00",
+ "extjson": "{\"d\": { \"$numberDecimal\": \"0.000001234567890123456789012345678901234\" }}"
+ },
+ {
+ "description": "Regular - Smallest",
+ "bson": "18000000136400D204000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.001234\"}}"
+ },
+ {
+ "description": "Regular - Smallest with Trailing Zeros",
+ "bson": "1800000013640040EF5A07000000000000000000002A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00123400000\"}}"
+ },
+ {
+ "description": "Regular - 0.1",
+ "bson": "1800000013640001000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1\"}}"
+ },
+ {
+ "description": "Regular - 0.1234567890123456789012345678901234",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3CFC2F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1234567890123456789012345678901234\"}}"
+ },
+ {
+ "description": "Regular - 0",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "Regular - -0",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "Regular - -0.0",
+ "bson": "1800000013640000000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0\"}}"
+ },
+ {
+ "description": "Regular - 2",
+ "bson": "180000001364000200000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2\"}}"
+ },
+ {
+ "description": "Regular - 2.000",
+ "bson": "18000000136400D0070000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2.000\"}}"
+ },
+ {
+ "description": "Regular - Largest",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3C403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1234567890123456789012345678901234\"}}"
+ },
+ {
+ "description": "Scientific - Tiniest",
+ "bson": "18000000136400FFFFFFFF638E8D37C087ADBE09ED010000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"9.999999999999999999999999999999999E-6143\"}}"
+ },
+ {
+ "description": "Scientific - Tiny",
+ "bson": "180000001364000100000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6176\"}}"
+ },
+ {
+ "description": "Scientific - Negative Tiny",
+ "bson": "180000001364000100000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6176\"}}"
+ },
+ {
+ "description": "Scientific - Adjusted Exponent Limit",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3CF02F00",
+ "extjson": "{\"d\": { \"$numberDecimal\": \"1.234567890123456789012345678901234E-7\" }}"
+ },
+ {
+ "description": "Scientific - Fractional",
+ "bson": "1800000013640064000000000000000000000000002CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.00E-8\"}}"
+ },
+ {
+ "description": "Scientific - 0 with Exponent",
+ "bson": "180000001364000000000000000000000000000000205F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6000\"}}"
+ },
+ {
+ "description": "Scientific - 0 with Negative Exponent",
+ "bson": "1800000013640000000000000000000000000000007A2B00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-611\"}}"
+ },
+ {
+ "description": "Scientific - No Decimal with Signed Exponent",
+ "bson": "180000001364000100000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+3\"}}"
+ },
+ {
+ "description": "Scientific - Trailing Zero",
+ "bson": "180000001364001A04000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.050E+4\"}}"
+ },
+ {
+ "description": "Scientific - With Decimal",
+ "bson": "180000001364006900000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.05E+3\"}}"
+ },
+ {
+ "description": "Scientific - Full",
+ "bson": "18000000136400FFFFFFFFFFFFFFFFFFFFFFFFFFFF403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"5192296858534827628530496329220095\"}}"
+ },
+ {
+ "description": "Scientific - Large",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "Scientific - Largest",
+ "bson": "18000000136400FFFFFFFF638E8D37C087ADBE09EDFF5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"9.999999999999999999999999999999999E+6144\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Exponent Normalization",
+ "bson": "1800000013640064000000000000000000000000002CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-100E-10\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.00E-8\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Unsigned Positive Exponent",
+ "bson": "180000001364000100000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+3\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Lowercase Exponent Identifier",
+ "bson": "180000001364000100000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1e+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+3\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Long Significand with Exponent",
+ "bson": "1800000013640079D9E0F9763ADA429D0200000000583000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12345689012345789012345E+12\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.2345689012345789012345E+34\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Positive Sign",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3C403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+1234567890123456789012345678901234\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1234567890123456789012345678901234\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - Long Decimal String",
+ "bson": "180000001364000100000000000000000000000000722800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \".000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-999\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - nan",
+ "bson": "180000001364000000000000000000000000000000007C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"nan\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - nAn",
+ "bson": "180000001364000000000000000000000000000000007C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"nAn\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - +infinity",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+infinity\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - infinity",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"infinity\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - infiniTY",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"infiniTY\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - inf",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"inf\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - inF",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"inF\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - -infinity",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-infinity\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - -infiniTy",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-infiniTy\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - -Inf",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - -inf",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-inf\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Non-Canonical Parsing - -inF",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-inF\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "Rounded Subnormal number",
+ "bson": "180000001364000100000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10E-6177\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6176\"}}"
+ },
+ {
+ "description": "Clamped",
+ "bson": "180000001364000a00000000000000000000000000fe5f00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E6112\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+6112\"}}"
+ },
+ {
+ "description": "Exact rounding",
+ "bson": "18000000136400000000000a5bc138938d44c64d31cc3700",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E+999\"}}"
+ }
+ ]
+}
+`},
+
+ {"decimal128-2.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "valid": [
+ {
+ "description": "[decq021] Normality",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3C40B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1234567890123456789012345678901234\"}}"
+ },
+ {
+ "description": "[decq823] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400010000800000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-2147483649\"}}"
+ },
+ {
+ "description": "[decq822] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400000000800000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-2147483648\"}}"
+ },
+ {
+ "description": "[decq821] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FFFFFF7F0000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-2147483647\"}}"
+ },
+ {
+ "description": "[decq820] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FEFFFF7F0000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-2147483646\"}}"
+ },
+ {
+ "description": "[decq152] fold-downs (more below)",
+ "bson": "18000000136400393000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-12345\"}}"
+ },
+ {
+ "description": "[decq154] fold-downs (more below)",
+ "bson": "18000000136400D20400000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1234\"}}"
+ },
+ {
+ "description": "[decq006] derivative canonical plain strings",
+ "bson": "18000000136400EE0200000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-750\"}}"
+ },
+ {
+ "description": "[decq164] fold-downs (more below)",
+ "bson": "1800000013640039300000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-123.45\"}}"
+ },
+ {
+ "description": "[decq156] fold-downs (more below)",
+ "bson": "180000001364007B0000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-123\"}}"
+ },
+ {
+ "description": "[decq008] derivative canonical plain strings",
+ "bson": "18000000136400EE020000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-75.0\"}}"
+ },
+ {
+ "description": "[decq158] fold-downs (more below)",
+ "bson": "180000001364000C0000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-12\"}}"
+ },
+ {
+ "description": "[decq122] Nmax and similar",
+ "bson": "18000000136400FFFFFFFF638E8D37C087ADBE09EDFFDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.999999999999999999999999999999999E+6144\"}}"
+ },
+ {
+ "description": "[decq002] (mostly derived from the Strawman 4 document and examples)",
+ "bson": "18000000136400EE020000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-7.50\"}}"
+ },
+ {
+ "description": "[decq004] derivative canonical plain strings",
+ "bson": "18000000136400EE0200000000000000000000000042B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-7.50E+3\"}}"
+ },
+ {
+ "description": "[decq018] derivative canonical plain strings",
+ "bson": "18000000136400EE020000000000000000000000002EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-7.50E-7\"}}"
+ },
+ {
+ "description": "[decq125] Nmax and similar",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3CFEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.234567890123456789012345678901234E+6144\"}}"
+ },
+ {
+ "description": "[decq131] fold-downs (more below)",
+ "bson": "18000000136400000000807F1BCF85B27059C8A43CFEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.230000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq162] fold-downs (more below)",
+ "bson": "180000001364007B000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.23\"}}"
+ },
+ {
+ "description": "[decq176] Nmin and below",
+ "bson": "18000000136400010000000A5BC138938D44C64D31008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.000000000000000000000000000000001E-6143\"}}"
+ },
+ {
+ "description": "[decq174] Nmin and below",
+ "bson": "18000000136400000000000A5BC138938D44C64D31008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.000000000000000000000000000000000E-6143\"}}"
+ },
+ {
+ "description": "[decq133] fold-downs (more below)",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq160] fold-downs (more below)",
+ "bson": "18000000136400010000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1\"}}"
+ },
+ {
+ "description": "[decq172] Nmin and below",
+ "bson": "180000001364000100000000000000000000000000428000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6143\"}}"
+ },
+ {
+ "description": "[decq010] derivative canonical plain strings",
+ "bson": "18000000136400EE020000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.750\"}}"
+ },
+ {
+ "description": "[decq012] derivative canonical plain strings",
+ "bson": "18000000136400EE0200000000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0750\"}}"
+ },
+ {
+ "description": "[decq014] derivative canonical plain strings",
+ "bson": "18000000136400EE0200000000000000000000000034B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000750\"}}"
+ },
+ {
+ "description": "[decq016] derivative canonical plain strings",
+ "bson": "18000000136400EE0200000000000000000000000030B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000750\"}}"
+ },
+ {
+ "description": "[decq404] zeros",
+ "bson": "180000001364000000000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-6176\"}}"
+ },
+ {
+ "description": "[decq424] negative zeros",
+ "bson": "180000001364000000000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-6176\"}}"
+ },
+ {
+ "description": "[decq407] zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[decq427] negative zeros",
+ "bson": "1800000013640000000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00\"}}"
+ },
+ {
+ "description": "[decq409] zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[decq428] negative zeros",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "[decq700] Selected DPD codes",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[decq406] zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[decq426] negative zeros",
+ "bson": "1800000013640000000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00\"}}"
+ },
+ {
+ "description": "[decq410] zeros",
+ "bson": "180000001364000000000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+3\"}}"
+ },
+ {
+ "description": "[decq431] negative zeros",
+ "bson": "18000000136400000000000000000000000000000046B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+3\"}}"
+ },
+ {
+ "description": "[decq419] clamped zeros...",
+ "bson": "180000001364000000000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6111\"}}"
+ },
+ {
+ "description": "[decq432] negative zeros",
+ "bson": "180000001364000000000000000000000000000000FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6111\"}}"
+ },
+ {
+ "description": "[decq405] zeros",
+ "bson": "180000001364000000000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-6176\"}}"
+ },
+ {
+ "description": "[decq425] negative zeros",
+ "bson": "180000001364000000000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-6176\"}}"
+ },
+ {
+ "description": "[decq508] Specials",
+ "bson": "180000001364000000000000000000000000000000007800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"Infinity\"}}"
+ },
+ {
+ "description": "[decq528] Specials",
+ "bson": "18000000136400000000000000000000000000000000F800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-Infinity\"}}"
+ },
+ {
+ "description": "[decq541] Specials",
+ "bson": "180000001364000000000000000000000000000000007C00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"NaN\"}}"
+ },
+ {
+ "description": "[decq074] Nmin and below",
+ "bson": "18000000136400000000000A5BC138938D44C64D31000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E-6143\"}}"
+ },
+ {
+ "description": "[decq602] fold-down full sequence",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq604] fold-down full sequence",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000000E+6143\"}}"
+ },
+ {
+ "description": "[decq606] fold-down full sequence",
+ "bson": "1800000013640000000080264B91C02220BE377E00FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000000000E+6142\"}}"
+ },
+ {
+ "description": "[decq608] fold-down full sequence",
+ "bson": "1800000013640000000040EAED7446D09C2C9F0C00FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000E+6141\"}}"
+ },
+ {
+ "description": "[decq610] fold-down full sequence",
+ "bson": "18000000136400000000A0CA17726DAE0F1E430100FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000E+6140\"}}"
+ },
+ {
+ "description": "[decq612] fold-down full sequence",
+ "bson": "18000000136400000000106102253E5ECE4F200000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000000E+6139\"}}"
+ },
+ {
+ "description": "[decq614] fold-down full sequence",
+ "bson": "18000000136400000000E83C80D09F3C2E3B030000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000E+6138\"}}"
+ },
+ {
+ "description": "[decq616] fold-down full sequence",
+ "bson": "18000000136400000000E4D20CC8DCD2B752000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000E+6137\"}}"
+ },
+ {
+ "description": "[decq618] fold-down full sequence",
+ "bson": "180000001364000000004A48011416954508000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000E+6136\"}}"
+ },
+ {
+ "description": "[decq620] fold-down full sequence",
+ "bson": "18000000136400000000A1EDCCCE1BC2D300000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000E+6135\"}}"
+ },
+ {
+ "description": "[decq622] fold-down full sequence",
+ "bson": "18000000136400000080F64AE1C7022D1500000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000E+6134\"}}"
+ },
+ {
+ "description": "[decq624] fold-down full sequence",
+ "bson": "18000000136400000040B2BAC9E0191E0200000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000E+6133\"}}"
+ },
+ {
+ "description": "[decq626] fold-down full sequence",
+ "bson": "180000001364000000A0DEC5ADC935360000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000E+6132\"}}"
+ },
+ {
+ "description": "[decq628] fold-down full sequence",
+ "bson": "18000000136400000010632D5EC76B050000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000E+6131\"}}"
+ },
+ {
+ "description": "[decq630] fold-down full sequence",
+ "bson": "180000001364000000E8890423C78A000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000E+6130\"}}"
+ },
+ {
+ "description": "[decq632] fold-down full sequence",
+ "bson": "18000000136400000064A7B3B6E00D000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000E+6129\"}}"
+ },
+ {
+ "description": "[decq634] fold-down full sequence",
+ "bson": "1800000013640000008A5D78456301000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000E+6128\"}}"
+ },
+ {
+ "description": "[decq636] fold-down full sequence",
+ "bson": "180000001364000000C16FF2862300000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000E+6127\"}}"
+ },
+ {
+ "description": "[decq638] fold-down full sequence",
+ "bson": "180000001364000080C6A47E8D0300000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000E+6126\"}}"
+ },
+ {
+ "description": "[decq640] fold-down full sequence",
+ "bson": "1800000013640000407A10F35A0000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000E+6125\"}}"
+ },
+ {
+ "description": "[decq642] fold-down full sequence",
+ "bson": "1800000013640000A0724E18090000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000E+6124\"}}"
+ },
+ {
+ "description": "[decq644] fold-down full sequence",
+ "bson": "180000001364000010A5D4E8000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000E+6123\"}}"
+ },
+ {
+ "description": "[decq646] fold-down full sequence",
+ "bson": "1800000013640000E8764817000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000E+6122\"}}"
+ },
+ {
+ "description": "[decq648] fold-down full sequence",
+ "bson": "1800000013640000E40B5402000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000E+6121\"}}"
+ },
+ {
+ "description": "[decq650] fold-down full sequence",
+ "bson": "1800000013640000CA9A3B00000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000E+6120\"}}"
+ },
+ {
+ "description": "[decq652] fold-down full sequence",
+ "bson": "1800000013640000E1F50500000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000E+6119\"}}"
+ },
+ {
+ "description": "[decq654] fold-down full sequence",
+ "bson": "180000001364008096980000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000E+6118\"}}"
+ },
+ {
+ "description": "[decq656] fold-down full sequence",
+ "bson": "1800000013640040420F0000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000E+6117\"}}"
+ },
+ {
+ "description": "[decq658] fold-down full sequence",
+ "bson": "18000000136400A086010000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000E+6116\"}}"
+ },
+ {
+ "description": "[decq660] fold-down full sequence",
+ "bson": "180000001364001027000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000E+6115\"}}"
+ },
+ {
+ "description": "[decq662] fold-down full sequence",
+ "bson": "18000000136400E803000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000E+6114\"}}"
+ },
+ {
+ "description": "[decq664] fold-down full sequence",
+ "bson": "180000001364006400000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+6113\"}}"
+ },
+ {
+ "description": "[decq666] fold-down full sequence",
+ "bson": "180000001364000A00000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+6112\"}}"
+ },
+ {
+ "description": "[decq060] fold-downs (more below)",
+ "bson": "180000001364000100000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1\"}}"
+ },
+ {
+ "description": "[decq670] fold-down full sequence",
+ "bson": "180000001364000100000000000000000000000000FC5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6110\"}}"
+ },
+ {
+ "description": "[decq668] fold-down full sequence",
+ "bson": "180000001364000100000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6111\"}}"
+ },
+ {
+ "description": "[decq072] Nmin and below",
+ "bson": "180000001364000100000000000000000000000000420000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6143\"}}"
+ },
+ {
+ "description": "[decq076] Nmin and below",
+ "bson": "18000000136400010000000A5BC138938D44C64D31000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000001E-6143\"}}"
+ },
+ {
+ "description": "[decq036] fold-downs (more below)",
+ "bson": "18000000136400000000807F1BCF85B27059C8A43CFE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.230000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq062] fold-downs (more below)",
+ "bson": "180000001364007B000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.23\"}}"
+ },
+ {
+ "description": "[decq034] Nmax and similar",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3CFE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.234567890123456789012345678901234E+6144\"}}"
+ },
+ {
+ "description": "[decq441] exponent lengths",
+ "bson": "180000001364000700000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7\"}}"
+ },
+ {
+ "description": "[decq449] exponent lengths",
+ "bson": "1800000013640007000000000000000000000000001E5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+5999\"}}"
+ },
+ {
+ "description": "[decq447] exponent lengths",
+ "bson": "1800000013640007000000000000000000000000000E3800",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+999\"}}"
+ },
+ {
+ "description": "[decq445] exponent lengths",
+ "bson": "180000001364000700000000000000000000000000063100",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+99\"}}"
+ },
+ {
+ "description": "[decq443] exponent lengths",
+ "bson": "180000001364000700000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+9\"}}"
+ },
+ {
+ "description": "[decq842] VG testcase",
+ "bson": "180000001364000000FED83F4E7C9FE4E269E38A5BCD1700",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7.049000000000010795488000000000000E-3097\"}}"
+ },
+ {
+ "description": "[decq841] VG testcase",
+ "bson": "180000001364000000203B9DB5056F000000000000002400",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"8.000000000000000000E-1550\"}}"
+ },
+ {
+ "description": "[decq840] VG testcase",
+ "bson": "180000001364003C17258419D710C42F0000000000002400",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"8.81125000000001349436E-1548\"}}"
+ },
+ {
+ "description": "[decq701] Selected DPD codes",
+ "bson": "180000001364000900000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"9\"}}"
+ },
+ {
+ "description": "[decq032] Nmax and similar",
+ "bson": "18000000136400FFFFFFFF638E8D37C087ADBE09EDFF5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"9.999999999999999999999999999999999E+6144\"}}"
+ },
+ {
+ "description": "[decq702] Selected DPD codes",
+ "bson": "180000001364000A00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10\"}}"
+ },
+ {
+ "description": "[decq057] fold-downs (more below)",
+ "bson": "180000001364000C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12\"}}"
+ },
+ {
+ "description": "[decq703] Selected DPD codes",
+ "bson": "180000001364001300000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"19\"}}"
+ },
+ {
+ "description": "[decq704] Selected DPD codes",
+ "bson": "180000001364001400000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"20\"}}"
+ },
+ {
+ "description": "[decq705] Selected DPD codes",
+ "bson": "180000001364001D00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"29\"}}"
+ },
+ {
+ "description": "[decq706] Selected DPD codes",
+ "bson": "180000001364001E00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"30\"}}"
+ },
+ {
+ "description": "[decq707] Selected DPD codes",
+ "bson": "180000001364002700000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"39\"}}"
+ },
+ {
+ "description": "[decq708] Selected DPD codes",
+ "bson": "180000001364002800000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"40\"}}"
+ },
+ {
+ "description": "[decq709] Selected DPD codes",
+ "bson": "180000001364003100000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"49\"}}"
+ },
+ {
+ "description": "[decq710] Selected DPD codes",
+ "bson": "180000001364003200000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"50\"}}"
+ },
+ {
+ "description": "[decq711] Selected DPD codes",
+ "bson": "180000001364003B00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"59\"}}"
+ },
+ {
+ "description": "[decq712] Selected DPD codes",
+ "bson": "180000001364003C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"60\"}}"
+ },
+ {
+ "description": "[decq713] Selected DPD codes",
+ "bson": "180000001364004500000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"69\"}}"
+ },
+ {
+ "description": "[decq714] Selected DPD codes",
+ "bson": "180000001364004600000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"70\"}}"
+ },
+ {
+ "description": "[decq715] Selected DPD codes",
+ "bson": "180000001364004700000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"71\"}}"
+ },
+ {
+ "description": "[decq716] Selected DPD codes",
+ "bson": "180000001364004800000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"72\"}}"
+ },
+ {
+ "description": "[decq717] Selected DPD codes",
+ "bson": "180000001364004900000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"73\"}}"
+ },
+ {
+ "description": "[decq718] Selected DPD codes",
+ "bson": "180000001364004A00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"74\"}}"
+ },
+ {
+ "description": "[decq719] Selected DPD codes",
+ "bson": "180000001364004B00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"75\"}}"
+ },
+ {
+ "description": "[decq720] Selected DPD codes",
+ "bson": "180000001364004C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"76\"}}"
+ },
+ {
+ "description": "[decq721] Selected DPD codes",
+ "bson": "180000001364004D00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"77\"}}"
+ },
+ {
+ "description": "[decq722] Selected DPD codes",
+ "bson": "180000001364004E00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"78\"}}"
+ },
+ {
+ "description": "[decq723] Selected DPD codes",
+ "bson": "180000001364004F00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"79\"}}"
+ },
+ {
+ "description": "[decq056] fold-downs (more below)",
+ "bson": "180000001364007B00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"123\"}}"
+ },
+ {
+ "description": "[decq064] fold-downs (more below)",
+ "bson": "1800000013640039300000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"123.45\"}}"
+ },
+ {
+ "description": "[decq732] Selected DPD codes",
+ "bson": "180000001364000802000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"520\"}}"
+ },
+ {
+ "description": "[decq733] Selected DPD codes",
+ "bson": "180000001364000902000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"521\"}}"
+ },
+ {
+ "description": "[decq740] DPD: one of each of the huffman groups",
+ "bson": "180000001364000903000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"777\"}}"
+ },
+ {
+ "description": "[decq741] DPD: one of each of the huffman groups",
+ "bson": "180000001364000A03000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"778\"}}"
+ },
+ {
+ "description": "[decq742] DPD: one of each of the huffman groups",
+ "bson": "180000001364001303000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"787\"}}"
+ },
+ {
+ "description": "[decq746] DPD: one of each of the huffman groups",
+ "bson": "180000001364001F03000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"799\"}}"
+ },
+ {
+ "description": "[decq743] DPD: one of each of the huffman groups",
+ "bson": "180000001364006D03000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"877\"}}"
+ },
+ {
+ "description": "[decq753] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "180000001364007803000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"888\"}}"
+ },
+ {
+ "description": "[decq754] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "180000001364007903000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"889\"}}"
+ },
+ {
+ "description": "[decq760] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "180000001364008203000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"898\"}}"
+ },
+ {
+ "description": "[decq764] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "180000001364008303000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"899\"}}"
+ },
+ {
+ "description": "[decq745] DPD: one of each of the huffman groups",
+ "bson": "18000000136400D303000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"979\"}}"
+ },
+ {
+ "description": "[decq770] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "18000000136400DC03000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"988\"}}"
+ },
+ {
+ "description": "[decq774] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "18000000136400DD03000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"989\"}}"
+ },
+ {
+ "description": "[decq730] Selected DPD codes",
+ "bson": "18000000136400E203000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"994\"}}"
+ },
+ {
+ "description": "[decq731] Selected DPD codes",
+ "bson": "18000000136400E303000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"995\"}}"
+ },
+ {
+ "description": "[decq744] DPD: one of each of the huffman groups",
+ "bson": "18000000136400E503000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"997\"}}"
+ },
+ {
+ "description": "[decq780] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "18000000136400E603000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"998\"}}"
+ },
+ {
+ "description": "[decq787] DPD all-highs cases (includes the 24 redundant codes)",
+ "bson": "18000000136400E703000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"999\"}}"
+ },
+ {
+ "description": "[decq053] fold-downs (more below)",
+ "bson": "18000000136400D204000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1234\"}}"
+ },
+ {
+ "description": "[decq052] fold-downs (more below)",
+ "bson": "180000001364003930000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12345\"}}"
+ },
+ {
+ "description": "[decq792] Miscellaneous (testers' queries, etc.)",
+ "bson": "180000001364003075000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"30000\"}}"
+ },
+ {
+ "description": "[decq793] Miscellaneous (testers' queries, etc.)",
+ "bson": "1800000013640090940D0000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"890000\"}}"
+ },
+ {
+ "description": "[decq824] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FEFFFF7F00000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2147483646\"}}"
+ },
+ {
+ "description": "[decq825] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FFFFFF7F00000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2147483647\"}}"
+ },
+ {
+ "description": "[decq826] values around [u]int32 edges (zeros done earlier)",
+ "bson": "180000001364000000008000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2147483648\"}}"
+ },
+ {
+ "description": "[decq827] values around [u]int32 edges (zeros done earlier)",
+ "bson": "180000001364000100008000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2147483649\"}}"
+ },
+ {
+ "description": "[decq828] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FEFFFFFF00000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"4294967294\"}}"
+ },
+ {
+ "description": "[decq829] values around [u]int32 edges (zeros done earlier)",
+ "bson": "18000000136400FFFFFFFF00000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"4294967295\"}}"
+ },
+ {
+ "description": "[decq830] values around [u]int32 edges (zeros done earlier)",
+ "bson": "180000001364000000000001000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"4294967296\"}}"
+ },
+ {
+ "description": "[decq831] values around [u]int32 edges (zeros done earlier)",
+ "bson": "180000001364000100000001000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"4294967297\"}}"
+ },
+ {
+ "description": "[decq022] Normality",
+ "bson": "18000000136400C7711CC7B548F377DC80A131C836403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1111111111111111111111111111111111\"}}"
+ },
+ {
+ "description": "[decq020] Normality",
+ "bson": "18000000136400F2AF967ED05C82DE3297FF6FDE3C403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1234567890123456789012345678901234\"}}"
+ },
+ {
+ "description": "[decq550] Specials",
+ "bson": "18000000136400FFFFFFFF638E8D37C087ADBE09ED413000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"9999999999999999999999999999999999\"}}"
+ }
+ ]
+}
+`},
+
+ {"decimal128-3.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "valid": [
+ {
+ "description": "[basx066] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE0000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-00345678.5432\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-345678.5432\"}}"
+ },
+ {
+ "description": "[basx065] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE0000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0345678.5432\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-345678.5432\"}}"
+ },
+ {
+ "description": "[basx064] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE0000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-345678.5432\"}}"
+ },
+ {
+ "description": "[basx041] strings without E cannot generate E in result",
+ "bson": "180000001364004C0000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-76\"}}"
+ },
+ {
+ "description": "[basx027] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000F270000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.999\"}}"
+ },
+ {
+ "description": "[basx026] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364009F230000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.119\"}}"
+ },
+ {
+ "description": "[basx025] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364008F030000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.11\"}}"
+ },
+ {
+ "description": "[basx024] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364005B000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.1\"}}"
+ },
+ {
+ "description": "[dqbsr531] negatives (Rounded)",
+ "bson": "1800000013640099761CC7B548F377DC80A131C836FEAF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.1111111111111111111111111111123450\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.111111111111111111111111111112345\"}}"
+ },
+ {
+ "description": "[basx022] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000A000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.0\"}}"
+ },
+ {
+ "description": "[basx021] conform to rules and exponent will be in permitted range).",
+ "bson": "18000000136400010000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1\"}}"
+ },
+ {
+ "description": "[basx601] Zeros",
+ "bson": "1800000013640000000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-9\"}}"
+ },
+ {
+ "description": "[basx622] Zeros",
+ "bson": "1800000013640000000000000000000000000000002EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-9\"}}"
+ },
+ {
+ "description": "[basx602] Zeros",
+ "bson": "180000001364000000000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-8\"}}"
+ },
+ {
+ "description": "[basx621] Zeros",
+ "bson": "18000000136400000000000000000000000000000030B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-8\"}}"
+ },
+ {
+ "description": "[basx603] Zeros",
+ "bson": "180000001364000000000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-7\"}}"
+ },
+ {
+ "description": "[basx620] Zeros",
+ "bson": "18000000136400000000000000000000000000000032B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-7\"}}"
+ },
+ {
+ "description": "[basx604] Zeros",
+ "bson": "180000001364000000000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000\"}}"
+ },
+ {
+ "description": "[basx619] Zeros",
+ "bson": "18000000136400000000000000000000000000000034B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000000\"}}"
+ },
+ {
+ "description": "[basx605] Zeros",
+ "bson": "180000001364000000000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000\"}}"
+ },
+ {
+ "description": "[basx618] Zeros",
+ "bson": "18000000136400000000000000000000000000000036B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000\"}}"
+ },
+ {
+ "description": "[basx680] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"000000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx606] Zeros",
+ "bson": "180000001364000000000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000\"}}"
+ },
+ {
+ "description": "[basx617] Zeros",
+ "bson": "18000000136400000000000000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0000\"}}"
+ },
+ {
+ "description": "[basx681] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"00000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx686] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+00000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx687] Zeros",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-00000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "[basx019] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640000000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-00.00\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00\"}}"
+ },
+ {
+ "description": "[basx607] Zeros",
+ "bson": "1800000013640000000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000\"}}"
+ },
+ {
+ "description": "[basx616] Zeros",
+ "bson": "1800000013640000000000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000\"}}"
+ },
+ {
+ "description": "[basx682] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx155] Numbers with E",
+ "bson": "1800000013640000000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000e+0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000\"}}"
+ },
+ {
+ "description": "[basx130] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000\"}}"
+ },
+ {
+ "description": "[basx290] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0000\"}}"
+ },
+ {
+ "description": "[basx131] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000\"}}"
+ },
+ {
+ "description": "[basx291] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000036B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000\"}}"
+ },
+ {
+ "description": "[basx132] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000\"}}"
+ },
+ {
+ "description": "[basx292] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000034B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000000\"}}"
+ },
+ {
+ "description": "[basx133] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-7\"}}"
+ },
+ {
+ "description": "[basx293] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000032B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-7\"}}"
+ },
+ {
+ "description": "[basx608] Zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[basx615] Zeros",
+ "bson": "1800000013640000000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00\"}}"
+ },
+ {
+ "description": "[basx683] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"000.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx630] Zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[basx670] Zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[basx631] Zeros",
+ "bson": "1800000013640000000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0\"}}"
+ },
+ {
+ "description": "[basx671] Zeros",
+ "bson": "1800000013640000000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000\"}}"
+ },
+ {
+ "description": "[basx134] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000\"}}"
+ },
+ {
+ "description": "[basx294] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0000\"}}"
+ },
+ {
+ "description": "[basx632] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx672] Zeros",
+ "bson": "180000001364000000000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000\"}}"
+ },
+ {
+ "description": "[basx135] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000\"}}"
+ },
+ {
+ "description": "[basx295] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000036B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000\"}}"
+ },
+ {
+ "description": "[basx633] Zeros",
+ "bson": "180000001364000000000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+1\"}}"
+ },
+ {
+ "description": "[basx673] Zeros",
+ "bson": "180000001364000000000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000\"}}"
+ },
+ {
+ "description": "[basx136] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000\"}}"
+ },
+ {
+ "description": "[basx674] Zeros",
+ "bson": "180000001364000000000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000\"}}"
+ },
+ {
+ "description": "[basx634] Zeros",
+ "bson": "180000001364000000000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+2\"}}"
+ },
+ {
+ "description": "[basx137] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-7\"}}"
+ },
+ {
+ "description": "[basx635] Zeros",
+ "bson": "180000001364000000000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+3\"}}"
+ },
+ {
+ "description": "[basx675] Zeros",
+ "bson": "180000001364000000000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-7\"}}"
+ },
+ {
+ "description": "[basx636] Zeros",
+ "bson": "180000001364000000000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+4\"}}"
+ },
+ {
+ "description": "[basx676] Zeros",
+ "bson": "180000001364000000000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-8\"}}"
+ },
+ {
+ "description": "[basx637] Zeros",
+ "bson": "1800000013640000000000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+5\"}}"
+ },
+ {
+ "description": "[basx677] Zeros",
+ "bson": "1800000013640000000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-9\"}}"
+ },
+ {
+ "description": "[basx638] Zeros",
+ "bson": "1800000013640000000000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6\"}}"
+ },
+ {
+ "description": "[basx678] Zeros",
+ "bson": "1800000013640000000000000000000000000000002C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-10\"}}"
+ },
+ {
+ "description": "[basx149] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"000E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx639] Zeros",
+ "bson": "1800000013640000000000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+7\"}}"
+ },
+ {
+ "description": "[basx679] Zeros",
+ "bson": "1800000013640000000000000000000000000000002A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00E-9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-11\"}}"
+ },
+ {
+ "description": "[basx063] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE00000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+00345678.5432\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.5432\"}}"
+ },
+ {
+ "description": "[basx018] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640000000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0\"}}"
+ },
+ {
+ "description": "[basx609] Zeros",
+ "bson": "1800000013640000000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0\"}}"
+ },
+ {
+ "description": "[basx614] Zeros",
+ "bson": "1800000013640000000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0\"}}"
+ },
+ {
+ "description": "[basx684] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"00.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx640] Zeros",
+ "bson": "1800000013640000000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0\"}}"
+ },
+ {
+ "description": "[basx660] Zeros",
+ "bson": "1800000013640000000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0\"}}"
+ },
+ {
+ "description": "[basx641] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx661] Zeros",
+ "bson": "1800000013640000000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00\"}}"
+ },
+ {
+ "description": "[basx296] some more negative zeros [systematic tests below]",
+ "bson": "1800000013640000000000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000\"}}"
+ },
+ {
+ "description": "[basx642] Zeros",
+ "bson": "180000001364000000000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+1\"}}"
+ },
+ {
+ "description": "[basx662] Zeros",
+ "bson": "1800000013640000000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000\"}}"
+ },
+ {
+ "description": "[basx297] some more negative zeros [systematic tests below]",
+ "bson": "18000000136400000000000000000000000000000038B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0000\"}}"
+ },
+ {
+ "description": "[basx643] Zeros",
+ "bson": "180000001364000000000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+2\"}}"
+ },
+ {
+ "description": "[basx663] Zeros",
+ "bson": "180000001364000000000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000\"}}"
+ },
+ {
+ "description": "[basx644] Zeros",
+ "bson": "180000001364000000000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+3\"}}"
+ },
+ {
+ "description": "[basx664] Zeros",
+ "bson": "180000001364000000000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000\"}}"
+ },
+ {
+ "description": "[basx645] Zeros",
+ "bson": "180000001364000000000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+4\"}}"
+ },
+ {
+ "description": "[basx665] Zeros",
+ "bson": "180000001364000000000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000\"}}"
+ },
+ {
+ "description": "[basx646] Zeros",
+ "bson": "1800000013640000000000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+5\"}}"
+ },
+ {
+ "description": "[basx666] Zeros",
+ "bson": "180000001364000000000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-7\"}}"
+ },
+ {
+ "description": "[basx647] Zeros",
+ "bson": "1800000013640000000000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6\"}}"
+ },
+ {
+ "description": "[basx667] Zeros",
+ "bson": "180000001364000000000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-8\"}}"
+ },
+ {
+ "description": "[basx648] Zeros",
+ "bson": "1800000013640000000000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+7\"}}"
+ },
+ {
+ "description": "[basx668] Zeros",
+ "bson": "1800000013640000000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-9\"}}"
+ },
+ {
+ "description": "[basx160] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"00E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx161] Numbers with E",
+ "bson": "1800000013640000000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"00E-9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-9\"}}"
+ },
+ {
+ "description": "[basx649] Zeros",
+ "bson": "180000001364000000000000000000000000000000503000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+8\"}}"
+ },
+ {
+ "description": "[basx669] Zeros",
+ "bson": "1800000013640000000000000000000000000000002C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0E-9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-10\"}}"
+ },
+ {
+ "description": "[basx062] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE00000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+0345678.5432\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.5432\"}}"
+ },
+ {
+ "description": "[basx001] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx017] conform to rules and exponent will be in permitted range).",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "[basx611] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx613] Zeros",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "[basx685] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx688] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+0.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx689] Zeros",
+ "bson": "18000000136400000000000000000000000000000040B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0\"}}"
+ },
+ {
+ "description": "[basx650] Zeros",
+ "bson": "180000001364000000000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0\"}}"
+ },
+ {
+ "description": "[basx651] Zeros",
+ "bson": "180000001364000000000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+1\"}}"
+ },
+ {
+ "description": "[basx298] some more negative zeros [systematic tests below]",
+ "bson": "1800000013640000000000000000000000000000003CB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00\"}}"
+ },
+ {
+ "description": "[basx652] Zeros",
+ "bson": "180000001364000000000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+2\"}}"
+ },
+ {
+ "description": "[basx299] some more negative zeros [systematic tests below]",
+ "bson": "1800000013640000000000000000000000000000003AB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000\"}}"
+ },
+ {
+ "description": "[basx653] Zeros",
+ "bson": "180000001364000000000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+3\"}}"
+ },
+ {
+ "description": "[basx654] Zeros",
+ "bson": "180000001364000000000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+4\"}}"
+ },
+ {
+ "description": "[basx655] Zeros",
+ "bson": "1800000013640000000000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+5\"}}"
+ },
+ {
+ "description": "[basx656] Zeros",
+ "bson": "1800000013640000000000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6\"}}"
+ },
+ {
+ "description": "[basx657] Zeros",
+ "bson": "1800000013640000000000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+7\"}}"
+ },
+ {
+ "description": "[basx658] Zeros",
+ "bson": "180000001364000000000000000000000000000000503000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+8\"}}"
+ },
+ {
+ "description": "[basx138] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+0E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx139] Numbers with E",
+ "bson": "18000000136400000000000000000000000000000052B000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+9\"}}"
+ },
+ {
+ "description": "[basx144] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx154] Numbers with E",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx659] Zeros",
+ "bson": "180000001364000000000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+9\"}}"
+ },
+ {
+ "description": "[basx042] strings without E cannot generate E in result",
+ "bson": "18000000136400FC040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+12.76\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.76\"}}"
+ },
+ {
+ "description": "[basx143] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+1E+009\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx061] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE00000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+345678.5432\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.5432\"}}"
+ },
+ {
+ "description": "[basx036] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640015CD5B0700000000000000000000203000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000000123456789\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.23456789E-8\"}}"
+ },
+ {
+ "description": "[basx035] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640015CD5B0700000000000000000000223000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000123456789\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.23456789E-7\"}}"
+ },
+ {
+ "description": "[basx034] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640015CD5B0700000000000000000000243000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000123456789\"}}"
+ },
+ {
+ "description": "[basx053] strings without E cannot generate E in result",
+ "bson": "180000001364003200000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000050\"}}"
+ },
+ {
+ "description": "[basx033] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640015CD5B0700000000000000000000263000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000123456789\"}}"
+ },
+ {
+ "description": "[basx016] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000C000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.012\"}}"
+ },
+ {
+ "description": "[basx015] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364007B000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.123\"}}"
+ },
+ {
+ "description": "[basx037] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640078DF0D8648700000000000000000223000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.123456789012344\"}}"
+ },
+ {
+ "description": "[basx038] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640079DF0D8648700000000000000000223000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.123456789012345\"}}"
+ },
+ {
+ "description": "[basx250] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx257] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx256] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.01265\"}}"
+ },
+ {
+ "description": "[basx258] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx251] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000103000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-21\"}}"
+ },
+ {
+ "description": "[basx263] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000603000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+19\"}}"
+ },
+ {
+ "description": "[basx255] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.001265\"}}"
+ },
+ {
+ "description": "[basx259] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx254] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0001265\"}}"
+ },
+ {
+ "description": "[basx260] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx253] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00001265\"}}"
+ },
+ {
+ "description": "[basx261] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx252] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000283000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-9\"}}"
+ },
+ {
+ "description": "[basx262] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+7\"}}"
+ },
+ {
+ "description": "[basx159] Numbers with E",
+ "bson": "1800000013640049000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.73e-7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7.3E-8\"}}"
+ },
+ {
+ "description": "[basx004] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640064000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00\"}}"
+ },
+ {
+ "description": "[basx003] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000A000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0\"}}"
+ },
+ {
+ "description": "[basx002] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000100000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1\"}}"
+ },
+ {
+ "description": "[basx148] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+009\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx153] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E009\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx141] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1e+09\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx146] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+09\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx151] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1e09\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx142] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000F43000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+90\"}}"
+ },
+ {
+ "description": "[basx147] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000F43000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1e+90\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+90\"}}"
+ },
+ {
+ "description": "[basx152] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000F43000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E90\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+90\"}}"
+ },
+ {
+ "description": "[basx140] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx150] Numbers with E",
+ "bson": "180000001364000100000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+9\"}}"
+ },
+ {
+ "description": "[basx014] conform to rules and exponent will be in permitted range).",
+ "bson": "18000000136400D2040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.234\"}}"
+ },
+ {
+ "description": "[basx170] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx177] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx176] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx178] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx171] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000123000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-20\"}}"
+ },
+ {
+ "description": "[basx183] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000623000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+20\"}}"
+ },
+ {
+ "description": "[basx175] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.01265\"}}"
+ },
+ {
+ "description": "[basx179] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx174] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.001265\"}}"
+ },
+ {
+ "description": "[basx180] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx173] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0001265\"}}"
+ },
+ {
+ "description": "[basx181] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+4\"}}"
+ },
+ {
+ "description": "[basx172] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000002A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-8\"}}"
+ },
+ {
+ "description": "[basx182] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+8\"}}"
+ },
+ {
+ "description": "[basx157] Numbers with E",
+ "bson": "180000001364000400000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"4E+9\"}}"
+ },
+ {
+ "description": "[basx067] examples",
+ "bson": "180000001364000500000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"5E-6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000005\"}}"
+ },
+ {
+ "description": "[basx069] examples",
+ "bson": "180000001364000500000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"5E-7\"}}"
+ },
+ {
+ "description": "[basx385] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7\"}}"
+ },
+ {
+ "description": "[basx365] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000543000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E10\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+10\"}}"
+ },
+ {
+ "description": "[basx405] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000002C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-10\"}}"
+ },
+ {
+ "description": "[basx363] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000563000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E11\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+11\"}}"
+ },
+ {
+ "description": "[basx407] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000002A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-11\"}}"
+ },
+ {
+ "description": "[basx361] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000583000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E12\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+12\"}}"
+ },
+ {
+ "description": "[basx409] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000283000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-12\"}}"
+ },
+ {
+ "description": "[basx411] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000263000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-13\"}}"
+ },
+ {
+ "description": "[basx383] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+1\"}}"
+ },
+ {
+ "description": "[basx387] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.7\"}}"
+ },
+ {
+ "description": "[basx381] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+2\"}}"
+ },
+ {
+ "description": "[basx389] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.07\"}}"
+ },
+ {
+ "description": "[basx379] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+3\"}}"
+ },
+ {
+ "description": "[basx391] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.007\"}}"
+ },
+ {
+ "description": "[basx377] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+4\"}}"
+ },
+ {
+ "description": "[basx393] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0007\"}}"
+ },
+ {
+ "description": "[basx375] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+5\"}}"
+ },
+ {
+ "description": "[basx395] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00007\"}}"
+ },
+ {
+ "description": "[basx373] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+6\"}}"
+ },
+ {
+ "description": "[basx397] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000007\"}}"
+ },
+ {
+ "description": "[basx371] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+7\"}}"
+ },
+ {
+ "description": "[basx399] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-7\"}}"
+ },
+ {
+ "description": "[basx369] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000503000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+8\"}}"
+ },
+ {
+ "description": "[basx401] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-8\"}}"
+ },
+ {
+ "description": "[basx367] Engineering notation tests",
+ "bson": "180000001364000700000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"7E+9\"}}"
+ },
+ {
+ "description": "[basx403] Engineering notation tests",
+ "bson": "1800000013640007000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"7E-9\"}}"
+ },
+ {
+ "description": "[basx007] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640064000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.0\"}}"
+ },
+ {
+ "description": "[basx005] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364000A00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10\"}}"
+ },
+ {
+ "description": "[basx165] Numbers with E",
+ "bson": "180000001364000A00000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10E+009\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+10\"}}"
+ },
+ {
+ "description": "[basx163] Numbers with E",
+ "bson": "180000001364000A00000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10E+09\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+10\"}}"
+ },
+ {
+ "description": "[basx325] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"10\"}}"
+ },
+ {
+ "description": "[basx305] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000543000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e10\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+11\"}}"
+ },
+ {
+ "description": "[basx345] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000002C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-10\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-9\"}}"
+ },
+ {
+ "description": "[basx303] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000563000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e11\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+12\"}}"
+ },
+ {
+ "description": "[basx347] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000002A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-11\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-10\"}}"
+ },
+ {
+ "description": "[basx301] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000583000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e12\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+13\"}}"
+ },
+ {
+ "description": "[basx349] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000283000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-12\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-11\"}}"
+ },
+ {
+ "description": "[basx351] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000263000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-13\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-12\"}}"
+ },
+ {
+ "description": "[basx323] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+2\"}}"
+ },
+ {
+ "description": "[basx327] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0\"}}"
+ },
+ {
+ "description": "[basx321] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+3\"}}"
+ },
+ {
+ "description": "[basx329] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.10\"}}"
+ },
+ {
+ "description": "[basx319] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+4\"}}"
+ },
+ {
+ "description": "[basx331] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.010\"}}"
+ },
+ {
+ "description": "[basx317] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+5\"}}"
+ },
+ {
+ "description": "[basx333] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0010\"}}"
+ },
+ {
+ "description": "[basx315] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000004A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+6\"}}"
+ },
+ {
+ "description": "[basx335] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00010\"}}"
+ },
+ {
+ "description": "[basx313] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+7\"}}"
+ },
+ {
+ "description": "[basx337] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-6\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000010\"}}"
+ },
+ {
+ "description": "[basx311] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+8\"}}"
+ },
+ {
+ "description": "[basx339] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000010\"}}"
+ },
+ {
+ "description": "[basx309] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000503000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+9\"}}"
+ },
+ {
+ "description": "[basx341] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-7\"}}"
+ },
+ {
+ "description": "[basx164] Numbers with E",
+ "bson": "180000001364000A00000000000000000000000000F43000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e+90\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+91\"}}"
+ },
+ {
+ "description": "[basx162] Numbers with E",
+ "bson": "180000001364000A00000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+10\"}}"
+ },
+ {
+ "description": "[basx307] Engineering notation tests",
+ "bson": "180000001364000A00000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+10\"}}"
+ },
+ {
+ "description": "[basx343] Engineering notation tests",
+ "bson": "180000001364000A000000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10e-9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-8\"}}"
+ },
+ {
+ "description": "[basx008] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640065000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.1\"}}"
+ },
+ {
+ "description": "[basx009] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640068000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.4\"}}"
+ },
+ {
+ "description": "[basx010] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640069000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.5\"}}"
+ },
+ {
+ "description": "[basx011] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364006A000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.6\"}}"
+ },
+ {
+ "description": "[basx012] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364006D000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"10.9\"}}"
+ },
+ {
+ "description": "[basx013] conform to rules and exponent will be in permitted range).",
+ "bson": "180000001364006E000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"11.0\"}}"
+ },
+ {
+ "description": "[basx040] strings without E cannot generate E in result",
+ "bson": "180000001364000C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12\"}}"
+ },
+ {
+ "description": "[basx190] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx197] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx196] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx198] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx191] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000143000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-19\"}}"
+ },
+ {
+ "description": "[basx203] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000643000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+21\"}}"
+ },
+ {
+ "description": "[basx195] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx199] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx194] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.01265\"}}"
+ },
+ {
+ "description": "[basx200] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+4\"}}"
+ },
+ {
+ "description": "[basx193] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.001265\"}}"
+ },
+ {
+ "description": "[basx201] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+5\"}}"
+ },
+ {
+ "description": "[basx192] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000002C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-7\"}}"
+ },
+ {
+ "description": "[basx202] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000004C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+9\"}}"
+ },
+ {
+ "description": "[basx044] strings without E cannot generate E in result",
+ "bson": "18000000136400FC040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"012.76\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.76\"}}"
+ },
+ {
+ "description": "[basx042] strings without E cannot generate E in result",
+ "bson": "18000000136400FC040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12.76\"}}"
+ },
+ {
+ "description": "[basx046] strings without E cannot generate E in result",
+ "bson": "180000001364001100000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"17.\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"17\"}}"
+ },
+ {
+ "description": "[basx049] strings without E cannot generate E in result",
+ "bson": "180000001364002C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0044\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"44\"}}"
+ },
+ {
+ "description": "[basx048] strings without E cannot generate E in result",
+ "bson": "180000001364002C00000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"044\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"44\"}}"
+ },
+ {
+ "description": "[basx158] Numbers with E",
+ "bson": "180000001364002C00000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"44E+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"4.4E+10\"}}"
+ },
+ {
+ "description": "[basx068] examples",
+ "bson": "180000001364003200000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"50E-7\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000050\"}}"
+ },
+ {
+ "description": "[basx169] Numbers with E",
+ "bson": "180000001364006400000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"100e+009\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+11\"}}"
+ },
+ {
+ "description": "[basx167] Numbers with E",
+ "bson": "180000001364006400000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"100e+09\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+11\"}}"
+ },
+ {
+ "description": "[basx168] Numbers with E",
+ "bson": "180000001364006400000000000000000000000000F43000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"100E+90\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+92\"}}"
+ },
+ {
+ "description": "[basx166] Numbers with E",
+ "bson": "180000001364006400000000000000000000000000523000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"100e+9\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+11\"}}"
+ },
+ {
+ "description": "[basx210] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx217] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx216] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx218] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx211] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000163000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-18\"}}"
+ },
+ {
+ "description": "[basx223] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000663000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+22\"}}"
+ },
+ {
+ "description": "[basx215] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx219] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+4\"}}"
+ },
+ {
+ "description": "[basx214] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx220] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+5\"}}"
+ },
+ {
+ "description": "[basx213] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.01265\"}}"
+ },
+ {
+ "description": "[basx221] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+6\"}}"
+ },
+ {
+ "description": "[basx212] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000002E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000001265\"}}"
+ },
+ {
+ "description": "[basx222] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000004E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+10\"}}"
+ },
+ {
+ "description": "[basx006] conform to rules and exponent will be in permitted range).",
+ "bson": "18000000136400E803000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1000\"}}"
+ },
+ {
+ "description": "[basx230] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx237] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1265\"}}"
+ },
+ {
+ "description": "[basx236] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"126.5\"}}"
+ },
+ {
+ "description": "[basx238] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000423000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+1\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+4\"}}"
+ },
+ {
+ "description": "[basx231] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000183000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E-17\"}}"
+ },
+ {
+ "description": "[basx243] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000683000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+20\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+23\"}}"
+ },
+ {
+ "description": "[basx235] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.65\"}}"
+ },
+ {
+ "description": "[basx239] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000443000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+2\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+5\"}}"
+ },
+ {
+ "description": "[basx234] Numbers with E",
+ "bson": "18000000136400F1040000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265\"}}"
+ },
+ {
+ "description": "[basx240] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000463000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+3\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+6\"}}"
+ },
+ {
+ "description": "[basx233] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1265\"}}"
+ },
+ {
+ "description": "[basx241] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000483000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+4\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+7\"}}"
+ },
+ {
+ "description": "[basx232] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E-8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00001265\"}}"
+ },
+ {
+ "description": "[basx242] Numbers with E",
+ "bson": "18000000136400F104000000000000000000000000503000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1265E+8\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.265E+11\"}}"
+ },
+ {
+ "description": "[basx060] strings without E cannot generate E in result",
+ "bson": "18000000136400185C0ACE00000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.5432\"}}"
+ },
+ {
+ "description": "[basx059] strings without E cannot generate E in result",
+ "bson": "18000000136400F198670C08000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0345678.54321\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.54321\"}}"
+ },
+ {
+ "description": "[basx058] strings without E cannot generate E in result",
+ "bson": "180000001364006AF90B7C50000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"345678.543210\"}}"
+ },
+ {
+ "description": "[basx057] strings without E cannot generate E in result",
+ "bson": "180000001364006A19562522020000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"2345678.543210\"}}"
+ },
+ {
+ "description": "[basx056] strings without E cannot generate E in result",
+ "bson": "180000001364006AB9C8733A0B0000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"12345678.543210\"}}"
+ },
+ {
+ "description": "[basx031] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640040AF0D8648700000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"123456789.000000\"}}"
+ },
+ {
+ "description": "[basx030] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640080910F8648700000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"123456789.123456\"}}"
+ },
+ {
+ "description": "[basx032] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640080910F8648700000000000000000403000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"123456789123456\"}}"
+ }
+ ]
+}
+`},
+
+ {"decimal128-4.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "valid": [
+ {
+ "description": "[basx023] conform to rules and exponent will be in permitted range).",
+ "bson": "1800000013640001000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.1\"}}"
+ },
+
+ {
+ "description": "[basx045] strings without E cannot generate E in result",
+ "bson": "1800000013640003000000000000000000000000003A3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+0.003\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.003\"}}"
+ },
+ {
+ "description": "[basx610] Zeros",
+ "bson": "1800000013640000000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \".0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0\"}}"
+ },
+ {
+ "description": "[basx612] Zeros",
+ "bson": "1800000013640000000000000000000000000000003EB000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-.0\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.0\"}}"
+ },
+ {
+ "description": "[basx043] strings without E cannot generate E in result",
+ "bson": "18000000136400FC040000000000000000000000003C3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"+12.76\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"12.76\"}}"
+ },
+ {
+ "description": "[basx055] strings without E cannot generate E in result",
+ "bson": "180000001364000500000000000000000000000000303000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000005\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"5E-8\"}}"
+ },
+ {
+ "description": "[basx054] strings without E cannot generate E in result",
+ "bson": "180000001364000500000000000000000000000000323000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0000005\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"5E-7\"}}"
+ },
+ {
+ "description": "[basx052] strings without E cannot generate E in result",
+ "bson": "180000001364000500000000000000000000000000343000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000005\"}}"
+ },
+ {
+ "description": "[basx051] strings without E cannot generate E in result",
+ "bson": "180000001364000500000000000000000000000000363000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"00.00005\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00005\"}}"
+ },
+ {
+ "description": "[basx050] strings without E cannot generate E in result",
+ "bson": "180000001364000500000000000000000000000000383000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.0005\"}}"
+ },
+ {
+ "description": "[basx047] strings without E cannot generate E in result",
+ "bson": "1800000013640005000000000000000000000000003E3000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \".5\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.5\"}}"
+ },
+ {
+ "description": "[dqbsr431] check rounding modes heeded (Rounded)",
+ "bson": "1800000013640099761CC7B548F377DC80A131C836FE2F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.1111111111111111111111111111123450\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.111111111111111111111111111112345\"}}"
+ },
+ {
+ "description": "OK2",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FC2F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \".100000000000000000000000000000000000000000000000000000000000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0.1000000000000000000000000000000000\"}}"
+ }
+ ],
+ "parseErrors": [
+ {
+ "description": "[basx564] Near-specials (Conversion_syntax)",
+ "string": "Infi"
+ },
+ {
+ "description": "[basx565] Near-specials (Conversion_syntax)",
+ "string": "Infin"
+ },
+ {
+ "description": "[basx566] Near-specials (Conversion_syntax)",
+ "string": "Infini"
+ },
+ {
+ "description": "[basx567] Near-specials (Conversion_syntax)",
+ "string": "Infinit"
+ },
+ {
+ "description": "[basx568] Near-specials (Conversion_syntax)",
+ "string": "-Infinit"
+ },
+ {
+ "description": "[basx590] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": ".Infinity"
+ },
+ {
+ "description": "[basx562] Near-specials (Conversion_syntax)",
+ "string": "NaNq"
+ },
+ {
+ "description": "[basx563] Near-specials (Conversion_syntax)",
+ "string": "NaNs"
+ },
+ {
+ "description": "[dqbas939] overflow results at different rounding modes (Overflow & Inexact & Rounded)",
+ "string": "-7e10000"
+ },
+ {
+ "description": "[dqbsr534] negatives (Rounded & Inexact)",
+ "string": "-1.11111111111111111111111111111234650"
+ },
+ {
+ "description": "[dqbsr535] negatives (Rounded & Inexact)",
+ "string": "-1.11111111111111111111111111111234551"
+ },
+ {
+ "description": "[dqbsr533] negatives (Rounded & Inexact)",
+ "string": "-1.11111111111111111111111111111234550"
+ },
+ {
+ "description": "[dqbsr532] negatives (Rounded & Inexact)",
+ "string": "-1.11111111111111111111111111111234549"
+ },
+ {
+ "description": "[dqbsr432] check rounding modes heeded (Rounded & Inexact)",
+ "string": "1.11111111111111111111111111111234549"
+ },
+ {
+ "description": "[dqbsr433] check rounding modes heeded (Rounded & Inexact)",
+ "string": "1.11111111111111111111111111111234550"
+ },
+ {
+ "description": "[dqbsr435] check rounding modes heeded (Rounded & Inexact)",
+ "string": "1.11111111111111111111111111111234551"
+ },
+ {
+ "description": "[dqbsr434] check rounding modes heeded (Rounded & Inexact)",
+ "string": "1.11111111111111111111111111111234650"
+ },
+ {
+ "description": "[dqbas938] overflow results at different rounding modes (Overflow & Inexact & Rounded)",
+ "string": "7e10000"
+ },
+ {
+ "description": "Inexact rounding#1",
+ "string": "100000000000000000000000000000000000000000000000000000000001"
+ },
+ {
+ "description": "Inexact rounding#2",
+ "string": "1E-6177"
+ }
+ ]
+}
+`},
+
+ {"decimal128-5.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "valid": [
+ {
+ "description": "[decq035] fold-downs (more below) (Clamped)",
+ "bson": "18000000136400000000807F1BCF85B27059C8A43CFE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.23E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.230000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq037] fold-downs (more below) (Clamped)",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq077] Nmin and below (Subnormal)",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.100000000000000000000000000000000E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000000E-6144\"}}"
+ },
+ {
+ "description": "[decq078] Nmin and below (Subnormal)",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000000E-6144\"}}"
+ },
+ {
+ "description": "[decq079] Nmin and below (Subnormal)",
+ "bson": "180000001364000A00000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000000000000000000000000000010E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-6175\"}}"
+ },
+ {
+ "description": "[decq080] Nmin and below (Subnormal)",
+ "bson": "180000001364000A00000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E-6175\"}}"
+ },
+ {
+ "description": "[decq081] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000020000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.00000000000000000000000000000001E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6175\"}}"
+ },
+ {
+ "description": "[decq082] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000020000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6175\"}}"
+ },
+ {
+ "description": "[decq083] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0.000000000000000000000000000000001E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6176\"}}"
+ },
+ {
+ "description": "[decq084] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6176\"}}"
+ },
+ {
+ "description": "[decq090] underflows cannot be tested for simple copies, check edge cases (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1e-6176\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1E-6176\"}}"
+ },
+ {
+ "description": "[decq100] underflows cannot be tested for simple copies, check edge cases (Subnormal)",
+ "bson": "18000000136400FFFFFFFF095BC138938D44C64D31000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"999999999999999999999999999999999e-6176\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"9.99999999999999999999999999999999E-6144\"}}"
+ },
+ {
+ "description": "[decq130] fold-downs (more below) (Clamped)",
+ "bson": "18000000136400000000807F1BCF85B27059C8A43CFEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.23E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.230000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq132] fold-downs (more below) (Clamped)",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq177] Nmin and below (Subnormal)",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.100000000000000000000000000000000E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.00000000000000000000000000000000E-6144\"}}"
+ },
+ {
+ "description": "[decq178] Nmin and below (Subnormal)",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.00000000000000000000000000000000E-6144\"}}"
+ },
+ {
+ "description": "[decq179] Nmin and below (Subnormal)",
+ "bson": "180000001364000A00000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000000000000000000000000000000010E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.0E-6175\"}}"
+ },
+ {
+ "description": "[decq180] Nmin and below (Subnormal)",
+ "bson": "180000001364000A00000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1.0E-6175\"}}"
+ },
+ {
+ "description": "[decq181] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000028000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.00000000000000000000000000000001E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6175\"}}"
+ },
+ {
+ "description": "[decq182] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000028000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6175\"}}"
+ },
+ {
+ "description": "[decq183] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0.000000000000000000000000000000001E-6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6176\"}}"
+ },
+ {
+ "description": "[decq184] Nmin and below (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6176\"}}"
+ },
+ {
+ "description": "[decq190] underflow edge cases (Subnormal)",
+ "bson": "180000001364000100000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-1e-6176\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-1E-6176\"}}"
+ },
+ {
+ "description": "[decq200] underflow edge cases (Subnormal)",
+ "bson": "18000000136400FFFFFFFF095BC138938D44C64D31008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-999999999999999999999999999999999e-6176\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-9.99999999999999999999999999999999E-6144\"}}"
+ },
+ {
+ "description": "[decq400] zeros (Clamped)",
+ "bson": "180000001364000000000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-8000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-6176\"}}"
+ },
+ {
+ "description": "[decq401] zeros (Clamped)",
+ "bson": "180000001364000000000000000000000000000000000000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-6177\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E-6176\"}}"
+ },
+ {
+ "description": "[decq414] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6112\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6111\"}}"
+ },
+ {
+ "description": "[decq416] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6111\"}}"
+ },
+ {
+ "description": "[decq418] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+8000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"0E+6111\"}}"
+ },
+ {
+ "description": "[decq420] negative zeros (Clamped)",
+ "bson": "180000001364000000000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-8000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-6176\"}}"
+ },
+ {
+ "description": "[decq421] negative zeros (Clamped)",
+ "bson": "180000001364000000000000000000000000000000008000",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-6177\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E-6176\"}}"
+ },
+ {
+ "description": "[decq434] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6112\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6111\"}}"
+ },
+ {
+ "description": "[decq436] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6111\"}}"
+ },
+ {
+ "description": "[decq438] clamped zeros... (Clamped)",
+ "bson": "180000001364000000000000000000000000000000FEDF00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+8000\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"-0E+6111\"}}"
+ },
+ {
+ "description": "[decq601] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000000A5BC138938D44C64D31FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6144\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000000E+6144\"}}"
+ },
+ {
+ "description": "[decq603] fold-down full sequence (Clamped)",
+ "bson": "180000001364000000000081EFAC855B416D2DEE04FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6143\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000000E+6143\"}}"
+ },
+ {
+ "description": "[decq605] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000000080264B91C02220BE377E00FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6142\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000000000E+6142\"}}"
+ },
+ {
+ "description": "[decq607] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000000040EAED7446D09C2C9F0C00FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6141\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000000E+6141\"}}"
+ },
+ {
+ "description": "[decq609] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000A0CA17726DAE0F1E430100FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6140\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000000E+6140\"}}"
+ },
+ {
+ "description": "[decq611] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000106102253E5ECE4F200000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6139\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000000E+6139\"}}"
+ },
+ {
+ "description": "[decq613] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000E83C80D09F3C2E3B030000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6138\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000000E+6138\"}}"
+ },
+ {
+ "description": "[decq615] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000E4D20CC8DCD2B752000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6137\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000000E+6137\"}}"
+ },
+ {
+ "description": "[decq617] fold-down full sequence (Clamped)",
+ "bson": "180000001364000000004A48011416954508000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6136\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000000E+6136\"}}"
+ },
+ {
+ "description": "[decq619] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000000A1EDCCCE1BC2D300000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6135\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000000E+6135\"}}"
+ },
+ {
+ "description": "[decq621] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000080F64AE1C7022D1500000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6134\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000000E+6134\"}}"
+ },
+ {
+ "description": "[decq623] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000040B2BAC9E0191E0200000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6133\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000000E+6133\"}}"
+ },
+ {
+ "description": "[decq625] fold-down full sequence (Clamped)",
+ "bson": "180000001364000000A0DEC5ADC935360000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6132\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000000E+6132\"}}"
+ },
+ {
+ "description": "[decq627] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000010632D5EC76B050000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6131\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000000E+6131\"}}"
+ },
+ {
+ "description": "[decq629] fold-down full sequence (Clamped)",
+ "bson": "180000001364000000E8890423C78A000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6130\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000000E+6130\"}}"
+ },
+ {
+ "description": "[decq631] fold-down full sequence (Clamped)",
+ "bson": "18000000136400000064A7B3B6E00D000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6129\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000000E+6129\"}}"
+ },
+ {
+ "description": "[decq633] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000008A5D78456301000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6128\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000000E+6128\"}}"
+ },
+ {
+ "description": "[decq635] fold-down full sequence (Clamped)",
+ "bson": "180000001364000000C16FF2862300000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6127\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000000E+6127\"}}"
+ },
+ {
+ "description": "[decq637] fold-down full sequence (Clamped)",
+ "bson": "180000001364000080C6A47E8D0300000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6126\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000000E+6126\"}}"
+ },
+ {
+ "description": "[decq639] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000407A10F35A0000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6125\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000000E+6125\"}}"
+ },
+ {
+ "description": "[decq641] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000A0724E18090000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6124\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000000E+6124\"}}"
+ },
+ {
+ "description": "[decq643] fold-down full sequence (Clamped)",
+ "bson": "180000001364000010A5D4E8000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6123\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000000E+6123\"}}"
+ },
+ {
+ "description": "[decq645] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000E8764817000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6122\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000000E+6122\"}}"
+ },
+ {
+ "description": "[decq647] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000E40B5402000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6121\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000000E+6121\"}}"
+ },
+ {
+ "description": "[decq649] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000CA9A3B00000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6120\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000000E+6120\"}}"
+ },
+ {
+ "description": "[decq651] fold-down full sequence (Clamped)",
+ "bson": "1800000013640000E1F50500000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6119\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000000E+6119\"}}"
+ },
+ {
+ "description": "[decq653] fold-down full sequence (Clamped)",
+ "bson": "180000001364008096980000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6118\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000000E+6118\"}}"
+ },
+ {
+ "description": "[decq655] fold-down full sequence (Clamped)",
+ "bson": "1800000013640040420F0000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6117\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000000E+6117\"}}"
+ },
+ {
+ "description": "[decq657] fold-down full sequence (Clamped)",
+ "bson": "18000000136400A086010000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6116\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00000E+6116\"}}"
+ },
+ {
+ "description": "[decq659] fold-down full sequence (Clamped)",
+ "bson": "180000001364001027000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6115\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0000E+6115\"}}"
+ },
+ {
+ "description": "[decq661] fold-down full sequence (Clamped)",
+ "bson": "18000000136400E803000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6114\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.000E+6114\"}}"
+ },
+ {
+ "description": "[decq663] fold-down full sequence (Clamped)",
+ "bson": "180000001364006400000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6113\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.00E+6113\"}}"
+ },
+ {
+ "description": "[decq665] fold-down full sequence (Clamped)",
+ "bson": "180000001364000A00000000000000000000000000FE5F00",
+ "extjson": "{\"d\" : {\"$numberDecimal\" : \"1E+6112\"}}",
+ "canonical_extjson": "{\"d\" : {\"$numberDecimal\" : \"1.0E+6112\"}}"
+ }
+ ]
+}
+`},
+
+ {"decimal128-6.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "parseErrors": [
+ {
+ "description": "Incomplete Exponent",
+ "string": "1e"
+ },
+ {
+ "description": "Exponent at the beginning",
+ "string": "E01"
+ },
+ {
+ "description": "Just a decimal place",
+ "string": "."
+ },
+ {
+ "description": "2 decimal places",
+ "string": "..3"
+ },
+ {
+ "description": "2 decimal places",
+ "string": ".13.3"
+ },
+ {
+ "description": "2 decimal places",
+ "string": "1..3"
+ },
+ {
+ "description": "2 decimal places",
+ "string": "1.3.4"
+ },
+ {
+ "description": "2 decimal places",
+ "string": "1.34."
+ },
+ {
+ "description": "Decimal with no digits",
+ "string": ".e"
+ },
+ {
+ "description": "2 signs",
+ "string": "+-32.4"
+ },
+ {
+ "description": "2 signs",
+ "string": "-+32.4"
+ },
+ {
+ "description": "2 negative signs",
+ "string": "--32.4"
+ },
+ {
+ "description": "2 negative signs",
+ "string": "-32.-4"
+ },
+ {
+ "description": "End in negative sign",
+ "string": "32.0-"
+ },
+ {
+ "description": "2 negative signs",
+ "string": "32.4E--21"
+ },
+ {
+ "description": "2 negative signs",
+ "string": "32.4E-2-1"
+ },
+ {
+ "description": "2 signs",
+ "string": "32.4E+-21"
+ },
+ {
+ "description": "Empty string",
+ "string": ""
+ },
+ {
+ "description": "leading white space positive number",
+ "string": " 1"
+ },
+ {
+ "description": "leading white space negative number",
+ "string": " -1"
+ },
+ {
+ "description": "trailing white space",
+ "string": "1 "
+ },
+ {
+ "description": "Invalid",
+ "string": "E"
+ },
+ {
+ "description": "Invalid",
+ "string": "invalid"
+ },
+ {
+ "description": "Invalid",
+ "string": "i"
+ },
+ {
+ "description": "Invalid",
+ "string": "in"
+ },
+ {
+ "description": "Invalid",
+ "string": "-in"
+ },
+ {
+ "description": "Invalid",
+ "string": "Na"
+ },
+ {
+ "description": "Invalid",
+ "string": "-Na"
+ },
+ {
+ "description": "Invalid",
+ "string": "1.23abc"
+ },
+ {
+ "description": "Invalid",
+ "string": "1.23abcE+02"
+ },
+ {
+ "description": "Invalid",
+ "string": "1.23E+0aabs2"
+ }
+ ]
+}
+`},
+
+ {"decimal128-7.json", `
+{
+ "description": "Decimal128",
+ "bson_type": "0x13",
+ "test_key": "d",
+ "parseErrors": [
+ {
+ "description": "[basx572] Near-specials (Conversion_syntax)",
+ "string": "-9Inf"
+ },
+ {
+ "description": "[basx516] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "-1-"
+ },
+ {
+ "description": "[basx533] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "0000.."
+ },
+ {
+ "description": "[basx534] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": ".0000."
+ },
+ {
+ "description": "[basx535] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "00..00"
+ },
+ {
+ "description": "[basx569] Near-specials (Conversion_syntax)",
+ "string": "0Inf"
+ },
+ {
+ "description": "[basx571] Near-specials (Conversion_syntax)",
+ "string": "-0Inf"
+ },
+ {
+ "description": "[basx575] Near-specials (Conversion_syntax)",
+ "string": "0sNaN"
+ },
+ {
+ "description": "[basx503] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "++1"
+ },
+ {
+ "description": "[basx504] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "--1"
+ },
+ {
+ "description": "[basx505] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "-+1"
+ },
+ {
+ "description": "[basx506] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "+-1"
+ },
+ {
+ "description": "[basx510] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": " +1"
+ },
+ {
+ "description": "[basx513] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": " + 1"
+ },
+ {
+ "description": "[basx514] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": " - 1"
+ },
+ {
+ "description": "[basx501] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "."
+ },
+ {
+ "description": "[basx502] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": ".."
+ },
+ {
+ "description": "[basx519] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": ""
+ },
+ {
+ "description": "[basx525] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "e100"
+ },
+ {
+ "description": "[basx549] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "e+1"
+ },
+ {
+ "description": "[basx577] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": ".e+1"
+ },
+ {
+ "description": "[basx578] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "+.e+1"
+ },
+ {
+ "description": "[basx581] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "E+1"
+ },
+ {
+ "description": "[basx582] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": ".E+1"
+ },
+ {
+ "description": "[basx583] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "+.E+1"
+ },
+ {
+ "description": "[basx579] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "-.e+"
+ },
+ {
+ "description": "[basx580] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "-.e"
+ },
+ {
+ "description": "[basx584] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "-.E+"
+ },
+ {
+ "description": "[basx585] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "-.E"
+ },
+ {
+ "description": "[basx589] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "+.Inf"
+ },
+ {
+ "description": "[basx586] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": ".NaN"
+ },
+ {
+ "description": "[basx587] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "-.NaN"
+ },
+ {
+ "description": "[basx545] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "ONE"
+ },
+ {
+ "description": "[basx561] Near-specials (Conversion_syntax)",
+ "string": "qNaN"
+ },
+ {
+ "description": "[basx573] Near-specials (Conversion_syntax)",
+ "string": "-sNa"
+ },
+ {
+ "description": "[basx588] some baddies with dots and Es and dots and specials (Conversion_syntax)",
+ "string": "+.sNaN"
+ },
+ {
+ "description": "[basx544] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "ten"
+ },
+ {
+ "description": "[basx527] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "u0b65"
+ },
+ {
+ "description": "[basx526] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "u0e5a"
+ },
+ {
+ "description": "[basx515] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "x"
+ },
+ {
+ "description": "[basx574] Near-specials (Conversion_syntax)",
+ "string": "xNaN"
+ },
+ {
+ "description": "[basx530] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": ".123.5"
+ },
+ {
+ "description": "[basx500] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1..2"
+ },
+ {
+ "description": "[basx542] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1e1.0"
+ },
+ {
+ "description": "[basx553] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E+1.2.3"
+ },
+ {
+ "description": "[basx543] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1e123e"
+ },
+ {
+ "description": "[basx552] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E+1.2"
+ },
+ {
+ "description": "[basx546] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1e.1"
+ },
+ {
+ "description": "[basx547] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1e1."
+ },
+ {
+ "description": "[basx554] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E++1"
+ },
+ {
+ "description": "[basx555] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E--1"
+ },
+ {
+ "description": "[basx556] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E+-1"
+ },
+ {
+ "description": "[basx557] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E-+1"
+ },
+ {
+ "description": "[basx558] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E'1"
+ },
+ {
+ "description": "[basx559] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E\"1"
+ },
+ {
+ "description": "[basx520] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1e-"
+ },
+ {
+ "description": "[basx560] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1E"
+ },
+ {
+ "description": "[basx548] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1ee"
+ },
+ {
+ "description": "[basx551] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1.2.1"
+ },
+ {
+ "description": "[basx550] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1.23.4"
+ },
+ {
+ "description": "[basx529] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "1.34.5"
+ },
+ {
+ "description": "[basx531] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "01.35."
+ },
+ {
+ "description": "[basx532] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "01.35-"
+ },
+ {
+ "description": "[basx518] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "3+"
+ },
+ {
+ "description": "[basx521] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "7e99999a"
+ },
+ {
+ "description": "[basx570] Near-specials (Conversion_syntax)",
+ "string": "9Inf"
+ },
+ {
+ "description": "[basx512] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "12 "
+ },
+ {
+ "description": "[basx517] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "12-"
+ },
+ {
+ "description": "[basx507] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "12e"
+ },
+ {
+ "description": "[basx508] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "12e++"
+ },
+ {
+ "description": "[basx509] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "12f4"
+ },
+ {
+ "description": "[basx536] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111e*123"
+ },
+ {
+ "description": "[basx537] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111e123-"
+ },
+ {
+ "description": "[basx540] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111e1*23"
+ },
+ {
+ "description": "[basx538] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111e+12+"
+ },
+ {
+ "description": "[basx539] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111e1-3-"
+ },
+ {
+ "description": "[basx541] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "111E1e+3"
+ },
+ {
+ "description": "[basx528] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "123,65"
+ },
+ {
+ "description": "[basx523] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "7e12356789012x"
+ },
+ {
+ "description": "[basx522] The 'baddies' tests from DiagBigDecimal, plus some new ones (Conversion_syntax)",
+ "string": "7e123567890x"
+ }
+ ]
+}
+`},
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/decode.go b/vendor/gopkg.in/mgo.v2/bson/decode.go
new file mode 100644
index 0000000..7c2d841
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/decode.go
@@ -0,0 +1,849 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// gobson - BSON library for Go.
+
+package bson
+
+import (
+ "fmt"
+ "math"
+ "net/url"
+ "reflect"
+ "strconv"
+ "sync"
+ "time"
+)
+
+type decoder struct {
+ in []byte
+ i int
+ docType reflect.Type
+}
+
+var typeM = reflect.TypeOf(M{})
+
+func newDecoder(in []byte) *decoder {
+ return &decoder{in, 0, typeM}
+}
+
+// --------------------------------------------------------------------------
+// Some helper functions.
+
+func corrupted() {
+ panic("Document is corrupted")
+}
+
+func settableValueOf(i interface{}) reflect.Value {
+ v := reflect.ValueOf(i)
+ sv := reflect.New(v.Type()).Elem()
+ sv.Set(v)
+ return sv
+}
+
+// --------------------------------------------------------------------------
+// Unmarshaling of documents.
+
+const (
+ setterUnknown = iota
+ setterNone
+ setterType
+ setterAddr
+)
+
+var setterStyles map[reflect.Type]int
+var setterIface reflect.Type
+var setterMutex sync.RWMutex
+
+func init() {
+ var iface Setter
+ setterIface = reflect.TypeOf(&iface).Elem()
+ setterStyles = make(map[reflect.Type]int)
+}
+
+func setterStyle(outt reflect.Type) int {
+ setterMutex.RLock()
+ style := setterStyles[outt]
+ setterMutex.RUnlock()
+ if style == setterUnknown {
+ setterMutex.Lock()
+ defer setterMutex.Unlock()
+ if outt.Implements(setterIface) {
+ setterStyles[outt] = setterType
+ } else if reflect.PtrTo(outt).Implements(setterIface) {
+ setterStyles[outt] = setterAddr
+ } else {
+ setterStyles[outt] = setterNone
+ }
+ style = setterStyles[outt]
+ }
+ return style
+}
+
+func getSetter(outt reflect.Type, out reflect.Value) Setter {
+ style := setterStyle(outt)
+ if style == setterNone {
+ return nil
+ }
+ if style == setterAddr {
+ if !out.CanAddr() {
+ return nil
+ }
+ out = out.Addr()
+ } else if outt.Kind() == reflect.Ptr && out.IsNil() {
+ out.Set(reflect.New(outt.Elem()))
+ }
+ return out.Interface().(Setter)
+}
+
+func clearMap(m reflect.Value) {
+ var none reflect.Value
+ for _, k := range m.MapKeys() {
+ m.SetMapIndex(k, none)
+ }
+}
+
+func (d *decoder) readDocTo(out reflect.Value) {
+ var elemType reflect.Type
+ outt := out.Type()
+ outk := outt.Kind()
+
+ for {
+ if outk == reflect.Ptr && out.IsNil() {
+ out.Set(reflect.New(outt.Elem()))
+ }
+ if setter := getSetter(outt, out); setter != nil {
+ var raw Raw
+ d.readDocTo(reflect.ValueOf(&raw))
+ err := setter.SetBSON(raw)
+ if _, ok := err.(*TypeError); err != nil && !ok {
+ panic(err)
+ }
+ return
+ }
+ if outk == reflect.Ptr {
+ out = out.Elem()
+ outt = out.Type()
+ outk = out.Kind()
+ continue
+ }
+ break
+ }
+
+ var fieldsMap map[string]fieldInfo
+ var inlineMap reflect.Value
+ start := d.i
+
+ origout := out
+ if outk == reflect.Interface {
+ if d.docType.Kind() == reflect.Map {
+ mv := reflect.MakeMap(d.docType)
+ out.Set(mv)
+ out = mv
+ } else {
+ dv := reflect.New(d.docType).Elem()
+ out.Set(dv)
+ out = dv
+ }
+ outt = out.Type()
+ outk = outt.Kind()
+ }
+
+ docType := d.docType
+ keyType := typeString
+ convertKey := false
+ switch outk {
+ case reflect.Map:
+ keyType = outt.Key()
+ if keyType.Kind() != reflect.String {
+ panic("BSON map must have string keys. Got: " + outt.String())
+ }
+ if keyType != typeString {
+ convertKey = true
+ }
+ elemType = outt.Elem()
+ if elemType == typeIface {
+ d.docType = outt
+ }
+ if out.IsNil() {
+ out.Set(reflect.MakeMap(out.Type()))
+ } else if out.Len() > 0 {
+ clearMap(out)
+ }
+ case reflect.Struct:
+ if outt != typeRaw {
+ sinfo, err := getStructInfo(out.Type())
+ if err != nil {
+ panic(err)
+ }
+ fieldsMap = sinfo.FieldsMap
+ out.Set(sinfo.Zero)
+ if sinfo.InlineMap != -1 {
+ inlineMap = out.Field(sinfo.InlineMap)
+ if !inlineMap.IsNil() && inlineMap.Len() > 0 {
+ clearMap(inlineMap)
+ }
+ elemType = inlineMap.Type().Elem()
+ if elemType == typeIface {
+ d.docType = inlineMap.Type()
+ }
+ }
+ }
+ case reflect.Slice:
+ switch outt.Elem() {
+ case typeDocElem:
+ origout.Set(d.readDocElems(outt))
+ return
+ case typeRawDocElem:
+ origout.Set(d.readRawDocElems(outt))
+ return
+ }
+ fallthrough
+ default:
+ panic("Unsupported document type for unmarshalling: " + out.Type().String())
+ }
+
+ end := int(d.readInt32())
+ end += d.i - 4
+ if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' {
+ corrupted()
+ }
+ for d.in[d.i] != '\x00' {
+ kind := d.readByte()
+ name := d.readCStr()
+ if d.i >= end {
+ corrupted()
+ }
+
+ switch outk {
+ case reflect.Map:
+ e := reflect.New(elemType).Elem()
+ if d.readElemTo(e, kind) {
+ k := reflect.ValueOf(name)
+ if convertKey {
+ k = k.Convert(keyType)
+ }
+ out.SetMapIndex(k, e)
+ }
+ case reflect.Struct:
+ if outt == typeRaw {
+ d.dropElem(kind)
+ } else {
+ if info, ok := fieldsMap[name]; ok {
+ if info.Inline == nil {
+ d.readElemTo(out.Field(info.Num), kind)
+ } else {
+ d.readElemTo(out.FieldByIndex(info.Inline), kind)
+ }
+ } else if inlineMap.IsValid() {
+ if inlineMap.IsNil() {
+ inlineMap.Set(reflect.MakeMap(inlineMap.Type()))
+ }
+ e := reflect.New(elemType).Elem()
+ if d.readElemTo(e, kind) {
+ inlineMap.SetMapIndex(reflect.ValueOf(name), e)
+ }
+ } else {
+ d.dropElem(kind)
+ }
+ }
+ case reflect.Slice:
+ }
+
+ if d.i >= end {
+ corrupted()
+ }
+ }
+ d.i++ // '\x00'
+ if d.i != end {
+ corrupted()
+ }
+ d.docType = docType
+
+ if outt == typeRaw {
+ out.Set(reflect.ValueOf(Raw{0x03, d.in[start:d.i]}))
+ }
+}
+
+func (d *decoder) readArrayDocTo(out reflect.Value) {
+ end := int(d.readInt32())
+ end += d.i - 4
+ if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' {
+ corrupted()
+ }
+ i := 0
+ l := out.Len()
+ for d.in[d.i] != '\x00' {
+ if i >= l {
+ panic("Length mismatch on array field")
+ }
+ kind := d.readByte()
+ for d.i < end && d.in[d.i] != '\x00' {
+ d.i++
+ }
+ if d.i >= end {
+ corrupted()
+ }
+ d.i++
+ d.readElemTo(out.Index(i), kind)
+ if d.i >= end {
+ corrupted()
+ }
+ i++
+ }
+ if i != l {
+ panic("Length mismatch on array field")
+ }
+ d.i++ // '\x00'
+ if d.i != end {
+ corrupted()
+ }
+}
+
+func (d *decoder) readSliceDoc(t reflect.Type) interface{} {
+ tmp := make([]reflect.Value, 0, 8)
+ elemType := t.Elem()
+ if elemType == typeRawDocElem {
+ d.dropElem(0x04)
+ return reflect.Zero(t).Interface()
+ }
+
+ end := int(d.readInt32())
+ end += d.i - 4
+ if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' {
+ corrupted()
+ }
+ for d.in[d.i] != '\x00' {
+ kind := d.readByte()
+ for d.i < end && d.in[d.i] != '\x00' {
+ d.i++
+ }
+ if d.i >= end {
+ corrupted()
+ }
+ d.i++
+ e := reflect.New(elemType).Elem()
+ if d.readElemTo(e, kind) {
+ tmp = append(tmp, e)
+ }
+ if d.i >= end {
+ corrupted()
+ }
+ }
+ d.i++ // '\x00'
+ if d.i != end {
+ corrupted()
+ }
+
+ n := len(tmp)
+ slice := reflect.MakeSlice(t, n, n)
+ for i := 0; i != n; i++ {
+ slice.Index(i).Set(tmp[i])
+ }
+ return slice.Interface()
+}
+
+var typeSlice = reflect.TypeOf([]interface{}{})
+var typeIface = typeSlice.Elem()
+
+func (d *decoder) readDocElems(typ reflect.Type) reflect.Value {
+ docType := d.docType
+ d.docType = typ
+ slice := make([]DocElem, 0, 8)
+ d.readDocWith(func(kind byte, name string) {
+ e := DocElem{Name: name}
+ v := reflect.ValueOf(&e.Value)
+ if d.readElemTo(v.Elem(), kind) {
+ slice = append(slice, e)
+ }
+ })
+ slicev := reflect.New(typ).Elem()
+ slicev.Set(reflect.ValueOf(slice))
+ d.docType = docType
+ return slicev
+}
+
+func (d *decoder) readRawDocElems(typ reflect.Type) reflect.Value {
+ docType := d.docType
+ d.docType = typ
+ slice := make([]RawDocElem, 0, 8)
+ d.readDocWith(func(kind byte, name string) {
+ e := RawDocElem{Name: name}
+ v := reflect.ValueOf(&e.Value)
+ if d.readElemTo(v.Elem(), kind) {
+ slice = append(slice, e)
+ }
+ })
+ slicev := reflect.New(typ).Elem()
+ slicev.Set(reflect.ValueOf(slice))
+ d.docType = docType
+ return slicev
+}
+
+func (d *decoder) readDocWith(f func(kind byte, name string)) {
+ end := int(d.readInt32())
+ end += d.i - 4
+ if end <= d.i || end > len(d.in) || d.in[end-1] != '\x00' {
+ corrupted()
+ }
+ for d.in[d.i] != '\x00' {
+ kind := d.readByte()
+ name := d.readCStr()
+ if d.i >= end {
+ corrupted()
+ }
+ f(kind, name)
+ if d.i >= end {
+ corrupted()
+ }
+ }
+ d.i++ // '\x00'
+ if d.i != end {
+ corrupted()
+ }
+}
+
+// --------------------------------------------------------------------------
+// Unmarshaling of individual elements within a document.
+
+var blackHole = settableValueOf(struct{}{})
+
+func (d *decoder) dropElem(kind byte) {
+ d.readElemTo(blackHole, kind)
+}
+
+// Attempt to decode an element from the document and put it into out.
+// If the types are not compatible, the returned ok value will be
+// false and out will be unchanged.
+func (d *decoder) readElemTo(out reflect.Value, kind byte) (good bool) {
+
+ start := d.i
+
+ if kind == 0x03 {
+ // Delegate unmarshaling of documents.
+ outt := out.Type()
+ outk := out.Kind()
+ switch outk {
+ case reflect.Interface, reflect.Ptr, reflect.Struct, reflect.Map:
+ d.readDocTo(out)
+ return true
+ }
+ if setterStyle(outt) != setterNone {
+ d.readDocTo(out)
+ return true
+ }
+ if outk == reflect.Slice {
+ switch outt.Elem() {
+ case typeDocElem:
+ out.Set(d.readDocElems(outt))
+ case typeRawDocElem:
+ out.Set(d.readRawDocElems(outt))
+ default:
+ d.readDocTo(blackHole)
+ }
+ return true
+ }
+ d.readDocTo(blackHole)
+ return true
+ }
+
+ var in interface{}
+
+ switch kind {
+ case 0x01: // Float64
+ in = d.readFloat64()
+ case 0x02: // UTF-8 string
+ in = d.readStr()
+ case 0x03: // Document
+ panic("Can't happen. Handled above.")
+ case 0x04: // Array
+ outt := out.Type()
+ if setterStyle(outt) != setterNone {
+ // Skip the value so its data is handed to the setter below.
+ d.dropElem(kind)
+ break
+ }
+ for outt.Kind() == reflect.Ptr {
+ outt = outt.Elem()
+ }
+ switch outt.Kind() {
+ case reflect.Array:
+ d.readArrayDocTo(out)
+ return true
+ case reflect.Slice:
+ in = d.readSliceDoc(outt)
+ default:
+ in = d.readSliceDoc(typeSlice)
+ }
+ case 0x05: // Binary
+ b := d.readBinary()
+ if b.Kind == 0x00 || b.Kind == 0x02 {
+ in = b.Data
+ } else {
+ in = b
+ }
+ case 0x06: // Undefined (obsolete, but still seen in the wild)
+ in = Undefined
+ case 0x07: // ObjectId
+ in = ObjectId(d.readBytes(12))
+ case 0x08: // Bool
+ in = d.readBool()
+ case 0x09: // Timestamp
+ // MongoDB handles timestamps as milliseconds.
+ i := d.readInt64()
+ if i == -62135596800000 {
+ in = time.Time{} // In UTC for convenience.
+ } else {
+ in = time.Unix(i/1e3, i%1e3*1e6)
+ }
+ case 0x0A: // Nil
+ in = nil
+ case 0x0B: // RegEx
+ in = d.readRegEx()
+ case 0x0C:
+ in = DBPointer{Namespace: d.readStr(), Id: ObjectId(d.readBytes(12))}
+ case 0x0D: // JavaScript without scope
+ in = JavaScript{Code: d.readStr()}
+ case 0x0E: // Symbol
+ in = Symbol(d.readStr())
+ case 0x0F: // JavaScript with scope
+ d.i += 4 // Skip length
+ js := JavaScript{d.readStr(), make(M)}
+ d.readDocTo(reflect.ValueOf(js.Scope))
+ in = js
+ case 0x10: // Int32
+ in = int(d.readInt32())
+ case 0x11: // Mongo-specific timestamp
+ in = MongoTimestamp(d.readInt64())
+ case 0x12: // Int64
+ in = d.readInt64()
+ case 0x13: // Decimal128
+ in = Decimal128{
+ l: uint64(d.readInt64()),
+ h: uint64(d.readInt64()),
+ }
+ case 0x7F: // Max key
+ in = MaxKey
+ case 0xFF: // Min key
+ in = MinKey
+ default:
+ panic(fmt.Sprintf("Unknown element kind (0x%02X)", kind))
+ }
+
+ outt := out.Type()
+
+ if outt == typeRaw {
+ out.Set(reflect.ValueOf(Raw{kind, d.in[start:d.i]}))
+ return true
+ }
+
+ if setter := getSetter(outt, out); setter != nil {
+ err := setter.SetBSON(Raw{kind, d.in[start:d.i]})
+ if err == SetZero {
+ out.Set(reflect.Zero(outt))
+ return true
+ }
+ if err == nil {
+ return true
+ }
+ if _, ok := err.(*TypeError); !ok {
+ panic(err)
+ }
+ return false
+ }
+
+ if in == nil {
+ out.Set(reflect.Zero(outt))
+ return true
+ }
+
+ outk := outt.Kind()
+
+ // Dereference and initialize pointer if necessary.
+ first := true
+ for outk == reflect.Ptr {
+ if !out.IsNil() {
+ out = out.Elem()
+ } else {
+ elem := reflect.New(outt.Elem())
+ if first {
+ // Only set if value is compatible.
+ first = false
+ defer func(out, elem reflect.Value) {
+ if good {
+ out.Set(elem)
+ }
+ }(out, elem)
+ } else {
+ out.Set(elem)
+ }
+ out = elem
+ }
+ outt = out.Type()
+ outk = outt.Kind()
+ }
+
+ inv := reflect.ValueOf(in)
+ if outt == inv.Type() {
+ out.Set(inv)
+ return true
+ }
+
+ switch outk {
+ case reflect.Interface:
+ out.Set(inv)
+ return true
+ case reflect.String:
+ switch inv.Kind() {
+ case reflect.String:
+ out.SetString(inv.String())
+ return true
+ case reflect.Slice:
+ if b, ok := in.([]byte); ok {
+ out.SetString(string(b))
+ return true
+ }
+ case reflect.Int, reflect.Int64:
+ if outt == typeJSONNumber {
+ out.SetString(strconv.FormatInt(inv.Int(), 10))
+ return true
+ }
+ case reflect.Float64:
+ if outt == typeJSONNumber {
+ out.SetString(strconv.FormatFloat(inv.Float(), 'f', -1, 64))
+ return true
+ }
+ }
+ case reflect.Slice, reflect.Array:
+ // Remember, array (0x04) slices are built with the correct
+ // element type. If we are here, must be a cross BSON kind
+ // conversion (e.g. 0x05 unmarshalling on string).
+ if outt.Elem().Kind() != reflect.Uint8 {
+ break
+ }
+ switch inv.Kind() {
+ case reflect.String:
+ slice := []byte(inv.String())
+ out.Set(reflect.ValueOf(slice))
+ return true
+ case reflect.Slice:
+ switch outt.Kind() {
+ case reflect.Array:
+ reflect.Copy(out, inv)
+ case reflect.Slice:
+ out.SetBytes(inv.Bytes())
+ }
+ return true
+ }
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ switch inv.Kind() {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ out.SetInt(inv.Int())
+ return true
+ case reflect.Float32, reflect.Float64:
+ out.SetInt(int64(inv.Float()))
+ return true
+ case reflect.Bool:
+ if inv.Bool() {
+ out.SetInt(1)
+ } else {
+ out.SetInt(0)
+ }
+ return true
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ panic("can't happen: no uint types in BSON (!?)")
+ }
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ switch inv.Kind() {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ out.SetUint(uint64(inv.Int()))
+ return true
+ case reflect.Float32, reflect.Float64:
+ out.SetUint(uint64(inv.Float()))
+ return true
+ case reflect.Bool:
+ if inv.Bool() {
+ out.SetUint(1)
+ } else {
+ out.SetUint(0)
+ }
+ return true
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ panic("Can't happen. No uint types in BSON.")
+ }
+ case reflect.Float32, reflect.Float64:
+ switch inv.Kind() {
+ case reflect.Float32, reflect.Float64:
+ out.SetFloat(inv.Float())
+ return true
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ out.SetFloat(float64(inv.Int()))
+ return true
+ case reflect.Bool:
+ if inv.Bool() {
+ out.SetFloat(1)
+ } else {
+ out.SetFloat(0)
+ }
+ return true
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ panic("Can't happen. No uint types in BSON?")
+ }
+ case reflect.Bool:
+ switch inv.Kind() {
+ case reflect.Bool:
+ out.SetBool(inv.Bool())
+ return true
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ out.SetBool(inv.Int() != 0)
+ return true
+ case reflect.Float32, reflect.Float64:
+ out.SetBool(inv.Float() != 0)
+ return true
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ panic("Can't happen. No uint types in BSON?")
+ }
+ case reflect.Struct:
+ if outt == typeURL && inv.Kind() == reflect.String {
+ u, err := url.Parse(inv.String())
+ if err != nil {
+ panic(err)
+ }
+ out.Set(reflect.ValueOf(u).Elem())
+ return true
+ }
+ if outt == typeBinary {
+ if b, ok := in.([]byte); ok {
+ out.Set(reflect.ValueOf(Binary{Data: b}))
+ return true
+ }
+ }
+ }
+
+ return false
+}
+
+// --------------------------------------------------------------------------
+// Parsers of basic types.
+
+func (d *decoder) readRegEx() RegEx {
+ re := RegEx{}
+ re.Pattern = d.readCStr()
+ re.Options = d.readCStr()
+ return re
+}
+
+func (d *decoder) readBinary() Binary {
+ l := d.readInt32()
+ b := Binary{}
+ b.Kind = d.readByte()
+ b.Data = d.readBytes(l)
+ if b.Kind == 0x02 && len(b.Data) >= 4 {
+ // Weird obsolete format with redundant length.
+ b.Data = b.Data[4:]
+ }
+ return b
+}
+
+func (d *decoder) readStr() string {
+ l := d.readInt32()
+ b := d.readBytes(l - 1)
+ if d.readByte() != '\x00' {
+ corrupted()
+ }
+ return string(b)
+}
+
+func (d *decoder) readCStr() string {
+ start := d.i
+ end := start
+ l := len(d.in)
+ for ; end != l; end++ {
+ if d.in[end] == '\x00' {
+ break
+ }
+ }
+ d.i = end + 1
+ if d.i > l {
+ corrupted()
+ }
+ return string(d.in[start:end])
+}
+
+func (d *decoder) readBool() bool {
+ b := d.readByte()
+ if b == 0 {
+ return false
+ }
+ if b == 1 {
+ return true
+ }
+ panic(fmt.Sprintf("encoded boolean must be 1 or 0, found %d", b))
+}
+
+func (d *decoder) readFloat64() float64 {
+ return math.Float64frombits(uint64(d.readInt64()))
+}
+
+func (d *decoder) readInt32() int32 {
+ b := d.readBytes(4)
+ return int32((uint32(b[0]) << 0) |
+ (uint32(b[1]) << 8) |
+ (uint32(b[2]) << 16) |
+ (uint32(b[3]) << 24))
+}
+
+func (d *decoder) readInt64() int64 {
+ b := d.readBytes(8)
+ return int64((uint64(b[0]) << 0) |
+ (uint64(b[1]) << 8) |
+ (uint64(b[2]) << 16) |
+ (uint64(b[3]) << 24) |
+ (uint64(b[4]) << 32) |
+ (uint64(b[5]) << 40) |
+ (uint64(b[6]) << 48) |
+ (uint64(b[7]) << 56))
+}
+
+func (d *decoder) readByte() byte {
+ i := d.i
+ d.i++
+ if d.i > len(d.in) {
+ corrupted()
+ }
+ return d.in[i]
+}
+
+func (d *decoder) readBytes(length int32) []byte {
+ if length < 0 {
+ corrupted()
+ }
+ start := d.i
+ d.i += int(length)
+ if d.i < start || d.i > len(d.in) {
+ corrupted()
+ }
+ return d.in[start : start+int(length)]
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/encode.go b/vendor/gopkg.in/mgo.v2/bson/encode.go
new file mode 100644
index 0000000..add39e8
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/encode.go
@@ -0,0 +1,514 @@
+// BSON library for Go
+//
+// Copyright (c) 2010-2012 - Gustavo Niemeyer
+//
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice, this
+// list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// gobson - BSON library for Go.
+
+package bson
+
+import (
+ "encoding/json"
+ "fmt"
+ "math"
+ "net/url"
+ "reflect"
+ "strconv"
+ "time"
+)
+
+// --------------------------------------------------------------------------
+// Some internal infrastructure.
+
+var (
+ typeBinary = reflect.TypeOf(Binary{})
+ typeObjectId = reflect.TypeOf(ObjectId(""))
+ typeDBPointer = reflect.TypeOf(DBPointer{"", ObjectId("")})
+ typeSymbol = reflect.TypeOf(Symbol(""))
+ typeMongoTimestamp = reflect.TypeOf(MongoTimestamp(0))
+ typeOrderKey = reflect.TypeOf(MinKey)
+ typeDocElem = reflect.TypeOf(DocElem{})
+ typeRawDocElem = reflect.TypeOf(RawDocElem{})
+ typeRaw = reflect.TypeOf(Raw{})
+ typeURL = reflect.TypeOf(url.URL{})
+ typeTime = reflect.TypeOf(time.Time{})
+ typeString = reflect.TypeOf("")
+ typeJSONNumber = reflect.TypeOf(json.Number(""))
+)
+
+const itoaCacheSize = 32
+
+var itoaCache []string
+
+func init() {
+ itoaCache = make([]string, itoaCacheSize)
+ for i := 0; i != itoaCacheSize; i++ {
+ itoaCache[i] = strconv.Itoa(i)
+ }
+}
+
+func itoa(i int) string {
+ if i < itoaCacheSize {
+ return itoaCache[i]
+ }
+ return strconv.Itoa(i)
+}
+
+// --------------------------------------------------------------------------
+// Marshaling of the document value itself.
+
+type encoder struct {
+ out []byte
+}
+
+func (e *encoder) addDoc(v reflect.Value) {
+ for {
+ if vi, ok := v.Interface().(Getter); ok {
+ getv, err := vi.GetBSON()
+ if err != nil {
+ panic(err)
+ }
+ v = reflect.ValueOf(getv)
+ continue
+ }
+ if v.Kind() == reflect.Ptr {
+ v = v.Elem()
+ continue
+ }
+ break
+ }
+
+ if v.Type() == typeRaw {
+ raw := v.Interface().(Raw)
+ if raw.Kind != 0x03 && raw.Kind != 0x00 {
+ panic("Attempted to marshal Raw kind " + strconv.Itoa(int(raw.Kind)) + " as a document")
+ }
+ if len(raw.Data) == 0 {
+ panic("Attempted to marshal empty Raw document")
+ }
+ e.addBytes(raw.Data...)
+ return
+ }
+
+ start := e.reserveInt32()
+
+ switch v.Kind() {
+ case reflect.Map:
+ e.addMap(v)
+ case reflect.Struct:
+ e.addStruct(v)
+ case reflect.Array, reflect.Slice:
+ e.addSlice(v)
+ default:
+ panic("Can't marshal " + v.Type().String() + " as a BSON document")
+ }
+
+ e.addBytes(0)
+ e.setInt32(start, int32(len(e.out)-start))
+}
+
+func (e *encoder) addMap(v reflect.Value) {
+ for _, k := range v.MapKeys() {
+ e.addElem(k.String(), v.MapIndex(k), false)
+ }
+}
+
+func (e *encoder) addStruct(v reflect.Value) {
+ sinfo, err := getStructInfo(v.Type())
+ if err != nil {
+ panic(err)
+ }
+ var value reflect.Value
+ if sinfo.InlineMap >= 0 {
+ m := v.Field(sinfo.InlineMap)
+ if m.Len() > 0 {
+ for _, k := range m.MapKeys() {
+ ks := k.String()
+ if _, found := sinfo.FieldsMap[ks]; found {
+ panic(fmt.Sprintf("Can't have key %q in inlined map; conflicts with struct field", ks))
+ }
+ e.addElem(ks, m.MapIndex(k), false)
+ }
+ }
+ }
+ for _, info := range sinfo.FieldsList {
+ if info.Inline == nil {
+ value = v.Field(info.Num)
+ } else {
+ value = v.FieldByIndex(info.Inline)
+ }
+ if info.OmitEmpty && isZero(value) {
+ continue
+ }
+ e.addElem(info.Key, value, info.MinSize)
+ }
+}
+
+func isZero(v reflect.Value) bool {
+ switch v.Kind() {
+ case reflect.String:
+ return len(v.String()) == 0
+ case reflect.Ptr, reflect.Interface:
+ return v.IsNil()
+ case reflect.Slice:
+ return v.Len() == 0
+ case reflect.Map:
+ return v.Len() == 0
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.Bool:
+ return !v.Bool()
+ case reflect.Struct:
+ vt := v.Type()
+ if vt == typeTime {
+ return v.Interface().(time.Time).IsZero()
+ }
+ for i := 0; i < v.NumField(); i++ {
+ if vt.Field(i).PkgPath != "" && !vt.Field(i).Anonymous {
+ continue // Private field
+ }
+ if !isZero(v.Field(i)) {
+ return false
+ }
+ }
+ return true
+ }
+ return false
+}
+
+func (e *encoder) addSlice(v reflect.Value) {
+ vi := v.Interface()
+ if d, ok := vi.(D); ok {
+ for _, elem := range d {
+ e.addElem(elem.Name, reflect.ValueOf(elem.Value), false)
+ }
+ return
+ }
+ if d, ok := vi.(RawD); ok {
+ for _, elem := range d {
+ e.addElem(elem.Name, reflect.ValueOf(elem.Value), false)
+ }
+ return
+ }
+ l := v.Len()
+ et := v.Type().Elem()
+ if et == typeDocElem {
+ for i := 0; i < l; i++ {
+ elem := v.Index(i).Interface().(DocElem)
+ e.addElem(elem.Name, reflect.ValueOf(elem.Value), false)
+ }
+ return
+ }
+ if et == typeRawDocElem {
+ for i := 0; i < l; i++ {
+ elem := v.Index(i).Interface().(RawDocElem)
+ e.addElem(elem.Name, reflect.ValueOf(elem.Value), false)
+ }
+ return
+ }
+ for i := 0; i < l; i++ {
+ e.addElem(itoa(i), v.Index(i), false)
+ }
+}
+
+// --------------------------------------------------------------------------
+// Marshaling of elements in a document.
+
+func (e *encoder) addElemName(kind byte, name string) {
+ e.addBytes(kind)
+ e.addBytes([]byte(name)...)
+ e.addBytes(0)
+}
+
+func (e *encoder) addElem(name string, v reflect.Value, minSize bool) {
+
+ if !v.IsValid() {
+ e.addElemName(0x0A, name)
+ return
+ }
+
+ if getter, ok := v.Interface().(Getter); ok {
+ getv, err := getter.GetBSON()
+ if err != nil {
+ panic(err)
+ }
+ e.addElem(name, reflect.ValueOf(getv), minSize)
+ return
+ }
+
+ switch v.Kind() {
+
+ case reflect.Interface:
+ e.addElem(name, v.Elem(), minSize)
+
+ case reflect.Ptr:
+ e.addElem(name, v.Elem(), minSize)
+
+ case reflect.String:
+ s := v.String()
+ switch v.Type() {
+ case typeObjectId:
+ if len(s) != 12 {
+ panic("ObjectIDs must be exactly 12 bytes long (got " +
+ strconv.Itoa(len(s)) + ")")
+ }
+ e.addElemName(0x07, name)
+ e.addBytes([]byte(s)...)
+ case typeSymbol:
+ e.addElemName(0x0E, name)
+ e.addStr(s)
+ case typeJSONNumber:
+ n := v.Interface().(json.Number)
+ if i, err := n.Int64(); err == nil {
+ e.addElemName(0x12, name)
+ e.addInt64(i)
+ } else if f, err := n.Float64(); err == nil {
+ e.addElemName(0x01, name)
+ e.addFloat64(f)
+ } else {
+ panic("failed to convert json.Number to a number: " + s)
+ }
+ default:
+ e.addElemName(0x02, name)
+ e.addStr(s)
+ }
+
+ case reflect.Float32, reflect.Float64:
+ e.addElemName(0x01, name)
+ e.addFloat64(v.Float())
+
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ u := v.Uint()
+ if int64(u) < 0 {
+ panic("BSON has no uint64 type, and value is too large to fit correctly in an int64")
+ } else if u <= math.MaxInt32 && (minSize || v.Kind() <= reflect.Uint32) {
+ e.addElemName(0x10, name)
+ e.addInt32(int32(u))
+ } else {
+ e.addElemName(0x12, name)
+ e.addInt64(int64(u))
+ }
+
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ switch v.Type() {
+ case typeMongoTimestamp:
+ e.addElemName(0x11, name)
+ e.addInt64(v.Int())
+
+ case typeOrderKey:
+ if v.Int() == int64(MaxKey) {
+ e.addElemName(0x7F, name)
+ } else {
+ e.addElemName(0xFF, name)
+ }
+
+ default:
+ i := v.Int()
+ if (minSize || v.Type().Kind() != reflect.Int64) && i >= math.MinInt32 && i <= math.MaxInt32 {
+ // It fits into an int32, encode as such.
+ e.addElemName(0x10, name)
+ e.addInt32(int32(i))
+ } else {
+ e.addElemName(0x12, name)
+ e.addInt64(i)
+ }
+ }
+
+ case reflect.Bool:
+ e.addElemName(0x08, name)
+ if v.Bool() {
+ e.addBytes(1)
+ } else {
+ e.addBytes(0)
+ }
+
+ case reflect.Map:
+ e.addElemName(0x03, name)
+ e.addDoc(v)
+
+ case reflect.Slice:
+ vt := v.Type()
+ et := vt.Elem()
+ if et.Kind() == reflect.Uint8 {
+ e.addElemName(0x05, name)
+ e.addBinary(0x00, v.Bytes())
+ } else if et == typeDocElem || et == typeRawDocElem {
+ e.addElemName(0x03, name)
+ e.addDoc(v)
+ } else {
+ e.addElemName(0x04, name)
+ e.addDoc(v)
+ }
+
+ case reflect.Array:
+ et := v.Type().Elem()
+ if et.Kind() == reflect.Uint8 {
+ e.addElemName(0x05, name)
+ if v.CanAddr() {
+ e.addBinary(0x00, v.Slice(0, v.Len()).Interface().([]byte))
+ } else {
+ n := v.Len()
+ e.addInt32(int32(n))
+ e.addBytes(0x00)
+ for i := 0; i < n; i++ {
+ el := v.Index(i)
+ e.addBytes(byte(el.Uint()))
+ }
+ }
+ } else {
+ e.addElemName(0x04, name)
+ e.addDoc(v)
+ }
+
+ case reflect.Struct:
+ switch s := v.Interface().(type) {
+
+ case Raw:
+ kind := s.Kind
+ if kind == 0x00 {
+ kind = 0x03
+ }
+ if len(s.Data) == 0 && kind != 0x06 && kind != 0x0A && kind != 0xFF && kind != 0x7F {
+ panic("Attempted to marshal empty Raw document")
+ }
+ e.addElemName(kind, name)
+ e.addBytes(s.Data...)
+
+ case Binary:
+ e.addElemName(0x05, name)
+ e.addBinary(s.Kind, s.Data)
+
+ case Decimal128:
+ e.addElemName(0x13, name)
+ e.addInt64(int64(s.l))
+ e.addInt64(int64(s.h))
+
+ case DBPointer:
+ e.addElemName(0x0C, name)
+ e.addStr(s.Namespace)
+ if len(s.Id) != 12 {
+ panic("ObjectIDs must be exactly 12 bytes long (got " +
+ strconv.Itoa(len(s.Id)) + ")")
+ }
+ e.addBytes([]byte(s.Id)...)
+
+ case RegEx:
+ e.addElemName(0x0B, name)
+ e.addCStr(s.Pattern)
+ e.addCStr(s.Options)
+
+ case JavaScript:
+ if s.Scope == nil {
+ e.addElemName(0x0D, name)
+ e.addStr(s.Code)
+ } else {
+ e.addElemName(0x0F, name)
+ start := e.reserveInt32()
+ e.addStr(s.Code)
+ e.addDoc(reflect.ValueOf(s.Scope))
+ e.setInt32(start, int32(len(e.out)-start))
+ }
+
+ case time.Time:
+ // MongoDB handles timestamps as milliseconds.
+ e.addElemName(0x09, name)
+ e.addInt64(s.Unix()*1000 + int64(s.Nanosecond()/1e6))
+
+ case url.URL:
+ e.addElemName(0x02, name)
+ e.addStr(s.String())
+
+ case undefined:
+ e.addElemName(0x06, name)
+
+ default:
+ e.addElemName(0x03, name)
+ e.addDoc(v)
+ }
+
+ default:
+ panic("Can't marshal " + v.Type().String() + " in a BSON document")
+ }
+}
+
+// --------------------------------------------------------------------------
+// Marshaling of base types.
+
+func (e *encoder) addBinary(subtype byte, v []byte) {
+ if subtype == 0x02 {
+ // Wonder how that brilliant idea came to life. Obsolete, luckily.
+ e.addInt32(int32(len(v) + 4))
+ e.addBytes(subtype)
+ e.addInt32(int32(len(v)))
+ } else {
+ e.addInt32(int32(len(v)))
+ e.addBytes(subtype)
+ }
+ e.addBytes(v...)
+}
+
+func (e *encoder) addStr(v string) {
+ e.addInt32(int32(len(v) + 1))
+ e.addCStr(v)
+}
+
+func (e *encoder) addCStr(v string) {
+ e.addBytes([]byte(v)...)
+ e.addBytes(0)
+}
+
+func (e *encoder) reserveInt32() (pos int) {
+ pos = len(e.out)
+ e.addBytes(0, 0, 0, 0)
+ return pos
+}
+
+func (e *encoder) setInt32(pos int, v int32) {
+ e.out[pos+0] = byte(v)
+ e.out[pos+1] = byte(v >> 8)
+ e.out[pos+2] = byte(v >> 16)
+ e.out[pos+3] = byte(v >> 24)
+}
+
+func (e *encoder) addInt32(v int32) {
+ u := uint32(v)
+ e.addBytes(byte(u), byte(u>>8), byte(u>>16), byte(u>>24))
+}
+
+func (e *encoder) addInt64(v int64) {
+ u := uint64(v)
+ e.addBytes(byte(u), byte(u>>8), byte(u>>16), byte(u>>24),
+ byte(u>>32), byte(u>>40), byte(u>>48), byte(u>>56))
+}
+
+func (e *encoder) addFloat64(v float64) {
+ e.addInt64(int64(math.Float64bits(v)))
+}
+
+func (e *encoder) addBytes(v ...byte) {
+ e.out = append(e.out, v...)
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/json.go b/vendor/gopkg.in/mgo.v2/bson/json.go
new file mode 100644
index 0000000..09df826
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/json.go
@@ -0,0 +1,380 @@
+package bson
+
+import (
+ "bytes"
+ "encoding/base64"
+ "fmt"
+ "gopkg.in/mgo.v2/internal/json"
+ "strconv"
+ "time"
+)
+
+// UnmarshalJSON unmarshals a JSON value that may hold non-standard
+// syntax as defined in BSON's extended JSON specification.
+func UnmarshalJSON(data []byte, value interface{}) error {
+ d := json.NewDecoder(bytes.NewBuffer(data))
+ d.Extend(&jsonExt)
+ return d.Decode(value)
+}
+
+// MarshalJSON marshals a JSON value that may hold non-standard
+// syntax as defined in BSON's extended JSON specification.
+func MarshalJSON(value interface{}) ([]byte, error) {
+ var buf bytes.Buffer
+ e := json.NewEncoder(&buf)
+ e.Extend(&jsonExt)
+ err := e.Encode(value)
+ if err != nil {
+ return nil, err
+ }
+ return buf.Bytes(), nil
+}
+
+// jdec is used internally by the JSON decoding functions
+// so they may unmarshal functions without getting into endless
+// recursion due to keyed objects.
+func jdec(data []byte, value interface{}) error {
+ d := json.NewDecoder(bytes.NewBuffer(data))
+ d.Extend(&funcExt)
+ return d.Decode(value)
+}
+
+var jsonExt json.Extension
+var funcExt json.Extension
+
+// TODO
+// - Shell regular expressions ("/regexp/opts")
+
+func init() {
+ jsonExt.DecodeUnquotedKeys(true)
+ jsonExt.DecodeTrailingCommas(true)
+
+ funcExt.DecodeFunc("BinData", "$binaryFunc", "$type", "$binary")
+ jsonExt.DecodeKeyed("$binary", jdecBinary)
+ jsonExt.DecodeKeyed("$binaryFunc", jdecBinary)
+ jsonExt.EncodeType([]byte(nil), jencBinarySlice)
+ jsonExt.EncodeType(Binary{}, jencBinaryType)
+
+ funcExt.DecodeFunc("ISODate", "$dateFunc", "S")
+ funcExt.DecodeFunc("new Date", "$dateFunc", "S")
+ jsonExt.DecodeKeyed("$date", jdecDate)
+ jsonExt.DecodeKeyed("$dateFunc", jdecDate)
+ jsonExt.EncodeType(time.Time{}, jencDate)
+
+ funcExt.DecodeFunc("Timestamp", "$timestamp", "t", "i")
+ jsonExt.DecodeKeyed("$timestamp", jdecTimestamp)
+ jsonExt.EncodeType(MongoTimestamp(0), jencTimestamp)
+
+ funcExt.DecodeConst("undefined", Undefined)
+
+ jsonExt.DecodeKeyed("$regex", jdecRegEx)
+ jsonExt.EncodeType(RegEx{}, jencRegEx)
+
+ funcExt.DecodeFunc("ObjectId", "$oidFunc", "Id")
+ jsonExt.DecodeKeyed("$oid", jdecObjectId)
+ jsonExt.DecodeKeyed("$oidFunc", jdecObjectId)
+ jsonExt.EncodeType(ObjectId(""), jencObjectId)
+
+ funcExt.DecodeFunc("DBRef", "$dbrefFunc", "$ref", "$id")
+ jsonExt.DecodeKeyed("$dbrefFunc", jdecDBRef)
+
+ funcExt.DecodeFunc("NumberLong", "$numberLongFunc", "N")
+ jsonExt.DecodeKeyed("$numberLong", jdecNumberLong)
+ jsonExt.DecodeKeyed("$numberLongFunc", jdecNumberLong)
+ jsonExt.EncodeType(int64(0), jencNumberLong)
+ jsonExt.EncodeType(int(0), jencInt)
+
+ funcExt.DecodeConst("MinKey", MinKey)
+ funcExt.DecodeConst("MaxKey", MaxKey)
+ jsonExt.DecodeKeyed("$minKey", jdecMinKey)
+ jsonExt.DecodeKeyed("$maxKey", jdecMaxKey)
+ jsonExt.EncodeType(orderKey(0), jencMinMaxKey)
+
+ jsonExt.DecodeKeyed("$undefined", jdecUndefined)
+ jsonExt.EncodeType(Undefined, jencUndefined)
+
+ jsonExt.Extend(&funcExt)
+}
+
+func fbytes(format string, args ...interface{}) []byte {
+ var buf bytes.Buffer
+ fmt.Fprintf(&buf, format, args...)
+ return buf.Bytes()
+}
+
+func jdecBinary(data []byte) (interface{}, error) {
+ var v struct {
+ Binary []byte `json:"$binary"`
+ Type string `json:"$type"`
+ Func struct {
+ Binary []byte `json:"$binary"`
+ Type int64 `json:"$type"`
+ } `json:"$binaryFunc"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+
+ var binData []byte
+ var binKind int64
+ if v.Type == "" && v.Binary == nil {
+ binData = v.Func.Binary
+ binKind = v.Func.Type
+ } else if v.Type == "" {
+ return v.Binary, nil
+ } else {
+ binData = v.Binary
+ binKind, err = strconv.ParseInt(v.Type, 0, 64)
+ if err != nil {
+ binKind = -1
+ }
+ }
+
+ if binKind == 0 {
+ return binData, nil
+ }
+ if binKind < 0 || binKind > 255 {
+ return nil, fmt.Errorf("invalid type in binary object: %s", data)
+ }
+
+ return Binary{Kind: byte(binKind), Data: binData}, nil
+}
+
+func jencBinarySlice(v interface{}) ([]byte, error) {
+ in := v.([]byte)
+ out := make([]byte, base64.StdEncoding.EncodedLen(len(in)))
+ base64.StdEncoding.Encode(out, in)
+ return fbytes(`{"$binary":"%s","$type":"0x0"}`, out), nil
+}
+
+func jencBinaryType(v interface{}) ([]byte, error) {
+ in := v.(Binary)
+ out := make([]byte, base64.StdEncoding.EncodedLen(len(in.Data)))
+ base64.StdEncoding.Encode(out, in.Data)
+ return fbytes(`{"$binary":"%s","$type":"0x%x"}`, out, in.Kind), nil
+}
+
+const jdateFormat = "2006-01-02T15:04:05.999Z"
+
+func jdecDate(data []byte) (interface{}, error) {
+ var v struct {
+ S string `json:"$date"`
+ Func struct {
+ S string
+ } `json:"$dateFunc"`
+ }
+ _ = jdec(data, &v)
+ if v.S == "" {
+ v.S = v.Func.S
+ }
+ if v.S != "" {
+ for _, format := range []string{jdateFormat, "2006-01-02"} {
+ t, err := time.Parse(format, v.S)
+ if err == nil {
+ return t, nil
+ }
+ }
+ return nil, fmt.Errorf("cannot parse date: %q", v.S)
+ }
+
+ var vn struct {
+ Date struct {
+ N int64 `json:"$numberLong,string"`
+ } `json:"$date"`
+ Func struct {
+ S int64
+ } `json:"$dateFunc"`
+ }
+ err := jdec(data, &vn)
+ if err != nil {
+ return nil, fmt.Errorf("cannot parse date: %q", data)
+ }
+ n := vn.Date.N
+ if n == 0 {
+ n = vn.Func.S
+ }
+ return time.Unix(n/1000, n%1000*1e6).UTC(), nil
+}
+
+func jencDate(v interface{}) ([]byte, error) {
+ t := v.(time.Time)
+ return fbytes(`{"$date":%q}`, t.Format(jdateFormat)), nil
+}
+
+func jdecTimestamp(data []byte) (interface{}, error) {
+ var v struct {
+ Func struct {
+ T int32 `json:"t"`
+ I int32 `json:"i"`
+ } `json:"$timestamp"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ return MongoTimestamp(uint64(v.Func.T)<<32 | uint64(uint32(v.Func.I))), nil
+}
+
+func jencTimestamp(v interface{}) ([]byte, error) {
+ ts := uint64(v.(MongoTimestamp))
+ return fbytes(`{"$timestamp":{"t":%d,"i":%d}}`, ts>>32, uint32(ts)), nil
+}
+
+func jdecRegEx(data []byte) (interface{}, error) {
+ var v struct {
+ Regex string `json:"$regex"`
+ Options string `json:"$options"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ return RegEx{v.Regex, v.Options}, nil
+}
+
+func jencRegEx(v interface{}) ([]byte, error) {
+ re := v.(RegEx)
+ type regex struct {
+ Regex string `json:"$regex"`
+ Options string `json:"$options"`
+ }
+ return json.Marshal(regex{re.Pattern, re.Options})
+}
+
+func jdecObjectId(data []byte) (interface{}, error) {
+ var v struct {
+ Id string `json:"$oid"`
+ Func struct {
+ Id string
+ } `json:"$oidFunc"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ if v.Id == "" {
+ v.Id = v.Func.Id
+ }
+ return ObjectIdHex(v.Id), nil
+}
+
+func jencObjectId(v interface{}) ([]byte, error) {
+ return fbytes(`{"$oid":"%s"}`, v.(ObjectId).Hex()), nil
+}
+
+func jdecDBRef(data []byte) (interface{}, error) {
+ // TODO Support unmarshaling $ref and $id into the input value.
+ var v struct {
+ Obj map[string]interface{} `json:"$dbrefFunc"`
+ }
+ // TODO Fix this. Must not be required.
+ v.Obj = make(map[string]interface{})
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ return v.Obj, nil
+}
+
+func jdecNumberLong(data []byte) (interface{}, error) {
+ var v struct {
+ N int64 `json:"$numberLong,string"`
+ Func struct {
+ N int64 `json:",string"`
+ } `json:"$numberLongFunc"`
+ }
+ var vn struct {
+ N int64 `json:"$numberLong"`
+ Func struct {
+ N int64
+ } `json:"$numberLongFunc"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ err = jdec(data, &vn)
+ v.N = vn.N
+ v.Func.N = vn.Func.N
+ }
+ if err != nil {
+ return nil, err
+ }
+ if v.N != 0 {
+ return v.N, nil
+ }
+ return v.Func.N, nil
+}
+
+func jencNumberLong(v interface{}) ([]byte, error) {
+ n := v.(int64)
+ f := `{"$numberLong":"%d"}`
+ if n <= 1<<53 {
+ f = `{"$numberLong":%d}`
+ }
+ return fbytes(f, n), nil
+}
+
+func jencInt(v interface{}) ([]byte, error) {
+ n := v.(int)
+ f := `{"$numberLong":"%d"}`
+ if int64(n) <= 1<<53 {
+ f = `%d`
+ }
+ return fbytes(f, n), nil
+}
+
+func jdecMinKey(data []byte) (interface{}, error) {
+ var v struct {
+ N int64 `json:"$minKey"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ if v.N != 1 {
+ return nil, fmt.Errorf("invalid $minKey object: %s", data)
+ }
+ return MinKey, nil
+}
+
+func jdecMaxKey(data []byte) (interface{}, error) {
+ var v struct {
+ N int64 `json:"$maxKey"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ if v.N != 1 {
+ return nil, fmt.Errorf("invalid $maxKey object: %s", data)
+ }
+ return MaxKey, nil
+}
+
+func jencMinMaxKey(v interface{}) ([]byte, error) {
+ switch v.(orderKey) {
+ case MinKey:
+ return []byte(`{"$minKey":1}`), nil
+ case MaxKey:
+ return []byte(`{"$maxKey":1}`), nil
+ }
+ panic(fmt.Sprintf("invalid $minKey/$maxKey value: %d", v))
+}
+
+func jdecUndefined(data []byte) (interface{}, error) {
+ var v struct {
+ B bool `json:"$undefined"`
+ }
+ err := jdec(data, &v)
+ if err != nil {
+ return nil, err
+ }
+ if !v.B {
+ return nil, fmt.Errorf("invalid $undefined object: %s", data)
+ }
+ return Undefined, nil
+}
+
+func jencUndefined(v interface{}) ([]byte, error) {
+ return []byte(`{"$undefined":true}`), nil
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/json_test.go b/vendor/gopkg.in/mgo.v2/bson/json_test.go
new file mode 100644
index 0000000..866f51c
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/json_test.go
@@ -0,0 +1,184 @@
+package bson_test
+
+import (
+ "gopkg.in/mgo.v2/bson"
+
+ . "gopkg.in/check.v1"
+ "reflect"
+ "strings"
+ "time"
+)
+
+type jsonTest struct {
+ a interface{} // value encoded into JSON (optional)
+	b string      // JSON expected as output of (a), and used as input to (c)
+	c interface{} // value expected from decoding (b); defaults to (a)
+ e string // error string, if decoding (b) should fail
+}
+
+var jsonTests = []jsonTest{
+ // $binary
+ {
+ a: []byte("foo"),
+ b: `{"$binary":"Zm9v","$type":"0x0"}`,
+ }, {
+ a: bson.Binary{Kind: 2, Data: []byte("foo")},
+ b: `{"$binary":"Zm9v","$type":"0x2"}`,
+ }, {
+ b: `BinData(2,"Zm9v")`,
+ c: bson.Binary{Kind: 2, Data: []byte("foo")},
+ },
+
+ // $date
+ {
+ a: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.UTC),
+ b: `{"$date":"2016-05-15T01:02:03.004Z"}`,
+ }, {
+ b: `{"$date": {"$numberLong": "1002"}}`,
+ c: time.Date(1970, 1, 1, 0, 0, 1, 2e6, time.UTC),
+ }, {
+ b: `ISODate("2016-05-15T01:02:03.004Z")`,
+ c: time.Date(2016, 5, 15, 1, 2, 3, 4000000, time.UTC),
+ }, {
+ b: `new Date(1000)`,
+ c: time.Date(1970, 1, 1, 0, 0, 1, 0, time.UTC),
+ }, {
+ b: `new Date("2016-05-15")`,
+ c: time.Date(2016, 5, 15, 0, 0, 0, 0, time.UTC),
+ },
+
+ // $timestamp
+ {
+ a: bson.MongoTimestamp(4294967298),
+ b: `{"$timestamp":{"t":1,"i":2}}`,
+ }, {
+ b: `Timestamp(1, 2)`,
+ c: bson.MongoTimestamp(4294967298),
+ },
+
+ // $regex
+ {
+ a: bson.RegEx{"pattern", "options"},
+ b: `{"$regex":"pattern","$options":"options"}`,
+ },
+
+ // $oid
+ {
+ a: bson.ObjectIdHex("0123456789abcdef01234567"),
+ b: `{"$oid":"0123456789abcdef01234567"}`,
+ }, {
+ b: `ObjectId("0123456789abcdef01234567")`,
+ c: bson.ObjectIdHex("0123456789abcdef01234567"),
+ },
+
+ // $ref (no special type)
+ {
+ b: `DBRef("name", "id")`,
+ c: map[string]interface{}{"$ref": "name", "$id": "id"},
+ },
+
+ // $numberLong
+ {
+ a: 123,
+ b: `123`,
+ }, {
+ a: int64(9007199254740992),
+ b: `{"$numberLong":9007199254740992}`,
+ }, {
+ a: int64(1<<53 + 1),
+ b: `{"$numberLong":"9007199254740993"}`,
+ }, {
+ a: 1<<53 + 1,
+ b: `{"$numberLong":"9007199254740993"}`,
+ c: int64(9007199254740993),
+ }, {
+ b: `NumberLong(9007199254740992)`,
+ c: int64(1 << 53),
+ }, {
+ b: `NumberLong("9007199254740993")`,
+ c: int64(1<<53 + 1),
+ },
+
+ // $minKey, $maxKey
+ {
+ a: bson.MinKey,
+ b: `{"$minKey":1}`,
+ }, {
+ a: bson.MaxKey,
+ b: `{"$maxKey":1}`,
+ }, {
+ b: `MinKey`,
+ c: bson.MinKey,
+ }, {
+ b: `MaxKey`,
+ c: bson.MaxKey,
+ }, {
+ b: `{"$minKey":0}`,
+ e: `invalid $minKey object: {"$minKey":0}`,
+ }, {
+ b: `{"$maxKey":0}`,
+ e: `invalid $maxKey object: {"$maxKey":0}`,
+ },
+
+ {
+ a: bson.Undefined,
+ b: `{"$undefined":true}`,
+ }, {
+ b: `undefined`,
+ c: bson.Undefined,
+ }, {
+ b: `{"v": undefined}`,
+ c: struct{ V interface{} }{bson.Undefined},
+ },
+
+ // Unquoted keys and trailing commas
+ {
+ b: `{$foo: ["bar",],}`,
+ c: map[string]interface{}{"$foo": []interface{}{"bar"}},
+ },
+}
+
+func (s *S) TestJSON(c *C) {
+ for i, item := range jsonTests {
+ c.Logf("------------ (#%d)", i)
+ c.Logf("A: %#v", item.a)
+ c.Logf("B: %#v", item.b)
+
+ if item.c == nil {
+ item.c = item.a
+ } else {
+ c.Logf("C: %#v", item.c)
+ }
+ if item.e != "" {
+ c.Logf("E: %s", item.e)
+ }
+
+ if item.a != nil {
+ data, err := bson.MarshalJSON(item.a)
+ c.Assert(err, IsNil)
+ c.Logf("Dumped: %#v", string(data))
+ c.Assert(strings.TrimSuffix(string(data), "\n"), Equals, item.b)
+ }
+
+ var zero interface{}
+ if item.c == nil {
+ zero = &struct{}{}
+ } else {
+ zero = reflect.New(reflect.TypeOf(item.c)).Interface()
+ }
+ err := bson.UnmarshalJSON([]byte(item.b), zero)
+ if item.e != "" {
+ c.Assert(err, NotNil)
+ c.Assert(err.Error(), Equals, item.e)
+ continue
+ }
+ c.Assert(err, IsNil)
+ zerov := reflect.ValueOf(zero)
+ value := zerov.Interface()
+ if zerov.Kind() == reflect.Ptr {
+ value = zerov.Elem().Interface()
+ }
+ c.Logf("Loaded: %#v", value)
+ c.Assert(value, DeepEquals, item.c)
+ }
+}
diff --git a/vendor/gopkg.in/mgo.v2/bson/specdata_test.go b/vendor/gopkg.in/mgo.v2/bson/specdata_test.go
new file mode 100644
index 0000000..513f9b2
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/bson/specdata_test.go
@@ -0,0 +1,241 @@
+package bson_test
+
+var specTests = []string{
+ `
+---
+description: "Array type"
+documents:
+ -
+ decoded:
+ a : []
+ encoded: 0D000000046100050000000000
+ -
+ decoded:
+ a: [10]
+ encoded: 140000000461000C0000001030000A0000000000
+ -
+ # Decode an array that uses an empty string as the key
+ decodeOnly : true
+ decoded:
+ a: [10]
+ encoded: 130000000461000B00000010000A0000000000
+ -
+ # Decode an array that uses a non-numeric string as the key
+ decodeOnly : true
+ decoded:
+ a: [10]
+ encoded: 150000000461000D000000106162000A0000000000
+
+
+`, `
+---
+description: "Boolean type"
+documents:
+ -
+ encoded: "090000000862000100"
+ decoded: { "b" : true }
+ -
+ encoded: "090000000862000000"
+ decoded: { "b" : false }
+
+
+ `, `
+---
+description: "Corrupted BSON"
+documents:
+ -
+ encoded: "09000000016600"
+ error: "truncated double"
+ -
+ encoded: "09000000026600"
+ error: "truncated string"
+ -
+ encoded: "09000000036600"
+ error: "truncated document"
+ -
+ encoded: "09000000046600"
+ error: "truncated array"
+ -
+ encoded: "09000000056600"
+ error: "truncated binary"
+ -
+ encoded: "09000000076600"
+ error: "truncated objectid"
+ -
+ encoded: "09000000086600"
+ error: "truncated boolean"
+ -
+ encoded: "09000000096600"
+ error: "truncated date"
+ -
+ encoded: "090000000b6600"
+ error: "truncated regex"
+ -
+ encoded: "090000000c6600"
+ error: "truncated db pointer"
+ -
+ encoded: "0C0000000d6600"
+ error: "truncated javascript"
+ -
+ encoded: "0C0000000e6600"
+ error: "truncated symbol"
+ -
+ encoded: "0C0000000f6600"
+ error: "truncated javascript with scope"
+ -
+ encoded: "0C000000106600"
+ error: "truncated int32"
+ -
+ encoded: "0C000000116600"
+ error: "truncated timestamp"
+ -
+ encoded: "0C000000126600"
+ error: "truncated int64"
+ -
+ encoded: "0400000000"
+ error: basic
+ -
+ encoded: "0500000001"
+ error: basic
+ -
+ encoded: "05000000"
+ error: basic
+ -
+ encoded: "0700000002610078563412"
+ error: basic
+ -
+ encoded: "090000001061000500"
+ error: basic
+ -
+ encoded: "00000000000000000000"
+ error: basic
+ -
+ encoded: "1300000002666f6f00040000006261720000"
+ error: "basic"
+ -
+ encoded: "1800000003666f6f000f0000001062617200ffffff7f0000"
+ error: basic
+ -
+ encoded: "1500000003666f6f000c0000000862617200010000"
+ error: basic
+ -
+ encoded: "1c00000003666f6f001200000002626172000500000062617a000000"
+ error: basic
+ -
+ encoded: "1000000002610004000000616263ff00"
+ error: string is not null-terminated
+ -
+ encoded: "0c0000000200000000000000"
+ error: bad_string_length
+ -
+ encoded: "120000000200ffffffff666f6f6261720000"
+ error: bad_string_length
+ -
+ encoded: "0c0000000e00000000000000"
+ error: bad_string_length
+ -
+ encoded: "120000000e00ffffffff666f6f6261720000"
+ error: bad_string_length
+ -
+ encoded: "180000000c00fa5bd841d6585d9900"
+ error: ""
+ -
+ encoded: "1e0000000c00ffffffff666f6f626172005259b56afa5bd841d6585d9900"
+ error: bad_string_length
+ -
+ encoded: "0c0000000d00000000000000"
+ error: bad_string_length
+ -
+ encoded: "0c0000000d00ffffffff0000"
+ error: bad_string_length
+ -
+ encoded: "1c0000000f001500000000000000000c000000020001000000000000"
+ error: bad_string_length
+ -
+ encoded: "1c0000000f0015000000ffffffff000c000000020001000000000000"
+ error: bad_string_length
+ -
+ encoded: "1c0000000f001500000001000000000c000000020000000000000000"
+ error: bad_string_length
+ -
+ encoded: "1c0000000f001500000001000000000c0000000200ffffffff000000"
+ error: bad_string_length
+ -
+ encoded: "0E00000008616263646566676869707172737475"
+ error: "Run-on CString"
+ -
+ encoded: "0100000000"
+ error: "An object size that's too small to even include the object size, but is correctly encoded, along with a correct EOO (and no data)"
+ -
+ encoded: "1a0000000e74657374000c00000068656c6c6f20776f726c6400000500000000"
+ error: "One object, but with object size listed smaller than it is in the data"
+ -
+ encoded: "05000000"
+ error: "One object, missing the EOO at the end"
+ -
+ encoded: "0500000001"
+ error: "One object, sized correctly, with a spot for an EOO, but the EOO is 0x01"
+ -
+ encoded: "05000000ff"
+ error: "One object, sized correctly, with a spot for an EOO, but the EOO is 0xff"
+ -
+ encoded: "0500000070"
+ error: "One object, sized correctly, with a spot for an EOO, but the EOO is 0x70"
+ -
+ encoded: "07000000000000"
+ error: "Invalid BSON type low range"
+ -
+ encoded: "07000000800000"
+ error: "Invalid BSON type high range"
+ -
+ encoded: "090000000862000200"
+ error: "Invalid boolean value of 2"
+ -
+ encoded: "09000000086200ff00"
+ error: "Invalid boolean value of -1"
+ `, `
+---
+description: "Int32 type"
+documents:
+ -
+ decoded:
+ i: -2147483648
+ encoded: 0C0000001069000000008000
+ -
+ decoded:
+ i: 2147483647
+ encoded: 0C000000106900FFFFFF7F00
+ -
+ decoded:
+ i: -1
+ encoded: 0C000000106900FFFFFFFF00
+ -
+ decoded:
+ i: 0
+ encoded: 0C0000001069000000000000
+ -
+ decoded:
+ i: 1
+ encoded: 0C0000001069000100000000
+
+`, `
+---
+description: "String type"
+documents:
+ -
+ decoded:
+ s : ""
+ encoded: 0D000000027300010000000000
+ -
+ decoded:
+ s: "a"
+ encoded: 0E00000002730002000000610000
+ -
+ decoded:
+ s: "This is a string"
+ encoded: 1D0000000273001100000054686973206973206120737472696E670000
+ -
+ decoded:
+ s: "κόσμε"
+ encoded: 180000000273000C000000CEBAE1BDB9CF83CEBCCEB50000
+`}
diff --git a/vendor/gopkg.in/mgo.v2/internal/json/LICENSE b/vendor/gopkg.in/mgo.v2/internal/json/LICENSE
new file mode 100644
index 0000000..7448756
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/internal/json/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2012 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/gopkg.in/mgo.v2/internal/json/bench_test.go b/vendor/gopkg.in/mgo.v2/internal/json/bench_test.go
new file mode 100644
index 0000000..cd7380b
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/internal/json/bench_test.go
@@ -0,0 +1,223 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Large data benchmark.
+// The JSON data is a summary of agl's changes in the
+// go, webkit, and chromium open source projects.
+// We benchmark converting between the JSON form
+// and in-memory data structures.
+
+package json
+
+import (
+ "bytes"
+ "compress/gzip"
+ "io/ioutil"
+ "os"
+ "strings"
+ "testing"
+)
+
+type codeResponse struct {
+ Tree *codeNode `json:"tree"`
+ Username string `json:"username"`
+}
+
+type codeNode struct {
+ Name string `json:"name"`
+ Kids []*codeNode `json:"kids"`
+ CLWeight float64 `json:"cl_weight"`
+ Touches int `json:"touches"`
+ MinT int64 `json:"min_t"`
+ MaxT int64 `json:"max_t"`
+ MeanT int64 `json:"mean_t"`
+}
+
+var codeJSON []byte
+var codeStruct codeResponse
+
+func codeInit() {
+ f, err := os.Open("testdata/code.json.gz")
+ if err != nil {
+ panic(err)
+ }
+ defer f.Close()
+ gz, err := gzip.NewReader(f)
+ if err != nil {
+ panic(err)
+ }
+ data, err := ioutil.ReadAll(gz)
+ if err != nil {
+ panic(err)
+ }
+
+ codeJSON = data
+
+ if err := Unmarshal(codeJSON, &codeStruct); err != nil {
+ panic("unmarshal code.json: " + err.Error())
+ }
+
+ if data, err = Marshal(&codeStruct); err != nil {
+ panic("marshal code.json: " + err.Error())
+ }
+
+ if !bytes.Equal(data, codeJSON) {
+ println("different lengths", len(data), len(codeJSON))
+ for i := 0; i < len(data) && i < len(codeJSON); i++ {
+ if data[i] != codeJSON[i] {
+ println("re-marshal: changed at byte", i)
+ println("orig: ", string(codeJSON[i-10:i+10]))
+ println("new: ", string(data[i-10:i+10]))
+ break
+ }
+ }
+ panic("re-marshal code.json: different result")
+ }
+}
+
+func BenchmarkCodeEncoder(b *testing.B) {
+ if codeJSON == nil {
+ b.StopTimer()
+ codeInit()
+ b.StartTimer()
+ }
+ enc := NewEncoder(ioutil.Discard)
+ for i := 0; i < b.N; i++ {
+ if err := enc.Encode(&codeStruct); err != nil {
+ b.Fatal("Encode:", err)
+ }
+ }
+ b.SetBytes(int64(len(codeJSON)))
+}
+
+func BenchmarkCodeMarshal(b *testing.B) {
+ if codeJSON == nil {
+ b.StopTimer()
+ codeInit()
+ b.StartTimer()
+ }
+ for i := 0; i < b.N; i++ {
+ if _, err := Marshal(&codeStruct); err != nil {
+ b.Fatal("Marshal:", err)
+ }
+ }
+ b.SetBytes(int64(len(codeJSON)))
+}
+
+func BenchmarkCodeDecoder(b *testing.B) {
+ if codeJSON == nil {
+ b.StopTimer()
+ codeInit()
+ b.StartTimer()
+ }
+ var buf bytes.Buffer
+ dec := NewDecoder(&buf)
+ var r codeResponse
+ for i := 0; i < b.N; i++ {
+ buf.Write(codeJSON)
+ // hide EOF
+ buf.WriteByte('\n')
+ buf.WriteByte('\n')
+ buf.WriteByte('\n')
+ if err := dec.Decode(&r); err != nil {
+ b.Fatal("Decode:", err)
+ }
+ }
+ b.SetBytes(int64(len(codeJSON)))
+}
+
+func BenchmarkDecoderStream(b *testing.B) {
+ b.StopTimer()
+ var buf bytes.Buffer
+ dec := NewDecoder(&buf)
+ buf.WriteString(`"` + strings.Repeat("x", 1000000) + `"` + "\n\n\n")
+ var x interface{}
+ if err := dec.Decode(&x); err != nil {
+ b.Fatal("Decode:", err)
+ }
+ ones := strings.Repeat(" 1\n", 300000) + "\n\n\n"
+ b.StartTimer()
+ for i := 0; i < b.N; i++ {
+ if i%300000 == 0 {
+ buf.WriteString(ones)
+ }
+ x = nil
+ if err := dec.Decode(&x); err != nil || x != 1.0 {
+ b.Fatalf("Decode: %v after %d", err, i)
+ }
+ }
+}
+
+func BenchmarkCodeUnmarshal(b *testing.B) {
+ if codeJSON == nil {
+ b.StopTimer()
+ codeInit()
+ b.StartTimer()
+ }
+ for i := 0; i < b.N; i++ {
+ var r codeResponse
+ if err := Unmarshal(codeJSON, &r); err != nil {
+ b.Fatal("Unmarshal:", err)
+ }
+ }
+ b.SetBytes(int64(len(codeJSON)))
+}
+
+func BenchmarkCodeUnmarshalReuse(b *testing.B) {
+ if codeJSON == nil {
+ b.StopTimer()
+ codeInit()
+ b.StartTimer()
+ }
+ var r codeResponse
+ for i := 0; i < b.N; i++ {
+ if err := Unmarshal(codeJSON, &r); err != nil {
+ b.Fatal("Unmarshal:", err)
+ }
+ }
+}
+
+func BenchmarkUnmarshalString(b *testing.B) {
+ data := []byte(`"hello, world"`)
+ var s string
+
+ for i := 0; i < b.N; i++ {
+ if err := Unmarshal(data, &s); err != nil {
+ b.Fatal("Unmarshal:", err)
+ }
+ }
+}
+
+func BenchmarkUnmarshalFloat64(b *testing.B) {
+ var f float64
+ data := []byte(`3.14`)
+
+ for i := 0; i < b.N; i++ {
+ if err := Unmarshal(data, &f); err != nil {
+ b.Fatal("Unmarshal:", err)
+ }
+ }
+}
+
+func BenchmarkUnmarshalInt64(b *testing.B) {
+ var x int64
+ data := []byte(`3`)
+
+ for i := 0; i < b.N; i++ {
+ if err := Unmarshal(data, &x); err != nil {
+ b.Fatal("Unmarshal:", err)
+ }
+ }
+}
+
+func BenchmarkIssue10335(b *testing.B) {
+ b.ReportAllocs()
+ var s struct{}
+ j := []byte(`{"a":{ }}`)
+ for n := 0; n < b.N; n++ {
+ if err := Unmarshal(j, &s); err != nil {
+ b.Fatal(err)
+ }
+ }
+}
diff --git a/vendor/gopkg.in/mgo.v2/internal/json/decode.go b/vendor/gopkg.in/mgo.v2/internal/json/decode.go
new file mode 100644
index 0000000..ce7c7d2
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/internal/json/decode.go
@@ -0,0 +1,1685 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Represents JSON data structure using native Go types: booleans, floats,
+// strings, arrays, and maps.
+
+package json
+
+import (
+ "bytes"
+ "encoding"
+ "encoding/base64"
+ "errors"
+ "fmt"
+ "reflect"
+ "runtime"
+ "strconv"
+ "unicode"
+ "unicode/utf16"
+ "unicode/utf8"
+)
+
+// Unmarshal parses the JSON-encoded data and stores the result
+// in the value pointed to by v.
+//
+// Unmarshal uses the inverse of the encodings that
+// Marshal uses, allocating maps, slices, and pointers as necessary,
+// with the following additional rules:
+//
+// To unmarshal JSON into a pointer, Unmarshal first handles the case of
+// the JSON being the JSON literal null. In that case, Unmarshal sets
+// the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into
+// the value pointed at by the pointer. If the pointer is nil, Unmarshal
+// allocates a new value for it to point to.
+//
+// To unmarshal JSON into a struct, Unmarshal matches incoming object
+// keys to the keys used by Marshal (either the struct field name or its tag),
+// preferring an exact match but also accepting a case-insensitive match.
+// Unmarshal will only set exported fields of the struct.
+//
+// To unmarshal JSON into an interface value,
+// Unmarshal stores one of these in the interface value:
+//
+// bool, for JSON booleans
+// float64, for JSON numbers
+// string, for JSON strings
+// []interface{}, for JSON arrays
+// map[string]interface{}, for JSON objects
+// nil for JSON null
+//
+// To unmarshal a JSON array into a slice, Unmarshal resets the slice length
+// to zero and then appends each element to the slice.
+// As a special case, to unmarshal an empty JSON array into a slice,
+// Unmarshal replaces the slice with a new empty slice.
+//
+// To unmarshal a JSON array into a Go array, Unmarshal decodes
+// JSON array elements into corresponding Go array elements.
+// If the Go array is smaller than the JSON array,
+// the additional JSON array elements are discarded.
+// If the JSON array is smaller than the Go array,
+// the additional Go array elements are set to zero values.
+//
+// To unmarshal a JSON object into a map, Unmarshal first establishes a map to
+// use. If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal
+// reuses the existing map, keeping existing entries. Unmarshal then stores key-
+// value pairs from the JSON object into the map. The map's key type must
+// either be a string or implement encoding.TextUnmarshaler.
+//
+// If a JSON value is not appropriate for a given target type,
+// or if a JSON number overflows the target type, Unmarshal
+// skips that field and completes the unmarshaling as best it can.
+// If no more serious errors are encountered, Unmarshal returns
+// an UnmarshalTypeError describing the earliest such error.
+//
+// The JSON null value unmarshals into an interface, map, pointer, or slice
+// by setting that Go value to nil. Because null is often used in JSON to mean
+// ``not present,'' unmarshaling a JSON null into any other Go type has no effect
+// on the value and produces no error.
+//
+// When unmarshaling quoted strings, invalid UTF-8 or
+// invalid UTF-16 surrogate pairs are not treated as an error.
+// Instead, they are replaced by the Unicode replacement
+// character U+FFFD.
+//
+func Unmarshal(data []byte, v interface{}) error {
+ // Check for well-formedness.
+ // Avoids filling out half a data structure
+ // before discovering a JSON syntax error.
+ var d decodeState
+ err := checkValid(data, &d.scan)
+ if err != nil {
+ return err
+ }
+
+ d.init(data)
+ return d.unmarshal(v)
+}
+
+// Unmarshaler is the interface implemented by types
+// that can unmarshal a JSON description of themselves.
+// The input can be assumed to be a valid encoding of
+// a JSON value. UnmarshalJSON must copy the JSON data
+// if it wishes to retain the data after returning.
+type Unmarshaler interface {
+ UnmarshalJSON([]byte) error
+}
+
+// An UnmarshalTypeError describes a JSON value that was
+// not appropriate for a value of a specific Go type.
+type UnmarshalTypeError struct {
+ Value string // description of JSON value - "bool", "array", "number -5"
+ Type reflect.Type // type of Go value it could not be assigned to
+ Offset int64 // error occurred after reading Offset bytes
+}
+
+func (e *UnmarshalTypeError) Error() string {
+ return "json: cannot unmarshal " + e.Value + " into Go value of type " + e.Type.String()
+}
+
+// An UnmarshalFieldError describes a JSON object key that
+// led to an unexported (and therefore unwritable) struct field.
+// (No longer used; kept for compatibility.)
+type UnmarshalFieldError struct {
+ Key string
+ Type reflect.Type
+ Field reflect.StructField
+}
+
+func (e *UnmarshalFieldError) Error() string {
+ return "json: cannot unmarshal object key " + strconv.Quote(e.Key) + " into unexported field " + e.Field.Name + " of type " + e.Type.String()
+}
+
+// An InvalidUnmarshalError describes an invalid argument passed to Unmarshal.
+// (The argument to Unmarshal must be a non-nil pointer.)
+type InvalidUnmarshalError struct {
+ Type reflect.Type
+}
+
+func (e *InvalidUnmarshalError) Error() string {
+ if e.Type == nil {
+ return "json: Unmarshal(nil)"
+ }
+
+ if e.Type.Kind() != reflect.Ptr {
+ return "json: Unmarshal(non-pointer " + e.Type.String() + ")"
+ }
+ return "json: Unmarshal(nil " + e.Type.String() + ")"
+}
+
+func (d *decodeState) unmarshal(v interface{}) (err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ if _, ok := r.(runtime.Error); ok {
+ panic(r)
+ }
+ err = r.(error)
+ }
+ }()
+
+ rv := reflect.ValueOf(v)
+ if rv.Kind() != reflect.Ptr || rv.IsNil() {
+ return &InvalidUnmarshalError{reflect.TypeOf(v)}
+ }
+
+ d.scan.reset()
+ // We decode rv not rv.Elem because the Unmarshaler interface
+ // test must be applied at the top level of the value.
+ d.value(rv)
+ return d.savedError
+}
+
+// A Number represents a JSON number literal.
+type Number string
+
+// String returns the literal text of the number.
+func (n Number) String() string { return string(n) }
+
+// Float64 returns the number as a float64.
+func (n Number) Float64() (float64, error) {
+ return strconv.ParseFloat(string(n), 64)
+}
+
+// Int64 returns the number as an int64.
+func (n Number) Int64() (int64, error) {
+ return strconv.ParseInt(string(n), 10, 64)
+}
+
+// isValidNumber reports whether s is a valid JSON number literal.
+func isValidNumber(s string) bool {
+ // This function implements the JSON numbers grammar.
+ // See https://tools.ietf.org/html/rfc7159#section-6
+ // and http://json.org/number.gif
+
+ if s == "" {
+ return false
+ }
+
+ // Optional -
+ if s[0] == '-' {
+ s = s[1:]
+ if s == "" {
+ return false
+ }
+ }
+
+ // Digits
+ switch {
+ default:
+ return false
+
+ case s[0] == '0':
+ s = s[1:]
+
+ case '1' <= s[0] && s[0] <= '9':
+ s = s[1:]
+ for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+ s = s[1:]
+ }
+ }
+
+ // . followed by 1 or more digits.
+ if len(s) >= 2 && s[0] == '.' && '0' <= s[1] && s[1] <= '9' {
+ s = s[2:]
+ for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+ s = s[1:]
+ }
+ }
+
+ // e or E followed by an optional - or + and
+ // 1 or more digits.
+ if len(s) >= 2 && (s[0] == 'e' || s[0] == 'E') {
+ s = s[1:]
+ if s[0] == '+' || s[0] == '-' {
+ s = s[1:]
+ if s == "" {
+ return false
+ }
+ }
+ for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+ s = s[1:]
+ }
+ }
+
+ // Make sure we are at the end.
+ return s == ""
+}
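The hand-rolled scan in `isValidNumber` implements the RFC 7159 number grammar byte by byte for speed. As an illustrative cross-check (not how the decoder works internally), the same grammar can be written as a regular expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// jsonNumber encodes the RFC 7159 number grammar: an optional minus,
// an integer part with no leading zeros (except a lone "0"), an
// optional fraction, and an optional exponent. Illustrative only;
// the decoder avoids regexp for performance.
var jsonNumber = regexp.MustCompile(`^-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?$`)

func main() {
	for _, s := range []string{"0", "-12.5e3", "01", "1.", ".5", "1e"} {
		fmt.Printf("%q -> %v\n", s, jsonNumber.MatchString(s))
	}
}
```

Note that `01`, `1.`, `.5`, and `1e` are all rejected, matching the early-return paths in `isValidNumber` above.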
+
+// decodeState represents the state while decoding a JSON value.
+type decodeState struct {
+ data []byte
+ off int // read offset in data
+ scan scanner
+ nextscan scanner // for calls to nextValue
+ savedError error
+ useNumber bool
+ ext Extension
+}
+
+// errPhase is used for errors that should not happen unless
+// there is a bug in the JSON decoder or something is editing
+// the data slice while the decoder executes.
+var errPhase = errors.New("JSON decoder out of sync - data changing underfoot?")
+
+func (d *decodeState) init(data []byte) *decodeState {
+ d.data = data
+ d.off = 0
+ d.savedError = nil
+ return d
+}
+
+// error aborts the decoding by panicking with err.
+func (d *decodeState) error(err error) {
+ panic(err)
+}
+
+// saveError saves the first err it is called with,
+// for reporting at the end of the unmarshal.
+func (d *decodeState) saveError(err error) {
+ if d.savedError == nil {
+ d.savedError = err
+ }
+}
+
+// next cuts off and returns the next full JSON value in d.data[d.off:].
+// The next value is known to be an object or array, not a literal.
+func (d *decodeState) next() []byte {
+ c := d.data[d.off]
+ item, rest, err := nextValue(d.data[d.off:], &d.nextscan)
+ if err != nil {
+ d.error(err)
+ }
+ d.off = len(d.data) - len(rest)
+
+ // Our scanner has seen the opening brace/bracket
+ // and thinks we're still in the middle of the object.
+	// Invent a closing brace/bracket to get it out.
+ if c == '{' {
+ d.scan.step(&d.scan, '}')
+ } else if c == '[' {
+ d.scan.step(&d.scan, ']')
+ } else {
+ // Was inside a function name. Get out of it.
+ d.scan.step(&d.scan, '(')
+ d.scan.step(&d.scan, ')')
+ }
+
+ return item
+}
+
+// scanWhile processes bytes in d.data[d.off:] until it
+// receives a scan code not equal to op.
+// It updates d.off and returns the new scan code.
+func (d *decodeState) scanWhile(op int) int {
+ var newOp int
+ for {
+ if d.off >= len(d.data) {
+ newOp = d.scan.eof()
+ d.off = len(d.data) + 1 // mark processed EOF with len+1
+ } else {
+ c := d.data[d.off]
+ d.off++
+ newOp = d.scan.step(&d.scan, c)
+ }
+ if newOp != op {
+ break
+ }
+ }
+ return newOp
+}
+
+// value decodes a JSON value from d.data[d.off:] into the value.
+// It updates d.off to point past the decoded value.
+func (d *decodeState) value(v reflect.Value) {
+ if !v.IsValid() {
+ _, rest, err := nextValue(d.data[d.off:], &d.nextscan)
+ if err != nil {
+ d.error(err)
+ }
+ d.off = len(d.data) - len(rest)
+
+ // d.scan thinks we're still at the beginning of the item.
+ // Feed in an empty string - the shortest, simplest value -
+ // so that it knows we got to the end of the value.
+ if d.scan.redo {
+ // rewind.
+ d.scan.redo = false
+ d.scan.step = stateBeginValue
+ }
+ d.scan.step(&d.scan, '"')
+ d.scan.step(&d.scan, '"')
+
+ n := len(d.scan.parseState)
+ if n > 0 && d.scan.parseState[n-1] == parseObjectKey {
+ // d.scan thinks we just read an object key; finish the object
+ d.scan.step(&d.scan, ':')
+ d.scan.step(&d.scan, '"')
+ d.scan.step(&d.scan, '"')
+ d.scan.step(&d.scan, '}')
+ }
+
+ return
+ }
+
+ switch op := d.scanWhile(scanSkipSpace); op {
+ default:
+ d.error(errPhase)
+
+ case scanBeginArray:
+ d.array(v)
+
+ case scanBeginObject:
+ d.object(v)
+
+ case scanBeginLiteral:
+ d.literal(v)
+
+ case scanBeginName:
+ d.name(v)
+ }
+}
+
+type unquotedValue struct{}
+
+// valueQuoted is like value but decodes a
+// quoted string literal or literal null into an interface value.
+// If it finds anything other than a quoted string literal or null,
+// valueQuoted returns unquotedValue{}.
+func (d *decodeState) valueQuoted() interface{} {
+ switch op := d.scanWhile(scanSkipSpace); op {
+ default:
+ d.error(errPhase)
+
+ case scanBeginArray:
+ d.array(reflect.Value{})
+
+ case scanBeginObject:
+ d.object(reflect.Value{})
+
+ case scanBeginName:
+ switch v := d.nameInterface().(type) {
+ case nil, string:
+ return v
+ }
+
+ case scanBeginLiteral:
+ switch v := d.literalInterface().(type) {
+ case nil, string:
+ return v
+ }
+ }
+ return unquotedValue{}
+}
+
+// indirect walks down v allocating pointers as needed,
+// until it gets to a non-pointer.
+// if it encounters an Unmarshaler, indirect stops and returns that.
+// if decodingNull is true, indirect stops at the last pointer so it can be set to nil.
+func (d *decodeState) indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) {
+ // If v is a named type and is addressable,
+ // start with its address, so that if the type has pointer methods,
+ // we find them.
+ if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() {
+ v = v.Addr()
+ }
+ for {
+ // Load value from interface, but only if the result will be
+ // usefully addressable.
+ if v.Kind() == reflect.Interface && !v.IsNil() {
+ e := v.Elem()
+ if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) {
+ v = e
+ continue
+ }
+ }
+
+ if v.Kind() != reflect.Ptr {
+ break
+ }
+
+ if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() {
+ break
+ }
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ if v.Type().NumMethod() > 0 {
+ if u, ok := v.Interface().(Unmarshaler); ok {
+ return u, nil, v
+ }
+ if u, ok := v.Interface().(encoding.TextUnmarshaler); ok {
+ return nil, u, v
+ }
+ }
+ v = v.Elem()
+ }
+ return nil, nil, v
+}
+
+// array consumes an array from d.data[d.off-1:], decoding into the value v.
+// the first byte of the array ('[') has been read already.
+func (d *decodeState) array(v reflect.Value) {
+ // Check for unmarshaler.
+ u, ut, pv := d.indirect(v, false)
+ if u != nil {
+ d.off--
+ err := u.UnmarshalJSON(d.next())
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+ if ut != nil {
+ d.saveError(&UnmarshalTypeError{"array", v.Type(), int64(d.off)})
+ d.off--
+ d.next()
+ return
+ }
+
+ v = pv
+
+ // Check type of target.
+ switch v.Kind() {
+ case reflect.Interface:
+ if v.NumMethod() == 0 {
+ // Decoding into nil interface? Switch to non-reflect code.
+ v.Set(reflect.ValueOf(d.arrayInterface()))
+ return
+ }
+ // Otherwise it's invalid.
+ fallthrough
+ default:
+ d.saveError(&UnmarshalTypeError{"array", v.Type(), int64(d.off)})
+ d.off--
+ d.next()
+ return
+ case reflect.Array:
+ case reflect.Slice:
+ break
+ }
+
+ i := 0
+ for {
+ // Look ahead for ] - can only happen on first iteration.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndArray {
+ break
+ }
+
+ // Back up so d.value can have the byte we just read.
+ d.off--
+ d.scan.undo(op)
+
+ // Get element of array, growing if necessary.
+ if v.Kind() == reflect.Slice {
+ // Grow slice if necessary
+ if i >= v.Cap() {
+ newcap := v.Cap() + v.Cap()/2
+ if newcap < 4 {
+ newcap = 4
+ }
+ newv := reflect.MakeSlice(v.Type(), v.Len(), newcap)
+ reflect.Copy(newv, v)
+ v.Set(newv)
+ }
+ if i >= v.Len() {
+ v.SetLen(i + 1)
+ }
+ }
+
+ if i < v.Len() {
+ // Decode into element.
+ d.value(v.Index(i))
+ } else {
+ // Ran out of fixed array: skip.
+ d.value(reflect.Value{})
+ }
+ i++
+
+ // Next token must be , or ].
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndArray {
+ break
+ }
+ if op != scanArrayValue {
+ d.error(errPhase)
+ }
+ }
+
+ if i < v.Len() {
+ if v.Kind() == reflect.Array {
+ // Array. Zero the rest.
+ z := reflect.Zero(v.Type().Elem())
+ for ; i < v.Len(); i++ {
+ v.Index(i).Set(z)
+ }
+ } else {
+ v.SetLen(i)
+ }
+ }
+ if i == 0 && v.Kind() == reflect.Slice {
+ v.Set(reflect.MakeSlice(v.Type(), 0, 0))
+ }
+}
+
+var nullLiteral = []byte("null")
+var textUnmarshalerType = reflect.TypeOf(new(encoding.TextUnmarshaler)).Elem()
+
+// object consumes an object from d.data[d.off-1:], decoding into the value v.
+// the first byte ('{') of the object has been read already.
+func (d *decodeState) object(v reflect.Value) {
+ // Check for unmarshaler.
+ u, ut, pv := d.indirect(v, false)
+ if d.storeKeyed(pv) {
+ return
+ }
+ if u != nil {
+ d.off--
+ err := u.UnmarshalJSON(d.next())
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+ if ut != nil {
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over { } in input
+ return
+ }
+ v = pv
+
+ // Decoding into nil interface? Switch to non-reflect code.
+ if v.Kind() == reflect.Interface && v.NumMethod() == 0 {
+ v.Set(reflect.ValueOf(d.objectInterface()))
+ return
+ }
+
+ // Check type of target:
+ // struct or
+ // map[string]T or map[encoding.TextUnmarshaler]T
+ switch v.Kind() {
+ case reflect.Map:
+ // Map key must either have string kind or be an encoding.TextUnmarshaler.
+ t := v.Type()
+ if t.Key().Kind() != reflect.String &&
+ !reflect.PtrTo(t.Key()).Implements(textUnmarshalerType) {
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over { } in input
+ return
+ }
+ if v.IsNil() {
+ v.Set(reflect.MakeMap(t))
+ }
+ case reflect.Struct:
+
+ default:
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over { } in input
+ return
+ }
+
+ var mapElem reflect.Value
+
+ empty := true
+ for {
+ // Read opening " of string key or closing }.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndObject {
+ if !empty && !d.ext.trailingCommas {
+ d.syntaxError("beginning of object key string")
+ }
+ break
+ }
+ empty = false
+ if op == scanBeginName {
+ if !d.ext.unquotedKeys {
+ d.syntaxError("beginning of object key string")
+ }
+ } else if op != scanBeginLiteral {
+ d.error(errPhase)
+ }
+ unquotedKey := op == scanBeginName
+
+ // Read key.
+ start := d.off - 1
+ op = d.scanWhile(scanContinue)
+ item := d.data[start : d.off-1]
+ var key []byte
+ if unquotedKey {
+ key = item
+ // TODO Fix code below to quote item when necessary.
+ } else {
+ var ok bool
+ key, ok = unquoteBytes(item)
+ if !ok {
+ d.error(errPhase)
+ }
+ }
+
+ // Figure out field corresponding to key.
+ var subv reflect.Value
+ destring := false // whether the value is wrapped in a string to be decoded first
+
+ if v.Kind() == reflect.Map {
+ elemType := v.Type().Elem()
+ if !mapElem.IsValid() {
+ mapElem = reflect.New(elemType).Elem()
+ } else {
+ mapElem.Set(reflect.Zero(elemType))
+ }
+ subv = mapElem
+ } else {
+ var f *field
+ fields := cachedTypeFields(v.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if bytes.Equal(ff.nameBytes, key) {
+ f = ff
+ break
+ }
+ if f == nil && ff.equalFold(ff.nameBytes, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ subv = v
+ destring = f.quoted
+ for _, i := range f.index {
+ if subv.Kind() == reflect.Ptr {
+ if subv.IsNil() {
+ subv.Set(reflect.New(subv.Type().Elem()))
+ }
+ subv = subv.Elem()
+ }
+ subv = subv.Field(i)
+ }
+ }
+ }
+
+ // Read : before value.
+ if op == scanSkipSpace {
+ op = d.scanWhile(scanSkipSpace)
+ }
+ if op != scanObjectKey {
+ d.error(errPhase)
+ }
+
+ // Read value.
+ if destring {
+ switch qv := d.valueQuoted().(type) {
+ case nil:
+ d.literalStore(nullLiteral, subv, false)
+ case string:
+ d.literalStore([]byte(qv), subv, true)
+ default:
+ d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type()))
+ }
+ } else {
+ d.value(subv)
+ }
+
+ // Write value back to map;
+ // if using struct, subv points into struct already.
+ if v.Kind() == reflect.Map {
+ kt := v.Type().Key()
+ var kv reflect.Value
+ switch {
+ case kt.Kind() == reflect.String:
+ kv = reflect.ValueOf(key).Convert(v.Type().Key())
+ case reflect.PtrTo(kt).Implements(textUnmarshalerType):
+ kv = reflect.New(v.Type().Key())
+ d.literalStore(item, kv, true)
+ kv = kv.Elem()
+ default:
+ panic("json: Unexpected key type") // should never occur
+ }
+ v.SetMapIndex(kv, subv)
+ }
+
+ // Next token must be , or }.
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndObject {
+ break
+ }
+ if op != scanObjectValue {
+ d.error(errPhase)
+ }
+ }
+}
+
+// isNull returns whether there's a null literal at the provided offset.
+func (d *decodeState) isNull(off int) bool {
+ if off+4 >= len(d.data) || d.data[off] != 'n' || d.data[off+1] != 'u' || d.data[off+2] != 'l' || d.data[off+3] != 'l' {
+ return false
+ }
+ d.nextscan.reset()
+ for i, c := range d.data[off:] {
+ if i > 4 {
+ return false
+ }
+ switch d.nextscan.step(&d.nextscan, c) {
+ case scanContinue, scanBeginName:
+ continue
+ }
+ break
+ }
+ return true
+}
+
+// name consumes a const or function from d.data[d.off-1:], decoding into the value v.
+// the first byte of the function name has been read already.
+func (d *decodeState) name(v reflect.Value) {
+ if d.isNull(d.off-1) {
+ d.literal(v)
+ return
+ }
+
+ // Check for unmarshaler.
+ u, ut, pv := d.indirect(v, false)
+ if d.storeKeyed(pv) {
+ return
+ }
+ if u != nil {
+ d.off--
+ err := u.UnmarshalJSON(d.next())
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+ if ut != nil {
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over function in input
+ return
+ }
+ v = pv
+
+ // Decoding into nil interface? Switch to non-reflect code.
+ if v.Kind() == reflect.Interface && v.NumMethod() == 0 {
+ out := d.nameInterface()
+ if out == nil {
+ v.Set(reflect.Zero(v.Type()))
+ } else {
+ v.Set(reflect.ValueOf(out))
+ }
+ return
+ }
+
+ nameStart := d.off - 1
+
+ op := d.scanWhile(scanContinue)
+
+ name := d.data[nameStart : d.off-1]
+ if op != scanParam {
+ // Back up so the byte just read is consumed next.
+ d.off--
+ d.scan.undo(op)
+ if l, ok := d.convertLiteral(name); ok {
+ d.storeValue(v, l)
+ return
+ }
+ d.error(&SyntaxError{fmt.Sprintf("json: unknown constant %q", name), int64(d.off)})
+ }
+
+ funcName := string(name)
+ funcData := d.ext.funcs[funcName]
+ if funcData.key == "" {
+ d.error(fmt.Errorf("json: unknown function %q", funcName))
+ }
+
+ // Check type of target:
+ // struct or
+ // map[string]T or map[encoding.TextUnmarshaler]T
+ switch v.Kind() {
+ case reflect.Map:
+ // Map key must either have string kind or be an encoding.TextUnmarshaler.
+ t := v.Type()
+ if t.Key().Kind() != reflect.String &&
+ !reflect.PtrTo(t.Key()).Implements(textUnmarshalerType) {
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over { } in input
+ return
+ }
+ if v.IsNil() {
+ v.Set(reflect.MakeMap(t))
+ }
+ case reflect.Struct:
+
+ default:
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ d.off--
+ d.next() // skip over { } in input
+ return
+ }
+
+ // TODO Fix case of func field as map.
+ //topv := v
+
+ // Figure out field corresponding to function.
+ key := []byte(funcData.key)
+ if v.Kind() == reflect.Map {
+ elemType := v.Type().Elem()
+ v = reflect.New(elemType).Elem()
+ } else {
+ var f *field
+ fields := cachedTypeFields(v.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if bytes.Equal(ff.nameBytes, key) {
+ f = ff
+ break
+ }
+ if f == nil && ff.equalFold(ff.nameBytes, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ for _, i := range f.index {
+ if v.Kind() == reflect.Ptr {
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ v = v.Elem()
+ }
+ v = v.Field(i)
+ }
+ if v.Kind() == reflect.Ptr {
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ v = v.Elem()
+ }
+ }
+ }
+
+ // Check for unmarshaler on func field itself.
+ u, ut, pv = d.indirect(v, false)
+ if u != nil {
+ d.off = nameStart
+ err := u.UnmarshalJSON(d.next())
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+
+ var mapElem reflect.Value
+
+ // Parse function arguments.
+ for i := 0; ; i++ {
+ // closing ) - can only happen on first iteration.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndParams {
+ break
+ }
+
+ // Back up so d.value can have the byte we just read.
+ d.off--
+ d.scan.undo(op)
+
+ if i >= len(funcData.args) {
+ d.error(fmt.Errorf("json: too many arguments for function %s", funcName))
+ }
+ key := []byte(funcData.args[i])
+
+ // Figure out field corresponding to key.
+ var subv reflect.Value
+ destring := false // whether the value is wrapped in a string to be decoded first
+
+ if v.Kind() == reflect.Map {
+ elemType := v.Type().Elem()
+ if !mapElem.IsValid() {
+ mapElem = reflect.New(elemType).Elem()
+ } else {
+ mapElem.Set(reflect.Zero(elemType))
+ }
+ subv = mapElem
+ } else {
+ var f *field
+ fields := cachedTypeFields(v.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if bytes.Equal(ff.nameBytes, key) {
+ f = ff
+ break
+ }
+ if f == nil && ff.equalFold(ff.nameBytes, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ subv = v
+ destring = f.quoted
+ for _, i := range f.index {
+ if subv.Kind() == reflect.Ptr {
+ if subv.IsNil() {
+ subv.Set(reflect.New(subv.Type().Elem()))
+ }
+ subv = subv.Elem()
+ }
+ subv = subv.Field(i)
+ }
+ }
+ }
+
+ // Read value.
+ if destring {
+ switch qv := d.valueQuoted().(type) {
+ case nil:
+ d.literalStore(nullLiteral, subv, false)
+ case string:
+ d.literalStore([]byte(qv), subv, true)
+ default:
+ d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type()))
+ }
+ } else {
+ d.value(subv)
+ }
+
+ // Write value back to map;
+ // if using struct, subv points into struct already.
+ if v.Kind() == reflect.Map {
+ kt := v.Type().Key()
+ var kv reflect.Value
+ switch {
+ case kt.Kind() == reflect.String:
+ kv = reflect.ValueOf(key).Convert(v.Type().Key())
+ case reflect.PtrTo(kt).Implements(textUnmarshalerType):
+ kv = reflect.New(v.Type().Key())
+ d.literalStore(key, kv, true)
+ kv = kv.Elem()
+ default:
+ panic("json: Unexpected key type") // should never occur
+ }
+ v.SetMapIndex(kv, subv)
+ }
+
+ // Next token must be , or ).
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndParams {
+ break
+ }
+ if op != scanParam {
+ d.error(errPhase)
+ }
+ }
+}
+
+// keyed attempts to decode an object or function using a keyed doc extension,
+// and returns the value and true on success, or nil and false otherwise.
+func (d *decodeState) keyed() (interface{}, bool) {
+ if len(d.ext.keyed) == 0 {
+ return nil, false
+ }
+
+ unquote := false
+
+ // Look-ahead first key to check for a keyed document extension.
+ d.nextscan.reset()
+ var start, end int
+ for i, c := range d.data[d.off-1:] {
+ switch op := d.nextscan.step(&d.nextscan, c); op {
+ case scanSkipSpace, scanContinue, scanBeginObject:
+ continue
+ case scanBeginLiteral, scanBeginName:
+ unquote = op == scanBeginLiteral
+ start = i
+ continue
+ }
+ end = i
+ break
+ }
+
+ name := d.data[d.off-1+start : d.off-1+end]
+
+ var key []byte
+ var ok bool
+ if unquote {
+ key, ok = unquoteBytes(name)
+ if !ok {
+ d.error(errPhase)
+ }
+ } else {
+ funcData, ok := d.ext.funcs[string(name)]
+ if !ok {
+ return nil, false
+ }
+ key = []byte(funcData.key)
+ }
+
+ decode, ok := d.ext.keyed[string(key)]
+ if !ok {
+ return nil, false
+ }
+
+ d.off--
+ out, err := decode(d.next())
+ if err != nil {
+ d.error(err)
+ }
+ return out, true
+}
+
+func (d *decodeState) storeKeyed(v reflect.Value) bool {
+ keyed, ok := d.keyed()
+ if !ok {
+ return false
+ }
+ d.storeValue(v, keyed)
+ return true
+}
+
+var (
+ trueBytes = []byte("true")
+ falseBytes = []byte("false")
+ nullBytes = []byte("null")
+)
+
+func (d *decodeState) storeValue(v reflect.Value, from interface{}) {
+ switch from {
+ case nil:
+ d.literalStore(nullBytes, v, false)
+ return
+ case true:
+ d.literalStore(trueBytes, v, false)
+ return
+ case false:
+ d.literalStore(falseBytes, v, false)
+ return
+ }
+ fromv := reflect.ValueOf(from)
+ for fromv.Kind() == reflect.Ptr && !fromv.IsNil() {
+ fromv = fromv.Elem()
+ }
+ fromt := fromv.Type()
+ for v.Kind() == reflect.Ptr && !v.IsNil() {
+ v = v.Elem()
+ }
+ vt := v.Type()
+ if fromt.AssignableTo(vt) {
+ v.Set(fromv)
+ } else if fromt.ConvertibleTo(vt) {
+ v.Set(fromv.Convert(vt))
+ } else {
+ d.saveError(&UnmarshalTypeError{"object", v.Type(), int64(d.off)})
+ }
+}
+
+func (d *decodeState) convertLiteral(name []byte) (interface{}, bool) {
+ if len(name) == 0 {
+ return nil, false
+ }
+ switch name[0] {
+ case 't':
+ if bytes.Equal(name, trueBytes) {
+ return true, true
+ }
+ case 'f':
+ if bytes.Equal(name, falseBytes) {
+ return false, true
+ }
+ case 'n':
+ if bytes.Equal(name, nullBytes) {
+ return nil, true
+ }
+ }
+ if l, ok := d.ext.consts[string(name)]; ok {
+ return l, true
+ }
+ return nil, false
+}
+
+// literal consumes a literal from d.data[d.off-1:], decoding into the value v.
+// The first byte of the literal has been read already
+// (that's how the caller knows it's a literal).
+func (d *decodeState) literal(v reflect.Value) {
+ // All bytes inside literal return scanContinue op code.
+ start := d.off - 1
+ op := d.scanWhile(scanContinue)
+
+ // Scan read one byte too far; back up.
+ d.off--
+ d.scan.undo(op)
+
+ d.literalStore(d.data[start:d.off], v, false)
+}
+
+// convertNumber converts the number literal s to a float64 or a Number
+// depending on the setting of d.useNumber.
+func (d *decodeState) convertNumber(s string) (interface{}, error) {
+ if d.useNumber {
+ return Number(s), nil
+ }
+ f, err := strconv.ParseFloat(s, 64)
+ if err != nil {
+ return nil, &UnmarshalTypeError{"number " + s, reflect.TypeOf(0.0), int64(d.off)}
+ }
+ return f, nil
+}
+
+var numberType = reflect.TypeOf(Number(""))
+
+// literalStore decodes a literal stored in item into v.
+//
+// fromQuoted indicates whether this literal came from unwrapping a
+// string from the ",string" struct tag option. this is used only to
+// produce more helpful error messages.
+func (d *decodeState) literalStore(item []byte, v reflect.Value, fromQuoted bool) {
+ // Check for unmarshaler.
+ if len(item) == 0 {
+		// Empty string given
+ d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ return
+ }
+ wantptr := item[0] == 'n' // null
+ u, ut, pv := d.indirect(v, wantptr)
+ if u != nil {
+ err := u.UnmarshalJSON(item)
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+ if ut != nil {
+ if item[0] != '"' {
+ if fromQuoted {
+ d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)})
+ }
+ return
+ }
+ s, ok := unquoteBytes(item)
+ if !ok {
+ if fromQuoted {
+ d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.error(errPhase)
+ }
+ }
+ err := ut.UnmarshalText(s)
+ if err != nil {
+ d.error(err)
+ }
+ return
+ }
+
+ v = pv
+
+ switch c := item[0]; c {
+ case 'n': // null
+ switch v.Kind() {
+ case reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice:
+ v.Set(reflect.Zero(v.Type()))
+ // otherwise, ignore null for primitives/string
+ }
+ case 't', 'f': // true, false
+ value := c == 't'
+ switch v.Kind() {
+ default:
+ if fromQuoted {
+ d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.saveError(&UnmarshalTypeError{"bool", v.Type(), int64(d.off)})
+ }
+ case reflect.Bool:
+ v.SetBool(value)
+ case reflect.Interface:
+ if v.NumMethod() == 0 {
+ v.Set(reflect.ValueOf(value))
+ } else {
+ d.saveError(&UnmarshalTypeError{"bool", v.Type(), int64(d.off)})
+ }
+ }
+
+ case '"': // string
+ s, ok := unquoteBytes(item)
+ if !ok {
+ if fromQuoted {
+ d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.error(errPhase)
+ }
+ }
+ switch v.Kind() {
+ default:
+ d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)})
+ case reflect.Slice:
+ if v.Type().Elem().Kind() != reflect.Uint8 {
+ d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)})
+ break
+ }
+ b := make([]byte, base64.StdEncoding.DecodedLen(len(s)))
+ n, err := base64.StdEncoding.Decode(b, s)
+ if err != nil {
+ d.saveError(err)
+ break
+ }
+ v.SetBytes(b[:n])
+ case reflect.String:
+ v.SetString(string(s))
+ case reflect.Interface:
+ if v.NumMethod() == 0 {
+ v.Set(reflect.ValueOf(string(s)))
+ } else {
+ d.saveError(&UnmarshalTypeError{"string", v.Type(), int64(d.off)})
+ }
+ }
+
+ default: // number
+ if c != '-' && (c < '0' || c > '9') {
+ if fromQuoted {
+ d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.error(errPhase)
+ }
+ }
+ s := string(item)
+ switch v.Kind() {
+ default:
+ if v.Kind() == reflect.String && v.Type() == numberType {
+ v.SetString(s)
+ if !isValidNumber(s) {
+ d.error(fmt.Errorf("json: invalid number literal, trying to unmarshal %q into Number", item))
+ }
+ break
+ }
+ if fromQuoted {
+ d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+ } else {
+ d.error(&UnmarshalTypeError{"number", v.Type(), int64(d.off)})
+ }
+ case reflect.Interface:
+ n, err := d.convertNumber(s)
+ if err != nil {
+ d.saveError(err)
+ break
+ }
+ if v.NumMethod() != 0 {
+ d.saveError(&UnmarshalTypeError{"number", v.Type(), int64(d.off)})
+ break
+ }
+ v.Set(reflect.ValueOf(n))
+
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ n, err := strconv.ParseInt(s, 10, 64)
+ if err != nil || v.OverflowInt(n) {
+ d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)})
+ break
+ }
+ v.SetInt(n)
+
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ n, err := strconv.ParseUint(s, 10, 64)
+ if err != nil || v.OverflowUint(n) {
+ d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)})
+ break
+ }
+ v.SetUint(n)
+
+ case reflect.Float32, reflect.Float64:
+ n, err := strconv.ParseFloat(s, v.Type().Bits())
+ if err != nil || v.OverflowFloat(n) {
+ d.saveError(&UnmarshalTypeError{"number " + s, v.Type(), int64(d.off)})
+ break
+ }
+ v.SetFloat(n)
+ }
+ }
+}
+
+// The xxxInterface routines build up a value to be stored
+// in an empty interface. They are not strictly necessary,
+// but they avoid the weight of reflection in this common case.
+
+// valueInterface is like value but returns interface{}
+func (d *decodeState) valueInterface() interface{} {
+ switch d.scanWhile(scanSkipSpace) {
+ default:
+ d.error(errPhase)
+ panic("unreachable")
+ case scanBeginArray:
+ return d.arrayInterface()
+ case scanBeginObject:
+ return d.objectInterface()
+ case scanBeginLiteral:
+ return d.literalInterface()
+ case scanBeginName:
+ return d.nameInterface()
+ }
+}
+
+func (d *decodeState) syntaxError(expected string) {
+ msg := fmt.Sprintf("invalid character '%c' looking for %s", d.data[d.off-1], expected)
+ d.error(&SyntaxError{msg, int64(d.off)})
+}
+
+// arrayInterface is like array but returns []interface{}.
+func (d *decodeState) arrayInterface() []interface{} {
+ var v = make([]interface{}, 0)
+ for {
+ // Look ahead for ] - can only happen on first iteration.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndArray {
+ if len(v) > 0 && !d.ext.trailingCommas {
+ d.syntaxError("beginning of value")
+ }
+ break
+ }
+
+ // Back up so d.value can have the byte we just read.
+ d.off--
+ d.scan.undo(op)
+
+ v = append(v, d.valueInterface())
+
+ // Next token must be , or ].
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndArray {
+ break
+ }
+ if op != scanArrayValue {
+ d.error(errPhase)
+ }
+ }
+ return v
+}
+
+// objectInterface is like object but returns map[string]interface{}.
+func (d *decodeState) objectInterface() interface{} {
+ v, ok := d.keyed()
+ if ok {
+ return v
+ }
+
+ m := make(map[string]interface{})
+ for {
+ // Read opening " of string key or closing }.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndObject {
+ if len(m) > 0 && !d.ext.trailingCommas {
+ d.syntaxError("beginning of object key string")
+ }
+ break
+ }
+ if op == scanBeginName {
+ if !d.ext.unquotedKeys {
+ d.syntaxError("beginning of object key string")
+ }
+ } else if op != scanBeginLiteral {
+ d.error(errPhase)
+ }
+ unquotedKey := op == scanBeginName
+
+ // Read string key.
+ start := d.off - 1
+ op = d.scanWhile(scanContinue)
+ item := d.data[start : d.off-1]
+ var key string
+ if unquotedKey {
+ key = string(item)
+ } else {
+ var ok bool
+ key, ok = unquote(item)
+ if !ok {
+ d.error(errPhase)
+ }
+ }
+
+ // Read : before value.
+ if op == scanSkipSpace {
+ op = d.scanWhile(scanSkipSpace)
+ }
+ if op != scanObjectKey {
+ d.error(errPhase)
+ }
+
+ // Read value.
+ m[key] = d.valueInterface()
+
+ // Next token must be , or }.
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndObject {
+ break
+ }
+ if op != scanObjectValue {
+ d.error(errPhase)
+ }
+ }
+ return m
+}
+
+// literalInterface is like literal but returns an interface value.
+func (d *decodeState) literalInterface() interface{} {
+ // All bytes inside literal return scanContinue op code.
+ start := d.off - 1
+ op := d.scanWhile(scanContinue)
+
+ // Scan read one byte too far; back up.
+ d.off--
+ d.scan.undo(op)
+ item := d.data[start:d.off]
+
+ switch c := item[0]; c {
+ case 'n': // null
+ return nil
+
+ case 't', 'f': // true, false
+ return c == 't'
+
+ case '"': // string
+ s, ok := unquote(item)
+ if !ok {
+ d.error(errPhase)
+ }
+ return s
+
+ default: // number
+ if c != '-' && (c < '0' || c > '9') {
+ d.error(errPhase)
+ }
+ n, err := d.convertNumber(string(item))
+ if err != nil {
+ d.saveError(err)
+ }
+ return n
+ }
+}
+
+// nameInterface is like name but returns map[string]interface{}.
+func (d *decodeState) nameInterface() interface{} {
+ v, ok := d.keyed()
+ if ok {
+ return v
+ }
+
+ nameStart := d.off - 1
+
+ op := d.scanWhile(scanContinue)
+
+ name := d.data[nameStart : d.off-1]
+ if op != scanParam {
+ // Back up so the byte just read is consumed next.
+ d.off--
+ d.scan.undo(op)
+ if l, ok := d.convertLiteral(name); ok {
+ return l
+ }
+ d.error(&SyntaxError{fmt.Sprintf("json: unknown constant %q", name), int64(d.off)})
+ }
+
+ funcName := string(name)
+ funcData := d.ext.funcs[funcName]
+ if funcData.key == "" {
+ d.error(fmt.Errorf("json: unknown function %q", funcName))
+ }
+
+ m := make(map[string]interface{})
+ for i := 0; ; i++ {
+ // Look ahead for ) - can only happen on first iteration.
+ op := d.scanWhile(scanSkipSpace)
+ if op == scanEndParams {
+ break
+ }
+
+ // Back up so d.value can have the byte we just read.
+ d.off--
+ d.scan.undo(op)
+
+ if i >= len(funcData.args) {
+ d.error(fmt.Errorf("json: too many arguments for function %s", funcName))
+ }
+ m[funcData.args[i]] = d.valueInterface()
+
+ // Next token must be , or ).
+ op = d.scanWhile(scanSkipSpace)
+ if op == scanEndParams {
+ break
+ }
+ if op != scanParam {
+ d.error(errPhase)
+ }
+ }
+ return map[string]interface{}{funcData.key: m}
+}
+
+// getu4 decodes \uXXXX from the beginning of s, returning the hex value,
+// or it returns -1.
+func getu4(s []byte) rune {
+ if len(s) < 6 || s[0] != '\\' || s[1] != 'u' {
+ return -1
+ }
+ r, err := strconv.ParseUint(string(s[2:6]), 16, 64)
+ if err != nil {
+ return -1
+ }
+ return rune(r)
+}
+
+// unquote converts a quoted JSON string literal s into an actual string t.
+// The rules are different than for Go, so cannot use strconv.Unquote.
+func unquote(s []byte) (t string, ok bool) {
+ s, ok = unquoteBytes(s)
+ t = string(s)
+ return
+}
+
+func unquoteBytes(s []byte) (t []byte, ok bool) {
+ if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' {
+ return
+ }
+ s = s[1 : len(s)-1]
+
+ // Check for unusual characters. If there are none,
+ // then no unquoting is needed, so return a slice of the
+ // original bytes.
+ r := 0
+ for r < len(s) {
+ c := s[r]
+ if c == '\\' || c == '"' || c < ' ' {
+ break
+ }
+ if c < utf8.RuneSelf {
+ r++
+ continue
+ }
+ rr, size := utf8.DecodeRune(s[r:])
+ if rr == utf8.RuneError && size == 1 {
+ break
+ }
+ r += size
+ }
+ if r == len(s) {
+ return s, true
+ }
+
+ b := make([]byte, len(s)+2*utf8.UTFMax)
+ w := copy(b, s[0:r])
+ for r < len(s) {
+ // Out of room? Can only happen if s is full of
+ // malformed UTF-8 and we're replacing each
+ // byte with RuneError.
+ if w >= len(b)-2*utf8.UTFMax {
+ nb := make([]byte, (len(b)+utf8.UTFMax)*2)
+ copy(nb, b[0:w])
+ b = nb
+ }
+ switch c := s[r]; {
+ case c == '\\':
+ r++
+ if r >= len(s) {
+ return
+ }
+ switch s[r] {
+ default:
+ return
+ case '"', '\\', '/', '\'':
+ b[w] = s[r]
+ r++
+ w++
+ case 'b':
+ b[w] = '\b'
+ r++
+ w++
+ case 'f':
+ b[w] = '\f'
+ r++
+ w++
+ case 'n':
+ b[w] = '\n'
+ r++
+ w++
+ case 'r':
+ b[w] = '\r'
+ r++
+ w++
+ case 't':
+ b[w] = '\t'
+ r++
+ w++
+ case 'u':
+ r--
+ rr := getu4(s[r:])
+ if rr < 0 {
+ return
+ }
+ r += 6
+ if utf16.IsSurrogate(rr) {
+ rr1 := getu4(s[r:])
+ if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar {
+ // A valid pair; consume.
+ r += 6
+ w += utf8.EncodeRune(b[w:], dec)
+ break
+ }
+ // Invalid surrogate; fall back to replacement rune.
+ rr = unicode.ReplacementChar
+ }
+ w += utf8.EncodeRune(b[w:], rr)
+ }
+
+ // Quote, control characters are invalid.
+ case c == '"', c < ' ':
+ return
+
+ // ASCII
+ case c < utf8.RuneSelf:
+ b[w] = c
+ r++
+ w++
+
+ // Coerce to well-formed UTF-8.
+ default:
+ rr, size := utf8.DecodeRune(s[r:])
+ r += size
+ w += utf8.EncodeRune(b[w:], rr)
+ }
+ }
+ return b[0:w], true
+}
diff --git a/vendor/gopkg.in/mgo.v2/internal/json/decode_test.go b/vendor/gopkg.in/mgo.v2/internal/json/decode_test.go
new file mode 100644
index 0000000..30e46ca
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/internal/json/decode_test.go
@@ -0,0 +1,1512 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import (
+ "bytes"
+ "encoding"
+ "errors"
+ "fmt"
+ "image"
+ "net"
+ "reflect"
+ "strings"
+ "testing"
+ "time"
+)
+
+type T struct {
+ X string
+ Y int
+ Z int `json:"-"`
+}
+
+type U struct {
+ Alphabet string `json:"alpha"`
+}
+
+type V struct {
+ F1 interface{}
+ F2 int32
+ F3 Number
+}
+
+// ifaceNumAsFloat64/ifaceNumAsNumber are used to test unmarshaling with and
+// without UseNumber
+var ifaceNumAsFloat64 = map[string]interface{}{
+ "k1": float64(1),
+ "k2": "s",
+ "k3": []interface{}{float64(1), float64(2.0), float64(3e-3)},
+ "k4": map[string]interface{}{"kk1": "s", "kk2": float64(2)},
+}
+
+var ifaceNumAsNumber = map[string]interface{}{
+ "k1": Number("1"),
+ "k2": "s",
+ "k3": []interface{}{Number("1"), Number("2.0"), Number("3e-3")},
+ "k4": map[string]interface{}{"kk1": "s", "kk2": Number("2")},
+}
+
+type tx struct {
+ x int
+}
+
+// A type that can unmarshal itself.
+
+type unmarshaler struct {
+ T bool
+}
+
+func (u *unmarshaler) UnmarshalJSON(b []byte) error {
+ *u = unmarshaler{true} // All we need to see that UnmarshalJSON is called.
+ return nil
+}
+
+type ustruct struct {
+ M unmarshaler
+}
+
+type unmarshalerText struct {
+ A, B string
+}
+
+// needed for re-marshaling tests
+func (u unmarshalerText) MarshalText() ([]byte, error) {
+ return []byte(u.A + ":" + u.B), nil
+}
+
+func (u *unmarshalerText) UnmarshalText(b []byte) error {
+ pos := bytes.Index(b, []byte(":"))
+ if pos == -1 {
+ return errors.New("missing separator")
+ }
+ u.A, u.B = string(b[:pos]), string(b[pos+1:])
+ return nil
+}
+
+var _ encoding.TextUnmarshaler = (*unmarshalerText)(nil)
+
+type ustructText struct {
+ M unmarshalerText
+}
+
+var (
+ um0, um1 unmarshaler // target2 of unmarshaling
+ ump = &um1
+ umtrue = unmarshaler{true}
+ umslice = []unmarshaler{{true}}
+ umslicep = new([]unmarshaler)
+ umstruct = ustruct{unmarshaler{true}}
+
+ um0T, um1T unmarshalerText // target2 of unmarshaling
+ umpType = &um1T
+ umtrueXY = unmarshalerText{"x", "y"}
+ umsliceXY = []unmarshalerText{{"x", "y"}}
+ umslicepType = new([]unmarshalerText)
+ umstructType = new(ustructText)
+ umstructXY = ustructText{unmarshalerText{"x", "y"}}
+
+ ummapType = map[unmarshalerText]bool{}
+ ummapXY = map[unmarshalerText]bool{unmarshalerText{"x", "y"}: true}
+)
+
+// Test data structures for anonymous fields.
+
+type Point struct {
+ Z int
+}
+
+type Top struct {
+ Level0 int
+ Embed0
+ *Embed0a
+ *Embed0b `json:"e,omitempty"` // treated as named
+ Embed0c `json:"-"` // ignored
+ Loop
+ Embed0p // has Point with X, Y, used
+ Embed0q // has Point with Z, used
+ embed // contains exported field
+}
+
+type Embed0 struct {
+ Level1a int // overridden by Embed0a's Level1a with json tag
+ Level1b int // used because Embed0a's Level1b is renamed
+ Level1c int // used because Embed0a's Level1c is ignored
+ Level1d int // annihilated by Embed0a's Level1d
+ Level1e int `json:"x"` // annihilated by Embed0a.Level1e
+}
+
+type Embed0a struct {
+ Level1a int `json:"Level1a,omitempty"`
+ Level1b int `json:"LEVEL1B,omitempty"`
+ Level1c int `json:"-"`
+ Level1d int // annihilated by Embed0's Level1d
+ Level1f int `json:"x"` // annihilated by Embed0's Level1e
+}
+
+type Embed0b Embed0
+
+type Embed0c Embed0
+
+type Embed0p struct {
+ image.Point
+}
+
+type Embed0q struct {
+ Point
+}
+
+type embed struct {
+ Q int
+}
+
+type Loop struct {
+ Loop1 int `json:",omitempty"`
+ Loop2 int `json:",omitempty"`
+ *Loop
+}
+
+// From reflect test:
+// The X in S6 and S7 annihilate, but they also block the X in S8.S9.
+type S5 struct {
+ S6
+ S7
+ S8
+}
+
+type S6 struct {
+ X int
+}
+
+type S7 S6
+
+type S8 struct {
+ S9
+}
+
+type S9 struct {
+ X int
+ Y int
+}
+
+// From reflect test:
+// The X in S11.S6 and S12.S6 annihilate, but they also block the X in S13.S8.S9.
+type S10 struct {
+ S11
+ S12
+ S13
+}
+
+type S11 struct {
+ S6
+}
+
+type S12 struct {
+ S6
+}
+
+type S13 struct {
+ S8
+}
+
+type unmarshalTest struct {
+ in string
+ ptr interface{}
+ out interface{}
+ err error
+ useNumber bool
+}
+
+type Ambig struct {
+ // Given "hello", the first match should win.
+ First int `json:"HELLO"`
+ Second int `json:"Hello"`
+}
+
+type XYZ struct {
+ X interface{}
+ Y interface{}
+ Z interface{}
+}
+
+func sliceAddr(x []int) *[]int { return &x }
+func mapAddr(x map[string]int) *map[string]int { return &x }
+
+var unmarshalTests = []unmarshalTest{
+ // basic types
+ {in: `true`, ptr: new(bool), out: true},
+ {in: `1`, ptr: new(int), out: 1},
+ {in: `1.2`, ptr: new(float64), out: 1.2},
+ {in: `-5`, ptr: new(int16), out: int16(-5)},
+ {in: `2`, ptr: new(Number), out: Number("2"), useNumber: true},
+ {in: `2`, ptr: new(Number), out: Number("2")},
+ {in: `2`, ptr: new(interface{}), out: float64(2.0)},
+ {in: `2`, ptr: new(interface{}), out: Number("2"), useNumber: true},
+ {in: `"a\u1234"`, ptr: new(string), out: "a\u1234"},
+ {in: `"http:\/\/"`, ptr: new(string), out: "http://"},
+ {in: `"g-clef: \uD834\uDD1E"`, ptr: new(string), out: "g-clef: \U0001D11E"},
+ {in: `"invalid: \uD834x\uDD1E"`, ptr: new(string), out: "invalid: \uFFFDx\uFFFD"},
+ {in: "null", ptr: new(interface{}), out: nil},
+ {in: `{"X": [1,2,3], "Y": 4}`, ptr: new(T), out: T{Y: 4}, err: &UnmarshalTypeError{"array", reflect.TypeOf(""), 7}},
+ {in: `{"x": 1}`, ptr: new(tx), out: tx{}},
+ {in: `{"F1":1,"F2":2,"F3":3}`, ptr: new(V), out: V{F1: float64(1), F2: int32(2), F3: Number("3")}},
+ {in: `{"F1":1,"F2":2,"F3":3}`, ptr: new(V), out: V{F1: Number("1"), F2: int32(2), F3: Number("3")}, useNumber: true},
+ {in: `{"k1":1,"k2":"s","k3":[1,2.0,3e-3],"k4":{"kk1":"s","kk2":2}}`, ptr: new(interface{}), out: ifaceNumAsFloat64},
+ {in: `{"k1":1,"k2":"s","k3":[1,2.0,3e-3],"k4":{"kk1":"s","kk2":2}}`, ptr: new(interface{}), out: ifaceNumAsNumber, useNumber: true},
+
+ // raw values with whitespace
+ {in: "\n true ", ptr: new(bool), out: true},
+ {in: "\t 1 ", ptr: new(int), out: 1},
+ {in: "\r 1.2 ", ptr: new(float64), out: 1.2},
+ {in: "\t -5 \n", ptr: new(int16), out: int16(-5)},
+ {in: "\t \"a\\u1234\" \n", ptr: new(string), out: "a\u1234"},
+
+ // Z has a "-" tag.
+ {in: `{"Y": 1, "Z": 2}`, ptr: new(T), out: T{Y: 1}},
+
+ {in: `{"alpha": "abc", "alphabet": "xyz"}`, ptr: new(U), out: U{Alphabet: "abc"}},
+ {in: `{"alpha": "abc"}`, ptr: new(U), out: U{Alphabet: "abc"}},
+ {in: `{"alphabet": "xyz"}`, ptr: new(U), out: U{}},
+
+ // syntax errors
+ {in: `{"X": "foo", "Y"}`, err: &SyntaxError{"invalid character '}' after object key", 17}},
+ {in: `[1, 2, 3+]`, err: &SyntaxError{"invalid character '+' after array element", 9}},
+ {in: `{"X":12x}`, err: &SyntaxError{"invalid character 'x' after object key:value pair", 8}, useNumber: true},
+
+ // raw value errors
+ {in: "\x01 42", err: &SyntaxError{"invalid character '\\x01' looking for beginning of value", 1}},
+ {in: " 42 \x01", err: &SyntaxError{"invalid character '\\x01' after top-level value", 5}},
+ {in: "\x01 true", err: &SyntaxError{"invalid character '\\x01' looking for beginning of value", 1}},
+ {in: " false \x01", err: &SyntaxError{"invalid character '\\x01' after top-level value", 8}},
+ {in: "\x01 1.2", err: &SyntaxError{"invalid character '\\x01' looking for beginning of value", 1}},
+ {in: " 3.4 \x01", err: &SyntaxError{"invalid character '\\x01' after top-level value", 6}},
+ {in: "\x01 \"string\"", err: &SyntaxError{"invalid character '\\x01' looking for beginning of value", 1}},
+ {in: " \"string\" \x01", err: &SyntaxError{"invalid character '\\x01' after top-level value", 11}},
+
+ // array tests
+ {in: `[1, 2, 3]`, ptr: new([3]int), out: [3]int{1, 2, 3}},
+ {in: `[1, 2, 3]`, ptr: new([1]int), out: [1]int{1}},
+ {in: `[1, 2, 3]`, ptr: new([5]int), out: [5]int{1, 2, 3, 0, 0}},
+
+ // empty array to interface test
+ {in: `[]`, ptr: new([]interface{}), out: []interface{}{}},
+ {in: `null`, ptr: new([]interface{}), out: []interface{}(nil)},
+ {in: `{"T":[]}`, ptr: new(map[string]interface{}), out: map[string]interface{}{"T": []interface{}{}}},
+ {in: `{"T":null}`, ptr: new(map[string]interface{}), out: map[string]interface{}{"T": interface{}(nil)}},
+
+ // composite tests
+ {in: allValueIndent, ptr: new(All), out: allValue},
+ {in: allValueCompact, ptr: new(All), out: allValue},
+ {in: allValueIndent, ptr: new(*All), out: &allValue},
+ {in: allValueCompact, ptr: new(*All), out: &allValue},
+ {in: pallValueIndent, ptr: new(All), out: pallValue},
+ {in: pallValueCompact, ptr: new(All), out: pallValue},
+ {in: pallValueIndent, ptr: new(*All), out: &pallValue},
+ {in: pallValueCompact, ptr: new(*All), out: &pallValue},
+
+ // unmarshal interface test
+ {in: `{"T":false}`, ptr: &um0, out: umtrue}, // use "false" so test will fail if custom unmarshaler is not called
+ {in: `{"T":false}`, ptr: &ump, out: &umtrue},
+ {in: `[{"T":false}]`, ptr: &umslice, out: umslice},
+ {in: `[{"T":false}]`, ptr: &umslicep, out: &umslice},
+ {in: `{"M":{"T":"x:y"}}`, ptr: &umstruct, out: umstruct},
+
+ // UnmarshalText interface test
+ {in: `"x:y"`, ptr: &um0T, out: umtrueXY},
+ {in: `"x:y"`, ptr: &umpType, out: &umtrueXY},
+ {in: `["x:y"]`, ptr: &umsliceXY, out: umsliceXY},
+ {in: `["x:y"]`, ptr: &umslicepType, out: &umsliceXY},
+ {in: `{"M":"x:y"}`, ptr: umstructType, out: umstructXY},
+
+ // Map keys can be encoding.TextUnmarshalers
+ {in: `{"x:y":true}`, ptr: &ummapType, out: ummapXY},
+ // If multiple values for the same key exists, only the most recent value is used.
+ {in: `{"x:y":false,"x:y":true}`, ptr: &ummapType, out: ummapXY},
+
+ // Overwriting of data.
+ // This is different from package xml, but it's what we've always done.
+ // Now documented and tested.
+ {in: `[2]`, ptr: sliceAddr([]int{1}), out: []int{2}},
+ {in: `{"key": 2}`, ptr: mapAddr(map[string]int{"old": 0, "key": 1}), out: map[string]int{"key": 2}},
+
+ {
+ in: `{
+ "Level0": 1,
+ "Level1b": 2,
+ "Level1c": 3,
+ "x": 4,
+ "Level1a": 5,
+ "LEVEL1B": 6,
+ "e": {
+ "Level1a": 8,
+ "Level1b": 9,
+ "Level1c": 10,
+ "Level1d": 11,
+ "x": 12
+ },
+ "Loop1": 13,
+ "Loop2": 14,
+ "X": 15,
+ "Y": 16,
+ "Z": 17,
+ "Q": 18
+ }`,
+ ptr: new(Top),
+ out: Top{
+ Level0: 1,
+ Embed0: Embed0{
+ Level1b: 2,
+ Level1c: 3,
+ },
+ Embed0a: &Embed0a{
+ Level1a: 5,
+ Level1b: 6,
+ },
+ Embed0b: &Embed0b{
+ Level1a: 8,
+ Level1b: 9,
+ Level1c: 10,
+ Level1d: 11,
+ Level1e: 12,
+ },
+ Loop: Loop{
+ Loop1: 13,
+ Loop2: 14,
+ },
+ Embed0p: Embed0p{
+ Point: image.Point{X: 15, Y: 16},
+ },
+ Embed0q: Embed0q{
+ Point: Point{Z: 17},
+ },
+ embed: embed{
+ Q: 18,
+ },
+ },
+ },
+ {
+ in: `{"hello": 1}`,
+ ptr: new(Ambig),
+ out: Ambig{First: 1},
+ },
+
+ {
+ in: `{"X": 1,"Y":2}`,
+ ptr: new(S5),
+ out: S5{S8: S8{S9: S9{Y: 2}}},
+ },
+ {
+ in: `{"X": 1,"Y":2}`,
+ ptr: new(S10),
+ out: S10{S13: S13{S8: S8{S9: S9{Y: 2}}}},
+ },
+
+ // invalid UTF-8 is coerced to valid UTF-8.
+ {
+ in: "\"hello\xffworld\"",
+ ptr: new(string),
+ out: "hello\ufffdworld",
+ },
+ {
+ in: "\"hello\xc2\xc2world\"",
+ ptr: new(string),
+ out: "hello\ufffd\ufffdworld",
+ },
+ {
+ in: "\"hello\xc2\xffworld\"",
+ ptr: new(string),
+ out: "hello\ufffd\ufffdworld",
+ },
+ {
+ in: "\"hello\\ud800world\"",
+ ptr: new(string),
+ out: "hello\ufffdworld",
+ },
+ {
+ in: "\"hello\\ud800\\ud800world\"",
+ ptr: new(string),
+ out: "hello\ufffd\ufffdworld",
+ },
+ {
+ in: "\"hello\\ud800\\ud800world\"",
+ ptr: new(string),
+ out: "hello\ufffd\ufffdworld",
+ },
+ {
+ in: "\"hello\xed\xa0\x80\xed\xb0\x80world\"",
+ ptr: new(string),
+ out: "hello\ufffd\ufffd\ufffd\ufffd\ufffd\ufffdworld",
+ },
+
+ // Used to be issue 8305, but time.Time implements encoding.TextUnmarshaler so this works now.
+ {
+ in: `{"2009-11-10T23:00:00Z": "hello world"}`,
+ ptr: &map[time.Time]string{},
+ out: map[time.Time]string{time.Date(2009, 11, 10, 23, 0, 0, 0, time.UTC): "hello world"},
+ },
+
+ // issue 8305
+ {
+ in: `{"2009-11-10T23:00:00Z": "hello world"}`,
+ ptr: &map[Point]string{},
+ err: &UnmarshalTypeError{"object", reflect.TypeOf(map[Point]string{}), 1},
+ },
+ {
+ in: `{"asdf": "hello world"}`,
+ ptr: &map[unmarshaler]string{},
+ err: &UnmarshalTypeError{"object", reflect.TypeOf(map[unmarshaler]string{}), 1},
+ },
+}
+
+func TestMarshal(t *testing.T) {
+ b, err := Marshal(allValue)
+ if err != nil {
+ t.Fatalf("Marshal allValue: %v", err)
+ }
+ if string(b) != allValueCompact {
+ t.Errorf("Marshal allValueCompact")
+ diff(t, b, []byte(allValueCompact))
+ return
+ }
+
+ b, err = Marshal(pallValue)
+ if err != nil {
+ t.Fatalf("Marshal pallValue: %v", err)
+ }
+ if string(b) != pallValueCompact {
+ t.Errorf("Marshal pallValueCompact")
+ diff(t, b, []byte(pallValueCompact))
+ return
+ }
+}
+
+var badUTF8 = []struct {
+ in, out string
+}{
+ {"hello\xffworld", `"hello\ufffdworld"`},
+ {"", `""`},
+ {"\xff", `"\ufffd"`},
+ {"\xff\xff", `"\ufffd\ufffd"`},
+ {"a\xffb", `"a\ufffdb"`},
+ {"\xe6\x97\xa5\xe6\x9c\xac\xff\xaa\x9e", `"日本\ufffd\ufffd\ufffd"`},
+}
+
+func TestMarshalBadUTF8(t *testing.T) {
+ for _, tt := range badUTF8 {
+ b, err := Marshal(tt.in)
+ if string(b) != tt.out || err != nil {
+ t.Errorf("Marshal(%q) = %#q, %v, want %#q, nil", tt.in, b, err, tt.out)
+ }
+ }
+}
+
+func TestMarshalNumberZeroVal(t *testing.T) {
+ var n Number
+ out, err := Marshal(n)
+ if err != nil {
+ t.Fatal(err)
+ }
+ outStr := string(out)
+ if outStr != "0" {
+ t.Fatalf("Invalid zero val for Number: %q", outStr)
+ }
+}
+
+func TestMarshalEmbeds(t *testing.T) {
+ top := &Top{
+ Level0: 1,
+ Embed0: Embed0{
+ Level1b: 2,
+ Level1c: 3,
+ },
+ Embed0a: &Embed0a{
+ Level1a: 5,
+ Level1b: 6,
+ },
+ Embed0b: &Embed0b{
+ Level1a: 8,
+ Level1b: 9,
+ Level1c: 10,
+ Level1d: 11,
+ Level1e: 12,
+ },
+ Loop: Loop{
+ Loop1: 13,
+ Loop2: 14,
+ },
+ Embed0p: Embed0p{
+ Point: image.Point{X: 15, Y: 16},
+ },
+ Embed0q: Embed0q{
+ Point: Point{Z: 17},
+ },
+ embed: embed{
+ Q: 18,
+ },
+ }
+ b, err := Marshal(top)
+ if err != nil {
+ t.Fatal(err)
+ }
+ want := "{\"Level0\":1,\"Level1b\":2,\"Level1c\":3,\"Level1a\":5,\"LEVEL1B\":6,\"e\":{\"Level1a\":8,\"Level1b\":9,\"Level1c\":10,\"Level1d\":11,\"x\":12},\"Loop1\":13,\"Loop2\":14,\"X\":15,\"Y\":16,\"Z\":17,\"Q\":18}"
+ if string(b) != want {
+ t.Errorf("Wrong marshal result.\n got: %q\nwant: %q", b, want)
+ }
+}
+
+func TestUnmarshal(t *testing.T) {
+ for i, tt := range unmarshalTests {
+ var scan scanner
+ in := []byte(tt.in)
+ if err := checkValid(in, &scan); err != nil {
+ if !reflect.DeepEqual(err, tt.err) {
+ t.Errorf("#%d: checkValid: %#v", i, err)
+ continue
+ }
+ }
+ if tt.ptr == nil {
+ continue
+ }
+
+ // v = new(right-type)
+ v := reflect.New(reflect.TypeOf(tt.ptr).Elem())
+ dec := NewDecoder(bytes.NewReader(in))
+ if tt.useNumber {
+ dec.UseNumber()
+ }
+ if err := dec.Decode(v.Interface()); !reflect.DeepEqual(err, tt.err) {
+ t.Errorf("#%d: %v, want %v", i, err, tt.err)
+ continue
+ } else if err != nil {
+ continue
+ }
+ if !reflect.DeepEqual(v.Elem().Interface(), tt.out) {
+ t.Errorf("#%d: mismatch\nhave: %#+v\nwant: %#+v", i, v.Elem().Interface(), tt.out)
+ data, _ := Marshal(v.Elem().Interface())
+ println(string(data))
+ data, _ = Marshal(tt.out)
+ println(string(data))
+ continue
+ }
+
+ // Check round trip.
+ if tt.err == nil {
+ enc, err := Marshal(v.Interface())
+ if err != nil {
+ t.Errorf("#%d: error re-marshaling: %v", i, err)
+ continue
+ }
+ vv := reflect.New(reflect.TypeOf(tt.ptr).Elem())
+ dec = NewDecoder(bytes.NewReader(enc))
+ if tt.useNumber {
+ dec.UseNumber()
+ }
+ if err := dec.Decode(vv.Interface()); err != nil {
+ t.Errorf("#%d: error re-unmarshaling %#q: %v", i, enc, err)
+ continue
+ }
+ if !reflect.DeepEqual(v.Elem().Interface(), vv.Elem().Interface()) {
+ t.Errorf("#%d: mismatch\nhave: %#+v\nwant: %#+v", i, v.Elem().Interface(), vv.Elem().Interface())
+ t.Errorf(" In: %q", strings.Map(noSpace, string(in)))
+ t.Errorf("Marshal: %q", strings.Map(noSpace, string(enc)))
+ continue
+ }
+ }
+ }
+}
+
+func TestUnmarshalMarshal(t *testing.T) {
+ initBig()
+ var v interface{}
+ if err := Unmarshal(jsonBig, &v); err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ b, err := Marshal(v)
+ if err != nil {
+ t.Fatalf("Marshal: %v", err)
+ }
+ if !bytes.Equal(jsonBig, b) {
+ t.Errorf("Marshal jsonBig")
+ diff(t, b, jsonBig)
+ return
+ }
+}
+
+var numberTests = []struct {
+ in string
+ i int64
+ intErr string
+ f float64
+ floatErr string
+}{
+ {in: "-1.23e1", intErr: "strconv.ParseInt: parsing \"-1.23e1\": invalid syntax", f: -1.23e1},
+ {in: "-12", i: -12, f: -12.0},
+ {in: "1e1000", intErr: "strconv.ParseInt: parsing \"1e1000\": invalid syntax", floatErr: "strconv.ParseFloat: parsing \"1e1000\": value out of range"},
+}
+
+// Independent of Decode, basic coverage of the accessors in Number
+func TestNumberAccessors(t *testing.T) {
+ for _, tt := range numberTests {
+ n := Number(tt.in)
+ if s := n.String(); s != tt.in {
+ t.Errorf("Number(%q).String() is %q", tt.in, s)
+ }
+ if i, err := n.Int64(); err == nil && tt.intErr == "" && i != tt.i {
+ t.Errorf("Number(%q).Int64() is %d", tt.in, i)
+ } else if (err == nil && tt.intErr != "") || (err != nil && err.Error() != tt.intErr) {
+ t.Errorf("Number(%q).Int64() wanted error %q but got: %v", tt.in, tt.intErr, err)
+ }
+ if f, err := n.Float64(); err == nil && tt.floatErr == "" && f != tt.f {
+ t.Errorf("Number(%q).Float64() is %g", tt.in, f)
+ } else if (err == nil && tt.floatErr != "") || (err != nil && err.Error() != tt.floatErr) {
+ t.Errorf("Number(%q).Float64() wanted error %q but got: %v", tt.in, tt.floatErr, err)
+ }
+ }
+}
+
+func TestLargeByteSlice(t *testing.T) {
+ s0 := make([]byte, 2000)
+ for i := range s0 {
+ s0[i] = byte(i)
+ }
+ b, err := Marshal(s0)
+ if err != nil {
+ t.Fatalf("Marshal: %v", err)
+ }
+ var s1 []byte
+ if err := Unmarshal(b, &s1); err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ if !bytes.Equal(s0, s1) {
+ t.Errorf("Marshal large byte slice")
+ diff(t, s0, s1)
+ }
+}
+
+type Xint struct {
+ X int
+}
+
+func TestUnmarshalInterface(t *testing.T) {
+ var xint Xint
+ var i interface{} = &xint
+ if err := Unmarshal([]byte(`{"X":1}`), &i); err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ if xint.X != 1 {
+ t.Fatalf("Did not write to xint")
+ }
+}
+
+func TestUnmarshalPtrPtr(t *testing.T) {
+ var xint Xint
+ pxint := &xint
+ if err := Unmarshal([]byte(`{"X":1}`), &pxint); err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ if xint.X != 1 {
+ t.Fatalf("Did not write to xint")
+ }
+}
+
+func TestEscape(t *testing.T) {
+	const input = `"foobar"<html>` + " [\u2028 \u2029]"
+ const expected = `"\"foobar\"\u003chtml\u003e [\u2028 \u2029]"`
+ b, err := Marshal(input)
+ if err != nil {
+ t.Fatalf("Marshal error: %v", err)
+ }
+ if s := string(b); s != expected {
+ t.Errorf("Encoding of [%s]:\n got [%s]\nwant [%s]", input, s, expected)
+ }
+}
+
+// WrongString is a struct that's misusing the ,string modifier.
+type WrongString struct {
+ Message string `json:"result,string"`
+}
+
+type wrongStringTest struct {
+ in, err string
+}
+
+var wrongStringTests = []wrongStringTest{
+ {`{"result":"x"}`, `json: invalid use of ,string struct tag, trying to unmarshal "x" into string`},
+ {`{"result":"foo"}`, `json: invalid use of ,string struct tag, trying to unmarshal "foo" into string`},
+ {`{"result":"123"}`, `json: invalid use of ,string struct tag, trying to unmarshal "123" into string`},
+ {`{"result":123}`, `json: invalid use of ,string struct tag, trying to unmarshal unquoted value into string`},
+}
+
+// If people misuse the ,string modifier, the error message should be
+// helpful, telling the user that they're doing it wrong.
+func TestErrorMessageFromMisusedString(t *testing.T) {
+ for n, tt := range wrongStringTests {
+ r := strings.NewReader(tt.in)
+ var s WrongString
+ err := NewDecoder(r).Decode(&s)
+ got := fmt.Sprintf("%v", err)
+ if got != tt.err {
+ t.Errorf("%d. got err = %q, want %q", n, got, tt.err)
+ }
+ }
+}
+
+func noSpace(c rune) rune {
+ if isSpace(byte(c)) { //only used for ascii
+ return -1
+ }
+ return c
+}
+
+type All struct {
+ Bool bool
+ Int int
+ Int8 int8
+ Int16 int16
+ Int32 int32
+ Int64 int64
+ Uint uint
+ Uint8 uint8
+ Uint16 uint16
+ Uint32 uint32
+ Uint64 uint64
+ Uintptr uintptr
+ Float32 float32
+ Float64 float64
+
+ Foo string `json:"bar"`
+ Foo2 string `json:"bar2,dummyopt"`
+
+ IntStr int64 `json:",string"`
+
+ PBool *bool
+ PInt *int
+ PInt8 *int8
+ PInt16 *int16
+ PInt32 *int32
+ PInt64 *int64
+ PUint *uint
+ PUint8 *uint8
+ PUint16 *uint16
+ PUint32 *uint32
+ PUint64 *uint64
+ PUintptr *uintptr
+ PFloat32 *float32
+ PFloat64 *float64
+
+ String string
+ PString *string
+
+ Map map[string]Small
+ MapP map[string]*Small
+ PMap *map[string]Small
+ PMapP *map[string]*Small
+
+ EmptyMap map[string]Small
+ NilMap map[string]Small
+
+ Slice []Small
+ SliceP []*Small
+ PSlice *[]Small
+ PSliceP *[]*Small
+
+ EmptySlice []Small
+ NilSlice []Small
+
+ StringSlice []string
+ ByteSlice []byte
+
+ Small Small
+ PSmall *Small
+ PPSmall **Small
+
+ Interface interface{}
+ PInterface *interface{}
+
+ unexported int
+}
+
+type Small struct {
+ Tag string
+}
+
+var allValue = All{
+ Bool: true,
+ Int: 2,
+ Int8: 3,
+ Int16: 4,
+ Int32: 5,
+ Int64: 6,
+ Uint: 7,
+ Uint8: 8,
+ Uint16: 9,
+ Uint32: 10,
+ Uint64: 11,
+ Uintptr: 12,
+ Float32: 14.1,
+ Float64: 15.1,
+ Foo: "foo",
+ Foo2: "foo2",
+ IntStr: 42,
+ String: "16",
+ Map: map[string]Small{
+ "17": {Tag: "tag17"},
+ "18": {Tag: "tag18"},
+ },
+ MapP: map[string]*Small{
+ "19": {Tag: "tag19"},
+ "20": nil,
+ },
+ EmptyMap: map[string]Small{},
+ Slice: []Small{{Tag: "tag20"}, {Tag: "tag21"}},
+ SliceP: []*Small{{Tag: "tag22"}, nil, {Tag: "tag23"}},
+ EmptySlice: []Small{},
+ StringSlice: []string{"str24", "str25", "str26"},
+ ByteSlice: []byte{27, 28, 29},
+ Small: Small{Tag: "tag30"},
+ PSmall: &Small{Tag: "tag31"},
+ Interface: 5.2,
+}
+
+var pallValue = All{
+ PBool: &allValue.Bool,
+ PInt: &allValue.Int,
+ PInt8: &allValue.Int8,
+ PInt16: &allValue.Int16,
+ PInt32: &allValue.Int32,
+ PInt64: &allValue.Int64,
+ PUint: &allValue.Uint,
+ PUint8: &allValue.Uint8,
+ PUint16: &allValue.Uint16,
+ PUint32: &allValue.Uint32,
+ PUint64: &allValue.Uint64,
+ PUintptr: &allValue.Uintptr,
+ PFloat32: &allValue.Float32,
+ PFloat64: &allValue.Float64,
+ PString: &allValue.String,
+ PMap: &allValue.Map,
+ PMapP: &allValue.MapP,
+ PSlice: &allValue.Slice,
+ PSliceP: &allValue.SliceP,
+ PPSmall: &allValue.PSmall,
+ PInterface: &allValue.Interface,
+}
+
+var allValueIndent = `{
+ "Bool": true,
+ "Int": 2,
+ "Int8": 3,
+ "Int16": 4,
+ "Int32": 5,
+ "Int64": 6,
+ "Uint": 7,
+ "Uint8": 8,
+ "Uint16": 9,
+ "Uint32": 10,
+ "Uint64": 11,
+ "Uintptr": 12,
+ "Float32": 14.1,
+ "Float64": 15.1,
+ "bar": "foo",
+ "bar2": "foo2",
+ "IntStr": "42",
+ "PBool": null,
+ "PInt": null,
+ "PInt8": null,
+ "PInt16": null,
+ "PInt32": null,
+ "PInt64": null,
+ "PUint": null,
+ "PUint8": null,
+ "PUint16": null,
+ "PUint32": null,
+ "PUint64": null,
+ "PUintptr": null,
+ "PFloat32": null,
+ "PFloat64": null,
+ "String": "16",
+ "PString": null,
+ "Map": {
+ "17": {
+ "Tag": "tag17"
+ },
+ "18": {
+ "Tag": "tag18"
+ }
+ },
+ "MapP": {
+ "19": {
+ "Tag": "tag19"
+ },
+ "20": null
+ },
+ "PMap": null,
+ "PMapP": null,
+ "EmptyMap": {},
+ "NilMap": null,
+ "Slice": [
+ {
+ "Tag": "tag20"
+ },
+ {
+ "Tag": "tag21"
+ }
+ ],
+ "SliceP": [
+ {
+ "Tag": "tag22"
+ },
+ null,
+ {
+ "Tag": "tag23"
+ }
+ ],
+ "PSlice": null,
+ "PSliceP": null,
+ "EmptySlice": [],
+ "NilSlice": null,
+ "StringSlice": [
+ "str24",
+ "str25",
+ "str26"
+ ],
+ "ByteSlice": "Gxwd",
+ "Small": {
+ "Tag": "tag30"
+ },
+ "PSmall": {
+ "Tag": "tag31"
+ },
+ "PPSmall": null,
+ "Interface": 5.2,
+ "PInterface": null
+}`
+
+var allValueCompact = strings.Map(noSpace, allValueIndent)
+
+var pallValueIndent = `{
+ "Bool": false,
+ "Int": 0,
+ "Int8": 0,
+ "Int16": 0,
+ "Int32": 0,
+ "Int64": 0,
+ "Uint": 0,
+ "Uint8": 0,
+ "Uint16": 0,
+ "Uint32": 0,
+ "Uint64": 0,
+ "Uintptr": 0,
+ "Float32": 0,
+ "Float64": 0,
+ "bar": "",
+ "bar2": "",
+ "IntStr": "0",
+ "PBool": true,
+ "PInt": 2,
+ "PInt8": 3,
+ "PInt16": 4,
+ "PInt32": 5,
+ "PInt64": 6,
+ "PUint": 7,
+ "PUint8": 8,
+ "PUint16": 9,
+ "PUint32": 10,
+ "PUint64": 11,
+ "PUintptr": 12,
+ "PFloat32": 14.1,
+ "PFloat64": 15.1,
+ "String": "",
+ "PString": "16",
+ "Map": null,
+ "MapP": null,
+ "PMap": {
+ "17": {
+ "Tag": "tag17"
+ },
+ "18": {
+ "Tag": "tag18"
+ }
+ },
+ "PMapP": {
+ "19": {
+ "Tag": "tag19"
+ },
+ "20": null
+ },
+ "EmptyMap": null,
+ "NilMap": null,
+ "Slice": null,
+ "SliceP": null,
+ "PSlice": [
+ {
+ "Tag": "tag20"
+ },
+ {
+ "Tag": "tag21"
+ }
+ ],
+ "PSliceP": [
+ {
+ "Tag": "tag22"
+ },
+ null,
+ {
+ "Tag": "tag23"
+ }
+ ],
+ "EmptySlice": null,
+ "NilSlice": null,
+ "StringSlice": null,
+ "ByteSlice": null,
+ "Small": {
+ "Tag": ""
+ },
+ "PSmall": null,
+ "PPSmall": {
+ "Tag": "tag31"
+ },
+ "Interface": null,
+ "PInterface": 5.2
+}`
+
+var pallValueCompact = strings.Map(noSpace, pallValueIndent)
+
+func TestRefUnmarshal(t *testing.T) {
+ type S struct {
+ // Ref is defined in encode_test.go.
+ R0 Ref
+ R1 *Ref
+ R2 RefText
+ R3 *RefText
+ }
+ want := S{
+ R0: 12,
+ R1: new(Ref),
+ R2: 13,
+ R3: new(RefText),
+ }
+ *want.R1 = 12
+ *want.R3 = 13
+
+ var got S
+ if err := Unmarshal([]byte(`{"R0":"ref","R1":"ref","R2":"ref","R3":"ref"}`), &got); err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ if !reflect.DeepEqual(got, want) {
+ t.Errorf("got %+v, want %+v", got, want)
+ }
+}
+
+// Test that the empty string doesn't panic decoding when ,string is specified
+// Issue 3450
+func TestEmptyString(t *testing.T) {
+ type T2 struct {
+ Number1 int `json:",string"`
+ Number2 int `json:",string"`
+ }
+ data := `{"Number1":"1", "Number2":""}`
+ dec := NewDecoder(strings.NewReader(data))
+ var t2 T2
+ err := dec.Decode(&t2)
+ if err == nil {
+ t.Fatal("Decode: did not return error")
+ }
+ if t2.Number1 != 1 {
+ t.Fatal("Decode: did not set Number1")
+ }
+}
+
+// Test that a null for ,string is not replaced with the previous quoted string (issue 7046).
+// It should also not be an error (issue 2540, issue 8587).
+func TestNullString(t *testing.T) {
+ type T struct {
+ A int `json:",string"`
+ B int `json:",string"`
+ C *int `json:",string"`
+ }
+ data := []byte(`{"A": "1", "B": null, "C": null}`)
+ var s T
+ s.B = 1
+ s.C = new(int)
+ *s.C = 2
+ err := Unmarshal(data, &s)
+ if err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+ if s.B != 1 || s.C != nil {
+ t.Fatalf("after Unmarshal, s.B=%d, s.C=%p, want 1, nil", s.B, s.C)
+ }
+}
+
+func intp(x int) *int {
+ p := new(int)
+ *p = x
+ return p
+}
+
+func intpp(x *int) **int {
+ pp := new(*int)
+ *pp = x
+ return pp
+}
+
+var interfaceSetTests = []struct {
+ pre interface{}
+ json string
+ post interface{}
+}{
+ {"foo", `"bar"`, "bar"},
+ {"foo", `2`, 2.0},
+ {"foo", `true`, true},
+ {"foo", `null`, nil},
+
+ {nil, `null`, nil},
+ {new(int), `null`, nil},
+ {(*int)(nil), `null`, nil},
+ {new(*int), `null`, new(*int)},
+ {(**int)(nil), `null`, nil},
+ {intp(1), `null`, nil},
+ {intpp(nil), `null`, intpp(nil)},
+ {intpp(intp(1)), `null`, intpp(nil)},
+}
+
+func TestInterfaceSet(t *testing.T) {
+ for _, tt := range interfaceSetTests {
+ b := struct{ X interface{} }{tt.pre}
+ blob := `{"X":` + tt.json + `}`
+ if err := Unmarshal([]byte(blob), &b); err != nil {
+ t.Errorf("Unmarshal %#q: %v", blob, err)
+ continue
+ }
+ if !reflect.DeepEqual(b.X, tt.post) {
+ t.Errorf("Unmarshal %#q into %#v: X=%#v, want %#v", blob, tt.pre, b.X, tt.post)
+ }
+ }
+}
+
+// JSON null values should be ignored for primitives and string values instead of resulting in an error.
+// Issue 2540
+func TestUnmarshalNulls(t *testing.T) {
+ jsonData := []byte(`{
+ "Bool" : null,
+ "Int" : null,
+ "Int8" : null,
+ "Int16" : null,
+ "Int32" : null,
+ "Int64" : null,
+ "Uint" : null,
+ "Uint8" : null,
+ "Uint16" : null,
+ "Uint32" : null,
+ "Uint64" : null,
+ "Float32" : null,
+ "Float64" : null,
+ "String" : null}`)
+
+ nulls := All{
+ Bool: true,
+ Int: 2,
+ Int8: 3,
+ Int16: 4,
+ Int32: 5,
+ Int64: 6,
+ Uint: 7,
+ Uint8: 8,
+ Uint16: 9,
+ Uint32: 10,
+ Uint64: 11,
+ Float32: 12.1,
+ Float64: 13.1,
+ String: "14"}
+
+ err := Unmarshal(jsonData, &nulls)
+ if err != nil {
+ t.Errorf("Unmarshal of null values failed: %v", err)
+ }
+ if !nulls.Bool || nulls.Int != 2 || nulls.Int8 != 3 || nulls.Int16 != 4 || nulls.Int32 != 5 || nulls.Int64 != 6 ||
+ nulls.Uint != 7 || nulls.Uint8 != 8 || nulls.Uint16 != 9 || nulls.Uint32 != 10 || nulls.Uint64 != 11 ||
+ nulls.Float32 != 12.1 || nulls.Float64 != 13.1 || nulls.String != "14" {
+
+ t.Errorf("Unmarshal of null values affected primitives")
+ }
+}
+
+func TestStringKind(t *testing.T) {
+ type stringKind string
+
+ var m1, m2 map[stringKind]int
+ m1 = map[stringKind]int{
+ "foo": 42,
+ }
+
+ data, err := Marshal(m1)
+ if err != nil {
+ t.Errorf("Unexpected error marshaling: %v", err)
+ }
+
+ err = Unmarshal(data, &m2)
+ if err != nil {
+ t.Errorf("Unexpected error unmarshaling: %v", err)
+ }
+
+ if !reflect.DeepEqual(m1, m2) {
+ t.Error("Items should be equal after encoding and then decoding")
+ }
+}
+
+// Custom types with []byte as underlying type could not be marshalled
+// and then unmarshalled.
+// Issue 8962.
+func TestByteKind(t *testing.T) {
+ type byteKind []byte
+
+ a := byteKind("hello")
+
+ data, err := Marshal(a)
+ if err != nil {
+ t.Error(err)
+ }
+ var b byteKind
+ err = Unmarshal(data, &b)
+ if err != nil {
+ t.Fatal(err)
+ }
+ if !reflect.DeepEqual(a, b) {
+ t.Errorf("expected %v == %v", a, b)
+ }
+}
+
+// The fix for issue 8962 introduced a regression.
+// Issue 12921.
+func TestSliceOfCustomByte(t *testing.T) {
+ type Uint8 uint8
+
+ a := []Uint8("hello")
+
+ data, err := Marshal(a)
+ if err != nil {
+ t.Fatal(err)
+ }
+ var b []Uint8
+ err = Unmarshal(data, &b)
+ if err != nil {
+ t.Fatal(err)
+ }
+ if !reflect.DeepEqual(a, b) {
+ t.Fatalf("expected %v == %v", a, b)
+ }
+}
+
+var decodeTypeErrorTests = []struct {
+ dest interface{}
+ src string
+}{
+ {new(string), `{"user": "name"}`}, // issue 4628.
+ {new(error), `{}`}, // issue 4222
+ {new(error), `[]`},
+ {new(error), `""`},
+ {new(error), `123`},
+ {new(error), `true`},
+}
+
+func TestUnmarshalTypeError(t *testing.T) {
+ for _, item := range decodeTypeErrorTests {
+ err := Unmarshal([]byte(item.src), item.dest)
+ if _, ok := err.(*UnmarshalTypeError); !ok {
+ t.Errorf("expected type error for Unmarshal(%q, type %T): got %T",
+ item.src, item.dest, err)
+ }
+ }
+}
+
+var unmarshalSyntaxTests = []string{
+ "tru",
+ "fals",
+ "nul",
+ "123e",
+ `"hello`,
+ `[1,2,3`,
+ `{"key":1`,
+ `{"key":1,`,
+}
+
+func TestUnmarshalSyntax(t *testing.T) {
+ var x interface{}
+ for _, src := range unmarshalSyntaxTests {
+ err := Unmarshal([]byte(src), &x)
+ if _, ok := err.(*SyntaxError); !ok {
+ t.Errorf("expected syntax error for Unmarshal(%q): got %T", src, err)
+ }
+ }
+}
+
+// Test handling of unexported fields that should be ignored.
+// Issue 4660
+type unexportedFields struct {
+ Name string
+ m map[string]interface{} `json:"-"`
+ m2 map[string]interface{} `json:"abcd"`
+}
+
+func TestUnmarshalUnexported(t *testing.T) {
+ input := `{"Name": "Bob", "m": {"x": 123}, "m2": {"y": 456}, "abcd": {"z": 789}}`
+ want := &unexportedFields{Name: "Bob"}
+
+ out := &unexportedFields{}
+ err := Unmarshal([]byte(input), out)
+ if err != nil {
+ t.Errorf("got error %v, expected nil", err)
+ }
+ if !reflect.DeepEqual(out, want) {
+ t.Errorf("got %q, want %q", out, want)
+ }
+}
+
+// Time3339 is a time.Time which encodes to and from JSON
+// as an RFC 3339 time in UTC.
+type Time3339 time.Time
+
+func (t *Time3339) UnmarshalJSON(b []byte) error {
+ if len(b) < 2 || b[0] != '"' || b[len(b)-1] != '"' {
+ return fmt.Errorf("types: failed to unmarshal non-string value %q as an RFC 3339 time", b)
+ }
+ tm, err := time.Parse(time.RFC3339, string(b[1:len(b)-1]))
+ if err != nil {
+ return err
+ }
+ *t = Time3339(tm)
+ return nil
+}
+
+func TestUnmarshalJSONLiteralError(t *testing.T) {
+ var t3 Time3339
+ err := Unmarshal([]byte(`"0000-00-00T00:00:00Z"`), &t3)
+ if err == nil {
+ t.Fatalf("expected error; got time %v", time.Time(t3))
+ }
+ if !strings.Contains(err.Error(), "range") {
+ t.Errorf("got err = %v; want out of range error", err)
+ }
+}
+
+// Test that extra object elements in an array do not result in a
+// "data changing underfoot" error.
+// Issue 3717
+func TestSkipArrayObjects(t *testing.T) {
+ json := `[{}]`
+ var dest [0]interface{}
+
+ err := Unmarshal([]byte(json), &dest)
+ if err != nil {
+ t.Errorf("got error %q, want nil", err)
+ }
+}
+
+// Test semantics of pre-filled struct fields and pre-filled map fields.
+// Issue 4900.
+func TestPrefilled(t *testing.T) {
+ ptrToMap := func(m map[string]interface{}) *map[string]interface{} { return &m }
+
+ // Values here change, cannot reuse table across runs.
+ var prefillTests = []struct {
+ in string
+ ptr interface{}
+ out interface{}
+ }{
+ {
+ in: `{"X": 1, "Y": 2}`,
+ ptr: &XYZ{X: float32(3), Y: int16(4), Z: 1.5},
+ out: &XYZ{X: float64(1), Y: float64(2), Z: 1.5},
+ },
+ {
+ in: `{"X": 1, "Y": 2}`,
+ ptr: ptrToMap(map[string]interface{}{"X": float32(3), "Y": int16(4), "Z": 1.5}),
+ out: ptrToMap(map[string]interface{}{"X": float64(1), "Y": float64(2), "Z": 1.5}),
+ },
+ }
+
+ for _, tt := range prefillTests {
+ ptrstr := fmt.Sprintf("%v", tt.ptr)
+ err := Unmarshal([]byte(tt.in), tt.ptr) // tt.ptr edited here
+ if err != nil {
+ t.Errorf("Unmarshal: %v", err)
+ }
+ if !reflect.DeepEqual(tt.ptr, tt.out) {
+ t.Errorf("Unmarshal(%#q, %s): have %v, want %v", tt.in, ptrstr, tt.ptr, tt.out)
+ }
+ }
+}
+
+var invalidUnmarshalTests = []struct {
+ v interface{}
+ want string
+}{
+ {nil, "json: Unmarshal(nil)"},
+ {struct{}{}, "json: Unmarshal(non-pointer struct {})"},
+ {(*int)(nil), "json: Unmarshal(nil *int)"},
+}
+
+func TestInvalidUnmarshal(t *testing.T) {
+ buf := []byte(`{"a":"1"}`)
+ for _, tt := range invalidUnmarshalTests {
+ err := Unmarshal(buf, tt.v)
+ if err == nil {
+ t.Errorf("Unmarshal expecting error, got nil")
+ continue
+ }
+ if got := err.Error(); got != tt.want {
+ t.Errorf("Unmarshal = %q; want %q", got, tt.want)
+ }
+ }
+}
+
+var invalidUnmarshalTextTests = []struct {
+ v interface{}
+ want string
+}{
+ {nil, "json: Unmarshal(nil)"},
+ {struct{}{}, "json: Unmarshal(non-pointer struct {})"},
+ {(*int)(nil), "json: Unmarshal(nil *int)"},
+ {new(net.IP), "json: cannot unmarshal string into Go value of type *net.IP"},
+}
+
+func TestInvalidUnmarshalText(t *testing.T) {
+ buf := []byte(`123`)
+ for _, tt := range invalidUnmarshalTextTests {
+ err := Unmarshal(buf, tt.v)
+ if err == nil {
+ t.Errorf("Unmarshal expecting error, got nil")
+ continue
+ }
+ if got := err.Error(); got != tt.want {
+ t.Errorf("Unmarshal = %q; want %q", got, tt.want)
+ }
+ }
+}
+
+// Test that string option is ignored for invalid types.
+// Issue 9812.
+func TestInvalidStringOption(t *testing.T) {
+ num := 0
+ item := struct {
+ T time.Time `json:",string"`
+ M map[string]string `json:",string"`
+ S []string `json:",string"`
+ A [1]string `json:",string"`
+ I interface{} `json:",string"`
+ P *int `json:",string"`
+ }{M: make(map[string]string), S: make([]string, 0), I: num, P: &num}
+
+ data, err := Marshal(item)
+ if err != nil {
+ t.Fatalf("Marshal: %v", err)
+ }
+
+ err = Unmarshal(data, &item)
+ if err != nil {
+ t.Fatalf("Unmarshal: %v", err)
+ }
+}
diff --git a/vendor/gopkg.in/mgo.v2/internal/json/encode.go b/vendor/gopkg.in/mgo.v2/internal/json/encode.go
new file mode 100644
index 0000000..67a0f00
--- /dev/null
+++ b/vendor/gopkg.in/mgo.v2/internal/json/encode.go
@@ -0,0 +1,1256 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package json implements encoding and decoding of JSON as defined in
+// RFC 4627. The mapping between JSON and Go values is described
+// in the documentation for the Marshal and Unmarshal functions.
+//
+// See "JSON and Go" for an introduction to this package:
+// https://golang.org/doc/articles/json_and_go.html
+package json
+
+import (
+ "bytes"
+ "encoding"
+ "encoding/base64"
+ "fmt"
+ "math"
+ "reflect"
+ "runtime"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+ "unicode"
+ "unicode/utf8"
+)
+
+// Marshal returns the JSON encoding of v.
+//
+// Marshal traverses the value v recursively.
+// If an encountered value implements the Marshaler interface
+// and is not a nil pointer, Marshal calls its MarshalJSON method
+// to produce JSON. If no MarshalJSON method is present but the
+// value implements encoding.TextMarshaler instead, Marshal calls
+// its MarshalText method.
+// The nil pointer exception is not strictly necessary
+// but mimics a similar, necessary exception in the behavior of
+// UnmarshalJSON.
+//
+// Otherwise, Marshal uses the following type-dependent default encodings:
+//
+// Boolean values encode as JSON booleans.
+//
+// Floating point, integer, and Number values encode as JSON numbers.
+//
+// String values encode as JSON strings coerced to valid UTF-8,
+// replacing invalid bytes with the Unicode replacement rune.
+// The angle brackets "<" and ">" are escaped to "\u003c" and "\u003e"
+// to keep some browsers from misinterpreting JSON output as HTML.
+// Ampersand "&" is also escaped to "\u0026" for the same reason.
+// This escaping can be disabled using an Encoder with DisableHTMLEscaping.
+//
+// Array and slice values encode as JSON arrays, except that
+// []byte encodes as a base64-encoded string, and a nil slice
+// encodes as the null JSON value.
+//
+// Struct values encode as JSON objects. Each exported struct field
+// becomes a member of the object unless
+// - the field's tag is "-", or
+// - the field is empty and its tag specifies the "omitempty" option.
+// The empty values are false, 0, any
+// nil pointer or interface value, and any array, slice, map, or string of
+// length zero. The object's default key string is the struct field name
+// but can be specified in the struct field's tag value. The "json" key in
+// the struct field's tag value is the key name, followed by an optional comma
+// and options. Examples:
+//
+// // Field is ignored by this package.
+// Field int `json:"-"`
+//
+// // Field appears in JSON as key "myName".
+// Field int `json:"myName"`
+//
+// // Field appears in JSON as key "myName" and
+// // the field is omitted from the object if its value is empty,
+// // as defined above.
+// Field int `json:"myName,omitempty"`
+//
+// // Field appears in JSON as key "Field" (the default), but
+// // the field is skipped if empty.
+// // Note the leading comma.
+// Field int `json:",omitempty"`
+//
+// The "string" option signals that a field is stored as JSON inside a
+// JSON-encoded string. It applies only to fields of string, floating point,
+// integer, or boolean types. This extra level of encoding is sometimes used
+// when communicating with JavaScript programs:
+//
+// Int64String int64 `json:",string"`
+//
+// The key name will be used if it's a non-empty string consisting of
+// only Unicode letters, digits, dollar signs, percent signs, hyphens,
+// underscores and slashes.
+//
+// Anonymous struct fields are usually marshaled as if their inner exported fields
+// were fields in the outer struct, subject to the usual Go visibility rules amended
+// as described in the next paragraph.
+// An anonymous struct field with a name given in its JSON tag is treated as
+// having that name, rather than being anonymous.
+// An anonymous struct field of interface type is treated the same as having
+// that type as its name, rather than being anonymous.
+//
+// The Go visibility rules for struct fields are amended for JSON when
+// deciding which field to marshal or unmarshal. If there are
+// multiple fields at the same level, and that level is the least
+// nested (and would therefore be the nesting level selected by the
+// usual Go rules), the following extra rules apply:
+//
+// 1) Of those fields, if any are JSON-tagged, only tagged fields are considered,
+// even if there are multiple untagged fields that would otherwise conflict.
+// 2) If there is exactly one field (tagged or not according to the first rule), that is selected.
+// 3) Otherwise there are multiple fields, and all are ignored; no error occurs.
+//
+// Handling of anonymous struct fields is new in Go 1.1.
+// Prior to Go 1.1, anonymous struct fields were ignored. To force ignoring of
+// an anonymous struct field in both current and earlier versions, give the field
+// a JSON tag of "-".
+//
+// Map values encode as JSON objects. The map's key type must either be a string
+// or implement encoding.TextMarshaler. The map keys are used as JSON object
+// keys, subject to the UTF-8 coercion described for string values above.
+//
+// Pointer values encode as the value pointed to.
+// A nil pointer encodes as the null JSON value.
+//
+// Interface values encode as the value contained in the interface.
+// A nil interface value encodes as the null JSON value.
+//
+// Channel, complex, and function values cannot be encoded in JSON.
+// Attempting to encode such a value causes Marshal to return
+// an UnsupportedTypeError.
+//
+// JSON cannot represent cyclic data structures and Marshal does not
+// handle them. Passing cyclic structures to Marshal will result in
+// an infinite recursion.
+//
+func Marshal(v interface{}) ([]byte, error) {
+ e := &encodeState{}
+ err := e.marshal(v, encOpts{escapeHTML: true})
+ if err != nil {
+ return nil, err
+ }
+ return e.Bytes(), nil
+}
+
+// MarshalIndent is like Marshal but applies Indent to format the output.
+func MarshalIndent(v interface{}, prefix, indent string) ([]byte, error) {
+ b, err := Marshal(v)
+ if err != nil {
+ return nil, err
+ }
+ var buf bytes.Buffer
+ err = Indent(&buf, b, prefix, indent)
+ if err != nil {
+ return nil, err
+ }
+ return buf.Bytes(), nil
+}
+
+// HTMLEscape appends to dst the JSON-encoded src with <, >, &, U+2028 and U+2029
+// characters inside string literals changed to \u003c, \u003e, \u0026, \u2028, \u2029
+// so that the JSON will be safe to embed inside HTML