Rich testing for the Go language
License: Other
Install the package with:
go get gopkg.in/check.v1
Import it with:
import "gopkg.in/check.v1"
and use check as the package name inside the code.
For more details, visit the project page:
and the API documentation:
package main_test

import (
    "syscall"
    "testing"

    . "gopkg.in/check.v1"
)

// Hook up gocheck into the "go test" runner.
func Test(t *testing.T) { TestingT(t) }

type MySuite struct{}

var _ = Suite(&MySuite{})

func (s *MySuite) TestHelloWorld(c *C) {
    c.Assert(42, Equals, "42")
    c.Assert(syscall.Errno(13), Matches, "perm.*accepted")
    c.Check(42, Equals, 42)
}
Verbose mode in this command:
$ go test $PKG -check.f NodeTestSuite -check.v
is not working as the documentation says. It only works if I change to the package's directory:
$ cd src/$PKG
$ go test -check.f NodeTestSuite -check.vv
When saved as tst_test.go and run with go test tst_test.go, the following
code produces an appropriate panic traceback when using Go 1.2.2
but not when using Go tip (currently c6e6ca8c7964).
package main_test

import (
    "testing"

    gc "gopkg.in/check.v1"
)

func TestAll(t *testing.T) {
    gc.TestingT(t)
}

var _ = gc.Suite(&suite{})

type suite struct{}

func (*suite) TestPanic(c *gc.C) {
    someFunc()
}

func someFunc() {
    otherFunc()
}

func otherFunc() {
    panic("run in circles, scream and shout")
}
Using the current version of gocheck:
vagrant@vagrant-ubuntu-trusty-64:/vagrant$ go test -check.vv -v ./...
? _/vagrant [no test files]
Without -check.vv:
vagrant@vagrant-ubuntu-trusty-64:/vagrant$ go test -v ./...
? _/vagrant [no test files]
? _/vagrant/adapter/basic [no test files]
=== RUN Test
The idea is that you can write something like this:
func (s *suite) TestSomething(c *C) {
    suiteCtx := c.SuiteContext()
    testCtx := c.TestContext()
}
The SuiteContext is guaranteed to be cancelled at the end of the suite, and the TestContext is guaranteed to be cancelled at the end of the individual test. This is something that could be provided in a fixture, but it seems a shame to duplicate that in all my suites, and (as far as I know) there's no way to do "suite mixins" or anything like that. In any case, I also wasn't sure what (if any) concurrency capabilities GoCheck provides, and placing the TestContext object inside the suite would preclude its safe use in multiple parallel tests.
Hi!
We've been using your check library a lot with our go code, and there is a desire to have machine readable results.
Ideally this would be a format that many CI systems can consume (xunit xml, etc) but I'm curious if you have any thoughts on the check library providing this. I've forked the repo because I certainly would love to help in anyway.
Also tips on how this would be ideally implemented would also be greatly appreciated.
Not sure if this is the expected behavior, but if a checker returns an error message this will cause a surrounding 'Not' to fail.
type failsWithError struct {
    *CheckerInfo
}

var FailsWithError Checker = &failsWithError{
    &CheckerInfo{Name: "FailsWithError", Params: []string{"obtained", "expected"}},
}

func (checker *failsWithError) Check(params []interface{}, names []string) (result bool, error string) {
    return false, "Error message explaining failing check"
}
Running this checker with Not will fail:
c.Assert("somestring", Not(FailsWithError), "fails")
Should the Not fail in this way? Not returning the message loses information about the reason for the failure.
Modifying the Not checker to clear the error if the result is true lets the test pass:
func (checker *notAndIgnoreError) Check(params []interface{}, names []string) (result bool, error string) {
    result, error = checker.sub.Check(params, names)
    result = !result
    if result {
        error = ""
    }
    return
}
Is this how Not should behave?
gocheck has its own way of formatting test names in its verbose output. This makes it difficult to use gocheck with tools like goconvey, which work by parsing the text output of go test.
I'd like to propose a flag to gocheck that makes its output look like that of go test. With just a few hacks (see tchajed/check@62ea135) I was able to almost get gocheck to work with goconvey (the individual suite tests showed up in the goconvey interface), and could develop these hacks into a proper pull request for gocheck.
For example, I haven't found a way to make this work:
c.Check("one\ntwo", Matches, ".*two.*")
Because the pattern must be a string, and that string is wrapped with ^ and $, we can't add our own flags either.
A possible quick fix is to check if the pattern is a regexp.Regexp, in which case don't compile the pattern ourselves.
Hi,
We have added some benchmarks to our project, but to prepare them we need to initialise a few million strings with random values. This is of course done in SetUpSuite. The problem is that the SetUpSuite call takes around 20 seconds to finish.
I was wondering if it would be OK to add the methods listed above. This way I could move the benchmark initialisation code so that it runs only when the -check.b flag is set. The other methods are there to keep things symmetrical.
I could try to hack it myself when time allows, but first I wanted a heads up from the gocheck devs.
Best regards,
Maciej.
Hey Gustavo
I'm reporting a bug about some of the benchmark tests failing so that I can reference it in SUSE and Fedora packaging.
[ 25s] ----------------------------------------------------------------------
[ 25s] FAIL: benchmark_test.go:18: BenchmarkS.TestBasicTestTiming
[ 25s]
[ 25s] benchmark_test.go:26:
[ 25s] c.Assert(output.value, Matches, expected)
[ 25s] ... value string = "" +
[ 25s] ... "PASS: check_test.go:136: FixtureHelper.Test1\t0.002s\n" +
[ 25s] ... "PASS: check_test.go:140: FixtureHelper.Test2\t0.000s\n"
[ 25s] ... regex string = "" +
[ 25s] ... "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test1\t0\\.001s\n" +
[ 25s] ... "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t0\\.000s\n"
[ 25s]
[ 25s]
[ 25s] ----------------------------------------------------------------------
[ 25s] FAIL: benchmark_test.go:39: BenchmarkS.TestBenchmark
[ 25s]
[ 25s] benchmark_test.go:59:
[ 25s] c.Assert(output.value, Matches, expected)
[ 25s] ... value string = "PASS: check_test.go:144: FixtureHelper.Benchmark1\t 50\t 208950 ns/op\n"
[ 25s] ... regex string = "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark1\t *100\t *[12][0-9]{5} ns/op\n"
[ 25s]
[ 25s] OOPS: 135 passed, 2 FAILED
[ 25s] --- FAIL: Test (0.19s)
[ 25s] FAIL
[ 25s] FAIL gopkg.in/check.v1 0.197s
[ 25s] error: Bad exit status from /var/tmp/rpm-tmp.jhzrJl (%check)
Okay, so I've managed to reproduce this in a simple test case. Basically, when I have two different test files in the same directory and package and run go test -v ./... , the second test file executes the tests from the first test file's suite alongside its own suite's tests.
Below is a small example; just create both files in the same directory and run go test ./... . The output shows SuiteOne outputting TESTONE, and then SuiteTwo outputting TESTONE and TESTTWO.
FILE - onetest.go
package session

import (
    "fmt"
    "testing"

    . "gopkg.in/check.v1"
)

type SuiteOne struct{}

func TestOne(t *testing.T) {
    Suite(&SuiteOne{})
    TestingT(t)
}

func (t *SuiteOne) TestOne(c *C) {
    fmt.Println("TESTONE")
}
FILE - twotest.go
package session

import (
    "fmt"
    "testing"

    . "gopkg.in/check.v1"
)

type SuiteTwo struct{}

func TestTwo(t *testing.T) {
    Suite(&SuiteTwo{})
    TestingT(t)
}

func (t *SuiteTwo) TestTwo(c *C) {
    fmt.Println("TESTTWO")
}
The output looks like this:
$ go test -v ./...
=== RUN TestOne
TESTONE
OK: 1 passed
--- PASS: TestOne (0.00 seconds)
=== RUN TestTwo
TESTONE
TESTTWO
OK: 2 passed
--- PASS: TestTwo (0.00 seconds)
PASS
ok _/Users/llovelock/test/check 0.014s
Any idea of what's going on here?
I'd like to be able to access the test name inside the test. Would you be willing to expose c.method.String() via a method on the C type?
Please tag releases
Please consider assigning version numbers and tagging releases. Tags/releases
are quite useful for downstream package maintainers (in Debian and other distributions) to export source tarballs, automatically track new releases and to declare dependencies between packages. Read more in the Debian Upstream Guide.
Thank you.
Hi,
While working on a PR for the project, I've noticed that the code isn't formatted with gofmt. Would it be possible to merge the existing PRs first? Then I can create a PR to format the code.
Thank you.
It would be nice to have an addCleanup method that saves a function to be called after tear down, like python's: https://docs.python.org/3/library/unittest.html#unittest.TestCase.addCleanup
mvo implemented one for snappy tests, which is very simple: http://bazaar.launchpad.net/~snappy-dev/snappy/snappy/view/head:/_integration-tests/helpers/common/common.go#L138
I'm not sure how to upstream that to gocheck, as it doesn't have a base suite.
Hi check team!
The file benchmark.go has been taken from the Go project and contains a proper copyright notice. Together with the copyright notice, this file contains the following license information:
'Use of this source code is governed by a BSD-style license that can be found in the LICENSE file'
This statement seems to be wrong, because the LICENSE file provided by the go-check project contains the 2-clause BSD license:
<...>
while benchmark.go was initially licensed under 3-clause BSD license which can be found here:
https://github.com/golang/go/blob/master/LICENSE
<...>
<...>
I think that you may want to add Golang's LICENSE file to the project as LICENCE.golang and put a link to it from benchmark.go.
Thanks for helping!
Oleg
Hello,
is it possible to describe a test? I like gocheck, but I miss a way to describe what a method is testing with a sentence. Am I right that the method name and the suite name are the only ways to describe the tests?
The following test completes successfully when in fact it should be a failure.
func (s *MySuite) TestPanic(c *C) {
    panic(nil)
}
Hi there,
There are quite a few open issues / PRs that have yet to receive input (either comments or be merged / rejected). Can we consider this project dead? Are you interested in help maintaining it if not (I'm volunteering to help out).
Thank you!
Hi there,
I just got started testing using gocheck, and for some parts of my tests that write to and read back from the filesystem it would be very handy to keep the temporary files around to be examined, when the tests fail.
That way I could find out more easily why the test failed.
What do you think?
godep go test -race -cover ./... --environment=dev
will report that all tests pass.
However, running the same tests without gocheck in the test framework results in a data race detection found:
WARNING: DATA RACE
Write by goroutine 81:
...elided...
Previous write by goroutine 79:
...identical elided...
I would like to use the TestName() of the tests I'm about to run inside the SetUpTest method. Can we do this? Could we just feed it in somehow?
Hi Gustavo,
I'm curious about:
https://github.com/go-check/check/blob/v1/helpers_test.go#L436
The line
defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(runtime.NumCPU()))
seems wrong to me.
My understanding is that the default value of GOMAXPROCS for Go is always 1. But this code is going to set it to the number of cores of the local machine (not 1!) once this test is completed, as a side effect. I assume that value then remains as long as the runtime is active.
I think more proper code would be:
prevVal := runtime.GOMAXPROCS(0) // query the current value without changing it
// change it temporarily if needed
defer runtime.GOMAXPROCS(prevVal) // set it back to what it was
Does that make sense or am I misunderstanding something ?
Thank you.
I'd like to work on adding new methods with the following signatures:
Check(obtained interface{}, checker Checker, failureMessage string, args ...interface{}) bool
Assert(obtained interface{}, checker Checker, failureMessage string, args ...interface{})
This way, if a test fails it would print a custom message which could be for example the index of the loop the test is currently asserting or some additional debug information that could be used to trace why the assertion failed.
What do you think?
Consider this code:
package main

import (
    "testing"

    gc "gopkg.in/check.v1"
)

type suite struct {
    closed bool
}

var _ = gc.Suite(&suite{})

func TestP(t *testing.T) {
    gc.TestingT(t)
}

func (s *suite) SetUpTest(c *gc.C) {
    s.closed = false
}

func (s *suite) TestSomething(c *gc.C) {
    panic("this panic does not become visible")
    s.closed = true
}

func (s *suite) TearDownTest(c *gc.C) {
    if !s.closed {
        c.Fatalf("not closed")
    }
}
This prints:
----------------------------------------------------------------------
FAIL: tst_test.go:28: suite.TearDownTest
tst_test.go:30:
c.Fatalf("not closed")
... Error: not closed
----------------------------------------------------------------------
PANIC: tst_test.go:23: suite.TestSomething
... Panic: Fixture has panicked (see related PANIC)
OOPS: 0 passed, 1 FAILED, 1 MISSED
--- FAIL: TestP (0.00s)
FAIL
FAIL command-line-arguments 0.002s
The panic has been hidden because the teardown failed. This isn't that unusual. For example, in the Juju test suite there's a check in TearDownTest that all the mongo sessions are closed; if a test doesn't run to completion because of a panic and as a result does not close a session, the failure is confusing, as it's not clear that there has been a panic at all.
In fact, the output is identical to the output produced when TearDownTest has an error after the actual test has passed successfully.
Consider this test case:

package suite

import (
    "sync"
    "testing"

    gc "gopkg.in/check.v1"
)

type TestSuite struct{}

var _ = gc.Suite(&TestSuite{})

func TestTesting(t *testing.T) {
    gc.TestingT(t)
}

func (t *TestSuite) TestRace(c *gc.C) {
    var wg sync.WaitGroup
    start := make(chan bool)
    const n = 2
    wg.Add(n)
    for i := 0; i < n; i++ {
        go func() {
            defer wg.Done()
            <-start
            c.Error("an error occured")
        }()
    }
    close(start)
    wg.Wait()
}
When running the test with -race, the following data race results
==================
WARNING: DATA RACE
Write by goroutine 9:
gopkg.in/check%2ev1.(*C).Error()
/home/dfc/src/gopkg.in/check.v1/helpers.go:117 +0x246
suite.(*TestSuite).TestRace.func1()
/home/dfc/src/suite/suite_test.go:31 +0x152
Previous write by goroutine 10:
gopkg.in/check%2ev1.(*C).Error()
/home/dfc/src/gopkg.in/check.v1/helpers.go:117 +0x246
suite.(*TestSuite).TestRace.func1()
/home/dfc/src/suite/suite_test.go:31 +0x152
Goroutine 9 (running) created at:
suite.(*TestSuite).TestRace()
/home/dfc/src/suite/suite_test.go:32 +0xd0
runtime.call32()
/home/dfc/go/src/runtime/asm_amd64.s:437 +0x44
reflect.Value.Call()
/home/dfc/go/src/reflect/value.go:300 +0xd0
gopkg.in/check%2ev1.(*suiteRunner).forkTest.func1()
/home/dfc/src/gopkg.in/check.v1/check.go:763 +0x5e3
gopkg.in/check%2ev1.(*suiteRunner).forkCall.func1()
/home/dfc/src/gopkg.in/check.v1/check.go:657 +0x83
Goroutine 10 (finished) created at:
suite.(*TestSuite).TestRace()
/home/dfc/src/suite/suite_test.go:32 +0xd0
runtime.call32()
/home/dfc/go/src/runtime/asm_amd64.s:437 +0x44
reflect.Value.Call()
/home/dfc/go/src/reflect/value.go:300 +0xd0
gopkg.in/check%2ev1.(*suiteRunner).forkTest.func1()
/home/dfc/src/gopkg.in/check.v1/check.go:763 +0x5e3
gopkg.in/check%2ev1.(*suiteRunner).forkCall.func1()
/home/dfc/src/gopkg.in/check.v1/check.go:657 +0x83
==================
----------------------------------------------------------------------
FAIL: suite_test.go:18: TestSuite.TestRace
OOPS: 0 passed, 1 FAILED
suite_test.go:31:
c.Error("an error occured")
... Error: an error occured
suite_test.go:31:
c.Error("an error occured")
... Error: an error occured
--- FAIL: TestTesting (0.00s)
FAIL
exit status 1
FAIL suite 0.018s
Do we have any plans to add a Contains checker? For example:
c.Assert(value, check.Contains, "hello")
(copied with updates from http://pad.lv/1297690)
go-check uses temporary directories to run tests in, but these temporary directories are made with known, predictable names.
We can see here
https://github.com/go-check/check/blob/v1/check.go#L133
that newPath uses rand.Int() without first seeding the RNG, meaning I'll always have tests run in /tmp/gocheck-5577006791947779410.
I don't think cryptographically secure random numbers are needed, so something as simple as rand.Seed(time.Now().UnixNano()) would probably do the job.
For each package, each file has its own test suite. Is there a way to have a common setup for all of the suites? Right now they all share a common database to operate on.
Furthermore, can you share context between packages?
I'm guessing something about how panics are printed out has changed? https://gist.github.com/mwhudson/751032f1bd893bbc93e2 for the gory failures.
I have a function that returns an error that includes a stack trace:
result, err := my_func()
c.Assert(err, IsNil)
// err may have a long, multi-line message
When I test for nil on the error, the error message is long and not super helpful:
.../.../my_test.go:
c.Assert(err, isNil)
... value *errors.baseError = &errors.baseError{msg:"long msg", <other error fields>} ("long msg")
In this case, the "long msg" is duplicated, and the message itself is printed in "repr" form, meaning that newlines (and other control characters) show up as "\n" and the message is basically completely unreadable.
The problem appears to be rooted here: the "go syntax" repr and the "quoted" repr aren't equal, so the value is written twice, and what's written isn't written in a useful way. I understand that, in general, you want this kind of representation for arbitrary interface{} values, but for error I think there should be some new representation which makes use of isMultiLine, and at the very least doesn't duplicate all the internal baseError data or write the error message twice. I was working on a PR, but I wasn't sure what that new representation should be.
Please improve the documentation regarding:
func Test(t *testing.T) { TestingT(t) }
If a given package has multiple test files, the above (required) line of code can only appear once. Please provide direction on the best way to introduce this line of code.
Also, a given suite can only appear once in a package. If I merely want to have n test files for a given package with one suite, it's unclear how best to initialize go-check.
I have a suite of unit tests that check the output of my program. To ensure that there is no unchecked output, I use the following code:
type Suite struct {
    output bytes.Buffer
}

func (s *Suite) Output() string {
    defer s.output.Reset()
    return s.output.String()
}

func (s *Suite) TearDownTest(c *check.C) {
    if out := s.Output(); out != "" {
        c.Logf("Unchecked output; check with: c.Check(s.Output(), check.Equals, %q)", out)
    }
}
Using c.Logf hides the output though, since successful tests don't log anything. Using c.Errorf instead of c.Logf reports that the fixture panicked, which is also not what I intended when I wrote that code. I had expected that I could use c.Errorf in TearDownTest exactly as I can use it in the actual unit test, having it print the nice "obtained vs. expected" message.
The current API doesn't allow me to query whether the test panicked; I can only ask c.Failed().
Is the above code something that check should support?
I've only seen this on a package build machine and retrying usually makes it go away eventually:
dh_auto_build -O--buildsystem=golang
go install -v gopkg.in/check.v1
gopkg.in/check.v1
dh_auto_test -O--buildsystem=golang
go test -v gopkg.in/check.v1
=== RUN Test
----------------------------------------------------------------------
FAIL: benchmark_test.go:77: BenchmarkS.TestBenchmarkMem
benchmark_test.go:90:
c.Assert(output.value, Matches, expected)
... value string = "PASS: check_test.go:159: FixtureHelper.Benchmark3\t 50\t 215042 ns/op\t 126 B/op\t 2 allocs/op\n"
... regex string = "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark3\t *100\t *[12][0-9]{5} ns/op\t *[0-9]+ B/op\t *[1-9] allocs/op\n"
OOPS: 126 passed, 1 FAILED
--- FAIL: Test (0.32s)
FAIL
exit status 1
FAIL gopkg.in/check.v1 0.323s
A temporary folder created with c.MkDir() is not removed if an error occurs inside the SetUpSuite method.
Here's the code that produces this behavior for me:

func (s *ApiSuite) SetUpSuite(c *C) {
    tempDir := c.MkDir()
    log.Fatal("Error")
}
https://github.com/go-check/check/blob/v1/helpers.go#L215
The internalCheck method does not only check the result returned by the checker; it also checks whether the error message is empty. The problem is with the Not checker: it only inverts the result and does not clear the error message. It makes sense that every checker returns an error message when its result is false, so internalCheck reports a failure when the Not checker is used. I think this is a bug.
Hi there,
Earlier this year, I had occasion to write some tests with gocheck and go 1.3 that happened to involve calling c.Assert() 67,108,864 (8192*8192) times in a loop. I don't have exact timings but it was fast enough for me not to care.
Re-running those tests now with go1.4 or go1.5 and most recent gocheck (I've not managed to check against go1.3, I'm afraid), they now take ~40 seconds.
Here's a simple testcase, with the number of iterations whacked down a bit:
package mytest_test

import (
    "testing"

    . "gopkg.in/check.v1"
)

type MySuite struct{}

var _ = Suite(&MySuite{})

func Test(t *testing.T) { TestingT(t) }

const Iterations = 8192 * 1024

func (s *MySuite) TestAssertEqualsSpeed(c *C) {
    for i := 0; i < Iterations; i++ {
        c.Assert(1, Equals, 1)
    }
}

func (s *MySuite) TestAssertIsNilSpeed(c *C) {
    for i := 0; i < Iterations; i++ {
        c.Assert(nil, IsNil)
    }
}

func (s *MySuite) TestOrdinaryComparisonSpeed(c *C) {
    for i := 0; i < Iterations; i++ {
        _ = (1 == 1)
    }
}
And the output:
[lupine@nlwork2 mytest]$ go test ./... -v -check.v -check.vv
=== RUN Test
START: mytest_test.go:17: MySuite.TestAssertEqualsSpeed
PASS: mytest_test.go:17: MySuite.TestAssertEqualsSpeed 5.288s
START: mytest_test.go:23: MySuite.TestAssertIsNilSpeed
PASS: mytest_test.go:23: MySuite.TestAssertIsNilSpeed 1.612s
START: mytest_test.go:30: MySuite.TestOrdinaryComparisonSpeed
PASS: mytest_test.go:30: MySuite.TestOrdinaryComparisonSpeed 0.004s
OK: 3 passed
--- PASS: Test (6.90s)
PASS
ok bytemark.co.uk/auth3/auth/mytest 6.906s
Some loss of performance is to be expected, of course, but I don't believe this amount is expected or desirable. I can rearchitect the tests to avoid the large number of calls to c.Assert(), but hopefully it's just down to some unnecessary use of reflect or a copy or something that isn't needed.
If you write a function in the test setup that captures the *C in a closure, and that method is called during the test or teardown, any failures aren't recorded as failures.
Similarly capturing the *C in a test and running it in the teardown doesn't record a failure.
I looked at the code myself to look at fixing this, but unfortunately couldn't follow it enough to work out how to fix it myself.
When the -v flag is used, the test reporting is very difficult to read: it is basically the stdout/stderr of every test combined, without any information about which tests are running.
I think check should print some more details when the -v flag is given: definitely the name of the test suite and function, and probably also the filename of the test. The state of the flag can be retrieved through the testing.Verbose function.
I was recently trying to find out what was going wrong in a failed test. The symptom was that the tests failed saying "49 PASSED 7 FAILED" but with no other error message (and there were only 49 tests in total). Running with the race detector enabled showed that there was a race.
The reason was that a test suite was setting up a helper object in SetUpTest that held on to the *C value passed to SetUpTest, and then in one of the tests the helper object did an assert on it.
This is obviously a wrong thing to do, but it's an easy mistake to make, and it would have saved me lots of time if the failure mode had been less obscure - for example, if the assert had panicked because the C object had already been discarded.
Here's some code that reproduces the issue: http://play.golang.org/p/3CXBOBWLB_
I've been testing using go tip (+2db587c).
For the record, the culprit was NewFakeAPI in
github.com/juju/juju/worker/provisioner.
There are references to functions that used to be public but apparently aren't any more, so the tests don't compile. Maybe these functions shouldn't be tested from check_test anymore?
(v1 ✓) wes-macbook:check go test
# github.com/wfreeman/check_test
./printer_test.go:81: undefined: PrintLine
./printer_test.go:100: undefined: Indent
FAIL github.com/wfreeman/check [build failed]
From the documentation it is hard to determine whether tests run sequentially or in parallel, and whether the setup/teardown for a test happens on the same goroutine as the test itself.
This matters when trying to diagnose failures whose symptoms are tests (and their setup/teardown) clobbering each other. This started happening after moving to Go 1.5, and it may well be that we've written tests badly, but it's hard to be sure.
I have setup a test suite for each file of my package. I also redirect logs (from go-logging) into testing.T. The failures and logs are duplicated in the output of each suite. This is quite confusing.
func (s *Suite) TestMatch(c *check.C) {
    re := `foo|bar`
    c.Check("foo", check.Matches, re)
    c.Check("football", check.Matches, re) // Should not match
    c.Check("ballfoot", check.Not(check.Matches), re)
}
The problem is this line in checkers.go:
matches, err := regexp.MatchString("^"+reStr+"$", valueStr)
It should be:
matches, err := regexp.MatchString("^(?:"+reStr+")$", valueStr)
I frequently use the -test.run= argument with go test if I'm only interested in running a single test in the test suite, which is really useful when the entire suite takes a long time to run and I'm only working on a single test at a time.
I noticed that with gocheck this flag seems to have no effect (I'm running it with gocheck -gocheck.v -test.run="TestName"): it always prints PASS immediately and doesn't run any tests at all. Is there a way to get this flag to work, or an alternate way of accomplishing the same thing with gocheck?
It would be very helpful for gocheck to have an option to run suites and/or tests in parallel, similar to the way t.Parallel() works in the standard testing package.
For long-running e2e or functional tests, it would be nice if we could terminate the entire test suite at the point of first failure. I haven't found a way to do this yet, if one already exists, but a flag/parameter to control this behavior internally would make it a lot easier and clearer to implement from an end-user perspective.
My earlier attempts were around using the c.Failed() method to determine whether or not I could panic, but c.Failed() doesn't report appropriately inside fixture hooks. Seems kinda like a hack anyway. :)
If you're willing to accept the idea, I can take a stab at a patch.
/cc @unclejack
Is there a way to do this?
c.Assert(found.Id > 0)
I have many checks like this:
c.Assert(found.Id > 0, Equals, true)