
pikzie's Introduction

README

Name

Pikzie

Author

License

LGPLv3 or later

What's this?

Pikzie is a unit testing framework for Python that makes tests easy to write and debug.

Pikzie provides the following features that are lacking in unittest.py, included in the standard Python distribution:

  • a Pythonic API
  • many assertions
  • test result output in a format that is useful for debugging

Pikzie also has the following features:

  • ...

Dependencies

  • Python >= 2.6 (Python 3.x is also supported.)

Install

Install with easy_install:

% sudo easy_install Pikzie

Install with pip:

% sudo pip install Pikzie

Install from tar.gz:

% wget http://downloads.sourceforge.net/pikzie/pikzie-1.0.0.tar.gz
% tar xvzf pikzie-1.0.0.tar.gz
% cd pikzie-1.0.0
% sudo python setup.py install

Repository

There is "clear-code/pikzie <https://github.com/clear-code/pikzie>"_ on GitHub.

git:

% git clone https://github.com/clear-code/pikzie.git
% cd pikzie
% sudo python setup.py install

Usage

We assume that you have the following directory structure:

. -+- lib  --- your_module --- ...
   |
   +- test -+- run-test.py
            |
            +- __init__.py
            |
            +- test_module1.py
            |
            +- ...

Template

Here is a template of a test case:

import pikzie
import test_target_module         # the module under test

class TestYourModule(pikzie.TestCase):
    def setup(self):              # run before each test
        self.setup_called = True

    def teardown(self):           # run after each test
        self.setup_called = False

    def test_condition(self):     # test methods start with "test_"
        self.assert_true(self.setup_called)

test/run-test.py is as follows:

#!/usr/bin/env python

import sys
import os

# Make the project root and lib/ importable so that tests
# can import the modules under test.
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
sys.path.insert(0, os.path.join(base_dir, "lib"))
sys.path.insert(0, base_dir)

import pikzie

# Collect and run the tests; exit with a status that reflects the result.
sys.exit(pikzie.Tester().run())

Make test/run-test.py executable:

% chmod +x test/run-test.py

The test/test_*.py files are loaded automatically, and the tests defined in them are run, by invoking run-test.py like this:

% test/run-test.py

You can pass zero or more options to test/run-test.py, for example:

% test/run-test.py --priority

You can see all available options with the --help option:

% test/run-test.py --help

See "Options" section in this document for more details.

Test result

Here is an example test result:

....F..............................

1) Failure: TestLoader.test_collect_test_cases: sorted(test_case_names))
expected: <['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']>
 but was: <['TestXXX1', 'TestXXX2', 'TestYYY']>
diff:
- ['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']
?                                   -----------

+ ['TestXXX1', 'TestXXX2', 'TestYYY']
/home/kou/work/python/pikzie/test/test_loader.py:30: test_collect_test_cases(): sorted(test_case_names))

Finished in 0.013 seconds

35 test(s), 55 assertion(s), 1 failure(s), 0 error(s), 0 pending(s), 0 notification(s)

Progress

The part of the test result consisting of "." and "F" shows the test progress:

....F..............................

Each "." and "F" shows a test case (test function). "." shows a test case that is succeeded and "F" shows a test case that is failed. There are "E", "P" and "N". They shows error, pending and notification respectively. Here is a summary of test case marks:

.
A succeeded test
F
A failed test
E
A test that had an error
P
A test that is marked as pending
N
A test that had an notification

Each mark is printed as soon as its test finishes, so you can follow the progress of the test run while it is still running.
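
For example, a test case like the following sketch (hypothetical; self.pend() and self.notify() are described under "Summary of test result" below) produces one mark per test:

import pikzie

class TestMarks(pikzie.TestCase):
    def test_success(self):      # reported as "."
        self.assert_equal(1, 1)

    def test_failure(self):      # reported as "F"
        self.assert_equal(1, 2)

    def test_pending(self):      # reported as "P"
        self.pend("not implemented yet")

    def test_notification(self): # reported as "N"
        self.notify("heads-up: nothing asserted yet")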

Summary of test result

Pikzie outputs a summary of the test result after all tests have finished. The summary starts with a detailed report for each non-succeeded test. In the example, Pikzie outputs one such report because there is a failure:

1) Failure: TestLoader.test_collect_test_cases: sorted(test_case_names))
expected: <['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']>
 but was: <['TestXXX1', 'TestXXX2', 'TestYYY']>
diff:
- ['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']
?                                   -----------

+ ['TestXXX1', 'TestXXX2', 'TestYYY']
/home/kou/work/python/pikzie/test/test_loader.py:30: test_collect_test_cases(): sorted(test_case_names))

In the example, the TestLoader.test_collect_test_cases test failed; the report shows that we expected:

['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']

but was:

['TestXXX1', 'TestXXX2', 'TestYYY']

The part after "diff:" marks the differing portions so the difference is easy to spot:

diff:
- ['TestXXX1', 'TestXXX2', 'TestYYY', 'TestZZZ']
?                                   -----------

+ ['TestXXX1', 'TestXXX2', 'TestYYY']

The failed assertion is in the test_collect_test_cases() method of /home/kou/work/python/pikzie/test/test_loader.py at line 30, and that line's content is:

sorted(test_case_names))

The elapsed time of the test run is shown after the detailed reports:

Finished in 0.013 seconds

The last line is a summary of the test result:

35 test(s), 55 assertion(s), 1 failure(s), 0 error(s), 0 pending(s), 0 notification(s)

Here is what each item means:

n test(s)
n test cases (test functions) were run.
n assertion(s)
n assertions passed.
n failure(s)
n assertions failed.
n error(s)
n errors occurred (n exceptions were raised).
n pending(s)
n test cases are pending (self.pend() was called n times).
n notification(s)
n notifications occurred (self.notify() was called n times).

In the example, 35 test cases were run, 55 assertions passed and one assertion failed. There were no errors, pendings or notifications.

XML report

Pikzie reports the test result in XML format if the --xml-report option is specified. The reported XML has the following structure:

<report>
  <result>
    <test-case>
      <name>TEST CASE NAME</name>
      <description>DESCRIPTION OF TEST CASE (if exists)</description>
    </test-case>
    <test>
      <name>TEST NAME</name>
      <description>DESCRIPTION OF TEST (if exists)</description>
      <option><!-- ATTRIBUTE INFORMATION (if exists) -->
        <name>ATTRIBUTE NAME (e.g.: bug)</name>
        <value>ATTRIBUTE VALUE (e.g.: 1234)</value>
      </option>
      <option>
        ...
      </option>
    </test>
    <status>TEST RESULT ([success|failure|error|pending|notification])</status>
    <detail>DETAIL OF TEST RESULT (if exists)</detail>
    <backtrace><!-- BACKTRACE (if exists) -->
      <entry>
        <file>FILE NAME</file>
        <line>LINE</line>
        <info>ADDITIONAL INFORMATION</info>
      </entry>
      <entry>
        ...
      </entry>
    </backtrace>
    <elapsed>ELAPSED TIME (e.g.: 0.000010)</elapsed>
  </result>
  <result>
    ...
  </result>
  ...
</report>
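
For example, to write the XML report to a file (the name report.xml here is arbitrary):

% test/run-test.py --xml-report=report.xml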

References

Options

See "Template" section in this document how to pass options to Pikzie.

--version
 shows Pikzie's version and exits.

-pPATTERN, --test-file-name-pattern=PATTERN
 collects test files that match the specified glob pattern.
 Default: test/test_*.py

-nTEST_NAME, --name=TEST_NAME
 runs tests whose names match TEST_NAME. If TEST_NAME is surrounded by "/" (e.g. /test_/), TEST_NAME is treated as a regular expression.
 This option can be specified multiple times. In that case, Pikzie runs tests that match any of the TEST_NAMEs.

-tTEST_CASE_NAME, --test-case=TEST_CASE_NAME
 runs test cases whose names match TEST_CASE_NAME. If TEST_CASE_NAME is surrounded by "/" (e.g. /TestMyLib/), TEST_CASE_NAME is treated as a regular expression.
 This option can be specified multiple times. In that case, Pikzie runs test cases that match any of the TEST_CASE_NAMEs.

--xml-report=FILE
 outputs the test result in XML format to FILE.

--priority
 selects tests to run according to their priority. A test that did not pass in the previous run is always run.

--no-priority
 runs all tests regardless of their priority. (default)

-vLEVEL, --verbose=LEVEL
 specifies the verbosity level. LEVEL is one of [s|silent|n|normal|v|verbose].
 This option only applies to the console UI. (The console UI is the only UI at present.)

-cMODE, --color=MODE
 specifies whether to colorize output. MODE is one of [yes|true|no|false|auto]. If 'yes' or 'true' is specified, output is colorized with escape sequences. If 'no' or 'false' is specified, output is never colorized. If 'auto' is specified or the option is omitted, output is colorized when available.
 This option only applies to the console UI. (The console UI is the only UI at present.)

--color-scheme=SCHEME
 specifies the color scheme used for output. SCHEME is one of [default].
 This option only applies to the console UI. (The console UI is the only UI at present.)
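
For example, the following invocation (with hypothetical test and test case names) runs only tests matching the regular expression test_parse in the TestYourModule test case, with verbose output:

% test/run-test.py --name=/test_parse/ --test-case=TestYourModule --verbose=v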

Assertions

Use pydoc:

% pydoc pikzie.assertions.Assertions

Or you can read the HTML version on the Web: http://pikzie.sourceforge.net/assertions.html
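
As a taste of the API, here is a minimal sketch that uses three assertions appearing elsewhere in this document (assert_true, assert_equal and assert_call_raise); see the pydoc output above for the complete list:

import pikzie

class TestAssertionFlavor(pikzie.TestCase):
    def test_flavor(self):
        self.assert_true(isinstance([], list))
        self.assert_equal(4, 2 + 2)
        # assert_call_raise(exception, callable, *args) asserts that
        # calling callable(*args) raises the given exception.
        self.assert_call_raise(IndexError, ().__getitem__, 0)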

Attribute

You can add attributes to your tests to get more useful information on failure. For example, you can attach a Bug ID like this:

import pikzie

class TestYourModule(pikzie.TestCase):
    @pikzie.bug(123)
    def test_invalid_input(self):
        self.assert_call_raise(IndexError, ().__getitem__, 0)

In the above example, the test_invalid_input test carries an attribute recording that it is for Bug #123.
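
When such a test fails or raises an error, its attributes are printed in the detailed report, so the Bug ID appears right next to the problem. The report would start roughly like this (the shape follows Pikzie's own test output quoted later in this document; the rest of the report is elided):

1) Failure: TestYourModule.test_invalid_input: ...
  bug: 123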

Here is a list of available attributes:

pikzie.bug(id)
Sets id as the Bug ID.
pikzie.priority(priority)

Decides whether to run the test according to its priority. The available priorities are listed below (see the sketch after the list for an example). If the --no-priority command line option is specified, the priority is ignored.

must
Always runs the test.
important
Runs the test with 90% probability.
high
Runs the test with 70% probability.
normal
Runs the test with 50% probability. (default)
low
Runs the test with 25% probability.
never
Never runs the test.
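
For example (a sketch; it assumes the priority name is passed as a string, matching the attribute name above):

import pikzie

class TestYourModule(pikzie.TestCase):
    @pikzie.priority("must")
    def test_critical_path(self):
        # With --priority, this test is run on every invocation.
        self.assert_equal(0, 1 - 1)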

Thanks

pikzie's People

Contributors

kou, okkez, yosuke-yasuda, keitaw, hhatto, 092975, shopetan

Watchers

James Cloos

pikzie's Issues

I couldn't figure out how to use the tests ><

[What I wanted to do]
I wrote a function of my own and wanted to run tests against it.

[Facts]
(1) I couldn't tell what directory structure it expects.
(2) I couldn't tell which files need to exist for it to work.
(3) I couldn't tell how I was supposed to modularize things in the first place.

[What I should have done]

. -+- lib  --- your_module --- ...
   |
   +- test -+- run-test.py
            |
            +- test_module1.py
            |
            +- ...

The minimum required structure is:
lib/[a file defining your function (e.g. factorial.py)]
test/run-test.py (the run-test.py from the Pikzie documentation)
test/test_factorial.py (a template-style test file goes here)

To run it,

python3 test/run-test.py

works fine!!

However, running run-test.py on pikzie itself with python3 raises errors.

[Facts]

MacBook-Pro-8:pikzie shopetan$ python3 test/run-test.py

..............FF..P..O.......................F.....F...........

1) Failure: TestAssertions.test_assert_open_file: ["test_assert_open_file"])
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:600: ["test_assert_open_file"])
expected: <(False, (1, 2, 1, 0, 0, 0, 0), [['F', 'TestCase.test_assert_open_file', "expected: open('/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') succeeds\n but was: <<class 'OSError'>>([Errno 2] No such file or directory: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') is raised", None]])>
 but was: <(False, (1, 2, 1, 0, 0, 0, 0), [['F', 'TestCase.test_assert_open_file', "expected: open('/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') succeeds\n but was: <<class 'FileNotFoundError'>>([Errno 2] No such file or directory: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') is raised", None]])>
diff:
- (False, (1, 2, 1, 0, 0, 0, 0), [['F', 'TestCase.test_assert_open_file', "expected: open('/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') succeeds\n but was: <<class 'OSError'>>([Errno 2] No such file or directory: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') is raised", None]])
?                                                                                                                                                                                              ^^
+ (False, (1, 2, 1, 0, 0, 0, 0), [['F', 'TestCase.test_assert_open_file', "expected: open('/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') succeeds\n but was: <<class 'FileNotFoundError'>>([Errno 2] No such file or directory: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') is raised", None]])
?                                                                                                                                                                                              ^^^^^^^^^^^^

folded diff:
  (False, (1, 2, 1, 0, 0, 0, 0), [['F', 'TestCase.test_assert_open_file', "expec
  ted: open('/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexisten
- t') succeeds\n but was: <<class 'OSError'>>([Errno 2] No such file or director
?                                  ^^                                 ----------
+ t') succeeds\n but was: <<class 'FileNotFoundError'>>([Errno 2] No such file o
?                                  ^^^^^^^^^^^^
- y: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexistent') is 
?                                                                     ----------
+ r directory: '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/nonexis
? ++++++++++
- raised", None]])
+ tent') is raised", None]])
? ++++++++++

2) Failure: TestAssertions.test_assert_raise_call: "test_assert_raise_call_instance"])
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:508: "test_assert_raise_call_instance"])
expected: <(False, (3, 8, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', "expected: <<class 'NameError'>> is raised\n but was: test_assertions.nothing_raised() nothing raised", None], ['F', 'TestCase.test_assert_raise_call_different_error', "expected: <<class 'NameError'>> is raised\n but was: <<class 'ZeroDivisionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_raise_call_instance', "expected: <ComparableError('not error',)>\n but was: <ComparableError('raise error',)>", None]])>
 but was: <(False, (3, 6, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', 'expected: <"global name \'unknown_name\' is not defined">\n but was: <"name \'unknown_name\' is not defined">', None], ['F', 'TestCase.test_assert_raise_call_different_error', "expected: <<class 'NameError'>> is raised\n but was: <<class 'ZeroDivisionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_raise_call_instance', "expected: <ComparableError('not error',)>\n but was: <ComparableError('raise error',)>", None]])>
diff:
- (False, (3, 8, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', "expected: <<class 'NameError'>> is raised\n but was: test_assertions.nothing_raised() nothing raised", None], ['F', 'TestCase.test_assert_raise_call_different_error', "expected: <<class 'NameError'>> is raised\n but was: <<class 'ZeroDivisionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_raise_call_instance', "expected: <ComparableError('not error',)>\n but was: <ComparableError('raise error',)>", None]])
?             ^^                                                           ^           ^^^^^^^^ ------------------------------------------------------------------------------------
+ (False, (3, 6, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', 'expected: <"global name \'unknown_name\' is not defined">\n but was: <"name \'unknown_name\' is not defined">', None], ['F', 'TestCase.test_assert_raise_call_different_error', "expected: <<class 'NameError'>> is raised\n but was: <<class 'ZeroDivisionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_raise_call_instance', "expected: <ComparableError('not error',)>\n but was: <ComparableError('raise error',)>", None]])
?             ^^                                                           ^           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

folded diff:
- (False, (3, 8, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', "expe
?             ^                                                            ^
+ (False, (3, 6, 3, 0, 0, 0, 0), [['F', 'TestCase.test_assert_raise_call', 'expe
?             ^                                                            ^
- cted: <<class 'NameError'>> is raised\n but was: test_assertions.nothing_raise
+ cted: <"global name \'unknown_name\' is not defined">\n but was: <"name \'unkn
- d() nothing raised", None], ['F', 'TestCase.test_assert_raise_call_different_e
? ^^^    ^  ------                                                     ---------
+ own_name\' is not defined">', None], ['F', 'TestCase.test_assert_raise_call_di
? ^^^^^^^^^^^^^    ^^^^     ++
- rror', "expected: <<class 'NameError'>> is raised\n but was: <<class 'ZeroDivi
?                                                                      ---------
+ fferent_error', "expected: <<class 'NameError'>> is raised\n but was: <<class 
? +++++++++
- sionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_raise_call
?                                                                      ---------
+ 'ZeroDivisionError'>>(division by zero)", None], ['F', 'TestCase.test_assert_r
? +++++++++
- _instance', "expected: <ComparableError('not error',)>\n but was: <ComparableE
?                                                                      ---------
+ aise_call_instance', "expected: <ComparableError('not error',)>\n but was: <Co
? +++++++++
- rror('raise error',)>", None]])
+ mparableError('raise error',)>", None]])
? +++++++++

3) Pending: TestAssertions.test_assert_search_syslog_call: can't read /var/log/messages.
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:561: self.pend("can't read /var/log/messages.")

4) Omission: TestAssertions.test_kernel_symbol: only for Linux environment
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:619: self.omit("only for Linux environment")

5) Failure: TestRunner.test_metadata: self.assert_output("EF", 2, 0, 1, 1, 0, 0, 0, details, tests)
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:118: self.assert_output("EF", 2, 0, 1, 1, 0, 0, 0, details, tests)
expected: <('EF\n'
 '\n'
 '1) Error: TestCase.test_error_raised\n'
 '  bug: 123\n'
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:84: '
 'self.unknown_attribute\n'
 "<class 'AttributeError'>: 'TestCase' object has no attribute "
 "'unknown_attribute'\n"
 '\n'
 '2) Failure: TestCase.test_with_metadata: self.assert_equal(3, 1 - 2)\n'
 '  bug: 999\n'
 '  key: value\n'
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:89: '
 'self.assert_equal(3, 1 - 2)\n'
 'expected: <3>\n'
 ' but was: <-1>\n'
 '\n'
 'Finished in 0.000 seconds\n'
 '\n'
 '2 test(s), 0 assertion(s), 1 failure(s), 1 error(s), 0 pending(s), 0 '
 'omission(s), 0 notification(s)\n')>
 but was: <('EF\n'
 '\n'
 '1) Error: TestCase.test_error_raised\n'
 '  bug: 123\n'
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:84: '
 'self.unknown_attribute\n'
 "<class 'AttributeError'>: 'TestCase' object has no attribute "
 "'unknown_attribute'\n"
 '\n'
 '2) Failure: TestCase.test_with_metadata: self.assert_equal(3, 1 - 2)\n'
 '  key: value\n'
 '  bug: 999\n'
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:89: '
 'self.assert_equal(3, 1 - 2)\n'
 'expected: <3>\n'
 ' but was: <-1>\n'
 '\n'
 'Finished in 0.000 seconds\n'
 '\n'
 '2 test(s), 0 assertion(s), 1 failure(s), 1 error(s), 0 pending(s), 0 '
 'omission(s), 0 notification(s)\n')>
diff:
  ('EF\n'
   '\n'
   '1) Error: TestCase.test_error_raised\n'
   '  bug: 123\n'
   '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:84: '
   'self.unknown_attribute\n'
   "<class 'AttributeError'>: 'TestCase' object has no attribute "
   "'unknown_attribute'\n"
   '\n'
   '2) Failure: TestCase.test_with_metadata: self.assert_equal(3, 1 - 2)\n'
+  '  key: value\n'
   '  bug: 999\n'
-  '  key: value\n'
   '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:89: '
   'self.assert_equal(3, 1 - 2)\n'
   'expected: <3>\n'
   ' but was: <-1>\n'
   '\n'
   'Finished in 0.000 seconds\n'
   '\n'
   '2 test(s), 0 assertion(s), 1 failure(s), 1 error(s), 0 pending(s), 0 '
   'omission(s), 0 notification(s)\n')

6) Failure: TestRunner.test_run_failed_dict_data: self.assert_output("F", 1, 1, 1, 0, 0, 0, 0, details, [test])
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:234: self.assert_output("F", 1, 1, 1, 0, 0, 0, 0, details, [test])
expected: <('F\n'
 '\n'
 '1) Failure: TestCase.test_fail_assertion_dict_data (fail): '
 'self.assert_equal("dict", data)\n'
 "  data: {'b': 2, 'a': 1}\n"
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:218: '
 'self.assert_equal("dict", data)\n'
 "expected: <'dict'>\n"
 " but was: <{'b': 2, 'a': 1}>\n"
 '\n'
 'Finished in 0.000 seconds\n'
 '\n'
 '1 test(s), 1 assertion(s), 1 failure(s), 0 error(s), 0 pending(s), 0 '
 'omission(s), 0 notification(s)\n')>
 but was: <('F\n'
 '\n'
 '1) Failure: TestCase.test_fail_assertion_dict_data (fail): '
 'self.assert_equal("dict", data)\n'
 "  data: {'b': 2, 'a': 1}\n"
 '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:218: '
 'self.assert_equal("dict", data)\n'
 "expected: <'dict'>\n"
 " but was: <{'a': 1, 'b': 2}>\n"
 '\n'
 'Finished in 0.000 seconds\n'
 '\n'
 '1 test(s), 1 assertion(s), 1 failure(s), 0 error(s), 0 pending(s), 0 '
 'omission(s), 0 notification(s)\n')>
diff:
  ('F\n'
   '\n'
   '1) Failure: TestCase.test_fail_assertion_dict_data (fail): '
   'self.assert_equal("dict", data)\n'
   "  data: {'b': 2, 'a': 1}\n"
   '/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_runner.py:218: '
   'self.assert_equal("dict", data)\n'
   "expected: <'dict'>\n"
-  " but was: <{'b': 2, 'a': 1}>\n"
?                     --------
+  " but was: <{'a': 1, 'b': 2}>\n"
?               ++++++++
   '\n'
   'Finished in 0.000 seconds\n'
   '\n'
   '1 test(s), 1 assertion(s), 1 failure(s), 0 error(s), 0 pending(s), 0 '
   'omission(s), 0 notification(s)\n')

Finished in 0.278 seconds

63 test(s), 89 assertion(s), 4 failure(s), 0 error(s), 1 pending(s), 1 omission(s), 0 notification(s)

With python2:
MacBook-Pro-8:pikzie shopetan$ python2 test/run-test.py

..................P..O.........................................

1) Pending: TestAssertions.test_assert_search_syslog_call: can't read /var/log/messages.
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:561: self.pend("can't read /var/log/messages.")

2) Omission: TestAssertions.test_kernel_symbol: only for Linux environment
/Users/shopetan/b3/springAB/sezemi/2015_0627/pikzie/test/test_assertions.py:619: self.omit("only for Linux environment")

Finished in 0.262 seconds

63 test(s), 93 assertion(s), 0 failure(s), 0 error(s), 1 pending(s), 1 omission(s), 0 notification(s)

No errors occur with the python2 series.

Stumbling on pip install

[Facts]

$ pip2 install pikzie
fails with an error; the package cannot be installed normally.

MacBook-Pro-8:2015_0627 shopetan$ pip2 install pikzie
Collecting pikzie
  Downloading pikzie-1.0.1.tar.gz (75kB)
    100% |████████████████████████████████| 77kB 2.4MB/s 
    Complete output from command python setup.py egg_info:
    running egg_info
    creating pip-egg-info/Pikzie.egg-info
    writing pip-egg-info/Pikzie.egg-info/PKG-INFO
    writing top-level names to pip-egg-info/Pikzie.egg-info/top_level.txt
    writing dependency_links to pip-egg-info/Pikzie.egg-info/dependency_links.txt
    writing manifest file 'pip-egg-info/Pikzie.egg-info/SOURCES.txt'
    warning: manifest_maker: standard file '-c' not found
reading manifest file 'pip-egg-info/Pikzie.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pip-egg-info/Pikzie.egg-info/SOURCES.txt'
Usage: -c [options] [test_files]

-c: error: no such option: --egg-base
Exception KeyError: KeyError(140735271932688,) in <module 'threading' from '/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.pyc'> ignored

----------------------------------------

Command "python setup.py egg_info" failed with error code 2 in /private/var/folders/pw/y9z4j4y133v6sf6b8_z906km0000gn/T/pip-build-V9Hxy_/pikzie


In contrast,

pip3 install pikzie

installs successfully.

[Expectations]
A note in the documentation that installation is not possible with the pip2 series,
and a proposed fix.

The README's run instructions are missing a step

[Environment]
Mac OS X Mavericks

[Facts]
https://github.com/shopetan/pikzie/blob/master/README.ja#L102-105
https://github.com/shopetan/pikzie/blob/master/README#L102-105

"Invoking test/run-test.py as follows automatically loads the
test/test_*.py tests and runs the defined tests. ::"
% test/run-test.py

[Result]

MacBook-Pro-8:2015_0627 shopetan$ test/run-test.py
-bash: test/run-test.py: Permission denied

[Expectations]
Please add the step that grants execute permission to the documentation.
