stackless-dev / stackless

This project forked from python/cpython


The Stackless Python programming language

Home Page: http://www.stackless.com/

License: Other

C 37.54% C++ 0.73% Python 60.29% Shell 0.11% Batchfile 0.15% HTML 0.37% CSS 0.01% Roff 0.08% PLSQL 0.05% PowerShell 0.02% Objective-C 0.06% Makefile 0.06% Assembly 0.12% Common Lisp 0.05% M4 0.36% DTrace 0.01% Rich Text Format 0.01% VBScript 0.01% XSLT 0.01%

stackless's Issues

tasklet.kill : implementation details

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @krisvale on 2013-01-05 09:53:31)

tasklet.kill appears to clear the target tasklet. Is this necessary? This could be done to enable tasklet.kill to work on newly bound tasklets that haven't run yet.
We should simplify this and make sure that kill also works (silently) for not-yet-run tasklets and dead tasklets.
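
For illustration, a small sketch of the desired behaviour (not the current semantics; this is what the simplification above would guarantee):

#!python

import stackless

def job():
    pass

t = stackless.tasklet(job)()    # bound and scheduled, but it has not run yet
t.kill()                        # desired: works silently, t simply ends up dead

d = stackless.tasklet(job)()
stackless.run()                 # d runs to completion and is now dead
d.kill()                        # desired: silently a no-op on a dead tasklet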


Update hard-switching routines from greenlet

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @rmtew on 2012-07-23 03:05:10)

Greenlet has updated versions of Stackless' hard switching routines. It would be worthwhile to update these for Stackless where suitable.

One way in which it may not be suitable is licensing: Stackless is under the PSF license, while greenlet is under a confused mix of the MIT and PSF licenses, due to the unclear authorship of the Stackless-sourced files within it.

Any routines that are obviously derived from the Stackless versions should be PSF licensed.

Kristjan, should we consider this for the 3.3 release?


Unexpected increase of nesting level

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-12 15:30:16)

Hi,

thanks to my colleague Michael Bauer I was able to identify the following issue:

Version: 2.7-slp

If I define the following callable class

class C(object): pass
C.__call__ = some_function

and execute c=C(); c() in a tasklet, then some_function runs at nesting level 1.

If I change the definition of class C to

class C(object):
    __call__ = some_function

the nesting level does not increase.

The attached test script demonstrates this problem.
I would like to fix this issue for v2.7.4-slp.
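
For reference, a rough sketch of a test along these lines (this is not the attached script; names are illustrative):

#!python

import stackless

def some_function(self):
    return stackless.getcurrent().nesting_level

class C(object):
    pass
C.__call__ = some_function       # assigned after class creation

class D(object):
    __call__ = some_function     # assigned in the class body

def check():
    print "C()():", C()()        # reported: runs at nesting level 1
    print "D()():", D()()        # reported: the nesting level does not increase

stackless.tasklet(check)()
stackless.run()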


Stackless documentation on readthedocs.org

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


I have reserved stackless.readthedocs.org and successfully created http://stackless.readthedocs.org/en/latest/index.html
It requires a small patch to Doc/conf.py.

Currently the documentation is protected. You can access it if you know the URL.

I propose to publish the documentation for all versions of Stackless (2.7.x, 2.8, 3.2, 3.3) on readthedocs. It makes the documentation accessible and reduces the effort to publish it.


enhance tasklet.bind()

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


This issue proposes an enhancement to the method tasklet.bind(): it adds the functionality of tasklet.setup(), minus the implicit tasklet.insert(), to bind().

The details were discussed on the Stackless mailing list in the thread http://www.stackless.com/pipermail/stackless/2013-November/005899.html. The credits for the idea to enhance tasklet.bind() go to Kristján: http://www.stackless.com/pipermail/stackless/2013-November/005911.html.

Details

Currently tasklet.bind() requires a single positional argument. Kristján proposes to add two optional arguments. The signature of bind then becomes

def bind(self, function, args=None, keywords=None):

If both args and keywords are None, bind() behaves as before.
Otherwise, a non-None args and/or keywords implies a setup without scheduling the tasklet. In this case, if function is None, the value of self.tempval is used as the function, similar to tasklet.setup(). If self.tempval is None too, bind() raises RuntimeError('the tasklet was not bound to a function').
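
For illustration, usage with the proposed signature would look roughly like this (a sketch; the extended bind() does not exist yet at the time of writing):

#!python

import stackless

def my_function(a, b, flag=False):
    print a, b, flag

t = stackless.tasklet()
t.bind(my_function, (1, 2), {"flag": True})   # binds the arguments like setup(), but does not insert
t.insert()                                    # schedule explicitly when desired
stackless.run()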

With this change tasklet.bind() and tasklet.insert() become the "atomic" building blocks for tasklet creation. tasklet.setup() is then equivalent to

#!python

def setup(self, *args, **kw):
    self.bind(None, args, kw)
    return self.insert()

Plan

  1. Implement the proposal and appropriate unit tests
  2. Update the documentation in Doc/library/stackless/tasklets.rst and tasklet_state_chart.png
  3. Update Stackless/changelog.txt
  4. Port the change to Stackless version 3.x

Any objections?

tasklet_state.eap.zip


interpreter crash in test test_cmd_line

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-11-06 15:43:54)

Hi Kristján,

Change 5fb62dd1d4c6 causes a NULL-pointer access on Win32 in the Python test suite, test_cmd_line.

$ hg bisect --bad
The first bad revision is:
changeset:   82967:5fb62dd1d4c6
branch:      2.7-slp
user:        Kristjan Valur Jonsson <[email protected]>
date:        Fri Oct 25 12:00:17 2013 +0000
files:       Stackless/module/scheduling.c
description:
Don't silently ignore TaskletExit on the main tasklet.

This change causes an interpreter crash on Windows 32-bit in test_cmd_line.

Call stack:

>	python27.dll!tasklet_end(_object * retval=0x00000000)  Line 1221 + 0x5 bytes	C
 	python27.dll!slp_run_tasklet(_frame * f=0x01e5b128)  Line 1323 + 0x5 bytes	C
 	python27.dll!slp_eval_frame(_frame * f=0x01e5b128)  Line 317 + 0xa bytes	C
 	python27.dll!climb_stack_and_eval_frame(_frame * f=0x01e5b128)  Line 274 + 0x9 bytes	C
 	python27.dll!slp_eval_frame(_frame * f=0x01e5b128)  Line 303 + 0x6 bytes	C
 	python27.dll!PyEval_EvalCodeEx(PyCodeObject * co=0x02239728, _object * globals=0x01e5b128, _object * locals=0x00000000, _object * * args=0x0223c604, int argcount=2, _object * * kws=0x00000000, int kwcount=0, _object * * defs=0x0223f71c, int defcount=1, _object * closure=0x00000000)  Line 3695 + 0x6 bytes	C
 	python27.dll!function_call(_object * func=0x0227d970, _object * arg=0x0223c5f8, _object * kw=0x00000000)  Line 542 + 0x2a bytes	C
 	python27.dll!PyObject_Call(_object * func=0x0227d970, _object * arg=0x0223c5f8, _object * kw=0x00000000)  Line 2539 + 0x1c bytes	C
 	python27.dll!RunModule(char * module=0x002f3ed0, int set_argv0=1)  Line 192 + 0x9 bytes	C
 	python27.dll!Py_Main(int argc=7, char * * argv=0x002f3e58)  Line 587 + 0xc bytes	C
 	python.exe!__tmainCRTStartup()  Line 586 + 0x17 bytes	C
 	kernel32.dll!7563336a() 	
 	[Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]	
 	ntdll.dll!77dd9f72() 	
 	ntdll.dll!77dd9f45() 	

Commands to reproduce the crash:

hg update 5fb62dd1d4c6
cd PCbuild
build.bat -r
rt -q test_cmd_line

slpmodule_new() doesn't initialize extra members added by type_new()

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by ndade on 2013-06-08 02:47:27)

slpmodule_new() allocates the module using

        m = PyObject_GC_New(PySlpModuleObject, PySlpModule_TypePtr);

and that uses PySlpModule_TypePtr->tp_basicsize to determine how many bytes to allocate. On amd64, for example, we expect there to be 40 bytes allocated (sizeof(*m)), but in fact 48 are allocated[1] because tp_basicsize was set to 48 back when the SLP module type was derived in flextype.c. type_new() computed 48 by taking the base type's tp_basicsize of 40 and adding space for one more pointer to hold the weak reference list. Grep for "may_add_weak" in type_new() for the details.

Since this silently added field in *m is not initialized, when the slpmodule object is cleaned up during exit, if that field had a non-NULL value that value gets interpreted as the head of a list of PyWeakReference objects and garbage ensues.

Apparently with the default memory allocators, and perhaps since slpmodule is allocated pretty early, it was NULL. But I've been replacing the GC allocator with my own and there the memory was recycled and the pointer was non-NULL.

The fix is to init the entire memory/object allocated. Something like

@@ -806,6 +806,8 @@
        m->__tasklet__ = NULL;
        nameobj = PyString_FromString(name);
        m->md_dict = PyDict_New();
+        // set the extra fields to 0 (type_new() added a weak reference list field automatically)
+        memset(m+1, 0, PySlpModule_TypePtr->tp_basicsize - sizeof(*m));
        if (m->md_dict == NULL || nameobj == NULL)
                goto fail;
        if (PyDict_SetItemString(m->md_dict, "__name__", nameobj) != 0)

does the trick.

You can also reproduce this by making the base memory allocator memset() the memory to random values.

[1] 48 is not including the Py_GC_Head header, which adds another 24 or 32 bytes depending on alignment requirements.


Infinite recursion crash

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-23 13:04:22)

The following code leads to an infinite recursion.

stackless.enable_softswitch(True)

class A(object):
    def __call__(self): pass # work around issue #18
A.__call__ = A()             # OK
a=A()                        # OK
a()                          # Crash

This bounces between typeobject.c slot_tp_call() and PyObject_Call() without
ever hitting eval_frame().

The infinite recursion does not happen with CPython. The relevant difference is a change
in PyObject_Call() (http://svn.python.org/view?view=revision&revision=76528).

I'll commit a fix soon.


inconsistent slp_switch_stack.h

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-07 09:58:27)

Version: branch 2.7-slp

The generated file Stackless/platf/slp_switch_stack.h is not consistent with its source files.
This is not a serious problem, because the file isn't used.

We should either remove it or recreate it.

If we want to recreate the file, we need a small patch to the mkswitch_stack.py script,
because it is not compatible with recent changes to switch_amd64_unix.h.

I would prefer to remove this file from branch 2.7-slp.


Compiler warnings about unused values

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-07 19:54:40)

Version: branch 2.7-slp
Linux amd64
Compiler: clang 3.2

I get many similar warnings about unused expression results for two macros (STACKLESS_PROMOTE_ALL, STACKLESS_UNPACK) and one warning about an unused variable (Stackless/module/taskletobject.c:84:20: warning: unused variable 'ts'). The fixes are straightforward. I'll commit them to 2.7-slp.

Fortunately clang doesn't emit any other warnings. :-)


Make an atomic context manager

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @ctismer on 2013-01-27 03:16:39)

Thinking of this example of atomic (taken as-is)

#!python
def acquire_lock(self):
  old = stackless.setatomic(1)
  if self.free:
    self.free = False
  else:
    self.channel.receive()
  stackless.setatomic(old)

I felt it would make sense to make a context manager:

#!python
def acquire_lock(self):
  with stackless.atomic():
    if self.free:
      self.free = False
    else:
      self.channel.receive()

See http://www.python.org/dev/peps/pep-0343/
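
For illustration, a pure-Python sketch of such a context manager, reusing the setatomic() call from the example above (a real builtin would presumably be implemented in C):

#!python

import contextlib
import stackless

@contextlib.contextmanager
def atomic():
    # Sketch only: restore the previous atomic flag even if the body raises,
    # reusing the setatomic() call from the example above.
    old = stackless.setatomic(1)
    try:
        yield
    finally:
        stackless.setatomic(old)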

What do you think?

  • Does it make sense to add that?
  • Do we need any arguments?
  • Does it make sense as a builtin?

Does the syntax make sense, or is there a use case that operates on anything other than
stackless.getcurrent()?


crash on exit in slp_kill_tasks_with_stacks

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by mconst on 2013-04-26 23:45:44)

With the current stackless 2.7-slp, the following code crashes on exit:

import stackless

stackless.tasklet(stackless.test_cframe)(1)
stackless.schedule()

It's crashing in slp_kill_tasks_with_stacks, where it calls SLP_CHAIN_REMOVE to remove the task from the runqueue:

/* unlink from runnable queue if it wasn't previously remove()'d */
if (t->next && t->prev) {
    task = t;
    chain = &task;
    SLP_CHAIN_REMOVE(PyTaskletObject, chain, task, next, prev);
}

The problem is that this code is calling SLP_CHAIN_REMOVE incorrectly. "task" is supposed to be an output parameter, not input -- and it can't be an alias of *chain, or else SLP_CHAIN_REMOVE will think the list is empty and crash. The code should be:

/* unlink from runnable queue if it wasn't previously remove()'d */
if (t->next && t->prev) {
    chain = &t;
    SLP_CHAIN_REMOVE(PyTaskletObject, chain, task, next, prev);
    t = task;
}

I've attached a patch. With this patch, the program above works for me.


Compile error with STACKLESS_OFF defined

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-05-17 11:23:57)

Current tip of branch 2.7-slp

gcc -pthread -c -fno-strict-aliasing -DSTACKLESS_FRHACK=0 -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes  -I. -IInclude -I./Include -I./Stackless -DSTACKLESS_OFF  -DPy_BUILD_CORE -o Objects/descrobject.o Objects/descrobject.c
Objects/descrobject.c:523: error: static declaration of ‘PyMemberDescr_Type’ follows non-static declaration
Include/descrobject.h:79: note: previous declaration of ‘PyMemberDescr_Type’ was here
Objects/descrobject.c:563: error: static declaration of ‘PyGetSetDescr_Type’ follows non-static declaration
Include/descrobject.h:78: note: previous declaration of ‘PyGetSetDescr_Type’ was here
make: *** [Objects/descrobject.o] Error 1

This failure is caused by change http://svn.python.org/view/python/trunk/Objects/descrobject.c?r1=64048&r2=71734&pathrev=71734 which exposed the descriptor type objects in CPython as well.

I'll commit a fix to 2.7-slp prior to releasing v2.7.4-slp.


Make 2.8-slp compatible with PEP 0404 discussion

Originally reported by: Anonymous


The PEP 0404 discussion was quite controversial.
As a result, we should not refer to a certain Python version that does not exist
(by definition of PEP 0404).
Instead, we will use the name "stackless 2.8".

Revert changes that are not OK, map them to 2.7-slp, and only comment on the 2.8 additions, based upon 2.7-slp.

Current approach:

  • Stackless 2.8 is based upon Stackless 2.7.X

  • it always reflects the latest changes of 2.7.X-slp

  • all additions in Stackless 2.8 are Python enhancements back-ported from Python 3.X


gcc 4.7.2 miscompiles stackless

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-07 09:39:02)

Compilers constantly improve. When I tried to build Stackless 2.7.4rc1 with gcc 4.7.2 the stackless unittests didn't terminate. Python went into an endless loop just before terminating.

It turned out that gcc over-optimised climb_stack_and_transfer(): the compiler removed the alloca call and then performed a tail-recursion optimisation. Pretty cool. :-)
Details: linux amd64, gcc 4.7.2, options "-O2". The relevant command line switches are
-foptimize-sibling-calls
-ftree-vrp
-ftree-dce
Because this optimisation does not depend on the architecture, I suspect that this problem affects
other architectures and stackless versions too.

I see two possibilities to fix this issue:

    1. Add some #pragmas or specific compiler switches to inhibit the optimisation.
    2. Use the pointer returned from alloca, i.e. store the pointer in a global variable.

I prefer option 2, because it is less compiler-specific, and the overhead of an additional write is negligible.


thread state (exception) not preserved

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


I'm currently working on improved Stackless support for the
PyDev debugger. I'm going to create tickets for a few shortcomings.

First, there is an issue with thread state preservation. If a thread state has an exception and tracing is enabled, the exception won't be preserved on a soft switch.

I extended the test Stackless/unittests/test_tstate.py. The new test case
test_tstate.TestTracingState.testExceptionAndTraceState tests this situation. Currently the test would fail and is skipped.

The problem is IMHO caused by a suboptimal implementation of the function slp_schedule_task_prepared(). I'm going to prepare a patch.


Traced tasklet is unpicklable

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


Pickling of a traced tasklet fails with an exception. This is demonstrated by the test case test_tstate.TestTracingState.testUnpickledTracingState

Traceback (most recent call last):
  File "E:\fg2\stackless\fg2python\Stackless\unittests\test_tstate.py", line 153, in testUnpickledTracingState
    self._testTracingOrProfileState(do_pickle=True, do_trace=True)
  File "E:\fg2\stackless\fg2python\Stackless\unittests\test_tstate.py", line 118, in _testTracingOrProfileState
    p.dump(t)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 231, in dump
    self.save(obj)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 338, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 426, in save_reduce
    save(state)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 293, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 569, in save_tuple
    save(element)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 293, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 607, in save_list
    self._batch_appends(iter(obj))
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 640, in _batch_appends
    save(x)
  File "C:\kruis_E\fg2\stackless\fg2python\lib\pickle.py", line 313, in save
    rv = reduce(self.proto)
ValueError: frame exec function at 1e1f9bb0 is not registered!

Our documentation states: "It should be possible to pickle any tasklets that you might want to." Therefore this exception counts as a bug. The frame exec function is restore_tracing() from scheduling.c.
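
For reference, a minimal sketch, assuming the behaviour described above, that should trigger the failure:

#!python

import pickle
import sys
import stackless

def traced_task():
    sys.settrace(lambda frame, event, arg: None)  # enable tracing in this tasklet
    stackless.schedule_remove()                   # soft switch away, leaving a restore_tracing cframe

t = stackless.tasklet(traced_task)()
stackless.run()                                   # t is now suspended with tracing active
pickle.dumps(t)                                   # ValueError: frame exec function ... is not registered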

I see two different ways to fix this problem:

  1. Register (and rename) the function restore_tracing() like slp_restore_exception().
  2. Skip a restore_tracing cframe while building the frame-list in tasklet_reduce. (We should also skip the trace function of regular frames in frameobject_reduce() in prickelpit.c.)

Option 1 serialises the tracing state of a tasklet. Option 2 disables tracing in the serialised tasklet.

I'm not sure which option is appropriate. Let's look at typical use cases:

UC1: Save a tasklet and resume it later in another process. If you trace the first process, that does not usually imply that you want to trace the second process too. Furthermore, a trace function (or, more likely, a bound method) might be unpickleable.
Option 2 seems appropriate.

UC2: Post-mortem-analysis: save a tasklet and inspect it later. Here I'd like to see every detail. Option 1 seems appropriate.

I can't decide which way to go tonight. Perhaps we need both options.


StopIteration used incorrectly by stackless

Originally reported by: Kristján Valur Jónsson (Bitbucket: krisvale, GitHub: kristjanvalur)


I just got this error and spent a long time looking for it:
(<type 'exceptions.StopIteration'>, StopIteration('the main tasklet is receiving
without a sender available.',), <traceback object at 0x0289C030>)

This error disappeared because it was raised inside a generator context manager, and that context manager was expecting it.

I don't think we should be raising this error. StopIteration should be used by the iterator protocol only, i.e. when doing next() on a closed channel.

Similarly, doing channel.close() will cause a channel.send() to raise StopIteration as well. This is not good. I'd suggest EOFError, or we could use GeneratorExit, which is what "yield" raises if someone has closed a generator.
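
A minimal sketch of the closed-channel case mentioned above (behaviour as described in this report; not independently verified):

#!python

import stackless

ch = stackless.channel()
ch.close()
try:
    ch.send(42)          # per the report above, this surfaces as StopIteration
except StopIteration:
    print "send() on a closed channel raised StopIteration"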


stackless python 2.7.3+: test.test_xrange fails

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-02-01 19:47:11)

I'm using stackless python branch 2.7-slp from http://hg.python.org/stackless/

Since the merge of changeset 79317:bff269ee7288 the test test_xrange fails. This is caused by the stackless specific code for xrange objects in Stackless/pickling/prickelpit.c around line 1880-1925. This code predates the improved pickling support for xrange objects in plain CPython 2.6 and 3.0. See http://bugs.python.org/issue2582

I propose to remove the xrange-specific code in prickelpit.c. See the attached patch.


tasklet.raise_exception

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @krisvale on 2013-01-05 09:36:55)

This little-used API was broken in 3.x. Is it needed? If so, we should make it work similarly to kill. It is completely unexpected that the exception be raised on the calling tasklet if the target is either dead or hasn't run yet.

What about the latter case? Is it OK to run such tasklets, or should they be treated as uncaught exceptions, percolated to the main tasklet in this case?


Enhancement: make it possible to retrieve the current channel/schedule callbacks

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


Currently it is not possible to query the channel or schedule callbacks. For a remote debugger that attaches itself late to already running code, it would be very useful to get the current callbacks, if any are installed.

I propose to modify the functions stackless.set_channel_callback(callable) / set_schedule_callback(callable) to return the previous value of the callback or None if none was installed.

Additionally I propose to add new functions stackless.get_channel_callback() / get_schedule_callback(). These functions would return the current value.

If this change is OK, I can provide an implementation including test case and documentation.
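
For illustration, the proposed API would be used roughly like this (a sketch; get_channel_callback()/get_schedule_callback() and the return value of the setters do not exist yet):

#!python

import stackless

def my_schedule_cb(prev, next):
    pass

previous = stackless.set_schedule_callback(my_schedule_cb)   # proposed: returns the old callback or None
assert stackless.get_schedule_callback() is my_schedule_cb   # proposed new getter
stackless.set_schedule_callback(previous)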


New tasklet attributes tasklet.trace_function and tasklet.profile_function

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


I added two attributes to class tasklet: tasklet.trace_function and tasklet.profile_function. These attributes are the tasklet counterparts of the standard functions sys.gettrace(), sys.settrace(), sys.getprofile() and sys.setprofile(). With these attributes it is now possible to control tracing completely using the schedule callback. An example is given in the documentation and in Stackless/demo/tracing.py.

The implementation also changes slp_schedule_task_prepared / slp_restore_tracing to modify tracing related members of PyThreadState only with official API functions. This prevents a ref-counting problem and prevents incorrect values of the global flag _Py_TracingPossible in ceval.c
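
A small sketch along the lines of Stackless/demo/tracing.py (assuming the new attributes behave as described above; details may differ from the actual demo):

#!python

import stackless

def trace(frame, event, arg):
    print event, frame.f_code.co_name
    return trace

def schedule_cb(prev, next):
    # Install the trace function only on the tasklet that is about to run.
    if next is not None and not next.is_main:
        next.trace_function = trace

stackless.set_schedule_callback(schedule_cb)
stackless.tasklet(lambda: None)()
stackless.run()
stackless.set_schedule_callback(None)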


tasklet.raise : new semantics

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @krisvale on 2013-01-05 09:57:14)

If we want to keep tasklet.raise_exception, then we should consider changing its semantics to be similar to channel.send_throw, or adding another API that does that. The current API of just taking an exception class and value is too limited, since it doesn't support passing already-existing exceptions on to target threads, nor passing tracebacks.


Exception in schedule callback -> Assertion failed, taskletobject.c, line 51

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


By chance I just discovered another crash in current 2.7-slp. I added a test case with @Skip.

With a debug build of python (Windows, 32bit) I get the following assertion error:
Assertion failed: ts->st.current == NULL, file ..\Stackless\module\taskletobject.c, line 51

Test Case: test_defects.TestExceptionInScheduleCallback

#!python

class TestExceptionInScheduleCallback(StacklessTestCase):
    # Problem
    # Assertion failed: ts->st.current == NULL, file ..\Stackless\module\taskletobject.c, line 51
    # 
    def scheduleCallback(self, prev, next):
        if next.is_main:
            raise RuntimeError("scheduleCallback")
    
    @unittest.skip('crashes python')
    def testExceptionInScheduleCallback(self):
        stackless.set_schedule_callback(self.scheduleCallback)
        self.addCleanup(stackless.set_schedule_callback, None)
        stackless.tasklet(lambda:None)()
        stackless.run()


new method tasklet.bind_thread()

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-11-28 10:46:33)

Thanks to Kristján, Stackless Python got a new method: tasklet.bind_thread(). I won't discuss the motivation and the implementation details of this method here; it's all in this thread on the Stackless mailing list:
http://www.stackless.com/pipermail/stackless/2013-November/005869.html

The purpose of this ticket is to add all the missing details we need for the next release (2.7.6) of Stackless.

  • document the method in Doc/library/stackless/tasklets.rst
  • update the documentation in Doc/library/stackless/threads.rst to clearly describe the relation between
    a tasklet and a thread:
    • A tasklet always has an associated thread. This thread is identified by the property thread_id
    • The thread_id of a tasklet changes only if the method bind_thread() is called or if the associated thread terminates.
      In the latter case the new thread is ... \
      @KristJán here I need your input: I noticed that the thread id changes to the main thread,
      but what happens if the main thread terminates while other threads are still active?
  • update Stackless/changelog.txt

I'll take care of these points myself, but I need help from Kristján to add the missing information.
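
A heavily hedged sketch of the intended use, as I understand it from the mailing-list thread (the exact constraints on the tasklet are among the details still to be documented here):

#!python

import threading
import stackless

def job():
    stackless.schedule_remove()      # park the tasklet (soft-switched, no C stack)
    print "resumed on", threading.current_thread().name

t = stackless.tasklet(job)()
stackless.run()                      # job() parks itself on the main thread

def adopt():
    t.bind_thread()                  # assumed: re-binds the parked tasklet to the calling thread
    t.insert()
    stackless.run()

threading.Thread(target=adopt, name="worker").start()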


Do not kill deletable tasklets in PyThreadState_Clear

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-06-03 11:49:35)

Problem

If you run a tasklet only partially (the tasklet calls stackless.schedule_remove()) and then pickle the tasklet for later execution, you do not want Python to kill the tasklet during PyThreadState_Clear.

The following test code demonstrates the problem:

import stackless
import threading

finally_run_count=0

def task():
    global finally_run_count
    try:
        stackless.schedule_remove(None)
    finally:
        finally_run_count += 1

def run():
    global tasklet
    t = stackless.tasklet(task)
    t()
    stackless.run()
    
    # if you comment the next line,
    # then t will be garbage collected 
    # at the end of this function. 
    tasklet = t

thread = threading.Thread(target=run)
thread.start()
thread.join()
tasklet = None

print "finally_run_count: %d" % (finally_run_count,)

This script emits "finally_run_count: 1". If you comment out the line tasklet = t, Python deletes the tasklet t during the execution of the thread. As a consequence, the finally clause of the function task won't be called and the script prints "finally_run_count: 0". Otherwise, the tasklet still exists when Python clears the thread state. In this case PyThreadState_Clear calls slp_kill_tasks_with_stacks, which kills the tasklet.

Unfortunately it is not always easily possible to delete the tasklet in time. For instance, an object that is part of a not yet collected reference cycle can reference the tasklet and delay its destruction. (This happened in our application flowGuide. Depending on the execution of the garbage collector, a tasklet was either killed or simply deleted.)

I propose to change the C function slp_kill_tasks_with_stacks() to not kill a tasklet if both of the following conditions are met:

  • The tasklet can be deleted (the C function tasklet_has_c_stack() returns 0).
  • The tasklet is not scheduled (one of stackless.schedule_remove() or tasklet.remove() has been called).

Does this change make sense? Are there any pitfalls?


Crash during shutdown with patch

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-01-24 16:13:06)

Setup

I'm working with Stackless Python version 2.7, compiled from a Mercurial sandbox. Current changeset: f34947c81d3e+ (2.7-slp)
OS: Windows 7, Compiler: VS 2008 Professional, build target is x86 release with optimisation turned off.

Testcase

My test code is huge and confidential, and the crash disappears if I make small modifications. The crash happens in about 1 of 5 test runs.

Details

  • The windows error message is always: "Unhandled exception at 0x77df15de (ntdll.dll) in _fg2python.exe: 0xC0000005: Access violation reading location 0x00000018."
  • The crash does not occur with versions before changeset 74135:ac70790fa499
  • The crash occurs with every version that includes 74135:ac70790fa499
  • The code does not involve tasklet switching.
  • The crash does not occur if I make one of the following modifications:
    • disable atexit processing
    • call gc.collect() within atexit
    • call stackless.enable_softswitch(False) during program startup
  • I can't reproduce the crash on Linux 64bit

I'm fairly confident that I can explain and fix the problem. Look at the call stack:

Call Stack (innermost frame first)

ntdll.dll!_ZwRaiseException@12()  + 0x12 bytes	
ntdll.dll!_ZwRaiseException@12()  + 0x12 bytes	

python27.dll!_string_tailmatch()  Line 2900 + 0x1c bytes	C

This line varies between test runs. The arguments on the stack usually don't match the code location.

python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x0318d8c0, int throwflag=0, _object * retval=0x1e320030) Line 964 + 0x11 bytes C
frame: co_filename "f:\fg2\eclipsews\fg2py\arch\win32\bin..\libexec\lib\linecache.py", co_name "updatecache" f_lasti=72

python27.dll!PyEval_EvalFrame_value(_frame * f=0x0328a040, int throwflag=0, _object * retval=0x1e320030)  Line 3271 + 0x1a bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x0328a040, int throwflag=0, _object * retval=0x1e320030)  Line 964 + 0x11 bytes	C
python27.dll!PyEval_EvalFrame_value(_frame * f=0x03289ec8, int throwflag=0, _object * retval=0x1e320030)  Line 3271 + 0x1a bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x03289ec8, int throwflag=0, _object * retval=0x1e320030)  Line 964 + 0x11 bytes	C
python27.dll!PyEval_EvalFrame_value(_frame * f=0x031c74e8, int throwflag=0, _object * retval=0x1e320030)  Line 3271 + 0x1a bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x031c74e8, int throwflag=0, _object * retval=0x1e320030)  Line 964 + 0x11 bytes	C
python27.dll!PyEval_EvalFrame_value(_frame * f=0x03289d50, int throwflag=0, _object * retval=0x1e320030)  Line 3271 + 0x1a bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x03289d50, int throwflag=0, _object * retval=0x1e320030)  Line 964 + 0x11 bytes	C
python27.dll!PyEval_EvalFrame_value(_frame * f=0x03289bd8, int throwflag=0, _object * retval=0x1e320030)  Line 3271 + 0x1a bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x03289bd8, int throwflag=0, _object * retval=0x1e320030)  Line 964 + 0x11 bytes	C
python27.dll!slp_eval_frame_newstack(_frame * f=0x03289bd8, int exc=0, _object * retval=0x1e320030)  Line 470 + 0x11 bytes	C
python27.dll!PyEval_EvalFrameEx_slp(_frame * f=0x03289bd8, int throwflag=0, _object * retval=0x1e320030)  Line 910 + 0x11 bytes	C
python27.dll!slp_frame_dispatch(_frame * f=0x03289bd8, _frame * stopframe=0x031d7f48, int exc=0, _object * retval=0x1e320030)  Line 737 + 0x16 bytes	C
python27.dll!PyEval_EvalCodeEx(PyCodeObject * co=0x023d23c8, _object * globals=0x02435a50, _object * locals=0x00000000, _object * * args=0x03249adc, int argcount=1, _object * * kws=0x00000000, int kwcount=0, _object * * defs=0x00000000, int defcount=0, _object * closure=0x00000000)  Line 3561 + 0x16 bytes	C
python27.dll!function_call(_object * func=0x024429b0, _object * arg=0x03249ad0, _object * kw=0x00000000)  Line 542 + 0x3a bytes	C
python27.dll!PyObject_Call(_object * func=0x024429b0, _object * arg=0x03249ad0, _object * kw=0x00000000)  Line 2539 + 0x3e bytes	C
python27.dll!PyObject_CallFunctionObjArgs(_object * callable=0x024429b0, ...)  Line 2786 + 0xf bytes	C
python27.dll!handle_weakrefs(_gc_head * unreachable=0x1e34c9f0, _gc_head * old=0x1e2f0d38)  Line 752 + 0xf bytes	C
python27.dll!collect(int generation=0)  Line 1025 + 0xe bytes	C
python27.dll!collect_generations()  Line 1097 + 0x9 bytes	C
python27.dll!_PyObject_GC_Malloc(unsigned int basicsize=44)  Line 1559	C
python27.dll!PyType_GenericAlloc(_typeobject * type=0x00397f70, int nitems=0)  Line 754 + 0x9 bytes	C
python27.dll!PyTasklet_New(_typeobject * type=0x00397f70, _object * func=0x00000000)  Line 218 + 0x13 bytes	C
python27.dll!tasklet_new(_typeobject * type=0x00397f70, _object * args=0x01d58030, _object * kwds=0x00000000)  Line 282 + 0xd bytes	C
python27.dll!initialize_main_and_current()  Line 1028 + 0x1c bytes	C
python27.dll!slp_run_tasklet()  Line 1231 + 0xe bytes	C
python27.dll!slp_eval_frame(_frame * f=0x031d7f48)  Line 313 + 0x5 bytes	C
python27.dll!climb_stack_and_eval_frame(_frame * f=0x031d7f48)  Line 274 + 0x9 bytes	C
python27.dll!slp_eval_frame(_frame * f=0x031d7f48)  Line 303 + 0x9 bytes	C
python27.dll!PyEval_EvalCodeEx(PyCodeObject * co=0x023a4848, _object * globals=0x01fddae0, _object * locals=0x00000000, _object * * args=0x01d5803c, int argcount=0, _object * * kws=0x00000000, int kwcount=0, _object * * defs=0x00000000, int defcount=0, _object * closure=0x00000000)  Line 3564 + 0x9 bytes	C
python27.dll!function_call(_object * func=0x023b20b0, _object * arg=0x01d58030, _object * kw=0x00000000)  Line 542 + 0x3a bytes	C
python27.dll!PyObject_Call(_object * func=0x023b20b0, _object * arg=0x01d58030, _object * kw=0x00000000)  Line 2539 + 0x3e bytes	C
python27.dll!PyEval_CallObjectWithKeywords(_object * func=0x023b20b0, _object * arg=0x01d58030, _object * kw=0x00000000)  Line 4219 + 0x11 bytes	C

func: co_filename "f:\fg2\eclipsews\fg2py\arch\win32\libexec\lib\atexit.py", co_name "_run_exitfuncs"

python27.dll!call_sys_exitfunc()  Line 1778 + 0xd bytes	C
python27.dll!Py_Finalize()  Line 433	C
python27.dll!Py_Main(int argc=3, char * * argv=0x00391b10)  Line 683	C
_fg2python.exe!__tmainCRTStartup()  Line 586 + 0x17 bytes	C
kernel32.dll!76b233aa() 	
[Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]	
ntdll.dll!___RtlUserThreadStart@8()  + 0x27 bytes	
ntdll.dll!__RtlUserThreadStart@8()  + 0x1b bytes	

IMHO the crash is caused by the interpreter recursion

slp_run_tasklet() -> initialize_main_and_current() -> tasklet_new() -> PyTasklet_New() -> PyType_GenericAlloc() -> _PyObject_GC_Malloc() -> collect_generations() -> collect() -> handle_weakrefs() -> PyObject_CallFunctionObjArgs() -> ...

If I disable the garbage collector in initialize_main_and_current() during the execution of tasklet_new(), the crash does not occur (see attached patch).

Open questions:

  • Why did the bug not occur with builds prior to changeset 74135:ac70790fa499?
  • What is the exact mechanism of the access violation? Where is the location 0x00000018 coming from?
  • Why didn't I observe similar issues on Linux 64-bit?

I can't answer these questions, because my understanding of the internal workings of ceval.c is limited.

Could anybody please review the patch? Is there a better way to disable the GC? Unfortunately there is no C API for gc.isenabled(), gc.disable() and gc.enable().


Unused static functions in Stackless/module/channelobject.c

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-04-07 19:21:52)

In preparation for slp 2.7.4 I'm testing various compilers. I'm going to create tickets for warnings.

Version: branch 2.7-slp
Linux amd64
Compiler: clang 3.2
$ ./configure --enable-unicode=ucs4 --prefix=/tmp/slp27 CC=clang CPP='clang -E' CXX='clang++'

clang -pthread -c -fno-strict-aliasing -DSTACKLESS_FRHACK=0 -OPT:Olimit=0 -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes  -I. -IInclude -I./Include -I./Stackless   -DPy_BUILD_CORE -o Stackless/module/channelobject.o Stackless/module/channelobject.c
Stackless/module/channelobject.c:76:1: warning: unused function 'slp_channel_has_tasklet' [-Wunused-function]
slp_channel_has_tasklet(PyChannelObject *channel,
^
Stackless/module/channelobject.c:731:32: warning: unused function 'wrap_channel_send_throw' [-Wunused-function]
static CHANNEL_SEND_THROW_HEAD(wrap_channel_send_throw)
                               ^
Stackless/module/channelobject.h:8:13: note: expanded from macro 'CHANNEL_SEND_THROW_HEAD'
        PyObject * func (PyChannelObject *self, PyObject *exc, PyObject *val, PyObject *tb)
                   ^
2 warnings generated.

Kristjan, you contributed these functions. Are they simply dead code?


stackless python: test/test_multiprocessing.py fails

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2013-02-01 22:05:15)

As always with the tip of branch 2.7-slp

The test case "test_unpickleable_result" from Lib/test/test_multiprocessing.py fails. The failure is caused by an incorrect assumption in the test code that a lambda expression can't be pickled.

The attached patch fixes this issue.


slp_get_frame(PyTaskletObject *task) broken?

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


Either my understanding of Python threading is poor or slp_get_frame() is buggy. This is the current implementation.

#!python
PyFrameObject *
slp_get_frame(PyTaskletObject *task)
{
    PyThreadState *ts = PyThreadState_GET();

    return ts->st.current == task ? ts->frame : task->f.frame;
}

It uses the current thread state, not the thread state of the given tasklet (task->cstate->tstate). If this is a bug, it affects tasklet.alive and other properties.


Get rid of flextype

Originally reported by: Anonymous


About Flextype

The flextype module was an early approach to make method calls of classes
faster.

From the docstring in flextype.c :

"An extension type that supports cached virtual C methods"

Flextype is used for tasklets, channels and the stacklessmodule as the base type.

This trick worked very well to make methods very fast and to override them from C and from Python, with caching.
from C and from Python, with caching.

I think the time is ripe to replace flextype with regular heap types, because Python now has
good enough caching.

Another use-case was the ability to define the stacklessmodule with module methods,
something that I thought was missing.

Meanwhile it is easy to create such an object by using a class as a module surrogate
and exposing some methods. In other words: Flextype is no longer necessary and should go away.

At the same time I think it makes sense to abandon stacklessmodule as it is and replace it by a _stackless module that implements the bare minimum necessary.
All the rest should be implemented in a stackless.py module that makes things
nicer and supplies methods etc.

The _stackless module should, for instance, hold the current tracing functions to make
them quickly callable during a transfer. For retrieving those functions there is no need for C code; instead, the stackless.py module should provide a class and access functions to support introspection.
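
A tiny sketch of that module-surrogate trick (all names here are illustrative, not the actual proposed layout of stackless.py):

#!python

import sys
import _stackless                      # hypothetical minimal C core

class _StacklessModule(object):
    """Pure-Python surrogate that adds the convenience API on top of the C core."""
    tasklet = _stackless.tasklet
    channel = _stackless.channel

    def getcurrent(self):
        return _stackless.getcurrent()

sys.modules["stackless"] = _StacklessModule()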

My goals:

  • turn as much C code into Python code as possible, unless speed is a concern
  • replace special code by builtin code as much as possible


wrong stackless.current in schedule callback

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


stackless.run() switches from the main tasklet to another tasklet T. If a schedule callback is invoked for this switch, stackless.current is already T. This is incorrect: inspection of the code in slp_schedule_task_prepared() reveals that Stackless calls the schedule callback before the switch.

I added a test case and I also have a preliminary fix.
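
The test case is essentially a check along these lines (a sketch of the expectation, not the committed test):

#!python

import stackless

results = []

def cb(prev, next):
    # The callback runs before the switch, so stackless.getcurrent() should
    # still be prev; the reported bug is that it is already next.
    results.append(stackless.getcurrent() is prev)

stackless.set_schedule_callback(cb)
stackless.tasklet(lambda: None)()
stackless.run()
stackless.set_schedule_callback(None)
print results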


ABI incompatibility in Objects/structseq.c PyStructSequence_InitType

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by @akruis on 2012-09-14 21:28:30)

In Objects/structseq.c

void
PyStructSequence_InitType(PyTypeObject *type, PyStructSequence_Desc *desc)
{
....
memcpy(type, &_struct_sequence_template, sizeof(PyTypeObject));

Now if type comes from an extension module compiled against vanilla Python, Stackless writes its extensions into memory beyond the end of *type.

I'm not sure about my fix.


Adhere to the PSF Trademark Usage Policy

Originally reported by: Anselm Kruis (Bitbucket: akruis, GitHub: akruis)


It is a legal requirement to adhere to the PSF Trademark Usage Policy. Currently that is most certainly not the case.

I would like to update our Stackless specific documentation (rst-files, doc-strings and source code comments) to meet the conditions set by the PSF for usage of the word "python".

Plan:

  1. Summarise the rules for using the word "python" in Stackless.

  2. Update stackless specific *.rst files

  3. Update stackless specific doc strings

  4. Update other stackless specific documentation including source code comments

PythonTrademarkUsage.odt.zip

