
transwarp's Introduction

transwarp

Actions

Doxygen documentation

transwarp is a header-only C++ library for task concurrency. It allows you to easily create a graph of tasks where every task can be executed asynchronously. transwarp is written in C++17 and only depends on the standard library. Just copy include/transwarp.h to your project and off you go! Tested with GCC, Clang, ICC, and Visual Studio.

C++11 support can be enabled by defining TRANSWARP_CPP11 at compile time.

Important: Only use tagged releases of transwarp in production code!

Example

This example creates three tasks and connects them with each other to form a two-level graph. The tasks are then scheduled twice for computation while using 4 threads.

#include <fstream>
#include <iostream>
#include "transwarp.h"

namespace tw = transwarp;

int main() {
    double x = 0;
    int y = 0;

    // Building the task graph
    auto parent1 = tw::make_task(tw::root, [&x]{ return 13.3 + x; })->named("something");
    auto parent2 = tw::make_task(tw::root, [&y]{ return 42 + y; })->named("something else");
    auto child = tw::make_task(tw::consume, [](double a, int b) { return a + b; },
                               parent1, parent2)->named("adder");

    tw::parallel executor{4};  // Parallel execution with 4 threads

    child->schedule_all(executor);  // Schedules all tasks for execution
    std::cout << "result = " << child->get() << std::endl;  // result = 55.3

    // Modifying data input
    x += 2.5;
    y += 1;

    child->schedule_all(executor);  // Re-schedules all tasks for execution
    std::cout << "result = " << child->get() << std::endl;  // result = 58.8

    // Creating a dot-style graph for visualization
    std::ofstream{"basic_with_three_tasks.dot"} << tw::to_string(child->edges());
}

The resulting graph of this example looks like this:

graph

Every bubble represents a task and every arrow an edge between two tasks. The first line within a bubble is the task name. The second line denotes the task type followed by the task id and the task level in the graph.

API doc

This is a brief API doc of transwarp. For more details check out the doxygen documentation and the transwarp examples.

In the following we will use tw as a namespace alias for transwarp.

Creating tasks

transwarp supports seven different task types:

root, // The task has no parents
accept, // The task's functor accepts all parent futures
accept_any, // The task's functor accepts the first parent future that becomes ready
consume, // The task's functor consumes all parent results
consume_any, // The task's functor consumes the first parent result that becomes ready
wait, // The task's functor takes no arguments but waits for all parents to finish
wait_any, // The task's functor takes no arguments but waits for the first parent to finish

The task type is passed as the first parameter to make_task, e.g., to create a consume task simply do this:

auto task = tw::make_task(tw::consume, functor, parent1, parent2);

where functor denotes some callable and parent1/2 the parent tasks.

The functor as passed to make_task needs to fulfill certain requirements based on the task type and the given parents:

root: A task at the root (top) of the graph. This task gets executed first. A functor to a root task cannot have any parameters since this task does not have parent tasks, e.g.:

auto task = tw::make_task(tw::root, []{ return 42; });

Another way of defining a root task is a value task which can be created as:

auto task = tw::make_value_task(42);  

A value task doesn't require scheduling and always returns the same value or exception.
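
For instance, a minimal sketch (reusing the tw alias from above) showing that a value task can be queried without scheduling:

auto answer = tw::make_value_task(42);
std::cout << answer->get() << std::endl;  // prints 42 without any call to schedule()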

accept: This task is required to have at least one parent. It accepts the resulting parent futures as they are without unwrapping. Hence, the child can decide how to proceed since a call to get() can potentially throw an exception. Here's an example:

auto task = tw::make_task(tw::accept, [](auto f1, auto f2) { return f1.get() + f2.get(); }, parent1, parent2);

accept_any: This task is required to have at least one parent but its functor takes exactly one future, namely the future of the parent that first finishes. All other parents are abandoned and canceled. Here's an example:

auto task = tw::make_task(tw::accept_any, [](auto f1) { return f1.get(); }, parent1, parent2);

Note that canceling only works for already running tasks when the functor is sub-classed from transwarp::functor.

consume: This task follows the same rules as accept with the difference that the resulting parent futures are unwrapped (have get() called on them). The results are then passed to the child, hence, consumed by the child task. The child task will not be invoked if any parent throws an exception. For example:

auto task = tw::make_task(tw::consume, [](int x, int y) { return x + y; }, parent1, parent2);

consume_any: This task follows the same rules as accept_any with the difference that the resulting parent futures are unwrapped (have get() called on them). For example:

auto task = tw::make_task(tw::consume_any, [](int x) { return x; }, parent1, parent2);

wait: This task's functor does not take any parameters but the task must have at least one parent. It simply waits for completion of all parents while unwrapping futures before calling the child's functor. For example:

auto task = tw::make_task(tw::wait, []{ return 42; }, parent1, parent2);

wait_any: This task works similar to the wait task but calls its functor as soon as the first parent completes. It abandons and cancels all remaining parent tasks. For example:

auto task = tw::make_task(tw::wait_any, []{ return 42; }, parent1, parent2);

Generally, tasks are created using make_task which allows for any number of parents. However, it is a common use case for a child to only have one parent. For this, then() can be directly called on the parent object to create a continuation:

auto child = tw::make_task(tw::root, []{ return 42; })->then(tw::consume, functor);

child is now a single-parent task whose functor consumes an integer.

Scheduling tasks

Once a task is created it can be scheduled just by itself:

auto task = tw::make_task(tw::root, functor);
task->schedule();

which, if nothing else is specified, will run the task on the current thread. However, using the built-in parallel executor the task can be pushed into a thread pool and executed asynchronously:

tw::parallel executor{4};  // Thread pool with 4 threads
auto task = tw::make_task(tw::root, functor);
task->schedule(executor);

Regardless of how you schedule, the task result can be retrieved through:

std::cout << task->get() << std::endl;

When chaining multiple tasks together a directed acyclic graph is built in which every task can be scheduled individually. However, in many scenarios it is useful to compute all tasks in the right order with a single call:

auto parent1 = tw::make_task(tw::root, foo);  // foo is a functor
auto parent2 = tw::make_task(tw::root, bar);  // bar is a functor
auto task = tw::make_task(tw::consume, functor, parent1, parent2);
task->schedule_all();  // Schedules all parents and itself

which can also be scheduled using an executor, for instance:

tw::parallel executor{4};
task->schedule_all(executor);

which will run those tasks in parallel that do not depend on each other.

Executors

We have seen that we can pass executors to schedule() and schedule_all(). Additionally, they can be assigned to a task directly:

auto exec1 = std::make_shared<tw::parallel>(2);
task->set_executor(exec1);
tw::sequential exec2;
task->schedule(exec2);  // exec1 will be used to schedule the task

The task-specific executor will always be preferred over other executors when scheduling tasks.

transwarp defines an executor interface which can be implemented to perform custom behavior when scheduling tasks. The interface looks like this:

class executor {
public:
    virtual ~executor() = default;
    
    // The name of the executor
    virtual std::string name() const = 0;
    
    // Only ever called on the thread of the caller to schedule()
    virtual void execute(const std::function<void()>& functor, tw::itask& task) = 0;
};

where functor denotes the function to be run and task the task the functor belongs to.
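
As an illustration, here is a minimal sketch of a custom executor that runs the functor inline on the calling thread. The class name inline_executor is hypothetical and not part of transwarp:

class inline_executor : public tw::executor {
public:
    // The name of the executor
    std::string name() const override {
        return "inline_executor";
    }

    // Runs the packaged task work directly on the caller's thread
    void execute(const std::function<void()>& functor, tw::itask&) override {
        functor();
    }
};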

Range functions

There are convenience functions that can be applied to an iterator range:

  • tw::for_each
  • tw::transform

These are very similar to their standard library counterparts except that they return a task for deferred, possibly asynchronous execution. Here's an example:

std::vector<int> vec = {1, 2, 3, 4, 5, 6, 7};
tw::parallel exec{4};
auto task = tw::for_each(exec, vec.begin(), vec.end(), [](int& x){ x *= 2; });
task->wait();  // all values in vec will have doubled
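
tw::transform works analogously; here is a minimal sketch assuming the overload that takes an executor, an input range, an output iterator, and a functor (the same overload used in an issue report further below):

std::vector<int> in = {1, 2, 3, 4};
std::vector<int> out(in.size());
tw::parallel exec2{4};
auto squares = tw::transform(exec2, in.begin(), in.end(), out.begin(),
                             [](int x) { return x * x; });
squares->wait();  // out now contains {1, 4, 9, 16}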

Canceling tasks

A task can be canceled by calling task->cancel(true) which will, by default, only affect tasks that are not currently running yet. However, if you create a functor that inherits from transwarp::functor you can terminate tasks while they're running. transwarp::functor looks like this:

class functor {
public:
    virtual ~functor() = default;

protected:
    // The associated task (only to be called after the task was constructed)
    const tw::itask& transwarp_task() const noexcept;

    // The associated task (only to be called after the task was constructed)
    tw::itask& transwarp_task() noexcept;

    // If the associated task is canceled then this will throw transwarp::task_canceled
    // which will stop the task while it's running (only to be called after the task was constructed)
    void transwarp_cancel_point() const;

private:
    ...
};

By placing calls to transwarp_cancel_point() in strategic places of your functor you can denote well defined points where the functor will exit when the associated task is canceled. A task can also be canceled by throwing transwarp::task_canceled directly.
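
For illustration, a minimal sketch of such a functor; the class name accumulate_functor is hypothetical:

class accumulate_functor : public tw::functor {
public:
    long operator()() const {
        long sum = 0;
        for (int i = 0; i < 1000000; ++i) {
            transwarp_cancel_point();  // throws transwarp::task_canceled if the task was canceled
            sum += i;
        }
        return sum;
    }
};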

As mentioned above, tasks can be explicitly canceled on client request. In addition, all tasks considered abandoned by accept_any, consume_any, or wait_any operations are also canceled in order to terminate them as soon as their computations become superfluous.

Event system

Transwarp provides an event system that allows you to subscribe to all or specific events of a task, such as before-started or after-finished events. The task events are enumerated in the event_type enum:

enum class event_type {
    before_scheduled, // Just before a task is scheduled
    after_future_changed, // Just after the task's future was changed
    before_started, // Just before a task starts running
    before_invoked, // Just before a task's functor is invoked
    after_finished, // Just after a task has finished running
    after_canceled, // Just after a task was canceled
    after_satisfied, // Just after a task has satisfied all its children with results
    after_custom_data_set, // Just after custom data was assigned
};

Listeners are created by sub-classing from the listener interface:

class listener {
public:
    virtual ~listener() = default;

    // This may be called from arbitrary threads depending on the event type
    virtual void handle_event(tw::event_type event, tw::itask& task) = 0;
};

A listener can then be passed to the add_listener functions of a task to add a new listener or to the remove_listener functions to remove an existing listener.
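
For example, a minimal sketch of a listener that logs when tasks finish; the class name finish_logger is hypothetical:

class finish_logger : public tw::listener {
public:
    // Print a message whenever a task has finished running
    void handle_event(tw::event_type event, tw::itask&) override {
        if (event == tw::event_type::after_finished) {
            std::cout << "a task finished" << std::endl;
        }
    }
};

task->add_listener(std::make_shared<finish_logger>());  // subscribe to all events of this task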

Task pool

A task pool is useful when one wants to run the same graph in parallel. For this purpose, transwarp provides a task_pool which manages a pool of tasks from which one can request an idle task for parallel graph execution. For example:

tw::parallel exec{4};

auto my_task = make_graph();
tw::task_pool<double> pool{my_task};

for (;;) {
    auto task = pool.next_task(); // task may be null if the pool size is exhausted
    if (task) {
        task->schedule_all(exec);
    }
}
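
Once a task obtained from the pool has been scheduled and has run, its result can be read like that of any other task, e.g.:

double result = task->get();  // blocks until this run of the graph has finished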

Timing tasks

In order to identify bottlenecks it's often useful to know how much time is spent in which task. transwarp provides a timer listener that will automatically time the tasks it listens to:

auto task = make_graph();
task->add_listener_all(std::make_shared<tw::timer>()); // assigns the timer listener to all tasks
task->schedule_all();
std::ofstream{"graph.dot"} << tw::to_string(task->edges()); // the dot file now contains timing info

Optimizing efficiency

Compile time switches

By default, transwarp provides its full functionality to its client. However, in many cases not all of that is actually required and so transwarp provides a few compile time switches to reduce the task size. These switches are:

TRANSWARP_DISABLE_TASK_CUSTOM_DATA
TRANSWARP_DISABLE_TASK_NAME
TRANSWARP_DISABLE_TASK_PRIORITY
TRANSWARP_DISABLE_TASK_REFCOUNT
TRANSWARP_DISABLE_TASK_TIME

To get the minimal task size with a single switch one can define

TRANSWARP_MINIMUM_TASK_SIZE

at build time.
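
For example, assuming the macros are checked by transwarp.h at inclusion time, a switch can also be defined directly in code before the header is included:

#define TRANSWARP_MINIMUM_TASK_SIZE  // or pass -DTRANSWARP_MINIMUM_TASK_SIZE to the compiler
#include "transwarp.h"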

Releasing unused memory

By default, every task in a graph will keep its result until rescheduling or a manual task reset. The releaser listener allows you to automatically release a task result after that task's children have consumed the result. For example:

auto task = make_graph();
task->add_listener_all(std::make_shared<tw::releaser>()); // assigns the releaser listener to all tasks
task->schedule_all();
// All intermediate task results are now released (i.e. futures are invalid)
auto result = task->get(); // The final task's result remains valid

The releaser also accepts an executor that gives control over where a task's result is released.

Using transwarp with tipi.build

transwarp can be used in tipi.build projects by adding the following entry to your .tipi/deps:

{
    "bloomen/transwarp": { }
}

Feedback

Get in touch if you have any questions or suggestions to make this a better library! You can post on gitter, submit a pull request, create a GitHub issue, or simply email one of the contributors.

If you're serious about contributing code to transwarp (which would be awesome!) then please submit a pull request and keep in mind that:

  • unit tests should be added for all new code by extending the existing unit test suite
  • C++ code uses spaces throughout

transwarp's People

Contributors

acdemiralp, bloomen, chausner, guancodes, pysco68


transwarp's Issues

Schedule the same graph again

The documentation explicitly states that

It is currently not possible to schedule the same graph again while it is still running

then suggests a solution with graph_pool.

The developers of Adobe's stlab Concurrency library, a library that has a lot in common with transwarp, have worked on an upgrade of std::future to fix some of its shortcomings that affect this project (continuations, cancellation). Maybe it is worth a look, and worth providing a way to parametrize the future implementation in transwarp?

They also provide channels with which one can build graphs that can be used for multiple invocations; this may be a better alternative to graph_pool. Just a heads up and thanks for transwarp!

Measure unit test coverage

The goal is to run the tests and then do make coverage to generate a coverage report in HTML format. For this to work, we'd have to add a section to our CMake config.

`listener` and `executor` use `itask`

Currently, the listener and executor interfaces refer to a shared_ptr of node. This should be changed to a shared_ptr of itask for greater flexibility.

store parallel executor as a member variable causing crash

I would like to re-use transwarp's parallel executor for transforming data multiple times, so I store it as a class member variable like this:

class Transformer {
public:
  vector<int> transform(vector<int> &data) {
    vector<int> result(data.size());
    auto t = tw::transform(_exec, data.begin(), data.end(), result.begin(),
                           [&](int x) { return x * 2; });

    return result;
  }

private:
  tw::parallel _exec{1};
};

TEST_CASE("transwarp test") {
  Transformer t;
  auto data = vector<int>{1, 2, 3};
  auto r = t.transform(data);
  REQUIRE(r.size() == 3);
}

But this causes a crash on macOS Big Sur (11.3.1), with a stack trace like this:

Thread 1 Crashed:
0   libsystem_platform.dylib      	0x00007fff2061650c _os_unfair_lock_recursive_abort + 23
1   libsystem_platform.dylib      	0x00007fff20611125 _os_unfair_lock_lock_slow + 258
2   libsystem_malloc.dylib        	0x00007fff203f90e5 free_tiny + 134
3   tracing-tests                 	0x000000010bed15a5 std::__1::_DeallocateCaller::__do_call(void*) + 21 (new:334)
4   tracing-tests                 	0x000000010bed1559 std::__1::_DeallocateCaller::__do_deallocate_handle_size(void*, unsigned long) + 25 (new:292)
5   tracing-tests                 	0x000000010bf4fbf5 std::__1::_DeallocateCaller::__do_deallocate_handle_size_align(void*, unsigned long, unsigned long) + 85 (new:268)
6   tracing-tests                 	0x000000010bf4fb95 std::__1::__libcpp_deallocate(void*, unsigned long, unsigned long) + 37 (new:340)
7   tracing-tests                 	0x000000010bf78a0d std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >::deallocate(std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*, unsigned long) + 45 (memory:1673)
8   tracing-tests                 	0x000000010bf78865 std::__1::allocator_traits<std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > >::deallocate(std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >&, std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*, unsigned long) + 37 (memory:1408)
9   tracing-tests                 	0x000000010bf787f4 std::__1::__split_buffer<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >, std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >&>::~__split_buffer() + 100 (__split_buffer:350)
10  tracing-tests                 	0x000000010bf77fb5 std::__1::__split_buffer<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >, std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >&>::~__split_buffer() + 21 (__split_buffer:347)
11  tracing-tests                 	0x000000010bf779ba void std::__1::vector<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >, std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > >::__push_back_slow_path<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >(std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&&) + 186 (vector:1632)
12  tracing-tests                 	0x000000010bf77757 std::__1::vector<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >, std::__1::allocator<std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > >::push_back(std::__1::unique_ptr<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::default_delete<std::__1::basic_ostringstream<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&&) + 103 (vector:1659)
13  tracing-tests                 	0x000000010bf34a96 Catch::StringStreams::add() + 134 (catch.hpp:13662)
14  tracing-tests                 	0x000000010bf34979 Catch::ReusableStringStream::ReusableStringStream() + 73 (catch.hpp:13679)
15  tracing-tests                 	0x000000010bf1ab45 Catch::ReusableStringStream::ReusableStringStream() + 21 (catch.hpp:13681)
16  tracing-tests                 	0x000000010bf2b745 Catch::MessageStream::MessageStream() + 21 (catch.hpp:2623)
17  tracing-tests                 	0x000000010bf2b70a Catch::MessageBuilder::MessageBuilder(Catch::StringRef const&, Catch::SourceLineInfo const&, Catch::ResultWas::OfType) + 42 (catch.hpp:11782)
18  tracing-tests                 	0x000000010bf24bab Catch::MessageBuilder::MessageBuilder(Catch::StringRef const&, Catch::SourceLineInfo const&, Catch::ResultWas::OfType) + 43 (catch.hpp:11785)
19  tracing-tests                 	0x000000010bf249d9 Catch::AssertionStats::AssertionStats(Catch::AssertionResult const&, std::__1::vector<Catch::MessageInfo, std::__1::allocator<Catch::MessageInfo> > const&, Catch::Totals const&) + 297 (catch.hpp:11044)
20  tracing-tests                 	0x000000010bf24ced Catch::AssertionStats::AssertionStats(Catch::AssertionResult const&, std::__1::vector<Catch::MessageInfo, std::__1::allocator<Catch::MessageInfo> > const&, Catch::Totals const&) + 45 (catch.hpp:11038)
21  tracing-tests                 	0x000000010bf2f48a Catch::RunContext::assertionEnded(Catch::AssertionResult const&) + 330 (catch.hpp:12708)
22  tracing-tests                 	0x000000010bf308ec Catch::RunContext::handleFatalErrorCondition(Catch::StringRef) + 268 (catch.hpp:12831)
23  tracing-tests                 	0x000000010bf24320 (anonymous namespace)::reportFatal(char const*) + 64 (catch.hpp:10756)
24  tracing-tests                 	0x000000010bf2423f Catch::FatalConditionHandler::handleSignal(int) + 143 (catch.hpp:10850)
25  libsystem_platform.dylib      	0x00007fff20612d7d _sigtramp + 29
26  ???                           	0x0000000000008fd0 0 + 36816
27  libsystem_malloc.dylib        	0x00007fff203f97b3 tiny_free_no_lock + 1112
28  libsystem_malloc.dylib        	0x00007fff203f9219 free_tiny + 442
29  tracing-tests                 	0x000000010bf033b7 std::__1::default_delete<transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> > >::operator()(transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> >*) const + 55 (memory:2084)
30  tracing-tests                 	0x000000010bf03089 std::__1::__shared_ptr_pointer<transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> >*, std::__1::shared_ptr<transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> > >::__shared_ptr_default_delete<transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> >, transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> > >, std::__1::allocator<transwarp::detail::runner<void, transwarp::root_type, transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> 
>&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>, std::__1::tuple<> > > >::__on_zero_shared() + 89 (memory:3265)
31  tracing-tests                 	0x000000010bedbdfd std::__1::__shared_count::__release_shared() + 61 (memory:3169)
32  tracing-tests                 	0x000000010bedbd9f std::__1::__shared_weak_count::__release_shared() + 31 (memory:3211)
33  tracing-tests                 	0x000000010bedbd6c std::__1::shared_ptr<jaegertracing::Tracer>::~shared_ptr() + 44 (memory:3884)
34  tracing-tests                 	0x000000010becaad5 std::__1::shared_ptr<opentracing::v3::Tracer>::~shared_ptr() + 21 (memory:3882)
35  tracing-tests                 	0x000000010befb225 transwarp::detail::add_listener_visitor::~add_listener_visitor() + 21 (transwarp.h:1328)
36  tracing-tests                 	0x000000010befb0b5 transwarp::detail::add_listener_visitor::~add_listener_visitor() + 21 (transwarp.h:1328)
37  tracing-tests                 	0x000000010bf04205 std::__1::__compressed_pair_elem<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'(), 0, false>::~__compressed_pair_elem() + 21 (memory:1909)
38  tracing-tests                 	0x000000010bf04448 std::__1::__compressed_pair<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'(), std::__1::allocator<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'()> >::~__compressed_pair() + 24 (memory:1983)
39  tracing-tests                 	0x000000010bf04425 std::__1::__compressed_pair<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'(), std::__1::allocator<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'()> >::~__compressed_pair() + 21 (memory:1983)
40  tracing-tests                 	0x000000010bf06b15 std::__1::__function::__alloc_func<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'(), std::__1::allocator<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'()>, void ()>::destroy() + 21 (functional:1572)
41  tracing-tests                 	0x000000010bf057ce std::__1::__function::__func<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'(), std::__1::allocator<transwarp::detail::task_impl_base<void, transwarp::root_type, std::__1::shared_ptr<transwarp::task_impl<transwarp::wait_type, transwarp::no_op_functor, std::__1::vector<std::__1::shared_ptr<transwarp::task<void> >, std::__1::allocator<std::__1::shared_ptr<transwarp::task<void> > > > > > transwarp::transform<std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int)>(std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, std::__1::__wrap_iter<int*>, Transformer::transform(std::__1::vector<int, std::__1::allocator<int> >&)::'lambda'(int))::'lambda'()>::schedule_impl(bool, transwarp::executor*)::'lambda0'()>, void ()>::destroy() + 30 (functional:1709)
42  tracing-tests                 	0x000000010bee7b45 std::__1::__function::__value_func<void ()>::~__value_func() + 53 (functional:1839)
43  tracing-tests                 	0x000000010bee7b05 std::__1::__function::__value_func<void ()>::~__value_func() + 21 (functional:1837)
44  tracing-tests                 	0x000000010bee7ae5 std::__1::function<void ()>::~function() + 21 (functional:2542)
45  tracing-tests                 	0x000000010bee70c5 std::__1::function<void ()>::~function() + 21 (functional:2542)
46  tracing-tests                 	0x000000010bee5f3f transwarp::detail::thread_pool::worker(unsigned long) + 447 (transwarp.h:786)
47  tracing-tests                 	0x000000010bee899f decltype(*(std::__1::forward<transwarp::detail::thread_pool*>(fp0)).*fp(std::__1::forward<unsigned long>(fp1))) std::__1::__invoke<void (transwarp::detail::thread_pool::*)(unsigned long), transwarp::detail::thread_pool*, unsigned long, void>(void (transwarp::detail::thread_pool::*&&)(unsigned long), transwarp::detail::thread_pool*&&, unsigned long&&) + 143 (type_traits:3688)
48  tracing-tests                 	0x000000010bee8897 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (transwarp::detail::thread_pool::*)(unsigned long), transwarp::detail::thread_pool*, unsigned long, 2ul, 3ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (transwarp::detail::thread_pool::*)(unsigned long), transwarp::detail::thread_pool*, unsigned long>&, std::__1::__tuple_indices<2ul, 3ul>) + 87 (thread:280)
49  tracing-tests                 	0x000000010bee7fb6 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (transwarp::detail::thread_pool::*)(unsigned long), transwarp::detail::thread_pool*, unsigned long> >(void*) + 118 (thread:291)
50  libsystem_pthread.dylib       	0x00007fff205cd954 _pthread_start + 224
51  libsystem_pthread.dylib       	0x00007fff205c94a7 thread_start + 15

I have to make the executor a local variable instead of an instance member variable, like below:

class Transformer {
public:
  vector<int> transform(vector<int> &data) {
    vector<int> result(data.size());
    tw::parallel exec{1}; // use it as a local variable
    auto t =
        tw::transform(exec, data.begin(), data.end(), result.begin(), [&](int x) { return x * 2; });

    return result;
  }

private:
//    tw::parallel exec{1}; // have to comment it out
};

I searched all the tests in the repo but cannot find an example like this. Is the executor expected to be usable as an instance variable like above? If I make it a local variable, the thread pool will be launched every time I call transform. What would be the correct approach for reusing the thread pool across multiple transformations? Thanks.

Merge `node` into `itask`

The goal is to remove confusion between task and node. There's no significant argument anymore to keep node around so the goal is to merge its fields (minus executor) into itask and add corresponding accessor methods. itask may have to be renamed pending discussions.

Add support for C++11 on master branch

This would entail:

  • introduce a new compile time switch: TRANSWARP_CPP11
  • use that new switch to fallback to C++11's way of doing things
  • document that switch in the readme
  • delete transwarp1.X branch

This way our C++11 users keep getting the latest of transwarp while C++17/20 users enjoy a modern code base.

next() method documented but does not exist

Hello,

I saw this part in the documentation:

Generally, tasks are created using make_task which allows for any number of parents. However, it is a common use case for a child to only have one parent. For this, next() can be directly called on the parent object to create a continuation:
auto child = tw::make_task(tw::root, []{ return 42; })->next(tw::consume, functor);

I don't see anything related to this method in the code. Is it something planned or removed?

Great work btw

Comparison with cpp-taskflow

Thank you for your very hard work!

I keep an eye on this library and cpp-taskflow. Both libraries are very much alive and they seem powerful. I think a comparison between the two would be interesting.

I must admit that I have not done anything serious yet but I feel very fortunate and grateful for these projects. Variety in alternatives, approaches, and options makes us all stronger. Thanks again!

DJuego

Control lifetime of task results

The way I understand the current state of affairs is that all tasks (and their results) will live until the end of evaluating the whole task graph. For applications with sizable memory footprint, it would be nice to have a way to control the lifetime of results and mark intermediary objects that do not actually matter eventually as such, so that they can be cleaned up when they are not required by any further tasks.

Is there a mechanism to achieve this behavior?

readability and interface

Hi,

This library is really neat and tidy in general, which is why it seems a shame that some of the interfaces are somewhat clunky to read.

consider this example:

auto task = pool.wait_for_next_task(); // Get the next available task
auto input = task->tasks()[0];
static_cast<tw::task<std::shared_ptr<std::vector<double>>>*>(input)->set_value(data);

https://github.com/bloomen/transwarp/blob/master/examples/wide_graph_with_pool.cpp#L75-L77

From the name wait_for_next_task one would indeed expect this to return available/idle tasks in breadth order. However, the second line suggests you are getting an array of said tasks. Couldn't this be written as a more concise interface on a single line?

I understand that the cast on the third line is probably due to not knowing whether you are expecting a value or consumer task, but perhaps (since the intent here is to feed root/value tasks) something like wait_for_next_root/value_task would be a better interface, which could avoid the static_cast smudge in the example code.

How to handle sub task

Hello,

In the case where you create tasks inside a task:

auto main_task = tw::make_task(tw::wait, [exec]{
    std::vector<int> vec = {1, 2, 3, 4, 5, 6, 7};
    auto sub_task = tw::for_each(exec, vec.begin(), vec.end(), [](int& x){ x *= 2; });
    sub_task->wait();
});

How can we handle this? If I have a single worker in my pool, my main task will be paused until my sub task finishes, but because the worker is already used by main_task, sub_task is blocked since there is no free worker.

Regards

Define listeners as array

Currently, the listeners_ member of task_impl_base is a vector of vectors. However, the outer vector is always fixed in size, namely fixed to the number of events. Hence, we should use a std::array instead.

Conan package

Hello,
Do you know about Conan?
Conan is a modern dependency manager for C++. It would be great if your library were available via a package manager for other developers.

Here you can find an example of how to create a package for the library.

If you have any questions, just ask :-)

Add support for parallel_for

With the recent support for vector parents we can now add support for a parallel_for, which is essentially a wide graph with many independent tasks on the same level. There should be a version for just specifying the number of tasks and another version that takes a range of inputs.

parallel_for should be a free-standing function following the regular transwarp conventions of task naming and task types. The function should return a vector of result tasks.

Add an example for a pool of graphs

There are use cases in which a graph needs to be scheduled as soon as new data arrives, without having to wait for the previous graph calculation to finish. In this case, one can construct a pool of graphs in which an available graph is scheduled once new data arrives. The pool size will adjust itself automatically based on demand.

Comment: Mutex Usage

See the following (on phone, formatting will be jacked up).

void shutdown() {
    {
        std::lock_guard<std::mutex> lock(mutex_);
        done_ = true;
    }
    cond_var_.notify_all();
    for (std::thread& thread : threads_) {
        thread.join();
    }
    threads_.clear();
}

Setting a boolean is, to my knowledge, an atomic operation even without std::atomic and doesn't require a lock. I recommend taking a look at the rest of your code and seeing where you can remove unneeded locks.

Problem compiling wait_any task

This following code doesn't compile when it should:

TEST_CASE("make_task_wait_any_with_different_types") {
    auto t1 = tw::make_value_task(42);
    auto t2 = tw::make_value_task(42.0);
    auto t3 = tw::make_task(tw::wait_any, []{}, t1, t2);
}

Currently, it only compiles when all parents have the same return type.

Allow for adding a listener to all tasks in the graph at once

In some cases (e.g. timing tasks) it is useful to be able to add the same listener to all tasks in the graph. Hence, the current add_listener and remove_listener functions should get siblings (add_listener_all and remove_listener_all) that allow for this behavior.

Incorporate the task priority into breadth scheduling

Currently, there are two schedule types: breadth and depth.

breadth is scheduling according to task level and ID. depth is according to ID only.

The task priority is currently unused within transwarp itself but may be used by, e.g., a custom user-defined executor.

Whenever we can detect a tie in the sorting of tasks (i.e. when tasks are on the same level) we should then incorporate the priority for an improved sorting and, hence, scheduling.

breadth scheduling will then be scheduling according to task level, priority, and ID.

Transwarp doesn't seem to be fully move aware

This could be more of a question than a bug, but is it intended to be able to package move only objects inside tasks? I've been looking at the code, and there seems to be some intent that transwarp is move aware, but it doesn't seem to be complete.

Minimal repro on VS2019 (v142) or Clang on godbolt:
transwarp::make_task(transwarp::root, []() { return std::make_unique<int>(); });

What happens is that because transwarp::task has both overloads for set_value:
virtual void set_value(const typename transwarp::decay<result_type>::type& value) = 0;
virtual void set_value(typename transwarp::decay<result_type>::type&& value) = 0;

...both functions get instantiated in task_impl_proxy. With move-only ResultTypes, the first function will still get instantiated, but it will be invalid C++ as the function's implementation will eventually attempt to make a copy.

Is it intended that this does not work?

Cheers

Tasks are not executed

My code does not seem to execute at all.

        auto t = transwarp::make_task(transwarp::root,
            [this, id, url] {
                qCWarning(npackdImportant) << "downloadFileRunnable";
                return this->downloadFileRunnable(id, url);
            });
        t->add_listener(transwarp::event_type::after_finished,
                downloadFileListener);

        t->schedule(threadPool);

std::result_of removed in C++20

transwarp.h uses
using type = typename std::result_of<decltype(&std::shared_future<T>::get)(std::shared_future<T>)>::type;

This doesn't compile on Visual Studio 2019 (v142) with /std:c++latest as C++20 removes this functionality.

The compiler has a number of macros to determine which STL/C++ standard it supports. You should be able to do this with an #ifdef albeit I don't know the macros/versions off the top of my head.

Fix travis build for C++17

The current compilers used on Travis for GCC and Clang are too old and don't compile the new C++17 master branch. We need to update GCC and Clang on both Linux and Mac.

If the graph has many leaf tasks, can transwarp get the final result?

Because the program ultimately drives execution through the last task:

std::shared_ptr<tw::task<long>> build_graph() {
    auto task0 = tw::make_task(tw::root, func0);
    auto task1 = tw::make_task(tw::root, func1);
    auto task2 = tw::make_task(tw::consume, func2, task0, task1);
    auto task3 = tw::make_task(tw::root, func3);
    auto task4 = tw::make_task(tw::consume, func4, task2, task3);
    return task4;
}

void calculate_via_transwarp(tw::task<long>& task) {
    tw::parallel executor{4};
    task.schedule_all(executor);
    long result = task.get();
    std::cout << "transwarp result is : " << result << std::endl;
}

Trace timings

Thank you for the wonderful library.

What do you think about adding some type of tracing/metrics interface? It would be neat for a given execution of the graph to augment the dot-style graph with timing info.

Set up clang-tidy

The goal is to set up clang-tidy and run it on the transwarp code base.
