Software Technology Lab (STLab) Library Source Code Repository

ASL libraries will be migrated here into the stlab namespace, and new libraries will be created here.

Branch states

  • main: Build and Tests

Content

This library provides futures and channels: high-level abstractions for implementing algorithms that ease the use of multiple CPU cores while minimizing contention. It solves several problems of the C++11 and C++17 TS futures.
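
A minimal sketch of what using the futures looks like, assuming only the stlab::async / then / detach API shown in the documentation and in the issues below:

  #include <chrono>
  #include <iostream>
  #include <thread>

  #include <stlab/concurrency/concurrency.hpp>

  int main() {
      stlab::async(stlab::default_executor, [] { return 42; })
          .then([](int x) { std::cout << "result: " << x << '\n'; })
          .detach();

      // Give the detached continuation time to run before main exits;
      // a real program would hold on to the future instead.
      std::this_thread::sleep_for(std::chrono::seconds(1));
  }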

Documentation

The complete documentation is available on the STLab home page.

Release changelogs are listed in CHANGES.md.

Tested on

  • Linux with GCC 11
  • Linux with Clang 14
  • MacOS 11 with Apple-clang 13.0.0
  • Windows with Visual Studio 16

Requirements

  • A standards-compliant C++17, C++20, or C++23 compiler
  • Building the library requires CMake 3.23 or later
  • Testing or developing the library requires Boost.Test >= 1.74.0

Building

STLab is a standard CMake project. See the Running CMake guide for an introduction to this tool.

Preparation

  1. Create a build directory outside this library's source tree. In this guide, we'll use a sibling directory called BUILD.

  2. Install a version of CMake >= 3.23. If you are on Debian or Ubuntu Linux you may need to use snap to find one that's new enough.

  3. If you are using MSVC, you may need to set environment variables appropriately for your target architecture by invoking VCVARSALL.BAT with an appropriate option.

Configure

Run CMake in the root directory of this project, setting ../BUILD as your build directory. The basis of your command will be

cmake -S . -B ../BUILD -DCMAKE_BUILD_TYPE=# SEE BELOW

but there are other options you may need to append in order to be successful. Among them:

  • -DCMAKE_BUILD_TYPE=[Release|Debug] to build the given configuration (required unless you're using Visual Studio or another multi-config generator).
  • -DCMAKE_CXX_STANDARD=[17|20|23] to build with compliance to the given C++ standard.
  • -DBUILD_TESTING=OFF if you only intend to build, but not test, this library.
  • -DBoost_USE_STATIC_LIBS=TRUE if you will be testing on Windows.

We also suggest installing Ninja and using it by adding -GNinja to your cmake command line, but Ninja is not required.

A typical invocation might look like this:

cmake -S . -B ../BUILD -GNinja -DCMAKE_CXX_STANDARD=17 -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF

If you organize the build directory into subdirectories you can support multiple configurations.

rm -rf ../builds/portable
cmake -S . -B ../builds/portable -GXcode -DCMAKE_CXX_STANDARD=17 -DBUILD_TESTING=ON -DSTLAB_TASK_SYSTEM=portable -DCMAKE_OSX_DEPLOYMENT_TARGET=macosx14.4

Build

If your configuration command was successful, go to your build directory (cd ../BUILD) and invoke:

cmake --build .

Testing

Running the tests in the BUILD directory is as simple as invoking

ctest -C Debug

or

ctest -C Release

depending on which configuration (`CMAKE_BUILD_TYPE`) you chose to build.

Generating Documentation

For generating the documentation, see the README.md in the docs directory.

Issues

Reduction does not appear to work for failed futures

The following code works, and prints both "Recover on test called" and "Recover on package called":

  // Make a ready resolved future
  auto test = stlab::make_ready_future(stlab::immediate_executor);
  // Create a future that we'll resolve later.
  auto package = stlab::package<void()>(stlab::immediate_executor, []() {});
  // Chain this to-be-resolved future after test.
  test.then([package]() { return package.second; })
      // Chain something to catch resolves/rejections
      .recover([](stlab::future<void> f) {
        std::cout << "Recover on test called" << std::endl;
      })
      .detach();
  // Also stay informed of the package:
  package.second.recover([](stlab::future<void> f) {
    std::cout << "Recover on package called" << std::endl;
  }).detach();
  // Now resolve the chained future.
  package.first();

However, if we have the inner package fail, it will only print "Recover on package called":

  // Make a ready resolved future
  auto test = stlab::make_ready_future(stlab::immediate_executor);
  // Create a future that we'll resolve later.
  auto package = stlab::package<void()>(stlab::immediate_executor, []() {throw std::runtime_error("Oops");});
  // Chain this to-be-resolved future after test.
  test.then([package]() { return package.second; })
      // Chain something to catch resolves/rejections
      .recover([](stlab::future<void> f) {
        std::cout << "Recover on test called" << std::endl;
      })
      .detach();
  // Also stay informed of the package:
  package.second.recover([](stlab::future<void> f) {
    std::cout << "Recover on package called" << std::endl;
  }).detach();
  // Now resolve the chained future.
  package.first();

I'm running a clone from the master branch that is up to date as of today...

task_system and notification_queue don't work with move-only lambdas

When the operator() of the task_system is instantiated with a move-only lambda, the notification queue tries to move the lambda into a std::function object in the dequeue of the notification_queue. std::function is by definition copyable, so its target must be copy-constructible; this in turn generates a compilation error for such instantiations of std::function.
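
The underlying constraint can be reproduced without stlab at all. A minimal sketch (plain standard C++, nothing library-specific assumed):

  #include <functional>
  #include <memory>

  int main() {
      auto p = std::make_unique<int>(42);
      // The closure is move-only because it captures a unique_ptr by move.
      auto task = [q = std::move(p)] { return *q; };
      // std::function requires a copy-constructible target, so this line fails
      // to compile -- the same error the notification_queue runs into.
      std::function<int()> f = std::move(task);
      return f();
  }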

Question: argument for recover on when_all()

In the current implementation, a given recover() gets the future passed as an argument.

In the case of a failed future that is an argument to a when_all(), only the failed future is passed to recover().
Following Alex's guideline never to throw away useful information, wouldn't it make sense to pass a tuple of all the futures to the recover clause in this case?
Of course, this would have the disadvantage that recover always gets a tuple<future...> passed as an argument, even if there is only one future.

A process cannot close its output

Imagine a process that compresses data. It receives byte arrays and outputs the same. When you have finished inputting data, you close the sender. When the compressing process is closed, it needs to flush the data, so it yields until it is finished, and then what? I cannot see anything for the process to say that it has no further data to yield.
Cheers,
Guillaume

channel01.cpp : /usr/include/stlab/concurrency/variant.hpp:37:10: fatal error: boost/variant.hpp: No such file or directory

First channel example, taken from here: https://github.com/FelixPetriconi/accu2017_course/tree/master/Channels

#include <stlab/concurrency/channel.hpp>
#include <stlab/concurrency/default_executor.hpp>
#include <iostream>
#include <tuple>

int main() {
    stlab::sender<int> send;       // sending part of the channel
    stlab::receiver<int> receiver; // receiving part of the channel
    std::tie(send, receiver) =     // combining both into a channel
        stlab::channel<int>(stlab::default_executor);

    auto printer =
        [](int x){ std::cout << x << '\n'; };  // stateless process

    auto printer_process =
        receiver | printer;        // attaching the process to the receiving part

    receiver.set_ready();          // no more processes will be attached;
                                   // the process starts to work
    send(1); send(2); send(3);     // start sending into the channel
    int end; std::cin >> end;      // simply wait to end the application
    return 0;
}

When compiling:

accu2017_course-master/Channels$ g++ -std=c++17 channel01.cpp -ochannel01

In file included from /usr/include/stlab/concurrency/channel.hpp:26:0,
                 from channel01.cpp:1:
/usr/include/stlab/concurrency/variant.hpp:37:10: fatal error: boost/variant.hpp: No such file or directory
 #include <boost/variant.hpp>
          ^~~~~~~~~~~~~~~~~~~
compilation terminated.

Do I have to install Boost even if I use C++17?

Marco

Provide guidance on how to deal with blocking APIs

Hello all,

First, thanks for publishing this wonderful library. It follows exactly the concurrency model I was always wishing for. Unfortunately, unlike in the Emscripten world, which seems to be the origin of the design idea, I have a hard time using this to full effect because I very often have to deal with blocking APIs, most notably file I/O and OpenGL, but also CUDA. Could you provide guidance on how to deal with such APIs? Maybe some form of unlimited-size executor into which one promises only ever to throw blocking calls?
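
One possible workaround is sketched below: a trivial "unbounded" executor that spawns a detached thread per task and is reserved exclusively for blocking calls. This is an assumption about how such an executor could look, not something the library provides; it only relies on an executor being a callable that accepts a task, and on the async/blocking_get functions used elsewhere in these issues.

  #include <iostream>
  #include <thread>
  #include <utility>

  #include <stlab/concurrency/concurrency.hpp>

  // Hypothetical executor: one detached thread per submitted task, so blocking
  // calls never occupy the default task system's worker threads.
  struct blocking_executor {
      template <class F>
      void operator()(F&& f) const {
          std::thread(std::forward<F>(f)).detach();
      }
  };

  int main() {
      auto f = stlab::async(blocking_executor{}, [] {
          // Pretend this is a blocking read (file I/O, GPU sync, ...).
          return 42;
      });
      std::cout << stlab::blocking_get(f) << '\n';
  }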

rare double free occurs in package_task<>::operator()

I build using CLion, which uses CMake v3.5.1 and Clang (version clang-700.1.81). When I run the test suite, 99 times out of 100 all is well. Very rarely, I get a malloc/free error: sometimes it is a free of an unallocated pointer, sometimes it is a double free. This points to a data race. I've tracked it down to the package_task<>::operator() method. The error occurs because of the no-op std::function object that is assigned to the task's _f member. This assignment deletes the previous _f member, and on rare occasions this happens twice or erroneously.

-------------test output------------
Running 100 test cases...
future_test(59684,0x10dfea000) malloc: *** error for object 0xd000000000000000: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Exception: EXC_BAD_ACCESS (code=EXC_I386_GPFLT))

Feature request: flatten futures or then_f.

Suppose we have the following functions:

stlab::future<std::string> getFirstHitOnGoogle(std::string query); //Get the URL of the first hit on google for query
stlab::future<std::string> fetchUrl(std::string url); //Asynchronously load an URL
std::string extractJoke(std::string html); //Extract the joke from HTML input

To get the best Chuck Norris joke, I'd like to be able to write something to this effect:

getFirstHitOnGoogle("chuck norris joke")
  .then(fetchUrl)
  .then(extractJoke)
  .then([](const std::string& joke) {
    std::cout << "Best Chuck Norris joke:" << joke << std::endl;
  }).detach();

But unfortunately, this will not work, as:

getFirstHitOnGoogle("chuck norris joke").then(fetchUrl)

is of type stlab::future<stlab::future<std::string>>, and extractJoke is not of the form std::string extractJoke(stlab::future<std::string> html);.

I think at least a flatten function should be available of the following form:

template<typename T> stlab::future<T> flatten(stlab::future<stlab::future<T>> input);

But ideally a specialized then_f would be even better:

template<typename A, typename B> stlab::future<B> stlab::future<A>::then_f(std::function<stlab::future<B>(const A&)> f);

I suspect at least the flatten function should be rather trivial, and the specialized then/recover pair shouldn't be too hard either. Are there any plans to implement something like this?
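
For what it's worth, here is a rough sketch of flatten() built only from the package()/then()/detach() calls used elsewhere in these issues. Error propagation from the outer or inner future is deliberately ignored, so treat this as an illustration rather than a complete implementation:

  #include <utility>

  #include <stlab/concurrency/future.hpp>
  #include <stlab/concurrency/immediate_executor.hpp>

  template <typename T>
  stlab::future<T> flatten(stlab::future<stlab::future<T>> outer) {
      // An identity task whose future becomes the flattened result.
      auto p = stlab::package<T(T)>(stlab::immediate_executor,
                                    [](T x) { return x; });
      outer.then([resolve = p.first](stlab::future<T> inner) mutable {
          // When the outer future yields the inner future, forward its value.
          inner.then([resolve](T value) mutable {
              resolve(std::move(value));
          }).detach();
      }).detach();
      return std::move(p.second);
  }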

Using `future<>::detach()` is a possible memory leak

{
    auto p = package<int()>(immediate_executor, []{ return 42; });
    p.second.then([a = annotate()](int x){ std::cout << x << '\n'; }).detach();
}

outputs:

annotate ctor
annotate move-ctor
annotate move-ctor
annotate move-ctor
annotate move-ctor
annotate dtor
annotate dtor
annotate dtor
annotate dtor

Notice we are one dtor short... detach() creates a retain cycle - this needs to be broken in the event of a broken promise exception (or any exception).

when_all does not support continuing futures that return void

The following code fails to compile on VS2015 Update 3. If futures are changed to take and return ints, then everything works.

#include <stlab/concurrency/concurrency.hpp>

#include <iostream>
int main(int argc, char *argv[])
{
	std::cout << "1\n";
	auto future = stlab::async(stlab::default_executor, [] {std::cout << "2\n"; });
	auto future2 = future.then([] {std::cout << "3\n"; }).then([] {std::cout << "5\n"; });
	auto future3 = future.then([] {std::cout << "4\n"; });
	auto future4 = stlab::when_all(stlab::default_executor, [] {std::cout << "6\n"; }, future2, future3);

	std::cout << "7\n";
	while (!future4.get_try());
	return 0;
}

channel TODO early exit in clear_to_send()?

Open issue:
channel.hpp

std::unique_lock<std::mutex> lock(_process_mutex);
// TODO FP I am not sure if this is the correct way to handle a closed upstream
if (_process_final) {
    return;
}

channel TODO _process_final under mutex?

Open issue:
channel.hpp

bool _process_close_queue = false;
// REVISIT (sparent) : I'm not certain final needs to be under the mutex
bool _process_final = false;

Stack overflow with exception handling in futures

Currently, exceptions are handled inline (without creating new promises). This can cause a stack overflow when there are long chains of futures. They could instead be passed off to the executor, which appears to help if it is an asynchronous one (but I guess the problem would still be there in the case of an immediate executor).

diff --git a/ThirdParty/stlab/concurrency/future.hpp b/ThirdParty/stlab/concurrency/future.hpp
index c0ec59397..d92667fd9 100644
--- a/ThirdParty/stlab/concurrency/future.hpp
+++ b/ThirdParty/stlab/concurrency/future.hpp
@@ -288,7 +288,7 @@ struct shared_base<T, enable_if_copyable<T>> : std::enable_shared_from_this<shar
             _ready = true;
         }
         // propagate exception without scheduling
-        for (const auto& e : then) { e.second(); }
+        for (const auto& e : then) { _executor(e.second); }
     }

     template <typename F, typename... Args>
@@ -456,8 +456,7 @@ struct shared_base<void> : std::enable_shared_from_this<shared_base<void>> {
             then = std::move(_then);
             _ready = true;
         }
-        // propagate exception without scheduling
-        for (const auto& e : then) { e.second(); }
+        for (const auto& e : then) { _executor(e.second); }
     }

     auto get_try() -> bool {

Report typo - calling deleted atomic copy constructor in `await()` example.

http://www.stlab.cc/libraries/concurrency/channel/process/await.html

int main() {
    sender<int> send;
    receiver<int> receive;

    tie(send, receive) = channel<int>(default_executor);

    // std::atomic_int r = 0;    /* calling deleted copy constructor of std::atomic. */
    std::atomic_int r{0};
...

This is a trivial typo.

I would like to say thank you for sharing this project. I have been learning so much from it. Thanks. 😄

Surprising behavior of copied receiver

#include <iostream>
#include <chrono>

#include <stlab/concurrency/concurrency.hpp>

using namespace std::literals;
using std::to_string;

int main()
{
	auto ch = stlab::channel<int>(stlab::default_executor);
	auto receive = ch.second;

	auto pipe1 = ch.second | [](auto i) { std::cout << "pipe1: " + to_string(i) + '\n'; };
	auto pipe2 = ch.second | [](auto i) { std::cout << "pipe2: " + to_string(i) + '\n'; };

	ch.second.set_ready();
	receive.set_ready(); // (*) Really needed

	ch.first(1);

	std::this_thread::sleep_for(1s);
}

Without the line marked with (*), nothing is printed out.

I'm not sure if this is an intended behavior or a bug, but I found it very surprising and I think it has to be documented at least.

(Thanks for a very nice library, BTW!)

stlab::async will not accept non-const reference parameters

std::async() allows passing non-const reference parameters by wrapping them in std::ref. stlab::async() fails to compile if given non-const reference parameters, whether naked or wrapped in std::ref.

Platform and Toolchain

  • MacOS Sierra 10.12.5
  • clang: Apple LLVM version 8.1.0 (clang-802.0.42)
  • stlab built with Homebrew and Xcode 8.3.3, following the install instructions, using the develop branch head as of July 25.
  • Repro code built in CLion 2016.2.2.
  • Fails when compiled with any of these flags: -std=c++1z, -std=c++14, -std=c++11.

Selected extracts from a CLion build of the repro with the -v flag

Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin16.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
 "/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang" -cc1 -triple x86_64-apple-macosx10.12.0 -Wdeprecated-objc-isa-usage -Werror=deprecated-objc-isa-usage -emit-obj -mrelax-all -disable-free -disable-llvm-verifier -discard-value-names -main-file-name async_error_repro2.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mdisable-fp-elim -masm-verbose -munwind-tables -target-cpu penryn -target-linker-version 278.4 -v -dwarf-column-info -debug-info-kind=standalone -dwarf-version=4 -debugger-tuning=lldb -coverage-file /Users/ahcox/Library/Caches/CLion2016.2/cmake/generated/stlab_experiments-e68879c1/e68879c1/Debug/CMakeFiles/async_error_repro2.dir/async_error_repro2.cpp.o -resource-dir /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk -I /Users/ahcox/checkouts/stlab -I /Users/ahcox/.conan/data/Boost/1.60.0/lasote/stable/package/4224f719ca8cda8602a34d6a7dcc57685612aec9/include -stdlib=libc++ -std=c++1z -fdeprecated-macro -fdebug-compilation-dir /Users/ahcox/Library/Caches/CLion2016.2/cmake/generated/stlab_experiments-e68879c1/e68879c1/Debug -ferror-limit 19 -fmessage-length 0 -stack-protector 1 -fblocks -fobjc-runtime=macosx-10.12.0 -fencode-extended-block-signature -fcxx-exceptions -fexceptions -fmax-type-align=16 -fdiagnostics-show-option -fcolor-diagnostics -o CMakeFiles/async_error_repro2.dir/async_error_repro2.cpp.o -x c++ /Users/ahcox/checkouts/stlab_experiments/async_error_repro2.cpp
clang -cc1 version 8.1.0 (clang-802.0.42) default target x86_64-apple-darwin16.6.0
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/v1"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/local/include"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/Library/Frameworks"
...
#include <...> search starts here:
 /Users/ahcox/checkouts/stlab
 /Users/ahcox/.conan/data/Boost/1.60.0/lasote/stable/package/4224f719ca8cda8602a34d6a7dcc57685612aec9/include
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/include
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
 /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include
 /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks (framework directory)
...

Repro

A simple example showing this is here: https://gist.github.com/ahcox/0e76d32a0a87ac24160bf50298052a1d
The log from the compile failure is here: https://gist.github.com/ahcox/9ce507f383ec7820fc245f9ea90297ee
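
For reference, here is the contrast with std::async in a minimal sketch (the full repro and the failure log are in the gists above; the stlab calls are left commented out because, per this report, they do not compile):

  #include <functional>
  #include <future>

  #include <stlab/concurrency/concurrency.hpp>

  void bump(int& x) { ++x; }

  int main() {
      int n = 0;
      // std::async accepts a non-const reference wrapped in std::ref.
      std::async(std::launch::async, bump, std::ref(n)).wait();

      // stlab::async reportedly fails to compile with either form:
      // stlab::async(stlab::default_executor, bump, std::ref(n));
      // stlab::async(stlab::default_executor, bump, n);
      return n;
  }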

channel TODO destruction of sender in final state

Open issue:
channel.hpp

void map(sender<T> f) override {
    /*
        REVISIT (sparent) : If we are in a final state then we should destruct the sender
        and not add it here.
    */
    {
        std::unique_lock<std::mutex> lock(_downstream_mutex);
        _downstream.append_receiver(std::move(f));
    }
}

future TODO: reduction on future<void>

In future.hpp there is this comment near the end:

REVIST (sparent) : This is doing reduction on future we also need to do the same for
other result types.

In general, we shouldn't generate constructs like future<future<T>>; that should collapse to future<T>. So the following should work (currently it returns an error because the return value is future<future<void>>). We should fix anything that returns a nested future.

#include <iostream>

#include <stlab/concurrency/default_executor.hpp>
#include <stlab/concurrency/future.hpp>

using namespace stlab;
using namespace std;

int main() {
    future<void> a = async(default_executor, []{ cout << 1 << endl; }).then([]{
        return async(default_executor, []{ cout << 2 << endl; });
    });
}

Unnecessary copies

The following code creates an unnecessary copy (internally it is because get_try() returns an optional<T> instead of an optional<const T&>). It also does 4 moves. Can that be reduced?

#include <iostream>

#include <stlab/concurrency/future.hpp>
#include <stlab/concurrency/immediate_executor.hpp>

using namespace stlab;
using namespace std;

struct annotate {
    annotate() { cout << "annotate ctor" << endl; }
    annotate(const annotate&) { cout << "annotate copy-ctor" << endl; }
    annotate(annotate&&) noexcept { cout << "annotate move-ctor" << endl; }
    annotate& operator=(const annotate&) {
        cout << "annotate assign" << endl;
        return *this;
    }
    annotate& operator=(annotate&&) noexcept {
        cout << "annotate move-assign" << endl;
        return *this;
    }
    ~annotate() { cout << "annotate dtor" << endl; }
    friend inline void swap(annotate&, annotate&) { cout << "annotate swap" << endl; }
    friend inline bool operator==(const annotate&, const annotate&) { return true; }
    friend inline bool operator!=(const annotate&, const annotate&) { return false; }
};

template <class T>
inline auto promise_future() {
    return package<T(T)>(immediate_executor, [](auto&& x) -> decltype(x) { return std::forward<decltype(x)>(x); });
}

int main() {
    auto [promise, future] = promise_future<annotate>();
    promise(annotate());
    future.then([](const annotate&){ }); // copy happens here!
}

channel TODO states in task_done()

Open issue:
channel.hpp

// The mutual exclusiveness of this assert implies too many variables. Should have a single
// "get state" call.
assert(!(do_run && do_final) && "ERROR (sparent) : can't run and close at the same time.");

channel TODO receiver operator | handle error case

Open issue

channel.hpp

template <typename F>
auto operator|(F&& f) const {
    // TODO - report error if not constructed or _ready.
    auto p = std::make_shared<detail::shared_process<detail::default_queue_strategy,
                                                     F,
                                                     detail::yield_type<F, T>,
                                                     T>>(_p->scheduler(), std::forward<F>(f), _p);
    _p->map(sender<T>(p));
    return receiver<detail::yield_type<F, T>>(std::move(p));
}

Some channel functions have problematic names

Functions like zip, join, and similar ones in this library have different meanings than in some other C++ and non-C++ libraries that model similar or the same concepts.

This issue is based on the presentation from Meeting C++ 2017; I haven't checked whether the implementation follows what was presented there.

These are naming collisions with Eric's range-v3 library; since those names will become part of the STL at some point, having functions with the same names and completely different meanings will be problematic for this library:

  • zip should create pairs (tuples) of values from the source channels, not interleave them (round robin)
  • join flattens out a range of ranges in range-v3 (joining is a usual term for flattening out nested monads) - here, the equivalent would be to have a channel<channel<T>> and to get a channel<T>

Proposal:

  • change zip to create pairs/tuples
  • make a join function that flattens out nested channels, or remove join function from the library
  • merge should accept the merging strategy as an argument - should it be round-robin or unordered (emit whichever value comes first regardless of the source channel) - instead of having to invent different names for all different functions that perform merging and forcing the user to check out the documentation to know which one does what.

p.s. I don't usually comment on APIs (backwards-compatibility and all), but Sean told me the API is not yet considered stable and that there is time for things to get fixed.

future TODO void cancel()

Open issue:

future.hpp:631

What is this? The bool is never tested. If _p is unique then the rest of the dance is not
necessary. Why would we continue to hold if someone else is holding? This should just be the
equivalent of:

void cancel() { *this = future(); }

blocking_get randomly does not return.

Trying to get acquainted with the concurrency library, I tried the following code and it randomly gets stuck, even though the tasks associated with the futures have all been completed. Sometimes the executable will finish, sometimes it will not, staying stuck in one of the blocking_get(f) calls.

#include <iostream>
#include <thread>
#include <stlab/concurrency/concurrency.hpp>
using namespace std;
using namespace stlab;

int main() {
	std::vector<future<int>> allFutures;
	auto &the_executor = default_executor;
	for(int i = 0; i < 32; ++i)
		allFutures.push_back(async(the_executor, [](int i) {
			for(int j = 0; j < 16'000'000; ++j)
				i *= (2*j+1);
			std::cout << "Got " << i << std::endl;
			return i;
		}, i));
	for(auto &f : allFutures)
		std::cout << "R: " << blocking_get(f) << std::endl;
        return 0;
}

Running on Mac OS X 10.12.6. Compiled with clang++ and C++17.

channel TODO step for process with timeout

Open issue:

channel.hpp
/*
REVISIT (sparent) : The case is not implemented.

                Schedule a timeout. If a new value is received then cancel pending timeout.

                Mechanism for cancelation? Possibly a shared/weak ptr. Checking yield state is
                not sufficient since process might have changed state many times before timeout
                is invoked.

                Timeout may occur concurrent with other operation - requires syncronization.
            */

future TODO get_try() &&

Open issue:
future.hpp:651

// Fp Does it make sense to have this? At the moment I don't see a real use case for it.
// One can only ask once on an r-value and then the future is gone.
// To perform this in an l-value casted to an r-value does not make sense either,
// because in this case _p is not unique any more and internally it is forwarded to
// the l-value get_try.
auto get_try() &&

copy_on_write comparing function

Hello all,

Assuming copy_on_write is used when copying the underlying type is expensive, I would think that if copying is expensive, comparing is probably equally expensive. Wouldn't it make sense to extend the comparison functions that involve equality/inequality to first check whether the pointers are equal?
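
A free-standing sketch of the idea (my own illustration, not the library's actual copy_on_write implementation): the equality operator short-circuits when both sides share the same underlying object, so the potentially expensive value comparison only runs for distinct objects.

  #include <memory>

  template <class T>
  class cow {
      std::shared_ptr<const T> _object;
  public:
      explicit cow(T value) : _object(std::make_shared<const T>(std::move(value))) {}
      const T& read() const { return *_object; }

      friend bool operator==(const cow& a, const cow& b) {
          // Same object => equal, without comparing the values.
          return a._object == b._object || *a._object == *b._object;
      }
      friend bool operator!=(const cow& a, const cow& b) { return !(a == b); }
  };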

Warnings from CMake in Xcode

After running the setup_xcode.sh script and opening/building /build/stlab.xcodeproj, I'm seeing the following warnings:

The debug directory is not present in these packages. Can this be fixed? Also, can we update to Boost 1.64.0?

ld: warning: directory not found for option '-L/Users/sean-parent/.conan/data/Boost/1.60.0/lasote/stable/package/4224f719ca8cda8602a34d6a7dcc57685612aec9/lib/Debug'
ld: warning: directory not found for option '-L/Users/sean-parent/.conan/data/bzip2/1.0.6/lasote/stable/package/95eb44e3b12158e2d032ee9847c0b6fadfe6ffe8/lib/Debug'
ld: warning: directory not found for option '-L/Users/sean-parent/.conan/data/zlib/1.2.8/lasote/stable/package/1edd309d7294a74df2e50513591db7111c960be2/lib/Debug'

Possible memory race in channels

Encountered a failure of the future test distributed with the library when compiling with thread sanitizer, seemingly due to the detection of a data race. This issue was mentioned in my pull request, but I thought it best to file an issue.

OS: Ubuntu Trusty (Travis CI virtual machine)
Compiler: Clang++ 3.8
Flags: -g -Wall -ftemplate-backtrace-limit=0 -gdwarf-3 -fsanitize=thread -fno-omit-frame-pointer -std=gnu++14

The travis log is available here. Searching for

Test project /home/travis/build/apmccartney/libraries/build

will get you past the configuration and compilation to see the error.

channel TODO get process state under mutex?

Open issue:

channel.hpp
if (!_process_suspend_count) {
    // FIXME (sparent): This is calling the process state under the lock.
    if (get_process_state(_process).first == process_state::yield || !_queue.empty()
        || _process_close_queue) {
        do_run = true;
    } else {
        _process_running = false;
        do_run = false;
    }
}

Memory leak on Windows

If I am running the library on Windows (10), then STLAB_TASK_SYSTEM is set to STLAB_TASK_SYSTEM_WINDOWS, and this is expected.

But what I have realized is that it leaks memory. This does not happen with STLAB_TASK_SYSTEM_PORTABLE.

Code example:
https://github.com/FelikZ/cpp-boilerplate/blob/channels/src/main.cpp

I have a few channels and am pushing cv::Mat objects through them.

This happens in a loop here:

        for (auto i = 0; i < 10000; i++) {
            camera();
            this_thread::sleep_for(1ms);
        }

If I check Task Manager, memory is constantly consumed every time I send something into a channel. It's around 1 MB per second and never cleans itself up.

I think it's somehow related to:
https://msdn.microsoft.com/en-us/magazine/hh456398.aspx?f=255&MSPPError=-2147217396

If you fail to call CloseThreadpoolCleanupGroupMembers, your application will leak memory.

But I am not that familiar with the Windows threading API.

Any ideas what to do?

Thanks!

Support move-only tasks

The following will fail to compile, because futures are based on std::function, a copyable type that requires a copyable target:

#include <iostream>

#include <stlab/concurrency/default_executor.hpp>
#include <stlab/concurrency/serial_queue.hpp>

using namespace stlab;

int main(int, const char*) {
    stlab::serial_queue_t q(stlab::default_executor);
    std::unique_ptr<int>  i(std::make_unique<int>(42));

    auto f = q([_i = std::move(i)]() mutable {
        std::cout << *_i << "!\n";
    });

    return 0;
}

How to correctly install the libraries in Ubuntu? Are the Boost dependencies compulsory?

On my Ubuntu 16.04 Server Edition with an updated GCC (gcc version 7.2.0, Ubuntu 7.2.0-1ubuntu1~16.04), I cloned the repository and compiled:

marco@PC:~/libraries/build$ cmake .
 CMake Error: The source directory "/home/marco/libraries/build" does not appear to contain    
 CMakeLists.txt.
 Specify --help for usage, or press the help button on the CMake GUI.
 marco@PC:~/libraries/build$ cd ..
 marco@PC:~/libraries$ rm -rf build
 marco@PC:~/libraries$ cmake .
 -- The C compiler identification is GNU 7.2.0
 -- The CXX compiler identification is GNU 7.2.0
 -- Check for working C compiler: /usr/bin/cc
 -- Check for working C compiler: /usr/bin/cc -- works
 -- Detecting C compiler ABI info
 -- Detecting C compiler ABI info - done
 -- Detecting C compile features
 -- Detecting C compile features - done
 -- Check for working CXX compiler: /usr/bin/c++
 -- Check for working CXX compiler: /usr/bin/c++ -- works
 -- Detecting CXX compiler ABI info
 -- Detecting CXX compiler ABI info - done
 -- Detecting CXX compile features
 -- Detecting CXX compile features - done
 CMake Warning at /usr/share/cmake-3.5/Modules/FindBoost.cmake:725 (message):
   Imported targets not available for Boost version
 Call Stack (most recent call first):
   /usr/share/cmake-3.5/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)
   /usr/share/cmake-3.5/Modules/FindBoost.cmake:1332 (_Boost_MISSING_DEPENDENCIES)
 CMakeLists.txt:17 (find_package)
 -- Could NOT find Boost
 -- Looking for pthread.h
 -- Looking for pthread.h - found
 -- Looking for pthread_create
 -- Looking for pthread_create - not found
 -- Looking for pthread_create in pthreads
 -- Looking for pthread_create in pthreads - not found
 -- Looking for pthread_create in pthread
 -- Looking for pthread_create in pthread - found
 -- Found Threads: TRUE
 -- Configuring done
 -- Generating done
 -- Build files have been written to: /home/marco/libraries
 marco@PC:~/libraries$ make -j4
 marco@PC:~/libraries$  sudo make install
 [sudo] password for marco:
  make: *** No rule to make target 'install'.  Stop.

Questions:

  1. How to correctly install the libraries in Ubuntu?
  2. Are the Boost dependencies compulsory?

Marco

(non-default) Executors and make_ready_future

make_ready_future uses the default executor implicitly, which doesn't matter for the future itself, but does for continuations that don't explicitly specify the executor. It seems like providing an executor argument to make_ready_future would make the API more consistent.

Currently, if you have one or more custom executors, you always have to be sure to pass them when calling .then wherever you use make_ready_future (you can't consistently have a continuation use the same executor as the previous one).

How does your Concurrency library compare to, e.g., Boost.Fiber?

Dear developers,

I am working on a software project (not yet visible on github) in "Secure multiparty computation", which is a privacy-preserving computation paradigm from the field of cryptography, in which many basic operations like multiplication of numbers require interaction between several parties (connected via an IP network).
The problem above maps well to an async computation-graph type of solution, which is the concurrency pattern that your stlab.Concurrency library seems to provide.

I have started my project using Boost.Fiber (from Oliver Kowalke), which also integrates fairly well with Boost.ASIO.
Now I just saw your announcement of stlab.Concurrency, which looks very interesting (in particular the "one-to-many"-future feature, or in other words the "easily create splits" feature), and seems to aim to achieve similar goals as Boost.Fiber.

Would you be willing to provide some more information for library users that would help in judging the benefits of your [Concurrency] library over modern C++ alternatives [like Boost.Fiber]?
(BTW, Asio's author, Kohlhoff, is also working on a new executors library; I'm not sure how much his goals overlap with yours...)

Also, in terms of benchmarking, Alexander Temerev has a benchmark called skynet; it would be interesting to see how stlab's Concurrency library compares there to the state of the art.
https://github.com/atemerev/skynet

Anyway, thank you for making your work available online!
kind regards,
niek

Remove use of tie() in favor of structured bindings

The remaining issue here is that, without structured bindings, we use tie(). Moving this to a C++17 milestone with the task of removing tie() in favor of structured bindings.

Open issue:

channel.hpp
auto pop_from_queue() {
boost::optional message; // TODO : make functional
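
For illustration, the change amounts to this (a sketch assuming the channel construction pattern shown in the issues above):

  #include <tuple>

  #include <stlab/concurrency/channel.hpp>
  #include <stlab/concurrency/default_executor.hpp>

  int main() {
      // C++14 style, as currently used: default-construct, then tie().
      stlab::sender<int> send;
      stlab::receiver<int> receive;
      std::tie(send, receive) = stlab::channel<int>(stlab::default_executor);

      // C++17 style, as proposed: structured bindings, no separate declarations.
      auto [send2, receive2] = stlab::channel<int>(stlab::default_executor);
  }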
