cats-effect's Issues

Problem with SyncLaws.repeatedAsyncEvaluationNotMemoized

We have a problem with SyncLaws.repeatedAsyncEvaluationNotMemoized.

I had not paid attention, and the law was redefined to use *> instead of >> in PR #80.

This is a problem because *> has no ordering guarantees: according to its definition, it can only work when product is defined in terms of flatMap.

This is not necessarily the case. As a result, my tests for my non-deterministic Task instance in Monix, the one that does parallel processing, are now failing.

Of course, this non-deterministic Applicative instance that needs to be imported in scope has always been a hack in search of a better solution.

But the law itself can only work if you have ordering imposed. No ordering means that there's a race condition for setting the var in question. And you don't have ordering without using flatMap. Hence *> usage is incorrect.
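For illustration, here is a hedged sketch of the shape of the law and why the ordering matters (approximate, not the actual law code; it assumes an implicit Sync[F] named F and cats syntax in scope, as in the other law snippets below):

  def repeatedSyncEvaluationNotMemoized[A](a: A, f: A => A) = {
    var cur = a
    val change = F.delay { cur = f(cur) }
    val read = F.delay(cur)

    // >> sequences through flatMap, so both writes are guaranteed to happen before the read;
    // *> only gives that guarantee when product is defined in terms of flatMap, so a
    // non-deterministic (parallel) Applicative can race the writes against the read.
    change >> change >> read <-> F.pure(f(f(a)))
  }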

Add source links to scaladoc

Can probably just link to master for now, but eventually we'll want to link to the hash if at all possible. I'm not sure if it's possible to do that in scaladoc without source file templating though, which wouldn't be worth it IMO.

Extra Sync[F] laws for map and flatMap

SyncLaws[F] say nothing about whether the result of throwing an exception from within a function passed to map or flatMap is equivalent to the result of using raiseError. To my mind this is one of the biggest practical differences between Sync and MonadError, because MonadError is incapable of offering this as a law (despite the instances for Try and Future doing so).

I think this is necessary: fully specifying all of the exception-centered mechanisms in cats-effect greatly decreases the chance of unsound usage and of accidentally relying on properties which are not necessarily shared by all Sync instances. I also think it's more consistent, seeing as behavior like F.delay(throw e) <-> F.raiseError(e) is already defined as a law.
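A hedged sketch (approximate names, not actual SyncLaws code) of the kind of laws being asked for, again assuming an implicit Sync[F] named F and cats syntax in scope:

  def throwInMapIsRaiseError[A](a: A, e: Throwable) =
    F.delay(a).map[A](_ => throw e) <-> F.raiseError[A](e)

  def throwInFlatMapIsRaiseError[A](a: A, e: Throwable) =
    F.delay(a).flatMap[A](_ => throw e) <-> F.raiseError[A](e)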

Proposal: Rename Sync to Something Else

Asynchronous versus synchronous refers to the execution model, and asynchronous execution subsumes synchronous execution.

We know that the Async type class has an async evaluation model because it's set in stone by the register signature of its async function. Implementations are free to execute things immediately / synchronously, of course.

Which is why Sync <: Async.

So problems with the current naming:

  1. the Sync type class has nothing to do with synchronous execution ... in the case of IO, the created values might well execute asynchronously, you never know, because IO is async
  2. there's no Sync <: Async relationship, no equivalence, no duality

To put things in perspective, a type class that actually implies synchronous execution is Comonad.

Alternatives:

  • Suspendable
  • Deferrable
  • Deferred

Async <: LiftIO

There's really no distinction between these. Once #50 lands, we'll be able to default-implement the liftIO function on Async, with the possibility still open for override.
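A hedged sketch (not necessarily the eventual implementation) of what such a default could look like:

import cats.effect.{Async, IO}

def defaultLiftIO[F[_], A](io: IO[A])(implicit F: Async[F]): F[A] =
  F.async(cb => io.unsafeRunAsync(cb))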

CoflatMap for Sync

It appears that any Applicative can form a CoflatMap

This personally strikes me as a useful thing to have, perhaps to generalize an action describing an action, but more so for effect suspension in things that implement Sync, like fs2's Stream: to get concurrency, Stream(1,2,3).map(i => Stream(i)).joinUnbounded could instead be written as Stream(1,2,3).coflatten.joinUnbounded. In general I think this is a good thing to have available. The PR linked from cats can bring this easily for all Sync with the next milestone.
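For reference, a hedged sketch of the construction (my own wording of the idea, not the linked PR's actual code):

import cats.{Applicative, CoflatMap}

def coflatMapForApplicative[F[_]](implicit F: Applicative[F]): CoflatMap[F] =
  new CoflatMap[F] {
    def map[A, B](fa: F[A])(f: A => B): F[B] = F.map(fa)(f)
    // Wrapping the whole F[A] back up with pure is enough to satisfy the signature.
    def coflatMap[A, B](fa: F[A])(f: F[A] => B): F[B] = F.pure(f(fa))
  }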

Would this be something you would entertain?

Proposal: Make Shift Easier to Reason About (and a bit more efficient)

Hey!

Link Summary:

I'm super-thrilled to see that a standard typeclass and datatype for pure IO in cats is under development!

I read @djspiewak's introductory blog post yesterday and I was struck by how unintuitive the existing shift implementation is.

Before I explain why I think shift is confusing, though, let me first try to frame things by explaining the two goals I believe shift is trying to accomplish:

  1. We want to be able to allow the user to control which ExecutionContext runs their synchronous effects.
  2. We want to allow the user to definitively shift execution back to some "desired" ExecutionContext after some possibly-thread-jumping effects complete.

The way that the present shift goes about achieving these goals is to bracket a target effect with jumps onto a target ExecutionContext (jump to the new context, run the effects, jump to the new context again). That way, if the effects are synchronous, we can control which ExecutionContext runs them (yay, first goal accomplished!). Of course, if the effects are fully or partially asynchronous, they could very well end up shifting execution onto a thread outside of our target ExecutionContext. Therefore, in order to achieve our second goal, we need to insert a second jump to our target ExecutionContext after the target effects complete. That way, we control which ExecutionContext is active coming out of our bracket region, instead of leaving that up to our target effect. Interestingly, shift actually turns effects asynchronous. By this, I mean that if you call shift several times in a row on a single synchronous effect (eff.shift(a).shift(b).shift...), only the first shift has the power to change which ExecutionContext the synchronous effect runs on. Subsequent calls to shift can only change which context will gain control after the effect completes. Exploiting this is actually critical to achieving both of our two goals at the same time, by leveraging the "double-shift" pattern described in Daniel's post: if we want to jump over to a special ExecutionContext to run a blocking effect and then come back to the main ExecutionContext, we can achieve that by doing something like eff.shift(ioEC).shift(mainEC).

I've just explained how the current implementation allows us to meet both of our goals (even both at the same time!). Why am I writing this issue, then? There are two reasons:

First, shift is extremely difficult to reason about, especially for people other than the original author of a piece of code. This is because each time shift is used, it actually generates two ExecutionContext jumps. This means that the user of shift has to keep the effects of both jumps in mind. This isn't too hard to do once you understand what's going on, but it does mean that you will often get two ExecutionContext jumps when you were only after one. For the reader, though, it's much worse. Any time you come across a call to shift, you generally don't have any clues as to what the author's intentions were. That is, you don't know whether a given call to shift is intended to move a synchronous effect to another ExecutionContext or whether the intention is to shift the execution of subsequent effects (or whether it's both). You end up having to go down the rabbit hole of finding out whether the target effect contains blocking IO or is fully or partially asynchronous. And even doing that just gives you heuristics.

Things become worse still for learners, in my opinion, since I think the implementation creates a trap that will drive new users toward using shift improperly. This is because most people will initially have just one of the two problems that shift solves. For example, let's say that a programmer wants to do a file read on their ExecutionContext for blocking operations. They are likely to come across example code like IO(println("Hello!")).shift(ioCtx). This looks really promising! This shift thing seems to allow you to cause some specific effect to run on the ExecutionContext of your choice! At this point, the programmer will probably try using shift this way in their code, find that it seems to work, and move on, unaware that they have introduced a bug into their code. Of course, a programmer that first learns to use shift in order to control the ExecutionContext for subsequent effects is likely to have a similar problem. In both cases, the bug is likely to go unnoticed too, since it probably doesn't affect any computed result. Basically, I believe that in order to be able to use shift correctly, a programmer has to understand the details of shift's implementation and its interactions with IO.async, which I don't think is reasonable to expect of new learners.

Second, shift is likely to cause code to reschedule itself on ExecutionContexts about twice as often as it needs to. This is because, the vast majority of the time when you use shift, you are either trying to change which ExecutionContext a synchronous effect runs on (in which case, you only need the first ExecutionContext jump that shift inserts) or you are trying to jump to some target ExecutionContext after an asynchronous computation completes (in which case, you only need the second ExecutionContext jump that shift inserts). In either case, your code will end up yielding its Thread and being rescheduled twice as often as it needs to.

So... What do I propose should be done instead? In short, I propose that shift become an independent, unit-returning effect that injects single ExecutionContext jumps. I've pushed working code showing what I mean to GitHub. You can see the most-relevant section at this link. As you can see, we retain the ability to control which ExecutionContext runs synchronous effects: we just have to shift to our desired ExecutionContext before running them. We also retain the ability to transfer execution back to our desired ExecutionContext after an asynchronous effect: we just have to add a shift after the target effect. There is less ambiguity for readers about the intention behind each call to shift. I believe that it's also night-and-day clearer for code-sample learners. It makes the double-shift pattern something so intuitive that pretty much any user could come up with it on their own on the spot, as opposed to something that you have to be taught as a weird non-intuitive trick. It also completely eliminates the unnecessary reschedulings that the current version of shift creates. In short, I feel this shift is just... better and I propose that it should be adopted in cats-effect.
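Here is a hedged sketch of the idea (my own approximation, not the code in the linked repo): shift becomes a standalone, unit-returning effect that performs exactly one ExecutionContext jump.

import cats.effect.IO
import scala.concurrent.ExecutionContext

def shift(ec: ExecutionContext): IO[Unit] =
  IO.async { cb =>
    // Complete the callback from inside the target ExecutionContext, so whatever
    // is sequenced after this effect continues running there.
    ec.execute(new Runnable {
      def run(): Unit = cb(Right(()))
    })
  }

// The double-shift pattern then becomes plain sequencing (ioEC, mainEC and
// blockingEffect are hypothetical names):
//   shift(ioEC).flatMap(_ => blockingEffect).flatMap(r => shift(mainEC).map(_ => r))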

One other thing... Why is this example tagged as proposal-one in my repo? Well... I have always felt that this pattern of sealing ExecutionContext objects inside of our effects is... well... awful. I've long had the idea that it would be better to encode the notion that there should be one pool for blocking IO and one for compute directly into our datatypes and defer the selection of those pools until we actually run an effect. It's less flexible, but I'm not sure that a good design ever needs more thread pools than that (plus maybe one extra usually-sleeping Thread for timers if it can't be cleanly integrated into the compute pool). And the flexibility is actually still retained by virtue of the existence of e.g. IO.async (it's just a bit harder to access). Anyway, I intend to play around with this some and may write another proposal if the results are promising. I make no commitment on that, though.

Note: This proposal is also available as a separate file in my cats-effect-scheduling-proposals repo, in case it proves to be hard to read as an issue. Here's a link.

Proposal: provide Sync-like class without MonadError

My proposal is to move the MonadError requirement down the chain and leave Sync be just a Monad:

  • Sync[F[_]] extends Monad[F]
  • Async[F[_]] extends Sync[F] with MonadError[F, Throwable]

Another proposal would be:

  • MonadSuspend[F[_]] extends Monad[F]
  • Sync[F[_]] extends MonadSuspend[F] with MonadError[F, Throwable]
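A hedged sketch of the MonadSuspend variant (the names come from the proposal above; the method bodies are my assumptions):

import cats.Monad

trait MonadSuspend[F[_]] extends Monad[F] {
  // Defer construction of an F[A] until it is actually run.
  def suspend[A](thunk: => F[A]): F[A]
  // Capture a lazily evaluated value; the by-name argument keeps thunk unevaluated.
  def delay[A](thunk: => A): F[A] = suspend(pure(thunk))
}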

Current use-case is for lazy lists that don't necessarily deal with side effects, see:

Currently we have no way to make this Streaming type polymorphic, i.e. replace Eval with F[_] and then use that as Streaming[Eval, ?].

/cc @etorreborre @djspiewak

IO.map should never be strictly evaluated

Currently map is defined like this:

  final def map[B](f: A => B): IO[B] =
    this match {
      case Pure(a) => try Pure(f(a)) catch { case NonFatal(e) => RaiseError(e) }
      case ref @ RaiseError(_) => ref
      case _ => flatMap(a => Pure(f(a)))
    }

Unfortunately this map operation is not equivalent to flatMap(f.andThen(pure)), because by not suspending execution it can trigger stack overflow errors.

As use-case, here's a toy Iterant implementation:

import cats.effect.IO

sealed trait Iterant[+A] {
  override def toString = "Iterant"
}

case class Next[+A](head: A, rest: IO[Iterant[A]]) extends Iterant[A]
case class Suspend[+A](rest: IO[Iterant[A]]) extends Iterant[A]
case class Halt(ex: Option[Throwable]) extends Iterant[Nothing]

// And a filter operation
def filter[A](f: A => Boolean)(stream: Iterant[A]): Iterant[A] =
  stream match {
    case Next(a, rest) => 
      val tail = rest.map(filter(f))
      if (f(a)) Next(a, tail) else Suspend(tail)
    case Suspend(rest) =>
      Suspend(rest.map(filter(f)))
    case Halt(_) => 
      stream
  }

The filter is pretty straightforward. Now lets build an already evaluated stream and filter it:

val stream = (0 until 1000000).foldLeft(Halt(None) : Iterant[Int]) { 
  (acc, elem) => Next(elem, IO.pure(acc)) 
}

filter((x: Int) => x % 2 == 0)(stream)
//=> StackOverflowError

Currently this filter operation blows up with a StackOverflowError.

Why this is bad

Having users remember which operations are safe and which can blow up in their face, and in what contexts, makes for a really bad user experience. io.map(f) should be equivalent to io.flatMap(x => pure(f(x))) in all contexts; safety comes before performance.

Also monix.tail.Iterant is a real example shipping in Monix 3.0 which is generic over F[_] data types, making use of cats.effect.Sync.

So I'd like a test for this to be in SyncLaws.
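A hedged sketch (not actual SyncLaws code) of one possible such law, written in the style of the existing stack-safety laws and assuming an implicit Sync[F] named F in scope:

  lazy val mapIsStackSafeOnRepeatedCalls = {
    val n = 10000
    // Build n nested map calls; a map that suspends/trampolines evaluates this fine,
    // a strictly evaluated map risks blowing the stack.
    val result = (0 until n).foldLeft(F.delay(0)) { (acc, _) => acc.map(_ + 1) }
    result <-> F.pure(n)
  }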

Exception silently ignored in async after callback

I'm not sure if this is a bug, but it's surprising:

scala> val tsk = cats.effect.IO.async[Int] { cb => cb(Right(42)); throw new Exception }
tsk: cats.effect.IO[Int] = IO$830563880

scala> tsk.unsafeRunSync()
res1: Int = 42

It's probably due to this line: the callback would be called twice in this case, but the second one (with the exception) is ignored (correctly) due to the onceOnly.

Effect capture with Eval

I think the scaladoc of IO.fromFuture says that it can be used for effect capture. For example like this:

def sideEffectingAsyncMethod(): Future[Unit] = ...
val pureValue: IO[Unit] = IO.fromFuture(Eval.always(sideEffectingAsyncMethod()))

However, using Eval for effect capture seems ... questionable. Shouldn't fromFuture accept a by-name Future (similarly to IO.apply)?
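A hedged sketch (not the actual API) of what a by-name variant could look like:

import cats.effect.IO
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

def fromFutureByName[A](fut: => Future[A])(implicit ec: ExecutionContext): IO[A] =
  IO.async { cb =>
    // fut is only forced here, when the IO is actually run.
    fut.onComplete {
      case Success(a) => cb(Right(a))
      case Failure(e) => cb(Left(e))
    }
  }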

Publish `Arbitrary[IO[A]]`

I recently needed an Arbitrary[IO[A]]. Would anybody object to moving ours into cats-effect-laws? This would be consistent with cats, which publishes arbitraries for its data types in cats-laws.

Runtime error when blocking in Scala.js

fs2.Task doesn't allow calling unsafeRun() when compiling for Scala.js; it causes a compile-time error. As far as I can tell, cats.effect.IO allows calling unsafeRunSync(), and it will produce a runtime error on Scala.js. Would it be possible to have a compile-time error here too? (I'm sure it's technically possible to implement, so my question is more about why it was implemented this way ... what is the upside of this implementation?)

Should we provide a cats.Parallel instance for IO?

I know we talked about the boundaries of cats-effect and that the implementation in cats-effect should not be concerned with parallelism or race conditions.

However we now have cats.Parallel in cats-core and providing a default instance for IO makes a lot of sense.

Otherwise defining that in fs2 is difficult due to having to use a newtype (for which we don't have an agreed upon encoding) or to essentially deal with orphaned instances, which we try to avoid. So at the very least, due to Cats 1.0-RC1 having a Parallel in it, I think we should have another discussion about it.

I'm fine either way, I just think Parallel is very cool and maybe we should provide that instance.

UPDATE: for reference there's — Why not both.

UPDATE 2: PR — #115

Replace the entire type class hierarchy

The existing Cats Effect type class hierarchy was a good proof-of-concept and demonstrated the existence of demand for library authors to abstract over effect monads.

Unfortunately, it is not possible to build resource safe, composable applications on the existing type class hierarchy. At the root of this problem is the lack of an abstraction for resource safety.

In the same way that all languages with exceptions have a try / catch / finally construct, all effect monads that support failure must have a similar analogue that can be used by higher-level libraries to ensure resource safety.

In other words, something like MonadBracket needs to be a super class of all effect monads.

The lack of such a construct has created libraries like FS2 which (a) depend on unlawful behavior, and (b) depend on unspecified behavior. This is false abstraction, not actual abstraction as per the original design goals of this project, and the very antithesis of principled functional programming.

I propose to replace the entire existing type class hierarchy by three type classes: MonadBracket, which extends MonadError; MonadFork, which extends MonadBracket; and MonadIO, which subsumes the best parts of Sync/Async and extends MonadBracket and MonadFork.

These are necessary and sufficient abstractions for resource-safe, concurrent, composable functional applications, and other functionality should be left to the underlying implementations.
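A hedged skeleton of the proposed hierarchy (the class names are from the proposal; the member signatures are my assumptions, not part of it):

import cats.MonadError
import cats.effect.IO

trait MonadBracket[F[_]] extends MonadError[F, Throwable] {
  // acquire a resource, use it, and guarantee release runs regardless of the outcome
  def bracket[A, B](acquire: F[A])(use: A => F[B])(release: A => F[Unit]): F[B]
}

trait MonadFork[F[_]] extends MonadBracket[F] {
  def fork[A](fa: F[A]): F[Unit] // placeholder signature; a real one would likely return a fiber handle
}

trait MonadIO[F[_]] extends MonadBracket[F] with MonadFork[F] {
  def liftIO[A](io: IO[A]): F[A] // placeholder for "the best parts of Sync/Async"
}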

Proposal: rename IO.apply

I see our IO.apply as filling the same niche as foreign import in Haskell; i.e., it's the backdoor to the unsafe part of the language, and the one and only way to construct a new IO primitive. So I think it should be documented more aggressively in these terms and renamed to definePrimitive or something.

I realize nobody will like this idea but I don't care.

Issues with recursion

Hi,

I discussed this issue with @tpolecat on gitter and he advised me to report it.

As I was looking at different ways of expressing an infinite (or very large) recursion I stumbled upon this snippet that produces strange result:

  def loop(i: Int = 0): IO[Int] =
    for {
      _ <- IO { if(i % 100 == 0) println(i) }
      _ <- if(i > 50000) IO.raiseError(new Exception("oops")) else loop(i + 1)
    }
    yield i
  loop().unsafeRunSync // Expect oops exception to be thrown at some point

This works but takes a very long time and allocates several GB of memory (it becomes exponentially slower, so we can suspect that it allocates exponentially more memory at each iteration). The memory can be entirely reclaimed afterwards, so there is no leak.

Also, when using IO.suspend the iteration itself works perfectly, but in case of error it is not reported as expected, because of a StackOverflowError in AndThen.runLoop.

  def loop(i: Int = 0): IO[Int] =
    for {
      _ <- IO { if(i % 100 == 0) println(i) }
      _ <- if(i > 50000) IO.raiseError(new Exception("oops")) else IO.suspend(loop(i + 1))
    }
    yield i
  loop().unsafeRunSync // Expect oops exception to be thrown at some point but got StackOverflowError

Proposal: Delete IO#ensuring

It's a really useful function, but it suffers from two problems:

  • The name is identical to Predef.ensuring and very very close to MonadError#ensure, but the functionality is completely different
  • The functionality really belongs on MonadError in a generic capacity

Seems like it would be better just to pull it back, get it off of IO, and propose a similar function (likely with a different name) to cats.

Pull up laws

Right now, there are quite a few laws on Effect that could be on Sync or Async, but were just stuffed in EffectLaws for… reasons. The stack-safety laws are a good exemplar. These should be moved up if at all possible.

EffectLaws.repeatedCallbackIgnored

So we've got this law:

  def repeatedCallbackIgnored[A](a: A, f: A => A) = {
    var cur = a
    val change = F.delay(cur = f(cur))
    val readResult = IO { cur }

    val double: F[Unit] = F async { cb =>
      cb(Right(()))
      cb(Right(()))
    }

    val test = F.runAsync(double >> change) { _ => IO.unit }

    test >> readResult <-> IO.pure(f(a))
  }

This is fine; however, I hope you don't intend for implementations to synchronize on those callback calls. As I mentioned before, in Monix I prefer to treat this by convention + error reporting by means of the EC, and I do have a check in place, but it's a simple unsynchronised var and not an AtomicBoolean. So it takes care of simple cases, like in the sample above, but if access is not synchronised, then all bets are off.

Enable test forking

To avoid nondeterministic build failures like this one:

[info] cats.effect.internals.OnceOnlyTests *** ABORTED ***
[info]   java.util.ConcurrentModificationException: Two threads have apparently attempted to run a suite at the same time. This has resulted in both threads attempting to concurrently change the current Informer. Suite class name: cats.effect.internals.OnceOnlyTests
[info]   at org.scalatest.AsyncSuperEngine$$anonfun$runImpl$1.apply(AsyncEngine.scala:626)
[info]   at org.scalatest.AsyncSuperEngine$$anonfun$runImpl$1.apply(AsyncEngine.scala:623)
[info]   at org.scalatest.CompositeStatus.whenCompleted(Status.scala:886)
[info]   at org.scalatest.AsyncSuperEngine.runImpl(AsyncEngine.scala:623)
[info]   at org.scalatest.AsyncFunSuiteLike$class.run(AsyncFunSuiteLike.scala:216)
[info]   at org.scalatest.AsyncFunSuite.run(AsyncFunSuite.scala:2187)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.TestRunner.runTest$1(TestFramework.scala:76)
[info]   at sbt.TestRunner.run(TestFramework.scala:85)
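A minimal build.sbt sketch of the change being asked for (the project's actual build may need more nuance, e.g. for the Scala.js modules):

// run tests in a forked JVM instead of inside the sbt process
fork in Test := true

// alternatively (or additionally), stop running test suites in parallel
// parallelExecution in Test := false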

AsyncLaws.thrownInRegisterIsRaiseError cannot be upheld

Hi guys,

We cannot have this law, described in this way:

  def thrownInRegisterIsRaiseError[A](t: Throwable) =
    F.async[A](_ => throw t) <-> F.raiseError(t)

I understand the intention here, but here's the problem: by contract (whether enforced by a law or not), the provided callback function must be called at most once, because that function might trigger side-effects, therefore, at least by convention, we need an idempotency guarantee to keep sanity.

Well, if you say that F.async[A](_ => throw t) must be equivalent to F.raiseError(t), then you cannot keep this idempotency guarantee without synchronization ... meaning, for example, keeping some state in an AtomicReference that tracks calls (e.g. how Promise#trySuccess and Promise#tryFailure work).

Example:

F.async { callback =>
  // Calling once
  callback(Right(1))
  // Oops, are we going to call our callback again?
  throw new RuntimeException("Bam, in your face!")
}

And while that would be cool, I'm pretty sure that you guys don't intend to introduce synchronization in order to achieve such an idempotency guarantee, primarily for performance reasons. In other words, it's better to keep this idempotency as a soft contract ... i.e. in our own implementation we leave absolutely no room for violating it, but if the user does it, then the behaviour is undefined.

In Monix for such cases I prefer to simply do an ec.reportFailure(err), with that Task being non-terminating. In other words Task.async(_ => throw ex) is actually equivalent with Task.never.

I find that acceptable because ... (1) it doesn't blow the current call-stack, (2) it can be treated (with e.g. timeout and timeoutTo), (3) the exception gets reported by using the ExecutionContext so it has some visibility and (4) I find it normal in this case for the behaviour to be undefined.

Users should not allow exceptions to be thrown in register. If they do, they should do so at their own risk.

Think of this as a tradeoff: it's better to have the behaviour undefined (e.g. either never or raiseError, depending on what F[_] can do), than to risk calling that callback twice.
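A hedged sketch (approximate, not Monix's actual code) of the convention described above, written against cats.effect.IO:

import cats.effect.IO
import scala.concurrent.ExecutionContext
import scala.util.control.NonFatal

def asyncSafe[A](register: (Either[Throwable, A] => Unit) => Unit)
                (implicit ec: ExecutionContext): IO[A] =
  IO.async { cb =>
    try register(cb)
    catch {
      // report instead of completing the callback; in this branch the resulting IO never terminates
      case NonFatal(e) => ec.reportFailure(e)
    }
  }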

We need a `toIO` operation and type class

We have LiftIO as a lawless type class that's then inherited by Async.
I think we also need a ToIO type class to be inherited by Effect:

trait ToIO[F[_]] {
  def toIO[A](fa: F[A]): IO[A]
}

trait Effect[F[_]] extends Async[F] with ToIO[F] {
  // ...
}

Reasons:

  1. when abstracting over Effect, going through toIO to evaluate an F[_] might actually be easier than going through runAsync; we can already express toIO in terms of runAsync, so all Effect instances can do it (see the sketch after this list)
  2. implementations can provide an efficient toIO and we already have precedent for such operations in Cats, e.g. Applicative.unit
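A hedged sketch (not library code) of the default mentioned in point 1, deriving toIO from runAsync:

import cats.effect.{Effect, IO}

def toIOFromRunAsync[F[_], A](fa: F[A])(implicit F: Effect[F]): IO[A] =
  IO.async { cb =>
    // runAsync returns IO[Unit], so we run it to actually register the callback
    F.runAsync(fa)(r => IO(cb(r))).unsafeRunSync()
  }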

But I'd also like a different ToIO type class because there are data types out there that can be converted to IO, but that can't implement Effect. Not sure if it can have any laws though.

If there's agreement I can work on the PR.
So what do you think?

Ongoing issues with simultaneous suite runs

@alexandru it looks like we're having some issues with test suites running concurrently. An example error:

[info] cats.effect.internals.OnceOnlyTests *** ABORTED ***
[info]   java.util.ConcurrentModificationException: Two threads have apparently attempted to run a suite at the same time. This has resulted in both threads attempting to concurrently change the current Informer. Suite class name: cats.effect.internals.OnceOnlyTests
[info]   at org.scalatest.AsyncSuperEngine$$anonfun$runImpl$1.apply(AsyncEngine.scala:626)
[info]   at org.scalatest.AsyncSuperEngine$$anonfun$runImpl$1.apply(AsyncEngine.scala:623)
[info]   at org.scalatest.CompositeStatus.whenCompleted(Status.scala:886)
[info]   at org.scalatest.AsyncSuperEngine.runImpl(AsyncEngine.scala:623)
[info]   at org.scalatest.AsyncFunSuiteLike$class.run(AsyncFunSuiteLike.scala:216)
[info]   at org.scalatest.AsyncFunSuite.run(AsyncFunSuite.scala:2187)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.TestRunner.runTest$1(TestFramework.scala:76)
[info]   at sbt.TestRunner.run(TestFramework.scala:85)

I think this is related to some of the execution context work you did; would you mind taking a look?

Consider specialized Eval instances

As a follow up to #51, we could add some specialized instances for Eval:

  • Sync[EitherT[Eval, Throwable, ?]]
  • Sync[λ[α => Eval[Try[α]]]]
  • maybe some others?

Thoughts and suggestions welcome.

fromEither on IO

Could there be a function in the companion object: fromEither[A](e: Either[Throwable, A]): IO[A] = e.fold(IO.raiseError, IO.pure)?

I've been asked whether there was a philosophical reason for the omission, so I figured I would pass it along; if you are interested I'd happily draft a PR. I also see how minimal this is, so I could understand a lack of desire for it.

Effect.stackSafetyOnRepeatedAttempts is too strong

Hi folks,

We've got the following law in EffectLaws:

  lazy val stackSafetyOnRepeatedAttempts = {
    val result = (0 until 10000).foldLeft(F.delay(())) { (acc, _) =>
      F.attempt(acc).map(_ => ())
    }

    F.runAsync(result)(_ => IO.unit).unsafeRunSync() <-> (())
  }

Trying to run this law for Monix's Task revealed that the implementation freezes, because the process is left without memory. This is due to how its "attempt" was implemented.

I then redesigned the run-loop of both Monix's Task and Coeval to be optimal for repeated and eagerly specified attempts like this. The code is specified in this PR: monix/monix#353

Anyway, this law is too strong and I don't think eager evaluation is needed, because at this moment:

  1. cats.Eval fails with a StackOverflowError ... I might port my fix for Monix's Coeval type, but I'm not sure if it's actually needed
  2. cats.effect.IO, even if it works, is actually building an in memory data-structure like ... Right(Right(Right(Right(.... 10,000 levels ...)))); note that after optimising the encoding of these attempts, the Monix Task no longer does this

So even though this test was useful in seeing a potential problem for Monix's Task, which I then fixed, I don't feel that it is a problem that users will have. Note that this wouldn't happen in a flatMap-powered loop, it's only problematic because that foldLeft is initialising a huge data-structure.

Consider a Type class for resource safety

I raised this issue on the gitter channel and got a lot of positive feedback.
We removed IO#ensuring some time ago and realized MonadError is not powerful enough to implement it (See here). I'd like to propose something in the vein of MonadBracket described in this article. It might also help with providing a resource safe Parallel experience.
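A hedged sketch (hypothetical signatures, not taken from the article or any PR) of how a bracket-style class could recover the old ensuring generically:

import cats.MonadError

trait MonadBracket[F[_]] extends MonadError[F, Throwable] {
  def bracket[A, B](acquire: F[A])(use: A => F[B])(release: A => F[Unit]): F[B]

  // ensuring is just bracket with a trivial acquisition step
  def ensuring[A](fa: F[A])(finalizer: F[Unit]): F[A] =
    bracket(unit)(_ => fa)(_ => finalizer)
}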

Relevant gitter discussion:

LukaJCB @LukaJCB 14:22
Hey everyone, I kind of miss IO#ensuring and I realize why it’s gone, but maybe we should add something to the cats-effect type class hierarchy to ensure resource safety. It can’t be added to MonadError, so maybe something like the MonadBracket described here: https://www.fpcomplete.com/blog/2017/02/monadmask-vs-monadbracket I can’t really claim to be super knowledgable about this, so I’d love to hear what you think!

Fabio Labella @SystemFw 14:23
that's kind of a big issue
fwiw, I kind of prefer a resource safe F a la scalaz IO as well
I can see reasons to have a simple F, and delegate this aspect to Stream

LukaJCB @LukaJCB 14:24
Right now, without ensuring it’s pretty difficult to do

Fabio Labella @SystemFw 14:24
with ensuring as well

LukaJCB @LukaJCB 14:25

Michael Pilquist @mpilquist 14:25
+1 from me for MonadBracket -- @djspiewak may have objections

Fabio Labella @SystemFw 14:25
you need to change the internals of your IO type to support this
which I'd agree with
I suspect Daniel doesn't
I think our life in fs2 would be a tad easier if some thing (interruption, resource safety) would be at the F level

Michael Pilquist @mpilquist 14:26
Note that stream libraries still need their own, separate definition of bracketing as MonadBracket doesn't distribute over Stream

Fabio Labella @SystemFw 14:26
sure

LukaJCB @LukaJCB 14:26
I wouldn’t personally mind if IO would’t have it, but I’d love to see a type class that supports it. I’m sure something like Monix Task could make use of it

Fabio Labella @SystemFw 14:26
well, then you'd have IO not implementing one of the cats effect typeclasses
so it isn't a reference implementation anymore

Michael Pilquist @mpilquist 14:27
Yeah I'd definitely want IO to implement it
BTW, we're about to add an AtomicReference backed version of Ref to fs2 (called SyncRef) but maybe we should consider moving that to cats-effect and using it to implement the Parallel instance

LukaJCB @LukaJCB 14:29
I’d be +1 on that
I think @alexandru said he had a Proof of Concept on a Parallel instance usign Java atomic refs as well
I’ll create a ticket

Michael Pilquist @mpilquist 14:32
fs2 needs SyncRef for 0.10 final but otherwise, I think this stuff could be post cats-effect 1.0. Maybe with exception of parallel instance

Fabio Labella @SystemFw 14:33
btw interruption is crucial here
from what I know from haskell + reading scalaz IO + using my imagination
I'd like to know if this assumption is wrong
crucial to implement resource safety, that is
or rather, the two are closely linked

LukaJCB @LukaJCB 14:36
Interruption as in, cancelling a running computation?

Fabio Labella @SystemFw 14:38
yes
think about race
in the why-not-both example
(and some way of storing finalisers as well)
in any case I'd like to hear what Daniel and Alexandru think

LukaJCB @LukaJCB 14:40
Yeah, but we’d likely run into the same problem using Parallel, no?

Fabio Labella @SystemFw 14:40
that's kindof what I'm saying
interruption, resource safety and concurrency are linked

Bike shedding

  • Remove cats.effect.Attempt alias. It's currently private[effect] which is confusing for folks reading the code. We could make it public but I don't think an effect library should provide a minor syntax convenience like this.
  • Replacing Catchable with MonadError means we lose this method:

      def attempt[A](fa: F[A]): F[Attempt[A]]

    This is commonly used in FS2. I suppose the right thing to do here is PRing this method to MonadError in cats (see the sketch after this list).

  • We need unsafeRunAsync somewhere -- either on Effect or perhaps on a subtype of Effect if we want to limit where it might be invoked via parametric polymorphism.
  • Not sure how I feel about StdIO -- seems like a bucket of random things for which there's no guiding principle on deciding what should be added. E.g., how about randomUUID: IO[UUID]?
  • Change IO from a sealed trait to a sealed abstract class to help with binary compat issues in the future.
  • Note that we are switching to scala.util.control.NonFatal instead of fs2.util.NonFatal. I'm fine with this but both Paul and Daniel supported using a custom notion of NonFatal that didn't catch ControlException and some others.
  • Probably should think of a new name for IO#ensuring given the ensuring from Predef
  • To make sure I've got it right, Task.delay is now IO.apply and Task.apply doesn't exist, right?
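For what it's worth, a hedged sketch of how attempt can be expressed with MonadError primitives, so either a cats PR or a local helper in the meantime should be straightforward:

import cats.MonadError

def attempt[F[_], A](fa: F[A])(implicit F: MonadError[F, Throwable]): F[Either[Throwable, A]] =
  // wrap successes in Right, turn raised errors into Left
  F.handleErrorWith(F.map(fa)(a => Right(a): Either[Throwable, A]))(e => F.pure(Left(e): Either[Throwable, A]))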

Rewrite documentation to help new users

Current documentation is more like a technical description of differences to Haskell IO and Scalaz Task and is therefore not very useful for developers who just want to use cats-effect IO without previous experience with Scalaz/Haskell.

The documentation should cover the "why you should use IO" and offer practical examples for incorporating IO into applications.

Bare minimum improvement would be to give a prominent link to this blog post describing IO in cats-effect :) https://typelevel.org/blog/2017/05/02/io-monad-for-cats.html

Is there a reason not to have a recoverWith style function?

I found myself wanting this:

import cats.effect.IO

object IORecoverWith {
  private def raise[A]: PartialFunction[Throwable, IO[A]] = {
    case ex => IO.raiseError[A](ex)
  }
  implicit class IORecoverWith[A](io: IO[A]) {
    def recoverWith(f: PartialFunction[Throwable, IO[A]]): IO[A] =
      io.attempt.flatMap(_.fold(f.orElse(raise[A]), IO.pure))
  }
}

Is there a reason that something like this isn't built in?

tut it up

Has there been any thought regarding tut-style documentation?

At work I've had people ask me about the difference between scalaz.concurrent.Task and scala.concurrent.Future many times. I've written a tut-based doc that talks a bit about how Task is referentially transparent while Future is not and what the implications of that are. I'd be happy to adapt it to talk about IO instead. Would you be interested in such a doc in this project?

SemigroupK[IO]

I was writing a DSL that required boolean OR logic with early exit and used SemigroupK to achieve this; however, without an instance I was not able to use IO. It would be good to provide one.
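A hedged sketch of one possible instance (not an official one), where <+> falls back to the second action if the first fails:

import cats.SemigroupK
import cats.effect.IO

implicit val ioSemigroupK: SemigroupK[IO] = new SemigroupK[IO] {
  def combineK[A](x: IO[A], y: IO[A]): IO[A] =
    x.attempt.flatMap {
      case Right(a) => IO.pure(a) // first action succeeded, early exit
      case Left(_)  => y          // first action failed, try the second
    }
}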

Initiate Gitter channel

Folks, let's initiate a Gitter channel for cats-effect.

The general Cats channel is too noisy. I'd like our own channel because we can talk about implementation details among ourselves and also because I could keep notifications on for users seeking help.

The Cats channel is so noisy that I keep it on mute and I don't participate much in discussions there; it happens only once in a blue moon.

So it won't be too popular, but so what? That's a good thing.

Proposal: make IO.fail name consistent with Cats Eval or ApplicativeError

This is a pet peeve of mine, but the naming of failing data constructors isn't consistent across the board, so we've got:

  • Eval.raise
  • Applicative.raiseError
  • IO.fail
  • Future.failure
  • in Scalaz: Task.fail
  • in Monix: Task.raiseError

So either we rename IO.fail to raise or raiseError, or we push an issue in Cats for renaming of Eval.raise to Eval.fail. Personally I don't like raise.

And if we have raiseError in ApplicativeError, I see no reason to not use it everywhere.

EDIT: just did a PR in Cats - typelevel/cats#1634

Provide a way to `unroll` up to asynchronous boundary of IO

I would like to propose a feature that makes it possible to unroll the synchronous part of an F's execution steps, i.e. run all synchronous code within the F up to the first asynchronous boundary.

I'm not sure whether, given the laws etc., it should live on the Sync/Async interface, but essentially I would like to have something as simple as this (not 100% sure about the naming):

def syncView(f: F[A]): Either[F[A], A] 

where this returns on the Left an F[A] that still contains asynchronous execution steps, and on the Right the final value, if the F was composed purely of synchronous steps.

This may greatly help optimize code that implements interruption, which isn't necessary for a purely synchronous F but is necessary for asynchronous steps, and is achieved by racing the results of two Fs (one with the result, the other with the interruption signal).

Proposal: New Effects4s

I propose integration with the following type-classes:

package effects4s

trait Async[F[_]] {
  def async[A](register: (Either[Throwable, A] => Unit) => Unit): F[A]
}

trait Eventual[F[_]] {
  def unsafeExtractAsync[A](fa: F[A])(cb: Either[Throwable, A] => Unit): Unit

  // Optimization on the above for things that execute synchronously
  def unsafeExtractTrySync[A](fa: F[A])(cb: Either[Throwable, A] => Unit): Either[Unit, A]
}

The idea would be to implement seamless back and forth conversions in the APIs of participating libraries.

NOTES:

  • these should be implemented by data types, like cats.effect.IO and shouldn't be extended or anything
  • no inheritance, no Simulacrum
  • Async at most will match the laws of cats.effect.Async, or be more relaxed (not sure if I can describe useful laws for it) ... thinking to make Async[Future] possible
  • project name, organization, maintainers can be discussed
  • the plan is of course to eventually convince other projects to implement them
    (I would actually push a proposal for the standard library as well, though anything in the standard library is a tough sell)

Thoughts?
