
skunk's Introduction

Skunk


Skunk is a data access library for Scala + Postgres.

Please proceed to the microsite for more information.

Please drop a ⭐ if this project interests you. I need encouragement.

To contribute, see the contributing guide.

skunk's People

Contributors

armanbilge, asoltysik, astiob, bbstilson, bcarter97, busti, christopherdavenport, cranst0n, daenyth, gvolpe, ikr0m, ljoublanc, lorandszakacs, massimosiani, matthughes, mbaechler, mergify[bot], mpilquist, mwielocha, rolang, rossabaker, scala-steward, stephenjudkins, svalaskevicius, taig, tpolecat, typelevel-steward[bot], vbergeron-ledger, yilinwei, zsambek


skunk's Issues

Supported Postgres versions

Concerning Postgres versions:

  1. It should be clarified & documented which Postgres versions are supported
  2. Tests should be added for each supported version

switch to keypool?

The session pool is complicated and probably not yet 100% correct. Try switching to keypool while retaining the existing API.

Automatic reconnection

Right now, if the connection to the server is lost (e.g. when the PostgreSQL server is restarted), the application must be restarted to reconnect. I think this would be a nice feature to have :)

Here's the exception I get when I stop and then start the PostgreSQL server:

root[ERROR] java.lang.Exception: Fatal: EOF before 5 bytes could be read.Bytes
root[ERROR] 	at skunk.net.BitVectorSocket$$anon$2.$anonfun$readBytes$1(BitVectorSocket.scala:72)
root[ERROR] 	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:139)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
root[ERROR] 	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
root[ERROR] 	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
root[ERROR] 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
root[ERROR] 	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
root[ERROR] 	at cats.effect.internals.PoolUtils$$anon$2$$anon$3.run(PoolUtils.scala:51)
root[ERROR] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
root[ERROR] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
root[ERROR] 	at java.lang.Thread.run(Thread.java:748)

Commands not completing until next query/command is run

Here's a gist: https://gist.github.com/dpnova/06e3650b943d1f80321bbba8364e4f92

Basically I think a Sync needs to be sent here: https://github.com/tpolecat/skunk/blob/813025335411dd09584010f50ae02e7e37808902/modules/core/src/main/scala/net/protocol/Execute.scala#L33

This is the relevant section from the Postgres docs:

  At completion of each series of extended-query messages, the frontend should issue a Sync message. This parameterless 
  message causes the backend to close the current transaction if it's not inside a BEGIN/COMMIT transaction block ("close" 
  meaning to commit if no error, or roll back if error). Then a ReadyForQuery response is issued. The purpose of Sync is to 
  provide a resynchronization point for error recovery. When an error is detected while processing any extended-query message, 
  the backend issues ErrorResponse, then reads and discards messages until a Sync is reached, then issues ReadyForQuery 
  and returns to normal message processing. (But note that no skipping occurs if an error is detected while processing Sync — 
  this ensures that there is one and only one ReadyForQuery sent for each Sync.)

Sorry, I'm being rushed out the door for kids' birthdays, but I can do any further testing etc. here, or even the fix if we think the Sync is the answer. I'll be back a little later.

[Docs] Add a page on error handling

  • Include an example of the most common errors that you'd get
    • Incorrect types being most common
  • Screenshots of what the error output looks like (even if only to have as an easily-linked selling point)
  • A section on any built-in error combinators (like how doobie has attemptSomeSql and so on)
  • A tiny overview of the common cats methods that you'd use (especially since cats lacks in-depth error handling docs)

Incorrect completion for UPDATE

case Patterns.Update(s) => apply(Completion.Delete(s.toInt)) should be an Update completion in CommandComplete.

Happy to PR this if you want but I thought it probably should have a test and I wasn't sure on the best way to write it.

Use with Redshift

🔥  Postgres FATAL 42704 raised in set_config_option (/home/ec2-user/padb/src/pg/src/backend/utils/misc/guc.c:23873)
🔥  
🔥    Problem: Unrecognized configuration parameter "IntervalStyle".

IntervalStyle isn't supported by Redshift. I found some documentation that states the default parameters are not overridable at the moment, but I'd be happy to take a crack at this with some guidance in the right direction.

For example, this is what a postgres library for Julia offers: https://github.com/invenia/LibPQ.jl/blob/dc200995d2c5c928952dfe7b80d7aa70a8421956/docs/src/pages/faq.md

[Proposal] Track connection pool metrics via jmx

I think a blocker for using Skunk in production is the ability to track Hikari-equivalent metrics.

Specifically, Hikari exposes these:

  • Idle Connection count
  • Active Connections (in use)
  • Total Connections
  • The number of threads waiting for a connection

They may not translate exactly to the nonblocking API (especially 'number of threads waiting'), but getting something near parity would be massively helpful; see the sketch below.
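
As a starting point, the snapshot might be as simple as the following (a sketch; the names are hypothetical):

// Hypothetical gauge snapshot mirroring Hikari's; in a nonblocking pool,
// "waiting" would count fibers blocked on a lease rather than threads.
final case class PoolMetrics(idle: Int, active: Int, total: Int, waiting: Int)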

As a bonus, LISTEN-related metrics may be worth tracking too, e.g. the number of currently active LISTEN streams.

Some conversation for context: https://gitter.im/skunk-pg/Lobby?at=5f31708aa4768b68568515af

[Request] Add a `check` utility for test purposes

Basically similar to doobie's approach: take a Query as input and validate, against a live server connection, that the query as written matches the declared types.

It shouldn't require inserting any rows, only that the tables involved exist.

Full-fledged integration tests for certain queries are often hard to write. They might involve many foreign-key relations, and thus a lot of distracting and fiddly test setup, or they might be highly parameterized queries (in terms of the SQL structure; imagine "list from a table, maybe sort on a user-selected property, sometimes filter some other property by a user input", etc.).
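
As a minimal sketch of the idea: assuming skunk's prepare (which returns a Resource in 0.x) performs the server-side Describe that raises ColumnAlignmentException on a mismatch, preparing and immediately releasing the statement already acts as a cheap check:

import cats.effect.IO
import skunk._

// Hypothetical helper: prepare the statement and discard it. Any
// codec/column mismatch surfaces during the Describe round-trip,
// and no rows need to exist in the tables involved.
def check[A, B](s: Session[IO])(q: Query[A, B]): IO[Unit] =
  s.prepare(q).use(_ => IO.unit)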

Meta-Issue for 0.1

Doc - Tutorial

  • Channels
  • Command
  • Query
  • Setup
  • Tracing
  • Transactions

Doc - Reference

  • Encoders
  • Decoders
  • Concurrency
  • Fragments
  • Identifiers
  • SchemaTypes
  • Sessions
  • TwiddleLists

Input Tests

  • atomic
  • optional
  • record
  • list

Output Tests

  • atomic
  • optional
  • record
  • list

Codec Tests

  • boolean
  • enum
  • numeric
  • temporal
  • text

Protocol Tests

  • Bind
  • Close
  • Describe
  • Exchange
  • Execute
  • Parse
  • Prepare
  • Query
  • Startup
  • Unroll

Session Tests

  • channel
  • transaction

Skunk's raison d'être

This is not an issue, just a question: what is the purpose of skunk, given that we have doobie? What's the difference? Why would one use skunk (with a PostgreSQL backend) and not doobie?

Thank you

Nonterminating compilation on wide Codec with mismatched types in Encoder

Skunk version 0.0.20

I have a Codec that is 31 columns wide, and if I create an Encoder that doesn't match its types the compiler doesn't terminate.

  import skunk._
  import skunk.implicits._
  import skunk.codec.all._
  import java.time.LocalDateTime

   val transactionCodec =
      text ~ // accountId: String
        text.opt ~ // accountOwner: Option[String]
        text.opt ~ // address: Option[String]
        float8 ~ // amount: Double
        timestamp.opt ~ // authorizedDate: Option[LocalDateTime]
        text.opt ~ // byOrderOf: Option[String]
        text ~ // categoryId: String
        text.opt ~ // city: Option[String]
        text.opt ~ // country: Option[String]
        timestamp ~ // date: LocalDateTime
        text.opt ~ // isoCurrencyCode: Option[String]
        float8.opt ~ // lat: Option[Double]
        float8.opt ~ // lon: Option[Double]
        text.opt ~ // merchantName: Option[String]
        text ~ // name: String
        text ~ // originalDescription: String
        text.opt ~ // payee: Option[String]
        text.opt ~ // payer: Option[String]
        text ~ // paymentChannel: String
        text.opt ~ // paymentMethod: Option[String]
        text.opt ~ // paymentProcessor: Option[String]
        bool ~ // pending: Boolean
        text.opt ~ // pendingTransactionId: Option[String]
        text ~ // postalCode: String,
        text.opt ~ // ppdId: Option[String],
        text.opt ~ // reason: Option[String]
        text.opt ~ // referenceNumber: Option[String]
        text.opt ~ // storeNumber: Option[String]
        text.opt ~ // transactionCode: Option[String]
        text ~ // transactionId: String
        text.opt ~ // unofficialCurrencyCode: Option[String]
        text // userId: String


    case class Transaction2(
    accountId: String,
    accountOwner: Option[String],
    address: Option[String],
    amount: Double,
    authorizedDate: Option[LocalDateTime],
    byOrderOf: Option[String],
    categoryId: String,
    city: Option[String],
    country: Option[String],
    date: LocalDateTime,
    isoCurrencyCode: Option[String],
    lat: Option[Double],
    lon: Option[Double],
    merchantName: Option[String],
    name: String,
    originalDescription: String,
    payee: Option[String],
    payer: Option[String],
    paymentChannel: String,
    paymentMethod: Option[String],
    paymentProcessor: Option[String],
    pending: Boolean,
    pendingTransactionId: Option[String],
    postalCode: String,
    ppdId: Option[String],
    reason: Option[String],
    referenceNumber: Option[String],
    storeNumber: Option[String],
    transactionCode: Option[String],
    transactionId: String,
    unofficialCurrencyCode: Option[String],
    userId: String
)    
    val transactionEncoder =
     transactionCodec.values.contramap((t: Transaction2) =>
        t.accountId ~
          t.accountOwner ~
          t.address ~
          t.amount ~
          t.authorizedDate ~
          t.byOrderOf ~
          t.categoryId ~
          t.city ~
          t.country ~
          t.date ~
          t.isoCurrencyCode ~
          t.lat  ~
          t.lon ~
          t.merchantName ~
          t.name ~
          t.originalDescription ~
          t.payee ~
          t.payer ~
          t.paymentChannel ~
          t.paymentMethod ~
          t.paymentProcessor ~
          t.pending ~
          t.pendingTransactionId ~
          t.postalCode ~
          t.ppdId ~
          t.reason ~
          t.referenceNumber ~
          t.storeNumber ~
          t.transactionCode ~
          t.transactionId ~
          t.unofficialCurrencyCode ~
          t.userId 
      )

This compiles fine in a console, but when you change one of the types in the Encoder, e.g. float8 -> float4, the compiler fails to terminate.

timestamptz codec with UTC

I need to save a java.time.Instant to a Postgres 10 database. The server time zone is Moscow.

I have a table:

CREATE TABLE public.table1
(
    id integer NOT NULL DEFAULT nextval('manufacture_bindings_id'::regclass),
    manufacture_id integer NOT NULL,
    channel_id character varying COLLATE pg_catalog."default" NOT NULL,
    last_sync_time timestamp with time zone,
    CONSTRAINT "ManufactureBindings_pkey" PRIMARY KEY (manufacture_id)
)

I created a Codec:

val instantCodec: Codec[Instant] = timestamptz.imap(_.toInstant)(_.atOffset(ZoneOffset.UTC))

Then I do:

case class CaseClass1(id: Int, manufactureId: Int, channelId: String, lastSyncTime: Option[Instant])

val instant = Instant.parse("2020-11-23T10:24:31.000Z")

val cs1 = CaseClass1(1, 1, "2", Some(instant))

val command: Command[CaseClass1] = 
      sql"""
        UPDATE public.table1
        SET    last_sync_time = ${instantCodec.opt}
        WHERE  id = $int4
      """.command
        .contramap { case CaseClass1(id, _, _, lastSyncTime) => lastSyncTime ~ id }

I expect to see "2020-11-23 13:24:31+03" in table1, but I see "2020-11-23 10:24:31+03".

This looks like a bug.

If I change the codec like this:

val instantCodec: Codec[Instant] = timestamptz.imap(_.toInstant)(_.atOffset(ZoneOffset.of("+1")))

the value in table1 is "2020-11-23 13:24:31+03".
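
To make the expectation concrete with plain java.time (runnable as-is): rendering the stored instant at the server's +03:00 offset must yield 13:24:31; the observed 10:24:31+03 denotes a different instant, three hours earlier.

import java.time.{Instant, ZoneOffset}

// The same instant rendered at the two offsets involved:
val i = Instant.parse("2020-11-23T10:24:31.000Z")
i.atOffset(ZoneOffset.ofHours(3)) // 2020-11-23T13:24:31+03:00 (the expected cell value)
i.atOffset(ZoneOffset.UTC)        // 2020-11-23T10:24:31Z (the stored local part, misread at +03)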

[Request] Add macro support for intellij

Currently IntelliJ isn't able to infer the types produced by the sql macro.

This can be fixed by:

  1. Defining an intellij macro expansion/typing extension.
  2. Publishing those extensions as a jar with some metadata in the manifest
  3. Modifying the META-INF section of skunk's jar to point at those extensions

(I know this is a "won't fix" from Rob, I'm just filing this here for someone else to work on)

Feedback on `Leak`

As requested, a bit of feedback (or maybe a lot, I tend to blabber) on Leak, and why the current implementation wouldn't be safe to expose in cats-effect.

Two preliminary points:

  • For a possible implementation of Leak, this is the approach proposed in cats-effect typelevel/cats-effect#419
  • I don't think Leak is broken in this codebase, because everything is rewrapped in Resource.make, which basically makes it uninterruptible, but I haven't checked too thoroughly.

With that being said, this is the code in question. I've only modified it to use cf.start { fa } as opposed to fa.start, because it makes things clearer in this case.

sealed abstract case class Leak[F[_], A](value: A, release: F[Unit])

object Leak {

  private def chain(t: Throwable): Exception =
    new RuntimeException("Resource allocation failed. See cause for details.", t)

  def of[F[_], A](rfa: Resource[F, A])(
    implicit cf: Concurrent[F]
  ): F[Leak[F, A]] =
    for {
      du <- Deferred[F, Unit]
      mv <- MVar[F].empty[Either[Throwable, A]]
      _  <- cf.start {
                  rfa
                      .use {  a => mv.put(Right(a)) *> du.get }
                      .handleErrorWith { e => mv.put(Left(e))  *> du.get }
              }
      e  <- mv.take
      a  <- e match {
              case Right(a) => cf.pure(new Leak(a, du.complete(())) {})
              case Left(e)  => cf.raiseError(chain(e))
            }
    } yield a

}

So, let's start highlighting potential problems/improvements.


The MVar there is completed at most once; it could be another Deferred.


If use fails with an error, which can happen if a step in the resource acquisition fails, use guarantees that all the finalisers are run, so it's unclear why the handleErrorWith branch needs to wait on du.get.
Worse, it sets mv to an error, which means we hit the Left branch below, which however does not complete du. This means that while resources have been released, we have leaked a Fiber, which is now stuck waiting on du.get.


Let's say we successfully start, and the resource is acquired correctly, put in mv, and waiting for du.get to be completed, which will in turn trigger the finalisation.
After start, there are several points in which the code below it could be interrupted, which means that du.complete(()) is not called, use doesn't complete, and the finalisers aren't run: this is a resource leak, and the most serious problem.
The interruption points are:
- While waiting on take: after the acquisition is started, but before it's completed. take (and all operations that involve waiting) introduce async boundaries, which can be interrupted.
- In cats-effect 1.0.1 (unreleased), in between flatMaps, so between start and take, and between take and the match.


This is more subtle, but let's take this code using Resource:

    Resource
      .make(IO(println("open")))(_ =>
        IO.sleep(1.second) >> IO(println("close")))
      .use(_ => IO.unit) >> IO(println("next"))

The output is this:

open
close
next

That is, the code waits until the finalisers are done before moving forward. This is important for two reasons:

  1. It's easier to not wait for something designed to wait (via start), than to wait for something designed not to wait (synchronisation with extra Deferreds, sometimes impossible if not exposed)
  2. In cases where you need to release things in order, not waiting introduces non determinism, and possibly closing something still in use by something else.

In such a scenario, even on the happy path the code above can be problematic, because the returned F[Unit] finaliser just unblocks the release process by calling .complete(()), but doesn't wait for it to finish.


Possible solutions: most issues stem from the fact that you need a guarantee that du.complete(()) will happen no matter what; you need to use bracket for that. The final issue with back pressure of finalisers is more subtle and would require extra synchronisation with another Deferred, which however means potentially more problems. The approach taken in the cats-effect PR instead interprets the Resource ADT, building a chain of brackets without relying on concurrency.

Support for Scala 2.13.0

Hi Rob,

I guess the dependency on natchez is easily solvable (by publishing for 2.13.x). The only one I see as problematic is scodec-cats. Any other blockers? I'd be happy to collaborate to get this done.

Thanks.

Timestamp value with trailing zeroes is truncated, causing DateTimeParseException

Given the following table definition:

DROP TABLE IF EXISTS foo CASCADE;
CREATE TABLE foo
(
    bar               text NOT NULL,
    updated_on        timestamp(3) without time zone DEFAULT transaction_timestamp() NOT NULL
);

and this initialization script:

DELETE FROM foo;
INSERT INTO foo(bar) VALUES ('bar');

the following code:

package my.foo

import java.time.format.DateTimeFormatter
import java.time.LocalDateTime

import scala.concurrent.duration._
import skunk._
import skunk.implicits._
import skunk.codec.all._
import natchez.Trace.Implicits.noop
import cats.Functor
import cats.implicits._
import cats.effect._
import fs2._
import java.time.LocalDate
import skunk.data.Completion

object FooApp extends IOApp {
  type SessionF[F[_]] = Resource[F, Session[F]]

  def run(args: List[String]): IO[ExitCode] = {
    val io = dbSession[IO] use { implicit pool =>
      stream[IO].compile.drain
    }

    io.as(ExitCode.Success)
  }

  def stream[F[_]: ContextShift: ConcurrentEffect: Timer: SessionF] =
    process[F]

  def process[F[_]: ConcurrentEffect: SessionF]: Stream[F, Unit] = for {
    _ <- Stream.eval(updateFoo[F])
    _ <- Stream.eval(getUpdated[F])
  } yield ()

  def dbSession[F[_]: ContextShift: ConcurrentEffect]: Resource[F, SessionF[F]] = Session.pooled(
    host = "localhost",
    port = 5432,
    user = "user",
    database = "",
    password = none,
    max = 5
  )

  val queryUpdated: Query[Void, LocalDateTime] = sql"SELECT updated_on FROM foo".query(timestamp(3))

  def getUpdated[F[_]](implicit F: Sync[F], pool: SessionF[F]): F[LocalDateTime] = pool use { session =>
    for {
      updated <- session.unique(queryUpdated)
      _ <- F.delay(println(s"Updated: $updated"))
    } yield updated
  }

  def commandFoo: Command[LocalDateTime] =
    sql"UPDATE foo SET updated_on = ${timestamp(3)}".command

  def updateFoo[F[_]](implicit F: Sync[F], pool: SessionF[F]): F[Completion] = pool use { session =>
    for {
      formatter <- F.delay { DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS") }
      s = "2020-03-04 16:57:50.880"
      dateTime <- F.delay { LocalDateTime.parse(s, formatter) }
      _ <- F.delay(println(s"Updating: $dateTime"))
      resource = session prepare commandFoo
      updated = resource use { pc => pc.execute(dateTime) }
      result <- updated
    } yield result
  }
}

causes DateTimeParseException with Skunk 0.0.7:

updated_on  timestamp(3)  ->  2020-03-04 16:5⋯  ├── java.time.format.DateTimeParseException: Text '2020-03-04 16:57:50.88' could not be parsed at index 20
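
The failure is reproducible with plain java.time: Postgres trims trailing zeros from the text form, so a fixed-width pattern rejects the shortened fraction:

import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// ".880" comes back from Postgres as ".88"; a fixed "SSS" pattern then
// fails exactly as reported, at index 20 (the start of the fraction digits).
val fixed = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS")
LocalDateTime.parse("2020-03-04 16:57:50.88", fixed) // throws DateTimeParseException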

Insert completion when oids are enabled

Insert completion message parsing isn't going to work when table oids are enabled, so need to follow up here. Also see what happens when the statement causes multiple rows to be inserted, since they'll have different oids. May need to look at the PG source to figure this out.

See #72 for context.

WHERE fragment with optional clauses

Given two fragments

  val nameLike: Fragment[String]        = sql"""name LIKE $varchar"""
  val populationLessThan: Fragment[Int] = sql"""population < $int4"""

is there a way to build a fragment that behaves like this one?

  /**
   * Should produce:
   * 1. "" if both inputs are None
   * 2. "WHERE name LIKE $1" if first input is Some
   * 3. "WHERE population < $2" if second input is Some
   * 4. "WHERE name LIKE $1 AND population < $2" if both inputs are Some
   */
  val where: Fragment[Option[String] ~ Option[Int]] = ???

Related combinator from Doobie - whereAndOpt
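
One direction, sketched below under the assumption that skunk's AppliedFragment (a fragment paired with its already-applied argument) carries a Monoid instance and that Fragment#apply lifts an argument into it; names and details are illustrative, not a finished API:

import cats.syntax.all._
import skunk._
import skunk.codec.all._
import skunk.implicits._

// Sketch of a whereAndOpt-style combinator: apply each optional argument to
// its fragment, keep the defined ones, and intercalate with AND. All of this
// assumes AppliedFragment and its Monoid as found in recent skunk versions.
def whereAndOpt(name: Option[String], pop: Option[Int]): AppliedFragment = {
  val nameLike           = sql"name LIKE $varchar"
  val populationLessThan = sql"population < $int4"
  val conds: List[AppliedFragment] =
    List(name.map(nameLike.apply), pop.map(populationLessThan.apply)).flatten
  if (conds.isEmpty) AppliedFragment.empty
  else sql" WHERE ".apply(Void) |+| conds.intercalate(sql" AND ".apply(Void))
}

The resulting AppliedFragment could then be appended to a base statement and turned into a query or command at the call site.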

StackOverflowError while preparing a statement

I tried to minimize the failing code:

import cats.effect._
import skunk._
import skunk.codec.all._
import skunk.implicits._

object Main extends IOApp {

  val session: Resource[IO, Session[IO]] = ???

  def run(args: List[String]): IO[ExitCode] = session.use { s =>
    val list   = List.fill(12000)("whatever")
    val values = text.values.list(list.length)

    val cmd = sql"VALUES $values".command

    // The overflow occurs while preparing the statement, before execution.
    s.prepare(cmd).use { ps =>
      IO.pure(ExitCode.Success)
    }
  }
}

Result:

ERROR java.lang.StackOverflowError
ERROR 	at scodec.bits.ByteVector.go$2(ByteVector.scala:230)
ERROR 	at scodec.bits.ByteVector.take(ByteVector.scala:237)
ERROR 	at scodec.bits.BitVector$.toBytes(BitVector.scala:1691)
ERROR 	at scodec.bits.BitVector$Bytes.take(BitVector.scala:1699)
ERROR 	at scodec.bits.BitVector$Bytes.take(BitVector.scala:1695)
ERROR 	at scodec.codecs.IntCodec.decode(IntCodec.scala:31)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.$anonfun$decode$2(Decoder.scala:56)
ERROR 	at scodec.Attempt$Successful.flatMap(Attempt.scala:113)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.$anonfun$decode$2(Decoder.scala:56)
ERROR 	at scodec.Attempt$Successful.flatMap(Attempt.scala:113)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.$anonfun$decode$2(Decoder.scala:56)
ERROR 	at scodec.Attempt$Successful.flatMap(Attempt.scala:113)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.decode(Decoder.scala:56)
ERROR 	at scodec.Decoder$$anon$2.$anonfun$decode$2(Decoder.scala:56)
... and so on

Report error earlier when limit on number of parameters in a statement is exceeded

Given a command that performs multiple inserts in one statement using the values.list(n) encoder combinator, similar to the following:

  case class Record(id: UUID, name: String, active: Boolean)

  def recordsEncoder(parentId: UUID, n: Int): Encoder[List[Record]] = (uuid ~ uuid ~ text ~ bool).contramap((rc: Record) => rc.id ~ parentId ~ rc.name ~ rc.active).values.list(n)

  def commandInsertRecords(parentId: UUID, n: Int): Command[List[Record]] =
    sql"INSERT INTO my.records (id, parent_id, name, active) VALUES ${recordsEncoder(parentId, n)}".command

  def insertRecords[F[_]](parentId: UUID, records: Vector[Record])(implicit F: Sync[F], logger: Logger[F], pool: SessionF[F]): F[Unit] = pool use { session =>
    session.transaction.use { trans =>
      val resource = session prepare commandInsertRecords(parentId, records.size)
      val updated = resource use { pc => pc.execute(records.toList) }
      for {
        _ <- logger.info(s"Inserting Records, size: ${records.size}")
        completion <- updated
        _ <- logger.info(s"Completed Records insertion: $completion")
      } yield ()
    }
  }

there's a limit on the number of parameters one can have in a statement (more specifically, it's a Postgres frontend/backend protocol restriction: the Parse message takes a length-prefixed array of parameter types, and the prefix is an Int16). When this limit is exceeded, an error message similar to the following is thrown:

java.lang.IllegalArgumentException: 70000 is greater than maximum value 32767 for 16-bit signed integer
	at scodec.Attempt$Failure.require(Attempt.scala:142)
	at scodec.Attempt$Failure.require(Attempt.scala:128)
	at skunk.net.MessageSocket$$anon$1.$anonfun$send$2(MessageSocket.scala:68)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:142)
	at cats.effect.internals.IORunLoop$.startCancelable(IORunLoop.scala:41)
	at cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:88)
	at cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:67)
	at cats.effect.internals.Trampoline.startLoop(Trampoline.scala:35)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.super$startLoop(TrampolineEC.scala:89)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.$anonfun$startLoop$1(TrampolineEC.scala:89)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:89)
	at cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
	at cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:42)
	at cats.effect.internals.Callback$AsyncIdempotentCallback.apply(Callback.scala:136)
	at cats.effect.internals.Callback$AsyncIdempotentCallback.apply(Callback.scala:125)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$unsafeRegister$1(Deferred.scala:201)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$unsafeRegister$1$adapted(Deferred.scala:201)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$notifyReadersLoop$1(Deferred.scala:236)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:359)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:380)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:323)
	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:35)
	at cats.effect.internals.PoolUtils$$anon$2$$anon$3.run(PoolUtils.scala:52)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

It would be nice to catch this and report the error earlier.
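
Until then, a client-side guard is straightforward; a sketch reusing the definitions above (the 8191-row ceiling follows from 32767 / 4 parameters per Record):

import cats.effect.Sync
import cats.syntax.all._
import java.util.UUID

// Split the rows so each INSERT stays under the Int16 parameter limit.
val maxRowsPerInsert: Int = 32767 / 4 // 4 parameters per Record

def insertAllChunked[F[_]](parentId: UUID, records: Vector[Record])(
  implicit F: Sync[F], logger: Logger[F], pool: SessionF[F]
): F[Unit] =
  records.grouped(maxRowsPerInsert).toList.traverse_(insertRecords(parentId, _))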

Numeric codec doesn't seem to work

I get this exception when trying to have a column "total" of type numeric:

skunk.exception.PostgresErrorException: Column "total" is of type numeric but expression is of type character varying.

After a bit of digging I found this:

https://stackoverflow.com/questions/45873514/postgresql-hint-you-will-need-to-rewrite-or-cast-the-expression-column-state

Apparently the SQL statement should specify the type when using setString. Something like:

INSERT INTO foo (bar, total)
VALUES (?, ?::numeric)

Have you stumbled upon this issue? If you're okay with it I can try to fix it and send a PR :)

Codecs for nested and fixed-length arrays.

I'm thinking about how to build Codecs for arbitrary PostgreSQL array types, including arbitrary-depth nested arrays and fixed-length arrays (mapped using homogeneously typed Scala tuples).

A general solution might be able to use a signature like this:

def array[A](scalarCodec: Codec[A]): Codec[List[A]]

But Codec[A] is Semigroupal and exposes types: List[Type]. For this use case, we only want to admit instances of Codec[A] which map to scalars and have not been built using the twiddle syntax.

Does including that kind of support in the base library make sense, or clutter the API?
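
For illustration, usage under that signature would look like this (hypothetical, since the combinator does not exist yet):

import skunk._
import skunk.codec.all._

// Proposed, not implemented: a codec for int4[] built from the scalar codec.
def array[A](scalarCodec: Codec[A]): Codec[List[A]] = ???

val intArray: Codec[List[Int]] = array(int4)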

Allow creating session pool in a different effect than the sessions

Session.pooled currently takes one effect, but if you want to use e.g. tracing in Kleisli[F, Span[F], *], you'd need a Span[F] just to allocate the session pool (an external resource). If possible, there should be a way to use two different effects, so that the outer resource can be allocated without a span.
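
A hypothetical shape for such an API (names and signature are illustrative only):

import cats.effect.Resource
import skunk.Session

// The pool itself is allocated in F, while each leased session runs in G,
// e.g. Kleisli[F, Span[F], *], so no Span is needed to build the pool.
def pooledF[F[_], G[_]](
  host: String, port: Int, user: String, database: String, max: Int
): Resource[F, Resource[G, Session[G]]] = ???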

switch to munit + munit-cats-effect

We can probably do this without having to rewrite all the tests. I want to switch to property-based tests, and it would be nice to add law-checking where possible. Would also be nice to add expecty.

Encoding problem in MessageSocket/UnknownMessage (US-ASCII/UTF-8/iso_8601)

Referring to the discussion on gitter (starting at @galy, May 01 18:00):
The exception is:
java.lang.IllegalArgumentException: 3: US-ASCII cannot decode string from '0x4d6b65696e6520426572656368746967756e672066c3bc7220546162656c6c6520636f756e747279'
at skunk.net.MessageSocket$$anon$1.$anonfun$receiveImpl$2(MessageSocket.scala:54)

I have my own database. In my IDE (IntelliJ), in the project "skunk", the output of Session.single(....debug = true) is:

.... ← ParameterStatus(client_encoding,UTF8)
.... ← ParameterStatus(IntervalStyle,iso_8601)
... ← ParameterStatus(server_encoding,UTF8)

In pgAdmin4, the output of "show client_encoding;" is UNICODE and of "show server_encoding;" is "UTF-8". In the definition of the database "world" there are fields collation and character type; the value of both is German.Germany 1252.

In the IDE (IntelliJ), the global file encoding is UTF-8 but the project encoding is windows-1252; println(sys.props("file.encoding")) => UTF-8.

In order to test a bugfix you need an (error) message from Postgres containing (for example) an umlaut (ö, ü, ä).

Fix temporal codecs (and test them)

Temporal codecs currently do not work too well.

For example:
https://github.com/tpolecat/skunk/blob/346d42bcb3479c7ed9e59ae77e59c79c92d7d1ad/modules/core/src/main/scala/codec/TemporalCodecs.scala#L23
This uses the format method on String instead of a DateTimeFormatter.

https://github.com/tpolecat/skunk/blob/346d42bcb3479c7ed9e59ae77e59c79c92d7d1ad/modules/core/src/main/scala/codec/TemporalCodecs.scala#L33
I think this isn't quite right; it wouldn't decode e.g. 02:03:06.33 or 02:03:06, and these values are of type time(6). DateTimeFormatterBuilder offers quite a nice way to build these; I'm thinking something along these lines:

import java.time.format.{DateTimeFormatter, DateTimeFormatterBuilder}
import java.time.temporal.ChronoField._

private def timeFormatter(precision: Int): DateTimeFormatter = {
  val requiredPart = new DateTimeFormatterBuilder()
    .appendValue(HOUR_OF_DAY, 2)
    .appendLiteral(':')
    .appendValue(MINUTE_OF_HOUR, 2)
    .appendLiteral(':')
    .appendValue(SECOND_OF_MINUTE, 2)

  if (precision > 0)
    requiredPart
      .optionalStart()
      .appendFraction(MILLI_OF_SECOND, 0, precision, true)
      .optionalEnd()
      .toFormatter()
  else
    requiredPart.toFormatter()
}

If I get a green light on this I would love to provide a PR this week.
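
For a quick illustration of the intended behaviour, reusing timeFormatter from the sketch above (plain java.time):

import java.time.LocalTime

// Truncated fractions now parse, which a fixed "HH:mm:ss.SSS"-style
// pattern would reject:
val fmt = timeFormatter(3)
LocalTime.parse("02:03:06.33", fmt) // 02:03:06.330
LocalTime.parse("02:03:06", fmt)    // 02:03:06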

roundtripTest for interval with precision is failing

I wanted to increase the coverage for TemporalCodecs by adding a roundtripTest for every valid interval-with-precision codec.
However, it fails with the following exception:

🍋  tests.codec.TemporalCodecTest
   ? interval(0) (gmap) (Asserted and actual column types differ.)
   ? interval(0) (Asserted and actual column types differ.)

🔥  
🔥  ColumnAlignmentException
🔥  
🔥    Problem: Asserted and actual column types differ.
🔥       Hint: The decoder you provided is incompatible with the output columns for
🔥             this query. You may need to add or remove columns from the query or
🔥             your decoder, change their types, or add explicit SQL casts.
🔥  
🔥  The statement under consideration was defined
🔥    at /home/zsambek/dev/opensource/skunk/modules/tests/src/test/scala/codec/CodecTest.scala:35
🔥  
🔥    select $1::interval(0)
🔥  
🔥  The actual and asserted output columns are
🔥  
🍋  tests.codec.TemporalCodecTest
   ? interval(0) (gmap) (Asserted and actual column types differ.)
🔥    interval  interval  ->  interval(0)  ── type mismatch
🔥  

skunk.exception.ColumnAlignmentException: Asserted and actual column types differ.

🔥  
🔥  ColumnAlignmentException
🔥  
🔥    Problem: Asserted and actual column types differ.
🔥       Hint: The decoder you provided is incompatible with the output columns for
🔥             this query. You may need to add or remove columns from the query or
🔥             your decoder, change their types, or add explicit SQL casts.
🔥  
🔥  The statement under consideration was defined
🔥    at /home/zsambek/dev/opensource/skunk/modules/tests/src/test/scala/codec/CodecTest.scala:35
🔥  
🔥    select $1::interval(0)
🔥  
🔥  The actual and asserted output columns are
🔥  
🔥    interval  interval  ->  interval(0)  ── type mismatch
Test suite aborted
🔥  
Test suite aborted

skunk.exception.ColumnAlignmentException: Asserted and actual column types differ.
Execution took -2ms

The test case for this one was: roundtripTest(interval(0))(intervals: _*)

Documentation points to inaccessible file

Problem

Skunk uses the generic paradox theme, which adds the following line at the end of all generated pages:
The source code for this page can be found here.
The problem is that it points to an inaccessible file (e.g. https://github.com/tpolecat/skunk/tree/v0.0.15/modules/docs/target/mdoc/index.md).

Background

This problem occurs because the project generates the type-checked markdown under the target folder, which is then used as the input for the paradox command.

Fix

I think we should modify the generic paradox theme as described here: https://developer.lightbend.com/docs/paradox/current/customization/theming.html
We can decide whether to remove the whole line or to replace target/mdoc with src/main/paradox in the href.

Socket disconnect on large TLS packets with Postgres 12.3

Receiving an incomplete message here, which can only happen if the socket has terminated. Need to investigate.

TLS: wrap result: Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 172
 → StartupMessage(tttazoalvxfmkd,dsh6hqhoue38b)
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_TASK
bytesConsumed = 62 bytesProduced = 0
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_TASK
bytesConsumed = 749 bytesProduced = 0
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_TASK
bytesConsumed = 338 bytesProduced = 0
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_TASK
bytesConsumed = 9 bytesProduced = 0
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_WRAP
bytesConsumed = 0 bytesProduced = 0
TLS: wrapHandshake result: Status = OK HandshakeStatus = NEED_WRAP
bytesConsumed = 0 bytesProduced = 75
TLS: wrapHandshake result: Status = OK HandshakeStatus = NEED_WRAP
bytesConsumed = 0 bytesProduced = 6
TLS: wrapHandshake result: Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 45
TLS: unwrapHandshake result: Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 6 bytesProduced = 0
TLS: unwrapHandshake result: Status = OK HandshakeStatus = FINISHED
bytesConsumed = 45 bytesProduced = 0
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 143 bytesProduced = 172
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 42 bytesProduced = 13
 ← AuthenticationMD5Password([B@1a9e5980)
 → PasswordMessage(...)
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 41 bytesProduced = 70
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 385 bytesProduced = 356
 ← AuthenticationOk
 ← ParameterStatus(application_name,)
 ← ParameterStatus(client_encoding,UTF8)
 ← ParameterStatus(DateStyle,ISO, MDY)
 ← ParameterStatus(integer_datetimes,on)
 ← ParameterStatus(IntervalStyle,iso_8601)
 ← ParameterStatus(is_superuser,off)
 ← ParameterStatus(server_encoding,UTF8)
 ← ParameterStatus(server_version,12.3 (Ubuntu 12.3-1.pgdg16.04+1))
 ← ParameterStatus(session_authorization,tttazoalvxfmkd)
 ← ParameterStatus(standard_conforming_strings,on)
 ← ParameterStatus(TimeZone,Etc/UTC)
 ← BackendKeyData(21717,586388467)
 ← ReadyForQuery(Idle)
 → Query(
          SELECT oid typid, typname, typarray, typrelid
          FROM   pg_type
          WHERE typnamespace IN (
            SELECT oid
            FROM   pg_namespace
            WHERE nspname = ANY(current_schemas(true))
          )
        )
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 252 bytesProduced = 281
TLS: unwrap result: Status = BUFFER_UNDERFLOW HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 8221 bytesProduced = 8192
 ← RowDescription(Field(typid, 26); Field(typname, 19); Field(typarray, 26); Field(typrelid, 26))
 ← RowData(List(Some(16), Some(bool), Some(1000), Some(0)))
 ← RowData(List(Some(17), Some(bytea), Some(1001), Some(0)))
 ← RowData(List(Some(18), Some(char), Some(1002), Some(0)))
 ← RowData(List(Some(19), Some(name), Some(1003), Some(0)))
...
 ← RowData(List(Some(12038), Some(pg_foreign_server), Some(0), Some(1417)))
 ← RowData(List(Some(12039), Some(pg_user_mapping), Some(0), Some(1418)))
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 4039 bytesProduced = 4010
 ← RowData(List(Some(12040), Some(pg_foreign_table), Some(0), Some(3118)))
 ← RowData(List(Some(12041), Some(pg_policy), Some(0), Some(3256)))
 ← RowData(List(Some(12042), Some(pg_replication_origin), Some(0), Some(6000)))
...
 ← RowData(List(Some(12310), Some(pg_replication_origin_status), Some(0), Some(12309)))
 ← CommandComplete(Select(286))
 ← ReadyForQuery(Idle)
 → Query(
          SELECT attrelid relid, atttypid typid
          FROM   pg_class
          JOIN   pg_attribute ON pg_attribute.attrelid = pg_class.oid
          WHERE  relnamespace IN (
            SELECT oid
            FROM   pg_namespace
            WHERE  nspname = ANY(current_schemas(true))
          )
          AND    attnum > 0
          ORDER  BY attrelid DESC, attnum ASC
        )
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 391 bytesProduced = 420
TLS: unwrap result: Status = BUFFER_UNDERFLOW HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
TLS: unwrap result: Status = BUFFER_UNDERFLOW HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 8221 bytesProduced = 8192
 ← RowDescription(Field(relid, 26); Field(typid, 26))
 ← RowData(List(Some(12309), Some(26)))
 ← RowData(List(Some(12309), Some(25)))
...
 ← RowData(List(Some(12190), Some(20)))
 ← RowData(List(Some(12190), Some(20)))
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 8221 bytesProduced = 8192
TLS: unwrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 8221 bytesProduced = 8192
TLS: unwrap result: Status = BUFFER_OVERFLOW HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
TLS: unwrap result: Status = BUFFER_UNDERFLOW HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 0 bytesProduced = 0
 → Terminate
TLS: wrap result: Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5 bytesProduced = 34

Scala native support

Do you think your library could be cross-built to support both the JVM and Scala Native?

Documentation improvements

A list of issues in documentation that a new user easily runs into:

  • Use of gimap is only documented in the twiddle list reference, although it's a huge boilerplate saver. Should be used in general examples with case classes
  • A list of all supported types and associated codecs (like varchar and int4) is missing

... more to come :)

Better reporting of error with enumerated data type in database

Here's the issue:

  • I have an enumerated data type in Postgres
banking=> \dT+ accounttype
                                         List of data types
 Schema |    Name     | Internal name | Size | Elements |  Owner   | Access privileges | Description 
--------+-------------+---------------+------+----------+----------+-------------------+-------------
 public | accounttype | accounttype   | 4    | Checking+| postgres |                   | 
        |             |               |      | Savings  |          |                   |
  • Here's the codec that I defined in Scala ..
val accountType = enum(AccountType, Type("accountType"))
  • Here's a sample decoder that uses it ..
val decoder: Decoder[Account] =
    (varchar ~ varchar ~ accountType ~ numeric ~ timestamp ~ timestamp.opt ~ numeric).map {
      case no ~ nm ~ tp ~ ri ~ dp ~ dc ~ bl =>
        tp match {
          case Checking =>
            CheckingAccount(AccountNo(no), AccountName(nm), Option(dp), dc, Balance(USD(bl)))
          case Savings =>
            SavingsAccount(AccountNo(no), AccountName(nm), ri, Option(dp), dc, Balance(USD(bl)))
        }
    }
  • and here's the query ..
val selectByAccountNo: Query[AccountNo, Account] =
    sql"""
        SELECT a.no, a.name, a.type, a.rateOfInterest, a.dateOfOpen, a.dateOfClose, a.balance
        FROM accounts AS a
        WHERE a.no = ${varchar.cimap[AccountNo]}
       """.query(decoder)
  • The 3rd column a.type is the enum-typed column. When I execute this query, I get an UnknownOidException for the enum:
The actual and asserted output columns are
[error] 🔥  
[error] 🔥    no              varchar    ->  varchar                         
[error] 🔥    name            varchar    ->  varchar                         
[error] 🔥    type            16434      ->  accountType  ── unknown type oid
[error] 🔥    rateofinterest  numeric    ->  numeric                         
[error] 🔥    dateofopen      timestamp  ->  timestamp                       
[error] 🔥    dateofclose     timestamp  ->  timestamp                       
[error] 🔥    balance         numeric    ->  numeric

@tpolecat suggested the following on gitter:

Pass strategy = Typer.Strategy.SearchPath as another named argument when you specify all your session properties.
This is an optimization … if you're using built-in types only then it doesn't have to consult the system tables to get type metadata.

However, it would be good to have this suggestion included as part of the error message.
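
Concretely, the suggestion amounts to something like the following (a sketch; the strategy parameter exists on Session.single/Session.pooled, but verify the exact name against your skunk version):

import cats.effect.IO
import natchez.Trace.Implicits.noop
import skunk._
import skunk.util.Typer

// Inside an IOApp (which supplies the needed implicits), pass the typing
// strategy alongside the other session properties so the enum's oid is
// resolved by consulting pg_type rather than the built-in types only.
val session = Session.single[IO](
  host     = "localhost",
  user     = "postgres",
  database = "banking",
  strategy = Typer.Strategy.SearchPath
)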

Map to Tuple

I'm trying to get a List[(Country, Option[City])], but the code produces a ProtocolError.

scalaVersion := "2.13.1"
libraryDependencies += "org.tpolecat" %% "skunk-core" % "0.0.7"

the code:

object Test extends App {

  import cats.effect._
  import natchez.Trace.Implicits.noop
  import scala.concurrent.ExecutionContext
  import cats.effect.IO
  import skunk._
  import skunk.codec.all._
  import skunk.implicits._

  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  val session: Resource[IO, Session[IO]] =
    Session.single[IO](
      host = "localhost",
      port = 5432,
      user = "jimmy",
      database = "world",
      password = Some("banana"),
      debug = true
    )

  case class Country(name: String, code: String)
  case class City(name: String, district: String)

  val join: IO[List[(Country, Option[City])]] = session.use { s =>
    val q: Query[Void, (Country, Option[City])] = sql"""
                 select c.name, c.code, k.name, k.district
                 from country c
                 left outer join city k
                 on c.capital = k.id
                 order by c.code desc"""
      .query(varchar ~ bpchar(3) ~ varchar.opt ~ varchar.opt)
      .map { case a ~ b ~ Some(c) ~ Some(d) => (Country(a, b), Some(City(c, d))) }
    s.execute(q)
  }

  val myList = join.unsafeRunSync

  assert(myList.length == 239)
  assert(myList.filter(_._2.isEmpty).length == 7)
  assert(
    myList.take(2) == List(
      (Country("Zimbawe", "ZWE"), Some(City("Harare", "Harare"))),
      (Country("Zambia", "ZMB"), Some(City("Lusaka", "Lusaka")))
    )
  )
}

error:

 → StartupMessage(jimmy,world)
 ← AuthenticationMD5Password([B@666a7fec)
 → PasswordMessage(md5fec21932ccd0798ecd972f35609fec0b)
 ← AuthenticationOk
 ← ParameterStatus(application_name,)
 ← ParameterStatus(client_encoding,UTF8)
 ← ParameterStatus(DateStyle,ISO, MDY)
 ← ParameterStatus(integer_datetimes,on)
 ← ParameterStatus(IntervalStyle,iso_8601)
 ← ParameterStatus(is_superuser,on)
 ← ParameterStatus(server_encoding,UTF8)
 ← ParameterStatus(server_version,11.3 (Debian 11.3-1.pgdg90+1))
 ← ParameterStatus(session_authorization,jimmy)
 ← ParameterStatus(standard_conforming_strings,on)
 ← ParameterStatus(TimeZone,UCT)
 ← BackendKeyData(467,2036581669)
 ← ReadyForQuery(Idle)
 → Query(
                 select c.name, c.code, k.name, k.district
                 from country c
                 left outer join city k
                 on c.capital = k.id
                 order by c.code desc)
 ← RowDescription(Field(name, 1043); Field(code, 1042); Field(name, 1043); Field(district, 1043))
 ← RowData(List(Some(Zimbabwe), Some(ZWE), Some(Harare), Some(Harare)))
...
...
 ← RowData(List(Some(Afghanistan), Some(AFG), Some(Kabul), Some(Kabol)))
 ← RowData(List(Some(Aruba), Some(ABW), Some(Oranjestad), Some(�)))
 ← CommandComplete(Select(239))
 ← ReadyForQuery(Idle)
 → Query(RESET ALL)
 ← CommandComplete(Reset)
 ← ReadyForQuery(Idle)
 → Terminate

🔥  An unhandled backend message was encountered
🔥    at /home/travis/build/tpolecat/skunk/modules/core/src/main/scala/net/protocol/Query.scala:101
🔥  
🔥    Message: ReadyForQuery(Idle)
🔥  
🔥  This is an implementation error in Skunk.
🔥  Please report a bug with the full contents of this error message.

skunk.exception.ProtocolError: ReadyForQuery(Idle)
Exception in thread "main" scala.MatchError: (((United States Minor Outlying Islands,UMI),None),None) (of class scala.Tuple2)
	at Test$.$anonfun$join$2(Test.scala:34)
	at scala.util.Either.map(Either.scala:382)
	at skunk.Decoder$$anon$1.decode(Decoder.scala:26)
	at skunk.net.protocol.Unroll.$anonfun$unroll$3(Unroll.scala:67)
	at cats.instances.ListInstances$$anon$1.$anonfun$traverse$2(list.scala:78)
	at cats.instances.ListInstances$$anon$1.loop$2(list.scala:68)
	at cats.instances.ListInstances$$anon$1.$anonfun$foldRight$1(list.scala:68)
	at cats.Eval$.loop$1(Eval.scala:336)
	at cats.Eval$.cats$Eval$$evaluate(Eval.scala:368)
	at cats.Eval$Defer.value(Eval.scala:257)
	at cats.instances.ListInstances$$anon$1.traverse(list.scala:77)
	at cats.instances.ListInstances$$anon$1.traverse(list.scala:16)
	at cats.Traverse$Ops.traverse(Traverse.scala:19)
	at cats.Traverse$Ops.traverse$(Traverse.scala:19)
	at cats.Traverse$ToTraverseOps$$anon$2.traverse(Traverse.scala:19)
	at skunk.net.protocol.Unroll.$anonfun$unroll$2(Unroll.scala:66)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:139)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
	at cats.effect.internals.Callback$AsyncIdempotentCallback.run(Callback.scala:131)
	at cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:70)
	at cats.effect.internals.Trampoline.startLoop(Trampoline.scala:36)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.super$startLoop(TrampolineEC.scala:93)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.$anonfun$startLoop$1(TrampolineEC.scala:93)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:93)
	at cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
	at cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:44)
	at cats.effect.internals.Callback$AsyncIdempotentCallback.apply(Callback.scala:137)
	at cats.effect.internals.Callback$AsyncIdempotentCallback.apply(Callback.scala:124)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$unsafeRegister$1(Deferred.scala:205)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$unsafeRegister$1$adapted(Deferred.scala:205)
	at cats.effect.concurrent.Deferred$ConcurrentDeferred.$anonfun$notifyReadersLoop$1(Deferred.scala:241)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

Process finished with exit code 1
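
Incidentally, the MatchError at the bottom of the trace points at the decoder's partial function: the outer join can return NULL city columns, which case a ~ b ~ Some(c) ~ Some(d) does not cover. A total mapping avoids that; a sketch reusing the definitions above:

import cats.syntax.all._
import skunk._
import skunk.codec.all._
import skunk.implicits._

// Handle the NULL city columns from the left outer join instead of
// matching only on Some, so no MatchError can occur.
val q2: Query[Void, (Country, Option[City])] =
  sql"""
       select c.name, c.code, k.name, k.district
       from country c
       left outer join city k on c.capital = k.id
       order by c.code desc"""
    .query(varchar ~ bpchar(3) ~ varchar.opt ~ varchar.opt)
    .map { case a ~ b ~ c ~ d => (Country(a, b), (c, d).mapN(City.apply)) }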

PoolTest sometimes hangs

Parallel test execution makes the connection pool test fail with a probability that seems to increase as the number of processors goes down. Need to investigate.

Session/Query seems to not terminate correctly on database error.

As discussed on gitter, the issue is that my http4s-based app, which uses skunk and its Session.pooled pool, fails to produce a response when an update framed as a Query produces a failure.

Using a Command seems to work, but for instances where it is desirable to also return the value of a generated column, Query seems to be the way to go.

My application runs with all effects specialised to cats.effect.IO. The following gist shows the key points of my setup.

https://gist.github.com/megri/4cae937b638eb2df89b6f9f457df3734

re-enable stack traces, maybe?

SkunkException extends NoStackTrace because IO stacktraces have always been nonsense, but cats-effect 2.2 changes this. See how it looks with stack traces enabled and maybe turn them on?
