dhallj's People

Contributors

amesgen, note, scala-steward, timwspence, travisbrown

dhallj's Issues

YAML export escapes quotes

Another one:

@ YamlConverter.toYamlString(parse(""" { a = "\"\n" } """)) 
res19: String = """a: |
  \"
"""

EDIT: the newline is irrelevant:

@ YamlConverter.toYamlString(parse(""" { a = "\"" } """)) 
res5: String = """a: \"
"""

Investigate whether Paths.get is safe across platforms

Right now we use Paths.get(pathToken) for local imports during parsing. The parser ensures that only / will be used as a separator, but I don't know for a fact that Paths.get is guaranteed to handle / appropriately on all platforms.

Alternatively we could switch away from using Path in our representation of local imports, which I think might be the better choice.
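
If we do keep Path, one platform-neutral option might be to split the path token on / ourselves and let the platform supply the separator. A minimal sketch, with a hypothetical helper rather than dhallj's actual code:

import java.nio.file.{Path, Paths}

// Build a Path from pre-split components so we never hand a /-separated
// string to the platform directly.
def localImportPath(components: List[String]): Path =
  components match {
    case head :: tail => Paths.get(head, tail: _*)
    case Nil          => Paths.get("")
  }

// e.g. localImportPath(List("foo", "bar.dhall")) is foo/bar.dhall on Unix
// and foo\bar.dhall on Windows.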

Parser is not stack-safe for deep records

All operations currently work just fine on arbitrarily long lists:

scala> import org.dhallj.core.Expr
import org.dhallj.core.Expr

scala> import org.dhallj.parser.DhallParser
import org.dhallj.parser.DhallParser

scala> import org.dhallj.core.converters.JsonConverter
import org.dhallj.core.converters.JsonConverter

scala> val longList = DhallParser.parse((0 to 100000).mkString("[", ",", "]"))
longList: org.dhallj.core.Expr.Parsed = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...

scala> longList.normalize
res0: org.dhallj.core.Expr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...

scala> longList.typeCheck
res1: org.dhallj.core.Expr = List Natural

scala> JsonConverter.toCompactString(longList)
res2: String = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,...

…and most things work just fine on arbitrarily deep record literals (or record types, or union types):

scala> val deepRecord = (0 to 10000).foldLeft(longList: Expr) { 
     |   case (expr, _) => Expr.makeRecordLiteral("foo", expr)
     | }
deepRecord: org.dhallj.core.Expr = {foo = {foo = {foo = {foo = {foo = {foo = ...

scala> deepRecord.normalize.alphaNormalize.hash
res3: String = f41d556f987dd60c59e9b49a367b0bf907dba111c904c88dfda27e2a599a07bc

scala> JsonConverter.toCompactString(deepRecord)
res4: String = {"foo":{"foo":{"foo":{"foo":{"foo":{"foo":{"foo":{"foo":{"foo": ...

Note that dhall produces the same hash for this expression:

$ dhall hash < deep-record.dhall
sha256:f41d556f987dd60c59e9b49a367b0bf907dba111c904c88dfda27e2a599a07bc

Unfortunately the parser can't handle this expression:

scala> DhallParser.parse(deepRecord.toString)
java.lang.StackOverflowError
  at org.dhallj.parser.support.JavaCCParser.jj_ntk_f(JavaCCParser.java:4508)
  at org.dhallj.parser.support.JavaCCParser.BASE_EXPRESSION(JavaCCParser.java:2425)
  at org.dhallj.parser.support.JavaCCParser.RECORD_LITERAL_ENTRY(JavaCCParser.java:390)
  at org.dhallj.parser.support.JavaCCParser.RECORD_LITERAL_OR_TYPE(JavaCCParser.java:666)
  at org.dhallj.parser.support.JavaCCParser.PRIMITIVE_EXPRESSION(JavaCCParser.java:874)
  at org.dhallj.parser.support.JavaCCParser.SELECTOR_EXPRESSION(JavaCCParser.java:1011)
  at org.dhallj.parser.support.JavaCCParser.COMPLETION_EXPRESSION(JavaCCParser.java:1080)
  ...

I think this should just be a matter of doing some more left-factoring, but I'm fairly new to JavaCC and I don't really know how much work this will be, so I've decided not to let this issue block the 0.1.0 release.

Fix DontCacheIfHash test

We're currently ignoring this new-ish test, which turned up a bug in the dhall-imports module: the cache is not consulted when a duplicate import provides a hash. This should not affect most users.

cannot parse string literals with a #

Any string literal containing a # is rejected.

@ val Right(expr) = """ "#" """.parseExpr 
org.dhallj.parser.support.TokenMgrException: Lexical error at line 1, column 3.  Encountered: "#" (35), after : ""
  org.dhallj.parser.support.JavaCCParserTokenManager.getNextToken(JavaCCParserTokenManager.java:7734)
  org.dhallj.parser.support.JavaCCParser.jj_ntk_f(JavaCCParser.java:4508)
  org.dhallj.parser.support.JavaCCParser.DOUBLE_QUOTE_LITERAL(JavaCCParser.java:96)
  org.dhallj.parser.support.JavaCCParser.TEXT_LITERAL(JavaCCParser.java:158)
  org.dhallj.parser.support.JavaCCParser.PRIMITIVE_EXPRESSION(JavaCCParser.java:870)
  org.dhallj.parser.support.JavaCCParser.SELECTOR_EXPRESSION(JavaCCParser.java:1011)
  org.dhallj.parser.support.JavaCCParser.COMPLETION_EXPRESSION(JavaCCParser.java:1080)
  org.dhallj.parser.support.JavaCCParser.IMPORT_EXPRESSION(JavaCCParser.java:1211)
  org.dhallj.parser.support.JavaCCParser.APPLICATION_EXPRESSION(JavaCCParser.java:1306)
  org.dhallj.parser.support.JavaCCParser.WITH_EXPRESSION(JavaCCParser.java:1388)
  org.dhallj.parser.support.JavaCCParser.EQUIVALENT_EXPRESSION(JavaCCParser.java:1410)
  org.dhallj.parser.support.JavaCCParser.NOT_EQUALS_EXPRESSION(JavaCCParser.java:1450)
  org.dhallj.parser.support.JavaCCParser.EQUALS_EXPRESSION(JavaCCParser.java:1490)
  org.dhallj.parser.support.JavaCCParser.TIMES_EXPRESSION(JavaCCParser.java:1530)
  org.dhallj.parser.support.JavaCCParser.COMBINE_TYPES_EXPRESSION(JavaCCParser.java:1570)
  org.dhallj.parser.support.JavaCCParser.PREFER_EXPRESSION(JavaCCParser.java:1610)
  org.dhallj.parser.support.JavaCCParser.COMBINE_EXPRESSION(JavaCCParser.java:1650)
  org.dhallj.parser.support.JavaCCParser.AND_EXPRESSION(JavaCCParser.java:1690)
  org.dhallj.parser.support.JavaCCParser.LIST_APPEND_EXPRESSION(JavaCCParser.java:1730)
  org.dhallj.parser.support.JavaCCParser.TEXT_APPEND_EXPRESSION(JavaCCParser.java:1770)
  org.dhallj.parser.support.JavaCCParser.PLUS_EXPRESSION(JavaCCParser.java:1810)
  org.dhallj.parser.support.JavaCCParser.OR_EXPRESSION(JavaCCParser.java:1841)
  org.dhallj.parser.support.JavaCCParser.IMPORT_ALT_EXPRESSION(JavaCCParser.java:1881)
  org.dhallj.parser.support.JavaCCParser.OPERATOR_EXPRESSION(JavaCCParser.java:1908)
  org.dhallj.parser.support.JavaCCParser.FUNCTION_TYPE_OR_ANNOTATED_EXPRESSION(JavaCCParser.java:2366)
  org.dhallj.parser.support.JavaCCParser.BASE_EXPRESSION(JavaCCParser.java:2472)
  org.dhallj.parser.support.JavaCCParser.COMPLETE_EXPRESSION(JavaCCParser.java:2500)
  org.dhallj.parser.support.JavaCCParser.TOP_LEVEL(JavaCCParser.java:2514)
  org.dhallj.parser.support.Parser.parse(Parser.java:12)
  org.dhallj.parser.DhallParser.parse(DhallParser.java:11)
  org.dhallj.syntax.package$DhallStringOps$.parseExpr$extension(package.scala:13)
  ammonite.$sess.cmd10$.<clinit>(cmd10.sc:1)

Both dhall-haskell and dhall-rust accept this.

Publish API docs

We have sbt-unidoc set up and I've been using it locally, and we could pretty easily publish the API docs now with sbt-ghpages, but the Scaladoc presentation of the Java API docs is pretty terrible in my view, and I'd rather wait and take the time to do this more nicely (probably by publishing the Javadoc and Scaladoc sites separately—I think I'd give up cross-Java-Scala-module linking to have real Javadoc output for the Java modules).

JSON and YAML export doesn't respect toMap format

There's no good reason for this; I've just not implemented it yet.

scala> import org.dhallj.parser.DhallParser
import org.dhallj.parser.DhallParser

scala> import org.dhallj.core.converters.JsonConverter
import org.dhallj.core.converters.JsonConverter

scala> val expr = DhallParser.parse("""[{mapKey = "foo", mapValue = [1]}]""")
expr: org.dhallj.core.Expr.Parsed = [{mapKey = "foo", mapValue = [1]}]

scala> JsonConverter.toCompactString(expr)
res0: String = [{"mapKey":"foo","mapValue":[1]}]

It should do the same thing as dhall-to-json:

$ dhall-to-json <<< '[{mapKey = "foo", mapValue = [1]}]'
{
  "foo": [
    1
  ]
}
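
One way to get there is to special-case lists whose elements all have exactly a mapKey text field and a mapValue field. A rough sketch of that shape detection, using simplified hypothetical JSON types rather than dhallj's converter internals:

sealed trait Json
final case class JString(value: String) extends Json
final case class JArray(values: List[Json]) extends Json
final case class JObject(fields: List[(String, Json)]) extends Json

// Treat a non-empty list as a map only if every element is a record with
// exactly the fields mapKey (a text literal) and mapValue.
def listAsMap(elements: List[Map[String, Json]]): Option[JObject] = {
  val entries = elements.map { fields =>
    (fields.get("mapKey"), fields.get("mapValue")) match {
      case (Some(JString(key)), Some(value)) if fields.size == 2 => Some(key -> value)
      case _                                                     => None
    }
  }
  if (entries.nonEmpty && entries.forall(_.isDefined)) Some(JObject(entries.flatten))
  else None
}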

I am considering this one a blocker for the 0.1.0 release.

Run acceptance tests related to import resolution

We're not currently running any of the dhall-lang/tests/import tests, or the two type-inference tests related to caching (CacheImports and CacheImportsCanonicalize). I don't think there's any particular reason for this now, and we have our own tests for the imports module that cover some of the same ground, so we should give it a try.

Decide what to do about type checker failure test failure

The type checker acceptance tests include the following failure test case:

{ x: Natural, x: Natural }

We currently type check this without a failure:

scala> import org.dhallj.parser.DhallParser
import org.dhallj.parser.DhallParser

scala> DhallParser.parse("{ x: Natural, x: Natural }").typeCheck
res0: org.dhallj.core.Expr = Type

We could easily change the type checker to make it fail here, but the spec seems to explicitly say that we don't need to:

Carefully note that there should be no need to handle duplicate fields by this point because the desugaring rules for record literals merge duplicate fields into unique fields.

Right now I'm ignoring this failure test case, but I need to figure out whether I'm misunderstanding the language in the spec, and fix the type checker if I am.

Parse records with keywords as keys

Thanks for this awesome new package! A minor parsing bug:

@ val Right(expr) = """{ if : Text }""".parseExpr 
scala.MatchError: Left(org.dhallj.core.DhallException$ParsingFailure: Encountered unexpected token: "if" "if"
    at line 1, column 3.

Was expecting one of:

    ","
    "="
    "Location"
    "Some"
    "Text"
    "}"
    <BUILT_IN>
    <NONRESERVED_LABEL>
) (of class scala.util.Left)

With dhall-haskell:

$ dhall <<< '{ if : Text }'
{ if : Text }

dhall-scala dependency is broken: dhall-ast not found

I was trying to add a dependency on "org.dhallj" %% "dhall-scala" % "0.3.2" (for a 2.13 project), and I get this error:

[error] (game-data / update) sbt.librarymanagement.ResolveException: Error downloading org.dhallj:dhall-ast_2.13:0.3.2
[error]   Not found
[error]   Not found
[error]   not found: /Users/gavin/.ivy2/local/org.dhallj/dhall-ast_2.13/0.3.2/ivys/ivy.xml
[error]   not found: https://repo1.maven.org/maven2/org/dhallj/dhall-ast_2.13/0.3.2/dhall-ast_2.13-0.3.2.pom
[error] Total time: 2 s, completed May 26, 2020 12:01:58 PM

Set up JavaScript build

I hacked this together as an experiment this afternoon with J2CL, and it's not too bad—a few Bazel build files, some minimal implementations of java.net and java.nio stuff, and (the worst part) rewriting all instances of String.format.

You can try it out from a web console here.

I think we should at least consider publishing JavaScript artifacts from here. On the "do it" side:

  • Compiling dhall-core and dhall-parser to JavaScript produced much smaller output than I expected: J2CL / Closure gets a module that exports parsing, normalisation, and type-checking down to 240K. For comparison, the current GHCJS-built JavaScript used on dhall-lang.org is 6.6M minimised. This isn't a direct comparison, since dhall-core and dhall-parser don't provide import resolution and the GHCJS build does, but the 25-30x difference was still a little surprising to me.

On the "no" side:

  • Living without String.format is really annoying.
  • I don't know if anyone would actually use something like this.

If someone can come up with a single real use case for this I'll clean up my branch and open a PR. I'll also need some help with packaging, etc.—I've essentially not used JavaScript outside of the context of Scala.js for at least a decade.

Type checker is not stack-safe for (extremely) deep records

Similar to #2, although the cause is different, and the point at which it becomes a problem is much deeper.

The type checker will happily give you a type for a record many thousands of nestings deep (although 20k layers takes about a minute on my laptop, so it's not fast):

scala> import org.dhallj.core.Expr
import org.dhallj.core.Expr

scala> val deepRecord = (0 to 20000).foldLeft(Expr.Constants.TRUE) {
     |   case (expr, _) => Expr.makeRecordLiteral("foo", expr)
     | }
deepRecord: org.dhallj.core.Expr = {foo = {foo = {foo = {foo = {foo = {foo = ...

scala> deepRecord.typeCheck
res0: org.dhallj.core.Expr = {foo : {foo : {foo : {foo : {foo : {foo : {foo : ...

All operations except type checking work for arbitrarily deep values—e.g. normalizing or hashing a record 100k layers deep takes seconds:

scala> val deeperRecord = (0 to 100000).foldLeft(Expr.Constants.TRUE) {
     |   case (expr, _) => Expr.makeRecordLiteral("foo", expr)
     | }
deeperRecord: org.dhallj.core.Expr = {foo = {foo = {foo = {foo = {foo = {foo = ...

scala> deeperRecord.normalize
res2: org.dhallj.core.Expr = {foo = {foo = {foo = {foo = {foo = {foo = {foo = {foo = ...

scala> deeperRecord.hash
res3: String = 8a8477b86e27cd48496db13bbd71bb9845c700cb88b9a8bfacd2391541ff38cc

(I'm guessing dhall would agree on this hash, but it's been running for five minutes and I've not heard back from it yet.)

Type checking blows up somewhere between 20k and 100k:

scala> deeperRecord.typeCheck
java.lang.StackOverflowError
  at org.dhallj.core.typechecking.TypeCheck.onRecord(TypeCheck.java:404)
  at org.dhallj.core.typechecking.TypeCheck.onRecord(TypeCheck.java:27)
  at org.dhallj.core.Constructors$RecordLiteral.accept(constructors-gen.java:249)
  at org.dhallj.core.typechecking.TypeCheck.onRecord(TypeCheck.java:408)
  at org.dhallj.core.typechecking.TypeCheck.onRecord(TypeCheck.java:27)
  at org.dhallj.core.Constructors$RecordLiteral.accept(constructors-gen.java:249)
  ...

This is because the type checker is currently implemented as an external visitor, where the visitor drives the recursion manually, while all of the other core operations are implemented as internal visitors, where the driver manages the recursion and maintains its own stack.
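
To illustrate the difference with a toy model (this is not dhallj's visitor API): with an external visitor the JVM call stack grows with every nesting level, while an internal-style driver can keep its pending work on the heap:

sealed trait Node
final case class Nested(field: Node) extends Node
case object Leaf extends Node

// External style: recursion depth equals nesting depth, so deep values overflow.
def depthExternal(node: Node): Int = node match {
  case Leaf          => 0
  case Nested(inner) => 1 + depthExternal(inner)
}

// Internal style: the pending work lives in an explicit, heap-allocated stack.
def depthInternal(root: Node): Int = {
  @annotation.tailrec
  def loop(todo: List[Node], acc: Int): Int = todo match {
    case Nil                   => acc
    case Leaf :: rest          => loop(rest, acc)
    case Nested(inner) :: rest => loop(inner :: rest, acc + 1)
  }
  loop(List(root), 0)
}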

20k layers should be enough for any record, so I'm not letting this block the 0.1.0 release, but I'm planning to rework the type checker as an internal visitor soon, anyway, and I'm opening this issue to track the problem.

Fill in gaps in single-quoted literal support

I just noticed that there are some language features related to escaped sequences in single-quoted literals that we don't currently support. I'm working on fixing these now, and will put together some tests for the standard acceptance suite as well.

Fix with precedence

The parser currently accepts some inputs it shouldn't:

scala> org.dhallj.parser.DhallParser.parse("foo { x = 0 } with x = 1")
val res0: org.dhallj.core.Expr.Parsed = (foo {x = 0}) with x = 1

scala> org.dhallj.parser.DhallParser.parse("{ x = 0 } with x = 1 : T")
val res1: org.dhallj.core.Expr.Parsed = ({x = 0} with x = 1) : T

Both of these should be parsing failures, but our handling of the precedence of with is currently somewhat awkward, and while I don't think it would be excessively hard to fix these cases, it's not trivial, and it would be much easier if we move to this approach, so I haven't done it yet.

Note that we're currently ignoring the WithPrecedence2 and WithPrecedence3 tests, which catch this bug.

In any case this bug should not affect most users, and it's unlikely to cause problems. As you can see in the printed code above, even when the parser does accept code it shouldn't, the parses it comes up with aren't unreasonable.

Fix toString's handling of text literals, etc.

The implementation of toString (taking an expression and writing it back to Dhall code as a string) is currently pretty rough and needs some attention in general, but in particular it doesn't serialize text literals containing newlines correctly. toString hasn't been a high priority for me so far, but the newline issue at least needs to be fixed before the 0.1.0 release.
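
For the newline case specifically, the fix amounts to escaping the characters that can't appear raw in a double-quoted Dhall literal. A minimal sketch of that escaping (a subset of the full rules, and not dhallj's printer):

// Escape a text literal's contents for a double-quoted Dhall string.
def escapeDhallText(value: String): String =
  value.flatMap {
    case '"'   => "\\\""
    case '\\'  => "\\\\"
    case '\n'  => "\\n"
    case '\r'  => "\\r"
    case '\t'  => "\\t"
    case other => other.toString
  }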

Document identifier index limit

The Haskell implementation will happily parse an arbitrarily large index on an identifier (although it looks like something's overflowing either in the representation or the encoding):

$ dhall encode --json <<< "x @ 9223372036854775808"
[
    "x",
    -9223372036854775808
]

We stop at Long.MaxValue:

scala> import org.dhallj.parser.DhallParser
import org.dhallj.parser.DhallParser

scala> DhallParser.parse("x @ 9223372036854775807")
res0: org.dhallj.core.Expr.Parsed = x@9223372036854775807

scala> DhallParser.parse("x @ 9223372036854775808")
java.lang.NumberFormatException: For input string: "9223372036854775808"
  at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:68)
  at java.base/java.lang.Long.parseLong(Long.java:703)
  ...

I think this is fine, since nobody's ever going to bind 9,223,372,036,854,775,808 xs, so this will never type-check anyway, but we should document the difference and maybe wrap the NumberFormatException in a ParseException.
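
The wrapping itself would be small; a sketch with a hypothetical exception type (not an existing dhallj class):

// Hypothetical wrapper exception, just to illustrate the suggestion.
final class IdentifierIndexOverflowException(raw: String, cause: NumberFormatException)
    extends RuntimeException(s"identifier index does not fit in a Long: $raw", cause)

def parseIdentifierIndex(raw: String): Long =
  try java.lang.Long.parseLong(raw)
  catch {
    case e: NumberFormatException => throw new IdentifierIndexOverflowException(raw, e)
  }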

Parse quoted URLs correctly

parse("https://example.com/foo/\"bar?baz\"?qux") quietly catches a java.net.URISyntaxException e in org.dhallj.parser.support.ParsingHelpers and returns null

From a very brief look, I'd say we're parsing it correctly as a string but the java.net.URI constructor won't accept it. We might have to roll our own URI which would be very annoying!
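
One possible workaround, if the single-argument java.net.URI constructor turns out to be the only problem, is the multi-argument constructor, which percent-encodes characters it doesn't accept. A hedged sketch (not what dhallj currently does):

import java.net.URI

// The multi-argument constructor quotes illegal characters (like ") itself,
// so the quoted path segment survives as percent-encoded text.
def remoteImportUri(scheme: String, authority: String, path: String, query: String): URI =
  new URI(scheme, authority, path, query, null)

// remoteImportUri("https", "example.com", "/foo/\"bar?baz\"", "qux")
// should yield something like https://example.com/foo/%22bar%3Fbaz%22?qux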

Clean up toString implementation

This is a follow-up to #7. I've now got toString for Expr working well enough that it can be used to round-trip the unnormalised prelude, which was my goal for 0.1.0, but the implementation is still a disaster. I'd originally been using it for debugging and didn't really care about producing valid Dhall code, and I just threw together the current version in the past couple of days. It parenthesises unnecessarily, probably still gets precedence wrong in some cases, is an unmaintainable mess, etc.

We'll also probably want some kind of Dhall code pretty-printing at some point, but I'll open a separate issue for that.

Fix type annotation precedence issue

The parser currently gets precedence wrong for type annotations in three situations:

  • Empty lists
  • merge
  • toMap

These cases are some of the few places where the JavaCC grammar isn't a fairly straightforward translation of the Dhall ABNF file. I made the adjustments as a hack because I wasn't initially able to get JavaCC to handle backtracking correctly at the expression level for these cases. The current approach works for most cases, including everything in the parsing acceptance tests, but fails on some valid Dhall code—for example:

scala> import org.dhallj.parser.DhallParser.parse
import org.dhallj.parser.DhallParser.parse

scala> parse("[]: List Natural: Type")
org.dhallj.core.DhallException$ParsingFailure: Encountered unexpected token: ":" ":"
    at line 1, column 17.

Was expecting one of:

    <EOF>
    <WHSP>

Compare dhall:

$ dhall <<< '[]: List Natural: Type'
[] : List Natural

dhall parses this as []: (List Natural: Type):

$ dhall encode --json <<< "[]: List Natural: Type"
[
    28,
    [
        26,
        [
            0,
            "List",
            "Natural"
        ],
        "Type"
    ]
]
$ dhall encode --json <<< "[]: ((List Natural): Type)"
[
    28,
    [
        26,
        [
            0,
            "List",
            "Natural"
        ],
        "Type"
    ]
]

I don't believe this bug in our parser can result in incorrect parses, only failed ones, and it's easy enough to work around:

scala> parse("[]: (List Natural: Type)").normalize.hash
res1: String = d79a2e0e14809ab2dbd2d180e60da8e129a5fb197bdd0caed57e3828402e48a9

Or:

scala> parse("[]: List (Natural: Type)").normalize.hash
res2: String = d79a2e0e14809ab2dbd2d180e60da8e129a5fb197bdd0caed57e3828402e48a9

Which gives the same hash as the Haskell implementation:

$ dhall hash <<< "[]: List Natural: Type"
sha256:d79a2e0e14809ab2dbd2d180e60da8e129a5fb197bdd0caed57e3828402e48a9

I've decided not to let this block the 0.1.0 release, but I'm planning to fix it soon.

root directory for imports

For filesystem imports, it would be nice to be able to specify the root directory (especially as there is no easy and reliable way to change the current working directory on the JVM). imports-mini already supports this.

This should only involve adding e.g. rootDirectory: Path to ResolutionConfig and respecting it in ResolveImportsVisitor. I could create a PR for this.
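
The change described above might look roughly like this (the field name and shape are the proposal, not an existing API):

import java.nio.file.{Path, Paths}

// Rough sketch of the proposed configuration; this is not the actual
// ResolutionConfig from dhall-imports, just an illustration of the idea.
final case class ResolutionConfigSketch(rootDirectory: Path = Paths.get("."))

// Local imports would then be resolved against the configured root instead of
// the JVM's (effectively fixed) working directory.
def resolveLocalImport(config: ResolutionConfigSketch, relative: Path): Path =
  config.rootDirectory.resolve(relative).normalize()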

Leverage Truffle?

I wonder whether leveraging Truffle (from the Graal project) would be something we'd be interested in, i.e. whether we'd want to JIT-compile Dhall rather than always interpreting it.

YAML export escapes newlines

Using 0.1.1:

@ YamlConverter.toYamlString(parse(""" { a = "\n" } """).normalize()) 
res15: String = """a: \n
"""

@ JsonConverter.toCompactString(parse(""" { a = "\n" } """).normalize()) 
res16: String = "{\"a\":\"\\n\"}"

The JSON output is correct, but in the YAML output the value does not contain a newline (in an unquoted YAML scalar, \n is not an escape sequence).
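
One straightforward way to make the YAML output round-trip is to emit a double-quoted scalar when the value contains characters like newlines, since YAML only interprets \n inside double quotes. A minimal sketch (an assumed approach, not the converter's actual code):

// Emit a YAML double-quoted scalar so escape sequences are interpreted.
def yamlQuotedScalar(value: String): String =
  "\"" + value.flatMap {
    case '"'   => "\\\""
    case '\\'  => "\\\\"
    case '\n'  => "\\n"
    case other => other.toString
  } + "\""

// yamlQuotedScalar("\n") == "\"\\n\"", so the document line becomes: a: "\n"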

Clean up escaping

There are currently many different string encodings with different escapings, with lots of transition points between them. These transitions are currently handled in a fairly ad-hoc way, with escaping done at each of them as needed to get tests to pass.

I don't know of any specific bugs in this respect at the moment, but I'm sure there are some in there, and anyway the current situation won't be fun to maintain in the longer term. I need to work through exactly what needs to be escaped where, and to clean up the transitions. This should be relatively easy given the tests we have now.

Investigate type-checking performance

Right now the normalized Prelude type-checks ~instantaneously:

scala> import org.dhallj.syntax._
import org.dhallj.syntax._

scala> val path = "./dhall-lang/Prelude/package.dhall"
path: String = ./dhall-lang/Prelude/package.dhall

scala> val Right(prelude) = path.parseExpr.flatMap(_.resolve)
prelude: org.dhallj.core.Expr = {`Bool` = {and = let ...

scala> val normalized = prelude.normalize
normalized: org.dhallj.core.Expr = {`Bool` = {and = ...

scala> normalized.typeCheck
res0: Either[org.dhallj.core.typechecking.TypeCheckFailure,org.dhallj.core.Expr] = Right(...

If you don't normalize it, though, it takes a little over a minute:

scala> prelude.typeCheck
res1: Either[org.dhallj.core.typechecking.TypeCheckFailure,org.dhallj.core.Expr] = Right(...

It does produce the correct result, but something is obviously wrong. I haven't really looked into this yet, but I'm assuming it's because it's type-checking the same imported trees over and over.
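
If the repeated work really is re-checking the same imported subtrees, one plausible mitigation is memoising results by semantic hash. This is a sketch of the idea only (not a planned dhallj change), using the hash method shown elsewhere in these issues:

import org.dhallj.core.Expr
import scala.collection.mutable

// Wrap any Expr => Expr type-checking function with a cache keyed by the
// expression's hash, so identical subtrees are only checked once.
// NB: a real fix would need to account for the typing context; this only
// illustrates caching closed (fully resolved) imported expressions.
def memoised(check: Expr => Expr): Expr => Expr = {
  val cache = mutable.Map.empty[String, Expr]
  expr => cache.getOrElseUpdate(expr.hash, check(expr))
}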

Fix SomeXYZ parsing case

The parser acceptance suite includes the following test input:

{-
This is always a type error, but if this
was a parse error it would be quite confusing
because Some looks like a builtin.
-}
Some x y z

And the expected CBOR encoding looks like this:

[0, [5, null, ["x", 0]], ["y", 0], ["z", 0]]

We parse this as Some applied to three arguments, and end up with a different encoding:

scala> import org.dhallj.parser.DhallParser
import org.dhallj.parser.DhallParser

scala> DhallParser.parse("Some x y z")
res0: org.dhallj.core.Expr.Parsed = Some x y z

scala> DhallParser.parse("Some x y z").getEncodedBytes
res1: Array[Byte] = Array(-125, 5, -10, -126, 97, 120, 0, -126, 97, 121, 0, -126, 97, 122, 0)

I've currently set this test case to be ignored, and since it seems like a minor issue and doesn't affect well-typed code (as far as I can see) I've decided not to let it block the 0.1.0 release, but we need to fix it.

Validate that core modules actually work on Java 7

We're currently using -source 1.7 and -target 1.7 for the Java modules, and I'm pretty sure we're not using any post-7 standard library stuff, but I'm not setting the bootstrap class path, and I haven't actually bothered to try things out on a Java 7 JVM (only Java 8 at the earliest, both locally and in CI).
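
Setting the bootstrap class path in sbt would look something like this (the JDK 7 location is a placeholder; -source, -target, and -bootclasspath are standard javac flags):

// In build.sbt, for the Java-only modules; JAVA7_HOME is assumed to point at a JDK 7 install.
javacOptions ++= Seq(
  "-source", "1.7",
  "-target", "1.7",
  "-bootclasspath", sys.env.getOrElse("JAVA7_HOME", "/path/to/jdk7") + "/jre/lib/rt.jar"
)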

Import resolution against consul/etcd

Ideally users should be able to provide their own resolver to load configs from storage backends like Consul, etcd, or Vault. I used to hack this with an includer for Typesafe Config.
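
A pluggable hook could be as small as a single-method interface. A rough sketch (the trait and the Consul example are hypothetical, not an existing dhallj API):

import java.net.URI

// Hypothetical extension point: map an import URI to its Dhall source text.
trait ImportSource {
  def fetch(uri: URI): Either[Throwable, String]
}

// e.g. a Consul-backed source could translate the URI path into a KV lookup.
final class ConsulSource(kvLookup: String => Either[Throwable, String]) extends ImportSource {
  def fetch(uri: URI): Either[Throwable, String] = kvLookup(uri.getPath)
}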

Unions don't seem to be typechecked correctly

In the following Dhall expression:

let Package =
      < GitHub : { repository : Text, revision : Text }
      >

in [ Package.GitHub { repository = {} } ]

We should get a type error because repository is not a Text and revision is missing. Dhall's editor on the website says:

Error: Wrong type of function argument

{ - revision : 
,   repository : - Text
                 + Type
}

5      Package.GitHub { repository = {} }

(input):5:6

Instead, if I try to normalize this expression and encode it as JSON using the circe encoder:

import $ivy.`org.dhallj::dhall-scala:0.8.0-M1`
import $ivy.`org.dhallj::dhall-circe:0.8.0-M1`

import org.dhallj.parser.DhallParser._

val input = """
let Package =
      < GitHub : { repository : Text, revision : Text }
      >

in [ Package.GitHub { repository = {} } ]
"""

import org.dhallj.circe.Converter

println(parse(input))
println(parse(input).normalize)
println(Converter(parse(input).normalize))

The last line prints:

Some([
  {
    
  }
])

The result of normalization prints as [(<GitHub : {repository : Text, revision : Text}>.GitHub) {repository = {}}].

If I skip repository = {} completely, I get Some([\n]).

If Package is just a record and not a union, it works as expected.
