go-jsonnet's Introduction

go-jsonnet

This is an implementation of Jsonnet in pure Go. It is a feature-complete, production-ready implementation. It is compatible with the original Jsonnet C++ implementation. Bindings to C and Python are available (but not battle-tested yet).

This code is known to work on Go 1.12 and above. We recommend always using the newest stable release of Go.

Installation instructions

# go >= 1.17
# Using `go get` to install binaries is deprecated.
# The version suffix is mandatory.
go install github.com/google/go-jsonnet/cmd/jsonnet@latest

# Or other tools in the 'cmd' directory
go install github.com/google/go-jsonnet/cmd/jsonnet-lint@latest

# go < 1.17
go get github.com/google/go-jsonnet/cmd/jsonnet

It's also available on Homebrew:

brew install go-jsonnet

jsonnetfmt and jsonnet-lint are also available as pre-commit hooks. Example .pre-commit-config.yaml:

- repo: https://github.com/google/go-jsonnet
  rev: # ref you want to point at, e.g. v0.17.0
  hooks:
    - id: jsonnet-format
    - id: jsonnet-lint

It can also be embedded in your own Go programs as a library:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	snippet := `{
		person1: {
		    name: "Alice",
		    welcome: "Hello " + self.name + "!",
		},
		person2: self.person1 { name: "Bob" },
	}`

	jsonStr, err := vm.EvaluateAnonymousSnippet("example1.jsonnet", snippet)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(jsonStr)
	/*
	   {
	     "person1": {
	         "name": "Alice",
	         "welcome": "Hello Alice!"
	     },
	     "person2": {
	         "name": "Bob",
	         "welcome": "Hello Bob!"
	     }
	   }
	*/
}
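
The VM can also be configured before evaluation, for example with external variables and top-level arguments. The following is a minimal sketch using the same API; the variable names (greeting, count, config) are made up for illustration:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// External variables are read inside Jsonnet via std.extVar(...).
	vm.ExtVar("greeting", "Hello") // plain string value
	vm.ExtCode("count", "2 + 3")   // arbitrary Jsonnet expression

	// Top-level arguments are passed to a top-level function.
	vm.TLACode("config", `{ name: "Alice" }`)

	snippet := `function(config) {
		message: std.extVar("greeting") + ", " + config.name + "!",
		count: std.extVar("count"),
	}`

	jsonStr, err := vm.EvaluateAnonymousSnippet("example2.jsonnet", snippet)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(jsonStr)
}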

Build instructions (go 1.12+)

git clone git@github.com:google/go-jsonnet.git
cd go-jsonnet
go build ./cmd/jsonnet
go build ./cmd/jsonnetfmt
go build ./cmd/jsonnet-deps

To build with Bazel instead:

git clone git@github.com:google/go-jsonnet.git
cd go-jsonnet
git submodule init
git submodule update
bazel build //cmd/jsonnet
bazel build //cmd/jsonnetfmt
bazel build //cmd/jsonnet-deps

The resulting jsonnet program will then be available at a platform-specific path, such as bazel-bin/cmd/jsonnet/darwin_amd64_stripped/jsonnet for macOS.

Bazel also accommodates cross-compiling the program. To build the jsonnet program for various popular platforms, run the following commands:

  • Current host: bazel build //cmd/jsonnet
  • Linux: bazel build --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64 //cmd/jsonnet
  • macOS: bazel build --platforms=@io_bazel_rules_go//go/toolchain:darwin_amd64 //cmd/jsonnet
  • Windows: bazel build --platforms=@io_bazel_rules_go//go/toolchain:windows_amd64 //cmd/jsonnet

For additional target platform names, see the per-Go-release platform definitions in the rules_go Bazel package.

Additionally, if any files were moved around, see the section Keeping the Bazel files up to date below.

Building libjsonnet.wasm

GOOS=js GOARCH=wasm go build -o libjsonnet.wasm ./cmd/wasm 

Or if using bazel:

bazel build //cmd/wasm:libjsonnet.wasm

Running tests

./tests.sh  # Also runs `go test ./...`

Running Benchmarks

Method 1

  1. Install benchcmp:
     go get golang.org/x/tools/cmd/benchcmp
  2. Make sure you build a jsonnet binary prior to making changes:
     go build -o jsonnet-old ./cmd/jsonnet
  3. Make changes (iterate as needed), and rebuild the new binary:
     go build ./cmd/jsonnet
  4. Run the benchmark:
     # e.g. ./benchmark.sh Builtin
     ./benchmark.sh <TestNameFilter>

Method 2

  1. Install benchcmp:
     go get golang.org/x/tools/cmd/benchcmp
  2. Make sure you build a jsonnet binary prior to making changes:
     make build-old
  3. Iterate with the following command, which also automatically rebuilds the new binary ./jsonnet. Replace the FILTER value with the name of the test you are working on:
     FILTER=Builtin_manifestJsonEx make benchmark

Implementation Notes

We generate some helper classes for types using http://clipperhouse.github.io/gen/. Do the following to regenerate them if necessary:

go get github.com/clipperhouse/gen
go get github.com/clipperhouse/set
export PATH=$PATH:$GOPATH/bin  # If you haven't already
go generate

Update cpp-jsonnet sub-repo

This repo depends on the original Jsonnet repo. Shared parts include the standard library, header files for the C API, and some tests.

You can update the submodule and regenerate dependent files with one command:

./update_cpp_jsonnet.sh

Note: it needs to be run from the repo root.

Updating and modifying the standard library

The standard library source code is kept in the cpp-jsonnet submodule, because it is shared with the C++ implementation of Jsonnet.

For performance reasons we preprocess the standard library, so for changes to become visible, regeneration is necessary:

go run cmd/dumpstdlibast/dumpstdlibast.go cpp-jsonnet/stdlib/std.jsonnet > astgen/stdast.go

The above command creates the astgen/stdast.go file, which contains the desugared standard library in the right data structures, letting us avoid parsing overhead during execution. Note that this step does not need to be performed manually when building with Bazel; the Bazel target regenerates the astgen/stdast.go file (writing it into Bazel's build sandbox directory tree) when necessary.

Keeping the Bazel files up to date

Note that we maintain the Go-related Bazel targets with the Gazelle tool. The Go module (go.mod in the root directory) remains the primary source of truth. Gazelle analyzes both that file and the rest of the Go files in the repository to create and adjust appropriate Bazel targets for building Go packages and executable programs.

After changing any dependencies within the files covered by this Go module, it is helpful to run go mod tidy to ensure that the module declarations match the state of the Go source code. In order to synchronize the Bazel rules with material changes to the Go module, run the following command to invoke Gazelle's update-repos command:

bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=bazel/deps.bzl%jsonnet_go_dependencies

Similarly, after adding or removing Go source files, it may be necessary to synchronize the Bazel rules by running the following command:

bazel run //:gazelle

go-jsonnet's People

Contributors

alldroll, anguslees, camh-, curusarn, deepgoel17, ghostsquad, gotwarlost, groodt, hanyucui, hausdorff, isarrigiannis, jamesonjlee, jaymebrd, jbeda, jdockerty, jesse-cameron, kardianos, lorenz, marcelocantos, mqliang, robx, rohitjangid, salonijuneja21, sbarzowski, sevki, sh0rez, sparkprime, squat, tejesh-raut, thombashi


go-jsonnet's Issues

tailstrict does not prevent blowing the stack

$ go-jsonnet --max-stack 7 -e 'local f(n, c=0) = if n == 0 then c else f(n - 1, c + n) tailstrict; f(100)'
RUNTIME ERROR: Max stack frames exceeded.
        <std>:979:29-30 thunk from <thunk <ta> from <function <anonymous>>>
        <builtin>       builtin function <type>
        <std>:981:33-35 thunk from <function <anonymous>>
        <builtin>       builtin function <primitiveEquals>
        <std>:981:12-40 function <anonymous>

        <cmdline>:1:69-75       $
        During evaluation

When fixed, remove the special Go golden file for the test_cmd case max_trace4.

Execution performance ideas

First we need to have some benchmarking in place, and some reasonable "corpus" of Jsonnet to test on. We can start by using the current test suite, but dedicated benchmarks make more sense.

Once we have that, we can try to find bottlenecks and potentially optimize. Instead of having them scattered throughout the code, I'll dump some of these ideas here:

  • caching of object locals (they're desugared to locals in each field now)
  • caching of object fields (controversial and important, we need a discussion first)
  • memory usage of tail recursion
  • making sure that we don't keep objects for too long
  • generate unique numbers for variables statically and use that instead of names
  • faster implementation of std.join, uniq, setInter and other array building builtins
  • native implementation of std.sort, std.parseInt and other functions that the user may expect to be optimized
  • interning of booleans for faster comparisons (by using the pointer directly and never actually dereferencing) and lower memory usage.
  • avoid desugaring operators, call builtins directly instead (fewer things to check, no need to do std lookup and verifying that it's actually a function etc.).
  • fast & easy way for users to build arrays; perhaps a more sophisticated representation of an array would help, so that + behaves reasonably, and/or more native higher-order functions. This often applies to strings, too.
  • faster indexing of objects with deep inheritance chains
  • reduce overhead of function call argument checking (currently we check for a lot of special cases, because of optional arguments and that requires building maps and multiple lookups for each argument - maybe it would help to skip these checks when there are only positional arguments)
  • check performance of environment capturing (and reuse the environment when possible)
  • interning of identifiers
  • try other representations of cachedThunk

Before we implement any of these we should have a benchmark that proves that it actually helps.
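
As a rough sketch of what such a micro-benchmark could look like with the standard testing package (the snippet below is made up and not a representative corpus):

package jsonnet_test

import (
	"testing"

	"github.com/google/go-jsonnet"
)

// BenchmarkEvaluateSnippet measures end-to-end evaluation of a small snippet.
// VM construction is kept outside the loop so that mostly evaluation is measured.
func BenchmarkEvaluateSnippet(b *testing.B) {
	vm := jsonnet.MakeVM()
	snippet := `std.join(",", [std.toString(x * x) for x in std.range(1, 100)])`
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := vm.EvaluateAnonymousSnippet("bench.jsonnet", snippet); err != nil {
			b.Fatal(err)
		}
	}
}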

Cannot return RuntimeError from native function

While the RuntimeError type is exported, there does not seem to be an exported way to construct a RuntimeError or to populate its StackTrace ([]TraceFrame) field.

It would be useful to have exported variants of the get* functions in evaluator.go that return Go types (e.g. string instead of valueString) and a RuntimeError. However, just being able to get the stack trace and exporting makeRuntimeError would be sufficient.

Returning other error types from a native function results in an "INTERNAL ERROR" error message, requesting that the user file a bug. The lack of a stack trace in the error message can make it difficult to know what caused the error.
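
For reference, a minimal sketch of a native function that returns a plain Go error (the function name and logic are invented for illustration); as described above, such an error currently surfaces as an INTERNAL ERROR rather than a regular runtime error with a stack trace:

package main

import (
	"errors"
	"fmt"
	"math"

	"github.com/google/go-jsonnet"
	"github.com/google/go-jsonnet/ast"
)

func main() {
	vm := jsonnet.MakeVM()

	// Hypothetical native function that fails for non-numeric or negative input.
	vm.NativeFunction(&jsonnet.NativeFunction{
		Name:   "mySqrt",
		Params: ast.Identifiers{"x"},
		Func: func(args []interface{}) (interface{}, error) {
			x, ok := args[0].(float64)
			if !ok || x < 0 {
				// A plain Go error: there is currently no exported way to build
				// a RuntimeError with a proper Jsonnet stack trace here.
				return nil, errors.New("mySqrt: expected a non-negative number")
			}
			return math.Sqrt(x), nil
		},
	})

	out, err := vm.EvaluateAnonymousSnippet("native.jsonnet", `std.native("mySqrt")(-1)`)
	fmt.Println(out, err)
}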

Missing builtins

There is still a bunch of builtins that are available in the C++ version but not here.

Current status:
builtins["makeArray"] = &Interpreter::builtinMakeArray; DONE
builtins["pow"] = &Interpreter::builtinPow; DONE
builtins["floor"] = &Interpreter::builtinFloor; DONE
builtins["ceil"] = &Interpreter::builtinCeil; DONE
builtins["sqrt"] = &Interpreter::builtinSqrt; DONE
builtins["sin"] = &Interpreter::builtinSin; DONE
builtins["cos"] = &Interpreter::builtinCos; DONE
builtins["tan"] = &Interpreter::builtinTan; DONE
builtins["asin"] = &Interpreter::builtinAsin; DONE
builtins["acos"] = &Interpreter::builtinAcos; DONE
builtins["atan"] = &Interpreter::builtinAtan; DONE
builtins["type"] = &Interpreter::builtinType; DONE
builtins["filter"] = &Interpreter::builtinFilter; DONE
builtins["objectHasEx"] = &Interpreter::builtinObjectHasEx; DONE
builtins["length"] = &Interpreter::builtinLength; DONE
builtins["objectFieldsEx"] = &Interpreter::builtinObjectFieldsEx; DONE
builtins["codepoint"] = &Interpreter::builtinCodepoint; DONE
builtins["char"] = &Interpreter::builtinChar; DONE
builtins["log"] = &Interpreter::builtinLog; DONE
builtins["exp"] = &Interpreter::builtinExp; DONE
builtins["mantissa"] = &Interpreter::builtinMantissa; DONE
builtins["exponent"] = &Interpreter::builtinExponent; DONE
builtins["modulo"] = &Interpreter::builtinModulo; DONE
builtins["extVar"] = &Interpreter::builtinExtVar; DONE
builtins["primitiveEquals"] = &Interpreter::builtinPrimitiveEquals; DONE
builtins["native"] = &Interpreter::builtinNative; TODO: see #44
builtins["md5"] = &Interpreter::builtinMd5; DONE

Will go-jsonnet supersede the C++ version?

Hi jsonnet team,

As Go is now the preferred language for ops, will go-jsonnet become the de facto implementation of Jsonnet?

IMHO, the Go version is more likely to be adopted by the community as a library. Also, from a code-readability perspective, the Go version is more likely to get mergeable PRs.

Add comments to AST (and parser/lexer)

If I were writing a Jsonnet formatter, I'd like to have access to comments as AST nodes so I can include them when rewriting the source. It would also be helpful when using the AST to generate Jsonnet as a conversion from another source.

Consider exposing interface for repeated execution with varying environment

At present, it appears that go-jsonnet's public interface starts with jsonnet.MakeVM, against which one can call (*VM).ExtVar, (*VM).ExtCode, and (*VM).Importer to prepare an environment, and eventually call (*VM).EvaluateSnippet against a sequence of Jsonnet expressions.

That call to (*VM).EvaluateSnippet eventually calls on the internal snippetToAST function, which yields a parsed ast.Node ready for later use in the evaluate function. In there, we find some more code-to-AST translation for environment entries that are Jsonnet code.

Is it viable to cleave a new break into this interface, so that it's possible to parse some Jsonnet—together with its import callbacks, and perhaps code entries from the environment—and yield a prepared object that could then be used for repeated evaluation against different TLAs?

The use case I have in mind is a server that would take a Jsonnet function that expects to receive some JSON as a TLA and evaluate the same Jsonnet function repeatedly and frequently, just with its input changing. It's unfortunate to have to restart this procedure each time from raw text when the code involved won't change between evaluations, throwing away the parsed form of the input text.

I concede that I have no measurements to critique the cost of the current interface compared to the separate prepare-once/evaluate-repeatedly approach sketched above. Has this project considered and rejected this approach before? If not, I'm willing to help pursue it.

A few ideas for an interface (a usage sketch follows the list below):

  • (*vm).PrepareSnippet(filename string, snippet string) (*Program, error)
  • (*Program).Eval(vs map[string]string) (string, error)
  • (*Program).EvalTo(w io.Writer, vs map[string]string) error
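
A hypothetical usage sketch of the proposal (neither Program nor PrepareSnippet exists in go-jsonnet today; this only illustrates the prepare-once/evaluate-repeatedly call pattern):

// Hypothetical sketch only: none of these types or methods exist yet.
package sketch

import "io"

// Program would be the prepared (already parsed) form of a snippet.
type Program interface {
	Eval(tlas map[string]string) (string, error)
	EvalTo(w io.Writer, tlas map[string]string) error
}

// serve shows the intended usage: prepare once, then evaluate many times with
// different top-level arguments (here a single hypothetical "request" TLA).
func serve(prog Program, requests <-chan string, responses chan<- string) error {
	for req := range requests {
		out, err := prog.Eval(map[string]string{"request": req})
		if err != nil {
			return err
		}
		responses <- out
	}
	return nil
}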

libjsonnet compatibility

We want to be able to use the Go version as a drop-in replacement for the C++ version (for evaluation only, no formatting). This will probably require some changes on the C++ side first, e.g. separating evaluation from formatting.

It's a part of the larger API discussion.

Pre-parsed stdlib

@mqliang wrote in #89:

It would be very interesting if we could dump the AST as Go source files; then we could dump the AST of the stdlib as Go source code and embed it into the VM, thus avoiding parsing the stdlib every time. Any idea?

I think we should look into it. It sounds quite easy.

Prevent changing meaning of std in desugared expressions

Some constructs are desugared to functions from std. The problem is that if the user shadows std with their own definition, they can change the meaning of core language constructs (probably just break them), which will be extremely confusing.

We can have a special alias for std, unavailable to the user, for example $td.
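
A hedged illustration of the concern (whether this particular snippet misbehaves depends on exactly which constructs the desugarer rewrites into std calls; the snippet is hypothetical):

package main

import (
	"fmt"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// The user shadows std with their own object. If an operator such as %
	// is desugared into a call like std.mod, the shadowed definition may be
	// picked up, changing or breaking the operator. This only illustrates the
	// concern above; actual behaviour depends on the desugarer.
	snippet := `
		local std = { mod(a, b): "hijacked" };
		"value: %d" % 42
	`

	out, err := vm.EvaluateAnonymousSnippet("shadow.jsonnet", snippet)
	fmt.Println(out, err)
}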

Parameter names of builtins

The parameter names are visible to the user - they can use named arguments instead of positional ones for every function, including those in the stdlib and builtins. Generally, parameter names should be considered part of the interface.

So we need to make sure they are right (consistent with http://jsonnet.org/docs/stdlib.html), and this should be verified by tests.
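
For example, a user calling a stdlib function with named arguments relies on those names directly; a small sketch, assuming the documented parameter names str, from and len for std.substr:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// This only works if the implementation's parameter names match the
	// documented ones (assumed here to be str, from and len for std.substr).
	out, err := vm.EvaluateAnonymousSnippet("named.jsonnet",
		`std.substr(str="hello world", from=6, len=5)`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // the JSON string "world"
}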

Full commandline support

It would be nice to use the Go version as a command-line drop-in replacement for the C++ version.

We can avoid implementing it in Go for now if we implement libjsonnet (#46) and use that from the C++ command line. The advantages are full compatibility, more tests of go-jsonnet (currently the libjsonnet tests are very minimal; the interesting tests happen through invoking the jsonnet command), and we get it almost for free once we have #46.

Pass through JSON

I've set myself an arbitrary objective today to allow passing through JSON, i.e. add support for desugaring, checking, executing, manifesting basic objects and arrays containing other objects / arrays / primitives.

Object field caching (discussion needed)

Accessing a field multiple times results in multiple evaluations. This can be easily avoided, and innocent-looking code may in fact be exponential.

I know there are some concerns about memory usage.
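
A small sketch of the kind of innocent-looking code in question (hypothetical; std.trace is used only to make re-evaluation visible on stderr, and the actual caching behaviour may differ between versions):

package main

import (
	"fmt"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// If object fields are not cached, each reference to self.base below may
	// re-evaluate it (visible as repeated trace output). Chaining objects like
	// this is how innocent-looking code can become exponential.
	snippet := `
		local obj = {
			base: std.trace("evaluating base", 1),
			a: self.base + 1,
			b: self.base + 2,
			c: self.a + self.b,
		};
		obj.c
	`

	out, err := vm.EvaluateAnonymousSnippet("caching.jsonnet", snippet)
	fmt.Println(out, err)
}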

Make super a value?

Super can be a perfectly good value. It's not an object, though. It would be just a selfBinding - it's indexable, but + doesn't make sense on it.

We could desugar in super etc. and avoid treating it specially during evaluation.

Dead?

Just asking because I am using this with Kubernetes and trying to avoid any C++ dependencies.

import support

Hi,

Thanks for open-sourcing the go-jsonnet library. Do you have any plans to implement the 'import' construct of the Jsonnet language any time soon?

Thanks,
Jim

RUNTIME ERROR: Couldn't manifest function in JSON output.

I found this in the code:

case *valueFunction:
		return nil, makeRuntimeError("Couldn't manifest function in JSON output.", i.getCurrentStackTrace(trace))

and my input is something like this:

local a = import "../../../applib/a.libsonnet";
local b = import "../../../applib/b.libsonnet";

function(config={})
  local overall_config = default_config + config;
  (if a.match(overall_config) then
    {
      ...
    }
  else
    {
      ...
    }) + {
    ...
  }

And I get this:

RUNTIME ERROR: Couldn't manifest function in JSON output.
	During manifestation

How can I resolve this?
Thanks!
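
For reference, a hedged sketch of the usual way around this when embedding the library: since the file evaluates to a function, supply its parameter as a top-level argument so the result is a plain object rather than a function. The file path and parameter name below are made up to mirror the report:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()

	// The top-level value of the file is a function, which cannot be
	// manifested as JSON directly. Binding its parameter as a top-level
	// argument makes the evaluation result a plain object.
	vm.TLACode("config", `{ environment: "prod" }`)

	jsonStr, err := vm.EvaluateFile("app/config.jsonnet")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(jsonStr)
}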

Tests metadata

Annotate individual end-to-end tests with things like:

  • Whether it should succeed, end with runtime error or end with a static error.
  • extvars and tlas
  • error formatting settings (in case error is expected)
  • native functions
  • import callbacks
  • Maximum stack
  • Maximum number of printed frames on stack trace
  • etc.

This is needed to properly test some features, and it would be very helpful in unifying the Go and C++ test suites.
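
A hypothetical sketch of what such per-test metadata could look like as a Go structure (nothing like this exists yet; the field names are invented from the list above):

package sketch

// TestMetadata is a hypothetical annotation for a single end-to-end test case.
type TestMetadata struct {
	Name            string            // test case name
	Expect          string            // "success", "runtime_error" or "static_error"
	ExtVars         map[string]string // external variables
	TLAs            map[string]string // top-level arguments
	ErrorFormatting string            // error formatting settings, used when an error is expected
	NativeFunctions []string          // names of native functions to register
	ImportPaths     []string          // import callbacks / search paths
	MaxStack        int               // maximum stack depth
	MaxTraceFrames  int               // maximum number of stack trace frames to print
}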
