
Gen.jl's Introduction

Gen.jl


Gen: A General-Purpose Probabilistic Programming System with Programmable Inference

Warning: This is rapidly evolving research software.

See https://gen.dev for introduction, documentation, and tutorials.

Gen was created at the MIT Probabilistic Computing Project. To get in contact, please email [email protected].

If you use Gen in your research, please cite our 2019 PLDI paper:

Gen: A General-Purpose Probabilistic Programming System with Programmable Inference. Cusumano-Towner, M. F.; Saad, F. A.; Lew, A.; and Mansinghka, V. K. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '19).

Gen.jl's People

Contributors

alex-lew, ali-ramadhan, bzinberg, chentoast, deoxyribose, femtomc, fsaad, georgematheos, github-actions[bot], hsm207, jamesonquinn, john-aws, josh-joseph, kenta426, keorn, marcoct, mirkoklukas, mrvplusone, mschauer, mugamma, nmbrgts, postylem, samwitty, sharanry, sharlaon, sk-tests, werpers, ylxdzsw, zlatanvasovic, ztangent


Gen.jl's Issues

Propose for static DSL is actually implemented in terms of generate

A real propose for static IR generative functions is not yet implemented; as a workaround, it currently calls generate. This is correct in common cases, but in the presence of custom generative functions that use custom internal proposals (e.g. a nested particle filter for use in particle MCMC), it will give wrong weights.

Importance sampling (resampling)

Methods to be added to the standard inference library (the names could probably use improvement; a sketch of the first variant appears at the end of this list):

Importance sampling using forward simulation as the proposal. Returns a vector of traces, their log normalized weights, and the estimate of the log marginal likelihood of the observations.

function importance_sample(model::Generator{T,U}, model_args::Tuple,
    observations::ChoiceTrie, num_samples::Int) where {T,U}
..
return (traces::Vector{U}, log_norm_weights::Vector{Float64}, log_ml_estimate::Float64)
end

Sampling importance resampling (rolling, constant space) using forward simulation as the proposal. Returns a single trace, and an estimate of the log marginal likelihood of the observations.

function sampling_importance_resampling(model::Generator{T,U}, model_args::Tuple,
    observations::ChoiceTrie, num_samples::Int) where {T,U}
..
return (trace::U, log_ml_estimate::Float64)
end

Importance sampling using a custom generator as the proposal. Returns a vector of traces, their log normalized weights, and the estimate of the log marginal likelihood of the observations.

function importance_sample(model::Generator{T,U}, model_args::Tuple,
    observations::ChoiceTrie, proposal::Generator, proposal_args::Tuple, num_samples::Int) where {T,U}
..
return (traces::Vector{U}, log_norm_weights::Vector{Float64}, log_ml_estimate::Float64)
end

Sampling importance resampling (rolling, constant space) using a generator as the proposal. Returns a single trace, and an estimate of the log marginal likelihood of the observations.

function sampling_importance_resampling(model::Generator{T,U}, model_args::Tuple,
    observations::ChoiceTrie, proposal::Generator, proposal_args::Tuple, num_samples::Int)  where {T,U}
..
return (trace::U, log_ml_estimate::Float64)
end
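For concreteness, here is a minimal sketch of the first variant, under the assumption that generate(model, model_args, observations) returns a (trace, log_weight) pair where the weight is the usual unnormalized importance weight; this is illustrative, not a final implementation:

function logsumexp(arr::Vector{Float64})
    m = maximum(arr)
    return m + log(sum(exp.(arr .- m)))
end

function importance_sample(model::Generator{T,U}, model_args::Tuple,
    observations::ChoiceTrie, num_samples::Int) where {T,U}
    traces = Vector{U}(undef, num_samples)
    log_weights = Vector{Float64}(undef, num_samples)
    for i in 1:num_samples
        # generate with the observations as constraints; the returned weight
        # is assumed to be the log importance weight of the trace
        (traces[i], log_weights[i]) = generate(model, model_args, observations)
    end
    log_total = logsumexp(log_weights)
    log_norm_weights = log_weights .- log_total
    log_ml_estimate = log_total - log(num_samples)
    return (traces, log_norm_weights, log_ml_estimate)
end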

Conversion between choice tries with array-valued random choices and arrays

to_array and from_array are methods of a choice trie that allow conversion back and forth to flat vectors of Float64. This feature is important because many inference operations on continuous-valued random choices are most efficiently implemented and concisely expressed using vector arithmetic (e.g. HMC, etc.).

Currently, these methods only work for choice tries that contain scalar-valued random choices. However, they should be able to work with array-valued random choices ("leaf nodes") without much trouble.

For from_array the structure of the choice trie prototype determines the structure of the returned choice trie. When this feature is added, the length of the vector-valued random choices in the prototype will determine their length in the returned choice trie.
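Under the proposed extension, a hypothetical usage could look like the following (the addresses, values, and indexing syntax are illustrative):

choices = DynamicChoiceTrie()
choices[:y] = 0.5
choices[:xs] = [1.0, 2.0, 3.0]            # array-valued leaf node

flat = to_array(choices, Float64)         # e.g. [0.5, 1.0, 2.0, 3.0]
flat = flat .+ 0.1                        # vectorized update (e.g. an HMC leapfrog step)
choices = from_array(choices, flat)       # prototype determines structure and lengths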

Separate emission from state in Markov module

Currently, the kernel of a Markov module has to have an output type that is also the type of one of its inputs. However, it is common for a kernel to have one piece of data it passes to the next kernel application, and a separate piece of data that it emits to the caller. An example is emissions (i.e. observations) and hidden states in an HMM. Currently, one has to unnaturally make the kernel accept the emissions as part of its input.

The type signature of return value for a kernel of a Markov module should be Tuple{E,S} where E is the emission type and S is the "state" type. The Markov module will split this tuple into its components and treat them separately.

Note that to reproduce the current behavior, the kernel could return (state, state).
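For example, an HMM-like kernel under the proposed convention might look like the following sketch (the transition matrix and emission means are hypothetical):

transition_probs = [0.9 0.1; 0.2 0.8]   # hypothetical transition matrix
emission_means = [-1.0, 1.0]            # hypothetical per-state emission means

@gen function kernel(state::Int)
    new_state::Int = @addr(categorical(transition_probs[state, :]), :state)
    emission::Float64 = @addr(normal(emission_means[new_state], 1.0), :emission)
    return (emission, new_state)        # the Markov module splits this Tuple{E,S}
end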

Add pretty printing for traces

For example, println(trace) in the intro-to-modeling tutorial notebook currently prints the following; it would be easier to read with indentation (i.e., the same content up to whitespace changes):

Gen.DynamicDSLTrace{DynamicDSLFunction{Any}}(DynamicDSLFunction{Any}(Dict{Symbol,Any}(), Dict{Symbol,Any}(), Type[Array{Float64,1}], getfield(Main, Symbol("##3#5"))(), getfield(Main, Symbol("##4#6"))(), Bool[false], false), Trie{Any,Gen.ChoiceRecord}(Dict{Any,Gen.ChoiceRecord}((:y, 7)=>ChoiceRecord{Float64}(2.15599, -1.44199),(:y, 9)=>ChoiceRecord{Float64}(-0.217138, 0.755207),(:y, 1)=>ChoiceRecord{Float64}(8.84673, -0.344844),(:y, 10)=>ChoiceRecord{Float64}(-1.32226, 0.528996),(:y, 5)=>ChoiceRecord{Float64}(4.1605, 1.38225),(:y, 4)=>ChoiceRecord{Float64}(3.31077, 1.28686),:intercept=>ChoiceRecord{Float64}(3.04202, -2.76882),(:y, 3)=>ChoiceRecord{Float64}(6.30626, 0.810854),(:y, 6)=>ChoiceRecord{Float64}(2.82405, -0.991964),(:y, 8)=>ChoiceRecord{Float64}(0.736654, 1.2163),(:y, 11)=>ChoiceRecord{Float64}(-2.97161, -6.41178),(:y, 2)=>ChoiceRecord{Float64}(7.50822, 1.34212),:slope=>ChoiceRecord{Float64}(-1.12376, -1.55035)), Dict{Any,Trie{Any,Gen.ChoiceRecord}}()), Trie{Any,Gen.CallRecord}(Dict{Any,Gen.CallRecord}(), Dict{Any,Trie{Any,Gen.CallRecord}}()), false, -6.187158856986298, 0.0, ([-5.0, -4.0, -3.0, -0.2, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0],), 11)

(Suggested as a feature request by @marcoct after I asked if this exists.)
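A minimal sketch of what the pretty printer could do, recursing over the hierarchical assignment with indentation (get_values_shallow and get_subassmts_shallow are assumed to be the shallow iteration methods of the assignment interface):

function pretty_print(io::IO, assmt, indent::Int=0)
    pad = " " ^ indent
    for (addr, value) in get_values_shallow(assmt)
        println(io, pad, addr, " : ", value)
    end
    for (addr, sub) in get_subassmts_shallow(assmt)
        println(io, pad, addr, " :")
        pretty_print(io, sub, indent + 2)   # indent one level per namespace
    end
end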

Static is_absorbing property

Some higher order module types (e.g. Markov) are only designed to be constructed from other modules that satisfy certain properties, statically. One property of the inner module (the 'kernel') that is required by Markov is that the return value of the kernel is always conditionally independent from its arguments, given its trace. This is a static property of the kernel module.

Currently, the Markov module does not check whether this property is true, and assumes it is. This could be checked by adding a method to the module/generator interface. The default would be false, but modules can overload this method and return true.

is_absorbing(generator::Generator) = false

A user annotation could be added to the generative function definition syntax to assert that this property holds. It would be nice to be able to stack several such annotations (we already have @compiled and @ad, and one could imagine others):

@is_absorbing
@compiled
@gen function foo()
...
end

However, this causes a Julia parse error. Adding parentheses works, but is not as clean:

@is_absorbing(
@compiled(
@gen function foo()
...
end))

It is also possible to chain the annotations on one line, but that does not scale:

@is_absorbing @compiled @gen function foo()
...
end

Dynamic DSL should use the logpdf_grad method for distributions

Currently, the dynamic DSL obtains gradients of the log density of a probability distribution by tracking the parameters on the ReverseDiff AD tape and passing them to the logpdf method of the distribution, relying on logpdf being implemented in a way that plays nicely with ReverseDiff.

The static DSL uses the distribution's logpdf_grad method.

The dynamic DSL should also use the distribution's logpdf_grad method, for separation of concerns reasons, and for performance.
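For reference, here is a sketch of what logpdf_grad computes for a univariate normal; it returns the gradients of the log density with respect to the value and each parameter (the actual method signature in Gen may differ):

function normal_logpdf_grad(x::Real, mu::Real, sigma::Real)
    z = (x - mu) / sigma
    deriv_x = -z / sigma              # d/dx of -(x - mu)^2 / (2 sigma^2)
    deriv_mu = z / sigma
    deriv_sigma = (z^2 - 1) / sigma   # includes the -log(sigma) term
    return (deriv_x, deriv_mu, deriv_sigma)
end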

Dynamic DSL should give syntax error if user attempts to trace in a Julia function

In the AST, each @addr expression is associated lexically with the inner-most surrounding @gen function .. end node (there must be at least one such surrounding node).

There may not be a Julia function node (of either the function form or the () -> ... form) on the path in the AST between the @addr expression and the associated @gen function .. end node.

This would give an error in the following, which currently runs and traces the random choice inside the tracing context of bar:

@gen function foo(f)
    f()
end

@gen function bar()
    f = () -> @addr(bernoulli(0.5), :x)
    foo(f)
end

Internally, the issue is that we do not want to close over the current trace. Guaranteeing that the generative function interface is implemented correctly would be difficult if this were allowed.

Design: should a trace store the observation values used in initialize?

Consider the following:

    xs = ...
    ys = ...
    observations = Gen.DynamicAssignment()
    for (i, y) in enumerate(ys)
        observations[(:y, i)] = y
    end
    (trace, _) = Gen.initialize(model, (xs,), observations)

It seems sensible that the execution trace should store the observations (or perhaps only the addresses) that were constrained upon generation of the trace.

This information could simplify the somewhat awkward design of Gen.importance_sampling:

function importance_sampling(model::GenerativeFunction{T,U}, model_args::Tuple,
                             observations::Assignment,
                             num_samples::Int) where {T,U}

which (like black_box_vi!) requires the actual observations to be passed in, as opposed to a selection::AddressSet, which is taken by metropolis_hastings, hmc, map_optimize, and mala. By storing the observations in the trace, we could in principle make the inference workflow take a more standard form:

  1. Define observations.
  2. Obtain trace using Gen.initialize(model, (args,), observations), with the observations being stored in the trace.
  3. Run inference operators on a selection::AddressSet.

Consider making traces directly implement the (read only) assignment interface

@alex-lew suggested that it should not be necessary to have a separate get_assignment step to go from an execution trace to something that exposes an assignment.

The current behavior is:

assignment = get_assignment(trace)
slope_value = assignment[:slope]

The proposed behavior is:

slope_value = trace[:slope]

However, having a separate data type that does not expose the trace API can result in clearer information hiding; this is similar to wrapping a mutable data type in an immutable value that doesn't expose any mutation methods.

Abstract type for deterministic generative function

If a user has a deterministic generative function, or a generative function with no addressable randomness (e.g. a renderer), then the implementation of the GFI can be greatly simplified. For example, update and regenerate behave the same. Such functions should only need to implement a couple of methods (sketched after the list below):

  • run forward, producing some state, and a return value (the trace itself will be a type that is provided by Gen, and stores the state, args, etc.)

  • update, given the new arguments, argdiff, state, and old return value; produce new return value, and new state

  • gradient, given state, arguments, return value, and return value grad, return gradients with respect to some subset of the argument(s). must agree with has_argument_grads() and accepts_output_grad().
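A sketch of what this reduced interface could look like (all names are illustrative, not Gen's actual API):

abstract type DeterministicGenerativeFunction end

# run forward: produce internal state and a return value
forward(gf::DeterministicGenerativeFunction, args::Tuple) =
    error("not implemented")   # returns (state, retval)

# incremental update given new args, an argdiff, and the old state and return value
update_deterministic(gf::DeterministicGenerativeFunction, new_args::Tuple,
                     argdiff, state, old_retval) =
    error("not implemented")   # returns (new_state, new_retval)

# gradients with respect to (a subset of) the arguments; must agree with
# has_argument_grads() and accepts_output_grad()
gradient_deterministic(gf::DeterministicGenerativeFunction, args::Tuple,
                       state, retval, retval_grad) =
    error("not implemented")   # returns arg_grads::Tuple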

Consider adding arbitrary hierarchical configuration to module API methods

Many module API methods involve internal inference actions, a simple example being the proposal distribution used within generate(). It should be possible to configure this proposal externally (e.g. choose from a set of proposals with different time-accuracy tradeoffs, all encapsulated in some module deep in the hierarchy), perhaps by introducing an extra optional dynamic hierarchical configuration parameter to most of the module API methods.

Use 'Assignment' instead of 'ChoiceTrie' everywhere

This name is simpler, and more recognizable. The fact that it is a prefix tree is a detail.

Assignment should be an abstract type (currently ChoiceTrie)

DynamicAssignment, StaticAssignment, etc. are concrete types

Constructor for simple distributions with bijection applied

Frequently, we want to sample from a distribution that is a simple invertible transformation of an existing probability distribution, but because the transformed distribution is not available as a built-in, we either have to implement it (which is not conceptually complicated, but is verbose), or we have to reparametrize the random choices in our model.

For example a user may be forced to write:

k = @trace(geometric(0.5), :k_minus_one) + 1

whereas the user wanted to express (but could not):

k = @trace(geometric(0.5) + 1, :k)

Similarly for continuous random variables, a user is forced to write:

x = exp(@trace(normal(0, 1), :log_x))

when what they wanted was:

x = @trace(exp(normal(0, 1)), :x)

The mismatch between the desired semantic names (k and x in the examples above) and the actual addresses (which reflect the parametrization of the model) is awkward. A concise syntax that handles the above cases, and other cases that are easy to handle statically, would be nice.

The same idea holds for generative functions more generally, but more research is required to capture the general case (and the simple case for distributions has already popped up).
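For the two examples above, the underlying transformed densities follow from the change-of-variables formula; a sketch (density functions written out by hand, names hypothetical):

# shifted geometric: k = geometric(p) + 1 (a discrete shift has no Jacobian term)
geometric_logpdf(k, p) = k * log(1 - p) + log(p)   # support k = 0, 1, 2, ...
shifted_geometric_logpdf(k, p) = geometric_logpdf(k - 1, p)

# log-normal as exp of a normal: p_Y(y) = p_X(log y) / y, hence the -log(x) term
exp_normal_sample(mu, sigma) = exp(mu + sigma * randn())
exp_normal_logpdf(x, mu, sigma) =
    -0.5 * ((log(x) - mu) / sigma)^2 - log(sigma) - 0.5 * log(2pi) - log(x)

A syntax like @trace(exp(normal(0, 1)), :x) would let the macro generate this boilerplate automatically.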

Determine policy for controlling entropy

As it stands, Gen programs draw random bits from the global entropy store. Going forward, some control over the random seed is required, for debugging, reproducibility, and independence of Gen code from other libraries.

The granularity of entropy control is a design choice. Some possibilities include:

  1. Continue using global seed which is set using Random.seed!(n)
    • pros: approach requiring least work
    • cons: unpredictable, no separation, harder to reproduce/debug results.
  2. Furnish each Gen method that makes random choices with an (optional) formal parameter prng (see the sketch after this list)
    • pros: composes easily, reproducible, predictable, modular, adds minimal API complexity, each random procedure becomes deterministic given its inputs.
    • cons: needs implementation and design
  3. Design a special Monadic construct for the RandomSeed
    • pros: potentially more transparent to the user / designer of Gen library
    • cons: Julia does not have good native support for Monadic computation
  4. Design a single global entropy source for Gen only
    • pros: less complex than 2 and 3, separates Gen from other libraries.
    • cons: less modular; need to check if/how such an object could be implemented in Julia.
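A sketch of what option 2 could look like in user code (the prng parameter is illustrative, not an existing Gen API):

using Random

# each random procedure takes an explicit (optional) prng
sample_weight(rng::AbstractRNG=Random.GLOBAL_RNG) = randn(rng)

rng = MersenneTwister(42)   # reproducible, and isolated from the global stream
w = sample_weight(rng)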

Improvements to static gen functions

There are a number of simplifications that would improve performance and simplify the language.

  • Do not store every LHS variable value in the trace, only store (1) the return values of primitive random choices, (2) sub-traces, and (3) arguments to primitive random choices. Arguments of generator invocations are stored in the appropriate sub-trace.

  • Infer the types for return values of primitive random choices from the distribution itself, instead of from user type annotation.

  • Do not store constant Julia expressions in the trace (inline them into the generated code).

Changes to trace update interface methods

Fix-update is an interesting type of trace update that is intermediate between free update and force update. However, it is not clear that it is needed. Remove fix-update.

Remove deprecation warnings from examples

Several examples need to be updated to remove deprecation warnings introduced in Julia 0.7. These warnings will turn into errors when we move to Julia 1.0. These should all be trivial fixes.

Consider removing some module API methods

simulate is a special case of generate with an empty choice trie: it is possible to JIT-compile an implementation of generate specialized to this case by passing an EmptyChoiceTrie as the constraints. Therefore, there isn't much of a performance reason to keep simulate around.

However, we should keep assess because it expects a complete choice trie. This is something that is hard to specialize on using JIT compilation. It makes sense to have a specialized procedure for this.

I also considered removing extend, which is a special case of fix_update. The point of extend is that it is an error if any addresses are deleted. This property gives a large opportunity for specialization over fix_update; for example, no discard choice trie is needed. The args-change value and the constraints should together determine whether any addresses will be deleted, but this is not a property that can be trivially encoded in the type of the constraints trie and the args-change value. Therefore, I will keep it. It should be clear that it is a special case of fix_update and exists as an (optional) performance optimization. A default implementation should be provided for abstract Generators that just calls fix_update and checks at the end that the discard trie is empty.
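Sketches of both defaults, following this issue's hypothetical API (generate is assumed to return a (trace, weight) pair, and fix_update a (trace, weight, discard) triple):

# simulate as generate with empty constraints (the weight is log(1) = 0)
function simulate(gen::Generator, args::Tuple)
    (trace, _) = generate(gen, args, EmptyChoiceTrie())
    return trace
end

# default extend in terms of fix_update, checking that nothing was deleted
function extend(gen::Generator, new_args::Tuple, argdiff, trace, constraints)
    (new_trace, weight, discard) = fix_update(gen, new_args, argdiff, trace, constraints)
    isempty(discard) || error("extend: update would delete existing addresses")
    return (new_trace, weight)
end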

Nested modules for generator types

Each generator type has a substantial implementation, and they should be in different namespaces.

For example, the Tree module type should be in module Gen.Tree.

This is also relevant for users, who will sometimes need to refer to values that are not exported as part of the Gen module itself (for example, data types or constants associated specifically with Tree).

Also, the set of generator types is intended to grow over time. Some of these will be treated as plugins, and placed in other packages (like TensorFlow generators in GenTF), but it is conceivable there will be 10 distinct generator types in Gen itself (there are currently 5).

Add a default implementation of regenerate in terms of generate and project

Add the requirement that

q(t; x, u) = q(t; x, u|_{dom(t)})

Practically speaking, this means that sampling from q should never need to read the values of addresses from u that it doesn't end up putting in the trace. This is indeed the case in ancestral sampling.

The motivation for making this restriction is that it makes it possible to provide a generic default implementation of regenerate in terms of generate and project (sketched after the steps below). This implementation does not exploit incremental computation, but it is nice that a default exists.

  1. given initial choice map t, initial args, x, new args, x', selection set A
  2. construct choice map u that is the restriction of t to the complement of A
  3. run generate(f, x', u), obtaining new choice map t' and weight w'
  4. B = dom(t') setminus A
  5. w = project(t, B)
  6. return t', weight=w'/w
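A sketch of the default implementation following these steps; restrict, complement, and addresses are assumed helpers, and all weights are in log space:

function default_regenerate(gen, new_args::Tuple, trace, selection)
    t = get_assignment(trace)
    u = restrict(t, complement(selection))                        # step 2
    (new_trace, log_w_prime) = generate(gen, new_args, u)         # step 3
    B = setdiff(addresses(get_assignment(new_trace)), selection)  # step 4
    log_w = project(trace, B)                                     # step 5
    return (new_trace, log_w_prime - log_w)                       # log(w'/w)
end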

Parse error for gen functions on no-name argument

Julia permits arguments to methods to not have names, e.g. the second and third arguments below:

function foo(bar::Int, ::Float64, ::Nothing)
..
end

But this currently causes a parse error for Gen functions:

@gen function foo(bar::Int, ::Float64, ::Nothing)
..
end

Perhaps this should be fixed.

However, it is unlikely that Gen functions will have all the features of Julia functions in the near future (e.g. method dispatch is not currently supported for Gen functions). The limitations of Gen functions relative to Julia functions should be clearly documented.

Proposals should accept model trace as first argument

Some generative functions are used as 'proposals', which are accepted by some built-in inference methods. Being a 'proposal' means the generative function satisfies certain properties relative to some model. Currently, some built-in inference methods expect the proposal to accept a first argument that is a choice trie of the model function.

However, some proposals may wish to efficiently interrogate the model function using e.g. update. This requires access to the trace of the model, not just the choice trie. An example is Gibbs sampling, in which we run update twice on the model to get model scores for two different values of a Bernoulli random variable. Therefore, I am changing the convention so that proposals accept a trace as their first argument, and not a choice trie.

Changes to the map combinator

Consider mapping over only the first argument, and treating the rest of the arguments as 'shared' arguments, that are shared by all kernel applications.

It is common for there to be a large number of shared parameters, and that what is being mapped over is smaller. In a fully-fledged functional language, this would be handled by passing a closure that closes over the shared parameters. Making the shared context explicit seems reasonable in the first-order world of combinators.

This will make the map combinator more closely aligned with the current unfold combinator, which currently distinguishes between 'state' (something that is passed from one kernel application to the next) and 'params' (which are 'shared' arguments that are passed to all kernel applications).

Note that this would obviate the need for fill in models like the linear regression demo example. Without this change, fill will be ubiquitous in hierarchical models.

The original motivation for having all the arguments be vectors was that this mirrored regular map. However, we aren't trying to replicate a functional programming language with these combinators.

Consider renaming map to something else, like indep or replicate?

Literal assignment and selection constructors

Each argument is a pair of an address and a value; addresses may be nested, and the last element of each chain is the value:

assignment = Assignment(
    :foo => :bar => 0.000123,
    :baz => :blah => -0.3,
    :something => 0.1341
)

Each argument is an address:

selection = Selection(
   :foo => :bar,
   :baz
)

Changes to the unfold combinator

Currently, the number of applications of the kernel is an integer argument. The termination could instead be determined by returning a special value from the kernel, as in usual unfold in functional programming. The current behavior could be emulated by augmenting the state with an integer that gets decremented by each kernel application.
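For example, a kernel could emulate the current behavior with a sketch like the following, where returning nothing is the (hypothetical) termination sentinel:

@gen function kernel(state::Tuple{Int,Float64})
    (remaining, x) = state
    remaining == 0 && return nothing   # sentinel value: stop unfolding
    x_new = @addr(normal(x, 1.0), :x)
    return (remaining - 1, x_new)      # decrement the counter each application
end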

Discrete distributions with log space parameters

E.g. a version of categorical that accepts a vector of unnormalized log probabilities, to reduce the boilerplate use of logsumexp by users who are implementing Gibbs-style updates, e.g.:

(_, w1) = update(trace, args, noargdiff, constraints1)
(_, w2) = update(trace, args, noargdiff, constraints2)
(_, w3) = update(trace, args, noargdiff, constraints3)
i = @trace(categorical([w1, w2, w3]), :my_choice)

Similarly for bernoulli?
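A sketch of the sampler for such a distribution, normalizing internally with logsumexp so that callers can pass log weights directly (names are illustrative):

logsumexp(arr) = (m = maximum(arr); m + log(sum(exp.(arr .- m))))

function categorical_log_sample(log_weights::Vector{Float64})
    probs = exp.(log_weights .- logsumexp(log_weights))
    r, c = rand(), 0.0
    for (i, p) in enumerate(probs)
        c += p
        r <= c && return i
    end
    return length(probs)   # guard against floating-point round-off
end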

Choice trie constructor DSL

For expediency, I am currently using Gen functions with deterministic random choices (e.g. dirac distribution) as a DSL for constructing choice tries that are used as observations / constraints on traces of model functions.

If a DSL is desired for constructing choice tries, using a syntax similar to generative functions, it should be separate from the generative function DSL.

Note that it is always possible to construct choice tries without a DSL using DynamicChoiceTrie or StaticChoiceTrie, and some users (@fsaad) mentioned this was more intuitive than using the generative function DSL to construct them.

If a choice trie constructor DSL is used, here is a proposal:

  • It should use syntax @constructor function ... end.

  • @addr(fn(...), key(s)), when fn is another choice trie constructor function, has the same semantics as in generative functions (putting everything under the namespace determined by key(s)).

  • @addr(value, key(s)) should directly write the value to the given address.

There can be a non-compiled variant (which constructs a DynamicChoiceTrie) and a compiled variant (which could construct a StaticChoiceTrie). The compiled variant is not important for good MCMC performance with compiled generative functions, because once a compiled function has been run on the constraints, the choice trie is no longer required (the constraints become stored in the trace structure, which is fast for compiled gen functions).

Allow use of 'end' indexing syntax in compiled gen function

Somehow the basic block parser does not handle the use of 'end' in e.g. image[1:4:end,1:4:end] correctly:

@compiled @gen function dynamic_generative_model_high_noise(num_steps::Int, renderer::BodyPoseDepthRenderer)
  init_pose::BodyPose = @addr(body_pose_model(), :init_pose)
  image::Matrix{Float64} = render(renderer, init_pose)
  image_downsampled::Matrix{Float64} = image[1:4:end,1:4:end] # Causes Undef Var error 'end'
  blurred::Matrix{Float64} = imfilter(image_downsampled, Kernel.gaussian(1))
  observable::Matrix{Float64} = @addr(noisy_matrix(blurred, 1.), :init_image)
  hmm_change::MarkovChange = MarkovChange(true, false, false)
  step_data::PersistentVector{Tuple{BodyPose,Matrix{Float64}}} = @addr(hmm(num_steps-1, (init_pose, fill(NaN, 1, 1)), renderer), :steps, hmm_change)
  return step_data
end

Improve messages for syntax error in `@addr` expression

Forgetting a comma in an @addr expression:

@gen function foo()
    @addr(normal(0, 1) :x)
end

results in the following error:

ERROR: LoadError: foo
MethodError: no method matching @addr(::LineNumberNode, ::Module, ::Expr)
Closest candidates are:
  @addr(::LineNumberNode, ::Module, ::Expr, ::Any) at /home/marcoct/dev/Gen/src/dsl_common.jl:4
in expression starting at REPL[3]:2

Reduce combinator

Note that this combinator may take a kernel that is a Julia function, not a generative function. Understanding what the generalization to generative functions is would be interesting, but not necessary for the first version.

The combinator returns a generative function that reduces a collection of values in an Abelian group (X, +). It takes (i) the identity element (e.g. 0.), (ii) the operator (e.g. +), and (iii) the inverse function (e.g. negation).

The resulting generative function has input type Vector{X}, and output type X. The resulting generative function would then be able to incrementally adjust its result in response to a change to its input by subtracting the previous value for the changed element(s) and adding their new value back in. Similarly for adding and removing elements.

This is an interesting example of a useful deterministic function that can be incrementalized.

This is also interestingly related to exchangeability.
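A sketch of the incremental update at the core of this combinator: changing element i adjusts the cached total using the inverse, in constant time (names are illustrative):

mutable struct ReduceState{X}
    total::X
    elements::Vector{X}
end

# (i) identity element, (ii) operator, (iii) inverse function
reduce_init(op, id::X, xs::Vector{X}) where {X} =
    ReduceState(foldl(op, xs; init=id), copy(xs))

function reduce_set!(state::ReduceState{X}, op, inv, i::Int, x::X) where {X}
    # subtract the previous value for element i, then add the new value back in
    state.total = op(op(state.total, inv(state.elements[i])), x)
    state.elements[i] = x
    return state.total
end

For example, reduce_init(+, 0.0, [1.0, 2.0]) caches a total of 3.0, and reduce_set!(state, +, -, 1, 5.0) updates the total to 7.0 without rescanning the vector.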

Language: Remove @splice in favor of a special token indicating to splice?

@gen function foo()
    @addr(normal(0, 1), :y)
end

@gen function bar_splice()
    @addr(bernoulli(0.5), :x)
    @splice(foo())
end

@gen function bar_addr()
    @addr(bernoulli(0.5), :x)
    @addr(foo(), :z)
end;

Is there really a need for the gen macro @splice? It seems like a new concept that can be avoided by reusing @addr. (Moreover, the name is confusing, since it is not clear what is being "spliced"; it should probably be called @addr_splice at least.)

A suggested design, which seems cleaner and reuses @addr, is to use a sentinel value, e.g. the empty tuple () or nothing, as an indicator for requesting a splice:

@gen function bar_splice()
    @addr(bernoulli(0.5), :x)
    @addr(foo(), ())
end

and this way there is no need for both @addr and @splice.

Allow redefinition of compiled gen function

Currently the name of a Gen function is used as the name of an underlying data type. Since data types cannot be redefined in Julia, compiled Gen functions cannot be redefined. This should be possible to solve by having the macro use a fresh symbol for the name of the type.
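A minimal sketch of the fix, deriving the type name from gensym so that each definition creates a fresh type (illustrative of the idea, not the actual macro internals):

function define_fresh_trace_type()
    T = gensym("Trace")   # unique name on every call
    @eval struct $T
        score::Float64
    end
    return @eval $T
end

T1 = define_fresh_trace_type()
T2 = define_fresh_trace_type()   # no "invalid redefinition of constant" error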
