Comments (12)

tshort avatar tshort commented on August 12, 2024 2

I'm good with the third argument (I think I like one of the last two options). I'd prefer it to be optional, so that it defaults to erroring just as datos[datos[:x_13] .== "#0_PHY",:] would.

from dataframesmeta.jl.
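For readers following along, here is a minimal Base-Julia sketch of the behavior under discussion, using the modern `missing` value in place of the old DataArrays NA (the data are made up):

```julia
# A column containing a missing value, as in the original example.
x = ["#0_PHY", missing, "other"]

# Comparison follows three-valued logic: missing == "..." is missing.
mask = x .== "#0_PHY"            # [true, missing, false]

# x[mask] throws, because Julia refuses to guess whether missing means
# true or false. Replacing missing explicitly makes the intent clear:
x[coalesce.(mask, false)]        # ["#0_PHY"]
```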

nalimilan avatar nalimilan commented on August 12, 2024 1

I tend to think DataFramesMeta convenience macros should perform both automatic lifting and dropping of nulls, as in SQL. This is both practical, efficient (as it translates directly to SQL queries for backing databases) and easy to document (as SQL is well known).

We can still provide variants with different semantics if needed though, e.g. via filter or a keyword argument.

nalimilan avatar nalimilan commented on August 12, 2024

datos[datos[:x_13] .== "#0_PHY",:] throws the same error.

So that's not related to DataFramesMeta. I'm afraid this is a regression I introduced when porting DataArrays to Julia 0.5. Please file an issue there so that we don't lose track of it.

nalimilan avatar nalimilan commented on August 12, 2024

So that's actually a design decision in DataArrays. Reopening this issue so that we find a convenient syntax to skip missing values in @where. It could even be made the default only in that case.

garborg avatar garborg commented on August 12, 2024

I think the design decision came from a conviction that filtering is precisely where implicit dropping on null shouldn't happen. Agreed that the mismatch between Julia behavior and the SQL verbs used here is awkward, though, especially given the community's desire for a shared API covering functionality common to in-memory tables, SQL, etc.

I've been wondering about opting in/out at the code block level for each of "lifting" and treating nulls as false. That would be one way to handle both paradigms, other than wrapping filtering expressions with some sort of 'ifnull' call.

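A sketch of what wrapping a filter expression in an 'ifnull'-style call might look like, in Base Julia; the helper name and semantics are hypothetical, not an existing API:

```julia
# Hypothetical helper: replace missing/null comparison results with a default.
ifnull(v, default) = coalesce.(v, default)

x = [1, missing, 3]
mask = ifnull(x .> 1, false)     # missing > 1 becomes false
x[mask]                          # [3]
```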
garborg avatar garborg commented on August 12, 2024

Or maybe the main filtering function for native tables would have a different name (like filter), and where would be in the shared API (implemented on top, or separately) and behave as you suggested (like in SQL)?

EDIT: I suppose the filter behavior could be in the shared API, as well -- not entirely sure about the performance of a SQL implementation, though, or whether we should worry about that (as where would be available).

garborg avatar garborg commented on August 12, 2024

I think I agree about lifting operations within tables (I don't know enough to have an opinion about lifting in the language overall, outside tables).

But I don't think departing from the arguably safer native behavior around filtering nulls should be a default even for a tables-specific API (especially one that's not resigned to targeting only SQL query strings).

Where I've seen silently filtering out nulls be a bad thing: first, with teams that are rushed and either don't notice they've introduced nulls or know the modeling better than the data and underlying processes (i.e., don't know ballpark figures well enough to second-guess how many records were cut, or that any were cut); and second, when the source data changes in unexpected ways after initial development.

Generally, sometimes returning a bad result is worse than indicating a problem and failing to return a result. In those cases, being explicit about dropping nulls can be easier and more efficient than workarounds. Of course things cut both ways -- I just want to emphasize that having to be explicit about where to drop nulls can be more ergonomic than working in a system where dropping nulls is the default.

In my mind, an ideal solution would make both filtering behaviors explicit and concise. (Personally, if anything, I'd want the native, non-SQL behavior to be a bit more ergonomic, but mapping to native SQL, where correct, is something to encourage too.)

The two-verbs approach doesn't feel orthogonal to handling null values elsewhere in queries, and having to type a bit more to get that native-Julia-feeling mapping from null to false sounds fine to me personally (given the developer manpower to translate it to efficient SQL, etc., under the hood). Still, two verbs seems to strike a decent balance, if it's something like filter being a native verb and where matching SQL.

dmbates avatar dmbates commented on August 12, 2024

On a somewhat related note, it would be helpful to allow DataArrays.DataArray{Bool, 1} in the second and subsequent arguments. At least I found that I needed to write an expression like

@where(results, convert(Vector{Bool}, map(x -> ismatch(r"^Dyestuff:", x), :dsmodel)))

to extract the rows of results in which the dsmodel value started with Dyestuff:

It would be convenient if the convert(Vector{Bool}, ...) call could be avoided.
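In later Julia versions this particular case got easier, since `ismatch` was renamed `occursin` and broadcasting a regex match over a plain string vector already yields a `Bool` mask (the data below are made up for illustration):

```julia
# Broadcasting occursin (the successor to ismatch) gives a Bool mask directly,
# so no convert(Vector{Bool}, ...) is needed when there are no missing values.
dsmodel = ["Dyestuff: model 1", "Penicillin: model 1", "Dyestuff: model 2"]
mask = occursin.(r"^Dyestuff:", dsmodel)   # BitVector: [true, false, true]
dsmodel[mask]
```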

davidagold avatar davidagold commented on August 12, 2024

@dmbates what if you do

@where(results, bitbroadcast(x -> ismatch(r"^Dyestuff:", x), :dsmodel))

?

dmbates avatar dmbates commented on August 12, 2024

@davidagold that works, thanks. When CategoricalArrays and NullableArrays are used in DataFrames I think some of these problems will go away. One of the characteristics of the DataArrays package is that if you start with a DataArray or PooledDataArray most operations end up producing another DataArray of some sort.

nalimilan avatar nalimilan commented on August 12, 2024

Even though I advocated dropping nulls silently above, I now think we should take a different approach. We currently propagate nulls everywhere by default, so unless we decide to skip them silently (which would make sense in some specific contexts), it would be more consistent and safer to throw an error by default in the presence of nulls. The common mistakes made in SQL wouldn't happen with the safe behavior.

A possibility is to add an argument to where/filter which would skip missing values. It would have to be very easy to set, since this will be a very common need, but it would make it more explicit than in SQL that null values are discarded and that this isn't a completely neutral operation. Here's a list of ideas:

  • where(df, cond, null=false/true): the value of the argument is used as a replacement when null is found.
  • where(df, cond, skipnull=true/false): simpler, whether to skip nulls or not; could have the advantage of reusing a vocabulary which is familiar (from Nulls.skip and probably similar skipnull arguments to various functions).
  • where(df, cond, true/false): less explicit, but shorter to type, which is important since it's a very common need.
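The second idea could be sketched as a plain function over vectors. Names and signature here are hypothetical, not the DataFramesMeta API; "skipping" a null condition is implemented as mapping it to false, which is equivalent for filtering:

```julia
# Hypothetical sketch of a filtering verb with an explicit skipnull keyword.
# The default is the safe behavior: error if the condition contains missing.
function where_rows(x::AbstractVector, cond::AbstractVector; skipnull::Bool=false)
    if !skipnull && any(ismissing, cond)
        error("condition contains missing values; pass skipnull=true to drop them")
    end
    return x[coalesce.(cond, false)]
end

x = [1, missing, 3]
where_rows(x, x .> 1, skipnull=true)   # [3]; the row with a missing condition is dropped
```

Calling `where_rows(x, x .> 1)` without the keyword would throw, matching the safe default argued for above.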

An argument against this idea is that I couldn't find a language which works that way. For example, both SQL and dplyr drop nulls silently. OTOH the sample size is quite small (AFAICT Pandas doesn't have this problem since it doesn't really have a NA type and uses NaN instead).

Another reasonable approach would be to reject nulls in filter, but drop them silently with where, which would match the SQL operator of the same name. I'm not sure that's a good idea since people will end up using where all the time, defeating the main purpose of this difference (safety).

pdeffebach avatar pdeffebach commented on August 12, 2024

This is an old issue! But still not resolved.

I suspect the solution will be found in Missings.jl, see related discussion here.
