
recordstream's People

Contributors

b4hand, benbernard, bokutin, elindsey, jamessan, paxunix, punya, spigwitmer, tsibley

recordstream's Issues

Custom Format Output and Timestamps

Hi - I have a question about parsing/converting ECMAScript timestamps (milliseconds since 1 January 1970, as opposed to the seconds used in Unix epoch time).

I have log files that are already in the RecordStream newline-separated JSON format, like so:

{"event":"log","timestamp":1412727154796,"tags":["api","trace"],"data":"getImage in images_local.js called"}
{"event":"log","timestamp":1412727155009,"tags":["api","trace"],"data":"Trying to get image via image.js controller"}

I can output a table with cat api-log.047 | recs-totable -f timestamp,data, but I'd like to convert the timestamp value to UTC. Would I use recs-eval for this?

I have a perl command-line one-liner that converts the timestamp to localtime:

echo 1412727154796 | perl -pe 's/(\d+)/localtime($1\/1000)/e'

How might I incorporate this into RecordStream?

Any tips or suggestions greatly appreciated...

Update installation docs

  • PPA and tarball installations are no longer maintained
  • Recommend install from CPAN using either cpan or cpanm
    • Link to install instructions for cpanm
  • Recommend alternate quick/minimal install from https://recs.pl

As noted in #51, the installation docs need some love.

Reduce size of standalone script

The current script weighs in at around 7.5MB. 99% of this heft — I measured! — is from Date::Manip.

Date::Manip is used by normalizetime, which seems pretty useful for core. I also have a few local ops which use Date::Manip which I've previously thought might make sense for core: parsedatetime and datetimetoepoch.

This needs some consideration.

Optionally preserve "natural" key ordering from sources with headers

When the source of a record stream supports some natural key ordering, it'd be nice to optionally support retaining that order. Both recs-fromcsv and recs-fromsplit support the --header option which could preserve the field order as found in the first line of the input. With the natural order retained, various stream output operations can preferentially use it when no specific fields are specified, i.e. a bare ... | recs-tocsv or ... | recs-totable could use it but ... | recs-tocsv -k foo,bar wouldn't. This feature would make general filtering of data sets easier by removing the need to track external to the pipeline what fields you got at input to ensure they're output again.

Ideally it'd be general enough to be applied to any input operation and also be possible to add to records ad-hoc via recs-xform or similar operations for use later in the pipeline.

Since there's no stream-level metadata, we're limited to stashing this ordering information on each record, perhaps under a key like __field_order or __fields. Output operations can examine the first record for the stashed order. It's not the prettiest solution technically, so I'd love if someone had a better idea.

Does this seem reasonable?

Multiplex output to files?

Occasionally I reach for recs multiplex when I want to split a record stream into multiple files. For example, recs multiplex -k foo -- recs-tocsv works great, except all of the CSV output goes to stdout. When I'm using an output format without a distinct marker to split on, I usually work around this limitation using some combination of recs piped to parallel running the recs to... command. recs chain and recs generate seem like they would almost allow me to multiplex to separate files, but either generate needs to support outputting non-records or chain needs to support some sort of interpolation like generate (ick).

In terms of supporting this feature, I see two options:

  1. Build support into multiplex itself: something like --output-filename-key=<keyspec> or --output-filename=<snippet> naming the file each group's output is written to. The filename key or evaluated snippet would be added to the set of keys records are grouped upon.
  2. Add a new operation which enables use of the existing multiplex to do this, for example: recs multiplex -k foo -- recs-tofiles -k filename -- recs-tocsv

I think option one is cleaner than option two, both in terms of implementation and command line syntax. Option two however is implementable outside of core recs.

Is this feature worth having in core recs? General thoughts?

fromdb/todb: Order of options/arguments shouldn't be as finicky

These operations are very picky about the order of their arguments, with error messages that seem bogus or are non-sequiturs when the order is wrong. I keep running into this, cursing, and moving on with the task at hand, but I should sit down and make it more liberal in what it accepts!

Tests fail without DBD::SQLite

Should this be a PREREQ_PM, or should DBHandle.t/fromdb.t be skipped when DBD::SQLite isn't installed?

[eli@flatline:~/Code/RecordStream]$ perl -I./lib ./test.pl 
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator.t .................... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Average.t ............ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Concat.t ............. ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Correlation.t ........ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Count.t .............. ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/CountBy.t ............ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Covariance.t ......... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/DistinctCount.t ...... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/First.t .............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Last.t ............... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/LinearRegression.t ... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Max.t ................ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Min.t ................ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Mode.t ............... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Percentile.t ......... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Records.t ............ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/StandardDeviation.t .. ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Sum.t ................ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/UniqConcat.t ......... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Aggregator/Variance.t ........... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/DBHandle.t ...................... 1/? install_driver(SQLite) failed: Can't locate DBD/SQLite.pm in @INC (@INC contains: /Users/eli/Code/RecordStream/tests /Users/eli/Code/RecordStream/lib ./lib /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/site_perl/5.16.3/darwin-2level /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/site_perl/5.16.3 /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/5.16.3/darwin-2level /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/5.16.3 .) at (eval 8) line 3.
Perhaps the DBD::SQLite perl module hasn't been fully installed,
or perhaps the capitalisation of 'SQLite' isn't right.
Available drivers: DBM, ExampleP, File, Gofer, Proxy, Sponge.
 at /Users/eli/Code/RecordStream/lib/App/RecordStream/DBHandle.pm line 124.
# Looks like your test exited with 2 just after 1.
/Users/eli/Code/RecordStream/tests/RecordStream/DBHandle.t ...................... Dubious, test returned 2 (wstat 512, 0x200)
All 1 subtests passed 
/Users/eli/Code/RecordStream/tests/RecordStream/DomainLanguage.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Executor.t ...................... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/FilenameKey.t ................... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/InputStream.t ................... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/KeyGroups.t ..................... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/KeySpec.t ....................... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Operation.t ..................... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/annotate.t ............ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/chain.t ............... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/collate-clumper.t ..... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/collate.t ............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/decollate.t ........... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/delta.t ............... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/eval.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/flatten.t ............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromatomfeed.t ........ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromcsv.t ............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromdb.t .............. 1/? install_driver(SQLite) failed: Can't locate DBD/SQLite.pm in @INC (@INC contains: /Users/eli/Code/RecordStream/tests /Users/eli/Code/RecordStream/lib ./lib /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/site_perl/5.16.3/darwin-2level /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/site_perl/5.16.3 /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/5.16.3/darwin-2level /Users/eli/Code/perlbrew/perls/perl-5.16.3/lib/5.16.3 .) at (eval 18) line 3.
Perhaps the DBD::SQLite perl module hasn't been fully installed,
or perhaps the capitalisation of 'SQLite' isn't right.
Available drivers: DBM, ExampleP, File, Gofer, Proxy, Sponge.
 at /Users/eli/Code/RecordStream/lib/App/RecordStream/DBHandle.pm line 124.
# Looks like your test exited with 2 just after 1.
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromdb.t .............. Dubious, test returned 2 (wstat 512, 0x200)
All 1 subtests passed 
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromkv.t .............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/frommultire.t ......... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromps.t .............. skipped: Missing Proc::ProcessTable Modules!
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromre.t .............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromsplit.t ........... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromtcpdump.t ......... skipped: Missing Modules!
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromxml.t ............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/generate.t ............ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/grep.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/join.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t ........... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/normalizetime.t ....... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/sort.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/stream2table.t ........ ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/tocsv.t ............... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/togdgraph.t ........... skipped: Missing GD::Graph!
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/tognuplot.t ........... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/tohtml.t .............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/topn.t ................ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/toprettyprint.t ....... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/toptable.t ............ ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/totable.t ............. ok    
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/xform.t ............... ok    
/Users/eli/Code/RecordStream/tests/RecordStream/OptionalRequire.t ............... ok   
/Users/eli/Code/RecordStream/tests/RecordStream/OutputStream.t .................. ok   
/Users/eli/Code/RecordStream/tests/RecordStream/Record.t ........................ ok    

Test Summary Report
-------------------
/Users/eli/Code/RecordStream/tests/RecordStream/DBHandle.t                    (Wstat: 512 Tests: 1 Failed: 0)
  Non-zero exit status: 2
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/fromdb.t            (Wstat: 512 Tests: 1 Failed: 0)
  Non-zero exit status: 2
Files=65, Tests=1221, 10 wallclock secs ( 0.41 usr  0.13 sys +  8.43 cusr  0.86 csys =  9.83 CPU)
Result: FAIL
Failed 2/65 test programs. 0/1221 subtests failed.

Feature request: recs decollate

Right now when I use recs decollate it dumps the original object into my new records

% echo '{"field":[{"test":5},{"test":6}]}' | recs decollate -d unarray,field,temp
{"temp":{"test":5},"field":[{"test":5},{"test":6}]}
{"temp":{"test":6},"field":[{"test":5},{"test":6}]}

I can clean this up on my own:

% echo '{"field":[{"test":5},{"test":6}]}' | recs decollate -d unarray,field,temp | recs xform '$r={{temp}}'
{"test":5}
{"test":6}

But it'd be nice not to have to (since there is presumably wasted work).

Unicode and newline characters break totable

When characters like '…' or newlines end up in an output column, the columns no longer line up. Should probably escape or transform the output, and also leave a --no-escape option or similar...

Probably also affects toptable. I think those are the only two that require text alignment...

copy from irc discussion:

10:20 <ben_bernard> ugh, just found an annoying bug with totable
10:21 <ben_bernard> if you have a newline in one of the fields, everything goes to hell
10:23 <amling> if you have a lot of things I think you can arrive at hell
10:23 <amling> any interesting unicode characters e.g.
10:23 <ben_bernard> ugh
10:23 <ben_bernard> also
10:23 <ben_bernard> yeah
10:23 <ben_bernard> elipsis
10:23 <ben_bernard> character
10:24 <ben_bernard> https://www.dropbox.com/s/8ea5w1of8g1sm7z/Screenshot%202015-11-24%2010.24.12.png?dl=0
10:24 <ben_bernard> I feel like this is a conquerable problem
10:25 <amling> oh the twittening
10:25 <amling> you can probably super jackass it with pre-xform '{{text}} = some_perl_module_text_escaper({{text}})'
10:25 <amling> ultimately we might like very clear text output sinks to do that internally
10:25 <amling> just like table already automatically encodes non-strings into JSON (IIRC)
10:26 <ben_bernard> yeah, that is what I was thinking
10:26 <amling> uh
10:26 <amling> there's also color
10:26 <amling> and we may want to leave a super insane --no-encode or something around
10:27 <amling> actually color is also fucked by --no-encode
10:27 <amling> hmm
10:27 <amling> okay, maybe --no-encode is useless
10:27 <amling> anyway, should maybe think about color at the same time
10:27 <ben_bernard> yeah
10:28 <amling> some combination of encoding for text output (I think '\n' is unavoidable) and deciding effective width (color is zero, weird unicode stuff is ... something else)

Change in default prereqs for fromxml, fromatomfeed, fromdb, todb

With the move to a cpanfile, I made the prereqs for fromxml, fromatomfeed, fromdb, and todb into optional ones instead of default ones. All four of those operations require XS modules, so I removed them from the defaults in the name of pure-Perl core deps. This is a difference from 4.0.7 that I forgot about until after uploading a trial (unindexed) release of 4.0.8. Suddenly a slew of CPAN testers failure reports came back because of test files that assumed these operations always worked with the default deps. Whoops! But hooray for test suites and trial releases. :)

Two options:

  1. Adjust tests and operations accordingly to account for change in default deps. This will mean the test files are simply skipped the way others currently are when their optional deps aren't available.
  2. Make the cpanfile/Makefile.PL smarter and dynamically add the deps for those operations as defaults, except if a pure-Perl install is requested. This would restore the old behavior while allowing the fatpacked recs to still work as it does now, at the cost of some additional configure-time complexity.

I prefer option 1 since it's simpler and fromxml, fromatomfeed, fromdb, and todb don't seem crucial, although that also reflects my usage bias. When the commands are used, folks will be prompted to install the necessary deps, and it doesn't seem like such a burden to install them at that point.

Thoughts?

Commit 77e1a17 breaks multiplex.t

Looks like the repeat syntax used for aggregator snippets isn't compatible with bundling:

[eli@flatline:~/Code/RecordStream]$ perlbrew exec perl -I./lib ./test.pl 0 /Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t
perl-5.8.9
==========
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. 1/? Cannot repeat while bundling: "mr-agg=s{4}"
Cannot repeat while bundling: "ii-agg=s{4}"
Can't call method "main_usage" on an undefined value at /Users/eli/Code/RecordStream/lib/App/RecordStream/Operation/collate.pm line 164,  line 1.
# Looks like your test exited with 2 just after 24.
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. Dubious, test returned 2 (wstat 512, 0x200)
All 24 subtests passed 

Test Summary Report
-------------------
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t (Wstat: 512 Tests: 24 Failed: 0)
  Non-zero exit status: 2
Files=1, Tests=24,  0 wallclock secs ( 0.03 usr  0.01 sys +  0.10 cusr  0.01 csys =  0.15 CPU)
Result: FAIL
Failed 1/1 test programs. 0/24 subtests failed.
Command terminated with non-zero status.
Command [perl -I./lib ./test.pl 0 /Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t] terminated with exit code 2 ($? = 512) under the following perl environment:
Current perl:
  Name: perl-5.8.9
  Path: /Users/eli/Code/perlbrew/perls/perl-5.8.9/bin/perl
  Config: -de -Dprefix=/Users/eli/Code/perlbrew/perls/perl-5.8.9 -Aeval:scriptdir=/Users/eli/Code/perlbrew/perls/perl-5.8.9/bin
  Compiled at: Apr 22 2013 23:20:14

perlbrew:
  version: 0.67
  ENV:
    PERLBREW_ROOT: /Users/eli/Code/perlbrew
    PERLBREW_HOME: /Users/eli/.perlbrew
    PERLBREW_PATH: /Users/eli/Code/perlbrew/bin:/Users/eli/Code/perlbrew/perls/perl-5.8.9/bin
    PERLBREW_MANPATH: /Users/eli/Code/perlbrew/perls/perl-5.8.9/man


perl-5.16.3
==========
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. 1/? Cannot repeat while bundling: "mr-agg=s{4}"
Cannot repeat while bundling: "ii-agg=s{4}"
Can't call method "main_usage" on an undefined value at /Users/eli/Code/RecordStream/lib/App/RecordStream/Operation/collate.pm line 164,  line 1.
# Looks like your test exited with 2 just after 24.
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. Dubious, test returned 2 (wstat 512, 0x200)
All 24 subtests passed 

Test Summary Report
-------------------
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t (Wstat: 512 Tests: 24 Failed: 0)
  Non-zero exit status: 2
Files=1, Tests=24,  0 wallclock secs ( 0.03 usr  0.01 sys +  0.13 cusr  0.02 csys =  0.19 CPU)
Result: FAIL
Failed 1/1 test programs. 0/24 subtests failed.
Command terminated with non-zero status.
Command [perl -I./lib ./test.pl 0 /Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t] terminated with exit code 2 ($? = 512) under the following perl environment:
Current perl:
  Name: perl-5.16.3
  Path: /Users/eli/Code/perlbrew/perls/perl-5.16.3/bin/perl
  Config: -de -Dprefix=/Users/eli/Code/perlbrew/perls/perl-5.16.3 -Aeval:scriptdir=/Users/eli/Code/perlbrew/perls/perl-5.16.3/bin
  Compiled at: Dec 16 2013 18:19:46

perlbrew:
  version: 0.67
  ENV:
    PERLBREW_ROOT: /Users/eli/Code/perlbrew
    PERLBREW_HOME: /Users/eli/.perlbrew
    PERLBREW_PATH: /Users/eli/Code/perlbrew/bin:/Users/eli/Code/perlbrew/perls/perl-5.16.3/bin
    PERLBREW_MANPATH: /Users/eli/Code/perlbrew/perls/perl-5.16.3/man


perl-5.18.1
==========
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. 1/? Cannot repeat while bundling: "ii-agg=s{4}"
Cannot repeat while bundling: "mr-agg=s{4}"
Can't call method "main_usage" on an undefined value at /Users/eli/Code/RecordStream/lib/App/RecordStream/Operation/collate.pm line 164,  line 1.
# Looks like your test exited with 2 just after 24.
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t .. Dubious, test returned 2 (wstat 512, 0x200)
All 24 subtests passed 

Test Summary Report
-------------------
/Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t (Wstat: 512 Tests: 24 Failed: 0)
  Non-zero exit status: 2
Files=1, Tests=24,  1 wallclock secs ( 0.03 usr  0.01 sys +  0.15 cusr  0.02 csys =  0.21 CPU)
Result: FAIL
Failed 1/1 test programs. 0/24 subtests failed.
Command terminated with non-zero status.
Command [perl -I./lib ./test.pl 0 /Users/eli/Code/RecordStream/tests/RecordStream/Operation/multiplex.t] terminated with exit code 2 ($? = 512) under the following perl environment:
Current perl:
  Name: perl-5.18.1
  Path: /Users/eli/Code/perlbrew/perls/perl-5.18.1/bin/perl
  Config: -de -Dprefix=/Users/eli/Code/perlbrew/perls/perl-5.18.1 -Aeval:scriptdir=/Users/eli/Code/perlbrew/perls/perl-5.18.1/bin
  Compiled at: Oct  6 2013 18:29:51

perlbrew:
  version: 0.67
  ENV:
    PERLBREW_ROOT: /Users/eli/Code/perlbrew
    PERLBREW_HOME: /Users/eli/.perlbrew
    PERLBREW_PATH: /Users/eli/Code/perlbrew/bin:/Users/eli/Code/perlbrew/perls/perl-5.18.1/bin
    PERLBREW_MANPATH: /Users/eli/Code/perlbrew/perls/perl-5.18.1/man

Fatpacked recs segfaults on El Capitan

I have perl 5.22.1 on my Mac laptop; the fatpacked recs used to work great.

Recently I upgraded to El Capitan, and now I'm getting segfaults when I run any recs command.

@tsibley don't know if you use a mac or not?

PPA install doesn't work on Ubuntu 14.04 (trusty)

I followed the PPA installation instructions using Ubuntu 14.04 (trusty). When I did sudo apt-get update I got this error:

W: Failed to fetch http://ppa.launchpad.net/ppa-j/recordstream/ubuntu/dists/trusty/main/binary-amd64/Packages  404  Not Found

W: Failed to fetch http://ppa.launchpad.net/ppa-j/recordstream/ubuntu/dists/trusty/main/binary-i386/Packages  404  Not Found

E: Some index files failed to download. They have been ignored, or old ones used instead.

It looks like http://ppa.launchpad.net/ppa-j/recordstream/ubuntu/dists/trusty does not exist.

On the dists page only these directories are listed:

  • lucid
  • maverick
  • natty
  • oneiric

Is the PPA still being maintained?

Will support for trusty be added?

xform: Trailing comment breaks generated coderef

$ recs xform '42' <<<'{"foo":13}'
{"foo":13}
$ recs xform '42 #' <<<'{"foo":13}'
42

I haven't looked yet, but I suspect we can just move the trailing semi-colon to a new line in the stringy eval (inside Executor).

Feature: "recs-fromxls"

Pretty self-explanatory: right now I open files in Excel, then save as CSV, but for big xls files it's... painful.

fromtcpdump.t fails because Net::DNS::Packet doesn't provide TO_JSON

JSON::XS is configured to allow blessed objects and convert them if a TO_JSON method is available. If TO_JSON is not available, however, the object is replaced by a JSON null.

I'm not sure if I should just adjust the test or if I should monkey patch TO_JSON methods into Net::DNS::Packet and related packages so that they JSON-ify as expected by RecordStream.

For example, RecordStream could do this:

sub Net::DNS::Packet::TO_JSON {
    { %{ $_[0] } }
}

although ideally it would be localized to avoid side-effects in other code:

local *Net::DNS::Packet::TO_JSON = sub {
    ....
};

Full test output below:

tom@whaam RecordStream (master=) $ prove -wlv tests/RecordStream//Operation/fromtcpdump.t 
tests/RecordStream//Operation/fromtcpdump.t .. 
ok 1 - use App::RecordStream::Operation::fromtcpdump;
ok 2 - Operation initialization
ok 3 - Record is a App::RecordStream::Record
ok 4 - Record is a App::RecordStream::Record
not ok 5 - Records match: unnamed

#   Failed test 'Records match: unnamed'
#   at /home/tom/projects/RecordStream/lib/App/RecordStream/Test/OperationHelper.pm line 104.
#     Structures begin differing at:
#          $got->[0]{dns}{count} = ARRAY(0x19b6618)
#     $expected->[0]{dns}{count} = Does not exist
ok 6 - Has called finish: unnamed
1..6
Expected and output differed!
Expected:
{"dns":{"answer":[],"buffer":"","question":[{"qclass":"IN","qname":"blog.benjaminbernard.com","qtype":"A"}],"answersize":42,"additional":[],"authority":[],"header":{"cd":0,"nscount":0,"qdcount":1,"ancount":0,"rcode":"NOERROR","tc":0,"opcode":"QUERY","ad":0,"ra":0,"qr":0,"id":3930,"arcount":0,"aa":0,"rd":1},"offset":42},"ip":{"len":70,"hlen":5,"dest_ip":"10.0.0.1","proto":17,"options":"","foffset":0,"flags":{},"ttl":64,"ver":4,"src_ip":"10.0.2.15","cksum":10544,"tos":0,"id":15208},"file":"tests/files/test-capture1.pcap","caplen":84,"length":84,"timestamp":"1294004869.88858","ethernet":{"dest_mac":"525400123502","src_mac":"080027e0fd58"},"udp":{"len":50,"src_port":46578,"cksum":5715,"dest_port":53},"type":"udp"}
{"dns":{"answer":[{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"63.251.171.81","rdata":""},{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"69.25.27.170","rdata":""},{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"63.251.171.80","rdata":""},{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"69.25.27.173","rdata":""},{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"66.150.161.141","rdata":""},{"rdlength":4,"ttl":1800,"name":"blog.benjaminbernard.com","type":"A","class":"IN","address":"66.150.161.140","rdata":""}],"buffer":"","question":[{"qclass":"IN","qname":"blog.benjaminbernard.com","qtype":"A"}],"answersize":138,"additional":[],"authority":[],"header":{"cd":0,"nscount":0,"qdcount":1,"ancount":6,"rcode":"NOERROR","tc":0,"opcode":"QUERY","ad":0,"ra":1,"qr":1,"id":3930,"arcount":0,"aa":0,"rd":1},"offset":138},"ip":{"len":166,"hlen":5,"dest_ip":"10.0.2.15","proto":17,"options":"","foffset":0,"flags":{},"ttl":64,"ver":4,"src_ip":"10.0.0.1","cksum":23131,"tos":0,"id":2525},"file":"tests/files/test-capture1.pcap","caplen":180,"length":180,"timestamp":"1294004869.160748","ethernet":{"dest_mac":"080027e0fd58","src_mac":"525400123500"},"udp":{"len":146,"src_port":53,"cksum":47199,"dest_port":46578},"type":"udp"}
Output from module:
{"dns":null,"ip":{"hlen":5,"len":70,"proto":17,"dest_ip":"10.0.0.1","flags":{},"foffset":0,"options":"","ttl":64,"ver":4,"cksum":10544,"src_ip":"10.0.2.15","tos":0,"id":15208},"file":"tests/files/test-capture1.pcap","caplen":84,"length":84,"timestamp":"1294004869.88858","ethernet":{"src_mac":"080027e0fd58","dest_mac":"525400123502"},"udp":{"len":50,"src_port":46578,"dest_port":53,"cksum":5715},"type":"udp"}
{"dns":null,"ip":{"hlen":5,"len":166,"proto":17,"dest_ip":"10.0.2.15","flags":{},"foffset":0,"options":"","ttl":64,"ver":4,"cksum":23131,"src_ip":"10.0.0.1","tos":0,"id":2525},"file":"tests/files/test-capture1.pcap","caplen":180,"length":180,"timestamp":"1294004869.160748","ethernet":{"src_mac":"525400123500","dest_mac":"080027e0fd58"},"udp":{"len":146,"src_port":53,"dest_port":46578,"cksum":47199},"type":"udp"}
# Looks like you failed 1 test of 6.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/6 subtests 

Test Summary Report
-------------------
tests/RecordStream//Operation/fromtcpdump.t (Wstat: 256 Tests: 6 Failed: 1)
  Failed test:  5
  Non-zero exit status: 1
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.07 cusr  0.00 csys =  0.09 CPU)
Result: FAIL
