
sagan's Introduction

beave

Beave's Readthedocs.org Blog.

sagan's People

Contributors

bdoin, beave, candlerb, dr-dd, drforbin, h8h, litew, msnriggs, nunofernandes, peterurbanec, rtkrruvinskiy, salekseev, sathieu, smr1983, yoichsec


sagan's Issues

min_email_priority seems to be ignored

I am trying to find how min_email_priority is implemented in Sagan, but I cannot find any comparison or reference in the source code that checks an event's priority against it.
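
For what it's worth, the kind of gate one would expect before the e-mail output fires looks roughly like this (purely a hypothetical sketch; rule_priority, min_email_priority and send_email_alert are placeholder names, not Sagan's actual internals, and the comparison direction depends on how the option is meant to be interpreted):

    /* Hypothetical guard in the e-mail output path: skip events whose   */
    /* rule priority falls outside the configured threshold.             */
    if ( min_email_priority != 0 && rule_priority > min_email_priority ) {
            return;                    /* below the e-mail threshold, drop it */
    }

    send_email_alert(event);           /* placeholder dispatch function */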

sagan fails with "Sagan was not compiled with MySQL support. Aborting!" when compiled with enable-postgresql only

I've compiled sagan 0.1.4 with postgresql support

./configure --enable-postgresql --disable-mysql

If I try to start Sagan with "sagan -d", it fails with "Sagan was not compiled with MySQL support. Aborting!". I think the problem is in output-plugins/sagan-snort.c, where the following preprocessor sequence is possibly wrong.

    #ifdef HAVE_LIBMYSQLCLIENT_R
    ...
    #else
    removelockfile();
    sagan_log(1, "Sagan was not compiled with MySQL support.  Aborting!");
    #endif
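
One possible restructuring so a PostgreSQL-only build does not fall through to the MySQL error (a sketch only; HAVE_LIBPQ is an assumed name for the PostgreSQL guard macro, which may differ from what configure actually defines):

    #ifdef HAVE_LIBMYSQLCLIENT_R
    /* ... MySQL output path ... */
    #elif defined(HAVE_LIBPQ)            /* assumed PostgreSQL guard macro */
    /* ... PostgreSQL output path ... */
    #else
    removelockfile();
    sagan_log(1, "Sagan was not compiled with MySQL or PostgreSQL support.  Aborting!");
    #endif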

Thanks
mtg

very high (86%) cpu usage

I just installed the latest Sagan (1.0.0-RC2), compiled from source.
Everything is working fine except for very high CPU usage.
The event count is not high.

Thanks

drforbin

Configure is not finding the mysql client lib even though it is installed.

I am running fedora core 10.

configure: error: The MySQL library libmysqlclient_r is missing!
If you're not interested in MySQL support use the --disable-mysql flag.

I am running a snort sensor on this machine, and can connect to my mysql databases just fine.

The files exist. If I do a locate, I get this:
locate mysqlclient
/usr/lib/mysql/libmysqlclient.a
/usr/lib/mysql/libmysqlclient.so
/usr/lib/mysql/libmysqlclient.so.15
/usr/lib/mysql/libmysqlclient.so.15.0.0
/usr/lib/mysql/libmysqlclient_r.a
/usr/lib/mysql/libmysqlclient_r.so
/usr/lib/mysql/libmysqlclient_r.so.15
/usr/lib/mysql/libmysqlclient_r.so.15.0.0

Yum tells me that all of the following are installed and current.
yum list mysql
Loaded plugins: refresh-packagekit
Installed Packages
mysql.i386 5.0.88-1.fc10 @updates

yum list mysql-devel

Loaded plugins: refresh-packagekit
Installed Packages
mysql-devel.i386 5.0.88-1.fc10 @updates

yum list mysql-libs

Loaded plugins: refresh-packagekit
Installed Packages
mysql-libs.i386 5.0.88-1.fc10

I cannot find a configure option that would allow me to alter the search path for the MySQL libraries.

external: not being called

Sagan 0.2.1 (and likely git) does not execute the program given via external:. This is not related to the spacing issue.

sagan logs the wrong rule

There is some kind of memory corruption which leads to a wrong notification. I first noticed this because Sagan failed to insert an event into the database; the problem is a broken date.

 [*] [output-plugins/sagan-snort.c, line 215] PostgreSQL Error: ERROR:  invalid input syntax for type timestamp with time zone: "|56|2010-08-07|00:02:30|sshd|-07|00:02:30|sshd|pam_unix_se"
 DB Query failed: INSERT INTO event(sid, cid, signature, timestamp) VALUES ('2', '128', '2', '|56|2010-08-07|00:02:30|sshd| -07|00:02:30|sshd|pam_unix_se')

The email notification is also inaccurate (it is not the same message as above).

 [**] [5000407] [OPENSSH] Session closed [**]
 [Classification: not-suspicious] [Priority: 2]
 -08-07 1:10 127.0.0.1:514 -> 127.0.0.1:22 auth info

 Syslog message: Accepted publickey for user from 1.1.1.1 port 50885 ssh2

Rule 5000407 should match on "Session closed". The "Accepted publickey" message should be matched by rule 5000406. The date is also corrupt.

I can reproduce it with
for i in $(seq 1 5); do
cat >> /var/run/sagan.fifo <<EOF
127.0.0.1|auth|info|info|26|2010-08-07|11:01:06|sshd|Accepted publickey for user from 1.1.1.1 port 57273 ssh2
127.0.0.1|authpriv|info|info|56|2010-08-07|11:01:06|sshd|pam_unix_session(sshd:session): session opened for user user by (uid=0)
127.0.0.1|authpriv|info|info|56|2010-08-07|11:01:07|sshd|pam_unix_session(sshd:session): session closed for user matthias
EOF
done

new feature - add part of a log line into related alert - syslog output

Hello,
I think the following feature could be useful:

When a log triggers a Sagan alert:

  • Sagan is able to write the triggered alert and the associated log into an output file
  • However, Sagan is not able to send the associated log in a syslog message (it only sends the alert name and additional alert info)

It would be really great if Sagan were able to send all the logs which triggered the alert (in raw format), or a part of those logs, which could be extracted with liblognorm for example (rsyslog is also able to use the liblognorm library).

Let me know if you require any further information on this potential feature.

Regards,

Host modifier for matching hostname or ip

It would be really useful to be able to match the originating host of a syslog message (by IP or hostname) so that certain SIDs/rules only apply to it. I'm thinking specifically about being able to profile my hosts with custom Sagan rules based on their configurations. Maybe a host modifier like
... program: ssh; host: %BRO_SERVERS%; where the rule only matches if a host in the list has generated the syslog message currently being analyzed. Then we could create lists of hosts to match using variables:

BRO_SERVERS [ nids1, nids2 ... ]
WEB_SERVERS [ web1, web2 ... ]

OSSEC is able to match the hostname using <hostname></hostname> and it would be great if Sagan could support this as well. Thanks

Sagan doesn't error when a normalization file doesn't exist.

I saw this when the SonicWall "normalize: sonicwall" option was enabled in a rule. In the main configuration file, the line:

normalize: sonicwall, $RULE_PATH/sonicwall-normalize.rulebase

was NOT present. However, normalization was enabled in the rule, which is very confusing when trying to debug an issue. Sagan should "error" if it finds a rule with a normalize flag that has no corresponding entry in the configuration file.

parsing of pcre regexp stops at closing bracket

If you use parentheses in a pcre regexp, Sagan fails in load_rules with
"[E] [sagan-rules.c, line 237] Missing last '/' in pcre: my.rules at line 1"

Example rule:

alert tcp $EXTERNAL_NET any -> $HOME_NET 22
(msg:"[OPENSSH] Authentication success"; pcre: "/(?=^(?:(?!127.0.0.1).)$).?(accepted|authenticated)/i";
classtype: successful-user; program: sshd; find_port;
reference: url,wiki.softwink.com/bin/view/Main/5000075; sid: 5000075; rev:1;)

It looks as if load_rules parses the rule only up to the first closing bracket.

Maybe it is not intended to use complex regular expressions in rules.
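
One way to make the option parser tolerant of parentheses inside the expression is to pull the pcre value out of its surrounding quotes first, instead of splitting the rule body at the first closing bracket. A rough sketch (under the assumption that the pcre option value is always double-quoted, as in the rule above; this is not the actual load_rules() code):

    #include <string.h>

    /* Copy the quoted value that follows "pcre:" into 'out'.             */
    /* Returns 0 on success, -1 if the option or its quotes are missing.  */
    static int extract_pcre(const char *rule, char *out, size_t outlen)
    {
        const char *p = strstr(rule, "pcre:");
        if (p == NULL)
                return -1;

        const char *start = strchr(p, '"');            /* opening quote */
        if (start == NULL)
                return -1;

        const char *end = strchr(start + 1, '"');      /* closing quote */
        if (end == NULL)
                return -1;                              /* unbalanced    */

        size_t len = (size_t)(end - start - 1);
        if (len >= outlen)
                len = outlen - 1;
        memcpy(out, start + 1, len);
        out[len] = '\0';
        return 0;
    }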

Not really an issue, but a contribution.

I have a contribution, if you're interested.

I wrote a perl script to parse the OSSEC XML rules and generate corresponding Sagan rules. The mapping to classtype is a bit of a kludge, but it is otherwise an improvement over the stock rules.

I'll happily contribute either the rules, the perl script, or both.

Unable to change MySQL port

When using the following config:

output database: log, mysql, user=foo password=bar dbname=sagan host=localhost port=someport

The port parameter is not handled.
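
For reference, once the parser picks up a port= token, the MySQL C API accepts it directly as the numeric port argument of mysql_real_connect() (a sketch only; dbhost, dbuser, dbpassword, dbname and dbport are placeholder variables, not the actual names in sagan-snort.c):

    unsigned int dbport = 3306;   /* placeholder: value parsed from "port=..."; 0 = library default */

    if (mysql_real_connect(mysql, dbhost, dbuser, dbpassword,
                           dbname, dbport, NULL, 0) == NULL) {
            sagan_log(1, "MySQL connect failed: %s", mysql_error(mysql));
    }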

Sagan crashes with no output

Hello

I am trying to run Sagan on an rsyslog server receiving remote syslog events. After some random amount of time, Sagan crashes without giving any output. I have already tried running it under gdb to see what causes these crashes, but I could not figure out what the problem might be.

I am using unified output with barnyard2 writing the events into prelude. I am using the latest sagan-rules from git, and the default config file except these lines:

sagan_host 192.168.71.71
sagan_port 1514
output unified2: filename sagan.u2, limit 128, nostamp

Here is the relevant rsyslog.conf:

$template sagan,"%fromhost-ip%|%syslogfacility-text%|%syslogpriority-text%|%syslogseverity-text%|%syslogtag%|%timegenerated:1:10:date-rfc3339%|%timegenerated:12:19:date-rfc3339%|%programname%|%msg%\n"

*.* |/var/run/sagan.fifo;sagan

You can find a backtrace of the crash at http://pastebin.com/w1SxLddL

Incorrect CID @ First startup

CID in sensor.last_cid is incorrect on a new/unused install. It'll correct itself, but last_cid is initially set to an insane, arbitrary value.

CPU 100% when writer closes FIFO, because of misplaced sleep(1) in sagan.c

When a process writing to Sagan's FIFO closes the handle, Sagan loops relentlessly retrying fgets() on the FIFO. But fgets() keeps returning NULL immediately, on and on, until a writer process starts and opens the FIFO again.
This is critical because not everyone uses rsyslog as the FIFO writer; some use a custom FIFO writer instead.

There is a sleep() function to prevent this situation, but it is misplaced, and sagan only sleeps before the first fgets() retry.

The fix is to move sleep(1) to the proper place, so that sagan sleeps between retries:

    while(fd != NULL) { 
        ...
        while(fgets(syslogstring, sizeof(syslogstring), fd) != NULL) {
             ... doesn't enter here while retrying ...
        }
        if ( fifoerr == 0 ) {
            if ( config->sagan_fifo_flag != 0 ) { 
                Sagan_Log(0, "EOF reached. Waiting for threads to catch up");
                sleep(5);
                fclose(fd); 
                Sagan_Log(0, "Exiting.");       /* DEBUG: Rejoin threads */
                exit(0);
            } else { 
                Sagan_Log(0, "FIFO writer closed.  Waiting for FIFO write to restart...."); 
                fifoerr=1;          /* Set flag so our while(fgets) knows */ 
                //SLEEP DOESN'T MAKE SENSE HERE: <--------------------------
                //sleep(1);             /* So we don't eat 100% CPU */
            }
        }

        //THIS IS THE CORRECT PLACE FOR sleep(1): <----------------------------
        sleep(1);           /* So we don't eat 100% CPU */

    } /* while(fd != NULL)  */

Let me know if you need a pull request.

Keep up the good work, Thanks!!!!!!!

PQexec in sagan-snort.c fails silently

I'm not sure if it is necessary to check PQresultStatus, but it may help sometimes.

 --- src/output-plugins/sagan-snort.c.orig       2010-08-06 07:52:44.000000000 +0200
 +++ src/output-plugins/sagan-snort.c.m2 2010-08-06 09:54:27.000000000 +0200
 @@ -208,6 +208,12 @@
     sagan_log(1, "[%s, line %d] PostgreSQL Error: %s", __FILE__,  __LINE__, PQerrorMessage( psql ));
     }

 +if (PQresultStatus(result) != PGRES_COMMAND_OK && 
 +    PQresultStatus(result) != PGRES_TUPLES_OK) {
 +   sagan_log(0, "[%s, line %d] PostgreSQL Error: %s", __FILE__,  __LINE__, PQerrorMessage( psql ));
 +   PQclear(result);
 +   sagan_log(1, "DB Query failed: %s", sql);
 +   }

  if ( PQntuples(result) != 0 ) { 
      re = PQgetvalue(result,0,0);

support for embedded variables

It would be nice if support could be added for embedded variables, and probably the rest of the variable features in Snort, e.g. negation, modifiers, etc. I haven't seen most of this documented, so I'm finding out by trial and error. It's not clear to me how much of the Snort config is supported.

var CAT cat
var DOG dog
var ANIMALS [$CAT,$DOG]

alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg:"[TEST] Var Test Embedded"; meta_content:"animal %sagan%", $ANIMALS; classtype: successful-user; sid:10000090; rev:1;)
alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg:"[TEST] Var Test cat"; meta_content:"animal %sagan%", $CAT; classtype: successful-user; sid:10000091; rev:1;)
alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg:"[TEST] Var Test dog"; meta_content:"animal %sagan%", $DOG; classtype: successful-user; sid:10000092; rev:1;)

$ logger "animal cat"
[**] [1:10000091] [TEST] Var Test cat [**]
[Classification: successful-user] [Priority: 1]
2015-07-15 12:23:51 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal cat

$ logger "animal dog"
[**] [1:10000092] [TEST] Var Test dog [**]
[Classification: successful-user] [Priority: 1]
2015-07-15 12:23:35 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal dog

automake warning

Hi,

Looking the building process of sagan I have this warning:

Makefile.am:4: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPF
LAGS')
src/Makefile.am:7: warning: source file 'parsers/parse-ip.c' is in a subdirectory,
src/Makefile.am:7: but option 'subdir-objects' is disabled

automake: warning: possible forward-incompatibility.
automake: At least a source file is in a subdirectory, but the 'subdir-objects'
automake: automake option hasn't been enabled. For now, the corresponding output
automake: object file(s) will be placed in the top-level directory. However,
automake: this behaviour will change in future Automake versions: they will
automake: unconditionally cause object files to be placed in the same subdirectory
automake: of the corresponding sources.
automake: You are advised to start using 'subdir-objects' option throughout your
automake: project, to avoid future incompatibilities.

regards,

External output zombie processes

Using the most recent version of Sagan from GitHub, the use of an external output script results in a growing number of zombie processes, eventually leading Sagan to complain that it is out of worker threads. (A general reaping sketch follows the process listing below.)

~# ps aux | grep sagan
sagan    10739  0.0  0.0      0     0 pts/1    Z+   09:30   0:00 [sagan-to-mq.php] <defunct>
sagan    10740  0.0  0.0      0     0 pts/1    Z+   09:30   0:00 [sagan-to-mq.php] <defunct>
sagan    10742  0.0  0.4 8677636 107596 pts/1  S+   09:30   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    10749  0.0  0.0      0     0 pts/1    Z+   09:30   0:00 [sagan-to-mq.php] <defunct>
sagan    10751  0.0  0.0      0     0 pts/1    Z+   09:30   0:00 [sagan-to-mq.php] <defunct>
sagan    10754  0.0  0.4 8677636 107608 pts/1  S+   09:30   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    18233  0.0  0.0      0     0 pts/1    Z+   09:45   0:00 [sagan-to-mq.php] <defunct>
sagan    18235  0.0  0.0      0     0 pts/1    Z+   09:45   0:00 [sagan-to-mq.php] <defunct>
sagan    18237  0.0  0.5 8678296 132548 pts/1  S+   09:45   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    18883  0.0  0.0      0     0 pts/1    Z+   09:46   0:00 [sagan-to-mq.php] <defunct>
sagan    18885  0.0  0.5 8678296 133064 pts/1  S+   09:46   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    28298 17.2  0.5 8678688 145776 pts/1  Sl+  09:00   9:27 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    29119  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29123  0.0  0.1 8676192 37500 pts/1   S+   09:00   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    29124  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29126  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29128  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29129  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29131  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29132  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
sagan    29134  0.0  0.1 8676192 37548 pts/1   S+   09:00   0:00 /root/sagan_build/sagan-git-new/src/sagan -f /etc/sagan.conf
sagan    29136  0.0  0.0      0     0 pts/1    Z+   09:00   0:00 [sagan-to-mq.php] <defunct>
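
Whatever the exact cause inside the external output plugin, the generic cure for accumulating <defunct> children is to reap them, either by ignoring SIGCHLD or by polling waitpid() with WNOHANG (a general sketch, not a statement about where Sagan forks):

    #include <signal.h>
    #include <sys/wait.h>

    /* Option 1: ask the kernel to reap exited children automatically. */
    signal(SIGCHLD, SIG_IGN);

    /* Option 2: non-blocking reap loop, run periodically or from a
       SIGCHLD handler, so finished scripts never linger as zombies.   */
    while (waitpid(-1, NULL, WNOHANG) > 0)
            ;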

PostgreSQL Error: ERROR: invalid input syntax for integer in sagan-snort.c

"INSERT INTO sig_class ..." fails in get_sig_sid, sagan-snort.c

 --- src/output-plugins/sagan-snort.c.orig       2010-08-06 07:52:44.000000000 +0200
 +++ src/output-plugins/sagan-snort.c.m2 2010-08-06 09:39:15.000000000 +0200
 @@ -308,7 +308,7 @@

     /* classification hasn't been recorded in sig_class,  so put it in */

 -   snprintf(sqltmp, sizeof(sqltmp), "INSERT INTO sig_class(sig_class_id, sig_class_name) VALUES ('', '%s')", classtype);
 +   snprintf(sqltmp, sizeof(sqltmp), "INSERT INTO sig_class(sig_class_id, sig_class_name) VALUES (DEFAULT, '%s')", classtype);
     sql=sqltmp;
     db_query( dbtype, sql);

output external: spacing issue.

When using "output external: {script name}", the external routine won't execute if it lacks a space. For example, "output external:/tmp/nospace" (won't work), but "output external: /tmp/withspace" (will work).

No support for log events over multiple lines

Some processes generate log output with events spread over multiple lines, which currently cannot be tracked.

In particular, auditd will generate output of the following form when an attempt is made to monitor a watched file or directory:

 node=somehost.example.com type=SYSCALL msg=audit(1338461102.756:AUEVID): arch=c000003e syscall=83 success=no exit=-17 a0=170da30 a1=1ff a2=170da48 a3=1717e50 items=1 ppid=9767 pid=9768 auid=707 uid=707 gid=707 euid=707 suid=707 fsuid=707 egid=707 sgid=707 fsgid=707 tty=(none) ses=99818 comm="ruby" exe="/usr/bin/ruby" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="some-audit-key"
 node=somehost.example.com type=CWD msg=audit(1338461102.756:AUEVID):  cwd="/var/www/gitorious"
 node=somehost.example.com type=PATH msg=audit(1338461102.756:AUEVID): item=0 name="/usr/lib64/ruby/gems/1.8" inode=197217 dev=fc:01 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:lib_t:s0
 node=somehost.example.com type=EOE msg=audit(1338461102.756:AUEVID): 

In this particular example, what I would like would be to generate an alert on the third line, where the first line meets some criteria (e.g. key=some-audit-key).

What would be desired for this specific example is a method of extracting the AUEVID from the first line and then alerting on the third line, which contains the location being modified.

FIFO opened before fork hangs init scripts

There is, however, another problem which may be more difficult to resolve: opening the FIFO (src/sagan.c:532) is done early, before forking (src/sagan.c:568). While this may be a good idea for error handling, it causes a problem: opening a FIFO blocks if no writer is connected [1]. This means that the init script will never return.

There are, imho, several solutions:

  • open the FIFO in non-blocking mode (this may require many changes to the code)
  • open it after forking (but then you'll never know from the console whether starting the daemon
    succeeded, only from the logs)
  • open it in non-blocking mode and set it to blocking after forking (see the sketch below)

Personally, I'd suggest #1 even if it costs some changes.
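
For comparison, option #3 from the list above would look roughly like this: open with O_NONBLOCK so the daemon can fork without waiting for a writer, then clear the flag afterwards (a sketch only; fifo_path is a placeholder and this is not the actual sagan.c flow):

    #include <fcntl.h>

    /* Open without blocking on an absent writer, so startup can proceed. */
    int fifo_fd = open(fifo_path, O_RDONLY | O_NONBLOCK);
    if (fifo_fd < 0)
            sagan_log(1, "Cannot open FIFO %s", fifo_path);

    /* ... daemonize / fork here ... */

    /* Switch back to blocking reads in the child. */
    int flags = fcntl(fifo_fd, F_GETFL);
    fcntl(fifo_fd, F_SETFL, flags & ~O_NONBLOCK);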

bro-intel: domain; matches all logs

Can bro-intel: domain; be made to match the INTEL::DOMAIN item anywhere in the log?
The rule below could be really useful.
alert udp $EXTERNAL_NET any -> $HOME_NET $DNS_PORT (msg: "[BRO] Excessive Bad Domains (1000+)"; bro-intel: domain; after: track by_src, count 1000, seconds 86400; parse_src_ip: 1; parse_dst_ip: 2; classtype: suspicious-traffic; sid: 13000003; rev:1;)

However, it doesn't work as expected - it seems to match on every log line. Removing the after option further demonstrates this by resulting in

kernel: sagan[2330]: segfault at 7facb3e43a60 ip 0000003294247e2c sp 00007fa9f08891e0 error 4 in libc-2.12.so[3294200000+18a000]
init: sagan main process (1378) killed by SEGV signal

I don't understand how the domain option currently works for the bro-intel preprocessor. The domain indicator isn't listed in the table at https://wiki.quadrantsec.com/twiki/bin/view/Main/SaganRuleReference#bro_intel_src_ipaddr_dst_ipaddr.

Unclean shutdown leaves Sagan broken.

It appears Sagan doesn't check to see if the last_cid value in the sensor table is valid on startup.

If the shutdown was unclean (a crash or kill -9) this value will be out of date. This will result in an error like this:

[*] [12/27/2010 09:58:27] - [output-plugins/sagan-snort.c, line 203] MySQL Error [1062:] "Duplicate entry '1-13680' for key 'PRIMARY'"
Offending SQL statement: INSERT INTO event(sid, cid, signature, timestamp) VALUES ('1', '13680', '2', '2010-12-27 09:32:52')

Doing a quick query at startup, like "SELECT MAX(cid) FROM event where sid='1';" and comparing the result to the value in the sensor table would avoid this problem.

I'd be happy to take a crack at a patch, but I don't know if you'd prefer to write it yourself.
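
A rough shape of that startup check with the MySQL C API (a sketch only; the table, column and sid value are taken from the error above, and 'last_cid' stands in for whatever variable holds the value read from the sensor table):

    /* Cross-check sensor.last_cid against the real MAX(cid) in event. */
    if (mysql_query(mysql, "SELECT MAX(cid) FROM event WHERE sid='1'") == 0) {
            MYSQL_RES *res = mysql_store_result(mysql);
            MYSQL_ROW row  = mysql_fetch_row(res);
            unsigned long max_cid = (row && row[0]) ? strtoul(row[0], NULL, 10) : 0;

            if (max_cid > last_cid)
                    last_cid = max_cid;     /* resume after the highest recorded cid */

            mysql_free_result(res);
    }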

meta_content doesn't match when space is separator

I'm not able to match appropriately with whitespace as a separator in a rule using meta_content, e.g.
e.g.

meta_content: "Results %sagan%', $THINGS;
vs.                                 |
meta_content: "Results: %sagan%', $THINGS;

This is illustrated below.

$ grep ANIMAL sagan.conf
var ANIMALS         [cat, mouse]
var ANIMAL           cat

Rules:

  1. alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg: "[TEST] MetaContent Space - Two Items"; meta_content: "animal %sagan%", $ANIMALS; classtype: suspicious-traffic; sid: 13000005; rev:1;)
  2. alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg: "[TEST] MetaContent Space - Single Item"; meta_content: "animal %sagan%", $ANIMAL: classtype: suspicious-traffic; sid: 13000006; rev:1;)
  3. alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg: "[TEST] MetaContent Colon - Two Items"; meta_content: "animal: %sagan%", $ANIMALS; classtype: suspicious-traffic; sid: 13000007; rev:1;)
  4. alert syslog $EXTERNAL_NET any -> $HOME_NET any (msg: "[TEST] MetaContent Colon - Single Item"; meta_content: "animal: %sagan%", $ANIMAL: classtype: suspicious-traffic; sid: 13000008; rev:1;)

Colon as a separator matches both rules _as expected_ for cat.

$ logger "animal: cat"
[**] [1:13000007] [TEST] MetaContent Colon - Two Items [**]
[Classification: suspicious-traffic] [Priority: 2]
2015-07-14 15:40:12 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal: cat

[**] [1:13000008] [TEST] MetaContent Colon - Single Item [**]
[Classification: ] [Priority: 0]
2015-07-14 15:40:12 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal: cat

Again, colons as a separator matches both rules _as expected_ for mouse.

$ logger "animal: mouse"
[**] [1:13000007] [TEST] MetaContent Colon - Two Items [**]
[Classification: suspicious-traffic] [Priority: 2]
2015-07-14 15:40:53 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal: mouse

[**] [1:13000008] [TEST] MetaContent Colon - Single Item [**]
[Classification: ] [Priority: 0]
2015-07-14 15:40:53 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal: mouse

However, a space between the words _does not match as expected_.
Here, three rules match, but the Space - Two Items rule is not among them.

$ logger "animal cat"
[**] [1:13000006] [TEST] MetaContent Space - Single Item [**]
[Classification: ] [Priority: 0]
2015-07-14 15:40:07 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal cat

[**] [1:13000007] [TEST] MetaContent Colon - Two Items [**]
[Classification: suspicious-traffic] [Priority: 2]
2015-07-14 15:40:07 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal cat

[**] [1:13000008] [TEST] MetaContent Colon - Single Item [**]
[Classification: ] [Priority: 0]
2015-07-14 15:40:07 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal cat

For mouse, the behavior is also different: two rules match, but neither of them is a space rule.

$ logger "animal mouse"
[**] [1:13000007] [TEST] MetaContent Colon - Two Items [**]
[Classification: suspicious-traffic] [Priority: 2]
2015-07-14 15:40:35 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal mouse

[**] [1:13000008] [TEST] MetaContent Colon - Single Item [**]
[Classification: ] [Priority: 0]
2015-07-14 15:40:35 141.142.148.97:514 -> 141.142.148.97:514 user notice
Message:  animal mouse

Originating host in alert

Is it possible to add the host field of a syslog message to an alert?
My problem is that all my alerts show (see below) 10.1.2.3:514 -> 10.1.2.3:514 to identify the host, but that is the IP of the redundant CARP address we use for syslog forwarding, which hosts on our network forward to. Looking at such an alert, I cannot tell which host the message originated from. Maybe the originating host could be added after the timestamp or before the facility? I don't know if this would break anything relying on the alert format. What do you think?

[Infrastructure image]

To find the host from which a syslog message was received, I grep through our log archive for a match, which, given the common messages exhibited here, returns hundreds to thousands of results. Matching by timestamp helps, but creating regexes all the time is a pain just to verify something.

[**] [1:5000119] [SYSLOG] Authentication failure - Brute force [**]
[Classification: unsuccessful-user] [Priority: 1]
2015-06-07 03:32:10 10.1.2.3:514 -> 10.1.2.3:514 daemon notice
Message:  Exiting: authentication failed
[Xref => http://wiki.quadrantsec.com/bin/view/Main/5000119]

Sagan segfaults under FreeBSD at load time.

This is from gdb...

-----
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0 0x08049717 in main (argc=Cannot access memory at address 0xbf5ad784) at sagan.c:141
141 int main(int argc, char **argv) {
New Thread 28301140 (LWP 100077) bt
#0 0x08049717 in main (argc=Cannot access memory at address 0xbf5ad784) at sagan.c:141
-----

141 is:

int main(int argc, char **argv) {

sagan-rules.c - Flowbit 'set' detection is incorrect.

Flowbit functions like 'unset', 'isset', and 'isnotset' incorrectly check whether the flowbit in question has been 'set'. See line 382 (ish) in sagan-rules.c (UNSET) for example. This makes rule loading order critical, which is bad: it's possible for an 'isset' rule (or whatever) to load at run time before its 'set' rule. The sanity check logic needs to happen after all rules have been loaded, not during loading.

The code has been commented out for now so this sanity check doesn't fail. However, it is still possible to check for a flowbit without ever setting it, which will likely lead to issues!
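
One way to do the check after loading is a second pass over the whole rule set: collect every flowbit that some rule sets, then warn about any isset/isnotset/unset that references a name nothing sets. A sketch (rulestruct, flowbit_type, flowbit_name and the FLOWBIT_* values are placeholder names, not the real sagan-rules.c structures):

    /* Run once, after ALL rules have been loaded, so ordering is irrelevant. */
    for (int i = 0; i < rule_count; i++) {

        if (rulestruct[i].flowbit_type != FLOWBIT_ISSET &&
            rulestruct[i].flowbit_type != FLOWBIT_ISNOTSET &&
            rulestruct[i].flowbit_type != FLOWBIT_UNSET)
                continue;

        int found = 0;
        for (int j = 0; j < rule_count; j++) {
            if (rulestruct[j].flowbit_type == FLOWBIT_SET &&
                !strcmp(rulestruct[j].flowbit_name, rulestruct[i].flowbit_name)) {
                    found = 1;
                    break;
            }
        }

        if (!found)
                sagan_log(1, "Flowbit '%s' is checked but never set by any rule.",
                          rulestruct[i].flowbit_name);
    }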

program: should support wild cards.

In some situations, the program field might be dynamic. For example, MS-SQL servers set the program field to "MSSQL${SERVERNAME}". Being able to do something like:

program: MSSQL$*;

Would be helpful!
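
Glob-style matching for the program field could lean on the standard fnmatch() routine (a sketch; rule_program and syslog_program are illustrative variable names):

    #include <fnmatch.h>

    /* Matches a rule pattern such as "MSSQL$*" against a program      */
    /* field such as "MSSQL$PROD01"; fnmatch() returns 0 on a match.   */
    if (fnmatch(rule_program, syslog_program, 0) == 0) {
            /* program wildcard matched */
    }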

Ubuntu 12.04 libdumbnet versus libdnet

Ubuntu 12.04 renames the "libdnet" libraries to "libdumbnet". This breaks the configuration and compilation of Sagan on Ubuntu. I'll have to modify configure.ac and some other things to take this into account when dealing with Ubuntu 12.04+. This isn't an issue for Ubuntu versions under 12.04. The temporary fix until I get this done is to do the following as "root":

  1. cd /usr/include && ln -s dumbnet.h dnet.h
  2. cd /usr/lib && ln -s libdumbnet.so libdnet.so
  3. ldconfig

This kludge will allow Sagan to compile. I'll push a fix for this to git ASAP.

Support active-response per rule

It would be useful if you could set an active-response script per rule, similar to email: [email protected].
This way we don't have to fork the script for every alert (saving cycles), and we don't have to write scripts that test the SID (or similar) before performing a SID-related action. It would also be nice if Sagan supported more than one active-response script. Supporting active-response: /path/to/script in the rule body would be dope. That way I can have multiple concise scripts like:

alert tcp ... active-response: /path/sagan/blacklist.sh ...
alert syslog ... active-response: /path/sagan/intel.sh ...

sagan-config.c / sagan-rules.c 'var' detection.

Sanity checking of 'var' usage between sagan.conf and Sagan rules leaves something to be desired. For example, if 'var SAGAN_DAYS 012345' is NOT loaded but is specified in a rule, there is no sanity check, which could lead to issues.

dst-port / src-port wrong if liblognorm fails

Hey

I'm writing my own normalization rules for an iptables ruleset which is generated via Firewall Builder.

With these rulebases, some messages get normalized and some do not. For those which can't be normalized by Sagan/liblognorm, the source and destination ports are wrong: they are exactly the same as in the previous message, which was successfully normalized.

[D] Liblognorm DEBUG output:
[D] ---------------------------------------------------
[D] Log message to normalize: [xxx.xxxxx] RULE x -- DENY IN=eth0 OUT= MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.0.11 DST=192.168.0.1 LEN=xx TOS=0x00 PREC=0x00 TTL=xx ID=xxxx PROTO=UDP SPT=80 DPT=80 LEN=xx
[D] Parsed: { "dst-port": "80", "src-port": "80", "proto": "UDP", "dst-ip": "192.168.0.1", "src-ip": "192.168.0.11" }
[D] Source IP: 192.168.0.11
[D] Destination IP: 192.168.0.1
[D] Source Port: 80
[D] Destination Port: 80
[D] Source Host:
[D] Destination Host:
[D]
[D] Liblognorm DEBUG output:
[D] ---------------------------------------------------
[D] Log message to normalize: [xxxx.xxxx] RULE x -- DENY IN=eth0 OUT= MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
[D] Parsed: { "originalmsg": "[xxxx.xxxx] RULE x -- DENY IN=eth0 OUT= MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 ", "unparsed-data": "DF PROTO=2 " }
[D] Source IP: 0
[D] Destination IP: 0
[D] Source Port: 80
[D] Destination Port: 80
[D] Source Host:
[D] Destination Host:
[D]

Maybe it has something to do with lines 194/197. If tmp is NULL, SaganNormalizeLiblognorm->src_port is never written but is later read. In that case it should be reset, like ip_src, ip_dst, and src_host are at line 109.
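
In line with that suggestion, clearing the per-message results before (or on a failed) normalization would keep a failed parse from inheriting the previous message's ports. A sketch only; the exact member names and types in the real structure may differ:

    /* Reset normalization results so a message liblognorm cannot parse  */
    /* does not reuse the ports of the previously normalized message.    */
    SaganNormalizeLiblognorm->src_port = 0;
    SaganNormalizeLiblognorm->dst_port = 0;
    SaganNormalizeLiblognorm->ip_src[0]   = '\0';
    SaganNormalizeLiblognorm->ip_dst[0]   = '\0';
    SaganNormalizeLiblognorm->src_host[0] = '\0';
    SaganNormalizeLiblognorm->dst_host[0] = '\0';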

I don't want to offer a PR, because I don't know if this is the cause.

Cheers

Match content on normalized fields

I have an idea for a useful feature. I think being able to match content, pcre, etc. on specific normalized fields would be really useful. In variable-length logs such as Bro logs, Snort-like modifiers such as within, depth, and offset are too specific. Maybe a numerical index could accomplish this, by counting the normalized fields in a message based on the rulebase.

#Field:             1                2                3                      4
rule=: Accepted %-:word% for %username:word% from %src-ip:ipv4% port %src-port:number% ssh2

Then you could do something like content: "jschipp"; field: 2; in a rule, where the content jschipp would only match if it is in the 2nd field, which is the username field. Also, using the string identifiers from a rulebase file as the field names would be dope, e.g. username: "jschipp";. I suspect this data can be easily pulled from liblognorm.

Trying to get sagan working on FreeBSD

I'm working on getting Sagan running on FreeBSD. I've created and submitted a port for it (not yet accepted into the ports tree), and I've got syslogd writing to the Sagan FIFO. I've written a handful of rules for Palo Alto, which is remotely syslogging to the FIFO.

I'm seeing these errors in the sagan log:

[*] [11/30/2012 05:04:00] - Becoming a daemon!
[*] [11/30/2012 05:04:00] - Attempting to open syslog FIFO (/var/run/sagan/sagan.fifo).
[*] [11/30/2012 05:04:00] - Successfully opened FIFO (/var/run/sagan/sagan.fifo).
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'host' (replaced with 10.110.4.132)
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'facility'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'priority'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'priority'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'tag'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'date'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'time'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'program'
[*] [11/30/2012 05:04:35] - Sagan received a malformed 'message'

I'm not sure what these mean.

We're successfully remotely syslogging the paloalto firewalls to Splunk, and the exact same log messages are being sent to the fifo.

I think sagan is complaining because these don't look like "normal" syslog messages?

Here's one alert:

Nov 29 22:19:12 <user.warn> paloalto 1,2012/11/29 22:19:12,0009C101171,THREAT,spyware,1,2012/11/29 22:19:07,66.85.161.75,10.21.5.115,0.0.0.0,0.0.0.0,Trust-to-Untrust,,,web-browsing,vsys1,untrust,trust,ethernet1/1,ethernet1/2,Log-Forwarding-Main,2012/11/29 22:19:12,1565334,1,80,50057,0,0,0x0,tcp,alert,"p6.asp",Suspicious user-agent strings(10004),any,medium,server-to-client,1596504506,0x0,United States,10.0.0.0-10.255.255.255,0,

Do these need to be normalized?

Perfmon stats

Added a perfmon processor to Sagan, like Snort. For graphing and general statistics.

External output anomalies

Hello,

We have recently updated to Sagan 1.0.0 (we've tried both RC1 and RC2) and have encountered some anomalies in the external output. We call a PHP script using the 'parsable' output. This PHP script doesn't do much except parse the output into an array and feed it to our message queue. After updating to 1.0 we've noticed that some alerts get sent to our PHP script with information that isn't correct.
This is the information that our PHP script receives, when parsed to an array (the event_id is an md5 hash we generate and add to the array):

Array
(
    [event_id] => a419d78c299132979c4e72a0a1721a23
    [id] => 5000406
    [message] => [OPENSSH] Accepted publickey
    [classification] => successful-user
    [drop] => False
    [priority] => 1
    [date] => 2014-02-28
    [time] => 13:00:59
    [source] => <srcip>
    [source_port] => 514
    [destination] => <dstip>
    [destination_port] => 22
    [facility] => auth
    [syslog_priority] => info
    [reference] => http://wiki.quadrantsec.com/bin/view/Main/5001126
    [syslog_message] =>  Accepted publickey for webdev from <srcip> port 55327 ssh2
)

For debugging purposes we started writing this array to a log file for every received alert. We then received an FTP brute-force attack, which generated many instances of the following alert:

Array
(
    [event_id] => bbd5d1156dc1e317b762d72e7080f67f
    [id] => 5000081
    [message] => [PROFTPD] Login failed accessing the FTP server
    [classification] => unsuccessful-user
    [drop] => False
    [priority] => 1
    [date] => 2014-02-28
    [time] => 12:19:45
    [source] => <srcip>
    [source_port] => 514
    [destination] => <dstip>
    [destination_port] => 21
    [facility] => authpriv
    [syslog_priority] => notice
    [reference] => http://wiki.quadrantsec.com/bin/view/Main/5001126
    [syslog_message] =>  0.0.0.0 (<srcip>[<srcip>]) - USER <username> (Login failed): Incorrect password.
)

This is to be expected. However, one of these messages looked different.
To illustrate:

cat logphp.txt | grep -B 17 -A 1 <username>
Array
(
    [event_id] => 36781c515bf518fe8df01ecc8fc7313f
    [id] => 5000081
    [message] => [PROFTPD] Login failed accessing the FTP server
    [classification] => unsuccessful-user
    [drop] => False
    [priority] => 1
    [date] => 2014-02-28
    [time] => 12:26:04
    [source] => <srcip>
    [source_port] => 514
    [destination] => <dstip>
    [destination_port] => 21
    [facility] => authpriv
    [syslog_priority] => notice
    [reference] => http://wiki.quadrantsec.com/bin/view/Main/5001126
    [syslog_message] =>  0.0.0.0 (<srcip>[<srcip>]) - USER <username> (Login failed): Incorrect password.
)
--
Array
(
    [event_id] => a8afac0861a260dbd1b182526ad81fcc
    [id] => 5000081
    [message] => [OPENSSH] Accepted publickey
    [classification] => successful-user
    [drop] => False
    [priority] => 1
    [date] => 2014-02-28
    [time] => 12:26:06
    [source] => <srcip>
    [source_port] => 514
    [destination] => <dstip>
    [destination_port] => 21
    [facility] => auth
    [syslog_priority] => info
    [reference] => http://wiki.quadrantsec.com/bin/view/Main/5001126
    [syslog_message] =>  0.0.0.0 (<srcip>[<srcip>]) - USER <username> (Login failed): Incorrect password.
)
--
Array
(
    [event_id] => add431ca07fe5c94f4c726b2415c8e66
    [id] => 5000081
    [message] => [PROFTPD] Login failed accessing the FTP server
    [classification] => unsuccessful-user
    [drop] => False
    [priority] => 1
    [date] => 2014-02-28
    [time] => 12:26:06
    [source] => <srcip>
    [source_port] => 514
    [destination] => <dstip>
    [destination_port] => 21
    [facility] => authpriv
    [syslog_priority] => notice
    [reference] => http://wiki.quadrantsec.com/bin/view/Main/5001126
    [syslog_message] =>  0.0.0.0 (<srcip>[<srcip>]) - USER <username> (Login failed): Incorrect password.
)

As you can see, the second alert contains exactly the same syslog message; however, the message, classification, facility, and syslog priority are incorrect when compared to the other two.
Over the last few days we've seen this happen multiple times, so there seems to be a bug in the output being sent to the external program.

output-plugins/sagan-external.c: close output pipe

Sagan doesn't close the pipe for the external output.

This solves my problem:

 --- a/src/output-plugins/sagan-external.c
 +++ b/src/output-plugins/sagan-external.c
 @@ -133,6 +133,7 @@ if (( pid = fork()) == 0 ) {
  pthread_mutex_unlock( &ext_mutex );

  n = read(out[0], buf, sizeof(buf));
 +close(out[0]);
  buf[n] = 0;

  waitpid(pid, NULL, 0);

Threshold by_dst causing segfault (typo)

Greetz! Using the current git and the three previous versions, I was getting segfaults and non-thresholded alerts when using the rules below to parse Palo Alto firewall logs (with liblognorm).

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg: "[PAN FW] Url Blocked by policy or category"; content:"THREAT,url"; content:"block-url"; content:!"online-personal-storage"; threshold: type limit, count 2, seconds 3600, track by_dst; normalize: paloalto; parse_port; parse_proto; parse_src_ip: 1; parse_dst_ip: 2; classtype: suspicious-traffic; reference: url,www.brightcloud.com/tools/url-ip-lookup.php; sid: 8201410; pri: 4;rev:1;)

OR

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg: "[PAN FW] Foreign URL of unknown category"; content:"THREAT,url"; content:"unknown"; content:!"United States"; normalize: paloalto; threshold: type limit, count 4, seconds 7200, track by_dst; parse_proto; parse_src_ip: 1; parse_dst_ip: 2; country_code: track by_dst, isnot $HOME_COUNTRY; classtype: suspicious-traffic; reference: url,www.brightcloud.com/tools/url-ip-lookup.php; sid:8201418; pri: 4;rev:1;)

While this does not occur on every rule hit, it always eventually segfaults. Below is the output from GDB running Sagan with -d engine,normalize, then doing a backtrace after the segfault.

[D] [processors/sagan-engine.c, line 792] [Trigger]*******************************
[D] [processors/sagan-engine.c, line 793] Program: pbsagan | Facility: local5 | Priority: info | Level: info | Tag: ae
[D] [processors/sagan-engine.c, line 794] Threshold flag: 0 | After flag: 0 | Flowbit Flag: 0 | Flowbit status: 0
[D] [processors/sagan-engine.c, line 795] Triggering Message: 19:04:06,0003C104127,THREAT,url,1,2014/03/18 19:03:59,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,Users Out,,,ssl,vsys1,Trust,Untrust,ethernet1/16,ethernet1/1,onshore,2014/03/18 19:04:06,21539,1,4649,443,3962,443,0x408000,tcp,alert,"xxx.xxx.xxx.xxx/",(9999),unknown,informational,client-to-server,174835985,0x0,xxx.xxx.xxx.xxx-xxx.xxx.xxx.xxx,Ireland,0,
[D] Liblognorm DEBUG output:
[D] ---------------------------------------------------
[D] Log message to normalize: 19:04:26,0003C104127,THREAT,url,1,2014/03/18 19:04:20,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,Users Out,,,ssl,vsys1,Trust,Untrust,ethernet1/16,ethernet1/1,onshore,2014/03/18 19:04:26,21750,1,3561,443,15766,443,0x408000,tcp,alert,"xxx.xxx.xxx.xxx/",(9999),unknown,informational,client-to-server,174836002,0x0,xxx.xxx.xxx.xxx-xxx.xxx.xxx.xxx,United Kingdom,0,
[D] Parsed: { "therest": "unknown,informational,client-to-server,174836002,0x0,xxx.xxx.xxx.xxx-xxx.xxx.xxx.xxx,United Kingdom,0,", "logurl": "xxx.xxx.xxx.xxx/", "action": "alert", "flags": "0x408000", "nat-dst-port": "443", "nat-src-port": "15766", "dst-port": "443", "src-port": "3561", "count": "1", "session": "21750", "time": "19:04:26", "day": "18", "month": "03", "year": "2014", "logprofile": "onshore", "dstintf": "ethernet1/1", "srcintf": "ethernet1/16", "dstzone": "Untrust", "srczone": "Trust", "vsys": "vsys1", "app": "ssl", "policy": "Users Out", "natdstip": "xxx.xxx.xxx.xxx", "natsrcip": "xxx.xxx.xxx.xxx", "dst-ip": "xxx.xxx.xxx.xxx", "src-ip": "xxx.xxx.xxx.xxx", "dtm": "1", "devserial": "UUUUUUUUU" }
[D] Source IP: xxx.xxx.xxx.xxx
[D] Destination IP: xxx.xxx.xxx.xxx
[D] Source Port: 3561
[D] Destination Port: 443
[D]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff3510700 (LWP 2868)]
0x00007ffff66b71b2 in _IO_vfprintf_internal (s=s@entry=0x7ffff350f260, format=,
format@entry=0x41d148 "Threshold SID %s by destination IP address. [%s]", ap=ap@entry=0x7ffff350f3c8) at vfprintf.c:1649
1649 vfprintf.c: No such file or directory.

(gdb) bt
#0 0x00007ffff66b71b2 in _IO_vfprintf_internal (s=s@entry=0x7ffff350f260, format=, format@entry=0x41d148 "Threshold SID %s by destination IP address. [%s]", ap=ap@entry=0x7ffff350f3c8) at vfprintf.c:1649
#1 0x00007ffff66dcb49 in _IO_vsnprintf (string=0x7ffff350f420 "Threshold SID ", maxlen=, format=0x41d148 "Threshold SID %s by destination IP address. [%s]", args=0x7ffff350f3c8) at vsnprintf.c:119

#2 0x000000000040d1a4 in Sagan_Log ()
#3 0x000000000041420e in Sagan_Engine ()
#4 0x000000000040deb8 in Sagan_Processor ()
#5 0x00007ffff6c5e062 in start_thread (arg=0x7ffff3510700) at pthread_create.c:312
#6 0x00007ffff6754a3d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) quit

I found a typo in the function below: the use of afterbysrc. Changing this to afterbydst resolved the issue, and now my thresholds work!

 if ( rulestruct[b].after_count < afterbydst[i].count ) {
       after_log_flag = 0;
       Sagan_Log(S_NORMAL, "After SID %s by destination IP address. [%s]", afterbysrc[i].sid, ip_dst);
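
For clarity, the corrected call simply references the same afterbydst array that the surrounding count check already uses:

 if ( rulestruct[b].after_count < afterbydst[i].count ) {
       after_log_flag = 0;
       Sagan_Log(S_NORMAL, "After SID %s by destination IP address. [%s]", afterbydst[i].sid, ip_dst);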

Sagan seg faults if the "sagan" user doesn't exist.

#0 0xb7cc1713 in strlen () from /lib/libc.so.6
#1 0xb7c908c4 in vfprintf () from /lib/libc.so.6
#2 0xb7cb3784 in vsnprintf () from /lib/libc.so.6
#3 0x080530e0 in sagan_log (type=1, format=0x8058cd0 "[%s line %d] User %.32s cannot be found. [%s line %d]") at sagan-util.c:193
#4 0x08053690 in droppriv (username=0x8057b55 "sagan", fifo=0x806d3cb "/var/run/sagan.fifo") at sagan-util.c:99
#5 0x0804a2cf in main (argc=1, argv=0xbffff644) at sagan.c:445
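
The crash is inside the logging call rather than in the user lookup itself, which usually means the format string and its arguments are out of sync (or a NULL pointer is handed to a %s conversion). A defensive sketch of that error path (illustrative only, not the real sagan-util.c / droppriv code):

    #include <pwd.h>

    struct passwd *pw = getpwnam(username);
    if (pw == NULL) {
            /* Keep specifiers and arguments in sync and never pass NULL to %s. */
            sagan_log(1, "[%s line %d] User %.32s cannot be found.",
                      __FILE__, __LINE__, username ? username : "(unknown)");
    }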

sagan opens and closes the log file on each write (and fails when using a different uid)

The sagan_log function (sagan-util.c) reopens and closes the log file on every message, which causes several problems:

  • it is incredibly slow
  • it does not work when using another user, since the first open/close is done as root, and after changing uid fopen will fail with error
    [E] Cannot open /var/log/sagan/sagan.log! [sagan-util.c line 189]

I could have submitted a patch, but I don't know what your preferred solution is. To keep it simple, I'd suggest storing the FILE* in a static global variable (since it is global to the program).
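
A minimal version of that static-FILE* approach (a sketch only: open the log once, ideally before dropping privileges, and reuse the handle on every call):

    #include <stdio.h>
    #include <stdarg.h>
    #include <stdlib.h>

    static FILE *log_fp = NULL;          /* opened once, shared by every call */

    void sagan_log_open(const char *path)
    {
        log_fp = fopen(path, "a");       /* call before setuid() so the handle survives */
    }

    void sagan_log(int type, const char *format, ...)
    {
        if (log_fp == NULL)
                return;                  /* or fall back to stderr */

        va_list ap;
        va_start(ap, format);
        vfprintf(log_fp, format, ap);
        va_end(ap);

        fputc('\n', log_fp);
        fflush(log_fp);

        if (type == 1)
                exit(1);                 /* mirror the existing fatal-log behaviour */
    }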
