
fuzzbench's Introduction

FuzzBench: Fuzzer Benchmarking As a Service

FuzzBench is a free service that evaluates fuzzers on a wide variety of real-world benchmarks, at Google scale. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and make fuzzing research easier for the community to adopt. We invite members of the research community to contribute their fuzzers and give us feedback on improving our evaluation techniques.

FuzzBench provides:

  • An easy API for integrating fuzzers.
  • Benchmarks from real-world projects. FuzzBench can use any OSS-Fuzz project as a benchmark.
  • A reporting library that produces reports with graphs and statistical tests to help you understand the significance of results.

To participate, submit your fuzzer to run on the FuzzBench platform by following our simple guide. After your integration is accepted, we will run a large-scale experiment using your fuzzer and generate a report comparing your fuzzer to others. See a sample report.

Overview

FuzzBench Service diagram

Sample Report

You can view our sample report here and our periodically generated reports here. The sample report is generated using 10 fuzzers against 24 real-world benchmarks, with 20 trials each over a duration of 24 hours. The raw data in compressed CSV format can be found at the end of the report.

When analyzing reports, we recommend:

  • Checking the strengths and weaknesses of a fuzzer against various benchmarks.
  • Looking at aggregate results to understand the overall significance of the result.

Please provide feedback on any inaccuracies and potential improvements (such as integration changes, new benchmarks, etc.) by opening a GitHub issue here.

Documentation

Read our detailed documentation to learn how to use FuzzBench.

Contacts

Join our mailing list for discussions and announcements, or send us a private email at [email protected].

fuzzbench's People

Contributors

alifahmed, andreafioraldi, cnheitman, davidkorczynski, dependabot[bot], dokyungs, donggeliu, eliageretto, fmeum, inferno-chromium, jdhiser, jiradeto, jonathanmetzman, kjain14, laurentsimon, leo-neat, lszekeres, mboehme, microsvuln, navidem, pietroborrello, robertswiecki, shadoom7, tanq16, thuanpv, tokatoka, vanhauser-thc, vaush, wideglide, zchcai


fuzzbench's Issues

Include info on changes to FuzzBench in reports

This is similar to #94 but much easier to implement.
When creating an experiment, save the hash of the latest git commit. Then, when generating the report for the experiment, include this info and a link to the diff between this hash and the hash of the last experiment.
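
A minimal sketch of the idea, assuming a hypothetical helper that records the commit when the experiment is created (the repo URL is the public FuzzBench GitHub project; the storage side is not shown):

```python
import subprocess

FUZZBENCH_REPO = 'https://github.com/google/fuzzbench'  # assumed repo URL


def get_git_hash():
    """Return the hash of the currently checked-out FuzzBench commit."""
    return subprocess.check_output(['git', 'rev-parse', 'HEAD']).decode().strip()


def diff_link(previous_hash, current_hash):
    """Build a GitHub compare link between two experiment commits."""
    return '{repo}/compare/{prev}...{curr}'.format(
        repo=FUZZBENCH_REPO, prev=previous_hash, curr=current_hash)


# Hypothetical usage: store get_git_hash() when creating the experiment,
# then call diff_link(last_experiment_hash, this_experiment_hash) in the report.
```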

inconsistent BENCHMARK value

Hi @jonathanmetzman,

When I tested AFLSmart on the libpcap_fuzz_both benchmark, I noticed that the value of the BENCHMARK env variable may not be consistent with the name of the corresponding benchmark stored in the benchmarks folder.

For example, when I run the experiment for the libpcap_fuzz_both benchmark, the BENCHMARK value is "libpcap". Maybe it uses the benchmark name from the OSS-Fuzz project? I don't know whether this is an issue or not.

Thuan

The docker image has a too old LLVM version

Hi,
I'm trying to build AFL++ without trace-pc using LLVM 9.
However, your base image is Ubuntu 16, which is too old and has a legacy libc++.
Even installing LLVM 9 from the LLVM repos, I cannot replace libc++ without recompiling it (and that is not a good idea, as it heavily depends on libc).

I tried

ARG parent_image=gcr.io/fuzzbench/base-builder
FROM $parent_image

# Install LLVM 9
RUN apt-get update && \
    apt-get install software-properties-common -y && \
    apt-get install wget -y && \
    wget https://apt.llvm.org/llvm.sh && \
    chmod +x llvm.sh && \
    ./llvm.sh 9 && \
    apt-get install libc++-dev -y && \
    update-alternatives --install /usr/bin/cc cc /usr/bin/clang-9 100 && \
    update-alternatives --install /usr/bin/c++ c++ /usr/bin/clang++-9 100 && \
    update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-9 100 && \
    update-alternatives --install /usr/bin/clang clang /usr/bin/clang-9 100

But nothing works. Can you provide a base-builder with a more recent OS?
I understand that some fuzzers need Ubuntu 16, like QSYM, but LLVM 3.8 is really too old.

wpantund benchmark can initiate network connections

The wpantund benchmark can initiate networking connections to remote networks under some conditions. Attaching a testcase for that:

1788a8b166e1fe4e854bdbb1572e2bb6.00000080.honggfuzz.cov.txt

$ strace -f ~/fuzz/wpantund/wpantund-fuzz ~/fuzz/wpantund/corpus/1788a8b166e1fe4e854bdbb1572e2bb6.00000080.honggfuzz.cov
...
openat(AT_FDCWD, "/tmp", O_RDWR|O_EXCL|O_TMPFILE, 0600) = 6
fcntl(6, F_GETFL)                       = 0x418002 (flags O_RDWR|O_LARGEFILE|O_TMPFILE)
fstat(6, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
write(6, "\235mfProxy:EnabiG:NNN\305SocketPath \""..., 127) = 127
lseek(6, 0, SEEK_SET)                   = 0
read(6, "\235mfProxy:EnabiG:NNN\305SocketPath \""..., 4096) = 127
read(6, "", 4096)                       = 0
close(6)                                = 0
brk(0xb518000)                          = 0xb518000
socket(AF_INET6, SOCK_STREAM, IPPROTO_IP) = 6
connect(6, {sa_family=AF_INET6, sin6_port=htons(4951), inet_pton(AF_INET6, "::ffff:0.0.3.122", &sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, 28^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
..

I know nothing about wpantund internals, so maybe this can be fixed (if you want to fix it) in the fuzzer harness code (maybe some API option needs to be set).

If not on API level, a quick hack could be to add something like

unshare(CLONE_NEWUSER|CLONE_NEWNET);

at the beginning of the file.

I'm not sure if we want to leave the current behavior as-is. On one hand, it might be a useful case for testing fuzzing timeouts; on the other, benchmarks initiating connections over non-loopback sounds somewhat wrong (and they also add unnecessary timeouts).

Don't set CFLAGS/CXXFLAGS/CC/CXX when building fuzzer builders

The OSS-Fuzz builder image sets these environment variables:

CC=clang
CXX=clang++
CFLAGS=-O1 -fno-omit-frame-pointer -gline-tables-only -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
CXXFLAGS=-O1 -fno-omit-frame-pointer -gline-tables-only -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -stdlib=libc++

In base-builder (used for standard benchmarks) we set:
CXXFLAGS=-stdlib=libc++

This is bad for two reasons:

  1. It makes it easier to mess up compiling fuzzers that have weird dependencies. For example, aflsmart couldn't compile with these flags.
  2. We shouldn't be compiling fuzzing engines/fuzzers with -O1.

We can fix this problem by:

  1. (for standard benchmarks) installing libstdc++-dev so that libc++ doesn't need to be used.
  2. (for OSS-Fuzz benchmarks) not setting these environment variables until the build is done. We could also add an image between the actual OSS-Fuzz project image and the fuzzer builder, but we can't actually unset environment variables; we can only set them to the empty string, due to a limitation in Docker.

Support fuzzer feature testing via configurations (also allow configurations in fuzzers dir).

Several fuzzers here, like honggfuzz, afl, afl++, aflfast, fairfuzz, etc., don't have just one feature but many, plus several options to tweak behaviour and performance.

It would be a great opportunity to be able to test them all. As a simple example:
AFLfast has 5 power schedules. Benchmarking all 5 schedules on all targets would help us learn which are better than others and which are better on specific types of targets (no corpus vs. large good corpus, binary input vs. text input, etc.).

Is that something that is "allowed", "supported" or a "goal" for this project?
Because IMHO this would be the most important aspect of FuzzBench, as just comparing one fuzzer in a fixed configuration against other fuzzers in theirs doesn't help much. Effective fuzzing campaigns must deploy various fuzzers and combine their results; hence, learning which fuzzer in which configuration is strong on which kind of target scenario - that is IMHO the important learning experience.

Plus, of course, optimizing the fuzzers themselves. But even for that I need to run "my" fuzzer in various configurations and test stages.

Support specifying which benchmarks to include in the report

The current set of benchmarks is diverse, which is great. However, since some fuzzers may not work very well with all types of input formats (e.g., context-free grammar inputs vs. chunk-based inputs, binary inputs vs. text inputs), it would be great if there were an option for us to choose which benchmarks should be included in the report. Of course, when we submit papers for review, we should explain why we exclude some benchmarks from our evaluation.
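
For illustration, such a benchmark filter could be as simple as the following pandas sketch; the data.csv.gz layout and the 'benchmark' column name are assumptions for the example, not something specified in this issue:

```python
import pandas as pd

# Hypothetical raw report data; the real schema may differ.
data = pd.read_csv('data.csv.gz')

# Example benchmark names; replace with the ones relevant to the evaluation.
wanted_benchmarks = ['libpng-1.2.56', 'libjpeg-turbo-07-2017']

filtered = data[data['benchmark'].isin(wanted_benchmarks)]
filtered.to_csv('data-filtered.csv.gz', index=False)
```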

compile targets with -O2 or -O3?

aflplusplus and fastcgs compile targets with -O3 (changed by individual devs, one of them me) and I noticed that all other projects are set to -O2.

This changes the target CFGs, execution speeds, etc., so it should be either -O2 or -O3 for all fuzzers (to make comparisons fair), IMHO.

Is there a reason that -O2 was initially chosen?

Eclipser was mishandling initial seed corpus

Hi, thank you for open-sourcing a great project.

I investigated the results of Eclipser and found that the coverage achieved by the initial seed corpus was not correctly reflected in the final output. The issue has gone unnoticed because most of the experiments in the Eclipser paper were performed with an empty initial seed corpus.

The issue (along with some other ones) is fixed in commit b072f045324869 of https://github.com/SoftSec-KAIST/Eclipser. I will open a PR to update the pinned commit of Eclipser.

What benchmarks should be used

Since we can use any OSS-Fuzz target as a benchmark, we should figure out what we should look for in benchmarks. Community feedback here is definitely appreciated!

make install-dependencies fails

Hi,
I ran make install-dependencies and it failed with:

source .venv/bin/activate && python3 -m pip install -r requirements.txt
/bin/sh: 1: source: not found
Makefile:20: recipe for target '.venv/bin/activate' failed
make: *** [.venv/bin/activate] Error 127

I'm on Ubuntu 18.04.4, where /bin/sh is symlinked to dash rather than bash, so source is not available.

I worked around it by invoking bash directly; however, I'm not opening a PR because this is a crude workaround:

include docker/build.mk

VENV_ACTIVATE := .venv/bin/activate

${VENV_ACTIVATE}:
	python3 -m pip install --user virtualenv
	rm -rf .venv
	virtualenv -p `which python3` .venv
	bash -c "source ${VENV_ACTIVATE} && python3 -m pip install -r requirements.txt"

install-dependencies: ${VENV_ACTIVATE}

presubmit: install-dependencies
	bash -c "source ${VENV_ACTIVATE} && python3 presubmit.py"

format: install-dependencies
	bash -c "source ${VENV_ACTIVATE} && python3 presubmit.py format"

licensecheck: install-dependencies
	bash -c "source ${VENV_ACTIVATE} && python3 presubmit.py licensecheck"

lint: install-dependencies
	bash -c "source ${VENV_ACTIVATE} && python3 presubmit.py lint"

typecheck: install-dependencies
	bash -c "source ${VENV_ACTIVATE} && python3 presubmit.py typecheck"

docs-serve:
	cd docs && bundle exec jekyll serve --livereload

Use more seeds in non-OSS-Fuzz benchmarks

Hi,

I see that in several benchmarks (e.g., LibPNG, LibJPEG-Turbo) there is only one seed (i.e., sample input) in the initial corpus. This would affect the results of several fuzzers, like my structure-aware fuzzer AFLSmart (https://github.com/aflsmart/aflsmart). As we know, seed corpus quality is very important for (mutation-based) fuzzing. I think it would be good if all benchmarks came with a decent seed corpus.

Best regards,

Thuan

afl++ build is broken on openssl_x509

CC @andreafioraldi

Similar to #107 CI successfully built openssl_x509 with AFL++ but failed to do so on 2020-03-11.

The build fails with this error:

/afl/afl-clang-fast -Iinclude -pthread -m64 -fno-omit-frame-pointer -g -Wa,--noexecstack -Qunused-arguments -pthread -Wl,--no-as-needed -Wl,-ldl -Wl,-lm -Wno-unused-command-line-argument -O2 -pthread -Wno-unused-command-line-argument -O2 -fno-sanitize=alignment -DPEDANTIC -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -MMD -MF test/buildtest_c_e_os2-bin-buildtest_e_os2.d.tmp -MT test/buildtest_c_e_os2-bin-buildtest_e_os2.o -c -o test/buildtest_c_e_os2-bin-buildtest_e_os2.o test/buildtest_e_os2.c
test/p_test-dso-p_test.o: In function `OSSL_provider_init':
/src/openssl/test/p_test.c:(.text+0x2b): undefined reference to `__afl_area_ptr'
/src/openssl/test/p_test.c:(.text+0x44): undefined reference to `__afl_prev_loc'
/src/openssl/test/p_test.c:(.text+0x7c): undefined reference to `__afl_prev_loc'
test/p_test-dso-p_test.o: In function `p_get_params':
/src/openssl/test/p_test.c:(.text+0x189): undefined reference to `__afl_prev_loc'
/src/openssl/test/p_test.c:(.text+0x198): undefined reference to `__afl_area_ptr'
/src/openssl/test/p_test.c:57: undefined reference to `__afl_prev_loc'
/src/openssl/test/p_test.c:57: undefined reference to `__afl_area_ptr'
/src/openssl/test/p_test.c:93: undefined reference to `__afl_prev_loc'
/src/openssl/test/p_test.c:93: undefined reference to `__afl_area_ptr'
/src/openssl/test/p_test.c:(.text+0x3e8): undefined reference to `__afl_prev_loc'
/src/openssl/test/p_test.c:(.text+0x3f7): undefined reference to `__afl_area_ptr'
clang-10: error: linker command failed with exit code 1 (use -v to see invocation

I will try to investigate the fix for this, as well as whether there is a way to make CI more likely to catch failures that happen in production.

Support downloading of OSS-Fuzz corpora for use in benchmarks

This should be easy to implement but maybe a bit tough to manage in terms of running this service. Should we put these benchmarks in the same report?

This feature would be quite nice as it would allow us to determine if a fuzzer can break through a coverage wall that ClusterFuzz encounters from fuzzing (using LibFuzzer, HonggFuzz and AFL).

Issues in installation

After I initiated the installation, I got this error:

  • make -j 8 -C src fuzzing
    make: Entering directory '/src/BUILD/src'
    GEN hb-buffer-deserialize-json.hh
    GEN hb-buffer-deserialize-text.hh
    GEN hb-ot-shape-complex-indic-machine.hh
    GEN hb-ot-shape-complex-myanmar-machine.hh
    GEN hb-ot-shape-complex-use-machine.hh
    CXX libharfbuzz_fuzzing_la-hb-buffer.lo
    CXX libharfbuzz_fuzzing_la-hb-blob.lo
    CXX libharfbuzz_fuzzing_la-hb-buffer-serialize.lo
    hb-buffer-serialize.cc:389:10: fatal error: 'hb-buffer-deserialize-json.hh' file not found
    #include "hb-buffer-deserialize-json.hh"
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1 error generated.
    Makefile:1606: recipe for target 'libharfbuzz_fuzzing_la-hb-buffer-serialize.lo' failed
    make: *** [libharfbuzz_fuzzing_la-hb-buffer-serialize.lo] Error 1
    make: *** Waiting for unfinished jobs....
    make: Leaving directory '/src/BUILD/src'
    Traceback (most recent call last):
    File "", line 1, in
    File "/src/fuzzer.py", line 41, in build
    utils.build_benchmark()
    File "/src/fuzzers/utils.py", line 42, in build_benchmark
    subprocess.check_call(['/bin/bash', '-ex', build_script], env=env)
    File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['/bin/bash', '-ex', 'benchmark/build.sh']' returned non-zero exit status 1.
    The command '/bin/sh -c python3 -u -c "import fuzzer; fuzzer.build()"' returned a non-zero code: 1
    docker/build.mk:114: recipe for target '.libfuzzer-harfbuzz-1.3.2-builder' failed
    make: *** [.libfuzzer-harfbuzz-1.3.2-builder] Error 1

help?

AFLplusplus declined in aggregate rank from 2020-03-09 to 2020-03-11

2020-03-11 was the first experiment we ran with the new AFL++ fuzzer.py.

The aggregate ranking in this report claims AFL++ does about the same as AFL, whereas older reports, such as 2020-03-09, consistently show otherwise.

Since AFL++ already used afl-clang-fast in 2020-03-09, the aggregate change could be related to any of the following changes in fuzzer.py:

  1. Use of fidgety AFL.
  2. Use of libdislocator instead of ASAN.
  3. Use of laf-intel.
  4. Use of instrim.

I think the changes causing the slip fall into two broad categories:

  1. The change indeed hurts performance and fuzzbench is correctly pointing this out.
  2. (possible for build-time changes like instrumentation) FuzzBench's use of sancov means that new program behavior found using special instrumentation like laf-intel cannot be detected; maybe the current metrics FuzzBench uses are biased against these kinds of instrumentation.

I quickly eyeballed the two reports and put my observations about performance changes on different benchmarks in this table:

Benchmark         Observation
bloaty            decline
curl              significant decline
freetype          decline
irssi             significant decline (2nd -> last place)
lcms              very significant decline
libjpeg-turbo     drop from top tier to next
libpcap           increase from bottom tier to top tier
libxml            decline from upper tier to lower
sqlite3_ossfuzz   big increase to join qsym in top tier

I think libpcap and sqlite are some interesting success stories.

Plot error when generating report

Hi there,

I was trying to run experiment/generate_report.py after an experiment.

However, I received an error saying:
"No axis named columns for object type <class 'pandas.core.series.Series'>"
which is triggered by this line.

To reproduce this error, here is the script that I used:

#!/bin/bash

EXPERIMENT_NAME=testrun

PYTHONPATH=. python3 experiment/generate_report.py \
    experiments $EXPERIMENT_NAME \
    --quick \
    -c \
    --fuzzers afl libfuzzer \
    --report-dir ${EXPERIMENT_NAME}_report

And here is the raw data file:

data.csv.gz

One should be able to reproduce the error using the script above.

Use bugs to measure fuzzer performance

My view of fuzzers is that "better" fuzzers can find more exploitable bugs than worse fuzzers. While coverage is a decent, easy-to-use proxy for this, we should look into using crashes to determine fuzzer performance.

We already save the crashes found in each cycle/corpus snapshot.

Here's what's left to do that I can think of:

  1. Implementing a method for identifying crashes. ClusterFuzz's method (similar to stack hashing) works pretty well in practice.

  2. Finding a way to rank fuzzers based on the crashes they find.
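
A minimal sketch of the stack-hashing idea from point 1, assuming crash stack traces are available as sanitizer-style text; the regex and the number of top frames used are illustrative assumptions, not ClusterFuzz's actual logic:

```python
import hashlib
import re

TOP_FRAMES = 3  # assumption: dedupe on the top N frames

# Matches sanitizer-style frames such as "#0 0x4f0c10 in png_read_row ..."
FRAME_RE = re.compile(r'#\d+ 0x[0-9a-f]+ in (\S+)')


def crash_signature(stacktrace_text):
    """Hash the top frames of a crash stack to group duplicate crashes."""
    frames = FRAME_RE.findall(stacktrace_text)[:TOP_FRAMES]
    return hashlib.sha1('|'.join(frames).encode()).hexdigest()


def group_crashes(stacktraces):
    """Map signature -> list of crashes sharing that signature."""
    groups = {}
    for trace in stacktraces:
        groups.setdefault(crash_signature(trace), []).append(trace)
    return groups
```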

aflsmart build is broken

@thuanpv
Running make build-aflsmart-libpcap_fuzz_both gives me the following error:

Step 5/7 : RUN git clone https://github.com/aflsmart/aflsmart /afl &&     cd afl &&     git checkout df095901ea379f033d4d82345023de004f28b9a7 &&     AFL_NO_X86=1 make
fatal: reference is not a tree: df095901ea379f033d4d82345023de004f28b9a7
The command '/bin/sh -c git clone https://github.com/aflsmart/aflsmart /afl &&     cd afl &&     git checkout df095901ea379f033d4d82345023de004f28b9a7 &&     AFL_NO_X86=1 make' returned a non-zero code: 128
make: *** [docker/build.mk:193: .aflsmart-libpcap_fuzz_both-oss-fuzz-builder-intermediate] Error 128

I wonder why this wasn't caught by our CI. I will test this again to check.
What's also weird is how this affected the results of the last experiment: 2020-03-11.
Some of the benchmarks are missing because it failed to build for them (e.g. bloaty_fuzz_target), but others, such as freetype, have data.

I'll point out that the aggregate ranking for aflsmart in that report is not really accurate because it is missing benchmarks.

research feature: tracking branches taken to form branch probabilities

I admit this is more of a research feature request, but I'm filing it because the FuzzBench setup seems like a good way to gather the data. I am not sure whether this data is already being gathered through one of the other coverage calculations, so please forgive me if it is. The idea is to track the number of times different branch directions are taken, independent of the entire execution path up to that point. These counts can be used to form branch probabilities and could be an interesting way to investigate or find rare paths, or why a path is rare for all fuzzers but one, etc.

I apologize if this already exists or I missed another ticket; this may relate in some way to #42. Further, since this is a research feature request, I do not expect it to be high priority. I just wanted to get it known, or find out whether that information is already there to be found.
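
As a rough illustration of the branch-probability idea, assuming per-branch hit counters were available in some form (the data layout below is made up for the example):

```python
from collections import defaultdict

# Hypothetical input: (branch_site, direction taken) -> hit count.
hits = {
    ('parse.c:42', 'true'): 9900,
    ('parse.c:42', 'false'): 100,
}

# Total hits per branch site, used as the denominator.
totals = defaultdict(int)
for (site, _), count in hits.items():
    totals[site] += count

probabilities = {
    (site, direction): count / totals[site]
    for (site, direction), count in hits.items()
}

# Branches with very low probability are candidates for "rare path" analysis.
rare = {key: p for key, p in probabilities.items() if p < 0.05}
print(rare)
```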

Use deadsnakes ppa instead of compiling Python from scratch in base image

RUN cd /tmp/ && \
curl -O https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tar.xz && \
tar -xvf Python-$PYTHON_VERSION.tar.xz && \
cd Python-$PYTHON_VERSION && \
./configure --enable-optimizations && \
make -j install && \
rm -r /tmp/Python-$PYTHON_VERSION.tar.xz /tmp/Python-$PYTHON_VERSION

Python is compiled from scratch in the base-image; I noticed during the CI patch that this takes up a decent chunk of time at the start. Instead, it's possible to just pull it in from the deadsnakes PPA: https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa

It'll also probably be a slightly faster interpreter, because they build with profile-guided and link-time optimizations.

Support specifying which fuzzers to include in the report

One often wants to see a report that compares only two or just a few fuzzers. E.g., compare only

  • afl with libfuzzer, or
  • two different versions of the same fuzzer, or
  • only fuzzers with dynamic binary instrumentation, etc.

This can be supported by adding a --fuzzers flag to the report generator, where users can list the fuzzers they would like the report to compare. We should allow specifying fuzzers with different versions, by either tagging them with their version number or perhaps with an experiment name. E.g., generate_report --fuzzers afl:v2.5 afl:v2.4, or generate_report --fuzzers afl:experiment-2020-01-15 afl:experiment-2020-02-15.

This is also useful to have because specifying a smaller/different subset of fuzzers can affect the result of the top-level statistical analysis (Friedman test, critical difference plot). Further, when we compare only two fuzzers, we can even do more precise statistical tests with more specific visualizations. On the benchmark level, we don't need a pairwise comparison matrix of statistical significance, since we only have a single pair. On the experiment level, we don't need the Friedman/Nemenyi test (which compares more than two fuzzers and is rather conservative); instead we can use the Wilcoxon signed-rank test, which is specifically designed for comparing two things (i.e., matched samples over the different benchmarks).
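
For the two-fuzzer case, the Wilcoxon signed-rank comparison mentioned above could look roughly like this with SciPy, assuming one coverage value per fuzzer per benchmark (the numbers are illustrative):

```python
from scipy.stats import wilcoxon

# Hypothetical per-benchmark median coverage for two fuzzers,
# matched by benchmark (same order in both lists).
afl_coverage = [2400, 5100, 980, 3300, 7600, 1500]
libfuzzer_coverage = [2100, 5600, 1020, 3100, 8100, 1400]

statistic, p_value = wilcoxon(afl_coverage, libfuzzer_coverage)
print(f'Wilcoxon statistic={statistic}, p={p_value:.3f}')
```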

Set an optimization level by default

Unless there is a specific reason not to in particular cases, we probably want all fuzzers using the same optimization level to compile benchmarks. This means that we should probably set this in CFLAGS/CXXFLAGS before fuzzer.build() is called so that fuzzer integrators don't need to think about this.
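
A minimal sketch of what setting a default could look like, assuming it runs right before fuzzer.build(); the variable handling is a guess, not FuzzBench's actual code:

```python
import os

DEFAULT_OPT_FLAG = '-O2'  # assumed default; see the -O2 vs. -O3 discussion above


def set_default_optimization_flags():
    """Append a default optimization level unless the integrator already set one."""
    for var in ('CFLAGS', 'CXXFLAGS'):
        flags = os.environ.get(var, '')
        if '-O' not in flags:  # crude check, good enough for a sketch
            os.environ[var] = (flags + ' ' + DEFAULT_OPT_FLAG).strip()


# Called before fuzzer.build() so integrators don't need to think about it.
set_default_optimization_flags()
```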

Support dictionaries (mostly done, only need to add way to test dictionaryless experiment)

Many of our benchmarks have dictionaries, it would probably be interesting to see how using dictionaries affects results (since dictionaries are pretty widely used in the real world).
The downsides I see to supporting dictionaries are:

  1. A wider interface for integrating fuzzers.
  2. Some fuzzers (like Eclipser, I believe) don't support dictionaries, so in some sense the comparison won't be apples to apples.

[reports] data.csv.gz doesn't need to contain the id column

It has these columns because data.csv.gz contains data from a join query of snapshots on trials.
time_started and time_ended come from trials, but they are probably not useful for the analysis people want to do, so they just take up space at this point.
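
Dropping the unneeded columns could be as simple as the following pandas sketch (column names as mentioned in this issue, assuming they exist in the file):

```python
import pandas as pd

data = pd.read_csv('data.csv.gz')
# Drop join artifacts that aren't useful for analysis; ignore columns that
# are already absent.
data = data.drop(columns=['id', 'time_started', 'time_ended'], errors='ignore')
data.to_csv('data.csv.gz', index=False)
```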

Speed up measuring

Measuring is currently a bottleneck that slows down the entire experiment cycle. Experiments are taking a day or two to "complete" even after the last trial terminates because measuring is slow. There are a couple of ways measuring can be improved to fix this situation:

  1. Currently measuring happens on one (very large) machine. If it ran on multiple machines it could be scaled horizontally.

  2. Currently measuring works by:
    i. Syncing the entire data directory
    ii. Processing the corpora in parallel
    iii. Go to i.

This means that if some benchmarks are slower to process than others, cores will be idle while corpora for those benchmarks are being measured.

I've had luck with copying each corpus archive in parallel and then processing it. This might break things since gsutil frequently has issues when used in parallel, but I think it's worth trying.

  1. Currently we only measure test cases in a corpus archive that wasn't in the previous archive. This means that corpus archives in a trial must be measured sequentially. We can maybe improve things by letting archives be measured in parallel.

  2. Eliminating slow benchmarks. Although I haven't analyzed this thoroughly, it seems like certain benchmarks slow down the entire measuring process by being much slower than everything else. This can pose a problem for accuracy as well since we won't keep measuring a corpus that takes longer than 5 minutes.

  3. Encourage better use of output_corpus: Currently AFL-based fuzzers put their entire output directory in output_corpus, meaning that the measurers need to process hangs and crashes (which we will want to support).

Solving 2-3 will help with 1 which frankly should be done anyway. I certainly don't plan on implementing all of these solutions, just enough that this problem is solved.
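
A minimal sketch of the "copy each corpus archive in parallel, then process it" idea; the helpers below are stand-ins, not FuzzBench's measurer code:

```python
import multiprocessing
import os
import shutil
import tarfile
import tempfile


def copy_archive(archive_path):
    """Stand-in for fetching one corpus archive (in production: gsutil/GCS)."""
    local_path = os.path.join(tempfile.mkdtemp(), os.path.basename(archive_path))
    shutil.copy(archive_path, local_path)
    return local_path


def measure_archive(local_path):
    """Stand-in for coverage measurement: here, just count the test cases."""
    with tarfile.open(local_path) as archive:
        return len(archive.getnames())


def copy_then_measure(archive_path):
    return measure_archive(copy_archive(archive_path))


def measure_all(archive_paths, workers=8):
    # Each worker copies and then measures its own archive, so a slow
    # benchmark no longer leaves other cores idle during a global sync step.
    with multiprocessing.Pool(workers) as pool:
        return pool.map(copy_then_measure, archive_paths)
```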

Support benchmark-specific arguments for a fuzzer

Hi,

I am reading https://google.github.io/fuzzbench/getting-started/adding-a-new-fuzzer/ to add AFLSmart (https://github.com/aflsmart/aflsmart) so it can be benchmarked using FuzzBench. However, AFLSmart supports a few more arguments, especially one taking an input model (-g) to enable its structure-aware capability. I have two questions regarding this:

  1. How to support this -g argument in FuzzBench?
  2. The input model is target-program-specific: for example, LibPNG takes a PNG input model and LibJPEG-turbo takes a JPEG input model. As far as I understand, FuzzBench uses projects integrated into OSS-Fuzz for benchmarking, so where should I store AFLSmart's input models and how do I map them to the corresponding OSS-Fuzz projects?

I am thinking about a simple solution that could answer the above questions, but I am not sure it is desirable: fix the -g argument to a single input model name, say "-g model.xml", and when building an OSS-Fuzz project, copy the corresponding input model to the working folder and rename it to model.xml.

Thuan
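
One hedged sketch of the mapping idea described in the question: keep a per-benchmark model table inside the fuzzer directory and copy the matching model to a fixed name during the build (benchmark and file names below are illustrative):

```python
import os
import shutil

# Illustrative mapping from benchmark name to an AFLSmart input model
# shipped alongside the fuzzer integration.
BENCHMARK_TO_MODEL = {
    'libpng-1.2.56': 'models/png.xml',
    'libjpeg-turbo-07-2017': 'models/jpeg.xml',
}


def install_model(out_dir):
    """Copy the benchmark-specific model to the fixed path the fuzzer expects."""
    benchmark = os.environ.get('BENCHMARK', '')
    model = BENCHMARK_TO_MODEL.get(benchmark)
    if model:
        shutil.copy(model, os.path.join(out_dir, 'model.xml'))
    # The fuzz command can then unconditionally pass "-g model.xml".
```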

Support tracks to compare classes of fuzzers (binary-only, source-only, hybrids, etc)

Thanks for maintaining an awesome project!

I am writing to discuss my concern about the unfairness of FuzzBench results due to the difference between binary-level and source-level fuzzers.

When I look at the current sample report, all the tools used here except Eclipser run with source-level instrumentation (with afl-cc). Eclipser, on the other hand, uses QEMU to instrument binaries.

It is well known that binary-level instrumentation incurs significant overhead (several orders of magnitude) compared to source-level instrumentation. Therefore, comparing Eclipser with source-level fuzzers, e.g., AFL, is not entirely fair, as they have different goals and uses. However, comparing Eclipser with AFL running in QEMU mode (the -Q option) would be fair, for example.

So I would like to suggest separating FuzzBench into two tracks: a binary track and a source track. In the binary track, we can include AFL-QEMU, Eclipser, VUzzer, etc. I believe showing two sets of graphs for each program would be enough. For your information, having multiple tracks when comparing tools is common practice in other domains. For example, SMT-COMP currently has 6 tracks: https://smt-comp.github.io/2019/results.html.

This way, people can appreciate binary-level fuzzing research more 😄 I truly believe this will benefit our community as well.

Thank you!

Benchmark DFT-based fuzzing

//cc @kcc

I'd like to benchmark DFT-based fuzzing in late Q2. Filing this issue now to outline the requirements, which hopefully can be addressed by then (if anything has to be addressed separately at all).

What we need to benchmark

  1. Vanilla libFuzzer (the most recent version on the day of starting the experiment)
  2. (up to N) modified libFuzzer versions, with different heuristics / parameters tuned as per the work mentioned in google/oss-fuzz#1632, google/oss-fuzz#3398, and go/dft-cf design doc.

Requirements

  1. Different libFuzzer builds mentioned above are "fuzzers" for the experiment
  2. Each benchmark should have saturated corpora
  3. Each benchmark should have data flow traces for the given corpora
    This is out of scope of FuzzBench's responsibilities; we just need to be able to include additional files in benchmarks. Prior to the experiment, we will build all benchmarks with DFSan and collect data flow traces using the dataflow driver.
  4. Because of the points 2 and 3, we will use arbitrary fuzz targets from OSS-Fuzz

Understanding differences between reports

I don't understand the differences in coverage achieved between different report dates. Specifically, certain targets have a significant change in the coverage achieved by the top 5 fuzzers.

For instance, when comparing 2020-03-19 and 2020-03-16:

  • curl changes from ~6250 to ~5400 edges covered
  • woff2 changes from ~2400 to ~1050 edges covered
  • wpantund changes from ~9100 to ~4100 edges covered

Other targets maintain the same relative number of edges covered. I didn't see changes in the way the targets were compiled that would account for this kind of difference. What am I missing that would cause these kinds of differences?

Add benchmarks from svrwb-fuzz-benchmark-suite

I started a similar project last year, and since it probably will not be as successful as this one, I figured I should donate the cases I have. They are various OSS apps that have multiple CVE-assigned vulnerabilities in a single version; they can be found at https://github.com/veracode-research/svrwb-fuzz-benchmark-suite/tree/master/cases

After a quick look, it seems that about 9 of them are not covered by you all (and perhaps there are gaps between our sqlite cases and yours). Each includes multiple vulnerabilities with CVEs and the samples that trigger them. They may have other vulns, but those would need to be found as they are used. The apps/libs are:

  • audiofile
  • imageworsener
  • jasper
  • lame
  • libarchive
  • perl
  • tcpdump
  • wavpack
  • ytnef

I am willing to do the work to prep them to be added to your project, but I am curious: should I create an issue for each?

There are additional tests to add that I have not included either, including ChakraCore 1.4.1 (there are multiple vulns), and a few others from recent papers. Any guidance on how to go forward is appreciated.

libpcap issue with AFL-based fuzzers

It seems that the libpcap benchmark has some issues with AFL-based fuzzers. I ran some tests locally and saw that AFL and AFLSmart output something strange, copied below -- the queue cycle number keeps increasing rapidly while there are no new test cases:

[*] Entering queue cycle 19946.
[*] Fuzzing test case #0 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #1 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #2 (3 total, 0 uniq crashes found)...
[*] Entering queue cycle 19947.
[*] Fuzzing test case #0 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #1 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #2 (3 total, 0 uniq crashes found)...
[*] Entering queue cycle 19948.
[*] Fuzzing test case #0 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #1 (3 total, 0 uniq crashes found)...
[*] Fuzzing test case #2 (3 total, 0 uniq crashes found)...

Meanwhile, libfuzzer works well. Its output is copied below:

NEW_FUNC: 0x5924e0 (/out/fuzz_both+0x5924e0)
NEW_FUNC: 0x592550 (/out/fuzz_both+0x592550)
#117110: cov: 330 ft: 466 corp: 59 exec/s 21364 oom/timeout/crash: 0/0/0 time: 5s job: 2 dft_time: 0
#189290: cov: 378 ft: 560 corp: 123 exec/s 18045 oom/timeout/crash: 0/0/0 time: 9s job: 3 dft_time: 0
#288835: cov: 397 ft: 615 corp: 161 exec/s 19909 oom/timeout/crash: 0/0/0 time: 14s job: 4 dft_time: 0

Link to data bucket in reports

We don't link to data buckets in reports.
This would be helpful since it would allow people to do their own processing on corpora/output directories, and is generally useful for transparency.
The quick and dirty way to do this is to link, from the report page, to 'http://commondatastorage.googleapis.com/fuzzbench-data/' + $EXPERIMENT_NAME.
A cleaner way is to save the data directory when we create the experiment in the database.
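
The quick-and-dirty URL construction described above, as a small sketch (the bucket URL is taken from this issue; the report-side wiring is assumed):

```python
DATA_BUCKET_URL = 'http://commondatastorage.googleapis.com/fuzzbench-data/'


def experiment_data_link(experiment_name):
    """Build the public link to an experiment's raw data directory."""
    return DATA_BUCKET_URL + experiment_name


print(experiment_data_link('2020-03-11'))
```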

Use same builder image for OSS-Fuzz and non-OSS-Fuzz benchmarks

Having two different base builder images makes it easier to run into issues like #107.
Instead we should probably have one base-builder image that gets used to build every benchmark.
This image will inherit from OSS-Fuzz's base-builder and should contain everything needed to build the non-OSS-Fuzz benchmarks.

QSYM results are inaccurate, does not work on modern kernel

As per Josh Bundt,

QSYM relies on Intel PIN v2.14-71313 and hence has not worked at all on a modern kernel. He only found it working on a 3.x kernel from CentOS 7 or Ubuntu 14.04.

We have verified that QSYM queue dir is empty, e.g. https://storage.cloud.google.com/fuzzbench-data/2020-03-11/experiment-folders/sqlite3_ossfuzz-qsym/trial-55048/corpus/corpus-archive-0097.tar.gz?authuser=0&_ga=2.54350888.-1894397538.1582244984
The current results probably look better than they should due to running AFL in slave mode (-s), but we need to verify this.
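
A small sketch of the kind of check that can verify the empty queue directory, assuming a locally downloaded corpus archive and that QSYM's output lives under a queue/ subdirectory:

```python
import tarfile


def queue_entries(corpus_archive_path):
    """List members of the archive that look like queue entries."""
    with tarfile.open(corpus_archive_path) as archive:
        return [name for name in archive.getnames() if '/queue/' in name]


entries = queue_entries('corpus-archive-0097.tar.gz')
print(f'{len(entries)} queue entries found')
```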

Some honggfuzz targets seem to be not instrumented

Hi, I was looking at some poorer honggfuzz results, e.g. for the systemd parser, and this log file

https://console.cloud.google.com/storage/browser/_details/fuzzbench-data/sample/experiment-folders/systemd_fuzz-link-parser-honggfuzz/trial-40413/results/fuzzer-log.txt

might suggest that the binary is not instrumented at all. That is, it is slightly instrumented, probably for some str*cmp comparisons during final linking, but not really for trace-pc-guard, trace-cmp, trace-div, or indirect-calls, because logs like

Size:8192 (i,b,hw,ed,ip,cmp): 0/0/0/0/0/24, Tot:0/0/0/0/0/4167

mean that there are no pc-guard nor indirect-pc paths seen, only a few cmp hits.

Looking at https://github.com/google/fuzzbench/blob/master/benchmarks/systemd_fuzz-link-parser/oss-fuzz.yaml, the target is somehow shared with OSS-Fuzz, so it is probably built in some special way, unlike the majority of targets.

Could it be some obvious problem that you can spot here, or will it require some deeper debugging? I can do the debugging, I'm just hoping you might spot the problem way faster :)

Add graphs of coverage/iter

While a graph of coverage over time shows the true performance of a fuzzer, it does not allow controlling for hardware, threads, scaling, etc. A graph showing coverage over fuzz cases reveals the "density" of the fuzzer. It's a really meaningful graph for determining how frequently the fuzzer produces a good input.

-B
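
A minimal matplotlib sketch of a coverage-over-executions plot; the column names and numbers are assumptions for illustration, not FuzzBench's schema:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical snapshot data: cumulative executions vs. edges covered.
snapshots = pd.DataFrame({
    'fuzzer': ['afl'] * 3 + ['libfuzzer'] * 3,
    'execs': [1e6, 5e6, 1e7, 1e6, 5e6, 1e7],
    'edges_covered': [1200, 1900, 2300, 1400, 2100, 2600],
})

for fuzzer, group in snapshots.groupby('fuzzer'):
    plt.plot(group['execs'], group['edges_covered'], label=fuzzer)

plt.xscale('log')
plt.xlabel('Executions (fuzz cases)')
plt.ylabel('Edges covered')
plt.legend()
plt.savefig('coverage_per_exec.png')
```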

Process ```python3 /src/experiment/runner.py``` eats 100% of a single core on a multi-CPU machine

When running a local experiment with e.g.:

make run-honggfuzz-proj4-2017-08-14

The following process tree (inside docker) is created

root     31517  1.8  0.9 2108508 79024 ?       Ssl  16:07   8:08 /usr/sbin/dockerd -H fd://
root     31529  0.3  0.2 1912212 22624 ?       Ssl  16:07   1:35  \_ docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root      8625  0.0  0.1 773488 10964 ?        Sl   23:29   0:00      \_ docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/13b1d8df9fcfacc96627cc5d8c6906aa5793fad0b8b5e78935ced70f8c911db4 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      8642  0.0  0.0   4504   748 pts/0    Ss+  23:29   0:00          \_ /bin/sh -c $ROOT_DIR/docker/benchmark-runner/startup-runner.sh
root      8677  0.0  0.0  18068  2904 pts/0    S+   23:29   0:00              \_ /bin/bash -e /src/docker/benchmark-runner/startup-runner.sh
root      8685 96.4  0.5 730840 44724 pts/0    S<l+ 23:29   6:54                  \_ python3 /src/experiment/runner.py
root      8698  0.0  0.1  44380  9592 ?        Ss   23:29   0:00                      \_ python3 -u -c import fuzzer; fuzzer.fuzz('/out/seeds', '/out/corpus', '/out/fuzz-target')
root      8706  198  0.1 730584 10024 ?        Sl   23:29  14:12                          \_ ./honggfuzz --sanitizers --persistent --input /out/seeds --output /out/corpus -na -- /out/fuzz-target
root      8716 62.9  4.8 21475465676 398192 ?  Rs   23:29   4:29                              \_ /out/fuzz-target
root      8717 63.1  4.8 21475465676 398116 ?  Rs   23:29   4:30                              \_ /out/fuzz-target
root      8718 62.8  4.8 21475465676 395052 ?  Rs   23:29   4:29                              \_ /out/fuzz-target
root      8719 63.0  4.8 21475465676 398308 ?  Rs   23:29   4:29                              \_ /out/fuzz-target
root      8720 62.4  4.8 21475465676 398640 ?  Rs   23:29   4:27                              \_ /out/fuzz-target
root      8721 62.8  4.8 21475465676 396120 ?  Rs   23:29   4:28                              \_ /out/fuzz-target
root      8722 62.5  4.8 21475465676 398292 ?  Rs   23:29   4:27                              \_ /out/fuzz-target
root      8723 63.0  4.8 21475465676 396276 ?  Rs   23:29   4:30                              \_ /out/fuzz-target

And the process python3 /src/experiment/runner.py eats one core of CPU time on an 8-core machine. If I'm not mistaken, you're running experiments on 1-core machines (or maybe forcing experiments onto 1 core by using sched_affinity), so the effects might be bigger there, and I'm worried that runner.py is using, say, 50% of the CPU time on a single fuzzing machine.

It could be something related to my setup, or to the fact that the host machine has 8 cores, but I would be glad if you could check it on your side as well. BTW, I'm using a regular GCE VM (8-core).

I've done a quick check on this process (here PID=8685), and it's its thread TID=8694 (you can find that by pressing H in top) which actually fetches the data from the honggfuzz output and writes it to the output file. But that's just a guess from a quick gdb session.

 # gdb
(gdb) attach 8694
Attaching to process 8694
...
PyTuple_New (size=size@entry=0) at Objects/tupleobject.c:135
135	Objects/tupleobject.c: No such file or directory.
(gdb) bt
#0  PyTuple_New (size=size@entry=0) at Objects/tupleobject.c:135
#1  0x000000000043cac4 in _PyStack_AsTuple (stack=stack@entry=0x0, nargs=nargs@entry=0) at Objects/call.c:1284
#2  0x000000000043f47f in _PyObject_FastCallDict (callable=callable@entry=0x7f2663098b90, args=args@entry=0x0, nargs=nargs@entry=0, kwargs=kwargs@entry=0x0) at Objects/call.c:115
#3  0x0000000000424d51 in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:2906
#4  0x00000000004ebf28 in PyEval_EvalFrameEx (throwflag=0, f=0x7f263c005d50) at Python/ceval.c:547
#5  _PyEval_EvalCodeWithName (_co=0x7f265c2806f0, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=1, kwnames=0x7f265c2924a8, kwargs=0x7f265b9c56d8, kwcount=1, kwstep=1, defs=0x7f265c2a8a68, defcount=2, kwdefs=0x0, 
    closure=0x0, name=0x7f266313d170, qualname=0x7f265c27f130) at Python/ceval.c:3930
#6  0x000000000043c9c6 in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:433
#7  0x000000000042880b in call_function (kwnames=0x7f265c292490, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#8  _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3139
#9  0x0000000000420668 in function_code_fastcall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Objects/call.c:283
#10 0x000000000043ca9f in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:415
#11 0x0000000000429aa3 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#12 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3110
#13 0x0000000000420668 in function_code_fastcall (co=<optimized out>, args=<optimized out>, nargs=2, globals=<optimized out>) at Objects/call.c:283
#14 0x000000000043ca9f in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:415
#15 0x000000000042997c in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#16 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3124
#17 0x00000000004ebf28 in PyEval_EvalFrameEx (throwflag=0, f=0x231c480) at Python/ceval.c:547
#18 _PyEval_EvalCodeWithName (_co=0x7f265ba576f0, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=1, kwnames=0x7f266224fb48, kwargs=0x231b0d0, kwcount=4, kwstep=1, defs=0x0, defcount=0, kwdefs=0x7f265ba4cfa0, 
    closure=0x0, name=0x7f26622246f0, qualname=0x7f26622246f0) at Python/ceval.c:3930
#19 0x000000000043c9c6 in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:433
#20 0x000000000042880b in call_function (kwnames=0x7f266224fb30, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#21 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3139
#22 0x0000000000420668 in function_code_fastcall (co=co@entry=0x7f2662221930, args=<optimized out>, args@entry=0x7f265ba528d8, nargs=nargs@entry=2, globals=globals@entry=0x7f26630ddb90) at Objects/call.c:283
#23 0x000000000043c90e in _PyFunction_FastCallDict (func=0x7f265b9efc20, args=0x7f265ba528d8, nargs=2, kwargs=0x7f265b9fa5a0) at Objects/call.c:322
#24 0x0000000000424ea7 in do_call_core (kwdict=0x7f265b9fa5a0, callargs=0x7f265ba528c0, func=0x7f265b9efc20) at Python/ceval.c:4645
#25 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3191
#26 0x0000000000420668 in function_code_fastcall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Objects/call.c:283
#27 0x000000000043ca9f in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:415
#28 0x0000000000429aa3 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#29 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3110
#30 0x0000000000420668 in function_code_fastcall (co=<optimized out>, args=<optimized out>, nargs=1, globals=<optimized out>) at Objects/call.c:283
#31 0x000000000043ca9f in _PyFunction_FastCallKeywords (func=<optimized out>, stack=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at Objects/call.c:415
#32 0x0000000000429aa3 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at Python/ceval.c:4616
#33 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3110
#34 0x0000000000420668 in function_code_fastcall (co=co@entry=0x7f2661fdb780, args=<optimized out>, args@entry=0x7f2643ffee60, nargs=nargs@entry=1, globals=globals@entry=0x7f2661fcaaa0) at Objects/call.c:283
#35 0x000000000043c90e in _PyFunction_FastCallDict (func=func@entry=0x7f2661f954d0, args=args@entry=0x7f2643ffee60, nargs=nargs@entry=1, kwargs=kwargs@entry=0x0) at Objects/call.c:322
#36 0x000000000043fac9 in _PyObject_FastCallDict (kwargs=0x0, nargs=1, args=0x7f2643ffee60, callable=0x7f2661f954d0) at Objects/call.c:98
#37 _PyObject_Call_Prepend (callable=0x7f2661f954d0, obj=<optimized out>, args=0x7f2663127050, kwargs=0x0) at Objects/call.c:908
#38 0x000000000043de50 in PyObject_Call (callable=0x7f266311e320, args=0x7f2663127050, kwargs=0x0) at Objects/call.c:245
#39 0x000000000057c783 in t_bootstrap (boot_raw=boot_raw@entry=0x7f265ba0a300) at ./Modules/_threadmodule.c:994
#40 0x0000000000532797 in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:174
#41 0x00007f2662d646ba in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#42 0x00007f266238a41d in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
(gdb) detach 

Also, strace suggests it's the thread that is reading/writing data from honggfuzz's stderr:

# strace -p 8694
strace: Process 8694 attached
strace: [ Process PID=8694 runs in x32 mode. ]
strace: [ Process PID=8694 runs in 64 bit mode. ]
futex(0x918930, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x918934, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x918930, FUTEX_OP_SET<<28|0<<12|FUTEX_OP_CMP_GT<<24|0x1) = 1
futex(0x918960, FUTEX_WAKE_PRIVATE, 1)  = 1
futex(0x918934, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 2421, {tv_sec=1583365283, tv_nsec=630654000}, FUTEX_BITSET_MATCH_ANY) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x918960, FUTEX_WAKE_PRIVATE, 1)  = 0
futex(0x7f2638002f30, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x918934, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x918930, FUTEX_OP_SET<<28|0<<12|FUTEX_OP_CMP_GT<<24|0x1) = 1
futex(0x918960, FUTEX_WAKE_PRIVATE, 1)  = 1
futex(0x7f2638002f30, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY) = -1 EAGAIN (Resource temporarily unavailable)
write(8, "Size:5756 (i,b,hw,ed,ip,cmp): 0/"..., 69) = 69
write(1, "Size:5756 (i,b,hw,ed,ip,cmp): 0/"..., 69) = 69
futex(0x918930, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x918934, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x918930, FUTEX_OP_SET<<28|0<<12|FUTEX_OP_CMP_GT<<24|0x1) = 1
futex(0x918960, FUTEX_WAKE_PRIVATE, 1)  = 1
write(8, "Size:5949 (i,b,hw,ed,ip,cmp): 0/"..., 69) = 69
write(1, "Size:5949 (i,b,hw,ed,ip,cmp): 0/"..., 69) = 69
