
s3backer's Introduction

s3backer is a filesystem that contains a single file backed by the Amazon Simple Storage Service (Amazon S3). As a filesystem, it is very simple: it provides a single normal file having a fixed size. Underneath, the file is divided up into blocks, and the content of each block is stored in a unique Amazon S3 object. In other words, what s3backer provides is really more like an S3-backed virtual hard disk device, rather than a filesystem.

In typical usage, a normal filesystem is mounted on top of the file exported by the s3backer filesystem using a loopback mount (or disk image mount on Mac OS X).
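For illustration only, a typical session might look like the following sketch; the bucket name, mount points, and filesystem type are placeholders, and the individual flags are described in the sections below:

    # mount the backed file: a 1 TB virtual disk made of 128k blocks
    s3backer --blockSize=128k --size=1t --listBlocks my-bucket /mnt/s3b

    # create a filesystem on it and loopback-mount it (Linux)
    mkfs.ext4 /mnt/s3b/file
    mount -o loop /mnt/s3b/file /mnt/data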

This arrangement has several benefits compared to more complete S3 filesystem implementations:

  • By not attempting to implement a complete filesystem, which is a complex undertaking and difficult to get right, s3backer can stay very lightweight and simple. Only three HTTP operations are used: GET, PUT, and DELETE. All of the existing experience and knowledge about how to properly implement filesystems can be reused.

  • By utilizing existing filesystems, you get full UNIX filesystem semantics. Subtle bugs or missing functionality relating to hard links, extended attributes, POSIX locking, etc. are avoided.

  • The gap between normal filesystem semantics and Amazon S3 "eventual consistency" is more easily and simply solved when one can interpret S3 objects as simple device blocks rather than filesystem objects (see below).

  • When storing your data on Amazon S3 servers, which are not under your control, the ability to encrypt data becomes a critical issue. s3backer supports secure encryption and authentication. Alternately, the encryption capability built into the Linux loopback device can be used.

  • Since S3 data is accessed over the network, local caching is also very important for performance reasons. Since s3backer presents the equivalent of a virtual hard disk to the kernel, most of the filesystem caching can be done where it should be: in the kernel, via the kernel's page cache. However s3backer also includes its own internal block cache for increased performance, using asynchronous worker threads to take advantage of the parallelism inherent in the network.

Consistency Guarantees

Amazon S3 makes relatively weak guarantees relating to the timing and consistency of reads vs. writes (collectively known as "eventual consistency"). s3backer includes logic and configuration parameters to work around these limitations, allowing the user to guarantee consistency to whatever level desired, up to and including 100% detection and avoidance of incorrect data. These are:

  1. s3backer enforces a minimum delay between consecutive PUT or DELETE operations on the same block. This ensures that Amazon S3 doesn't receive these operations out of order.
  2. s3backer maintains an internal block MD5 checksum cache, which enables automatic detection and rejection of "stale" blocks returned by GET operations.

This logic is configured by the following command line options: --md5CacheSize, --md5CacheTime, and --minWriteDelay.
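For example, a minimal sketch combining these flags; the values simply mirror the defaults that appear in the debug output quoted later on this page (millisecond units are an assumption based on that output), and the bucket and mount point are placeholders:

    s3backer --minWriteDelay=500 --md5CacheTime=10000 --md5CacheSize=10000 \
        my-bucket /mnt/s3b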

Zeroed Block Optimization

As a simple optimization, s3backer does not store blocks containing all zeroes; instead, they are simply deleted. Conversely, reads of non-existent blocks will contain all zeroes. In other words, the backed file is always maximally sparse.

As a result, blocks do not need to be created before being used and no special initialization is necessary when creating a new filesystem.

When the --listBlocks flag is given, s3backer will list all existing blocks at startup so it knows ahead of time exactly which blocks are empty.

File and Block Size Auto-Detection

As a convenience, whenever the first block of the backed file is written, s3backer includes as meta-data (in the x-amz-meta-s3backer-filesize header) the total size of the file. Along with the size of the block itself, this value can be checked and/or auto-detected later when the filesystem is remounted, eliminating the need for the --blockSize or --size flags to be explicitly provided and avoiding accidental mis-interpretation of an existing filesystem.
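For example (bucket and mount point are placeholders), the sizes only need to be spelled out the first time:

    # first mount: block size and file size given explicitly
    s3backer --blockSize=128k --size=1t my-bucket /mnt/s3b

    # later mounts: both values are auto-detected from the first block's meta-data
    s3backer my-bucket /mnt/s3b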

Block Cache

s3backer includes support for an internal block cache to increase performance. The block cache is completely separate from the MD5 cache, which only stores MD5 checksums transiently and whose sole purpose is to mitigate "eventual consistency". The block cache is a traditional cache containing cached data blocks. When full, clean blocks are evicted as necessary in LRU order.

Reads of cached blocks will return immediately with no network traffic. Writes to the cache also return immediately and trigger an asynchronous write operation to the network via a separate worker thread. Because the kernel typically writes blocks through FUSE filesystems one at a time, performing writes asynchronously allows s3backer to take advantage of the parallelism inherent in the network, vastly improving write performance.

The block cache can be configured to store the cached data in a local file instead of in memory. This permits larger cache sizes and allows s3backer to reload cached data after a restart. Reloaded data is verified via MD5 checksum with Amazon S3 before reuse.

The block cache is configured by the following command line options: --blockCacheFile, --blockCacheNoVerify, --blockCacheSize, --blockCacheThreads and --blockCacheWriteDelay.
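As a rough sketch (the cache size and cache file path are illustrative assumptions, not recommendations; the thread count and write delay mirror the defaults shown in the debug output later on this page):

    s3backer --blockCacheSize=10000 --blockCacheThreads=20 --blockCacheWriteDelay=250 \
        --blockCacheFile=/var/cache/s3backer/cache my-bucket /mnt/s3b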

Read Ahead

s3backer implements a simple read-ahead algorithm in the block cache. When a configurable number of blocks are read in order, block cache worker threads are awoken to begin reading subsequent blocks into the block cache. Read ahead continues as long as the kernel continues reading blocks sequentially. The kernel typically requests blocks one at a time, so having multiple worker threads already reading the next few blocks improves read performance by taking advantage of the parallelism inherent in the network.

Note that the kernel implements a read ahead algorithm as well; its behavior should be taken into consideration. By default, s3backer passes the -o max_readahead=0 option to FUSE.

Read ahead is configured by the --readAhead and --readAheadTrigger command line options.
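For example, using the default values shown in the debug output later on this page (read ahead of 4 blocks, triggered after 2 sequential reads), with placeholder bucket and mount point names:

    s3backer --readAhead=4 --readAheadTrigger=2 my-bucket /mnt/s3b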

Encryption and Authentication

s3backer supports encryption via the --encrypt, --password, and --passwordFile flags. When encryption is enabled, SHA1 HMAC authentication is also automatically enabled, and s3backer rejects any blocks that are not properly encrypted and signed.

Encrypting at the s3backer layer is preferable to encrypting at an upper layer (e.g., at the loopback device layer), because if the data s3backer sees is already encrypted it can't optimize away zeroed blocks or do meaningful compression.
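A minimal sketch, assuming the passphrase is kept in a local file (the file path, bucket name, and mount point are placeholders):

    s3backer --encrypt --passwordFile=/root/.s3backer_key my-bucket /mnt/s3b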

Compression

s3backer supports block-level compression, which minimizes transfer time and storage costs.

Compression is configured via the --compress flag. Compression is automatically enabled when encryption is enabled.
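For example (bucket and mount point are placeholders; the fstab entries quoted in the issues below also show a numeric level being passed as compress=9):

    s3backer --compress my-bucket /mnt/s3b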

Read-Only Access

An Amazon S3 account is not required in order to use s3backer. Of course, the filesystem must already exist and its S3 objects must have ACLs configured for public read access (see --accessType below); users should perform the loopback mount with the read-only flag (see mount(8)) and provide the --readOnly flag to s3backer. This mode of operation facilitates the creation of public, read-only filesystems.
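A sketch of read-only access to such a public bucket, with placeholder names (see the man page for the exact --accessType value used when creating the filesystem):

    s3backer --readOnly some-public-bucket /mnt/s3b
    mount -o ro,loop /mnt/s3b/file /mnt/data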

Simultaneous Mounts

Although it functions over the network, the s3backer filesystem is not a distributed filesystem and does not support simultaneous read/write mounts. (This is not something you would normally do with a hard-disk partition either.) s3backer does not detect this situation; it is up to the user to ensure that it doesn't happen.

Statistics File

s3backer populates the filesystem with a human-readable statistics file. See --statsFilename below.
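For example, assuming the default statistics file name ("stats") shown in the debug output later on this page and a placeholder mount point:

    cat /mnt/s3b/stats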

Logging

In normal operation s3backer will log via syslog(3). When run with the -d or -f flags, s3backer will log to standard error.
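For example, to run in the foreground with debug logging sent to standard error (bucket and mount point are placeholders):

    s3backer -f -d --blockSize=128k --size=1t my-bucket /mnt/s3b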

OK, Where to Next?

Try it out! No Amazon S3 account is required.

See the ManPage for further documentation and the CHANGES file for release notes.

Join the s3backer-devel group to participate in discussion and development of s3backer.

s3backer's People

Contributors

ahmgithubahm, archiecobbs, bencodestx, brianredbeard, gromgit, jeffbyers, jeromerobert, lhw, myhro, nikratio, rhardih, staktrace, zilti

s3backer's Issues

failed to create new directory for mounting S3

Hi, I ran s3backer --blockSize=128k --size=1t --listBlocks nanfei-test ~/mnt-s3b by following the wiki guide https://github.com/archiecobbs/s3backer/wiki/CreatingANewFilesystem, but got the error "error: http://s3.amazonaws.com/nanfei-test/ appears to be already mounted". Then I added --force to the command and got a new directory, but then hit another error, "Could not stat /home/ubuntu/mnt-s3b/file --- No such file or directory", when running the command "mkfs.ext4 -b 4096 -s 513 ~/mnt-s3b/file". I suppose ~/mnt-s3b/file was created by the first command. Any idea what I did wrong? Thanks!

I/O error on mounted filesystem, Unknown error:0 in debug output

$ opt/s3backer/bin/s3backer --blockSize=128k --size=1t --listBlocks rptb1-backup-test2 tmp/rptb1-backup-test2
$ file tmp/rptb1-backup-test2
tmp/rptb1-backup-test2: cannot open `tmp/rptb1-backup-test2' (Input/output error)

So I tried:

$ opt/s3backer/bin/s3backer -s -d --blockSize=128k --size=1t --listBlocks rptb1-backup-test2 tmp/rptb1-backup-test2
s3backer: auto-detecting block size and total file size...
2010-09-22 17:22:01 DEBUG: HEAD http://s3.amazonaws.com/rptb1-backup-test2/00000000
2010-09-22 17:22:01 DEBUG: rec'd 404 response: HEAD http://s3.amazonaws.com/rptb1-backup-test2/00000000
s3backer: auto-detection failed; using configured block size 128k and file size 1t
s3backer: listing non-zero blocks...2010-09-22 17:22:01 DEBUG: GET http://s3.amazonaws.com/rptb1-backup-test2?prefix=&max-keys=256
2010-09-22 17:22:01 DEBUG: success: GET http://s3.amazonaws.com/rptb1-backup-test2?prefix=&max-keys=256
done
s3backer: found 0 non-zero blocks
2010-09-22 17:22:01 DEBUG: s3backer config:
2010-09-22 17:22:01 DEBUG:                test mode: "false"
2010-09-22 17:22:01 DEBUG:                 accessId: [redacted]
2010-09-22 17:22:01 DEBUG:                accessKey: "****"
2010-09-22 17:22:01 DEBUG:               accessFile: "/Users/rb/.s3backer_passwd"
2010-09-22 17:22:01 DEBUG:               accessType: private
2010-09-22 17:22:01 DEBUG:                  baseURL: "http://s3.amazonaws.com/"
2010-09-22 17:22:01 DEBUG:                   bucket: "rptb1-backup-test2"
2010-09-22 17:22:01 DEBUG:                   prefix: ""
2010-09-22 17:22:01 DEBUG:              list_blocks: true
2010-09-22 17:22:01 DEBUG:                    mount: "tmp/rptb1-backup-test2"
2010-09-22 17:22:01 DEBUG:                 filename: "file"
2010-09-22 17:22:01 DEBUG:           stats_filename: "stats"
2010-09-22 17:22:01 DEBUG:               block_size: 128k (131072)
2010-09-22 17:22:01 DEBUG:                file_size: 1t (1099511627776)
2010-09-22 17:22:01 DEBUG:               num_blocks: 8388608
2010-09-22 17:22:01 DEBUG:                file_mode: 0600
2010-09-22 17:22:01 DEBUG:                read_only: false
2010-09-22 17:22:01 DEBUG:                 compress: 0
2010-09-22 17:22:01 DEBUG:               encryption: (none)
2010-09-22 17:22:01 DEBUG:                 password: ""
2010-09-22 17:22:01 DEBUG:                  timeout: 30s
2010-09-22 17:22:01 DEBUG:      initial_retry_pause: 200ms
2010-09-22 17:22:01 DEBUG:          max_retry_pause: 30000ms
2010-09-22 17:22:01 DEBUG:          min_write_delay: 500ms
2010-09-22 17:22:01 DEBUG:           md5_cache_time: 10000ms
2010-09-22 17:22:01 DEBUG:           md5_cache_size: 10000 entries
2010-09-22 17:22:01 DEBUG:         block_cache_size: 1000 entries
2010-09-22 17:22:01 DEBUG:      block_cache_threads: 20 threads
2010-09-22 17:22:01 DEBUG:      block_cache_timeout: 0ms
2010-09-22 17:22:01 DEBUG:  block_cache_write_delay: 250ms
2010-09-22 17:22:01 DEBUG:    block_cache_max_dirty: 0 blocks
2010-09-22 17:22:01 DEBUG:         block_cache_sync: false
2010-09-22 17:22:01 DEBUG:               read_ahead: 4 blocks
2010-09-22 17:22:01 DEBUG:       read_ahead_trigger: 2 blocks
2010-09-22 17:22:01 DEBUG:   block_cache_cache_file: ""
2010-09-22 17:22:01 DEBUG:    block_cache_no_verify: "false"
2010-09-22 17:22:01 DEBUG: fuse_main arguments:
2010-09-22 17:22:01 DEBUG:   [0] = "opt/s3backer/bin/s3backer"
2010-09-22 17:22:01 DEBUG:   [1] = "-ofsname=http://s3.amazonaws.com/rptb1-backup-test2/"
2010-09-22 17:22:01 DEBUG:   [2] = "-o"
2010-09-22 17:22:01 DEBUG:   [3] = "kernel_cache,allow_other,use_ino,max_readahead=0,subtype=s3backer,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=0,default_permissions,nodev,nosuid,daemon_timeout=600"
2010-09-22 17:22:01 DEBUG:   [4] = "-s"
2010-09-22 17:22:01 DEBUG:   [5] = "-d"
2010-09-22 17:22:01 DEBUG:   [6] = "tmp/rptb1-backup-test2"
2010-09-22 17:22:01 INFO: s3backer process 48602 for tmp/rptb1-backup-test2 started
unique: 0, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000000
max_readahead=0x00100000
   INIT: 7.8
   flags=0x00000000
   max_readahead=0x00000000
   max_write=0x00400000
   unique: 0, error: 0 (Unknown error: 0), outsize: 40
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 2, error: 0 (Unknown error: 0), outsize: 96
unique: 3, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 128
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 2, error: 0 (Unknown error: 0), outsize: 128
unique: 3, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 128
unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 128
unique: 4, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 4, error: 0 (Unknown error: 0), outsize: 128
unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 5, error: 0 (Unknown error: 0), outsize: 128
unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 6, error: 0 (Unknown error: 0), outsize: 128
unique: 7, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 7, error: 0 (Unknown error: 0), outsize: 96
unique: 8, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 8, error: 0 (Unknown error: 0), outsize: 128
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 7, error: 0 (Unknown error: 0), outsize: 128
unique: 9, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 9, error: 0 (Unknown error: 0), outsize: 96
unique: 9, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 9, error: 0 (Unknown error: 0), outsize: 128
unique: 10, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 10, error: 0 (Unknown error: 0), outsize: 96

Original issue reported on code.google.com by [email protected] on 22 Sep 2010 at 4:23

Multi-part upload

S3 now supports multipart uploads. I have no exact idea how this may improve s3backer, but I thought it might be worth mentioning in case the feature could be employed to make s3backer better/faster/more reliable.

http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?uploadobjusingmpu.html

Original issue reported on code.google.com by [email protected] on 15 Nov 2010 at 11:16

returned unexpected encoding "gzip"

I'm getting too many errors in /var/log/messages:

s3backer: read of block 00000198 returned unexpected encoding "gzip"
s3backer: read of block 00000904 returned unexpected encoding "gzip"

What does this refer to?

Should be able to resize s3backer device (maybe already can?)

Hi,

It really needs to be possible to resize an S3 device. For example, if I create 
a partition now to hold my offsite backup, and then discover in five years that 
I've filled it and need more space, I should be able to simply grow the S3 
device and then run resize_reiserfs or whatever to grow the underlying 
filesystem.

Ditto for shrinking a device.

The s3backer documentation has all kinds of dire warnings about doing this, 
claiming that data won't read back properly if the size of a device is changed, 
but I can't figure out why this is the case.

It seems to me that, as long as the block size isn't changed, if the device is 
grown, more blocks will simply be added on to the end of it.

Shrinking a device is a bit more complicated, since the old files left over 
would need to be cleaned up somehow, but it seems like it should be relatively 
easy to code something to make that happen automatically (coupled with a 
listBlocks) when a filesystem is shrunk.

But aside from the shrinking complication mentioned above, if I'm right that 
it's possible to grow a device simply by increasing its size and then 
specifying "force" the next time you mount, then the documentation should be 
updated to say that, and the warning you get when you mount a filesystem with a 
larger size should be edited to be less dire.

However, the warning should definitely remain dire when the block size is 
changed!

Original issue reported on code.google.com by [email protected] on 22 Oct 2010 at 7:26

Feature request: dynamic threads

It would be really neat if s3backer were able to adjust the number of active 
threads based on performance.

Here's my thinking...

When too many threads have been configured for the available upload bandwidth, 
packets start to get lost and s3backer starts to see operation timeouts. 
Suppose s3backer took advantage of this as follows:

* Allow the user to configure a minimum and maximum number of active threads.
* Start out using the maximum configured number of threads.
* Each time an operation timeout happens, decrease the active thread count by 
1, unless it's already at the minimum.
* Whenever a certain number of writes to S3 occurs without any timeouts, 
increase the active thread count by 1 unless it's already at the maximum.
* Log the active thread count each time it is decreased or increased, so that 
the user can determine from his logs the optimal number of threads to use.

With this approach, I believe that s3backer will hover most of the time around 
the optimal active thread count, with occasional short-lived detours lower or 
higher.

I took a stab at implementing this but the code is sufficiently complex that I 
didn't feel like I could do it justice in the time I have available. It would 
probably be easier for the guy who wrote the code. ;-)


Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 4:25

S3Backer via Crontab

What steps will reproduce the problem?
1. Add S3Backer mount command to crontab
2. Let it run automatically via cron
3. Get the error: warning: no accessId specified ...

What is the expected output? What do you see instead?
Mounted S3 Volume :)

What version of the product are you using? On what operating system?
S3Backer 1.3.7 / Ubuntu 12.04 LTS

Please provide any additional information below.
S3Backer works fine if I run it as the sudo user. I also specified the path to 
the access id file as a flag to the s3backer command and made sure the file is 
in place (I tried both the user's home directory and /root).

Any help would be very appreciated :)

Original issue reported on code.google.com by [email protected] on 31 Jul 2013 at 1:21

Attempting to mount initial drive while s3backer is running corrupts data

What steps will reproduce the problem?
1. mount the first filesystem 
2. mount the child filesystem, and create a bunch of data 
3. unmount both filesystems
4. ps -Af | grep s3backer, and note the process is still running (sending data)
5. mount the first filesystem again.  Note that 'file' is now corrupted.

What is the expected output? What do you see instead?
I would expect it to either fail to mount the first filesystem, or wait for it 
to finish sending the necessary data.  As is, it appears to be killing the 
original process.

What version of the product are you using? On what operating system?
centos 6, s3backer 1.3.4

Please provide any additional information below.
fstab:
s3backer#backup     /s3/dev/s3backer/backup      fuse    noauto,size=500g,blockSize=1024k,encrypt,compress=9,passwordFile=xxxxxxxxxxxx,accessFile=xxxxxxxxxxxx,blockCacheSize=512000,md5CacheSize=512000,md5CacheTime=0,blockCacheFile=/s3/cache/backup/cachefile,blockCacheWriteDelay=60000,blockCacheThreads=5,timeout=60,listBlocks,rrs,ssl,insecure,vhost  0 0

# backup disk looped onto s3backer mount

/s3/dev/s3backer/backup/file          /s3/backup        ext4 noauto,loop,noatime,nodiratime,sync  0       0

Original issue reported on code.google.com by [email protected] on 30 Apr 2013 at 2:48

Can't mount anymore when updating to 1.3.3

I created a reiserfs filesystem successfully with s3backer version 1.3.1:

s3backer --encrypt --vhost --blockSize=128k --size=5M --listBlocks mybucket 
mnts3/

It worked fine, but after updating to version 1.3.3 I can't mount it anymore. Has 
the encryption changed between versions 1.3.1 and 1.3.3?

I can still use the filesystem with s3backer 1.3.1 but not with 1.3.3.

Original issue reported on code.google.com by [email protected] on 7 Jun 2012 at 11:49

make fails for s3backer ver 1.2.2 and 1.2.3 on CentOS 5.x x86_64

What steps will reproduce the problem?
1. download tarball (1.2.2) or checkout svn (1.2.3)
2. run configure -- all OK, no errors
3. make fails with the following errors:
...
http_io.c: In function 'http_io_list_prepper':
http_io.c:440: error: 'CURLOPT_HTTP_CONTENT_DECODING' undeclared (first use in this function)
http_io.c:440: error: (Each undeclared identifier is reported only once
http_io.c:440: error: for each function it appears in.)
http_io.c: In function 'http_io_read_prepper':
http_io.c:762: error: 'CURLOPT_HTTP_CONTENT_DECODING' undeclared (first use in this function)
make[1]: *** [http_io.o] Error 1
make[1]: Leaving directory `/root/s3backer-1.2.2'
make: *** [all] Error 2
...

All libraries/dependencies are satisfied.
gcc version: gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42)
OS Version: CentOS release 5.2 (Final)
kernel: 2.6.18-8.el5.028stab031.1 (x86_64)
Build host is an OpenVZ x86_64 container running on an x86_64 hardware node
powered by AMD Athlon 64 X2 Dual Core Processor 4200+

Note: to get the svn checkout of version 1.2.3 to produce a good configure script, 
you need to edit autogen.sh and comment out the line that sources cleanup.sh 
before running autogen.sh for the first time; you can uncomment the line (i.e. 
source cleanup.sh) after the first run. Otherwise autogen.sh fails due to 
cleanup.sh trying to delete non-existent dirs.

Original issue reported on code.google.com by [email protected] on 7 Apr 2009 at 4:52

Make Error

What steps will reproduce the problem?
1. $ PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure
2. $ make

What is the expected output? What do you see instead?
Expected: No Errors

Actual:
make  all-recursive
Making all in debian
make[2]: Nothing to be done for `all'.
gcc -DHAVE_CONFIG_H -I.    -D__FreeBSD__=10 -D_FILE_OFFSET_BITS=64 
-I/usr/local/include/fuse   -g -O3 -pipe -Wall -Waggregate-return -Wcast-align 
-Wchar-subscripts -Wcomment -Wformat -Wimplicit -Wmissing-declarations 
-Wmissing-prototypes -Wnested-externs -Wno-long-long -Wparentheses 
-Wpointer-arith -Wredundant-decls -Wreturn-type -Wswitch -Wtrigraphs 
-Wuninitialized -Wunused -Wwrite-strings -Wshadow -Wstrict-prototypes 
-Wcast-qual  -MT main.o -MD -MP -MF .deps/main.Tpo -c -o main.o main.c
In file included from s3backer.h:47,
                 from main.c:25:
/usr/local/include/curl/curl.h:52:23: error: osreldate.h: No such file or directory
make[2]: *** [main.o] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

What version of the product are you using? On what operating system?
s3backer: 1.3.2
Mac OS X: 10.6.7

Please provide any additional information below.

Error encountered while following instructions on the BuildAndInstall wiki page.

Original issue reported on code.google.com by [email protected] on 14 Jul 2011 at 8:28

Simple locking mechanism to prevent simultaneous mounts

With the data loss hazard in issue 9, it might be wise to use a small file or meta information to indicate that the filesystem is mounted. A simple locking mechanism will protect against unintended unsafe mounts (e.g. laptop did not unmount cleanly due to network issues).

Original issue reported on code.google.com by jonsview on 14 Aug 2009 at 1:56

Feature request: change cache size

It would be great to be able to change the size of the on-disk block cache 
between invocations of s3backer without having to throw away the entire cache. 
If the cache size is reduced, it should just be a matter of calling ftruncate 
to chop off the top of it; if it's grown, it should be possible to simply add 
on to the end of the file. It's very bad that if I realize my cache size is 
wrong, I have to throw away the whole thing and start over, which incurs a 
significant performance (and cost) hit while the cache is being repopulated.

Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 4:27

Allow configuring a limit on the number of outstanding dirty blocks

s3backer currently allows an unlimited number of blocks in the block cache
to be dirty at the same time.

Some situations may want to limit this number to avoid the degree of
inconsistency that can occur in case of a crash.

Suggest adding a new flag `--blockCacheMaxDirty=NUMBLOCKS`. When a new
write was attempted while the maximum number of dirty blocks had already
been reached, then the subsequent write would block.

Original issue reported on code.google.com by [email protected] on 1 Oct 2009 at 7:59

Wish: throttling option

I'm using s3backer on one of my home servers to rsync my photo collection
to s3, but this causes my ssh sessions on other machines to be heavily
disturbed (a lot of typing latency). I've tried to throttle rsync
(--bwlimit) but since I cache s3backer the rsync throttling is not very
effective. So, I would love to see a (good) throttling/bwlimiting mechanism
implemented in s3backer, please ;)

Original issue reported on code.google.com by [email protected] on 19 Oct 2009 at 12:46

Data corruption hazard (make cache non-volatile)

The current cache implementation introduces volatility into the system. While a filesystem backed by s3backer may be journaled, there's still a high risk of data loss. For example, if there is a system failure with dirty blocks in the cache, there is a likelihood that the filesystem journal will get out of sync with what's actually on S3. The journal can't help you in this case because, as far as it's concerned, the data has already been written [to s3backer's cache]. The issue is compounded when the blocks are uploaded out of order.

The easiest solution is probably to make the cache non-volatile so that the system can later recover.

Original issue reported on code.google.com by jonsview on 14 Aug 2009 at 1:49

Can't compile on a Mac OS X 10.5 (u_int, u_long and u_char undefined)

What steps will reproduce the problem?
1. Download, unpack. 
2. ./configure
3. make
4. Make shows errors (below).

I am on a Mac OS X 10.5

If add lines:

typedef unsigned int u_int;
typedef unsigned long u_long;
typedef unsigned char u_char;

to s3backer.h everything compiles Ok.

Original make errors:

In file included from main.c:25:
s3backer.h:84: error: syntax error before 'u_int'
s3backer.h:84: warning: no semicolon at end of struct or union
s3backer.h:85: warning: type defaults to 'int' in declaration of 'block_bits'
s3backer.h:85: warning: data definition has no type or storage class
s3backer.h:89: error: syntax error before 'connect_timeout'
s3backer.h:89: warning: type defaults to 'int' in declaration of 'connect_timeout'
s3backer.h:89: warning: data definition has no type or storage class
s3backer.h:90: error: syntax error before 'io_timeout'
s3backer.h:90: warning: type defaults to 'int' in declaration of 'io_timeout'
s3backer.h:90: warning: data definition has no type or storage class
s3backer.h:91: error: syntax error before 'initial_retry_pause'
s3backer.h:91: warning: type defaults to 'int' in declaration of 'initial_retry_pause'
s3backer.h:91: warning: data definition has no type or storage class
s3backer.h:92: error: syntax error before 'max_retry_pause'
s3backer.h:92: warning: type defaults to 'int' in declaration of 'max_retry_pause'
s3backer.h:92: warning: data definition has no type or storage class
s3backer.h:93: error: syntax error before 'min_write_delay'
s3backer.h:93: warning: type defaults to 'int' in declaration of 'min_write_delay'
s3backer.h:93: warning: data definition has no type or storage class
s3backer.h:94: error: syntax error before 'cache_time'
s3backer.h:94: warning: type defaults to 'int' in declaration of 'cache_time'
s3backer.h:94: warning: data definition has no type or storage class
s3backer.h:95: error: syntax error before 'cache_size'
s3backer.h:95: warning: type defaults to 'int' in declaration of 'cache_size'
s3backer.h:95: warning: data definition has no type or storage class
s3backer.h:97: warning: built-in function 'log' declared as non-function
s3backer.h:102: error: syntax error before '}' token
s3backer.h:132: error: syntax error before 'u_int'
s3backer.h:132: warning: function declaration isn't a prototype
main.c: In function 'main':
main.c:41: error: dereferencing pointer to incomplete type
main.c:41: error: 'u_long' undeclared (first use in this function)
main.c:41: error: (Each undeclared identifier is reported only once
main.c:41: error: for each function it appears in.)
main.c:41: error: syntax error before 'getpid'
main.c:42: error: dereferencing pointer to incomplete type
main.c:42: error: dereferencing pointer to incomplete type
make[1]: *** [main.o] Error 1
make: *** [all] Error 2

Original issue reported on code.google.com by [email protected] on 9 Jul 2008 at 4:19

Segfault on mount

I'm getting a segfault when mounting. Below is gdb output if useful.

Using Centos 6 with kernel 3.19.2-1.el6.elrepo.x86_64

# gdb --args s3backer -s -f -d --blockSize=128k --size=10g --listBlocks  testbkp /mnt/s3backer

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/bin/s3backer...Reading symbols from /usr/lib/debug/usr/bin/s3backer.debug...done.
done.
(gdb) r
Starting program: /usr/bin/s3backer -s -f -d --blockSize=128k --size=10g --listBlocks testbkp /mnt/s3backer
[Thread debugging using libthread_db enabled]
s3backer: auto-detecting block size and total file size...
2015-04-30 13:46:46 DEBUG: HEAD http://s3.amazonaws.com/testbkp/00000000
[New Thread 0x7ffff7d96700 (LWP 24001)]
[Thread 0x7ffff7d96700 (LWP 24001) exited]
2015-04-30 13:46:46 DEBUG: rec'd 404 response: HEAD http://s3.amazonaws.com/testbkp/00000000
s3backer: auto-detection failed; using configured block size 128k and file size 10g
2015-04-30 13:46:46 DEBUG: HEAD http://s3.amazonaws.com/testbkp/s3backer-mounted
[New Thread 0x7ffff7d96700 (LWP 24002)]
[Thread 0x7ffff7d96700 (LWP 24002) exited]
2015-04-30 13:46:46 DEBUG: rec'd 404 response: HEAD http://s3.amazonaws.com/testbkp/s3backer-mounted
s3backer: listing non-zero blocks...2015-04-30 13:46:46 DEBUG: GET http://s3.amazonaws.com/testbkp?max-keys=256&prefix=
[New Thread 0x7ffff7d96700 (LWP 24003)]
[Thread 0x7ffff7d96700 (LWP 24003) exited]
2015-04-30 13:46:47 DEBUG: success: GET http://s3.amazonaws.com/testbkp?max-keys=256&prefix=
done
s3backer: found 0 non-zero blocks
2015-04-30 13:46:47 DEBUG: s3backer config:
2015-04-30 13:46:47 DEBUG:                test mode: false
2015-04-30 13:46:47 DEBUG:                 directIO: false
2015-04-30 13:46:47 DEBUG:                 accessId: "WUT"
2015-04-30 13:46:47 DEBUG:                accessKey: "****"
2015-04-30 13:46:47 DEBUG:               accessFile: "/root/.s3backer_passwd"
2015-04-30 13:46:47 DEBUG:               accessType: private
2015-04-30 13:46:47 DEBUG:              ec2iam_role: ""
2015-04-30 13:46:47 DEBUG:              authVersion: aws4
2015-04-30 13:46:47 DEBUG:                  baseURL: "http://s3.amazonaws.com/"
2015-04-30 13:46:47 DEBUG:                   region: "us-east-1"
2015-04-30 13:46:47 DEBUG:                   bucket: "testbkp"
2015-04-30 13:46:47 DEBUG:                   prefix: ""
2015-04-30 13:46:47 DEBUG:              list_blocks: true
2015-04-30 13:46:47 DEBUG:                    mount: "/mnt/s3backer"
2015-04-30 13:46:47 DEBUG:                 filename: "file"
2015-04-30 13:46:47 DEBUG:           stats_filename: "stats"
2015-04-30 13:46:47 DEBUG:               block_size: 128k (131072)
2015-04-30 13:46:47 DEBUG:                file_size: 10g (10737418240)
2015-04-30 13:46:47 DEBUG:               num_blocks: 81920
2015-04-30 13:46:47 DEBUG:                file_mode: 0600
2015-04-30 13:46:47 DEBUG:                read_only: false
2015-04-30 13:46:47 DEBUG:                 compress: 0
2015-04-30 13:46:47 DEBUG:               encryption: (none)
2015-04-30 13:46:47 DEBUG:               key_length: 0
2015-04-30 13:46:47 DEBUG:                 password: ""
2015-04-30 13:46:47 DEBUG:               max_upload: - bps (0)
2015-04-30 13:46:47 DEBUG:             max_download: - bps (0)
2015-04-30 13:46:47 DEBUG:                  timeout: 30s
2015-04-30 13:46:47 DEBUG:      initial_retry_pause: 200ms
2015-04-30 13:46:47 DEBUG:          max_retry_pause: 30000ms
2015-04-30 13:46:47 DEBUG:          min_write_delay: 500ms
2015-04-30 13:46:47 DEBUG:           md5_cache_time: 10000ms
2015-04-30 13:46:47 DEBUG:           md5_cache_size: 10000 entries
2015-04-30 13:46:47 DEBUG:         block_cache_size: 1000 entries
2015-04-30 13:46:47 DEBUG:      block_cache_threads: 20 threads
2015-04-30 13:46:47 DEBUG:      block_cache_timeout: 0ms
2015-04-30 13:46:47 DEBUG:  block_cache_write_delay: 250ms
2015-04-30 13:46:47 DEBUG:    block_cache_max_dirty: 0 blocks
2015-04-30 13:46:47 DEBUG:         block_cache_sync: false
2015-04-30 13:46:47 DEBUG:               read_ahead: 4 blocks
2015-04-30 13:46:47 DEBUG:       read_ahead_trigger: 2 blocks
2015-04-30 13:46:47 DEBUG:   block_cache_cache_file: ""
2015-04-30 13:46:47 DEBUG:    block_cache_no_verify: false
2015-04-30 13:46:47 DEBUG: fuse_main arguments:
2015-04-30 13:46:47 DEBUG:   [0] = "/usr/bin/s3backer"
2015-04-30 13:46:47 DEBUG:   [1] = "-ofsname=http://s3.amazonaws.com/testbkp/"
2015-04-30 13:46:47 DEBUG:   [2] = "-o"
2015-04-30 13:46:47 DEBUG:   [3] = "kernel_cache,allow_other,use_ino,max_readahead=0,subtype=s3backer,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=0,default_permissions,nodev,nosuid"
2015-04-30 13:46:47 DEBUG:   [4] = "-s"
2015-04-30 13:46:47 DEBUG:   [5] = "-f"
2015-04-30 13:46:47 DEBUG:   [6] = "-d"
2015-04-30 13:46:47 DEBUG:   [7] = "/mnt/s3backer"
2015-04-30 13:46:47 INFO: s3backer process 23998 for /mnt/s3backer started
Detaching after fork from child process 24007.
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.23
flags=0x0003f7fb
max_readahead=0x00020000
[New Thread 0x7ffff7d96700 (LWP 24008)]
[New Thread 0x7ffff715f700 (LWP 24009)]
[New Thread 0x7ffff695e700 (LWP 24010)]
[New Thread 0x7ffff615d700 (LWP 24011)]
[New Thread 0x7ffff595c700 (LWP 24012)]
[New Thread 0x7ffff515b700 (LWP 24013)]
[New Thread 0x7ffff495a700 (LWP 24014)]
[New Thread 0x7fffeffff700 (LWP 24015)]
[New Thread 0x7fffef7fe700 (LWP 24016)]
[New Thread 0x7fffeeffd700 (LWP 24017)]
[New Thread 0x7fffee7fc700 (LWP 24018)]
[New Thread 0x7fffedffb700 (LWP 24019)]
[New Thread 0x7fffed7fa700 (LWP 24020)]
[New Thread 0x7fffecff9700 (LWP 24021)]
[New Thread 0x7fffec7f8700 (LWP 24022)]
[New Thread 0x7fffebff7700 (LWP 24023)]
[New Thread 0x7fffeb7f6700 (LWP 24024)]
[New Thread 0x7fffeaff5700 (LWP 24025)]
[New Thread 0x7fffea7f4700 (LWP 24026)]
[New Thread 0x7fffe9ff3700 (LWP 24027)]

Program received signal SIGSEGV, Segmentation fault.
0x0000000000841f0f in ?? ()
(gdb) bt
#0  0x0000000000841f0f in ?? ()
#1  0x00000037ef4e9b31 in EVP_MD_CTX_cleanup (ctx=0x7fffffffd210) at digest.c:425
#2  0x000000000040b0f3 in http_io_add_auth4 (priv=<value optimized out>, io=<value optimized out>, now=1430426807, payload=<value optimized out>, plen=<value optimized out>) at http_io.c:2211
#3  0x000000000040e954 in http_io_set_mounted (s3b=<value optimized out>, old_valuep=0x7fffffffe00c, new_value=1) at http_io.c:806
#4  0x000000000040ee87 in s3backer_create_store (conf=0x61a7a0) at s3b_config.c:604
#5  0x0000000000408823 in fuse_op_init (conn=<value optimized out>) at fuse_ops.c:156
#6  0x00007ffff7daf8d3 in fuse_fs_init (fs=0x61b1f0, conn=0x61ba44) at fuse.c:2594
#7  0x00007ffff7dbcfe6 in do_init (req=0x61df60, nodeid=<value optimized out>, inarg=0x7ffff7160038) at fuse_lowlevel.c:1835
#8  0x00007ffff7dbe69d in fuse_ll_process_buf (data=0x61b8b0, buf=0x7fffffffe240, ch=0x61bbc8) at fuse_lowlevel.c:2441
#9  0x00007ffff7dbaa7c in fuse_session_loop (se=0x61dff0) at fuse_loop.c:40
#10 0x00007ffff7db5570 in fuse_loop (f=0x61e180) at fuse.c:4313
#11 0x00007ffff7dc2a94 in fuse_main_common (argc=<value optimized out>, argv=<value optimized out>, op=<value optimized out>, op_size=<value optimized out>, user_data=<value optimized out>, compat=<value optimized out>) at helper.c:357
#12 0x0000003b8a41ed5d in __libc_start_main (main=0x403a30 <main>, argc=9, ubp_av=0x7fffffffe4a8, init=<value optimized out>, fini=<value optimized out>, rtld_fini=<value optimized out>, stack_end=0x7fffffffe498) at libc-start.c:226
#13 0x0000000000403969 in _start ()
(gdb)

Licensing conflict

I'm currently thinking about getting s3backer into Debian, but it currently has a licensing conflict between the GPL and the OpenSSL license, as these licenses are incompatible with each other.
There are two possible ways to resolve this conflict.

  • Replace OpenSSL with GnuTLS
  • Add a license exemption for OpenSSL to the source code.

For the latter option there are already some prepared lines that can easily be added to the source code: https://people.gnome.org/~markmc/openssl-and-the-gpl.html and https://lists.debian.org/debian-legal/2004/05/msg00595.html

Feature request: list blocks in background

listBlocks can be done in the background in a worker thread while at the same 
time the filesystem is used for reading and writing. It doesn't seem like it 
should be necessary to block the mount until the listBlocks is finished.

Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 2:32

Writing to s3backer uses too much CPU

What steps will reproduce the problem?

1. On a small EC2 instance, start s3backer. Use blockSize=1M. Optionally
try using blockCacheWriteDelay>0 although it doesn't solve this specific
problem.
2. Format it and mount it. I tested several filesystems, ZFS(FUSE) caused
s3backer to use most CPU, JFS least, and Ext2 was in the middle.
3. Using 'dd' create a 100MB test file, initializing it from /dev/zero (or
optionally from /dev/urandom for slightly different results)
4. Copy the 100MB test file to the filesystem mounted on s3backer. Watch
the s3backer process using 100% CPU for more than a minute, while the
throughput is quite modest (1MB/s - 5MB/s, depending on the filesystem).

What is the expected output?

s3backer should be I/O-bound, not CPU-bound. At such low speeds it
shouldn't use 100% CPU. The only two CPU-intensive operations it performs
are AFAIK md5 calculation and zero-block checking. Both of them should be
considerably faster than 1-5 MB/s, which makes me think there is a bug
somewhere. For example, could s3backer be calculating the MD5 hash of the
entire 1MB block each time a 4096-byte sector is written?

What version of the product are you using? On what operating system?

r277 from SVN. OS is Ubuntu Intrepid (8.10) 32-bit on a small EC2 instance.

Original issue reported on code.google.com by onestone on 29 Sep 2008 at 1:18

stats file doesn't seem to update

It doesn't seem like the stats file updates when it should. On many occasions, 
I've looked at the stats file a few seconds, a few minutes, or even many 
minutes apart and found it to have exactly the same contents as it did before, 
even though there had been much s3backer activity since the last time I looked 
at it.

I noticed that it appears in the filesystem as a file rather than a device. I 
wonder if the kernel or fuse is caching its contents because it knows that 
nothing has written to the file since the last time it was read? If so, then 
perhaps it has to be made a device rather than a file so that it'll get reread 
every time it is opened?

Original issue reported on code.google.com by [email protected] on 21 Oct 2010 at 8:13

Multiple Object Delete Support

When deleting multiple blocks, network usage is suboptimal since each block is deleted with its own delete request. It would be more efficient to make a single delete request covering multiple objects (reference http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html).

The --erase flag is the most impacted scenario, but normal usage (where the filesystem is mounted with discard) could also get a boost (e.g. rm -rf /this/folder, or when deleting big files).

Another quirk seems to be the "max-keys" parameter of the GET /?marker... HTTP request, which is currently fixed at 256. It would be better to make it a command line option and set its default to the system maximum (1000). This would make the initial block listing more network-efficient.

Repeated 400 Bad Request errors

Very often, after some errors (HTTP timeout, 500), s3backer starts to generate 
"400 Bad Request" errors and gets stuck in that condition (until the retry 
timeout and give-up message).

By using tcpdump I have found the same pattern:

(Some network error - unplugging cable is enough).

20:04:35.281374 IP macbook.58450 > s3.amazonaws.com.http: tcp 365
E...4.@.@.'....eH....R.Pc....e..P...Ss..PUT /du-backup3/macos0000080f HTTP/1.1
Us
20:04:35.603823 IP s3.amazonaws.com.http > macbook.58450: tcp 25
E([email protected]..&JD..HTTP/1.1 100 Continue



20:04:55.613733 IP s3.amazonaws.com.http > macbook.58450: tcp 630
E([email protected]..&R...HTTP/1.1 400 Bad Request
x-amz-request-id
20:04:55.614898 IP s3.amazonaws.com.http > macbook.58450: tcp 5
H......e.P.R.e..c...P..&9...0


And these messages go until retry timeout.

It looks like s3backer starts a PUT request, S3 answers "100 Continue", 
nothing happens for 20 seconds, and then S3 says "400 Bad Request". s3backer 
complains in syslog, waits, and repeats the same pattern.

It happens with s3backer 1.0.4 on Mac OS X 10.5.4

s3backer connect string:

s3backer --prefix=macos --size=75M --filename=<local-file> --maxRetryPause=5000000 -o daemon_timeout=3600 <bucket> <local-dir>

I am writing a file with dd:
dd if=<another local file> of=<local-file on s3backer> bs=4096

tcpdump called like this:
tcpdump -i en1 -A -q 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'

Original issue reported on code.google.com by [email protected] on 10 Jul 2008 at 12:21

Support for RRS (Reduced Redundancy Storage)

This is a feature request for the new (cheaper) RRS that Amazon started
offering recently:

http://aws.amazon.com/s3/faqs/#How_do_I_specify_that_I_want_to_store_my_data_using_RRS

Objects need to be 'PUT' with a different storage class setting to make use
of the new storage class.

Original issue reported on code.google.com by [email protected] on 25 May 2010 at 10:08

Block 00000000 disappeared after power outage

What steps will reproduce the problem?
1. Computer was turned off due to electric failure
2.
3.

What is the expected output? What do you see instead?
The expected output is to be able to access the filesystem stored with s3backer again.
Current result: s3backer is not able to get the 00000000 file from AWS, since it does not exist.

What version of the product are you using? On what operating system?
1.3.1 on Ubuntu 8.10

Please provide any additional information below.
Everything was working fine when a power outage happened. After that, I tried to 
mount the filesystem again with no success. At first I thought I had a permission 
problem, but after trying some things I saw that the 00000000 block is not on AWS 
anymore, and that is why it is failing.

Original issue reported on code.google.com by [email protected] on 3 Mar 2010 at 2:12

enhancement request with patch: track zero blocks after startup even if --listBlocks wasn't specified

Even if --listBlocks wasn't specified, it makes sense to keep track of when 
zero blocks are read or written so that they don't have to be read or written 
repeatedly. The attached patch accomplishes this as follows:

* Change the non-zero block map into a zero block map, i.e., a bit in the map 
is set if the corresponding block is zero, rather than being set if it's 
non-zero. This change is not, strictly speaking, entirely necessary, since I 
could have just left it as a non-zero map and then checked for the opposite bit 
value, but I think it logically makes more sense for it to be zero map, and 
hence the code is clearer this way, because what we're really interested in 
knowing is the fact that a block is zero so we don't need to read or write it.

* Create an empty zero map when initializing http_io if --listBlocks wasn't 
specified.

* Add a bit to the zero map if we try to read a block and get ENOENT.

* Add a bit to the zero map if we write a zero block that wasn't previously 
zero.

This is actually the first patch of five I intend to submit in this area, if 
it's OK with you. They are:

1. This patch (track zero instead of non-zero blocks, and track even when 
--listBlocks wasn't specified).

2. Make --listBlocks happen in the background in a separate thread after the 
filesystem is mounted (this should be relatively easy to do now that I've done 
patch 1).

3. When a block that we expect to exist in S3 isn't there when we try to read 
it, restore it from the cache if possible.

4. When a block that we expect to exist in S3 isn't there when we do 
--listBlocks, restore it from the cache if possible.

5. Add an option to rerun --listBlocks periodically in the background while 
s3backer is running.

Patches 3-5 deserve some explanation. My concern is that, to a very small 
extent with regular S3 storage and to a much larger and even likely over time 
extent with reduced redundancy storage (RRS), blocks could simply disappear 
from S3 without any intervention on our part. I'm using s3backer to store my 
backups with rsync, so I'm using RRS, since all the data I'm saving exists on 
my desktop as well. However, the doc for RRS says that it should only be used 
for data that can be restored easily, and indeed it can in this case, since for 
performance reasons, my s3backer cache is big enough to hold my entire backup 
filesystem. Ergo, it makes a great deal of sense to teach s3backer how to 
automatically restore dropped blocks.

Please let me know your thoughts about this patch and my plans for the rest of 
them. Especially since I think I may need some guidance from you when 
implementing patches 3-5 :-).

Thanks,

  jik

Original issue reported on code.google.com by [email protected] on 24 Oct 2010 at 7:45

memory leak?

On my s3backer test system, which is backing up several gigabytes of photos to 
my freshly created s3backer partition, I see a steady increase in memory 
usage by s3backer.
I started monitoring this because the backup process kept crashing due to 
the unavailability of the loopback mount, which in turn was due to s3backer 
having stopped; see the dmesg output below (I'm not sure where the error starts):

...
nmbd invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
Pid: 3050, comm: nmbd Tainted: G        W  2.6.28.4 #11
Call Trace:
 [<c01345f6>] oom_kill_process+0x4d/0x17c
 [<c01349aa>] out_of_memory+0x133/0x15d
 [<c01364a4>] __alloc_pages_internal+0x2ce/0x373
 [<c0137af8>] __do_page_cache_readahead+0x74/0x152
 [<c0137ea6>] do_page_cache_readahead+0x3d/0x47
 [<c0133f6a>] filemap_fault+0x133/0x2e1
 [<c0123596>] __wake_up_bit+0x25/0x2a
 [<c013c44d>] __do_fault+0x3f/0x2da
 [<c013d617>] handle_mm_fault+0x205/0x423
 [<c01100ba>] do_page_fault+0x238/0x556
 [<c010fe82>] do_page_fault+0x0/0x556
 [<c03c31e2>] error_code+0x6a/0x70
Mem-Info:
DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: hi:  186, btch:  31 usd: 130
Active_anon:42305 active_file:38 inactive_anon:42423
 inactive_file:767 unevictable:0 dirty:1 writeback:6 unstable:0
 free:1250 slab:2226 mapped:19 pagetables:617 bounce:0
DMA free:1828kB min:92kB low:112kB high:136kB active_anon:4372kB
inactive_anon:4604kB active_file:12kB inactive_file:216kB unevictable:0kB
present:15868kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 427 427
Normal free:3172kB min:2596kB low:3244kB high:3892kB active_anon:164848kB
inactive_anon:165088kB active_file:140kB inactive_file:2852kB
unevictable:0kB present:437768kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 1*4kB 4*8kB 0*16kB 0*32kB 2*64kB 1*128kB 0*256kB 1*512kB 1*1024kB
0*2048kB 0*4096kB = 1828kB
Normal: 151*4kB 7*8kB 3*16kB 5*32kB 8*64kB 4*128kB 1*256kB 0*512kB 1*1024kB
0*2048kB 0*4096kB = 3172kB
826 total pagecache pages
0 pages in swap cache
Swap cache stats: add 890828, delete 890828, find 45044/67748
Free swap  = 0kB
Total swap = 610252kB
114400 pages RAM
2165 pages reserved
8404 pages shared
102350 pages non-shared
Out of memory: kill process 9655 (s3backer) score 15018 or a child
Killed process 9655 (s3backer)
Buffer I/O error on device loop0, logical block 261160960
...

After that I've watched cat /proc/`pidof s3backer`/status | grep Vm for
some time, and it shows the numbers steadily increasing over time.

I'm not a C debugging expert, so I may be completely wrong about a suspected leak, 
but the fact remains that s3backer keeps crashing on me on this system.

This is the commandline I use for the s3backer mount:
s3backer --vhost --blockCacheFile=/var/cache/s3backer/s3b-cache
--blockCacheSize=256 ******* /mnt/s3backer

Original issue reported on code.google.com by [email protected] on 15 Oct 2009 at 9:30

on OS X: http_io.c:759:22: error: use of undeclared identifier 'HOST_NAME_MAX'

cc -DHAVE_CONFIG_H -I.    -D_FILE_OFFSET_BITS=64 -D_DARWIN_USE_64_BIT_INODE 
-I/usr/local/Cellar/fuse4x/0.9.2/include/fuse  -g -O3 -pipe -Wall 
-Waggregate-return -Wcast-align -Wchar-subscripts -Wcomment -Wformat -Wimplicit 
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wno-long-long 
-Wparentheses -Wpointer-arith -Wredundant-decls -Wreturn-type -Wswitch 
-Wtrigraphs -Wuninitialized -Wunused -Wwrite-strings -Wshadow 
-Wstrict-prototypes -Wcast-qual  -D_FILE_OFFSET_BITS=64 
-D_DARWIN_USE_64_BIT_INODE -I/usr/local/Cellar/fuse4x/0.9.2/include/fuse  -MT 
reset.o -MD -MP -MF .deps/reset.Tpo -c -o reset.o reset.c

http_io.c:759:22: error: use of undeclared identifier 'HOST_NAME_MAX'
        char content[HOST_NAME_MAX + 64];
                     ^
1 error generated.


Version 1.3.5

OS X: 10.8.4-x86_64
Xcode: 4.6.2
CLT: 4.6.0.0.1.1365549073

Original issue reported on code.google.com by [email protected] on 5 Jun 2013 at 2:28

Hot cache entries

I think there is room for another optimization in the local caching area. The most often accessed blocks should stay in the local cache longer than other blocks. This would allow s3backer to avoid network I/O on blocks such as superblocks or inode block maps on ext4, which means more performance since they don't need to be fetched again and again during periods of heavy writes. Simple statistics, I think, should be enough to track these blocks and lock them in the cache.

Abort trap doing anything on Mac OS X 10.6

What steps will reproduce the problem?
1. Install fuse etc. with MacPorts
2. Download and build s3backer-1.3.1
3. Try the example command at 
http://code.google.com/p/s3backer/wiki/CreatingANewFilesystem

I modified the Makefile to remove -O3 and ran s3backer under GDB to get some 
traceback.  Here's the result:

(gdb) run --blockSize=128k --size=1t --listBlocks rptb1-backup-test 
/Users/rb/tmp/mnt
Starting program: /Users/rb/opt/s3backer-1.3.1/s3backer --blockSize=128k 
--size=1t --listBlocks rptb1-backup-test /Users/rb/tmp/mnt
Reading symbols for shared libraries .+++++++......... done
s3backer: auto-detecting block size and total file size...

Program received signal SIGABRT, Aborted.
0x00007fff8315e3d6 in __kill ()
(gdb) bt
#0  0x00007fff8315e3d6 in __kill ()
#1  0x00007fff831fe913 in __abort ()
#2  0x00007fff831e2ff0 in __stack_chk_fail ()
#3  0x000000010000cb66 in http_io_get_auth (buf=0x7fff5fbf1960 'A' <repeats 27 
times>, "=", bufsiz=200, config=0x10001aca0, method=0x100014a5f "HEAD", 
ctype=0x0, md5=0x0, date=0x7fff5fbf1a30 "Tue, 21 Sep 2010 22:23:00 GMT", 
headers=0x0, resource=0x7fff5fbf17c9 "/00000000") at http_io.c:1530
#4  0x0000000100009b1f in http_io_detect_sizes (s3b=0x100506220, 
file_sizep=0x7fff5fbf1c08, block_sizep=0x7fff5fbf1c3c) at http_io.c:664
#5  0x0000000100010890 in validate_config () at s3b_config.c:1118
#6  0x000000010000e10a in s3backer_get_config (argc=6, argv=0x7fff5fbff4b0) at 
s3b_config.c:491
#7  0x0000000100001106 in main (argc=6, argv=0x7fff5fbff4b0) at main.c:40

I've tried various combinations of options.  Adding -d, --debug, and 
--debug-http does not produce any extra output.

Original issue reported on code.google.com by [email protected] on 21 Sep 2010 at 10:42

Fsck says uncleanly unmounted after umount used to unmount

I'm using s3backer with an ext2 filesystem.
Recently, I unmounted the ext2 loop filesystem and then unmounted the
s3backer filesystem. It unmounted immediately, even though I believe there
were pending writes that had not yet been flushed to S3.
Then I remounted the s3backer filesystem and ran fsck on the "file" file in it.
Fsck reported that the device was not cleanly unmounted and forced a check,
which I believe confirms that there were pending writes that had not yet been
flushed.
Shouldn't umount of the FUSE filesystem block until all pending writes are 
flushed to disk? This is how local filesystems work, obviously.
If not, then what's the recommended way to ensure that data is not lost?
I'm using r437 on Linux (Fedora 14).
Thanks.
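
One way to request a flush before unmounting, assuming the FUSE layer forwards fsync(2) to s3backer's block cache (an assumption, not a documented guarantee for this release), is to fsync the backed file explicitly. A minimal sketch with a hypothetical mount path:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* hypothetical path to the file exported by s3backer */
        int fd = open("/mnt/s3backer/file", O_RDWR);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        if (fsync(fd) == -1)        /* block until dirty data is pushed out */
            perror("fsync");
        close(fd);
        return 0;
    }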

Original issue reported on code.google.com by [email protected] on 15 Oct 2010 at 2:53

  • Merged into: #40

Needed to add pthread library check to configure.ac

I tried to build r437 from subversion and it complained about pthread_create 
not being there. I had to make this change to make it build:

--- configure.ac~   2010-10-14 23:17:21.977142077 -0400
+++ configure.ac    2010-10-14 23:17:23.055284667 -0400
@@ -51,6 +51,8 @@ PKG_CHECK_MODULES(FUSE, fuse,,
     [AC_MSG_ERROR(["fuse" not found in pkg-config])])

 # Check for required libraries
+AC_CHECK_LIB(pthread, pthread_create,,
+   [AC_MSG_ERROR([required library pthread missing])])
 AC_CHECK_LIB(curl, curl_version,,
    [AC_MSG_ERROR([required library libcurl missing])])
 AC_CHECK_LIB(crypto, BIO_new,,

Thanks for creating and maintaining this package!

Original issue reported on code.google.com by [email protected] on 15 Oct 2010 at 4:00

Support FUSE-passed mount options from /etc/fstab


Quoting from email thread...

-----------------------

Like this

-o size=4T -o blockSize=1M -o blockCacheSize=10

which works with /etc/fstab like so

s3backer#bucket /mnt/bucket fuse size=4T,blockSize=1M,blockCacheSize=10 0 0

Then you can just mount with

mount /mnt/bucket

If I have to use custom flags on the command line, like
s3backer --size=XXXX --blockSize=1M --blockCacheSize=10, then I can't
use the fuse app with /etc/fstab or autofs, because the invocation
doesn't fit the standard mount-option format.

See the s3fs project on Google Code for examples.

Sure would be awesome if this is a quick fix.  I would love to
automount s3backer.

-----------------------
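
For reference, FUSE's option parser is designed for exactly this "-o name=value" style. A minimal sketch of how such options could be mapped onto a configuration struct (the struct, field, and option names here are illustrative, not s3backer's actual ones):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <stddef.h>

    /* Hypothetical configuration filled in from "-o" options. */
    struct my_config {
        char *size;
        char *block_size;
    };

    #define MY_OPT(templ, field) { templ, offsetof(struct my_config, field), 0 }

    static const struct fuse_opt my_opts[] = {
        MY_OPT("size=%s",      size),        /* -o size=4T      */
        MY_OPT("blockSize=%s", block_size),  /* -o blockSize=1M */
        FUSE_OPT_END
    };

    /* Later, during startup:
     *     fuse_opt_parse(&args, &config, my_opts, NULL);
     * leaves unrecognized options in args for fuse_main() to handle. */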

Original issue reported on code.google.com by [email protected] on 14 Apr 2009 at 9:09

Not sure blockCacheMaxDirty is working

I have blockCacheMaxDirty=10 in /etc/fstab, and "ps auxww | grep s3backer" 
confirms that it was passed to s3backer when it was started.

Nevertheless, although I've unmounted my reiserfs loop filesystem and "sync" 
has successfully run to completion, s3backer has been actively writing 
additional blocks to S3 for several minutes, far more than 10 of them.

It doesn't appear to me that blockCacheMaxDirty is working.

Here are my /etc/fstab entries:

s3backer#jik2-backup-dev /mnt/s3backer-dev fuse    accessFile=/etc/passwd-s3fs,compress=9,rrs,blockCacheFile=/var/cache/s3backer-dev-cache,size=100G,blockSize=128k,blockCacheSize=327680,blockCacheThreads=6,blockCacheMaxDirty=10,noatime,noauto
/mnt/s3backer-dev/file /mnt/s3backer-fs reiserfs loop,noatime,noauto
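
For what it's worth, the semantics the reporter expects would normally be enforced by making writers wait while the dirty count is at the limit. A hypothetical sketch (field and function names are illustrative, not s3backer's actual block-cache code):

    #include <pthread.h>

    /* Hypothetical block cache state. */
    struct cache {
        pthread_mutex_t mutex;
        pthread_cond_t  space_avail;    /* signaled when a dirty block is written to S3 */
        unsigned int    num_dirty;      /* blocks modified but not yet uploaded */
        unsigned int    max_dirty;      /* the blockCacheMaxDirty limit */
    };

    /* A writer must wait until the number of dirty blocks drops below the
     * configured limit before it may dirty another one. */
    static void
    wait_for_dirty_slot(struct cache *c)
    {
        pthread_mutex_lock(&c->mutex);
        while (c->num_dirty >= c->max_dirty)
            pthread_cond_wait(&c->space_avail, &c->mutex);
        c->num_dirty++;
        pthread_mutex_unlock(&c->mutex);
    }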

Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 11:42

Segfault while creating filesystem

I am running s3backer in test mode with:

./s3backer --test --prefix=s3backer --size=2g /tmp /mnt

Then:

mke2fs -b 4096 -F /mnt/file

After writing some blocks, I get a segfault in the kernel log:
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180c8 started (zero block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180c9 started (zero block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180ca started (zero block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180cb started (zero block)
Aug 10 23:07:56 sleepless kernel: [101497.099010] s3backer[6531]: segfault at 0000011b eip b7ebe86a esp b7065060 error 4

System is Ubuntu 8.04.1 (hardy). Latest updates installed.
s3backer configured and compiled correctly.

When I enable the debug output of FUSE and s3backer and force it to stay in
the foreground, this does _not_ happen.

The machine is running on a dual-core CPU (Athlon64 X2): Linux sleepless 2.6.24-19-generic #1 SMP Fri Jul 11 23:41:49 UTC 2008 i686 GNU/Linux

Original issue reported on code.google.com by [email protected] on 10 Aug 2008 at 9:15

EU buckets support

It seems s3backer doesn't support EU buckets:

EU buckets MUST be addressed using the virtual hosted style method:
http://<bucket>.s3.amazonaws.com/

See:
http://docs.amazonwebservices.com/AmazonS3/latest/index.html?VirtualHosting.html
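
A minimal sketch of the two addressing styles (variable names are illustrative); EU buckets require the second form:

    #include <stdio.h>

    /* Hypothetical: build both URL styles for a given bucket and key. */
    void
    build_urls(char *path_style, char *hosted_style, size_t bufsiz,
               const char *bucket, const char *key)
    {
        /* path-style:           http://s3.amazonaws.com/<bucket>/<key> */
        snprintf(path_style, bufsiz, "http://s3.amazonaws.com/%s/%s", bucket, key);

        /* virtual-hosted-style: http://<bucket>.s3.amazonaws.com/<key> */
        snprintf(hosted_style, bufsiz, "http://%s.s3.amazonaws.com/%s", bucket, key);
    }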

Original issue reported on code.google.com by [email protected] on 1 Oct 2009 at 7:51

Configure doesn't find fuse4x from MacPorts

What steps will reproduce the problem?

1. Install fuse4x via MacPorts.  (MacFUSE project is long dead.)

2. Unpack s3backer and try ./configure.  It will complain there is no libfuse.

As far as I can tell, configure is not using pkg-config to look for fuse.  
pkg-config can find it just fine.

You can work around this with a command like

  CFLAGS='-I /opt/local/include' LIBS='-L /opt/local/lib' ./configure

What version of the product are you using? On what operating system?

s3backer-1.3.2 on Mac OS X 10.7.  uname -a says

Darwin Albatross.local 11.0.0 Darwin Kernel Version 11.0.0: Sat Jun 18 12:57:44 
PDT 2011; root:xnu-1699.22.73~1/RELEASE_I386 i386

Please provide any additional information below.

It builds without error. I can run the steps on the Mac wiki page, create a
DMG, and put stuff in it. However, everything seems to hang on unmount, and
the S3 management console shows no data written. I'll follow up when I've
investigated further.

Original issue reported on code.google.com by [email protected] on 4 Aug 2011 at 3:03

maxRetryPause doesn't have desired effect on Leopard (Mac OS X 10.5)

What steps will reproduce the problem?

1. Mount s3backer file with large "--maxRetryPause" value:
s3backer --prefix=macos --size=75M --filename=s3-backup3-remote.dmg
--maxRetryPause=300000 -d -f bucket mnt-bucket

2. Copy something to that file with dd:
dd if=local-75M-non-empty-file of=mnt-bucket/s3-backup3-remote.dmg bs=4096

3. Shut down network interface.

4. After attempt #8, s3backer exits and the file system unmounts.
At that moment, system.log shows this message:

Jul 10 17:07:28 macbook kernel[0]: MacFUSE: force ejecting (no response
from user space 5)

When starting, s3backer prints the correct maxRetryPause value and uses it, but
MacFUSE has its own timeout option, "daemon_timeout", which has some default
value. After that timeout, Tiger (10.4) shows a "File system timeout" dialog
box with some useful options, but Leopard does not: it just kills the user
process and unmounts the file system.

So I believe it is worth mentioning in the man page that one has to include the
"daemon_timeout" option in the command line arguments when maxRetryPause has a
non-default value, or maybe just set it to some very high value by default.

This is the latest version (1.04) of s3backer, on Mac OS X 10.5.4.

s3backer startup log:

2008-07-10 17:05:36 INFO: created s3backer using http://s3.amazonaws.com/du-backup3
s3backer: auto-detecting block size and total file size...
2008-07-10 17:05:36 DEBUG: HEAD http://s3.amazonaws.com/du-backup3/macos00000000
s3backer: auto-detected block size=4k and total size=75m
2008-07-10 17:05:37 DEBUG: s3backer config:
2008-07-10 17:05:37 DEBUG:         accessId: 
2008-07-10 17:05:37 DEBUG:        accessKey: "****"
2008-07-10 17:05:37 DEBUG:       accessFile: "/Users/demon/.s3backer_passwd"
2008-07-10 17:05:37 DEBUG:           access: "private"
2008-07-10 17:05:37 DEBUG:     assume_empty: false
2008-07-10 17:05:37 DEBUG:          baseURL: "http://s3.amazonaws.com/"
2008-07-10 17:05:37 DEBUG:           bucket: "du-backup3"
2008-07-10 17:05:37 DEBUG:           prefix: "macos"
2008-07-10 17:05:37 DEBUG:            mount: "/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 DEBUG:         filename: "s3-backup3-remote.dmg"
2008-07-10 17:05:37 DEBUG:       block_size: - (4096)
2008-07-10 17:05:37 DEBUG:       block_bits: 12
2008-07-10 17:05:37 DEBUG:        file_size: 75M (78643200)
2008-07-10 17:05:37 DEBUG:       num_blocks: 19200
2008-07-10 17:05:37 DEBUG:        file_mode: 0600
2008-07-10 17:05:37 DEBUG:        read_only: false
2008-07-10 17:05:37 DEBUG:  connect_timeout: 30s
2008-07-10 17:05:37 DEBUG:       io_timeout: 30s
2008-07-10 17:05:37 DEBUG: initial_retry_pause: 200ms
2008-07-10 17:05:37 DEBUG:  max_retry_pause: 300000ms
2008-07-10 17:05:37 DEBUG:  min_write_delay: 500ms
2008-07-10 17:05:37 DEBUG:       cache_time: 10000ms
2008-07-10 17:05:37 DEBUG:       cache_size: 10000 entries
2008-07-10 17:05:37 DEBUG: fuse_main arguments:
2008-07-10 17:05:37 DEBUG:   [0] = "s3backer"
2008-07-10 17:05:37 DEBUG:   [1] = "-o"
2008-07-10 17:05:37 DEBUG:   [2] = "kernel_cache,fsname=s3backer,use_ino,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=31536000,default_permissions,nodev,nosuid"
2008-07-10 17:05:37 DEBUG:   [3] = "-d"
2008-07-10 17:05:37 DEBUG:   [4] = "-f"
2008-07-10 17:05:37 DEBUG:   [5] = "/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 INFO: s3backer process 10403 for /Users/demon/mounts/mnt-du-backup3 started


And it dies like that:

2008-07-10 17:06:28 DEBUG: PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 NOTICE: HTTP operation timeout: PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 INFO: retrying query (attempt #2): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:06:59 INFO: retrying query (attempt #3): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:00 INFO: retrying query (attempt #4): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:00 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:02 INFO: retrying query (attempt #5): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:02 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:05 INFO: retrying query (attempt #6): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:05 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:11 INFO: retrying query (attempt #7): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:11 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:24 INFO: retrying query (attempt #8): PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:24 ERROR: curl error: couldn't resolve host name

(end of the messages. s3backer exits)
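
The retry cadence visible above (pauses that roughly double between attempts, starting from initial_retry_pause) is what maxRetryPause bounds, assuming from the option name and this log that it caps the total time spent retrying; that retry window can exceed MacFUSE's daemon_timeout, which is why the kernel force-ejects first. A rough sketch of that pacing, purely illustrative and not s3backer's actual retry loop:

    #include <time.h>

    static void
    sleep_ms(unsigned int ms)
    {
        struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
    }

    /* Retry an operation with doubling pauses, giving up once the total
     * time spent pausing exceeds max_retry_pause_ms (e.g. 200 ... 300000). */
    static int
    retry_with_backoff(int (*attempt)(void),
                       unsigned int initial_pause_ms, unsigned int max_retry_pause_ms)
    {
        unsigned int pause_ms = initial_pause_ms;
        unsigned int total_ms = 0;
        int ret;

        while ((ret = attempt()) != 0) {
            if (total_ms >= max_retry_pause_ms)
                break;                      /* out of retry budget: give up */
            sleep_ms(pause_ms);
            total_ms += pause_ms;
            pause_ms *= 2;
        }
        return ret;
    }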

Original issue reported on code.google.com by [email protected] on 10 Jul 2008 at 9:29

can't read block zero meta-data: Operation not permitted

Hello,

When I try to execute "s3backer mybucket /root/mybucket/", I get the following 
error:

root@dev-intranet:~# s3backer mybucket /root/mybucket/
s3backer: auto-detecting block size and total file size...
s3backer: can't read block zero meta-data: Operation not permitted

I use Debian and the latest version of s3backer.

Has anyone else had this error?

Thank you, and sorry for my English.
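
"Operation not permitted" is errno EPERM; a likely cause (an assumption, not a confirmed diagnosis) is that the initial HEAD request for block zero was rejected by S3, for example because of missing or incorrect access credentials, and the HTTP error was translated into an errno. A hypothetical sketch of such a mapping, not s3backer's actual table:

    #include <errno.h>

    /* Hypothetical translation of an S3 HTTP status into an errno. */
    static int
    http_status_to_errno(long http_code)
    {
        switch (http_code) {
        case 200:
        case 204:
            return 0;           /* success */
        case 403:
            return EPERM;       /* request rejected: bad or missing credentials */
        case 404:
            return ENOENT;      /* object does not exist */
        default:
            return EIO;         /* anything else: generic I/O error */
        }
    }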

Original issue reported on code.google.com by [email protected] on 2 Jan 2013 at 2:52

s3backer fails on buckets outside the "US Standard" region

What steps will reproduce the problem?
1. Make an S3 bucket in region "Ireland".
2. Apply s3backer to it.

What is the expected output? What do you see instead?

s3backer fails to read or write to the bucket with messages like

* Failed writing received data to disk/application

What version of the product are you using? On what operating system?

s3backer-1.3.2 on Mac OS X 10.7

rb@Crane$ uname -a
Darwin Crane.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 
2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

Please provide any additional information below.

Reading the source code, I found the --debug-http flag and used it. Amazon is
sending back HTTP 301 "Moved Permanently" responses to requests.

< HTTP/1.1 301 Moved Permanently
...
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Thu, 04 Aug 2011 15:28:54 GMT
< Connection: close
< Server: AmazonS3

These do not seem to include a Location header pointing at the new endpoint.

I'm happy to report that s3backer seems to be working fine for me with a bucket 
in the US Standard region.
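
For reference, the 301 can be detected programmatically after each request; a minimal libcurl sketch (illustrative only, not the project's actual HTTP layer):

    #include <curl/curl.h>

    /* Hypothetical check run after curl_easy_perform(): S3 answers 301
     * when a bucket homed in another region is addressed through the
     * default http://s3.amazonaws.com/ endpoint. */
    static int
    redirected_to_other_region(CURL *curl)
    {
        long http_code = 0;

        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code);
        return http_code == 301;    /* caller should retry against the bucket's own endpoint */
    }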

Original issue reported on code.google.com by [email protected] on 4 Aug 2011 at 3:46
