s3fs's People

Contributors

apetresc, benlemasurier, ggtakec, mooredan

s3fs's Issues

-v switch

Please add a -V or -v flag (or both?) so that, when passed, s3fs reports
which version it is; that would make it easy to know which version of s3fs
I'm running. :)

Original issue reported on code.google.com by [email protected] on 3 Mar 2008 at 5:39

~/.s3fs issue with Fedora 4 AMI

I have a quick query. I set up s3fs on my Fedora 4 AMI.

I set up a cron job to remove ~/.s3fs.

However, it's not working. The cache is growing rapidly and I cannot locate
the ~/.s3fs directory.

The cron job is clearly not doing its job: when I finished setting it up and
ran df -T, /dev/sda1 was at 12% usage.

Now, after moving some files, it is at 45%.

Please help.

Regards,
Hareem.

Original issue reported on code.google.com by [email protected] on 4 Jan 2008 at 1:54

getattr()

do this in getattr()

stbuf->st_uid = getuid();
stbuf->st_gid = getgid();
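
For context, here is a minimal, hypothetical sketch of a FUSE getattr()
callback that applies this suggestion (the surrounding stat fields are
illustrative only, not s3fs's actual implementation):

#include <sys/stat.h>
#include <string.h>
#include <unistd.h>

// Hypothetical sketch, not s3fs's s3fs_getattr(): report the mounting
// user's uid/gid for every object instead of hardcoded values.
static int example_getattr(const char *path, struct stat *stbuf) {
    memset(stbuf, 0, sizeof(struct stat));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;   // real code would query S3 here
    } else {
        stbuf->st_mode = S_IFREG | 0644;
    }
    stbuf->st_nlink = 1;
    stbuf->st_uid = getuid();   // the change requested in this issue
    stbuf->st_gid = getgid();
    return 0;
}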

Original issue reported on code.google.com by [email protected] on 13 Mar 2008 at 1:12

discard curl handles found to be "bad"?

when a curl handle causes s3fs to return -EIO, perhaps s3fs should *not*
recycle the handle back into the pool?

I ran into a case where I appeared to have a "bad" curl handle in the
pool... I don't know how it went bad, but there was then no way to get rid
of it short of unmounting and remounting.
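
One possible shape for this (a sketch only; the pool and function names here
are invented for illustration, not s3fs internals):

#include <curl/curl.h>
#include <pthread.h>
#include <stack>

static std::stack<CURL*> handle_pool;                  // hypothetical pool
static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;

// Destroy a handle that produced an error instead of recycling it.
static void release_handle(CURL *curl, CURLcode result) {
    if (result != CURLE_OK) {
        curl_easy_cleanup(curl);          // discard the suspect handle
        return;
    }
    curl_easy_reset(curl);                // scrub options before reuse
    pthread_mutex_lock(&pool_mutex);
    handle_pool.push(curl);               // healthy handles go back to the pool
    pthread_mutex_unlock(&pool_mutex);
}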

Original issue reported on code.google.com by [email protected] on 9 Apr 2008 at 3:42

ls: reading directory .: Input/output error

What steps will reproduce the problem?
1. I have put credentials in /etc/passwd-s3fs
2. mkdir /mnt/mybucket

What is the expected output? What do you see instead?

{Make}

[root@vmx1 s3fs]# make install
g++ -ggdb -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse  -pthread -lfuse
-lrt -ldl    -L/usr/kerberos/lib -lcurl -lgssapi_krb5 -lkrb5 -lk5crypto
-lcom_err -lresolv -ldl -lidn -lssl -lcrypto -lz   -I/usr/include/libxml2
-L/usr/lib -lxml2 -lz -lm -lcrypto s3fs.cpp -o s3fs
s3fs.cpp:440: warning: 'size_t readCallback(void*, size_t, size_t, void*)'
defined but not used
ok!
cp -f s3fs /usr/bin

{Mount}

[root@vmx1]# /usr/bin/s3fs mybucket /mnt/mybucket [OK]

{LS}

[root@vmx1]# cd /mnt/mybucket
[root@vmx1]# ls
ls: reading directory .: Input/output error

What version of the product are you using? On what operating system?
OS: CentOS 5.2, FUSE 2.7, s3fs r177


Please provide any additional information below.
I have rechecked my access key ID and secret key three times. S3Fox is able
to see the bucket, and the bucket EXISTS.

Original issue reported on code.google.com by mohd%[email protected] on 20 Jan 2009 at 8:07

implement create()

implement creat()

This should save one (re)upload when using rsync.
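
For reference, a bare-bones, hypothetical sketch of a FUSE create() handler
(the temp-file staging shown here is an assumption, not s3fs's code):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

// Hypothetical sketch: stage writes in an unlinked local temp file and
// upload once on flush/release, so rsync's create+write+close sequence
// costs only a single upload.
static int example_create(const char *path, mode_t mode,
                          struct fuse_file_info *fi) {
    (void) path;
    (void) mode;   // a real handler would also record the mode as S3 metadata
    char tmpl[] = "/tmp/s3fs-create-XXXXXX";
    int fd = mkstemp(tmpl);
    if (fd == -1)
        return -errno;
    unlink(tmpl);                 // keep only the open descriptor
    fi->fh = fd;                  // later write()/flush() callbacks use fi->fh
    return 0;
}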

Original issue reported on code.google.com by [email protected] on 1 Mar 2008 at 9:42

directory listings and libcurl 7.19.4

Directory listings return errors under the following combination of conditions:

1. s3fs is compiled with libcurl 7.19.4 in gentoo x86 using either 'emerge
s3fs' or make.
2. The directory being listed contains a subdirectory.
3. The program doing the listing is a graphical FTP or SCP client such as
FileZilla, WinSCP, or the GNOME "Connect to Server" FTP browser.

Error returned by Filezilla FTP client for Windows:
{{{Command: LIST
Response:   150 Here comes the directory listing.
Response:   500 OOPS: invalid inode size in vsf_sysutil_statbuf_get_size
Error:  Failed to retrieve directory listing}}}

Error returned by WinSCP SCP client for Windows:
{{{Error listing directory '/mnt/mybucketname'.
Unexpected directory listing line 'drwxr-xr-x 1 root root
18446744073709551615 2009-03-09 20:52:46.000000000 -0700 mydirectoryname'.}}}

The correct listing is returned with no errors under the above conditions
with 'ls /mnt/mybucketname/', command-line ftp, and the GNOME "Connect to
Server" SSH file browser.

I then tested the same scenario on Ubuntu 8.04 with s3fs compiled against
libcurl 7.18.0, keeping everything else the same. There are no errors when
the same bucket is mounted on Ubuntu 8.04 and s3fs is compiled there with
libcurl 7.18.0.

Gentoo x86, s3fs r177, libcurl 7.19.4:
{{{# ls -al /mnt/mybucketname/
total 0
drwxr-xr-x 1 root root 18446744073709551615 Mar  9 20:52 mydirectoryname}}}

Ubuntu 8.04, s3fs r177, libcurl 7.18.0:
{{{# ls -al /mnt/mybucketname/
total 0
drwxr-xr-x 1 root root 0 2009-03-09 20:52 mydirectoryname}}}

Gentoo returns a strangely large value for the size of the directory
object: 18446744073709551615 is 2^64 - 1 (about 16 exabytes), which looks
like -1 cast to an unsigned 64-bit size. Could this be related to libcurl?

Note that Gentoo just cleaned out all but the newest versions of libcurl
from the Portage tree so there is no rolling back for some users.

http://sources.gentoo.org/viewcvs.py/gentoo-x86/net-misc/curl/ChangeLog?view=markup

Original issue reported on code.google.com by [email protected] on 10 Mar 2009 at 8:09

Quick Query

Thanks for the help. Everything now works great.

Firstly, is it OK if I post my questions here, or should I direct them
somewhere else?

Secondly, I used s3fs by mounting a bucket. Data transfer went smoothly and
everything looks good, except that every time the instance restarts, the
s3fs-mounted bucket disappears.

Is there any way I can tell the instance to re-mount the bucket
automatically after it comes out of a restart cycle?

Thanks
Hareem.
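
One common approach (a sketch under assumptions only; the paths, bucket
name, and fstab syntax below are illustrative and depend on the s3fs build)
is to mount the bucket at boot from an init script or /etc/fstab:

# Re-mount the bucket at boot, e.g. from /etc/rc.local
# (credentials assumed to be in /etc/passwd-s3fs, as elsewhere on this page):
/usr/bin/s3fs mybucket /mnt/mybucket

# Or, if the s3fs build supports fstab-style mounting, an /etc/fstab line:
# s3fs#mybucket /mnt/mybucket fuse _netdev,allow_other 0 0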

Original issue reported on code.google.com by [email protected] on 27 Nov 2007 at 3:38

There is no error checking whatsoever

As is, s3fs is inherently unreliable. As a simple example, BIO_write and
BIO_flush can return a short write or error, and s3fs will totally fail to
notice it.
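
As a concrete illustration (a sketch only, not the actual s3fs code), the
base64 step of the signature calculation could check those return values:

#include <openssl/bio.h>
#include <openssl/buffer.h>
#include <openssl/evp.h>
#include <string>

// Hypothetical sketch: fail loudly on short writes or flush errors instead
// of ignoring them; "digest"/"digest_len" stand in for the HMAC output.
static bool base64_encode(const unsigned char *digest, int digest_len,
                          std::string &out) {
    BIO *b64 = BIO_new(BIO_f_base64());
    BIO *mem = BIO_new(BIO_s_mem());
    if (b64 == NULL || mem == NULL) {
        if (b64) BIO_free(b64);
        if (mem) BIO_free(mem);
        return false;
    }
    b64 = BIO_push(b64, mem);
    if (BIO_write(b64, digest, digest_len) != digest_len ||   // short write?
        BIO_flush(b64) != 1) {                                // flush failed?
        BIO_free_all(b64);
        return false;
    }
    BUF_MEM *bptr = NULL;
    BIO_get_mem_ptr(b64, &bptr);
    out.assign(bptr->data, bptr->length);
    BIO_free_all(b64);
    return true;
}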

Original issue reported on code.google.com by [email protected] on 17 Dec 2007 at 10:51

Performance feedback : relative slow read performance detected

After installing quite a few dev libraries I managed to get s3fs compiled
and working on an EC2 Ubuntu 7.10 instance. I've been having a lot of fun
with it, thanks for taking the effort to write this nice tool.

I'm testing s3fs to see if I can use it to read data (a CSV file in this
case) in parallel across multiple EC2 servers.  

See also: 
- http://www.ibridge.be/?p=101
- http://kettle.pentaho.org

Here is feedback on the write and read performance of s3fs when dealing
with large files like the one I used. (single instance for the time being)

WRITE PERFORMANCE:
-------------------

root@domU-12-31-35-00-2A-52:~# time cp /tmp/customers-25M.txt /s3/kettle/

real    6m27.266s
user    0m0.260s
sys     0m9.210s

root@domU-12-31-35-00-2A-52:~# ls -lrta /tmp/customers-25M.txt
-rw-r--r-- 1 matt matt 2614561970 Apr  4 19:53 /tmp/customers-25M.txt

2614561970 bytes / 387.266 s ≈ 6.4 MB/s write capacity.

READ PERFORMANCE:
-------------------

root@domU-12-31-35-00-2A-52:~# time wc -l /s3/kettle/customers-25M.txt
25000001 /s3/kettle/customers-25M.txt

real    4m36.054s
user    0m0.810s
sys     0m0.950s

2614561970 bytes / 276.054 s ≈ 9.0 MB/s read capacity

I couldn't care less about the write performance, but I would have expected
the read capacity to be higher and so I did a little investigation. 
Apparently, there is intensive "caching" going on on the local disk.  This
happens at around 10MB/s.  When that is done, the actual reading takes
place at 60+MB/s. (see below)

It would be nice if you could find a way to disable this disk-based caching
altogether. I tried to create a small RAM-disk filesystem and used the
option -use_cache /tmp/ramdisk0, but the error I got was:

wc: /s3/mattcasters/customers-25M.txt: Bad file descriptor
0 /s3/mattcasters/customers-25M.txt

The /tmp/ramdisk0/ filesystem was small, and as such very likely too small
to hold the 2.4 GB file. (It was at 100% after the test and contained part
of the file.)
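
For what it's worth, a RAM-backed cache directory large enough for the file
can be created with tmpfs (the mount point and 3 GB size below are only
illustrative, and this assumes enough RAM on the instance):

# Hypothetical example: a tmpfs big enough to hold the 2.4 GB file,
# then point s3fs's cache at it.
mkdir -p /tmp/ramdisk0
mount -t tmpfs -o size=3g tmpfs /tmp/ramdisk0
s3fs mybucket /s3 -o use_cache=/tmp/ramdisk0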

I believe that S3 charges per transfer request, not just per data volume
transferred, so perhaps you are doing the right thing cost-wise. However,
perhaps it should be possible for users like me to set some kind of maximum
block size parameter. With that you could allow the creation of a
memory-based cache (say a few hundred MB) that doesn't have a file-write
I/O penalty.

That would perhaps also help in the case where you don't want to read the
complete file, but only a portion of it. This can be interesting in our
parallel read case. (where each of the EC2 nodes is going to do a seek in
the file and read 1/Nth of the total file)

Matt


"iostat -k 5" during cache creation:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.40    0.00    1.20   22.51    0.40   75.50

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1            424.90         0.80      8916.33          4      44760
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.60   33.60    0.40   65.40

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1            444.20         0.00      9087.20          0      45436
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0



"iostat -k 5" during cache read-out:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.00    0.00    9.98   87.82    0.20    0.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1           1319.96     55572.06         2.40     278416         12
sda2              0.00         0.00         0.00          0          0
sda3              0.20         0.00         0.80          0          4

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.80    0.00   10.00   86.80    1.40    0.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1           1245.40     52591.20         2.40     262956         12
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.00    0.00   10.38   88.42    0.00    0.20

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1           1191.82     50286.63         2.40     251936         12
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.40    0.00   10.38   86.03    0.80    0.40

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1           1370.26     57829.94         2.40     289728         12
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0

Original issue reported on code.google.com by [email protected] on 5 Apr 2008 at 12:20

Missing NULL check for ContentType causes core dump with certain files


When accessing a bucket containing a "directory" key created by S3Fox (a
Firefox S3 browser extension), s3fs segfaulted.

Debugging showed that when S3Fox creates a "directory", it actually creates
a key named foldername_$folder$ which seems to have no ContentType
associated with it; this caused s3fs to try to strcmp a NULL pointer at
line 373.

To re-create the problem, just use S3Fox to create a "directory" in a
bucket and try to list the bucket contents using s3fs.

The attached patch adds a simple NULL check and resolves the issue. With
this s3fs works just fine.

Please apply.

Gilad Ben-Yossef <[email protected]>
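
The essence of such a guard, reconstructed here as a sketch (the function
and the "application/x-directory" content type are illustrative, not
necessarily what s3fs.cpp uses at line 373):

#include <curl/curl.h>
#include <string.h>
#include <sys/stat.h>

// Hypothetical sketch: curl_easy_getinfo() may report a NULL ContentType
// for keys like "foldername_$folder$" created by S3Fox, so check before
// calling strcmp().
static void apply_content_type(CURL *curl, struct stat *stbuf) {
    char *content_type = NULL;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_TYPE, &content_type);
    if (content_type != NULL &&
        strcmp(content_type, "application/x-directory") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;   // treat as a directory
    } else {
        stbuf->st_mode = S_IFREG | 0644;   // default: regular file
    }
}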

Original issue reported on code.google.com by [email protected] on 20 Jan 2008 at 1:59

rsync not working w/use_cache?

[root@localhost ~]# rsync -av workspace2 /s3
building file list ... done
workspace2/
workspace2/.metadata/
rsync: failed to set times on "/s3/workspace2/.metadata": Bad file
descriptor (9)
workspace2/.metadata/.lock
workspace2/.metadata/.log
workspace2/.metadata/aaa
workspace2/.metadata/version.ini
workspace2/.metadata/.plugins/
rsync: failed to set times on "/s3/workspace2/.metadata/.plugins": Bad file
descriptor (9)
workspace2/.metadata/.plugins/org.eclipse.core.resources/
rsync: failed to set times on
"/s3/workspace2/.metadata/.plugins/org.eclipse.core.resources": Bad file
descriptor (9)

Original issue reported on code.google.com by [email protected] on 23 Feb 2008 at 4:10

chown 1000:1000 not working?

chown 1000:1000 on a root-owned file does not appear to change ownership to
1000:1000.

It might have something to do with there being no symbolic user/group name
for uid/gid 1000:1000.

Original issue reported on code.google.com by [email protected] on 14 Aug 2008 at 6:47

curl_multi_timeout issue again in latest release...

What steps will reproduce the problem?
1. make clean
2. make
3. pull hair

What is the expected output? What do you see instead?

starbase2:~/s3fs root# make
g++ -Wall -D__FreeBSD__=10 -D_FILE_OFFSET_BITS=64 -D__FreeBSD__=10 -
D_FILE_OFFSET_BITS=64 -I/usr/local/include/fuse  -L/usr/local/lib -
lfuse   -lcurl -lcrypto -I/usr/include/libxml2 -L/usr/lib -lxml2 -lz -
lpthread -liconv -lm -ggdb s3fs.cpp -o s3fs
s3fs.cpp: In function `int s3fs_readdir(const char*, void*, int (*)(void*, 
const char*, const stat*, off_t), off_t, fuse_file_info*)':
s3fs.cpp:1364: error: 'curl_multi_timeout' was not declared in this scope
s3fs.cpp: In function `std::string trim_right(const std::string&, const 
std::string&)':
s3fs.cpp:103: warning: control reaches end of non-void function
s3fs.cpp: At global scope:
s3fs.cpp:440: warning: 'size_t readCallback(void*, size_t, size_t, void*)' 
defined but not used
make: *** [all] Error 1
starbase2:~/s3fs root#

What version of the product are you using? On what operating system?
s3fs-r166-source.tar.gz - mac osx tiger

Please provide any additional information below.

I verified that s3fs.cpp has the r133 changes, but the issue is back. I
used pkgsrc libxml/curl.
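
curl_multi_timeout() was introduced around libcurl 7.15.4, so one possible
workaround (a sketch, not the project's actual fix) is to guard the call by
libcurl version and fall back to a fixed poll timeout on older headers:

#include <curl/curl.h>

/* Hypothetical sketch: use curl_multi_timeout() only when the installed
   libcurl declares it, otherwise fall back to a fixed timeout. */
static long get_multi_timeout_ms(CURLM *multi) {
    long timeout_ms = 1000;                 /* conservative default: 1 second */
#if LIBCURL_VERSION_NUM >= 0x070f04        /* 7.15.4 */
    if (curl_multi_timeout(multi, &timeout_ms) != CURLM_OK || timeout_ms < 0)
        timeout_ms = 1000;
#else
    (void) multi;                           /* older libcurl: keep the default */
#endif
    return timeout_ms;
}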



Original issue reported on code.google.com by [email protected] on 30 Jul 2008 at 9:07

default_acl Input/output error

I am using 

 -o default_acl=public_read

and I get an Input/output error. It works fine without it (but then uses
the private default ACL).

Am I using the wrong switch for this?
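
For comparison, the canned ACL names defined by S3 use a hyphen
("public-read"), which is also how the option is spelled elsewhere on this
page; a mount line along those lines (bucket and mount point are
illustrative) would be:

s3fs mybucket /mnt/mybucket -o default_acl=public-read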

Original issue reported on code.google.com by [email protected] on 28 Jul 2008 at 2:27

no-cache

I would also highly recommend adding a no-cache option so s3fs wouldn't
create the ~/.s3fs directory; many users don't know it creates it, and
sometimes it's not needed (especially on very high-speed lines).

Original issue reported on code.google.com by [email protected] on 13 Feb 2008 at 11:17

Copying very large files (1 gigabyte and more) is not working

I have been trying to upload .iso files (Linux distro images) that are more
than 1 gigabyte; I am using Ubuntu. I mount the S3 folder through s3fs, then
open Nautilus and copy the files over. The process shows the files being
copied, but once it completes each file shows as 0 bytes. Is it possible to
transfer files that are over 1 gigabyte using s3fs?

Original issue reported on code.google.com by [email protected] on 27 Aug 2008 at 11:32

make command fails to compile

I tried to compile the downloaded source. I get the following errors.

make: xml2-config: Command not found
g++ -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse  -pthread -lfuse  
-lcurl  -ggdb s3fs.cpp -o s3fs
s3fs.cpp:33:23: error: curl/curl.h: No such file or directory
s3fs.cpp:34:27: error: libxml/parser.h: No such file or directory
s3fs.cpp:35:25: error: libxml/tree.h: No such file or directory
s3fs.cpp:58: error: 'CURL' was not declared in this scope
s3fs.cpp:58: error: template argument 1 is invalid
s3fs.cpp:58: error: template argument 2 is invalid
s3fs.cpp:58: error: invalid type in declaration before ';' token
s3fs.cpp:62: error: ISO C++ forbids declaration of 'CURL' with no type
s3fs.cpp:62: error: expected ';' before '*' token
s3fs.cpp:80: error: ISO C++ forbids declaration of 'CURL' with no type
s3fs.cpp:80: error: expected ';' before '*' token
s3fs.cpp:83: error: expected `;' before 'operator'
s3fs.cpp:83: error: expected type-specifier before 'CURL'
s3fs.cpp: In constructor 'auto_curl::auto_curl()':
s3fs.cpp:66: error: request for member 'size' in 'curl_handles', which is
of non-class type 'int'
s3fs.cpp:67: error: 'curl' was not declared in this scope
s3fs.cpp:67: error: 'curl_easy_init' was not declared in this scope
s3fs.cpp:69: error: 'curl' was not declared in this scope
s3fs.cpp:69: error: request for member 'top' in 'curl_handles', which is of
non-class type 'int'
s3fs.cpp:70: error: request for member 'pop' in 'curl_handles', which is of
non-class type 'int'
s3fs.cpp:72: error: 'curl' was not declared in this scope
s3fs.cpp:72: error: 'curl_easy_reset' was not declared in this scope
s3fs.cpp:74: error: 'CURLOPT_CONNECTTIMEOUT' was not declared in this scope
s3fs.cpp:74: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp: In destructor 'auto_curl::~auto_curl()':
s3fs.cpp:78: error: request for member 'push' in 'curl_handles', which is
of non-class type 'int'
s3fs.cpp:78: error: 'curl' was not declared in this scope
s3fs.cpp: In destructor 'auto_curl_slist::~auto_curl_slist()':
s3fs.cpp:94: error: 'curl_slist_free_all' was not declared in this scope
s3fs.cpp: In member function 'void auto_curl_slist::append(const
std::string&)':
s3fs.cpp:100: error: 'curl_slist_append' was not declared in this scope
s3fs.cpp: In function 'std::string calc_signature(std::string, std::string,
std::string, curl_slist*, std::string)':
s3fs.cpp:181: error: invalid use of undefined type 'struct curl_slist'
s3fs.cpp:89: error: forward declaration of 'struct curl_slist'
s3fs.cpp:182: error: invalid use of undefined type 'struct curl_slist'
s3fs.cpp:89: error: forward declaration of 'struct curl_slist'
s3fs.cpp:184: error: invalid use of undefined type 'struct curl_slist'
s3fs.cpp:89: error: forward declaration of 'struct curl_slist'
s3fs.cpp:187: error: invalid use of undefined type 'struct curl_slist'
s3fs.cpp:89: error: forward declaration of 'struct curl_slist'
s3fs.cpp: In function 'int s3fs_getattr(const char*, stat*)':
s3fs.cpp:261: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:261: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:262: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:263: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:264: error: 'CURLOPT_NOBODY' was not declared in this scope
s3fs.cpp:265: error: 'CURLOPT_FILETIME' was not declared in this scope
s3fs.cpp:271: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:273: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp:275: error: 'CURLINFO_RESPONSE_CODE' was not declared in this scope
s3fs.cpp:275: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp:285: error: 'CURLINFO_FILETIME' was not declared in this scope
s3fs.cpp:285: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp:289: error: 'CURLINFO_CONTENT_TYPE' was not declared in this scope
s3fs.cpp:289: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp:293: error: 'CURLINFO_CONTENT_LENGTH_DOWNLOAD' was not declared in
this scope
s3fs.cpp:293: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp: In function 'int s3fs_mknod(const char*, mode_t, dev_t)':
s3fs.cpp:318: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:318: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:319: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:320: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:321: error: 'CURLOPT_UPLOAD' was not declared in this scope
s3fs.cpp:322: error: 'CURLOPT_INFILESIZE' was not declared in this scope
s3fs.cpp:330: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:332: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_mkdir(const char*, mode_t)':
s3fs.cpp:346: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:346: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:347: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:348: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:349: error: 'CURLOPT_UPLOAD' was not declared in this scope
s3fs.cpp:350: error: 'CURLOPT_INFILESIZE' was not declared in this scope
s3fs.cpp:357: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:359: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_unlink(const char*)':
s3fs.cpp:374: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:374: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:375: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:376: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:377: error: 'CURLOPT_CUSTOMREQUEST' was not declared in this scope
s3fs.cpp:383: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:385: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_rmdir(const char*)':
s3fs.cpp:399: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:399: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:400: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:401: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:402: error: 'CURLOPT_CUSTOMREQUEST' was not declared in this scope
s3fs.cpp:408: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:410: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_truncate(const char*, off_t)':
s3fs.cpp:459: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:459: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:460: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:461: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:462: error: 'CURLOPT_UPLOAD' was not declared in this scope
s3fs.cpp:463: error: 'CURLOPT_INFILESIZE' was not declared in this scope
s3fs.cpp:470: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:472: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_read(const char*, char*, size_t, off_t,
fuse_file_info*)':
s3fs.cpp:494: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:494: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:495: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:496: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:497: error: 'CURLOPT_WRITEDATA' was not declared in this scope
s3fs.cpp:498: error: 'CURLOPT_WRITEFUNCTION' was not declared in this scope
s3fs.cpp:504: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:511: error: 'CURLOPT_RANGE' was not declared in this scope
s3fs.cpp:513: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp:515: error: 'CURLINFO_RESPONSE_CODE' was not declared in this scope
s3fs.cpp:515: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp: In function 'int s3fs_flush(const char*, fuse_file_info*)':
s3fs.cpp:560: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:560: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:561: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:562: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:563: error: 'CURLOPT_UPLOAD' was not declared in this scope
s3fs.cpp:564: error: 'CURLOPT_INFILESIZE' was not declared in this scope
s3fs.cpp:565: error: 'CURLOPT_READDATA' was not declared in this scope
s3fs.cpp:566: error: 'CURLOPT_READFUNCTION' was not declared in this scope
s3fs.cpp:573: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:575: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp: In function 'int s3fs_readdir(const char*, void*, int (*)(void*,
const char*, const stat*, off_t), off_t, fuse_file_info*)':
s3fs.cpp:613: error: 'CURLOPT_URL' was not declared in this scope
s3fs.cpp:613: error: 'curl_easy_setopt' was not declared in this scope
s3fs.cpp:614: error: 'CURLOPT_FAILONERROR' was not declared in this scope
s3fs.cpp:615: error: 'CURLOPT_FOLLOWLOCATION' was not declared in this scope
s3fs.cpp:616: error: 'CURLOPT_WRITEDATA' was not declared in this scope
s3fs.cpp:617: error: 'CURLOPT_WRITEFUNCTION' was not declared in this scope
s3fs.cpp:624: error: 'CURLOPT_HTTPHEADER' was not declared in this scope
s3fs.cpp:626: error: 'curl_easy_perform' was not declared in this scope
s3fs.cpp:628: error: 'CURLINFO_RESPONSE_CODE' was not declared in this scope
s3fs.cpp:628: error: 'curl_easy_getinfo' was not declared in this scope
s3fs.cpp:641: error: 'xmlDocPtr' was not declared in this scope
s3fs.cpp:641: error: expected `;' before 'doc'
s3fs.cpp:642: error: 'doc' was not declared in this scope
s3fs.cpp:643: error: 'xmlNodePtr' was not declared in this scope
s3fs.cpp:643: error: expected `;' before 'cur_node'
s3fs.cpp:643: error: 'cur_node' was not declared in this scope
s3fs.cpp:654: error: expected `;' before 'sub_node'
s3fs.cpp:654: error: 'sub_node' was not declared in this scope
s3fs.cpp:655: error: 'XML_ELEMENT_NODE' was not declared in this scope
s3fs.cpp:658: error: 'XML_TEXT_NODE' was not declared in this scope
s3fs.cpp:697: error: 'doc' was not declared in this scope
s3fs.cpp:697: error: 'xmlFreeDoc' was not declared in this scope
make: *** [all] Error 1


Please help me out. Let me know what I need to do.
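
The missing xml2-config and the curl/curl.h and libxml/parser.h errors
suggest the development packages are not installed. On a Red Hat style
system that would be something like the following (package names are
assumptions and vary by distribution):

# Hypothetical example for a Red Hat/Fedora style system:
yum install curl-devel libxml2-devel fuse-devel openssl-devel
# then re-run the build
make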

Original issue reported on code.google.com by [email protected] on 26 Nov 2007 at 7:57

s3fs needs deep directory rename!

createrepo doesn't work: needs deep directory rename!

createrepo uses a working folder named ".repodata" and then renames it to
"repodata"... s3fs does not support deep renames of folders, so you end up
with an empty "repodata" folder!

Original issue reported on code.google.com by [email protected] on 29 Feb 2008 at 12:24

bucket prefix

>>> is there any way of mounting an existing path? say if I have a "prefix"
under a bucket, <bucket>:/my/path/here shouldn't "s3fs
<bucket>:/my/path/here <mntpt>" work?

Original issue reported on code.google.com by [email protected] on 19 Dec 2007 at 3:44

one more rsync issue

s3fs needs to implement chown... right now s3fs hardcodes the uid/gid to
whatever the current user is... it looks like rsync attempts a chown (which
currently has no effect with s3fs), and on subsequent incremental rsyncs,
rsync keeps retrying the chown...

Original issue reported on code.google.com by [email protected] on 20 Apr 2008 at 2:58

Copy to Amazon Drive Fails With Large Files

What steps will reproduce the problem?
1. s3fs BUCKET.drive -o accessKeyId=blah -o secretAccessKey=blah ~/s3
2. cd ~/s3
3. mkfile 100m testfile
I also tried "cp ~/very\ large\ file .", where "very large file" is a
~100 MB picture file.

Command Line Error
User:~/s3 $ mkfile 100m testfile
mkfile: (testfile removed) (null): Not a directory

Syslog:
Apr 30 15:37:45 hermes-2 kernel[0]: MacFUSE: force ejecting (no response
from user space 5)
Apr 30 15:37:45 hermes-2 KernelEventAgent[35]: tid 00000000 received
VQ_DEAD event (32)


Info:
Leopard OS X: Darwin Host 9.2.0 Darwin Kernel Version 9.2.0: Tue Feb  5
16:13:22 PST 2008; root:xnu-1228.3.13~1/RELEASE_I386 i386

Output from FINK Installed Packages
libcurl3    7.15.5-5
libcurl3-unified    7.15.5-5
libcurl3-unified-shlibs 7.15.5-5
libxml2 2.6.27-1002 
libxml2-bin 2.6.27-1002
libxml2-shlibs  2.6.27-1002

MacFUSE 1.5.1 (which as of MacFUSE v.1.3 includes 2.7.2 of the user-space
FUSE library.)


Original issue reported on code.google.com by [email protected] on 30 Apr 2008 at 7:47

Does not compile on Fedora Core 8

Current code does not compile on Fedora Core 8 with the following error:

for_each unknown symbol

The solution is to add the following to s3fs.cpp:

#include <algorithm>

Original issue reported on code.google.com by [email protected] on 18 Mar 2008 at 1:25

response 403 and input output error

What steps will reproduce the problem?
1. "s3fs -o default_acl=public-read bucketName /mnt/s3 -o use_cache=/tmp"
and I have triple checked the access info in /etc/passwd-s3fs and connected
with another s3 app

2. no error when I run but "tail -f /var/log/messages" shows... 
s3fs: ###response=403
s3fs: init $Rev: 177 $

3. if I try "ls /mnt/s3/" I get...
ls: reading directory /mnt/s3: Input/output error

What is the expected output? What do you see instead?
I have a couple of files in the bucket, which I can view using Cockpit
online, but I can't get past the input/output error with s3fs.

What version of the product are you using? On what operating system?
s3fs v177, fedora c3, curl-7.19.0

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 8 Oct 2008 at 10:00

Add background uploading when using local cache

When using a local cache, it would be nice if s3fs would return 
immediately after a file is closed, and allow more files to be created or 
written.

This would mean errors are not relayed to the client if an upload fails
(such as lost connectivity), but s3fs could probably keep track of those
cases and retry the upload when connectivity is restored.

If this option were enabled, copious warnings should be given to the effect
that just because the call returns, it does not mean the file has been
uploaded (don't turn off your machine, etc.).

Original issue reported on code.google.com by [email protected] on 10 Apr 2008 at 12:05

s3sync'ed files not supported

What steps will reproduce the problem?
1. Use s3sync to copy files/folders to an S3 bucket
2. Mount same bucket using s3fs
3. Navigate two levels into the folder structure

What is the expected output? What do you see instead?

You would expect to be able to change directory but you get Input/output errors

What version of the product are you using? On what operating system?

s3fs revision 148. Debian stable (etch)

Please provide any additional information below.

http://s3sync.net/forum/index.php?topic=188.0


Original issue reported on code.google.com by [email protected] on 2 May 2008 at 3:56

Can't write file/directory

Hello, I mounted my S3 bucket, but I can't write to the drive. For example:
[root@li43-18 s3fs]# mkdir /mnt/s3/b
mkdir: cannot create directory `/mnt/s3/b': No such file or directory

and here is output from s3fs
[root@li43-18 s3fs]# /usr/bin/s3fs Turgan /mnt/s3/ -f                     

getattr[path=/b]
mkdir[path=/b][mode=493]

df is showing
fuse                 274877906944         0 274877906944   0% /mnt/s3

What could the problem be?
Thank you.

Original issue reported on code.google.com by [email protected] on 27 Oct 2008 at 7:32

need to support utimens()

What steps will reproduce the problem?
1. try using rsync with s3fs as destination
2. observe "Operation cannot be completed because you do not have
sufficient privileges"

What is the expected output? What do you see instead?
rsync should be able to set modification time (mtime) on s3fs volumes

Please use labels and text to provide additional information.

see
http://groups.google.com/group/s3fs-devel/browse_thread/thread/6ed9871e349c1d9f
for discussion
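
For reference, the FUSE 2.6+ callback that rsync needs has this shape (a
hypothetical stub; persisting the times as S3 object metadata is left out):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <time.h>

/* Hypothetical stub: accept utimens() so tools like rsync stop failing.
   A real implementation would store ts[0]/ts[1] (atime/mtime) as object
   metadata instead of discarding them. */
static int example_utimens(const char *path, const struct timespec ts[2]) {
    (void) path;
    (void) ts;
    return 0;
}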

Original issue reported on code.google.com by [email protected] on 17 Oct 2007 at 7:05

mkdir command results in segfault

What steps will reproduce the problem?
1. mount a s3fs volume in /mnt/foo
2. type cd /mnt/foo 
3. type mkdir testdir
4. observe the segfault

What is the expected output? What do you see instead?

expect a directory to be created

What version of the product are you using? On what operating system?

latest from svn as of 10/11/2007 on ubuntu gutsy 


Original issue reported on code.google.com by [email protected] on 12 Oct 2007 at 5:41

Build fails under JDK 1.6

In JDK 1.6, java.nio.ByteBuffer has two methods differing only in return
type (this is allowed).

The C to Java bindings stuff does not include the return type in the
generated name, causing "duplicate array member" compilation errors.

I've attached two patches:

jdk16.patch - fix this bug

linux-patch - fix a couple of warnings on Linux


Original issue reported on code.google.com by [email protected] on 12 Jun 2008 at 8:38

New feature: add "blocksize" mount option for block-oriented persistence

Request to implement a "blocksize" mount option as described in this thread:

http://groups.google.com/group/s3fs-devel/browse_thread/thread/ae05dbcef459610c

Additional thoughts:

- s3fs caching mechanism would have to be generalized to support blocks as
well as entire files.

- Worth considering trying to preserve the ordering of block writes (even
though S3 doesn't guarantee readers will see that same ordering) to
minimize potential problems with journaling file systems mounted via loopback.

- Being able to mount an S3-backed file via loopback means you can have the
filesystem be encrypted using normal Linux encrypted file system techniques.

Original issue reported on code.google.com by [email protected] on 10 Jun 2008 at 2:36

Needed -lssl to compile on ubuntu gutsy

g++ -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse  -pthread -lfuse -lrt
-ldl   -lcurl -I/usr/include/libxml2 -L/usr/lib -lxml2 -ggdb s3fs.cpp -o s3fs
s3fs.cpp: In function ‘std::string calc_signature(std::string, std::string,
std::string, curl_slist*, std::string)’:
s3fs.cpp:254: warning: value computed is not used
s3fs.cpp: At global scope:
s3fs.cpp:273: warning: ‘size_t readCallback(void*, size_t, size_t, void*)’
defined but not used
/home/tv/tmp/cc9ATbRc.o: In function
`__static_initialization_and_destruction_0':
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:198: undefined reference to `EVP_sha1'
/home/tv/tmp/cc9ATbRc.o: In function
`calc_signature(std::basic_string<char, std::char_traits<char>,
std::allocator<char> >, std::basic_string<char, std::char_traits<char>,
std::allocator<char> >, std::basic_string<char, std::char_traits<char>,
std::allocator<char> >, curl_slist*, std::basic_string<char,
std::char_traits<char>, std::allocator<char> >)':
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:248: undefined reference to `HMAC'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:250: undefined reference to
`BIO_f_base64'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:250: undefined reference to `BIO_new'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:251: undefined reference to
`BIO_s_mem'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:251: undefined reference to `BIO_new'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:252: undefined reference to `BIO_push'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:253: undefined reference to
`BIO_write'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:254: undefined reference to `BIO_ctrl'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:256: undefined reference to `BIO_ctrl'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:261: undefined reference to
`BIO_free_all'
/home/tv/tmp/cc9ATbRc.o: In function `s3fs_open':
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:665: undefined reference to `MD5_Init'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:670: undefined reference to
`MD5_Update'
/home/tv/src/s3fs/s3fs.git/s3fs/s3fs.cpp:674: undefined reference to
`MD5_Final'
collect2: ld returned 1 exit status
make: *** [all] Error 1


adding -lssl made it build

Original issue reported on code.google.com by [email protected] on 17 Dec 2007 at 10:50

tmpfile() not being closed

"The /tmp partition is filling up on the server because s3fs is not releasing
the file handles to its temporary files correctly"

Original issue reported on code.google.com by [email protected] on 19 Sep 2008 at 2:48

Use the bucket listing's delimiter parameter to simulate directory structure

Unfortunately s3fs is not able to list the contents of existing buckets
which use slashes to simulate directory structure.  Instead s3fs creates a
meta directory.  If this meta directory doesn't exist then the contents of
the directory will not be listed.

Other programs such as S3Fox also use meta directories, albeit in a way
incompatible with s3fs.

This raises the issue of why meta directories are necessary in the first
place as S3 allows directory structures to be simulated via the "prefix"
parameter of the bucket listing call.  (See
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/index.html?RESTBucketGET.html).
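
For illustration, the REST call in question looks roughly like this (bucket
and key names are made up); with delimiter=/ S3 rolls the keys under each
"subdirectory" up into CommonPrefixes, so no meta directory objects are
needed:

GET /?prefix=photos/2008/&delimiter=/ HTTP/1.1
Host: mybucket.s3.amazonaws.com
Date: <request date>
Authorization: AWS <AccessKeyId>:<Signature>

(Response excerpt: keys directly under the prefix come back in <Contents>
elements, while deeper "subdirectories" come back as <CommonPrefixes>.)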

Original issue reported on code.google.com by [email protected] on 15 Dec 2008 at 3:30

Does s3fs support hard links?

What steps will reproduce the problem?
1. cp -al backup.0 backup.1 

What is the expected output? What do you see instead?
cp: cannot create link `backup.1/test/index.php~': Operation not permitted

What version of the product are you using? On what operating system?
Latest... r177 on Ubuntu 8.10

Please provide any additional information below.
I'm just trying to make daily incremental backups... is there a better way
than what I'm doing?

Original issue reported on code.google.com by [email protected] on 11 Jan 2009 at 4:47

Can't upload files - OSX

What steps will reproduce the problem?

1. After the drive is mounted using MacFUSE, I cannot add local files to the
remote share. The error on the Mac is a Finder error: "The Finder cannot
complete the operation because some data in 'my file' could not be read or
written." It doesn't appear to matter what file or type I try to upload.

2. I can upload the same files using S3Fox and the Mac OS X S3 Browser.

3. I can create directories without issue.


What is the expected output? What do you see instead?

I would expect the files to be uploaded without error.

What version of the product are you using? On what operating system?

r177 on a Mac OS 10.5.6
MacFuse 2.0.3,2

Please provide any additional information below.

I had a lot of issues getting this going. Finally found this page which
worked, or at least allowed me to mount my shares:

http://mark.aufflick.com/blog/2007/10/28/leopard-amazon-s3-network-storage


Original issue reported on code.google.com by [email protected] on 17 Feb 2009 at 1:30

Detect S3 network transmission errors

S3 returns an MD5 hash of what it received (PUT case) or what it
transmitted (GET case). s3fs should check this response header (ETag), or
supply a Content-MD5 request header, to guard against the possibility of
data corruption.
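
As a sketch of the PUT-side check (OpenSSL calls only; the helper name and
buffers are invented for illustration), one could compare the returned ETag
against a locally computed MD5:

#include <openssl/md5.h>
#include <stdio.h>
#include <string>

// Hypothetical sketch: for a single (non-multipart) PUT the ETag S3 returns
// is normally the hex MD5 of the object body, so compare it with a local MD5.
static bool body_matches_etag(const unsigned char *body, size_t len,
                              const std::string &etag) {
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5(body, len, digest);
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);
    // S3 usually wraps the ETag in double quotes, so a substring match is
    // enough for this sketch.
    return etag.find(hex) != std::string::npos;
}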

There is at least one documented, Amazon-acknowledged incident of S3 data
corruption due to network transmission errors (load-balancing hardware
failure in this case):

http://developer.amazonwebservices.com/connect/message.jspa?messageID=93408#93408

Original issue reported on code.google.com by [email protected] on 24 Jun 2008 at 9:11

Memory Full. Urgent Issue

I used the latest version of s3fs and loaded it onto my Ubuntu 7.10 AMI.

Everything went fine and I was able to transfer 4.7 GB to my S3 bucket; the
problem was that I did not realize s3fs caches data to some degree on the
local disk before sending the file out to the bucket.

My query is: how can I keep /dev/sda1 from being filled to the brim by
s3fs? Could you tell me which folder s3fs caches to, so that I can delete
the temp files and recover my storage space?

Regards
Hareem

Original issue reported on code.google.com by [email protected] on 2 Jan 2008 at 3:29

Feature Request: Compatibility with other S3 clients

s3fs currently displays directories created in S3Fox as something like
"MyStuff_$folder$". But it shows up as an unknown file type, not even a
directory. Being able to understand those S3Fox "folders" would be a handy
feature, IMO.

Original issue reported on code.google.com by [email protected] on 15 Apr 2008 at 6:23

can't run bonnie++

[root@localhost]# bonnie++ -f -u root:root -d /s3/tmp -r 0 -s 1
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
Can't open file ./Bonnie.29089.000
Can't open file ./Bonnie.29089.000
Can't open file ./Bonnie.29089.000
start em...[root@localhost: Transport endpoint is not connected
[root@localhost]#

Original issue reported on code.google.com by [email protected] on 25 Feb 2008 at 1:21
