Comments (9)
I think the kernel may just be flushing its own buffers. Kernel buffering is
complicated; for example, there is a kernel buffer for the "file" that appears
in the s3backer filesystem, and it's not clear that sync(8) will wait for that
file's dirty pages to be written back.
However, there very well could be a bug. If so, you should see it reflected in
the block cache dirty ratio in the stats file. You can also apply the attached
patch, run s3backer in the foreground (-f --debug), and watch the actual dirty
block count change.
Let me know what you find.
Original comment by [email protected]
on 21 Oct 2010 at 1:09
- Changed state: Feedback
Attachments:
from s3backer.
block_cache_current_size 103962 blocks
block_cache_initial_size 1792 blocks
block_cache_dirty_ratio 0.2071
Sure looks like a bug to me ;-)
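To put a number on that: multiplying the reported cache size by the dirty ratio gives an approximate dirty block count. A minimal sketch (the sample values are the ones quoted above; in practice you would read them from the stats file exposed at the mount point):

```python
# Estimate the dirty block count from s3backer's stats output.
sample = """\
block_cache_current_size 103962 blocks
block_cache_initial_size 1792 blocks
block_cache_dirty_ratio 0.2071
"""

stats = {}
for line in sample.splitlines():
    name, value = line.split()[:2]   # third token ("blocks") is a unit label
    stats[name] = float(value)

dirty_blocks = stats["block_cache_current_size"] * stats["block_cache_dirty_ratio"]
print(f"approx. dirty blocks: {dirty_blocks:.0f}")   # on the order of 21,500
```

Roughly 21,500 dirty blocks is far beyond any reasonable max-dirty limit, which supports the "looks like a bug" reading.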
Original comment by [email protected]
on 21 Oct 2010 at 1:19
OK, so I installed your patch, created a new s3backer bucket, mounted it with
max dirty set to 10 blocks, and then wrote 10 blocks of data into it with dd.
Here's what I saw:
#dirties=1
#dirties=2
#dirties=3
#dirties=4
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000000
#dirties=5
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000001
#dirties=6
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000002
#dirties=7
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000003
#dirties=8
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000004
#dirties=9
2010-10-21 07:49:52 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000005
#dirties=10
#dirties=11
#dirties=12
#dirties=13
#dirties=14
#dirties=15
#dirties=16
#dirties=17
#dirties=18
#dirties=19
#dirties=20
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000001
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000005
#dirties=19
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000006
#dirties=18
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000007
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000003
#dirties=17
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000008
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000000
#dirties=16
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000009
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000004
#dirties=15
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000a
2010-10-21 07:49:53 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000002
#dirties=14
2010-10-21 07:49:53 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000b
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000006
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000007
#dirties=13
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000c
#dirties=12
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000d
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000008
#dirties=11
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000e
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000009
#dirties=10
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/0000000f
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000a
#dirties=9
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000010
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000b
#dirties=8
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000011
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000c
#dirties=7
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000012
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000d
#dirties=6
2010-10-21 07:49:54 DEBUG: PUT http://s3.amazonaws.com/jik2-backup-dev2/00000013
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000e
#dirties=5
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/0000000f
#dirties=4
2010-10-21 07:49:54 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000010
#dirties=3
2010-10-21 07:49:55 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000011
#dirties=2
2010-10-21 07:49:55 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000013
#dirties=1
2010-10-21 07:49:55 DEBUG: success: PUT
http://s3.amazonaws.com/jik2-backup-dev2/00000012
#dirties=0
Now, if I understand the desired behavior correctly, once #dirties reached 10,
s3backer should have blocked until a dirty block was successfully written back,
and the count should never have risen above 10. That is not what happened.
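That intended semantics can be sketched with a condition variable. This is a hypothetical model, not s3backer's actual C code: a writer blocks whenever the dirty count is at the limit, and a background flusher wakes it as writebacks complete, so the count can never exceed the cap.

```python
import threading

MAX_DIRTY = 10

class DirtyThrottle:
    """Hypothetical model of the intended max-dirty behavior."""
    def __init__(self, max_dirty):
        self.max_dirty = max_dirty
        self.dirties = 0
        self.max_seen = 0
        self.cond = threading.Condition()

    def dirty_block(self):
        with self.cond:
            # Block until a writeback completes, keeping dirties <= max_dirty.
            while self.dirties >= self.max_dirty:
                self.cond.wait()
            self.dirties += 1
            self.max_seen = max(self.max_seen, self.dirties)
            self.cond.notify_all()

    def writeback_done(self):
        with self.cond:
            # Wait for a dirty block to exist, then mark one clean.
            while self.dirties == 0:
                self.cond.wait()
            self.dirties -= 1
            self.cond.notify_all()

throttle = DirtyThrottle(MAX_DIRTY)

def flusher(n):
    # Simulate n asynchronous PUTs completing.
    for _ in range(n):
        throttle.writeback_done()

t = threading.Thread(target=flusher, args=(100,))
t.start()
for _ in range(100):
    throttle.dirty_block()
t.join()
print("max dirties seen:", throttle.max_seen)  # never exceeds MAX_DIRTY
```

In the log above, #dirties climbing to 20 shows that the writer was never made to wait at the limit.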
Original comment by [email protected]
on 21 Oct 2010 at 11:53
Original comment by [email protected]
on 21 Oct 2010 at 2:20
- Changed state: Accepted
The problem is that max_dirty is enforced when a block is added to the cache
but not when an existing cache block is dirtied.
I believe the attached patch fixes the issue.
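In other words, the throttle check has to run on every write that dirties a block, not only on cache misses. A schematic of the two code paths (hypothetical names; the real fix lives in s3backer's C block_cache code):

```python
MAX_DIRTY = 10

class BlockCache:
    """Schematic model of the bug: the limit must be enforced on BOTH paths."""
    def __init__(self, max_dirty):
        self.max_dirty = max_dirty
        self.entries = {}          # block number -> dirty flag
        self.dirties = 0

    def _wait_below_max_dirty(self):
        # In real code this blocks on a condition variable until a
        # writeback completes; here we model it as a synchronous flush.
        while self.dirties >= self.max_dirty:
            self._flush_one()

    def _flush_one(self):
        for num, dirty in self.entries.items():
            if dirty:
                self.entries[num] = False
                self.dirties -= 1
                return

    def write_block(self, num):
        if num not in self.entries:
            # Path 1: new cache entry -- the limit was already enforced here.
            self._wait_below_max_dirty()
            self.entries[num] = True
            self.dirties += 1
        elif not self.entries[num]:
            # Path 2: re-dirtying a clean, cached block -- the path the
            # original code left unthrottled, letting #dirties exceed the cap.
            self._wait_below_max_dirty()
            self.entries[num] = True
            self.dirties += 1
        # else: block is already dirty; the count is unchanged

cache = BlockCache(MAX_DIRTY)
for i in range(50):
    cache.write_block(i % 20)      # mix of new and re-dirtied blocks
print("dirty blocks:", cache.dirties)  # stays <= MAX_DIRTY
```

With the check present on both paths, the dirty count can never be incremented past max_dirty, matching the behavior the log above was expected to show.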
Original comment by [email protected]
on 21 Oct 2010 at 2:56
Attachments:
Thanks! That was indeed the problem. Please verify that the attached patch
fixes it for you.
Original comment by [email protected]
on 21 Oct 2010 at 7:48
- Changed state: Feedback
Attachments:
Seems to work fine, although I like my patch better ;-)
Original comment by [email protected]
on 21 Oct 2010 at 7:57
Thanks, committed in r441.
Original comment by [email protected]
on 21 Oct 2010 at 8:02
- Changed state: Fixed
Original comment by [email protected]
on 22 Oct 2010 at 8:03
- Added labels: AffectsVersion-1.3.1, FixVersion-1.3.2