
enhanceio's People

Contributors

bhansaliakhil, deepenmehta85, kcgthb, nikmartin, onlyjob, pfactum, sanoj-stec, xjtuwjp


enhanceio's Issues

Reference to eio_admin in udev rule

When a udev rule is created, one of the TEST lines in the EIO_SOURCE section has a RUN directive to call /sbin/eio_admin. Shouldn't this be /sbin/eio_cli instead? Is this a missing script, or is there a reason we should have a copy or symlink of eio_cli as eio_admin?

Building Linux 3.7.8 with latest EnhanceIO fails on i686

I tried to compile EnhanceIO against a 32-bit kernel and got the following errors at the modpost stage:

Kernel: arch/x86/boot/bzImage is ready  (#14)
  MODPOST 3149 modules
ERROR: "__udivdi3" [drivers/block/enhanceio/enhanceio.ko] undefined!
ERROR: "__umoddi3" [drivers/block/enhanceio/enhanceio.ko] undefined!
ERROR: "__divdi3" [drivers/block/enhanceio/enhanceio.ko] undefined!
WARNING: modpost: Found 12 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2

Could that be fixed?
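
(For reference: __udivdi3/__umoddi3/__divdi3 link errors on 32-bit builds usually mean the module does plain 64-bit division or modulo with the / and % operators. The usual fix is the kernel's div_u64()/div_u64_rem() helpers. A minimal sketch, assuming the failing code divides a 64-bit size by a 32-bit block size; the names below are illustrative, not the actual EnhanceIO code.)

    #include <linux/math64.h>   /* div_u64(), div_u64_rem() */
    #include <linux/types.h>

    /* Hypothetical example: count whole cache blocks without pulling in libgcc's __udivdi3. */
    static u64 example_nr_blocks(u64 cache_size_bytes, u32 block_size)
    {
            u32 rem;

            /* 64-by-32-bit division via the kernel helper instead of the / operator */
            return div_u64_rem(cache_size_bytes, block_size, &rem);
    }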

module does not compile on kernel 3.8.6

I get:

make -C /lib/modules/3.8.6/build M=/usr/src/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory `/usr/src/linux-3.8.6'
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_conf.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.o
/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.c: In function 'eio_ioctl':
/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.c:92:7: error: invalid application of 'sizeof' to incomplete type 'struct uint64_t'
/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.c:92:7: error: array type has incomplete element type
/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.c:92:7: error: invalid application of 'sizeof' to incomplete type 'struct uint64_t'
/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.c:92:7: error: invalid application of 'sizeof' to incomplete type 'struct uint64_t'
make[2]: *** [/usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.o] Error 1
make[1]: *** [_module_/usr/src/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory `/usr/src/linux-3.8.6'
make: *** [modules] Error 2

Print formats incorrect on non 64-bit

A large number of warnings when compiling on x86 (32-bit). Two examples below.

The format specifier %lu is architecture-dependent, but the variables are a mixture of sector_t (whose size depends on whether large block device support is enabled) and uint64_t.

/home/mark/src/EnhanceIO/Driver/enhanceio/eio_main.c: In function 'eio_disk_io_callback':
/home/mark/src/EnhanceIO/Driver/enhanceio/eio_main.c:279:3: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'sector_t' [-Wformat]

/home/mark/src/EnhanceIO/Driver/enhanceio/eio_ttc.c: In function 'eio_reboot_handling':
/home/mark/src/EnhanceIO/Driver/enhanceio/eio_ttc.c:1561:5: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'long long int' [-Wformat]
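
(For reference: the portable way to print sector_t and uint64_t through printk() is to cast the value to unsigned long long and use %llu, since sector_t is 32- or 64-bit depending on whether large block device support is enabled. A minimal sketch; the message and names are illustrative, not the driver's actual output.)

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Cast explicitly so the format string is correct on both 32- and 64-bit builds. */
    static void example_log_io(sector_t sector, u64 nr_bytes)
    {
            pr_warn("example: I/O at sector %llu, %llu bytes\n",
                    (unsigned long long)sector,
                    (unsigned long long)nr_bytes);
    }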

non-fatal error on delete

sudo eio_cli delete -c R6_CACHE

removing file /etc/udev/rules.d/94-enhanceio-R6_CACHE.rules
Traceback (most recent call last):
  File "/usr/sbin/eio_cli", line 453, in <module>
    main()
  File "/usr/sbin/eio_cli", line 411, in main
    cache.delete_rules()
  File "/usr/sbin/eio_cli", line 304, in delete_rules
    os.remove(rule_file_path)
OSError: [Errno 2] No such file or directory: '/etc/udev/rules.d/94-enhanceio-R6_CACHE.rules'

kernel BUG hit while using EIO on 32 bit machine

[ 1213.520768] ------------[ cut here ]------------
[ 1213.522876] kernel BUG at drivers/scsi/scsi_lib.c:1192!
[ 1213.525828] invalid opcode: 0000 [#1] SMP
[ 1213.526963] Modules linked in: enhanceio_lru enhanceio_fifo enhanceio coretemp aesni_intel ablk_helper cryptd lrw aes_i586 xts gf128mul joydev snd_ens1371 gameport snd_ac97_codec ac97_bus snd_pcm snd_seq_midi vmw_balloon snd_rawmidi snd_seq_midi_event microcode snd_seq psmouse serio_raw snd_timer rfcomm ppdev hid_generic snd_seq_device bnep parport_pc btusb bluetooth i2c_piix4 snd vmwgfx ttm drm soundcore snd_page_alloc mac_hid shpchp lp parport usbhid hid floppy pcnet32 mptspi mptscsih mptbase
[ 1213.538994] Pid: 2706, comm: eio_cli Not tainted 3.7.5 #9 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
[ 1213.541737] EIP: 0060:[] EFLAGS: 00210046 CPU: 0
[ 1213.543142] EIP is at scsi_setup_fs_cmnd+0x89/0x90
[ 1213.544365] EAX: 00000000 EBX: f06a3800 ECX: 00000002 EDX: f56300f0
[ 1213.545917] ESI: f56300f0 EDI: 00000000 EBP: f307daf4 ESP: f307daec
[ 1213.547360] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 1213.548583] CR0: 80050033 CR2: b474a000 CR3: 33062000 CR4: 000407f0
[ 1213.550034] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 1213.551511] DR6: ffff0ff0 DR7: 00000400
[ 1213.552473] Process eio_cli (pid: 2706, ti=f307c000 task=f05b3f70 task.ti=f307c000)
[ 1213.554205] Stack:
[ 1213.554679] e5370000 f56300f0 f307db5c c13f77fa f38f24c0 00000000 00000000 0000002f
[ 1213.556762] 000000f0 00011200 00005d01 f0620bc0 f069a5c0 00011200 f307db2c 00000000
[ 1213.559522] 000001ac 00000000 86d0c563 00000000 f06a3800 00000110 00000000 00000001
[ 1213.561699] Call Trace:
[ 1213.562331] [] sd_prep_fn+0x2ba/0xf40
[ 1213.563449] [] blk_peek_request+0x97/0x200
[ 1213.564658] [] ? get_request+0x281/0x660
[ 1213.565863] [] ? __schedule+0x311/0x790
[ 1213.566991] [] scsi_request_fn+0x37/0x4f0
[ 1213.568174] [] ? __elv_add_request+0x161/0x260
[ 1213.569557] [] blk_queue_bio+0x2f8/0x3a0
[ 1213.570761] [] generic_make_request+0x9b/0xd0
[ 1213.572066] [] submit_bio+0x5d/0x140
[ 1213.573202] [] ? bio_add_page+0x58/0x70
[ 1213.574395] [] eio_dispatch_io.isra.5+0x13b/0x140 [enhanceio]
[ 1213.576105] [] eio_do_io+0x159/0x210 [enhanceio]
[ 1213.577519] [] eio_io_sync_vm+0x4e/0x70 [enhanceio]
[ 1213.578931] [] eio_md_create+0xcd3/0x1080 [enhanceio]
[ 1213.580386] [] ? printk+0x4d/0x4f
[ 1213.581406] [] ? eio_get_device_size+0x45/0x60 [enhanceio]
[ 1213.582931] [] eio_cache_create+0x63c/0x1a80 [enhanceio]
[ 1213.584429] [] ? vmap_page_range_noflush+0x189/0x240
[ 1213.585908] [] ? map_vm_area+0x3a/0x60
[ 1213.587034] [] ? insert_vmalloc_vmlist+0x19/0x60
[ 1213.588367] [] ? __vmalloc_node_range+0x115/0x1e0
[ 1213.589727] [] ? eio_ioctl+0x72/0x230 [enhanceio]
[ 1213.591076] [] ? __vmalloc_node+0x62/0x70
[ 1213.592236] [] ? eio_ioctl+0x72/0x230 [enhanceio]
[ 1213.593552] [] eio_ioctl+0x97/0x230 [enhanceio]
[ 1213.594865] [] ? eio_get_device_start_sect+0x30/0x30 [enhanceio]
[ 1213.596519] [] do_vfs_ioctl+0x82/0x5b0
[ 1213.597618] [] ? __do_page_fault+0x25f/0x4d0
[ 1213.598958] [] ? sys_fstat64+0x2b/0x30
[ 1213.600144] [] sys_ioctl+0x70/0x80
[ 1213.601252] [] sysenter_do_call+0x12/0x28
[ 1213.602482] Code: ff ff 5b 5e 5d c3 8b 00 85 c0 74 b7 8b 48 24 85 c9 74 b0 89 f2 89 d8 ff d1 85 c0 74 a6 90 8d 74 26 00 eb de b8 02 00 00 00 eb d7 <0f> 0b 90 8d 74 26 00 55 89 e5 83 ec 0c 89 5d f4 89 75 f8 89 7d
[ 1213.609148] EIP: [] scsi_setup_fs_cmnd+0x89/0x90 SS:ESP 0068:f307daec
[ 1213.610983] ---[ end trace f0f6d867f0963a59 ]---

error on create

3.8.2 kernel on Debian Wheezy:

nmartin@nik-mockbook:~$ sudo eio_cli create -s /dev/nik-mockbook/root -d /dev/sdb -m wt -c root_cache
Cache Name : root_cache
Source Device : /dev/sdb
SSD Device : /dev/nik-mockbook/root
Policy : lru
Mode : Write Through
Block Size : 4096
Associativity : 256
ENV{ID_SERIAL}=="SATA_SSD_BA14072B040800328575", ENV{DEVTYPE}=="disk"
None
Traceback (most recent call last):
File "/sbin/eio_cli", line 457, in
main()
File "/sbin/eio_cli", line 391, in main
cache.create_rules()
File "/sbin/eio_cli", line 293, in create_rules
udev_rule = udev_template.replace("<cache_name>",self.name).replace("<source_match_expr>", source_match_expr).replace("<cache_match_expr>", cache_match_expr).replace("", modes[self.mode]).replace("<block_size>", str(self.blksize))
TypeError: expected a character buffer object

Implement skip_seq_thresh

Facebook's Flashcache had a skip_seq_thresh setting that allowed big sequential writes to skip the cache. This was really useful while running a Ceph OSD and reduced I/O waits a lot.

It would be great to see this feature reimplemented in EnhanceIO. Even better: instead of writing the first X bytes (up to skip_seq_thresh) to the SSD, this could be buffered in memory first, and the decision made afterwards, sending the request to the SSD if it is smaller than skip_seq_thresh or to disk if it is larger.

Thanks!
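
(A rough sketch of how such sequential detection could look, loosely modelled on Flashcache's approach. Everything below, including the tunable, is hypothetical rather than an existing EnhanceIO symbol; it uses the 3.x-era bio fields and omits locking.)

    #include <linux/bio.h>
    #include <linux/types.h>

    #define EXAMPLE_SEQ_STREAMS  8
    #define EXAMPLE_SKIP_SEQ_KB  1024    /* hypothetical skip_seq_thresh tunable, in KB */

    struct example_seq_stream {
            sector_t next_sector;   /* sector where the next sequential bio would start */
            u64      bytes;         /* bytes accumulated in this stream so far */
    };

    static struct example_seq_stream example_streams[EXAMPLE_SEQ_STREAMS];

    /* Return true if the bio continues a long sequential stream and should bypass the SSD. */
    static bool example_skip_sequential(struct bio *bio)
    {
            sector_t sector = bio->bi_sector;       /* 3.x field names (bi_iter came later) */
            unsigned int len = bio->bi_size;
            int i;

            for (i = 0; i < EXAMPLE_SEQ_STREAMS; i++) {
                    if (example_streams[i].next_sector == sector) {
                            example_streams[i].bytes += len;
                            example_streams[i].next_sector = sector + (len >> 9);
                            return example_streams[i].bytes >= (u64)EXAMPLE_SKIP_SEQ_KB * 1024;
                    }
            }

            /* Not a continuation of a tracked stream: recycle a slot and start a new one. */
            i = (int)(sector & (EXAMPLE_SEQ_STREAMS - 1));
            example_streams[i].next_sector = sector + (len >> 9);
            example_streams[i].bytes = len;
            return false;
    }

(A real implementation would also track streams per device and age out stale entries.)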

compilation fails on 3.9.4

Hello.

Fresh checkout.

make -C /lib/modules/3.9.4/build M=/usr/src/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory `/usr/src/linux-3.9.4'
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_conf.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_main.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_mem.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_policy.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.o
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function 'eio_stats_open':
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1804:2: error: implicit declaration of function 'PDE_DATA' [-Werror=implicit-function-declaration]
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1804:2: warning: passing argument 3 of 'single_open' makes pointer from integer without a cast [enabled by default]
include/linux/seq_file.h:125:5: note: expected 'void *' but argument is of type 'int'
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function 'eio_errors_open':
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1837:2: warning: passing argument 3 of 'single_open' makes pointer from integer without a cast [enabled by default]
include/linux/seq_file.h:125:5: note: expected 'void *' but argument is of type 'int'
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function 'eio_iosize_hist_open':
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1872:2: warning: passing argument 3 of 'single_open' makes pointer from integer without a cast [enabled by default]
include/linux/seq_file.h:125:5: note: expected 'void *' but argument is of type 'int'
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function 'eio_version_open':
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1894:2: warning: passing argument 3 of 'single_open' makes pointer from integer without a cast [enabled by default]
include/linux/seq_file.h:125:5: note: expected 'void *' but argument is of type 'int'
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function 'eio_config_open':
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1931:2: warning: passing argument 3 of 'single_open' makes pointer from integer without a cast [enabled by default]
include/linux/seq_file.h:125:5: note: expected 'void *' but argument is of type 'int'
cc1: some warnings being treated as errors
make[2]: *** [/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.o] Error 1
make[1]: *** [_module_/usr/src/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory `/usr/src/linux-3.9.4'
make: *** [modules] Error 2

Cache never being hit for iSCSI volume

WT cache enabled for an iSCSI volume; per /proc/enhanceio/cache/stats, ssd_writes are occurring (as are ssd_readfills), but zero ssd_reads. Per iostat, no reads are issued to the cache device. When two local disc volumes are used, cache hits are seen.

Kernel crashes upon unloading enhanceio.ko - policy "rand"

Hi,

I got a kernel crash using enhanceio configured with policy "rand".
Upon unloading enhanceio.ko, the following kernel panic occurs.

[ 429.398576] ------------[ cut here ]------------
[ 429.398604] WARNING: at fs/proc/generic.c:849 remove_proc_entry+0x251/0x260()
[ 429.398606] Hardware name: Bochs
[ 429.398608] remove_proc_entry: removing non-empty directory '/proc/enhanceio', leaking at least 'CACHE'
[ 429.398609] Modules linked in: enhanceio(-) virtio_net virtio_blk virtio_pci virtio_ring virtio [last unloaded: enhanceio_lru]
[ 429.398616] Pid: 3548, comm: rmmod Not tainted 3.4.23-storage+ #2
[ 429.398618] Call Trace:
[ 429.398627] [] warn_slowpath_common+0x7a/0xb0
[ 429.398629] [] warn_slowpath_fmt+0x41/0x50
[ 429.398631] [] remove_proc_entry+0x251/0x260
[ 429.398634] [] ? kfree+0xf0/0x120
[ 429.398638] [] eio_module_procfs_exit+0x20/0x50 [enhanceio]
[ 429.398641] [] eio_exit+0x8c/0xe0 [enhanceio]
[ 429.398646] [] sys_delete_module+0x178/0x270
[ 429.398649] [] ? fput+0x198/0x240
[ 429.398655] [] ? do_async_page_fault+0x25/0x90
[ 429.398663] [] system_call_fastpath+0x16/0x1b
[ 429.398665] ---[ end trace e93589b03e845988 ]---
[ 436.640120] BUG: unable to handle kernel paging request at ffffffffa005aec8
[ 436.641210] IP: [] 0xffffffffa005aec7

This doesn't seem to happen with the other policies, "fifo" or "lru". For those two policies the crash cannot occur, because "rmmod enhanceio" immediately exits with an error pointing at its dependencies: enhanceio_{fifo,lru}.ko have to be unloaded first, which cleans up the proc entries and so prevents unloading enhanceio.ko from crashing.

Policy "rand", however, has no policy ops and doesn't clean up the cache device entries. Either eio_exit() should check for existing cache entries, or policy "rand" should also get its own kernel module.

What do you think?

Dongsu
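
(One possible guard, sketched only as an illustration and distinct from the two options suggested above: pin the enhanceio module while any cache exists, so "rmmod enhanceio" fails with EBUSY instead of tearing down /proc/enhanceio underneath a live cache. The hook names are placeholders, not real EnhanceIO functions.)

    #include <linux/errno.h>
    #include <linux/module.h>

    /* Placeholder hook: call when a cache is successfully created. */
    static int example_pin_module_on_cache_create(void)
    {
            /* try_module_get() fails only if the module is already being unloaded. */
            if (!try_module_get(THIS_MODULE))
                    return -EBUSY;
            return 0;
    }

    /* Placeholder hook: call when a cache is deleted, dropping the reference taken above. */
    static void example_unpin_module_on_cache_delete(void)
    {
            module_put(THIS_MODULE);
    }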

FTBFS with linux-3.10

/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_module_procfs_init’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1224:3: error: implicit declaration of function ‘create_proc_entry’ [-Werror=implicit-function-declaration]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1224:9: warning: assignment makes pointer from integer without a cast [enabled by default]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1226:9: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_procfs_ctr’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1259:8: warning: assignment makes pointer from integer without a cast [enabled by default]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1261:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1262:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1267:8: warning: assignment makes pointer from integer without a cast [enabled by default]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1269:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1270:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1275:8: warning: assignment makes pointer from integer without a cast [enabled by default]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1277:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1278:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1283:8: warning: assignment makes pointer from integer without a cast [enabled by default]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1285:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1286:8: error: dereferencing pointer to incomplete type
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_stats_open’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1821:2: error: implicit declaration of function ‘PDE’ [-Werror=implicit-function-declaration]
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1821:54: error: invalid type argument of ‘->’ (have ‘int’)
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_errors_open’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1854:55: error: invalid type argument of ‘->’ (have ‘int’)
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_iosize_hist_open’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1889:60: error: invalid type argument of ‘->’ (have ‘int’)
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_version_open’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1911:56: error: invalid type argument of ‘->’ (have ‘int’)
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c: In function ‘eio_config_open’:
/var/lib/dkms/enhanceio/0+git20130619/build/eio_procfs.c:1948:55: error: invalid type argument of ‘->’ (have ‘int’)
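
(For reference: linux-3.10 removed create_proc_entry() and the PDE() accessor from the public proc API; the usual port is to create entries with proc_create_data() and to read the private pointer back with PDE_DATA(). A minimal sketch of the pattern, with illustrative names rather than the actual EnhanceIO code.)

    #include <linux/fs.h>
    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>

    static int example_show(struct seq_file *seq, void *v)
    {
            seq_printf(seq, "private data: %p\n", seq->private);
            return 0;
    }

    static int example_open(struct inode *inode, struct file *file)
    {
            /* Was: single_open(file, example_show, PDE(inode)->data); */
            return single_open(file, example_show, PDE_DATA(inode));
    }

    static const struct file_operations example_fops = {
            .owner   = THIS_MODULE,
            .open    = example_open,
            .read    = seq_read,
            .llseek  = seq_lseek,
            .release = single_release,
    };

    /* Was: entry = create_proc_entry("stats", 0, parent); entry->data = dmc; entry->proc_fops = &example_fops; */
    static void example_procfs_ctr(struct proc_dir_entry *parent, void *dmc)
    {
            proc_create_data("stats", 0, parent, &example_fops, dmc);
    }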

filesystem corruption on hdd

I am able to reproduce filesystem corruption across reboots. Here is some basic info:

  • Kernel 3.6.7 from kernel.org
  • Ubuntu 12.10 distribution
  • HDD: 500GB SATA disk (/dev/sdb)
  • SDD: 90GB partition (/dev/sda8)
  • EnhanceIO is using LRU algorithm in write-back mode
  • HDD is formatted with ext4
  • Mount point for the filesystem is /home

I created the ext4 filesystem on /home using a recovery disc, restored my data from backup, and let the udev rule call eio_cli with the create parameter to initialize the cache at boot-time. I then changed the udev rule to perform an enable instead of a create. On subsequent reboots, I always get filesystem corruption. Below is relevant text from dmesg:

[ 1.583187] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[ 1.583270] sd 1:0:0:0: [sdb] Write Protect is off
[ 1.583274] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 1.583321] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.585845] sdb: unknown partition table
[ 1.586063] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 7.466820] enhanceio: Fast (clean) shutdown detected
[ 7.466826] enhanceio: Both clean and dirty blocks exist in cacheget_policy: policy 2 found
[ 7.466843] enhanceio_lru: eio_lru_instance_init: created new instance of LRUenhanceio: Setting replacement policy to lru (2)
[ 7.466852] enhanceio: Allocate 91787KB (4B per) mem for 23497472-entry cache (capacity:92145MB, associativity:256, block size:4096 bytes)<6>
[ 10.005193] enhanceio: Cache metadata loaded from disk with 8484329 valid 2250 dirty blocks
[ 10.005200] enhanceio: Setting mode to write back init: udev-fallback-graphics main process (808) terminated with status 1
[ 39.683640] EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: nodelalloc,data=ordered
[ 10.008800] enhanceio_lru: Initialized 91787 sets in LRU
[ 52.619756] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75158687:freeing already freed block (bit 21663)
[ 53.031011] EXT4-fs error (device sdb): ext4_lookup:1383: inode #74711250: comm xfconfd: deleted inode referenced: 74714806
[ 53.035578] EXT4-fs error (device sdb): ext4_lookup:1383: inode #74711250: comm xfconfd: deleted inode referenced: 74714806
[ 53.037624] EXT4-fs error (device sdb): ext4_lookup:1383: inode #74711250: comm xfconfd: deleted inode referenced: 74714806
[ 53.689067] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75157421:freeing already freed block (bit 20397)
[ 53.689104] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75157422:freeing already freed block (bit 20398)
[ 53.689131] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75158688:freeing already freed block (bit 21664)
[ 53.689146] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75152344:freeing already freed block (bit 15320)
[ 54.930023] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2295, block 75216624:freeing already freed block (bit 14064)
[ 60.949201] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75158349:freeing already freed block (bit 21325)
[ 60.949216] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75158350:freeing already freed block (bit 21326)
[ 60.949224] EXT4-fs error (device sdb): mb_free_blocks:1301: group 2293, block 75158351:freeing already freed block (bit 21327)
[ 82.162493] EXT4-fs error (device sdb): ext4_lookup:1383: inode #74711250: comm xfconfd: deleted inode referenced: 74714806

Note the out-of-order timestamp from the enhanceio driver after the 39 second mark...not sure if that's a fluke in the kernel logger or something that indicates a problem. In any case, it seems like there is some sort of race condition between cache initialization and filesystem mounting or there may be a data consistency problem that occurs when the system goes down for a reboot.

In-use device can still be used elsewhere

I noticed that after I choose a disk (e.g. /dev/sdb) and an SSD (e.g. /dev/sdc) to create a cache, the SSD device can still simply be mkfs'ed and mounted. That's really not safe; I'd like to see something like "device is busy or in use".

And in eio_ttc.c, eio_ttc_get_device() contains this comment:
/*
* Do we need to claim the devices ??
* bd_claim_by_disk(bdev, charptr, gendisk)
*/

I think we should claim the device, or open it with the EXCL flag.

What's your opinion?

Jack
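
(For what it's worth: on 3.x kernels an exclusive claim is usually taken by opening the block device with FMODE_EXCL plus a holder pointer, which makes later exclusive opens by mount or mkfs fail with EBUSY until the claim is released. A minimal sketch, with placeholder names; using the per-cache context as the holder is only an example.)

    #include <linux/blkdev.h>
    #include <linux/fs.h>

    /* Claim the device exclusively; returns the bdev or an ERR_PTR() on failure. */
    static struct block_device *example_claim_device(const char *path, void *holder)
    {
            return blkdev_get_by_path(path, FMODE_READ | FMODE_WRITE | FMODE_EXCL,
                                      holder /* identifies the owner of the claim */);
    }

    /* Drop the claim when the cache is deleted. */
    static void example_release_device(struct block_device *bdev)
    {
            blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
    }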

Creating cache with source on LVM fails

When creating a cache with the source on a LVM device, it fails. This worked with the master branch code as it was on Jan 14th (i.e., the commits on Jan 15th and 16th seemed to cause the break?)

Output from the eio_cli command:

eio_cli create -d /dev/sdc -s /dev/vg_ssd/eio_backup -c eio_backup -p lru -m wb
Cache Name : eio_backup
Source Device : /dev/sdc
SSD Device : /dev/vg_ssd/eio_backup
Policy : lru
Mode : Write Back
block size : 4096
assocativity : 256
[Errno 22] Invalid argument

dmesg shows the cause of the errno 22:

enhanceio: Cache creation failed: get_device for source device failed.

Kernel OOPs

On "Debian Linux 3.8.5-1~experimental.1 x86_64 GNU/Linux"

eio_cli create -d /dev/md3 -s /dev/md0p3 -c ENH_CACHE0 -p lru -m ro

md3 is 1 TB; md0p3 is 16 GB.

EnhanceIO built from latest commit 7d7221a as of 2013-05-02.

The OOPs happened as soon as I ran fsck.ext4 /dev/md3.

/var/log/messages:

May  4 18:47:30 debmain kernel: [  702.653109] register_policy: policy 1 added
May  4 18:47:30 debmain kernel: [  702.662690] register_policy: policy 3 addedenhanceio: Setting mode to read only 
May  4 18:47:30 debmain kernel: [  702.689421] get_policy: cannot find policy 2enhanceio: policy_init: Cannot find requested policy
May  4 18:47:38 debmain kernel: [  702.689643] Not enough sets to use small metadataenhanceio: Allocate 30428KB (8B per) mem for 3894784-entry cache (capacity:15273MB, associativity:256, block size:4096 bytes)

May  4 18:47:38 debmain kernel: [  710.777005] PGD 224742067 PUD 223a83067 PMD 0 
May  4 18:47:38 debmain kernel: [  710.777008] Oops: 0000 [#1] SMP 
May  4 18:47:38 debmain kernel: [  710.777011] Modules linked in: enhanceio_rand(O) enhanceio_fifo(O) enhanceio(O) drbd lru_cache ip6table_filter ip6_tables iptable_filter ip_tables ebtable_nat ebtables x_tables cpufreq_conservative cpufreq_powersave cpufreq_userspace cpufreq_stats parport_pc ppdev lp parport autofs4 snd_hrtimer binfmt_misc fuse nfsv4 nfsd auth_rpcgss nfs_acl nfs lockd dns_resolver fscache sunrpc bridge stp llc dm_crypt it87 hwmon_vid loop usblp fglrx(PO) tuner_simple snd_hda_codec_hdmi tuner_types snd_hda_codec_realtek snd_hda_intel snd_hda_codec tuner tvaudio snd_bt87x snd_hwdep tda7432 snd_pcm_oss msp3400 snd_mixer_oss bttv snd_pcm snd_page_alloc btcx_risc snd_seq_midi snd_seq_midi_event tveeprom snd_rawmidi videobuf_dma_sg snd_seq rc_core coretemp v4l2_common videodev iTCO_wdt kvm_intel media i2c_i801 videobuf_core snd_seq_device iTCO_vendor_support snd_timer kvm acpi_cpufreq i2c_algo_bit mperf i2c_core snd lpc_ich pcspkr soundcore mfd_core evdev processor button thermal_sys ext4 crc16 jbd2 m
May  4 18:47:38 debmain kernel: bcache btrfs zlib_deflate crc32c libcrc32c dm_mod raid1 linear md_mod hid_generic usbhid hid sr_mod cdrom sg sd_mod crc_t10dif ata_generic microcode ata_piix r8169 mii pata_jmicron ahci libahci libata scsi_mod ehci_pci uhci_hcd ehci_hcd usbcore usb_common
May  4 18:47:38 debmain kernel: [  710.777091] CPU 3 
May  4 18:47:38 debmain kernel: [  710.777095] Pid: 11025, comm: fsck.ext4 Tainted: P           O 3.8-trunk-amd64 #1 Debian 3.8.5-1~experimental.1 Gigabyte Technology Co., Ltd. P35-DS3R/P35-DS3R
May  4 18:47:38 debmain kernel: [  710.777097] RIP: 0010:[<ffffffffa0b76100>]  [<ffffffffa0b76100>] eio_repl_blk_init+0x20/0x20 [enhanceio]
May  4 18:47:38 debmain kernel: [  710.777102] RSP: 0018:ffff8801caf33910  EFLAGS: 00010046
May  4 18:47:38 debmain kernel: [  710.777104] RAX: 0000000000000400 RBX: 0000000000000100 RCX: 0000000000000002
May  4 18:47:38 debmain kernel: [  710.777106] RDX: ffff8801caf33930 RSI: 0000000000000000 RDI: 0000000000000000
May  4 18:47:38 debmain kernel: [  710.777108] RBP: 0000000000000400 R08: ffff8802243012c0 R09: 00ffffffffffffff
May  4 18:47:38 debmain kernel: [  710.777110] R10: 0000000000000000 R11: 0000000008000100 R12: ffff8801cce5b000
May  4 18:47:38 debmain kernel: [  710.777111] R13: 0000000000000100 R14: 0000000000000000 R15: 0000000008000100
May  4 18:47:38 debmain kernel: [  710.777114] FS:  00007fa395115760(0000) GS:ffff88022fd80000(0000) knlGS:0000000000000000
May  4 18:47:38 debmain kernel: [  710.777116] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May  4 18:47:38 debmain kernel: [  710.777118] CR2: 0000000000000030 CR3: 00000001d0322000 CR4: 00000000000007e0
May  4 18:47:38 debmain kernel: [  710.777120] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
May  4 18:47:38 debmain kernel: [  710.777121] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
May  4 18:47:38 debmain kernel: [  710.777124] Process fsck.ext4 (pid: 11025, threadinfo ffff8801caf32000, task ffff8801d7297040)
May  4 18:47:38 debmain kernel: [  710.777125] Stack:
May  4 18:47:38 debmain kernel: [  710.777126]  ffffffffa0b711de 0000000000000246 ffff8801caf339f0 0000000008000100
May  4 18:47:38 debmain kernel: [  710.777130]  ffffffffffffffff ffff8801cce5b000 ffff8801cce5b000 ffff88021b38d3c0
May  4 18:47:38 debmain kernel: [  710.777134]  ffff8802248b5140 ffff8802248b5140 0000000000000000 0000000000000282
May  4 18:47:38 debmain kernel: [  710.777137] Call Trace:
May  4 18:47:38 debmain kernel: [  710.777142]  [<ffffffffa0b711de>] ? eio_lookup.isra.9+0x1de/0x260 [enhanceio]
May  4 18:47:38 debmain kernel: [  710.777146]  [<ffffffffa0b7394d>] ? eio_map+0x5bd/0x1570 [enhanceio]
May  4 18:47:38 debmain kernel: [  710.777151]  [<ffffffff810d936f>] ? zone_statistics+0x41/0x74
May  4 18:47:38 debmain kernel: [  710.777156]  [<ffffffffa0b79a67>] ? eio_make_request_fn+0x347/0x450 [enhanceio]
May  4 18:47:38 debmain kernel: [  710.777160]  [<ffffffff811ac02b>] ? generic_make_request+0x96/0xd5
May  4 18:47:38 debmain kernel: [  710.777162]  [<ffffffff811acc92>] ? submit_bio+0x10a/0x13b
May  4 18:47:38 debmain kernel: [  710.777166]  [<ffffffff81132e92>] ? bio_alloc_bioset+0x78/0xe3
May  4 18:47:38 debmain kernel: [  710.777169]  [<ffffffff811306e5>] ? submit_bh+0x194/0x1af
May  4 18:47:38 debmain kernel: [  710.777172]  [<ffffffff81130f98>] ? block_read_full_page+0x1bf/0x1d8
May  4 18:47:38 debmain kernel: [  710.777175]  [<ffffffff81133b8d>] ? I_BDEV+0x8/0x8
May  4 18:47:38 debmain kernel: [  710.777178]  [<ffffffff810cc686>] ? get_page+0x9/0x25
May  4 18:47:38 debmain kernel: [  710.777181]  [<ffffffff810cc6cd>] ? __lru_cache_add+0x2b/0x51
May  4 18:47:38 debmain kernel: [  710.777184]  [<ffffffff810cb762>] ? __do_page_cache_readahead+0x176/0x1b6
May  4 18:47:38 debmain kernel: [  710.777187]  [<ffffffff810cbbda>] ? ondemand_readahead+0x1d1/0x1e2
May  4 18:47:38 debmain kernel: [  710.777191]  [<ffffffff810c3c5e>] ? generic_file_aio_read+0x23b/0x5b9
May  4 18:47:38 debmain kernel: [  710.777194]  [<ffffffff8110af0a>] ? do_sync_read+0x62/0x9b
May  4 18:47:38 debmain kernel: [  710.777197]  [<ffffffff8110b50b>] ? vfs_read+0x93/0xf5
May  4 18:47:38 debmain kernel: [  710.777200]  [<ffffffff8110b5be>] ? sys_read+0x51/0x80
May  4 18:47:38 debmain kernel: [  710.777204]  [<ffffffff813871e9>] ? system_call_fastpath+0x16/0x1b
May  4 18:47:38 debmain kernel: [  710.777206] Code: 66 66 2e 0f 1f 84 00 00 00 00 00 48 85 ff 74 0b 48 8b 47 28 48 85 c0 74 02 ff e0 31 c0 c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 <48> 8b 47 30 ff e0 66 2e 0f 1f 84 00 00 00 00 00 48 8b 47 38 ff 
May  4 18:47:38 debmain kernel: [  710.777237]  RSP <ffff8801caf33910>
May  4 18:47:38 debmain kernel: [  710.777238] CR2: 0000000000000030
May  4 18:47:38 debmain kernel: [  710.777241] ---[ end trace 31564a2717ba9e3a ]---

eio_cli info:

Cache Name       : ENH_CACHE0
Source Device    : /dev/md3
SSD Device       : /dev/md0p3
Policy           : lru
Mode             : Read Only
Block Size       : 4096
Associativity    : 256
State            : normal

DKMS config

For the moment, EnhanceIO doesn't have a dkms config.

Cache trashing strategy

Hi

I'd like to know how EnhanceIO is going to behave in a following situation.

  1. Cache contains useful data (files accessed frequently, etc)
  2. Now I am starting to write a file with size equal to SSD (cache) size.

What happens to the cache contents after the file is written?
Would the cache end up filled with the file's pages, with all the "useful" data evicted?

After a lot of writes, stats like write_hit_pct are wrong (> 100%)

While watching the stats, I noticed that the hit_pct can go over 100%

Probably an overflow issue (I saw up to 250%; it has been decreasing slowly since):

# cat /proc/enhanceio/data/stats                                                                                                                                                              Fri Mar 22 14:30:17 2013

reads                         989210416
writes                       4755020996
read_hits                     530588593
read_hit_pct                         53
write_hits                    731039258
write_hit_pct                       158
dirty_write_hits                      0
dirty_write_hit_pct                   0
cached_blocks                 116746752
rd_replace                     57285407
wr_replace                    386288954
noroom                              224
cleanings                             0
md_write_dirty                        0
md_write_clean                        0
md_ssd_writes                         0
do_clean                              0
nr_blocks                     116746752
nr_dirty                              0
nr_sets                          456042
clean_index                           0
uncached_reads                  1796394
uncached_writes                 4693522
uncached_map_size                     0
uncached_map_uncacheable              0
disk_reads                    458621823
disk_writes                  4755020996
ssd_reads                     530588593
ssd_writes                   5213608170
ssd_readfills                 458603248
ssd_readfill_unplugs            1448177
readdisk                        1796394
writedisk                       1796394
readcache                      66323624
readfill                       57325406
writecache                    651701241
readcount                       3868868
writecount                      4693522
kb_reads                      494605208
kb_writes                    2377510498
rdtime_ms                      60797900
wrtime_ms                    2284120776

udev rule template invokes "eio_cli enable", which is invalid

In the EIO_SETUP section of the 94-Enhanceio.template file, eio_cli is called with the enable option to set up a cache, but the enable option is invalid.

RUN+="/sbin/eio_cli enable -d /dev/$env{disk_name} -s /dev/$env{ssd_name} <cache_name>"

usage: eio_cli [-h] {delete,edit,info,clean,create} ...
eio_cli: error: invalid choice: 'enable' (choose from 'delete', 'edit', 'info', 'clean', 'create')

block size and assoc always set to 0

[root@localhost enhanceio]# eio_cli create -d /dev/sdb -s /dev/sdc -p lru -m wt -c c1
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : lru
Mode : Write Through
block size : 4096
assoc : 256
[root@localhost enhanceio]# eio_cli info
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : fifo
Mode : Write Through
block size : 0
assoc : 0

'ro' mode doesn't seem to bypass writes

When a cache is created in 'ro' (read-only) mode, writes to the cached device also go to the SSD cache. As far as I understand this is expected for 'wt' (write-through) mode, but not for read-only mode, which is not supposed to save anything to the cache during write operations.
This looks like a bug, as at the moment I don't see any difference between read-only and write-through mode. Please advise.

Kernel Panics During Create On MD Device

I created an md RAID 1 mirror using two external USB drives. Another USB flash device is being used as the cache device. Whenever I create the cache, the kernel panics. I tested on two separate machines (different hardware) with the same kernel, and the panic and stack traces were consistent.

Creating the cache on a USB disk directly works OK, so something is up when the md device gets involved.

Using latest git commit 94cb7db on linux 3.7.2 .

eio_cli create -d /dev/md/smallraid -s /dev/disk/by-id/usb-Corsair_Survivor_3.0_12331349000015410267-0:0 -c test
[  411.511504] ------------[ cut here ]------------
[  411.511699] kernel BUG at drivers/scsi/scsi_lib.c:1192!
[  411.511823] invalid opcode: 0000 [#1] SMP 
[  411.512061] Modules linked in: enhanceio_fifo enhanceio_lru enhanceio netconsole configfs fcoe libfcoe libfc scsi_transport_fc scsi_tgt rdma_ucm ib_uverbs rdma_cm ib_addr iw_cm ib_cm ib_sa ib_mad ib_core xt_LOG xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables ebtable_nat ebtables x_tables bnep rfcomm parport_pc ppdev lp parport bluetooth rfkill crc16 cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative autofs4 binfmt_misc nfsd auth_rpcgss nfs_acl nfs lockd fscache sunrpc bridge stp llc loop firewire_sbp2 dm_crypt zfs(PO) zunicode(PO) zavl(PO) zcommon(PO) znvpair(PO) spl(O) zlib_deflate snd_hda_codec_hdmi coretemp kvm_intel kvm snd_hda_codec_realtek snd_usb_audio snd_usbmidi_lib snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_page_alloc i7core_edac snd_seq_midi pcspkr edac_core snd_seq_midi_event snd_seq iTCO_wdt snd_rawmidi iTCO_vendor_support i2c_i801 snd_timer snd_seq_device snd serio_raw i2c_core acpi_cpufreq mperf microcode evdev soundcore mxm_wmi lpc_ich joydev processor mfd_core button wmi thermal_sys hid_generic ses enclosure hid_logitech_dj usbhid hid xfs dm_mod raid10 raid1 md_mod usb_storage firewire_ohci xhci_hcd firewire_core uhci_hcd ata_generic crc32c_intel crc_itu_t r8169 ehci_hcd mii sr_mod pata_jmicron cdrom sg usbcore sd_mod crc_t10dif usb_common
[  411.518497] CPU 3 
[  411.518545] Pid: 9108, comm: eio_cli Tainted: P           O 3.7.2+ #5 Gigabyte Technology Co., Ltd. EX58-UD5/EX58-UD5
[  411.518726] RIP: 0010:[<ffffffff8124cd6c>]  [<ffffffff8124cd6c>] scsi_setup_fs_cmnd+0x42/0x8a
[  411.518885] RSP: 0018:ffff8806126cd9f8  EFLAGS: 00010046
[  411.518965] RAX: 0000000000000000 RBX: ffff880617e1c800 RCX: 0000000000000002
[  411.519051] RDX: 0000000000040000 RSI: ffff8806171c7b20 RDI: ffff880617e1c800
[  411.519137] RBP: ffff8806171c7b20 R08: ffffffff8179d9c0 R09: ffff8806126cd9f0
[  411.519222] R10: 0000000000012a00 R11: ffff880617257400 R12: ffff880617e1c800
[  411.519307] R13: 0000000000040000 R14: ffff8806172e8000 R15: ffff880600fa90c0
[  411.519392] FS:  00007f6bbf783700(0000) GS:ffff88063fc60000(0000) knlGS:0000000000000000
[  411.519517] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  411.519599] CR2: ffffffffff600400 CR3: 000000061801e000 CR4: 00000000000007e0
[  411.519701] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  411.519802] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  411.519905] Process eio_cli (pid: 9108, threadinfo ffff8806126cc000, task ffff880618ecea60)
[  411.520032] Stack:
[  411.520119]  0000000000000000 ffff8806171c7b20 0000000000000000 ffffffffa0068163
[  411.520402]  ffff880617257400 00000000efc94f94 ffff8806171c7b20 ffff880617390330
[  411.520665]  ffff8806171c7b20 ffff880617173c00 ffff880600000000 ffff880626b21038
[  411.520927] Call Trace:
[  411.521008]  [<ffffffffa0068163>] ? sd_prep_fn+0x3de/0xc5f [sd_mod]
[  411.521095]  [<ffffffff81176abb>] ? blk_peek_request+0xab/0x185
[  411.528249]  [<ffffffff8124d26c>] ? scsi_request_fn+0x49/0x4ee
[  411.528365]  [<ffffffff8104e3b2>] ? abort_exclusive_wait+0x79/0x79
[  411.528494]  [<ffffffff811736c6>] ? elv_rqhash_add.isra.13+0x26/0x4c
[  411.528604]  [<ffffffff81177147>] ? blk_queue_bio+0x281/0x2d2
[  411.528701]  [<ffffffff81175ad7>] ? generic_make_request+0x96/0xd5
[  411.528807]  [<ffffffffa021da81>] ? make_request+0x39c/0xa8d [raid1]
[  411.528902]  [<ffffffff810d3e6d>] ? kmem_cache_alloc+0x8a/0xae
[  411.529006]  [<ffffffffa020492c>] ? md_make_request+0xcc/0x1c1 [md_mod]
[  411.529099]  [<ffffffff81100849>] ? bio_alloc_bioset+0x78/0xe3
[  411.529201]  [<ffffffffa0204860>] ? new_dev_store+0x158/0x158 [md_mod]
[  411.529298]  [<ffffffffa040b692>] ? eio_issue_empty_barrier_flush+0x82/0xa1 [enhanceio]
[  411.529425]  [<ffffffffa040be93>] ? eio_ttc_activate+0x19b/0x1c7 [enhanceio]
[  411.529525]  [<ffffffffa0403591>] ? eio_cache_create+0x1292/0x1439 [enhanceio]
[  411.529653]  [<ffffffff810c3635>] ? insert_vmalloc_vmlist+0x15/0x4d
[  411.529752]  [<ffffffff810c4fd6>] ? __vmalloc_node_range+0x1b9/0x1e4
[  411.529852]  [<ffffffffa0403791>] ? eio_ioctl+0x30/0x1fd [enhanceio]
[  411.529950]  [<ffffffff81055462>] ? should_resched+0x5/0x23
[  411.530047]  [<ffffffffa04037c7>] ? eio_ioctl+0x66/0x1fd [enhanceio]
[  411.530148]  [<ffffffff810e8a32>] ? vfs_ioctl+0x1e/0x31
[  411.530242]  [<ffffffff810e9269>] ? do_vfs_ioctl+0x3ed/0x42f
[  411.530336]  [<ffffffff810e0111>] ? sys_newfstat+0x23/0x2b
[  411.530433]  [<ffffffff810e92f8>] ? sys_ioctl+0x4d/0x7d
[  411.530531]  [<ffffffff8135d569>] ? system_call_fastpath+0x16/0x1b
[  411.530627] Code: 00 00 48 85 c0 75 0c 66 83 bd e8 00 00 00 00 75 1c eb 18 48 8b 00 48 85 c0 74 ec eb 47 48 89 ee 48 89 df ff d0 85 c0 74 de eb 44 <0f> 0b 48 89 ee 48 89 df e8 bf f7 ff ff 48 85 c0 48 89 c2 74 1d 
[  411.533342] RIP  [<ffffffff8124cd6c>] scsi_setup_fs_cmnd+0x42/0x8a
[  411.533464]  RSP <ffff8806126cd9f8>
[  411.533539] ---[ end trace 1c19110551a620e2 ]---

eio_cli create bug

Ubuntu 13.04 with kernel 3.8.0-25-generic

After installing, when running:
sudo eio_cli create -d /dev/sda2 -s /dev/sdb1 -c cachedev

Received:

Cache Name : cachedev
Source Device : /dev/sda2
SSD Device : /dev/sdb1
Policy : lru
Mode : Write Through
Block Size : 4096
Associativity : 256
Traceback (most recent call last):
File "/sbin/eio_cli", line 668, in
sys.exit(main())
File "/sbin/eio_cli", line 597, in main
return cache.create()
File "/sbin/eio_cli", line 374, in create
if self.do_eio_ioctl(EIO_IOC_CREATE) == SUCCESS:
File "/sbin/eio_cli", line 308, in do_eio_ioctl
fd = open(EIODEV, "r")
IOError: [Errno 2] No such file or directory: '/dev/eiodev'

Writeback cache flushes data too frequently

Hi, there's another issue with the writeback cache.
What I want to set up is an ordinary writeback cache that flushes only once in a while, say once every 5 minutes. For that purpose I set time_based_clean_interval to 5.

Setup command:
eio_cli create -d /dev/sdc -s /dev/md0 -p lru -m wb -c CACHE -b 8192

The problem is that the cache flushes data to the source device very frequently, nearly continuously, even within the first 10 seconds. In practice I never get to wait 5 minutes for a cache flush to happen, so tuning time_based_clean_interval doesn't change anything.

As far as I've analyzed, eio_write() seems to call eio_uncached_write() nearly always, which might be what triggers so many flushes. Is there any way to avoid uncached writes, at least for normal synchronous write I/O?

Dongsu

Rebasing changes in commit 332fc8431a

The changes to eio_procfs.c are incompatible with the 3.9 kernel (and all prior versions, I would assume).

The error:
eio_procfs.c:1804:2: error: implicit declaration of function ‘PDE_DATA'

And there are several warnings for calling single_open()

warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast
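
(A common way to keep the PDE_DATA()-based code building on 3.9 and earlier kernels is a small version-guarded compatibility shim; a sketch, assuming it lives in a header included before the proc code.)

    #include <linux/fs.h>
    #include <linux/proc_fs.h>
    #include <linux/version.h>

    #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 10, 0)
    /* PDE_DATA() only exists from 3.10 on; emulate it via the old PDE() accessor. */
    static inline void *PDE_DATA(const struct inode *inode)
    {
            return PDE(inode)->data;
    }
    #endif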

cache always created with default options even if different options are specified

[root@localhost enhanceio]# eio_cli create -d /dev/sdb -s /dev/sdc -p lru -m wb -c c1
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : lru
Mode : Write Back
block size : 4096
assoc : 256
[root@localhost enhanceio]# eio_cli info
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : fifo
Mode : Write Through
block size : 0
assoc : 0
[root@localhost enhanceio]# cat /proc/enhanceio/c1/config
src_name /dev/sdb
ssd_name /dev/sdc
src_size 625142448
ssd_size 60810752
set_size 512
block_size 4096
mode 3
eviction 1
num_sets 118771
num_blocks 60810752
metadata small
state normal
flags 0x00000000

[root@localhost ~]# eio_cli create -s /dev/sdc -d /dev/sdb -m ro -p fifo -b 8192 -c c1
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : fifo
Mode : Read Only
block size : 8192
assoc : 512
[root@localhost ~]# eio_cli info
Cache Name : c1
Source Device : /dev/sdb
SSD Device : /dev/sdc
Policy : fifo
Mode : Write Through
block size : 0
assoc : 0

For more information look at /proc/enhanceio/<cache_name>/config
[root@localhost ~]# cat /proc/enhanceio/c1/config
src_name /dev/sdb
ssd_name /dev/sdc
src_size 625142448
ssd_size 60810752
set_size 512
block_size 4096
mode 3
eviction 1
num_sets 118771
num_blocks 60810752
metadata small
state normal
flags 0x00000000

make install fails

distro: arch linux
kernel: 3.9.7

I'm using the stock kernel and the EnhanceIO linux 3.9 branch.
The module compiles fine, but make install fails with a strange error.

% sudo make install
make -C /lib/modules/3.9.7-1-ARCH/build M= modules V=0
make[1]: Entering directory `/usr/src/linux-3.9.7-1-ARCH'
scripts/Makefile.build:44: /usr/src/linux-3.9.7-1-ARCH/arch/x86/syscalls/Makefile: No such file or directory
make[2]: *** No rule to make target `/usr/src/linux-3.9.7-1-ARCH/arch/x86/syscalls/Makefile'.  Stop.
make[1]: *** [archheaders] Error 2
make[1]: Leaving directory `/usr/src/linux-3.9.7-1-ARCH'
make: *** [modules] Error 2

After unclean shutdown, cache is started after fsck leading to corrupted FS

Hi,

While filling the server in writeback mode and serving a few files, a kernel panic occurred. I don't know whether it is related to eio or not (no logs), but the recovery after reboot failed.

The filesystem (on /home) was auto-checked before the eio cache was up, leading to a corrupted FS and an unmountable /home. Worse, the cache didn't come up, as it tried to start in read-only mode but dirty data was present.

Mar 21 14:08:26 ss2 kernel: [    5.738754] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
Mar 21 14:08:26 ss2 kernel: [    5.894272] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
Mar 21 14:08:26 ss2 kernel: [    5.953532] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
Mar 21 14:08:26 ss2 kernel: [    5.954556] XFS (sdc1): Mounting Filesystem
Mar 21 14:08:26 ss2 kernel: [    6.191279] XFS (sdc1): Starting recovery (logdev: internal)
Mar 21 14:08:26 ss2 kernel: [    6.237324] XFS (sdc1): xlog_recover_inode_pass2: Bad inode magic number, dip = 0xffffc90017c11800, dino bp = 0xffff88090fd61380, ino = 103088659608
Mar 21 14:08:26 ss2 kernel: [    6.237424] XFS (sdc1): Internal error xlog_recover_inode_pass2(1) at line 2266 of file fs/xfs/xfs_log_recover.c.  Caller 0xffffffffa017556f
Mar 21 14:08:26 ss2 kernel: [    6.238047] XFS (sdc1): log mount/recovery failed: error 117
Mar 21 14:08:26 ss2 kernel: [    6.238151] XFS (sdc1): log mount failed
Mar 21 14:08:26 ss2 kernel: [    7.189661] register_policy: policy 1 added
Mar 21 14:08:26 ss2 kernel: [    7.192931] register_policy: policy 2 added
                                           enhanceio: Unclean shutdown detected
                                           enhanceio: Only dirty blocks exist in cacheget_policy: policy 2 found
                                           enhanceio_lru: eio_lru_instance_init: created
Mar 21 14:08:26 ss2 kernel: [   22.861980] enhanceio: md_load: Cannot use read only mode because dirty data exists in the cache
Mar 21 14:08:26 ss2 kernel: [   22.862067] enhanceio: Cache metadata loaded from disk with 35816421 valid 35816421 dirty blocks
                                           enhanceio: md_load: Cannot use read only mode because dirty data exists in the cache
Mar 21 14:08:26 ss2 kernel: [   22.862487] enhanceio: Cache metadata loaded from disk with 35816421 valid 35816421 dirty blocks<3>
                            [   22.876348] enhanceio: Cache creation failed: Failed to reload cache.
Mar 21 14:08:26 ss2 kernel: [   22.876688] enhanceio: Cache creation failed: Failed to reload cache.

# eio_cli info => no caches found

Re-enabled the cache and mounted the FS to allow log replay

# eio_cli enable (with good params)

Mar 21 14:09:53 ss2 kernel: [  115.202069] enhanceio: Unclean shutdown detected
Mar 21 14:09:53 ss2 kernel: [  115.202073] enhanceio: Only dirty blocks exist in cacheget_policy: policy 2 found
Mar 21 14:09:53 ss2 kernel: [  115.202082] enhanceio_lru: eio_lru_instance_init: created new instance of LRUenhanceio: Setting replacement policy to lru (2)
Mar 21 14:10:01 ss2 kernel: [  115.202088] enhanceio: Allocate 456042KB (4B per) mem for 116746752-entry cache (capacity:457823MB, associativity:256, block size:4096 bytes)
                                           enhanceio: Cache metadata loaded from disk with 3581
Mar 21 14:11:02 ss2 kernel: [  123.697241] enhanceio: Setting mode to write back <5>
                            [  184.689270] XFS (sdc1): Mounting Filesystem
Mar 21 14:11:03 ss2 kernel: [  184.981506] XFS (sdc1): Starting recovery (logdev: internal)
Mar 21 14:11:03 ss2 kernel: [  185.743178] XFS (sdc1): Ending recovery (logdev: internal)
Mar 21 14:38:55 ss2 kernel: [ 1857.380497] XFS (sdc1): Mounting Filesystem
Mar 21 14:38:55 ss2 kernel: [ 1857.521968] XFS (sdc1): Ending clean mount
Mar 21 14:41:26 ss2 kernel: [  123.709083] enhanceio_lru: Initialized 456042 sets in LRU
Mar 21 14:41:34 ss2 kernel: [ 2008.743648] enhanceio: Writing out metadata to cache device. Please wait...
                                           enhanceio: Metadata saved on the cache device
Mar 21 14:44:50 ss2 kernel: [ 2016.202840] enhanceio: Valid blocks: 35817056, Dirty blocks: 0, Metadata sectors: 3648496
                                           enhanceio: Setting mode to read only
Mar 21 14:44:50 ss2 kernel: [ 2212.156982] get_policy: policy 2 foundenhanceio_lru: eio_lru_instance_init: created new instance of LRU
Mar 21 14:44:58 ss2 kernel: [ 2212.156988] enhanceio: Setting replacement policy to lru (2)enhanceio: Allocate 456042KB (4B per) mem for 116746752-entry cache (capacity:457823MB, associativity:256, block size:4096 bytes)
Mar 21 14:46:55 ss2 kernel: [ 2219.929048] enhanceio_lru: Initialized 456042 sets in LRU<5>
                            [ 2337.583123] XFS (sdc1): Mounting Filesystem
Mar 21 14:46:55 ss2 kernel: [ 2337.816779] XFS (sdc1): Ending clean mount

After xfs_repair, several files were damaged. I flushed the cache (put it in read-only mode, flushing 300 GB of dirty data) and disabled the write-barrier mount option on /home.

EnhanceIO should bring the cache up before fsck can happen, or maybe this can be circumvented with an fstab option, which should be documented.

Thanks

Persistence.txt enhancements

It would be helpful for users if Persistence.txt showed how to gather the information required for the udev rules:

1) Change <cache_match_expr> to ENV{ID_SERIAL}=="<ID SERIAL OF YOUR CACHE DEVICE>", ENV{DEVTYPE}==<DEVICE TYPE OF YOUR CACHE DEVICE>

2) Change <source_match_expr> to ENV{ID_SERIAL}=="<ID SERIAL OF YOUR HARD DISK>", ENV{DEVTYPE}==<DEVICE TYPE OF YOUR SOURCE DEVICE>

Find <ID SERIAL OF YOUR xxx DEVICE> and <DEVICE TYPE OF YOUR xxx DEVICE> by running:
    udevadm info --query=env --name=/dev/<source disk or partition name>
and in the output, find the ID_SERIAL and DEVTYPE items, respectively.

Partitioned cache device causes "attempt to access beyond end of device" and kernel panic

I'm not sure if this is a bug. It's unclear whether partitions are supported as cache devices. Anyway, I partitioned a 256GB ssd drive into two partitions. I tried both MBR and GPT partitions, and had the same issue.

Each partition was used as a cache device for two different md devices. After some disk activity, a kernel panic. I was able to re-produce the issue and captured the output.

The create commands:

eio_cli  create -m ro -d /dev/md/probox  -s /dev/disk/by-id/scsi-SM4-CT256M4SSD2_570000000000-part1  -c proboxcache
eio_cli  create -m ro -d /dev/md/wdarray1  -s /dev/disk/by-id/scsi-SM4-CT256M4SSD2_570000000000-part2  -c wdarray1cache

The kernel create output:

[  275.955175] get_policy: policy 2 foundenhanceio_lru: eio_lru_instance_init: created new instance of LRU
[  275.955198] enhanceio: Setting replacement policy to lru (2)enhanceio: Allocate 130559KB (4B per) mem for 33423104-entry cache (capacity:131069MB, associativity:256, block size:4096 bytes)
[  280.028273] enhanceio_lru: Initialized 130559 sets in LRU

[  339.949042] get_policy: policy 2 foundenhanceio_lru: eio_lru_instance_init: created new instance of LRU
[  339.949095] enhanceio: Setting replacement policy to lru (2)enhanceio: Allocate 112683KB (4B per) mem for 28846848-entry cache (capacity:113123MB, associativity:256, block size:4096 bytes)
[  343.483767] enhanceio_lru: Initialized 112683 sets in LRU

Then lots of kernel errors before a final panic (sdl1 and sdl2 are the SSD partitions):

[ 1589.887839] attempt to access beyond end of device
[ 1589.887869] sdl1: rw=1, want=4367632552, limit=268435456
[ 1589.887892] io_callback: io error -5 block 4366588160 action 5[ 1589.914359] attempt to access beyond end of device
[ 1589.914374] sdl1: rw=1, want=4376475816, limit=268435456
[ 1589.914404] io_callback: io error -5 block 4375431176 action 5[ 1589.932185] attempt to access beyond end of device
[ 1589.932214] sdl1: rw=1, want=4367716520, limit=268435456
[ 1589.932228] io_callback: io error -5 block 4366673208 action 5[ 1589.932513] attempt to access beyond end of device
[ 1589.932526] sdl1: rw=1, want=4367716520, limit=268435456
[ 1589.932537] attempt to access beyond end of device
[ 1589.932539] io_callback: io error -5 block 4366673216 action 5[ 1589.932564] sdl1: rw=1, want=4367716528, limit=268435456
[ 1589.932575] attempt to access beyond end of device
[ 1589.932577] sdl1: rw=1, want=4367716536, limit=268435456
[ 1589.932581] attempt to access beyond end of device
[ 1589.932582] sdl1: rw=1, want=4367716544, limit=268435456
[ 1589.932600] io_callback: io error -5 block 4366673224 action 5
[ 1589.932605] io_callback: io error -5 block 4366673232 action 5
[ 1589.932923] attempt to access beyond end of device
[ 1589.932936] sdl1: rw=1, want=4367716520, limit=268435456
[ 1589.932955] io_callback: io error -5 block 4366673240 action 5
...
[ 1595.086656] Call Trace:
[ 1595.086697]  [<ffffffffa01e7da8>] ? eio_post_io_callback+0x48e/0x6e6 [enhanceio]
[ 1595.086712]  [<ffffffff81055611>] ? finish_task_switch+0x7d/0xa4
[ 1595.086730]  [<ffffffff81049cbd>] ? process_one_work+0x15b/0x24a
[ 1595.086743]  [<ffffffffa01e791a>] ? eio_disk_io+0x1bb/0x1bb [enhanceio]
[ 1595.086750]  [<ffffffff81048f95>] ? cwq_activate_delayed_work+0x1e/0x28
[ 1595.086758]  [<ffffffff8104a071>] ? worker_thread+0x118/0x1b2
[ 1595.086767]  [<ffffffff81049f59>] ? rescuer_thread+0x187/0x187
[ 1595.086776]  [<ffffffff8104dbc0>] ? kthread+0x81/0x89
[ 1595.086786]  [<ffffffff81050000>] ? posix_cpu_nsleep+0x33/0xdd
[ 1595.086795]  [<ffffffff8104db3f>] ? __kthread_parkme+0x5b/0x5b
[ 1595.086807]  [<ffffffff81375ebc>] ? ret_from_fork+0x7c/0xb0
[ 1595.086815]  [<ffffffff8104db3f>] ? __kthread_parkme+0x5b/0x5b
[ 1595.086824] Kernel panic - not syncing: VERIFY: assertion (EIO_DBN_GET(dmc, index) == EIO_ROUND_SECTOR(dmc,ebio->eb_sector)) failed at drivers/block/enhanceio/eio_main.c (478)
[ 1595.086824]
[ 1595.086833] Pid: 8134, comm: kworker/u:0 Tainted: P           O 3.7.5+ #2
[ 1595.086839] Call Trace:
[ 1595.086848]  [<ffffffff8136f40a>] ? panic+0xc8/0x1d1
[ 1595.086858]  [<ffffffffa01e7f1a>] ? eio_post_io_callback+0x600/0x6e6 [enhanceio]
[ 1595.086864]  [<ffffffff81055611>] ? finish_task_switch+0x7d/0xa4
[ 1595.086870]  [<ffffffff81049cbd>] ? process_one_work+0x15b/0x24a
[ 1595.086878]  [<ffffffffa01e791a>] ? eio_disk_io+0x1bb/0x1bb [enhanceio]
[ 1595.086884]  [<ffffffff81048f95>] ? cwq_activate_delayed_work+0x1e/0x28
[ 1595.086889]  [<ffffffff8104a071>] ? worker_thread+0x118/0x1b2
[ 1595.086895]  [<ffffffff81049f59>] ? rescuer_thread+0x187/0x187
[ 1595.086900]  [<ffffffff8104dbc0>] ? kthread+0x81/0x89
[ 1595.086906]  [<ffffffff81050000>] ? posix_cpu_nsleep+0x33/0xdd
[ 1595.086911]  [<ffffffff8104db3f>] ? __kthread_parkme+0x5b/0x5b
[ 1595.086918]  [<ffffffff81375ebc>] ? ret_from_fork+0x7c/0xb0
[ 1595.086923]  [<ffffffff8104db3f>] ? __kthread_parkme+0x5b/0x5b
[ 1595.086932] Rebooting in 10 seconds..

Kernel is version 3.7.5 SMP x86_64 and EnhanceIO is git version ac7c103.
I can rebuild the kernel with debug symbols if that would help. I haven't had any issues yet using the entire SSD device as a cache device, so I'm not sure whether, by design, a partition should be usable as a cache device.

Can't create cache

Latest git version, kernel 3.8.9-200.fc18.x86_64

eio_cli create -d /dev/mapper/vg_bloody-lv_root -s /dev/vg_i3/cache -m wt -b 4096 -c HOME_CACHE
Cache Name : HOME_CACHE
Source Device : /dev/mapper/vg_bloody-lv_root
SSD Device : /dev/vg_i3/cache
Policy : lru
Mode : Write Through
Block Size : 4096
Associativity : 256
Cache creation failed (dmesg can provide you more info)

but dmesg shows nothing; older builds work normally.

I can't create cache: enhanceio: md_create: Requested cache size exceeds the cache device's capacity (121872 > 0)

Hi!
I'm trying to use EnhanceIO for the first time, so there is a high probability I'm doing something wrong. This question should probably be asked on a mailing list, but I can't find one, which is why I'm creating an issue ticket.

I compiled EnhanceIO at commit d88c2bc and loaded the modules without problems. I'd like to create a test cache, with a zram device as the "SSD" device and an LVM volume as the source device:

./eio_cli create -d /dev/mapper/system-lvtmp -s /dev/zram2 -m ro -p lru -c cache1
Cache Name : cache1
Source Device : /dev/mapper/system-lvtmp
SSD Device : /dev/zram2
Policy : lru
Mode : Read Only
Block Size : 4096
Associativity : 256
[Errno 22] Invalid argument
Cache creation failed (dmesg can provide you more info)

In dmesg I can find:
[14412.249450] enhanceio: Setting mode to read only
[14412.249454] get_policy: policy 2 found
[14412.249457] enhanceio_lru: eio_lru_instance_init: created new instance of LRU
[14412.249458] enhanceio: Setting replacement policy to lru (2)
[14412.249467] Not enough sets to use small metadata
[14412.249468] enhanceio: md_create: Requested cache size exceeds the cache device's capacity (121872 > 0)
[14412.249472] enhanceio: Cache creation failed: Failed to force create cache.

I don't know what exactly this means. (kernel 3.8)
Thanks.

edit cache results in kernel panic

root@localhost ~]# [ 3342.888479] Kernel panic - not syncing: VERIFY: assertion ((mode != 0) || (policy != 0)) failed at drivers/block/enhanceio/eio_ttc.c (1026)
[ 3342.888479]
[ 3343.587906] Pid: 2531, comm: eio_cli Not tainted 3.7.1nitin #1
[ 3343.877318] Call Trace:
[ 3343.998256] [] panic+0xc1/0x1d0
[ 3344.235986] [] eio_cache_edit+0x554/0x6a0 [enhanceio]
[ 3344.568937] [] eio_ioctl+0x176/0x2d0 [enhanceio]
[ 3344.881624] [] do_vfs_ioctl+0x99/0x580
[ 3345.149384] [] ? inode_has_perm.isra.30.constprop.60+0x2a/0x30
[ 3345.522144] [] ? file_has_perm+0x97/0xb0
[ 3345.798753] [] sys_ioctl+0x91/0xb0
[ 3346.050663] [] ? __audit_syscall_exit+0x3ec/0x450
[ 3346.366893] [] system_call_fastpath+0x16/0x1b

Write-back cache flushes during FIO sequential I/O benchmark

I believe that EnhanceIO is cleaning the cache due to low CPU load during the FIO benchmark. This makes the result of the benchmark useless since it is constantly writing to a slow HDD.

I am able to avoid a cache clean if I change the number of jobs to 4 (numjobs=4), which forks 4 FIO processes instead of 1 (see the variant job file after the test below). However, I would like to do a benchmark without heavy CPU activity.

Steps I followed:
Create an EIO cache using the following command line: sudo ./eio_cli create -d /dev/sdb -s /dev/ram0 -m wb -c test -b 4096

Run the following FIO test:
[global]
randrepeat=1
ioengine=libaio
bs=4k
ba=4k
size=2G
direct=1
gtod_reduce=1
norandommap
filename=/dev/sdb

[seq_write_64]
rw=write
iodepth=64
stonewall

[seq_read_64]
rw=read
iodepth=64
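
For reference, the workaround mentioned above only changes the job count. A sketch of the modified [global] section, with everything else kept exactly as in the job file above (where exactly the reporter placed numjobs isn't stated; putting it in [global] applies it to both jobs):

[global]
randrepeat=1
ioengine=libaio
bs=4k
ba=4k
size=2G
direct=1
gtod_reduce=1
norandommap
numjobs=4
filename=/dev/sdb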

Latest head driver doesn't compile on Ubuntu 13.04

I installed Ubuntu 13.04 a few weeks ago and successfully built the EnhanceIO modules. Today I noticed a few segfaults and decided to build the latest version. Now I'm getting the following on a fresh clone:

root@DB1:/usr/src/EnhanceIO/Driver/enhanceio# make
make -C /lib/modules/3.8.0-19-generic/build M=/usr/src/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic'
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_conf.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_ioctl.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_main.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_mem.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_policy.o
  CC [M]  /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.o
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function ‘eio_stats_open’:
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1804:2: error: implicit declaration of function ‘PDE_DATA’ [-Werror=implicit-function-declaration]
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1804:2: warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [enabled by default]
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio.h:54:0,
                 from /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:30:
include/linux/seq_file.h:125:5: note: expected ‘void *’ but argument is of type ‘int’
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function ‘eio_errors_open’:
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1837:2: warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [enabled by default]
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio.h:54:0,
                 from /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:30:
include/linux/seq_file.h:125:5: note: expected ‘void *’ but argument is of type ‘int’
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function ‘eio_iosize_hist_open’:
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1872:2: warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [enabled by default]
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio.h:54:0,
                 from /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:30:
include/linux/seq_file.h:125:5: note: expected ‘void *’ but argument is of type ‘int’
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function ‘eio_version_open’:
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1894:2: warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [enabled by default]
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio.h:54:0,
                 from /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:30:
include/linux/seq_file.h:125:5: note: expected ‘void *’ but argument is of type ‘int’
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c: In function ‘eio_config_open’:
/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:1931:2: warning: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [enabled by default]
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio.h:54:0,
                 from /usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.c:30:
include/linux/seq_file.h:125:5: note: expected ‘void *’ but argument is of type ‘int’
cc1: some warnings being treated as errors
make[2]: *** [/usr/src/EnhanceIO/Driver/enhanceio/eio_procfs.o] Error 1
make[1]: *** [_module_/usr/src/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic'
make: *** [modules] Error 2

Crash when writing to softraid device with cache

Update 20130209

It seems that this no longer happens; I may have been a bit too quick to jump to conclusions. When it happened, the RAID6 array was still being synced, and it has not happened again since the sync finished.


Writing to the disk consistently crashes the machine; some more info below:

Linux bb 3.7.6-030706-generic #201302040006 SMP Mon Feb 4 05:07:54 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux (Ubuntu 12.10)
4x2TB disks in RAID6 (ST2000DM001-1CH1)
2x64GB OCZ Agility 3 SSD's in RAID1
Driver downloaded & compiled 20130206

Command to create cache
eio_cli create -d /dev/md/raid6p1 -s /dev/md/cache1 -m wb -b 4096 -c cache

It does not crash instantly; I can write around 1 GB before this happens.

Don't see anything in the logs. Anyone else having this kind of problem?

Thanks

T

eio_cli udev rules possibly incorrect

After issue #19 was closed, I fetched and built, and am still having persistence issues. Here is my config:
Laptop with a 500 GB SATA HD, sda, partitioned like:

Number  Start   End     Size    File system     Name  Flags
 1      17.4kB  32.0MB  32.0MB                        bios_grub
 2      32.0MB  544MB   512MB   ext4               /boot
 3      544MB   401GB   400GB   ext4               /
 4      496GB   500GB   4096MB  linux-swap(v1) swap

32 GB Internal SSD, unformatted on /dev/sdb

I created a cache for /dev/sda3 using command:

sudo /sbin/eio_cli create -d /dev/sda3 -s /dev/sdb -m wt -c sda3_cache

And eio_cli created a udev file in /etc/udev/rules.d/94-enhanceio-sda3_cache.rules

With contents:


ACTION!="add|change", GOTO="EIO_EOF"
SUBSYSTEM!="block", GOTO="EIO_EOF"

ENV{ID_SERIAL}=="SATA_SSD_BA14072B040800328575", ENV{DEVTYPE}=="disk", GOTO="EIO_CACHE"

ENV{ID_SERIAL}=="dict_udev[ID_SERIAL]", ATTR{partition}=="3
", GOTO="EIO_SOURCE"

# If none of the rules above matched then it isn't an EnhanceIO device so ignore it.
GOTO="EIO_EOF"

# If we just found the cache device and the source already exists then we can setup
LABEL="EIO_CACHE"
        TEST!="/dev/enhanceio/sda3_cache", PROGRAM="/bin/mkdir -p /dev/enhanceio/sda3_cache"
        PROGRAM="/bin/sh -c 'echo $kernel > /dev/enhanceio/sda3_cache/.ssd_name'"

        TEST=="/dev/enhanceio/sda3_cache/.disk_name", GOTO="EIO_SETUP"
GOTO="EIO_EOF"

# If we just found the source device and the cache already exists then we can setup
LABEL="EIO_SOURCE"
        TEST!="/dev/enhanceio/sda3_cache", PROGRAM="/bin/mkdir -p /dev/enhanceio/sda3_cache"
        PROGRAM="/bin/sh -c 'echo $kernel > /dev/enhanceio/sda3_cache/.disk_name'"

        TEST=="/dev/enhanceio/sda3_cache/.ssd_name", GOTO="EIO_SETUP"
        PROGRAM="/bin/sh -c 'blockdev --setro $kernel'"  
GOTO="EIO_EOF"

LABEL="EIO_SETUP"
        PROGRAM="/bin/sh -c 'cat /dev/enhanceio/sda3_cache/.ssd_name'", ENV{ssd_name}="%c"
        PROGRAM="/bin/sh -c 'cat /dev/enhanceio/sda3_cache/.disk_name'", ENV{disk_name}="%c"

        TEST!="/proc/enhanceio/sda3_cache", RUN+="/sbin/eio_cli enable -d /dev/$env{disk_name} -s /dev/$env{ssd_name} -m wt -b 4096 -c sda3_cache"
LABEL="EIO_EOF"

But the rules don't seem to be firing on reboot, making the cache non-persistent.
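
For what it's worth, the EIO_SOURCE match in the generated rules above contains what looks like an unexpanded template value (dict_udev[ID_SERIAL]) plus a literal newline inside the partition match, so that rule can never match the source device, which would explain why nothing fires at boot. A sketch of what that generated line was presumably meant to look like (the source disk's real ID_SERIAL is a placeholder here, not taken from this report):

ENV{ID_SERIAL}=="<ID_SERIAL-of-the-source-disk>", ATTR{partition}=="3", GOTO="EIO_SOURCE"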

Data loss during a particular interval.

Hi,

We again faced an issue with flashcache. One of our servers was rebooted recently, and once it came back online we found data loss on all VEs. We had created a write-back flashcache device earlier and caching was running. We later disabled write-back, but we did not use flashcache destroy or dmsetup remove to remove the cache; we just rebooted the server to unload flashcache.

The removal was done 40 days ago, but yesterday, when we rebooted the server, we again found 140 GB of data missing. Is this due to flashcache? Our old SSD drive is still attached to the server.

Please note that the data loss covers only a particular interval, from Sep 1 to Sep 10.

How can we tell whether the SSD still holds that data? If so, how can we recover it?

Reads and writes cause kernel panic

[root@localhost ~]# dd if=/dev/sdb1 of=/dev/null bs=4k iflag=direct count=1
[ 2995.590905] BUG: unable to handle kernel paging request at ffffc90004c26fff
[ 2995.935535] IP: [] eio_post_io_callback+0x67/0xc40 [enhanceio]
[ 2996.301669] PGD 3f980c067 PUD 3f980d067 PMD 3f75d6067 PTE 0
[ 2996.578634] Oops: 0000 [#1]
[ 2996.721104] Modules linked in: enhanceio_fifo(O) enhanceio_lru(O) enhanceio(O) lockd sunrpc bnep bluetooth rfkill ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 ip6table_filter xt_state nf_conntrack ip6_tables coretemp kvm_intel kvm crc32c_intel ghash_clmulni_intel joydev iTCO_wdt iTCO_vendor_support lpc_ich mfd_core microcode hpwdt hpilo pcspkr serio_raw uinput
[ 2998.447172] CPU 0
[ 2998.537776] Pid: 58, comm: kworker/u:2 Tainted: G O 3.7.1 #5 HP ProLiant ML110 G7
[ 2998.961464] RIP: 0010:[] [] eio_post_io_callback+0x67/0xc40 [enhanceio]
[ 2999.448626] RSP: 0018:ffff8803f77b3da8 EFLAGS: 00010246
[ 2999.711940] RAX: ffffc90004c27000 RBX: ffff8803e1ee6000 RCX: ffffffff81c3cd20
[ 3000.065803] RDX: 0000000000000001 RSI: ffff8803f83100b0 RDI: ffff8803f83100b0
[ 3000.419025] RBP: ffff8803f77b3de8 R08: ffff8803f83100b8 R09: ffffffff81c3ce48
[ 3000.773641] R10: ffffffff81c3ce50 R11: 0000000000000001 R12: ffff8803f83100b0
[ 3001.127909] R13: ffffffffffffffff R14: 0000000000000000 R15: ffff8803f8aac240
[ 3001.482113] FS: 0000000000000000(0000) GS:ffffffff81a28000(0000) knlGS:0000000000000000
[ 3001.883359] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3002.168649] CR2: ffffc90004c26fff CR3: 00000003f881b000 CR4: 00000000000407f0
[ 3002.523217] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3002.876625] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 3003.231149] Process kworker/u:2 (pid: 58, threadinfo ffff8803f77b2000, task ffff8803f76ed940)
[ 3003.654165] Stack:
[ 3003.753475] ffff8803f617b740 ffff8803f83100a0 0000000000000000 ffff8803f774c400
[ 3004.117129] ffff8803e1e65000 0000000000000000 ffffffff81c3cf28 ffff8803f83100b0
[ 3004.483730] ffff8803f77b3e58 ffffffff81071155 ffff8803f77b3fd8 ffff8803f77b3fd8
[ 3004.851174] Call Trace:
[ 3004.971898] [] process_one_work+0x135/0x480
[ 3005.262104] [] ? eio_disk_io+0x240/0x240 [enhanceio]
[ 3005.590450] [] worker_thread+0x12e/0x3d0
[ 3005.866605] [] ? busy_worker_rebind_fn+0xd0/0xd0
[ 3006.177583] [] kthread+0xbf/0xd0
[ 3006.419232] [] ? flush_kthread_worker+0x80/0x80
[ 3006.725812] [] ret_from_fork+0x7a/0xb0
[ 3006.993470] [] ? flush_kthread_worker+0x80/0x80
[ 3007.300062] Code: 4d 85 ff 0f 84 a9 08 00 00 49 83 7f 08 00 0f 84 1f 09 00 00 45 85 f6 0f 85 f6 08 00 00 8b 83 2c 01 00 00 a8 20 75 45 48 8b 43 48 <42> 0f b6 74 a8 03 31 c0 48 c7 c7 9b 9e 21 a0 e8 ac da 3c e1 41

better LRU read-only (avoid unnecessary writes to cache)

IMHO, LRU/read-only mode is currently suboptimal because EnhanceIO saves every read to the caching device. There are many workloads where this strategy falls short: for example, when an integrity-checking tool reads all files, or a file-system defragmentation is running, there are many worthless writes to the caching device -- worthless because there is little chance of a later cache hit.

Even worse, when the file system is being read heavily, EnhanceIO can make access slower, because the caching device is under constant write stress (busy time above 95%), which slows down reads even on the rare cache hit.

In general this is not only a sequential-IO problem: a large amount of random reads creates exactly the same behaviour, not to mention the case where the file system is heavily fragmented.

I think the best approach would be to improve the caching strategy: identify potential cache hits (i.e. blocks read more than once) and save only such blocks to the cache. If implemented, this would save a lot of IO and reduce SSD wear, as well as dramatically improve the cache-hit ratio, at least in read-only/LRU mode.
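
To make the idea concrete, here is a minimal user-space model of such a "cache on second read" admission policy. This is only a sketch of the proposal, not EnhanceIO code (the real change would live in the kernel driver); the time window and data structure are arbitrary choices:

import time

# Block numbers seen exactly once, mapped to the time of that first read.
seen_once = {}

def admit_to_cache(block, window=300.0, now=None):
    """Admit a block to the cache only on its second read within `window` seconds."""
    now = time.time() if now is None else now
    first = seen_once.pop(block, None)
    if first is not None and now - first <= window:
        return True          # re-read soon enough: likely to be hit again, cache it
    seen_once[block] = now   # remember the first (or stale) read, do not cache yet
    return False

# Example: a single scan over a block is never cached, a re-read is.
assert admit_to_cache(42, now=0.0) is False
assert admit_to_cache(42, now=10.0) is True

A sequential scan (fsck, defrag, backup) touches each block once, so under this policy it would generate no writes to the SSD at all, while genuinely hot blocks still get promoted on their second access.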

Is it possible to use a fusionio iodrive2 as a cache device?

Hello.

Is it possible to use a fusionio iodrive2 as a cache device? Question 2: I was able to compile the enhanceio driver for kernel 3.4. That surprised me, because the README lists kernel 3.7 as the minimum.

thanks and best regards
t.

Why was the sequential IO bypass feature removed from EnhanceIO?

I was wondering why this feature (sequential IO detection and bypassing) was removed from EnhanceIO. I believe sequential and large IO requests only pollute the cache because they have negligible locality, so bypassing large sequential IO requests can improve the hit ratio. The description in the README about the limited use case does not make sense to me. I would appreciate it if you could elaborate.

Data corruption when caching md device that hosts LVM partitions

Hi guys,

I have been trying to use an SSD (Samsung 840 Pro) as a caching device on my system, and had used enhanceio to cache /dev/md0 and /dev/md1. It seems that this was a bad idea, even in read mode, since it appeared to corrupt the underlying ext4 filesystems. I'm not sure whether it was actual corruption, or whether fsck only thought so because of bad reads (i.e. ext4 on LVM -> LVM -> enhanceio -> md0). In any case, I have since disabled enhanceio caching in that manner, and might create a cache for each ext4 filesystem instead to see whether that causes less corruption.

My partitioning is as follows:

  1. /dev/md0 (raid10) as / ext4
  2. /dev/md1 (raid10) as LVM Volume Group

Within the LVM volume group, various partitions, such as /usr, /var, /opt, etc.

  1. EnhanceIO read-only configured against /dev/md0 and /dev/md1, with appropriately sized partitions on the SSD.

Even though using EnhanceIO gave a performance boost on my desktop system, the corruption worried me: it might start destroying important files rather than just dpkg-managed files.

Any ideas why this configuration would cause corruption ?

Cheers,

Damien

Compiling built-in EnhanceIO with 3.8.3 kernel fails

If I compile EnhanceIO into a 3.8.3 kernel as built-in (not as a module), I get the following result:

drivers/built-in.o: In function `eio_exit':
eio_conf.c:(.text+0xe221f): undefined reference to `scsi_bus_type'
drivers/built-in.o: In function `eio_notify_ssd_rm':
eio_conf.c:(.text+0xe23ea): undefined reference to `scsi_is_sdev_device'
drivers/built-in.o: In function `eio_init':
eio_conf.c:(.init.text+0x966b): undefined reference to `scsi_bus_type'

Could that be fixed?
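
One plausible explanation (an assumption, not verified against the tree): scsi_bus_type and scsi_is_sdev_device live in the SCSI core, so a built-in EnhanceIO needs CONFIG_SCSI built in as well; if the Kconfig entry does not express that, the vmlinux link fails exactly like this whenever SCSI is =m or =n. A sketch of the kind of constraint that would prevent it (the symbol name and help text here are illustrative, not taken from the repository):

config ENHANCEIO
        tristate "EnhanceIO SSD caching driver"
        depends on SCSI
        help
          Caches a slow block device on an SSD. The driver uses the SCSI
          core (scsi_bus_type, scsi_is_sdev_device) for its SSD hot-removal
          notification, so it cannot be built in without it.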

make modules returns error for kernel 3.7.x

I installed kernels 3.7.5 (elrepo) and 3.7.4 (Oracle) including kernel-devel for both versions on a Centos 6.3 guest VM.

After applying the provided EnhanceIO patch successfully,
make modules identifies the EnhanceIO module as "new", but when the "enable" question is answered with either "y" or "m", make terminates with errors.

For 3.7.5-1.el6.elrepo.x86_64:
[root@centos63 3.7.5-1.el6.elrepo.x86_64]# make modules
make[1]: Nothing to be done for `all'.
make[1]: *** No rule to make target `arch/x86/tools/relocs.c', needed by `arch/x86/tools/relocs'. Stop.
make: *** [archscripts] Error 2

For 3.7.4-3.7.y.20130122.ol6.x86_64:
[root@centos63 3.7.4-3.7.y.20130122.ol6.x86_64]# make modules
make[1]: *** No rule to make target `/usr/src/kernels/3.7.4-3.7.y.20130122.ol6.x86_64/arch/x86/syscalls/syscall_32.tbl', needed by `arch/x86/syscalls/../include/generated/uapi/asm/unistd_32.h'. Stop.
make: *** [archheaders] Error 2

So, the errors differ, but either way the modules don't compile.

Help with the above is appreciated, as otherwise EnhanceIO cannot be used.

Hi,

I was able to compile EnhanceIO successfully by downloading the SRPM for the current kernel and then:

a) install the srpm
b) unzip the embedded kernel source
c) add the enhanceio modules to the kernel source as described
d) zip the kernel source again
e) run rpmbuild to get the rpm for the current kernel
f) install the kernel from the rpm with enhanceio modules available and running when started

However, this is a pretty time-consuming process given how often new kernels are released.
So, a more efficient solution is needed.
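
For reference, other reports in this tracker build the driver out of tree against the installed kernel-devel headers instead of patching and rebuilding the kernel source, which avoids the SRPM round-trip entirely. A sketch of that approach (the checkout path is taken from the other reports; adjust it to your tree):

cd /usr/src/EnhanceIO/Driver/enhanceio
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install
depmod -a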

cache create fails on 32-bit OS

OS: CentOS Linux release 6.0 (Final) 32 bit

[root@localhost ~]# cat /etc/redhat-release

CentOS Linux release 6.0 (Final)

Kernel: 3.7.10

[root@localhost ~]# uname -r

3.7.10

Steps:

When trying to create the cache, I get the following error:

[root@localhost ~]# eio_cli create -d /dev/sdc -s /dev/sdb -p fifo -m wt -b 4096 -c CACHE

Traceback (most recent call last):
  File "/sbin/eio_cli", line 457, in <module>
    main()
  File "/sbin/eio_cli", line 390, in main
    cache.create()
  File "/sbin/eio_cli", line 271, in create
    src_sz.get_device_size_info(self.src_name)
  File "/sbin/eio_cli", line 327, in get_device_size_info
    buf = ioctl(fd, IOC_BLKGETSIZE64, buf)
IOError: [Errno 22] Invalid argument
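
The EINVAL here is consistent with the tool passing a BLKGETSIZE64 request number computed for 64-bit builds (an assumption; the error could also come from elsewhere). BLKGETSIZE64 is defined as _IOR(0x12, 114, size_t), so its numeric value encodes sizeof(size_t) and differs between 32-bit and 64-bit systems, while the kernel always fills in a 64-bit byte count. A sketch of a width-aware query (the function name and device path are illustrative, not from eio_cli):

import fcntl
import struct

def blkgetsize64(path):
    """Return the size of a block device in bytes via BLKGETSIZE64."""
    # BLKGETSIZE64 = _IOR(0x12, 114, size_t): direction | size | type | nr.
    size_t_len = struct.calcsize("P")   # 4 on 32-bit, 8 on 64-bit builds
    request = (2 << 30) | (size_t_len << 16) | (0x12 << 8) | 114
    with open(path, "rb") as dev:
        # The kernel writes a u64 regardless of the request encoding.
        buf = fcntl.ioctl(dev.fileno(), request, b"\0" * 8)
    return struct.unpack("Q", buf)[0]

print(blkgetsize64("/dev/sdb"))   # e.g. the SSD device from the report above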
