
Comments (72)

geerlingguy avatar geerlingguy commented on May 6, 2024 9
# Install dependencies
sudo apt install -y git bc bison flex libssl-dev make libncurses5-dev

# Clone source
git clone --depth=1 https://github.com/raspberrypi/linux

# Apply default configuration
cd linux
export KERNEL=kernel7l # use kernel8 for 64-bit, or kernel7l for 32-bit
make bcm2711_defconfig

# Customize the .config further with menuconfig
make menuconfig
# Enable the following:
# Device Drivers:
#   -> Serial ATA and Parallel ATA drivers (libata)
#     -> AHCI SATA support
#     -> Marvell SATA support
#
# Alternatively add the following in .config manually:
# CONFIG_ATA=m
# CONFIG_ATA_VERBOSE_ERROR=y
# CONFIG_SATA_PMP=y
# CONFIG_SATA_AHCI=m
# CONFIG_SATA_MOBILE_LPM_POLICY=0
# CONFIG_ATA_SFF=y
# CONFIG_ATA_BMDMA=y
# CONFIG_SATA_MV=m

nano .config
# (edit CONFIG_LOCALVERSION and add a suffix that helps you identify your build)

# Build the kernel and copy everything into place
make -j4 zImage modules dtbs # 'Image' on 64-bit
sudo make modules_install
sudo cp arch/arm/boot/dts/*.dtb /boot/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
sudo cp arch/arm/boot/zImage /boot/$KERNEL.img
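# After rebooting into the new kernel, a quick sanity check (a sketch; assumes
# the SATA drivers were built as modules per the config above, and that you set
# a custom CONFIG_LOCALVERSION suffix):
uname -r                        # should show your custom suffix
sudo modprobe ahci              # load the AHCI driver if it isn't loaded already
sudo modprobe sata_mv
lsmod | grep -E 'ahci|sata_mv'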

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 7

Performance testing of the Kingston SA400S37/240G drive:

Test              Result
hdparm            314.79 MB/s
dd                189.00 MB/s
random 4K read    22.98 MB/s
random 4K write   55.02 MB/s

Compare that to the same drive over USB 3.0 using a USB to SATA adapter:

Test              Result
hdparm            296.71 MB/s
dd                149.00 MB/s
random 4K read    20.59 MB/s
random 4K write   28.54 MB/s

So not a night-and-day difference like with the NVMe drives, but definitely and noticeably faster. I'm now waiting on another SSD and a power splitter to arrive so I can test multiple SATA SSDs on this card.
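For reference, the results above map roughly onto commands like these (a sketch, not the exact benchmark script; it assumes the drive is /dev/sda, mounted at /mnt/sata-sda, and that iozone is installed):

# Sequential read (buffered) from the raw device.
sudo hdparm -t /dev/sda

# Sequential write of a 1 GB file, flushed to disk.
sudo dd if=/dev/zero of=/mnt/sata-sda/test.bin bs=1M count=1024 conv=fsync

# Random 4K read/write on a 100 MB test file.
sudo iozone -e -I -s 100M -r 4k -i 0 -i 2 -f /mnt/sata-sda/iozone.tmp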

And someone just mentioned they have some RAID cards they'd be willing to send me. Might have to pony up for a bunch of hard drives and have my desk turn into some sort of frankemonster NAS-of-many-drives soon!

from raspberry-pi-pcie-devices.

mo-g avatar mo-g commented on May 6, 2024 5

I'm curious about other OSes. Obviously, Raspbian is a good basis - but as I recall, Fedora's 64-bit Pi build uses its own custom kernel. I'd be interested in seeing what they've "left in" from the standard kernel config.

I'm looking forward to picking one of these up in a month or so when they become available to the public, then I'll give it a try!

Side note for your list page - could you include PCI IDs as well as just the brand names of the cards? It'll help avoid confusion where cards have multiple revisions, as well as help non-US users identify comparable cards in their own markets.

Great work in the meantime! 👍

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 5

My desk is becoming a war zone:

IMG_2720

Plan is to set up a RAID (probably either 0 if I feel more YOLO-y or 1/10 if I'm more stable-minded) with either 2 or 4 drives, using mdadm.

I was having trouble with the SAS card, not sure if the cards are bad or they just don't work at all with the Pi :(

from raspberry-pi-pcie-devices.

BeauSlim avatar BeauSlim commented on May 6, 2024 2

Since Google might land you here, like it did me on a search for "cm4 ubuntu sata": the latest development version of Ubuntu, Impish Indri, has SATA support. Simply run "sudo apt install linux-modules-extra-raspi" and then "modprobe ahci", or reboot.
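Rendered as commands (package name as given above; assumes Impish or later):

sudo apt install -y linux-modules-extra-raspi
sudo modprobe ahci    # or simply reboot
lsblk                 # attached SATA drives should now show up as /dev/sdX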

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 1

That would be an interesting thing to test, though it'll have to wait a bit as I'm trying to get through some other cards and might also test 2.5 Gbps or 5 Gbps networking if I am able to!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 1

All four drives in RAID0:

Test              Result
hdparm            327.32 MB/s
dd                155.00 MB/s
random 4K read    4.46 MB/s
random 4K write   4.71 MB/s

Note: The card is getting HOT:

IMG_0004

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 1

It bears repeating:

4lwbjx

I'm reminded of https://www.youtube.com/watch?v=gSrnXgAmK8k

from raspberry-pi-pcie-devices.

markbirss avatar markbirss commented on May 6, 2024 1

@geerlingguy

You should seriously look at using ZFS raidz over mdadm raid

"calculator"
https://calomel.org/zfs_raid_speed_capacity.html

Official OpenZFS guide now including installation on Raspberry Pi

https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS%20for%20Raspberry%20Pi.html

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024 1

Some benchmarks for 4 Kingston SSDs (2x 120 GB, 2x 240 GB) below:

RAID 0

Test              Result
hdparm            296.21 MB/s
dd                169.67 MB/s
random 4K read    28.33 MB/s
random 4K write   61.85 MB/s

RAID 10

Test              Result
hdparm            277.14 MB/s
dd                116.33 MB/s
random 4K read    26.61 MB/s
random 4K write   41.82 MB/s

Note: In RAID 10, I ended up getting a total array size of 240 GB, effectively wasting 120 GB of space that could've been used had I gone with four 240 GB drives. In a real-world NAS setup, I would likely go with 1 or 2 TB drives (heck, maybe even more!), and especially in RAID 1 or 10, always use the same-sized (and ideally exact same model) drives.

Note 2: While monitoring with atop and sudo mdadm --detail /dev/md0, I noticed the four drives, while doing their initial sync, were each getting almost identical write speeds of ~100.4 MB/sec, with ~4ms latency. That equates to around 396.8 MB/sec total bus speed... or almost exactly 3.2 Gbps. So the maximum throughput of any RAID array is definitely going to be limited by the Pi's PCIe 1x lane (just like networking).

Note 3: The resync of the four SSDs is WAAAAAY faster than the HDDs. It helps that they're also spanning a smaller volume (224 GB instead of 930 GB), but the raw IO for the sync I believe is 3-4x faster.

Note 4: The IO Crest card is also WAAAAY toastier, hitting up to 121°C on parts of the PCB (without active ventilation... I'm rectifying that situation now). Yowza! With a fan, it stayed under 90°C (still hot though).

from raspberry-pi-pcie-devices.

l0gical avatar l0gical commented on May 6, 2024 1

Save yourself the time and effort, the latest Kernel includes SATA straight out of the box:

#176 (comment)
Read near the bottom

from raspberry-pi-pcie-devices.

SorX14 avatar SorX14 commented on May 6, 2024 1

Got my hands on a Pimoroni NVMe base and used a M.2 to PCIe adapter.

Please excuse the sketchy setup - just wanted to quickly test.

image
$ lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0000:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001

And connected 4 drives (these are old HDDs that have various partitions on):

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    1 698.6G  0 disk
└─sda1        8:1    1 698.6G  0 part /mnt/sda
sdb           8:16   1 698.6G  0 disk
└─sdb1        8:17   1 698.6G  0 part /mnt/sdb
sdc           8:32   1 111.8G  0 disk
├─sdc1        8:33   1   512M  0 part
├─sdc2        8:34   1   513M  0 part
├─sdc3        8:35   1     1K  0 part
└─sdc5        8:37   1 110.8G  0 part /mnt/sdc
sdd           8:48   1 465.8G  0 disk
└─sdd1        8:49   1 465.8G  0 part
mmcblk0     179:0    0  58.9G  0 disk
├─mmcblk0p1 179:1    0   512M  0 part /boot/firmware
└─mmcblk0p2 179:2    0  58.4G  0 part /

I successfully mounted all of them except sdd, which seems to be a dead HDD (although looking at the pic in retrospect, the SATA cable doesn't look to be fully seated 🤷):

$ dmesg
...
[  778.260800] ata4.00: exception Emask 0x0 SAct 0x20 SErr 0x0 action 0x0
[  778.260807] ata4.00: irq_stat 0x40000008
[  778.260811] ata4.00: failed command: READ FPDMA QUEUED
[  778.260813] ata4.00: cmd 60/20:28:00:08:00/00:00:00:00:00/40 tag 5 ncq dma 16384 in
                        res 51/40:20:00:08:00/00:00:00:00:00/40 Emask 0x409 (media error) <F>
[  778.260822] ata4.00: status: { DRDY ERR }
[  778.260825] ata4.00: error: { UNC }
[  778.263264] ata4.00: configured for UDMA/133
[  778.263278] sd 3:0:0:0: [sdd] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=3s
[  778.263283] sd 3:0:0:0: [sdd] tag#5 Sense Key : 0x3 [current]
[  778.263286] sd 3:0:0:0: [sdd] tag#5 ASC=0x11 ASCQ=0x4
[  778.263290] sd 3:0:0:0: [sdd] tag#5 CDB: opcode=0x28 28 00 00 00 08 00 00 00 20 00
[  778.263294] I/O error, dev sdd, sector 2048 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 2
[  778.263299] Buffer I/O error on dev sdd, logical block 128, async page read
[  778.263303] Buffer I/O error on dev sdd, logical block 129, async page read
[  778.263347] ata4: EH complete
[  781.644815] ata4.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
[  781.644822] ata4.00: irq_stat 0x40000008
...

Running pibenchmark yields the following in hardware identification:

...
Drives:
  Local Storage: total: 1.99 TiB used: 447.45 GiB (22.0%)
  ID-1: /dev/mmcblk0 model: USD00 size: 58.94 GiB
  ID-2: /dev/sda vendor: Western Digital model: WD7500BPKT-80PK4T0 size: 698.64 GiB
  ID-3: /dev/sdb vendor: Western Digital model: WD7500BPVT-22HXZT1 size: 698.64 GiB
  ID-4: /dev/sdc vendor: Samsung model: SSD 850 EVO 120GB size: 111.79 GiB
  ID-5: /dev/sdd vendor: Hitachi model: HTS725050A9A362 size: 465.76 GiB
  Message: No optical or floppy data found.
...

sda results: https://pibenchmarks.com/benchmark/76956/
sdb results: https://pibenchmarks.com/benchmark/76955/
sdc results: https://pibenchmarks.com/benchmark/76957/
sdd results: DNQ

I then ran two benchmarks in parallel on sda and sdc which both completed (didn't submit results).

All this to say that I think this card now works as expected. I was using the default PCIe link speed.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

I'm going to try out the IO Crest 4-port SATA adapter.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

It has arrived!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

And... I just realized I have no SATA power supply cable, just the data cable. So I'll have to wait for one of those to come in before I can actually test one of my SATA drives.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

First light is good:

$ lspci

01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11) (prog-if 01 [AHCI 1.0])
	Subsystem: Marvell Technology Group Ltd. Device 9215
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 0
	Region 0: I/O ports at 0000
	Region 1: I/O ports at 0000
	Region 2: I/O ports at 0000
	Region 3: I/O ports at 0000
	Region 4: I/O ports at 0000
	Region 5: Memory at 600040000 (32-bit, non-prefetchable) [size=2K]
	Expansion ROM at 600000000 [size=256K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
		Address: 00000000  Data: 0000
	Capabilities: [70] Express (v2) Legacy Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [e0] SATA HBA v0.0 BAR4 Offset=00000004
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Though dmesg shows that it's hitting BAR default address space limits again:

[    0.925795] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[    0.925818] brcm-pcie fd500000.pcie:   No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[    0.925884] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x0603ffffff -> 0x00f8000000
[    0.925948] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00ffffffff -> 0x0100000000
[    0.953526] brcm-pcie fd500000.pcie: link up, 5 GT/s x1 (SSC)
[    0.953827] brcm-pcie fd500000.pcie: PCI host bridge to bus 0000:00
[    0.953844] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.953866] pci_bus 0000:00: root bus resource [mem 0x600000000-0x603ffffff] (bus address [0xf8000000-0xfbffffff])
[    0.953933] pci 0000:00:00.0: [14e4:2711] type 01 class 0x060400
[    0.954172] pci 0000:00:00.0: PME# supported from D0 D3hot
[    0.957560] PCI: bus0: Fast back to back transfers disabled
[    0.957582] pci 0000:00:00.0: bridge configuration invalid ([bus ff-ff]), reconfiguring
[    0.957802] pci 0000:01:00.0: [1b4b:9215] type 00 class 0x010601
[    0.957874] pci 0000:01:00.0: reg 0x10: [io  0x8000-0x8007]
[    0.957911] pci 0000:01:00.0: reg 0x14: [io  0x8040-0x8043]
[    0.957947] pci 0000:01:00.0: reg 0x18: [io  0x8100-0x8107]
[    0.957984] pci 0000:01:00.0: reg 0x1c: [io  0x8140-0x8143]
[    0.958021] pci 0000:01:00.0: reg 0x20: [io  0x800000-0x80001f]
[    0.958058] pci 0000:01:00.0: reg 0x24: [mem 0x00900000-0x009007ff]
[    0.958095] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[    0.958262] pci 0000:01:00.0: PME# supported from D3hot
[    0.961586] PCI: bus1: Fast back to back transfers disabled
[    0.961605] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.961674] pci 0000:00:00.0: BAR 8: assigned [mem 0x600000000-0x6000fffff]
[    0.961698] pci 0000:01:00.0: BAR 6: assigned [mem 0x600000000-0x60003ffff pref]
[    0.961722] pci 0000:01:00.0: BAR 5: assigned [mem 0x600040000-0x6000407ff]
[    0.961744] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0020]
[    0.961759] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0020]
[    0.961774] pci 0000:01:00.0: BAR 0: no space for [io  size 0x0008]
[    0.961788] pci 0000:01:00.0: BAR 0: failed to assign [io  size 0x0008]
[    0.961803] pci 0000:01:00.0: BAR 2: no space for [io  size 0x0008]
[    0.961817] pci 0000:01:00.0: BAR 2: failed to assign [io  size 0x0008]
[    0.961831] pci 0000:01:00.0: BAR 1: no space for [io  size 0x0004]
[    0.961845] pci 0000:01:00.0: BAR 1: failed to assign [io  size 0x0004]
[    0.961860] pci 0000:01:00.0: BAR 3: no space for [io  size 0x0004]
[    0.961873] pci 0000:01:00.0: BAR 3: failed to assign [io  size 0x0004]
[    0.961891] pci 0000:00:00.0: PCI bridge to [bus 01]
[    0.961914] pci 0000:00:00.0:   bridge window [mem 0x600000000-0x6000fffff]
[    0.962217] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    0.962439] pcieport 0000:00:00.0: PME: Signaling with IRQ 55
[    0.962813] pcieport 0000:00:00.0: AER: enabled with IRQ 55

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

I just increased the BAR allocation following the directions in this Gist, but when I rebooted (without the card in), I got:

[    0.926161] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[    0.926184] brcm-pcie fd500000.pcie:   No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[    0.926247] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x063fffffff -> 0x00c0000000
[    0.926312] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00ffffffff -> 0x0100000000
[    1.521386] brcm-pcie fd500000.pcie: link down

Powering off completely, then booting again, it works. So note to self: if you get a link down, try a hard power reset instead of reboot.
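The Gist itself isn't reproduced here, but the general shape of the change (a sketch, not the exact values) is to decompile the CM4 device tree, enlarge the PCIe window in the ranges property under /scb/pcie@7d500000, and recompile:

sudo apt install -y device-tree-compiler
cd /boot
sudo dtc -I dtb -O dts bcm2711-rpi-cm4.dtb -o bcm2711-rpi-cm4.dts
sudo nano bcm2711-rpi-cm4.dts   # edit 'ranges' under /scb/pcie@7d500000
sudo dtc -I dts -O dtb bcm2711-rpi-cm4.dts -o bcm2711-rpi-cm4.dtb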

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Ah... looking closer, those 'failed to assign' errors are for IO BARs, which are unsupported on the Pi.

So... I posted in the BAR space thread on Pi Forums asking 6by9 if that user has had the same logs and if they can be safely ignored. Still waiting on a way to power my drive so I can do an end-to-end test :)

from raspberry-pi-pcie-devices.

kitlith avatar kitlith commented on May 6, 2024

Something else that may be interesting is whether you can get a SAS adapter/RAID card working. I know I was looking into SBCs with PCIe a while back for the purpose of building a low-power/low-heat host for some SAS drives I have. (I ended up just throwing them in a computer and not running it 24/7.)

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Without the kernel modules enabled, lsblk shows no device:

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
mmcblk0     179:0    0 29.8G  0 disk 
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 29.6G  0 part /

Going to try adding those modules and see what happens!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Yahoo, it worked!

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
├─sda1        8:1    1   256M  0 part /media/pi/boot
└─sda2        8:2    1 223.3G  0 part /media/pi/rootfs
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Repartitioning the drive:

sudo fdisk /dev/sda
d 1    # delete partition 1
d 2    # delete partition 2
n    # create new partition
p    # primary (default)
1    # partition 1 (default)
2048    # First sector (default)
468862127    # Last sector (default)
w    # write new partition table

Got the following:

The partition table has been altered.
Failed to remove partition 1 from system: Device or resource busy
Failed to remove partition 2 from system: Device or resource busy
Failed to add partition 1 to system: Device or resource busy

The kernel still uses the old partitions. The new table will be used at the next reboot. 
Syncing disks.

Rebooted the Pi, then:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
└─sda1        8:1    1 223.6G  0 part 
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /

To format the device, use mkfs:

$ sudo mkfs.ext4 /dev/sda1
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done                            
Creating filesystem with 58607510 4k blocks and 14655488 inodes
Filesystem UUID: dd4fa95d-edbf-4696-a9e1-ddf1f17da580
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done 

Then mount it somewhere:

$ sudo mkdir /mnt/sata-sda
$ sudo mount /dev/sda1 /mnt/sata-sda
$ mount
...
/dev/sda1 on /mnt/sata-sda type ext4 (rw,relatime)

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sda1       220G   61M  208G   1% /mnt/sata-sda
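To make the mount persistent across reboots, an entry like this in /etc/fstab works (a sketch, using the filesystem UUID printed by mkfs above):

UUID=dd4fa95d-edbf-4696-a9e1-ddf1f17da580 /mnt/sata-sda ext4 defaults,noatime 0 2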

from raspberry-pi-pcie-devices.

mi-hol avatar mi-hol commented on May 6, 2024

And someone just mentioned they have some RAID cards they'd be willing to send me. Might have to pony up for a bunch of hard drives and have my desk turn into some sort of frankemonster NAS-of-many-drives soon!

It would be great to test a RAID card based on Marvell 88SE9128 chipset, because it is used by many suppliers

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Trying again today (but cross-compiling this time, since it's oh-so-much faster) now that I have two drives and the appropriate power adapters. I'm planning on just testing a file copy between the drives for now; I'll get into other tests later.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Hmm... putting this on pause. My cross compilation is not dropping in the AHCI module for some reason, probably a bad .config :/

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Also, the adapter gets hot after prolonged use.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

(For anyone interested in testing on an LSI/IBM SAS card, check out #18)

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Testing also with an NVMe using the IO Crest PCIe switch:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
sdb           8:16   1 223.6G  0 disk 
└─sdb1        8:17   1 223.6G  0 part 
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /
nvme0n1     259:0    0 232.9G  0 disk

I'll post some benchmarks copying files between one of the SSDs and the NVMe; will be interesting to see how many MB/sec they can pump through the switch.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

For a direct file copy from one drive to another:

# fallocate -l 10G /mnt/nvme/test.img
# pv /mnt/nvme/test.img > /mnt/sata-sda/test.img

I got an average of 190 MiB/sec, or about 1.52 Gbps. So two-way, that's 3.04 Gbps (under the 3.2 Gbps I was hoping for, but that's maybe down to PCIe switching?).

It looks like CPU goes to 99% as SDA takes more than 50% of the CPU—see atop results during a copy:

Screen Shot 2020-11-10 at 9 57 52 AM

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Also comparing raw disk speeds through the PCIe switch:

Kingston SSD

Test              Result
hdparm            364.23 MB/s
dd                148.00 MB/s
random 4K read    28.89 MB/s
random 4K write   58.01 MB/s

Samsung EVO 970 NVMe

Test              Result
hdparm            363.81 MB/s
dd                166.00 MB/s
random 4K read    46.50 MB/s
random 4K write   75.41 MB/s

These were on 64-bit Pi OS... so the numbers are a little higher than the 32-bit Pi OS results from earlier in the thread. But the good news is the PCIe switching seems to not cause any major performance penalty.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Software RAID0 testing using mdadm:

# Install mdadm.
sudo apt install -y mdadm

# Create a RAID0 array using sda1 and sdb1.
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sd[a-b]1

# Create a mount point for the new RAID device.
sudo mkdir /mnt/raid0

# Format the RAID device.
sudo mkfs.ext4 /dev/md0

# Mount the RAID device.
sudo mount /dev/md0 /mnt/raid0

Benchmarking the device:

Test              Result
hdparm            293.35 MB/s
dd                168.00 MB/s
random 4K read    24.96 MB/s
random 4K write   52.26 MB/s

And during the 4K tests in iozone, I can see the sda/sdb devices are basically getting the same bottlenecks, except with a tiny bit of extra overhead from software-based RAID control:

Screen Shot 2020-11-10 at 10 18 00 AM

Then to stop and remove the RAID0 array:

sudo umount /mnt/raid0
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sd[a-b]1
sudo mdadm --remove /dev/md0

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Software RAID1 (mirrored) testing using mdadm:

# Install mdadm.
sudo apt install -y mdadm

# Create a RAID1 array using sda1 and sdb1.
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sd[a-b]1

# Create a mount point for the new RAID device.
sudo mkdir /mnt/raid1

# Format the RAID device.
sudo mkfs.ext4 /dev/md0

# Mount the RAID device.
sudo mount /dev/md0 /mnt/raid1

And if you want the RAID device to be persistent:

# Add the following line to the bottom of /etc/fstab:
/dev/md0 /mnt/raid1/ ext4 defaults,noatime 0 1

Configure mdadm to start the RAID at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

And check on the health of the array:

sudo mdadm --detail /dev/md0
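On Debian-based systems it may also be worth rebuilding the initramfs so the array is assembled early during boot (a sketch; only strictly needed if you want the array available before the fstab mounts happen):

sudo update-initramfs -u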

Thanks to The MagPi for their article Build a Raspberry Pi NAS.

Benchmarking the device:

Test              Result
hdparm            304.63 MB/s
dd                114.00 MB/s
random 4K read    4.83 MB/s
random 4K write   8.43 MB/s

While it was doing the 4K testing on the software RAID1 array, IO ran a bit slower (both sda/sdb were ~100% the whole time or thereabouts):

Screen Shot 2020-11-10 at 10 27 36 AM

The md0_resync process seemed to be the main culprit. Mirroring drives in software RAID seems to be a fairly heavyweight operation when you're writing tons of small files. For large files it didn't seem to be nearly as much of a burden. I ran iozone with a 1024K block size and got 253.63 MB/sec read, 125.70 MB/sec write.

Even at a 128K block size, I got over 100 MB/sec read and write. It really started to slow down around 8K and even 16K block sizes (to ~20 MB/sec), before falling apart at 4K (4-8 MB/sec, as slow as a microSD card!).

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Hmm... I'm seeing md0_resync continue to run for a long while after the test. So how are they getting out of sync in the first place? Maybe it is trying to sync data that was already on the drive? I thought I had reformatted them though...

Also seeing a lot in dmesg:

[ 3390.917579] cpu cpu0: dev_pm_opp_set_rate: failed to find current OPP for freq 18446744073709551604 (-34)
[ 3390.917596] raspberrypi-clk soc:firmware:clocks: Failed to change fw-clk-arm frequency: -12

And it looks like the resync is almost complete. I'll run the benchmark again afterwards.

sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Nov 10 16:25:37 2020
        Raid Level : raid1
        Array Size : 234297920 (223.44 GiB 239.92 GB)
     Used Dev Size : 234297920 (223.44 GiB 239.92 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Nov 10 16:45:10 2020
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

     Resync Status : 95% complete

              Name : raspberrypi:0  (local to host raspberrypi)
              UUID : 19fd4119:91925607:9b4f77f9:56c91824
            Events : 494

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
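(For a more compact view of the resync progress, /proc/mdstat also shows a progress bar and an estimated finish time; a quick sketch:)

watch -n 5 cat /proc/mdstat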

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

It looks like the resync was the major issue—now that it's complete, numbers are looking much better:

Test              Result
hdparm            351.38 MB/s
dd                114.00 MB/s
random 4K read    27.95 MB/s
random 4K write   43.21 MB/s

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

What I'd like to test with my 4 spinning disks once I get the rest of my SATA cables in the mail today:

  • Set them up in RAID 10 (mirrored stripe) so I get 1 TB of space for 4 500 GB drives.
  • Configure Samba and create a share on the drive.
  • Connect from my Mac and see if I can saturate the 1 Gbps onboard ethernet connection (~100 MB/sec).
  • (Maybe) Connect to a 10 Gbps network via PCIe switch slot #2 and see if that gives any more than 100 MB/sec copy performance.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

For the spinning disks (500GB WD5000AVDS), I partitioned, formatted, and mounted them, then I ran my benchmarking tests against them:

Test              Result
hdparm            72.43 MB/s
dd                67.30 MB/s
random 4K read    0.48 MB/s
random 4K write   0.60 MB/s

Sometimes you forget just how good we have it with flash memory nowadays. These drives are not a great option as boot volumes for the Pi :P

I then put two of them in a RAID0 stripe with mdadm, and ran the same test:

Test              Result
hdparm            154.33 MB/s
dd                109.00 MB/s
random 4K read    0.71 MB/s
random 4K write   1.60 MB/s

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

I also set up SMB:

# Install Samba.
sudo apt install -y samba samba-common-bin

# Create a shared directory.
sudo mkdir /mnt/raid0/shared
sudo chmod -R 777 /mnt/raid0/shared

# Add the text below to the bottom of the Samba config.
sudo nano /etc/samba/smb.conf

[shared]
path=/mnt/raid0/shared
writeable=Yes
create mask=0777
directory mask=0777
public=no


# Restart Samba daemon.
pi@raspberrypi:~ $ sudo systemctl restart smbd

# Create a Samba password for the Pi user.
pi@raspberrypi:~ $ sudo smbpasswd -a pi

# (On another computer, connect to smb://[pi ip address])

I averaged 75 MB/sec copy performance over the Pi's built-in Gigabit interface for a single large file, 55 MB/sec using rsync with a directory of medium-sized video clips.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Ouch, the initial resync is even slower on these spinny disk drives than it was on the SSDs (which, of course, are half the size in the first place, in addition to being twice as fast). 1% per minute on the sync.

Apparently you could skip the initial resync entirely with --assume-clean... but there are many caveats, and it's not really intended unless you're in a disaster recovery scenario and you don't want anything to touch the drives when you initialize the RAID device.

So good to know that you should probably plan on letting your array sync up the first time you get it running.
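If you do want the initial sync to finish sooner (rather than skipping it), the kernel's RAID sync speed limits can be raised (a sketch; the defaults vary, and this will steal I/O from anything else using the drives):

sudo sysctl -w dev.raid.speed_limit_min=100000
sudo sysctl -w dev.raid.speed_limit_max=500000
cat /proc/mdstat    # shows progress and estimated finish time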

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Hmm... now trying all four drives:

$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sd[a-d]1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Unknown error 524

I then zeroed out the superblock:

sudo mdadm --zero-superblock /dev/sd[a-d]1

But then when I tried to create again, I got:

mdadm: super1.x cannot open /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 is not suitable for this array.
mdadm: create aborted

So I'm going to reboot and try again. Maybe I have a bad drive 😢

Debugging:

$ cat /proc/mdstat
Personalities : 
md0 : inactive sdd1[3](S)
      488253464 blocks super 1.2
       
unused devices: <none>

Trying to format it again with fdisk, I got Failed to add partition 1 to system: Invalid argument. Very odd behavior, but I'm thinking there's a good chance this drive is toast. That's what you get for buying refurbished!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

No matter what I try, I keep getting mdadm: RUN_ARRAY failed: Unknown error 524 in the end.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Weird. After finding this question on Stack Exchange, I tried:

# echo 1 > /sys/module/raid0/parameters/default_layout

And this time, it works:

$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sd[a-d]1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

We'll see how much further I can go.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Another fun thing I just noticed—ext4lazyinit is still running and making it so I can't unmount the volume without forcing it. If I'm going to repartition and reformat anyways, what's the point of letting it finish?
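(If you'd rather not have ext4lazyinit running in the background at all, the inode and journal tables can be initialized up front at format time, at the cost of a slower mkfs; a sketch:)

sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0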

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Resetting the array:

sudo umount /mnt/raid0
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sd[a-d]1
sudo mdadm --remove /dev/md0

Then set it to RAID 10:

# Install mdadm.
sudo apt install -y mdadm

# Create a RAID10 array using four drives.
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]1

# Create a mount point for the new RAID device.
sudo mkdir -p /mnt/raid10

# Format the RAID device.
sudo mkfs.ext4 /dev/md0

# Mount the RAID device.
sudo mount /dev/md0 /mnt/raid10

Confirm the RAID 10 drive gives me 1 TB of mirrored/striped storage:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        915G   77M  869G   1% /mnt/raid1

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sda           8:0    1 465.3G  0 disk   
└─sda1        8:1    1 465.3G  0 part   
  └─md0       9:0    0 930.3G  0 raid10 /mnt/raid1
sdb           8:16   1 465.3G  0 disk   
└─sdb1        8:17   1 465.3G  0 part   
  └─md0       9:0    0 930.3G  0 raid10 /mnt/raid1
sdc           8:32   1 465.3G  0 disk   
└─sdc1        8:33   1 465.3G  0 part   
  └─md0       9:0    0 930.3G  0 raid10 /mnt/raid1
sdd           8:48   1 465.8G  0 disk   
└─sdd1        8:49   1 465.8G  0 part   
  └─md0       9:0    0 930.3G  0 raid10 /mnt/raid1
mmcblk0     179:0    0  29.8G  0 disk   
├─mmcblk0p1 179:1    0   256M  0 part   /boot
└─mmcblk0p2 179:2    0  29.6G  0 part   /

And now the great wait for the resync, watching sudo mdadm --detail /dev/md0:

Every 2.0s: sudo mdadm --detail /dev/md0                   raspberrypi: Tue Nov 10 23:54:32 2020

/dev/md0:
           Version : 1.2
     Creation Time : Tue Nov 10 23:47:10 2020
        Raid Level : raid10
        Array Size : 975458304 (930.27 GiB 998.87 GB)
     Used Dev Size : 487729152 (465.13 GiB 499.43 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
...
       Update Time : Tue Nov 10 23:54:31 2020
             State : clean, resyncing
...
     Resync Status : 1% complete

It took about 5 hours to do the initial resync (sheesh!), and once that was done, I ran the benchmarks again:

Test              Result
hdparm            167.72 MB/s
dd                97.4 MB/s
random 4K read    0.85 MB/s
random 4K write   1.52 MB/s

from raspberry-pi-pcie-devices.

PixlRainbow avatar PixlRainbow commented on May 6, 2024

Have you tested if you can boot from a drive attached through PCIE?

EDIT: It appears that, as of now, the Raspberry Pi firmware only supports SD card, USB, and network boot. However, you could potentially boot a U-Boot shell from the SD card, load an EFI driver for NVMe drives, then load the OS EFI bootloader from the drive. But this appears to be completely untested on the Raspberry Pi, although it has been found to work on the Rock Pi (Rockchip ARM, not Broadcom).
TianoCore has a more "finished" UEFI implementation on the Raspberry Pi. Unfortunately, the project's NVMe EFI driver cannot be built for ARM, though TianoCore's UEFI shell may be able to load a driver binary from another project.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Another quick note, just to make sure I point it out: the fastest way to reset the unmounted drives is to run sudo wipefs -a /dev/sd[a-d]. Don't, uh... do that when you're not certain you want to wipe all the drives though :D

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Now this is weird... I kept trying to create an array with 4 SSDs, but kept getting results like:

$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sd[a-d]1
mdadm: super1.x cannot open /dev/sda1: Device or resource busy
mdadm: ddf: Cannot use /dev/sda1: Device or resource busy
mdadm: Cannot use /dev/sda1: It is busy
mdadm: cannot open /dev/sda1: Device or resource busy

But sometimes (after doing a reset where I stopped md0, zeroed the drives, and removed md0), it would be sdb. Sometimes sdc. Sometimes sdd. Sometimes more than one, but never the same.

So it looked like a race condition, and lo and behold, searching around, I found this post from 2012: mdadm: device or resource busy, in which disabling udev event processing during creation is suggested:

$ sudo udevadm control --stop-exec-queue
$ sudo mdadm --create ...
$ sudo udevadm control --start-exec-queue

Lo and behold... that worked!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

This video will (hopefully) be epic, and still, sadly, won't cover probably more than 50% of what I've learned testing this card. Working on the final script now, hopefully I'll be able to start recording either late tomorrow or early in the week, once I get my notes finished for my Kubernetes 101 series episode!

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

iperf3 measured 942 Mbps between the Pi's 1 Gbps port and my MacBook Pro through a CalDigit TB3 hub, so the maximum possible transfer rate I could achieve is 118 MB/sec on this connection:

Configuration                      Large file copy   Folder copy
SMB, RAID 10, Kingston SSD x4      93.30 MB/sec      24.56 MB/sec
NFS, RAID 10, Kingston SSD x4      106.20 MB/sec     36.47 MB/sec

Note: During some of the later NFS file copies, I was hitting 100% busy on one or two of the SSDs (measured via atop), the network interface was also maxing out, and ksoftirqd was queueing some packets. It happened only in short bursts, but enough to impact longer file copies, and I could also see the system RAM (4 GB in this case) filling up. I'm guessing data is buffered in RAM to be written to disk, and that the entire operation can't sustain 1 Gbps full-tilt over long periods.

Measuring the temperature of the IO Crest board, I was seeing 111°C in the bottom corner, even with my 12V fan at full blast over the board. The temperature didn't seem to affect the queueing, though, as it happened even after a shutdown and cooldown cycle (a couple, in fact).

Note 2: It seems like NFS is multithreaded by default, and this allows it to saturate the network bandwidth more efficiently. smbd, on the other hand, seems to run one thread that maxes out one CPU core (at least by default), and that is the primary bottleneck preventing the full network bandwidth from being used in bursts, at least on the Pi, which has some IRQ limitations.

SMB Setup

# Install Samba.
sudo apt install -y samba samba-common-bin

# Create a shared directory.
sudo mkdir /mnt/raid10/shared-smb
sudo chmod -R 777 /mnt/raid10/shared-smb

# Add the text below to the bottom of the Samba config.
sudo nano /etc/samba/smb.conf

[shared]
path=/mnt/raid10/shared-smb
writeable=Yes
create mask=0777
directory mask=0777
public=no

# Restart Samba daemon.
pi@raspberrypi:~ $ sudo systemctl restart smbd

# Create a Samba password for the Pi user.
pi@raspberrypi:~ $ sudo smbpasswd -a pi

# (On another computer, connect to smb://[pi ip address])

Example atop output during peak of file copy using SMB:

atop-smb-large-file-copy

NFS Setup

# Install NFS.
sudo apt-get install -y nfs-kernel-server

# Create a shared directory.
sudo mkdir /mnt/raid10/shared-nfs
sudo chmod -R 777 /mnt/raid10/shared-nfs

# Add the line below to the bottom of the /etc/exports file
sudo nano /etc/exports

/mnt/raid10/shared-nfs *(rw,all_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

# Update NFS exports after saving the file.
sudo exportfs -ra

# Connect to server from Mac (⌘-K in Finder):
nfs://10.0.100.119/mnt/raid10/shared-nfs
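
# From a Linux client, the same export can be mounted directly instead
# (a sketch; substitute the Pi's IP address and a local mount point):
sudo mkdir -p /mnt/pi-nfs
sudo mount -t nfs 10.0.100.119:/mnt/raid10/shared-nfs /mnt/pi-nfs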

Example atop output during peak of file copy using NFS:

atop-nfs-large-file-copy

Benchmark setup

Each benchmark was run three times, and the result averaged.

Large file benchmark

Using 7.35 GB .img file:

pv 2020-08-20-raspios-buster-armhf-full.img > /Volumes/shared-[type]/2020-08-20-raspios-buster-armhf-full.img
Folder with many files benchmark

Using folder with 1,478 images and video clips totaling 1.93 GB:

time cp -R old-sd-card-backup /Volumes/shared-[type]

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Two last things I want to test:

  • Energy consumption between the 4 drives and the IO board + CM4 + IO Crest card. I'll plug it all in through a Kill-A-Watt and observe.
  • Run NFSd with one thread (instead of the 8 default threads allocated) and see if it hits the same bottleneck as SMB (SMB uses one thread per client, AFAICT).

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

For NFS threads:

# Change RPCNFSDCOUNT from 8 to 1.
sudo nano /etc/default/nfs-kernel-server

# Restart nfsd.
sudo systemctl restart nfs-kernel-server

# Confirm there's now one thread.
ps aux | grep nfsd

And the result? Even with only one thread, I was able to hit 900+ Mbps and sustain 105+ MB/sec with NFS (though the single thread was hitting 75-100% CPU usage on one core now).

So something about the NFS protocol seems to be slightly more efficient than Samba—at least on Linux—in general, regardless of the threading model.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Energy consumption (4x Kingston SSD via dedicated AC adapter + IO Board, CM4, IOCrest card via AC adapter):

  • Idle: 6W
  • Avg during file copy: 11W
  • Peak consumption: 12.2W

Screen Shot 2020-11-30 at 11 12 54 AM

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

One more thing I was wondering—is there a technical reason to partition the drives before adding them to the array (vs. just using sda/sdb/etc.)? This SO answer about creating an array using partitions vs. the whole disk seemed to have a few good arguments in favor of pre-partitioning.

from raspberry-pi-pcie-devices.

elFarto avatar elFarto commented on May 6, 2024

The only technical reason I can think of is to get the correct block alignment for drives that have a 4K native sector size and a 512-byte logical block size (and SSDs, where you want to align to the erase block size). But this isn't a requirement; it'll just slow the drives down a little bit if they're not aligned.

Because the wasted space due to alignment is usually greater than that of partitioning, and partitioning has other advantages, you might as well partition them.
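For what it's worth, modern partitioning tools default to 1 MiB alignment anyway; a minimal sketch with parted (hypothetical device name, and note this wipes the disk):

sudo parted -s -a optimal /dev/sda mklabel gpt
sudo parted -s -a optimal /dev/sda mkpart primary ext4 1MiB 100%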

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

@elFarto - It seemed like that was the most compelling universal reason—though the note about some motherboards (and chipsets, I presume) having trouble with unpartitioned block devices made me nervous enough to recommend always partitioning before RAIDing.

from raspberry-pi-pcie-devices.

paulwratt avatar paulwratt commented on May 6, 2024

Disk block I/O sizes have been verified to play a part in slowdowns, especially (but not only) with SSDs. A proper pre-partitioning just means the start of each partition falls on the minimum block-size boundary offered by the device. (Apparently Windows cannot properly align the first partition; other partitions are fine.)

from raspberry-pi-pcie-devices.

okket avatar okket commented on May 6, 2024

FYI: Here are my configuration options for Samba to make it faster interacting with a macOS client. It has been a while since I researched this topic, but I remember that plain Samba performance was abysmal, esp. with large directories. The xattr stuff seems to help a lot.

[global]
        min protocol = SMB2 
        ea support = yes
        vfs objects = catia fruit streams_xattr acl_xattr shadow_copy2 
        fruit:metadata = stream
        fruit:model = MacPro
        fruit:posix_rename = yes 
        fruit:veto_appledouble = no
        fruit:wipe_intentionally_left_blank_rfork = yes 
        fruit:delete_empty_adfiles = yes

[TimeMachine]
        path = <path>
        browseable = yes
        writeable = yes
        read only = no
        create mask = 0600
        directory mask = 0700
        spotlight = yes
        vfs objects = catia fruit streams_xattr
        fruit:time machine = yes
        fruit:time machine max size = <size in TB> T
        valid users = <usernames>

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Video is here: I built the fastest Raspberry Pi SATA RAID NAS!.

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Closing issues where testing is at least mostly complete, to keep the issue queue tidy.

from raspberry-pi-pcie-devices.

jamesy0ung avatar jamesy0ung commented on May 6, 2024

Hey Jeff, this isn't working for me.
I compiled the latest kernel, copied it over, and it booted fine without the card.
When I decompiled the device tree blob to do the BAR patch, I got these errors:

root@buster:/mnt/pi-fat32# dtc -I dtb -O dts bcm2711-rpi-cm4.dtb -o test.dts
test.dts: Warning (unit_address_vs_reg): /soc: node has a reg or ranges property, but no unit name
test.dts: Warning (unit_address_vs_reg): /soc/axiperf: node has a reg or ranges property, but no unit name
test.dts: Warning (unit_address_vs_reg): /soc/gpiomem: node has a reg or ranges property, but no unit name
test.dts: Warning (unit_address_vs_reg): /emmc2bus: node has a reg or ranges property, but no unit name
test.dts: Warning (unit_address_vs_reg): /scb: node has a reg or ranges property, but no unit name
test.dts: Warning (unit_address_vs_reg): /v3dbus: node has a reg or ranges property, but no unit name
test.dts: Warning (pci_device_reg): /scb/pcie@7d500000/pci@1,0: PCI unit address format error, expected "0,0"
test.dts: Warning (avoid_unnecessary_addr_size): /soc/firmware: unnecessary #address-cells/#size-cells without "ranges" or child "reg" property
test.dts: Warning (unique_unit_address): /soc/mmc@7e300000: duplicate unit-address (also used in node /soc/mmcnr@7e300000)
test.dts: Warning (unique_unit_address): /soc/firmwarekms@7e600000: duplicate unit-address (also used in node /soc/smi@7e600000)
test.dts: Warning (clocks_property): /symbols:clocks: property size (21) is invalid, expected multiple of 4
test.dts: Warning (gpios_property): /aliases:gpio: property size (19) is invalid, expected multiple of 4
test.dts: Warning (gpios_property): /symbols:gpio: property size (19) is invalid, expected multiple of 4

root@buster:/mnt/pi-fat32# dtc -I dts -O dtb test.dts -o test.dtb
test.dtb: Warning (unit_address_vs_reg): /soc: node has a reg or ranges property, but no unit name
test.dtb: Warning (unit_address_vs_reg): /soc/axiperf: node has a reg or ranges property, but no unit name
test.dtb: Warning (unit_address_vs_reg): /soc/gpiomem: node has a reg or ranges property, but no unit name
test.dtb: Warning (unit_address_vs_reg): /emmc2bus: node has a reg or ranges property, but no unit name
test.dtb: Warning (unit_address_vs_reg): /scb: node has a reg or ranges property, but no unit name
test.dtb: Warning (unit_address_vs_reg): /v3dbus: node has a reg or ranges property, but no unit name
test.dtb: Warning (pci_device_reg): /scb/pcie@7d500000/pci@1,0: PCI unit address format error, expected "0,0"
test.dtb: Warning (avoid_unnecessary_addr_size): /soc/firmware: unnecessary #address-cells/#size-cells without "ranges" or child "reg" property
test.dtb: Warning (unique_unit_address): /soc/mmc@7e300000: duplicate unit-address (also used in node /soc/mmcnr@7e300000)
test.dtb: Warning (unique_unit_address): /soc/firmwarekms@7e600000: duplicate unit-address (also used in node /soc/smi@7e600000)
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 1 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 3 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 5 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 7 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 9 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/cprman@7e101000:clocks: cell 11 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/mmc@7e202000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2s@7e203000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e204000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e205000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dpi@7e208000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dpi@7e208000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e209000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e209000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e209000:clocks: cell 4 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/aux@7e215000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e215040:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e215080:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e2150c0:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/pwm@7e20c000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hvs@7e400000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e700000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e700000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/dsi@7e700000:clocks: cell 4 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e804000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/vec@7e806000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/usb@7e980000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/watchdog@7e100000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/watchdog@7e100000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/watchdog@7e100000:clocks: cell 4 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/watchdog@7e100000:clocks: cell 6 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201400:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201400:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201600:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201600:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201800:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201800:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201a00:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/serial@7e201a00:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e204600:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e204800:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e204a00:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/spi@7e204c00:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e205600:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e205800:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e205a00:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/i2c@7e205c00:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/pwm@7e20c800:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/clock@7ef00000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef00700:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef00700:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef00700:clocks: cell 4 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef00700:clocks: cell 6 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef05700:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef05700:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef05700:clocks: cell 4 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/hdmi@7ef05700:clocks: cell 6 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/mmc@7e300000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/mmcnr@7e300000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/smi@7e600000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/csi@7e800000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/csi@7e800000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/csi@7e801000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /soc/csi@7e801000:clocks: cell 2 is not a phandle reference
test.dtb: Warning (clocks_property): /emmc2bus/emmc2@7e340000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /v3dbus/v3d@7ec04000:clocks: cell 0 is not a phandle reference
test.dtb: Warning (clocks_property): /symbols:clocks: property size (21) is invalid, expected multiple of 4
test.dtb: Warning (dmas_property): /soc/mmc@7e202000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/i2s@7e203000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/i2s@7e203000:dmas: cell 2 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/spi@7e204000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/spi@7e204000:dmas: cell 2 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/hdmi@7ef00700:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/hdmi@7ef05700:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/mmc@7e300000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/mmcnr@7e300000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (dmas_property): /soc/smi@7e600000:dmas: cell 0 is not a phandle reference
test.dtb: Warning (mboxes_property): /soc/firmware:mboxes: cell 0 is not a phandle reference
test.dtb: Warning (msi_parent_property): /scb/pcie@7d500000:msi-parent: cell 0 is not a phandle reference
test.dtb: Warning (phys_property): /soc/usb@7e980000:phys: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/dsi@7e209000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/dsi@7e700000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/vec@7e806000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/usb@7e980000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/csi@7e800000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /soc/csi@7e801000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /scb/xhci@7e9c0000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (power_domains_property): /v3dbus/v3d@7ec04000:power-domains: cell 0 is not a phandle reference
test.dtb: Warning (resets_property): /soc/hdmi@7ef00700:resets: cell 0 is not a phandle reference
test.dtb: Warning (resets_property): /soc/hdmi@7ef05700:resets: cell 0 is not a phandle reference
test.dtb: Warning (resets_property): /scb/pcie@7d500000/pci@1,0/usb@1,0:resets: cell 0 is not a phandle reference
test.dtb: Warning (resets_property): /v3dbus/v3d@7ec04000:resets: cell 0 is not a phandle reference
test.dtb: Warning (thermal_sensors_property): /thermal-zones/cpu-thermal:thermal-sensors: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /aliases:gpio: property size (19) is invalid, expected multiple of 4
test.dtb: Warning (gpios_property): /soc/serial@7e201000/bluetooth:shutdown-gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /soc/spi@7e204000:cs-gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /soc/spi@7e204000:cs-gpios: cell 3 is not a phandle reference
test.dtb: Warning (gpios_property): /soc/serial@7e215040/bluetooth:shutdown-gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /leds/act:gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /leds/pwr:gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /sd_io_1v8_reg:gpios: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /sd_vcc_reg:gpio: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /cam1_reg:gpio: cell 0 is not a phandle reference
test.dtb: Warning (gpios_property): /symbols:gpio: property size (19) is invalid, expected multiple of 4
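
(For context, warnings like "cell 0 is not a phandle reference" are normal when round-tripping a dtb through dtc, since the decompiled source no longer carries the original phandle labels. A minimal sketch of that workflow, assuming the stock CM4 device tree at /boot/bcm2711-rpi-cm4.dtb; the exact path and the edit you make may differ:)

# Decompile the binary device tree to editable source
dtc -I dtb -O dts -o test.dts /boot/bcm2711-rpi-cm4.dtb

# ... edit test.dts (for example the PCIe ranges) ...

# Recompile it back to a binary dtb
dtc -I dts -O dtb -o test.dtb test.dts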

I tried the compiled dtb and it instantly errored out, giving me a kernel panic because of an asynchronous SError interrupt.
With the original dtb it boots fine, but I still get the asynchronous SError interrupt if no SATA drives are plugged in. With a drive plugged in, the card does not work either, presumably because of BAR space. Here are the relevant entries in dmesg:

[ 1.247086] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[ 1.249747] brcm-pcie fd500000.pcie: No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[ 1.252416] brcm-pcie fd500000.pcie: MEM 0x0600000000..0x063fffffff -> 0x00c0000000
[ 1.255082] brcm-pcie fd500000.pcie: IB MEM 0x0000000000..0x00ffffffff -> 0x0400000000
[ 1.577589] brcm-pcie fd500000.pcie: link down

What should I do?

from raspberry-pi-pcie-devices.

stamaali4 avatar stamaali4 commented on May 6, 2024

@geerlingguy Thanks for a detailed writeup...

I am trying to recompile the kernel to enable SATA support for the IO Crest 4-port Marvell 9215 card, to use it with a Raspberry Pi CM4 running Ubuntu 20.04.

I tried the exact same steps, but the disks are not detected. I am trying again now, since I missed the steps for increasing the PCIe BAR address space (I did see the BAR address space errors in dmesg). I wanted to check with you (I am a novice in Linux) whether the steps you shared apply to Ubuntu 20.04 or are specific to Raspberry Pi OS (Buster); if they are specific to Buster, can you please share what changes are needed for Ubuntu 20.04? Additional details are below.

  1. output of uname -a: Linux cvbkpgtw 5.4.0-1035-raspi #38-Ubuntu SMP PREEMPT Tue Apr 20 21:37:03 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

  2. output of lspci -vv
    00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2711 (rev 20) (prog-if 00 [Normal decode])
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 43
    Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
    I/O behind bridge: 00000000-00000fff [size=4K]
    Memory behind bridge: f8000000-f80fffff [size=1M]
    Prefetchable memory behind bridge: [disabled]
    Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
    BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B-
    PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
    Capabilities:
    Kernel driver in use: pcieport

01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11) (prog-if 01 [AHCI 1.0])
Subsystem: Marvell Technology Group Ltd. Device 9215
Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Interrupt: pin A routed to IRQ 0
Region 0: I/O ports at 0000
Region 1: I/O ports at 0000
Region 2: I/O ports at 0000
Region 3: I/O ports at 0000
Region 4: I/O ports at 0000
Region 5: Memory at 600040000 (32-bit, non-prefetchable) [size=2K]
Expansion ROM at 600000000 [size=256K]
Capabilities:

  3. output of os-release: cvadmin@cvbkpgtw:~$ cat /etc/os-release
    NAME="Ubuntu"
    VERSION="20.04.2 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.2 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal

  4. Raspberry Pi CM4 - 4GB RAM and 32GB eMMC with IO Board

  5. Connected 4 SATA drives - 1x 240GB SSD (Seagate) & 3x 2TB HDDs (Seagate)

  6. Booting from eMMC

Please let me know if further details are needed. Thanks in advance for your help.

from raspberry-pi-pcie-devices.

App-Teck avatar App-Teck commented on May 6, 2024

Hey, I didn't get this part while configuring for SATA:
"nano .config

(edit CONFIG_LOCALVERSION and add a suffix that helps you identify your build)

Build the kernel and copy everything into place"

In nano .config I found the line "CONFIG_LOCALVERSION="-v7l"",
but where do I add the commands below? Thanks for your help.
"make -j4 zImage modules dtbs # 'Image' on 64-bit
sudo make modules_install
sudo cp arch/arm/boot/dts/*.dtb /boot/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
sudo cp arch/arm/boot/zImage /boot/$KERNEL.img"

Appreciate your help.

from raspberry-pi-pcie-devices.

push-gh avatar push-gh commented on May 6, 2024

Since google might land you here, like it did me on a search for "cm4 ubuntu sata", the latest development version Ubuntu Impish Indri has SATA support. Simply "sudo apt install linux-modules-extra-raspi" and then "modprobe ahci" or reboot.

Thanks. I saw that the required kernel configs were enabled in the kernel config file, but couldn't find the corresponding modules in the modules directory. I thought it was an issue with the distribution and was about to compile the kernel myself; I didn't know the extra modules are delivered as a separate package. Fortunately I found your comment.
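
(If you hit the same situation, a quick sanity check, sketched here under the assumption of a stock Ubuntu raspi kernel, is to compare the kernel config against the modules actually installed and then pull in the extra-modules package:)

# Is AHCI/SATA support enabled in the running kernel's config?
grep -E 'CONFIG_SATA_AHCI|CONFIG_ATA=' /boot/config-$(uname -r)

# Is the ahci module actually shipped for this kernel?
find /lib/modules/$(uname -r) -name 'ahci*'

# On Ubuntu Impish Indri (21.10) the module lives in a separate package, per the comment above
sudo apt install linux-modules-extra-raspi
sudo modprobe ahci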

from raspberry-pi-pcie-devices.

stamaali4 avatar stamaali4 commented on May 6, 2024

Thanks @BeauSlim for the input. Today I tested the card with Ubuntu Impish Indri and all 4 drives were detected; however, I see the repeated errors below and am troubleshooting them now:

[ 788.484701] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 788.484925] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.484954] sd 0:0:0:0: [sda] Stopping disk
[ 788.485013] sd 0:0:0:0: [sda] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.879084] ata2.15: SATA link down (SStatus 0 SControl 320)
[ 790.521329] ata2.15: failed to read PMP GSCR[0] (Emask=0x100)
[ 790.521374] ata2.15: PMP revalidation failed (errno=-5)

If you know anything about these errors, any suggestion on what to look at would be a great help.

from raspberry-pi-pcie-devices.

BeauSlim avatar BeauSlim commented on May 6, 2024

[ 788.484701] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 788.484925] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.484954] sd 0:0:0:0: [sda] Stopping disk
[ 788.485013] sd 0:0:0:0: [sda] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.879084] ata2.15: SATA link down (SStatus 0 SControl 320)
[ 790.521329] ata2.15: failed to read PMP GSCR[0] (Emask=0x100)
[ 790.521374] ata2.15: PMP revalidation failed (errno=-5)

Yeah, this Pi SATA stuff is definitely a bit tricky. Googling errors will get you a lot of people saying "your drive is dead", but I bet if you plug that card into a PC, everything will work perfectly even under heavy load.

I don't have a 9215. I have a Marvell 9230 and a JMicron 585. The 9230 card runs well aside from a lack of any way to change the RAID config.

PMP seems to be referring to a port multiplier? Are your 4 drives in an external enclosure? If so, I'd try connecting drives directly. If not, definitely try different cables.

If I push my JMB585 with 4 or 5 drives in a stripe or software RAID 10, it gives me a bunch of errors, but they are mostly "failed command: READ FPDMA QUEUED" which is different from yours. Adding "extraargs=libata.force=noncq" to my cmdline.txt solves that but hurts SATA performance. For other libata.force options to try (like using SATA I speeds), see https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html

You might also try adding "extraargs=pcie_aspm=off" to your cmdline.txt to turn off PCIe power management.

There is probably a firmware update for your card that you could try.
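
(A minimal sketch of what that looks like on a stock Raspberry Pi OS install, where the kernel command line is the single line in /boot/cmdline.txt; some distributions wrap extra parameters in an extraargs= setting instead, so adjust to your setup:)

# /boot/cmdline.txt is one single line; keep the existing contents and append at the end:
<existing parameters> libata.force=noncq pcie_aspm=off

# then reboot for the new parameters to take effect
sudo reboot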

from raspberry-pi-pcie-devices.

stamaali4 avatar stamaali4 commented on May 6, 2024

Thanks @BeauSlim for your input; yes, I have 4 drives in an IOCRest external enclosure. The drives are definitely not dead, as I have tested them with the RADXA QUADSATA HAT and they work fine there. I will try turning off PCIe power management, try a different set of cables, and update with the results.

from raspberry-pi-pcie-devices.

l0gical avatar l0gical commented on May 6, 2024

"You might also try adding "extraargs=pcie_aspm=off" to your cmdline.txt to turn off PCIe power management."

I may also try this. My 9215 works absolutely fine with 3x 8TB SATA drives; the only issue I occasionally get is the drive power-down/up sound, after which a couple of the disks change from, say, sda/sdc to sdd/sde. That does, however, break OMV when it happens.
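
(One way to make the sdX shuffling less painful, sketched here assuming the filesystems are mounted through /etc/fstab and using placeholder values, is to reference the disks by UUID or by-id path instead of the unstable /dev/sdX names:)

# List stable identifiers for each disk and partition
ls -l /dev/disk/by-id/
sudo blkid

# Example /etc/fstab entry using the UUID instead of /dev/sdX (UUID and mount point are placeholders):
# UUID=1234abcd-5678-90ef-1234-567890abcdef  /mnt/data  ext4  defaults,nofail  0  2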

from raspberry-pi-pcie-devices.

mi-hol avatar mi-hol commented on May 6, 2024

The 9230 card runs well aside from a lack of any way to change the RAID config.

Does this mean there is no CLI to enable/change hardware RAID modes?

from raspberry-pi-pcie-devices.

BeauSlim avatar BeauSlim commented on May 6, 2024

The 9230 card runs well aside from a lack of any way to change the RAID config.

Does this mean there is no CLI to enable/change hardware RAID modes?

That is correct. The Marvell hardware RAID configuration tool (MRU, the Marvell RAID Utility) is available only for x86/x64 processors.

You can put the card into a Windows or Linux PC, connect the disks you plan to use, configure RAID, and then move the card to your Pi setup. You might even be able to have a 9230-based card in the PC and just move the disks since the RAID config is stored on the drives themselves, not on the card.

This is probably fine if you just want striping or HyperDuo SSD caching, but if you want redundancy, you will have no indication that a mirror has failed.
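
(This is one reason Linux software RAID is often preferred on these cards: mdadm can actually report a degraded array. A minimal sketch, assuming an existing array at /dev/md0; the device name and mail address are placeholders:)

# Current state of all md arrays (look for [UU] vs [U_])
cat /proc/mdstat

# Detailed health of one array
sudo mdadm --detail /dev/md0

# Run the monitor daemon so failure events are reported; MAILADDR in
# /etc/mdadm/mdadm.conf controls where alerts go, e.g. MAILADDR admin@example.com
sudo mdadm --monitor --scan --daemonise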

from raspberry-pi-pcie-devices.

mi-hol avatar mi-hol commented on May 6, 2024

I'm testing an ASM1061R-based controller, basically identical to these.
My setup is identical to your description below:

put the card into a Windows or Linux PC, connect the disks you plan to use, configure RAID, and then move the card to your Pi setup. You might even be able to have a 9230-based card in the PC and just move the disks since the RAID config is stored on the drives themselves, not on the card.

Issue:

if you want redundancy you will have no indication that a mirror has failed.

is affecting my controller too, and was even confirmed by the distributor's technical support in this FAQ:
"Question:
I did not find a way to get a alert if a disk in a raid-1 set fails. the controller does not even stop the POST processes when a disk failed. I would expected that some RED blinking WARNING comes up or something and the PC only continues the POST if the degraded raid status gets committed. Documentation is very very poor..

Answer:
Hello Kalle,
thanks for your request. We're are sorry, that's the way this product works.
Kind regrads
InLine Support Team"

@geerlingguy shouldn't such severe limitations be documented in the "Raspberry Pi PCI Express device compatibility database"?

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

Testing on the Raspberry Pi 5:

pi@pi5:~ $ lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0000:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001

pi@pi5:~ $ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    1 223.6G  0 disk 
├─sda1        8:1    1   200M  0 part 
└─sda2        8:2    1 223.4G  0 part 
mmcblk0     179:0    0 119.1G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0 118.8G  0 part /

At PCIe Gen 2.0, I'm getting some link errors—but otherwise the card seems to pass through clean-ish at least:

[   47.906098] ata1: softreset failed (device not ready)
[   48.382111] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   48.387657] ata1.00: ATA-10: KINGSTON SA400S37240G, SBFKB1D2, max UDMA/133
[   48.390106] ata1.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 32), AA
[   48.394334] ata1.00: configured for UDMA/133
[   48.394449] scsi 0:0:0:0: Direct-Access     ATA      KINGSTON SA400S3 B1D2 PQ: 0 ANSI: 5
[   48.394861] sd 0:0:0:0: [sda] 468862128 512-byte logical blocks: (240 GB/224 GiB)
[   48.394875] sd 0:0:0:0: [sda] Write Protect is off
[   48.394878] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   48.394896] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   48.394920] sd 0:0:0:0: [sda] Preferred minimum I/O size 512 bytes
[   48.425872] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   48.458218] ata1.00: exception Emask 0x10 SAct 0x600000 SErr 0x380000 action 0x6 frozen
[   48.458227] ata1.00: irq_stat 0x08000000, interface fatal error
[   48.458229] ata1: SError: { 10B8B Dispar BadCRC }
[   48.458234] ata1.00: failed command: READ FPDMA QUEUED
[   48.458237] ata1.00: cmd 60/10:a8:00:00:00/00:00:00:00:00/40 tag 21 ncq dma 8192 in
                        res 40/00:b0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   48.458244] ata1.00: status: { DRDY }
[   48.458246] ata1.00: failed command: READ FPDMA QUEUED
[   48.458248] ata1.00: cmd 60/10:b0:10:00:00/00:00:00:00:00/40 tag 22 ncq dma 8192 in
                        res 40/00:b0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   48.458253] ata1.00: status: { DRDY }
[   48.458258] ata1: hard resetting link
[   48.934105] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   53.982112] ata1.00: qc timeout after 5000 msecs (cmd 0xec)
[   53.982130] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[   53.982135] ata1.00: revalidation failed (errno=-5)
[   53.982143] ata1: hard resetting link
[   54.458109] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   54.459118] ata1.00: configured for UDMA/133
[   54.459157] ata1: EH complete
[   54.490108] ata1: limiting SATA link speed to 3.0 Gbps
[   54.490113] ata1.00: exception Emask 0x10 SAct 0x6000000 SErr 0x380000 action 0x6 frozen
[   54.490117] ata1.00: irq_stat 0x08000000, interface fatal error
[   54.490120] ata1: SError: { 10B8B Dispar BadCRC }
[   54.490126] ata1.00: failed command: READ FPDMA QUEUED
[   54.490130] ata1.00: cmd 60/10:c8:00:00:00/00:00:00:00:00/40 tag 25 ncq dma 8192 in
                        res 40/00:d0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   54.490139] ata1.00: status: { DRDY }
[   54.490143] ata1.00: failed command: READ FPDMA QUEUED
[   54.490146] ata1.00: cmd 60/10:d0:10:00:00/00:00:00:00:00/40 tag 26 ncq dma 8192 in
                        res 40/00:d0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   54.490154] ata1.00: status: { DRDY }
[   54.490160] ata1: hard resetting link
[   54.966104] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[   54.966147] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x100)
[   54.966150] ata1.00: revalidation failed (errno=-5)
[   60.126109] ata1: hard resetting link
[   60.602105] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[   65.758102] ata1.00: qc timeout after 5000 msecs (cmd 0xec)
[   65.758117] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[   65.758120] ata1.00: revalidation failed (errno=-5)
[   65.758128] ata1: limiting SATA link speed to 1.5 Gbps
[   65.758132] ata1: hard resetting link
[   66.234102] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[   66.234338] ata1.00: configured for UDMA/133
[   66.234363] ata1: EH complete
[   66.278102] ata1.00: exception Emask 0x10 SAct 0x2 SErr 0x300000 action 0x6 frozen
[   66.278106] ata1.00: irq_stat 0x08000000, interface fatal error
[   66.278108] ata1: SError: { Dispar BadCRC }
[   66.278112] ata1.00: failed command: READ FPDMA QUEUED
[   66.278114] ata1.00: cmd 60/10:08:90:44:f2/00:00:1b:00:00/40 tag 1 ncq dma 8192 in
                        res 40/00:08:90:44:f2/00:00:1b:00:00/40 Emask 0x10 (ATA bus error)
[   66.278121] ata1.00: status: { DRDY }

I think the PCIe issues are down to the FFC cable and PCIe interference :(
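
(If it is signal integrity, one thing worth trying, sketched here assuming Raspberry Pi OS on the Pi 5 with the firmware config at /boot/firmware/config.txt and a firmware recent enough to support the pciex1_gen parameter, is pinning the external PCIe connector to a lower link speed and seeing whether the BadCRC errors go away:)

# /boot/firmware/config.txt
dtparam=pciex1          # enable the external PCIe connector
dtparam=pciex1_gen=1    # force Gen 1; try 2 or 3 again once the link is stable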

from raspberry-pi-pcie-devices.

geerlingguy avatar geerlingguy commented on May 6, 2024

I have some questions in to Raspberry Pi surrounding SATA support and PCIe link quality. It seems like both cards I've tested run into some errors (more so than I get with NVMe...).

from raspberry-pi-pcie-devices.
