How can I improve USB 3.0 performance for SMB transfers in Proxmox with a TrueNAS virtual machine?
I’ve set up a NAS system based on Proxmox with a TrueNAS virtual machine, but I’m experiencing very low SMB transfer speeds (10-30 MB/s), despite the USB 3.0 hardware being capable of much higher speeds (150-180 MB/s).
Current configuration:
- External HDD enclosure with two drives, 8 TB and 6 TB (RAID disabled, normal/clear mode)
- Mini PC with:
- 16 GB RAM (dual-channel mode)
- 256 GB SSD for the system
- USB 3.0 ports (using front ports as I’m unsure about the rear ones)
- Proxmox installed on bare metal
- TrueNAS VM with:
- 16 GB SSD for the system
- Two HDDs connected via USB using the command:
qm set 100 -scsi1 /dev/disk/by-id/ata-[serial_number]
- A storage pool created from both HDDs
Requirements met:
- Online server for home use with a shared disk pool
- Streaming movies to devices in the home network (works great with Jellyfin)
- File access via SMB with upload/download capabilities (this is where the problem lies)
Network configuration:
- Mini PC connected to a Wi-Fi repeater (2.5 Gbit bandwidth)
- Ethernet cable: 1 Gbit
- Router: ~1.2 Gbit
- Testing was conducted via Wi-Fi with 1 Gbit bandwidth on Windows and macOS
Problem:
Despite the hardware being capable of 150-180 MB/s speeds (verified on Windows), SMB transfers through the TrueNAS VM are limited to 10-30 MB/s. This is significantly below the expected USB 3.0 performance and even below the theoretical maximum of ~125 MB/s for a 1 Gbit network connection.
Question:
How can I troubleshoot and improve USB 3.0 performance for SMB transfers in my Proxmox configuration with a TrueNAS virtual machine? What settings or configurations might be causing this bottleneck?
Low USB 3.0 Performance for SMB Transfers in Your Proxmox Configuration with TrueNAS VM
Low USB 3.0 performance for SMB transfers in your Proxmox configuration with a TrueNAS virtual machine is typically related to USB device passthrough issues, network settings, or SMB parameters. The main causes may include suboptimal SCSI controller configuration, USB interrupt problems, network stack limitations, or incorrect SMB settings in TrueNAS.
Table of Contents
- Analysis of Current Configuration
- USB Passthrough Optimization
- Network Performance Configuration
- SMB Optimization in TrueNAS
- Additional Performance Settings
- Monitoring and Diagnosis
- Conclusion
Analysis of Current Configuration
Your current configuration has several potential bottlenecks:
USB Passthrough Issues:
- Using front USB ports instead of rear ones can cause power and interrupt issues
- Configuration using qm set 100 -scsi1 /dev/disk/by-id/ata-[serial_number] may not provide optimal performance
- Lack of direct USB 3.0 access without a virtual SCSI controller
Network Limitations:
- Testing over Wi-Fi, which rarely sustains its rated 1 Gbit throughput in practice
- An intermediate Wi-Fi repeater can add latency
Storage Settings:
- Enclosure set to “normal/clear” (single-disk) mode, which is correct for passthrough but means no hardware RAID cache is in play
- Lack of caching at the TrueNAS level
For diagnostics, start by checking the current USB device configuration in Proxmox:
lsusb
lspci | grep -i usb
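Before tuning anything, it is worth confirming the enclosure actually negotiated a SuperSpeed link rather than falling back to USB 2.0. A sketch that interprets the speed files sysfs exposes on the Proxmox host (usb_speed_label is a hypothetical helper name):

```shell
# Sketch: map the negotiated link speed reported in
# /sys/bus/usb/devices/*/speed (in Mbit/s) to a USB generation,
# so you can confirm the enclosure enumerated at USB 3.0.
usb_speed_label() {
  case "$1" in
    1.5)   echo "USB 1.0 (low speed)" ;;
    12)    echo "USB 1.1 (full speed)" ;;
    480)   echo "USB 2.0 (high speed)" ;;
    5000)  echo "USB 3.0 (SuperSpeed)" ;;
    10000) echo "USB 3.1 Gen 2 (SuperSpeed+)" ;;
    *)     echo "unknown ($1 Mbit/s)" ;;
  esac
}

# On the host, list each device with its negotiated speed:
# for d in /sys/bus/usb/devices/[0-9]*; do
#   [ -f "$d/speed" ] && echo "$d: $(usb_speed_label "$(cat "$d/speed")")"
# done
```

If a drive shows 480 Mbit/s here, no VM-side tuning will get past roughly 40 MB/s, so fix the physical link first.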
USB Passthrough Optimization
Using Direct USB Passthrough Instead of SCSI
Instead of passing through disks via SCSI controller, try using direct USB passthrough:
- Identify the USB device:
lsusb -t
- Configure the VM for direct USB passthrough by host bus-port (the usb3=1 flag attaches the device through an emulated xHCI controller):
qm set 100 -usb1 host=1-1,usb3=1
- Or pass the device by vendor/product ID from lsusb, which survives replugging into a different port:
qm set 100 -usb1 host=<vendor_id>:<product_id>,usb3=1
USB Controller Configuration
For optimal performance the VM needs an xHCI (USB 3.0) controller; usb-ehci is only USB 2.0. Passing usb3=1 as above adds xHCI automatically, or set it explicitly (note that each qm set -args call replaces any previous -args value):
qm set 100 -args "-device qemu-xhci,id=xhci"
USB Interrupt Optimization
Interrupt issues can significantly reduce performance:
- Check the current interrupt status:
cat /proc/interrupts | grep usb
- Disable power saving for USB devices by keeping them active (on in power/control) and turning autosuspend off (-1 in autosuspend_delay_ms):
for d in /sys/bus/usb/devices/usb*; do
  echo 'on' > "$d/power/control"
  echo '-1' > "$d/power/autosuspend_delay_ms"
done
Network Performance Configuration
Proxmox Network Configuration Optimization
To improve network performance, make the following changes:
- Set acceptable TCP buffer values:
echo 'net.core.rmem_max = 134217728' >> /etc/sysctl.conf
echo 'net.core.wmem_max = 134217728' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem = 4096 87380 134217728' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 4096 65536 134217728' >> /etc/sysctl.conf
- Optimize TCP parameters:
echo 'net.core.netdev_max_backlog = 3000' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_congestion_control = bbr' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
- Apply changes:
sysctl -p
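The appends above work, but rerunning them duplicates entries in /etc/sysctl.conf. A sketch of the same tuning as an idempotent drop-in file; SYSCTL_DIR is a stand-in variable so the snippet can be dry-run anywhere, and on the real host it would be /etc/sysctl.d:

```shell
# Sketch: collect the TCP tuning in one drop-in file instead of
# appending to /etc/sysctl.conf (repeated appends create duplicates).
SYSCTL_DIR="${SYSCTL_DIR:-/tmp/sysctl-demo}"   # /etc/sysctl.d on the host
mkdir -p "$SYSCTL_DIR"
cat > "$SYSCTL_DIR/90-smb-tuning.conf" <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 3000
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_no_metrics_save = 1
EOF
# Apply with: sysctl -p "$SYSCTL_DIR/90-smb-tuning.conf"
```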
TrueNAS VM Network Configuration
Inside the TrueNAS virtual machine, optimize network settings:
- Increase TCP buffer sizes (these net.* keys apply to TrueNAS SCALE, which is Linux-based; TrueNAS CORE runs FreeBSD and uses different sysctl names):
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem='4096 87380 134217728'
sysctl -w net.ipv4.tcp_wmem='4096 65536 134217728'
Using Ethernet Instead of Wi-Fi
For maximum performance, connect the client device directly to Ethernet:
- Use Cat 5e or better cable
- Connect to the same switch as the Proxmox host
- Disable Wi-Fi for performance testing
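To separate a network bottleneck from a disk one, measure raw TCP throughput first, e.g. with iperf3 (run iperf3 -s inside the TrueNAS VM and iperf3 -c <vm_ip> on the client). A small helper (bps_to_mbytes is a hypothetical name) converts the reported bits/s into the rough MB/s ceiling SMB could reach on that link:

```shell
# Sketch: convert an iperf3 throughput figure (bits per second) into
# the approximate MB/s ceiling for SMB on that link (1 MB = 1e6 bytes,
# ignoring protocol overhead, so real SMB speeds land a bit lower).
bps_to_mbytes() {
  echo $(( $1 / 8000000 ))
}

# e.g. a clean 1 Gbit link:
#   bps_to_mbytes 1000000000   -> 125 (MB/s ceiling)
```

If iperf3 already tops out near 30 MB/s, the Wi-Fi path is the bottleneck and no USB or SMB tuning will help until the client is on Ethernet.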
SMB Optimization in TrueNAS
SMB Version Configuration
Use SMB 3.1.1 for maximum performance:
- TrueNAS generates smb.conf itself and overwrites manual edits, so enter these as Auxiliary Parameters in the SMB service settings (on a plain Samba host you would edit /etc/samba/smb.conf directly):
[global]
server min protocol = SMB3
server max protocol = SMB3_11
smb2 leases = yes
smb2 max credits = 8192
server multi channel support = yes
- Note that multi-channel only helps with multiple NICs or an RSS-capable adapter
- Restart the SMB service from the TrueNAS web UI, or on a plain Samba host:
systemctl restart smbd
Disk Performance Optimization
- Rely on ZFS's own ARC for read caching; the vfs.read_max_bufspace-style keys sometimes suggested are not real sysctls
- Optimize ZFS parameters (if using ZFS). On TrueNAS CORE (FreeBSD):
sysctl -w vfs.zfs.arc_max=8589934592  # 8 GB; size this to roughly half the VM's RAM
sysctl -w vfs.zfs.prefetch_disable=0
On TrueNAS SCALE (Linux), the equivalents are OpenZFS module parameters:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
echo 0 > /sys/module/zfs/parameters/zfs_prefetch_disable
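Hard-coding byte values like 8589934592 is error-prone; a sketch of a small conversion helper (gib_to_bytes is a hypothetical name) that keeps ARC sizing readable:

```shell
# Sketch: convert a GiB figure to the byte value the ARC cap expects,
# instead of hand-typing numbers like 8589934592.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

# Feed the result to the ARC limit, e.g. on TrueNAS SCALE:
#   echo "$(gib_to_bytes 8)" > /sys/module/zfs/parameters/zfs_arc_max
```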
SMB Transfer Size Configuration
Enable TCP MTU probing and raise the ancillary buffer limit (these tune the TCP layer beneath SMB rather than the SMB packet size itself):
sysctl -w net.ipv4.tcp_mtu_probing=2
sysctl -w net.core.optmem_max=65536
Additional Performance Settings
Disk Caching Configuration
- Enable write-back caching on the passed-through disk in Proxmox (faster, but risks data loss on power failure, so prefer it only with a UPS):
qm set 100 -scsi1 /dev/disk/by-id/ata-[serial_number],cache=writeback
- Attach disks through Proxmox's own per-disk options rather than raw -device arguments via qm set -args: each -args call replaces the previous value, and such drives bypass Proxmox's disk management
CPU and Memory Optimization
- Ensure the VM has sufficient vCPU:
qm set 100 -cores 4
- Give each disk a dedicated I/O thread (requires the VirtIO SCSI single controller):
qm set 100 -scsihw virtio-scsi-single
qm set 100 -scsi1 /dev/disk/by-id/ata-[serial_number],iothread=1
Alternative Storage Configurations
Consider alternative approaches to storage organization:
- iSCSI instead of USB passthrough: create a zvol-backed iSCSI target in TrueNAS and let clients mount the block device over the network, taking SMB out of the path
- NFS export instead of SMB: share /mnt/data through the TrueNAS NFS service, which is often faster than SMB for Linux and macOS clients
- Direct disk access from the Proxmox host, preferably by stable by-id path (device names like /dev/sdb can change between boots):
qm set 100 -scsi2 /dev/disk/by-id/ata-[serial_number]
Monitoring and Diagnosis
Performance Monitoring Tools
Use the following commands for performance monitoring:
- I/O monitoring in Proxmox:
iotop -oP
iostat -x 1
- Network monitoring:
nethogs
iftop -nNP
- Monitoring in VM:
vmstat 1
iostat -x 1
sar -n DEV 1
USB Configuration Verification
Check the current USB device configuration:
qm showconfig 100 | grep -i usb
ls -la /dev/disk/by-id/ata*
Performance Testing
Use the following tools for testing:
- Read/write speed in VM:
dd if=/dev/zero of=/mnt/data/testfile bs=1M count=1024 oflag=direct
dd if=/mnt/data/testfile of=/dev/null bs=1M iflag=direct
- SMB performance testing:
dd if=/dev/urandom of=testfile bs=1M count=1024
smbclient //<server_ip>/share -U <username> -c 'put testfile'
smbclient //<server_ip>/share -U <username> -c 'get testfile /tmp/testfile'
- Network packet analysis:
tcpdump -i any -s 0 -w capture.pcap 'port 445'
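To turn a timed transfer into a number comparable with the 10-30 MB/s figure, record start and end times with date +%s around the transfer and divide. A sketch (mb_per_s is a hypothetical helper name):

```shell
# Sketch: compute MB/s from a timed transfer, e.g. after uploading a
# 1 GiB test file with smbclient and recording start/end via date +%s.
mb_per_s() {
  # $1 = bytes transferred, $2 = elapsed whole seconds (must be >= 1)
  echo $(( $1 / $2 / 1000000 ))
}

# Example: 1 GiB moved in 10 s
#   mb_per_s $((1024 * 1024 * 1024)) 10   -> 107 (MB/s)
```

Anything near 125 MB/s means the 1 Gbit link is saturated and further gains require faster networking, not more tuning.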
Conclusion
To improve USB 3.0 performance for SMB transfers in Proxmox with a TrueNAS virtual machine, perform the following steps:
- Switch to direct USB passthrough instead of SCSI passthrough for lower latency
- Optimize network settings for both Proxmox and TrueNAS VM with increased TCP buffers
- Use SMB 3.1.1 protocol with multi-channel support enabled
- Check physical USB device connection and use rear ports instead of front ones
- Enable I/O caching at all system levels
- Use Ethernet instead of Wi-Fi for testing maximum performance
The main suspected bottlenecks: issues with USB passthrough via SCSI, network stack limitations, and suboptimal SMB settings. Start by checking the physical connection of USB devices and switching to direct USB passthrough, which should provide the greatest performance boost.