How can I improve USB 3.0 speed in a NAS running Proxmox and TrueNAS?
I’ve set up a home server using Proxmox as the main system and TrueNAS as a virtual machine. I have an external HDD enclosure with two drives (8TB and 6TB) connected to a mini PC via USB. When testing on Windows, I was getting write speeds of 150-180 MB/s, which completely satisfied me.
In TrueNAS, I created a pool from the two drives connected via USB. Streaming movies through Jellyfin works perfectly, but when accessing files via SMB, I get very low speeds - only 10-30 MB/s from macOS and Windows devices.
My requirements:
- Home server with a shared disk pool
- Ability to stream movies to devices in the home network (working perfectly)
- High-speed file access via SMB (the problem)
Network connection:
- Mini PC connected to Wi-Fi repeater via cable
- Repeater speed: 2.5 Gbit/s
- Cable speed: 1 Gbit/s
- Router speed: ~1.2 Gbit/s
- For testing, I used Wi-Fi with theoretical speed of 1 Gbit/s (~125 MB/s)
How can I achieve proper USB 3.0 speed in this configuration? Why are the SMB speeds so low despite the hardware supporting much higher speeds?
Low SMB Access Speed to USB 3.0 Storage in TrueNAS on Proxmox
Low SMB access speed to USB 3.0 storage in TrueNAS on Proxmox is usually caused by several factors: USB controller emulation in the virtual machine, suboptimal SMB parameters, or limits of the USB connection itself. Getting close to the 150-180 MB/s the drives deliver locally requires passing the USB hardware through to the VM without emulation, tuning SMB, and, crucially, a network path faster than 1 Gbit/s, since a plain gigabit link caps SMB payload at roughly 117 MB/s no matter how fast the disks are.
Contents
- Problem Analysis and Main Causes of Low Speed
- USB Controller Optimization in Proxmox VM
- SMB Configuration in TrueNAS for Maximum Performance
- Filesystem and Disk Pool Optimization
- Network Configuration and Performance
- Monitoring and Performance Diagnostics
- Practical Recommendations and Step-by-Step Instructions
Problem Analysis and Main Causes of Low Speed
Low SMB access speeds of 10-30 MB/s when expecting 150-180 MB/s indicate several possible issues in your configuration. Before moving to solutions, it’s important to understand the main causes of this performance degradation.
Main factors affecting USB 3.0 speed in a virtualized environment:
- USB controller emulation - Proxmox uses standard USB emulation by default, which creates additional overhead
- Lack of direct PCIe access - without passthrough, the USB controller works through a virtual layer
- Incorrect SMB parameter configuration - default TrueNAS settings are not always optimal for high speeds
- Filesystem issues - ZFS may not work optimally without proper caching configuration
- Network limitations - even with good hardware, network settings can create bottlenecks
Important: Comparing performance between direct disk access in Windows (150-180 MB/s) and access through TrueNAS VM (10-30 MB/s) clearly shows that the problem lies in virtualization and settings, not in the hardware itself.
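Before tuning anything, it is worth computing the hard ceiling the network imposes; the ~94% payload-efficiency figure below is a rough rule of thumb for Ethernet + TCP/IP + SMB framing, not an exact constant:

```shell
# A 1 Gbit/s link carries at most 125 MB/s of raw bits; after
# Ethernet/IP/TCP/SMB overhead roughly 94% remains as payload.
link_mbps=1000
raw_mbs=$((link_mbps / 8))           # 125 MB/s on the wire
usable_mbs=$((raw_mbs * 94 / 100))   # ~117 MB/s of actual file data
echo "raw=${raw_mbs}MB/s usable=${usable_mbs}MB/s"
```

In other words, even a perfectly tuned stack cannot show 150-180 MB/s over a gigabit path; those numbers require 2.5 GbE end to end.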
USB Controller Optimization in Proxmox VM
To achieve maximum USB 3.0 performance in a TrueNAS virtual machine, you need to properly configure the USB controller in Proxmox.
USB Passthrough Configuration
Step 1: Identify the USB Controller
lsusb -t
This command will show the USB device hierarchy. You need to find the root USB controller to which your external HDD enclosure is connected.
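The passthrough steps below need the device's identifiers; one way to find them on the Proxmox host is sketched here (the grep pattern is only an example, match it to your enclosure's bridge chip):

```shell
# Show the bus/port topology (used for port-based passthrough)
lsusb -t

# Show vendor:product IDs; JMicron and ASMedia are common
# USB-SATA bridge chip vendors, adjust the pattern to your hardware
lsusb | grep -iE 'jmicron|asmedia|sata|bridge'
```

Note the ID pair (vendor:product) and the bus/port position; either can be used for passthrough.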
Step 2: Add USB Device to VM Configuration
Edit the TrueNAS virtual machine configuration file:
nano /etc/pve/qemu-server/100.conf # 100 is your VM ID
Add a USB passthrough entry. Proxmox's native syntax is preferable to raw QEMU args (the old -usbdevice option is deprecated):
usb0: host=1-1,usb3=1
Here 1-1 means bus 1, port 1, as reported by lsusb -t. Alternatively, reference the device by vendor:product ID:
usb0: host=152d:0578,usb3=1
(152d:0578 is only an example JMicron bridge ID; substitute the ID lsusb reports for your enclosure.) The usb3=1 flag attaches the device to an emulated xHCI (USB 3.0) controller; without it the device lands on emulated USB 2.0 and is capped at roughly 40 MB/s.
Step 3: Pass Through the Entire USB Controller (Fastest Option)
For near-native speed, pass the whole PCIe xHCI controller through to the VM instead of a single device. This requires IOMMU (VT-d/AMD-Vi) enabled and the controller in its own IOMMU group:
hostpci0: 0000:00:14.0
Find the controller's PCI address with lspci | grep -i usb, and note that every device on that controller disappears from the host.
Additional Parameters for Performance Improvement
Give the VM the host CPU model to avoid instruction emulation overhead (the hv_* flags seen in some guides are Hyper-V enlightenments for Windows guests and do nothing useful for a TrueNAS guest):
cpu: host
KVM acceleration is enabled by default, and the q35 machine type is required if you use PCIe passthrough.
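Rather than editing the config file by hand, the passthrough and CPU settings can also be applied with qm from the Proxmox shell; VM ID 100 and the port value 1-1 are placeholders for your system:

```shell
# Attach the enclosure to an emulated USB 3.0 (xHCI) controller;
# "1-1" means bus 1, port 1 as reported by lsusb -t
qm set 100 --usb0 host=1-1,usb3=1

# Expose the host CPU model to cut virtualization overhead
qm set 100 --cpu host

# Restart the VM so the guest re-detects its hardware
qm shutdown 100 && qm start 100
```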
SMB Configuration in TrueNAS for Maximum Performance
After optimizing the USB controller, you need to properly configure the SMB service to achieve high data transfer speeds.
SMB Parameter Optimization
Step 1: Add SMB Auxiliary Parameters
TrueNAS generates its Samba configuration automatically and overwrites manual edits on every service restart, so do not edit smb4.conf (CORE) or smb.conf (SCALE) by hand. Instead, add the parameters below as Auxiliary Parameters in the SMB service settings of the web UI (in recent SCALE releases this field is only exposed through the TrueNAS CLI).
Add or modify the following parameters. Several options that circulate in older guides are omitted deliberately: read raw, write raw, and max xmit affect only the obsolete SMB1 protocol, write cache size was removed in Samba 4.12, and client min/max protocol are client-side settings with no effect in a server's global section:
[global]
server min protocol = SMB3
server max protocol = SMB3_11
deadtime = 15
max connections = 1000
local master = no
domain master = no
preferred master = no
wins support = no
aio read size = 16384
aio write size = 16384
use sendfile = yes
oplocks = yes
level2 oplocks = yes
strict locking = no
socket options = TCP_NODELAY
Note that modern Samba (4.13+) enables asynchronous I/O by default, and current kernels autotune TCP buffers better than fixed SO_RCVBUF/SO_SNDBUF values, so treat the aio and socket options lines as optional experiments rather than guaranteed wins.
Step 2: Configure macOS Compatibility (fruit) Parameters
The fruit VFS module mainly benefits macOS clients; it must be combined with streams_xattr, and fruit has to appear before streams_xattr in the list:
[global]
vfs objects = fruit streams_xattr
fruit:metadata = stream
fruit:model = MacSamba
fruit:veto_appledouble = no
fruit:posix_rename = yes
fruit:zero_file_id = yes
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes
inherit permissions = yes
inherit acls = yes
store dos attributes = yes
Two parameters from older guides are dropped here: lease directory does not exist in Samba, and disabling smb2 leases usually costs more performance than it gains on current versions.
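Whichever parameter set you end up with, validate the merged configuration before restarting the service; testparm ships with Samba and flags unknown or deprecated options:

```shell
# Parse the active Samba configuration; unknown parameters produce
# "Unknown parameter encountered" warnings, and -s skips the
# interactive prompt so the output can be piped or logged
testparm -s
```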
SMB Performance Optimization
Step 3: Configure Kernel Parameters
On TrueNAS CORE (FreeBSD), add these as sysctl-type entries under System → Tunables in the web UI; they are runtime sysctls rather than /boot/loader.conf boot tunables, and TrueNAS SCALE runs Linux with entirely different parameter names:
kern.ipc.somaxconn=32768
kern.maxfilesperproc=32768
kern.maxfiles=200000
net.inet.tcp.recvspace=65536
net.inet.tcp.sendspace=65536
net.inet.tcp.mssdflt=1460
Tunables added through the web UI take effect immediately and persist across reboots and updates.
Filesystem and Disk Pool Optimization
Proper ZFS and disk pool configuration is critical for achieving maximum USB 3.0 storage performance.
ZFS Pool Parameter Optimization
Step 1: Check Current Pool Settings
zpool get all storage_pool
Step 2: Optimize Settings for USB Disks
For USB 3.0 disks, the following parameters are recommended when creating a pool. Note that -o sets pool properties while -O sets dataset properties such as compression; da0/da1 are FreeBSD device names, and mirroring an 8TB drive with a 6TB drive limits the mirror to 6TB:
zpool create -o ashift=12 -o autotrim=on -O compression=lz4 storage_pool mirror /dev/da0 /dev/da1
For an existing pool, ashift is fixed at creation time and cannot be changed afterwards; the other settings can:
zpool set autotrim=on storage_pool
zfs set compression=lz4 storage_pool
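A quick way to confirm the settings took effect (storage_pool is the example pool name used throughout):

```shell
# ashift and autotrim are pool-level properties
zpool get ashift,autotrim storage_pool

# compression is a dataset property, inherited by child datasets
zfs get compression storage_pool
```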
Caching and Buffer Optimization
Step 3: Configure ZFS Cache
# ARC cache size (a common starting point is 50-70% of available RAM;
# the tunables take byte values, not suffixes like "4G")
echo 'vfs.zfs.arc_max=4294967296' >> /boot/loader.conf   # 4 GiB
echo 'vfs.zfs.arc_min=536870912' >> /boot/loader.conf    # 512 MiB
# Add L2ARC only if you have a spare SSD (ada0 is an example device name)
zpool add storage_pool cache /dev/ada0
On TrueNAS, prefer adding these through System → Tunables; direct edits to /boot/loader.conf can be overwritten by updates.
Step 4: Optimize I/O Parameters
Add as loader-type tunables (these are the FreeBSD/TrueNAS CORE names; the *_max_inflight and min_max_* names that circulate in some guides do not exist in OpenZFS):
vfs.zfs.vdev.async_write_max_active=8
vfs.zfs.vdev.async_read_max_active=8
vfs.zfs.vdev.sync_write_max_active=10
vfs.zfs.vdev.sync_read_max_active=10
For USB-attached disks, lowering these queue depths rather than raising them is often what helps, because USB-SATA bridges handle deep command queues poorly.
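On TrueNAS SCALE the OS is Linux, so the equivalent OpenZFS knobs live under /sys/module/zfs/parameters with zfs_-prefixed names; a sketch, assuming the same queue-depth values as above:

```shell
# Runtime-tunable OpenZFS module parameters; values reset at reboot
# unless persisted via a post-init script or module options file
echo 8  > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
echo 10 > /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active

# Inspect the current queue-depth settings
grep . /sys/module/zfs/parameters/zfs_vdev_*_max_active
```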
Network Configuration and Performance
Even with good hardware, incorrect network settings can significantly reduce SMB access performance.
Kernel Network Parameter Optimization
Step 1: Configure TCP Parameters
Add to /etc/sysctl.conf (on TrueNAS, prefer System → Tunables so the values survive updates):
# TCP optimization
net.inet.tcp.delayed_ack=0
net.inet.tcp.recvspace=65536
net.inet.tcp.sendspace=65536
net.inet.tcp.mssdflt=1460
net.inet.tcp.nolocaltimewait=1
# Network buffers
kern.ipc.maxsockbuf=2097152
net.inet.tcp.sendbuf_max=2097152
net.inet.tcp.recvbuf_max=2097152
# Local (Unix-socket) buffers
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536
Apply the changes without rebooting (FreeBSD):
service sysctl restart
Network Adapter Configuration in VM
Step 2: Optimize Network Card in Proxmox
For the TrueNAS VM, use the paravirtualized virtio network adapter; in the VM configuration this looks like the following (the MAC address is whatever Proxmox generated, and queues enables multiqueue for multi-core guests):
net0: virtio=52:54:00:12:34:56,bridge=vmbr0,queues=4
Step 3: Configure Jumbo Frames (if supported)
If all network hardware supports Jumbo Frames (9000 MTU):
# On TrueNAS CORE (FreeBSD guest; igb0 is an example interface name)
ifconfig igb0 mtu 9000
# On the Proxmox host (Linux - use ip, not the deprecated ifconfig)
ip link set dev vmbr0 mtu 9000
Every hop must carry the larger MTU - guest NIC, bridge, physical NIC, repeater, and clients; a single 1500-byte hop negates the benefit.
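To verify jumbo frames actually work end to end, send a don't-fragment ping sized to fill the MTU (8972 bytes of payload plus 28 bytes of IP/ICMP headers equals 9000; client_ip is a placeholder):

```shell
# Linux (Proxmox host or SCALE): -M do forbids fragmentation
ping -M do -s 8972 -c 3 client_ip

# FreeBSD (TrueNAS CORE): -D sets the don't-fragment bit
ping -D -s 8972 -c 3 client_ip
```

If any hop is still at MTU 1500, the ping fails with a "message too long" error instead of silently fragmenting.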
Monitoring and Performance Diagnostics
To identify bottlenecks and optimize performance, it’s necessary to regularly monitor the system.
Monitoring Tools
Step 1: ZFS Performance Monitoring
# Current pool statistics with latency breakdown, refreshed every second
zpool iostat -ly storage_pool 1
# ARC cache statistics (vmstat -m shows kernel allocations, not the ARC)
sysctl kstat.zfs.misc.arcstats | grep -E 'hits|misses|c_max|size'
# SMB connection statistics
smbstatus -b
Step 2: Network Performance Analysis
# Monitor network activity
netstat -i
# Analyze TCP connections
tcpdump -i igb0 -n 'port 445'
# Test network speed
iperf3 -c server_IP
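iperf3 isolates the network from the disks by measuring raw TCP throughput between two hosts; on a healthy gigabit path expect about 940 Mbit/s:

```shell
# On the NAS side, start a listener
iperf3 -s

# From a client (nas_ip is a placeholder); -R reverses direction
# so both upload and download paths get tested
iperf3 -c nas_ip -t 10
iperf3 -c nas_ip -t 10 -R
```

If iperf3 already reports far less than ~940 Mbit/s, fix the network before touching SMB or ZFS settings.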
Step 3: Performance Logging
Enable extended logging for diagnostics:
# In smb.conf (the old syslog parameter is deprecated in modern Samba;
# use logging instead)
[global]
log level = 2
logging = file@2 syslog@2
# In syslog.conf
local0.* /var/log/samba.log
Log level 2 and above noticeably slows SMB, so turn it back down once you have finished diagnosing.
Practical Recommendations and Step-by-Step Instructions
Comprehensive Optimization Plan
Step 1: Check Basic Configuration
- Ensure USB disks are properly identified in TrueNAS (CORE/FreeBSD):
camcontrol devlist
- Check pool status:
zpool status
zpool list
- Test speed directly through ZFS, bypassing SMB (oflag=direct is a GNU dd option, available on SCALE/Linux but not on CORE/FreeBSD; without it the result is cache-assisted):
dd if=/dev/zero of=/mnt/storage_pool/testfile bs=1M count=1024 oflag=direct
Step 2: Optimize USB Controller
- Identify the exact USB controller (usbconfig is the FreeBSD tool, run inside TrueNAS CORE; on the Linux Proxmox host use lsusb -t):
usbconfig dump_all_desc
- Configure passthrough in the Proxmox VM configuration:
usb0: host=1-1,usb3=1
- Reboot the VM and check disk availability.
Step 3: Optimize SMB and Network
- Apply the optimized smb.conf
- Configure kernel parameters in sysctl.conf
- Check network settings and enable Jumbo Frames if necessary
- Reboot the system to apply all changes
Step 4: Performance Testing
Use the following commands for testing:
# Read speed test (iflag=direct/oflag=direct are GNU dd options - Linux/SCALE only)
dd if=/mnt/storage_pool/testfile of=/dev/null bs=1M count=1024 iflag=direct
# Write speed test
dd if=/dev/zero of=/mnt/storage_pool/testfile bs=1M count=1024 oflag=direct
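dd is a crude sequential benchmark; where fio is available (it can be installed on SCALE and many other systems), it produces more realistic numbers. The job parameters here are illustrative, not prescriptive:

```shell
# 1 GiB sequential write with direct I/O, resembling a large SMB copy
fio --name=seqwrite --filename=/mnt/storage_pool/fiotest \
    --rw=write --bs=1M --size=1G --direct=1 --numjobs=1 --group_reporting

# Remove the test file afterwards
rm /mnt/storage_pool/fiotest
```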
# Test via SMB
smbclient //nas_ip/share -U username -c "get testfile /tmp/testfile"
Expected Results
After proper configuration, you should achieve roughly the following:
- Read/write through ZFS (local): 120-160 MB/s
- SMB access over the 1 Gbit/s link: 100-115 MB/s; the gigabit cable, not USB 3.0, is the ceiling here, since ~117 MB/s of SMB payload is the practical maximum for gigabit
- Latency: < 5 ms for small files
- Reaching the drives' full 150-180 MB/s over SMB requires the entire network path (NIC, cable, repeater, client) to run at 2.5 Gbit/s
Possible Issues and Solutions
If speed remains low:
- Check the USB cable (use a short, high-quality cable)
- Ensure the drives are running in UASP mode
- Try a different USB port (preferably a rear port on the motherboard)
If there are SMB errors:
- Check the logs: tail -f /var/log/samba.log
- Ensure client-side antivirus is not scanning the transfers
- Check file access permissions
If the system is unstable:
- Reduce ARC cache size
- Disable unnecessary SMB features
- Check disk temperatures
Conclusion
To achieve maximum USB 3.0 speed in your Proxmox + TrueNAS configuration, you need to take a comprehensive approach to optimizing several components:
- Configure USB passthrough in Proxmox for direct controller access
- Optimize SMB parameters with advanced settings for high speeds
- Configure ZFS pool with correct parameters for USB disks
- Optimize the network considering SMB protocol specifics
- Regularly monitor performance to identify bottlenecks
With proper configuration, you can achieve SMB speeds of 100-115 MB/s, which saturates the gigabit link; closing the remaining gap to the drives' local 150-180 MB/s requires moving the whole network path to 2.5 Gbit/s. Inside the server, the key factor is direct access to the USB hardware (usb3=1 device passthrough or full xHCI controller passthrough), which avoids the significant overhead of emulated USB in the virtualized environment.