SocketTimeoutException When Downloading Large Files with Vaadin: Inconsistent Behavior Across Browsers
Problem Description
When using Vaadin with Spring-Boot to download large files via the DownloadHandler.fromInputStream API, downloads are being interrupted after some time. The behavior is inconsistent across different browsers:
- Firefox: Download stops after 30 seconds
- Chromium-based browsers (Chrome, Brave): Download stops after approximately 1GB of data
Technical Environment
- Application: Vaadin 24.9.3 with Spring-Boot
- Java versions: 25/21
- Tomcat versions: 10.1.48 (also tested 10.1.46)
- Browser versions: Firefox 145.0b6, Brave 1.83.120 (Chromium: 141.0.7390.122)
- Deployment: Debian server behind nginx proxy as a WAR file
Configuration Details
Nginx proxy timeout settings (all set to high values):
send_timeout 300;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
Tomcat configuration:
server:
  tomcat:
    connection-timeout: 900000  # default is 60000
Vaadin Download Implementation
hiddenDownloadAnchor.setHrefAndDownload(DownloadHandler.fromInputStream(
        downloadEvent -> new DownloadResponse(
                new FileInputStream(file),
                name,
                type,
                length)));
Error Details
- Firefox: Network inspector shows request status 200, but after 30 seconds the request is cancelled with NS_BINDING_ABORTED
- Chromium-based browsers: Download stops after 1GB (taking approximately 5 minutes)
- Stack trace shows java.net.SocketTimeoutException at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1410)
- The timeout variable from getWriteTimeout() is larger than 30 seconds (default is 60)
Additional Observations
- Uploading large files using Vaadin’s Upload component works without issues
- The problem occurs only during downloads
- Increasing Firefox timeout settings had no effect
- Different Java/Tomcat versions were tested with the same result
Expected Behavior
Downloading large files should complete without timing out, regardless of browser type or file size.
Question
What causes the inconsistent SocketTimeoutException behavior when downloading large files with Vaadin’s DownloadHandler, and how can this issue be resolved to ensure reliable downloads across different browsers?
The inconsistent SocketTimeoutException behavior when downloading large files with Vaadin’s DownloadHandler stems from a combination of browser timeout policies, connection handling mechanisms, and Vaadin’s streaming architecture. Firefox’s aggressive 30-second timeout and Chromium’s 1GB limit reflect different browser approaches to long-running downloads, while the SocketTimeoutException indicates underlying connection issues that persist despite seemingly adequate timeout configurations.
Contents
- Understanding Browser Timeout Behaviors
- Root Causes of the Download Timeout Issues
- Solutions and Workarounds
- Best Practices for Large File Downloads
- Browser-Specific Configuration
Understanding Browser Timeout Behaviors
The inconsistent behavior across browsers can be attributed to fundamentally different approaches to handling long-running downloads:
Firefox’s 30-Second Timeout
Firefox aborts downloads that appear to be stalled, reporting NS_BINDING_ABORTED in the network inspector. NS_BINDING_ABORTED is a generic cancellation status rather than a single documented timeout, so when it appears mid-download it usually means the connection stopped delivering data long enough for Firefox to give up. That is consistent with the server-side SocketTimeoutException rather than a purely client-side limit.
Chromium’s 1GB Limit
Chromium-based browsers stopping after approximately 1GB is often attributed to buffer management, but the figure may be incidental: at the observed throughput, 1GB corresponds to roughly five minutes of transfer, so a time-based limit somewhere in the proxy or server chain is at least as plausible as a fixed browser-side data limit.
Root Causes of the Download Timeout Issues
Vaadin DownloadHandler Architecture Limitations
The DownloadHandler.fromInputStream API in Vaadin 24.9.3 uses a streaming approach that creates several potential bottlenecks:
- Synchronous Stream Processing: The download event handler processes the input stream synchronously, which can cause timeouts for very large files
- Push Connection Dependencies: As mentioned in the Vaadin documentation, long-polling push connections can be aborted by proxies, affecting download reliability
- Memory Buffering: Despite streaming, the implementation may still buffer data in memory before sending to the browser
Server-Side Connection Management
The java.net.SocketTimeoutException at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite indicates issues at the socket level:
// The timeout variable from getWriteTimeout() is larger than 30 seconds (default is 60)
// But the actual timeout occurs before this value is reached
This suggests that either:
- The timeout isn’t being properly configured in the connection pipeline
- Intermediate proxies (like nginx) are applying their own timeout policies
- The Tomcat NIO implementation has internal timeout mechanisms
Nginx Proxy Interference
Despite your nginx timeout settings being configured to high values (600 seconds), there might be other nginx directives affecting downloads:
# These settings might need additional configuration
proxy_buffering off; # Disable buffering for large files
proxy_request_buffering off;
chunked_transfer_encoding on;
Solutions and Workarounds
1. Implement Chunked Transfer Encoding
Modify your nginx configuration to enable chunked transfer encoding:
location /download/ {
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_http_version 1.1;         # required for keep-alive to the upstream
    proxy_set_header Connection ""; # let nginx manage the upstream connection
    # Note: do not set Transfer-Encoding via proxy_set_header; it is a
    # hop-by-hop header that nginx applies itself when buffering is off.

    # Keep your existing timeout settings
    send_timeout 300;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
}
2. Use Vaadin’s StreamResource with Progress Indicators
Instead of DownloadHandler.fromInputStream, use StreamResource with proper progress tracking:
StreamResource resource = new StreamResource(filename, () -> {
    try {
        return new FileInputStream(file);
    } catch (FileNotFoundException e) {
        throw new UncheckedIOException("File not found", e);
    }
});
resource.setContentType(contentType); // Flow uses setContentType, not setMIMEType
resource.setCacheTime(0); // No caching for downloads

StreamRegistration registration =
        ui.getSession().getResourceRegistry().registerResource(resource);
String resourceUrl = registration.getResourceUri().toString();
3. Implement Server-Side Streaming with Write Progress Monitoring
Create a custom download handler with progress monitoring:
// Assumes the Vaadin 24.8+ DownloadEvent API
// (setFileName/setContentType/setContentLength/getOutputStream)
DownloadHandler downloadHandler = downloadEvent -> {
    downloadEvent.setFileName(filename);
    downloadEvent.setContentType(contentType);
    downloadEvent.setContentLength(file.length());
    try (InputStream inputStream = new FileInputStream(file);
         OutputStream outputStream = downloadEvent.getOutputStream()) {
        byte[] buffer = new byte[8192]; // 8KB buffer
        int bytesRead;
        long sinceLastFlush = 0;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
            sinceLastFlush += bytesRead;
            // Periodically flush so data keeps moving toward the client
            if (sinceLastFlush >= 1024 * 1024) { // Flush roughly every 1MB
                outputStream.flush();
                sinceLastFlush = 0;
            }
        }
    } catch (IOException e) {
        throw new UncheckedIOException("Download failed", e);
    }
};
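The copy-and-flush loop is independent of Vaadin and can be factored into a plain helper that is testable on its own. The class and method names below are illustrative, not part of any API; the flush-counter approach avoids the pitfall of `totalBytesRead % chunk == 0`, which rarely triggers when reads return arbitrary sizes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class FlushingCopy {

    /**
     * Copies all bytes from in to out, flushing whenever at least
     * flushThreshold bytes have accumulated since the last flush,
     * so data keeps moving toward the client during long downloads.
     * Returns the total number of bytes copied.
     */
    public static long copy(InputStream in, OutputStream out, long flushThreshold)
            throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        long sinceFlush = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
            sinceFlush += read;
            if (sinceFlush >= flushThreshold) {
                out.flush();
                sinceFlush = 0;
            }
        }
        out.flush(); // Final flush for any trailing bytes
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[3 * 1024 * 1024]; // 3MB of zeroes
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink, 1024 * 1024);
        System.out.println(copied); // 3145728
        System.out.println(sink.size() == data.length); // true
    }
}
```

In the download handler, the same helper would be called with the file input stream and `downloadEvent.getOutputStream()`.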
4. Browser-Specific Fixes
For Firefox
Firefox exposes no request header that extends its stall detection, so the practical fix is to keep bytes flowing: flush the response regularly and ensure nothing in the chain (nginx buffering, the Tomcat write timeout) pauses the stream for long. Declaring an accurate length also lets Firefox distinguish a slow download from a stalled one:
// In your download handler: declare the full length up front
VaadinResponse response = downloadEvent.getResponse();
response.setHeader("Content-Length", String.valueOf(file.length()));
For Chromium
Implement resume functionality using HTTP Range headers:
// Check whether the browser sent a Range request (resuming a download)
String rangeHeader = downloadEvent.getRequest().getHeader("Range");
VaadinResponse response = downloadEvent.getResponse();
response.setHeader("Accept-Ranges", "bytes"); // advertise resume support
if (rangeHeader != null) {
    // Parse the range and send only the requested bytes
    response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
}
Best Practices for Large File Downloads
1. Use Dedicated Download Servlet
Create a separate servlet for large file downloads that bypasses Vaadin’s normal request processing:
@WebServlet("/download/*")
public class LargeFileDownloadServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String fileName = request.getPathInfo().substring(1);
        File file = new File("/path/to/files", fileName);
        if (file.exists()) {
            response.setContentType(getServletContext().getMimeType(fileName));
            response.setHeader("Content-Disposition",
                    "attachment; filename=\"" + fileName + "\"");
            // setContentLengthLong avoids the int overflow of
            // setContentLength for files larger than 2GB
            response.setContentLengthLong(file.length());
            Files.copy(file.toPath(), response.getOutputStream());
        } else {
            response.sendError(HttpServletResponse.SC_NOT_FOUND);
        }
    }
}
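One detail worth handling in such a servlet: quoting the raw filename in Content-Disposition breaks for names containing quotes or non-ASCII characters. A sketch of a helper (class name illustrative, not any framework API) that adds an RFC 5987 `filename*` parameter alongside an ASCII fallback; `URLEncoder` is an approximation of RFC 5987 percent-encoding that escapes a few extra characters harmlessly, except that `+` must be mapped back to `%20`:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ContentDisposition {

    /**
     * Builds a Content-Disposition value with a plain ASCII fallback
     * filename plus an RFC 5987 filename* parameter for the real name.
     */
    public static String attachment(String fileName) {
        // Fallback: replace non-printable-ASCII with "_" and quotes with "'"
        String fallback = fileName.replaceAll("[^\\x20-\\x7E]", "_")
                .replace("\"", "'");
        // filename*: UTF-8 percent-encoding; URLEncoder emits "+" for
        // spaces, which must become "%20" in a header parameter
        String encoded = URLEncoder.encode(fileName, StandardCharsets.UTF_8)
                .replace("+", "%20");
        return "attachment; filename=\"" + fallback
                + "\"; filename*=UTF-8''" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(attachment("report.pdf"));
        // -> attachment; filename="report.pdf"; filename*=UTF-8''report.pdf
        System.out.println(attachment("bericht-größe.pdf"));
    }
}
```

The servlet would then call `response.setHeader("Content-Disposition", ContentDisposition.attachment(fileName))` instead of concatenating the name directly.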
2. Implement Asynchronous Processing
For extremely large files, implement asynchronous processing:
@Async
public CompletableFuture<Void> downloadLargeFile(File file, String filename,
        String contentType, HttpServletResponse response) {
    // Note: the servlet must call request.startAsync() before handing the
    // response to this method; otherwise the container may commit and close
    // the response as soon as doGet returns.
    try (InputStream inputStream = new FileInputStream(file);
         OutputStream outputStream = response.getOutputStream()) {
        byte[] buffer = new byte[16384]; // 16KB buffer
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
            outputStream.flush();
        }
        return CompletableFuture.completedFuture(null);
    } catch (IOException e) {
        throw new CompletionException(e);
    }
}
3. Connection Pooling and Timeout Configuration
Optimize your Tomcat connection pool settings:
server:
  tomcat:
    threads:
      max: 200
      min-spare: 10
    connection-timeout: 900000
    max-connections: 8192
    accept-count: 100
    max-http-form-post-size: -1  # Unlimited form POST size (-1 means no limit)
    max-swallow-size: -1         # Unlimited swallowed request body (-1 means no limit)
Browser-Specific Configuration
Firefox Configuration
Add these headers to prevent premature termination:
response.setHeader("Connection", "keep-alive");
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
response.setHeader("Pragma", "no-cache");
response.setHeader("Expires", "0");
response.setHeader("X-Content-Type-Options", "nosniff");
Chromium Configuration
For Chromium-based browsers, implement proper Range header support:
// Check for a Range header (assumes the simple "bytes=start-end" form;
// suffix ranges like "bytes=-500" and multi-range requests are not handled)
String rangeHeader = request.getHeader("Range");
if (rangeHeader != null && rangeHeader.startsWith("bytes=")) {
    String[] ranges = rangeHeader.substring(6).split("-");
    long start = Long.parseLong(ranges[0]);
    long end = ranges.length > 1 && !ranges[1].isEmpty()
            ? Long.parseLong(ranges[1]) : file.length() - 1;

    response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
    response.setHeader("Content-Range",
            "bytes " + start + "-" + end + "/" + file.length());
    // setContentLengthLong avoids int overflow for ranges beyond 2GB
    response.setContentLengthLong(end - start + 1);

    // Seek to the start position and stream only the requested bytes
    OutputStream outputStream = response.getOutputStream();
    try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
        raf.seek(start);
        byte[] buffer = new byte[8192];
        long remaining = end - start + 1;
        while (remaining > 0) {
            int read = raf.read(buffer, 0,
                    (int) Math.min(buffer.length, remaining));
            if (read == -1) {
                break; // File shorter than the requested range
            }
            outputStream.write(buffer, 0, read);
            remaining -= read;
        }
    }
}
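The range-parsing step can be isolated into a small, testable function. Like the snippet above, the sketch below (names illustrative) handles only the single `bytes=start-end` form and rejects suffix ranges, malformed values, and out-of-bounds requests:

```java
public class RangeParser {

    /**
     * Parses a "bytes=start-end" Range header against a file of the given
     * length. Returns {start, end} (both inclusive), or null when the header
     * is absent, malformed, out of bounds, or not a simple single range.
     */
    public static long[] parse(String rangeHeader, long fileLength) {
        if (rangeHeader == null || !rangeHeader.startsWith("bytes=")) {
            return null;
        }
        String[] parts = rangeHeader.substring(6).split("-", 2);
        if (parts[0].isEmpty()) {
            return null; // Suffix range ("bytes=-500") not supported here
        }
        try {
            long start = Long.parseLong(parts[0]);
            long end = (parts.length > 1 && !parts[1].isEmpty())
                    ? Long.parseLong(parts[1]) : fileLength - 1;
            if (start < 0 || end >= fileLength || start > end) {
                return null; // Out of bounds; caller should answer 416
            }
            return new long[] {start, end};
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        long[] r = parse("bytes=0-499", 1000);
        System.out.println(r[0] + "-" + r[1]); // 0-499
        System.out.println(parse("bytes=500-", 1000)[1]); // 999
        System.out.println(parse("bytes=-500", 1000)); // null
    }
}
```

When `parse` returns null for a present Range header, responding with 416 Range Not Satisfiable (plus a `Content-Range: bytes */length` header) is the conventional choice.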
Sources
- Vaadin Documentation - Downloads
- Vaadin RFC: Upload and Download Handlers
- Mozilla Developer Network - HTTP Headers
- Vaadin Push Configuration
- Vaadin 24.8 Upload Handling API
- Tomcat Configuration Reference
- Nginx Optimization for Large Files
Conclusion
The inconsistent SocketTimeoutException behavior when downloading large files with Vaadin’s DownloadHandler is primarily caused by browser-specific timeout policies and connection management differences. Firefox’s 30-second safety timeout and Chromium’s 1GB memory limit represent fundamentally different approaches to handling long-running downloads. To resolve these issues, implement the following solutions:
- Use chunked transfer encoding in your nginx configuration to prevent buffering issues
- Implement proper stream resource handling with periodic flushing to maintain connections
- Create dedicated download endpoints that bypass Vaadin’s normal request processing for large files
- Add browser-specific headers and Range header support to handle different browser behaviors
- Configure server timeouts appropriately at both nginx and Tomcat levels
The most reliable approach for large file downloads in Vaadin is to create a dedicated servlet endpoint that handles the streaming directly, avoiding the complexities of Vaadin’s request lifecycle while maintaining proper connection management and browser compatibility.