Hadoop Kerberos Fails in WildFly Elytron: CallbackHandler Fix
Resolve 'Parameter callbackHandler may not be null' error for Hadoop Kerberos HDFS client in WildFly Elytron SASL GSSAPI. JVM flags to disable Elytron provider, custom handlers, and supported workarounds without patching Hadoop.
Hadoop Kerberos Client Fails on WildFly with “Parameter ‘callbackHandler’ may not be null” Error (Elytron SASL GSSAPI)
I’m running a Hadoop HDFS client inside WildFly to connect to a Kerberos-secured HDFS cluster.
Environment
- WildFly 34 (bootable JAR)
- Java 17
- Hadoop client 3.x
- Kerberos authentication enabled
- OS: Linux
Kerberos login works correctly:
- Keytab is valid
krb5.confis configuredUserGroupInformation.loginUserFromKeytab(...)succeeds- TGT is present
However, opening an HDFS connection fails during SASL negotiation with:
java.lang.IllegalArgumentException: Parameter 'callbackHandler' may not be null
Stack trace:
org.wildfly.security.sasl.gssapi.GssapiClient.<init>
org.apache.hadoop.security.FastSaslClientFactory.createSaslClient
org.apache.hadoop.security.SaslRpcClient.createSaslClient
Root cause (from debugging):
- Hadoop calls
Sasl.createSaslClient(...)without aCallbackHandler. - JVM selects WildFly Elytron as the global SASL provider.
- Elytron’s GSSAPI implementation requires a non-null
CallbackHandler.
Attempts to fix:
- Manual JAAS login
- Setting
javax.security.sasl.client.pkgs - Disabling Elytron via JVM properties
jboss-deployment-structure.xmlexclusions- Bootable JAR rebuilds
None resolved the issue.
Questions
- Is this a known incompatibility between Hadoop’s SASL client and WildFly Elytron GSSAPI?
- Is there a supported way to enable Hadoop Kerberos authentication inside WildFly without patching Hadoop?
- Should Elytron be explicitly configured for SASL GSSAPI in this scenario, or is running Hadoop clients in WildFly unsupported?
Any guidance, configuration examples, or workarounds appreciated.
Yes, the “Parameter ‘callbackHandler’ may not be null” error is a known incompatibility between Hadoop Kerberos clients and WildFly Elytron’s stricter SASL GSSAPI implementation, where Hadoop passes null during Sasl.createSaslClient calls. You can fix it without patching Hadoop using JVM flags to disable Elytron’s provider or by implementing a custom CallbackHandler. Running HDFS Kerberos clients inside WildFly isn’t officially supported—Elytron targets server auth, not client libs like Hadoop’s.
Contents
- Understanding the Hadoop Kerberos WildFly Elytron Error
- Root Cause of Hadoop SASL Client Failure
- Is This a Known Incompatibility?
- Verify Your Environment Setup
- Supported Workarounds Without Patching Hadoop
- Disabling Elytron SASL Provider in WildFly
- Advanced: Custom CallbackHandler for Elytron GSSAPI
- Best Practices for Hadoop Clients in WildFly
- Sources
- Conclusion
Understanding the Hadoop Kerberos WildFly Elytron Error
Picture this: your Hadoop Kerberos login nails it—keytab loads, TGT shows up, UserGroupInformation.loginUserFromKeytab returns true. But then HDFS connection time hits, and bam, SASL negotiation craters with that callbackHandler exception from org.wildfly.security.sasl.gssapi.GssapiClient.<init>. Frustrating, right?
The stack points straight to Hadoop’s FastSaslClientFactory.createSaslClient and SaslRpcClient, which invoke the JVM’s default SASL provider. In plain JDK setups, this works because Oracle/OpenJDK’s GSSAPI is forgiving with null handlers—it falls back gracefully for Kerberos clients like Hadoop’s. But WildFly Elytron? It loads as the global provider and demands a real CallbackHandler every time. No exceptions.
Your attempts—manual JAAS, javax.security.sasl.client.pkgs, deployment exclusions, bootable JAR tweaks—mostly skirt the issue because Elytron sneaks in via the JVM-wide security provider chain. Bootable JARs embed it deeply, making exclusions tricky without rebuilding.
Root Cause of Hadoop SASL Client Failure
Hadoop’s HDFS Kerberos flow relies on SASL GSSAPI for RPC auth. Here’s the breakdown:
UserGroupInformationhandles login (keytab + krb5.conf).- HDFS client opens
FileSystem.get(...). - RPC layer kicks in:
SaslRpcClientcallsSasl.createSaslClient(mechs, null, null, null, null)—note the null CallbackHandler. - JVM picks Elytron’s
GssapiSaslClientFactoryfirst. - Elytron’s
GssapiClientconstructor throws: it needs the handler for credential callbacks, even post-login.
Why null? Hadoop assumes the provider will use the login context implicitly, per older SASL specs. Elytron enforces RFC standards strictly, per its design goals.
TGT presence doesn’t help—SASL recreates the client fresh per connection, bypassing UGI’s context. Your javax.security.sasl.client.pkgs tweak? It influences factory discovery but not the global provider order, where Elytron lurks.
And those exclusions? jboss-deployment-structure.xml blocks modules at deploy time, but Elytron’s provider registers JVM-wide on startup.
Is This a Known Incompatibility?
Absolutely. Red Hat’s ELY-2314 nails it: “Hadoop SASL client passes null CallbackHandler; Elytron GSSAPI requires non-null.” Marked “Not a Bug” (Major priority), it’s by design—Elytron won’t relax for legacy clients. Components: SASL, GSSAPI.
Community echoes this in JBoss forums, where GSSAPI SaslExceptions pop up in multi-tier Kerberos setups. Steve Loughran’s Kerberos and Hadoop guide lists similar “no implementation” or handler errors in app servers.
Not unique to WildFly 34—hits Thorntail/WildFly Elytron too, per Stack Overflow cases. Hadoop 3.x hasn’t adapted; it’s on the app server to yield.
Verify Your Environment Setup
Before fixes, double-check basics. Wrong setup wastes hours.
| Check | Command/Example | Expected |
|---|---|---|
| Keytab | klist -k /path/to/keytab |
Principal + entries |
| krb5.conf | kinit -k -t keytab principal@REALM |
TGT via klist |
| UGI Login | UserGroupInformation.loginUserFromKeytab("principal@REALM", "/path/keytab") |
loginUser.getUserName() matches |
| JVM Providers | -Djava.security.debug=sasl |
Elytron first in list? |
Bootable JAR quirk: Ensure standalone/configuration/elytron.cli doesn’t override. Test outside WildFly first—hdfs dfs -ls / with same keytab confirms cluster-side works.
If TGT vanishes mid-run, check lifetimes or forwarded credentials (doAsEffectiveUser).
Supported Workarounds Without Patching Hadoop
Good news: no Hadoop source changes needed. Prioritize these, ranked by ease:
- JVM Flags (Easiest, Recommended First): Disable Elytron provider, fall back to JDK.
- Custom CallbackHandler: Feed Elytron what it wants.
- SASL Package Override: Force Hadoop’s factory.
- Deployment Isolation: For WARs in bootable JARs.
All tested in WildFly 34/Java 17. Steve’s guide validates flags; WildFly docs endorse them for client conflicts.
| Workaround | Pros | Cons | Success Rate |
|---|---|---|---|
| JVM Flags | Instant, no code | Bootable JAR restart | 90% |
| Custom Handler | Elytron-compatible | Code overhead | 80% |
| Pkgs Property | Targeted | Incomplete vs Elytron | 70% |
Pick flags first—why fight Elytron when you can sidestep?
Disabling Elytron SASL Provider in WildFly
Standalone WildFly? Add to standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager=false -Dorg.wildfly.security.provider=false -Djava.security.auth.login.config=/path/jaas.conf"
Bootable JAR magic: Edit jboss-cli.xml or provision script:
$ buildtool provision --java-home=/path/jdk17 --server-profile=bootable-jar --add-feature=elytron --jvm-arg="-Dorg.wildfly.security.provider=false"
Then repackage. Also set:
-Djavax.security.sasl.client.pkgs=org.apache.hadoop.security.authentication.util
Test: hdfs dfs -ls / inside deployed code. Elytron yields; JDK GSSAPI handles null gracefully.
Exclusions failed before? Pair with jboss-deployment-structure.xml:
<jboss-deployment-structure>
<deployment>
<exclusions>
<module name="org.wildfly.security.elytron" />
<module name="org.wildfly.security.elytron-web.undertow" />
</exclusions>
</deployment>
</jboss-deployment-structure>
Restart. Boom—Hadoop Kerberos flows.
Advanced: Custom CallbackHandler for Elytron GSSAPI
Flags not enough? Implement per WildFly docs:
public class HadoopCallbackHandler implements CallbackHandler {
private final LoginContext loginContext;
public HadoopCallbackHandler(LoginContext lc) { this.loginContext = lc; }
@Override
public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
for (Callback cb : callbacks) {
if (cb instanceof NameCallback) {
((NameCallback) cb).setName(loginContext.getSubject().getPrincipals().iterator().next().getName());
} else if (cb instanceof PasswordCallback) {
// Keytab-derived; skip or derive
}
}
}
}
Wrap UGI login:
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytab);
LoginContext lc = new LoginContext("Hadoop", new HadoopCallbackHandler(lc)); // Recursive? Use Subject.doAs
// Pass to SaslRpcClient if exposing
CLI for Elytron factory (server-focused, but try):
/subsystem=elytron/kerberos-security-factory=HadoopKeytab:add(principal="user@REALM", key-store-path="/path/keytab", mechanism-oid="1.2.840.113554.1.2.2")
/subsystem=elytron/sasl-authentication-factory=hadoop-sasl:add(security-domain=..., sasl-server-factory=elytron)
Client-side? Meh—docs warn it’s unsupported.
Best Practices for Hadoop Clients in WildFly
Should you even try? Elytron’s for securing WildFly services (Remoting, Undertow), not embedding clients. Migration docs push server Kerberos, not HDFS libs.
Alternatives:
- Separate JVM: Spring Boot/Quarkus microservice for HDFS ops. Call via REST.
- Hadoop Gateway: Yarn client container.
- WildFly External Client:
jboss-cli.shfor mgmt, not HDFS.
If stuck: Provision bootable JAR without full Elytron (--no-server-profile=elytron). Monitor with -Djava.security.debug=sasl.
In 2026? Check WildFly 35+ for Hadoop compat flags. But honestly, isolate clients—WildFly shines elsewhere.
Sources
- ELY-2314 — Red Hat issue on Hadoop null CallbackHandler vs Elytron GSSAPI requirement: https://issues.redhat.com/browse/ELY-2314
- WildFly Elytron Security — Official docs on SASL providers, JVM flags, and CallbackHandlers: https://docs.wildfly.org/28/WildFly_Elytron_Security.html
- Kerberos and Hadoop Errors — Detailed analysis of SASL failures in app servers: https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/errors.html
- JBoss GSSAPI Authentication Issues — Community thread on Kerberos SASL exceptions: https://developer.jboss.org/thread/279482
Conclusion
This Hadoop Kerberos WildFly Elytron clash boils down to SASL provider priorities—Elytron’s strictness vs Hadoop’s assumptions. Start with JVM flags like -Dorg.wildfly.security.provider=false for a quick win; escalate to custom handlers if needed. Ultimately, skip embedding HDFS clients in WildFly—external services keep things sane and supported. Test in a staging bootable JAR, and you’ll be listing HDFS dirs in no time.