Filter Kubernetes dependency telemetry in App Insights
Drop internal Kubernetes HTTP dependencies in Application Insights with an ITelemetryProcessor. Filter by target/URL, type, or duration; or use OTEL/sampling.
How can I filter out dependency telemetry for internal Kubernetes calls in Application Insights using an ITelemetryProcessor?
I have an application that produces about 400k dependency telemetry events per 24 hours, which has a real, tangible cost on my company’s cloud hosting budget. 99% of those events are calls to what appear to be Kubernetes-related endpoints, such as:
- http://10.0.10.161:8126/v0.4/traces
- http://10.0.10.161:8126/info
- http://10.0.1.120:8126/v0.1/pipeline_stats
All of these execute in microseconds.
This is my telemetry processor that tries to filter this stuff out:
public class DependencyFilter(ITelemetryProcessor next, DependencyFilterSettings settings)
: ITelemetryProcessor
{
private readonly TimeSpan _longSql = TimeSpan.FromMilliseconds(settings.LogSQLCallsLongerThanMs);
private readonly TimeSpan _longHttp = TimeSpan.FromMilliseconds(settings.LogHTTPRequestsLongerThan);
public void Process(ITelemetry item)
{
if (item is DependencyTelemetry dep) {
switch (dep.Type) {
case "HTTP":
HandleHttp(dep);
break;
case "SQL":
HandleSql(dep);
break;
default:
next.Process(item);
break;
}
} else {
next.Process(item);
}
}
private void HandleHttp(DependencyTelemetry dep)
{
if (dep.Duration <= _longHttp) return;
next.Process(dep);
}
private void HandleSql(DependencyTelemetry dep)
{
if (dep.Duration <= _longSql) return;
next.Process(dep);
}
}
I have confirmed in the debugger that this telemetry processor is executed; I have not forgotten to add it to DI. The value of _longHttp is 10s. Why is this log noise still showing up in my logs? How do I write a dependency filter that just ignores all of this? Is that even possible, or should I figure something else out? The documentation is very bad.
You can remove internal Kubernetes dependency telemetry from Application Insights by adding an ITelemetryProcessor that inspects DependencyTelemetry (Type, Target/Data/Name, Duration) and simply returns (doesn’t call next.Process) when the item matches known internal-agent patterns like 10.0.*:8126 or the tracer endpoints (/v0.4/traces, /info). Keep in mind that some telemetry may be produced outside the AI SDK pipeline (host-generated or exported by OpenTelemetry) and therefore won’t pass through your processor — in those cases filter at the instrumentation or OpenTelemetry collector instead.
Contents
- Problem: Excessive Kubernetes dependency telemetry
- Why ITelemetryProcessor is the right place to start
- DependencyTelemetry properties to inspect
- ITelemetryProcessor implementation example
- Registration & ordering pitfalls
- When ITelemetryProcessor can’t stop it (host / OpenTelemetry)
- Alternatives: sampling, OTEL collector, or disabling instrumentation
- Troubleshooting & verification checklist
- Sources
- Conclusion
Problem: Excessive Kubernetes dependency telemetry
You’re seeing ~400k dependency events per day, nearly all tiny HTTP calls to internal IPs like:
- http://10.0.10.161:8126/v0.4/traces
- http://10.0.10.161:8126/info
- http://10.0.1.120:8126/v0.1/pipeline_stats
Those calls are extremely short (microseconds) and are driving ingestion and cost. They look like local tracer/agent endpoints (note the common port 8126) and are being recorded as DependencyTelemetry items. The good news: if those dependency items actually go through the Application Insights SDK pipeline, you can drop them cheaply with an ITelemetryProcessor. The catch: if they don’t go through that pipeline (for example they’re created by host-level instrumentation or exported from OpenTelemetry), an ITelemetryProcessor won’t see them and you’ll need a different approach — more on that below.
Why ITelemetryProcessor is the right place to start
ITelemetryProcessor is the SDK extension point intended for pre-send filtering. The typical pattern is simple: cast ITelemetry to DependencyTelemetry, inspect key fields (Type, Target, Name, Data, Duration), and if the item matches your “ignore” rules then return without calling _next.Process(item). Many community examples use this pattern to drop short SQL queries or tag requests before passing them on (see the filter examples listed under Sources). For cost control this is the right first line of defense because it stops the telemetry early, lowering network and ingestion costs.
That said, there are documented cases where telemetry is generated outside the SDK pipeline and therefore bypasses processors entirely — for example some host-generated telemetry in Azure Functions (see the Azure repo issue below). If your microsecond 10.0.*:8126 calls still show up after you’ve implemented and registered a correct processor, they may be coming from such a source.
(Examples of these SDK patterns: the Szołkowski filter example, an Optimizely post showing telemetry processor usage, and a cost-reduction writeup from ViSoft; all are listed under Sources.)
DependencyTelemetry properties to inspect
Key fields to use when deciding to drop an item:
- Type: dependency type string (e.g., “HTTP” or “SQL”). Compare case-insensitively.
- Target: usually host[:port] (e.g., “10.0.10.161:8126”).
- Data: extra details; sometimes contains the full URL.
- Name: often “GET /path” or similar; can include the URL path.
- Duration: TimeSpan of call time. Use Duration.TotalMilliseconds (or compare TimeSpan values) — do not rely on the Milliseconds property (that returns only the millisecond component).
- Success: bool? indicating outcome.
Quick debug helper: in your processor temporarily log or emit the tuple (Type, Target, Name, Data, Duration.TotalMilliseconds) for a sample of items so you can confirm where the tracer calls are recorded.
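A minimal sketch of such a helper, assuming the DI-based registration shown later resolves the logger (the DependencyDebugProcessor name is made up; register it with AddApplicationInsightsTelemetryProcessor just like the filter in the next section):
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.Logging;

// Pass-through processor for diagnostics only: it logs what it sees and drops nothing.
public class DependencyDebugProcessor(ITelemetryProcessor next, ILogger<DependencyDebugProcessor> logger)
    : ITelemetryProcessor
{
    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dep)
        {
            logger.LogDebug(
                "Dependency: Type={Type} Target={Target} Name={Name} Data={Data} Ms={Ms}",
                dep.Type, dep.Target, dep.Name, dep.Data, dep.Duration.TotalMilliseconds);
        }

        next.Process(item); // always forward while you are only observing
    }
}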
ITelemetryProcessor implementation example
Below is a pragmatic, robust ITelemetryProcessor. It matches internal IPs/ports and known tracer URL fragments and drops very short HTTP or SQL dependencies. It uses DI-friendly IOptions for settings.
using System;
using System.Text.RegularExpressions;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.Options;
public class DependencyFilterSettings
{
public int LogSQLCallsLongerThanMs { get; set; } = 15;
public int LogHTTPRequestsLongerThanMs { get; set; } = 10000; // 10 s by default
public string[] IgnoreUrlFragments { get; set; } = new[] { "/v0.4/traces", "/v0.1/pipeline_stats", "/info" };
public string[] IgnoreIpPrefixes { get; set; } = new[] { "10.", "127.", "localhost" };
public int[] IgnorePorts { get; set; } = new[] { 8126 };
}
public class DependencyFilter : ITelemetryProcessor
{
private readonly ITelemetryProcessor _next;
private readonly TimeSpan _longSql;
private readonly TimeSpan _longHttp;
private readonly string[] _ignoreUrlFragments;
private readonly string[] _ignoreIpPrefixes;
private readonly int[] _ignorePorts;
// Dots are escaped so this matches literal 10.x.x.x addresses rather than arbitrary characters.
private static readonly Regex PrivateIpRegex = new(@"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", RegexOptions.Compiled | RegexOptions.CultureInvariant);
public DependencyFilter(ITelemetryProcessor next, IOptions<DependencyFilterSettings> settings)
{
_next = next ?? throw new ArgumentNullException(nameof(next));
var s = settings?.Value ?? throw new ArgumentNullException(nameof(settings));
_longSql = TimeSpan.FromMilliseconds(s.LogSQLCallsLongerThanMs);
_longHttp = TimeSpan.FromMilliseconds(s.LogHTTPRequestsLongerThanMs);
_ignoreUrlFragments = s.IgnoreUrlFragments ?? Array.Empty<string>();
_ignoreIpPrefixes = s.IgnoreIpPrefixes ?? Array.Empty<string>();
_ignorePorts = s.IgnorePorts ?? Array.Empty<int>();
}
public void Process(ITelemetry item)
{
if (item is not DependencyTelemetry dep)
{
_next.Process(item);
return;
}
if (ShouldDrop(dep))
{
// drop: do not call _next.Process(dep)
return;
}
_next.Process(dep);
}
private bool ShouldDrop(DependencyTelemetry dep)
{
var target = (dep.Target ?? string.Empty).ToLowerInvariant();
var data = (dep.Data ?? string.Empty).ToLowerInvariant();
var name = (dep.Name ?? string.Empty).ToLowerInvariant();
// 1) match configured tracer URL fragments, agent ports, internal host prefixes, and private 10.x IPs
foreach (var frag in _ignoreUrlFragments)
{
if (!string.IsNullOrEmpty(frag) && (data.Contains(frag) || name.Contains(frag))) return true;
}
foreach (var port in _ignorePorts)
{
var portToken = ":" + port;
if (target.Contains(portToken) || data.Contains(portToken) || name.Contains(portToken)) return true;
}
foreach (var prefix in _ignoreIpPrefixes)
{
if (!string.IsNullOrEmpty(prefix) && target.StartsWith(prefix, StringComparison.Ordinal)) return true;
}
if (PrivateIpRegex.IsMatch(target) || PrivateIpRegex.IsMatch(data) || PrivateIpRegex.IsMatch(name)) return true;
// 2) drop very short calls by duration
var duration = dep.Duration;
if (duration == TimeSpan.Zero) return true; // treat zero/unknown as short
if (string.Equals(dep.Type, "HTTP", StringComparison.OrdinalIgnoreCase) && duration <= _longHttp) return true;
if (string.Equals(dep.Type, "SQL", StringComparison.OrdinalIgnoreCase) && duration <= _longSql) return true;
return false; // keep otherwise
}
}
Registration (ASP.NET Core / .NET 6+ style, Program.cs):
builder.Services.Configure<DependencyFilterSettings>(builder.Configuration.GetSection("DependencyFilter"));
builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddApplicationInsightsTelemetryProcessor<DependencyFilter>();
appsettings.json snippet:
"DependencyFilter": {
"LogSQLCallsLongerThanMs": 15,
"LogHTTPRequestsLongerThanMs": 10000,
"IgnoreUrlFragments": [ "/v0.4/traces", "/v0.1/pipeline_stats", "/info" ]
}
Notes:
- Constructor uses IOptions so AddApplicationInsightsTelemetryProcessor can create the processor with DI-supplied settings.
- Use TotalMilliseconds when you need a numeric comparison; comparing TimeSpan values (duration <= threshold) is also fine (see the short example after these notes).
- For initial debugging you can temporarily tag items instead of dropping them (set dep.Properties["SeenByFilter"] = "true"; then _next.Process(dep)) and search for that property in App Insights to confirm the processor sees those items.
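To make the Milliseconds vs. TotalMilliseconds distinction concrete, a tiny standalone example:
using System;

var duration = TimeSpan.FromSeconds(90);
Console.WriteLine(duration.Milliseconds);      // 0     (only the millisecond component)
Console.WriteLine(duration.TotalMilliseconds); // 90000 (the whole duration in milliseconds)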
Registration & ordering pitfalls
If you’ve confirmed the processor runs in the debugger but the noise still shows up, check these common problems:
- Multiple TelemetryConfiguration instances: some libraries or manual TelemetryClient instantiations create their own TelemetryConfiguration and won’t use your processor. Make sure the instrumentation uses the same DI-registered configuration (see the sketch after this list).
- Conflicting registration APIs across packages: worker-service vs. ASP.NET Core helper methods can create ambiguity; there are known issues around ambiguous registration of telemetry processors — see the ApplicationInsights-dotnet issue below.
- Processor order: the order processors are added to the chain can matter (e.g., sampling processors). If you add your processor but another processor later re-adds or modifies items you intended to drop, review the chain order.
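A quick illustration of the first pitfall; the ReportSender class is hypothetical, and the point is simply to take TelemetryClient from DI rather than constructing one:
using Microsoft.ApplicationInsights;

// Prefer the DI-registered TelemetryClient: it shares the TelemetryConfiguration that your
// DependencyFilter is attached to, so manually tracked telemetry also passes through the filter.
public class ReportSender(TelemetryClient telemetryClient)
{
    public void SendHeartbeat()
    {
        telemetryClient.TrackEvent("HeartbeatSent");
    }
}

// Avoid: new TelemetryClient(); it falls back to a default configuration that the
// DI-registered processors are not part of.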
If you suspect registration problems, do this quick test:
- In your processor add a temporary marker property and call _next.Process(dep) (a test-mode sketch follows these steps).
- Query Application Insights for that property on the dependency. If you see it, your processor did see the item; if you don’t but the dependency still appears, the dependency is created elsewhere.
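A minimal sketch of that test mode, reusing the ShouldDrop helper from the filter above (the SeenByDependencyFilter property name is arbitrary; anything you can search for works):
// Temporary "test mode" body for DependencyFilter.Process: tag matches instead of dropping them.
public void Process(ITelemetry item)
{
    if (item is DependencyTelemetry dep && ShouldDrop(dep))
    {
        // Mark the item so you can find it in Application Insights, then let it through anyway.
        dep.Properties["SeenByDependencyFilter"] = "true";
    }

    _next.Process(item);
}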
(Registration conflicts are discussed in the ApplicationInsights repo: https://github.com/microsoft/ApplicationInsights-dotnet/issues/2972.)
When ITelemetryProcessor can’t stop it (host / OpenTelemetry)
Why might your processor be executed but still not stop the noise? Because not every dependency is created inside the AI SDK pipeline:
- Host or platform components can emit telemetry after or outside the SDK pipeline. Example: an Azure Functions host issue documents that certain built-in telemetry (like change-feed polling) can’t be filtered by OTEL processors on the app side and may require host-level configuration or disabling: https://github.com/Azure/azure-functions-host/issues/11236.
- If you use OpenTelemetry SDKs/agents and then export to Application Insights, the exporter may convert spans into dependency telemetry without routing them through your AI SDK’s processors. In practice this means ITelemetryProcessor won’t see them and you must filter upstream (OpenTelemetry SDK instrumentation or the OpenTelemetry Collector).
Community threads also show people offloading heavy filtering to a central collector because SDK-level filtering wasn’t enough: https://www.reddit.com/r/OpenTelemetry/comments/1ip7bmq/reducing_the_amount_of_data_sent_to_application/.
Alternatives: sampling, OTEL collector, or disabling instrumentation
If ITelemetryProcessor can’t reach the sources of those events, consider:
- Filter instrumentations upstream:
- OpenTelemetry HttpClient instrumentation supports a Filter option so outgoing agent calls can be excluded at the tracer level (see the sketch after this list).
- Disable or configure the library that generates those calls (tracer/agent settings).
- Use an OpenTelemetry Collector running in your cluster to drop or filter spans before exporting to Application Insights (central control for many pods).
- Sampling: apply adaptive or fixed-rate sampling to reduce volume; this reduces cost but also reduces visibility.
- Disable specific automatic dependency collection modules (if the events are produced by a particular AI module) — only if you know what you’re losing.
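If those spans are produced by OpenTelemetry and exported to Application Insights, for example via the Azure Monitor distro (Azure.Monitor.OpenTelemetry.AspNetCore), an instrumentation-level filter might look roughly like this sketch; the port-8126 predicate is an assumption based on the endpoints above, not a library default:
using Azure.Monitor.OpenTelemetry.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using OpenTelemetry.Instrumentation.Http;

var builder = WebApplication.CreateBuilder(args);

// The distro registers HttpClient instrumentation; configure its filter through options.
builder.Services.AddOpenTelemetry().UseAzureMonitor();
builder.Services.Configure<HttpClientTraceInstrumentationOptions>(options =>
{
    // Returning false drops the span before any exporter (Application Insights included) sees it.
    options.FilterHttpRequestMessage = request =>
        request.RequestUri is not { Port: 8126 };
});

var app = builder.Build();
app.Run();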
For guidance on mixing OpenTelemetry with Application Insights and using a collector for centralized processing, see the practical guides here: https://cicoria.com/leveraging-azure-application-insights-with-opentelemetry-distributed-tracing-done-right/ and community discussion above.
Troubleshooting & verification checklist
- Confirm your processor sees the items:
- Add an ILogger or Debug.WriteLine in Process for matching items.
- Or temporarily tag matched dependencies: dep.Properties["SeenByDependencyFilter"] = "true"; _next.Process(dep); then search in Application Insights.
- Query App Insights to locate the offending dependencies:
- Example Kusto (Application Insights Analytics):
dependencies
| where target contains "10.0." or target contains "8126"
| summarize count() by target
| order by count_ desc
- If your processor saw the items but they still appear:
- Check that you actually return for matches (do not call _next.Process).
- Validate processor ordering if you have custom sampling or other processors.
- If your processor did not see the items:
- Check whether OpenTelemetry / another tracer is exporting directly to AI.
- Verify there aren’t multiple TelemetryConfiguration / TelemetryClient instances.
- Check host/platform instrumentation (Azure Functions, SDKs, or sidecars) that might emit dependencies after the pipeline.
- Final validation:
- After deploying the drop-rule, run the Kusto query in step 2 across a 24h window and confirm the 10.0.*:8126 events are gone or vastly reduced.
- Remove any temporary tags/logs once you confirm.
For a quick validation loop, combine the test-mode tagging sketch from the registration section with the Kusto query above to confirm which items your processor actually sees.
Sources
- Tuning Application Insights telemetry filtering in Optimizely | Szołkowski Blog
- Extending Application Insights in an Optimizely PaaS CMS Solution
- Reducing Azure Application Insights logging costs - ViSoft Solutions
- How to filter or suppress built-in Application Insights telemetry in Azure Functions when using OpenTelemetry? · Issue #11236 · Azure/azure-functions-host
- r/OpenTelemetry on Reddit: Reducing the amount of data sent to Application Insights
- ambiguous reference: AddApplicationInsightsTelemetryProcess · Issue #2972 · microsoft/ApplicationInsights-dotnet
- r/dotnet on Reddit: Open telemetry in Azure without application insights?
- Observability in ASP.NET Core with OpenTelemetry and Azure Monitor – dotnetintellect
- Metrics with IMeterFactory and Azure Application Insights - DEV Community
- Leveraging Azure Application Insights with OpenTelemetry: Distributed Tracing Done Right
Conclusion
Implement a focused ITelemetryProcessor that matches the core of the problem (10.0.*:8126 targets and tracer endpoints, or ultra-short durations) and returns without calling _next.Process, dropping those dependency telemetry items. If events persist, they’re probably emitted outside the AI SDK pipeline, so filter them at the instrumentation source (OpenTelemetry SDK), centrally with an OpenTelemetry Collector, or by disabling the offending instrumentation. Do this and you’ll cut the Application Insights bill from those noisy internal Kubernetes calls.