# Using Langfuse with Sentry
This guide covers how to configure Langfuse alongside Sentry. If you haven't already, read Using Langfuse with an Existing OpenTelemetry Setup to understand the general concepts.
## Why Sentry can conflict with Langfuse
Sentry's JavaScript/Node SDK (v8+) and Python SDK (v3+) automatically initialize OpenTelemetry when you call `Sentry.init()`. This includes:

- Creating a `SentrySpanProcessor`
- Setting up a `SentryPropagator` for distributed tracing
- Installing a `SentryContextManager`
- Registering itself as the global `TracerProvider`
Because Sentry "claims" the global TracerProvider, simply initializing Langfuse afterward won't work: Langfuse's span processor never gets attached to the provider Sentry controls.
Note: The Python `sentry_sdk` v2.x does not use OpenTelemetry by default. It uses Sentry's own instrumentation engine, so Python v2.x users typically don't need any special configuration and can use Option A directly.
## Setup options
### Option A: Use Sentry without OpenTelemetry (simplest)
If you don't need Sentry's performance tracing (i.e., you primarily use Sentry for error monitoring), you can disable Sentry's OTEL integration entirely. This is the simplest way to avoid conflicts since Sentry and Langfuse never interact at the OpenTelemetry level.
```typescript
import * as Sentry from "@sentry/node";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";

// Initialize Sentry for error monitoring only: no OTEL, no tracing
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  skipOpenTelemetrySetup: true,
  // Do NOT set tracesSampleRate - this disables Sentry's performance tracing
});

// Langfuse owns the global TracerProvider with no conflicts
const sdk = new NodeSDK({
  spanProcessors: [new LangfuseSpanProcessor()],
});

sdk.start();
```

```python
import os

import sentry_sdk
from langfuse import Langfuse

# Python sentry_sdk v2.x uses its own instrumentation by default, no OTEL conflict
sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    traces_sample_rate=1.0,  # Sentry tracing works independently of OTEL
)

# Langfuse initializes normally
langfuse = Langfuse()
```

For Python `sentry_sdk` v3+, which uses OTEL under the hood, use Option B or Option C instead.
### Option B: Shared TracerProvider
If you need both Sentry's performance tracing and Langfuse in the same distributed trace, disable Sentry's automatic OTEL setup and configure a shared TracerProvider that includes both processors.
**Use Option B only when you want a shared distributed trace.** With Option B, Langfuse inherits the active OpenTelemetry context from Sentry. Incoming `sentry-trace` headers can therefore change Langfuse trace IDs and sampling decisions. If you want reliable standalone Langfuse traces for each AI operation, prefer Option C.
```shell
npm install @sentry/opentelemetry
```

```typescript
import * as Sentry from "@sentry/node";
import { LangfuseSpanProcessor } from "@langfuse/otel";
import {
  SentryPropagator,
  SentrySampler,
  SentrySpanProcessor,
} from "@sentry/opentelemetry";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

// Step 1: Initialize Sentry WITHOUT automatic OTEL setup
const sentryClient = Sentry.init({
  dsn: process.env.SENTRY_DSN,
  skipOpenTelemetrySetup: true, // Critical: prevents Sentry from claiming the global provider
  tracesSampleRate: 1.0,
});

// Step 2: Create a shared TracerProvider with both processors
const provider = new NodeTracerProvider({
  sampler: sentryClient ? new SentrySampler(sentryClient) : undefined,
  spanProcessors: [
    // Langfuse processor - default smart filter (Langfuse + GenAI/LLM spans)
    new LangfuseSpanProcessor(),
    // Sentry processor - receives all spans
    new SentrySpanProcessor(),
  ],
});

// Step 3: Register with Sentry's propagator and context manager
provider.register({
  propagator: new SentryPropagator(),
  contextManager: new Sentry.SentryContextManager(),
});
```

#### Sentry's Sample Rate Affects Langfuse
When using a shared `TracerProvider`, Sentry's `tracesSampleRate` applies to all traces, including those going to Langfuse.

```typescript
Sentry.init({
  tracesSampleRate: 0.1, // Only 10% of traces are created
  // ...
});
```

If you set this to 0.1, only 10% of your LLM calls will appear in Langfuse. To send all traces to Langfuse while sampling for Sentry, use the isolated TracerProvider approach instead (Option C).
#### Incoming sentry-trace Headers Also Affect Langfuse
When `SentryPropagator` is enabled, backend requests continue the incoming Sentry distributed trace. Langfuse spans created in that context inherit both the upstream trace ID and the upstream sampling decision.
- If the incoming trace is unsampled, the backend span can become non-recording and nothing will appear in Langfuse.
- If multiple backend operations continue the same upstream trace, they can appear merged into a single Langfuse trace.
If you want Langfuse to always create a separate trace for an AI workflow, prefer Option C. If you need Option B for the rest of your app, you can detach a specific Langfuse-traced block by starting it in a fresh root context:
```typescript
import { context, ROOT_CONTEXT } from "@opentelemetry/api";
import { startActiveObservation } from "@langfuse/tracing";

await context.with(ROOT_CONTEXT, async () => {
  await startActiveObservation("generateText", async () => {
    // your Langfuse-traced AI call
  });
});
```

This deliberately breaks parentage to the incoming Sentry trace for that block.
#### Filtering Langfuse Spans
Langfuse already applies a default LLM-focused filter. In most Sentry setups, this means no extra filtering code is required.
If you need stricter routing, you can provide `shouldExportSpan`:

```typescript
new LangfuseSpanProcessor({
  shouldExportSpan: ({ otelSpan }) =>
    otelSpan.instrumentationScope.name === "langfuse-sdk",
}),
```

This keeps only Langfuse SDK spans in Langfuse, while Sentry still receives everything.

To export everything during debugging, temporarily set `shouldExportSpan: () => true`.
Adjust your allowed scopes based on what you want in Langfuse. You can find the scope name of a span in the Langfuse UI by clicking on any span and looking for `metadata.scope.name`.
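One concrete pattern is an allow-list of scopes instead of a single equality check. A minimal sketch, assuming plain TypeScript; the scope names in `allowedScopes` are illustrative placeholders, not names Langfuse guarantees — substitute the `metadata.scope.name` values you actually observe:

```typescript
// Hypothetical allow-list of instrumentation scopes to keep in Langfuse.
// Replace these example names with the scope names from your own spans.
const allowedScopes = new Set(["langfuse-sdk", "ai"]);

// Minimal shape of the span object the predicate inspects.
type SpanLike = { instrumentationScope: { name: string } };

// Predicate in the shape shouldExportSpan expects: it receives the
// underlying OTEL span and returns whether to export it to Langfuse.
const shouldExportSpan = ({ otelSpan }: { otelSpan: SpanLike }): boolean =>
  allowedScopes.has(otelSpan.instrumentationScope.name);
```

You would then pass the predicate as `new LangfuseSpanProcessor({ shouldExportSpan })`; Sentry's processor is unaffected and still receives every span.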
#### Required Sentry Components

When using `skipOpenTelemetrySetup: true`, you must manually configure all of Sentry's OTEL components:

| Component | Purpose |
|---|---|
| `SentrySampler` | Applies Sentry's sampling decisions |
| `SentrySpanProcessor` | Sends spans to Sentry |
| `SentryPropagator` | Handles distributed tracing headers |
| `SentryContextManager` | Manages async context for Sentry |

If you omit any of these, Sentry's tracing may not work correctly.
### Option C: Isolated TracerProvider
If you don't need distributed tracing across Sentry and Langfuse spans, you can use a completely isolated TracerProvider for Langfuse. This is the recommended setup for most AI applications because it keeps Langfuse traces independent from Sentry's sampling and propagation behavior.
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";
import { setLangfuseTracerProvider } from "@langfuse/tracing";
import * as Sentry from "@sentry/node";

// Sentry uses its own automatic OTEL setup
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  // No skipOpenTelemetrySetup - let Sentry manage the global provider
});

// Langfuse uses a completely separate provider
const langfuseProvider = new NodeTracerProvider({
  spanProcessors: [new LangfuseSpanProcessor()],
});

setLangfuseTracerProvider(langfuseProvider);
```

```python
import os

import sentry_sdk
from opentelemetry.sdk.trace import TracerProvider
from langfuse import Langfuse

# Initialize Sentry normally
sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    traces_sample_rate=1.0,
)

# Langfuse uses a completely separate provider
langfuse = Langfuse(tracer_provider=TracerProvider())
```

#### Trade-offs
- Simpler configuration
- Sentry's sampling doesn't affect Langfuse traces
- Langfuse and Sentry traces won't share context
- Some spans may appear orphaned in Langfuse if their parent is in Sentry's provider
## Common Issues
### No traces in Langfuse after adding Sentry

**Cause:** Sentry initialized OTEL before Langfuse could attach its processor.

**Solution:** Use Option A (Sentry without OTEL), Option B (shared setup) with `skipOpenTelemetrySetup: true`, or Option C (isolated provider).

### Setting `skipOpenTelemetrySetup` breaks Sentry tracing

**Cause:** You're not manually configuring all required Sentry OTEL components.

**Solution:** Ensure you're registering the provider with `SentryPropagator` and `SentryContextManager` as shown in Option B.

### Infrastructure spans appearing in Langfuse

**Cause:** Your custom `shouldExportSpan` is too permissive.

**Solution:** Tighten your `shouldExportSpan` rules.

### Only some traces appear in Langfuse

**Cause:** In Option B, Langfuse inherits Sentry's sampling decisions. This can come from Sentry's local `tracesSampleRate` or from an incoming `sentry-trace` header that was already unsampled upstream.

**Solution:** Set `tracesSampleRate: 1.0` if you want all traces and ensure upstream requests are sampled, or use Option C (isolated provider) to avoid the shared sampling issue entirely.

### No traces in Langfuse for requests coming from a Sentry-instrumented frontend

**Cause:** In Option B, `SentryPropagator` continues the incoming `sentry-trace` header. If the upstream trace was unsampled, the backend span can be non-recording and Langfuse receives nothing.

**Solution:** Prefer Option C if you want Langfuse to create its own traces. If you need Option B elsewhere, wrap the Langfuse-traced block in `context.with(ROOT_CONTEXT, ...)` to detach it from the incoming Sentry trace.

### Separate Langfuse traces appear merged into one trace

**Cause:** In Option B, multiple backend operations can continue the same incoming `sentry-trace` header and therefore share the same trace ID.

**Solution:** Prefer Option C for standalone Langfuse traces, or start the relevant Langfuse-traced block in a fresh `ROOT_CONTEXT` if you need to detach it from the incoming Sentry trace.
## AWS Lambda Considerations
In serverless environments like AWS Lambda, you may need additional configuration:
```typescript
new LangfuseSpanProcessor({
  exportMode: "immediate", // Don't batch - export before Lambda freezes
}),
```

The `exportMode: "immediate"` setting ensures spans are exported right away rather than batched, which is important because Lambda may freeze the execution context before batched spans are flushed. Read more on how Langfuse captures and sends spans here.
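As a complement to immediate export, you can also flush pending spans explicitly before the handler returns. A minimal sketch of a hypothetical `withFlush` helper; it assumes only that your processor exposes `forceFlush()`, the standard OTEL `SpanProcessor` method that `LangfuseSpanProcessor` implements, and that you keep a reference to the processor instance you registered on the provider:

```typescript
// Illustrative interface: anything exposing forceFlush(), such as an
// OTEL SpanProcessor (LangfuseSpanProcessor implements this interface).
interface Flushable {
  forceFlush(): Promise<void>;
}

// Wrap a Lambda-style handler so pending spans are flushed before the
// execution environment can be frozen, even if the handler throws.
function withFlush<TEvent, TResult>(
  processor: Flushable,
  handler: (event: TEvent) => Promise<TResult>,
): (event: TEvent) => Promise<TResult> {
  return async (event) => {
    try {
      return await handler(event);
    } finally {
      await processor.forceFlush();
    }
  };
}
```

You would export your handler as `withFlush(langfuseSpanProcessor, yourHandler)`, where `langfuseSpanProcessor` is the instance you passed into the provider's `spanProcessors` array.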