Tech Talk / Brown Bag
Purpose: Share technical knowledge across the team in a low-stakes, collaborative format
How to run this meeting
Keep the format tight: 30 minutes of presentation followed by 15 minutes of Q&A. This is long enough to go deep on a topic and short enough to respect attendees' calendars. Resist the urge to let talks run long — a focused 30 minutes is more valuable than an exhaustive 60. If the presenter has more to say, schedule a follow-up or point people to written resources.
Actively recruit presenters beyond your senior engineers. Junior engineers often have the freshest perspective on what's confusing or what new tools can do, and presenting is a high-leverage career development opportunity for them. The facilitator's job between sessions is to maintain a backlog of topic suggestions so no one has to scramble for ideas. Good topic sources: recent incidents with interesting root causes, new libraries or tools someone evaluated, an approach tried at a previous company, a conference talk worth sharing internally, or a deep dive into an area of the codebase that most people don't understand.
Record every session and post it to a shared drive or wiki within 24 hours. This extends the value to team members who are out, in different time zones, or who join the company later. Encourage discussion in a dedicated Slack thread after the session — the best questions often come from people who watched the recording. Prioritize topics that benefit multiple teams, not just the presenting team's immediate concerns.
Before the meeting
- Confirm presenter, topic, and any technical requirements (screen sharing, live demo environment) 48 hours in advance
- Send the topic and presenter name to the team channel the morning of the session to drive attendance
- Set up recording before the meeting starts — don't scramble for it mid-session
- Prepare 2–3 seed questions to kick off Q&A in case the room is slow to engage
- Add the session to the team knowledge base index with a placeholder for the recording link
Meeting Details
- Date:
- Facilitator:
- Presenter:
- Attendees:
- Duration: 45 minutes (30 min talk + 15 min Q&A)
Topic
The subject of the talk in one sentence. What will attendees know or be able to do after this session that they couldn't before?
Practical observability with OpenTelemetry: adding distributed tracing to a Node.js service in under an hour
After this session, attendees will understand what distributed tracing is, why it matters for debugging microservices, and how to instrument a Node.js service with OpenTelemetry to send traces to our existing Grafana stack.
Presenter
Who's presenting, their role, and why they're well-positioned to talk about this topic.
@sam_chen — Backend Engineer, Platform Team
Sam spent the last two sprints adding OpenTelemetry instrumentation to the Meridian notification service after a series of hard-to-debug latency spikes. She has hands-on experience with the tradeoffs and gotchas that don't show up in the official docs.
Background
Context that helps attendees follow the talk. What problem does this topic solve? Why does it matter now?
Our microservices architecture has grown to 14 services, and when something goes wrong, the current approach is to grep logs across multiple services hoping to piece together what happened. Distributed tracing gives each request a unique trace ID that follows it through every service, making it possible to see the full call chain and identify exactly where latency is introduced or errors originate.
We already run Grafana and Tempo in our observability stack. This talk covers the missing piece: getting trace data from our services into that stack cheaply.
Key Concepts
The 3–5 central ideas from the talk, captured for the record. These become the searchable reference for people who watch the recording later.
**Traces, spans, and context propagation:** A trace represents a single request's journey through the system. A span is one unit of work within that journey (e.g., a database query or an HTTP call). Context propagation is how the trace ID is passed between services, usually via HTTP headers.
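The propagation header format is standardized by W3C TraceContext, so it can be illustrated without any SDK. A minimal sketch of parsing a `traceparent` header (the SDK normally does this for you; the function name is illustrative):

```javascript
// The W3C TraceContext `traceparent` header carries the trace ID between
// services. Format: version-traceid-spanid-flags, all lowercase hex.
function parseTraceparent(header) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null; // malformed header: the service starts a new trace instead
  return {
    version: m[1],
    traceId: m[2],        // shared by every span in the request's journey
    parentSpanId: m[3],   // the caller's span, so the call chain links up
    sampled: (parseInt(m[4], 16) & 1) === 1, // low flag bit = "record this trace"
  };
}

// Example: a header as it might arrive on an incoming HTTP request
const ctx = parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
console.log(ctx.traceId, ctx.sampled); // -> 4bf92f3577b34da6a3ce929d0e0e4736 true
```

Because every service forwards (or extends) this header, spans emitted by different services can be stitched into one trace by their shared trace ID.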
**Auto-instrumentation vs. manual instrumentation:** OpenTelemetry's Node.js SDK provides auto-instrumentation for common libraries (Express, Axios, pg) that requires almost no code changes. Manual instrumentation is needed for custom business logic you want to trace.
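A hedged sketch of the two styles side by side, assuming the standard public OpenTelemetry Node.js packages; the service name, span name, and `renderAndSend` helper are illustrative, not taken from the talk:

```javascript
// tracing.js (sketch) -- auto-instrumentation: one-time SDK setup.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  serviceName: 'notification-service', // assumption: our service's name
  // Patches Express, http, pg, etc. as they are required; no app code changes.
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

// Manual instrumentation: for business logic the auto-instrumentation can't see.
const { trace } = require('@opentelemetry/api');
const tracer = trace.getTracer('notification-service');

async function renderAndSend(notification) { // hypothetical helper
  return tracer.startActiveSpan('notification.render', async (span) => {
    try {
      // ...custom work worth timing as its own span...
    } finally {
      span.end(); // always end the span, even on error
    }
  });
}
```

Exporter and endpoint configuration for our Tempo instance is omitted here; see the example PR in References for the real wiring.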
**Sampling:** Collecting 100% of traces is expensive at scale. The talk contrasts head-based sampling (the decision is made at the start of a request) with tail-based sampling (the decision is made after the full trace is seen) and covers when each is appropriate.
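To make the head-based case concrete, here is the principle behind deterministic trace-ID ratio sampling, a simplified sketch rather than OpenTelemetry's exact algorithm:

```javascript
// Head-based sampling decides at the root of the trace, before any work runs.
// Deriving the decision from the trace ID itself means every service reaches
// the same verdict for the same request, with no coordination.
function shouldSample(traceId, ratio) {
  // Treat the low 8 hex digits (32 bits) of the trace ID as a uniform
  // value in [0, 1) and keep the trace if it falls under the ratio.
  const bucket = parseInt(traceId.slice(-8), 16) / 0x100000000;
  return bucket < ratio;
}

// The decision is stable: the same trace ID always gets the same answer.
const id = '4bf92f3577b34da6a3ce929d0e0e4736';
console.log(shouldSample(id, 1.0)); // ratio 1.0 keeps everything -> true
```

Tail-based sampling can't work this way: it needs the whole trace buffered somewhere (e.g., in the collector) before deciding, which is why it is more powerful (keep all error traces) but operationally heavier.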
**The instrumentation code pattern:** Initialize the SDK at process startup, before requiring other modules. The order of operations matters and is a common source of "why isn't this working" confusion.
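A minimal sketch of the pattern, assuming the SDK bootstrap lives in a hypothetical `./tracing.js` file:

```javascript
// index.js: require the tracing bootstrap FIRST, before express/pg/http are
// loaded, so the SDK can patch those modules as they are required.
require('./tracing'); // hypothetical file that creates and starts the NodeSDK

// Only now pull in the libraries we want auto-instrumented.
const express = require('express');

const app = express();
app.get('/health', (req, res) => res.send('ok'));
app.listen(3000);

// Requiring express BEFORE './tracing' would load an unpatched copy, and its
// spans would silently never appear: the classic ordering bug.
```

An alternative that sidesteps the ordering question is Node's preload flag: `node --require ./tracing.js index.js` loads the bootstrap before any of the application's own requires run.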
Discussion
Key questions, debates, and insights that came up during Q&A. Capture the most useful exchanges so they're preserved in the recording notes.
Q (@priya): Does adding instrumentation meaningfully impact service performance? A: In testing, the overhead was under 2ms per request for auto-instrumented spans. The SDK is designed to be low-overhead, and sampling reduces the impact further at scale.
Q (@marcus): Can we instrument the legacy Ruby services the same way? A: OpenTelemetry has a Ruby SDK with similar auto-instrumentation. The context propagation headers are standardized (W3C TraceContext), so traces from Ruby and Node.js services can be joined in the same trace view.
Insight: Several people noted that the hardest part isn't the instrumentation itself — it's agreeing on span naming conventions so traces are searchable and comparable across services. Action: draft a naming conventions doc.
References
Links to slides, code, documentation, and further reading. This section is the durable artifact of the session.
- Slides: [link to deck]
- Recording: [link to recording — add within 24 hours]
- Example PR with full instrumentation: github.com/meridian/notification-service/pull/847
- OpenTelemetry Node.js SDK docs: https://opentelemetry.io/docs/instrumentation/js/
- W3C TraceContext spec: https://www.w3.org/TR/trace-context/
- Grafana Tempo setup guide (internal): [link to wiki]
Action Items
| Owner | Action | Due Date | Status |
|---|---|---|---|
| @sam_chen | Post recording and slides to #eng-knowledge channel | 2025-02-08 | Open |
| @priya | Draft span naming conventions doc and share for team review | 2025-02-15 | Open |
| @facilitator | Add session to brown bag index in the wiki | 2025-02-08 | Open |
| @marcus | Investigate OpenTelemetry Ruby SDK for legacy services | 2025-02-22 | Open |
Follow-up
Post the recording link and slides in the team channel within 24 hours. Add to the knowledge base index with tags so it's discoverable. Open a Slack thread for async questions from people who watched the recording later. The facilitator follows up with the presenter to ask if they'd be interested in doing a deeper dive or pairing session based on interest from the Q&A. Add any topics that surfaced during discussion to the talk backlog for future sessions.