SLIDE 1 — Act 1: Why This Matters
Speaker notes:
1. Welcome, introduce yourself
2. Frame the session: practical strategies, not a product pitch
3. "By the end of this, you'll have concrete things to take back to your projects"
SLIDE 2
Speaker notes:
- Don't linger — the audience wants to know HOW to use it, not WHAT it is
1. Quick context: this exists, it's free for maintainers, here's what's in it
2. Mention eligibility: verified against maintainers.cncf.io
---
- Blog announcement: [12] contribute.cncf.io
- If they haven't applied yet, the link is on the references slide at the end
SLIDE 3
Speaker notes:
1. This is the through-line for the entire talk
2. Every tool I'm about to show you keeps the human in the loop
---
- Source: Brynjolfsson, Chandar & Chen, "Canaries in the Coal Mine?" (Nov 2025), pp. 11, 16: https://digitaleconomy.stanford.edu/app/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf
- The 16% figure is from p. 16 (Fact 4): "most AI-exposed occupations" after firm-time controls
- The automate vs. augment split is from p. 11 (Fact 3, Figure 3 Panels B & C)
- If challenged: "The 16% is about exposed occupations broadly; the paper separately shows the decline concentrates in automative applications, not augmentative ones."
SLIDE 4 — Act 2: The Toolkit
Speaker notes:
- This is a quick visual overview — don't spend more than 90 seconds here
1. The progression left to right is: increasing autonomy, decreasing real-time oversight
2. Most maintainers are already using completions and chat
3. New column: Agentic Workflows (technical preview, Feb 2026) — the "Continuous AI" layer
   - Think of it as CI/CD for AI-assisted maintenance: triage, docs sync, test improvement
   - Authored in Markdown, executed by coding agents in GitHub Actions
   - Read-only by default with safe outputs — strong guardrails by design
---
- Sets up the next three slides
SLIDE 5
Speaker notes:
1. This is the "senior dev pair programming with you" metaphor
2. Key point: you're in the loop the whole time — this is augmentation
3. Agent mode supports multiple models — mention the model picker briefly
SLIDE 6
Speaker notes:
---
- MCP = Model Context Protocol, an open standard for connecting AI to external tools
- [2] = GitHub blog on MCP, add to references
SLIDE 7
Speaker notes:
1. This is the "diligent teammate clearing the backlog" metaphor
2. Emphasize: it runs YOUR CI pipeline, YOUR linters, YOUR tests — it follows your rules
---
- Built-in security: CodeQL, secret scanning, dependency checks run automatically [3]
- Each run consumes GitHub Actions minutes AND premium requests — be strategic
- [3] = GitHub changelog on security validation
SLIDE 8
Speaker notes:
1. Stress: pick something small and well-tested for the first try
SLIDE 9
Speaker notes:
1. "Continuous AI" — GitHub Next's framing, parallel to CI/CD
2. Guardrails: read-only by default, safe outputs constrain what the agent can do (specific labels only, title-prefixed PRs only, never merges)
---
- [10] = GitHub blog on agentic workflows
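- Backup example to have ready if someone asks what an agentic workflow file actually looks like. This is a sketch, not copied from the docs: the frontmatter keys (`on`, `permissions`, `safe-outputs`) follow the pattern described in [10], but the exact schema and the cron/label values here are illustrative — verify against the current agentic workflows docs before showing it live.

```markdown
---
# Illustrative frontmatter; check the agentic workflows docs for the real schema
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly, Monday morning
permissions: read-all      # read-only by default
safe-outputs:
  create-issue:            # the only write the agent is allowed
    labels: [triage]
---

# Weekly triage sweep

Look through open issues that have no labels. For each one, suggest an
appropriate label and a one-line summary. File a single roll-up issue
with your recommendations. Do not modify any issue directly.
```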
SLIDE 10
Speaker notes:
1. Don Syme's Repo Assist is a concrete example: cleared over half the technical debt across 4 F# repos in a weekend using this
---
- [11] = Don Syme's blog post on Repo Assist
SLIDE 11
Speaker notes:
1. This is the "peanut butter and jelly" slide — they're meant to be used together
2. Key takeaway: agent mode for novel work, coding agent for well-defined work, agentic workflows for the recurring stuff nobody wants to do manually
---
- Cite: [4] GitHub blog on agent mode vs coding agent
SLIDE 12
Speaker notes:
1. This is where the talk gets actionable — these are files they can add TODAY
SLIDE 13
Speaker notes:
1. copilot-instructions.md: the first 4,000 characters matter most for code review
2. SHOW a real example from the companion repo here
---
- You can also use path-specific instructions (.github/instructions/*.instructions.md) with applyTo frontmatter for rules that only apply to certain file types/paths
- [5] = GitHub docs on custom instructions
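- If the companion repo example isn't ready, a minimal path-specific instructions sketch works as a fallback. The `applyTo` frontmatter key is the documented mechanism [5]; the glob and the Go-specific rules below are made up for illustration.

```markdown
---
applyTo: "**/*.go"
---

<!-- Rules below apply only to files matching the glob above -->
- Wrap errors with fmt.Errorf and %w; never discard them
- Run gofmt on any file you touch
- Prefer table-driven tests for new test files
```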
SLIDE 14
Speaker notes:
1. AGENTS.md at repo root: think of it like onboarding instructions for a new contributor — general rules that apply to everything
2. Custom agents in .github/agents/ extend AGENTS.md with specialist personas — they inherit the base rules but add their own scope and constraints
3. All agent files live in your repo, versioned alongside your code — easy to review in PRs
---
- AGENTS.md is an open standard (AAIF/LF), not GitHub-specific — same file works with other tools
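- A minimal AGENTS.md sketch to have on hand. The standard deliberately leaves the format as free-form Markdown, so the headings here are just one plausible shape, and the build commands are hypothetical placeholders.

```markdown
# AGENTS.md

## Build and test
- `make build` compiles; `make test` runs the full suite (hypothetical commands)

## Conventions
- Every PR needs a CHANGELOG.md entry
- Never edit generated files under api/gen/

## Boundaries
- Ask before adding new dependencies
- Keep changes scoped to the issue at hand
```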
SLIDE 15
Speaker notes:
1. Don't go too deep on MCP setup — point to the docs, show the companion repo example
2. IMPORTANT NUANCE on cross-repo:
   - MCP gives agent mode "eyes" into other repos (search, read, check status) via the GitHub API
   - But the actual code edits from agent mode still happen in your local workspace
   - The coding agent works within ONE repo per task — it doesn't hop between repos
3. For the audience: "Think of MCP as giving Copilot read access across your org, while the coding agent handles the writes one repo at a time"
---
- Cross-repo workflow in practice: use MCP in agent mode to investigate across repos, then file separate issues for the coding agent in each repo
- Neither tool does true "edit repo A and repo B in one atomic operation"
- MCP is configured via .vscode/mcp.json — the GitHub MCP server is first-party
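- Backup snippet for the MCP setup question: a minimal `.vscode/mcp.json` pointing at the first-party GitHub MCP server. The `servers`/`type`/`url` shape matches the VS Code MCP config format as I understand it, but double-check the server URL against the current GitHub MCP docs before demoing.

```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```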
SLIDE 16
Speaker notes:
1. This is where you tie security back to augmentation: the tooling catches a lot, but YOU are the reviewer
2. The security checks run without extra licensing — included in Enterprise
---
- [3] = GitHub changelog on security validation
- TODO: Review the CodeQL docs to be able to explain it in detail if asked: https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql
  - Key points: static analysis engine, treats code as queryable data, finds security vulns (not style), free for public repos, supports Go/Python/JS/TS/Java/C++ and more
SLIDE 17
Speaker notes:
1. IP considerations: Microsoft provides indemnification for unmodified Copilot suggestions, but maintainers should still review for license compatibility
2. LF policy sets licensing/attribution minimums — projects can add stricter rules, not waive those. If challenged: "The minimums around licensing and attribution aren't optional."
---
- Invisible character attacks are a known research concern — compromised upstream code could influence suggestions. Good reason to review carefully.
- [8] = LF generative AI policy
SLIDE 18
Speaker notes:
1. SET THE SCENE: KubeStellar Console had massive contribution volume — 1,300+ PRs in 5 weeks, 229 issues filed. Small maintainer team couldn't keep up manually.
2. WALK THE NUMBERS: 952 PRs merged total. 75% of issues were auto-detected by CI workflows — not humans finding them. Copilot generated 101 PRs that got merged.
3. LAND THE PUNCHLINE: 52% acceptance rate. Tie back to Slide 3 (augmentation): "That's not a failing grade. It means half the work was fully handled, and for the other half, Copilot gave reviewers a starting point instead of a blank page. That's augmentation."
4. CREDIT: "This work was done with Andy Anderson and Ashley Wolf."
---
- If time is tight, speed through the numbers and focus on the 52% takeaway
- DCO NOTE (for your info, not on slide): DCO + AI-generated code is still actively debated.
  - The core issue: DCO requires you to certify you have the right to submit the code, but the U.S. Copyright Office says purely AI-generated works aren't copyrightable. So who signs?
  - The community is splitting: some require disclosure trailers (Assisted-by:), some require human review/transformation, and some projects (QEMU) have restricted pure AI contributions.
  - A March 2026 blog post, "DCO and AI is a no-go" (brokenco.de/2026/03/02/copyright-ai.html), lays out the argument directly.
  - Red Hat also has a good writeup: redhat.com/en/blog/ai-assisted-development-and-open-source-navigating-legal-issues
SLIDE 19
Speaker notes:
1. This is the pipeline diagram from the KubeStellar work
2. Walk through each step briefly — the audience should see how it maps to the tools you just explained (coding agent = the auto-fix, code review = CI validates)
3. The key insight: the loop is fully automated, but a human is always the final gate
4. Encourage the audience: "You don't need 37 workflows to start. Pick one pain point, write one workflow, and let the coding agent handle the fixes."
---
- "Goodnight agent" example: an agent that runs nightly to update documentation
- This is the bridge to Act 3 — "here's how to start"
SLIDE 20 — Act 3: Your Next Move
Speaker notes:
1. Keep this punchy — these are concrete, low-risk actions
2. Step 1 is for anyone who hasn't applied yet
3. Step 2 is the lowest-effort, highest-value thing — even without the coding agent, copilot-instructions.md improves every Copilot interaction in that repo
4. Step 3 is where they see the magic — but stress: pick something small and well-tested for the first try. Don't start with a massive refactor.
---
- If they already have access, they can skip to steps 2 and 3
SLIDE 21
Speaker notes:
1. Open for Q&A — aim for ~5 minutes
---
- Likely questions to prep for:
  - "How does this work with CLAs/DCOs?" (see your Slide 17 and 18 notes)
  - "What about CodeQL?" (see your Slide 16 notes)
  - "Can I use this with [other AI tool]?" — MCP is an open standard, AGENTS.md is too
  - "What models does it use?" — model picker lets you choose; Auto mode recommended
  - "Is my code used for training?" — No, Enterprise code is not used for training
SLIDE 22
Speaker notes:
1. This is the "take a photo of this slide" moment
2. All the references are also on the next slides with full citations
---
- TODO: Add companion repo section back once repo is created. Include working examples of copilot-instructions.md, AGENTS.md, custom agents, and MCP config — they can fork and adapt
SLIDE 23
SLIDE 24
Speaker notes:
1. This slide positions the talk as "not a GitHub sales pitch" — the standards are open
2. AAIF is to agentic AI what CNCF is to cloud infrastructure
---
- Goose is interesting but out of scope for today — it's a local agent framework, model-agnostic (works with any LLM), Apache 2.0 licensed
- If someone asks about Goose: "It's an open-source alternative to agent mode that runs on your machine with whatever model you choose. Worth exploring, but today we're focused on what's included in the CNCF's Enterprise bundle."
- AAIF had 146 member organizations by Feb 2026 — growing fast
- David Nalley (AWS, former CNCF experience) chairs the governing board
- [9] = LF press release on AAIF formation
SLIDE 25 (References 1/4)
SLIDE 26 (References 2/4)
SLIDE 27 (References 3/4)
SLIDE 28 (References 4/4)
SLIDE 29 (Colophon)