CTO Candidate Presentation

Building an AI-First
Engineering Culture

A 90-day framework for stabilizing a talented but bruised engineering organization, aligning it to a focused product vision, and making it the most effective AI-native team in the creator economy.

Process over vibes · Measurable outcomes · Human-in-the-loop AI · Stable focus, fast execution · Culture repair
Core leadership philosophy

If you can't measure it, you can't fix it. Every engineering and product process will have a measurable outcome. Changes are hypotheses — we validate them with data, not gut feel.

On AI-assisted engineering

Engineers should not be writing raw code — AI tooling is non-negotiable. But we cannot hold an AI agent accountable. Every AI output requires a human owner who is responsible for correctness, security, and outcomes. No vibe coding. No anonymous AI output in production.

On the organization's current state

Low morale, pivot fatigue, and reluctance to adopt AI tooling are symptoms — not root causes. The root causes are unstable focus, a history of misaligned leadership, and senior engineers who were never given the tools or coaching to lead people. This plan addresses root causes first.

First 90-Day Priorities

The first 90 days are not about making big moves. They are about earning the right to make big moves — by listening, measuring, and building trust with a team that has been burned before.

Phase 1
Listen & Measure
Days 1–30
  • 1:1s with every engineer
  • Map all active systems & debt
  • Establish baseline metrics
  • Audit AI tooling adoption
  • Sit in on product planning
  • Read the last 3 roadmaps
Phase 2
Diagnose & Design
Days 31–60
  • Present findings to CEO/board
  • Define team OKRs with eng leads
  • Draft AI tooling standards
  • Identify future eng leads
  • Propose process improvements
  • Scope quick wins
Phase 3
Execute & Validate
Days 61–90
  • Launch pilot AI workflow
  • Begin people-lead coaching
  • Ship first measurable OKR
  • Establish sprint retros
  • Present 6-month roadmap
  • Review and iterate
[Figure: three-phase 90-day timeline. Phase 1 Listen (days 1–30, observe only); Phase 2 Diagnose (days 31–60, design changes); Phase 3 Execute (days 61–90, ship & validate). Milestones: baseline metrics set, OKRs + AI standards, first OKR validated. Continuous measurement loop across all phases: DORA metrics, Team NPS, AI adoption rate, cycle time, defect rate.]
Fig 1 — 90-day priority phases with continuous measurement loop
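
The continuous measurement loop only works if the baseline metrics are cheap to compute on day one. A minimal sketch of what "establish baseline metrics" means in practice (the ticket and deploy records here are hypothetical; a real version would pull from the issue tracker and deploy log):

```python
from datetime import datetime
from statistics import median

# Hypothetical work records: (started, deployed) timestamps per ticket.
tickets = [
    (datetime(2024, 1, 2), datetime(2024, 1, 5)),
    (datetime(2024, 1, 3), datetime(2024, 1, 12)),
    (datetime(2024, 1, 8), datetime(2024, 1, 10)),
]
deploys = [datetime(2024, 1, 5), datetime(2024, 1, 10), datetime(2024, 1, 12)]
failed_deploys = 1  # deploys that caused an incident or rollback

# Cycle time: elapsed days from work started to work deployed.
cycle_times = [(done - start).days for start, done in tickets]
window_days = (max(deploys) - min(deploys)).days or 1

baseline = {
    "median_cycle_time_days": median(cycle_times),
    "deploys_per_week": round(len(deploys) / window_days * 7, 1),
    "change_failure_rate": failed_deploys / len(deploys),
}
print(baseline)
```

The point is not the script; it is that every metric in the loop has an unambiguous definition before any intervention, so "did the metric move?" has a factual answer.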

On inheriting a morale problem

This team has been pushed too hard, then managed too passively, in a cycle. The playbook isn't motivation speeches — it's consistency, transparency, and visible follow-through. If I say something, I do it. If I don't know something, I say so. Rebuilding trust is a process, not an event — and it will show up in our Team NPS scores over the first 90 days.

"The best thing a new CTO can do in month one is listen more than they talk — and take extremely good notes."

Evaluating the Engineering Org

Assessment is not an audit. It's a diagnostic. The goal is to understand what this organization is actually capable of — not just what the org chart says — and to identify the gap between current state and what we need to build.

Assessment principle: measure before you prescribe

No process changes, no reorgs, no tooling mandates until we have baseline data. Every intervention needs a before and an after. Opinions are free; data costs effort and is worth it.

Four dimensions of org health

  • Delivery health: cycle time, lead time, deploy frequency; change failure rate and MTTR; sprint predictability and backlog health; on-call burden per engineer
  • Culture & people: Team NPS (anonymous, tracked monthly); attrition signals and sentiment patterns; leadership pipeline readiness; manager-to-IC coaching ratio
  • Technical health: tech debt inventory (scored, not subjective); test coverage and CI/CD reliability; incident frequency and severity; architecture fit against the product roadmap
  • AI readiness: current AI tooling in use (if any); resistance patterns and their root causes; data pipeline maturity for AI products; human-in-the-loop accountability gaps
Fig 2 — Four-dimension engineering org assessment model

Assessing the known issues

Senior engineers with no people-leadership experience

Exceptional technical talent is an asset, not a liability — but only if we invest in developing the non-technical skills around it. I will identify the 2–3 engineers with the highest leadership potential in month one, and build a structured coaching program with clear expectations and feedback loops. Promotion to leadership is earned, not assumed based on tenure.

AI tooling resistance

Resistance to AI tooling is not laziness — it's usually fear (job security), skepticism (quality concerns), or prior bad experience (rushed tooling mandates). I will survey the specific objections before proposing solutions. The mandate is clear — engineers don't write code without AI assistance — but the path to adoption is led by demonstration and data, not top-down diktat.

Exceptional technical talent

This is the most important asset we have. The job is to remove the organizational dysfunction that is stopping this talent from doing its best work, not to replace or re-hire. We retain and grow from the inside.

Key Risks & Early Gaps

These are not hypothetical risks — they are patterns already visible in the company profile that will surface as blockers if not addressed deliberately and early.

Risk
Description & mitigation
Priority
Pivot fatigue → org churn
Frequent focus pivots destroy sprint velocity, erode confidence in leadership, and cause the best engineers to leave. Mitigation: establish a change-control process for roadmap pivots that requires quantified impact assessment before engineering is redirected.
High
Leadership vacuum in eng
Senior ICs who've never managed people will be promoted into leads with no support structure. They will fail — not from lack of skill but lack of scaffolding. Mitigation: structured people-lead coaching program starting month two.
High
AI tooling non-adoption
A team that refuses AI-assisted development will be outpaced by every comparable team within 18 months. Mitigation: a clear, phased AI-workflow adoption plan backed by support structures rather than punitive mandates. Track adoption rate weekly.
High
Unaccountable AI output
The fastest route to a production AI incident is AI-generated code with no named human owner. Mitigation: every AI-generated artifact — code, architecture decisions, test plans — has a named engineer who signs off and is accountable for it.
High
Two-product complexity
AI tools for game development and AI video generation are adjacent products with different engineering demands. Shared infra decisions made too early will constrain both. Mitigation: assess platform overlap in month one before committing to a shared-platform strategy.
Medium
CTO credibility deficit
The team has been burned by leadership before. Any gap between what I say and what I do will close the door permanently. Mitigation: explicit 30/60/90 commitments shared with the team on day one, with public progress check-ins.
Medium
Morale → attrition spiral
Low morale increases attrition; attrition increases workload; increased workload decreases morale. This spiral is self-reinforcing. Mitigation: Team NPS tracked monthly from day one — if it doesn't trend up by day 60, the diagnosis is wrong and the plan needs to change.
Medium
Risk management loop: Identify (surveys + 1:1s) → Measure (baseline metrics) → Mitigate (targeted process change) → Validate (did the metrics move?). If the metrics didn't move, the diagnosis was wrong: return to Identify. Every risk mitigation is a hypothesis. We validate it, or we update the hypothesis.
Fig 3 — Risk management as a continuous measurement loop

Aligning Engineering with Product & Company Goals

The biggest source of engineering waste in a pivot-heavy company is building things twice — once for a direction that got abandoned, and once for the new direction. The antidote is tight feedback loops between product decisions and engineering costs, before commitments are made.

The pivot problem: focus is a resource

Every pivot consumes engineering capital: context-switching cost, abandoned work, re-architecture, and team motivation. My job as CTO is to make the cost of pivoting visible — not to prevent pivots, but to ensure they are made with full information. A pivot that would cost 6 engineer-months in rework is a different decision than one that costs 6 hours.

Alignment flow: company goals (CEO + board) → product vision (what we're building and why) → tech strategy (platform, AI, and infra bets) → engineering OKRs (measurable team outcomes) → sprint planning and execution (every ticket traceable to an OKR) → outcome measurement (did delivery move the metric?), with a feedback loop back up to company goals.
Fig 4 — Company goals → product vision → engineering OKRs → sprint execution → measurement feedback loop

Making pivot costs visible

Roadmap change protocol

  • Any mid-sprint scope change requires a cost estimate
  • CTO co-signs changes above 2 sprint-days
  • Pivot impact logged and reviewed quarterly
  • Historical pivot cost visible to leadership
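
The cost estimate in this protocol can be a simple, repeatable calculation rather than a negotiation. A minimal sketch, where the context-switch overhead figure is an illustrative assumption to be replaced with measured data:

```python
def pivot_cost_engineer_days(abandoned_days: float,
                             engineers_redirected: int,
                             rearchitecture_days: float = 0.0,
                             context_switch_days_per_eng: float = 2.0) -> float:
    """Rough engineer-days consumed by a roadmap pivot.

    abandoned_days: completed work that will be thrown away
    rearchitecture_days: rework needed to fit the new direction
    context_switch_days_per_eng: ramp-down/ramp-up overhead per
        redirected engineer (assumed ~2 days; calibrate from history)
    """
    switching = engineers_redirected * context_switch_days_per_eng
    return abandoned_days + rearchitecture_days + switching

# A pivot that scraps 30 days of work, needs 10 days of rework,
# and redirects 5 engineers:
cost = pivot_cost_engineer_days(abandoned_days=30, engineers_redirected=5,
                                rearchitecture_days=10)
print(cost)  # 50.0 engineer-days, well above the 2 sprint-day co-sign threshold
```

Logging this number for every pivot is what makes historical pivot cost visible to leadership, per the protocol above.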

Product-engineering sync cadence

  • Weekly: sprint planning + product lead present
  • Bi-weekly: OKR review against delivery
  • Monthly: roadmap horizon check (rolling 3mo)
  • Quarterly: strategy + OKR reset with CEO

The game platform and the video platform share a challenge: fast-moving AI capabilities

Both products depend on AI models that are evolving faster than traditional software. I will establish a dedicated AI integration practice — a small, rotating team of 2–3 engineers whose job is to track model capabilities, prototype integrations, and bring them to the product teams. This keeps the CEO from being the only person watching the horizon.

AI-First Engineering Organization

An AI-first organization is not one where AI does everything — it's one where every engineer is amplified by AI tooling, and the organization is structured to capture that leverage systematically rather than individually.

"AI doesn't replace engineers. It raises the floor. A great engineer with AI is a force multiplier. A mediocre engineer with AI is faster at making mediocre things. We hire and develop for judgment, not typing speed."

What AI-first actually means here

No engineer writes code without AI assistance

This is a firm standard, not a suggestion. The tools (Cursor, GitHub Copilot, or equivalent) are provided, onboarded, and supported. Resistance is addressed through support and evidence first — but the destination is not optional.

Every AI output has a named human owner

AI-generated code that passes review is signed off by an engineer who is accountable for it. This isn't bureaucracy — it's accountability architecture. We can't fire a GPT instance for shipping a security vulnerability. The engineer who approved it is responsible.

Workflow: ticket (human writes the spec) → AI generates code, tests, and docs → human review (named owner signs off; rejected output goes back to the AI with human feedback) → CI and automated test gates → deploy (attributed PR). Accountability architecture: every commit records the AI tool used, the named human reviewer, a ticket link, and an OKR reference. Incidents are traced to the approving engineer. No anonymous AI output in production.
Fig 5 — Human-in-the-loop AI workflow. The review gate is mandatory; the human owner is accountable.
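
This accountability architecture can be enforced mechanically in CI rather than by convention. A minimal sketch of a commit-message gate, assuming trailer names `AI-Tool`, `Reviewed-by`, and `Ticket` as an illustrative convention (not an existing standard):

```python
import re

# Illustrative trailer names; adopt whatever convention the team standardizes on.
REQUIRED_TRAILERS = ("AI-Tool", "Reviewed-by", "Ticket")

def check_commit(message: str) -> list[str]:
    """Return the missing accountability trailers (empty list = gate passes)."""
    return [t for t in REQUIRED_TRAILERS
            if not re.search(rf"^{t}: \S+", message, re.MULTILINE)]

msg = """Add export pipeline for video renders

AI-Tool: cursor
Reviewed-by: jane@example.com
Ticket: ENG-142
"""
print(check_commit(msg))          # [] (all trailers present)
print(check_commit("Quick fix"))  # ['AI-Tool', 'Reviewed-by', 'Ticket']
```

Wired into CI as a required check, this makes "no anonymous AI output in production" a build failure rather than a policy document.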

How team structure shifts in an AI-first org

Engineering leverage model — target state

  • AI tooling adoption: target 95%
  • Code review of AI-generated code: 100% mandatory
  • Test automation rate: target 80%+
  • Eng-to-product ratio: fewer, better

Roles that grow in an AI-first org

  • AI integration engineers — own the model pipeline layer
  • Prompt architects — design AI interaction models for products
  • Accountability leads — own the review + sign-off process
  • Platform engineers — tooling infrastructure for the team

Roles that change or reduce

  • Boilerplate / CRUD engineers → retrained or redeployed
  • Manual QA → shifts to test design + AI test orchestration
  • Documentation writers → AI-assisted, human-verified
  • Junior devs → fewer, higher leverage from day one

On staying current: the CEO shouldn't be the only one watching the AI horizon

I will personally track AI model releases, tooling advances, and competitor adoption every week — and publish a monthly "state of AI" briefing to the engineering org and leadership team. An AI-first creator platform that falls behind the AI curve is not AI-first — it's AI-past. This is a core CTO responsibility, not a side project.