10 months. 43 projects. A product leader who didn't wait for permission to reshape how a software organization works.
I genuinely enjoy watching people who work hard — people taking tangible steps to get better — succeed and grow. We're all at different points in our lives and careers, and when you find someone who is at a moment of real growth and you get to support that, it's one of the most rewarding things about leadership.
That instinct shows up in how I build teams, how I work with engineers, and why I invest in tooling and frameworks that make the people around me more capable. The best version of this job isn't about the product — it's about the people building it.
The work in this portfolio reflects the last 10 months. Behind it is 20+ years — a 17-year career at Thomson Reuters managing platforms deployed to 60,000+ financial advisor desktops and $100M+ contract implementations, followed by six years at a leading health and wellness company growing a commercial product line from $100M to $144M in annual revenue.
The core responsibility of Product Management is to maximize the value of the Product Engineering team. That means ensuring they are always focused on the most important thing — and that they have the context needed to devise the best solutions given the constraints they're working within.
A team building the wrong thing perfectly is failing. A team building the right thing with incomplete context will produce the wrong solution. Both are PM failures, not engineering failures.
I am excited by the prospect of giving teams the outcomes we're after and asking them to devise the solutions, along with proof that they succeeded. The best engineers are problem solvers. When you hand them a solution to implement rather than a problem to solve, you leave most of their value on the table.
The delivery of software should never be the end of the conversation. What changed? Who benefited? What did we learn? Those questions close the loop and make the next cycle better.
- **The Spark** — pushing for access, then proving what's possible
- **AI Integration** — connecting models to real business workflows
- **Team Productivity** — tools that give whole teams visibility
- **Org Capability** — systems that change how the org works
This portfolio spans June 2025 through April 2026 — roughly 10 months — and traces a VP of Product's evolution from advocating for AI tooling access to reshaping how an entire product organization works.
The journey didn't start with code. It started with a request. When the company began distributing a limited number of Amazon Q Developer licenses, it created an opening — Kiro had just launched as an agentic AI IDE built on the same AWS infrastructure, making it a natural and defensible pathway to access. By July, adoption and evangelism were underway. By August, AI tools were being built at a pace that drove the organization into the top 100 enterprise customers by Kiro usage within months of the tool's public launch.
What followed was a compounding loop: build a tool, prove its value, use what you learned to build something more ambitious. Early projects were single-purpose browser tools and React SPAs. Later projects were full-stack platforms with property-based test suites, undocumented API integrations, infrastructure-as-code, and Figma plugin development.
By April 2026, the output wasn't just tools — it was an organizational framework: a tiered design repo template spawning 6+ active initiative repos, an AI-First SDLC playbook projecting 58% faster cycle times, and a product organization that treats requirements as code and AI as infrastructure.
The distinguishing characteristic throughout: every tool was built with the same discipline a good PM applies to product development — PRDs, business cases, architecture decisions, test suites, deployment guides. Not side projects. Organizational assets.
A zero-dependency, browser-based tool for creating interconnected system diagrams using hexagonal components. Canvas API rendering with tessellation mathematics, drag-and-drop, multiple layout algorithms, category color coding, and PNG export. ~1,300 lines of vanilla JS — open index.html and go.
We were pitching product and org changes that required showing how multiple components could combine into something new. PowerPoint is the dominant language internally, but honeycomb diagrams are notoriously fiddly to build and align. Rather than wrestling with slide tools, I built a webpage to create the images, rearrange components, group by color, and guarantee alignment — then export directly into a deck.
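The alignment guarantee comes from deriving every hexagon's position from grid math rather than hand-placing shapes. A minimal sketch of flat-top hex tessellation, with function names that are illustrative rather than taken from the tool:

```javascript
// Illustrative sketch of flat-top hexagon grid math (not the tool's actual code).
// A flat-top hexagon with circumradius `size` tessellates in offset columns:
// columns sit 1.5 * size apart, rows sqrt(3) * size apart, and odd columns
// shift down by half a row.
function hexCenter(col, row, size) {
  const x = 1.5 * size * col;
  const y = Math.sqrt(3) * size * (row + 0.5 * (col % 2));
  return { x, y };
}

// Trace the six corner points, ready for ctx.lineTo() on a Canvas 2D context.
function hexCorners(cx, cy, size) {
  const pts = [];
  for (let i = 0; i < 6; i++) {
    const angle = (Math.PI / 3) * i; // flat-top: corners at 0°, 60°, ... 300°
    pts.push({ x: cx + size * Math.cos(angle), y: cy + size * Math.sin(angle) });
  }
  return pts;
}
```

Because every component's position derives from a (col, row) pair, alignment is exact by construction rather than eyeballed the way it is in slide tools.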
An Opportunity Solution Tree builder — a full React 18 SPA with component architecture, multi-workspace tabs, SVG/PNG export, and Atlassian Confluence/JPD integration for cloud save/load. Shipped with a complete PRD.
My company still operates as a feature factory. I wanted to introduce outcome-based thinking — heavily influenced by Teresa Torres and Continuous Discovery — and create a way to synthesize customer interview feedback into opportunity trees that leadership could actually engage with. Getting buy-in required something visual and shareable, not a document.
A full-stack web application built to visualize subscription KPIs and metrics for a weekly Operations and Intelligence meeting. Processes Excel-based reporting data, tracks Paid/New/Termed/NetNew subscribers across healthcare clients, and generates PowerPoint-ready outputs. Later evolved into the Optimization Dashboard with AI-powered insights layered on top.
The weekly Ops & Intel meeting for our monthly subscription product had no consistent, visual view of performance. Leaders were making decisions from raw spreadsheets. This gave the meeting a shared source of truth — and established the pattern of turning operational data into something a room full of people could actually reason about together.
A 5,279-line single-page web application proposing a $1.4M AI Center of Excellence. Interactive Gantt chart, org chart visualizations, financial models, Year 1 OKRs, and implementation roadmap. Projects $2.5M–$6M in value within 12 months (180–330% ROI). Three-year projection: $19M–$36M in value creation.
My company had run a series of disconnected, low-value AI science experiments with no coordination or measurable outcomes. I wanted to make the case for a comprehensive Center of Excellence — one that would align priorities, establish governance, build with discipline, and measure results. The proposal needed to be credible enough to survive a CFO and risk/compliance review, so I built it as a full financial model with ROI projections, an org design, a phased roadmap, and Year 1 OKRs.
A feature configuration matrix for comparing customer segments (Enterprise, SMB, Individual, Partner). 2,366 lines of JS with gradient scales, a business rules engine, and a multi-scenario revenue calculator with seasonal growth modeling, churn calculations, and 36-month projections across 3 scenarios.
Leadership conversations about where to invest across multiple client segments had become circular — the same debates repeating without resolution. By building condition-specific revenue growth models that updated in real time based on segment, feature set, and growth assumptions, I could bring the conversation back to data. The goal wasn't to be right — it was to make the tradeoffs visible enough that decisions could actually happen.
A general-purpose AI business assistant powered by AWS Bedrock with chat interface, file upload, and multi-modal analysis. Handles spreadsheets, images, PDFs, PowerPoints. Full Pulumi IaC for AWS deployment (Lambda, S3, API Gateway, CloudWatch, WAF). Business case: $975K annual productivity savings, 572% projected first-year ROI.
This started with a conversation with our CFO at an executive offsite. He wanted help justifying AI tool licenses for colleagues, but I knew the path through risk and compliance would be long and uncertain. So I built my own chat assistant — no proprietary data, no internet access — using AWS Bedrock as an already-approved internal pathway. Through a detailed system prompt, I tuned the model's behavior at inference time to act as a company-context assistant: drafting emails, interpreting spreadsheets, building presentations. It was my first production use of in-app AI API calls, and it got AI into colleagues' hands months before any vendor license would have.
The next evolution of the Ops & Intelligence Dashboard — same subscription metrics foundation, now with an AI-powered insight layer. Analysts can upload budget spreadsheets and ask natural language questions about performance. Client names are anonymized before any data is sent to the model, keeping sensitive information out of the AI layer entirely.
Once the dashboard was in use, the next question was obvious: what does it mean? Adding AI-generated narrative on top of the visualizations let the meeting shift from "what are the numbers" to "what should we do about them." The anonymization approach was deliberate — it was the fastest way to get AI insights into a compliance-sensitive environment without opening a lengthy review process.
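One way to implement that anonymization is a reversible token map applied before any text leaves for the model. This is a hypothetical sketch, not the dashboard's actual code:

```javascript
// Hypothetical sketch of pre-model anonymization (illustrative only).
// Client names are swapped for stable tokens before any text is sent to
// the model; the returned map reverses the substitution in the response.
function anonymize(text, clientNames) {
  const reverse = new Map(); // token -> real name
  let out = text;
  clientNames.forEach((name, i) => {
    const token = `Client ${String.fromCharCode(65 + i)}`; // Client A, B, ...
    reverse.set(token, name);
    out = out.split(name).join(token);
  });
  return { text: out, reverse };
}

// Restore real names in the model's answer before showing it to analysts.
function deanonymize(text, reverse) {
  let out = text;
  for (const [token, name] of reverse) out = out.split(token).join(name);
  return out;
}
```

The key property is that sensitive identifiers never cross the AI boundary, while the analyst still reads answers in terms of real client names.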
A multi-tenant dashboard connecting to Jira to track Flow Metrics, OKRs, time tracking, and team capitalization across engineering squads. 40+ API endpoints, 30+ TypeScript interfaces. Scope creep detection via automatic snapshot comparison. Admin interface with team management and status mapping wizard. Built in 4 weeks vs. a 12-week traditional estimate. 727 tests across 67 suites, including 12 property-based suites.
My company had no clear picture of where engineering dollars were going or how efficiently they were being spent. After reading *Project to Product* and studying the Flow Framework, I started building both a top-down and bottom-up view of what teams were actually doing and the value they were creating. The dashboard lets you define teams, assign members, pull features and OKRs from Jira Product Discovery, and correlate them with epics, stories, and time-tracked tasks from Jira Software. The result: flow distribution, velocity, flow time, flow efficiency, and cost per point — visualized across multiple cuts of the data. Approved for a pilot with the Commercial Engineering team in 2026 after executive presentation.
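The flow arithmetic itself is simple once the Jira data is normalized. A hedged sketch of the calculations, with field names that are assumptions rather than the dashboard's real schema:

```javascript
// Illustrative flow-metric arithmetic (field names are assumed, not the
// dashboard's actual schema). Each completed item carries its story points,
// total elapsed flow time, and the portion of that time actively worked.
function flowMetrics(items, costPerSprint) {
  const points = items.reduce((s, it) => s + it.points, 0);
  const totalFlowDays = items.reduce((s, it) => s + it.flowDays, 0);
  const activeDays = items.reduce((s, it) => s + it.activeDays, 0);
  return {
    velocity: points,                           // points completed this sprint
    avgFlowTimeDays: totalFlowDays / items.length, // mean elapsed days per item
    flowEfficiency: activeDays / totalFlowDays, // active work / total elapsed
    costPerPoint: costPerSprint / points,       // fully loaded cost per point
  };
}
```

For example, two items at 5 and 3 points, taking 10 and 6 elapsed days with 4 and 2 active days, against an $8,000 sprint cost, yield a velocity of 8, flow efficiency of 37.5%, and $1,000 per point.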
Automates sprint reporting — pulls Jira data, calculates metrics, generates AI narratives, publishes to Confluence. Uses point-in-time values from the undocumented Greenhopper Sprint Report API for accuracy. Pre-computes sprint-over-sprint deltas so the AI doesn't do arithmetic. Batch update system retroactively improved 40+ historical pages. 1,739 tests across 38 suites, including 12 property-based suites using fast-check.
The CPO started asking for sprint review artifacts, and I didn't want my PMs spending cycles on that. The tool works across all teams — each team queries for their sprint's stories, captures velocity and scope metrics, and sends story descriptions to the model for a natural language summary. Results publish automatically to Confluence for leadership. It has since grown to include Scrum of Scrum updates that Product and Engineering review every sprint — giving leadership a consistent, low-effort view of delivery without taxing the people doing the work. Now used by every engineering team each sprint.
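Pre-computing deltas keeps the model away from arithmetic, which language models handle unreliably; it only narrates figures that code has already produced. A hypothetical sketch of the approach:

```javascript
// Hypothetical sketch of the pre-computed delta pattern (illustrative, not
// the tool's actual code). Every number the model will mention is calculated
// in code first; the model only turns finished figures into narrative.
function sprintDeltas(current, previous) {
  const deltas = {};
  for (const key of Object.keys(current)) {
    const diff = current[key] - previous[key];
    deltas[key] = {
      current: current[key],
      previous: previous[key],
      change: diff,
      pctChange: previous[key] === 0 ? null : (diff / previous[key]) * 100,
    };
  }
  return deltas;
}

// The prompt then embeds finished figures, e.g.
// "Velocity: 42 (previous 38, change +4, +10.5%)", never raw inputs to subtract.
```

The same pattern applies anywhere an LLM summarizes metrics: compute in code, narrate with the model.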
Imports ideas from Jira Product Discovery, visualizes as interactive timelines, tracks changes between versions, exports to Confluence and Figma Slides. Change tracking engine detects New, Moved Up, Moved Out, Removed. Figma Slides plugin generates Board Meeting decks programmatically — cover, executive summary, quarterly roadmap, investment theme heatmap, competitive posture, change tracking. Brand theming across all exports. 771 tests across 56 suites, including 20 property-based suites.
Keeping roadmap artifacts synchronized across audiences — steering committee, sales, operations — was a manual, error-prone process. This tool pulls in the prior roadmap, syncs it against Jira Product Discovery, and generates three tailored artifacts that highlight what changed and why. Steerco gets the full picture; sales gets a redacted view; Ops & Intel gets the operational lens. Change tracking means no one has to manually diff versions — the tool does it.
A knowledge base and implementation framework covering how AI transforms the entire software development lifecycle — from ideation and discovery through requirements, development, QA, and production measurement. Three workflow paths based on feature complexity, an agentic workflow roadmap, requirements-as-code principles, and AI-powered test automation via Playwright and MCP integrations.
I wanted to advance a specific point of view: that AI isn't just a coding tool, it changes how the entire SDLC operates. This repo became the conceptual foundation that led directly to the product template repo — the thinking came first, the infrastructure followed.
A reusable template repo that can be cloned to start any new product initiative. On clone it lays down MCPs, steering files, example docs, and templates — everything needed to begin working immediately with AI-assisted tooling. Supports two modes: greenfield work, or a Tier 2 configuration where a production code repo is cloned alongside it to give the AI context about the existing landscape. Full Jira and Confluence integration pushes all work into the systems the organization already operates in. The template itself is configured as an upstream remote, so improvements can be pulled down to any downstream repo as the template evolves.
This is the practical instantiation of the SDLC vision — a system any PM or designer can pick up and use without needing to configure anything. The upstream remote pattern is a deliberate architectural choice: it means the template improves over time and every initiative that uses it gets better automatically. It's not just a starting point, it's a living standard. Already the foundation for 6+ active initiative repos spanning gamification, eligibility codes, Spanish language support, global privacy control, reseller distribution, and member journey documentation.
A design and prototyping workspace for gamification features (badges, streaks, missions, challenges) built as a React component library with Storybook, aligned to production member portal patterns. Full TypeScript type system, React Context provider, API layer following production REST conventions, GTM tracking, and detailed requirements with Jira epics and acceptance criteria.
This was the first real initiative to use the product template repo end-to-end — a core business feature worked through entirely with AI from requirements creation through component build and engineering handoff. The hypothesis: by investing heavily in upfront requirements clarity with AI assistance, the team would pick up better-defined work and ship with fewer defects.
A design repo for a core business feature — single-use eligibility codes distributed by employers as an alternative member enrollment path. Complete 5-phase lifecycle documentation, 12 Jira stories with full acceptance criteria, DynamoDB single-table design with GSI queries, rate limiting, security logging, 6 Lambda packages specified, and documented conditional writes to guard against race conditions.
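Single-use redemption is a classic race condition: two members can submit the same code at the same moment. A DynamoDB conditional write lets the database arbitrate atomically. A hypothetical sketch of the request parameters, with table and attribute names that are assumptions rather than the spec's actual schema:

```javascript
// Hypothetical DynamoDB UpdateItem parameters for single-use code redemption
// (table and attribute names are assumed, not the spec's actual schema).
// The ConditionExpression makes DynamoDB reject the second concurrent
// redemption atomically; no application-level locking is required.
function buildRedeemParams(code, memberId) {
  return {
    TableName: "eligibility",
    Key: { pk: { S: `CODE#${code}` } },
    UpdateExpression: "SET redeemedBy = :m, redeemedAt = :t",
    ConditionExpression:
      "attribute_exists(pk) AND attribute_not_exists(redeemedBy)",
    ExpressionAttributeValues: {
      ":m": { S: memberId },
      ":t": { S: new Date().toISOString() },
    },
  };
}
// A losing concurrent request receives ConditionalCheckFailedException, so
// the member sees "code already used" instead of a double redemption.
```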
This was a business-critical feature that needed to move fast. Using the template repo and AI-assisted requirements creation, I was able to produce a thorough, implementation-ready specification in a matter of hours — work that would typically take days of back-and-forth. The team could pick it up and execute without the usual requirements clarification cycle.
A working repo covering the full scope of Global Privacy Control and Universal Opt-Out Mechanism compliance — technical implementation approaches, legal and risk implications, state-by-state requirements across 12+ US jurisdictions, and the organizational positions needed to support them. Used to house research, produce documentation, and facilitate meetings and follow-ups with stakeholders across product, legal, and engineering.
GPC/UOOM is genuinely complex — the technical, legal, and business dimensions all intersect in non-obvious ways, and the nuances matter. Getting leaders to form well-informed opinions and positions on something this multi-faceted requires more than a summary deck. This repo became the living record of that education process — surfacing the right detail for the right audience at the right time.
A discovery repo for adding Spanish language support across the member portal — a new business priority. Uses the Tier 2 configuration of the product template repo, with the production code repo cloned alongside to understand the existing architecture and identify what Spanish support would actually require. Covers strategies, effort sizing, and requirements across multiple brands.
This is the template repo working as intended at its most capable tier — a PM doing discovery work directly against the production codebase context, using AI to understand the existing landscape before defining the path forward. The clone-plus-template approach compresses what would normally be weeks of engineering consultation into a structured, AI-assisted discovery process.
A discovery repo for a distribution model where the member portal is embedded as an iframe within a third-party reseller's webpage. Covers legal considerations, technical constraints of the iframe approach, and the product requirements for offering fitness network access within an external marketplace.
Embedding your product in someone else's webpage surfaces legal, technical, and UX constraints that intersect in non-obvious ways. This repo was the structured way to work through those considerations — using the template repo to organize discovery and surface the right questions before committing to an approach.
| Metric | Detail |
| --- | --- |
| AWS services used | Lambda, S3, Bedrock, API Gateway, CloudWatch, WAF, SSM, DynamoDB |
| AI models integrated | Claude Sonnet 3.7 → 4.5 via AWS Bedrock (upgraded as new versions became available) |
| Figma integrations | MCP Server, FigmaMake, Figma Slides Plugin |
| Property-based testing | fast-check — 44 suites across Sprint Summary Writer, Timeline Creator, Engineering Squad Dashboard |
| Total test suites | 191 suites · ~3,812 tests · 99.97% pass rate across projects with clean runs |
| Most recently updated | Timeline Creator — Apr 11, 2026 |
| Largest single codebase | AI CoE Proposal — 5,279 lines, interactive SPA |
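The property-based suites tallied above assert invariants over randomized inputs rather than fixed examples. fast-check automates the pattern with typed generators and shrinking; here is a hand-rolled sketch of just the core idea, assuming nothing about the actual suites:

```javascript
// Hand-rolled illustration of property-based testing (fast-check automates
// this with generators, shrinking, and reporting; this shows only the core
// loop). Property under test: sorting is idempotent and its output is
// ordered, for any randomly generated input array.
function sortNums(arr) {
  return [...arr].sort((a, b) => a - b);
}

function checkSortProperty(runs) {
  for (let i = 0; i < runs; i++) {
    const arr = Array.from({ length: Math.floor(Math.random() * 20) },
                           () => Math.floor(Math.random() * 100) - 50);
    const once = sortNums(arr);
    const twice = sortNums(once);
    const ordered = once.every((v, j) => j === 0 || once[j - 1] <= v);
    if (!ordered || JSON.stringify(once) !== JSON.stringify(twice)) {
      return { ok: false, counterexample: arr }; // fast-check would also shrink this
    }
  }
  return { ok: true };
}
```

In fast-check the same check is a single `fc.assert(fc.property(...))` call; the suites above presumably apply the pattern to metric calculations and change-tracking logic, where invariants are easy to state and edge cases are hard to enumerate by hand.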
AI is here. Successful leaders will ruthlessly lead by example, evangelize relentlessly, and remove the organizational and cultural barriers that prevent their teams from operating in this new world. Hedging, waiting for permission, or managing AI as a risk rather than a capability is a strategic error that compounds quickly. The window to establish fluency is now — not after the next planning cycle.
When a PM can write requirements that become code, when a designer can validate a component in Storybook without touching production, when an engineer can generate a test suite from a Jira ticket — the traditional boundaries between roles start to dissolve. This will be uncomfortable for organizations built around those boundaries. It is also inevitable. The leaders who thrive will be the ones who reframe role evolution as capability expansion rather than job threat.
Amid the disruption, a lot of what matters stays constant. Velocity. Planned vs. unplanned work. Cost per story point. The distribution of effort across features, debt, defects, and risk. These aren't legacy metrics — they're the signal underneath the noise. AI changes how fast you can move and what becomes possible, but it doesn't change the need to measure whether you're moving in the right direction. The leaders who combine AI fluency with delivery rigor will be the ones who actually produce durable results.
VP of Product Management · AI transformation practitioner · Builder