Gabe Eldred

Most software gets built around the company's org chart, not the customer's life. I spend my days trying to fix that — building consumer AI that holds context across a life, and designing the product orgs that can actually ship it.

GM of Product at Capital One. Fifteen years in regulated fintech — Chase First Banking, Secure Banking, Greenlight Pay Link — building products for the parts of people's financial lives everyone else found too hard.

Resume →

What I'm building now

Family Pulse screenshot coming soon

Family Pulse

2025 –

An AI command center for the household inbox. Started as SchoolPulse — a Claude-powered agent that reads school emails and drops the right events on the family calendar, so parents stop missing pajama day. Family Pulse is the product-scale successor: a shared context layer for how a household actually runs.
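The SchoolPulse flow described above can be pictured in a few lines. This is an illustrative sketch, not the actual implementation: assume the agent has already asked Claude to return extracted events as a JSON list of `{"title", "date", "child"}` objects (a hypothetical contract), and the remaining step is turning that into calendar entries.

```python
import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CalendarEvent:
    title: str
    start: datetime
    child: str

def parse_extracted_events(model_output: str) -> list[CalendarEvent]:
    """Turn the JSON the model was asked to emit into calendar events.

    Assumes the prompt instructed the model to reply with a JSON list of
    {"title", "date" (ISO 8601), "child"} objects -- a hypothetical schema.
    """
    events = []
    for item in json.loads(model_output):
        events.append(CalendarEvent(
            title=item["title"],
            start=datetime.fromisoformat(item["date"]),
            child=item.get("child", "family"),  # default when no child is named
        ))
    return events

# Hypothetical sample: what the agent might get back for a pajama-day email.
sample = '[{"title": "Pajama Day", "date": "2025-03-14T08:00:00", "child": "Mia"}]'
events = parse_extracted_events(sample)
```

The interesting design choice is pushing the unstructured-to-structured step onto the model and keeping the calendar write deterministic, so a bad extraction fails loudly instead of silently booking the wrong day.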

What I'm learning: The gap between "interesting research" and "thing my wife actually uses" is the most fun problem in software right now. Claude Code has made it shorter than I thought possible.

The Compound Product Organization

2025 –

A thesis on how AI changes the economics of context, and what that means for product org design. The one-line version: shared, persistent context lets teams compound intelligence across cycles instead of losing it in handoffs. It's running live at Capital One right now, with two AI-native sprint teams as the first real test.

Read the thesis →

How I think

What consumer AI actually unlocks

Essay · 2025

The consumer products I've spent fifteen years shipping — Chase First Banking, Greenlight's Pay Link, Secure Banking for the underbanked — all tried to solve the same structural problem from inside a box: help someone's actual life when the product only sees one slice of it. Financial health is a connected exercise, but the tools are siloed by org chart.

Claude is the first technology I've seen that can collapse that. Not because it's a smarter chatbot. Because it can hold context across a life, not just a transaction.

The real unlock isn't automation. It's continuity. A financial product that remembers the thing you said in October when you're making a decision in March. A family tool that understands your household's patterns, not just today's event. An AI that knows the difference between what you said and what you meant.

Most consumer AI products today are still built like individual features — useful, contained, forgettable. The companies that win the next decade will be the ones that figured out how to make the AI hold the thread across the whole thing. That's a product problem, not just a model problem. And it's the most interesting product problem I've ever seen.


How AI-native product teams should work

Essay · 2025

The way most product teams use AI right now is additive — a faster way to do the same things. Write the doc faster. Generate the ticket faster. Summarize the meeting faster. That's real value, but it's not the real opportunity.

The structural problem in product development isn't speed. It's context loss. Every handoff — design to eng, PM to stakeholder, sprint to sprint — loses something. The tacit knowledge that explains why a decision got made. The customer quote that shaped the spec. The constraint that ruled out the obvious solution. By the time a feature ships, the team that built it has moved on, and the institutional memory of why it works the way it does lives in no one's head.

AI changes this if you architect for it. Shared, persistent context — a working memory for the team that compounds across cycles instead of resetting every sprint. A system where the decision made in week two is still legible in week twelve, and the new engineer joining in month four can understand not just what was built but why.
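One minimal way to picture "a working memory that compounds across cycles" is an append-only decision log that stays queryable long after the sprint ends. This is a sketch under assumptions, not the system described; every name in it is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    summary: str            # what was decided
    rationale: str          # why -- the part that usually evaporates in handoffs
    decided_on: date
    tags: list[str] = field(default_factory=list)

class TeamMemory:
    """Append-only log: decisions are never overwritten, only added to."""

    def __init__(self) -> None:
        self._log: list[Decision] = []

    def record(self, decision: Decision) -> None:
        self._log.append(decision)

    def why(self, keyword: str) -> list[Decision]:
        """Answer 'why does it work this way?' months later."""
        kw = keyword.lower()
        return [
            d for d in self._log
            if kw in d.summary.lower()
            or kw in d.rationale.lower()
            or any(kw in t.lower() for t in d.tags)
        ]

# Hypothetical usage: a week-two decision, still legible in week twelve.
memory = TeamMemory()
memory.record(Decision(
    summary="No overdraft fees on the secure account",
    rationale="Regulatory constraint; solve for customer reality up front",
    decided_on=date(2025, 2, 3),
    tags=["compliance"],
))
hits = memory.why("overdraft")
```

The point of the sketch is the shape, not the storage: the rationale travels with the decision, so the new engineer in month four queries the log instead of re-litigating the handoff.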

That's the thesis behind the Compound Product Organization. I've been running it as a live experiment at Capital One with two AI-native sprint teams. Early results: the teams move faster, but more importantly, they make better decisions — because the context that usually evaporates in a handoff is still there.


Shipping in a regulated category without losing the plot

Essay · 2025

The standard take on regulated categories is that compliance is the enemy of speed. Fifteen years of building consumer fintech products has taught me the opposite: the constraint is the product.

At JPMorgan, we built Secure Banking — a product designed for people who had been excluded from traditional banking. The regulatory constraints didn't slow us down. They sharpened the design. No overdraft fees meant we had to actually solve for the customer's reality instead of hiding the cost in the fine print. Compliance wasn't the obstacle; it was the forcing function.

AI in regulated categories works the same way. Reliable, interpretable, steerable — those aren't marketing copy to a regulated consumer product leader. They're the difference between a feature that ships and one that doesn't. The teams I've seen try to move fast and figure out compliance later end up rebuilding everything. The teams that architect for it from day one ship faster in the long run.

The implication for consumer AI: the companies that will win in healthcare, finance, legal — the categories that actually matter to people's lives — are the ones that treat safety as the product thesis, not a post-hoc adjustment. That's a much harder path to walk. It's also the only one with a real moat.


The real shape of an AI operating layer

Essay · 2026

Handing a founder or a GM a license to Claude Code isn't a product — it's an ingredient. The question is what you build around it.

The product suite that doesn't exist yet, and should: an AI-native operating layer for how exec teams, product orgs, and the people inside them actually work. Not a chatbot. Not a copilot bolted onto an existing workflow. A system that understands the shape of real org work — the decisions that need to get made, the context that needs to survive handoffs, the moments where a team loses the thread — and holds that context for them.

This is a consumer-to-enterprise gradient. It starts with the individual — the PM who uses Claude Code to prototype, the exec who uses it to draft the strategy doc. Then it extends to the team — shared context, compound memory, decisions that don't evaporate between sprints. Then the org.

The companies that figure this out early will have an advantage that compounds. Every cycle where the AI holds context that would otherwise be lost is a cycle where the team gets smarter instead of re-explaining. That's the business case. The product case is simpler: it's just genuinely useful for the first time.


Say hello

If you're building something in this space and want to compare notes, I'm easy to reach.

gabe@customgabe.com