How I Use AI

By request, I am writing this post to describe ways that I use AI day-to-day as a product manager.


Generally, AI has become a tool that I reach for more and more often. I use it for help on a variety of tasks such as:

  • Writing data queries and understanding the patterns in the data I get back from them
  • Sorting out my thinking into easily understood documents
  • Summarizing customer call notes in a standardized way
  • Analyzing a variety of data from user research to come to conclusions
  • Troubleshooting random things, like why my webcam doesn’t work consistently

I can do all these things without AI, but with AI I can do them faster and with much more confidence.


Principles for Using AI


AI is helpful, but remember: it’s trained to sound right, not be right. These are the principles I follow to ensure that it is not confidently leading me off the edge.


1. Start with a thorough prompt.


I always tell the LLM very clearly what I am trying to do, what I expect as output, and who the audience is for any document it produces, and I almost always tell it to make a plan and ask me clarifying questions up front. I also attach lots of documents to provide as much context as possible. This sets the stage for a back-and-forth collaboration with the LLM to iterate quickly on ideas. Here is an example of a prompt I gave to Copilot when I wanted to do some work on strategy:

I need to draft a product vision and strategy for GitHub Code Quality. I have attached a number of supporting resources, including a text file that is a transcribed braindump I just did of all the things I am thinking about.

My goal is to produce a first draft document of a phased vision and strategy document for GitHub Code Quality to be consumed by any internal stakeholder so that they can understand what Code Quality wants to become over time, how we get there, and how we win. I would like to take a draft that you and I create to my direct team for further iteration: my manager, my engineering counterparts, and my design counterparts.

Some context of the documents I have attached directly here:

  • discovery backlog: I have been working on this for the past few weeks. This represents a list of opportunities and possible initiatives for us to work on in the short/medium term, coming from customer conversations and market research. Each opportunity is categorized as something we want to be Amazing at, Baseline at, or Crappy at in order to deliver a focused and differentiated experience.
  • strategy & vision folders: these folders contain supporting documents I’ve accumulated here over time for context
  • code quality short pager: a short strategy doc that I wrote
  • bas code scanning + code quality: a document that my manager wrote that kicked off this workstream. It is thorough and, while a bit aged at this point (it is over a year old), much of it still rings true and should serve as the basis for what we describe.
  • vision braindump: as described, I just transcribed a braindump of what I’m thinking about with regards to strategy and vision.

I want to end up with a document that is easy to refer back to, has decisions that stand the test of time and easily adapts to changing times, and gives people a vision that they can really sink their teeth into and make decisions aligned to it.

Let’s start by having you review all the documents and create a plan for doing this. It is important to me that you do not create anything new or imagine anything yourself; instead, I want to make sure that you ask me clarifying questions along the way so that we are going in the right direction.


A thorough prompt reduces the risk of hallucination and is more likely to provide an excellent first draft.


2. Tell it to be mean to me.


LLMs are designed to please: every question you ask is insightful, and every idea you share is clever and thoughtful. This is not helpful for a product manager, who needs to be constantly honing their own thinking and ideas. Here are some instructions/prompts I give it to try to make it be meaner:

I expect you to: ask me questions when warranted to gain more context, fill in important missing information, and challenge my assumptions. Ask me questions that will let you most effectively coach and assist me in my role.

Encourage me to: be proactive, think strategically, and consider the long-term implications of my decisions. Help me identify opportunities for innovation and improvement in our products and processes. I want you to be my thought partner, helping me to think through complex problems and providing insights and perspectives that I may not have considered.

I want you to find the balance of: being supportive and challenging, providing guidance while also encouraging me to think broadly, and being both analytical and creative in your approach.

Tell it like it is; don’t sugar-coat responses. Adopt a skeptical, questioning approach. Be encouraging, but not effusive in your praise. Be practical above all. Act as an equal collaborative partner, not a deferential assistant. Prioritize intellectual honesty over agreement, offering candid, objective assessments. Apply domain expertise to challenge assumptions and elevate outcomes beyond the initial framing.


3. Check its output.


Model improvements and careful prompting have vastly reduced hallucinations, but LLMs can still get things wrong or give bad advice. Some examples of places where the LLM got 95% of the way there and I had to adjust:

  • I asked it to help me write a harassment policy for a non-profit I volunteer for. It invented roles that do not exist (but that you might expect to exist at a professional organization).
  • I asked it to look through a transcript of a customer call to pull out some positive quotes from the customer. It grabbed things that I had said instead, not realizing who the “customer” was.
  • When asking it to summarize customer calls based on AI-generated transcripts, I have to go through and check the spellings of names in case the transcript got them wrong (AIs have called me Carol, Carl, and Cairo, just to name a few).

4. Trust but verify.


This goes along with #3 but goes a little deeper. LLMs, much like humans, can get overly fixated when they think they have found the answer, sounding completely confident right up until you question them, at which point they fold like a house of cards. This is pretty easy to spot when you’re asking the LLM to do something you already know how to do, but it can get tricky when the LLM is doing something beyond your own depth. Here are a few examples of how I handle working with an LLM on something I don’t personally have a deep understanding of.


Computer troubleshooting: I like to think of myself as pretty capable when it comes to software, but when it comes to hardware, I’m like a toddler mashing buttons. I have been having intermittent but annoying problems with my webcam, so I asked ChatGPT to help me troubleshoot. It gave me a lot of options and things to look at and try, but every step of the way (especially when it was suggesting I run Bash commands I was not familiar with), I checked with it or via Google: “What does this Bash command do, and what are the potential side effects?” No one wants to be the next “whoops, deleted my entire file system because AI told me it would fix my camera” story. Each time I discovered something new, ChatGPT changed its mind on the most likely cause, and every time it was equally confident that we’d finally discovered the problem. (Sidenote: I still haven’t fixed this.)


Statistical Analysis: I took the requisite statistics classes in school, so I have a solid foundation, but it’s never been my forte. In particular, anything beyond a typical z-test or looking at averages, medians, and standard deviations is a bit beyond me. But it’s helpful to be able to do more thorough analysis of data so that I can answer questions like “how should I bill for my product?” or “which of these datapoints is most predictive of a user coming back?” In these cases, I try to be very clear with the LLM about what I am asking for, and I check the code it writes for any obvious logic errors. And with any conclusions it produces, I apply my product sense: given what I know to be true about my product, is this result plausible? If not, I dig in further. Did the LLM do something wrong? Is the data weird (garbage in, garbage out)? Or do I legitimately need to check my assumptions?
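To make the kind of check I mean concrete, here is a minimal sketch of the “is this datapoint predictive of a user coming back?” question, using only Python’s standard library. The feature and all the counts are made up for illustration; a real analysis would use my actual product data.

```python
import math

# Hypothetical data: did users who tried feature X return the next week?
# (Feature name and counts are invented for illustration.)
used_x = {"returned": 45, "total": 80}
no_x = {"returned": 60, "total": 220}

p1 = used_x["returned"] / used_x["total"]  # return rate with feature X
p2 = no_x["returned"] / no_x["total"]      # return rate without it
p = (used_x["returned"] + no_x["returned"]) / (used_x["total"] + no_x["total"])

# Two-proportion z-test: is the gap in return rates plausibly just chance?
se = math.sqrt(p * (1 - p) * (1 / used_x["total"] + 1 / no_x["total"]))
z = (p1 - p2) / se

print(f"return rate with X: {p1:.1%}, without: {p2:.1%}, z = {z:.2f}")
# A |z| above roughly 1.96 suggests the gap is unlikely to be random noise.
# Note it says nothing about causality -- maybe power users simply try
# every feature -- which is exactly where product sense has to come in.
```

Even when the LLM writes this code for me, checking the arithmetic by hand on a couple of rows, and asking whether the direction of the effect matches what I know about the product, catches most of the problems.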


My AI Setup


I have two daily drivers: GitHub Copilot in VS Code, and ChatGPT in its own app. (ChatGPT is a personal subscription, so I don’t talk about work things on it.)


GitHub Copilot


I don’t primarily use it for coding. Instead, I have created a private repository in the GitHub org that I am trying to make into my “second product brain.” There are a bunch of documents I’ve brought in as context:

  • Vision/strategy documents from the various levels of leadership above me
  • Information about my immediate org chart
  • Documents I’ve written in the past that I consider to be good examples I want to emulate
  • Information about the data tables related to my product that I need to query

This is where I do work like:

  • Customer call summaries
  • Data analysis
  • Formulating Kusto queries
  • Strategy writing

Because of the context in this repo, I can ask it to pull customer quotes, analyze patterns, etc.


ChatGPT


I treat ChatGPT like a colleague sitting behind me: someone I can turn to for a quick reality check who will tell me when I’m being ridiculous. Sometimes I just need to break out of spiraling thought patterns. It’s important to note that AI is NOT a replacement for human interaction or a therapist; it’s just a tool I use to reinforce mental health habits. Like a parrot who can give you a CBT prompt.


When I want to search for something, I increasingly reach for ChatGPT instead of a search engine, especially if it’s just a quick lookup and it doesn’t matter if the answer is “wrong” (like: what spices should I put in fried rice). It’s also fun to be able to do the image generation.




As product managers, we live and die by our product sense. AI isn’t going to replace our entire jobs any time soon (I’d like to see ChatGPT try to solve my stakeholder management challenges!), but it helps me do some parts of my job faster and at a higher quality than I would otherwise have the time and energy for.