Strategic Discovery & Hypothesis Modeling

Rebuild business analysis from a product mindset. Bring a real initiative from your organisation (or a past engagement), surface assumptions with AI, and produce evidence-backed opportunities with governance guardrails before stakeholder interviews even begin.

Time Commitment 6–8 hours this week

2h reading, 3h lab build, 1h team critique, optional 2h deep dive.

Primary Tools Codex CLI · ChatGPT · Gemini

Plus open-source editors (VS Code), diagramming (Excalidraw), and spreadsheets.

Deliverables Discovery backlog · Opportunity canvas · Governance checklist

All tied to your chosen real-world use case.

Setup & Prerequisites

1. Choose Your Use Case

Select one initiative that you already know well. Ideal candidates:

  • A workflow you previously redesigned that can benefit from AI acceleration.
  • A backlog item under consideration (e.g., knowledge base automation, intelligent triage, onboarding assistant).
  • If no live project exists, craft a realistic scenario mirroring your industry; base it on documented processes.

Document a one-paragraph summary and baseline metrics (current cycle time, quality KPIs, stakeholder pain points). This becomes the anchor for all module artefacts.

2. Environment Check
  • Install Codex CLI and ensure it authenticates with your OpenAI API key.
  • Log in to ChatGPT (GPT-4o or 4.1) and Gemini 2.5; save custom instructions emphasising enterprise BA focus.
  • Prepare workspace directories (mkdir -p ~/ai-ba-course/week1/artifacts) and initialise version control with git.
  • Install optional helpers: pip install jinja2 pandas rich for templating persona sheets and backlog exports.
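
The workspace setup above can also be scripted in Python. A minimal sketch — the scripts/ and templates/ subfolders are assumptions drawn from paths referenced later in this module, not a prescribed layout:

```python
from pathlib import Path

def prepare_workspace(base: Path) -> list[Path]:
    """Create the Week 1 folder layout and return the directories made.

    scripts/ and templates/ are assumed names based on paths used later
    in this module; adjust them to your own conventions.
    """
    dirs = [
        base / "week1" / "artifacts",
        base / "week1" / "scripts",
        base / "week1" / "templates",
    ]
    for d in dirs:
        d.mkdir(parents=True, exist_ok=True)  # idempotent, like mkdir -p
    return dirs

# Example: prepare_workspace(Path.home() / "ai-ba-course")
```

Running it twice is safe, so you can re-run after cloning the repo on another machine.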

3. Reference Reading (90 minutes)
  • IBM Global AI Adoption Index 2023 – focus on Chapter 2 maturity segmentation.
  • McKinsey "The State of AI in 2024" – adoption vs scaling stats.
  • IIBA Business Analysis Standard 2024 – sections on strategy analysis & governance.

Capture 3–4 data points that will inform your personas and opportunity scoring.

Select Use Case (baseline metrics) → Synthesize Insights (personas & interviews) → Score Opportunities (canvas & backlog) → Document Controls (governance checklist)
Week 1 flow: bring a familiar workflow, generate synthetic discovery assets, evaluate opportunities, and define guardrails.

Learning Outcomes

Master the discovery foundations for AI-first business analysis.

  • Diagnose organisational AI maturity using triangulated data sources.
  • Construct synthetic personas and interview scripts tied to your real workflow.
  • Prioritise opportunities with quantified value, feasibility, and readiness scores.
  • Design governance checklists that anticipate data, ethics, and security risks.

Concept Briefings

Reality Check: AI Adoption Signals

Position your project by segmenting stakeholders into experimenting, piloting, or scaled AI adopters. Align expectations with actual capability; adoption is rarely uniform inside the same enterprise.

Data Points to Cite
  • IBM AI Adoption Index (2023): 35% in production, 42% exploring.
  • McKinsey (2024): 72% experimenting with GenAI, ~30% scaled.
  • IIBA State of BA (2023): Top BA focus areas include AI strategy & governance (38%).

Synthetic Personas & Interviews

Use Codex CLI to iterate persona drafts quickly, then interrogate them with data to avoid stereotypes. Treat AI outputs as hypothesis generators, not truth.

codex prompt persona-banking
  You are a senior BA preparing AI discovery. Generate four personas involved in loan-origination modernization:
  - Include role, success metrics, AI maturity (1-5), blockers, compliance concerns.
  - Base maturity on IBM & McKinsey adoption stats.
  - Output as a Markdown table.

Governance First Mindset

Capture risks upfront so every downstream artefact references the same constraints. Include regulatory, data quality, model bias, security, and change management dimensions.

chatgpt prompt risk-register "List governance checkpoints for deploying an AI copilot in [your use case]. Group by Data, Model, Process, People. Add probability (Low/Med/High) and mitigation owner."

Project Guidance

Analyse the Current Approach

Break down how the workflow currently operates: responsible teams, tools, pain points, and metrics. Map inputs/outputs so the AI opportunity becomes tangible.

  • Document end-to-end steps in a spreadsheet.
  • Highlight manual decisions, queues, and rework loops.
  • Note any data privacy or compliance checkpoints already in place.

Rebuild with AI Support

Use personas to hypothesise how AI can collapse timelines, augment decisions, or enable new offerings. Consider human-in-the-loop for critical judgments.

  • Identify highest-friction steps amenable to automation or augmentation.
  • Plot AI leverage points (classification, summarisation, predictions, generation).
  • Capture expected value (cycle time reduction, accuracy lift, employee hours saved).
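
Keep those expected-value estimates honest with a small calculator. A back-of-envelope sketch — all inputs below are illustrative assumptions, not benchmarks:

```python
def expected_value(cycle_time_days: float, cycle_reduction_pct: float,
                   runs_per_month: int, hours_per_run: float,
                   hours_saved_pct: float) -> dict:
    """Back-of-envelope value estimate for one AI leverage point."""
    return {
        "new_cycle_time_days": round(cycle_time_days * (1 - cycle_reduction_pct), 2),
        "hours_saved_per_month": round(runs_per_month * hours_per_run * hours_saved_pct, 1),
    }

# Illustrative only: a 10-day cycle made 40% faster, 120 runs/month,
# 1.5 analyst hours per run, 30% of that effort automated.
estimate = expected_value(10, 0.40, 120, 1.5, 0.30)
```

Recording the inputs alongside the output makes each estimate auditable when stakeholders challenge the numbers.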

Compare Traditional vs AI-First

Create a side-by-side view showing the delta between your historic approach and the AI-enabled plan. This becomes a compelling narrative for stakeholders.

  • Timeline view (before vs after) focusing on speed and quality.
  • Impact on roles: which responsibilities shift to AI vs remain human-led.
  • Risk profile changes: new risks introduced, mitigations required.
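
A quick way to build the side-by-side view is a small pandas table. The metrics and figures below are placeholders to replace with your baseline data:

```python
import pandas as pd

# Placeholder metrics: substitute your own baseline and projected figures.
comparison = pd.DataFrame({
    "metric": ["Cycle time (days)", "Rework rate (%)", "Manual touchpoints"],
    "traditional": [10, 18, 7],
    "ai_first": [6, 9, 3],
})
comparison["delta"] = comparison["ai_first"] - comparison["traditional"]

# comparison.to_markdown(index=False) (needs the tabulate package)
# drops the table straight into your canvas document.
```

Negative deltas read as improvements here; flip the subtraction if that confuses reviewers.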

Guided Exercise Timeline

Step 1 · Persona Generation (90 min)

Create synthetic stakeholder profiles

Use your data points to seed prompts. Validate each persona with a colleague or your mentor to ensure realism.

  • Command: codex run scripts/build_personas.py --use-case loan-origination (customise script for your context).
  • Store in artifacts/week1/personas.md with revision history.
  • Annotate each persona with references to supporting market data.

Team practice: swap personas and run the "red team" exercise, trying to disprove assumptions or highlight missing stakeholders.
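
If you want build_personas.py to post-process the model's Markdown table into structured records, here is a minimal parser sketch. It is an illustration, not the course's provided script, and the column names are whatever your prompt requested:

```python
def parse_persona_table(markdown: str) -> list[dict]:
    """Turn a Markdown table (as returned by the persona prompt) into dicts."""
    rows = [line.strip() for line in markdown.strip().splitlines()
            if line.strip().startswith("|")]
    header = [cell.strip().lower() for cell in rows[0].strip("|").split("|")]
    personas = []
    for line in rows[2:]:  # rows[1] is the |---|---| separator
        cells = [cell.strip() for cell in line.strip("|").split("|")]
        personas.append(dict(zip(header, cells)))
    return personas
```

Structured dicts make it easy to regenerate personas.md from a template on each revision.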

Step 2 · Interview Script (75 min)

Draft discovery conversations

Craft 12–15 questions focusing on process, data, change management, and success metrics. Use AI to suggest follow-ups, then refine manually.

  • Template location: templates/interview-guide.md.
  • Prompt ChatGPT with the persona context to propose scenario-based probes.
  • Plan data capture: decide on note-taking format and consent reminders.

Step 3 · Opportunity Canvas (120 min)

Prioritise hypotheses

Use the provided scoring model (Value, Feasibility, Readiness, Risk) to rank opportunities. Include numeric estimates or ranges.

  • Use notebooks/opportunity_scoring.ipynb (Jupyter) to calculate weighted scores.
  • Export the final canvas to PDF for sharing.
  • Ensure each hypothesis maps to personas and supporting data.
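
The weighted model can be sketched as below. The weights are assumptions chosen to echo the assessment rubric; use the weights defined in the course notebook as the authoritative model:

```python
# Assumed weights; notebooks/opportunity_scoring.ipynb defines the real model.
WEIGHTS = {"value": 0.40, "feasibility": 0.30, "readiness": 0.20, "risk": 0.10}

def opportunity_score(scores: dict[str, int]) -> float:
    """Weighted score on a 1-5 scale. Risk is inverted (6 - risk) so that
    lower-risk opportunities rank higher."""
    adjusted = {**scores, "risk": 6 - scores["risk"]}
    return round(sum(WEIGHTS[k] * adjusted[k] for k in WEIGHTS), 2)

# e.g. high value, moderate feasibility, low readiness, low risk:
score = opportunity_score({"value": 4, "feasibility": 3, "readiness": 2, "risk": 2})
```

Keeping the inversion explicit avoids the common bug of rewarding high-risk options.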

Step 4 · Governance Checklist (60 min)

Document controls & mitigations

List data sources, regulatory obligations, bias checks, human oversight plans, and escalation paths. Tie each risk to a responsible owner.

  • Use templates/governance-checklist.md.
  • Cross-reference with IIBA governance practices and organisational policy.
  • Flag risks requiring legal, security, or ethics board engagement.
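
Rendering the checklist from structured risk entries keeps it in sync with the backlog. A plain-Python sketch; the groups and example risks are illustrative:

```python
def render_checklist(risks: dict[str, list[dict]]) -> str:
    """Render grouped risks as a Markdown checklist with owners."""
    lines = ["# Governance Checklist", ""]
    for group, items in risks.items():
        lines.append(f"## {group}")
        for r in items:
            lines.append(
                f"- [ ] {r['risk']} (probability: {r['prob']}, owner: {r['owner']})"
            )
        lines.append("")
    return "\n".join(lines)

# Illustrative entries only; populate from your own risk register.
checklist = render_checklist({
    "Data": [{"risk": "PII reaches prompts", "prob": "High", "owner": "Data steward"}],
    "Model": [{"risk": "Bias in triage suggestions", "prob": "Med", "owner": "Ethics lead"}],
})
```

The same dict of risks can feed the prerequisite tags you attach to backlog items in the execution steps.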

Lab 01 · Discovery Backlog & Opportunity Canvas

Build a reusable discovery package for your chosen use case.

Inputs
  • Baseline workflow description (current state).
  • Market data points collected during reading.
  • Persona, interview, and governance templates.
Outputs
  • Discovery backlog (CSV + Markdown summary).
  • Opportunity canvas (PDF).
  • Governance checklist (Markdown) with risk owners.
Collaboration
  • Peer review of personas and canvas.
  • Slack channel #week1-build for async feedback.
  • Schedule a 30-minute critique call by Thursday.

Execution Steps

  1. Initialise repo: cd ~/ai-ba-course/week1 && git init. Commit baseline files.
  2. Create scripts/build_personas.py using Codex CLI starter. Run and refine outputs.
  3. Complete interview-guide.md; annotate each question with persona relevance.
  4. Run scoring notebook to rank opportunities. Export summary to artifacts/week1/opportunity_canvas.pdf.
  5. Complete governance checklist and integrate it into backlog items as prerequisite tags.

Validation Checkpoints

  • Backlog entries include hypothesis statement, evidence, expected value, feasibility score, and readiness notes.
  • Opportunity canvas clearly contrasts traditional vs AI-first workflow metrics.
  • Governance checklist covers Data, Model, Process, People, and includes escalation owners.
  • Peer reviewer signs off via comment in repo or shared doc.
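
The first checkpoint is easy to automate. A sketch that flags backlog entries missing required fields — the field names assume columns matching the checkpoint above, so rename them to match your CSV:

```python
REQUIRED_FIELDS = {"hypothesis", "evidence", "expected_value",
                   "feasibility_score", "readiness_notes"}

def validate_backlog(entries: list[dict]) -> list[str]:
    """Return one message per incomplete entry; an empty list means it passes."""
    problems = []
    for i, entry in enumerate(entries, start=1):
        filled = {k for k, v in entry.items() if v not in (None, "")}
        missing = REQUIRED_FIELDS - filled
        if missing:
            problems.append(f"entry {i}: missing {sorted(missing)}")
    return problems
```

Run it over csv.DictReader output before exporting the canvas so gaps surface ahead of peer review.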

Reflection & Submission

Submission Checklist

  • artifacts/week1/discovery_backlog.csv
  • artifacts/week1/opportunity_canvas.pdf
  • artifacts/week1/governance_checklist.md
  • Reflection (artifacts/week1/reflection.md) — what surprised you most when comparing traditional vs AI-first?
  • Peer review evidence (screenshot or comment link).

Assessment Rubric

Insight Quality (40%): Persona depth, data references, hypothesis clarity.
Opportunity Prioritisation (30%): Quantified value, feasibility, readiness mapping.
Governance Readiness (20%): Completeness of controls, escalation plans.
Collaboration Evidence (10%): Peer review integration and iteration notes.

Submission Process

Push your repo to your private remote or zip the week1 folder. Upload artefacts and repo link to the learning portal under "Week 1 Deliverables". Notify your mentor in Slack with a brief summary of your use case and top-ranked hypothesis.

Troubleshooting & FAQ

My personas feel generic.

Increase prompt specificity with context: industry regulations, KPIs, customer demographics. Run "persona stress tests" by asking AI to challenge biases, then update accordingly. Incorporate at least two real data points per persona.

The opportunity scores are too close.

Expand the scoring criteria: add risk-adjusted value or implementation cost. Run sensitivity analysis in the scoring notebook to see which factors differentiate options.
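
One way to run that sensitivity analysis: nudge each weight up and down and count how often the top-ranked opportunity changes. The weights and scores here are illustrative:

```python
def rank(opportunities: dict[str, dict], weights: dict[str, float]) -> list[str]:
    """Opportunity names ordered best-first under the given weights."""
    return sorted(
        opportunities,
        key=lambda name: sum(weights[k] * opportunities[name][k] for k in weights),
        reverse=True,
    )

def top_pick_flips(opportunities, base_weights, delta=0.1):
    """Count how often the #1 pick changes when one weight is nudged by ±delta."""
    baseline = rank(opportunities, base_weights)[0]
    flips = 0
    for k in base_weights:
        for step in (-delta, delta):
            nudged = {**base_weights, k: max(0.0, base_weights[k] + step)}
            if rank(opportunities, nudged)[0] != baseline:
                flips += 1
    return flips
```

A high flip count confirms the options are genuinely close and you need sharper criteria; zero flips means the ranking is robust to your weighting choices.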

How do I handle confidential data?

Redact or generalise specifics before using AI tools. Consult internal documentation offline and summarise insights in a sanitised form before entering them into prompts.
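
A lightweight pre-prompt redaction pass can catch obvious identifiers before text leaves your machine. A minimal sketch — the patterns are examples, not a complete PII scrubber, so treat it as a last line of defence rather than a substitute for policy:

```python
import re

# Example patterns only; extend for your organisation's identifier formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT": re.compile(r"\b\d{8,}\b"),
}

def redact(text: str) -> str:
    """Replace matches with placeholder tags before pasting into a prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because the placeholders are labelled, AI output stays readable and you can re-map tags back to real values offline.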

Further Study & Next Steps

Recommended Reading

  • Gartner (2024): AI Adoption Patterns Across Industries.
  • TDWI Checklist: Responsible Synthetic Data Practices.
  • MIT Sloan: "Designing AI Products with Human Stakeholders in Mind".

Prepare for Week 2

Gather process documentation, compliance requirements, and sample datasets for your use case. These feed directly into workflow redesign and synthetic data generation.

Preview Module 2 →