MPP Test – Demonstrating Significance: Phase 1 is Live

For over 60 years, our networks have been built around human-to-human communication. While payment problems remain even in that human interaction, a whole world is now evolving where machines interact with other machines. The scale of that interaction is limited by the lack of value exchange — after all, who wants to spend resources answering a bot's question if it is just scraping your data and delivering no new customers (see this blog covering Cloudflare CEO Matthew Prince's comments)?

As discussed previously, human-in-the-loop agentic commerce is the near-term future for the big AI platforms — from ACP to UCP — with merchants controlling the checkout page. Today's blog is about machine-to-machine communication and how Stripe's new Machine Payment Protocol (MPP) both solves a real problem AND unlocks a new economy.

To test MPP I first had to create a service that another machine would actually want to buy. That’s my Phase 1: the Ask Tom API — a pay-per-query payments intelligence service. In this phase a human is involved in the purchase (entering credentials). In Phase 2, machines buying from my service will carry pre-approved stablecoin credentials and will skip that step entirely, courtesy of MPP/Tempo.

Why Build This?

In my MPP blog last week, I argued that the Machine Payment Protocol will unlock a new economy, one where autonomous agents can pay for services on behalf of their operators without any human in the loop. But there’s a prerequisite that most people skip over: you can’t test machine payments without first building a service worth paying for.

So I built one. The Ask Tom API answers questions about payments, grounded in 653 posts from my blog corpus. Every query costs $1 and every answer cites its sources. It’s a real API with real economics, which makes it a real test bed for what MPP changes.

Two-Phase Experiment

Phase 1 — Human in the Loop (Today)

The current version requires human registration. There are two entry points:

  1. Start on my blog: click Ask Tom (top right of the navigation), enter your card details via Stripe, and receive a payment_method_id (a pm_xxx token). You then paste that token into the ChatGPT skill when prompted. From that point, every question you ask costs $1 — charged automatically, no further interaction required.
  2. Start with ChatGPT and the specialized Ask-Tom skill, which will redirect you to my checkout page.

The human involvement is in the credential setup, not in each transaction. But it’s still a human-in-the-loop system — a machine couldn’t autonomously register a new card and start querying. That’s the limitation MPP is designed to eliminate.

Phase 2 — True Machine-to-Machine (Coming; Tempo and Stripe Have Offered Assistance)

In Phase 2, an AI agent will arrive at the Ask Tom API already holding pre-authorized stablecoin credentials via Tempo/MPP. No registration page. No human intervention. The agent will query, pay, and receive its answer autonomously, at whatever scale and frequency its operator has budgeted for. This is what MPP makes possible: value exchange as a primitive for machine communication.

How It Works: The Technical Flow

Here’s what happens under the hood on every Ask Tom query:

  1. Payment first — The caller submits a question and their Stripe payment_method_id. Before anything else, the API charges $1 via Stripe PaymentIntent. No payment, no answer.
  2. RAG retrieval via OpenSearch — The question is run against an OpenSearch index containing all 653 published posts from blog.starpointllp.com. The query uses BM25 full-text search across post title (weighted 3×), post excerpt (2×), and full post content — returning the top 5 most relevant posts by score.
  3. Scope check — If the top BM25 score falls below threshold, the question is flagged as out-of-scope (the $1 charge still applies — sorry, it’s the cost of keeping the service honest).
  4. Synthesis via Claude Sonnet — The top 5 retrieved posts are passed as grounding context to Claude Sonnet, which synthesizes a direct answer in Tom’s voice, with source citations. The model is instructed to be authoritative and specific — not to speculate beyond the source material.
  5. Structured JSON response — The API returns answer, sources (with title, URL, relevance score), charge_id, and cost_usd: 1.00.
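The five steps above can be sketched in a few pure functions. This is an illustrative sketch, not the production Lambda: the `SCOPE_THRESHOLD` value and helper names are my assumptions, and the actual Stripe, OpenSearch, and Anthropic calls are indicated in comments rather than executed.

```python
# Hypothetical sketch of the per-query flow. In the real handler, step 1
# runs stripe.PaymentIntent.create(...) before anything else, step 2 sends
# the query below to OpenSearch, and step 4 calls Claude Sonnet with the
# retrieved posts as grounding context.

SCOPE_THRESHOLD = 8.0  # assumed BM25 cutoff; the actual value is not published


def build_search_query(question: str, size: int = 5) -> dict:
    """Step 2: BM25 multi-field query — title weighted 3x, excerpt 2x, content 1x."""
    return {
        "size": size,
        "query": {
            "multi_match": {
                "query": question,
                "fields": ["title^3", "excerpt^2", "content"],
            }
        },
    }


def in_scope(top_score: float, threshold: float = SCOPE_THRESHOLD) -> bool:
    """Step 3: flag the question as out-of-scope when the best BM25 score is weak."""
    return top_score >= threshold


def build_response(answer: str, hits: list[dict], charge_id: str) -> dict:
    """Step 5: the structured JSON body returned to the caller."""
    return {
        "answer": answer,
        "sources": [
            {"title": h["title"], "url": h["url"], "relevance": h["score"]}
            for h in hits
        ],
        "charge_id": charge_id,
        "cost_usd": 1.00,
    }
```

The design choice worth noting is ordering: the charge happens before retrieval, so an out-of-scope answer still costs $1 — exactly as described in step 3.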

The Architecture

ChatGPT Skill (or any caller)
        │
        ▼
   API Gateway  (api.starpointllp.com/v1/ask)
        │
        ▼
   AWS Lambda
        ├── Stripe PaymentIntent  →  charge $1
        ├── OpenSearch BM25       →  retrieve top 5 blog posts
        └── Claude Sonnet         →  synthesize grounded answer
        │
        ▼
   JSON Response: answer + sources + charge_id

The whole thing runs serverless on AWS — Lambda handles the compute, API Gateway handles routing, and OpenSearch holds the blog index. The direct Anthropic API connection (not Bedrock, which we tested this week) keeps latency and cost predictable.

How the RAG Works with OpenAI / ChatGPT

The Ask Tom ChatGPT skill connects to the API via OpenAI’s Custom GPT Actions framework — which uses the OpenAPI spec (openapi.json) to describe the available endpoints, request schema, and response format. When you ask the GPT a question, it:

  1. Prompts you for your payment_method_id (your pm_xxx token from registration)
  2. Calls POST /v1/ask with your question and token
  3. Receives the structured JSON response from my API
  4. Presents the answer and source citations in the ChatGPT interface

The key insight is that ChatGPT itself is not doing the retrieval — it’s acting as a conversational front-end to a purpose-built RAG pipeline. The intelligence in the answer comes from the combination of BM25 retrieval (finding the right posts) and Claude’s synthesis (turning source text into a grounded answer). ChatGPT is the UX layer.
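From the caller's side, the four-step interaction above reduces to a single POST. Here's a minimal client sketch, assuming the endpoint and field names described in this post (`question`, `payment_method_id`, and the `answer`/`sources` response shape); the helper names are illustrative, and the citation formatting is my approximation of how the skill renders it.

```python
import json
from urllib import request

API_URL = "https://api.starpointllp.com/v1/ask"


def build_ask_payload(question: str, payment_method_id: str) -> bytes:
    """The request body the ChatGPT Action sends on each question."""
    return json.dumps(
        {"question": question, "payment_method_id": payment_method_id}
    ).encode()


def format_answer(resp: dict) -> str:
    """Render the answer plus its source citations for display."""
    cites = "\n".join(f"[Source: {s['title']} ({s['url']})]" for s in resp["sources"])
    return f"{resp['answer']}\n{cites}"


def ask_tom(question: str, pm_token: str) -> dict:
    """POST the question and pm_xxx token; each call charges $1."""
    req = request.Request(
        API_URL,
        data=build_ask_payload(question, pm_token),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as r:  # live network call against the API
        return json.load(r)
```

In Phase 2 the only change on this side would be the credential: a pre-authorized MPP/stablecoin token in place of the human-registered pm_xxx.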

How to Use Ask Tom

Ask Tom is live now. Here’s how to get started:

Step 1 — Register on the blog

Go to blog.starpointllp.com/ask-tom (it’s in the top navigation). You’ll see a Stripe payment form — enter your card details (or use Stripe Link if you’ve checked out with Stripe before, it’ll auto-fill). Once registered, you’ll receive a pm_xxx token. Copy it — you’ll need it in Step 2.

Step 2 — Open Ask Tom on ChatGPT

Go to Ask Tom on ChatGPT. When you ask a question, the skill will prompt you to enter your payment token. Paste your pm_xxx from Step 1.

Step 3 — Ask your question

Ask anything in scope — payments, card networks, stablecoins, identity, regulation, agentic commerce. Each question costs $1, charged at the moment you ask. You’ll get an answer grounded in my published research, with source links.

Example session in ChatGPT

You: What is Tom’s view on the Credit Card Competition Act?

Ask Tom: Please enter your payment token (pm_xxx) to proceed.

You: pm_1OqLh2CZ6qsJgndJ…

Ask Tom: Tom’s position on the CCCA is that it represents a roughly 5% probability of affecting roughly 5% of card network EBIT — meaningful as a tail risk but not an existential threat in its current form. He argues the bill’s routing mandate would primarily affect debit rather than credit, and that the card networks’ lobbying firepower and the Senate’s appetite for financial deregulation make passage unlikely in the near term. [Source: CCCA: 5% chance that 5% of Network EBIT could be affected — blog.starpointllp.com/…]

About the Cost

$1 per question is a real charge, not a placeholder. Here are the economics: Stripe takes approximately 30 cents (a cost that should fall significantly once payment moves to stablecoins), AWS Lambda and OpenSearch cost roughly 25 cents per query, and Claude Sonnet costs another 40–50 cents depending on context length. That leaves very little margin. This is explicitly not a money-making exercise (it's a test bed).
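The back-of-envelope arithmetic, using the approximate figures above (integer cents to avoid float noise; the Claude figure is the midpoint of the stated range):

```python
# Rough unit economics per query — all inputs are the approximate
# figures from this post, not measured costs.
price_cents = 100        # $1 per question
stripe_fee_cents = 30    # per-charge card cost; expected to fall with stablecoins
aws_cents = 25           # Lambda + OpenSearch, roughly
claude_cents = 45        # midpoint of the $0.40-0.50 Sonnet range

margin_cents = price_cents - stripe_fee_cents - aws_cents - claude_cents
print(f"margin per query: {margin_cents} cents")  # effectively zero: a test bed, not a business
```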

The $1 price point serves two purposes: it throttles abuse (a bot can’t vacuum my entire corpus for free), and it makes the service a genuine candidate for MPP Phase 2. A $0.001 API isn’t interesting to test stablecoin micropayments against. A $1 query is.

I’m also being transparent about out-of-scope questions: if you ask about something outside my expertise, the API will tell you, but the $1 is still charged. Please ask within scope.

What This Proves (and What Comes Next)

Phase 1 demonstrates that a pay-per-query RAG API is viable, deployable, and usable via a ChatGPT skill today. The human registration step is a solvable UX problem — it’s not an architectural limitation.

Phase 2 will show what changes when you remove that human step. An AI agent with a pre-authorized MPP credential will be able to call Ask Tom, pay for the answer, and use that answer to inform its next action — without any human involved at any point in the chain. That’s the machine economy. And MPP is the primitive that makes it work.

Stay tuned for Phase 2.


Ask Tom is live at blog.starpointllp.com/ask-tom. The ChatGPT skill is at chatgpt.com/g/…/ask-tom. The API spec is available at api.starpointllp.com/v1.

One thought on “MPP Test – Demonstrating Significance: Phase 1 is Live”

  1. I just asked Tom… but it didn’t charge me? Anyway you gave me a good answer…

    “Early MPP adoption won’t topple cards—people still use perks for familiarity.
    Once agents reach a tipping point (roughly 50% transaction coverage in an agentic environment), card perks become marginal.
    Cards don’t disappear—they pivot to credit, lending, legal protections, and specialized perks that MPP cannot easily replicate.”
