What is the ICP tool in MeetBri?
This is the story of how a test became a product. While I was building the original AI interviewer, I needed some long-form interviews to use as models. They had to be real, generally business-oriented, and available at volume.
Podcasts were the perfect solution. I could download hundreds, and they generally featured the right type of people with some quality interviewers.
So, I built a pipeline to download, transcribe, categorize and analyze podcasts at scale.
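As a minimal sketch of that pipeline shape, with hypothetical stage functions standing in for the real downloader, transcriber, and analyzers (all names and placeholder values here are illustrative, not the actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One podcast episode moving through the pipeline."""
    url: str
    audio_path: str = ""
    transcript: str = ""
    category: str = ""
    analysis: dict = field(default_factory=dict)

# Hypothetical stages; a real pipeline would call a podcast feed,
# a transcription service, and the analysis agents described later.
def download(ep: Episode) -> Episode:
    ep.audio_path = f"/audio/{abs(hash(ep.url))}.mp3"  # placeholder
    return ep

def transcribe(ep: Episode) -> Episode:
    ep.transcript = "(transcript text)"                # placeholder
    return ep

def categorize(ep: Episode) -> Episode:
    ep.category = "business"                           # placeholder
    return ep

def analyze(ep: Episode) -> Episode:
    ep.analysis = {"mood": "neutral"}                  # placeholder
    return ep

def run_pipeline(url: str) -> Episode:
    """Run each stage in order: download, transcribe, categorize, analyze."""
    ep = Episode(url=url)
    for stage in (download, transcribe, categorize, analyze):
        ep = stage(ep)
    return ep
```

The point of the shape is that each stage is independent and composable, which is what makes it possible to run at scale.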
The bad news is that it ended up not being very helpful for the self-service surveys.
The good news is the data I could collect at scale is fantastic.
I review transcripts at 4 levels. Each level is a small agent, typically Haiku, that analyzes the interview independently.
First is categorical: who is in the interview and what it is about, including company, industry, scale, and scope. This gives the general metadata for each transcript.
Then I have the three specialty analysis agents. They look for mood, identify language patterns, and extract predefined patterns. The results of the analysis are data: data about the content of each interview that can be deterministically organized, summarized, and analyzed with no AI involved. All of this is saved with the full transcript in an analytical database designed for research extraction.
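As a sketch, the per-transcript output might look like one structured record merging the four agents' results, stored alongside the full transcript (field names and example values here are assumptions, not the real schema):

```python
# Illustrative record shape; field names are assumptions, not the real schema.
def merge_agent_results(transcript_id, categorical, mood, language, patterns):
    """Combine the four agents' outputs into one analyzable row."""
    return {
        "transcript_id": transcript_id,
        # categorical agent: metadata about who is speaking and what about
        "company": categorical.get("company"),
        "industry": categorical.get("industry"),
        "scale": categorical.get("scale"),
        # specialty agents: structured data that needs no AI to analyze
        "mood": mood,                      # e.g. {"overall": "optimistic"}
        "language_patterns": language,     # e.g. ["hedging", "jargon-heavy"]
        "predefined_patterns": patterns,   # e.g. {"pricing_mentions": 3}
    }

row = merge_agent_results(
    "ep-001",
    {"company": "Acme", "industry": "SaaS", "scale": "startup"},
    {"overall": "optimistic"},
    ["hedging"],
    {"pricing_mentions": 3},
)
```

Once the output is a plain row like this, everything downstream is ordinary data work.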
This research database is used to develop the articles and custom analysis. Summaries of this data are pushed to the production database that drives the MeetBri website. The analysis reports themselves are deterministically organized and generated, then summarized by larger agents.
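Because the agent outputs are plain data, the cross-interview summaries can be computed with ordinary counting, no AI involved. A sketch of that aggregation step (the grouping key and field names are assumptions):

```python
from collections import Counter, defaultdict

def summarize_by_industry(rows):
    """Roll per-transcript analysis rows up into per-industry summaries.
    Pure counting over structured data; deterministic, no model calls."""
    moods = defaultdict(Counter)
    counts = Counter()
    for r in rows:
        counts[r["industry"]] += 1
        moods[r["industry"]][r["mood"]] += 1
    return {
        ind: {"interviews": counts[ind],
              "top_mood": moods[ind].most_common(1)[0][0]}
        for ind in counts
    }

# Toy input rows standing in for the research database
rows = [
    {"industry": "SaaS", "mood": "optimistic"},
    {"industry": "SaaS", "mood": "optimistic"},
    {"industry": "Retail", "mood": "cautious"},
]
summary = summarize_by_industry(rows)
```

Summary rows like these are the kind of thing that can be pushed to a production database, with the larger agents only writing the narrative on top.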
What I evolved for the April edition was deeper use of the research database as we expand the depth of the custom analysis options. This shows up in some of this month's blog posts, like the ones on Rabbits and Beer you have already seen.
What's next:
For May we will greatly expand our footprint of interview data and test a few report dimensions.
If you have an interest in the data for articles or research, let me know. I think there are some interesting use cases there.
Sidebar: Why 4 small agents instead of one big run?
This design is more accurate and cheaper to run. The small agents are great at answering direct, clear questions about a block of text, even a big block. Very little reasoning or deep thinking is required if the task is properly and tightly designed. It also runs on Anthropic's batch-processing API, giving me a 50% discount on token costs. I can process an interview for less than a couple of cents.
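A sketch of what submitting those per-interview questions through Anthropic's Message Batches API could look like. The prompt wording and model string are illustrative, and the actual SDK call is shown as a comment since it requires the `anthropic` package and an API key:

```python
# Build one batch request per transcript; each is a small, tightly scoped
# question over a block of text. Prompt wording here is illustrative.
def build_batch_requests(transcripts, model="claude-3-5-haiku-latest"):
    return [
        {
            "custom_id": tid,
            "params": {
                "model": model,
                "max_tokens": 512,
                "messages": [{
                    "role": "user",
                    "content": f"Classify the mood of this interview:\n\n{text}",
                }],
            },
        }
        for tid, text in transcripts.items()
    ]

requests = build_batch_requests({"ep-001": "(transcript text)"})

# With the anthropic SDK installed and ANTHROPIC_API_KEY set, submission is:
#   import anthropic
#   client = anthropic.Anthropic()
#   batch = client.messages.batches.create(requests=requests)
# Batch requests are billed at half the standard token price, which is
# where the 50% discount mentioned above comes from.
```

Each `custom_id` ties a result back to its transcript when the batch finishes, which is what lets hundreds of interviews run asynchronously in one submission.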