Ask a question.
Get a research doc.

Multi-agent research across internal intel, customer signal, competitive landscape, and quantitative data — verified by DBRA and curated Genie Spaces — synthesized into a shareable Google Doc.

One-time setup: run these three commands in a terminal.

1. Add the plugin marketplace (first-time plugin users only):

   git clone https://github.com/databricks-eng/plugin-marketplace.git ~/plugin-marketplace

2. Install the plugin:

   isaac plugin add db-bricksearch@experimental

3. Create the /bricksearch command:

   cp ~/.claude/plugins/cache/experimental-plugin-marketplace/db-bricksearch/*/commands/bricksearch.md ~/.claude/commands/

Already installed? Run isaac plugin update to get the latest version.
Then type /bricksearch in any Isaac session.

For best results, enable Google and Glean MCPs — they power doc publishing and internal search. Run isaac configure mcp to select servers, or visit the MCP connections page in your workspace. See go/mcp for setup docs.

Your question triggers multi-agent searches

Bricksearch breaks your question into targeted sub-queries, confirms the plan with you, then dispatches specialized agents in parallel. Each covers a distinct source domain — from internal Logfood data and customer notes to competitor docs and academic standards. DBRA runs alongside for independent quantitative verification. Every agent follows a strict source protocol: nothing is silently skipped.
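
To make the fan-out concrete, here is a minimal Python sketch of the dispatch step. Bricksearch's real implementation isn't shown on this page, so every name below (run_agent, dispatch, the example sub-queries) is hypothetical; only the shape comes from the description above: user-approved sub-queries fanned out in parallel, with every agent required to report back.

```python
# Hypothetical sketch of parallel agent dispatch; not Bricksearch's code.
from concurrent.futures import ThreadPoolExecutor

def run_agent(agent: str, sub_query: str) -> dict:
    # Placeholder for a real agent run: each agent searches one source
    # domain and reports both what it found and what it skipped.
    return {"agent": agent, "query": sub_query, "findings": [], "skipped": []}

def dispatch(plan: dict) -> list:
    # plan maps agent name -> user-approved sub-query. Everything fans
    # out in parallel, and every agent returns a report either way, so
    # nothing is silently skipped.
    with ThreadPoolExecutor(max_workers=max(len(plan), 1)) as pool:
        futures = [pool.submit(run_agent, a, q) for a, q in plan.items()]
        return [f.result() for f in futures]

reports = dispatch({
    "dbra": "Feature X adoption in Logfood over the last 90 days",
    "competitive": "How do competitors position their equivalent of feature X?",
})
```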

| Agent | Covers | Known bias |
|---|---|---|
| DBRA | Autonomous Logfood SQL via Databricks Research Agent; runs twice (dispatch + verification) | May hallucinate table names |
| Quantitative | Curated Genie Spaces, metric views, DBRA, direct SQL: adoption, spend, trends, funnels | Instrumented features only |
| Internal Intel | PRDs, UXR archive, PM customer notes, ES tickets, CUJs, prior design work | Past internal thinking |
| Sales & GTM | BrickBites, SFDC win/loss, QBR themes, PMM messaging, field playbooks | Deal-winning narratives |
| Roadmap | Quarterly pre-reads, Aha!, Jira epics, leadership priorities, OKRs | Formally planned work only |
| Competitive | Battlecards, competitor docs, feature matrices, pricing, UX comparisons | Databricks-favorable framing |
| Market Landscape | Gartner, Forrester, IDC, MAD landscape (~930 companies), VC funding | Analyst views lag 6–18 months |
| Community Voice | Databricks forums, Reddit, Stack Overflow, Ideas Portal, GitHub issues | Vocal power users |
| Official Docs | Databricks docs, release notes, blog posts, API references | Shipped features only |
| Product Design | NNGroup, design systems, competitor UX patterns, accessibility, IA | UX elegance over feasibility |
| Foundations | Academic papers, W3C/ISO/NIST standards, canonical definitions | Theoretical |

Findings are verified, conflicts resolved

As agents report back, findings are cross-referenced and ranked by authoritativeness. DBRA re-queries Logfood to verify any quantitative claim from a non-authoritative source. Contradictions are resolved — not hidden. Gaps trigger targeted follow-up that you approve before re-dispatch.
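
As a rough sketch, that verification pass might look like the loop below. The dbra_query helper and the claim fields are assumptions made for illustration; the actual protocol is internal to the plugin.

```python
# Hypothetical sketch of quantitative verification; field names assumed.
def dbra_query(metric: str) -> float:
    # Stand-in for an autonomous DBRA SQL run against Logfood.
    raise NotImplementedError

def verify(findings: list, authoritative: set) -> list:
    for claim in findings:
        if claim.get("quantitative") and claim["source"] not in authoritative:
            # Re-derive the number via DBRA and keep both values, so a
            # mismatch surfaces as a flagged conflict instead of vanishing.
            claim["dbra_value"] = dbra_query(claim["metric"])
            claim["verified"] = claim["dbra_value"] == claim["value"]
    return findings
```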

When sources conflict, the higher-ranked source wins (a sketch of this resolution follows the list):

1. Genie Space metric views + Logfood data (curated metrics & observed behavior)
2. Internal PRDs & strategy docs (intent & decisions)
3. Official Databricks docs (publicly committed)
4. Customer notes / win-loss data (direct customer signal)
5. Industry standards / field signal (established definitions)
6. Community & Ideas Portal (public signal)
7. Competitive battlecards (useful, not neutral)
8. External analysts (independent but lagging)
9. External blogs (opinion-based)
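
For illustration, rank-based resolution can be a simple lookup against the tiers above. The resolve function, the tier keys, and the claim shape are all hypothetical; only the ordering comes from the list.

```python
# Hypothetical sketch: tier keys and claim shape are invented for
# illustration; the ordering mirrors the ranked list above.
SOURCE_RANK = {
    "genie_logfood": 1, "internal_prds": 2, "official_docs": 3,
    "customer_notes": 4, "industry_standards": 5, "community": 6,
    "battlecards": 7, "analysts": 8, "external_blogs": 9,
}

def resolve(conflicting_claims: list) -> dict:
    # Lowest rank number wins. Losing claims are attached rather than
    # dropped, so the final doc can show the contradiction and how it
    # was settled.
    ordered = sorted(conflicting_claims, key=lambda c: SOURCE_RANK[c["source"]])
    winner, losers = ordered[0], ordered[1:]
    winner["superseded"] = losers
    return winner
```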

A shareable Google Doc

Executive summary, themes, conflicts, gaps, and recommendations up front. Raw agent outputs and methodology in the appendix. Follow-ups update the same doc — URL never changes.