Multi-agent research across internal intel, customer signal, competitive landscape, and quantitative data — verified by DBRA and curated Genie Spaces — synthesized into a shareable Google Doc.
Run `isaac plugin update` to get the latest version, then type `/bricksearch` in any Isaac session. For best results, enable the Google and Glean MCPs, which power doc publishing and internal search. Run `isaac configure mcp` to select servers, or visit the MCP connections page in your workspace. See go/mcp for setup docs.
Bricksearch breaks your question into targeted sub-queries, confirms the plan with you, then dispatches specialized agents in parallel. Each covers a distinct source domain — from internal Logfood data and customer notes to competitor docs and academic standards. DBRA runs alongside for independent quantitative verification. Every agent follows a strict source protocol: nothing is silently skipped.
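The fan-out step above can be sketched with Python's standard library. This is a minimal illustration of parallel dispatch, not Bricksearch's actual implementation; the sub-queries, domains, and `run_agent` function are all hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-queries derived from one research question.
SUB_QUERIES = {
    "quantitative": "adoption trend for feature X",
    "internal_intel": "prior PRDs mentioning feature X",
    "competitive": "competitor parity for feature X",
}

def run_agent(domain: str, query: str) -> dict:
    """Stand-in for a specialized agent; returns a tagged finding."""
    return {"domain": domain, "query": query, "finding": f"result for {query!r}"}

# Dispatch all agents in parallel; collect findings when every agent reports back.
with ThreadPoolExecutor(max_workers=len(SUB_QUERIES)) as pool:
    futures = [pool.submit(run_agent, d, q) for d, q in SUB_QUERIES.items()]
    findings = [f.result() for f in futures]
```

Each finding stays tagged with its source domain, which is what makes the later cross-referencing and ranking possible.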
| Agent | Covers | Known bias |
|---|---|---|
| DBRA | Autonomous Logfood SQL via Databricks Research Agent — runs twice (dispatch + verification) | May hallucinate table names |
| Quantitative | Curated Genie Spaces, metric views, DBRA, direct SQL — adoption, spend, trends, funnels | Instrumented features only |
| Internal Intel | PRDs, UXR archive, PM customer notes, ES tickets, CUJs, prior design work | Past internal thinking |
| Sales & GTM | BrickBites, SFDC win/loss, QBR themes, PMM messaging, field playbooks | Deal-winning narratives |
| Roadmap | Quarterly pre-reads, Aha!, Jira epics, leadership priorities, OKRs | Formally planned work only |
| Competitive | Battlecards, competitor docs, feature matrices, pricing, UX comparisons | Databricks-favorable framing |
| Market Landscape | Gartner, Forrester, IDC, MAD landscape (~930 companies), VC funding | Analyst views lag 6–18 months |
| Community Voice | Databricks forums, Reddit, Stack Overflow, Ideas Portal, GitHub issues | Vocal power users |
| Official Docs | Databricks docs, release notes, blog posts, API references | Shipped features only |
| Product Design | NNGroup, design systems, competitor UX patterns, accessibility, IA | UX elegance over feasibility |
| Foundations | Academic papers, W3C/ISO/NIST standards, canonical definitions | Theoretical |
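For illustration only, the roster above can be modeled as a simple registry where every agent carries its declared bias, so downstream ranking can discount it. The field names and structure are assumptions, not Bricksearch internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    covers: str
    known_bias: str

# A few rows from the table above, as data.
REGISTRY = [
    Agent("Quantitative", "Genie Spaces, metric views, DBRA, direct SQL", "Instrumented features only"),
    Agent("Competitive", "Battlecards, competitor docs, feature matrices", "Databricks-favorable framing"),
    Agent("Official Docs", "Databricks docs, release notes, API references", "Shipped features only"),
]

# Bias is attached to every finding an agent emits, never stripped.
biases = {a.name: a.known_bias for a in REGISTRY}
```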
As agents report back, findings are cross-referenced and ranked by authoritativeness. DBRA re-queries Logfood to verify any quantitative claim from a non-authoritative source. Contradictions are resolved — not hidden. Gaps trigger targeted follow-up that you approve before re-dispatch.
When sources conflict, the higher-ranked source wins.
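The rank-based rule can be sketched with a simple numeric authoritativeness score. The scores, source names, and claim values below are illustrative, not the actual ranking Bricksearch uses; note that losing claims are recorded, not discarded:

```python
# Hypothetical authoritativeness ranking: higher score wins a conflict.
RANK = {"DBRA": 3, "Official Docs": 2, "Community Voice": 1}

def resolve(claims: list) -> dict:
    """Keep the claim from the highest-ranked source and surface the
    losers as noted conflicts instead of hiding them."""
    ordered = sorted(claims, key=lambda c: RANK[c["source"]], reverse=True)
    return {"accepted": ordered[0], "noted_conflicts": ordered[1:]}

claims = [
    {"source": "Community Voice", "value": "feature is broken"},
    {"source": "DBRA", "value": "99.2% success rate over last 30 days"},
]
result = resolve(claims)
print(result["accepted"]["source"])  # → DBRA
```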
Executive summary, themes, conflicts, gaps, and recommendations up front. Raw agent outputs and methodology in the appendix. Follow-ups update the same doc — URL never changes.