faq · 31 answers
Frequently asked questions.
The questions teams ask before adopting Pulse, answered straight. If something here isn't covered or contradicts what you find elsewhere on the site, tell us on /contact and we'll fix it.
About Pulse
What is Pulse?
Pulse is a company brain for software teams. It indexes the work that actually happens across Slack, GitHub, Notion, Linear, Calendar, Drive, Confluence, Jira, and meeting transcripts, then turns it into a permission-aware graph of decisions, commitments, owners, and open questions that a team can ask, navigate, and act on.
Who is Pulse for?
Software teams of 5 to 500 people. The sweet spot is engineering, product, and design organizations that have outgrown one Slack workspace, accumulated three or four different sources of truth, and want a memory layer that respects existing access controls rather than a chat box that ignores them.
What problem does Pulse solve?
Three problems. Decisions disappear into Slack scrollback the day they are made. Context lives in tools that don't talk to each other, so onboarding a new person takes weeks of asking around. Existing AI tools either ignore who is allowed to see what, or wrap a chat box around a doc index and call it knowledge. Pulse fixes all three with a typed decision graph, ACL-mirrored retrieval, and approval-gated agent actions.
Is Pulse a chatbot?
No. Pulse can answer questions, but the value is the decision graph and the approval-gated actions that hang off it. A chat box is the easy part of building this category. The hard parts are modeling work as typed entities, mirroring ACLs from every source tool, and getting calibration honest enough that you can trust the confidence score.
How it works
How does Pulse build the knowledge graph?
Connectors pull source data on a schedule and via real-time webhooks where available. Each item is parsed into typed entities (decision, commitment, action item, PR review, meeting outcome) with extracted owners, deadlines, and relations. The graph is stored in Postgres with pgvector for semantic retrieval. Every node carries the source IDs and the access control list of the document it came from.
How does Ask work end to end?
Five stages, visible on every answer. Retrieve fetches candidate sources, filtered by what the asker is allowed to see. Rank reorders by relevance and recency. Cite attaches sentence-level source IDs to each passage. Synthesize composes the answer using Anthropic Claude or OpenAI on a zero-retention endpoint. Calibrate scores confidence using the workspace's historical accuracy on similar questions and refuses to synthesize below threshold.
How does Pulse decide what to surface in a briefing?
Each briefing has weighted sections (overnight decisions, commitments coming due, stuck PRs, customer signal, calibration drift). Per-user knobs raise or lower section weights. Always-include entities and never-surface filters apply on top. Delivery channel and quiet hours are respected so briefings land when you're ready for them, not at 3am.
What is the decision graph?
Every decision Pulse extracts becomes a typed node with rationale, owners, the sources that informed it, and the dependencies that hang off it. You navigate to a decision instead of re-Googling it. The graph is editable, so when Pulse gets a relation wrong or merges two entities that should be separate, edits feed back into the calibration loop.
Where does Pulse host data?
Pulse runs on AWS in us-east-1 today with EU residency on the Enterprise plan. Postgres with row-level security holds the graph and content. pgvector holds embeddings. S3 holds binary attachments under per-tenant prefixes with bucket policies that block cross-tenant access at the IAM layer.
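The five Ask stages described above compose into a simple pipeline. The sketch below is illustrative only: the function names, signatures, and the 0.7 threshold are assumptions, not Pulse's actual API, and synthesis is stubbed rather than calling a model.

```python
# Hypothetical sketch of the five-stage Ask pipeline. Stage names come from
# the FAQ above; everything else (signatures, threshold, topic logic) is invented.

def retrieve(question, visible_doc_ids, index):
    # Only consider documents the asker is allowed to see.
    return [d for d in index
            if d["id"] in visible_doc_ids and question.lower() in d["text"].lower()]

def rank(candidates):
    # Reorder by recency; a real ranker would blend semantic relevance too.
    return sorted(candidates, key=lambda d: d["updated_at"], reverse=True)

def cite(candidates):
    # Attach source IDs to each passage.
    return [{"passage": d["text"], "source_id": d["id"]} for d in candidates]

def question_topic(question):
    # Crude stand-in for real topic classification.
    return question.split()[0].lower()

def calibrate(question, historical_accuracy):
    # Score confidence from the workspace's past accuracy on similar questions.
    return historical_accuracy.get(question_topic(question), 0.5)

def ask(question, visible_doc_ids, index, historical_accuracy, threshold=0.7):
    passages = cite(rank(retrieve(question, visible_doc_ids, index)))
    confidence = calibrate(question, historical_accuracy)
    if confidence < threshold:
        # Below threshold, refuse to synthesize rather than guess.
        return {"answer": None, "confidence": confidence, "refused": True}
    # Synthesis would call an LLM on a zero-retention endpoint; stubbed here.
    answer = " ".join(p["passage"] for p in passages[:2])
    return {"answer": answer, "confidence": confidence,
            "refused": False, "citations": passages[:2]}
```

The design point worth noticing is the order: the ACL filter runs inside retrieval, before ranking or synthesis ever sees a document, and the refusal check runs before any answer text is produced.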
Integrations
Which tools does Pulse integrate with today?
Nine integrations live today: Slack, GitHub, Notion, Linear, Google Calendar, Google Drive, Confluence, Jira, and meeting transcripts (Granola, Fireflies, Read.ai, Otter, Zoom Cloud via webhook). Two more are on the near-term roadmap and a Push API is available for custom sources on the Enterprise plan.
Are connectors read-only?
Connectors are read-only by default. Writes happen through a separate agent-actions surface that requires explicit approval per action, allowlist enforcement on the recipient or target, and per-policy rate limits. Read scopes and write scopes are requested separately so a workspace can adopt Pulse for retrieval first and enable agent actions later.
What if a tool we use isn't on the list?
Three options. (1) The Push API on Enterprise lets a customer pipeline arbitrary data in via signed webhooks. (2) Self-hosted MCP bridges run a connector inside the customer's VPC with mTLS back to Pulse. (3) For high-priority asks the Pulse team prioritizes new connectors on the roadmap based on customer demand. Tell us what you need on /contact.
Can Pulse use my company's existing search index?
Pulse does not depend on an upstream search index. It builds its own typed graph from source content. That said, Pulse's MCP server can be called by any agent that already speaks the Anthropic Agent Skills standard, so a team's existing assistants can query Pulse without ripping out anything they already have.
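The signed webhooks behind the Push API can be verified with a standard HMAC check. The sketch below assumes HMAC-SHA256 over the raw request body with a hex-encoded signature header; Pulse's actual signing scheme and header name may differ, so treat this as the general pattern rather than the documented contract.

```python
import hmac
import hashlib

# Hypothetical webhook verification for a signed Push API payload.
# Scheme (HMAC-SHA256 over the raw body, hex-encoded) is an assumption.

def sign_payload(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    expected = sign_payload(secret, body)
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(expected, signature_header)
```

Always verify against the raw bytes as received; re-serializing parsed JSON before signing is a classic source of false mismatches.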
Security and trust
Can my coworker see things I shouldn't see?
No. Slack channel privacy, Notion page sharing, GitHub repo permissions, and Drive file ACLs replicate into Pulse's ResourceAcl rows during connector sync. Every retrieval call (Ask, Search, briefings, agent drafts) passes through visibleDocumentIds before returning. When Pulse spots context you don't have access to, it tells you and offers a one-click request-access flow that mirrors back to the source tool. Every access decision is logged in AuditLog with actor, resource, decision, and timestamp.
Will Pulse train AI models on our data?
No. Pulse calls Anthropic Claude and OpenAI on their zero-retention, no-training endpoints. Customer content stays in row-level-secured Postgres + pgvector inside the customer's tenant. There is no aggregation pipeline that crosses tenants for model training. Federated benchmarks (the optional network feature) are differential-privacy aggregates of percentile bands only, with a minimum tenant count before any band is published.
What compliance posture does Pulse hold?
SOC 2 Type II is in progress with audit window closing in Q3. GDPR-aligned data handling, DPA available on /dpa, sub-processor list on /legal/subprocessors. The product enforces ACL mirroring, audit logging, BYOK for Anthropic, and per-tenant key isolation as engineering invariants, not policy promises.
What happens when an agent action goes wrong?
Five-minute undo on every external write. The source system's delete endpoint retracts the message, comment, ticket, or page. Before that, every action passes through allowlist enforcement (recipient email domain for Slack DMs, channel ID for channel messages, repo name for GitHub, team key for Linear). Per-policy rate limits clamp blast radius. After five minutes the AuditLog row is permanent but the action stays reversible by hand.
Can we bring our own model key?
BYOK Anthropic ships today. Paste a key from /app/admin/byok and every Claude call (Ask synthesis, decision extraction, devil's advocate, drafting) routes through it. Keys are encrypted at rest with AES-256-GCM, decrypted only at request time, and the last four characters are stored as a hint. AWS Bedrock and Azure OpenAI BYOK are on the next release roadmap.
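The allowlist enforcement described in this section is essentially a per-action-kind lookup. The sketch below is a minimal illustration: the FAQ only states which field is checked for each tool (email domain for Slack DMs, channel ID for channel messages, repo name for GitHub, team key for Linear), so the field names and policy shape here are invented.

```python
# Hypothetical sketch of allowlist enforcement for agent actions.
# Field and policy names are illustrative, not Pulse's schema.

def allowed(action: dict, policy: dict) -> bool:
    kind = action["kind"]
    if kind == "slack_dm":
        # Slack DMs are gated on the recipient's email domain.
        domain = action["recipient_email"].rsplit("@", 1)[-1]
        return domain in policy["allowed_domains"]
    if kind == "slack_channel":
        return action["channel_id"] in policy["allowed_channels"]
    if kind == "github_comment":
        return action["repo"] in policy["allowed_repos"]
    if kind == "linear_ticket":
        return action["team_key"] in policy["allowed_team_keys"]
    # Unknown action kinds are denied by default, not allowed by default.
    return False
```

The deny-by-default fall-through matters: adding a new action kind without also adding its allowlist check should fail closed.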
Skills and AI features
What is a Skill?
A Skill is a scoped, reviewable agent that does one job. Drafting a renewal note. Preparing a one-on-one. Writing a launch summary. Each Skill has a manifest, an explicit capability budget that bounds what it can spend, and an eval harness that scores it against expected outputs. Skills are versioned and shippable.
Where do Skills come from?
Two paths. (1) Auto-extracted: Pulse observes a recurring pattern in the workspace and compiles the steps into a draft Skill manifest. A human reviews and merges. (2) Hand-authored: a teammate writes a Skill manifest from scratch using the Skills authoring surface. Both paths produce the same artifact and ship through the same eval harness.
How accurate is Pulse's AI?
Calibrated per workspace and per topic. When Pulse says 87 percent confident, the actual hit rate sits near 87 percent on that kind of question in that workspace. ConfidenceCalibrationCurve buckets by tenant, topic, and decile; the learner re-tunes from thumbs up and thumbs down feedback weekly. Per-paragraph hallucination flagging marks every passage as cited or inferred.
Does Pulse work with Claude Desktop and Cursor?
Yes. Pulse follows the Anthropic Agent Skills standard. SKILL.md tarballs work in Claude Desktop and Cursor today. Pulse also runs a native MCP server with nine read-only tools (search, get_decision, find_expert, get_pulse, plus five more) that any Skills-compatible client can call.
Can a team write their own custom agents?
Yes. The Custom agents surface under /app/admin/agents lets a workspace owner define agents with a system prompt, a capability budget, an allowlist of tools, and an approval policy. Action policies under /app/admin/action-policies set per-tool rate limits and recipient allowlists separately.
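"87 percent confident means 87 percent correct" is checkable: bucket answers by stated confidence and compare each bucket's average confidence to its actual hit rate. The sketch below shows the decile bucketing only; the per-tenant and per-topic dimensions the FAQ mentions, and the weekly re-tuning, are omitted, and all names are illustrative.

```python
# Illustrative per-decile calibration check. A well-calibrated system has
# hit_rate close to avg_confidence in every bucket.

def calibration_curve(samples):
    """samples: list of (stated_confidence, was_correct) pairs."""
    buckets = {}
    for conf, correct in samples:
        decile = min(int(conf * 10), 9)  # 0.0-0.099 -> 0, ..., 0.9-1.0 -> 9
        buckets.setdefault(decile, []).append((conf, correct))
    curve = {}
    for decile, items in sorted(buckets.items()):
        avg_conf = sum(c for c, _ in items) / len(items)
        hit_rate = sum(1 for _, ok in items if ok) / len(items)
        curve[decile] = {"avg_confidence": round(avg_conf, 2),
                         "hit_rate": round(hit_rate, 2)}
    return curve
```

A gap between the two numbers in any bucket is exactly the "calibration drift" signal a briefing section can surface.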
How Pulse compares
How is Pulse different from Glean?
Glean is enterprise search, optimized for an organization of thousands with a search-bar shaped problem. Pulse is a process graph for software teams of 5 to 500, optimized for decision memory, calibrated answers, and approval-gated agent actions. If a team has tens of thousands of documents and wants a search box, Glean is a strong choice. If a team has decisions to remember and work to act on, Pulse is the right shape. Full comparison at /vs/glean.
How is Pulse different from Notion AI?
Notion AI is great inside Notion. It searches the Notion workspace and writes new Notion content. It does not see Slack, GitHub, Linear, or Jira, and it does not model decisions or commitments as typed entities. Pulse pulls from all the sources a software team actually uses and turns them into a graph. Full comparison at /vs/notion-ai.
How is Pulse different from Coworker AI?
Coworker AI is a single chat companion. Pulse is a memory and action layer. The two are not the same thing, even though they share the chat surface. A team can run Pulse alongside any chat companion via the MCP server and benefit from both. Full comparison at /vs/coworker.
Why not just build this on top of ChatGPT or Claude?
An LLM is one of five stages in Pulse's pipeline. The hard work is in the other four. Retrieval that respects ACLs at the document level. Ranking that combines semantic and recency signals. Citation at the sentence level. Calibration that is workspace-specific and topic-specific. A bare LLM call gives a confident-sounding answer with no audit trail and no way to refuse below a confidence threshold.
Getting started
How long does setup take?
Three phases. About 30 minutes of conversational interview with a Pulse founder so the graph isn't cold on day one. About 10 minutes of OAuth handshakes for the connectors that match the team's stack. 2 to 6 hours of background backfill before retrieval is dense. Useful answers land within minutes for recent activity; the long tail of historical decisions takes the rest of the first day.
What does a Pulse rollout look like?
Three pilot tracks are available. Knowledge first (read-only, briefings + Ask, no actions) ships in week one. Action layer (drafting Slack DMs, Linear tickets) enables in week two with explicit allowlists. Custom Skills (codify a team's recurring rituals) ship over the following two weeks. The whole rollout fits inside a one-month pilot with a clear success metric agreed up front.
Do we need to migrate any data?
No migration. Pulse reads from existing tools via OAuth. The source of truth stays in Slack, GitHub, Notion, and the rest. If a team disconnects Pulse, the source tools are untouched and Pulse's mirrored copy is deleted on a 7-day soft-delete schedule (or immediately on request).
How does support work?
Weekly office hours, in-app chat with published response times (P1 within 1 hour, P2 within 4 hours, P3 within 1 business day), and email at support@pulsehq.tech. Every channel is staffed by humans, not by a model. Full details at /help.
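The 7-day soft-delete schedule mentioned above boils down to a timestamp comparison: mark mirrored data deleted at disconnect time, then purge once the window elapses or immediately on request. This sketch is an assumption about the mechanism; the constant and function names are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical purge-eligibility check for the 7-day soft-delete window.

SOFT_DELETE_WINDOW = timedelta(days=7)

def purge_eligible(deleted_at: datetime, now: datetime,
                   immediate: bool = False) -> bool:
    if immediate:
        # A customer request skips the window entirely.
        return True
    return now - deleted_at >= SOFT_DELETE_WINDOW
```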
Still have a question?
Office hours every Tuesday at 17:00 UTC and every Thursday at 09:00 PT. Or grab a slot from /contact and we'll answer in writing within one business day.