The Pulse Network.
Skills travel, data doesn’t.
Most knowledge work is repeated work. Every Series-A finance team builds the same investor-update template. Every customer-success org reinvents the same QBR rhythm. The Network lets one org publish a skill (a reproducible workflow) and another org pull it, run it on their own data, and never share a byte across the wire.
How it actually works
A skill is a versioned, reproducible workflow, not data. It’s a recipe: “ask Pulse this, draft that, route it to that team.” When you import a skill from another org, you import the recipe. Your data stays put. Their data stays put. The two map graphs never touch.
Skills are recipes, not data dumps.
An org compiles a workflow they’ve been running internally (say, the weekly investor update) into a Skill manifest: prompts, retrieval scope, output format, calibration thresholds. Click publish. Pulse strips anything that looks like a value (names, numbers, internal references) and ships only the structural template.
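A minimal sketch of the stripping step, assuming the manifest is a flat map of string fields; the regex patterns and the {slot} placeholder are illustrative, not the shipped redaction pass. Structure survives; anything value-shaped becomes a slot. The resulting manifest looks like the one below.

import re

# Illustrative patterns for value-shaped tokens; a real pass would be schema-aware.
VALUE_PATTERNS = [
    re.compile(r"\$[\d.,]+[kKmM]?"),             # dollar amounts
    re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),  # name-shaped pairs
    re.compile(r"\b\d{2,}\b"),                   # bare numbers and IDs
]

def strip_values(field: str) -> str:
    for pattern in VALUE_PATTERNS:
        field = pattern.sub("{slot}", field)
    return field

def publish(workflow: dict[str, str]) -> dict[str, str]:
    # Ship only the structural template: prompts keep their shape,
    # concrete names, numbers, and internal references become slots.
    return {key: strip_values(value) for key, value in workflow.items()}

draft = {"prompt": "Summarize ARR ($1.2M), the Jane Doe hire, and blocker 8142."}
print(publish(draft))
# {'prompt': 'Summarize ARR ({slot}), the {slot} hire, and blocker {slot}.'}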
SKILL investor-update.weekly
version 1.2.0
author @stride · verified org
retrieval {revenue, hires, ships, blockers}
tone "terse, numbers-first, no hedging"
deps {map, briefings, calendar}
runs/wk 280 across 47 orgs

Importing a skill never imports a byte.
Your admin browses the registry, finds a skill, and clicks install. The skill arrives in your workspace as a sealed module. When you run it, it queries your Pulse map, with your permissions, against your data. The author can’t see what you ran it on.
$ pulse skills install investor-update.weekly
✓ manifest verified
✓ capability scope: read:map, read:metrics
✓ sandboxed in workspace.pulsehq

$ pulse skills run investor-update.weekly
→ retrieved 14 sources
→ drafted in 3.2s
→ no external calls made
Every skill has a provenance chain.
Skills are signed with org keys; a sketch of the verification step follows the stats card below. The registry shows: who published, what version, how many other orgs run it, the average satisfaction score, and what the diffs were across versions. We re-vet automatically when the prompt or the dep list changes.
verified by    @stride · @runwise
runs in prod   47 orgs
sat. score     4.6 / 5.0 (n=312)
last audit     8 days ago
version diffs  3 minor, 1 patch
calibration    brier 0.09 (good)
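What “signed with org keys” could look like mechanically, as a sketch: it assumes Ed25519 keys and a canonical JSON encoding, neither of which the registry necessarily uses. Tampering with any manifest field fails verification on install.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def canonical(manifest: dict) -> bytes:
    # Stable byte encoding so signer and verifier hash the same thing.
    return json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()

# Publisher side: sign the manifest with the org key.
org_key = Ed25519PrivateKey.generate()
manifest = {"skill": "investor-update.weekly", "version": "1.2.0", "author": "@stride"}
signature = org_key.sign(canonical(manifest))

# Installer side: verify against the org's published public key.
def verify(manifest: dict, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    try:
        public_key.verify(signature, canonical(manifest))
        return True
    except InvalidSignature:
        return False

assert verify(manifest, signature, org_key.public_key())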
Forking is encouraged, expected, and credited.
If a skill works almost-but-not-quite for your team, fork it. Tweak the prompts, change the retrieval scope, ship a new version under your org. The Network tracks the lineage. The original author sees usage; you see your fork’s adoption. No royalties: the unit of credit is reputation, not money.
FORK investor-update.weekly
parent        @stride@1.2.0
changes       +arr-bridge, -hires-section
your version  @pulsehq@1.2.0-fork
↑ shipped to org
Three trust modes, pick yours per skill
Not every team can run unvetted code on their data plane. The Network ships three modes, configurable per skill (sketched in code after the three modes below). The default for new tenants is shadow: preview-only until an admin promotes.
Shadow
Skill runs on a snapshot of your data; output is preview-only. Nothing writes to systems, nothing routes to humans. Default for fresh installs.
- Sandbox tenant, snapshot data
- Output renders inline, never sent
- Audit trail per run
- Promote by admin to live or sealed
Live
Skill writes to connected systems within its declared scope. Reversible for 7 days (90 on Enterprise). For skills you’ve vetted and use weekly.
- Writes through the policy engine
- Reversal pointer per call
- Per-skill rate limit
- Author cannot push silent updates
Sealed
Skill is pinned to a specific version, frozen. No auto-updates, no telemetry. For regulated industries that need a stable artefact for compliance.
- Version pinned for the contract term
- Re-audit only on explicit upgrade
- No usage telemetry sent to author
- Available on Enterprise + sealed add-on
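How per-skill mode selection could be wired, as a sketch with hypothetical names throughout; the source describes the behavior, not this implementation. Shadow previews; live and sealed write through the policy engine.

from enum import Enum

class Mode(Enum):
    SHADOW = "shadow"   # snapshot data, preview-only output
    LIVE = "live"       # scoped writes, reversible
    SEALED = "sealed"   # version-pinned, no telemetry

DEFAULT_MODE = Mode.SHADOW                           # the default for new tenants
SKILL_MODES = {"investor-update.weekly": Mode.LIVE}  # set per skill by an admin

def preview(output: str) -> None:
    print(f"[shadow] {output} (rendered inline, never sent)")

def policy_checked_write(skill: str, output: str) -> None:
    print(f"[{skill}] write within declared scope, reversal pointer recorded")

def run(skill: str, output: str) -> None:
    mode = SKILL_MODES.get(skill, DEFAULT_MODE)
    if mode is Mode.SHADOW:
        preview(output)                      # nothing writes, nothing routes
    else:
        policy_checked_write(skill, output)  # live or sealed

run("qbr.quarterly", "draft QBR")  # unlisted skill: falls back to shadow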
The threat model, written down
If you’re going to run code from another org on your data, you should know exactly what they can and cannot do. Here’s the honest version.
The publisher sees aggregate run counts, satisfaction scores, and version-level error rates.
Nothing about your data, prompts, or outputs. Aggregations are k-anonymized at k=10: if fewer than 10 orgs are running a skill, the publisher sees only a “small audience” badge, not specific usage.
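The k=10 rule as a sketch, assuming telemetry lands in a per-org run-count table (illustrative, not the real pipeline):

K = 10  # below this many orgs, the publisher gets a badge, not numbers

def publisher_view(runs_by_org: dict[str, int]) -> dict:
    # The only telemetry a skill author ever sees: aggregates past k, a badge below it.
    if len(runs_by_org) < K:
        return {"audience": "small audience"}
    return {"orgs": len(runs_by_org), "total_runs": sum(runs_by_org.values())}

print(publisher_view({"org-a": 40, "org-b": 12}))  # {'audience': 'small audience'}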
Manifest, signature, and aggregate stats. No prompts, no outputs.
The registry stores skill manifests (signed) and aggregate run telemetry. It never sees the data the skill ran on. Pen tests are quarterly; reports are under NDA.
Capability scope is enforced at the policy engine, not the skill.
A skill declares scopes (e.g. read:metrics, write:tasks). The policy engine checks every call. A buggy or malicious skill can’t reach beyond its declared scope; there’s no escape hatch.
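Sketched, under the assumption that skills reach data only through engine-mediated calls (the names here are hypothetical): the check lives outside the skill, so a buggy skill can’t route around it.

class ScopeError(PermissionError):
    pass

class PolicyEngine:
    # The skill holds no direct data handles; every call passes through here.
    def __init__(self, declared: set[str]):
        self.declared = declared

    def call(self, scope: str, action):
        if scope not in self.declared:
            raise ScopeError(f"declared {sorted(self.declared)}, asked for {scope!r}")
        return action()

engine = PolicyEngine({"read:metrics", "write:tasks"})
engine.call("read:metrics", lambda: "ok")    # within scope: allowed
try:
    engine.call("read:map", lambda: None)    # undeclared scope: refused
except ScopeError as err:
    print(err)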
A skill can’t be swapped out after install. Versions are immutable, signed, and pinned per install.
A new version is a new install: your admin gets a diff, sees the changed scopes, and approves. No silent updates, even for “patch” versions. Sealed mode pins this further.
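Immutability plus pinning, sketched with a content-addressed digest (an assumption; the source says only that versions are immutable, signed, and pinned): any change to the manifest, however small, breaks the pin and forces the diff-and-approve path.

import hashlib
import json

def digest(manifest: dict) -> str:
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# At install, the admin approves one exact manifest; its digest is the pin.
installed = {"skill": "investor-update.weekly", "version": "1.2.0"}
PIN = digest(installed)

def fetch_and_check(fetched: dict) -> dict:
    # Even a "patch" bump is a new digest, hence a new install to approve.
    if digest(fetched) != PIN:
        raise RuntimeError("manifest changed since install: diff and admin approval required")
    return fetched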
The skill can read what its scope allows, but the output is rendered locally.
A skill cannot make external network calls. Its output goes to your Pulse, your Slack, your email, nowhere the publisher can see it. We tested this against intentional prompt-injection attacks.
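The no-egress property, sketched as capability passing (illustrative; the source doesn’t describe the sandbox internals): the skill’s entry point receives a scoped read handle and a local render handle, and nothing that can open a connection.

from typing import Callable

def run_sandboxed(entry: Callable, retrieve: Callable[[str], str],
                  render: Callable[[str], None]) -> None:
    # Two capabilities, no network handle: output can only land in tenant-owned sinks.
    render(entry(retrieve))

def investor_update(retrieve: Callable[[str], str]) -> str:
    return f"Weekly update: {retrieve('revenue')}"

run_sandboxed(investor_update,
              retrieve=lambda topic: f"<{topic} from your map>",
              render=print)  # your Pulse, your Slack, your email; never the publisher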
No cross-tenant training. Every skill run is tenant-isolated end-to-end.
We don’t aggregate across tenants for training. Calibration uses per-tenant feedback only. The retrieval index is per-tenant. The Network is a recipe registry, not a data lake.
The shape of work is shared.
So should the shape of how it gets done.
If your team has a workflow that works, publish it. Other orgs will fork it, improve it, send it back better. If you’re new to Pulse, join with a single skill and pull the rest.