fix(metrics): return real total memory count instead of capped page size #1674
Sanjays2402 wants to merge 2 commits into MemTensor:main
Conversation
The Web UI memory-count card on the Memories page was stuck at 500
even when the database held 1400+ traces. `countTraces` was counting
rows returned by `repos.traces.list({ limit: 100_000 })`, but the
shared `buildPageClauses`/`clampLimit` helper silently clamps every
list `limit` to 500. The result was that `countTraces` returned at
most 500 regardless of how many traces actually existed.
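A minimal sketch of the clamping behavior described above. The real helper lives in `core/storage/repos/_helpers.ts`; the cap of 500 comes from this PR, but the exact function body and the default page size here are assumptions for illustration:

```typescript
// Hypothetical reconstruction of the shared clamping helper.
// Any list limit, however large, is silently reduced to the cap.
const MAX_PAGE_SIZE = 500;

function clampLimit(limit?: number): number {
  const requested = limit ?? 50; // assumed default page size
  return Math.min(Math.max(1, requested), MAX_PAGE_SIZE);
}

// A count derived from a clamped listing can never exceed the cap:
const effective = clampLimit(100_000); // silently becomes 500
console.log(effective);
```

This is why `rows.length` on a `list({ limit: 100_000 })` result could never report more than 500.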
Switch `countTraces` to use the repo's dedicated SELECT COUNT(*)
queries (`repos.traces.count` / `countTurns`), which have no
page-size cap, so the displayed total reflects the real database size.
The substring-search path now pages through traces explicitly so it
also accurately counts results above the 500-row cap. Listings keep
their page-size limits; only the count was wrong.
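The two counting paths can be sketched as follows. The repo method names (`count`, `countTurns`, `list`) come from this PR; the `Trace` shape, the field names, and the batch size are illustrative assumptions, not the real API:

```typescript
// Sketch of the fixed counting logic, under assumed repo shapes.
interface Trace { id: string; turnKey: string; content: string }

interface TraceRepo {
  count(opts: { sessionId?: string }): number;      // SELECT COUNT(*), uncapped
  countTurns(opts: { sessionId?: string }): number; // COUNT(DISTINCT turn), uncapped
  list(opts: { limit: number; offset: number }): Trace[]; // clamped to 500/page
}

function countTraces(
  repo: TraceRepo,
  input?: { sessionId?: string; groupByTurn?: boolean; search?: string },
): number {
  if (!input?.search) {
    // No substring search: a dedicated COUNT query has no page cap.
    return input?.groupByTurn
      ? repo.countTurns({ sessionId: input?.sessionId })
      : repo.count({ sessionId: input?.sessionId });
  }
  // Substring search: page through listings explicitly so the count
  // can exceed the per-page clamp, instead of trusting one list() call.
  const BATCH = 500;
  const needle = input.search.toLowerCase();
  let total = 0;
  const turns = new Set<string>();
  for (let offset = 0; ; offset += BATCH) {
    const batch = repo.list({ limit: BATCH, offset });
    for (const t of batch) {
      if (t.content.toLowerCase().includes(needle)) {
        total += 1;
        turns.add(t.turnKey);
      }
    }
    if (batch.length < BATCH) break; // short page means we are done
  }
  return input.groupByTurn ? turns.size : total;
}
```

The key design point is that the search path never asks `list()` for more than one clamped page at a time, so the clamp affects paging mechanics but not the final tally.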
Adds a regression test that inserts 600 trace rows across 6 turns and
verifies both `countTraces({})` and `countTraces({ groupByTurn: true })`
return the real total instead of being clamped at 500.
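The fixture shape behind that test can be sketched like this; `seedTraces` and the row fields are hypothetical names for illustration, not the test file's real helpers:

```typescript
// Illustrative seeding of the regression fixture: 600 traces spread
// round-robin across 6 turns, so both totals cross or sit under the cap.
type TraceRow = { id: string; turnKey: string; content: string };

function seedTraces(total: number, turns: number): TraceRow[] {
  const rows: TraceRow[] = [];
  for (let i = 0; i < total; i++) {
    rows.push({
      id: `trace-${i}`,
      turnKey: `turn-${i % turns}`, // cycle through the 6 turns
      content: `memory ${i}`,
    });
  }
  return rows;
}

const rows = seedTraces(600, 6);
```

With 600 rows the raw count exercises the >500 regime, while 6 turns keeps the grouped count well under the cap, separating the two failure modes.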
Fixes MemTensor#1593
Pull request overview
Fixes an incorrect “total memories” count in the memos-local-plugin metrics path where totals were capped at 500 due to repo list pagination clamping, causing the Web UI Memories count to get stuck at 500 (issue #1593).
Changes:
- Updated `countTraces` to use repo-level COUNT queries when no substring search is provided, avoiding the `list()` 500-row cap.
- Updated substring-search counting to page through trace listings in batches so counts can exceed 500.
- Added a regression unit test that inserts 600 traces and validates both raw-trace and group-by-turn totals.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| apps/memos-local-plugin/core/pipeline/memory-core.ts | Fixes countTraces total counting logic to avoid the 500-row clamp (COUNT queries + explicit paging for search). |
| apps/memos-local-plugin/tests/unit/pipeline/memory-core.test.ts | Adds regression coverage ensuring totals are correct when trace rows exceed 500. |
```ts
// cap, so they return the actual total.
return input?.groupByTurn
  ? handle.repos.traces.countTurns({ sessionId: input?.sessionId })
  : handle.repos.traces.count({ sessionId: input?.sessionId });
```
Address Copilot review on MemTensor#1674: the new COUNT path bypassed the visibility predicate that list() applies, so countTraces could return totals that included rows owned by other profiles/namespaces. This broke pagination math and risked leaking the existence of cross-profile data through the visible count.

Wire the same visibility WHERE clause into the count and countTurns methods (sharing a single buildVisibilityClause helper with list() to keep the predicate authoritative in one place), and pass the relevant options through countTraces in memory-core. Extend the runtime visibilityWhere() SQL fragment to match isVisibleTo() exactly, including the legacy unknown/default-owner branch so pushing the predicate into SQL doesn't silently drop pre-namespace seed rows.

Regression test: insert traces under two profiles and assert each profile's countTraces returns only its own rows.
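A sketch of the sharing pattern the review asks for: one authoritative WHERE fragment reused by both listing and counting. The helper name `buildVisibilityClause` comes from the comment above, but the table/column names, the NULL-owner encoding of legacy rows, and the return shape are all assumptions for illustration:

```typescript
// One visibility predicate, built in exactly one place, so list() and
// the COUNT queries can never drift apart.
interface Viewer { profileId: string; namespace: string }

function buildVisibilityClause(viewer: Viewer): { sql: string; params: unknown[] } {
  return {
    // Rows seeded before namespaces existed are assumed here to carry
    // NULL owner columns (the legacy "unknown/default-owner" branch);
    // they must stay visible rather than vanish from SQL-side counts.
    sql:
      "(owner_profile_id = ? AND namespace = ?) " +
      "OR (owner_profile_id IS NULL AND namespace IS NULL)",
    params: [viewer.profileId, viewer.namespace],
  };
}

// Both query builders consume the same fragment:
function countSql(viewer: Viewer): { sql: string; params: unknown[] } {
  const vis = buildVisibilityClause(viewer);
  return {
    sql: `SELECT COUNT(*) AS n FROM traces WHERE ${vis.sql}`,
    params: vis.params,
  };
}

function listSql(viewer: Viewer, limit: number): { sql: string; params: unknown[] } {
  const vis = buildVisibilityClause(viewer);
  return {
    sql: `SELECT * FROM traces WHERE ${vis.sql} LIMIT ?`,
    params: [...vis.params, limit],
  };
}
```

Centralizing the clause means a future change to visibility rules automatically applies to counts, closing the class of bug the review describes.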
Real catch; addressed in the latest commit. The new COUNT path was bypassing the visibility predicate that `list()` applies. Fix: a shared `buildVisibilityClause` helper now backs `list()`, `count`, and `countTurns`, so the predicate stays authoritative in one place.
Summary
The Web UI memory-count card on the Memories page was stuck at 500 even when the database held 1400+ traces (#1593). The metrics path counted items from an already-limited listing instead of running a dedicated COUNT query.
Root cause
`countTraces` in `apps/memos-local-plugin/core/pipeline/memory-core.ts` computed the total by calling `handle.repos.traces.list({ limit: 100_000 })` and reading `rows.length`. The shared helper `buildPageClauses`/`clampLimit` (in `core/storage/repos/_helpers.ts`) silently clamps every list `limit` to 500, so `list({ limit: 100_000 })` actually returns at most 500 rows. The resulting `rows.length` (or grouped `turnKeys.size`) was therefore capped at 500 regardless of how many traces existed.
The `/api/v1/traces` endpoint uses `countTraces`, and the Memories view (`web/src/views/MemoriesView.tsx`) renders that response's `total` in its pagination footer, so the Web UI showed 500 forever.
Fix
In `countTraces`:
- When no substring search is provided, use the repo's dedicated `SELECT COUNT(*)` queries directly (`repos.traces.count` / `countTurns`). These don't go through `buildPageClauses` and have no 500-row cap, so the returned total reflects the real database size.
- For substring search, page through trace listings in batches rather than relying on a single `list()` call (which would also be clamped at 500). Each batch is filtered for visibility and matched against the search needle, so the count stays accurate even above 500 rows.
Listings keep their existing page-size limits; only the count was wrong.
Verification
Added a regression test in `tests/unit/pipeline/memory-core.test.ts` that inserts 600 trace rows across 6 distinct turns and asserts:
- `countTraces({})` returns `600` (was 500 before the fix)
- `countTraces({ groupByTurn: true })` returns `6` (was capped at the smaller of 6 or 500 before; would break for >500 turns)
Files changed:
- `apps/memos-local-plugin/core/pipeline/memory-core.ts` (~34 net lines)
- `apps/memos-local-plugin/tests/unit/pipeline/memory-core.test.ts` (regression test)
Fixes #1593