Closed Bug 982782 (Opened 10 years ago, Closed 10 years ago)

Large objects truncated

Categories: Cloud Services Graveyard :: Server: Sync, defect, P1
Tracking: (Not tracked)
Status: VERIFIED FIXED
People: (Reporter: rnewman, Assigned: rfkelly)
Whiteboard: [qa+]

Attachments (1 file):
  patch, 774 bytes, rmiller: review+
Description:

let payload = { "foo": new Array(70000).join("a") };

Weave.Service.resource(Weave.Service.storageURL + "history/testtesttest")
  .put({"id": "testtesttest",
        "collection": "history",
        "payload": JSON.stringify(payload)});

Weave.Service.resource(Weave.Service.storageURL + "history?ids=testtesttest&full=1")
  .get().length
> 65604

JSON.stringify(payload).length
> 70009
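A sketch of the repro arithmetic, reconstructed from the description (the exact byte counts are my reading of it, not server logs): the record that was PUT carries a payload of roughly 70K characters, while a MySQL TEXT column holds at most 65,535 bytes, so the record comes back shorter than it went in.

```python
# new Array(70000).join("a") in JS produces 69,999 "a" characters.
payload_value = "a" * 69999
payload_json = '{"foo":"%s"}' % payload_value

# What was PUT: 8 chars of prefix + 69,999 "a"s + 2 chars of suffix.
assert len(payload_json) == 70009

# A TEXT column caps out at 65,535 bytes, so this payload cannot be
# stored whole; the GET in the description observed ~65,604 characters
# (truncated payload plus record envelope), not the full 70,009.
assert len(payload_json) > 65535
```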
Updated (assignee) • 10 years ago
Assignee: nobody → rfkelly

Updated (assignee) • 10 years ago
Severity: normal → blocker
Priority: -- → P1

Updated • 10 years ago
Whiteboard: [qa+]
Comment 1 (assignee) • 10 years ago

Bleh, the app is using a TEXT column for the payload, which has a max size of 64K. This is a carry-over bug from the sync1.1 codebase, but it only shows up for sync1.5 because we let the app create its own tables; the ops-defined schema for sync1.1 uses LONGTEXT. MEDIUMTEXT, with a defined limit of ~16M, should be sufficient for our purposes.
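For reference, the MySQL text-type capacities behind the sizes mentioned above (these limits come from MySQL's documented storage requirements, not from this bug; the dict below is my own illustration):

```python
# MySQL text-type capacities in bytes.
MYSQL_TEXT_LIMITS = {
    "TINYTEXT":   2**8  - 1,   #            255
    "TEXT":       2**16 - 1,   #         65,535  (~64K)
    "MEDIUMTEXT": 2**24 - 1,   #     16,777,215  (~16M)
    "LONGTEXT":   2**32 - 1,   #  4,294,967,295  (~4G)
}

# The ~70K repro payload overflows TEXT but fits easily in MEDIUMTEXT.
payload_size = 70009
assert payload_size > MYSQL_TEXT_LIMITS["TEXT"]
assert payload_size < MYSQL_TEXT_LIMITS["MEDIUMTEXT"]
```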
Comment 2 • 10 years ago

Modified the schema creation script for Sync 1.5 to set the payload column in the bso table to be MEDIUMTEXT.
Comment 3 (assignee) • 10 years ago

And of course, sqlalchemy doesn't make it obvious how to get a MEDIUMTEXT column. It turns out that adding a length to the Text() column does the trick; I've tested it for sensible results on both mysql and sqlite. This should be committed in both the sync1.1 and sync1.5 repos.
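The reason the Text(length) trick works is that MySQL, given a TEXT column with a declared length, creates the smallest text type large enough to hold that many characters. A minimal sketch of that selection rule (my own hypothetical helper, not SQLAlchemy or MySQL source):

```python
def mysql_text_type_for(length):
    """Return the text type MySQL picks for a TEXT column declared
    with the given length, per its smallest-type-that-fits rule."""
    if length <= 255:
        return "TINYTEXT"
    if length <= 65535:
        return "TEXT"
    if length <= 16777215:
        return "MEDIUMTEXT"
    return "LONGTEXT"

# Declaring e.g. sqlalchemy Text(length=70009) therefore lands on
# MEDIUMTEXT when the table is created on MySQL; on SQLite the length
# is simply ignored, since SQLite TEXT has no such size classes.
assert mysql_text_type_for(70009) == "MEDIUMTEXT"
```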
Attachment #8390094 - Flags: review?(telliott)
Comment 4 (assignee) • 10 years ago

Comment on attachment 8390094 [details] [diff] [review] sync15-payload-column-length.diff

Actually, IIRC Toby is on PTO at the moment; switching to Rob for review of this simple, low-risk patch.
Attachment #8390094 - Flags: review?(telliott) → review?(rmiller)

Updated • 10 years ago
Attachment #8390094 - Flags: review?(rmiller) → review+
Comment 5 (assignee) • 10 years ago

Committed in both old and new sync:
https://github.com/mozilla-services/server-syncstorage/commit/97fd39391aab21f9499fd82f6c5ec77609404156
http://hg.mozilla.org/services/server-storage/rev/5338568c7e0a

Per :bobm, our strategy for fixing up the live data is:
1) Fix the ops-side db creation script (done)
2) Deploy new nodes with the correct schema
3) Migrate everyone over to the new nodes

Given the low user count, this seems like the best tradeoff between service availability and safety. It means everyone will have to re-sync, but it eliminates the possibility of leaving accounts in a bad state. Plus, it's a nice opportunity to sanity-check the migration process.
Updated (assignee) • 10 years ago
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Comment 6 • 10 years ago

:bobm - do we have a Stage/Prod schedule for these fixes?
Comment 7 • 10 years ago

OK, I verified in code. I just need to know:
1. The deploy schedule for Sync 1.5 (if it has not already happened)
2. The CW schedule for Sync 1.1
Comment 8 • 10 years ago

> 1. The deploy schedule for Sync 1.5 (if it has not already happened)

Already happened.

> 2. The CW schedule for Sync 1.1

Wasn't a problem for Sync 1.1.
Comment 9 • 10 years ago

OK, I probably misinterpreted "3) Migrate everyone over to the new nodes" as referring to sync 1.1 nodes.

Status: RESOLVED → VERIFIED
Updated • 1 year ago
Product: Cloud Services → Cloud Services Graveyard