Bug 982782: Large objects truncated
Opened 10 years ago, closed 10 years ago

Categories: Cloud Services Graveyard :: Server: Sync, defect, P1
Tracking: (Not tracked)
Status: VERIFIED FIXED
People: Reporter: rnewman; Assigned: rfkelly
Whiteboard: [qa+]
Attachments: (1 file)

Reproduction from the browser console, with Sync configured (a ~70K record comes back truncated to 64K):

let payload = {
  "foo": new Array(70000).join("a")
};

Weave.Service.resource(Weave.Service.storageURL + "history/testtesttest")
  .put({"id": "testtesttest", "collection": "history", "payload": JSON.stringify(payload)});

Weave.Service.resource(Weave.Service.storageURL + "history?ids=testtesttest&full=1").get().length
> 65604

JSON.stringify(payload).length
> 70009
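
The numbers are consistent with a hard cutoff at MySQL's 64K TEXT limit. A back-of-the-envelope check (my inference, not stated in the bug; the 69-byte remainder would be the BSO record fields wrapped around the payload in the GET response):

# MySQL's TEXT type stores at most 2**16 - 1 = 65535 bytes.
TEXT_MAX = 2**16 - 1

uploaded = 70009    # JSON.stringify(payload).length on the client
returned = 65604    # length of the record handed back by GET

print(uploaded - TEXT_MAX)  # 4474 bytes silently dropped on write
print(returned - TEXT_MAX)  # 69 bytes: presumably id/modified/etc. envelope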
Assignee: nobody → rfkelly
Severity: normal → blocker
Priority: -- → P1
Whiteboard: [qa+]
Bleh, the app is using a TEXT column for the payload, which has a max size of 64K.  This is a carry-over bug from the sync1.1 codebase, but it only shows up for sync1.5 because we let the app create its own tables.  The ops-defined schema for sync1.1 uses a LONGTEXT.

MEDIUMTEXT, with defined limit of ~16M, should be sufficient for our purposes.
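
For reference, the standard MySQL text-type ceilings in play (these are MySQL documentation values, not quoted in the bug):

# Maximum capacity of MySQL's variable-length text types, in bytes.
TEXT_MAX       = 2**16 - 1   #        65,535 (~64K): the current bso.payload column
MEDIUMTEXT_MAX = 2**24 - 1   #    16,777,215 (~16M): the proposed fix
LONGTEXT_MAX   = 2**32 - 1   # 4,294,967,295  (~4G): the ops-defined sync1.1 schema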
Modified the schema creation script for Sync 1.5 to set the payload column in the bso table to MEDIUMTEXT.
And of course, sqlalchemy doesn't make it obvious how to get a MEDIUMTEXT column.  Turns out that adding a length to the Text() column does the trick; I've tested it for sensible results on both MySQL and SQLite (see the sketch below).

This should be committed in both sync1.1 and sync1.5 repos.
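
A minimal sketch of the trick (not the attached patch itself: the table is cut down, and the 256 * 1024 length is illustrative; any value over 65535 forces MEDIUMTEXT on MySQL):

from sqlalchemy import Column, MetaData, String, Table, Text
from sqlalchemy.dialects import mysql, sqlite
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# Cut-down stand-in for the bso table; the real schema has more columns.
bso = Table(
    "bso", metadata,
    Column("id", String(64), primary_key=True),
    # Text(length=...) makes the MySQL dialect emit TEXT(262144); MySQL
    # then creates the smallest text type that fits, i.e. MEDIUMTEXT.
    # SQLite has no length-based text types, so it simply stores TEXT.
    Column("payload", Text(length=256 * 1024)),
)

print(CreateTable(bso).compile(dialect=mysql.dialect()))
print(CreateTable(bso).compile(dialect=sqlite.dialect()))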
Attachment #8390094 - Flags: review?(telliott)
Comment on attachment 8390094 [details] [diff] [review]
sync15-payload-column-length.diff

Actually, IIRC Toby is on PTO at the moment; switching to Rob for review of this simple low-risk patch.
Attachment #8390094 - Flags: review?(telliott) → review?(rmiller)
Attachment #8390094 - Flags: review?(rmiller) → review+
Committed in both old and new sync:
https://github.com/mozilla-services/server-syncstorage/commit/97fd39391aab21f9499fd82f6c5ec77609404156
http://hg.mozilla.org/services/server-storage/rev/5338568c7e0a

Per :bobm, our strategy for fixing up the live data is:

1) Fix ops-side db creation script (done)
2) Deploy new nodes with correct schema
3) Migrate everyone over to the new nodes

Given the low user count this seems like the best tradeoff between service availability and safety.  It means everyone will have to re-sync but it eliminates the possibility of leaving accounts in a bad state.  Plus it's a nice opportunity to sanity-check the migration process.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
:bobm - do we have a Stage/Prod schedule for these fixes?
OK. I verified the fix in code. I just need to know:
1. The deploy schedule for Sync 1.5 (if it has not already happened)
2. The CW schedule for Sync 1.1
> 1. The deploy schedule for Sync 1.5 (if it has not already happened)
Already happened.

> 2. The CW schedule for Sync 1.1
Wasn't a problem for Sync 1.1
OK, I probably misinterpreted this one:
"3) Migrate everyone over to the new nodes"
as referring to sync 1.1 nodes.
Status: RESOLVED → VERIFIED
Product: Cloud Services → Cloud Services Graveyard
