Closed
Bug 1032975
Opened 10 years ago
Closed 10 years ago
Add new table(s) to shipit database
Categories
(Release Engineering :: Release Automation: Other, defect)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: zeller, Assigned: zeller, Mentored)
References
Details
(Whiteboard: [shipit])
Attachments
(1 file, 11 obsolete files)
2.06 KB, patch | bhearsum: review+ | bhearsum: checked-in+ | Details | Diff | Splinter Review
Before we can ship the changes in bug 826753, we must add new table(s) to accommodate the additional functionality.
Assignee
Updated•10 years ago
Assignee: nobody → johnlzeller
Assignee
Updated•10 years ago
Whiteboard: [shipit]
Assignee
Comment 1•10 years ago
I am thinking that we should add 2 tables to the shipit database, called release_data and release_status. I have attached the schemas I have for each table.

For release_data, the table would be used to store all of the data coming from pulse.m.o messages. My thinking here is that we could store all of this data (the messages pertaining to a release), rather than just cherry-picking the steps needed for status. This way we can derive more detail when needed. I am open to the possibility that this could be considered feature creep and planning for an unspecified need. Nevertheless, it has been a nice abstraction for me in dev, letting me focus on how to sort out the necessary data.

For release_status, I tried to keep the schema fairly high level, consisting of a simple status integer (0 = not started, 1 = started, 2 = finished). More detail could be necessary, but I feel that should be determined by what we do with the release_data table. Currently the plan is to update a single release_status row for each release, and store any metadata about that release (errors, warnings, other messages, etc.) in release_data.

What do we think of this setup? I would love some feedback. (Going to needinfo a few people, but it's not necessary for everyone to respond.)
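The attached schemas themselves aren't visible in the bug, so as a rough illustration only, here is a minimal SQLAlchemy sketch of the two tables as described above. Everything beyond the two table names and the status integer is a hypothetical guess, and the real project uses Flask-SQLAlchemy's db.Column rather than plain SQLAlchemy:

```python
from sqlalchemy import Column, Integer, String, Text, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ReleaseData(Base):
    # One row per pulse.m.o message about a release; the raw payload is
    # kept so more detail can be derived later if needed.
    __tablename__ = 'release_data'
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)  # e.g. 'Firefox-31.0b5-build1'
    message = Column(Text, nullable=False)      # raw message body (hypothetical)

class ReleaseStatus(Base):
    # One row per release, updated as release_data rows come in.
    __tablename__ = 'release_status'
    name = Column(String(100), primary_key=True)
    # 0 = not started, 1 = started, 2 = finished
    status = Column(Integer, nullable=False, default=0)

# Creating the tables against in-memory SQLite sanity-checks the schema.
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```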
Flags: needinfo?(rail)
Flags: needinfo?(bhearsum)
Comment 2•10 years ago
My only concern is the size and performance of the release_data table. Do we have any estimates on how many messages per release we would store there? Also, you can optimize the column types. It would probably be great to run the schema past sheeri when it's ready.
Flags: needinfo?(rail)
Assignee
Updated•10 years ago
Flags: needinfo?(bhearsum)
Assignee
Comment 3•10 years ago
This has been a tough one to settle on. I want to hold only pertinent information, and also make it useful for phase 2. I think this is the compromise, but I would love feedback.

The idea here is that as shipit-agent digests pulse messages, it sends each valid one to the REST API, which turns it into a new release_data row, then compares all release_data rows for that name ('Firefox-31.0b5-build1') against the current release_status row for that name, and submits an update to that release_status row. All the integers in the release_status columns are set like this: 0 = not started, 1 = started, 2 = finished.
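That flow can be sketched in a few lines. This is illustration only: the storage is faked with a dict, and the rule for deciding a release is finished (an event name ending in "complete") is a made-up stand-in for whatever the real schema derives:

```python
# 0 = not started, 1 = started, 2 = finished, as in the release_status schema.
NOT_STARTED, STARTED, FINISHED = 0, 1, 2

def derive_status(events):
    """Fold every stored message for one release into a single status integer."""
    if not events:
        return NOT_STARTED
    # Hypothetical terminal check; the real rule would inspect the actual
    # buildbot event names.
    if any(e['event_name'].endswith('complete') for e in events):
        return FINISHED
    return STARTED

def handle_pulse_message(store, message):
    """Store a message (the new release_data row), then return the value the
    matching release_status row should be updated to."""
    store.setdefault(message['name'], []).append(message)
    return derive_status(store[message['name']])
```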
Attachment #8448952 - Attachment is obsolete: true
Attachment #8458881 - Flags: feedback?(bhearsum)
Assignee
Comment 4•10 years ago
(In reply to John Zeller [:zeller] from comment #3)
> Created attachment 8458881 [details]
> schema.sqlite
>
> This has been a tough one to settle on. I want to hold only pertinent
> information. And also make it useful for phase 2. I think this is the
> compromise, but I would love feedback.
>
> The idea here is that as shipit-agent digests pulse messages, it sends each
> valid one to the REST API which turns it into a new release_data row, and
> then compares all release_data rows for that name ('Firefox-31.0b5-build1')
> against the current release_status row for that name, and then submits an
> update to that release_status row. All the integers in the release_status
> columns are set like this (0 = not started, 1 = started, 2 = finished).

I am also considering removing the following columns from release_data:
* routing_key
* sent or submittedAt
Comment 5•10 years ago
Comment on attachment 8458881 [details]
schema.sqlite

I'll provide feedback on this as part of the patch in bug 826753.
Attachment #8458881 - Attachment is obsolete: true
Attachment #8458881 - Flags: feedback?(bhearsum)
Assignee
Comment 6•10 years ago
Comments leading up to this review can be seen in bug 826753
Attachment #8461754 - Flags: review?(bhearsum)
Assignee
Comment 7•10 years ago
Added migrate_repo
Attachment #8461754 - Attachment is obsolete: true
Attachment #8461754 - Flags: review?(bhearsum)
Attachment #8461767 - Flags: review?(bhearsum)
Assignee
Comment 8•10 years ago
Adjusted a few things so that it won't cause merge conflicts
Attachment #8461767 - Attachment is obsolete: true
Attachment #8461767 - Flags: review?(bhearsum)
Comment 9•10 years ago
Comment on attachment 8461895 [details] [diff] [review]
bug1032975.patch

Review of attachment 8461895 [details] [diff] [review]:
-----------------------------------------------------------------

::: kickoff/model.py
@@ +204,5 @@
> +    """A base class to store release events primarily from buildbot."""
> +    __tablename__ = 'release_events'
> +    name = db.Column(db.String(500), nullable=False, primary_key=True)
> +    sent = db.Column(db.DateTime(pytz.utc), nullable=False)
> +    event_name = db.Column(db.String(500), nullable=False)

event_name needs to have primary_key=True set, too. Don't forget to update it in the migrate script too.
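For reference, the suggested fix amounts to a composite primary key: marking both name and event_name with primary_key=True makes (name, event_name) the key. A sketch in plain SQLAlchemy (the real model uses Flask-SQLAlchemy's db.Column; the column lengths here are shortened to fit MySQL's InnoDB key-length limit, which comes up later in this bug):

```python
from sqlalchemy import Column, DateTime, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ReleaseEvents(Base):
    __tablename__ = 'release_events'
    # Both columns flagged primary_key=True => composite PRIMARY KEY
    # (name, event_name), so the same event name may recur across releases.
    name = Column(String(100), primary_key=True)
    event_name = Column(String(150), primary_key=True)
    sent = Column(DateTime, nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
pk_cols = [c.name for c in ReleaseEvents.__table__.primary_key.columns]
```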
Assignee
Comment 10•10 years ago
(In reply to Ben Hearsum [:bhearsum] from comment #9)
> Comment on attachment 8461895 [details] [diff] [review]
> bug1032975.patch
>
> Review of attachment 8461895 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: kickoff/model.py
> @@ +204,5 @@
> > +    """A base class to store release events primarily from buildbot."""
> > +    __tablename__ = 'release_events'
> > +    name = db.Column(db.String(500), nullable=False, primary_key=True)
> > +    sent = db.Column(db.DateTime(pytz.utc), nullable=False)
> > +    event_name = db.Column(db.String(500), nullable=False)
>
> event_name needs to have primary_key=True set, too. Don't forget to update
> it in the migrate script too.

Are we sure event_name is unique enough?
Assignee
Comment 11•10 years ago
Attachment #8461895 - Attachment is obsolete: true
Attachment #8462762 - Flags: review?(bhearsum)
Comment 12•10 years ago
Comment on attachment 8462762 [details] [diff] [review]
bug1032975.patch

Review of attachment 8462762 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good! We'll get this landed on Monday.
Attachment #8462762 - Flags: review?(bhearsum) → review+
Comment 13•10 years ago
Comment on attachment 8462762 [details] [diff] [review]
bug1032975.patch

I tried to land this but ran into:

sqlalchemy.exc.OperationalError: (OperationalError) (1071, 'Specified key was too long; max key length is 767 bytes')

CREATE TABLE release_events (
    name VARCHAR(100) NOT NULL,
    sent DATETIME NOT NULL,
    event_name VARCHAR(500) NOT NULL,
    platform VARCHAR(500),
    results INTEGER NOT NULL,
    `chunkNum` INTEGER NOT NULL,
    `chunkTotal` INTEGER NOT NULL,
    PRIMARY KEY (name, event_name)
)

According to https://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html, 255 characters of VARCHAR is the maximum key length. With "name" already being 100 chars, we're limited to ~150 chars for event_name. Do you think that will be long enough, John?
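The arithmetic behind the ~150-char figure, assuming the table uses MySQL's 3-byte-per-character utf8 (utf8mb3) charset:

```python
# InnoDB limits an index key to 767 bytes (MySQL 5.5 defaults). With the
# 'utf8' charset, MySQL reserves 3 bytes per VARCHAR character.
MAX_KEY_BYTES = 767
BYTES_PER_CHAR = 3

max_key_chars = MAX_KEY_BYTES // BYTES_PER_CHAR   # 255 chars total in the key
name_chars = 100                                  # name VARCHAR(100)
event_name_budget = max_key_chars - name_chars    # what's left for event_name
```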
Updated•10 years ago
Flags: needinfo?(johnlzeller)
Assignee
Comment 14•10 years ago
(In reply to Ben Hearsum [:bhearsum] from comment #13)
> According to
> https://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html, 255
> characters of VARCHAR is the maximum key length. With "name" already being
> 100 chars, we're limited to ~150 chars for event_name. Do you think that
> will be long enough, John?

Shoot. Yeah, I think 150 is probably okay for event_name. The longest builderName I am seeing in my dataset is 50 chars. Also, we could stand to shorten the varchar for name to 50, which would buy us another 50 for event_name.
Flags: needinfo?(johnlzeller)
Assignee
Comment 15•10 years ago
Attachment #8462762 - Attachment is obsolete: true
Attachment #8463445 - Flags: review?(bhearsum)
Updated•10 years ago
Attachment #8463445 - Attachment is patch: true
Attachment #8463445 - Attachment mime type: text/x-patch → text/plain
Updated•10 years ago
Attachment #8463445 - Flags: review?(bhearsum) → review+
Comment 16•10 years ago
Comment on attachment 8463445 [details] [diff] [review]
bug1032975.patch

John landed this himself (yay!) and I did the database upgrades to stage/prod. Dev is still down.
Attachment #8463445 - Flags: checked-in+
Assignee
Updated•10 years ago
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Assignee
Comment 17•10 years ago
Seeing as we are adding the status_type property to pulse messages, I think we should consider adding it as a Column in the ReleaseEvents table. What do you think?
Attachment #8464168 - Flags: review?(bhearsum)
Assignee
Updated•10 years ago
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 18•10 years ago
Comment on attachment 8464168 [details] [diff] [review]
bug1032975-2.patch

Review of attachment 8464168 [details] [diff] [review]:
-----------------------------------------------------------------

Yes, I think this makes sense. You need to create a new version, though. The production database is already upgraded to version 5, so we can't re-upgrade it. r+ with this in version 6.
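The requested "version 6" script only needs to apply the new column on upgrade (with a matching downgrade). A sketch of the schema change itself using stdlib sqlite3 rather than the project's real sqlalchemy-migrate setup; the table shape follows the CREATE TABLE output in comment 13, and the VARCHAR(20) length for the new column is a hypothetical choice:

```python
import sqlite3

# Stand-in for the table the earlier migrations created.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE release_events (
    name VARCHAR(100) NOT NULL,
    sent DATETIME NOT NULL,
    event_name VARCHAR(150) NOT NULL,
    PRIMARY KEY (name, event_name))""")

def upgrade(conn):
    # Version 6: add the status_type column (renamed later in this bug)
    # without touching existing rows.
    conn.execute("ALTER TABLE release_events ADD COLUMN status_type VARCHAR(20)")

upgrade(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(release_events)")]
```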
Attachment #8464168 - Flags: review?(bhearsum) → review-
Assignee
Comment 19•10 years ago
Once I test migrate_repo, this is good to go I think!
Attachment #8464168 - Attachment is obsolete: true
Attachment #8465622 - Flags: review?(bhearsum)
Assignee
Comment 20•10 years ago
Tested! Good to go!
Attachment #8465622 - Attachment is obsolete: true
Attachment #8465622 - Flags: review?(bhearsum)
Attachment #8465632 - Flags: review?(bhearsum)
Comment 21•10 years ago
Comment on attachment 8465632 [details] [diff] [review]
bug1032975-2.patch

Per our Vidyo conversation, status_type is getting a new name.
Attachment #8465632 - Attachment is obsolete: true
Attachment #8465632 - Flags: review?(bhearsum)
Assignee
Comment 22•10 years ago
Changed status_type to category
Attachment #8463445 - Attachment is obsolete: true
Attachment #8469448 - Flags: review?(bhearsum)
Comment 23•10 years ago
Comment on attachment 8469448 [details] [diff] [review]
bug1032975-2.patch

Review of attachment 8469448 [details] [diff] [review]:
-----------------------------------------------------------------

Looks fine. I'm going to wait until Monday to land this, though.
Attachment #8469448 - Flags: review?(bhearsum) → review+
Assignee
Comment 24•10 years ago
Changed category to group
Attachment #8469448 - Attachment is obsolete: true
Attachment #8469480 - Flags: checked-in?
Comment 25•10 years ago
Comment on attachment 8469480 [details] [diff] [review]
add group column to ship it tables

I checked this in and upgraded the dev, stage, and prod databases.
Attachment #8469480 - Flags: review+
Attachment #8469480 - Flags: checked-in?
Attachment #8469480 - Flags: checked-in+
Assignee
Updated•10 years ago
Status: REOPENED → RESOLVED
Closed: 10 years ago → 10 years ago
Resolution: --- → FIXED
Updated•10 years ago
Attachment #8469480 - Attachment description: bug1032975-2.patch → add group column to ship it tables