Bug 1203953 (Open) · Opened 5 years ago · Updated 5 years ago
re-design add-on submission process to allow for submission of listed and unlisted add-ons
With add-on signing, developers need to submit their add-ons even if they do not want them listed on AMO. This additional functionality was implemented quickly to handle submissions. The submission process should, however, be optimized to handle listed and unlisted add-ons equally well, and guide the developer to the right choices for their desired goal. This involves:
* Optimizing the submission process: (WIP) https://www.lucidchart.com/documents/view/32a2afd1-fbde-4e28-a200-f915df35d3d0/2
* Creating clear and concise descriptions and names for the different reviews (bug 1186369)
* Building this into a new UI flow: (WIP) https://invis.io/ZY455PMTE
The new UI flow is expressed in wireframe format here: https://www.lucidchart.com/documents/view/e8578542-dbaa-4e2e-acbe-9a8b645dde24 When the functionality of each page in the flow is exactly what we want, I will port the design to Adobe Illustrator. I’m currently waiting for my software to be ordered.
Notable changes in this iteration of the submission wireframe:
* Added an interface to upload source code
* If source code is uploaded, the add-on must undergo full review
* If the add-on is experimental, the details interface contains fewer fields
  * Mandatory: name, brief summary, primary category, license
  * Optional: screenshot, detailed description, secondary category
* Consider removing the ability to edit the listing URL. It would not only make the details interface simpler (one fewer field), it would also make all our listing URLs consistent.
* Added an interface to select a platform upon successful validation
* Added a way to submit notes for AMO reviewers on the Full Review landing page

Note that the names used for the various reviews are not yet finalised; refer to bug 1186369 for that. Lisa, Andreas, Jorge, please review?
My notes:
* Step 2: let's not call it 'validation', to avoid confusing those checks with the ones done later on.
* Step 3: it's not true that Experimental add-ons can't have minified or obfuscated code. It's sensible to think that add-ons with attached sources are more likely to be Fully Public, but Experimental is still an option.
* Step 4: there's a possible error case when the add-on name or slug already exists. I'm not saying it should be in the wireframes, but it's worth keeping in mind.
* Step 5: the wording needs work, but that should go along with the new names we'll use.
For a simple submission:
* Why do the terms and conditions have to be agreed to every time? (bug 1205904)
* Why do I have to select the platforms my add-on runs on? Can we assume there's a default? (In fact, it looks like one is assumed in the mocks.) What percentage of add-ons work on all platforms? When we have Web Extensions, what percentage will?
* Can we infer any information in step 4 so we don't have to show it to users?
* Could we add an add-on to the unlisted queue automatically and then ask developers to move it to the listed queue, collecting more information then?

I'm hoping we can have an API that can submit an add-on in just one step, prompting for the more complicated stuff when it's needed.
Changes:
* Added a check for accepting the agreement. If the developer has accepted the agreement, it's never shown again.
* Reworded “validation” into “automated test”. This name is up for discussion. What should we call it?
* The name of the second validation is still “signing analysis”. This is also up for discussion.
* Removed source code checking. You can now select an experimental add-on even when source code is attached.
* Used correct wording (as specified in bug 1186369 comment 20) on the complete screens.

Question for Jorge: if we disable manual slugs (i.e. AMO automatically assigns the slug), does that mean we avoid the duplicated slug problem? If yes, it's worth considering. The hypothesis here is that a developer isn't likely to want a customised AMO slug. If there's anything they'd like to customise, it may be a short URL that leads to the AMO listing page. For example: http://addons.io/adblockplus or http://addons.io/ghostery
Attachment #8662769 - Attachment is obsolete: true
Attachment #8664047 - Attachment description: Add-on submission wireframes - i07.png → Add-on submission wireframes - i07
(In reply to Andy McKay [:andym] from comment #4)

Hi Andy. I have thought a lot about a super-short submission process that automatically assumes all add-ons are unlisted and only need automatic signing. Then, if the user wants more features (to be listed, to be promoted, to be side-loaded, etc.), we would prompt for more options. This submission flow proposal doesn't follow our current wireframe. It's a rethinking: a map of what our process could be when taking the ideas above into account. It may need to belong in another bug. What do you think?
I like this more; it's separating out the two steps: 1) validate, upload, and sign the add-on, and 2) decide how it should be shown on AMO and the steps needed to do that. Rewriting the submission flow is a big step that would take a while for us to do. I've got no problem saying it should be part of the AMO rewrite; we should focus on what the optimal flow for a user is overall and see what underlying changes result. Btw Bram, we've got https://github.com/mozilla/addons, if we want to start doing this in GitHub :)
(In reply to Andy McKay [:andym] from comment #4)
> * why do I have to select the platforms for my add-on to run on, can we
> assume there's a default (in fact it looks like in the mocks one is assumed)

From the manifest we can deduce whether an add-on supports Firefox for Android, and I think it's fair to assume that an add-on supports all 3 desktop platforms. I agree we can hide the platform selection somewhere, or leave it for after you're done.

> What percentage of add-ons work on all platforms? When we have Web
> Extensions, what percentage will?

I don't have any numbers, but it's probably above 90% when we're talking about all desktop platforms. Android support is less common, but again, we can figure that out by looking at the manifest.

> * can we infer any information in step 4 so we don't have to show it to
> users?

We can pre-fill the name and description using information from the manifest, but there's the possibility of it conflicting with an existing name. I think Andreas also mentioned that we currently require a source code license. I would prefer it if we didn't.

> * could we add an add-on to the unlisted queue automatically and then ask
> developers to move it to the listed queue, collecting more information then?

We currently don't support moving from listed to unlisted without deleting the add-on. If that sounds dumb, it's because it is. We decided to go that way because we didn't have the time/resources to do it correctly.

(In reply to Bram Pitoyo [:bram] from comment #5)
> * Reworded “validation” into “automated test”. This name is up for
> discussion. What should we call it?

Some possibilities: Metadata Check, Integrity Test, Pre-test.

> * Name of the second validation is still “signing analysis”. This is also
> up for discussion.

I would prefer we just call this one Add-on Validation for both listed and unlisted. It could be Listed Add-on Validation and Unlisted Add-on Validation to tell them apart.

> If we disable manual slug (ie. AMO automatically assigns slug), does it mean
> that we avoid the duplicated slug problem? If yes, it’s worth considering.

I think we currently generate a reasonable slug based on the name. I don't know if we do a duplicate check and find a non-duplicate slug, but we should definitely do that in order to make the process quicker.

> The hypothesis here is that a developer isn’t likely to want a customised
> AMO slug. If there’s anything they’d like to customise, it may be a short URL
> that leads to the AMO listing page. For example:
> http://addons.io/adblockplus or http://addons.io/ghostery

Some developers probably care about the slug, and many change it when their add-ons change name. I think it's okay to give them a default during first submission, but it shouldn't hurt to let them change it later, right?
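The duplicate-free slug generation Jorge describes could look something like this rough sketch. This is illustrative only, not AMO's actual code; the names `slugify` and `unique_slug` and the `existing` set are hypothetical stand-ins for the database lookup:

```python
import re

def slugify(name):
    """Turn an add-on name into a URL-safe slug (lowercase, hyphen-separated)."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return slug or "addon"

def unique_slug(name, existing):
    """Generate a slug from `name`, appending a counter until it no longer
    collides with anything in `existing` (slugs already taken on AMO)."""
    base = slugify(name)
    slug, n = base, 1
    while slug in existing:
        n += 1
        slug = "%s-%d" % (base, n)
    return slug
```

So a second submission named "Ghostery" would silently get `ghostery-2` as its default, which the developer could still change later, as suggested above.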
The super-short “minimal decisions” submission flow has been updated to address comment 7 from Andy and some of Jorge's comments in comment 8.

Changes:
* Hide platform selection, because we can deduce it from the manifest.
* Decision-making points are minimised by having AMO perform various tests, make various assumptions, and direct users to certain paths that we think they want. These assumptions can be corrected, but if an assumption is already correct, the user doesn't need to make any decision.

Decision logic:
* If your add-on qualifies for automatic signing, it requires no input on your part.
* If you want to side-load, click the “This add-on will be bundled” button.
* If your add-on doesn't contain an updateURL, we assume that you want to list it on AMO. We also assume that it's ready for public consumption.
* If your add-on is experimental, choose the “Experimental” tab.
* If you still want to self-host, choose “I want to host this add-on myself”.
  * Then choose whether you want to host on your own server
  * Or choose whether you want to side-load

Andy, what do you think?

Markus, this process contains many of the same screens, but they're not in the same order as our previous iteration (this is why the name is different: it's a fork of the old process). What do you think? Is it better or worse than what we have designed?
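The decision logic above can be sketched as pseudocode, to make the defaulting behaviour explicit. A hedged sketch only; `manifest` is assumed to be a dict parsed from the package, and the function name and flags are hypothetical, not part of any real AMO API:

```python
def default_distribution(manifest, wants_sideload=False, wants_selfhost=False):
    """Pick a default distribution path so the developer only has to act
    when our guess is wrong. An `updateURL` key in the manifest signals a
    self-hosted add-on."""
    if wants_sideload:
        return "sideload"        # the "This add-on will be bundled" button
    if wants_selfhost or "updateURL" in manifest:
        return "self-hosted"     # developer hosts the file themselves
    return "listed"              # no updateURL: assume an AMO public listing
```

The point of the sketch: in the common case (no updateURL, no side-loading), the function returns a usable default with zero questions asked.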
Attachment #8664070 - Attachment is obsolete: true
Here’s the “minimal decisions” wireframe companion to the flow. You’ll get some sense of how the flow translates to real-world screens. Amongst other things, you’ll notice that the lines are more straightforward, and there is one less screen.
What is the goal of this bug? When I started, it was optimizing the current flow to better integrate self-hosted add-ons, to help devs understand what is going on and which review is best for them, and to lower the number of users requesting unnecessary reviews. All with an eye on what we can implement soon, as the process is currently very confusing and results in unnecessary reviews. I have the feeling this discussion is moving more towards what an optimal process for an all-new AMO might be. I think this is a discussion worth having, but maybe we can split those discussions into separate bugs.

Andy, are there resources to optimize the current flow, and how much could we change now?

Feedback based on optimizing the current flow towards devs understanding why we make which decisions for them:

(In reply to Bram Pitoyo [:bram] from comment #9)
> Changes:
> * Hide platform selection, because we can deduce it from manifest

Great to have one question fewer.

> * Decision making points are minimised by having AMO perform various tests,
> make various assumptions, and direct users to certain paths that we think
> the user wants. These assumptions can be corrected, but if it’s already
> correct, then the user doesn’t need to make any decision.

By minimizing the decision-making points, we also reduce the transparency of the process. In our previous iteration, the screens that helped you pick a distribution looked similar even if we pre-selected different decisions based on your submission. (See attachment 8664047 [details] - Add-on submission wireframes - i07 - step 3 - distribution.) The new flow does not help users understand why the flow is sometimes different. For example, it does not hint that having an updateURL or not decides a lot in this process. (That is a fact someone would need to know.) And why is it not possible to get automatic review if you have no updateURL?

> * If your add-on doesn’t contain updateURL, we assume that you want to list
> it on AMO. We also assume that it’s ready for public consumption.
> * If your add-on is experimental, choose the “Experimental” tab

The details page got very complex after adding the self-hosted and public/experimental buttons into one screen with all the details. I think we have to optimize that page further. Maybe we can split the decisions from the details.

> Markus, this process contains many of the same screen, but it’s not in the
> same order as our previous iteration (this is why the name is different –
> it’s a fork of the old process). What do you think? Is it better or worse
> than what we have designed?

To me, the new process looks more complicated to understand for users submitting add-ons. Splitting the process based on updateURL, without feedback or a clear option to change that, seems confusing. The AMO details page looks very full with the self-hosted and public/experimental switches on the same page. Furthermore, I do not understand why the self-hosted process looks different depending on whether I have an updateURL or not.

--

If we think about an all-new submission process, we might consider guiding users through their whole development process with it: from first alpha or beta versions of add-ons that are not publicly available (but maybe hosted on AMO), via experimental to public, and up to featured if they are successful and match our suggestions for a good experience and style. Such a new vision seems worth a dedicated meeting to exchange ideas and set the expectations such a process should meet.
Flags: needinfo?(mjaritz) → needinfo?(bram)
(In reply to Markus Jaritz [:maritz] (UX) from comment #11)
> Andy, are there resources to optimize the current flow, and how much could
> we change now?

You've got me; this is the key question. Between filing this bug and where we are now, we've greatly increased the resources available to this feature. If we did the original change, it would be: make the changes, hope we get them done "in time" to meet needs, then start on a complete AMO redesign and re-do this flow again. That concerns me. I also feel like Bugzilla is not the place to have a detailed conversation on this. So I'll schedule a meeting, and you can all blame me for scheduling meetings.
Andy, let's schedule a meeting at a time that works for all of us. If that's not possible, feel free to meet with Markus and me separately. Markus has been working on this project for a while, and I've only been helping him for the past 2 weeks, so you should talk to him first before talking to me.

--

Markus, I agree with your assessment. By automating and assuming a lot of things, user input is reduced to a minimum, but the process is also less transparent. I think we should first clarify what we want out of the redesign; then we'll know exactly which things to optimise.

My ideal version of an all-new submission process is one where developers give us the URL of the repository where their add-on source code is located. But this repo would contain more than just source code. Developers would also have to include everything: whether the add-on is self-hosted, side-loaded, or hosted on AMO; what the screenshots look like; what license they're using; etc. Then our submission process can be a single box that says “paste the link to your repository here. We will perform an automated test that will determine everything. If you need help, read the manual on MDN”.

Regardless, we should have a meeting about these visions. Otherwise, the old wireframe I've designed seems to be good, and if we decide to go with it, I can turn it into Illustrator mockups.
In a meeting with Andy, we agreed to have some quick fixes implemented quickly. To learn what is possible to implement in this version, we will consult with Mathieu (:magopian). Everything that is not possible to implement in this time-frame will be held off for the next big redesign of AMO. Mathieu, what do you think is possible within that time-frame? Should we do that in this bug, or would you prefer to meet with me or Bram to talk about what we can implement?
Updated wireframes with feedback from my conversation with Mathieu. There are two big caveats here:
* If we want a flow with simpler pages, there have to be more steps. If we want a flow with fewer steps, each page will need to be more complex.
* The submission logic is complex, with a lot of corner cases and exceptions, so there will be a lot of questions for Lisa and Jorge.

Onwards with the changelog:
* Source code
  * If add-on type == extension, and binary == true, source code is mandatory.
  * If add-on type == extension, and binary == false, source code is optional, because the code might actually be minified and we cannot detect that.
  * If add-on type =/= extension, then source code is optional (most don't need it).
* Platform detection
  * This iteration assumes that we can detect the platform from the manifest. If we can, then we show which platforms are compatible. To change platforms, the add-on must be re-uploaded. This shaves one step from the flow, but makes the validation page more complex.
  * If we cannot detect the platform from the manifest, then we can revert to the design from the prior iteration, where we have a page to select the platform manually.
* Self-hosted and Mozilla-hosted
  * They now appear as buttons on the validation success page. You can still select manually by clicking the right button.
  * Reasoning: if developers can quickly see their hosting option, they can tweak their manifest until it's the way they want it.
  * It allows us to remove one step from the flow, but the validation page is now more complex. It's a tradeoff.
* Bundled/side-loaded
  * It's possible for Mozilla-hosted add-ons to be bundled/side-loaded.
  * Therefore, I have added a checkbox on the promoted/unpromoted selection page. Even if you select “Unpromoted”, if the bundle checkbox is checked, your add-on will require full review.
* Automatic Review
  * When the add-on doesn't qualify for automatic review, I have added the “Update file” button back.
  * Originally, I took this button out because it could be used to cheat the submission system. However, according to Mathieu, the submission system can already be cheated easily thanks to the separate validator. So instead of preventing cheating, we should design with the goal of providing quick feedback that can be acted on immediately, instead of making the user restart the flow.

One idea: can we do away entirely with the idea of Full Review for Mozilla-hosted add-ons? The way it would work is: every Mozilla-hosted add-on is always assigned Prelim Review when submitted (unless it's side-loaded). When it has passed review, the developer can ask for an upgrade (a Full Review) using the management interface.

Mathieu will look at this wireframe and provide a list of what's feasible to implement in the near future. I have left his needinfo intact. What do you think?
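The source-code rules in the changelog reduce to a small decision function. A hedged sketch, not actual AMO code; the function name and string return values are made up for illustration (note that per Mathieu's later feedback, binary detection comes from the validator):

```python
def source_code_requirement(addon_type, binary_detected):
    """Rules from the wireframe changelog: source is mandatory only for
    extensions in which binary components were detected. Everywhere else
    it stays optional, because minified code cannot be detected reliably."""
    if addon_type == "extension" and binary_detected:
        return "mandatory"
    return "optional"
```

Equivalently, source stays optional when "type =/= extension OR binary not detected", which matches the AND => OR correction discussed below in comment 17.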
Attachment #8664047 - Attachment is obsolete: true
To me, the new flow looks very straightforward and understandable. Is this what we can implement as a quick fix for the current version? That would be great! The only minor confusion I had was the bundles checkbox: as the first option on this page already includes bundling, I did not know what the checkbox would do. After some consideration, I think it is only part of the second option. Am I right? Great flow.
Feedback on the updated wireframes:

1/ The case "type =/= extension & binary not detected" should read an "or" instead of the "&".
2/ We only detect binary files when the validator tells us so, so the "add source code" interface should really be on a future step. As discussed, it could be on the same step where the user can add "review notes". Most of the "blocking errors" are detected using the validator.
3/ Regarding platform detection, it seems the target platform can only be specified in install.rdf manifests (https://developer.mozilla.org/en-US/Add-ons/Install_Manifests#targetPlatform), not in package.json ones (https://developer.mozilla.org/en-US/Add-ons/SDK/Tools/package_json) nor manifest.json ones (for WebExtensions: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/manifest.json). So while it was a good idea, I don't think it's feasible.
4/ We don't need the "bundled checkbox" for listed add-ons if we have "promoted and/or bundled" as the title of the button, as you put it.
5/ I think the "review notes" make sense in each case, not only for full reviews.
6/ I don't know if the reviewers would rather have "full review" or "prelim review" (for the listed add-ons) as the default selected choice => NI :jorgev
7/ There's no such thing as "signing analysis": as soon as the validator runs (after the upload on step 2), we know whether we'll be able to auto-sign the add-on, though we only allow it for unlisted non-sideload add-ons.
8/ Do we want yet another name for "prelim review": "Express review" (which is a choice on step 3 when the user chooses a manual review because the add-on can't get auto-signed)?
9/ I don't understand the last paragraph, "what are the benefits of automatic review", on step 5 if the unlisted add-on was auto-signed: if an add-on is auto-signed, it's available immediately (as is visible on the step with the link to "download your signed add-on"). I think this last paragraph is a leftover and should be removed?
10/ On step 5 of "review for unpromoted mozilla-hosted add-on", there is a question, "why is there no option to select full-scale review?". There are two issues here, and IMHO this whole paragraph should go:
- If the user selected "unpromoted", then this question makes no sense: they chose this option themselves.
- If the user submitted an unlisted add-on, they don't need to care/know about a "full-scale review" (is that yet another name for "full review"? Do we want that?)

I like a lot of the things that were done here, but I have a dream (which may only ever be a dream): have a flow which is much simpler and shorter. Here's an idea:
- User uses one of two links in the devhub or the top menu: submit a new listed add-on / submit a new unlisted add-on.
- User sees a big "upload <listed|unlisted> add-on package" button, selects a file, and on selection sees a big spinner saying something like "hold tight while we're running some automated tests".
- User gets told one of the following:
  - We didn't detect an updateUrl, but you're uploading an unlisted add-on; please add one to your manifest.
  - We detected an updateUrl, but you're uploading a listed add-on, and this isn't allowed; please remove it from your manifest.
  - We detected <some blocking error>; please fix that and try again.
  - You're submitting a listed add-on: would you like to promote it (takes longer), or not (quicker, but less visible, ranks lower in searches)? Please note that if you plan on distributing your add-on with a software installer (side-loading), you need the "promoted" option.
  - You're submitting an unlisted add-on, but we've detected a few signing-related warnings that you might want to fix for us to automatically and immediately sign it. If you can't, you can still apply for a manual review, which will take longer. If you're planning on distributing it with a software installer (side-loading), then please use this other button (it needs a manual review).
  - You're submitting an unlisted add-on and we didn't detect any signing-related warnings, so we're going to automatically and immediately sign it! That's unless you're planning on distributing it with a software installer (side-loading), in which case please use this other button (it needs a manual review).
- User then sees the last page of the submission flow, one of the following:
  - Here's a link to your signed unlisted add-on.
  - You're now in the queue for a review; there are currently 123 add-ons in front of you. While you're waiting for your review, please upload the source code for any binary or minified/obfuscated code, and fill in any details that would help the reviewers. You can add more information for your add-on, like screenshots, categories, a description and much more, in the devhub <link to the devhub page of this submission>.

This dream doesn't go into the details of each and every corner case (like platform selection: we might need an extra step for add-ons that have binaries), and doesn't take the "beta files" into account either (which are only available for fully reviewed add-ons). This is only for the "update submission flow", which is anyway slightly different from what we're currently discussing here.

Regarding the "quick fix for the current version", here's a list of things that I think would be feasible without changing too much of the underlying code:
- Accept agreement once and for all (until the next change): add a setting that holds the last agreement update, and add a field on the UserProfile model holding the date the user accepted the agreement. If the last agreement change date is prior to the user's acceptance date, don't display the user agreement.
- Some of the wording changes should be doable rather easily.
- I like the "why is there..." explanations here and there; I'm sure they'll be very useful to reduce the confusion, and we could add those.

I'll be submitting a tracker bug for those "quick fixes" shortly.
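The agreement quick fix described above boils down to a single date comparison. A minimal sketch of that logic only; the constant and function names are hypothetical stand-ins for the Django setting and UserProfile field Mathieu mentions:

```python
from datetime import date

# Hypothetical stand-in for the setting that records when the
# developer agreement last changed.
LAST_AGREEMENT_CHANGE = date(2015, 9, 1)

def must_show_agreement(accepted_on, last_change=LAST_AGREEMENT_CHANGE):
    """Show the developer agreement only if the user never accepted it,
    or accepted a version older than the latest change."""
    return accepted_on is None or accepted_on < last_change
```

So a user who accepted after the last change skips the agreement screen on every subsequent submission, while everyone else sees it exactly once more.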
Flags: needinfo?(mathieu) → needinfo?(jorge)
Can I check that we want to keep working on this in a bug like this?
(In reply to Mathieu Agopian [:magopian] from comment #17)
> 6/ i don't know if the reviewers would rather have the "full review" or
> "prelim review" (for the listed add-ons) as the default selected choice =>
> NI :jorgev

For listed add-ons, we prefer they request full review.

(In reply to Andy McKay [:andym] from comment #18)
> Can I check that we want to keep working on this in a bug like this?

I'm okay with moving it elsewhere, but I'm not sure what would be better. An email thread?
I've revised the flow to address feedback from Jorge and Mathieu. You can think of this flow as our ideal, but still short-term, fixes: ideal because it covers fixes that are both easy and hard; short-term because it doesn't change the existing submission system drastically. Mathieu will file some easy-fix tracking bugs, which will cover most of the changes proposed here.

Wireframe changelog:

Step 2
* Added back manual platform selection. It appears below validation.

Step 3
* Removed the signing analysis page. It's unnecessary.
* Removed the “bundled” checkbox. It's unnecessary.

Step 5
* Put Add Source Code/Upload Binary here, below Review Notes.
* On the Prelim Review success page, added Review Notes.
* On the Prelim Review success page, removed the explanation “Why is there no option to select a full review?”. It's already explained well in the rest of the flow.
* On the Automatic Review success page, removed “Benefits of Automatic Review”.

Minor changes:
* Logic: "type =/= extension AND binary not detected", changed AND => OR.
* Fixed a few more wordings to make them consistent across the board.
Attachment #8668812 - Attachment is obsolete: true
Attachment #8664638 - Attachment is obsolete: true
Attachment #8664637 - Attachment is obsolete: true
Submitted a few bugs for the "quick fixes":
- Accept agreement once and for all (until next change):
  - partially fixed by bug 1209226
  - new bug 1216527
  - new bug 1216528
- Some of the wording changes should be doable rather easily:
  - "full review" for listed add-ons renamed to "Promoted and/or bundled": bug 1216533
  - "sideload" for unlisted add-ons renamed to "promoted": bug 1216534
  - "unlisted" renamed to "host add-on on my own server" and "self-hosted": bug 1216536
  - "validation" renamed to "automated tests": bug 1216544
  - "automatically signed" renamed to "automatic review": bug 1216548
- Visual changes:
  - display the "validation box" with a green, orange or red background depending on the errors/warnings: bug 1216553
Another bug, only slightly related: removing the "beta" mentions in the submission flow: bug 1216561
A related bug that is not as easy to accomplish as wording changes, but we may still want to consider: replace the checkbox for unlisted add-ons with a two-button interface (bug 1220977).
Product: addons.mozilla.org → addons.mozilla.org Graveyard