Created attachment 8724082 [details]
Link to Google Doc describing the sharing plan

For collecting behavioral data in Hello, we plan to send event data to Google Analytics through our own server. The data will be strictly limited to:

- Add-on version number
- Firefox version number/channel
- OS information
- Session identifier
- Event data

I propose the session identifier be a random identifier that is reset each day. This allows us to track events that happen before and after any specific Hello session, without linking activity across days.

We have a Google Doc that describes all the events we want to capture in Hello: https://docs.google.com/document/d/1ti7xmZ0qV70Ihwn40h-MqIaTv7mrLdwyMTaG0Eoba6E/edit?usp=sharing
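A daily-reset identifier like the one proposed above could be sketched as follows. This is a hypothetical illustration, not the actual Hello implementation; the class name and the choice of a UUID are assumptions.

```python
import uuid
from datetime import datetime, timezone

class DailySessionId:
    """Random session identifier that is regenerated each UTC day.

    Events sent on the same day share an ID, so click flows within that
    day can be correlated, but the ID cannot link activity across days.
    """

    def __init__(self):
        self._day = None
        self._id = None

    def get(self):
        today = datetime.now(timezone.utc).date()
        if self._day != today:
            # New day: discard the old identifier and mint a fresh random one.
            self._day = today
            self._id = uuid.uuid4().hex
        return self._id
```

Two calls on the same day return the same identifier; the first call on a new day silently rotates it.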
As a generic first-pass data review:

* I believe that this data still falls in the category that it must provide user benefit. Can we assign a specific user benefit to each of these metrics? Either specific product changes that will be made based on this data, or operational monitoring that this data will provide.

* I'd like the end review to be in the form of data documentation committed to some GitHub repo, so that each event describes the parameters that will be sent, and we know the URL endpoint, any cookies that will be sent (none, right?), and so forth.
1. Do you want a specific user benefit for each item? Note we are doing behavioral analysis, and combined with the session ID these events will produce click flows. Individual events have to be captured, but the value is in the combined view. To achieve this we're proposing that all interactions be recorded.

Gareth Cull is going to propose specific event names and categories for each of these events, which may clarify the exact data.

Note also that for every future UI change we make, we would want to add events related to that new UI. So we'd really like approval of events fitting certain criteria, rather than approval of specific events that will change with every UI change we make. I might suggest "signaling an event for any user event, and for connection-related events, with no parameters". We DO have a small number of proposed analytics that include parameters, which we should discuss individually, and we could continue to get review if we add more events like that. (I would propose we also specifically highlight these events.)

2. I tried recreating the document in Markdown, and given the images it was not feasible. I'd rather not lose the images, as I think they help clarify the exact targets. Can we review it as a Google Doc, then once we have something reviewed, export it as a PDF into a repo?
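The "no parameters" criterion suggested above could be modeled roughly like this. The field names and the `HelloEvent` type are illustrative assumptions, not the real event schema (which Gareth's proposal would define):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class HelloEvent:
    """One behavioral event; names here are illustrative, not the real schema."""
    category: str                    # e.g. "ui" or "connection"
    action: str                      # what happened, e.g. "button-click"
    session_id: str                  # random daily session identifier
    parameter: Optional[str] = None  # only for the few individually reviewed exceptions

def no_parameter_event(category, action, session_id):
    """Build an event under the proposed "no parameters" criterion."""
    return HelloEvent(category, action, session_id)
```

Events carrying a `parameter` would be the exceptional cases that get individual review and highlighting.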
The plan: in the Google document we'll enumerate all the specific events, fully spec'd out (event type, category, value). Then we'll take just the events and put them into a separate text document (probably in our repository), with a reference back to the Google Doc, where there are helpful images and explanation. Benjamin will review that text document. The document will also include all the information that accompanies the event stream.

I'll include a user value statement at that time. It will look something like: we will use data in the event stream to look for evidence of bad experiences, misleading experiences, or unmet user expectations. We will use this to guide development of Hello to fulfill those expectations and to remove or improve confusing or misleading interfaces.

We can use a long-lived session ID, generated specifically for Hello. Ideally this would be stored in the cookie jar so the user has some control over it. It could be attached to the loop-server URL. (I don't believe it can be attached to about:loopconversation.)

After this review I will take responsibility for reviewing new events that we add to the event stream, to ensure that they do not have any identifiable information associated with them. If they do, we will re-submit for review.
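A long-lived session ID attached to a loop-server URL could be sketched like this. The storage file and the `session` parameter name are assumptions for illustration; the real client would keep the ID in the browser cookie jar, as described above.

```python
import json
import uuid
from pathlib import Path
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def long_lived_session_id(store: Path) -> str:
    """Return a persistent random session ID, creating it on first use.

    A JSON file stands in for the cookie jar here; deleting the file is
    the user's way of resetting the identifier.
    """
    if store.exists():
        return json.loads(store.read_text())["session_id"]
    sid = uuid.uuid4().hex
    store.write_text(json.dumps({"session_id": sid}))
    return sid

def attach_session(url: str, sid: str) -> str:
    """Append the session ID to a loop-server URL as a query parameter."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("session", sid))
    return urlunparse(parts._replace(query=urlencode(query)))
```

Repeated calls return the same ID until the stored file is removed, which gives the user the same kind of control that clearing cookies would.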
Support for Hello/Loop has been discontinued: https://support.mozilla.org/kb/hello-status

Hence closing the old bugs. Thank you for your support.