Closed Bug 1508741 Opened 6 years ago Closed 6 years ago

Support consuming events from Pulse using a guest account

Categories

(Webtools :: Pulse, enhancement)

Type: enhancement
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: emorley, Unassigned)

References

(Blocks 1 open bug)

Details

For projects that consume events from Pulse, it's currently hard to make their development environments work "out of the box", since we have to ask users to manually create accounts on Pulse Guardian and then pass the credentials to the development environment locally via environment variables or config files. It would be great if projects could use guest credentials to access Pulse (which would be overridden for non-development environments) to avoid this. To reduce the risk of abuse, these guest credentials would not be able to create durable queues, and would presumably have various other low limits/restrictions set.
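[For context, a minimal sketch of the manual setup being described: each contributor creates a Pulse Guardian account and exports the credentials locally before starting the listener. The environment variable names and the kombu usage below are illustrative, not the exact Treeherder code.]

import os

from kombu import Connection

# Each contributor currently has to create these on Pulse Guardian and
# export them locally before the listener will start.
pulse_user = os.environ["PULSE_USER"]
pulse_password = os.environ["PULSE_PASSWORD"]

# Connect to the Pulse broker over TLS using the per-developer credentials.
connection = Connection(
    hostname="pulse.mozilla.org",
    port=5671,
    userid=pulse_user,
    password=pulse_password,
    ssl=True,
)
connection.connect()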
Access control in RabbitMQ is a somewhat complicated topic, but I don't think we can enforce durable vs nondurable queues in RabbitMQ itself. We'd have to implement this in PulseGuardian, monitoring the guest user and automatically killing durable queues (or maybe just killing them when there are no connections, effectively making them nondurable queues). That wouldn't be too hard.

I'm not sure a single guest account is a great idea, though; it sounds like an easy way to accidentally clobber someone else's work. Maybe we could have a way to anonymously create a guest user but give it a short lifetime, like a day or a week, and have other enforced restrictions as you say: a maximum number of guests, a maximum number of queues per guest, a maximum number of queued messages per guest queue, etc.

What kind of use case do you see for this guest user? That is, what's the strictest set of restrictions we could have while still making it useful?
Flags: needinfo?(emorley)
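[A sketch of the kind of enforcement described above, not actual PulseGuardian code: a monitor periodically lists queues through the RabbitMQ management HTTP API and deletes any durable queue owned by the guest account once it has no consumers. The "queue/guest/..." naming convention and the admin credentials are assumptions; the endpoint paths are the standard management-plugin ones.]

from urllib.parse import quote

import requests

MGMT = "https://pulse.mozilla.org:15671/api"
AUTH = ("pulseguardian", "secret")  # hypothetical admin credentials


def reap_guest_durable_queues(vhost="/"):
    # List all queues in the vhost via the management API.
    queues = requests.get(f"{MGMT}/queues/{quote(vhost, safe='')}", auth=AUTH).json()
    for q in queues:
        owned_by_guest = q["name"].startswith("queue/guest/")
        if owned_by_guest and q.get("durable") and q.get("consumers", 0) == 0:
            # Deleting the durable queue when nobody is connected makes it
            # behave, in effect, like a non-durable one.
            requests.delete(
                f"{MGMT}/queues/{quote(vhost, safe='')}/{quote(q['name'], safe='')}",
                auth=AUTH,
            )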
Ah, true about a single guest account. If support were added for anonymously creating a guest user, I guess we could come up with a way to build it into the Treeherder development environment setup scripts - though if it has a short lifetime we'll have to handle expiry etc at the app level too, since people don't run the environment setup scripts every time they spin up the VM.

The use case I'm envisaging is someone spinning up a Treeherder development environment, running one command to start the existing pulse listener process (with no need for the current manual login/copy-paste credentials steps), and then seeing live prod data start streaming into their local Treeherder instance. When they stop the listener process, destroying their user/queue or dropping messages on the floor is fine.

In terms of limits, the number of connections per user would be however many concurrent workers are spun up locally (so I'd guess 5-20?), but all would be from the same IP - is there any way to limit by IP with the current architecture? Re maximum queued messages, I'm happy with zero. Max queues would be the half-dozen queues from which Treeherder pulls data (https://github.com/mozilla/treeherder/blob/ff81c50117fe619b5a0937ee348d6a3d31820bb2/treeherder/services/pulse/sources.py#L20-L33).
Flags: needinfo?(emorley)
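[A rough sketch of the dev listener described in that use case, assuming a kombu consumer with placeholder guest credentials, exchange name, and queue name. The queue is non-durable and auto-delete, so stopping the listener simply drops any queued messages, as described.]

from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin


class DevListener(ConsumerMixin):
    def __init__(self, connection, queues):
        self.connection = connection
        self.queues = queues

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=self.queues, callbacks=[self.on_message])]

    def on_message(self, body, message):
        print("received pulse message:", body)
        message.ack()


with Connection("amqp://guest-user:guest-pass@pulse.mozilla.org:5671//", ssl=True) as conn:
    # Bind a throwaway queue to one of the exchanges Treeherder pulls from;
    # the exchange name and routing key here are placeholders.
    exchange = Exchange("exchange/some-source/v1", type="topic", passive=True)
    queue = Queue(
        "queue/guest-user/dev-listener",
        exchange=exchange,
        routing_key="#",
        durable=False,     # nothing survives a broker restart
        auto_delete=True,  # the queue goes away when the listener disconnects
        exclusive=True,
    )
    DevListener(conn, [queue]).run()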
Mark, do you have any more thoughts about this? :-)
No, I don't think there's anything built into RabbitMQ to restrict connections by IP. We'd have to add monitoring to guardian.py. Maybe we could let the users (er, "accounts", in our new nomenclature :) be longer lived and delete them only if there is no activity (e.g. no connections) for some period of time. Would that work?
Yeah that sounds reasonable :-)
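[A sketch of what that inactivity-based cleanup in guardian.py might look like, using the management API's connection listing. The endpoints are the standard management-plugin ones; the credentials, the in-memory bookkeeping, and the idle limit are placeholders for whatever PulseGuardian would actually use.]

import time

import requests

MGMT = "https://pulse.mozilla.org:15671/api"
AUTH = ("pulseguardian", "secret")  # hypothetical admin credentials
IDLE_LIMIT = 7 * 24 * 3600          # e.g. delete after a week with no connections

last_seen = {}  # guest username -> last time an open connection was observed


def reap_idle_guests(guest_usernames):
    now = time.time()
    # Every open connection reported by the broker carries the user it
    # authenticated as.
    active = {c["user"] for c in requests.get(f"{MGMT}/connections", auth=AUTH).json()}
    for user in guest_usernames:
        if user in active:
            last_seen[user] = now
        elif now - last_seen.get(user, now) > IDLE_LIMIT:
            # DELETE /api/users/<name> removes the RabbitMQ user; PulseGuardian
            # would also clean up its own records and the user's queues.
            requests.delete(f"{MGMT}/users/{user}", auth=AUTH)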

We support giving accounts to basically anyone with an email, which is pretty close to "public", while still allowing some measure of abuse prevention and user isolation.

Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → WONTFIX

We worked around this by generating queues dynamically.
For instance, a Heroku Review App will generate queue names upon creation as environment variables (see code).
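[A minimal sketch of that workaround: derive a per-environment queue name (e.g. from the Heroku app name) and expose it via an environment variable, so each review app or local instance consumes from its own throwaway queue. The variable names and naming scheme are illustrative.]

import os
import uuid


def pulse_queue_name(prefix="dev"):
    """Derive a per-environment Pulse queue name."""
    # HEROKU_APP_NAME is available on review apps when dyno metadata is
    # enabled; fall back to a random suffix for plain local development.
    app_name = os.environ.get("HEROKU_APP_NAME") or f"{prefix}-{uuid.uuid4().hex[:8]}"
    user = os.environ.get("PULSE_USER", "guest")
    return f"queue/{user}/{app_name}"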
