Closed Bug 863098 Opened 11 years ago Closed 11 years ago

[UX Spec] User should be able to switch to speaker out while listening to FM Radio using ear plugs.

Categories

(Firefox OS Graveyard :: Gaia::FMRadio, defect, P1)

Other
Gonk (Firefox OS)
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: swilkes, Assigned: brad)

References

Details

(Whiteboard: ux-tracking, ux-priority1.2)

Attachments

(2 files)

Spec bug for Brad from Tribe to upload wireframes that address this user story from the LG 19.
Here is the work that we have been doing at Modern Tribe to address the FM Radio SpeakerOut issue:

https://www.dropbox.com/s/7bpjgw93iluuznh/Radio.8.pdf

Thanks!
Hi Brad,

May I know what would happen in the scenarios below?

1. Launch the FM app with the headset plugged in and start playing FM.
2. Press the speaker icon; sound now comes from the speaker.


Q1: After switching to the Music app and starting to play a song,
    does the music come from the speaker or the headset?

Q2: If the device receives an incoming call, is the remote voice output from the speaker or the headset?

Q3: If the FM app crashes in the background, does output stay on the speaker or go back to the headset?

thanks.
Hey Marco,

Here's my take on the answers:

Q1: Music comes from the Music app's default output with the headset in - I would assume the headset.

Q2: I think it makes most sense for the remote voice to be played via the speaker, while showing the call screen with the Speaker button toggled _on_. Toggling that button off would move remote voice to headset.

Q3: If the app crashes, output moves to headset until user opens FM app again and turns speaker back on.

Thanks!
Just a reminder that this bug is only for wireframe exploration, NOT for development. :)
Is it OK to discuss what the various buttons in the wireframe do? If so, I have some follow-up questions.

First some vocabulary: There are 4 potential audio sources:

* Speaker. Used for most audio, such as ringer and media, when no headset is plugged
  in.
* Headset.
* Earphone. Used by default by telephony but almost nothing else. Telephony can go
  to speaker if user presses a "speaker" button in the dialer UI.
* Bluetooth headset.

What happens in the following scenarios:

A)
User starts radio app, plugs in headset and presses speaker button. Audio now comes from speaker. User receives an incoming phone call, where does the ringer audio sound?

Given that the user at this point likely doesn't have the headset connected to his ears, I would imagine we'd not want to use the headset.

B)
User starts radio app, plugs in headset and presses speaker button. User then switches to browser app and goes to a webpage. There are some sound effects on this webpage. Where do these sound effects sound?

C)
User starts radio app, plugs in headset and presses speaker button. User then switches to dialer app and places a phone call. Once the call is connected, where does the audio sound?

Under normal circumstances, i.e. if the user hadn't gone to the radio app and pressed the speaker button, the audio would come from the earphone. But there's a button in the dialer UI to make the audio come from the speaker.

D)
User starts radio app, plugs in headset and presses speaker button. Do we display anything in the notification bar, or in the notification center, indicating to the user that the plugged-in headphones are operating in a special mode right now?

E)
User starts radio app, plugs in headset, presses speaker button and turns on some background music. User then goes to a game which is playing some occasional sound effects. The user then closes the radio app using the application switcher. Where does the game sound come from?

Does closing the radio app automatically disable the behavior of not using the headphones as audio source? Or does that happen through some other means?

F)
User starts radio app, plugs in headset, presses speaker button and turns on some background music. User then goes to a game which is playing some occasional sound effects. At a random point in time the system runs low on memory and so the radio app is killed in order to free up memory.

Same scenario as E, except that the radio app is killed automatically rather than through an explicit user action. We might need this to behave the same as E.



One of the overarching meta-questions here is whether the button in the radio app means "use headphones as only antenna, not as an audio source" or "make all audio go to the speaker".

Another meta-question is when the effect of that button terminates, given that the radio app might get killed due to the system running low on memory.

I had imagined that the button means "use headphones as only antenna", and that once pressed, the user had to either press the button again, or unplug and replug the headphones in order to make them act as an audio source.

But that we'd put a warning somewhere in the UI whenever the headphones are plugged in but only act as antenna.

However I gladly leave these decisions to UX. The above is just intended to be a description of one of the possible solutions.
Whiteboard: u=user c=System s=tribe → u=user c=System s=tribe target:05/17
Whiteboard: u=user c=System s=tribe target:05/17 → u=user c=System s=tribe
A. The ringer would go through speaker + headset, the same as the normal behavior.
B. The web app's sound needs to go through the speaker, because the user presumably isn't wearing the headphones in this case.
C. For the phone call case the FM radio will mute, and I would like to use the headphones. (For the voice case, users usually don't want others to hear the remote voice...)
D. Maybe we can have an FM icon combined with a speaker icon? :) And keep the headphone icon on.
E. This behavior comes down to whether we want speakerOn to be global or local. As a user I would want the audio to go through the headphones, as in the normal behavior.
F. The FM app was killed, so the speaker status should recover to normal. There is no setting for controlling the speaker status right now.

BTW, this speaker-related API may also need to consider the VoIP case. Do we want the VoIP application to call the telephony.speakerEnabled API?
Randy: The question isn't what we can support with the current APIs, but rather how UX prefers things to work.

In B and E you are suggesting that two different things should happen. What is the difference between webpage sound effects and game background music? What about music on a webpage?

Brad, or someone from UX, what is the behavior you are expecting in the scenarios in comment 6?
For B the radio keeps playing, but in E the FM radio is stopped, so I gave two different answers for those two cases.
Hi all,

Since our audio system sits on top of Android::AudioPolicyManager (one of the audio HALs), I would like to give a high-level description of setForceUse(), which is used at the Gonk level to implement setSpeakerOn/BT-SCO-on from the Gecko level.

1. There are 4 categories you can pass to setForceUse(), e.g. setForceUse(category, outputDevice), and they are independent of each other. For example, forcing the speaker for the media category will not affect the communication category.

   Categories: media, communication, record, dock.

So this implies to me that the scope is neither global nor per-app, but per category.
Thus in B, everything belongs to the media category, so all of it will be output to the speaker.
In C, the voice in the in_call state will be output to the headset because it belongs to the communication category.

In E, I think we are talking about a case within the same category, so I would suggest cancelling setSpeakerOn() from the FM app.
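
(For illustration only: a minimal sketch, in C++, of how a Gonk-level backend might force the media category to the speaker via setForceUse(). The header and enum names follow AOSP of that era and are assumptions on my part, not a reviewed Gecko patch.)

  #include <media/AudioSystem.h>   // android::AudioSystem::setForceUse (assumed AOSP header)
  #include <system/audio_policy.h> // audio_policy_force_use_t / audio_policy_forced_cfg_t

  // Force (or un-force) the speaker for the "media" category only; the
  // "communication" category is independent, so telephony routing is untouched.
  static void SetMediaSpeakerForced(bool aForced)
  {
    android::AudioSystem::setForceUse(
        AUDIO_POLICY_FORCE_FOR_MEDIA,
        aForced ? AUDIO_POLICY_FORCE_SPEAKER : AUDIO_POLICY_FORCE_NONE);
  }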
Hi all,

Based on the audio channel concept, maybe we can define the categories as below and change setSpeakerOn() to setSpeakerOn(AudioChannel, bool):
  1. normal/content
  2. notification/alarm/ringer/publicnotification (not accepted by setSpeakerOn because they are all output to both the headset and the speaker)
  3. telephony
So setting the speaker on for normal/content doesn't affect telephony, and vice versa.

And we use a refcount keyed on appID or childID for the lifecycle of the speaker-on setting.
So in both E and F, audio will be set back to the headset
  1. when the FM app crashes, or
  2. when the FM app sets setSpeakerOn to false, for whatever reason.

How about this?
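
(Purely as an illustration of the refcounted lifecycle described above: none of these types exist in Gecko, and the channel names simply mirror the categories listed in the previous comment.)

  #include <cstdint>
  #include <map>
  #include <set>

  enum class AudioChannel { NormalContent, Telephony };

  class SpeakerPolicy {
  public:
    // An app (identified by its child process id) requests speaker-on for one
    // channel; forcing normal/content does not touch telephony, and vice versa.
    void SetSpeakerOn(AudioChannel aChannel, uint64_t aChildId, bool aOn) {
      std::set<uint64_t>& holders = mHolders[aChannel];
      if (aOn) {
        holders.insert(aChildId);
      } else {
        holders.erase(aChildId);
      }
      ApplyForceUse(aChannel, !holders.empty());
    }

    // Called when a child process exits or crashes (cases E and F): its
    // requests are dropped, so audio falls back to the headset automatically.
    void OnChildExit(uint64_t aChildId) {
      for (auto& entry : mHolders) {
        entry.second.erase(aChildId);
        ApplyForceUse(entry.first, !entry.second.empty());
      }
    }

  private:
    void ApplyForceUse(AudioChannel, bool /*aForced*/) {
      // A real backend would route this into the audio HAL (e.g. setForceUse).
    }

    std::map<AudioChannel, std::set<uint64_t>> mHolders;
  };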
Hi guys, I'd really like to get input from UX here.

I'd rather at least start the discussion by looking at what user experience we want, not what the user experience is on other platforms or what user experience we can get with a specific API.
Flags: needinfo?(brad)
Jonas, please let me know what you need (that isn't shown or isn't clear in the UX specs attached). We can chat with Brad via Vidyo as necessary.
Stephany: See comment 6. Would be great to have descriptions of what happens in those scenarios.

The comments after it also contain some good discussion about those scenarios.
Hey guys,

Thanks for pointing these out. I think these might be our _preferred_ solutions:

A) Incoming call ringer audio goes through speaker, so the user can hear it. We're definitely assuming they don't have the headphones in their ears at this point.

B) I would say also speaker. What does Music app do? I would think that once you've turned on Speaker it plays all sounds from your phone.

C) Button in dialer app should already be in ON state when user opens dialer. If they don't turn it off before placing a call, call audio is output through speaker.

D) No, they've already seen a tip about what is happening with FM/headset/speakers. Although in future releases we'd like to add Notifications if they remove the headset.

E) Unless there is some sort of overriding "Speaker" button I would say that yes, once the FM app closes the audio reverts to default mode. In this case since they have the headphones in the audio would go back to headphones.

F) Yes, should behave the same except that would feel confusing to kill an app I'm using unless I'm told why. But that's another thing altogether…

My take on your meta questions are:
- "Make all audio go to the speaker" while Radio is running and speaker button is on.
- Terminating the app for any reason forces audio back to normal mode - audio to headset when it is plugged in, audio to speakers when it isn't.
Flags: needinfo?(brad)
Hi Jonas,

Yes, I agree. We should listen to the UX input first and then design a way that fits UX's expectations, rather than mimic other platforms' behavior. That's my bad.

Hi Brad,

I have collected your replies into one point; can you help confirm it? Thanks.

  -> Once one app enables setSpeakerOn, it applies to all other apps and scenarios until that app is closed or crashes.
  
May I know your opinion on the cases below?
Case 1
  a. FM is enabled and setSpeakerOn is enabled too.
  b. A phone call comes in, and the user sees that the speaker button is enabled too.
  c. The user uses the task switcher to terminate the FM app.

  Question: Does the remote voice go to the headset, or does it stay on the speaker?

Case 2
  a. The same as case 1.
  b. The user pauses the FM radio but does not press the button to disable setSpeakerOn.
  
  Question: Is setSpeakerOn still valid?

Case 3
  a. During the in-call state, the user presses the speaker-on button.
  b. The call ends.

  Question: Is setSpeakerOn still valid? (Note: currently the speaker-on button only appears on the in-call screen.)
Marco,

Yes, your summary looks good.

Case 1
Phone call audio switches to headset since FM has been terminated.

Case 2
Phone call audio still on Speaker, with Speaker Button enabled on phone app.

Case 3
I would suggest that at the end of a call the speaker button is automatically turned off. 
If Radio is playing the entire time in the background with speaker on, then user ends call, Radio continues to play on speakers.


The only other thing to consider is audio suppression - do apps ever turn off sound from other apps? Can I turn on Radio, then open a website that makes noise, then go to a game that makes noise, so that we now have audio from three sources? Anyone know how the Music app handles this?
FM radio sound can mix with game or website audio.
Maybe having the speakerOn function follow the major audio stream would make it easier to define the audio routing behavior.
(In reply to Brad from comment #17)
Hi Brad,

Your answer to case 3 means that the end of a call in the dialer should "disable speakerOn" if nothing else is playing in the background. This breaks the tentative rule below from comment 16, because the dialer app is neither closed nor crashed, yet speakerOn should be disabled automatically.

 -> Once one app enables setSpeakerOn, it applies to all other apps and scenarios until that app is closed or crashes.

If you really want this behavior, then either
  1. let individual apps call the WebAPI to enable/disable speakerOn themselves, or
  2. Gecko needs to know that this speakerOn was set by the in-call screen and disable it automatically when the call ends.
If Radio is playing on speaker while user talks in phone app, then user hangs up, then yes I guess speaker button should stay toggled on.
(In reply to Brad from comment #20)
> If Radio is playing on speaker while user talks in phone app, then user
> hangs up, then yes I guess speaker button should stay toggled on.

So may I confirm the two use cases below?

  Use case 1: 
    a. The user enables FM Radio, then toggles the speaker on.
    b. The user stops FM Radio but does not close the FM app.
    c. The user opens the Music app and plays a song.

    Q: Does the audio come from the speaker or the headset?

  Use case 2:
    a. The user dials a call to the remote side, then toggles the speaker on.
    b. The user hangs up the call but does not close the dialer app.
    c. The user opens the Music app and plays a song.

    Q: Does the audio come from the speaker or the headset?
Use case 1:
Option 1: Stopping the radio also turns _off_ the speaker. Music app audio now comes from the headphones.
Option 2: Stopping the radio has no effect on the speaker button. Music app audio now comes from the speakers, with the speaker button toggled _on_ in the Music app.

Option 2 would be preferred I think, as this would produce a cross-app speaker button.

Use Case 2:
Again, depends on whether the speaker button is toggled universally or not. It would be nice if it was, but okay if it's not.
(In reply to Brad from comment #22)
> Again, depends on whether the speaker button is toggled universally or not.
> It would be nice if it was, but okay if it's not.

Hi Brad,

Based on your input, could we make the rule:
  The speaker button in any app works universally, so once it is toggled by app A, it can be disabled only by "toggling the speaker button again in any app", no matter whether app A is closed or not.

1. Could you confirm this rule?
2. Then we will wait for the wireframes and start the implementation.

Thanks.
Hey Marco,

I think this is certainly a possibility for universal functionality, as long as the Speaker toggle button is easily accessible in any app that uses it, or there is a system-wide Speaker button somewhere.

Ultimately this is probably a bigger decision than I should be making on the OS. Maybe Rob is the guy to ask here? Or Francis? They'd probably be better equipped to make this call and provide you with the wireframes needed (as the Modern Tribe team isn't creating new assets at this time).
Blocks: 854753
Thanks, Brad. Marco, if you ever need more UX guidance, please needinfo? firefoxos-ux and we'll triage it from there.
Hi UX team,

We need UX's support to define the behavior of enabling the speaker.
Thanks.
Flags: needinfo?(firefoxos-ux-bugzilla)
Reassigning to Rob again, since he addressed this previously.
Flags: needinfo?(firefoxos-ux-bugzilla) → needinfo?(rmacdonald)
Hmm.. I feel like the proposals here have gotten a little less consistent.

Things to keep in mind:

We can't ensure that a speaker button is consistently available in all apps. Or even all audio related apps (like music players or radio apps). Simply because a lot of apps will be developed by 3rd party developers.

The only way we can ensure that a speaker button is consistently available is to stick it in the platform somewhere. For example in the notification center. This is certainly an option, but if we want to go with that solution we should be explicit about it. I don't have a strong opinion on it either way, but I'll note that most audio is going to the speaker anyway. Radio and telephony are the only contexts where I think users might need to force speaker on.


So that said, the behavior I think I'm hearing is:

* When the speaker button is pushed, it affects all audio. Telephony, background
  music, background radio, etc are all affected.

* When the speaker button is pushed in the radio app, it should be automatically
  "unpushed" whenever the radio app is closed.

* If the user pushes the speaker button in the radio app and then launches the dialer
  app, the speaker button should render as pushed there too. Presumably pushing the
  speaker button in the dialer at that time should "unpush" it, both in the radio app
  and in the dialer app.

Open questions:

* What happens if the user opens the radio app and pushes the speaker button there,
  then starts the dialer app and sees the speaker button pushed there. Then uses
  the task switcher to close the radio app. Should this cause the speaker button in
  the dialer app to "unpush"?

* What should the behavior be if the user never uses the radio app, but instead
  directly opens the dialer app. During a phone call the user presses the speaker
  button. What happens when the call ends? Does that unpush the speaker button in
  the dialer app?


Here is what I suggest:

We enable applications to tell the platform whether they have a speaker button. As soon as all applications that have a speaker button have closed, we turn off the force-speaker behavior.

As long as there is any application running which does have a speaker button, we only turn speaker on or off in response to the user actually pressing the speaker button to toggle it on or off. I.e. as long as the dialer app is running, we don't automatically turn speaker off just because the radio app was closed.

If the user presses the speaker button during a call, we turn it back off at the end of the call. This is done by the application itself and not by the platform.


Brad: Does this match what you had in mind? The main thing that I feel could be confusing is when a background dialer or radio app is closed, since we'd automatically turn the speaker off. Longer term I'd like to move to a model where we're not really exposing to the user which apps are running in the background and which ones aren't. But we're not there yet, and we can change the behavior at that time.
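
(Again for illustration only: a self-contained model of the rule proposed above. Apps that show a speaker button register themselves, the force-speaker flag changes only on explicit user toggles, and it is cleared automatically when the last registered app goes away. None of these names exist in the platform.)

  #include <cstdint>
  #include <set>

  class SpeakerButtonRegistry {
  public:
    void RegisterApp(uint64_t aAppId) { mApps.insert(aAppId); }

    void UnregisterApp(uint64_t aAppId) {
      mApps.erase(aAppId);
      // The last speaker-button app closed: drop the force-speaker behavior.
      if (mApps.empty()) {
        mForceSpeaker = false;
      }
    }

    // Called only when the user presses a speaker button in some app; closing
    // one app while another (e.g. the dialer) still runs changes nothing.
    void OnUserToggle(bool aOn) { mForceSpeaker = aOn; }

    bool ForceSpeaker() const { return mForceSpeaker; }

  private:
    std::set<uint64_t> mApps;
    bool mForceSpeaker = false;
  };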
Brad is no longer working on this. Rob is, however, working on this in line with the 1.2 priorities for Media. Rob, please post a link to the working spec in Box.
Whiteboard: u=user c=System s=tribe → ux-tracking, ux-priority1.2
Apologies for the delay on this. I drafted a spec but am currently reviewing it with my colleagues and ensuring it's aligned with current audio policies. I should have this ready on Friday. 

There is no change to the UI that was posted previously. However, there are changes related to exception handling, which we're currently reviewing.
I've put together a draft spec here - https://mozilla.box.com/s/6yiq70tfptzadbvenade. Sri is organizing a meeting to discuss this next week but in the meantime feel free to flag me if you have any questions. Thanks!
Flags: needinfo?(rmacdonald)
It's kind'a hard to read the draft, what with the giant "DRAFT" letters covering the important parts of the draft :)

If nothing else, it'd be great to read the draft to be able to prepare for the meeting.
Blocks: koi-media
Hi Jonas,
Can you help review this IDL?
I will put this module in the dom/system/gonk folder.
Attachment #772583 - Flags: feedback?(jonas)
Stephany,

Since the implementation has been confirmed in bug 854753, shall we close this one and mark it as a duplicate of 854753?
Blocks: 929960
Flags: needinfo?(swilkes)
Ivan, that's fine with me as long as the UX design pieces included in this thread have been noted. Thanks!
Flags: needinfo?(swilkes)
Blocks: 912317
No longer blocks: 929960, 854753
I'm closing this out since the feature associated with this has landed, with UX signed off on the visual design. Rob will be following up to check the interaction design post-landing.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED