Closed Bug 1103064 Opened 10 years ago Closed 9 years ago

Add Futuremark Peacekeeper benchmark to mozbench

Categories

(Testing :: General, defect)

Platform: All
OS: Other
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: dminor, Assigned: mcc.ricardo, Mentored)

Attachments

(1 file)

Mozbench is a cross-browser benchmark suite. For more information, see [1] and [2]. The mozbench repository is located at [3] and instructions on setting it up and running the benchmarks can be found in the readme.md file at that location.

We're interested in adding the Futuremark Peacekeeper benchmark [4]. This task involves adding it to the mozbench repo and modifying it to report its results back to the harness. Information on adding a benchmark can be found in the mozbench readme.md.

[1] https://wiki.mozilla.org/Game_Benchmark_Automation
[2] https://wiki.mozilla.org/Auto-tools/Projects/Mozbench
[3] https://github.com/dminor/mozbench
[4] http://peacekeeper.futuremark.com/
Picking up this one.
Assignee: nobody → mcc.ricardo
Status: NEW → ASSIGNED
There's communication with some server-side code. At first glance, I believe we can, without much fuss, extract the benchmark and its results and send them back to mozbench.
(In reply to Ricardo Castro from comment #2)
> There's communication with some server-side code. At first glance, I believe
> we can, without much fuss, extract the benchmark and its results and send
> them back to mozbench.

Sounds good :) Let me know if it starts to look complicated.
Just need to check the JS code but, at first glance, seems rather straightforward.
After an evening/night without electricity, I finally got around to taking a better look at this.

This is where the benchmark actually gets run: http://peacekeeper.futuremark.com/run.action

Basically the benchmark runs inside an iframe and is composed of several individual tests. Each individual test interacts with server-side code.

What I think we can do is monitor those requests and manually fetch all the resources the tests use. After that we might be able to make our own standalone version for mozbench.
Flags: needinfo?(dminor)
(In reply to Ricardo Castro from comment #5)
> After an evening/night without electricity, I finally got around to taking a
> better look at this.
> 
> This is where the benchmark actually gets run:
> http://peacekeeper.futuremark.com/run.action
> 
> Basically the benchmark runs inside an iframe and is composed of several
> individual tests. Each individual test interacts with server-side code.
> 
> What I think we can do is monitor those requests and manually fetch all the
> resources the tests use. After that we might be able to make our own
> standalone version for mozbench.

Ok, sounds good! Let me know if there is anything I can do to help.
Flags: needinfo?(dminor)
I can start working on this tomorrow. I'll let you know if I get stuck or need any help to speed this up.
Hi Ricardo,

Are you still interested in working on this one?

Dan
Flags: needinfo?(mcc.ricardo)
Hi Dan,

Sorry for not completing this earlier. I'll pick this up next Monday.

Ricardo
Flags: needinfo?(mcc.ricardo)
Hi Dan,

I had to refresh my memory on what was going on with this bug. As I mentioned above, the benchmark makes a lot of requests (see attached image) for several reasons. We could monitor both the requests and responses and try to get everything ourselves, but having looked at it for a while, it might take some time to fetch everything.

You could either go down that path or maybe ask the creators for some help to speed this up. What do you think?
Flags: needinfo?(dminor)
Hi Ricardo,

You should be able to write a script to fetch everything based upon that output, but if you start running into a bunch of problems with paths or something, then it might not be worth it. If you're not able to make much progress after an hour or so, please let me know, and I'll follow up and see if we can make contact with the creators.

Thanks!

Dan
Flags: needinfo?(dminor)
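The fetch-script approach Dan suggests could be sketched roughly as below. This is purely illustrative and not part of mozbench: it assumes the captured request URLs have been saved to a file (one per line, e.g. exported from the browser's network monitor) and mirrors each resource into a local directory tree that preserves the server layout, so the benchmark's relative links keep working when served standalone. The `url_to_local_path` helper and the `peacekeeper/` directory name are made up for this sketch.

```python
import os
import urllib.request
from urllib.parse import urlparse

def url_to_local_path(url, root="peacekeeper"):
    """Map a captured URL to a local file path, preserving the server layout."""
    parsed = urlparse(url)
    path = parsed.path.lstrip("/") or "index.html"
    # Directory-style URLs get an index file so relative links keep working.
    if path.endswith("/"):
        path += "index.html"
    return os.path.join(root, parsed.netloc, path)

def mirror(urls, root="peacekeeper"):
    """Download each URL into the mirror tree; skip files already fetched."""
    for url in urls:
        dest = url_to_local_path(url, root)
        if os.path.exists(dest):
            continue
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        urllib.request.urlretrieve(url, dest)

# Usage (assuming a captured URL list in a file):
#   with open("requests.txt") as f:
#       mirror([line.strip() for line in f if line.strip()])
```

As the later comments note, this only helps if the paths come out clean; resources fetched with query strings or generated server-side (like run.action) would still need manual handling.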
I think we've hit a roadblock here: this benchmark does not lend itself to being run in automation, and Futuremark is a company that makes money by licensing its benchmarks [1] [2], so I doubt they'd be happy if we copied all of their files into a publicly accessible repository, even if we could.

If we really care about this benchmark, we should reach out to them and see if we can license a copy that can live on a repository internal to our network. For now, I'm going to WONTFIX.

[1] http://www.futuremark.com/business
[2] http://www.futuremark.com/business/benchmark-development-program
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX