Closed Bug 637089 Opened 13 years ago Closed 12 years ago

Evaluate whether we need 4096x4096 max texture size

Categories

(Core :: Graphics, defect)

x86
Windows 7
defect
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: shaunld, Unassigned)

References

Details

User-Agent:       Mozilla/5.0 (Windows NT 6.1; rv:2.0b13pre) Gecko/20110226 Firefox/4.0b13pre
Build Identifier: Mozilla/5.0 (Windows NT 6.1; rv:2.0b13pre) Gecko/20110226 Firefox/4.0b13pre

After spoofing 

MOZ_GFX_SPOOF_VENDOR_ID=0
MOZ_GFX_SPOOF_DRIVER_VERSION=8.15.10.1930 

in Windows' global environment variables (System Properties > Advanced), here is the about:support output -


        Adapter Description
        V1.2 Sherry Driver for 945

        Vendor ID
        0000

        Device ID
        27a2

        Adapter RAM
        Unknown

        Adapter Drivers
        igdumdx32 igd10umd32

        Driver Version
        8.15.10.1930

        Driver Date
        8-15-2010

        Direct2D Enabled
        Blocked on your graphics card because of unresolved driver issues.

        DirectWrite Enabled
        false (6.1.7601.17514, font cache 1.18 MB)

        WebGL Renderer
        Google Inc. -- ANGLE -- OpenGL ES 2.0 (ANGLE 0.0.0.541)

        GPU Accelerated Windows
        0/1

    
Spoofing the GfxInfo should completely unblock the graphics chipset and hence expose the user to the full consequences of his decision to do so, regardless of existing driver issues.

Reproducible: Always

Steps to Reproduce:
1. Spoof MOZ_GFX_SPOOF_VENDOR_ID=0
2. Spoof MOZ_GFX_SPOOF_DRIVER_VERSION=8.15.10.1930
3. Close and re-open Firefox
4. Open about:support and check graphics details
Actual Results:  
i945GM is "blocked because of unresolved driver issues."

Expected Results:  
i945GM chipset is not blocked on anything, and can perform hardware acceleration to the limits of its capabilities.

Windows 7 SP1
The i945GM chipset can accelerate certain IE9 showcase demos (Fish Tank, Speedreading) without the need to fiddle with a blocklist.

Custom drivers used are Sherry V1.2 from http://angelictears9xxssf.wordpress.com
Erratum: "The i945GM chipset can accelerate certain IE9 showcase demos (Fish Tank,
Speedreading) in the Release Candidate of IE 9 without the need to fiddle with a blocklist.
Erratum: "The i945GM chipset can accelerate certain IE9 showcase demos (Fish Tank,
Speedreading) in the Release Candidate of IE 9 without the need to fiddle with a blocklist."
Version: unspecified → Trunk
The claim in bug 637048 is that the Intel 945GM has a maximum texture size of 2048x2048, and we require 4096x4096 minimum - it's not possible to override that requirement.

Bas, is there any particular reason we need this minimum?
Status: UNCONFIRMED → NEW
Ever confirmed: true
Summary: GfxInfo blocklist unable to be circumvented according to Bug 604771 on i945GM graphics → Evaluate whether we need 4096x4096 max texture size
For Verification.

Intel's Data Sheet on the 945GM:
http://www.intel.com/Assets/PDF/datasheet/309219.pdf

Section 10.4.1.6.5.

"The Mobile Intel 945GM/GME/GMS/GU/GSE, 943/940GML and Intel 945GT Express
Chipsets support up to 12 Levels-of-Detail (LODs) ranging in size from 2048x2048 to 1x1 texels."
(Twelve LOD levels is consistent with a 2048-texel maximum: 2^11 = 2048 down to 2^0 = 1.)
:) "For Nothing"

Well, let me rephrase:

Is there any *particular* reason (not an abstract one out of thin air, but a /real/ one) for this minimum being exactly 4kx4k, rather than 1kx1k, 2kx2k, 8kx8k, or anything else?
So, we start with the assumption that layers of the window size are quite common, and that images above 2048 pixels in one dimension are quite common. Then we conclude that two main things cause significant trouble:

1) Dual monitor systems quite often have windows stretched over 2 monitors, and can easily reach window sizes over 2048 pixels.
2) When an image doesn't fit in a texture, we go through a slow (non-caching) partial uploading or software resampling path (tiling is very hard because of seaming issues), and the performance cost of this path is particularly high.

Then there's also the fact that chipsets that don't support 4096x4096 don't have great performance characteristics with hardware acceleration anyway. With several 8600M solutions it looks like there are already some performance issues during browsing (in IE9 RC as well, from what one person reported). Although some of these issues may be fixable, I'd rather invest in future systems than in systems that are slowly (but surely!) being phased out.

Obviously, for pure 'blitting power' tests like FishIE or IE Speed Reading, the performance of even a Radeon 8550 would be better than the CPU, but that's not the main body of your browsing experience.
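As an aside, here is a minimal, self-contained sketch (my own illustration, not Gecko's actual code) of the kind of capability check being discussed: it queries the D3D9 device caps and tests whether the adapter meets a 4096x4096 minimum texture size. A 945GM-class chipset reports 2048 for both caps fields and so fails the check.

    // Hypothetical sketch (not Gecko code): query the D3D9 device caps and
    // test a 4096x4096 minimum texture size. Build against d3d9.lib.
    #include <d3d9.h>
    #include <cstdio>

    static bool SupportsMinTextureSize(UINT minSize) {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d)
            return false;

        D3DCAPS9 caps = {};
        HRESULT hr = d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        &caps);
        d3d->Release();
        if (FAILED(hr))
            return false;

        // e.g. a 945GM reports MaxTextureWidth == MaxTextureHeight == 2048
        return caps.MaxTextureWidth >= minSize &&
               caps.MaxTextureHeight >= minSize;
    }

    int main() {
        printf("4096x4096 textures supported: %s\n",
               SupportsMinTextureSize(4096) ? "yes" : "no");
        return 0;
    }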
Hi Bas,

Are the current scrolling issues quite intertwined with the current status of hardware acceleration? I am very unfamiliar with much of the technical detail here (unless there are reading sources for the barely-competent layman), but what in Layers has caused so many scrolling issues on non-accelerated systems?

Regarding futureproofing, I surely agree with your prognosis, but there is a balance between planning for tomorrow and dealing with today. Is it too time- and cost-inefficient to code for compatibility with DirectX 9_2?

Is there also a link to the IE9 RC problem that was reported?
I.e. "let f.ck 'em all about!" where "all" is (at least) 1) many netbook, 2) some nettop and 3) some cheap notebook users. Nobody wishes to pay them for a new hw isn't?

I don't think that staying current with a bleeding edge and throwing away anything well proven but not so fresh is a good way. Shaun say'd so too.

Let's think:  who are most in need of food -- hungry or full? Where are /any/ possible perf. imrovements needs more? And a FF software renderer is too slow (abt 2/3 both of Opera and Chrome) already. While ANY hw acceleration is faster.

Bas, you've made some false assumptions:
1) everyone uses multimon
2) everyone who does uses a landscape layout
3) 2k+ images are frequent (leaving porn aside for a while)
4) everybody wants, and can afford, to buy new hardware even when the old is good enough
5) nobody can build a multimon setup that exceeds this 4kx4k in at least one dimension

Look at http://www.webdevelopersnotes.com/design/web-screen-resolution-usage-statistics.php or http://marketshare.hitslink.com/report.aspx?qprid=17 - it may be interesting.
3b) all images out there fit within 4k dimensions (I mean bitmap maps, for example)
Most netbooks are sold with an Atom and a GMA 3150; even some old desktops have the 3100 inside. And it only supports 2k*2k. Even the new dual-core Atoms have the same 3150 inside.
Also:
1) netbooks can't use multimon - they have only one VGA output, so you can't stretch a window over two monitors at the same time... and even if you attach a small monitor alongside, it doesn't reach 2k*2k
2) any hw acceleration is better than nothing considering Atom performance. And Internet Explorer 9 manages to provide HW acceleration on my Atom N450/GMA 3150 netbook on W7.
3) I don't think there are many images bigger than 2k*2k, and in any case you could just resize them - it would smooth them a little. They would be resized to fit on a netbook screen anyway...
(In reply to comment #12)
> Most netbooks are sold with an Atom and a GMA 3150; even some old desktops
> have the 3100 inside. And it only supports 2k*2k. Even the new dual-core
> Atoms have the same 3150 inside.
> Also:
> 1) netbooks can't use multimon - they have only one VGA output, so you can't
> stretch a window over two monitors at the same time... and even if you attach
> a small monitor alongside, it doesn't reach 2k*2k
> 2) any hw acceleration is better than nothing considering Atom performance.
> And Internet Explorer 9 manages to provide HW acceleration on my Atom
> N450/GMA 3150 netbook on W7.
> 3) I don't think there are many images bigger than 2k*2k, and in any case you
> could just resize them - it would smooth them a little. They would be resized
> to fit on a netbook screen anyway...

On cards like the 3100 you quickly start talking about hardware deceleration :).

(In reply to comment #10)
> I.e. "let f.ck 'em all about!" where "all" is (at least) 1) many netbook, 2)
> some nettop and 3) some cheap notebook users. Nobody wishes to pay them for a
> new hw isn't?
> 
> I don't think that staying current with a bleeding edge and throwing away
> anything well proven but not so fresh is a good way. Shaun say'd so too.
> 
> Let's think: who needs food more, the hungry or the full? Where are /any/
> possible performance improvements needed most? The Firefox software renderer is already too slow

Yes, the Firefox software renderer needs to be faster. I suggest we invest in that rather than in old GPUs, which give little to no advantage from hardware acceleration.

> (about 2/3 the speed of both Opera and Chrome), while ANY hardware
> acceleration is faster.

This is inaccurate; hw acceleration is not always faster (we've had plenty of reports of it being -slower- on Intel hardware when we had level 9_3 on by default in try server builds). There are plenty of bugs (across a range of browsers) that show this.

> 
> Bas, You've made some false assumptions:
> 1) everyone uses multimon

I never said that; however, there are people who do.

> 2) everyone who does uses a landscape layout

The majority of people using multimon do.

> 3) 2k+ images are frequent (leaving porn aside for a while)

If performance goes to hell any time you hit one, you're pretty screwed. Be aware we're not just talking about photos here. Some websites use huge (non-square!) tile maps with one large dimension because they tile inefficiently; all those websites would be big performance pitfalls, or would require falling back to software and creating more complicated control flow (see the tiling sketch below).

> 4) everybody wants, and can afford, to buy new hardware even when the old is
> good enough
> 5) nobody can build a multimon setup that exceeds this 4kx4k in at least one
> dimension

I have no idea what you're saying there; I've got a multimon setup that reaches over 4k width, fwiw.
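As an aside, a small sketch of the tiling fallback mentioned above (my own illustration with made-up names, not Gecko code): an oversized surface must be split into a grid of textures no larger than the hardware limit, and because each tile is a separate texture, filtering near a tile edge cannot read the neighbouring tile's texels - which is where the seams come from.

    // Hypothetical sketch (not Gecko code): how many maxTextureSize tiles an
    // oversized surface needs in each dimension.
    #include <cstdio>

    // Ceiling division: tiles needed to cover `extent` pixels.
    static int TilesNeeded(int extent, int maxTextureSize) {
        return (extent + maxTextureSize - 1) / maxTextureSize;
    }

    int main() {
        const int maxTex = 2048;               // e.g. a 945GM-class limit
        const int width = 3000, height = 900;  // a wide, non-square tile map
        printf("%d x %d tiles needed\n",
               TilesNeeded(width, maxTex), TilesNeeded(height, maxTex));
        return 0;
    }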
> On cards like the 3100 you quickly start talking about hardware deceleration
> :).
Unfortunately, the 3150 in today's netbooks is the same as the old 3100, just smaller... And don't think the Atom is necessarily better - an in-order CPU at 1.3 to 1.7 GHz depending on power saving...

You're right about making the software renderer faster, but right now, if the browser encounters an image bigger than 4k*4k, what happens? A crash? If there is a path to handle it, it should work on slow hardware too, at least as an option. And I ask again - you seem to be the gfx guru here - can't resize-to-fit be done? The image would be resized anyway when it's on screen.
Also, netbook users obviously don't use multimon.
(In reply to comment #14)
> > On cards like the 3100 you quickly start talking about hardware deceleration
> > :).
> Unfortunately, the 3150 in today's netbooks is the same as the old 3100, just
> smaller... And don't think the Atom is necessarily better - an in-order CPU at
> 1.3 to 1.7 GHz depending on power saving...
> 
> You're right about making the software renderer faster, but right now, if the
> browser encounters an image bigger than 4k*4k, what happens? A crash? If there
> is a path to handle it, it should work on slow hardware too, at least as an
> option. And I ask again - you seem to be the gfx guru here - can't
> resize-to-fit be done? The image would be resized anyway when it's on screen.
> Also, netbook users obviously don't use multimon.

No, it won't crash; for layers (D3D9) it doesn't matter.

For D2D the limit we have for enabling it by default is actually 8K in either direction. If we hit images bigger than that we take a complicated resize or partial uploading path (depending on how it's displayed). So these big images may actually be slower with hardware accel than without. Images bigger than 8K are rare though, fortunately.
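To make the decision just described concrete, here is a sketch with assumed names (not the actual Gecko logic): images within the limit take the direct texture path, and larger ones fall back to software resampling or partial uploads depending on how they're displayed.

    // Hypothetical sketch (not Gecko code) of the path selection described
    // above for a given image size and texture-size limit.
    #include <cstdio>

    enum class UploadPath { Direct, Resample, PartialUpload };

    static UploadPath ChoosePath(int width, int height, int maxTextureSize,
                                 bool displayedScaledDown) {
        if (width <= maxTextureSize && height <= maxTextureSize)
            return UploadPath::Direct;    // fits in one texture: fast path
        if (displayedScaledDown)
            return UploadPath::Resample;  // software-resample to fit
        return UploadPath::PartialUpload; // slow, non-caching uploads
    }

    int main() {
        // e.g. a 9000x1024 strip against the 8K D2D default limit
        printf("path = %d\n",
               static_cast<int>(ChoosePath(9000, 1024, 8192, false)));
        return 0;
    }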
:bas
(In reply to comment #13)
> If any time you hit one performance go to hell, you're pretty screwed...
> ...Some websites... would be big performance pitfalls, or require
> falling back to software and create more complicated control flows.

I vote with both hands (surely marco_pal and many others do too) to have "performance go to hell" only /sometimes/ and not /every time/ :)
(In reply to comment #16)
> :bas
> (In reply to comment #13)
> > If performance goes to hell any time you hit one, you're pretty screwed...
> > ...Some websites... would be big performance pitfalls, or would require
> > falling back to software and creating more complicated control flow.
> 
> I vote with both hands (surely marco_pal and many others do too) to have
> "performance go to hell" only /sometimes/ and not /every time/ :)

Performance doesn't suck on software, or shouldn't, anyway. HW accel is not some magical silver bullet that makes everything great and fast :). I can show you plenty of demos that are slower with hardware acceleration, even if you have a state-of-the-art GPU like a Radeon 69xx series or an NVIDIA GTX 5xx. Hardware acceleration makes it faster to fill or composite large surface areas, that's all. Some other things can easily become slower, especially on older hardware (in a way, maximum texture size is a proxy for 'general' hardware capability).

So the choice is really between huge fluctuations in performance (ranging from -very- bad to pretty good) and consistent 'decent' performance.

And even if you choose to enable hardware acceleration, there's no guarantee it will have better performance than software even under -good- circumstances. Yes, it will always be faster at something like FishIE, but for normal browsing this is not true at all.
(In reply to comment #15)
> (In reply to comment #14)
> > > On cards like the 3100 you quickly start talking about hardware
> > > deceleration :).
> > Unfortunately, the 3150 in today's netbooks is the same as the old 3100,
> > just smaller... And don't think the Atom is necessarily better - an
> > in-order CPU at 1.3 to 1.7 GHz depending on power saving...
> > 
> > You're right about making the software renderer faster, but right now, if
> > the browser encounters an image bigger than 4k*4k, what happens? A crash?
> > If there is a path to handle it, it should work on slow hardware too, at
> > least as an option. And I ask again - you seem to be the gfx guru here -
> > can't resize-to-fit be done? The image would be resized anyway when it's
> > on screen.
> > Also, netbook users obviously don't use multimon.
> 
> No, it won't crash; for layers (D3D9) it doesn't matter.
> 
> For D2D the limit we have for enabling it by default is actually 8K in either
> direction. If we hit images bigger than that we take a complicated resize or
> partial uploading path (depending on how it's displayed). So these big images
> may actually be slower with hardware accel than without. Images bigger than
> 8K are rare though, fortunately.

Why doesn't it matter for D3D9? We are obviously talking about that, since poor hardware only supports DX9.
Maybe this is a little off-topic, but right now the only thing I can enable is DirectWrite - does it run in hardware or is it completely software, in which case is it better to switch it off?
I think this is a WONTFIX. I believe we see very few users on DX9 hardware that doesn't meet our requirements. Most people on hardware so old that it doesn't support 4096x4096 are using drivers so old that they will be blacklisted anyway. I don't think there's a big group of users we can deliver something to that is worth the effort.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WONTFIX
Then please, please, get the software renderer up to top speed. I look forward to that and the improvements in Project Snappy.
We should use about:jank to evaluate whether any possible slowness is due to a sub-par implementation of D2D (see https://blog.mozilla.com/tglek/2012/01/26/snappy-january-26/) or really a fundamental consequence of exceeding the hardware texture limit.