Intermittent [taskcluster:error] Task timeout after 1800 seconds. Force killing container.
Categories
(Firefox Build System :: Task Configuration, defect, P5)
Tracking
(firefox-esr102 wontfix, firefox-esr115 fixed, firefox117 wontfix, firefox118 fixed, firefox119 fixed)
People
(Reporter: intermittent-bug-filer, Assigned: jmaher)
References
Details
(Keywords: intermittent-failure, regression)
Attachments
(1 file)
Filed by: cbrindusan [at] mozilla.com
Parsed log: https://treeherder.mozilla.org/logviewer.html#?job_id=266189296&repo=mozilla-central
Full log: https://queue.taskcluster.net/v1/task/fy0chMgJQRyTTDqeemH7-Q/runs/0/artifacts/public/logs/live_backing.log
Reftest URL: https://hg.mozilla.org/mozilla-central/raw-file/tip/layout/tools/reftest/reftest-analyzer.xhtml#logurl=https://queue.taskcluster.net/v1/task/fy0chMgJQRyTTDqeemH7-Q/runs/0/artifacts/public/logs/live_backing.log&only_show_unexpected=1
[task 2019-09-11T22:27:05.472Z] [osmesa-src 0.1.1] CXXLD glsl_compiler
[task 2019-09-11T22:27:06.054Z] [osmesa-src 0.1.1] make[4]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/compiler'
[task 2019-09-11T22:27:06.054Z] [osmesa-src 0.1.1] make[3]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/compiler'
[task 2019-09-11T22:27:06.055Z] [osmesa-src 0.1.1] Making all in mesa
[task 2019-09-11T22:27:06.087Z] [osmesa-src 0.1.1] make[3]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:27:06.096Z] [osmesa-src 0.1.1] CC x86/gen_matypes.o
[task 2019-09-11T22:27:06.319Z] [osmesa-src 0.1.1] CCLD gen_matypes
[task 2019-09-11T22:27:06.393Z] [osmesa-src 0.1.1] GEN matypes.h
[task 2019-09-11T22:27:06.394Z] [osmesa-src 0.1.1] make all-recursive
[task 2019-09-11T22:27:06.426Z] [osmesa-src 0.1.1] make[4]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:27:06.433Z] [osmesa-src 0.1.1] Making all in .
[task 2019-09-11T22:27:06.465Z] [osmesa-src 0.1.1] make[5]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:27:06.470Z] [osmesa-src 0.1.1] CC main/libmesa_sse41_la-streaming-load-memcpy.lo
[task 2019-09-11T22:27:06.470Z] [osmesa-src 0.1.1] CC main/libmesa_sse41_la-sse_minmax.lo
[task 2019-09-11T22:27:06.471Z] [osmesa-src 0.1.1] CC main/accum.lo
[task 2019-09-11T22:27:06.471Z] [osmesa-src 0.1.1] CC main/api_arrayelt.lo
[task 2019-09-11T22:27:06.471Z] [osmesa-src 0.1.1] CC main/api_exec.lo
[task 2019-09-11T22:27:06.472Z] [osmesa-src 0.1.1] CC main/api_loopback.lo
[task 2019-09-11T22:27:06.472Z] [osmesa-src 0.1.1] CC main/arrayobj.lo
[task 2019-09-11T22:27:06.473Z] [osmesa-src 0.1.1] CC main/arbprogram.lo
[task 2019-09-11T22:27:06.473Z] [osmesa-src 0.1.1] CC main/attrib.lo
[task 2019-09-11T22:27:06.474Z] [osmesa-src 0.1.1] CC main/atifragshader.lo
[task 2019-09-11T22:27:06.474Z] [osmesa-src 0.1.1] CC main/barrier.lo
[task 2019-09-11T22:27:06.474Z] [osmesa-src 0.1.1] CC main/blend.lo
[task 2019-09-11T22:27:06.475Z] [osmesa-src 0.1.1] CC main/blit.lo
[task 2019-09-11T22:27:06.475Z] [osmesa-src 0.1.1] CC main/bbox.lo
[task 2019-09-11T22:27:07.039Z] Compiling glutin v0.21.0
[task 2019-09-11T22:27:07.039Z] Running CARGO_PKG_HOMEPAGE= CARGO_PKG_VERSION_MINOR=21 CARGO_PKG_VERSION=0.21.0 CARGO_PKG_DESCRIPTION='Cross-platform OpenGL context provider.' CARGO_PKG_VERSION_MAJOR=0 CARGO_PKG_REPOSITORY='https://github.com/tomaka/glutin' LD_LIBRARY_PATH='/builds/worker/checkouts/gecko/gfx/wr/target/release/deps:/builds/worker/fetches/rustc/lib' CARGO_PKG_AUTHORS='The glutin contributors:Pierre Krieger <pierre.krieger1708@gmail.com>' CARGO_MANIFEST_DIR=/builds/worker/checkouts/gecko/gfx/wr/vendor/glutin CARGO_PKG_VERSION_PATCH=0 CARGO_PKG_VERSION_PRE= CARGO=/builds/worker/fetches/rustc/bin/cargo CARGO_PKG_NAME=glutin rustc --edition=2018 --crate-name glutin /builds/worker/checkouts/gecko/gfx/wr/vendor/glutin/src/lib.rs --color never --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C panic=abort -C debuginfo=2 -C metadata=43d05b56303f05aa -C extra-filename=-43d05b56303f05aa --out-dir /builds/worker/checkouts/gecko/gfx/wr/target/release/deps -L dependency=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps --extern derivative=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libderivative-3e2220e59672f504.so --extern glutin_egl_sys=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libglutin_egl_sys-1943a81841230dfe.rlib --extern glutin_glx_sys=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libglutin_glx_sys-bb3ef7f04759e07d.rlib --extern lazy_static=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/liblazy_static-cc4585a2431a86ec.rlib --extern libloading=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/liblibloading-a2a10c5a59261c70.rlib --extern osmesa_sys=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libosmesa_sys-cb42dfe02ead9c04.rlib --extern parking_lot=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libparking_lot-86085a37287b0ead.rlib --extern wayland_client=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libwayland_client-311357f005555246.rlib --extern winit=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libwinit-fd2915af04f09bc6.rlib --cap-lints warn --deny warnings -L native=/builds/worker/checkouts/gecko/gfx/wr/target/release/build/libloading-bf307d24304a4bc5/out
[task 2019-09-11T22:27:07.223Z] warning: trait objects without an explicit `dyn` are deprecated
[task 2019-09-11T22:27:07.224Z] --> /builds/worker/checkouts/gecko/gfx/wr/vendor/glutin/src/lib.rs:326:28
[task 2019-09-11T22:27:07.224Z] |
[task 2019-09-11T22:27:07.224Z] 326 | NoBackendAvailable(Box<std::error::Error + Send + Sync>),
[task 2019-09-11T22:27:07.224Z] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn std::error::Error + Send + Sync`
[task 2019-09-11T22:27:07.224Z] |
[task 2019-09-11T22:27:07.224Z] note: lint level defined here
[task 2019-09-11T22:27:07.224Z] --> /builds/worker/checkouts/gecko/gfx/wr/vendor/glutin/src/lib.rs:81:5
[task 2019-09-11T22:27:07.224Z] |
[task 2019-09-11T22:27:07.224Z] 81 | warnings,
[task 2019-09-11T22:27:07.224Z] | ^^^^^^^^
[task 2019-09-11T22:27:07.224Z] = note: `#[warn(bare_trait_objects)]` implied by `#[warn(warnings)]`
[task 2019-09-11T22:27:07.224Z]
[task 2019-09-11T22:27:07.224Z] warning: trait objects without an explicit `dyn` are deprecated
[task 2019-09-11T22:27:07.224Z] --> /builds/worker/checkouts/gecko/gfx/wr/vendor/glutin/src/lib.rs:406:32
[task 2019-09-11T22:27:07.224Z] |
[task 2019-09-11T22:27:07.224Z] 406 | fn cause(&self) -> Option<&std::error::Error> {
[task 2019-09-11T22:27:07.224Z] | ^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn std::error::Error`
[task 2019-09-11T22:27:07.224Z]
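(For reference, the `bare_trait_objects` warnings above are benign: rustc is asking the vendored glutin 0.21 code to spell trait-object types with `dyn`, as required by the 2018-edition style. A minimal sketch of the suggested fix, using a hypothetical error type rather than glutin's actual `NoBackendAvailable` variant:)

```rust
use std::error::Error;
use std::fmt;

// Hypothetical error type for illustration only. The point is the return
// type: the deprecated spelling `Box<std::error::Error + Send + Sync>`
// becomes `Box<dyn std::error::Error + Send + Sync>`, which is what the
// `help: use `dyn`` suggestion in the log refers to.
#[derive(Debug)]
struct BackendUnavailable(String);

impl fmt::Display for BackendUnavailable {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "no backend available: {}", self.0)
    }
}

impl Error for BackendUnavailable {}

// Explicit `dyn` marks the trait object; no behavior changes.
fn boxed_error(msg: &str) -> Box<dyn Error + Send + Sync> {
    Box::new(BackendUnavailable(msg.to_string()))
}

fn main() {
    let e = boxed_error("wayland");
    assert_eq!(e.to_string(), "no backend available: wayland");
}
```

Because the build passes `--deny warnings` for non-vendored crates but only `--cap-lints warn` for glutin, these lints are reported without failing the compile.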
[task 2019-09-11T22:27:07.226Z] [osmesa-src 0.1.1] CC main/bufferobj.lo
[task 2019-09-11T22:27:07.292Z] [osmesa-src 0.1.1] CC main/buffers.lo
[task 2019-09-11T22:27:07.316Z] [osmesa-src 0.1.1] CC main/clear.lo
[task 2019-09-11T22:27:07.395Z] [osmesa-src 0.1.1] CC main/clip.lo
[task 2019-09-11T22:27:07.514Z] [osmesa-src 0.1.1] CC main/colortab.lo
[task 2019-09-11T22:27:07.515Z] [osmesa-src 0.1.1] CC main/compute.lo
[task 2019-09-11T22:27:07.538Z] [osmesa-src 0.1.1] CC main/condrender.lo
[task 2019-09-11T22:27:07.709Z] [osmesa-src 0.1.1] CC main/conservativeraster.lo
[task 2019-09-11T22:27:07.710Z] [osmesa-src 0.1.1] CC main/context.lo
[task 2019-09-11T22:27:07.804Z] [osmesa-src 0.1.1] CC main/convolve.lo
[task 2019-09-11T22:27:07.844Z] [osmesa-src 0.1.1] CC main/copyimage.lo
[task 2019-09-11T22:27:08.214Z] [osmesa-src 0.1.1] CC main/cpuinfo.lo
[task 2019-09-11T22:27:08.505Z] [osmesa-src 0.1.1] CC main/debug.lo
[task 2019-09-11T22:27:08.533Z] [osmesa-src 0.1.1] CC main/debug_output.lo
[task 2019-09-11T22:27:08.589Z] [osmesa-src 0.1.1] CC main/depth.lo
[task 2019-09-11T22:27:08.589Z] [osmesa-src 0.1.1] CC main/dlist.lo
[task 2019-09-11T22:27:08.762Z] [osmesa-src 0.1.1] CC main/drawpix.lo
[task 2019-09-11T22:27:08.829Z] [osmesa-src 0.1.1] CC main/drawtex.lo
[task 2019-09-11T22:27:09.037Z] [osmesa-src 0.1.1] CC main/draw_validate.lo
[task 2019-09-11T22:27:09.084Z] [osmesa-src 0.1.1] CC main/enable.lo
[task 2019-09-11T22:27:09.100Z] [osmesa-src 0.1.1] CC main/enums.lo
[task 2019-09-11T22:27:09.104Z] [osmesa-src 0.1.1] CC main/errors.lo
[task 2019-09-11T22:27:09.118Z] [osmesa-src 0.1.1] CC main/eval.lo
[task 2019-09-11T22:27:09.213Z] [osmesa-src 0.1.1] CC main/execmem.lo
[task 2019-09-11T22:27:09.523Z] [osmesa-src 0.1.1] CC main/extensions.lo
[task 2019-09-11T22:27:09.525Z] [osmesa-src 0.1.1] CC main/externalobjects.lo
[task 2019-09-11T22:27:09.652Z] [osmesa-src 0.1.1] CC main/fbobject.lo
[task 2019-09-11T22:27:09.768Z] [osmesa-src 0.1.1] CC main/feedback.lo
[task 2019-09-11T22:27:09.780Z] [osmesa-src 0.1.1] CXX main/ff_fragment_shader.lo
[task 2019-09-11T22:27:09.975Z] [osmesa-src 0.1.1] CC main/ffvertex_prog.lo
[task 2019-09-11T22:27:10.106Z] [osmesa-src 0.1.1] CC main/fog.lo
[task 2019-09-11T22:27:10.207Z] [osmesa-src 0.1.1] CC main/format_fallback.lo
[task 2019-09-11T22:27:10.305Z] [osmesa-src 0.1.1] CC main/format_pack.lo
[task 2019-09-11T22:27:10.514Z] [osmesa-src 0.1.1] CC main/format_unpack.lo
[task 2019-09-11T22:27:10.515Z] [osmesa-src 0.1.1] CC main/formatquery.lo
[task 2019-09-11T22:27:10.680Z] [osmesa-src 0.1.1] CC main/formats.lo
[task 2019-09-11T22:27:10.698Z] [osmesa-src 0.1.1] CC main/format_utils.lo
[task 2019-09-11T22:27:10.704Z] [osmesa-src 0.1.1] CC main/framebuffer.lo
[task 2019-09-11T22:27:10.730Z] [osmesa-src 0.1.1] CC main/get.lo
[task 2019-09-11T22:27:10.895Z] [osmesa-src 0.1.1] CC main/genmipmap.lo
[task 2019-09-11T22:27:10.986Z] [osmesa-src 0.1.1] CC main/getstring.lo
[task 2019-09-11T22:27:11.042Z] [osmesa-src 0.1.1] CC main/glformats.lo
[task 2019-09-11T22:27:11.320Z] [osmesa-src 0.1.1] CC main/glspirv.lo
[task 2019-09-11T22:27:11.347Z] [osmesa-src 0.1.1] CC main/glthread.lo
[task 2019-09-11T22:27:11.606Z] [osmesa-src 0.1.1] CC main/hash.lo
[task 2019-09-11T22:27:11.648Z] [osmesa-src 0.1.1] CC main/hint.lo
[task 2019-09-11T22:27:11.827Z] [osmesa-src 0.1.1] CC main/histogram.lo
[task 2019-09-11T22:27:11.829Z] [osmesa-src 0.1.1] CC main/image.lo
[task 2019-09-11T22:27:12.009Z] [osmesa-src 0.1.1] CC main/lines.lo
[task 2019-09-11T22:27:12.009Z] [osmesa-src 0.1.1] CC main/light.lo
[task 2019-09-11T22:27:12.009Z] [osmesa-src 0.1.1] CC main/marshal.lo
[task 2019-09-11T22:27:12.220Z] [osmesa-src 0.1.1] CC main/marshal_generated.lo
[task 2019-09-11T22:27:12.288Z] [osmesa-src 0.1.1] CC main/matrix.lo
[task 2019-09-11T22:27:12.295Z] [osmesa-src 0.1.1] CC main/mipmap.lo
[task 2019-09-11T22:27:12.374Z] [osmesa-src 0.1.1] CC main/mm.lo
[task 2019-09-11T22:27:12.502Z] [osmesa-src 0.1.1] CC main/objectlabel.lo
[task 2019-09-11T22:27:12.517Z] [osmesa-src 0.1.1] CC main/multisample.lo
[task 2019-09-11T22:27:12.782Z] [osmesa-src 0.1.1] CC main/objectpurge.lo
[task 2019-09-11T22:27:12.994Z] [osmesa-src 0.1.1] CC main/pack.lo
[task 2019-09-11T22:27:12.995Z] [osmesa-src 0.1.1] CC main/pbo.lo
[task 2019-09-11T22:27:12.999Z] [osmesa-src 0.1.1] CC main/performance_monitor.lo
[task 2019-09-11T22:27:13.355Z] [osmesa-src 0.1.1] CC main/performance_query.lo
[task 2019-09-11T22:27:13.380Z] [osmesa-src 0.1.1] CC main/pipelineobj.lo
[task 2019-09-11T22:27:13.380Z] [osmesa-src 0.1.1] CC main/pixel.lo
[task 2019-09-11T22:27:13.381Z] [osmesa-src 0.1.1] CC main/pixelstore.lo
[task 2019-09-11T22:27:13.713Z] [osmesa-src 0.1.1] CC main/points.lo
[task 2019-09-11T22:27:13.714Z] [osmesa-src 0.1.1] CC main/pixeltransfer.lo
[task 2019-09-11T22:27:13.716Z] [osmesa-src 0.1.1] CC main/polygon.lo
[task 2019-09-11T22:27:13.867Z] [osmesa-src 0.1.1] CC main/program_binary.lo
[task 2019-09-11T22:27:14.003Z] [osmesa-src 0.1.1] CC main/program_resource.lo
[task 2019-09-11T22:27:14.210Z] [osmesa-src 0.1.1] CC main/querymatrix.lo
[task 2019-09-11T22:27:14.281Z] [osmesa-src 0.1.1] CC main/queryobj.lo
[task 2019-09-11T22:27:14.281Z] [osmesa-src 0.1.1] CC main/rastpos.lo
[task 2019-09-11T22:27:14.448Z] [osmesa-src 0.1.1] CC main/readpix.lo
[task 2019-09-11T22:27:14.467Z] [osmesa-src 0.1.1] CC main/remap.lo
[task 2019-09-11T22:27:14.528Z] [osmesa-src 0.1.1] CC main/renderbuffer.lo
[task 2019-09-11T22:27:14.801Z] [osmesa-src 0.1.1] CC main/robustness.lo
[task 2019-09-11T22:27:14.802Z] [osmesa-src 0.1.1] CC main/samplerobj.lo
[task 2019-09-11T22:27:15.022Z] [osmesa-src 0.1.1] CC main/scissor.lo
[task 2019-09-11T22:27:15.071Z] [osmesa-src 0.1.1] CC main/shaderapi.lo
[task 2019-09-11T22:27:15.151Z] [osmesa-src 0.1.1] CC main/shaderimage.lo
[task 2019-09-11T22:27:15.382Z] [osmesa-src 0.1.1] CC main/shaderobj.lo
[task 2019-09-11T22:27:15.383Z] [osmesa-src 0.1.1] CXX main/shader_query.lo
[task 2019-09-11T22:27:15.457Z] [osmesa-src 0.1.1] CC main/shared.lo
[task 2019-09-11T22:27:15.458Z] [osmesa-src 0.1.1] CC main/state.lo
[task 2019-09-11T22:27:15.963Z] [osmesa-src 0.1.1] CC main/stencil.lo
[task 2019-09-11T22:27:15.975Z] [osmesa-src 0.1.1] CC main/syncobj.lo
[task 2019-09-11T22:27:16.180Z] [osmesa-src 0.1.1] CC main/texcompress.lo
[task 2019-09-11T22:27:16.200Z] [osmesa-src 0.1.1] CXX main/texcompress_astc.lo
[task 2019-09-11T22:27:16.326Z] [osmesa-src 0.1.1] CC main/texcompress_bptc.lo
[task 2019-09-11T22:27:16.343Z] [osmesa-src 0.1.1] CC main/texcompress_cpal.lo
[task 2019-09-11T22:27:16.420Z] [osmesa-src 0.1.1] CC main/texcompress_etc.lo
[task 2019-09-11T22:27:16.447Z] [osmesa-src 0.1.1] CC main/texcompress_fxt1.lo
[task 2019-09-11T22:27:16.506Z] [osmesa-src 0.1.1] CC main/texcompress_rgtc.lo
[task 2019-09-11T22:27:16.521Z] [osmesa-src 0.1.1] CC main/texcompress_s3tc.lo
[task 2019-09-11T22:27:16.522Z] [osmesa-src 0.1.1] CC main/texenv.lo
[task 2019-09-11T22:27:16.708Z] [osmesa-src 0.1.1] CC main/texformat.lo
[task 2019-09-11T22:27:16.754Z] [osmesa-src 0.1.1] CC main/texgen.lo
[task 2019-09-11T22:27:16.967Z] [osmesa-src 0.1.1] CC main/texgetimage.lo
[task 2019-09-11T22:27:17.022Z] [osmesa-src 0.1.1] CC main/teximage.lo
[task 2019-09-11T22:27:17.274Z] [osmesa-src 0.1.1] CC main/texobj.lo
[task 2019-09-11T22:27:17.607Z] [osmesa-src 0.1.1] CC main/texparam.lo
[task 2019-09-11T22:27:17.654Z] [osmesa-src 0.1.1] CC main/texstate.lo
[task 2019-09-11T22:27:17.669Z] [osmesa-src 0.1.1] CC main/texstorage.lo
[task 2019-09-11T22:27:17.788Z] [osmesa-src 0.1.1] CC main/texstore.lo
[task 2019-09-11T22:27:17.789Z] [osmesa-src 0.1.1] CC main/textureview.lo
[task 2019-09-11T22:27:17.794Z] [osmesa-src 0.1.1] CC main/texturebindless.lo
[task 2019-09-11T22:27:17.805Z] [osmesa-src 0.1.1] CC main/transformfeedback.lo
[task 2019-09-11T22:27:17.806Z] [osmesa-src 0.1.1] CXX main/uniform_query.lo
[task 2019-09-11T22:27:17.984Z] [osmesa-src 0.1.1] CC main/uniforms.lo
[task 2019-09-11T22:27:18.401Z] [osmesa-src 0.1.1] CC main/varray.lo
[task 2019-09-11T22:27:18.589Z] [osmesa-src 0.1.1] CC main/vdpau.lo
[task 2019-09-11T22:27:18.744Z] [osmesa-src 0.1.1] CC main/version.lo
[task 2019-09-11T22:27:19.107Z] [osmesa-src 0.1.1] CC main/viewport.lo
[task 2019-09-11T22:27:19.125Z] [osmesa-src 0.1.1] CC main/vtxfmt.lo
[task 2019-09-11T22:27:19.149Z] [osmesa-src 0.1.1] CC main/es1_conversion.lo
[task 2019-09-11T22:27:19.721Z] [osmesa-src 0.1.1] CC x86/common_x86.lo
[task 2019-09-11T22:27:19.893Z] [osmesa-src 0.1.1] CC program/arbprogparse.lo
[task 2019-09-11T22:27:19.936Z] [osmesa-src 0.1.1] CXX program/ir_to_mesa.lo
[task 2019-09-11T22:27:19.941Z] [osmesa-src 0.1.1] CC program/lex.yy.lo
[task 2019-09-11T22:27:20.180Z] [osmesa-src 0.1.1] CC program/prog_cache.lo
[task 2019-09-11T22:27:20.181Z] [osmesa-src 0.1.1] CC program/prog_execute.lo
[task 2019-09-11T22:27:20.205Z] [osmesa-src 0.1.1] CC program/prog_instruction.lo
[task 2019-09-11T22:27:20.238Z] [osmesa-src 0.1.1] CC program/prog_noise.lo
[task 2019-09-11T22:27:20.274Z] [osmesa-src 0.1.1] CC program/prog_opt_constant_fold.lo
[task 2019-09-11T22:27:20.283Z] [osmesa-src 0.1.1] CC program/prog_optimize.lo
[task 2019-09-11T22:27:20.288Z] [osmesa-src 0.1.1] CC program/prog_parameter_layout.lo
[task 2019-09-11T22:27:20.291Z] [osmesa-src 0.1.1] CC program/prog_print.lo
[task 2019-09-11T22:27:20.348Z] [osmesa-src 0.1.1] CC program/program.lo
[task 2019-09-11T22:27:20.584Z] [osmesa-src 0.1.1] CC program/programopt.lo
[task 2019-09-11T22:27:20.815Z] [osmesa-src 0.1.1] CC program/program_parse_extra.lo
[task 2019-09-11T22:27:20.964Z] [osmesa-src 0.1.1] CC program/program_parse.tab.lo
[task 2019-09-11T22:27:21.036Z] [osmesa-src 0.1.1] CC program/prog_statevars.lo
[task 2019-09-11T22:27:21.188Z] [osmesa-src 0.1.1] CC program/prog_to_nir.lo
[task 2019-09-11T22:27:21.208Z] [osmesa-src 0.1.1] CCLD libmesa_sse41.la
[task 2019-09-11T22:27:21.253Z] [osmesa-src 0.1.1] CC math/m_debug_clip.lo
[task 2019-09-11T22:27:21.291Z] [osmesa-src 0.1.1] ar: `u' modifier ignored since `D' is the default (see `U')
[task 2019-09-11T22:27:21.327Z] [osmesa-src 0.1.1] CC math/m_debug_norm.lo
[task 2019-09-11T22:27:21.328Z] [osmesa-src 0.1.1] CC math/m_debug_xform.lo
[task 2019-09-11T22:27:21.364Z] [osmesa-src 0.1.1] CC math/m_eval.lo
[task 2019-09-11T22:27:21.365Z] [osmesa-src 0.1.1] CC math/m_matrix.lo
[task 2019-09-11T22:27:21.388Z] [osmesa-src 0.1.1] CC math/m_translate.lo
[task 2019-09-11T22:27:21.665Z] [osmesa-src 0.1.1] CC math/m_vector.lo
[task 2019-09-11T22:27:21.803Z] [osmesa-src 0.1.1] CC vbo/vbo_context.lo
[task 2019-09-11T22:27:21.841Z] [osmesa-src 0.1.1] CC vbo/vbo_exec_api.lo
[task 2019-09-11T22:27:21.866Z] Running CARGO_PKG_HOMEPAGE= CARGO_PKG_VERSION_MINOR=60 CARGO_PKG_VERSION=0.60.0 CARGO_PKG_DESCRIPTION='A GPU accelerated 2D renderer for web content' CARGO_PKG_VERSION_MAJOR=0 CARGO_PKG_REPOSITORY='https://github.com/servo/webrender' LD_LIBRARY_PATH='/builds/worker/checkouts/gecko/gfx/wr/target/release/deps:/builds/worker/fetches/rustc/lib' CARGO_PKG_AUTHORS='Glenn Watson <gw@intuitionlibrary.com>' OUT_DIR=/builds/worker/checkouts/gecko/gfx/wr/target/release/build/webrender-12ce6d73a3ab027f/out CARGO_MANIFEST_DIR=/builds/worker/checkouts/gecko/gfx/wr/webrender CARGO_PKG_VERSION_PATCH=0 CARGO_PKG_VERSION_PRE= CARGO=/builds/worker/fetches/rustc/bin/cargo CARGO_PKG_NAME=webrender rustc --edition=2018 --crate-name webrender webrender/src/lib.rs --color never --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C panic=abort -C debuginfo=2 --cfg 'feature="base64"' --cfg 'feature="capture"' --cfg 'feature="debugger"' --cfg 'feature="default"' --cfg 'feature="freetype-lib"' --cfg 'feature="image_loader"' --cfg 'feature="no_static_freetype"' --cfg 'feature="png"' --cfg 'feature="profiler"' --cfg 'feature="replay"' --cfg 'feature="ron"' --cfg 'feature="serde"' --cfg 'feature="serde_json"' --cfg 'feature="ws"' -C metadata=c4b99501ce346314 -C extra-filename=-c4b99501ce346314 --out-dir /builds/worker/checkouts/gecko/gfx/wr/target/release/deps -L dependency=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps --extern base64=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libbase64-ed4d931e9d0374de.rlib --extern bincode=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libbincode-09ef1a56e9df9795.rlib --extern bitflags=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libbitflags-135e767708272f40.rlib --extern byteorder=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libbyteorder-a376eeba025755c8.rlib --extern cfg_if=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libcfg_if-71f5021fabb88759.rlib --extern cstr=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libcstr-74433944c9b3413e.rlib --extern euclid=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libeuclid-3bfd147afb8d6bfb.rlib --extern freetype=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libfreetype-d676fb9cac71f347.rlib --extern fxhash=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libfxhash-083643596a1d0e0c.rlib --extern gleam=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libgleam-a1a4beb9d744aafb.rlib --extern image_loader=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libimage-645e222890fa6974.rlib --extern lazy_static=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/liblazy_static-cc4585a2431a86ec.rlib --extern libc=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/liblibc-5ffd38a5239ddeee.rlib --extern log=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/liblog-ebb6b0627e95911d.rlib --extern malloc_size_of_derive=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libmalloc_size_of_derive-41c3bbf3488dbb3f.so --extern num_traits=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libnum_traits-bae0d09cc3c8a576.rlib --extern plane_split=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libplane_split-c62748c5acf9a5e3.rlib --extern png=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libpng-b82461dd55353f7d.rlib --extern rayon=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/librayon-24256f4c3d84cded.rlib --extern ron=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libron-51caa561e2d145c3.rlib --extern serde=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libserde-9f629d3683105e80.rlib --extern serde_json=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libserde_json-05774a7b8678760a.rlib --extern sha2=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libsha2-6a67c85c9a149f48.rlib --extern smallvec=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libsmallvec-5ec69c5f44dc323c.rlib --extern svg_fmt=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libsvg_fmt-cc66a307b89c1780.rlib --extern thread_profiler=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libthread_profiler-eb1b24a154187634.rlib --extern time=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libtime-237e75cbee21b919.rlib --extern api=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libwebrender_api-fbd6df5c6ec806f2.rlib --extern webrender_build=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libwebrender_build-92822ddc31e35cd3.rlib --extern malloc_size_of=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libwr_malloc_size_of-4aeda56f4a3c043b.rlib --extern ws=/builds/worker/checkouts/gecko/gfx/wr/target/release/deps/libws-bd65e67cf56ef9fb.rlib --deny warnings -L native=/usr/lib/x86_64-linux-gnu
[task 2019-09-11T22:27:22.141Z] [osmesa-src 0.1.1] CC vbo/vbo_exec.lo
[task 2019-09-11T22:27:22.141Z] [osmesa-src 0.1.1] CC vbo/vbo_exec_array.lo
[task 2019-09-11T22:27:22.141Z] [osmesa-src 0.1.1] CC vbo/vbo_exec_draw.lo
[task 2019-09-11T22:27:22.144Z] [osmesa-src 0.1.1] CC vbo/vbo_exec_eval.lo
[task 2019-09-11T22:27:22.335Z] [osmesa-src 0.1.1] CC vbo/vbo_minmax_index.lo
[task 2019-09-11T22:27:22.342Z] [osmesa-src 0.1.1] CC vbo/vbo_noop.lo
[task 2019-09-11T22:27:22.471Z] [osmesa-src 0.1.1] CC vbo/vbo_primitive_restart.lo
[task 2019-09-11T22:27:22.488Z] [osmesa-src 0.1.1] CC vbo/vbo_save_api.lo
[task 2019-09-11T22:27:22.488Z] [osmesa-src 0.1.1] CC vbo/vbo_save.lo
[task 2019-09-11T22:27:22.805Z] [osmesa-src 0.1.1] CC vbo/vbo_save_draw.lo
[task 2019-09-11T22:27:22.835Z] [osmesa-src 0.1.1] CC vbo/vbo_save_loopback.lo
[task 2019-09-11T22:27:22.883Z] [osmesa-src 0.1.1] CC state_tracker/st_atifs_to_tgsi.lo
[task 2019-09-11T22:27:23.056Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_array.lo
[task 2019-09-11T22:27:23.073Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_atomicbuf.lo
[task 2019-09-11T22:27:23.331Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_blend.lo
[task 2019-09-11T22:27:23.361Z] [osmesa-src 0.1.1] CC state_tracker/st_atom.lo
[task 2019-09-11T22:27:23.361Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_clip.lo
[task 2019-09-11T22:27:23.431Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_constbuf.lo
[task 2019-09-11T22:27:23.712Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_depth.lo
[task 2019-09-11T22:27:23.714Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_framebuffer.lo
[task 2019-09-11T22:27:23.730Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_image.lo
[task 2019-09-11T22:27:23.920Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_msaa.lo
[task 2019-09-11T22:27:24.093Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_pixeltransfer.lo
[task 2019-09-11T22:27:24.093Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_rasterizer.lo
[task 2019-09-11T22:27:24.255Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_sampler.lo
[task 2019-09-11T22:27:24.273Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_scissor.lo
[task 2019-09-11T22:27:24.274Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_shader.lo
[task 2019-09-11T22:27:24.303Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_stipple.lo
[task 2019-09-11T22:27:24.808Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_texture.lo
[task 2019-09-11T22:27:24.814Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_storagebuf.lo
[task 2019-09-11T22:27:24.814Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_tess.lo
[task 2019-09-11T22:27:24.816Z] [osmesa-src 0.1.1] CC state_tracker/st_atom_viewport.lo
[task 2019-09-11T22:27:24.818Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_bitmap.lo
[task 2019-09-11T22:27:25.345Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_bitmap_shader.lo
[task 2019-09-11T22:27:25.346Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_bufferobjects.lo
[task 2019-09-11T22:27:25.347Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_blit.lo
[task 2019-09-11T22:27:25.348Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_clear.lo
[task 2019-09-11T22:27:25.354Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_compute.lo
[task 2019-09-11T22:27:25.365Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_condrender.lo
[task 2019-09-11T22:27:25.685Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_copyimage.lo
[task 2019-09-11T22:27:25.730Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_drawpixels.lo
[task 2019-09-11T22:27:25.772Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_drawtex.lo
[task 2019-09-11T22:27:25.772Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_drawpixels_shader.lo
[task 2019-09-11T22:27:25.893Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_eglimage.lo
[task 2019-09-11T22:27:26.107Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_fbo.lo
[task 2019-09-11T22:27:26.398Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_feedback.lo
[task 2019-09-11T22:27:26.399Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_flush.lo
[task 2019-09-11T22:27:26.400Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_memoryobjects.lo
[task 2019-09-11T22:27:26.413Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_msaa.lo
[task 2019-09-11T22:27:26.414Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_perfmon.lo
[task 2019-09-11T22:27:26.415Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_program.lo
[task 2019-09-11T22:27:26.454Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_queryobj.lo
[task 2019-09-11T22:27:26.599Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_rasterpos.lo
[task 2019-09-11T22:27:26.954Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_readpixels.lo
[task 2019-09-11T22:27:26.955Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_semaphoreobjects.lo
[task 2019-09-11T22:27:26.957Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_strings.lo
[task 2019-09-11T22:27:27.117Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_syncobj.lo
[task 2019-09-11T22:27:27.200Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_texturebarrier.lo
[task 2019-09-11T22:27:27.334Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_texture.lo
[task 2019-09-11T22:27:27.352Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_viewport.lo
[task 2019-09-11T22:27:27.352Z] [osmesa-src 0.1.1] CC state_tracker/st_cb_xformfb.lo
[task 2019-09-11T22:27:27.352Z] [osmesa-src 0.1.1] CC state_tracker/st_context.lo
[task 2019-09-11T22:27:27.442Z] [osmesa-src 0.1.1] CC state_tracker/st_debug.lo
[task 2019-09-11T22:27:27.444Z] [osmesa-src 0.1.1] CC state_tracker/st_copytex.lo
[task 2019-09-11T22:27:27.447Z] [osmesa-src 0.1.1] CC state_tracker/st_draw.lo
[task 2019-09-11T22:27:27.538Z] [osmesa-src 0.1.1] CC state_tracker/st_draw_feedback.lo
[task 2019-09-11T22:27:27.818Z] [osmesa-src 0.1.1] CC state_tracker/st_extensions.lo
[task 2019-09-11T22:27:27.856Z] [osmesa-src 0.1.1] CC state_tracker/st_format.lo
[task 2019-09-11T22:27:28.003Z] [osmesa-src 0.1.1] CC state_tracker/st_gen_mipmap.lo
[task 2019-09-11T22:27:28.012Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_to_nir.lo
[task 2019-09-11T22:27:28.206Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_to_tgsi.lo
[task 2019-09-11T22:27:28.208Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_to_tgsi_array_merge.lo
[task 2019-09-11T22:27:28.343Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_to_tgsi_temprename.lo
[task 2019-09-11T22:27:28.345Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_to_tgsi_private.lo
[task 2019-09-11T22:27:28.474Z] [osmesa-src 0.1.1] CXX state_tracker/st_glsl_types.lo
[task 2019-09-11T22:27:28.475Z] [osmesa-src 0.1.1] CC state_tracker/st_manager.lo
[task 2019-09-11T22:27:28.541Z] [osmesa-src 0.1.1] CC state_tracker/st_mesa_to_tgsi.lo
[task 2019-09-11T22:27:28.543Z] [osmesa-src 0.1.1] CC state_tracker/st_nir_lower_builtin.lo
[task 2019-09-11T22:27:28.932Z] [osmesa-src 0.1.1] CC state_tracker/st_nir_lower_tex_src_plane.lo
[task 2019-09-11T22:27:28.933Z] [osmesa-src 0.1.1] CC state_tracker/st_nir_lower_uniforms_to_ubo.lo
[task 2019-09-11T22:27:29.016Z] [osmesa-src 0.1.1] CC state_tracker/st_pbo.lo
[task 2019-09-11T22:27:29.064Z] [osmesa-src 0.1.1] CC state_tracker/st_program.lo
[task 2019-09-11T22:27:29.451Z] [osmesa-src 0.1.1] CC state_tracker/st_scissor.lo
[task 2019-09-11T22:27:29.451Z] [osmesa-src 0.1.1] CC state_tracker/st_sampler_view.lo
[task 2019-09-11T22:27:29.565Z] [osmesa-src 0.1.1] CC state_tracker/st_shader_cache.lo
[task 2019-09-11T22:27:29.682Z] [osmesa-src 0.1.1] CC state_tracker/st_texture.lo
[task 2019-09-11T22:27:29.748Z] [osmesa-src 0.1.1] CC state_tracker/st_tgsi_lower_yuv.lo
[task 2019-09-11T22:27:29.787Z] [osmesa-src 0.1.1] CC state_tracker/st_vdpau.lo
[task 2019-09-11T22:27:30.205Z] [osmesa-src 0.1.1] CPPAS x86-64/xform4.lo
[task 2019-09-11T22:27:49.133Z] [osmesa-src 0.1.1] CXXLD libmesagallium.la
[task 2019-09-11T22:27:49.993Z] [osmesa-src 0.1.1] ar: `u' modifier ignored since `D' is the default (see `U')
[task 2019-09-11T22:28:00.052Z] [osmesa-src 0.1.1] make[5]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:28:00.052Z] [osmesa-src 0.1.1] Making all in main/tests
[task 2019-09-11T22:28:00.056Z] [osmesa-src 0.1.1] make[5]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa/main/tests'
[task 2019-09-11T22:28:00.061Z] [osmesa-src 0.1.1] make[5]: Nothing to be done for 'all'.
[task 2019-09-11T22:28:00.061Z] [osmesa-src 0.1.1] make[5]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa/main/tests'
[task 2019-09-11T22:28:00.061Z] [osmesa-src 0.1.1] Making all in state_tracker/tests
[task 2019-09-11T22:28:00.064Z] [osmesa-src 0.1.1] make[5]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa/state_tracker/tests'
[task 2019-09-11T22:28:00.069Z] [osmesa-src 0.1.1] make[5]: Nothing to be done for 'all'.
[task 2019-09-11T22:28:00.069Z] [osmesa-src 0.1.1] make[5]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa/state_tracker/tests'
[task 2019-09-11T22:28:00.069Z] [osmesa-src 0.1.1] make[4]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:28:00.069Z] [osmesa-src 0.1.1] make[3]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/mesa'
[task 2019-09-11T22:28:00.070Z] [osmesa-src 0.1.1] Making all in loader
[task 2019-09-11T22:28:00.074Z] [osmesa-src 0.1.1] make[3]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/loader'
[task 2019-09-11T22:28:00.080Z] [osmesa-src 0.1.1] CC libloader_la-loader.lo
[task 2019-09-11T22:28:00.080Z] [osmesa-src 0.1.1] CC libloader_la-pci_id_driver_map.lo
[task 2019-09-11T22:28:00.733Z] [osmesa-src 0.1.1] CCLD libloader.la
[task 2019-09-11T22:28:00.824Z] [osmesa-src 0.1.1] ar: `u' modifier ignored since `D' is the default (see `U')
[task 2019-09-11T22:28:00.917Z] [osmesa-src 0.1.1] make[3]: Leaving directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/loader'
[task 2019-09-11T22:28:00.918Z] [osmesa-src 0.1.1] Making all in gallium
[task 2019-09-11T22:28:00.921Z] [osmesa-src 0.1.1] make[3]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/gallium'
[task 2019-09-11T22:28:00.930Z] [osmesa-src 0.1.1] Making all in auxiliary
[task 2019-09-11T22:28:00.959Z] [osmesa-src 0.1.1] make[4]: Entering directory '/builds/worker/checkouts/gecko/gfx/wr/target/release/build/osmesa-src-ac01c37ffe6dcf3f/out/src/gallium/auxiliary'
[task 2019-09-11T22:28:00.991Z] [osmesa-src 0.1.1] CC indices/u_indices_gen.lo
[task 2019-09-11T22:28:00.992Z] [osmesa-src 0.1.1] CC indices/u_unfilled_gen.lo
[task 2019-09-11T22:28:00.994Z] [osmesa-src 0.1.1] CC util/u_format_table.lo
[task 2019-09-11T22:28:00.995Z] [osmesa-src 0.1.1] CC cso_cache/cso_cache.lo
[task 2019-09-11T22:28:00.999Z] [osmesa-src 0.1.1] CC cso_cache/cso_context.lo
[task 2019-09-11T22:28:01.001Z] [osmesa-src 0.1.1] CC cso_cache/cso_hash.lo
[task 2019-09-11T22:28:01.005Z] [osmesa-src 0.1.1] CC draw/draw_context.lo
[taskcluster:error] Task timeout after 1800 seconds. Force killing container.
Comment 21 • 5 years ago
The spike here is actually bug 1614852; please disregard the next OF message.
Comment 45 • 4 years ago
The recent spike seems to start with bug 1492362.
https://treeherder.mozilla.org/#/jobs?repo=autoland&searchStr=linting%2Copt%2Cpedantic%2Cchecks%2Csource-test-mozlint-file-perm%2Cfile-perm&tochange=ac0fdd0c661ef098e5b7f9c35c0263b2c358ae77&fromchange=c878b0fb16c7b10587a50d79ae5fd81d94b85e9a&selectedTaskRun=UgNEfzjESvemblYSMmIqUw.0
Glandium, could you please take a look?
Comment 47 • 4 years ago
bug 1492362 was backed out for a while, so it can't be involved in the most recent ones. Looking at random logs, it seems pypi.pub.build.mozilla.org is being slow sometimes.
Comment 48 • 4 years ago
Hm. Dave, does Relops own pypi.pub.build.mozilla.org? Do we have any logs or metrics?
Comment 49 • 4 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-7) from comment #48)
Hm. Dave, does Relops own pypi.pub.build.mozilla.org? Do we have any logs or metrics?
I was thinking that cloudops owned it. We just discussed it last week in infra because pypi had high activity.
Comment 50 • 4 years ago
Brian, do you know if cloudops hosts pypi.pub.build.mozilla.org?
Comment 51 • 4 years ago
I can speak briefly(?) here wrt history. pypi.pub.build.mozilla.org is a frontend address on the loadbalancer in mdc1. It frontends a pair of VM webservers, which serve up the content of an NFS volume. They were the responsibility of webops (bug 1469675), and to my knowledge the boxes were pretty much hands-off: nobody had any issues, so nobody touched them. Being a load-balanced pair, we put them on an auto-patching routine to keep them up to date, and turned them loose.
Since webops dissolved, and :ericz left the company, and his former SE team has been reorged, pypi has slipped through the cracks. Certain releng personnel have access to the hosts, but looking through the logs it's VERY infrequent. In the last year it looks like 2 visits from :callek, and that's it.
On Friday 2020-08-21, through unrelated work, I noticed slowness on both pypi nodes around 1700UTC. Upon logging in, both webservers were utterly hammered (load average of ~250), so, yeah, I'll agree: it was slow.
As the load subsided, I rotated the nodes out of the loadbalancer one at a time and added an extra vCPU. They didn't appear memory bound and I didn't research if CPU was the choke point. I just added the extra core for my own benefit: improving the likelihood of being able to troubleshoot over ssh if the storm persisted.
Now, I'm not an expert on this service, but, looking at the logs for one webhead:
$ cat access_2020-08-21-17 | wc -l
97106
$ cat access_2020-08-21-17 | grep -E " /pub " | wc -l
46955
$ cat access_2020-08-21-17 | grep -E " /pub/ " | wc -l
46488
$ cat access_2020-08-21-17 | grep -E " /pub/? " | wc -l
93443
Of the 97k hits in one hour, 93k were just asking for an index of /pub, getting a 301, and re-asking for /pub/. That's just getting the autoindex of the page.
63.245.208.200 - - [21/Aug/2020:17:59:59 +0000] "GET /pub HTTP/1.1" 301 246 "-" "pip/19.3.1 {\"ci\":null,\"cpu\":\"x86_64\",\"distro\":{\"id\":\"bionic\",\"libc\":{\"lib\":\"glibc\",\"version\":\"2.27\"},\"name\":\"Ubuntu\",\"version\":\"18.04\"},\"implementation\":{\"name\":\"CPython\",\"version\":\"2.7.17\"},\"installer\":{\"name\":\"pip\",\"version\":\"19.3.1\"},\"openssl_version\":\"OpenSSL 1.1.1 11 Sep 2018\",\"python\":\"2.7.17\",\"setuptools_version\":\"41.6.0\",\"system\":{\"name\":\"Linux\",\"release\":\"4.4.0-1014-aws\"}}"
63.245.208.200 - - [21/Aug/2020:17:59:46 +0000] "GET /pub/ HTTP/1.1" 200 281656 "-" "pip/19.3.1 {\"ci\":null,\"cpu\":\"x86_64\",\"distro\":{\"id\":\"bionic\",\"libc\":{\"lib\":\"glibc\",\"version\":\"2.27\"},\"name\":\"Ubuntu\",\"version\":\"18.04\"},\"implementation\":{\"name\":\"CPython\",\"version\":\"2.7.17\"},\"installer\":{\"name\":\"pip\",\"version\":\"19.3.1\"},\"openssl_version\":\"OpenSSL 1.1.1 11 Sep 2018\",\"python\":\"2.7.17\",\"setuptools_version\":\"41.6.0\",\"system\":{\"name\":\"Linux\",\"release\":\"4.4.0-1014-aws\"}}"
Filtering out internal monitoring and such, there were 959 actual file requests in that hour. So about 95% index, 4% monitoring, 1% data transfers.
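The breakdown above (index hits vs. everything else) can be reproduced from an access log with a one-pass awk sketch. The sample log lines below are fabricated stand-ins in common log format; the real files follow the access_2020-08-21-17 naming quoted earlier, and in that format the request path is field 7.

```shell
# Toy access-log sample (fabricated lines, common log format).
cat > access_sample.log <<'EOF'
1.2.3.4 - - [21/Aug/2020:17:00:01 +0000] "GET /pub HTTP/1.1" 301 246 "-" "pip/19.3.1"
1.2.3.4 - - [21/Aug/2020:17:00:02 +0000] "GET /pub/ HTTP/1.1" 200 281656 "-" "pip/19.3.1"
5.6.7.8 - - [21/Aug/2020:17:00:03 +0000] "GET /pub/mozbase/mozlog-6.0.tar.gz HTTP/1.1" 200 12345 "-" "pip/19.3.1"
9.9.9.9 - - [21/Aug/2020:17:00:04 +0000] "GET /nagios-check HTTP/1.1" 200 2 "-" "monitor"
EOF

# Categorise each request: bare index hits (/pub or /pub/) vs. the rest.
awk '$7 == "/pub" || $7 == "/pub/" { idx++; next } { other++ }
     END { printf "index=%d other=%d\n", idx, other }' access_sample.log
```

On this sample it prints "index=2 other=2"; on the real log the same split gives the ~95% index figure.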
So, there's many possibilities here. There might be an opportunity to clean up a URI (/pub/ instead of /pub) and cut cheap hits. Might be something where we could do 'dumber' indexing (i.e. drop FancyIndexing) on the webserver and save processing. Might be you had something go runaway and clobber us. Might be we need some kind of caching since the data is so very static. I just don't have enough info here. But I hope this helps guide some thoughts.
Comment 52 • 4 years ago
Brian, do you know if cloudops hosts pypi.pub.build.mozilla.org?
(:/ I somehow missed adding a NI here)
:bpitts, it looks like this isn't in cloudops, based on :gcox's detailed investigation. Does :cloudops possibly have monitoring for it?
I'm guessing most of the hits are from taskcluster workers in aws.
Comment 54 • 4 years ago
(In reply to Greg Cox [:gcox] from comment #51)
Filtering out internal monitoring and such, there were 959 actual file requests in that hour. So about 95% index, 4% monitoring, 1% data transfers.
:gcox, could you post some log entries for the monitoring? Maybe that can show us what or where the monitoring is.
So, there's many possibilities here. There might be an opportunity to clean up a URI (/pub/ instead of /pub) and cut cheap hits. Might be something where we could do 'dumber' indexing (i.e. drop FancyIndexing) on the webserver and save processing. Might be you had something go runaway and clobber us. Might be we need some kind of caching since the data is so very static. I just don't have enough info here. But I hope this helps guide some thoughts.
With bug 1661022 Aki has appended the trailing slash on the URI, so that will soon remove those redirect hits on /pub.
Caching sounds ideal for this since each CI worker hits these, and when we scale up to thousands in AWS/GCP that may be too many reaching over to this URL, repeatedly.
Do we have some proxy/caching we are doing elsewhere for datacenter-hosted HTTP? And related: perhaps this should be moved to a cloud (S3 or something) to be near the primary requestors.
Comment 55 • 4 years ago
(In reply to Dave House [:dhouse] from comment #54)
:gcox, Could you post some log entries for the monitoring? Maybe that can show us what or where the monitoring is.
It's pypi[12].webapp.mdc1, /var/log/httpd/pypi.pub.build.mozilla.org/access*
Based on watching logs a few hours after bug 1661022 comment 6, 301's didn't drop off. Looking after the ronin change landed, I see a decrease, but not to zero, and also not so far down that I have sample-size enough to call it a real impact yet. Let's say, "it looks better, but it could also be part of the 'after hours' lull".
With bug 1661022 Aki has appended the trailing slash on the uri. So that will soon remove those redirect hits on /pub.
I think this might need some more rollout time. I'm still seeing MacOS, bionic, and Windows with 301s.
Caching sounds ideal for this since each CI worker hits these, and when we scale up to 1000's in aws/gcp that may be too many reaching over to this url, repeatedly.
Do we have some proxy/caching we are doing elsewhere for datacenter hosted http?
The Zeus loadbalancers do have content caching, and it's flagged as enabled on the pypi virtual server... but when I look at caching in Zeus, it shows the /pub/ URL in the cache, but almost no hits reusing it. I'm not sure why we're not showing better caching results.
Looking at the Apache config, /etc/httpd/conf.d/pypi.pub.build.mozilla.org.conf, the index section is:
IndexOptions FancyIndexing +TrackModified HTMLTable VersionSort NameWidth=* Charset=UTF-8
TrackModified should be sending up a Last-Modified in the response, which should be cached, and should be causing cache hits. So it feels like there is something wrong around here. Either the requests are saying "I need a fresh copy, not from cache," or we're not really caching in Zeus, or we're not really producing a cacheable copy from Apache to Zeus. I'm afraid I need to punt you at someone with more Zeus knowledge here (cc'ed) because I don't know enough to research these possibilities.
and related, perhaps this should be moved to a cloud (s3 or something to be near the primary requestors)
"You've always had the power to go back to Kansas."
It's probably just never been enough of a priority. While pypi is in the datacenters, you're ingesting data to the cloud, so, DC workers get it for free because it's local, and cloudfolk get it for the near-free cost of an outbound request, since data is flowing the right direction, and the service is already set up so it's no work and maybe nobody thought about it. Now if you want it in the cloud, you'll likely need to (a) set it up to begin with, (b) set it up one per cloud so that it's not doing cross-cloud internet transfers, (c) retool your methods for putting the pypi packages out there in multiple places. 'You' can do it... it's just that nobody has done it.
Comment 58 • 4 years ago
(In reply to Greg Cox [:gcox] from comment #55)
(In reply to Dave House [:dhouse] from comment #54)
Caching sounds ideal for this since each CI worker hits these, and when we scale up to 1000's in aws/gcp that may be too many reaching over to this url, repeatedly.
Do we have some proxy/caching we are doing elsewhere for datacenter hosted http?
The Zeus loadbalancers do have content caching, and it's flagged as enabled on the pypi virtual server... but when I look at caching in Zeus, it shows the /pub/ URL in the cache, but almost no hits reusing it. I'm not sure why we're not showing better caching results.
Zeus can cache the requests. However, the origin servers need to set the right cache headers, Cache-Control and Expires, at a minimum. Ex:
$ curl -I 'http://fr.fxfeeds.mozilla.com/fr/firefox/headlines.xml'
HTTP/1.1 302 Found
Server: Apache/2.4.6 (CentOS)
X-Backend-Server: redirect2.webapp.mdc1.mozilla.com
Cache-Control: max-age=604800
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 02 Sep 2020 15:49:40 GMT
Location: http://www.lemonde.fr/rss/sequence/0,2-3208,1-0,0.xml?nav=firefox
Expires: Wed, 09 Sep 2020 15:49:40 GMT
Transfer-Encoding: chunked
Connection: Keep-Alive
X-Cache-Info: cached
Once the headers are correctly set, Zeus returns "X-Cache-Info: cached" vs. "X-Cache-Info: caching" in the response. Sending the above request to the origin to verify:
$ curl -I -H 'Host: fr.fxfeeds.mozilla.com' http://redirect1.webapp.mdc1.mozilla.com/fr/firefox/headlines.xml
HTTP/1.1 302 Found
Date: Wed, 02 Sep 2020 15:51:55 GMT
Server: Apache/2.4.6 (CentOS)
X-Backend-Server: redirect1.webapp.mdc1.mozilla.com
Location: http://www.lemonde.fr/rss/sequence/0,2-3208,1-0,0.xml?nav=firefox
Cache-Control: max-age=604800
Expires: Wed, 09 Sep 2020 15:51:55 GMT
Content-Type: text/html; charset=iso-8859-1
I believe in the current state, Zeus is doing a "best effort" guess at what needs to be cached and for how long. So we see some requests being cached for ~60s, but the right way to fix this is to set Cache-Control and Expires on the origin servers.
Comment 59 • 4 years ago
Consulted with :ashish, we landed a change to the pypi servers (puppet aca9aebd4f5f46e04a5110178a7527c1eb6ac8d3) that adds 120s of caching on the origin servers. (120s seems like a "by the time you wonder why the cache isn't updating, it'll have updated" length of time, while still taking down the number of hits by a decent amount.)
Then we noticed that the /pub/ hits weren't diminishing. Very long story short, pip hardcodes a "Cache-Control: max-age=0" header on the request for the index URI, so even though we're telling Zeus to cache, it won't, because it's trusting the client.
Ashish added a TrafficScript rule in the loadbalancer:
$path = http.getPath();
if( string.cmp( $path, "/pub/" ) == 0 ) {
http.removeHeader( "Cache-Control" );
}
to remove the "never cache" order from pip for index requests against /pub/. That puts the loadbalancers in the position of being allowed to cache based on the origin's config, which, as stated before, we chose to be 120s.
When we rolled this out, we saw the loadbalancer absorb almost 1500 cache hits in those 2mins, for every 1 that went to the origin nodes.
This will still let requests for /pub through to the origins, where we'll 301 to /pub/. Those haven't gone away fully, though they have dropped dramatically (from 24h/day with thousands of 301s, to maybe 1h/day that gets thousands, and 23h/day that get dozens). I think there's still a good number of hosts out there that haven't gotten a URL update for pip, somehow.
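The per-hour 301 tallies described here (thousands in one hour, dozens in others) can be pulled from an access log with awk; in common log format the status code is field 9 and the hour is two characters inside the timestamp field. The sample lines below are fabricated.

```shell
# Fabricated sample log: three 301s (two at 17:xx, one at 18:xx).
cat > access2.log <<'EOF'
a - - [21/Aug/2020:17:59:59 +0000] "GET /pub HTTP/1.1" 301 246
a - - [21/Aug/2020:17:59:59 +0000] "GET /pub HTTP/1.1" 301 246
a - - [21/Aug/2020:18:00:01 +0000] "GET /pub/ HTTP/1.1" 200 1000
a - - [21/Aug/2020:18:00:02 +0000] "GET /pub HTTP/1.1" 301 246
EOF

# Count 301 responses per hour; the hour is chars 14-15 of field 4
# ("[21/Aug/2020:17:59:59"). Sort for stable output order.
awk '$9 == 301 { h[substr($4, 14, 2)]++ }
     END { for (t in h) printf "hour %s: %d 301s\n", t, h[t] }' access2.log | sort
```

On the sample this prints "hour 17: 2 301s" and "hour 18: 1 301s".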
Comment 60 • 4 years ago
We haven't uplifted the pub -> pub/ patch to beta/release/esr. These branches have much lower activity, but they are still active. If this will help, we can uplift.
Comment 61 • 4 years ago
For OCD/correctness/eliminating noise reasons, I'd love for it to be uplifted, but I can't say it's urgent.
Comment 63 • 4 years ago
Landed on beta and esr78. esr68 is EOL in the next couple weeks, and we'll merge beta to release next Monday. We should hopefully see the usage of pub without a trailing slash drop to approximately zero in the coming weeks.
Comment 70 • 4 years ago
Looking at the logs from https://treeherder.mozilla.org/intermittent-failures.html#/bugdetails?startday=2020-09-01&endday=2020-10-19&tree=trunk&bug=1580652, these recent failures since 09/08 are not related to the pypi slowness. So that appears to be fixed.
Most of these timeouts occur during hg update/clones.
Copy-pastes from the logs of the line(s) immediately before the timeout entry "[taskcluster:error] Task timeout after 1800 seconds. Force killing container.":
2020-10-19 16:28:57 autoland b2663efeb9516b:
[vcs 2020-10-19T17:03:23.750Z] updating [============================================> ] 266100/282574 1m29s
2020-10-17 05:33:38 autoland f9a6da3ea564c023:
[task 2020-10-17T06:26:40.382Z] timed out waiting for profiles.ini
[task 2020-10-17T06:26:40.691Z] launch_application: am start -W -n org.mozilla.geckoview.test/org.mozilla.geckoview.test.TestRunnerActivity -a android.intent.action.MAIN --es env9 LLVM_PROFILE_FILE=/sdcard/pgo_profile/default_%p_random_%m.profraw --es env8 R_LOG_DESTINATION=stderr --es args '-no-remote -profile /sdcard/test_root/profile -marionette' --es env3 R_LOG_VERBOSE=1 --es env2 XPCOM_DEBUG_BREAK=warn --es env1 MOZ_WEBRENDER=0 --es env0 MOZ_CRASHREPORTER=1 --es env7 MOZ_CRASHREPORTER_SHUTDOWN=1 --es env6 MOZ_IN_AUTOMATION=1 --es env5 MOZ_LOG=signaling:3,mtransport:4,DataChannel:4,jsep:4 --es env4 MOZ_HIDE_RESULTS_TABLE=1 --ez use_multiprocess True --es env13 R_LOG_LEVEL=6 --es env12 MOZ_PROCESS_LOG=/tmp/tmpwC3fSYpidlog --es env11 MOZ_CRASHREPORTER_NO_REPORT=1 --es env10 MOZ_JAR_LOG_FILE=/sdcard/pgo_profile/en-US.log
[task 2020-10-17T06:43:14.939Z] merged.profdata
2020-10-15 17:10:07 autoland 07cdb26af0a30:
[vcs 2020-10-15T17:43:40.829Z] updating [================> ] 103900/284742 20m15s
2020-10-14 15:07:40 autoland de1f0f43e0733:
[vcs 2020-10-14T15:40:27.205Z] updating [====================> ] 130300/284858 15m20s
2020-10-14 09:52:12 mozilla-central f6615f1735525:
[vcs 2020-10-14T10:27:42.878Z] clone [==================> ] 1732264069/3753071261 34m09s
2020-10-13 00:15:28 autoland 5bfba9144099b:
[task 2020-10-13T00:49:07.244Z] 00:49:07.244 avoid-blacklist-and-whitelist (94) | Finished in 144.78 seconds
2020-10-13 00:07:12 autoland c19042d451119:
[vcs 2020-10-13T00:12:24.624Z] ensuring https://us-east-1.hgmointernal.net/integration/autoland@c19042d45111972cf29075ee1155b99a02d180bf is available at /builds/worker/checkouts/gecko
[vcs 2020-10-13T00:12:24.777Z] (cloning from upstream repo https://us-east-1.hgmointernal.net/mozilla-unified)
2020-10-12 23:01:12 autoland 910e223290db:
[vcs 2020-10-12T23:35:08.967Z] updating [=======================================> ] 242314/284285 9m10s
... ^ followed by 7 more hg timeouts on Oct 12.
Then a few task issues:
2020-10-12 14:11:19 autoland 19a46b1ac5db0a7946d6e5de939b754372a1187f:
[task 2020-10-12T14:44:27.594Z] 14:44:27.594 avoid-blacklist-and-whitelist (93) | Finished in 71.63 seconds
2020-10-12 10:47:16 autoland 466c14dd254ee6eeff58be9254a105b22e3fa480:
[task 2020-10-12T11:20:20.233Z] created virtual environment CPython3.6.9.final.0-64 in 3429ms
[...details of venv]
Then a quiet period back to:
2020-10-05 15:11:22 mozilla-central 7a0c019956469:
[task 2020-10-05T16:52:48.102Z] launch_application: am start -W -n org.mozilla.geckoview.test/org.mozilla.geckoview.test.TestRunnerActivity -a android.intent.action.MAIN --es env9 LLVM_PROFILE_FILE=/sdcard/pgo_profile/default_%p_random_%m.profraw --es env8 R_LOG_DESTINATION=stderr --es args '-no-remote -profile /sdcard/test_root/profile -marionette' --es env3 R_LOG_VERBOSE=1 --es env2 XPCOM_DEBUG_BREAK=warn --es env1 MOZ_WEBRENDER=0 --es env0 MOZ_CRASHREPORTER=1 --es env7 MOZ_CRASHREPORTER_SHUTDOWN=1 --es env6 MOZ_IN_AUTOMATION=1 --es env5 MOZ_LOG=signaling:3,mtransport:4,DataChannel:4,jsep:4 --es env4 MOZ_HIDE_RESULTS_TABLE=1 --ez use_multiprocess True --es env13 R_LOG_LEVEL=6 --es env12 MOZ_PROCESS_LOG=/tmp/tmprjB_QRpidlog --es env11 MOZ_CRASHREPORTER_NO_REPORT=1 --es env10 MOZ_JAR_LOG_FILE=/sdcard/pgo_profile/en-US.log
and then quiet back to more hg timeouts, and task issues on 9/29:
2020-09-29 17:36:20 autoland 55175b2bbcf81:
[vcs 2020-09-29T17:40:54.608Z] (cloning from upstream repo https://us-west-1.hgmointernal.net/mozilla-unified)
2020-09-29 16:04:16 autoland 94b3b8f32af5cd:
[vcs 2020-09-29T16:37:11.946Z] clone [================================> ] 2962738508/3723176556 11m27s
2020-09-29 15:38:31 mozilla-central 324ea565091e3:
[task 2020-09-29T16:12:54.493Z] 16:12:54.493 file-perm (91) | Finished in 50.34 seconds
[task 2020-09-29T16:12:54.552Z] 16:12:54.552 maybe-shebang-file-perm (91) | Passing the following paths:
[...paths listed]
and silence again back to 9/16,15,14 for more apparent hg timeouts:
2020-09-16 20:05:25 autoland 761ef7b1a3498:
[vcs 2020-09-16T20:39:07.475Z] updating [============================================> ] 267642/282990 5m04s
2020-09-16 19:13:17 autoland 05a47a0c800af:
[vcs 2020-09-16T19:46:47.764Z] updating [=========================> ] 160000/282986 10m39s
2020-09-15 18:53:33 autoland 76c42497ba0a7:
[vcs 2020-09-15T18:57:22.804Z] (cloning from upstream repo https://us-west-1.hgmointernal.net/mozilla-unified)
2020-09-15 17:55:41 autoland a2e9a24387c28:
[vcs 2020-09-15T18:31:36.279Z] clone [============================> ] 2602040178/3694294269 41m58s
2020-09-14 18:50:22 autoland c7bf67ecaeb91:
[vcs 2020-09-14T19:23:57.151Z] updating [================================================> ] 279110/282947 14s
2020-09-14 15:22:46 autoland c6d80d05c8f41:
[vcs 2020-09-14T15:56:28.940Z] updating [============================> ] 178700/282949 15m22s
2020-09-11 15:10:42 mozilla-central b133e2d673e8e:
[vcs 2020-09-11T15:45:09.502Z] updating [==================================> ] 207500/282897 9m43s
2020-09-11 15:05:05 autoland 60f90497d2b311:
[vcs 2020-09-11T15:40:31.947Z] updating [===========================> ] 167100/282899 9m40s
2020-09-11 02:31:36 mozilla-central f92ce84f27df482:
[task 2020-09-11T03:30:49.729Z] timed out waiting for profiles.ini
[task 2020-09-11T03:30:50.449Z] launch_application: am start -W -n org.mozilla.geckoview.test/org.mozilla.geckoview.test.TestRunnerActivity -a android.intent.action.MAIN --es env9 LLVM_PROFILE_FILE=/sdcard/pgo_profile/default_%p_random_%m.profraw --es env8 R_LOG_DESTINATION=stderr --es args '-no-remote -profile /sdcard/test_root/profile -marionette' --es env3 R_LOG_VERBOSE=1 --es env2 XPCOM_DEBUG_BREAK=warn --es env1 MOZ_WEBRENDER=0 --es env0 MOZ_CRASHREPORTER=1 --es env7 MOZ_CRASHREPORTER_SHUTDOWN=1 --es env6 MOZ_IN_AUTOMATION=1 --es env5 MOZ_LOG=signaling:3,mtransport:4,DataChannel:4,jsep:4 --es env4 MOZ_HIDE_RESULTS_TABLE=1 --ez use_multiprocess True --es env13 R_LOG_LEVEL=6 --es env12 MOZ_PROCESS_LOG=/tmp/tmpQadgoIpidlog --es env11 MOZ_CRASHREPORTER_NO_REPORT=1 --es env10 MOZ_JAR_LOG_FILE=/sdcard/pgo_profile/en-US.log
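The "line(s) immediately before the timeout entry" extraction used for this survey can be reproduced with grep's -B context option against a downloaded live_backing.log. The sample log below is fabricated from the first entry above.

```shell
# Fabricated excerpt of a task log ending in the timeout marker.
cat > live_backing.log <<'EOF'
[vcs 2020-10-19T17:03:23.750Z] updating [====> ] 266100/282574 1m29s
[taskcluster:error] Task timeout after 1800 seconds. Force killing container.
EOF

# -B1 prints one line of leading context before each match, which is
# exactly the "what was the task doing when it was killed" line.
grep -B1 'Task timeout after 1800 seconds' live_backing.log
```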
Comment 77 • 4 years ago
In the last 7 days there have been 25 occurrences, all on android-4-0-armv7-api16 pgo.
Recent failure: https://treeherder.mozilla.org/logviewer?job_id=323126575&repo=mozilla-central&lineNumber=1714
Comment 80 • 4 years ago
https://wiki.mozilla.org/Bug_Triage#Intermittent_Test_Failure_Cleanup
For more information, please visit auto_nag documentation.
Comment 84 • 4 years ago
Recent failure log: https://treeherder.mozilla.org/logviewer?job_id=339572507&repo=autoland&lineNumber=1696
Comment 112 • 3 years ago
Another example of this failure can be found on this try build:
https://treeherder.mozilla.org/jobs?repo=try&revision=2996ae28b9238a044c51a595b0a2398e20967f87
While test jobs that are passing mostly make use of an existing repository shared store, most of the failing ones need to clone from upstream first:
fail: https://treeherder.mozilla.org/logviewer?job_id=358054962&repo=try&lineNumber=33-34
pass: https://treeherder.mozilla.org/logviewer?job_id=358060037&repo=try&lineNumber=34-35
Might there be a problem with syncing the state of the shared unified repository across workers? Or is that expected, given that not all workers have the same base revision checked out?
Then we have around 5 extra minutes for preparing the repository: "transferred 3.91 GB in 290.8 seconds (13.8 MB/sec)". Some other jobs show a way faster transfer speed, like "transferred 3.91 GB in 122.7 seconds (32.6 MB/sec)", which only results in extra minutes to clone the repository.
Also updating the repo to the required changeset varies quite a lot. Fast jobs only take about 1 minute, while for slow jobs it can take up to 4 minutes!
Is there anything that can be done about that? Mike, could you help?
Comment 113 • 3 years ago
(In reply to Henrik Skupin (:whimboo) [⌚️UTC+1] from comment #112)
Another example of this failure can be found on this try build:
https://treeherder.mozilla.org/jobs?repo=try&revision=2996ae28b9238a044c51a595b0a2398e20967f87
While test jobs that are passing mostly make use of an existing repository shared store, most of the failing ones need to clone from upstream first:
fail: https://treeherder.mozilla.org/logviewer?job_id=358054962&repo=try&lineNumber=33-34
pass: https://treeherder.mozilla.org/logviewer?job_id=358060037&repo=try&lineNumber=34-35
Might there be a problem with syncing the state of the shared unified repository across workers? Or is that expected, given that not all workers have the same base revision checked out?
IIRC the shared repository is a .hg directory mounted into the worker without a working directory checkout. This .hg directory is shared across task containers, so each container can use the repo to create a working directory checkout for the task. If the worker doesn't have a copy of the repo in its shared mount, it has to do a full clone to get one. Later tasks running on the same worker re-use this cached repo to do their working directory checkouts. So we essentially do a full clone when we spawn new workers.
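The clone-once/reuse-per-task behaviour described above can be mimicked with a toy shell sketch: the expensive "clone" only runs when the shared mount is empty, and later tasks take the fast path. All paths are hypothetical, and the cp is a stand-in for the real hg clone into the shared store.

```shell
# Toy model of the worker's shared-store logic (paths hypothetical).
SHARED=/tmp/worker-shared-store
UPSTREAM=/tmp/fake-upstream

mkdir -p "$UPSTREAM" && echo rev-123 > "$UPSTREAM/tip"

run_task() {
    if [ ! -d "$SHARED" ]; then
        echo "task $1: full clone (slow path)"
        cp -r "$UPSTREAM" "$SHARED"    # stands in for the real hg clone
    else
        echo "task $1: reusing cached store (fast path)"
    fi
}

rm -rf "$SHARED"
run_task 1   # first task on a fresh worker pays the clone cost
run_task 2   # subsequent tasks on the same worker hit the cache
```

Task 1 takes the slow path, task 2 the fast path, mirroring why only the first job on a fresh worker sees the long clone.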
Then we have around 5 extra minutes for preparing the repository: "transferred 3.91 GB in 290.8 seconds (13.8 MB/sec)". Some other jobs show a way faster transfer speed, like "transferred 3.91 GB in 122.7 seconds (32.6 MB/sec)", which only results in extra minutes to clone the repository. Also updating the repo to the required changeset varies quite a lot. Fast jobs only take about 1 minute, while for slow jobs it can take up to 4 minutes!
It should be noted that the calculated transfer speed is just the number of bytes transmitted divided by the elapsed time for the entire clone job, thus the filesystem IO is also included in that time. Given we are seeing working directory updates also taking 4m+ on slow jobs, I'd reckon the slowness is due to issues with the filesystem on those tasks. It could be a number of things (misconfigured filesystem, "noisy neighbors" on the host, i.e. the machine is running several tasks at once and hitting bottlenecks, etc.).
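As a sanity check on the size-over-wall-clock-time point, the two rates quoted from comment 112 reproduce exactly from 3.91 GB over the elapsed seconds (assuming GB/MB here mean GiB/MiB, which is what makes the numbers line up):

```shell
# Reproduce the reported rates: 3.91 GiB over the elapsed seconds.
awk 'BEGIN {
    mib = 3.91 * 1024                       # repo size in MiB
    printf "slow: %.1f MB/sec\n", mib / 290.8
    printf "fast: %.1f MB/sec\n", mib / 122.7
}'
```

This prints "slow: 13.8 MB/sec" and "fast: 32.6 MB/sec", matching the log lines, so the rate really is just bytes divided by total elapsed time.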
I have a few ideas about how to make things better here but I don't think I'll be taking them on before 2022. If someone investigates the differences in clone time I would be interested in hearing what you find.
Comment 114 • 3 years ago
(In reply to Connor Sheehan [:sheehan] from comment #113)
IIRC the shared repository is a .hg directory mounted into the worker without a working directory checkout. This .hg directory is shared across task containers, so each container can use the repo to create a working directory checkout for the task. If the worker doesn't have a copy of the repo in its shared mount, it has to do a full clone to get one. Later tasks running on the same worker re-use this cached repo to do their working directory checkouts. So we essentially do a full clone when we spawn new workers.
How often do we create workers? It feels like at some times during the day it's happening more often than at others, so that failures for test jobs are more likely to hit.
Also, given that the initial clone from a remote location can take such a long time, would it be possible to already run this command when a new worker is getting created?
Comment 118 • 3 years ago
(In reply to Henrik Skupin (:whimboo) [⌚️UTC+1] from comment #114)
(In reply to Connor Sheehan [:sheehan] from comment #113)
IIRC the shared repository is a .hg directory mounted into the worker without a working directory checkout. This .hg directory is shared across task containers, so each container can use the repo to create a working directory checkout for the task. If the worker doesn't have a copy of the repo in its shared mount, it has to do a full clone to get one. Later tasks running on the same worker re-use this cached repo to do their working directory checkouts. So we essentially do a full clone when we spawn new workers.
How often do we create workers? It feels like at some times during the day it's happening more often than at others, so that failures for test jobs are more likely to hit.
I'm not sure to be honest. We ought to have some telemetry that can answer these questions. I'm not entirely sure if that exists or where it might live.
Also given that the initial clone from a remote location can take such a long time would it be possible to already run this command when a new worker is getting created?
This was actually one of the things I had in mind - moving the clone time out of task-execution time and into worker-creation time, so the long clone time isn't developer facing. Then vcs operations in tasks would be much more consistent in their runtimes. This may affect some assumptions about how fast we can stand up workers when demand arises, so it's likely a more involved change I'll be looking into next year.
If the issue with these exceptionally slow tasks is filesystem/storage related we will obviously still see slow vcs operations, so that area is ripe for optimization regardless of whether we move the clone to worker-startup time.
Comment 124•3 years ago
(In reply to Intermittent Failures Robot from comment #123)
> 41 failures in 2801 pushes (0.015 failures/push) were associated with this bug in the last 7 days.
> For more details, see:
> https://treeherder.mozilla.org/intermittent-failures/bugdetails?bug=1580652&startday=2022-01-03&endday=2022-01-09&tree=all

Most of these failures are from the puppeteer job. I filed bug 1749266. Could we re-classify these failures, please?
Updated•3 years ago
Updated•2 years ago
Comment 171•2 years ago
There have been 60 total failures in the last 7 days, recent failure log.
Affected platforms are:
- windows2012-64-shippable
- linux1804-64-tsan-qr
[task 2022-12-04T03:13:32.268Z] 03:13:32 INFO - TEST-START | accessible/tests/browser/tree/browser_shadowdom.js
[task 2022-12-04T03:13:33.175Z] 03:13:33 INFO - GECKO(8459) | must wait for focus in content
[taskcluster:error] Task timeout after 1800 seconds. Force killing container.
[taskcluster 2022-12-04 03:13:33.882Z] === Task Finished ===
[taskcluster 2022-12-04 03:13:33.882Z] Unsuccessful task run with exit code: -1 completed in 1801.187 seconds
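The log above shows the worker's enforcement behavior: once the task exceeds its 1800-second max run time, the container is force-killed and the run ends just past the limit with a negative exit code. A minimal sketch of that pattern (illustrative only, not the actual worker implementation):

```python
import signal
import subprocess

# Illustrative only: enforce a hard deadline on a child process and
# force-kill it on timeout, similar in spirit to the worker killing
# the task container. A negative return code means "killed by signal".
def run_with_timeout(cmd, timeout_s):
    proc = subprocess.Popen(cmd)
    try:
        proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()          # force-kill, analogous to killing the container
        proc.wait()          # reap the child and record its exit status
    return proc.returncode   # negative value: terminated by that signal

# A command that would run for 5 s, limited to a 0.2 s "max run time":
rc = run_with_timeout(["sleep", "5"], timeout_s=0.2)
```

Because the child is killed by SIGKILL, `rc` is the negated signal number, which matches the log's report of an unsuccessful run with a negative exit code right after the limit.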
Comment 174•2 years ago
There have been 63 total failures in the last 7 days, recent failure log.
Affected platforms are:
- linux1804-64-tsan-qr
Comment 176•2 years ago
There is a visible delay of about 5 s during shutdown of Firefox which seems to be related to Telemetry. Given that mochitests start and quit Firefox quite often, this is most likely what causes the task timeout. I filed bug 1805153 for that.
Updated•2 years ago
Comment 178•2 years ago
The TSAN mochitest-browser-a11y failures are basically just running very close to the job's maximum runtime and going slightly over about half the time. I've confirmed on Try that simply bumping the max runtime from 30 min to 45 min makes the job pass reliably. Will file a new bug for that.
Updated•2 years ago
Comment 215•1 year ago
Lately (maybe over the past 2 weeks) there has been a spike in this bug for two kinds of jobs (https://treeherder.mozilla.org/intermittent-failures/bugdetails?startday=2023-08-16&endday=2023-09-15&tree=trunk&failurehash=all&bug=1580652):
- the first is browser accessible jobs like this one - failure log: https://treeherder.mozilla.org/logviewer?job_id=428376014&repo=autoland&lineNumber=54050
- the second is these wpt canvas timeouts - failure log: https://treeherder.mozilla.org/logviewer?job_id=429239646&repo=autoland&lineNumber=13787
Joel, is this something you can investigate or maybe forward to someone? Thank you.
Assignee
Comment 217•1 year ago
Updated•1 year ago
Comment 218•1 year ago
Comment 219•1 year ago
bugherder
Updated•1 year ago
Comment 220•1 year ago
uplift
Comment 221•1 year ago
uplift
Updated•1 year ago