Closed Bug 714634 • Opened 14 years ago • Closed 13 years ago
Frontend Zeus network issues in phx1
Categories: Infrastructure & Operations Graveyard :: WebOps: Other, task
Tracking: (Not tracked)
Status: RESOLVED FIXED
People: (Reporter: ashish, Assigned: oremj)
Attachments: (1 file) 174.78 KB, image/png
Around 08:05, pp-zlb08.phx went down. The remote console showed many CPU soft lockup errors; a screenshot of the remote console is attached.
SUMO and Socorro might have been impacted. I've restarted all Socorro processors and the monitor on sp-admin01.
Comment 1 (Reporter) • 14 years ago
I had to powercycle to get the box back up (sorry, missed putting this in).
Updated (Assignee) • 14 years ago
Assignee: server-ops → oremj
Comment 2 (Assignee) • 14 years ago
Looks like we actually had a kernel panic:
Jan 2 08:25:33 pp-zlb08 kernel: ------------[ cut here ]------------
Jan 2 08:25:33 pp-zlb08 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Jan 2 08:25:33 pp-zlb08 kernel: Hardware name: ProLiant DL360 G7
Jan 2 08:25:33 pp-zlb08 kernel: Modules linked in: mptctl mptbase zcluster(U) bridge pcc_cpufreq 8021q garp stp llc bonding ipv6 power_meter nx_nic(U) bnx2 microcode serio_raw sg iTCO_wdt iTCO_vendor_support hpilo hpwdt i7core_edac edac_core shpchp ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix hpsa radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Jan 2 08:25:33 pp-zlb08 kernel: Pid: 3446, comm: zeus.zxtm Not tainted 2.6.32-220.2.1.el6.x86_64 #1
Jan 2 08:25:33 pp-zlb08 kernel: Call Trace:
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff814ed902>] ? schedule_timeout+0x192/0x2e0
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff8107c020>] ? process_timeout+0x0/0x10
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff811b9b59>] ? sys_epoll_wait+0x239/0x300
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff8105e770>] ? default_wake_function+0x0/0x20
Jan 2 08:25:33 pp-zlb08 kernel: [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b
Jan 2 08:25:33 pp-zlb08 kernel: ---[ end trace 597171cdcc9bff76 ]---
Let's keep an eye on it. I'm going to close this out.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Comment 3 • 14 years ago
Case opened with Riverbed support:
[RVBD: 207190] box died, panic on zeus.zxtm [ ref:00D33N0.5004ILooA:ref ]
Comment 4 (Reporter) • 14 years ago
pp-zlb08 hung again at 08:13, though the kernel seemed responsive until 08:25, when the box had to be power-cycled.
Earliest sign of trouble:
Feb 6 08:11:30 pp-zlb08 kernel: zeus.zxtm: page allocation failure. order:2, mode:0x20
Feb 6 08:11:30 pp-zlb08 kernel: Pid: 28879, comm: zeus.zxtm Tainted: G W ---------------- 2.6.32-220.2.1.el6.x86_64 #1
Feb 6 08:11:30 pp-zlb08 kernel: Call Trace:
Feb 6 08:11:30 pp-zlb08 kernel: <IRQ> [<ffffffff81123d2f>] ? __alloc_pages_nodemask+0x77f/0x940
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81421900>] ? __alloc_skb+0x10/0x180
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8115dbe2>] ? kmem_getpages+0x62/0x170
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8115e7fa>] ? fallback_alloc+0x1ba/0x270
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8115e24f>] ? cache_grow+0x2cf/0x320
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8115e579>] ? ____cache_alloc_node+0x99/0x160
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff814232e4>] ? pskb_expand_head+0x64/0x270
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8115f1a9>] ? __kmalloc+0x189/0x220
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff814232e4>] ? pskb_expand_head+0x64/0x270
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81423bba>] ? __pskb_pull_tail+0x2aa/0x360
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffffa02fa82a>] ? unm_nic_xmit_frame+0xba/0xbc0 [nx_nic]
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81421367>] ? __kfree_skb+0x47/0xa0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8142c86c>] ? dev_hard_start_xmit+0x2bc/0x3f0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8102fd8e>] ? physflat_send_IPI_mask+0xe/0x10
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81449c1a>] ? sch_direct_xmit+0x15a/0x1c0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81449ceb>] ? __qdisc_run+0x6b/0xe0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8142a390>] ? net_tx_action+0x130/0x1c0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81071f81>] ? __do_softirq+0xc1/0x1d0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff810d9342>] ? handle_IRQ_event+0x92/0x170
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81071fda>] ? __do_softirq+0x11a/0x1d0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff81071d65>] ? irq_exit+0x85/0x90
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff814f4dd5>] ? do_IRQ+0x75/0xf0
Feb 6 08:11:30 pp-zlb08 kernel: [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
Feb 6 08:11:42 pp-zlb08 ntpd[3153]: Deleting interface #549 bond0:1, 63.245.217.108#123, interface stats: received=0, sent=0, dropped=0, active_time=338314 secs
Feb 6 08:11:42 pp-zlb08 ntpd[3153]: Deleting interface #550 bond0:2, 63.245.217.51#123, interface stats: received=0, sent=0, dropped=0, active_time=338314 secs
Feb 6 08:11:42 pp-zlb08 ntpd[3153]: Deleting interface #551 bond0:3, 63.245.217.117#123, interface stats: received=0, sent=0, dropped=0, active_time=338314 secs
Feb 6 08:11:42 pp-zlb08 ntpd[3153]: Deleting interface #552 bond0:4, 63.245.217.58#123, interface stats: received=0, sent=0, dropped=0, active_time=338314 secs
Feb 6 08:11:42 pp-zlb08 ntpd[3153]: Deleting interface #553 bond0:5, 63.245.217.77#123, interface stats: received=0, sent=0, dropped=0, active_time=338314 secs
Followed by a CPU lockup:
Feb 6 08:12:47 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:12:47 pp-zlb08 kernel: Modules linked in: zcluster(-)(U) iptable_filter ip_tables mptctl mptbase bridge pcc_cpufreq 8021q garp stp llc bonding ipv6 power_meter nx_nic(U) bnx2 microcode serio_raw sg iTCO_wdt iTCO_vendor_support hpilo hpwdt i7core_edac edac_core shpchp ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix hpsa radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: zcluster]
Feb 6 08:12:47 pp-zlb08 kernel: CPU 13
Feb 6 08:12:47 pp-zlb08 kernel: Modules linked in: zcluster(-)(U) iptable_filter ip_tables mptctl mptbase bridge pcc_cpufreq 8021q garp stp llc bonding ipv6 power_meter nx_nic(U) bnx2 microcode serio_raw sg iTCO_wdt iTCO_vendor_support hpilo hpwdt i7core_edac edac_core shpchp ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix hpsa radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: zcluster]
Feb 6 08:12:47 pp-zlb08 kernel:
Feb 6 08:12:47 pp-zlb08 kernel: Pid: 28879, comm: zeus.zxtm Tainted: G W ---------------- 2.6.32-220.2.1.el6.x86_64 #1 HP ProLiant DL360 G7
Feb 6 08:12:47 pp-zlb08 kernel: RIP: 0010:[<ffffffff814ef3fe>] [<ffffffff814ef3fe>] _spin_lock+0x1e/0x30
Feb 6 08:12:47 pp-zlb08 kernel: RSP: 0000:ffff88063d4c3cf0 EFLAGS: 00000206
Feb 6 08:12:47 pp-zlb08 kernel: RAX: 0000000000005cf6 RBX: ffff88063d4c3cf0 RCX: ffff880616858bf0
Feb 6 08:12:47 pp-zlb08 kernel: RDX: 0000000000005d07 RSI: ffff880616858020 RDI: ffff880616858bf0
Feb 6 08:12:47 pp-zlb08 kernel: RBP: ffffffff8100bc13 R08: ffff880616858020 R09: 0000000000000000
Feb 6 08:12:47 pp-zlb08 kernel: R10: 0000000000000000 R11: ffff88063d4c3d0c R12: ffff88063d4c3c70
Feb 6 08:12:47 pp-zlb08 kernel: R13: ffff8806168586e0 R14: ffff8806cc0713e8 R15: ffffffff814f4ebb
Feb 6 08:12:47 pp-zlb08 kernel: FS: 00007f39fcd33700(0000) GS:ffff88063d4c0000(0000) knlGS:0000000000000000
Feb 6 08:12:47 pp-zlb08 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 6 08:12:47 pp-zlb08 kernel: CR2: 0000000000b68728 CR3: 000000041c8cc000 CR4: 00000000000006e0
Feb 6 08:12:47 pp-zlb08 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Feb 6 08:12:47 pp-zlb08 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Feb 6 08:12:47 pp-zlb08 kernel: Process zeus.zxtm (pid: 28879, threadinfo ffff880113df4000, task ffff880615b95540)
Feb 6 08:12:47 pp-zlb08 kernel: Stack:
Feb 6 08:12:47 pp-zlb08 kernel: ffff88063d4c3da0 ffffffffa02fa86f ffff88063d4c3d20 ffffffff81421367
Feb 6 08:12:47 pp-zlb08 kernel: <0> ffff880617c90000 ffff880693f79180 ffff88063d4c3d40 00000000814213fb
Feb 6 08:12:47 pp-zlb08 kernel: <0> ffff880693f79180 ffff880616858020 ffff880616858bf0 00000002814c1cac
Feb 6 08:12:47 pp-zlb08 kernel: Call Trace:
Feb 6 08:12:47 pp-zlb08 kernel: <IRQ>
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffffa02fa86f>] ? unm_nic_xmit_frame+0xff/0xbc0 [nx_nic]
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81421367>] ? __kfree_skb+0x47/0xa0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8142c86c>] ? dev_hard_start_xmit+0x2bc/0x3f0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8102fd8e>] ? physflat_send_IPI_mask+0xe/0x10
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81449c1a>] ? sch_direct_xmit+0x15a/0x1c0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81449ceb>] ? __qdisc_run+0x6b/0xe0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8142a390>] ? net_tx_action+0x130/0x1c0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071f81>] ? __do_softirq+0xc1/0x1d0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff810d9342>] ? handle_IRQ_event+0x92/0x170
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071fda>] ? __do_softirq+0x11a/0x1d0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071d65>] ? irq_exit+0x85/0x90
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff814f4dd5>] ? do_IRQ+0x75/0xf0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
Feb 6 08:12:47 pp-zlb08 kernel: <EOI>
Feb 6 08:12:47 pp-zlb08 kernel: Code: 00 00 00 01 74 05 e8 22 7a d8 ff c9 c3 55 48 89 e5 0f 1f 44 00 00 b8 00 00 01 00 f0 0f c1 07 0f b7 d0 c1 e8 10 39 c2 74 0e f3 90 <0f> b7 17 eb f5 83 3f 00 75 f4 eb df c9 c3 0f 1f 40 00 55 48 89
Feb 6 08:12:47 pp-zlb08 kernel: Call Trace:
Feb 6 08:12:47 pp-zlb08 kernel: <IRQ> [<ffffffffa02fa86f>] ? unm_nic_xmit_frame+0xff/0xbc0 [nx_nic]
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81421367>] ? __kfree_skb+0x47/0xa0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8142c86c>] ? dev_hard_start_xmit+0x2bc/0x3f0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8102fd8e>] ? physflat_send_IPI_mask+0xe/0x10
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81449c1a>] ? sch_direct_xmit+0x15a/0x1c0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81449ceb>] ? __qdisc_run+0x6b/0xe0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8142a390>] ? net_tx_action+0x130/0x1c0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071f81>] ? __do_softirq+0xc1/0x1d0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff810d9342>] ? handle_IRQ_event+0x92/0x170
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071fda>] ? __do_softirq+0x11a/0x1d0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff81071d65>] ? irq_exit+0x85/0x90
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff814f4dd5>] ? do_IRQ+0x75/0xf0
Feb 6 08:12:47 pp-zlb08 kernel: [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
Feb 6 08:12:47 pp-zlb08 abrtd: Directory 'oops-2012-02-06-08:12:47-3294-0' creation detected
Feb 6 08:12:47 pp-zlb08 abrt-dump-oops: Reported 1 kernel oopses to Abrt
Feb 6 08:12:50 pp-zlb08 kernel: <EOI>
Feb 6 08:12:50 pp-zlb08 kernel: __ratelimit: 5 callbacks suppressed
Feb 6 08:12:50 pp-zlb08 abrtd: Can't open file '/var/spool/abrt/oops-2012-02-06-08:12:47-3294-0/uid': No such file or directory
There were more lockups, all of them on CPU #13:
[root@pp-zlb08.phx ~]# grep "soft lockup" /var/log/messages
Feb 6 08:12:47 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:14:11 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:15:35 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:16:59 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:18:23 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:19:47 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:21:11 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:22:35 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:23:59 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
Feb 6 08:25:23 pp-zlb08 kernel: BUG: soft lockup - CPU#13 stuck for 67s! [zeus.zxtm:28879]
FWIW, this server is running RHEL 6.2.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 5 • 14 years ago
pp-zlb08 was still inundated with traffic when it came back, so we moved bedrock and bugzilla-tg01 to pp-zlb10, and casey noticed an improvement right away:
09:07 < casey> woh thats better.
09:08 < casey> Mon Feb 6 09:06:57 PST 2012 15976
09:08 < casey> Mon Feb 6 09:07:02 PST 2012 7477
09:08 < casey> thats the tcp retransmits counter every 5 seconds
09:09 < casey> the other multicast zlbs idle at around ~4k/5 seconds and 08 is now down in that range too.
09:10 < casey> 4 seconds is much better than the 10-30 it was previously
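The numbers casey pasted look like successive readings of the kernel's cumulative TCP retransmit counter. A minimal sketch of reading that counter, assuming it is the RetransSegs field of /proc/net/snmp (the sample data below is made up so the snippet is self-contained):

```shell
# Parse the cumulative TCP retransmit counter (RetransSegs) out of a captured
# /proc/net/snmp sample. On a live box, read /proc/net/snmp directly and diff
# two readings taken 5 seconds apart to get a rate.
snmp_sample='Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts
Tcp: 1 200 120000 -1 1764 9 0 0 5 112 111 15976 0 4'

# Find the column index of RetransSegs in the header row, then print that
# column from the data row.
retrans=$(echo "$snmp_sample" | awk '
  /^Tcp:/ && !col { for (i = 1; i <= NF; i++) if ($i == "RetransSegs") col = i; next }
  /^Tcp:/ && col  { print $col }')
echo "$retrans"
```

Run in a loop with `sleep 5`, the same awk over the real /proc/net/snmp would reproduce the per-5-second counter casey was watching.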
Comment 6 • 14 years ago
Quick update:
This NIC should have 4 MSI-X entries for interrupt handling, and they do appear in /proc/interrupts; however, only one is being used. This leads to terrible performance, with interrupts arriving faster than a single CPU core can handle. It causes upwards of 50k TCP retransmits per minute per pp-zlb node at peak time, has overloaded individual Zeus nodes, and has caused individual nodes to crash.
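A quick way to see this symptom, sketched against a made-up 4-CPU /proc/interrupts excerpt (the IRQ numbers and eth0-N queue names are illustrative, not taken from these boxes): each MSI-X vector is a row, and on the affected NIC all the counts pile up in a single vector/CPU column.

```shell
# Sum the per-CPU interrupt counts for each NIC queue.
# On a live box: grep eth0 /proc/interrupts | awk ...
sample='          CPU0      CPU1      CPU2      CPU3
 66:  98231442         0         0         0  PCI-MSI-edge  eth0-0
 67:         0         0         0         0  PCI-MSI-edge  eth0-1
 68:         0         0         0         0  PCI-MSI-edge  eth0-2
 69:         0         0         0         0  PCI-MSI-edge  eth0-3'

# Columns 2..NF-2 are the per-CPU counts; the last field is the queue name.
per_queue=$(echo "$sample" | awk '
  /eth0/ { total = 0; for (i = 2; i <= NF - 2; i++) total += $i; print $NF, total }')
echo "$per_queue"
```

A healthy multiqueue NIC would show the totals spread roughly evenly across the four vectors instead of one vector carrying everything.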
We have reproduced this problem with both the nx_nic and netxen_nic drivers, leading us to believe the problem is either in the firmware (which is up to date) or in the kernel.
We are escalating the issue within HP. Technically, Riverbed is right that this is not (nor has been) a Zeus/Stingray issue; however, they were less than helpful in pointing out what the issue actually is.
Another option here is to replace the NC522SFP NIC with one based on another chipset. We are working all options right now.
Status: REOPENED → NEW
Summary: pp-zlb08 hung → Frontend Zeus network issues in phx1
Comment 7 • 14 years ago
HP case opened: 4669674003
I've passed the CFG2HTML data on to them as well. Working with David and Rich to escalate this.
Comment 8 • 14 years ago
While HP is working on our case, we have also ordered 2 NC523SFP cards to try in place of the NC522SFP. These should be delivered to the data center tomorrow.
When our load goes down Wednesday evening, we will take one node serving just the multicast VIPs offline and replace the card. If interrupts look to be behaving properly, we have a second card we can put into the node that serves the bulk of our single-hosted VIPs. We can then reassess the situation and decide where to go from there.
This may not be necessary if HP gets us a solution before then, but I'm not counting on that.
Comment 9 • 14 years ago
Red Hat case opened: 00596103
Between cases with three vendors, maybe we'll reach a conclusion...
Comment 10 • 14 years ago
Brief notes on today:
In pp-zlb12 we swapped the NC522SFP card (netxen) for an NC523SFP card (QLogic 8200).
We ran into the same interrupt limitations.
Dan came up with the idea of removing the bonding connection (we typically run mode=1, active/passive bonding). We worked on this and were able to distribute the interrupts appropriately, so bonding seems to be the root problem here, not the NIC.
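For reference, a hedged sketch of what distributing the interrupts by hand can look like on Linux: pin each queue's IRQ to its own CPU by writing a one-bit hex mask to /proc/irq/&lt;N&gt;/smp_affinity. The IRQ numbers (66-69) are assumptions for illustration, not values from these boxes, and the live write is left as a comment. Note that irqbalance may rewrite these masks, so it is usually stopped before pinning manually.

```shell
# Compute a one-bit hex CPU mask per IRQ and show the intended pinning;
# the actual write to /proc/irq/<irq>/smp_affinity is commented out.
plan=$(
  cpu=0
  for irq in 66 67 68 69; do
      mask=$(printf '%x' $((1 << cpu)))
      echo "IRQ $irq -> CPU $cpu (mask $mask)"
      # On a live box (as root): echo "$mask" > /proc/irq/$irq/smp_affinity
      cpu=$((cpu + 1))
  done
)
echo "$plan"
```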
However, without bonding we ran into issues serving the multicast VIPs: the multicast traffic for VAMO was not being received. This is still unresolved. We also caused a brief Zeus outage while changing this one node's interface assignment. With that in mind, it is still in an unbonded state right now.
One thing we did during this troubleshooting was assign all VAMO hits to pp-zlb12 alone. That would normally kill the node, but pp-zlb12 was only about 40% loaded and held on fine. This tells me that a properly functioning server with a 10G NIC can handle VAMO just fine directly, whereas five of the same nodes in a degraded state have issues through multicast.
High retransmits are still an issue and did not seem to be fixed with these changes.
It is also worth mentioning that we are looking at possible solutions outside of Zeus as well. All options are on the table at this point as we look to fix this for good and provide a highly available stack.
Updated • 14 years ago
Group: infra
Comment 11 (Assignee) • 13 years ago
This was the last update from Riverbed:
Hi Corey, Jeremy,
I wanted to follow up with you regarding the action items from our troubleshooting sessions on 2/23/12.
1. In-house testing was conducted by Riverbed Dev using Solarflare NIC cards; detailed test observations and results are attached in a doc file. Testing with the Solarflare NIC produced positive results.
2. Based on that testing, results from the QLogic NIC were undesirable, and we wanted to know if any headway has been made with QLogic engineering. As discussed on the call, Riverbed will provide Mozilla with any technical backup from our testing that may be required for further investigation with QLogic. Furthermore, I can help facilitate getting Red Hat engineering involved with QLogic to determine the underlying issue with both vendors. Please provide any details of the QLogic case opened.
3. During testing, we were advised that you were waiting on new servers with a different set of NIC cards to arrive at your facility; we wanted to know whether they have arrived and whether any testing has begun to isolate the issue with the new hardware. Do you know which type of NIC cards to expect on the new hardware being delivered?
4. Finally, Laurence provided commands that can be tried in the existing setup on the QLogic NIC, since some NICs filter multicast by default unless one of these is set. The commands are non-intrusive and can be tested in your environment: 'ifconfig ethX allmulti' or 'ifconfig ethX promisc'.
Please let us know your test results as we continue to isolate and resolve the outstanding issue. Let me know if you have any questions or concerns with the above, and if you would like to get Red Hat and QLogic together for a vendor meeting, we can work to understand whether there are any known or unknown compatibility issues.
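Riverbed's item 4 is easy to verify after the fact: the ALLMULTI/PROMISC flags show up in the interface flags. A small sketch against a captured `ip link` line (the sample output is made up; on a live box run `ip link show ethX` and check its first line):

```shell
# A sample "ip link show" line for an interface where allmulti has been enabled.
link_sample='2: eth0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000'

# Check whether the ALLMULTI flag is present in the angle-bracketed flag list.
if echo "$link_sample" | grep -q 'ALLMULTI'; then
    flag_state="allmulti is set"
else
    flag_state="allmulti is NOT set"
fi
echo "$flag_state"
```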
Comment 12 (Assignee) • 13 years ago
Going to close this out. We've been actively working on problems as they come up.
Status: NEW → RESOLVED
Closed: 14 years ago → 13 years ago
Resolution: --- → FIXED
Updated • 12 years ago
Component: Server Operations: Web Operations → WebOps: Other
Product: mozilla.org → Infrastructure & Operations
Updated • 6 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard