Ensure specific Java package & prevent updates on ES nodes

Status: RESOLVED FIXED
Priority: P3 (normal)
Reported: 5 years ago
Last modified: 4 years ago
People: Reporter: dmaher; Assignee: cliang

(Reporter)

Description

5 years ago
Elasticsearch really doesn't like it when nodes are running different versions of Java - in fact, even seemingly minor differences can cause weird problems (see bug 944390 for a prod example).  Therefore, much like the Elasticsearch package is already version-pinned in Puppet and then excluded in Yum, we should do the same for the Java package on ES nodes.
(Assignee)

Updated

5 years ago
Assignee: dmaher → cliang
(Assignee)

Comment 1

4 years ago
I've tweaked the parameters for the newlasticsearch module to accept a "javaversion" parameter, which is empty by default.  When that parameter is empty, the Java package defaults to "ensure => latest" (which is the current behavior); otherwise, Puppet ensures that the package named by "javapackage" is at the version specified in "javaversion".
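A minimal sketch of that logic (not the actual module code; the parameter handling is simplified here):

  # Sketch only: pin the Java package when $javaversion is set,
  # otherwise track the latest available version.
  $java_ensure = $javaversion ? {
    ''      => 'latest',
    default => $javaversion,
  }
  package { $javapackage:
    ensure => $java_ensure,
  }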

@averez: I incorporated the changes to the mozdef node manifests. 

@phrawzty: I did NOT add the yum::conf exclusion to the bunker manifests since I didn't know how you wanted to handle it (i.e. trigger it off of es_exclude = true or add a new parameter specifically for it and do something like concatenating values to be used in the yum::conf excludes parameter).
(Reporter)

Comment 2

4 years ago
(In reply to C. Liang [:cyliang] from comment #1)
> @phrawzty: I did NOT add the yum::conf exclusion to the bunker manifests
> since I didn't know how you wanted to handle it (i.e. trigger it off of
> es_exclude = true or add a new parameter specifically for it and do
> something like concatenating values to be used in the yum::conf excludes
> parameter).

The Bunker manifests will likely be scrapped wholesale - that said, I do still intend to make an enormous ES cluster, so now is a good time to talk about the best way to deal with this issue.  Would it be interesting to also yum_exclude Java along with the version declaration (as we do for the ES package)?
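Something like the following, say, assuming the internal yum::conf class takes an excludes array as it does for the ES exclusion today (the Java package name here is just an example):

  class { 'yum::conf':
    # Hypothetical: exclude both ES and the Java package from yum updates.
    excludes => ['elasticsearch', 'java-1.7.0-openjdk'],
  }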
(Reporter)

Comment 3

4 years ago
(whoops, hit submit too soon)

In other words, I'm totally open to other ideas on how to handle this situation. :)
(Assignee)

Comment 4

4 years ago
Does it make sense to roll the yum exclusion into the module itself?  (I'm defaulting to 'false' for the exclusions to keep the default behavior of the module the same.)

--- modules/newlasticsearch/manifests/init.pp	2014-04-03 10:21:13.000000000 -0500
+++ modules/newlasticsearch/manifests/new_init.pp	2014-04-04 14:17:57.000000000 -0500
@@ -8,11 +8,15 @@
   $user = 'elsrch',
   $group = 'elasticsearch',
   $nofile_limit = '65535',
-  $plugins = {}
+  $plugins = {},
+  $exclude_es = false,
+  $exclude_java = false,
 ) {
   # Validate input
   validate_hash($plugins)
   validate_string($javaversion)
+  validate_bool($exclude_es)
+  validate_bool($exclude_java)

   # Used all over the place for sanity tests.
   include stdlib
@@ -63,4 +67,15 @@
     create_resources(newlasticsearch::plugin, $plugins, $plugin_defaults)
   }

+  # Prevent accidental upgrades of ES and Java; bugs 833539 and 944895.
+  $es_exclusions = $exclude_es ? {
+    true    => ['elasticsearch'],
+    default => [],
+  }
+  $java_exclusions = $exclude_java ? {
+    true    => [$javapackage],
+    default => [],
+  }
+  $yum_exclusions = concat($es_exclusions, $java_exclusions)
+  class { 'yum::conf': excludes => $yum_exclusions }
 }
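With that in place, a node manifest could opt in like this (other parameters omitted; this relies on the defaults shown in the diff above):

  class { 'newlasticsearch':
    exclude_es   => true,
    exclude_java => true,
  }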
(Reporter)

Comment 5

4 years ago
(In reply to C. Liang [:cyliang] from comment #4)
> Does it make sense to roll the yum exclusion into the module itself?  (I'm
> defaulting to 'false' for the exclusions to keep the default behavior of the
> module the same.)

There are two issues: one philosophical and one operational.  Operationally, declaring yum::conf here will collide with any other declaration of yum::conf that might be applied on a given node.  This plays into a larger issue with our Puppet repo: we tend to do way too much in module-space, which creates monstrous modules that become difficult to use and manage (the aforementioned collision being an example).

Ideally modules should do *one* thing well, and should avoid tripping over one another as much as possible.  Nodes should then use intermediary "profile" manifests that declare and configure the modules as necessary.  That said, this is a battle that I've been failing to win at Mozilla for two years now, so I'm not sure how much longer I should keep fighting it. :P
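For illustration, a sketch of that profile pattern (class name and package list hypothetical): the profile owns node-wide concerns like yum::conf, while the module stays focused on ES itself.

  class profiles::elasticsearch {
    # The module does one thing: manage ES.
    class { 'newlasticsearch': }
    # Node-wide yum policy lives in the profile, not the module.
    class { 'yum::conf':
      excludes => ['elasticsearch', 'java-1.7.0-openjdk'],
    }
  }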
(Assignee)

Comment 6

4 years ago
I don't think that there's any particularly good way to define the scope of what any particular module handles; it doesn't help that (at Mozilla) you cannot assume that someone using a module is conversant with whatever service you're trying to configure, and there's relatively little over-arching / common architecture.


Some possibilities:

* A stupid-easy answer would be to stuff the yum excludes into a per-node hiera variable, then invoke the yum::conf exclusion within node definitions or the newlasticsearch profile definition (see the sketch after this list).  I'm fairly certain, though, that this is not readily "discoverable" (without a comment in the newlasticsearch module), and trying to get other Moz puppet module writers to observe this convention is futile.

* If yum's versionlock is sufficient for preventing accidental upgrades, it should be possible to crib together a yum::versionlock from https://github.com/CERIT-SC/puppet-yum/blob/master/manifests/versionlock.pp.  

For me, management of the ES package falls within the scope of the newlastic class or a newlastic "sub-class" (i.e. newlastic::install).  The ES package lock would toggle between locking mechanisms based on OS (i.e. yum::versionlock or apt::pin).

WRT managing the java package version: the puppetlabs java module is present, but doesn't appear to be used.  I don't know whether it would be a logistical nightmare to extend the local copy to handle locking of the java package, invoked either from the node definition, a newlastic "sub-class", or an ES profile (since this is critical for clusters and less so for standalone servers).

* If it has to be yum::conf excludes or bust, this might mean something like creating a variable to stuff package names into and performing the yum::conf exclusion in base::osfamily::redhat in a stage later than "main".
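The hiera variant from the first bullet might look roughly like this (the key name is made up):

  # In the node's hiera data:
  #   yum_excludes: ['elasticsearch', 'java-1.7.0-openjdk']
  $yum_excludes = hiera('yum_excludes', [])
  class { 'yum::conf':
    excludes => $yum_excludes,
  }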
(Reporter)

Comment 7

4 years ago
> * If yum's versionlock is sufficient for preventing accidental upgrades, it
> should be possible to crib together a yum::versionlock from
> https://github.com/CERIT-SC/puppet-yum/blob/master/manifests/versionlock.pp.

Versionlock should be fine; I used yum::conf previously because it was already there, not because it was the only possible solution.  The only caveat here is that yum-plugin-versionlock.noarch is not part of our default package list, so it would need to be installed as necessary (luckily it appears to be part of the default RHEL repo).
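Roughly what that could look like, assuming a yum::versionlock define cribbed from the linked CERIT-SC manifest (the entry format follows yum-plugin-versionlock's EPOCH:NAME-VERSION-RELEASE.ARCH convention; the version shown is only an example):

  # Make sure the plugin itself is present first.
  package { 'yum-plugin-versionlock':
    ensure => installed,
  }
  yum::versionlock { '0:elasticsearch-1.0.1-1.noarch':
    require => Package['yum-plugin-versionlock'],
  }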

> WRT to managing java package version:  The puppetlabs java module is present, but doesn't appear to be
> used.

That Java module was integrated into our repo after I initially wrote newlasticsearch (in fact, :atoll sent out an email regarding said Java module on 14 November 2013).  I'd be happy to see the responsibility for managing the Java package passed to the appropriate module (and eventually removed from newlasticsearch altogether).
(Assignee)

Updated

4 years ago
Depends on: 998371
(Assignee)

Comment 8

4 years ago
The newlasticsearch class now takes a lock_es parameter, which should be a boolean.  If true, the elasticsearch package is locked to the version specified in $ver using util::lock_package.

  class { 'newlasticsearch':
    ver         => $ver,
    plugins     => $plugins,
    javapackage => $javapackage,
    javaversion => $javaversion,
    lock_es     => true,
  }

This lock can also be used to manage "locking" the java package.  Given the aforementioned issues with multiple modules contending to manage a single file, and since util::lock_package can readily be extended to non-RHEL, non-Debian OSes, util::lock_package is probably preferable to yum::conf.
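Presumably something along these lines for Java (util::lock_package's exact interface lives in bug 998371 and isn't shown here, so the parameter name is an assumption):

  # Hypothetical: pin the Java package the same way ES is pinned.
  util::lock_package { $javapackage:
    version => $javaversion,
  }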
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → FIXED