Chapter 2. Installation
In this part of the book, we will walk you through building a fully functional MCollective environment on several of your hosts. You will deploy a simple configuration for your initial tests. We will use this baseline configuration as we expand your knowledge in each of the following chapters.
We will not review every configuration parameter or use every feature in this initial installation; it provides a basic setup suitable for learning. In Part II, we'll step back and review this configuration in detail, along with optional changes you can use to fine-tune your installation.
This baseline configuration will use:
- ActiveMQ as the messaging broker middleware
- The Pre-Shared Key (PSK) plugin to validate data sent between the clients and the servers
- A simple Admin User Has Total Control authorization scheme
You’ll find this baseline configuration useful as a foundation to build upon as your MCollective installation grows.
Requirements
Before you install MCollective, you will need to check that you have all of the required elements, as listed in the next two sections.
Puppet Labs Repositories
If you are using RedHat, Fedora, CentOS, Debian, or Ubuntu Linux and are willing to use the Puppet Labs repositories, you can skip this section, as all of these components are available in your operating system packages or supplied in the Puppet Labs Products or Dependencies repositories.
Operating System
The operating system requirements are as follows:
- Working time synchronization (see the quick checks after this list)
  Many problems are due to systems having a different idea of what time it is. It is essential that all systems in the collective have a consistent view of the current time through use of Network Time Protocol (NTP). Active Directory/W32Time, the Unix Time Protocol used by rdate, and the original Daytime protocol are not accurate enough to provide sufficiently high-resolution time synchronization.
- Ruby 1.8.7, 1.9.3, or 2.0
  MCollective does not work with Ruby versions below 1.8.7. If your operating system does not provide you with a modern version of Ruby, refer to Appendix B for assistance.
- Ruby STOMP gem 1.2.10, 1.3.2, or higher
  STOMP is the Simple Text Oriented Messaging Protocol used by MCollective.
- 5 MB of disk space
- 256 MB of RAM
- A git client, usually available from your operating system package repository
  The git client is only necessary when installing MCollective or plugins from source. It is possible to finish this book without using git.
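You can verify the Ruby, gem, and time requirements with a few quick commands before proceeding. This is only a convenience check, and the exact commands vary by platform (ntpq, for example, comes with the NTP daemon tools):
$ ruby --version     # should report 1.8.7, 1.9.3, or 2.0
$ gem list stomp     # should report 1.2.10, 1.3.2, or higher
$ ntpq -p            # peers with low offset and jitter indicate working NTP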
Are These Versions Higher Than the Puppet Labs Documentation?
The versions specified here are chosen to avoid known bugs and common problems as reported in the MCollective email, IRC, and ticketing support channels. You can use the lower versions from the Puppet Labs documentation, but you may encounter well-known issues you’d avoid by using these versions.
Middleware Broker
And these are the middleware broker requirements:
- 500 MB of memory minimum
- One of the following messaging middleware options: ActiveMQ (recommended) or RabbitMQ
- Disk space dependent on the middleware service installed (45 MB for ActiveMQ, 10 MB for RabbitMQ)
The middleware broker will not require any disk space beyond the installation packages but will need processor and network capacity for handling at least two concurrent connections for each server node. Most modern systems can handle hundreds of MCollective server connections. Instructions for tuning the broker to handle thousands of concurrent connections are provided in “Large-Scale Broker Configurations”.
Where to Install
In the remainder of this book, we discuss MCollective as if you are installing it in your production environment. I would imagine that you are smarter than that, but just in case, here are some great ways to build a suitable environment to test and learn MCollective:
- An already established test lab you maintain
- A group of VMware or OpenStack host instances
- Vagrant machines running on your personal computer (you can find good Vagrant images at http://puppet-vagrant-boxes.puppetlabs.com/)
The choice of virtualization platform is entirely up to you. As you read earlier, MCollective's needs are minimal, and until your broker is supporting hundreds of connected servers, so are the broker's. A t1.micro free Amazon Web Services (AWS) instance is suitable for any role in a small MCollective environment. I've built a complete test installation on my MacBook using a total of 4 GB of RAM to support a half-dozen Vagrant nodes.
In all cases, I recommend using either CentOS 6.5 or Ubuntu 13.10 x86 for learning purposes. These platforms are fully supported by every stock MCollective plugin, allowing you to breeze through the learning exercises without distractions. After you have a working MCollective setup, you’ll be able to find help in Appendix A for other operating systems.
A nice thing about MCollective is that the names of your nodes aren't important. The only name hardcoded in your configuration files is the name of your middleware broker. This means that you can build your test environment and then easily transition to production hosts while changing only a single value. As you are likely thinking right now, you can simplify even further by using a DNS alias or CNAME and avoid any configuration file changes.
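For example, a zone entry along these lines (the broker hostnames here are hypothetical) lets you repoint every node at a new broker by changing one DNS record:
; example.net zone -- illustrative record only
activemq.example.net.    IN    CNAME    broker01.example.net.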
Dirty Little Secret
I have a dirty little secret to share with you: I've run every single command in this book against a live production environment. Simply put, there is no command example in this book that will cause a production outage, whether your environment is safe for testing out ideas in or you're just running cowboy.
Naturally, if you run mco destroy the world, well, you knew what you were doing when you blew your foot right off. You'll have a lot of powerful features in hand by the end of this book. You'll know what each command does and how to filter your targets effectively. If you're operating cowgirl[1] in a live environment, you'll want to be careful what you ask MCollective to do. But every command shown in this book should be safe to run in production.
Build yourself a group of nodes, physical or virtual, to learn on. Use CentOS 6.5 or Ubuntu 13.10 if possible while learning. Pick one of the nodes to be your middleware broker, and let’s get started.
Passwords and Keys
We are going to simplify the initial installation of MCollective to make it easy to understand and work with. For this installation, we will need three unique strings used for authentication. You won't type these strings at a prompt; they'll be stored in configuration files, so we can generate long, complex, cryptographically random passwords.
Run the following command three times and save the values:
$ openssl rand -base64 32
Copy the three random strings into a sticky note or text editor, or write them down on a piece of paper. We're going to use them in the next few sections when configuring your services.
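If it helps you keep the three values straight, you can generate and label them in one pass. This loop is merely a convenience, not a requirement:
$ for name in "Client Password" "Server Password" "Pre-Shared Key"; do
    echo "${name}: $(openssl rand -base64 32)"
  done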
The first string will be the Client Password used by clients to connect to ActiveMQ with permissions to issue commands to the server hosts.
The second string will be the Server Password used by servers to connect to ActiveMQ with permissions to subscribe to the command channels.
The third string will be a Pre-Shared Key used as a salt in the cryptographic hash used to validate communications between server and client, ensuring that nobody can alter the request payload in transit.
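Conceptually, the PSK plugin behaves like a keyed hash: the sender computes a digest over the payload plus the shared key, and the receiver recomputes it to detect tampering. The following Ruby fragment is a simplified illustration of that idea, not MCollective's actual implementation:
require 'digest/md5'

# Illustrative only: a keyed digest over the payload, in the spirit
# of the PSK security model (not MCollective's real code).
def psk_hash(payload, psk)
  Digest::MD5.hexdigest(payload + psk)
end

def valid_message?(payload, received_hash, psk)
  # The receiver recomputes the digest; a mismatch means the payload
  # was altered in transit or signed with a different key.
  psk_hash(payload, psk) == received_hash
end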
Ensure That the Client and Server Passwords Are Different
Many online guides for setting up MCollective suggest using the same username and password for clients and servers. This leads to a problem where the compromise of any server allows control messages to be sent from the compromised server to any host in the collective. We’ll explain this problem in “Detailed Configuration Review”.
You want the username and password installed on every server to be able to subscribe to topics, but not to be able to send requests to them. If you use the same username and password for both, someone who can read any one server’s configuration file will be able to issue requests to every host in the collective. Keep these usernames and passwords distinct and separate.
In Chapter 13, we will discuss alternative security plugins. The SSL/TLS security plugins can encrypt the transport and provide complete cryptographic authentication. However, the simplicity of the pre-shared key model is useful to help get you up and running quickly and provides a reasonable level of security for a small installation.
Puppet Labs Repository
Puppet Labs provides APT and YUM repositories containing packages for open source products and their dependencies. These community repositories are intended to supplement the OS vendor repositories for the more popular Linux distributions. These repos contain the Puppet Labs products used in this book, including MCollective, Puppet, and Facter, and packages for the dependencies of these products, including Ruby 1.8.7 for RHEL 5.x systems.
Supported Platforms
Puppet Labs maintains Product and Dependency repositories for the operating systems listed in the following sections. Other operating systems can use MCollective by following the instructions in Appendix B.
Enterprise Linux 6
To install the repositories on Enterprise Linux 6, run the following command:
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
Note
Enterprise Linux versions include RedHat, CentOS, Scientific, Oracle, and all downstream Linux distributions using the same number.
Enterprise Linux 5
This repository includes a build of Ruby 1.8.7 for RHEL-based 5.x systems, which is essential for MCollective:
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
Fedora
At the time this book was written, Fedora 19 and 20 were supported; install the repository as shown here:
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-fedora-20.noarch.rpm
Debian and Ubuntu
For Debian and Ubuntu systems, you have to download the .deb file appropriate for your release. It is best to browse to http://apt.puppetlabs.com/ and look at the files available there to find the right one to install.
If you are running the unstable release of Debian (Sid, at the time this book was written), install the repository as follows:
$ wget http://apt.puppetlabs.com/puppetlabs-release-sid.deb
$ sudo dpkg -i puppetlabs-release-sid.deb
$ sudo apt-get update
Likewise, if you are running the latest Ubuntu (Trusty Tahr), you should use the following:
$ wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
$ sudo dpkg -i puppetlabs-release-trusty.deb
$ sudo apt-get update
Other platforms
Most platforms (e.g., Solaris and FreeBSD) have package repositories that contain binary packages for MCollective. Consult Appendix A for specific instructions to get MCollective packages installed on other operating systems.
Configuring ActiveMQ
The one thing that every MCollective environment must have is publish/subscribe middleware. In this section, we will install ActiveMQ, the middleware recommended by Puppet Labs as the best-performing, most scalable, and best-tested option. After you have a working installation, instructions for changing the middleware to RabbitMQ are provided in “Using RabbitMQ”.
Install the Software
The first step is to install the middleware used for communication between clients and servers. You can install this on an existing Puppet or Chef server. Unless you have hundreds of nodes, it won’t require a dedicated system. Its resource needs are very minimal.
For RedHat, CentOS, and Fedora-based systems, run the following:
$ sudo yum install activemq
$ sudo chkconfig activemq on
For Debian or Ubuntu, run:
$ sudo apt-get install activemq
$ sudo update-rc.d activemq defaults
And for FreeBSD, run:
$ sudo pkg install activemq
$ echo 'activemq_enable="YES"' | sudo tee -a /etc/rc.conf
Tune the Configuration File
Next, we will tune the ActiveMQ configuration file, which should be installed in the appropriate etc/ directory for your platform (on most Linux systems, this will be /etc/activemq/activemq.xml). Edit the default file installed by the ActiveMQ package according to the following suggestions. At the time this book was written, even the default configuration in the Puppet Labs package needed some tweaking.
Note
We’ll cover the configuration file in depth in Part II. During this installation, we will only cover the minimum changes necessary to get ActiveMQ working for MCollective.
Enable purging in the broker
Look for the broker statement (usually located five lines into most default configurations I have seen). You'll need to add schedulePeriodForDestinationPurge to it:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="hostname"
        dataDirectory="leave this untouched"
        schedulePeriodForDestinationPurge="60000"
>
schedulePeriodForDestinationPurge is necessary to clean up stale queues. This will be explained comprehensively in “Detailed Configuration Review”.
Disable producerFlowControl
Here we will use policyEntry statements to disable flow control on both topics and queues, and to enable garbage collection on stale queues:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- MCollective expects producer flow control to be turned off. -->
      <policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb"/>

      <!-- MCollective generates a reply queue for most commands.
           Garbage-collect these after five minutes to conserve memory. -->
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="10mb"
                   gcInactiveDestinations="true"
                   inactiveTimoutBeforeGC="300000"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
(Yes, inactiveTimoutBeforeGC is missing an e; that is the attribute name ActiveMQ expects.)
In topic and queue names, the > character is a wildcard that matches any characters to the end of the string. Since it is the first character used here, all topic and queue names will match these rules.
Define logins for clients and servers in simpleAuthenticationPlugin
You will find the plugins section in the ActiveMQ configuration provided by Puppet Labs, but you may have to add it to most vendor or stock Apache ActiveMQ configurations. If the configuration file has a plugins section, replace it completely with the example that follows. Otherwise, place this just below the destinationPolicy section.
In this section, we will define the usernames and passwords used by the MCollective servers and clients:
<plugins>
  <simpleAuthenticationPlugin>
    <users>
      <authenticationUser username="client" password="Client Password"
                          groups="servers,clients,everyone"/>
      <authenticationUser username="server" password="Server Password"
                          groups="servers,everyone"/>
    </users>
  </simpleAuthenticationPlugin>
These lines are pretty easy to understand. You are entering the usernames and passwords that clients and servers will use to authenticate. The groups parameter assigns each user to the groups used for authorization, which we define next.
Tip
Note that plugins does not terminate here. We have broken the plugins block into two halves for ease of reading. The plugins XML block closes at the end of the authorization section.
Define permissions for clients and servers in authorizationPlugins
In the remainder of the plugins block, we define rights and permissions for the users we created in the previous section. Be very careful to get this text exactly correct, as periods, wildcards, and > characters in particular are significant:
<authorizationPlugin>
  <map>
    <authorizationMap>
      <authorizationEntries>
        <authorizationEntry queue="mcollective.>"
                            write="clients" read="clients" admin="clients"/>
        <authorizationEntry topic="mcollective.>"
                            write="clients" read="clients" admin="clients"/>
        <authorizationEntry queue="mcollective.nodes"
                            read="servers" admin="servers"/>
        <authorizationEntry queue="mcollective.reply.>"
                            write="servers" admin="servers"/>
        <authorizationEntry topic="mcollective.*.agent"
                            read="servers" admin="servers"/>
        <authorizationEntry topic="mcollective.registration.agent"
                            write="servers" read="servers" admin="servers"/>
        <authorizationEntry topic="ActiveMQ.Advisory.>"
                            read="everyone" write="everyone" admin="everyone"/>
      </authorizationEntries>
    </authorizationMap>
  </map>
</authorizationPlugin>
</plugins>
We will review this configuration in great detail in Chapter 10. At this time, it is simply essential that it is entered exactly as it appears here.
Start the Service
Now that we’ve updated the configuration file, it is time to start the service:
$ service activemq start
Starting ActiveMQ Broker...
After starting the service, check to see that ActiveMQ is listening on TCP port 61613:
$ netstat -an | grep 61613
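On a healthy broker, you should see a listening socket along these lines (exact netstat formatting varies by platform):
tcp        0      0 :::61613        :::*        LISTEN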
If you don’t see a LISTEN socket available for incoming connections, check the logfile (Java errors can be verbose, so page through the output carefully):
$ tail -200f /var/log/activemq/activemq.log
Firewall Change
You should ensure that inbound TCP sessions to port 61613 can be created from every MCollective server and client.
Most Linux systems use iptables firewalls. On a Linux system, you could use the following steps to add a rule before the global deny. If all of your servers fit within a few subnets, it is advisable to limit this rule to allow only those subnets, as shown here:
$ sudo iptables --list --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source      destination
1    ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
...etc

$ sudo ip6tables --list --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot      opt source    destination
1    ACCEPT     all           anywhere  anywhere   state RELATED,ESTABLISHED
2    ACCEPT     ipv6-icmp     anywhere  anywhere
...etc
Look through the output and find an appropriate line number for the new rule. Then use the following syntax to insert the rule into this location in the list:
$ sudo iptables -I INPUT 20 -m state --state NEW -p tcp \
    --source 192.168.200.0/24 --dport 61613 -j ACCEPT
$ sudo ip6tables -I INPUT 20 -m state --state NEW -p tcp \
    --source 2001:DB8:6A:C0::/64 --dport 61613 -j ACCEPT
Don’t forget to save that rule to your initial rules file. For RedHat-derived systems, this can be as easy as this:
$ sudo service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:   [  OK  ]
$ sudo service ip6tables save
ip6tables: Saving firewall rules to /etc/sysconfig/ip6tables: [  OK  ]
Warning
I’ve shown the syntax here for both IPv4 and IPv6 using non-routed networks. Customize to suit your local networks. You can ignore the steps for one protocol or the other if you don’t have nodes using both protocols. You can find more details about how to best handle dual-stack nodes in “IPv6 Dual-Stack Environments”.
Check Appendix A for platform-specific instructions.
Installing Servers
The mcollectived application server runs on nodes that will process requests from clients. Pick several nodes you want to send requests to, and install the server on each of them as described in the following section.
Install the Software
For RedHat, CentOS, and Fedora-based systems, run the following:
$ sudo yum install mcollective
$ sudo chkconfig mcollective on
For Debian or Ubuntu, run:
$ sudo apt-get install ruby-stomp mcollective
$ sudo update-rc.d mcollective defaults
And for FreeBSD, run:
$ sudo pkg install mcollective
$ echo 'mcollectived_enable="YES"' | sudo tee -a /etc/rc.conf
Server Configuration File
The following is the MCollective server configuration file, which should be installed on every host you want to control. Edit the default /etc/mcollective/server.cfg file installed by the package to look like this:
# /etc/mcollective/server.cfg
daemonize = 1
direct_addressing = 1

# ActiveMQ connector settings:
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.net
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = server
plugin.activemq.pool.1.password = Server Password
plugin.activemq.heartbeat_interval = 30

# How often to send registration messages
registerinterval = 600

# Security provider
securityprovider = psk
plugin.psk = Pre-Shared Key

# Override platform defaults?
libdir = /usr/libexec/mcollective
#logger_type = file
#loglevel = info
#logfile = /var/log/mcollective.log
#keeplogs = 5
#max_log_size = 2097152
#logfacility = daemon
Note
Note that you have to replace two values in this file (the Server Password and the Pre-Shared Key) and possibly the libdir directory.
libdir varies between operating systems. For this stage of the learning process, either test on a single operating system or adjust it by hand as necessary for each different OS. In Chapter 7, we'll introduce you to a Puppet module and a Chef cookbook that will handle this cleanly for you.
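For reference, these are common package defaults; verify them against your own package contents before trusting them:
# RedHat/CentOS/Fedora packages:
libdir = /usr/libexec/mcollective
# Debian/Ubuntu packages:
#libdir = /usr/share/mcollective/plugins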
Start the Service
To start the service, run the following command:
$ service mcollective start
Starting mcollective:                                      [  OK  ]
At this time, you should see the server bound to the ActiveMQ server on the port listed in both the server.cfg and activemq.xml files:
$ netstat -an | grep 61613
tcp   0   0 192.168.200.10:58006     192.168.200.5:61613     ESTABLISHED
If you are using IPv6, the response may look like this:
$ netstat -an -A inet6 | grep 61613
tcp   0   0 2001:DB8:6A:C0::200:10:45743   2001:DB8:6A:C0::200:5:61613   ESTABLISHED
Note
You may find that you are using IPv6 when you didn’t expect it. This isn’t generally a problem in most sites, so don’t rush to turn it off. How to control which protocol to use is covered in “IPv6 Dual-Stack Environments”.
Creating a Client
You only need to install the client software on systems from which you will send requests. These may be your management hosts, a bastion host, or your laptop or desktop systems in the office.
Install the Software
For RedHat, CentOS, and Fedora-based systems, run the following:
$ sudo yum install mcollective-client
For Debian or Ubuntu, run:
$ sudo apt-get install mcollective-client
And for FreeBSD, run:
$ sudo pkg add mcollective-client
Client Configuration File
The following is the client configuration file, which should be installed only on hosts from which you will submit requests. Edit the /etc/mcollective/client.cfg file installed with the package to look like this:
# /etc/mcollective/client.cfg
direct_addressing = 1

# Connector
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.net
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = client
plugin.activemq.pool.1.password = Client Password
plugin.activemq.heartbeat_interval = 30

# Security provider
securityprovider = psk
plugin.psk = Pre-Shared Key

# Use auto-discovery
default_discovery_method = mc
# ...or pre-configure the list of nodes
#default_discovery_method = flatfile
#default_discovery_options = /etc/mcollective/nodes.txt

# Miscellaneous settings
color = 1
rpclimitmethod = first

# Performance settings
direct_addressing_threshold = 10
ttl = 60

# Override platform defaults?
libdir = /usr/libexec/mcollective
#logger_type = console
#logfacility = daemon
#loglevel = warn
#logfile = /var/log/mcollective.log
#keeplogs = 5
#max_log_size = 2097152
Note
Note that you have to replace two values in this file (the Client Password and the Pre-Shared Key), and also the libdir directory if the operating systems differ.
Security Considerations
With the pre-shared key security model, anyone who can read the client.cfg file can find the password used to publish requests. I recommend that you limit read access to the client file to people you trust to execute commands on every system:
$ sudo chmod 640 /etc/mcollective/client.cfg
$ sudo chown root:wheel /etc/mcollective/client.cfg
Note
The Puppet module provided in this book handles this step for you. You only need to execute the commands just shown during our initial learning installation.
We’ll cover more flexible security designs in Chapter 13.
Installing from Source
If you have installed the packages from the Puppet Labs repository, you can skip directly down to “Testing Your Installation”.
If there are no suitable packages for your operating system, you can install MCollective from source. The installer will place the files in the standard Ruby locations for your platform, or in directories you specify as options.
You will need to set up init scripts for your operating system on your own. We’ll show you where the examples are that you can build from.
Warning
Do not attempt to install from RubyGems. The version in RubyGems was not created by Puppet Labs and is quite a bit older than, and incompatible with, recent versions of MCollective. It also does not install the connector or security plugins.[2]
Using the Installer
Download a source tarball from https://github.com/puppetlabs/marionette-collective/tags/.
Use the installer to place the files in your standard system locations:
$ tar xzf marionette-collective-2.5.3.tar.gz
$ cd marionette-collective-2.5.3
$ sudo ./install.rb
mc-call-agent:
mco:
mcollectived:
log.rb: mcc.............
agent_definition.rb: mmc.....
standard_definition.rb: mmc....
...snip test results...
Files:   113
Classes: 137
Modules: 151
Methods: 788
Elapsed: 23.397s
mkdir -p -m 755 /etc/mcollective
install -c -p -m 0644 etc/facts.yaml.dist /etc/mcollective/facts.yaml
mkdir -p -m 755 /etc/mcollective
install -c -p -m 0644 etc/server.cfg.dist /etc/mcollective/server.cfg
mkdir -p -m 755 /etc/mcollective
install -c -p -m 0644 etc/client.cfg.dist /etc/mcollective/client.cfg
...snip many more files...
You could also install to a different path and use the RUBYLIB environment variable to add it to Ruby's load path:
$ cd marionette-collective-2.5.3
$ sudo /path/to/ruby ./install.rb \
    --configdir=/opt/mcollective/etc \
    --bindir=/opt/mcollective/bin \
    --sbindir=/opt/mcollective/sbin \
    --plugindir=/opt/mcollective/plugins \
    --sitelibdir=/opt/mcollective/lib
$ export PATH=${PATH}:/opt/mcollective/bin
$ export RUBYLIB=${RUBYLIB}:/opt/mcollective/lib
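After exporting those variables, a quick sanity check confirms that the mco binary resolves from the new path (the path shown assumes the example directories above):
$ which mco
/opt/mcollective/bin/mco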
Creating an Init Script
If you didn’t install MCollective from a package, you’ll need to create an init script to start MCollective at system boot time. There are a few startup scripts in the MCollective source tree to use as starting points:
- ext/debian/mcollective.init
- ext/redhat/mcollective.init
- ext/solaris/mcollective.init
Start with these examples to tailor an appropriate startup script for the MCollective server daemon.
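For example, on a RedHat-family system, installing your tailored script might look like the following; adjust names and paths for your platform:
$ sudo cp ext/redhat/mcollective.init /etc/init.d/mcollective
$ sudo chmod 0755 /etc/init.d/mcollective
$ sudo chkconfig --add mcollective
$ sudo chkconfig mcollective on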
Creating a Package
You may want to create a package for your platform to avoid installing from source on every node. To create a package for your operating system, invoke the installer with an option to build a chroot tree for you:
$ cd marionette-collective-2.5.3
$ ./install.rb --destdir=/package/root/mcollective
No newer files.
Files:   0
Classes: 0
Modules: 0
Methods: 0
Elapsed: 0.009s
mkdir -p -m 755 /package/root/mcollective/etc/mcollective
install -c -p -m 0644 etc/facts.yaml.dist /package/root/mcollective/etc/mcollective/facts.yaml
mkdir -p -m 755 /package/root/mcollective/etc/mcollective
install -c -p -m 0644 etc/server.cfg.dist /package/root/mcollective/etc/mcollective/server.cfg
mkdir -p -m 755 /package/root/mcollective/etc/mcollective
install -c -p -m 0644 etc/client.cfg.dist /package/root/mcollective/etc/mcollective/client.cfg
...snip many more files...
Once you have done this, copy the init script you created into the package root, adjust the configuration files if necessary, and then build the package according to your operating system standards.
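As one illustration, a packaging tool such as fpm can wrap that tree into a native package; the package name and version here are examples only:
$ fpm -s dir -t rpm -n mcollective -v 2.5.3 \
      -C /package/root/mcollective .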
Testing Your Installation
After you have set up a middleware host, at least one server, and one client, you can run a test to confirm that your configuration settings are correct. At this point, the installation used for this chapter looks like the diagram shown in Figure 2-1.
Note that host geode has both the server and client software installed. It will receive requests through the middleware the same as every other server.
The ping test is a low-level query that confirms that the server node is communicating through the middleware:
$ mco ping
sunstone                time=88.09 ms
geode                   time=126.22 ms
fireagate               time=126.81 ms
heliotrope              time=127.32 ms

---- ping statistics ----
4 replies max: 127.32 min: 88.09 avg: 117.11
If you get back a list of each server connected to your middleware and its response time, then congratulations! You have successfully created a working MCollective framework.
Troubleshooting
If you didn’t get the responses we expected, here are some things to check.
Passwords
The number one problem you’ll see is that you didn’t use the correct passwords in each location. Ensure that the three passwords we created are used correctly, and replace them if you need to do so for testing purposes:
- Client Password: assigned to the user client in the /etc/activemq/activemq.xml file and used for plugin.activemq.pool.1.password in /etc/mcollective/client.cfg
- Server Password: assigned to the user server in the /etc/activemq/activemq.xml file and used for plugin.activemq.pool.1.password in /etc/mcollective/server.cfg
- Pre-Shared Key: used as the value for plugin.psk in both /etc/mcollective/server.cfg and /etc/mcollective/client.cfg
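A quick way to compare these values by eye is to print them from each file; the grep patterns here are illustrative:
$ sudo grep -E 'password|plugin.psk' /etc/mcollective/server.cfg /etc/mcollective/client.cfg
$ sudo grep authenticationUser /etc/activemq/activemq.xml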
Networking
The second most likely problem is a firewall blocking access between the server and the middleware, or the client and the middleware. Test server connectivity by going to the middleware system and confirming that you see connections to port 61613 from each of the servers:
$ netstat -a | grep 61613
tcp   0   0 :::61613                      :::*                           LISTEN
tcp   0   0 192.168.200.5:61613           192.168.200.10:58028           ESTABLISHED
tcp   0   0 192.168.200.5:61613           192.168.200.11:22123           ESTABLISHED
tcp   0   0 192.168.200.5:61613           192.168.200.12:42488           ESTABLISHED
tcp   0   0 2001:DB8:6A:C0::200:5:61613   2001:DB8:6A:C0::200:5:32711    ESTABLISHED
tcp   0   0 2001:DB8:6A:C0::200:5:61613   2001:DB8:6A:C0::200:13:45743   ESTABLISHED
If you don’t see connections like these, then there is a firewall that prevents the servers from reaching the middleware broker.
Connector Names
One potential point of confusion is that ActiveMQ defines the transportConnector very differently than MCollective's connector setting. These settings will not match.
In the MCollective configuration files for the server and client, the connector should indicate activemq, like so:
connector = activemq
plugin.activemq.heartbeat_interval = 30
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.net
This tells MCollective that it is communicating with ActiveMQ. MCollective always uses the STOMP protocol when connecting with ActiveMQ, but this is not listed here.
In the ActiveMQ configuration, you don't mention MCollective but instead tell the transportConnector to provide STOMP protocol transport using the New I/O (NIO) Java library. (We'll cover what this means in “Detailed Configuration Review”.)
<transportConnectors>
  <transportConnector name="stomp+nio" uri="stomp+nio://[::0]:61613"/>
</transportConnectors>
Warning
When doing searches on the Internet, you may find references to a stomp connector. This connector was deprecated in MCollective 2.2.3 and removed in 2.3. Always use the native activemq and rabbitmq connectors.
[1] Cowboys and cowgirls both shoot from the hip.
[2] This may be fixed; check Improvement MCO-320.