chitopunk: sorry a bit late, but for master/slave to work you need the uri to look like this: uri="masterslave:(tcp://activemq1.example.net:61616,tcp://activemq2.example.net:61616)"
the master is listed first, the slave second in BOTH configs.
without that you don't have master/slave, you only have basic peering.
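For context, a sketch of where that `uri` goes, assuming it is the `networkConnector` attribute in each broker's `activemq.xml` (the hostnames are the example ones from the discussion, not anyone's real brokers):

```xml
<!-- activemq.xml on EACH broker: master URI first, slave second, in both configs -->
<networkConnectors>
  <networkConnector name="to-peer"
      uri="masterslave:(tcp://activemq1.example.net:61616,tcp://activemq2.example.net:61616)"
      duplex="true"/>
</networkConnectors>
```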
stresler has quit
FriedBob has quit
alcy joined the channel
alcy_ joined the channel
mrvisser joined the channel
ddevon joined the channel
ddevon has quit
mrvisser has quit
mrvisser joined the channel
saurajeetd joined the channel
mrvisser_ joined the channel
mrvisser has quit
alcy_ has quit
alcy has quit
alcy joined the channel
bato has quit
gmcwhist_ has quit
zleslie has quit
zleslie joined the channel
gmcwhistler joined the channel
alcy_ joined the channel
ayaz joined the channel
inecas joined the channel
ale_p joined the channel
alcy__ joined the channel
charlieS has quit
charlieS joined the channel
stetho
Hi all. I have only set up MCollective for the first time this week. It was working perfectly until I came in today. It's installed on 24 servers but any queries return 843 responses. I've discovered that this is because Server28 is registered 821 times but I can't figure out how this happened or how to undo it. Any pointers?
inecas has quit
jantman joined the channel
alcy___ joined the channel
ale_p has quit
t0m
stetho: Server28 will be running 821 mcollective processes, at a guess
and 'how it happened' -> I guess your init script is broken, and puppet or chef or whatever started it 821 times (as the init script's status action is broken?)
'how to undo it' -> kill the extra processes, fix the init script..
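A hedged sketch of that diagnosis and cleanup, assuming the daemon is named `mcollectived` and the pidfile lives at `/var/run/mcollective.pid` (both vary by distro and packaging):

```shell
# Diagnose the duplicate-daemon problem: count running mcollectived
# processes. A healthy node reports 1; stetho's Server28 would report 821.
count=$(pgrep -c -f mcollectived || true)
echo "mcollective daemons running: ${count:-0}"

# To recover (run as root -- destructive, so shown as comments only):
#   pkill -f mcollectived           # kill every copy
#   rm -f /var/run/mcollective.pid  # clear any stale pidfile
#   service mcollective start       # start exactly one daemon
#   service mcollective status      # confirm the status action now works
```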
ale_p joined the channel
stetho
t0m: The 'how it happened' bit makes sense, except that it would have happened on the other servers too, wouldn't it? But your diagnosis is correct - I was looking at rabbitmq being the culprit, not the actual server.
stresler joined the channel
t0m
depends :)
someone may have echo '' > /var/run/mcollective.pid ; chattr +i /var/run/mcollective.pid on one box (or similar)
and given a broken init script, that'd cause that behavior..
I mean, yeah, I dunno why one and not another, but it's totally possible :)
f13o joined the channel
stresler has quit
f13o has quit
alcy____ joined the channel
chitopunk has quit
ale_p has quit
stetho
t0m: I can't see anything immediately obvious. At least I know not to blame RabbitMQ if it happens again. I'll keep digging.
ale_p joined the channel
mrvisser_ has quit
mrvisser joined the channel
mrvisser has quit
inecas joined the channel
alcy____ has quit
alcy__ has quit
alcy has quit
alcy___ has quit
alcy_ has quit
jantman has quit
ale_p has quit
FriedBob joined the channel
saurajeetd has quit
[o__o] joined the channel
ircuser-1 has quit
jantman joined the channel
gmcwhist_ joined the channel
jobewan joined the channel
ircuser-1 joined the channel
chitopunk joined the channel
stresler joined the channel
Wolfspyre joined the channel
vladgh joined the channel
mrvisser joined the channel
ale_p joined the channel
ale_p has quit
ale_p joined the channel
ramindk has quit
mrvisser has quit
[o__o] joined the channel
inecas has quit
ramindk joined the channel
GitHub189 joined the channel
GitHub189
[mcollective-puppet-agent] jantman opened pull request #31: MCOP-67 make runall work with compound filters (master...MCOP-67) http://git.io/Gi5TqQ
GitHub189 has left the channel
jrhett joined the channel
jrhett has quit
ddevon joined the channel
jrhett joined the channel
ayaz has quit
jrhett has quit
jrhett joined the channel
gmcwhist_ has quit
ale_p has quit
mrvisser joined the channel
mrvisser has quit
vladgh has quit
jrhett has quit
jrhett joined the channel
jrhett has quit
Zipad joined the channel
jrhett joined the channel
Zipad has quit
londo_ has quit
londo__ joined the channel
ddevon has quit
chitopunk
Hi, I want to know if the following scenario is feasible. I have 2 locations: I have set up an activemq cluster for location 1, and a single activemq server for location 2. All mcollective server configs in location 1 list the hosts in the following order.. plugin.activemq.1.host = location1-server1, plugin.activemq.2.host = location1-server2, plugin.activemq.3.host = location2-server1, same way for the location2 servers plugin.
jantman has quit
then if I bring activemq down for location2, all servers connect to the location1 ones, good, but when the location2 activemq returns, the servers are still connected to the location1 ones..
is there a way to tell the servers,.. check if your activemq.1.host is up, if so connect to it..?
like a timeout stuff
otherwise I have all my servers in location2 connected to location1 activemq
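For what it's worth, the raw ActiveMQ failover transport has an option that does exactly this: `priorityBackup=true` (ActiveMQ 5.6+) makes a client fall back to the earlier-listed URI as soon as it is reachable again, and `randomize=false` preserves the listed order. Whether your MCollective connector version passes these transport options through from the `plugin.activemq.pool` settings is worth checking in its docs; as a sketch, the equivalent raw URI (using chitopunk's hostnames) would look like:

```
failover:(tcp://location1-server1:61616,tcp://location1-server2:61616,tcp://location2-server1:61616)?randomize=false&priorityBackup=true
```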