Hello. I've just gotten to know hammer-cli, which is a great tool. I'd like to know: is there a script that automates the export process?
Forgot to add: I'm using the Foreman plugin.
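As far as I know there is no bundled "export everything" script; a small shell wrapper around hammer's global --output adapter is the usual approach. A minimal sketch, where the organization name and the /tmp output paths are assumptions for illustration (the echo prefix makes it print the commands rather than run them):

```shell
#!/bin/sh
# Sketch: scripting exports with hammer's global --output adapter.
# "Default_Organization" and the /tmp paths are placeholders.
set -eu

ORG="Default_Organization"
HAMMER="echo hammer"   # drop the "echo" to actually run the commands

# Dump host and repository lists as CSV, one file per resource.
for resource in host repository; do
  $HAMMER --output=csv "$resource" list --organization "$ORG" \
    > "/tmp/${resource}-export.csv"
done
```

A loop like this can run from cron to refresh the exports on a schedule; hammer also accepts --output=json for machine-readable output.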
Lud97
Hi guys, I have just installed Foreman on Ubuntu 16.04, but unfortunately reports don't work. I have already tried regenerating the certificates, but foreman-proxy no longer recognizes them.
jsherrill: how about making this a happy friday and releasing 3.3.2
Lud97
ihre: thank you for your help, it works now.
Dj_
Hello, I want to work out my infra design for laying out capsules and the master
aitrus
Dj_: What questions do you have?
Dj_
I have regions like Asia, EU, and US. Asia contains smaller regions like Singapore, HK, India, and Korea; the UK has 2 DCs and the US has 2 DCs.
Now, how should I place the master and capsules? Are there any criteria?
I will have approximately 6000 servers globally. Is 1 master sufficient?
aitrus
Capsules (Smart Proxies) can host content as well as run services like Puppet Master, which takes load off of the "master"
So the super basic answer is "yes, 1 master is sufficient - as long as your proxies run services that spread the load"
Dj_
aitrus: do you have any test data on how many capsules a master can handle? And what kind of latency can be tolerated over the WAN?
aitrus
I don't have that data handy. The project may have some recommendations, but I honestly haven't looked into them, because in my scenario the guiding factor was isolation and reduced WAN traffic
Dj_
But WAN latency still plays a role while syncing content, right?
aitrus
There is also the question of what services you run on your capsules and what content you sync there (and your sync strategy)
Dj_
I will mostly sync RHEL repos
and all capsules will run Puppet; the master will not run Puppet.
aitrus
Are you familiar with the different sync options (on demand vs background/immediate)?
Download policies is a better term
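These download policies can be switched per repository with hammer; a sketch, where the repository id is a placeholder and the echo prefix makes the command print rather than run:

```shell
#!/bin/sh
# Sketch: switching a repository's download policy with hammer.
# The repository id (1) is a placeholder; valid yum policies in Katello
# are immediate, on_demand, and background.
HAMMER="echo hammer"   # drop the "echo" to run against a real server

CMD=$($HAMMER repository update --id 1 --download-policy on_demand)
echo "$CMD"
```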
Dj_
Not much. I've read about it, but my understanding is that they mostly matter during first-time setup, or when syncing content from RHN to the Satellite master.
How does it affect capsules?
I know on demand is good in that it only syncs content when a client demands it, but I'm not sure what role capsules play.
If a client connected to a capsule demands some rpm, Satellite will download it from RHN, then send it to the capsule, and the client will get it from there?
aitrus
I believe the capsule follows the same policy. So when you sync from your master to a capsule it only sends metadata instead of all of the package content (until a client requests it)
So that will help with reducing WAN traffic during syncs
However, on demand does have some drawbacks
Dj_
But then won't it slow down when a client actually requests an rpm?
aitrus
For example, if the upstream is down you can't pull a package
But if you're using RHEL that risk is mitigated fairly well by their CDN (ignoring the recent outage, of course)
The first client to pull a package may experience some lag
on demand is also much more complicated internally, so there are more opportunities for bugs to creep in
So basically you have to decide what is more important - reduced resource usage or simplicity/stability
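A master-to-capsule sync like the one described above can also be triggered by hand with hammer's capsule subcommand; a sketch, with a placeholder capsule id:

```shell
#!/bin/sh
# Sketch: pushing content from the master to a capsule with hammer.
# The capsule id (2) is a placeholder.
HAMMER="echo hammer"   # drop the "echo" to run for real

CMD=$($HAMMER capsule content synchronize --id 2)
echo "$CMD"
```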
Dj_
Do you know whether capsule <-> master syncs use HTTP? Does qpid play a role in it?
I have seen another doc on load balancing using Pacemaker, but that's complex, and I don't have subscriptions for Pacemaker
ikonia
has the 1.15 installer changed the expected value of foreman-foreman-url?
in previous versions I set the value to include http or https
with 1.15 it doesn't seem to be happy with that
or has the way foreman::params::foreman_url gets set changed somehow?
didn't this use to get set from hostname -f?
it looks like it's being fed from lower_fqdn, which is just a lowercased ::fqdn
facter fqdn shows a valid fqdn
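If the default derived from the fqdn fact is not what you want, the URL can be pinned explicitly at install time; a sketch, with a hypothetical hostname (the echo prefix makes it print rather than run):

```shell
#!/bin/sh
# Sketch: overriding foreman_url instead of relying on the lower_fqdn
# default. foreman.example.com is a placeholder hostname.
INSTALLER="echo foreman-installer"   # drop the "echo" to apply for real

CMD=$($INSTALLER --foreman-foreman-url=https://foreman.example.com)
echo "$CMD"
```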
1.14 and 1.15 removed the --foreman-db-username and --foreman-db-password options from the installer, so how do you set the user/pass you want Foreman to use?
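One possible workaround, assuming the underlying puppet-foreman parameters are still exposed: set them through the installer's custom hiera file rather than CLI flags. A sketch that writes a local copy of the file; the real path is /etc/foreman/custom-hiera.yaml, and the credentials below are placeholders:

```shell
#!/bin/sh
# Sketch: setting the DB user/pass via custom hiera instead of the removed
# installer flags. foreman::db_username / foreman::db_password come from the
# puppet-foreman module; "foreman"/"changeme" are placeholder credentials.
HIERA="./custom-hiera.yaml"   # real path: /etc/foreman/custom-hiera.yaml

cat > "$HIERA" <<'EOF'
foreman::db_username: foreman
foreman::db_password: changeme
EOF

echo foreman-installer   # re-run the installer (echo removed) to apply
```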