#rancher

      • RancherBot
        <longwuyuan> did you use RKE to create the cluster
      • <longwuyuan> or through the Rancher GUI?
      • <james9074> @vincent (or anyone) Any word on how long we can expect 1.x to be 'formally' supported by the core contributor team (at least from a patching perspective, I'd assume new features will stop soon if not already)?
      • <james9074> I'm sure that it's along the lines of "as long as people keep pushing fixes" - but I wasn't sure if talks have been had about the future strategy of the 1.x platform.
      • <vincent> At least until June 2019, https://rancher.com/support-maintenance-terms/
      • <james9074> Fantastic! That was the page I was looking for :slightly_smiling_face:
      • <james9074> :star: Thanks for the quick response, you're the hero we have but don't deserve :star:
      • <jun.shen.neo> Is it possible to install RancherOS on Dell R720 server hardware?
      • <longwuyuan> @jun.shen.neo https://rancher.com/docs/os/v1.x/en/ . what is the CPU on that?
      • <jan.nylund> I found a Hetzner node-driver yesterday. Can report it works nicely. \o/
      • <jan.nylund> Only downside is the lack of firewall as a service. And load balancer as a service.
      • <jan.nylund> I set up a cluster with Ubuntu and Longhorn runs nicely.
      • <rsredsq> Hello colleagues,
      • <rsredsq> Has anyone seen this issue before? I've found issue https://github.com/rancher/rancher/issues/13314 but it doesn't seem to be resolved
      • sbarbact
        Hi guys, is it possible to have a custom haproxy.cfg in Rancher 2.x? In Rancher 1.6 it was possible to do it from the UI
      • jeffreylevesque
        when I run my Dockerfile manually, everything is good
      • but if its corresponding docker-compose.yml is executed via Rancher, it says "No such file or directory"
      • specifically, the entrypoint file cannot be found
      • RancherBot
        <jan.nylund> Trying to run the Rancher Library app for EFK. Turns out elasticsearch fails to start with warning on virtual memory. There’s a sysctl initcontainer that should supposedly handle this, but for some reason unknown to me, this fails. Any suggestions?
      • <jan.nylund> Error is `[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]`
      • <jan.nylund> but that’s exactly what the sysctl container is supposed to handle.
      • <jan.nylund> Seems I can’t get any logs out of that init container; I wonder how to debug it.
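For context on the EFK thread above: the standard workaround the chart is trying to apply is a privileged init container that raises `vm.max_map_count` on the host before Elasticsearch starts. The sketch below is illustrative, not the Rancher Library chart's actual manifest, and the pod/container names are made up. Init container logs can usually be retrieved with `kubectl logs <pod> -c <init-container-name>`; a common reason this pattern fails silently is a security policy that refuses `privileged: true`.

```yaml
# Sketch of the sysctl init-container pattern (hypothetical names, not the
# chart's real manifest).
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-example
spec:
  initContainers:
    - name: init-sysctl
      image: busybox
      # Raise the host's vm.max_map_count before Elasticsearch starts.
      command: ["sysctl", "-w", "vm.max_map_count=262144"]
      securityContext:
        privileged: true   # needed to write host sysctls; often blocked by policy
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
```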
      • <itihoitiho> Rancher GUI, in vSphere
      • <itihoitiho> I updated Rancher to 2.0.3 and the master nodes (etcd+control) don't work. I receive this error: This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready. Exit status 1, W0623 22:45:31.463123 29098 factory_object_mapping.go:423] Failed to download OpenAPI (an error on the server ("Error: 'waiting for cluster agent to connect'\nTrying to reach: 'https://10.0.152.117:6443/swagger-2.0.0.pb-v1'") has prevented the request from succeeding), falling back to swagger Error from server (InternalError): an error on the server ("Error: 'waiting for cluster agent to connect'\nTrying to reach: 'https://10.0.152.117:6443/api'") has prevented the request from succeeding
      • <longwuyuan> paste commands, screenshots, logs, configs
      • jeffreylevesque
        @longwuyuan, almost got rancher working in AWS
      • all my containers are healthy except one
      • big progress
      • my entrypoint fails when executed via Rancher, but not if I run the container manually - https://github.com/jeff1evesque/machine-learnin...
      • RancherBot
        <longwuyuan> the compose file, the screenshots, the logs are needed to comment
      • jeffreylevesque
        could it be a problem with how I set up my `volumes`? - https://github.com/jeff1evesque/machine-learnin...
      • RancherBot
        <longwuyuan> manually launch a container from a shell prompt and check if that path exists
      • jeffreylevesque
        yup manually works
      • RancherBot
        <longwuyuan> manually means: save the dead container as an image and check
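A hypothesis worth checking for the entrypoint problem discussed above: if the compose file bind-mounts a host directory over the path where the image's entrypoint script lives, the script is hidden at runtime even though the bare image runs fine, which matches the "works manually, fails via Rancher" symptom. (CRLF line endings in the entrypoint script are another classic cause of this exact error.) A hypothetical sketch; the paths are illustrative, not taken from the linked repo:

```yaml
# Hypothetical docker-compose sketch: the bind mount shadows the image's files.
version: '2'
services:
  app:
    build: .
    entrypoint: /app/entrypoint.sh   # baked into the image at build time
    volumes:
      # Mounting a host directory over /app hides everything the image put
      # there, including entrypoint.sh -> "No such file or directory".
      - ./src:/app
```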
      • <jan.nylund> I’m playing with traefik again. Not really sure how this is supposed to work. With the default kube-ingress, I can route traffic into whatever node and internal routing will ensure it’s delivered to the proper workload. With traefik, it seems the intention is to run a single instance, which means only one of my nodes will be listening to incoming traffic. Scaling up the number of pods just makes it go insane. :slightly_smiling_face:
      • <jan.nylund> the traefik catalog info says `You are deploying the chart to a cluster with a cloud provider capable of provisioning an external load balancer (e.g. AWS or GKE)`
      • <jan.nylund> which I of course don’t have in hetzner.
      • <jan.nylund> so, should I drop Traefik and stick with ingress-nginx, or is there a (somewhat easy) way to make traefik behave nicely while listening on multiple nodes as ingress (meaning I can rely on DNS RR at least)?
      • <longwuyuan> helm chart or from the traefik website?
      • <longwuyuan> somebody here got traefik working for ACME, but they put an LB outside using traefik in lieu of nginx-ingress
      • <jan.nylund> yeah. I get it working too, but it’s only deploying a single pod so only listening in one of the nodes.
      • <jan.nylund> I would of course not want to depend on a single ip as incoming.
      • <jan.nylund> yeah. from the helm chart documentation, seems adding replicas will do just that: compete with each other on Let’s Encrypt calls. :S
      • <jan.nylund> back to ingress-nginx. once more.
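An aside on the "listen on every node" problem: without a cloud load balancer, the usual bare-metal pattern is to run the ingress controller as a DaemonSet with host ports, so every node's IP answers on 80/443 and DNS round-robin can spread traffic across them. A hedged sketch of the idea follows; the names and image tag are illustrative, and this is not the stable/traefik chart's manifest.

```yaml
# Sketch: one ingress controller pod per node via DaemonSet + hostPort,
# so DNS round-robin across node IPs works. Names/image are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik-ingress
  template:
    metadata:
      labels:
        app: traefik-ingress
    spec:
      containers:
        - name: traefik
          image: traefik:1.6
          ports:
            - containerPort: 80
              hostPort: 80     # bind directly on each node
            - containerPort: 443
              hostPort: 443
```

That said, the ACME contention jan.nylund describes is real: Traefik 1.x instances only coordinate certificate issuance through a shared KV store (Consul, etcd), so without one, multiple replicas will race on Let's Encrypt, and falling back to ingress-nginx is a defensible choice.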
      • <cam.ilker> Hi, I have a single node deployment in Google Cloud and can provision clusters in GC without any issues. However, I face an issue when I try to deploy a cluster in DigitalOcean: Failed to start [rke-worker-port-listener] container on host [167.99.129.xxx]: Error response from daemon: mkdir /var/run/docker/libcontainerd/containerd/69f1b72f58c5473158fa6a9fe770cb3c6af42dc23bec441b0e42244d58d2936d: file exists. Any comments appreciated, thanks.
      • <jan.nylund> I’ve deployed lots of test setups in DO without issues (though I was using the Rancher UI)
      • nostrora
        Has anyone had success using the Traefik ingress in Rancher 2?
      • I need Traefik to host an SSL website with Rancher 2
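For the SSL question: Rancher 2.x ships nginx-ingress by default, and a plain Kubernetes Ingress plus a TLS secret works with it without anything controller-specific (Traefik's Kubernetes provider can generally read the same `tls` section, though its certificate handling is version-dependent). A minimal sketch with hypothetical host, secret, and service names:

```yaml
# Minimal TLS ingress sketch; host, secret, and service names are hypothetical.
# The secret would be created beforehand, e.g.:
#   kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```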
      • RancherBot
        <cam.ilker> Also using the Rancher UI
      • <rocco.langeweg> Using VMware ESXi, what is the best OS to load for creating a cluster in Rancher 2?
      • <derek.horn> What is the best way to mount an NFS share on a RancherOS node so I can share those volumes with multiple pods?
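On the NFS question: the node OS largely doesn't matter, because kubelet performs the mount. The common approach is to declare the export as an NFS PersistentVolume with `ReadWriteMany` access and let pods share it through a PersistentVolumeClaim. A sketch with placeholder server address and export path (and since EFS exposes an NFS endpoint, the same pattern is one answer for using EFS with Rancher 2.0 as well):

```yaml
# Sketch: share one NFS export across multiple pods via a PV/PVC pair.
# Server address and export path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # allows many pods to mount the same share
  nfs:
    server: 10.0.0.10
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-share-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind to the static PV above, not a dynamic class
  resources:
    requests:
      storage: 10Gi
```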
      • <jesus> recommendations on using EFS with Rancher 2.0?
      • <itihoitiho> boot2docker
      • <itihoitiho> rancheros-vmware.iso works but I don't like it
      • <andy> anyone familiar with Longhorn and willing to help prevent me from burning my cluster to the ground?