#weavenetwork

      • PaulCape_ joined the channel
      • bencevans joined the channel
      • fons joined the channel
      • fons has quit
      • dpw joined the channel
      • seti_k joined the channel
      • seti_k
        hello
      • fons joined the channel
      • fons
        Hi seti_k
      • seti_k
        hi, I wanted to ask if there is a possibility to get Weave up & running after reboot
      • or is it always a manual task at the moment?
      • fons
        seti_k: With the exception of Amazon’s ECS (which uses Upstart) we don’t provide integrations with boot systems.
      • seti_k: what are you using?
      • seti_k
        I'm testing on Fedora nodes at the moment
      • thinking about how to get the network up after restarts or crashes
      • and how to use the env in web frontends or in cron jobs..
      • fons
        seti_k: I would recommend you to write a Systemd service https://fedoraproject.org/wiki/Packaging:System...
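For reference, a minimal sketch of such a unit (the binary path, launch arguments, and unit name here are assumptions, not taken from the weave repo; check the weave documentation for the exact invocation on your version):

```ini
# /etc/systemd/system/weave.service — hypothetical sketch
[Unit]
Description=Weave network
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/weave launch
ExecStop=/usr/local/bin/weave stop

[Install]
WantedBy=multi-user.target
```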
      • seti_k
        Yes, I don't think this is a problem, but I'm not sure how to set the environment for the Docker API proxy globally..
      • fons
        I see, how do you start your containers?
      • seti_k
        a little different ;)
      • I'm creating them on the command line, in most cases
      • managing (restart, stop, start on demand) happens from the Fedora Cockpit
      • and some things get triggered via cron.. which is also a shell
      • fons
        It seems that running “eval $(weave env)“ in the command line and in cron should suffice
      • (before running “docker run” to spawn your containers)
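In practice that pattern looks something like this (a sketch, assuming weave and docker are on the PATH; the image name is a placeholder):

```sh
# Interactive shell or startup script: point the Docker client
# at the weave proxy, then start containers as usual.
eval $(weave env)
docker run -d --name myapp myimage

# Crontab entry (sketch): cron jobs get a minimal environment,
# so the eval has to happen inside the job's own shell.
# 0 * * * * /bin/sh -c 'eval $(weave env) && docker run --rm myjobimage'
```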
      • seti_k
        Yes, this would work, but I don't think it will in the web frontend
      • so I thought about whether there is a global setting or something like that.
      • fons
        seti_k: what web frontend?
      • seti_k
        cockpit
      • fons
        (I am not that familiar with Fedora so excuse me if that’s obvious for fedora users)
      • seti_k
        it's not the problem
      • I think it won't work with any general web interface or management tool for Docker, or am I wrong?
      • if there isn't explicit support for Weave.
      • fons
        Have you tried setting DOCKER_HOST (i.e. eval $(weave env)) when cockpit starts? (I would bet there’s a systemd service for it)
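One way to do that is a systemd drop-in for the Cockpit service (a sketch: the unit name and the proxy address are assumptions; run `weave env` on your host to see the real value):

```ini
# /etc/systemd/system/cockpit.service.d/weave.conf — hypothetical drop-in
[Service]
Environment=DOCKER_HOST=tcp://127.0.0.1:12375
```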
      • seti_k
        Hm, I could try this.
      • fons
        If cockpit honors DOCKER_HOST it should work
      • give it a try and let me know if it worked
      • ofthecure joined the channel
      • seti_k
        Hm, and what about containers using "restart=always"?
      • I think after a reboot they get started earlier than the Weave network comes up
      • So, now the feedback:
      • there is nothing to do with Cockpit; it seems to take all the envs from where the container was started.
      • with the unit file example in the weave repo, https://github.com/weaveworks/weave/blob/master...
      • I hadn't found this before.. this works very well on startup
      • and containers that ran with Weave before the reboot get a Weave IP after reboot, so it seems to work very nicely.
      • and with the fast datapath I get acceptable performance.. looks really nice now
      • fons
        ok, great, that aligns with what we would expect. We recently made some improvements to make Weave re-attach containers to the network on restarts, see https://github.com/weaveworks/weave/pull/1518
      • (which is part of Weave 1.2.0)
      • seti_k: Can you elaborate on your use case? Particularly, why wasn’t performance acceptable before, and why is it just “acceptable” now?
      • seti_k: Also, regarding cockpit, if you are starting containers with it (i.e. not using it just for monitoring) you will need to set its DOCKER_HOST at start up to point to the weave proxy.
      • seti_k
        From Cockpit there isn't an option to create new containers, only to manage existing ones: stop / restart / start
      • about performance:
      • if I lose about 95% performance it's not acceptable. I tested it with qperf, so I haven't really tested in a "near production environment"
      • at the moment I'm still losing about 40%, don't know exactly why, because it was described as nearly native, but it's a lot better now
      • the use case is to cluster some hosts for Tomcat clustering with cache / session replication
      • and connections to Solr servers or clusters on some other hosts
      • sorry for my typos today..
      • dpw
        Hi seti_k
      • You are losing 40% network throughput compared to host-host performance?
      • seti_k
        Yes.. one moment.. I'll retest it
      • [root@fedsrv01 ~]# qperf fedbox2 tcp_bw tcp_lat
      • tcp_bw:
      • bw = 228 MB/sec
      • tcp_lat:
      • latency = 90.9 us
      • [root@fedsrv01 ~]# docker run --rm -ti arjanschaaf/centos-qperf qperf1 tcp_bw tcp_lat
      • tcp_bw:
      • bw = 108 MB/sec
      • tcp_lat:
      • latency = 96 us
      • these are my two tests
      • is there something wrong?
      • the two boxes are both on VirtualBox on the same host.
      • the latency seems really good.
      • dpw
        How is the CPU use on that host while the test runs? With fast datapath, weave CPU use is lower than it was, but it is still higher than host networking. Is it possible that the CPU is the bottleneck in the weave test?
      • seti_k
        on the host - host test I got a max CPU usage of 36%
      • on the docker - weave - docker test I got a max CPU usage of about 45%
      • on the host
      • do you think I'd get different values in a different environment?
      • I can push something to the data center to test it.
      • but my first tests are always on my workstation
      • djx joined the channel
      • dpw
        If you are testing between virtual machines on a single host, then the virtual network may be passing 64KB segments between the VMs (I'm not sure whether this is true for virtualbox or not). So the vxlan might be having a bigger impact there than it does on a real network. Results on a real network might be different.
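To put a rough number on the encapsulation cost mentioned here: VXLAN adds about 50 bytes of headers per packet (outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN 8). A back-of-the-envelope sketch (assuming standard header sizes, no VLAN tags or IP options):

```python
# Rough per-packet overhead of VXLAN encapsulation (sketch; ignores
# checksums, VLAN tags and IP options).
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN  # 50 bytes

def payload_fraction(mtu: int) -> float:
    """Fraction of an outer frame that carries inner payload."""
    return (mtu - overhead) / mtu

print(overhead)                           # → 50
print(round(payload_fraction(1500), 3))   # → 0.967
```

On a standard 1500-byte MTU the header cost is only a few percent, which is why segmentation behaviour (e.g. 64KB segments between VMs being split into many encapsulated packets) tends to dominate the measured gap.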
      • seti_k
        ok, I'm trying ;)
      • but what is a "real network" today? ;) most of them are also virtual
      • dpw
        We've had some other reports where there is a big gap between host-host throughput and weave fast datapath throughput. We're looking into them.
      • seti_k
        I'll get some results in a few minutes..
      • hm ok, same thing in the DC
      • 2 fedora boxes on 2 different hardware machines
      • host - host:
      • qperf fedbox02 tcp_bw tcp_lat
      • tcp_bw:
      • bw = 777 MB/sec
      • tcp_lat:
      • latency = 115 us
      • docker - weave - docker :
      • docker run --rm -ti arjanschaaf/centos-qperf qperf1 tcp_bw tcp_lat
      • tcp_bw:
      • bw = 255 MB/sec
      • tcp_lat:
      • latency = 137 us
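As a quick sanity check of the figures posted above (plain arithmetic on the numbers already in this log, no new measurements):

```python
# Throughput ratios from the two qperf runs above.
vm_host, vm_weave = 228, 108   # MB/sec, VirtualBox test
dc_host, dc_weave = 777, 255   # MB/sec, data-center test

vm_loss = 1 - vm_weave / vm_host
dc_loss = 1 - dc_weave / dc_host

print(f"VM test: {vm_loss:.0%} loss, DC test: {dc_loss:.0%} loss")
# → VM test: 53% loss, DC test: 67% loss
```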
      • dpw
        Hmmm
      • seti_k
        but it's a lot faster than in earlier releases without the fast datapath :)
      • any hints? other testing methods? or something else I could test?
      • dpw
        I don't have anything to suggest right now. Check back tomorrow?
      • seti_k
        I'm in here tomorrow, so tell me if there is something to test ;)
      • fons has quit
      • PaulCapestany joined the channel
      • trifygri joined the channel
      • ofthecure has quit
      • djx joined the channel
      • trifygri joined the channel