I've been using weave for a month or two now (since it was announced), and we've built a nice orchestration/configuration layer in Saltstack, compatible with the fig.yml format. However, my newest servers are only communicating on networks that have been exposed via `weave expose`; I'm not getting any comms between my normal container subnets
I've banged my head against this for a day or two now, anyone have some troubleshooting tips?
My main usage pattern is to run docker containers and then issue `weave attach`
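(For context, the flow is roughly the following; the container name, image, and CIDR here are made-up examples, not from my actual setup:)

```shell
# start a plain docker container (name and image are arbitrary examples)
docker run -d --name app1 ubuntu sleep infinity

# give it a weave interface on a chosen subnet
weave attach 10.1.1.5/24 app1
```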
moonfish
what do you mean by "between subnets"?
containers can't talk *between* subnets; they can talk *within* subnets.
stephanbuys
moonfish: that should be read "container subnets", ie, the subnets specified by weave attach (network/mask)
moonfish: yeah, I'm well aware of that: 192.168.0.1/24 should be able to see 192.168.0.2/24 but not 192.168.1.1/24
moonfish
so, to be clear, you `weave attach` two containers to the same subnet, but they can't, say, 'ping' each other?
stephanbuys
moonfish: correct
moonfish
does it work when the containers reside on the same host?
stephanbuys
moonfish: that's the mystery; all of the containers are on the same host at the moment
moonfish
does the `weave ps` output look sensible?
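(For anyone following along: `weave ps` is the quick way to see what's attached where. I'm describing the output from memory rather than pasting real data:)

```shell
# list weave-attached containers on this host;
# each line shows the container id, its MAC on the weave bridge,
# and the CIDR address(es) it was attached with
weave ps
```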
stephanbuys
from what I can see yes, is it ok to paste output into the channel?
for example, 10.1.1.7 can ping .1 and .6, but not .5
moonfish
communication between containers on a single host shouldn't even reach the weave router, i.e. it's all handed directly by the kernel.
attach wireshark to the weave bridge and see what's flying across it.
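(tcpdump works for this too if you don't have a GUI; this assumes the bridge has the default interface name `weave`:)

```shell
# watch ICMP traffic crossing the weave bridge on the host,
# e.g. while running ping between two containers
tcpdump -i weave -n icmp
```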
stephanbuys
moonfish: what's strange is that I'm pretty sure this used to work, unless I'm in some strange territory with my VM/kernel
maybe, just so as not to waste anyone's time: there is no limitation wrt which network should/could be used? Anything in the same network (as per the CIDR notation) should be able to ping everything else in it and have full access to all services on the container ports?
moonfish
yep. though you need to make sure that the same network isn't used by anything else on the host.
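(A quick sanity check for that caveat; the address range here is an example matching the earlier 10.1.1.x addresses:)

```shell
# check whether any existing host interface already uses
# an overlapping address range
ip addr | grep '10\.1\.1\.'
ip route | grep '10\.1\.1\.'
```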
stephanbuys
ok, interesting, I detached and attached the networks from the container that wasn't reachable and now it works
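(The detach/re-attach dance was just the following; CIDR and container name are examples:)

```shell
# remove the address from the container's weave interface,
# then add it back
weave detach 10.1.1.5/24 app1
weave attach 10.1.1.5/24 app1
```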
I'm wondering if I'm running into a race condition
it's very likely that saltstack would issue multiple `weave attach` commands at once
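(If concurrent attaches turn out to be the problem, one workaround, sketched here outside of Saltstack with made-up names and addresses, is simply to serialize them:)

```shell
# attach containers one at a time instead of in parallel,
# so each `weave attach` completes before the next starts
weave attach 10.1.1.5/24 app1
weave attach 10.1.1.6/24 app2
weave attach 10.1.1.7/24 app3
```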
moonfish
might be a race condition in the kernel. wouldn't surprise me :)
stephanbuys
so weave really only plays the role of the router between hosts? Other anomalies should be seen in the context of the OS/kernel?
moonfish
correct.
stephanbuys
aaaaah
moonfish, thanks for listening and helping me reason through this, I appreciate it
hesco
stephanbuys: 'weave' is actually two components: the weaver SDN router (which manages the bridge across docker hosts) and the weave bash script (which provides helper functions, mostly wrapping the `ip` utility, to create and manage bridges, subnets and virtual interfaces).
the bash script is fairly short and can be deconstructed in an afternoon of careful reading. That will provide much insight into what is happening when you launch or attach or expose.
I imagine the latest version is more complex with the addition of the dns component and I have not reread the script since that release, yet.