#joyent

      • ircretary joined the channel
      • marsell has quit
      • trentm has quit
      • knite joined the channel
      • sigusr1 has left the channel
      • knite
        anyone here using triton? just got access, and I'm a bit stumped on how to do service discovery. let's say I want to spin up a cluster of etcd/consul/etc, how should those nodes find each other? and when I want other containers to use etcd/consul for service registration and discovery, how should they do that?
      • rmustacc
        If Triton didn't exist, what would you do for service discovery?
      • knite
        background: coming from a setup where I was running consul and registrator on every host, so my services would automatically register themselves, and passing consul to all other containers as the dns search domain.
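(A minimal sketch of the per-host setup described above; image names, addresses, and ports are assumptions for illustration, not details from this conversation:)

```
# On each host: a consul agent joined to the cluster...
docker run -d --name consul --net=host \
  consul agent -retry-join 10.0.0.1 -client 0.0.0.0

# ...plus registrator, which watches the local Docker socket and
# registers each container's published services in consul:
docker run -d --name registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://localhost:8500

# Other containers then resolve services through consul DNS:
docker run -d --dns 10.0.0.1 --dns-search service.consul myapp
```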
      • rmustacc
        And in the old setup you used docker to set the dns search domain and presumably also the resolvers?
      • knite
        rmustacc: that's right. I bootstrapped consul by hardcoding host IPs. with Triton, I don't know which host I'll land on, so I'm not sure how to bootstrap.
      • rmustacc
        knite: Well, there isn't a conventional host ip. Each container has its own IP, so you can just refer to things with that.
      • I guess you'd manually bootstrap that, set it to always restart, and then use those for everything else.
      • knite
        rmustacc: right, every container has its own IP, but I don't know the IP prior to spinning up the container. I suppose I could spin up the first consul, use that IP to spin up the others. but I can't push a new config to node #1 so that it knows about the rest of the cluster, because when I bounce node #1 with the new config, there's no guarantee it retains the previous IP.
      • so in effect, nodes #2 and #3 see it come up as #4, and we have basically the same bootstrapping issue.
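(The chicken-and-egg problem being described can be made concrete with a rough sketch; container names and flags here are illustrative, not from the log:)

```
# Node 1 comes up first; only then do we learn its IP.
docker run -d --name consul1 consul agent -server -bootstrap-expect 3
IP1=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' consul1)

# Nodes 2 and 3 can be pointed at node 1...
docker run -d --name consul2 consul agent -server -retry-join "$IP1"
docker run -d --name consul3 consul agent -server -retry-join "$IP1"

# ...but node 1 knows no peers, and recreating it to fix that may hand
# it a new IP, which is the same bootstrapping problem all over again.
```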
      • rmustacc
        knite: Yeah, I understand. We'd give you the same IP if the docker API had a way of letting us.
      • knite
        the goal being: a 3 or 5 node cluster where each node knows about the others, so that any node can be bounced (or crash) independently.
      • rmustacc
        The only thing I can think of in this case, which is mostly off the top of my head, is going in and manually changing the settings.
      • Creating a container with a specific IP, reserving it, etc. is all something the underlying system can do, but the challenge at the moment, unfortunately, is expressing it without extending the docker API.
      • knite
        rmustacc: are you internal @ Joyent?
      • rmustacc
        Yup
      • knite
        nice. :-)
      • rmustacc
        I worked on a bunch of the underlying network infrastructure.
      • knite
        so I can probably make the config changes in memory, and use a host volume to store data so it looks like the same node on a bounce, with the assumption that it will restart with the same IP unless there's a major problem at Joyent?
      • rmustacc
        Yes, a container that's set to always restart will keep the same IP on that restart.
      • Just not across an explicit create/destroy cycle.
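(Putting those two answers together, a hedged sketch of the workaround; the volume name and mount path are assumptions:)

```
# A restart policy plus a data volume: a bounced container keeps its IP
# and, via the volume, its consul node identity.
docker run -d --name consul1 \
  --restart=always \
  -v consul1-data:/consul/data \
  consul agent -server -bootstrap-expect 3

# 'docker restart' (or a crash-triggered restart) preserves the IP;
# an explicit 'docker rm' followed by a fresh 'docker run' does not.
docker restart consul1
```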
      • knite
        hm...so let's say I even get my core cluster up. now how should my non-core containers communicate with it? I can't use --dns-search, so is my only option hardcoding the core service IPs into every image (or passing them as env vars, works out about the same)?
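(One way to read the env-var option, as a sketch; CONSUL_IP and myapp are made-up names:)

```
# Capture the core cluster's address once it exists...
CONSUL_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' consul1)

# ...and inject it into every non-core container at run time:
docker run -d -e CONSUL_IP="$CONSUL_IP" myapp
# The app then talks to http://$CONSUL_IP:8500 for registration/discovery.
```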
      • knite has quit
      • kash
        rmustacc: can containers use DHCP?
      • wondering if you can't just send commands to dnsmasq to specify a static IP.
      • ed209 has quit
      • ed209 joined the channel
      • rmustacc
        kash: In what context?
      • kash
        in the context described above, Docker can't assign a static IP
      • but if you can make Docker containers use DHCP then they'll get an IP through that channel instead; dnsmasq allows CLI tools to set static IPs for devices
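(The dnsmasq mechanism being floated here, sketched with invented interface, range, and MAC values:)

```
# dnsmasq can pin a fixed IP to a known MAC address over DHCP:
dnsmasq --interface=br0 \
  --dhcp-range=10.0.0.100,10.0.0.200,12h \
  --dhcp-host=02:42:ac:11:00:02,10.0.0.50
```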
      • rmustacc
        kash: No, the underlying network virtualization doesn't do DHCP for auto-learning.
      • Well, doesn't support DHCP or auto-learning / updating mappings.
      • kash
        that's how i designed jentu to work.
      • edwardb has quit
      • it assigns static IPs during boot via PXE
      • it's actually not that easy to get all the clients to play along with iBFT
      • rmustacc
        Well, I forgot that you could actually assign a docker guest a mac address.
      • If you could do that, there's no reason you couldn't use that to refer to an existing IP/mac combo from the API perspective.
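(A sketch of that combination, with a hypothetical MAC value: the MAC is pinned through the standard docker run flag, while the MAC-to-IP mapping itself would have to live on the platform side:)

```
docker run -d --mac-address 02:42:ac:11:00:02 \
  consul agent -server
```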
      • kash
        seems like a potential solution then
      • rmustacc
        And not have to bother with running a DHCP server.
      • Yup, something to look at.
      • kash
        DHCP isn't so bad :-)
      • especially on internal bridges
      • rmustacc
        Sure, we use it for the physical machines.
      • Bit more annoying for learning where to send stuff on overlay networks and debugging duplicate address assignments, etc.
      • kash
        duplicate address assignments?
      • rmustacc
        Well, debugging a bug in your configuration where you've managed to create a duplicate assignment, or you manually plumbed something up hoping it wasn't in DHCP, or you think something is no longer in use, etc.
      • kash
        i never do static assignments via DHCP. i pass out a temporary address in an unimportant subnet and then they retrieve their actual IP
      • pmooney has quit
      • rmustacc: here's a cute little tour of the software. though it needs a bit of work, it gives a good idea of how our UI works http://jentu-networks.com/tour/
      • rmustacc
        Looks interesting, though I have to admit, I learned the hard way to not go back to centralized storage. ;)
      • But, seems an interesting approach to VDI.
      • kash
        it's not so much centralized because you can scale horizontally to have a large # of storage nodes
      • i don't think it's VDI. we use physical workstations with no hypervisors.
      • rmustacc
        Sure, I'm not well versed in that space, it serves a similar purpose.
      • kash
        i've got lots of customers running windows and there's no special tunings - their networks are latency-free
      • ancoron_ joined the channel
      • rmustacc
        And we do something similar with DHCP and pxe booting in SDC itself.
      • Just not for individual containers, that's a different story.
      • kash
        it was a part of my goal to make windows run off of ZFS
      • :)
      • rmustacc
        And well, you can scale it out horizontally with multiple nodes, but at the end of the day you're talking block storage, and if that server goes down, it's going to take down everything.
      • pmooney joined the channel
      • kash
        that one server can go down and only take down whoever was working on it. and the other servers hold identical OS data so you can just bring them up on that.
      • rmustacc
        And most traditional storage clustering was built in ignorance of CAP.
      • We learned that the hard way when we built it at Sun.
      • kash
        it's not traditional clustering ;)
      • images are only sent between the nodes when required
      • since most data on the platform is meant to be destroyed at boot-time, we only preserve updates when a user clicks a button. then it creates a new ZFS snapshot for machines to provision from on next boot.
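(A rough sketch of that snapshot-on-demand flow in ZFS terms; the pool and dataset names are invented:)

```
# When the user clicks the button, capture the approved golden image:
zfs snapshot tank/images/win@approved-20150601

# Each workstation boots from a cheap clone of that snapshot, which can
# be destroyed and recreated at the next boot:
zfs destroy tank/boot/ws01
zfs clone tank/images/win@approved-20150601 tank/boot/ws01
```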
      • rmustacc
        So are you storing all working data on local storage on the workstation, then, and the server only provides boot media?
      • kash
        no, you use different techniques for different client OSes
      • for linux i'd use ansible or cloud-init to customize the environments at boot
      • windows has Active Directory
      • folder redirection to SAN means user data is safe, they manage it separately from boot volumes
      • rmustacc
        Oh.
      • ancoron has quit
      • So all the data the user cares about is sitting on a SAN somewhere.
      • kash
        i consider the boot volume 'unsafe' and 'volatile', ready to be destroyed on reboot. if a user infects their system.. reboot
      • rmustacc
        So user data is covered by all the pros and cons of a SAN.
      • kash
        same with Linux VMs etc. the OS is in the volume, but i've got SAN data separate, so when i do updates i clone to a new physical/virtual machine, test/execute the updates, do the jentu snapshot, and then reboot prod at night into the new environment
      • right but you can also use local SSD or HDD
      • rmustacc
        Gotcha. Okay, that clarifies the split responsibilities, which wasn't clear initially.
      • kash
        :D
      • rmustacc
        So, then basically the failure domain of the image node is just those machines connected to it and they get an unexpected reboot?
      • Ignoring the user data.
      • kash
        right
      • there are ways to configure the clients in an HA fashion, but i don't care for it, and most users were fine with 'unexpected reboots' since the servers have stayed up for 180+ days at a time
      • the Microsoft storage crap has more problems, and some of them are absurd, like blue-screening and rebooting randomly, and then our MS guy has to go there and do some magic i'll never want to understand
      • right now i'm using debian as the base OS with zfs on linux but i've been meaning to set up an OmniOS dev environment to figure out the iscsi commands and roll it up into a basic install script
      • i have an interesting issue netbooting KVM where the network stack times out after a 2nd or 3rd reboot without destroying the VM instance in-between
      • it's one of those things that will never get fixed only because i don't know how to debug it
      • rmustacc
        Linux KVM?
      • kash
        yeah
      • using virt-manager
      • and bridging
      • rmustacc: i love this http://jentu.biz/
      • it's the only company that shares our name
      • and their logo is just so cute LOL