#docker

      • agc93
        Botanic: that's building a local Dockerfile using the same tag as the image you just pulled. Is that what you're going for?
      • Botanic
        that's the idea
      • that way it only builds what it needs and pulls the rest
      • else the whole build compiles a ton of stuff every time
      • ah looks like --cache-from peragro/peragro-at does what i want :D
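For readers following along, the `--cache-from` flow mentioned above looks roughly like this (the image name comes from the chat; the local tag is an assumption):

```shell
# Pull the previously published image so its layers exist locally,
# then tell the build to reuse those layers as a cache source.
docker pull peragro/peragro-at
docker build --cache-from peragro/peragro-at -t peragro/peragro-at .
```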
      • pandeiro
        I'm still trying to figure out the One True Way of getting private info (keys, passwords, secrets, etc) into a Docker image at build time so that eg private dependencies can be retrieved during the build
      • I have been using `ARG` but I hate that it busts the build cache
      • Meaning I have to download >250MB of dependencies every time I build
      • agc93
        pandeiro: there is definitely no One True Way, but I would also strongly discourage adding them at build time..
      • pandeiro
        agc93: how would one retrieve dependencies that are not public? Wait until runtime?
      • agc93
        Well, are we talking non-public dependencies or actual private secrets?
      • pandeiro
        We're talking about auth credentials that are needed to retrieve those private dependencies
      • agc93
        Ah, that's a bit tricky yeah. Docker's not really built for that use case since builds are intended to be host-independent. One option is to use the new multi-stage builds, include the private creds in the first stage image (where you fetch your dependencies), then add those dependencies to another image (this time without the creds)
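A minimal sketch of the multi-stage approach agc93 describes; all names, the URL, and the token mechanism are hypothetical:

```dockerfile
# Stage 1: the credential enters only this throwaway stage.
FROM alpine:3.16 AS fetch
ARG DEPLOY_TOKEN
RUN wget --header="Authorization: Bearer ${DEPLOY_TOKEN}" \
        -O /tmp/deps.tar.gz https://example.com/private/deps.tar.gz \
 && mkdir -p /deps && tar -xzf /tmp/deps.tar.gz -C /deps

# Stage 2: copy only the fetched artifacts; the final image never sees the token.
FROM alpine:3.16
COPY --from=fetch /deps /opt/deps
```

Note that the `ARG` still enters the first stage's cache key, which is exactly the cache-busting pandeiro complains about.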
      • pandeiro
        agc93: 1) That's exactly what I'm doing (multi-stage), but the use of build args invalidates the cache and I download all deps every single build; 2) Can you explain what you mean by builds being host-independent?
      • agc93
        pandeiro: well say for example they allowed you to use environment variables from the host during the build? Now that build *only* works on your host (where you've defined those variables), breaking host-independence
      • the idea has always been that builds are host-independent so you can use the same Dockerfile (obviously including args) on any host and get the same result
      • pandeiro
        agc93: Fine with that, but the need for build-time params is obvious (hence the addition of that feature) and caching could maybe be made to work with it?
      • I seem to remember `ADD`/`COPY` also invalidated the cache early on, and then a more sophisticated detection was added
      • Same should be done for build args
      • agc93
        yeah perhaps. That'd be a question for the team. Have you tried raising an issue maybe?
      • pandeiro
        Because losing caching b/c of some password that is identical forever is really rough
      • No, I haven't raised it
      • I was hoping maybe there was another mechanism for this use case
      • Like 'secrets' (but those don't seem to help at all here)
      • agc93
        most secret handling for Docker is focussed on runtime
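Worth noting for later readers: Docker releases after this conversation (18.09+, via BuildKit) added build-time secrets that address exactly this case — the secret is mounted for a single `RUN`, is never committed to a layer, and an unchanged secret does not invalidate the cache. A sketch, with a hypothetical secret id and fetch script:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.16
# The secret file exists only while this RUN executes.
RUN --mount=type=secret,id=deploy_token \
    DEPLOY_TOKEN="$(cat /run/secrets/deploy_token)" ./fetch-private-deps.sh
```

Built with `docker build --secret id=deploy_token,src=./token.txt .` (plus `DOCKER_BUILDKIT=1` on older versions).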
      • Botanic
        is there any way to have a dockerfile add the files from the path into it?
      • agc93
        Botanic: I'm not 100% sure but I don't think so since the PATH is essentially a host env var
      • Botanic
        problem is we're using git clone to get our code into the docker container, but that makes it hard to do dev since it only clones the current repo...
      • agc93
        Botanic: well you can still add any files from the host using ADD or COPY, you'll just need to fully qualify the path
      • Botanic
        i changed the dockerfile to use ADD . /opt/files
      • will that work even with URLs then?
      • it seems to work with local paths at least
      • or is there some env variable or otherwise i should use?
      • artok
        what is the actual problem? needing the Dockerfile to work on a developer's machine with his/her code?
      • enderandpeter
        Botanic: With COPY and ADD, I'm quite sure you can only name a destination that is a path in the container.
      • Botanic
        both on developers machines as well as from the remote url repo
      • ADD . /path works when using a local path
      • artok
        and mounting a host OS path to some docker directory isn't an option?
      • Botanic
        i want to actually copy all the files into it
      • as users wont have the files locally to mount via a volume
      • enderandpeter
        Botanic: Yeah, ADD . /path would copy everything in the folder with the Dockerfile, perhaps including the Dockerfile itself, to /path.
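A sketch of what that looks like; the destination path matches the chat, and the usual companion is a `.dockerignore` to keep unwanted entries out of the copy:

```dockerfile
FROM alpine:3.16
# Copies the entire build context (the directory passed to `docker build`)
# into /opt/files. A .dockerignore next to the Dockerfile can exclude
# entries such as `.git` or the Dockerfile itself.
ADD . /opt/files
```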
      • Botanic
        doesn't seem to work if the build path is a url tho...
      • let me try nuking it and seeing if there's some weird cache issue or w/e
      • enderandpeter
        Botanic: Well if you wanted to download something online and get it into the container, you'd probably want to just use curl, wget, etc. in a RUN statement to download it where it should go in the container.
      • Botanic
        ya problem is that its the actual git repo i need, so sometimes its local, sometimes its a url
      • developers need the local git repo, users need the remote url git repo
      • enderandpeter
        Botanic: Ah, I see. Well then, it should be possible to make sure git is installed in the container via the Dockerfile, wherever the container's programs are installed, and then later have a command to `git clone` that repo where it needs to go.
      • Botanic
        ya that's what i had; it worked fine for users, but developers have to rejigger stuff since they aren't using the url
      • coventry
        Should "ENV DEBIAN_FRONTEND=noninteractive" work for debian-based Dockerfiles? I've got a case where it doesn't. It's always worked before, but I've primarily based on ubuntu. Disabling interactivity with debconf works, though.
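One commonly recommended variant, in case it helps with cases like coventry's: scope the setting to build time with `ARG` instead of baking it in with `ENV`, which would also affect the running container (base image and packages here are examples):

```dockerfile
FROM debian:bullseye
# Build-time only: unlike ENV, this ARG does not persist into the final image.
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
 && apt-get install -y --no-install-recommends tzdata \
 && rm -rf /var/lib/apt/lists/*
```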
      • Botanic
        i'll figure it out, prolly just gonna use an env variable to override it
      • thanks :)
      • enderandpeter
        Botanic: Yeah, totally. An env variable set in the docker run command for devs or something like that might be needed
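One way to square Botanic's two audiences without changing the Dockerfile: `docker build` also accepts a git URL as the build context, in which case Docker clones the repo and the same `ADD . /opt/files` operates on the clone. Older versions had quirks with remote contexts, which may be what Botanic hit; the repo URL below is an example:

```shell
# Developers: build from the local checkout; ADD . picks up local changes.
docker build -t myapp .

# Users: pass the repository URL as the build context; Docker clones it
# and the same ADD/COPY instructions run against the clone.
docker build -t myapp https://github.com/example/project.git
```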
      • errl
        hi, i have a server with 4 nics and i was wondering if it is possible to proxy connections to two containers that are using two ip addresses on the host's network. i've read docker0 handles all connections, but i'd like to be able to utilize all of my nics instead of just one without virtualizing another docker host
      • drawde_
        hey all, i've used docker before but using a front end gui (unraid).. i started using it again using just the cli and no front end, what's the best way to save the docker run commands?
      • usually i just save them to a script and run the script
      • is this what a docker Compose file is for?
      • agc93
        drawde_: it can be used like that, yes. Technically Compose is intended for apps using multiple containers, but it can be used for essentially single-container apps
      • drawde_
        agc93, thanks
      • agc93
        There's a few slight differences: Compose by default will put the container in its own network (not the default one) and a few other minor quirks like that
      • But as a bare minimum, yes, saving it as a script will also do the job
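A minimal single-service Compose file capturing what a saved `docker run` line would encode (image, ports, and service name are examples):

```yaml
version: "2"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped
```

After that, `docker-compose up -d` replaces the saved script.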
      • drawde_
        ah okay yeah i was just wondering what most people do
      • holmser
        hey, I'm working on a presentation on Docker vs VMs and I'm trying to think of all the implications of sharing the kernel
      • drawde_
        i think unraid saves the config you have and generates the run command for you so i never had to do it before
      • also, what's the proper way to update a docker image? for example, say whatever image i'm running has an update available. usually what i do is stop the container, delete the container and image, and re-run docker run. is this the real way or is there a better way i should be using?
      • holmser
        drawde_: pull latest, restart
      • drawde_
        holmser, is that a command? or are you telling me to pull the latest then restart
      • pandeiro
        I'm experiencing something very weird where the files resulting from a `RUN` step cannot be seen (eg with `ls -l`) in the next `RUN` step. Does anyone know how/why that could be?
      • holmser
        drawde_: no, just pull the latest image and restart
      • wasn't a command
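Spelled out, the "pull latest, restart" flow for a container started with plain `docker run` (names and ports are examples), plus the Compose equivalent:

```shell
# Plain docker: fetch the new image, then recreate the container from it.
docker pull nginx:latest
docker stop web && docker rm web
docker run -d --name web -p 8080:80 nginx:latest

# With Compose, the same update is:
docker-compose pull && docker-compose up -d
```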
      • drawde_
        holmser, just tried it and well, that's much easier than how i was doing it lol
      • thank you!
      • pandeiro
        Answering myself: I was unknowingly using a directory that had been declared a `VOLUME` earlier in the build; files left there by `RUN` instructions therefore don't survive to the next instruction. Interestingly, `COPY` instructions into that same dir do stick around
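A sketch of the trap pandeiro describes, assuming the classic builder's documented behavior (once a path is declared a `VOLUME`, later `RUN` steps that write there have their changes discarded when the layer is committed; filenames are examples):

```dockerfile
FROM alpine:3.16
VOLUME /data
# Discarded: changes RUN makes inside a declared volume don't survive
# into the next build step or the final image.
RUN touch /data/made-by-run
# Per the observation above, COPY into the same directory did persist.
COPY config.yml /data/
```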