Hi there, I'm trying to find an example with terraform being used for bigger deployments. Examples with 1 nodejs cluster and one haproxy aren't doing it for me. I have many similar but different elastic beanstalk applications, I need to know how this looks in terraform when I need to re-use many components yet stay flexible as they can differ. Anyone has a link to a bigger repo with terraform code ?
t0m
maarten__: The key (we've found) is to separate out your Terraform configs
So for example you'd have a module for a 'regular' or 'base' elastic beanstalk application
and then you'd reuse that N times (once for each app)
The simple ones would just be passing parameters into the module
The complex/different ones might not use the module at all.
And then the base stuff (e.g. your VPC / shared infra / etc) is all in different terraform configs, which you pull into the individual apps with remote state
As the base stuff will change much less frequently (if at all, once set up)
Does that help at all?
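A minimal sketch of the pattern t0m describes, reusing one 'base' module once per app (the module path, app names, and variable names here are hypothetical):

```hcl
# modules/beanstalk-app would define the shared Elastic Beanstalk
# resources; each simple app just instantiates it with its own params.

module "app_frontend" {
  source = "./modules/beanstalk-app"

  app_name       = "frontend"                              # hypothetical
  solution_stack = "64bit Amazon Linux running Node.js"    # hypothetical
  instance_type  = "t2.small"
}

module "app_api" {
  source = "./modules/beanstalk-app"

  app_name       = "api"
  solution_stack = "64bit Amazon Linux running Node.js"
  instance_type  = "t2.medium"
}
```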
maarten__ has quit
wonton_ joined the channel
wonton_
getting errors when trying to apply some iam policies. They are saved in json files. The json is valid, but I get "MalformedPolicyDocument: This policy contains invalid Json"
ppinkerton joined the channel
maarten2__ joined the channel
maarten2__
online irc client.. Tom, so although my applications have a lot of overlap, it would be better to not modularize my applications as they still differ too much. Only modularize when they are 100% the same.
ninjada joined the channel
permalac joined the channel
t0m
maarten2__: ok, don't then - whatever works best for you :)
Or just modularize the base set of common stuff shared by some/most apps
You can cut it any way you want :)
The trick is to make the base infra (that is shared) remote state and pull it into each individual app/concern/system
and then if you want/need to do global DNS, you can do that on top - pulling in all the individual apps as remote state
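In current Terraform syntax (remote state took a different form in the 0.6/0.7 era this conversation dates from), pulling shared infra into an app might look like this; the bucket, key, and output names are hypothetical:

```hcl
# Pull the shared VPC config's outputs in, instead of duplicating values.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"   # hypothetical state bucket
    key    = "vpc/uswest1-prod.tfstate"
    region = "us-west-1"
  }
}

# Then reference its outputs wherever the app needs shared infra, e.g.:
#   data.terraform_remote_state.vpc.outputs.vpc_id
#   data.terraform_remote_state.vpc.outputs.private_subnet_ids
```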
ninjada has quit
pappy has quit
maarten2__
OK, might you know by chance how I can parameterize Elastic Beanstalk environment variables? That would make it possible to modularize more
ppinkerton joined the channel
t0m
you can just pass them as variables, right?
HaZrD
is there any way to ignore errors on terraform destroy? i.e. keep on deleting even if we get "Received Azure RM Request status code 404 Not Found" returned?
t0m
HaZrD: no, basically. That sounds like a bug to me, i.e. if the resource is already gone, terraform destroy should just react by going 'oh, ok' and deleting it from the state
HaZrD
okay - where is a good place to submit a bug report?
maarten2__
I could, but the number of them differs, as do the name/value pairs. Something like this is possible:
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "AWS_DEV_DB_USERNAME"
  value     = "${var.db_username}"
}
but per application I have different key/value pairs
HaZrD
ah, found github link
d'oh!
t0m
maarten2__: yeah, I gotcha - currently parameterizing that type of stuff is hard :(
Terraform 0.7 will make this better (somewhat), but it still needs more work
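For the record, later Terraform releases (0.12+) did solve exactly this: per-app key/value pairs can be passed in as a map variable and expanded with a `dynamic` block. A sketch, with hypothetical resource names and stack string:

```hcl
variable "env_vars" {
  type    = map(string)
  default = {}   # each app supplies its own key/value pairs
}

resource "aws_elastic_beanstalk_environment" "app" {
  name                = "example-app"                           # hypothetical
  application         = "example"                               # hypothetical
  solution_stack_name = "64bit Amazon Linux running Node.js"    # hypothetical

  # Emits one setting block per entry in var.env_vars.
  dynamic "setting" {
    for_each = var.env_vars
    content {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = setting.key
      value     = setting.value
    }
  }
}
```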
maarten2__
ok, thanks, last question, and cool that you're so super helpful
t0m
You're welcome, it's good to share (mostly) my mistakes so other folks don't have to ;)
maarten2__
is it preferred/possible to separate the applications into different files so I won't get a huge .tf
exactly, not so much info out there but I'm seeing the potential, need to get on board :)
t0m
you can split things into multiple TF files however you wish - terraform will parse them all and do the right thing.
to split different parts of the infra / stuff that's mostly looked after by different teams
ppinkerton joined the channel
(where the vpc.tf in each pulls in vpc/uswest1-prod as remote state)
fish_
hrm, now I'm trying to build terraform from source but `make dev` only builds terraform, not the plugins..?
either this isn't intended or the README is wrong. It says: "To compile a development version of Terraform and the built-in plugins, run `make dev`"
t0m
And then geo_dns/{vpcs.tf,frontend.tf}
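Sketched out, the layout t0m describes might look like this (directory and file names are hypothetical apart from the ones he mentions; Terraform merges every .tf file in a directory):

```hcl
# Directory layout (comments only):
#
#   vpc/uswest1-prod/        shared VPC, state pushed to remote storage
#   app1/
#     vpc.tf                 pulls vpc/uswest1-prod in as remote state
#     app.tf                 the app's own resources
#   geo_dns/
#     vpcs.tf                pulls each VPC in as remote state
#     frontend.tf            global DNS records layered over all the apps
```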
fish_: if you're looking at 0.7 then it's all one binary now
fish_
t0m: OH :) okay
t0m: well then I can take the official binary as well
t0m
maarten2__: does that make sense / help at all?
fish_
t0m: ah, nice. I downloaded the official binary but it didn't work because I still had the old plugins there. but makes sense, thanks!
maarten2__
makes sense to me! thanks
t0m
fish_: aha, cool :)
maarten2__: yw!
maarten2__
but MySQL is then really separate from the application, right? To me it would make sense to have
vpc seperate
but application.tf contains also the rds mysql and the s3 bucket or redis resource
t0m
depends how your org/team is split :)
maarten2__
hehe, one person show
t0m
We are pretty large, so we have a DBA team who look after the databases
So it makes sense to split them
Ditto with elasticsearch clusters (different team)
maarten2__
are you using ci/cd with code review ?
t0m
And kafka clusters (different team)
for some things. I mean everything gets code reviewed, *some* of our TF gets auto deployed/applied
Stuff like the VPC layer - not (as it's not worth it, we don't build new VPCs particularly often)
Stuff like the elasticsearch layer - all the terraform code gets autogenerated from YAML (the same YAML with the puppet settings for the cluster config), and applied by Jenkins
So that scaling up a cluster is as simple as changing a number in a YAML file
maarten2__
wow, sounds like you have some setup running there :)