ElBlivion: btw, the commit you have shown me is also producing a v1 plugin.
Looking at other plugins...
It seems that they are built as libraries now.
brendan- joined the channel
pll
I have an existing infrastructure set up in AWS. I need to add to it. One of the things I need to do is add a new CIDR block to existing SGs. If I just add the new block to the existing SG list, terraform seems to think it needs to tear down the SG and rebuild it. But this is problematic because there are entities using this SG which do not have to be torn down. How do I tell terraform "this is just a new rule I want you to add to the existing SG!"?
I thought of using aws_security_group_rule, but I can't use that resource with an existing aws_security_group resource which has inline rules to begin with. And if I move the existing rules out to separate aws_security_group_rule resources, I still have to tear down the existing entities using the original rule. Any thoughts?
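A hedged sketch of the rule-per-resource layout pll is weighing: the group itself carries no inline rules, so adding a CIDR later only creates a new aws_security_group_rule and never touches the group. All names, ports, and CIDRs here are hypothetical.

```hcl
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = "${var.vpc_id}"
}

resource "aws_security_group_rule" "https_in" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.app.id}"
}

# A new CIDR becomes one more rule resource instead of an edit to the group.
resource "aws_security_group_rule" "partner_in" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["${var.other_cidr_block}"]
  security_group_id = "${aws_security_group.app.id}"
}
```

Note the caveat pll mentions: mixing inline `ingress`/`egress` blocks and `aws_security_group_rule` resources on the same group causes conflicts, so the group has to be converted fully to one style or the other.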
bdashrad
it shouldn't be tearing the whole SG down to add a rule
pll: ^
pll
bdashrad: I agree. But it is.
bdashrad
are you changing a rule block, or adding another one?
pll
I went from 'ingress { ... cidr_blocks = ["0.0.0.0/0"] }'
to 'ingress { ... cidr_blocks = ["0.0.0.0/0", "${var.other_cidr_block}"] }'
And it tears the entire SG down and tries to rebuild it.
bdashrad
something else is coming into play, because i have done that multiple times without rebuilds
pll
hmm
I also added a 'tags {}' section.
Would that do it?
bdashrad
actually, it looks like i have `lifecycle = { create_before_destroy = true }` set, so maybe it was creating new groups
i don't have this particular env set up, so i can't test it right now
pll
Hmm, I don't have lifecycle set...
LarsFronius joined the channel
Or rather, the code I inherited doesn't.
bdashrad
if it does re-create the security group every time, this would create the new one before tearing down the old
pll
What does that do to the existing entities which are dependent on that SG?
bdashrad
it adds the new security group to them, before destroying the old
pll
Oh, okay. Hmm, maybe I just need to add that to the code then.
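A sketch of the lifecycle setting bdashrad describes: if Terraform does decide to replace the group, the replacement is created and attached to dependents before the old group is destroyed. Using `name_prefix` instead of a fixed `name` is what lets the new group exist alongside the old one during the swap; the resource name is hypothetical.

```hcl
resource "aws_security_group" "app" {
  # name_prefix avoids a name collision while both groups exist
  name_prefix = "app-"
  vpc_id      = "${var.vpc_id}"

  # ... ingress/egress rules ...

  lifecycle {
    create_before_destroy = true
  }
}
```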
ElBlivion
pll: fairly sure someone mentioned this bug recently here
I've used aws_security_group_rule from the start so I didn't run into it
ruippeixotog
i don't know if anyone read what I said above… if anyone could help me or point me to a suitable tutorial, I'd appreciate it :)
pll
ruippeixotog: I wasn't here when you said whatever it was you said.
ruippeixotog
is that a problem with my setup, or is this really failing on master?
i'm testing it on my computer… is it supposed to fail because of that?
yeah, it seems the 3 errors are IAM-related… how do you usually test locally? you just ignore those tests?
actinide joined the channel
Spanktar joined the channel
rmenn joined the channel
LarsFronius has quit
LarsFronius joined the channel
Guest46101 is now known as mgagne
mgagne joined the channel
tphummel joined the channel
clong has quit
promorphus joined the channel
actinide_ joined the channel
actinide has quit
rmenn
i have an instance profile which i need to associate with multiple roles, but it looks like i can't do that with aws_iam_role_policy, since it only takes one role as an argument
now i have a role `beta-api`; what if i needed `delta-api` but wanted to use the same policies?
i don't want to go about creating multiple copies of the same policy
i hope i'm being clear
pll
Why would you not just copy and paste the 'resource "aws_iam_role" "beta-api" {' section, and s/beta/delta/g ?
rmenn
if you look at lines #9 and #3, 'cause that's where i associate the role to the policy
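One hedged way to avoid duplicating the policy body: instead of inline aws_iam_role_policy (which binds to a single role), define the policy once as a managed aws_iam_policy and attach it to each role with aws_iam_role_policy_attachment. The policy file path and resource names are hypothetical.

```hcl
resource "aws_iam_policy" "api_shared" {
  name   = "api-shared"
  policy = "${file("policies/api.json")}"
}

resource "aws_iam_role_policy_attachment" "beta" {
  role       = "${aws_iam_role.beta-api.name}"
  policy_arn = "${aws_iam_policy.api_shared.arn}"
}

resource "aws_iam_role_policy_attachment" "delta" {
  role       = "${aws_iam_role.delta-api.name}"
  policy_arn = "${aws_iam_policy.api_shared.arn}"
}
```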
sergey joined the channel
wraithm joined the channel
tazz joined the channel
bosszaru joined the channel
timvisher joined the channel
ruippeixotog has quit
impi joined the channel
timvisher joined the channel
jlecren joined the channel
nya_ has quit
nya_ joined the channel
SrWaffles joined the channel
eTux joined the channel
eTux
Hello everyone, I'm provisioning 3 AWS environments in 3 regions with TF. I want to use S3 for remote storage of the state file. Can I use 1 bucket for all of them (example: state_files/{dev,stg,prd}/file.state), or do I need to use 3 buckets in matching regions for these environments?
jlecren has quit
timvisher joined the channel
jlecren joined the channel
mrwacky42
eTux: yup
eTux
mrwacky42: yup, but to which part of my question? :D
muddymud joined the channel
toky joined the channel
mrwacky42
eTux: ha, I only read the first clause.. Yup - you can use 1 bucket for everything, just make sure they all have a unique key
eTux
this doesn't seem to be working, because the bucket is in a different region than my environment and tf plan fails with: "data.terraform_remote_state.remote_state: BucketRegionError: incorrect region, the bucket is not in 'us-west-1' region"
the bucket is created in us-east-1 and this environment is in us-west-1, and that leads me to think that we need separate buckets that match our regions
It's interesting to see that 'terraform remote config...' didn't fail on that part
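A hedged reading of that error: the `region` in the remote state config has to be the bucket's own region, independent of the region the environment is provisioned in, so one us-east-1 bucket can still serve all three environments. Bucket name and key are hypothetical.

```hcl
data "terraform_remote_state" "remote_state" {
  backend = "s3"

  config {
    bucket = "my-tf-state"
    key    = "state_files/dev/file.state"
    region = "us-east-1" # region of the bucket, not of the environment
  }
}
```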
c4urself_
hey all, having trouble getting Lambda function to trigger from SNS, even though I've added the aws_sns_topic_subscription
I then manually added the "trigger" in the console, but can't find a Terraform object that corresponds to it
("trigger" added in the Lambda console)
once I added the "trigger" in the Lambda console things started working, is there an object/configuration I'm missing in Terraform?
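Likely relevant here (hedged): the console's "Add trigger" button creates both the subscription and a resource-based permission on the function, and in Terraform that permission is a separate resource, aws_lambda_permission. A sketch with hypothetical resource names:

```hcl
resource "aws_lambda_permission" "from_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.handler.arn}"
  principal     = "sns.amazonaws.com"
  source_arn    = "${aws_sns_topic.events.arn}"
}

resource "aws_sns_topic_subscription" "lambda" {
  topic_arn = "${aws_sns_topic.events.arn}"
  protocol  = "lambda"
  endpoint  = "${aws_lambda_function.handler.arn}"
}
```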
b-dean joined the channel
b-dean
if I'm writing a terraform module, what would be a good way to get a version number into terraform from the source repo or a file (like the .hgtags, or shelling out to git describe, etc) in the repo?
say the version was in a json file, I could use a data.template_file to read it, but then I might have to use some local-exec on something else to run jq to get the version number or whatever. I ask because I'd like to use rev=v1.2.3 on the module source and not have the people maintaining the module copy their version number into a variables.tf in addition to wherever they had it before
if that makes sense
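One hedged option for b-dean's question: the `external` data source runs a program that must print a JSON object, so a small wrapper around `git describe` can surface the version without duplicating it in variables.tf. The script name and output key are hypothetical.

```hcl
data "external" "version" {
  program = ["${path.module}/version.sh"]
}

# version.sh, alongside the module:
#   #!/bin/sh
#   echo "{\"version\": \"$(git describe --tags)\"}"

output "module_version" {
  value = "${data.external.version.result["version"]}"
}
```

Caveat: this only works if the module is consumed from a checkout where `git describe` can run, which may not hold when the module is fetched by `rev=` from a registry-style source.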
impi joined the channel
brendan- joined the channel
bhughes has quit
bhughes joined the channel
trentonstrong joined the channel
JamesBaxter joined the channel
bosszaru has quit
nya_ has quit
nya_ joined the channel
trentonstrong joined the channel
muddymud joined the channel
gusmat joined the channel
nya_ has quit
Spanktar
here’s a new one
Decoding state file failed: json: cannot unmarshal object into Go value of type string
mrwacky42
eTux: Ahh, we just have/use one bucket in one region.
Spanktar
ah, I see TF version mismatch
ivanjaros has quit
munky joined the channel
trentonstrong joined the channel
failshell joined the channel
failshell
anyone know which is executed first for aws_instance: file or user_data? as in, if i use file to copy files required in user_data, would that work?
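Hedged ordering note: user_data is executed by cloud-init during first boot, while the file provisioner only runs once Terraform can SSH in, i.e. after boot has already begun, so user_data can't reliably depend on provisioner-copied files. Running the script via remote-exec after the file provisioner makes the order explicit, since provisioners run in declaration order. Names and paths are hypothetical.

```hcl
resource "aws_instance" "app" {
  ami           = "${var.ami}"
  instance_type = "t2.micro"

  provisioner "file" {
    source      = "scripts/setup.sh"
    destination = "/tmp/setup.sh"
  }

  # runs after the file provisioner above
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/setup.sh",
      "sudo /tmp/setup.sh",
    ]
  }
}
```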
bosszaru joined the channel
muddymud joined the channel
tmichael joined the channel
fryguy joined the channel
fryguy
any best practice for switching between 2 copies of terraform remote state (for example, a staging and a production AWS account)
c4urself_
fryguy: i wrote a wrapper that basically does it for me: `./terraform.sh prod <region> <component> plan`
not sure how other people do it
fryguy
mind sharing yours? i'm about to write a similar thing
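A hedged sketch of what such a wrapper might look like: re-point remote state at the chosen environment's bucket/key before running the requested action. The bucket naming scheme is hypothetical, and this uses the `terraform remote config` CLI of the era.

```shell
# Usage: terraform_env <env> <region> <component> <action...>
# e.g.:  terraform_env prod us-west-1 vpc plan
terraform_env() {
  env="$1"; region="$2"; component="$3"; shift 3

  # switch remote state to this environment's bucket and key
  terraform remote config \
    -backend=s3 \
    -backend-config="bucket=tf-state-${env}" \
    -backend-config="key=${region}/${component}/terraform.tfstate" \
    -backend-config="region=${region}"

  # run the requested action (plan, apply, ...)
  terraform "$@"
}
```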