Server Bug Fix: Parameter parsing when using AWS SSM send-command from Terraform


I am using AWS SSM send-command to run a PowerShell script on an AWS instance.

The following command works fine in a command shell, but fails with an error when called from Terraform.

aws ssm send-command --instance-ids ${self.id} --document-name AWS-RunPowerShellScript --parameters commands='C:\Installers\bootstrap_test.ps1 > test'

when called in Terraform using:

provisioner "local-exec" {
  command = "aws ssm send-command --instance-ids ${self.id} --document-name AWS-RunPowerShellScript --parameters commands='C:\Installers\bootstrap_test.ps1 test' --output-s3-bucket-name ellucian-ecrm-lab --output-s3-key-prefix bootstraplogs"
}

The error returned is:

exit status 255. Output: usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

aws help
aws <command> help
aws <command> <subcommand> help

Unknown options: test’

So I don't think Terraform is parsing the string the way AWS needs it. What can I do to format this string correctly in Terraform?

Try double-quoting the commands, e.g.:

... --parameters commands='"C:\Installers\bootstrap_test.ps1 > test"'

Here is the example with the loop:

aws ssm send-command --instance-ids "i-01234" --document-name "AWS-RunPowerShellScript" --query "Command.CommandId" --output text --parameters commands='"While ($i -le 100) {$i; $i += 1}"'

Or the opposite, e.g.

--parameters commands="'While ($i -le 100) {$i; $i += 1}'"
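If the nested quoting keeps fighting the shell, another option is to hand --parameters a JSON document instead of the shorthand commands=... form (the AWS CLI accepts JSON for structured parameters). A minimal sketch, building the JSON with jq so nothing needs manual escaping; the script path is a placeholder and the send-command call itself is commented out since it needs real credentials and an instance:

```shell
#!/bin/sh
# Placeholder value; substitute your own script path.
SCRIPT='C:\Installers\bootstrap_test.ps1 > test'

# Build the --parameters value as JSON so the shell never nests quotes;
# jq handles the backslash and quote escaping for us.
PARAMS=$(jq -cn --arg cmd "$SCRIPT" '{commands: [$cmd]}')
printf '%s\n' "$PARAMS"

# The actual call (commented out: needs AWS credentials and a real instance):
# aws ssm send-command --instance-ids "$INSTANCE_ID" \
#   --document-name AWS-RunPowerShellScript --parameters "$PARAMS"
```

This keeps the Terraform local-exec string simple too, since the only quoting left is around the JSON blob itself.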


Server Bug Fix: Service account does not have storage.buckets.create access


I have created a Service Account for Terraform. Part of our process is to create some storage buckets and maintain them through Terraform.

However, when we run terraform apply we get the following error:

google_storage_bucket.state_bucket: googleapi: Error 403: {service-account}@{project}.iam.gserviceaccount.com does not have storage.buckets.create access to project {project_id}.

I have applied the following IAM permissions to no avail:

  • Project Owner
  • Storage Admin
  • Storage Object Admin

I had the same problem.
We are using Organizations on GCP, and I used a script to create the Terraform account in a terraform-admin project set up just for holding the master Terraform service account, which we use for setting up higher-level projects and environments.

It turns out that the roles I set up for {service-account}@{project}.iam.gserviceaccount.com in the admin project are local to that project, i.e. in the organization IAM view this service account shows up with only 'Billing Account User' and 'Project Creator'.

I am not sure, but I think projects at the organization scope can't read the roles set in other projects (or the roles set for a specific service account in one project are overridden by the roles set for that service account at the organization scope).

Adding the 'Storage Admin' and 'Viewer' roles to the service account at the organization scope fixed this error.
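For reference, a sketch of granting those roles at the organization scope rather than the project scope with gcloud; the organization ID, project name, and service-account name below are hypothetical placeholders, and the gcloud calls are commented out since they need org-level permissions:

```shell
#!/bin/sh
# Hypothetical values; substitute your own.
ORG_ID="123456789012"
PROJECT="terraform-admin"
MEMBER="serviceAccount:terraform@${PROJECT}.iam.gserviceaccount.com"
printf '%s\n' "$MEMBER"

# Bind at the organization level, not the project level
# (commented out: requires gcloud and org-level permissions):
# gcloud organizations add-iam-policy-binding "$ORG_ID" \
#   --member="$MEMBER" --role="roles/storage.admin"
# gcloud organizations add-iam-policy-binding "$ORG_ID" \
#   --member="$MEMBER" --role="roles/viewer"
```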

P.S.
I think Terraform Enterprise allows managing organization-wide users, which makes it possible to create and manage Terraform service accounts at the organization scope, avoiding the manual step of adding organization-scope roles that one hits with the community version.

I am not sure if this is related, but I use a service account to back up from CloudBerry to a storage bucket, and those backups just today started failing with similar access problems to the ones you describe. I think Google changed something; they have the same


Code Bug Fix: Terraform 12 convert count to for_each


I am currently experiencing some challenges with the way that count indexes resources in Terraform, and I am seeking some help to convert this to for_each.

# Data Source for github repositories. This is used for adding all repos to the teams.
data "github_repositories" "repositories" {
  query = "org:theorg"
}

resource "github_team_repository" "business_analysts" {
  count      = length(data.github_repositories.repositories.names)
  team_id    = github_team.Business_Analysts.id
  repository = element(data.github_repositories.repositories.names, count.index)
  permission = "pull"
}

I have tried the following with no success:

resource "github_team_repository" "business_analysts" {
  for_each   = toset(data.github_repositories.repositories.names)
  team_id    = github_team.Business_Analysts.id
  repository = "${each.value}"
  permission = "pull"
}

I am querying a GitHub organization and receiving a huge list of repositories, then using count to add those repositories to a team. Unfortunately, Terraform errors out once a new repository is added or changed. That said, I think the new for_each function could solve this dilemma; however, I am having trouble wrapping my head around how to implement it in this particular scenario. Any help would be appreciated.

False alarm. I had the answer all along, and the issue was down to the way I was referencing a variable.

If someone stumbles upon this, you want to structure your loop like so:

resource "github_team_repository" "business_analysts" {
  for_each   = toset(data.github_repositories.repositories.names)
  team_id    = github_team.Business_Analysts.id
  repository = each.key
  permission = "pull"
}


Code Bug Fix: How to flatten a tuple of subnet ids to a string in Terraform


I am trying to flatten a tuple to a string as follows

network_acls {
  default_action             = "Deny"
  bypass                     = "AzureServices"
  virtual_network_subnet_ids = ["${data.azurerm_subnet.blah_snet.id}", "${join("\",\"", azurerm_subnet.subnets.*.id)}"]
}

This is very similar to How to flatten a tuple of server ids to a string?, but it's not working for me.

The result is: "subnetid1","subnetid2" - where "," should be properly escaped and result as ","

I can't figure out why this isn't working. I've tried many variations of escaping this, to no avail.

This is an expression which has varying Terraform 0.11.x and Terraform 0.12.x outcomes:

["${data.azurerm_subnet.blah_snet.id}", "${join("\",\"", azurerm_subnet.subnets.*.id)}"]

In Terraform 0.11.x, it will result in a concatenation as you showed, with "," separating the subnet IDs. In Terraform 0.12.x, it will be a type error and fail to apply. FWIW, Terraform 0.12.x is correct here.

If you are using Terraform 0.12.x, the following should work:

virtual_network_subnet_ids = join(",", concat(
  [data.azurerm_subnet.blah_snet.id], azurerm_subnet.subnets[*].id
))

If you are using Terraform 0.11.x, the following should work:

virtual_network_subnet_ids = "${join(",", concat(
  [data.azurerm_subnet.blah_snet.id], azurerm_subnet.subnets[*].id
))}"

This is a potential solution using Terraform 12 syntax (it looks like the splat syntax wasn't supported for non-count resources until TF 12, so your mileage may vary depending on your version).

The local item approximates the attribute structure of a data.azurerm_subnet.

Note the use of both the \ escape for a literal "," and the simple ",", since I couldn't quite tell what the desired output string was meant to be.

locals {
  subnets = [
    {
      id   = 12345,
      name = "item1"
    },
    {
      id   = 9354,
      name = "item2"
    }
  ]
}

output "comma_and_escape_char_for_literal" {
  value = "${join("\,", local.subnets.*.id)}"
}

output "comma_only" {
  value = "${join(",", local.subnets.*.id)}"
}


Code Bug Fix: See what GCP services are enabled for a project (node js)?


As per the documentation, with the gcloud cli if you run gcloud services list --available you can get a list of the services that are enabled or available to be enabled for a google cloud project. What is the equivalent library/call to use to do this in node? I’ve taken a look at the libs listed here and can’t seem to find how to do this.

I'm using terraformer, running in a Node.js environment, to programmatically crawl an account, but it errors out if certain services are not enabled for a project when you try to run it. Basically, before I run terraformer I want to get a list of which services are enabled and only import those services.

The Google Cloud documentation is quite good and I would recommend a quick Google search in most cases. You can find several examples of what you are looking for here.

The actual HTTP request looks something like the following (this example does not show how to attach authentication information):

curl 'https://serviceusage.googleapis.com/v1/projects/357084163378/services?filter=state:ENABLED'

I would recommend submitting an Ajax request to this endpoint. If you are using Google SDKs already you should be able to get an access token to attach to the api request
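Once you have a response, the payload is a JSON document with a services array (field names per the Service Usage v1 services.list response). A sketch of pulling out just the enabled service IDs with jq, using an inline sample trimmed to the fields involved:

```shell
#!/bin/sh
# Sample response shaped like the Service Usage v1 services.list payload,
# trimmed to the fields used here; the real response carries more detail.
RESPONSE='{"services":[
  {"name":"projects/123/services/compute.googleapis.com","state":"ENABLED"},
  {"name":"projects/123/services/storage.googleapis.com","state":"ENABLED"}]}'

# Each resource name ends in the service ID; keep just that last segment.
echo "$RESPONSE" | jq -r '.services[].name | split("/") | last'
# prints compute.googleapis.com and storage.googleapis.com, one per line
```

That list of IDs is exactly what you would feed to terraformer to restrict the import to enabled services.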

Just in case anyone else is curious:

import { google } from 'googleapis'

const usage = google.serviceusage('v1')

const project = "myProject"

const authorize = async (scopes) => {
  const auth = new google.auth.GoogleAuth({ scopes })
  return await auth.getClient()
}

const {
  data: { services }
} = await usage.services.list({
  parent: `projects/${project}`,
  filter: 'state:ENABLED',
  auth: await authorize(['https://www.googleapis.com/auth/cloud-platform'])
})


Code Bug Fix: Why do I get Unable to create container with image Unable to pull image error pulling image when using multiple Docker hosts?


I am trying to perform a simple deployment using terraform (0.12.24), and multiple Docker providers (plugin version 2.7.0). My aim using the terraform template below is to deploy two different containers to two different Docker-enabled hosts.

# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image = docker_image.world.latest
  name  = "test"
}

resource "docker_image" "world" {
  name = "hello-world:latest"
}

The docker command runs successfully without root privileges. The Docker daemons of both machines, 192.168.1.10 and 127.0.0.1, listen on port 2375, are reachable from the host machine, and respond to direct Docker REST API calls (create, pull, etc.) performed with curl. Manually pulling images also works on both hosts, and I did that to be sure that the latest hello-world image exists on both.
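For reference, the reachability check described above can be sketched with the Docker Engine API's /_ping endpoint, which returns OK when the daemon is up (the curl call is commented out here since it needs the daemons actually running; the hosts are the ones from the question):

```shell
#!/bin/sh
# Hosts from the question; adjust to your own daemons.
for HOST in 192.168.1.10 127.0.0.1; do
  echo "checking http://${HOST}:2375/_ping"
  # curl -fsS "http://${HOST}:2375/_ping" && echo " ok"
done
```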

However, the terraform deployment (terraform apply) fails with the following error:

docker_container.hello: Creating...
docker_container.test: Creating...
docker_container.hello: Creation complete after 1s [id=77e515b4269aed255d4becac61f40d38e09838cdf8285294bf51f3c7cddbf2bf]

Error: Unable to create container with image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Unable to pull image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: error pulling image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Error response from daemon: pull access denied for sha256, repository does not exist or may require 'docker login'

  on test.tf line 17, in resource "docker_container" "test":
  17: resource "docker_container" "test" {

Why do I get Unable to create container with image Unable to pull image error pulling image when using multiple Docker hosts?

docker_container.test references docker_image.world, but they use different providers (default and docker.worker):

resource "docker_container" "test" {
  provider = docker.worker
  image = docker_image.world.latest
  name  = "test"
}

resource "docker_image" "world" {
  name = "hello-world:latest"
}

This is fatal because docker_image.world uses the default provider, which runs the docker pull on tcp://192.168.1.10:2375/ (not on tcp://127.0.0.1:2375/).

This can be fixed by creating a docker_image using provider docker.world_worker to match docker_container.test as follows:

resource "docker_container" "test" {
  provider = docker.world_worker
  image = docker_image.world_worker.latest
  name  = "test"
}

resource "docker_image" "world_worker" {
  provider = docker.world_worker
  name = "hello-world:latest"
}

There are some problems with the template originally used in the question.
Firstly, it uses short-running hello-world containers, which leads Terraform to report at least one of the containers as having exited with an error.
Then, with the important help of @Alain O'Dea (see the relevant answer and comments), I created the following modified template, which works and accomplishes my goal.

# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image = docker_image.world_image.latest
  name  = "test"
}

resource "docker_image" "world" {
  name = "prom/prometheus:latest"
}

resource "docker_image" "world_image" {
  provider = docker.worker
  name = "nextcloud:latest"
}


Code Bug Fix: How to generate reports for Azure Log Activity Data using python for any resource alongwith ‘Tags’?

Original Source Link

My colleague used the PowerShell query below to retrieve log data for the past 4 days, excluding today, matching operations on the resources and collecting fields such as EventTimeStamp, Caller, and SubscriptionId.

Get-AzureRmLog -StartTime (Get-Date).AddDays(-4) -EndTime (Get-Date).AddDays(-1) | Where-Object {$_.OperationName.LocalizedValue -match "Start|Stop|Restart|Create|Update|Delete"} |
Select-Object EventTimeStamp, Caller, SubscriptionId, @{name="Operation"; Expression = {$_.operationname.LocalizedValue}},

I am a newbie to Azure and want to generate a report where I can also fetch the 'Tags' name and value against a resource for the past 90 days. What would the PowerShell query for this be? Can I also use Python to query this data? I tried searching the documentation and was unable to dig into it, so if anybody could redirect me to the right place it would be helpful.

First of all, you should know that not all Azure resources can have tags, so you should account for this in your code. Please refer to Tag support for Azure resources to check which Azure resources support tags.

For the PowerShell query, I suggest using the new Azure PowerShell Az module instead of the old AzureRM module.

Here is a simple PowerShell example with the Az module. For testing purposes, I just show how to fetch tags and add them to the output. Please feel free to change it as per your requirements.

#for testing purpose, I just get the azure activity logs from a specified resource group
$mylogs = Get-AzLog -ResourceGroupName "a resource group name"

foreach($log in $mylogs)
{   

    if(($log.Properties.Content.Values -ne $null))
    {
        #the tags is contains in the Properties of the log entry.
        $s = $log.Properties.Content.Values -as [string]
        if($s.startswith("{"))
        {
            $log | Select-Object EventTimeStamp, Caller, SubscriptionId,@{name="Operation"; Expression = {$_.operationname.LocalizedValue}}, @{name="tags"; Expression = {($s | ConvertFrom-Json).tags}}      
        }
        #if it does not contains tags.
        else
        {
            $log | Select-Object EventTimeStamp, Caller, SubscriptionId,@{name="Operation"; Expression = {$_.operationname.LocalizedValue}}, @{name="tags"; Expression = {""}}
        }
    }
    #if it does not contains tags.
    else
    {
        $log | Select-Object EventTimeStamp, Caller, SubscriptionId,@{name="Operation"; Expression = {$_.operationname.LocalizedValue}}, @{name="tags"; Expression = {""}}
    }

    Write-Output "************************"
}

The test result: (screenshot from the original post omitted)

For Python, you can take a look at this GitHub issue, which introduces how to fetch logs from Azure activity logs, but you will need to do some research on how to add tags to the output.

Hope it helps.


Code Bug Fix: Curl producing JSON error saving JS file in Terraform data


I have a test.js file that I am trying to pull in my Terraform folder like:

data "external" "file" {
    program = ["curl", "-o", "Content-Type:application/js", "${path.pwd}/testing.js", "https://raw.github.com/test/test.js"]
}

Keep getting command “curl” produced invalid JSON: unexpected end of JSON input.
When I try to curl locally on my PC it works.

The problem isn't that curl is failing; it's that you aren't producing output that Terraform can consume. You must produce a JSON blob. Here's an external script that I use:

data "external" "terraform_versions" {
  program = ["scripts/terraform_versions.sh"]
}

#!/bin/bash

BRANCH=$(git rev-parse --abbrev-ref HEAD)
COMMIT=$(git rev-parse HEAD)

jq -n --arg branch "${BRANCH}" --arg commit "${COMMIT}" '{"branch":$branch, "commit":$commit}'

Which outputs data as:

{
  "branch": "master",
  "commit": "abc123deadbeef..."
}

Which terraform CAN consume:

resource "aws_ssm_parameter" "terraform_branch" {
  name = "/${terraform.workspace}/tf/branch"
  type = "SecureString"
  value = data.external.terraform_versions.result["branch"]
  overwrite = true
}

resource "aws_ssm_parameter" "terraform_commit" {
  name = "/${terraform.workspace}/tf/commit"
  type = "SecureString"
  value = data.external.terraform_versions.result["commit"]
  overwrite = true
}

More here

EDIT: I’ve personally never used it, but there IS an http provider that might fit your need better. See here

The external data source is a last resort for when there isn’t some other data source that can meet your needs in Terraform.

In this case it seems like you want to obtain JSON data from a remote HTTP endpoint. If so, it’ll be less complicated to use the http data source:

data "http" "example" {
  url = "https://raw.github.com/test/test.js"
}

You can then use data.http.example.body to work with the data returned from that URL.

The URL you mentioned doesn’t actually work, so I assume you have some other URL you intend to use in practice. If the URL in question returns JavaScript then you’ll presumably just be working with it as a plain string, because Terraform has no built-in functionality for parsing or otherwise processing JavaScript code.

If the real URL you are trying to access actually returns JSON rather than JavaScript, then you can access the data structure represented by that JSON by using the jsondecode function. For example, you might assign the result to a named local value so you can use it from multiple locations elsewhere in the module:

locals {
  example_data = jsondecode(data.http.example.body)
}

Elsewhere in your module, local.example_data will then refer to the value that resulted from decoding the JSON data, assuming that the body did indeed contain valid JSON. If it does not, then you’ll see a similar error than you saw with the external data source: the jsondecode function can only decode valid JSON data.
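Both jsondecode and the external data source fail on the same condition: the body isn't valid JSON. A quick command-line sketch of checking that before wiring a URL into Terraform, with inline sample strings standing in for the real response body:

```shell
#!/bin/sh
# Valid JSON: jq prints its top-level type and exits 0.
echo '{"branch":"master"}' | jq -e type        # prints "object"

# JavaScript is not JSON, so jq fails to parse it and exits non-zero;
# this is the same condition that trips jsondecode and the external
# data source inside Terraform.
echo 'var x = 1;' | jq -e type 2>/dev/null || echo "not JSON"
```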


Making Game: Terraform – huaweicloudstack provider – unable to login with ssh key


With the help of Terraform and the huaweicloudstack provider I created an ECS instance with a specific SSH key, but I cannot log in with the key, nor with the admin password. If I create the machine manually through the web console, everything works. The web console also shows the correct SSH key.

Is there something that needs to be done after machine creation?

resource "huaweicloudstack_compute_instance_v2" "testsrv" {
  name              = "basic"
  flavor_name       = "s3.small.1"
  key_pair          = "authorized-key"
  admin_pass        = "somepass"
  security_groups   = ["default", "base"]
  availability_zone = "az1.dc0"
  user_data         = "#cloud-config\nhostname: basic\nfqdn: basic"

  network {
    port = huaweicloudstack_networking_port_v2.port_1.id
  }

  block_device {
    uuid                  = huaweicloudstack_blockstorage_volume_v2.volume_1.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }
}

Thank you for your help.

Quick solution: set config_drive = true on the instance.

Reason:
Without config_drive = true, the instance tries to fetch its configuration over the network. The cloud-init version built into the newest CentOS has a bug:

failed run of stage init                                                                 
------------------------------------------------------------                             
Traceback (most recent call last):                                                       
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 652, in status_wrapper                                                                                       
    ret = functor(name, args)                                                            
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 362, in main_init  
    init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))               
  File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 649, in apply_network_config                                                                                   
    netcfg, src = self._find_networking_config()                                         
  File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 636, in _find_networking_config                                                                                
    if self.datasource and hasattr(self.datasource, 'network_config'):      
  File "/usr/lib/python3.6/site-packages/cloudinit/sources/DataSourceOpenStack.py", line 115, in network_config                                                                    
    self.network_json, known_macs=None)     
  File "/usr/lib/python3.6/site-packages/cloudinit/sources/helpers/openstack.py", line 645, in convert_net_json                                                                    
    'Unknown network_data link type: %s' % link['type'])                                 
ValueError: Unknown network_data link type: cascading

The newest CentOS ships cloud-init 18.5, while Ubuntu ships 20.1, where this bug is fixed, so cloud-init runs without errors.

The reason I was able to start the instance manually through the console is that Huawei's OpenStack uses the config-drive module, not the network. So the CD-ROM finding was actually the solution.

So there are two options:

  1. Use config_drive = true
    OpenStack will generate a CD drive and mount it into the instance. Easy.
  2. Update cloud-init in CentOS
    Start the instance manually, get in, update cloud-init, make an image from the instance, and then use this new image. Harder.
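To tell which side of the fix a given image is on, compare its cloud-init version against 20.1 (per the versions above, 18.5 carries the bug and 20.1 does not). A rough sketch; on a real instance the installed version would come from running cloud-init --version, and the value below is a hypothetical stand-in:

```shell
#!/bin/sh
# Hypothetical installed version; on a real instance, take it from
# the output of: cloud-init --version
INSTALLED="18.5"
FIXED="20.1"

# sort -V orders version strings numerically; if the installed version
# does not sort at or above the fixed one, the image carries the bug.
NEWEST=$(printf '%s\n%s\n' "$INSTALLED" "$FIXED" | sort -V | tail -n 1)
if [ "$NEWEST" = "$INSTALLED" ]; then
  echo "cloud-init $INSTALLED should include the fix"
else
  echo "cloud-init $INSTALLED predates $FIXED: use config_drive or update"
fi
```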
