Server Bug Fix: How to post logs to CloudWatch with the CLI?

I'm trying to create a bash script that checks the status of a website. I'm using these commands:

This one creates the log stream:

aws logs create-log-stream --log-group-name "WebsiteStatusMessage" --log-stream-name $timestamp

This other one posts the log events:

aws logs put-log-events --log-group-name "WebsiteStatusMessage" --log-stream-name "$timestamp" --log-events file://$iam/logsoutput.json

logsoutput.json

 [ 
   { 
     "timestamp": 202006041832, 
     "message": "test event1" 
   } 
 ]

When I execute the command or the script, it shows this message:

{
    "rejectedLogEventsInfo": {
        "tooOldLogEventEndIndex": 1
    }
}

It also prints this other message on the console:

Expecting property name enclosed in double quotes: line 6 column 2 (char 84)

Bash script code

#!/bin/bash

#variables
iam=$(pwd)
timestamp=$(date +"%Y%m%d%H%M")
instance_id="i-######"

#Read the file line by line and store it in an array
getArray()
{
array=()
  while IFS= read -r line
  do
       array+=("$line")
   done < "$1"
}
getArray "$iam/sites.txt"


#working
for url in "${array[@]}"
do
  echo "The website is: $url"
  STATUS=$(curl -s -o /dev/null -w "%{http_code}\n" $url)
    if [ "$STATUS" == "200" ] || [ "$STATUS" == "301" ] || [ "$STATUS" == "302" ]; then
        echo "$url is up, returned $STATUS"
    else
        echo "$url is not up, returned $STATUS"
###
# This will send the metric to metrics Cloudwatch
###

        rm $iam/metricsOutput.json
        echo " [ " >> $iam/metricsOutput.json
        echo " { " >> $iam/metricsOutput.json
        echo " "MetricName": "SiteStatus", " >> $iam/metricsOutput.json        
        echo " "Timestamp": "$timestamp", " >> $iam/metricsOutput.json
        echo " "Value": 1, " >> $iam/metricsOutput.json
        echo " } " >> $iam/metricsOutput.json
        echo " ] " >> $iam/metricsOutput.json


        aws cloudwatch put-metric-data --namespace "Custom-2" --metric-data file://$iam/metricsOutput.json


###
# This sends the message to logstream on Cloudwatch
###
        rm $iam/logsoutput.json
        echo " [ " >> $iam/logsoutput.json
        echo " { " >> $iam/logsoutput.json
        echo " "timestamp": $timestamp, " >> $iam/logsoutput.json
        echo " "message": "test event1" " >> $iam/logsoutput.json
        echo " } " >> $iam/logsoutput.json
        echo " ] " >> $iam/logsoutput.json

        aws logs create-log-stream --log-group-name "WebsiteStatusMessage" --log-stream-name $timestamp
        aws logs put-log-events --log-group-name "WebsiteStatusMessage" --log-stream-name "$timestamp" --log-events file://$iam/logsoutput.json


    fi
done

I tried different JSON structures but still nothing. Any idea?
(The AWS CLI has full CloudWatch permissions.)

I haven't tested it, but I think the timestamp should be a Unix timestamp (seconds since 1970-01-01 00:00:00), not a date string, i.e. $(date +%s), and quite possibly in millisecond precision, so append 000 at the end.
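For example, a minimal sketch of what the script above could do instead, assuming GNU date (the log group and file names follow the original script):

# Milliseconds since the epoch, which is what put-log-events expects
ts_ms=$(($(date +%s) * 1000))

# Build the events file with properly quoted JSON
cat > "$iam/logsoutput.json" <<EOF
[
  { "timestamp": $ts_ms, "message": "test event1" }
]
EOF

aws logs create-log-stream --log-group-name "WebsiteStatusMessage" --log-stream-name "$ts_ms"
aws logs put-log-events --log-group-name "WebsiteStatusMessage" --log-stream-name "$ts_ms" --log-events file://$iam/logsoutput.json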

Hope that helps 🙂

I'd suggest you separate the two functions of your script. You can make the script handle writing the logs into the .json files, and install the CloudWatch agent on the instance. When you configure the CloudWatch agent, you can tell it to include your custom log folder and it will push everything in a clean fashion to CloudWatch. There are some minor charges for using the agent because of the granularity, so definitely check out the pricing for it. It's a much better solution than using a CLI command in a bash script to manually push your logs. If there's a managed service for something, always use that; that's my motto at least.
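For reference, a rough sketch of what that agent configuration might look like, assuming the script writes its status lines to a local log file (the file path and log group name here are placeholders, not from the original post):

# Illustrative only: write an agent config that tails a local status log
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json >/dev/null <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/website-status.log",
            "log_group_name": "WebsiteStatusMessage",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF

# Load the config and (re)start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -s \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json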

Server Bug Fix: Parameter parsing when using AWS SSM send-command from Terraform

I am using AWS SSM send-command to run a PowerShell script on an AWS instance.

The following command works fine in a command shell, but when called from Terraform it gets an error.

aws ssm send-command --instance-ids ${self.id} --document-name AWS-RunPowerShellScript --parameters commands='C:\Installers\bootstrap_test.ps1 >test'

When called in Terraform using:

provisioner "local-exec" {
  command = "aws ssm send-command --instance-ids ${self.id} --document-name AWS-RunPowerShellScript --parameters commands='C:\Installers\bootstrap_test.ps1 test' --output-s3-bucket-name ellucian-ecrm-lab --output-s3-key-prefix bootstraplogs"
}

The error returned is:

exit status 255. Output: usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

aws help
aws <command> help
aws <command> <subcommand> help

Unknown options: test’

So I don't think Terraform is parsing the string the way AWS needs it. What can I do to format this string correctly in Terraform?

Try double-quoting the commands, e.g.:

... --parameters commands='"C:\Installers\bootstrap_test.ps1 > test"'

Here is the example with the loop:

aws ssm send-command --instance-ids "i-01234" --document-name "AWS-RunPowerShellScript" --query "Command.CommandId" --output text --parameters commands='"While ($i -le 100) {$i; $i += 1}"'

Or the opposite, e.g.

--parameters commands="'While ($i -le 100) {$i; $i += 1}'"

Server Bug Fix: Amazon SES data via AWS S3: Is there a simple way to list and download a folder and get line counts?

I've set up Amazon SES to send a company announcement to a list of about 1,000 contacts. I set up the Kinesis Firehose to log all e-mail events (e.g., Send, Bounce, Click) to a bucket in S3. SES seems to provide tools for bulk analysis of massive numbers of e-mails, but I want to see results for each individual recipient. I don't know if I missed something, but the only way I found to do that was to download the files from S3 and parse them in a spreadsheet. I've developed a pretty complex spreadsheet to do that.

The files are stored in S3 in a folder hierarchy by month, day, and hour. The S3 console allows me to download each file individually by navigating through the folder tree manually and right-clicking on each file. The S3 console documentation says:

You can download a single object per request using the Amazon S3 console. To download multiple objects, use the AWS CLI, AWS SDKs, or REST API.

I’ve become familiar with the AWS SDK for PHP and am using it to send the e-mails in SES. The S3 Developer Guide has instructions to Get an Object Using the AWS SDK for PHP. It doesn’t seem to have instructions for getting multiple objects, and I guess I can accomplish that by writing a loop through the folders and files.

I have not installed the AWS CLI. There’s a Server Fault answer that seems to say that one can download a folder by the CLI command sync.

So what it looks like right now is that in order to download all the files in a folder, I either need to write an SDK program or install the CLI and learn the sync command. Either one of those seems like a lot of work for something I can do in Windows with a mouse drag or in Filezilla with a double click of the mouse. Am I missing something, or do I really need to do all that work just to download the files in a folder tree?

Windows and Filezilla also easily allow me to see a whole folder tree at once with all the files in each folder. The S3 console only allows me to see one subfolder at a time. Again, do I need to write an SDK program or learn the CLI just to get a listing of a folder tree?

While I’m asking those two questions, something else that would be helpful would be to see the number of lines in each file because each line represents an SES event. I’ll have that information easily from my analysis after I have the files, but I’m surprised that SES doesn’t seem to give me a way to see the number of events except by doing that analysis. Is that correct, or have I overlooked something in SES that would give me that information?

And one final question: All of the above would be unnecessary if I could simply ask SES to give me a dump of all the event data. The only way I’ve found to get those data is by downloading those S3 files, which I then have to merge into my spreadsheet. So again, have I overlooked something in SES that would allow me to get all event data without going through all this rigmarole in S3?

You can simply use Cyberduck to list and download files, or use Athena to analyze your data directly in S3.
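That said, if you do install the AWS CLI, the whole folder tree can be listed, mirrored locally, and line-counted with a few commands (a sketch; the bucket and prefix names are placeholders):

# List the whole tree, all subfolders at once
aws s3 ls s3://my-ses-events-bucket/firehose/ --recursive

# Mirror the folder hierarchy to a local directory
aws s3 sync s3://my-ses-events-bucket/firehose/ ./ses-events/

# Each line in a delivery file is one SES event, so line counts give event counts per file
find ./ses-events -type f -exec wc -l {} +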

Code Bug Fix: ParallelizationFactor in kinesis event source mapping

I'm looking to connect a Kinesis stream to a Lambda function via an event source mapping, and I want to set the parallelization-factor to a value between 1 and 10, as suggested in the documentation:

https://docs.aws.amazon.com/cli/latest/reference/lambda/create-event-source-mapping.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-eventsourcemapping.html#cfn-lambda-eventsourcemapping-startingposition

and an example at https://aws.amazon.com/blogs/compute/new-aws-lambda-scaling-controls-for-kinesis-and-dynamodb-event-sources/

The following command results in an error:

aws lambda create-event-source-mapping --function-name myLambdaFunction \
    --parallelization-factor 2 --batch-size 100 --starting-position LATEST \
    --event-source-arn arn:aws:kinesis:eu-west-1:id:stream/mystreamname

Unknown options: --parallelization-factor, 2

If I look at the AWS CLI aws lambda create-event-source-mapping help, there is no option for --parallelization-factor

SYNOPSIS
            create-event-source-mapping
          --event-source-arn <value>
          --function-name <value>
          [--enabled | --no-enabled]
          [--batch-size <value>]
          [--starting-position <value>]
          [--starting-position-timestamp <value>]
          [--cli-input-json <value>]
          [--generate-cli-skeleton <value>]

There is a difference between the documentation and the AWS CLI help. Is the parallelization factor not enabled for Kinesis? What am I doing wrong?

PS: I have tried in eu-west-1 and us-east-1 regions.
I am getting the same error if I try to set --maximum-batching-window-in-seconds.

Check the version of the AWS CLI you're using, because I can find the --parallelization-factor option when I try the below command:

aws lambda create-event-source-mapping help

[screenshot: create-event-source-mapping help output showing --parallelization-factor]

This is the version I'm using:

[screenshot: aws --version output]
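In other words, an older CLI simply doesn't know the option yet. A rough sketch of checking and upgrading (the upgrade path depends on how the CLI was installed; pip is shown here as one possibility):

# See which CLI version is installed
aws --version

# One way to upgrade a pip-installed CLI v1
pip3 install --upgrade awscli

# A recent CLI accepts the option from the question
aws lambda create-event-source-mapping --function-name myLambdaFunction \
    --parallelization-factor 2 --batch-size 100 --starting-position LATEST \
    --event-source-arn arn:aws:kinesis:eu-west-1:id:stream/mystreamname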

Server Bug Fix: How do I use the AWS CLI to find the uptime of servers?

I've reviewed the reference documentation for the AWS CLI, and I know how to use the aws ec2 describe-instances command. Is there a variation that lists the uptime (not the creation time) of the servers? The OS uptime is what would be useful. It seems like this could be obtained via an AWS CLI command.

There is no specific ‘uptime’ measure in Amazon EC2.

If AWS Config has been configured, you could use get-resource-config-history to retrieve the history of an instance, eg:

aws configservice get-resource-config-history --resource-type AWS::EC2::Instance --resource-id i-1a2b3c4d

AWS Config will show the state change of an Amazon EC2 instance (eg stopped, running) as a Configuration.State.Name value. The change will also have a timestamp.

Using this configuration history, you could piece together enough information to calculate uptime.

Alternatively, you could calculate the uptime from within the instance (eg from system logs or via a custom app) rather than obtaining it from EC2.

From the AWS documentation, you are probably after LaunchTime:

aws ec2 describe-instances --query "Reservations[].Instances[].[LaunchTime]"

From there, you can calculate the offset from the current date and time to find the uptime.
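For instance, a rough sketch of turning LaunchTime into an elapsed-time figure (assuming GNU date; the instance ID is a placeholder), bearing in mind this reflects time since launch rather than true OS uptime across stops and starts:

launch=$(aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query "Reservations[0].Instances[0].LaunchTime" --output text)

# Convert both timestamps to epoch seconds and take the difference
launch_s=$(date -d "$launch" +%s)
now_s=$(date +%s)
echo "Up for $(( (now_s - launch_s) / 3600 )) hours"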

Server Bug Fix: Accessing aws cli from a private network

Do you know where I can find a comprehensive list of IPs that need to be whitelisted in order to be able to use the AWS CLI?

The scenario: my orchestrator runs on an on-prem server without access to the internet; I can get access by asking the networking folks to whitelist endpoints. I need to be able to fire AWS CLI commands from this private server. Any ideas? Thanks!

AWS publishes their public IP ranges here: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html.

I don't think the list is static, so I would write a script to fetch the IPs. It should probably be enough to whitelist only the IPs for the region you intend to use.
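For example, a quick sketch of pulling the published ranges and filtering them (the region and service values are illustrative; adjust to what you actually use):

# Fetch the published AWS IP ranges and keep one region/service combination
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
    | jq -r '.prefixes[] | select(.region == "eu-west-1" and .service == "AMAZON") | .ip_prefix'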

Try checking this; I think it's what you are looking for:

AWS service endpoints

As these update frequently, a quick Google search for "AWS service endpoints" will get the latest results.

Code Bug Fix: Is there any way to create multiple profiles in Azure just like we do in AWS? (I'm a newbie to Azure/AWS) [closed]

In AWS we have the concept of creating different profiles with the configure command:

aws configure set key value --profile profile-name

So is there any similar way to do this in Azure through the Azure CLI?

Code Bug Fix: How I can find file downloaded from s3 in Docker container?

I am writing a Dockerfile in which I try to download a file from S3 using the AWS CLI and ADD it to the Docker container, as below, following this page:

FROM nrel/energyplus:8.9.0

RUN apt-get update && \
    apt-get install -y \
        python3 \
        python3-pip \
        python3-setuptools \
        groff \
        less \
    && pip3 install --upgrade pip \
    && apt-get clean

RUN pip3 --no-cache-dir install --upgrade awscli

ADD . /var/simdata
WORKDIR /var/simdata

ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_REGION=ap-northeast-1

RUN aws s3 cp s3://file/to/path .

RUN mkdir -p /var/simdata

ENTRYPOINT [ "/bin/bash" ]

After running docker build -it <container id>, I expected to find the file downloaded from S3 in the container, but I could not.
Does anyone know where I can find this file downloaded from S3?

The file should be in /var/simdata.

Since you do not change your working directory before downloading the file from S3, your image uses the nrel/energyplus:8.9.0 default working directory:

$ docker pull nrel/energyplus:8.9.0
$ docker inspect --format '{{.ContainerConfig.WorkingDir}}' nrel/energyplus:8.9.0
/var/simdata
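To double-check where the file actually landed, you could rebuild and list that directory (a sketch; the image tag and credential values are placeholders):

docker build -t simdata-test \
    --build-arg AWS_ACCESS_KEY_ID=AKIA... \
    --build-arg AWS_SECRET_ACCESS_KEY=... .

# ENTRYPOINT is /bin/bash, so -c and the command are passed straight to bash
docker run --rm simdata-test -c "ls -l /var/simdata"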

A better approach would be to download the file from S3 to your local machine and, in the Dockerfile, copy that file to /var/simdata:

COPY file /var/simdata/file

This will give you more control.

Server Bug Fix: Is there any way of viewing, in AWS, what ips in a subnet have been allocated?

Is there any way of seeing what IP addresses AWS thinks have been allocated in a subnet? I've run a ping scan and I've checked our internal IP management software, and there should be more than 8 IPs free; however, the Network Load Balancer creation wizard insists that I have fewer than 8 IPs free.

It'd be super awesome if I could see what IPs Amazon thinks we're using, so I can see what the discrepancy is, but I don't see any way of doing that. Anyone know how I could do this? It needs to show all the allocated IPs, not just the ones attached to EC2 instances.

Your count may be off because AWS reserves five IP addresses per subnet CIDR block: the first four IP addresses in the CIDR block and the last IP address in the block are kept for its internal networking.

For example in a 10.0.0.0/24 subnet AWS will reserve:

10.0.0.0: Network address.

10.0.0.1: Reserved by AWS for the VPC router.

10.0.0.2: Reserved by AWS. The IP address of the DNS server is the base of the VPC network range plus two. For VPCs with multiple CIDR blocks, the IP address of the DNS server is located in the primary CIDR. We also reserve the base of each subnet range plus two for all CIDR blocks in the VPC. For more information, see Amazon DNS server.

10.0.0.3: Reserved by AWS for future use.

10.0.0.255: Network broadcast address. We do not support broadcast in a VPC, therefore we reserve this address.

To get a list of the IP addresses in use from the command line:

aws ec2 describe-network-interfaces --filters Name=subnet-id,Values=<subnet id> |jq -r '.NetworkInterfaces[].PrivateIpAddress' |sort

Using the subnet-id filter allows you to exclude the subnets you’re not concerned about.

If you want a count just replace sort with wc -l.

You can visually see the number of IPs free per subnet in the VPC -> Subnets section of the AWS Console.
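The same free-address count is also available from the CLI, for example (a sketch; add a subnet-id filter if you only care about specific subnets):

aws ec2 describe-subnets --query "Subnets[].[SubnetId,CidrBlock,AvailableIpAddressCount]" --output table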

References

describe-network-interfaces
AWS VPC Subnets

In the EC2 console, go to the Network Interfaces view down in the left-hand column. It will show all the network interfaces allocated not only to EC2 instances but also to Fargate, RDS, VPC Lambdas, NAT Gateways, etc.

Note also that there are a couple of IPs reserved in each VPC and subnet (IGW, AWS DNS, etc.). IIRC it's the first 5 IPs that are reserved. These will not show in the list above.

Hope that helps 🙂

Code Bug Fix: How to get AWS SES template to use parameters in the “HTMLPart”:

How can I get the {{name}} and {{favoriteanimal}} parameters to work? I have a custom HTML file that has been escaped for use in the JSON format. When I add my parameters to the HTML, as in the template below, my emails do not send; when I take the parameters out of my HtmlPart, the email sends just fine. I can successfully get the subject {{name}} parameter to work, but I am struggling to get the HTML parameters to work. I do have my parameters in the TemplateData section of the JSON file I use to send the email.

{
      "Template": {
        "TemplateName": "MyTemplate",
        "SubjectPart": "Greetings, {{name}}!",
        "HtmlPart": "<h1>Hello {{name}},</h1><p>Your favorite animal is {{favoriteanimal}}.</p>",
        "TextPart": "Dear {{name}},rnYour favorite animal is {{favoriteanimal}}."
      }
    }
