Linux HowTo: Cron scheduled backup script not working or writing to log file

I have a duplicity backup script set to run once per day through the root cron tab.

I set it up like this:

sudo crontab -e -u root

and added the line:

00 03   *   *   *  root   /home/[myusername]/Scripts/Backups/DuplicityBackup.sh

the script has commands to write to a text file like this:

echo "Creating backup of data directory"
PASSPHRASE='[mypassword]' duplicity /usr/share/nginx/data file:///dysonbackup/datadir &>> /home/[myusername]/Scripts/Backups/backup_logs/data_backup_log.txt

echo "Done"

Running the script itself as root starts the backup and writes to the text file.

Running

sudo crontab -l

shows it listed

Running

grep DuplicityBackup.sh /var/log/syslog

shows

May 28 03:00:01 [myservername] CRON[89749]: (root) CMD (root   /home/[myusername]/Scripts/Backups/DuplicityBackup.sh)

so I am assuming it at least triggered to run, but there is no new information in the text file.

And it does not appear that the duplicity backup is running as there are no new files in the backup directory.

Ubuntu 20.04

duplicity 0.8.12

Excessive field

sudo crontab -e -u root edits the crontab for the user root. This is different from the system-wide crontab (/etc/crontab). The former uses entries of the form

m h  dom mon dow  command

and the latter uses

m h  dom mon dow  user  command

You used the latter syntax where you should have used the former. The tool tried to run

root   /home/myusername/Scripts/Backups/DuplicityBackup.sh

Solution: either move your job to /etc/crontab as-is; or fix the syntax by removing root.
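For reference, a sketch of the corrected per-user crontab entry (six fields only, no user column; the bracketed username is the question's placeholder):

```shell
00 03   *   *   *  /home/[myusername]/Scripts/Backups/DuplicityBackup.sh
```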

If that is not enough, keep reading.


No shebang?

Does DuplicityBackup.sh contain a shebang that specifies Bash as the interpreter?

cron will run /home/myusername/Scripts/Backups/DuplicityBackup.sh in sh (on Ubuntu, /bin/sh is dash). If there is no shebang the situation is complicated in general, but sh should run it in sh, regardless of which implementation provides sh.

When you run the script from Bash, it gets run in Bash.

The &>> you used is a Bashism. In Bash, &>> file is equivalent to >> file 2>&1, and this is what you want. In sh it is parsed as & >> file, and this is not what you want.

Solution: use a shebang.
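A minimal sketch of the fixed script (the log path and commands are illustrative, not the author's exact script). With the shebang on line one, cron hands the file to Bash; the redirection below uses the POSIX-portable spelling, which behaves identically to Bash's &>> even if the script somehow runs under sh:

```shell
#!/bin/bash
# Hypothetical skeleton of DuplicityBackup.sh -- the shebang ensures Bash
# runs the script even when cron invokes it.
LOG=/tmp/data_backup_log.txt   # illustrative path, not the author's

echo "Creating backup of data directory"
# '>> "$LOG" 2>&1' is the portable equivalent of Bash's '&>> "$LOG"':
# it appends both stdout and stderr to the log file.
{ echo "backup output"; echo "backup warning" >&2; } >> "$LOG" 2>&1
echo "Done"
```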


Inaccessible /home/myusername?

/home/myusername may not be available because:

  • /home/myusername is automatically unmounted when myusername logs out (especially possible if it’s encrypted);

  • or /home/myusername is mounted as FUSE without allow_other and therefore cron running as root cannot access it;

  • or cron is run by systemd and the service (cron.service) uses ProtectHome= or InaccessiblePaths= to restrict access to /home/myusername (not likely though).

Solution: do not use /home/myusername with cron jobs run by root.


Server Bug Fix: Newline-separated xargs

Is it possible to make xargs use only newline as separator? (in bash on Linux and OS X if that matters)

I know -0 can be used, but it’s a PITA as not every command supports NUL-delimited output.

Something along the lines of

alias myxargs='perl -p -e "s/\n/\0/;" | xargs -0'
cat nonzerofile | myxargs command

should work.

GNU xargs (default on Linux; install findutils from MacPorts on OS X to get it) supports -d which lets you specify a custom delimiter for input, so you can do

ls *foo | xargs -d '\n' -P4 foo 

With Bash, I generally prefer to avoid xargs for anything the least bit tricky, in favour of while-read loops. For your question, while read -ar LINE; do ...; done does the job (remember to use array syntax with LINE, e.g., ${LINE[@]} for the whole line). This doesn’t need any trickery: by default read uses just \n as the line terminator character.
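A self-contained sketch of that pattern, with sample input inlined (the plain read -r form shown here is POSIX-portable; in Bash the array form read -ar LINE from above also works):

```shell
# Process newline-delimited input one line at a time, no xargs needed.
# IFS= keeps leading/trailing whitespace; -r keeps backslashes literal.
printf 'alpha beta\ngamma delta\n' | while IFS= read -r line; do
    printf 'got: %s\n' "$line"
done
```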

I should post a question on SO about the pros & cons of xargs vs. while-read loops… done!

Not with the standard version. I have a hacked version that does that – I used it before I knew about ‘find ... -print0 | xargs -0 ...‘. Contact me if you want a copy – see my profile. (No rocket science – it just about does the job, though. It is not a complete xargs replica.)

$ echo -e "1\n2 3\n4 5 6" | xargs -L1 echo  "#"
# 1
# 2 3
# 4 5 6

What about cat file | xargs | sed 's/ /\n/g'? This will convert spaces to newlines, using standard Linux bash tools.
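If the sed escape is a concern (BSD sed does not understand \n in the replacement), tr does the same conversion portably; a quick sketch:

```shell
# Convert spaces to newlines with tr instead of sed -- works on Linux and OS X.
printf 'a b c\n' | tr ' ' '\n'
```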

Ubuntu HowTo: rbash does not work when set as the default shell for a user

Even though /etc/passwd and $SHELL confirm that the default shell is rbash, the restrictions described in the rbash manual do not apply.
For example, I can change directory with cd, I can change the PATH, etc.

However, when I manually type bash -r, all restrictions are in place.
I’m using Ubuntu 18.04.4 LTS

Any ideas what is causing this behaviour?
Is there any workaround for Ubuntu as it seems to be working on CentOS?

You need to restart the user session, either by logging out and back in or by rebooting the machine, to make the changes take place.
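To confirm what shell is actually recorded as the account's login shell, you can inspect the passwd entry (shown here for root purely as an illustration; substitute the restricted user's name):

```shell
# Print the login-shell field (the 7th colon-separated field) of the passwd entry.
getent passwd root | cut -d: -f7
```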

Server Bug Fix: extract information from a log file

I wrote a bash script to extract a few items, like the IP addresses that make the most connection attempts. Now I want to limit all of this to a time range, let's say the last 5 days/hours.

Example of what I wrote :

-G: Which IP address makes the most number of connection attempts?

if [ "$3" = "-G" ]; then 
  # I won't write the whole code!
  echo "IP address that makes the most connection attempts: \n"
  awk '{print $1}' $4 | sort | uniq -c | sort -r -n | awk '{print $2 "\t" $1}' >> Connections 
  head -"$2" Connections
  rm Connections
fi

Now I want to add these items:

-O: Limit it to last number of hours

-P: Limit it to the last number of days

and I run it like this : sh -O -P -G *.log

example log file:

213.46.27.204 - - [15/Sep/2011:22:55:21 +0100]
213.46.27.204 - - [16/Sep/2011:22:55:21 +0100]
213.46.27.204 - - [17/Sep/2011:22:55:21 +0100]
213.46.27.204 - - [18/Sep/2011:22:55:21 +0100]
213.46.27.204 - - [19/Sep/2011:22:55:21 +0100]

please answer just with bash script not python or perl


We have the last date in the log file and we want to extract the last 5 hours/days, so:

I found this script that converts the date to a Unix-recognizable format, but since I have Mac OS X I could not run it 🙁 (do you know why?)

#!/usr/bin/env bash

  temp_date=`cat ./serverlog.log | tail -n1 \
  | cut -d [ -f 2 | cut -d ] -f 1`

  echo "$temp_date"

  temp_date2=`echo $temp_date | \
  sed -e 's/Jan/01/g' \
  -e 's/Feb/02/g' -e 's/Mar/03/g' \
  -e 's/Apr/04/g' -e 's/May/05/g' \
  -e 's/Jun/06/g' -e 's/Jul/07/g' \
  -e 's/Aug/08/g' -e 's/Sep/09/g' \
  -e 's/Oct/10/g' -e 's/Nov/11/g' \
  -e 's/Dec/12/g'`

  echo "$temp_date2"

  temp_year=`echo $temp_date2 | gawk '{print substr($0,7,4)}'`
  temp_month=`echo $temp_date2 | gawk '{print substr($0,4,2)}'`
  temp_day=`echo $temp_date2 | gawk '{print substr($0,1,2)}'`
  temp_time=`echo $temp_date2 | gawk '{print substr($0,12,8)}'`

  #UTC format
  utc_date="$temp_year-$temp_month-$temp_day $temp_time"

  echo "$utc_date"

  # Note: 'date --utc -d' is GNU date syntax; the stock BSD date on
  # Mac OS X does not support -d, which is why this script fails there.
  reference_seconds=`date --utc -d "$utc_date" +%s`

  echo "$reference_seconds"

I recognized that the last step would be to subtract 5 hours/days from the last date

lastdate(converted) − (5*3600) = X, and now we can extract the last 5 hours from the log file.
lastdate(converted) − (5*24*3600) = X, and now we can extract the last 5 days from the log file.

now any idea how to exactly write this?

There’s a script here that will output the lines within a certain date range. Pipe the output to your program and that’s it.

It was not what I wanted; I want the items within the last X
hours/days.

  1. Get the last time with:

    endtime=`tail -1 $3 | awk '{ print substr($4, 2, length($4)-1) }'`
    
  2. Convert it to Epoch time.

  3. starttime is calculated by subtracting the last hours (days) you
    want to filter from the endtime:

    if [ $1 = "-O" ]; then 
        time=`expr $2 \* 60 \* 60`
    fi
    if [ $1 = "-P" ]; then 
        time=`expr $2 \* 24 \* 60 \* 60`
    fi
    
    starttime=`expr $endtime - $time`   # endtime already converted to Epoch seconds
    

Refer to this topic which I mentioned above to complete your task.
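Putting the three steps together, a hypothetical end-to-end sketch (GNU date assumed, matching the Linux side; the sample log data and file name are made up, and time-zone handling is simplified away):

```shell
# Hypothetical sketch: keep only log lines from the last N hours,
# measured from the newest timestamp in the file. GNU date assumed.
logfile=/tmp/sample.log                       # made-up sample data
printf '%s\n' \
  '213.46.27.204 - - [18/Sep/2011:22:55:21 +0100]' \
  '213.46.27.204 - - [19/Sep/2011:22:55:21 +0100]' > "$logfile"
hours=5

to_epoch() {  # "19/Sep/2011:22:55:21" -> seconds since the Epoch
    date --utc -d "$(echo "$1" \
        | sed -E 's#(..)/(...)/(....):#\3-\2-\1 #' \
        | sed -e 's/Jan/01/' -e 's/Feb/02/' -e 's/Mar/03/' -e 's/Apr/04/' \
              -e 's/May/05/' -e 's/Jun/06/' -e 's/Jul/07/' -e 's/Aug/08/' \
              -e 's/Sep/09/' -e 's/Oct/10/' -e 's/Nov/11/' -e 's/Dec/12/')" +%s
}

get_ts() {    # pull the timestamp out of a log line (drops the zone offset)
    echo "$1" | cut -d '[' -f 2 | cut -d ']' -f 1 | cut -d ' ' -f 1
}

endtime=$(to_epoch "$(get_ts "$(tail -n1 "$logfile")")")
starttime=$((endtime - hours * 3600))

recent=$(while IFS= read -r line; do
    ts=$(to_epoch "$(get_ts "$line")")
    [ "$ts" -ge "$starttime" ] && echo "$line"
done < "$logfile")

echo "$recent"
```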

Server Bug Fix: How to post logs to Cloudwatch with CLI?

I'm trying to create a bash script that checks the status of a website. I'm using these commands:

This one creates the log stream:

aws logs create-log-stream --log-group-name "WebsiteStatusMessage" --log-stream-name $timestamp

This other one posts the log events:

aws logs put-log-events --log-group-name "WebsiteStatusMessage" --log-stream-name "$timestamp" --log-events file://$iam/logsoutput.json

logsoutput.json

 [ 
   { 
     "timestamp": 202006041832, 
     "message": "test event1" 
   } 
 ]

And when I execute the command or the script it shows this message:

{
    "rejectedLogEventsInfo": {
        "tooOldLogEventEndIndex": 1
    }
}

And also this other message on console:

Expecting property name enclosed in double quotes: line 6 column 2 (char 84)

Bash script code

#!/bin/bash

#variables
iam=$(pwd)
timestamp=$(date +"%Y%m%d%H%M")
instance_id="i-######"

# Read line by line and store it in an array
getArray()
{
array=()
  while IFS= read -r line
  do
       array+=("$line")
   done < "$1"
}
getArray "$iam/sites.txt"


#working
for url in "${array[@]}"
do
  echo "The website is: $url"
  STATUS=$(curl -s -o /dev/null -w "%{http_code}\n" $url)
    if [ "$STATUS" == "200" ] || [ "$STATUS" == "301" ] || [ "$STATUS" == "302" ]; then
        echo "$url is up, returned $STATUS"
    else
        echo "$url is not up, returned $STATUS"
###
# This will send the metric to metrics Cloudwatch
###

        rm $iam/metricsOutput.json
        echo " [ " >> $iam/metricsOutput.json
        echo " { " >> $iam/metricsOutput.json
        echo " \"MetricName\": \"SiteStatus\", " >> $iam/metricsOutput.json        
        echo " \"Timestamp\": \"$timestamp\", " >> $iam/metricsOutput.json
        echo " \"Value\": 1, " >> $iam/metricsOutput.json
        echo " } " >> $iam/metricsOutput.json
        echo " ] " >> $iam/metricsOutput.json


        aws cloudwatch put-metric-data --namespace "Custom-2" --metric-data file://$iam/metricsOutput.json


###
# This sends the message to logstream on Cloudwatch
###
        rm $iam/logsoutput.json
        echo " [ " >> $iam/logsoutput.json
        echo " { " >> $iam/logsoutput.json
        echo " \"timestamp\": $timestamp, " >> $iam/logsoutput.json
        echo " \"message\": \"test event1\" " >> $iam/logsoutput.json
        echo " } " >> $iam/logsoutput.json
        echo " ] " >> $iam/logsoutput.json

        aws logs create-log-stream --log-group-name "WebsiteStatusMessage" --log-stream-name $timestamp
        aws logs put-log-events --log-group-name "WebsiteStatusMessage" --log-stream-name "$timestamp" --log-events file://$iam/logsoutput.json


    fi
done

I tried different JSON structures but still nothing. Any ideas?
(The AWS CLI has full CloudWatch permissions.)

Haven’t tested, but I think the timestamp should be a Unix timestamp (seconds since 1970-01-01 00:00:00), not a date, i.e. $(date +%s), and quite possibly in millisecond precision, so append 000 at the end.

Hope that helps 🙂
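A sketch of that suggested fix inside the script (GNU date assumed; this event timestamp is separate from the YYYYMMDDHHMM value the script uses as the stream name):

```shell
# CloudWatch Logs expects the event timestamp as milliseconds since the
# Epoch, not a human-readable value like 202006041832.
event_ts=$(($(date +%s) * 1000))
echo "$event_ts"
```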

I’d suggest you separate the two functions of your script. You can make your script handle the outputting of the logs into the .json files, and install the CloudWatch agent on the instance. When you configure the CloudWatch agent, you can tell it to include your custom log folder and it will push everything in a clean fashion to CloudWatch. There are some minor charges for using the agent because of the granularity, so definitely check out the pricing for it. It’s a much better solution than using a CLI command in a bash script to manually push your logs. If there’s a managed service for something, always use that – that’s my motto at least.

Linux HowTo: In zsh how can I run a script which is written for bash

I have switched to zsh; how can I run a script which is written for bash?

I don’t want to modify the script to put #!/bin/bash at the beginning of the script, since it is version controlled.

Is there another way?

The line of the script having problem is:

for f in `/bin/ls v/*/s.sh v/*/*/v.sh d/*/*/v.sh 2> /dev/null`

You can run the script with bash manually:

bash myscript.sh

A better and more permanent solution is to add a shebang line:

#!/usr/bin/env bash

Once that line is added, you can run it directly, even in ZShell:

% ./myscript.sh

As far as version control goes, you should commit this line, for the good of all the developers involved.

In addition to the solution commented by @neersighted, you should also make sure your bash script has execute permission so it can run in zsh. Thus, you should first set the first line of your bash script to the shebang:

#!/usr/bin/env bash

Then run:

% chmod +x myscript.sh

to provide such permission. Then you can run it in zsh as:

% ./myscript.sh

Code Bug Fix: Command not found when running inside a SH file

I’m in a weird situation trying to run a Docker image

The image is failing because it cannot find dotnet, so I connected to it to run the commands manually and see what's going on.

My structure is:

  • Docker:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update && apt-get -y install python3-pip
WORKDIR /app
COPY requirements.txt .
COPY publish .
COPY IRT .
COPY start.sh .
RUN chmod u+x ./start.sh 
RUN pip3 install -r requirements.txt
ENTRYPOINT ["sh", "start.sh"]
  • start.sh (simplified to show the problem):
timeout 15s ping google.com | grep -m 1 "icmp_seq=6" && dotnet Gatekeeper.dll

The Problem

If I run sh start.sh, this is the output:

[email protected]:/app# sh start.sh
: not found: start.sh:
64 bytes from 216.58.211.110 (216.58.211.110): icmp_seq=6 ttl=37 time=15.9 ms
  It was not possible to find any installed .NET Core SDKs
  Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from:
      https://aka.ms/dotnet-download

BUT, if I run the command itself without calling the start.sh, it works perfectly:

[email protected]:/app# timeout 15s ping google.com | grep -m 1 "icmp_seq=6" && dotnet Gatekeeper.dll
64 bytes from 172.217.17.78 (172.217.17.78): icmp_seq=6 ttl=37 time=23.1 ms
info: AWSSDK[0]
      Found AWS options in IConfiguration
info: AWSSDK[0]
      Found credentials using the AWS SDK's default credential search
...

I have already tried to use the full path for dotnet (/usr/bin/dotnet) but still it doesn’t work.

Also, something very weird: if I modify the start.sh to contain only dotnet Gatekeeper.dll, it works.

I am not very experienced with Linux sh and have no idea why this is happening.
Can you help me?

Extra:
Why do I need this command?

  • I run 2 servers for the entrypoint, a python server that is required to be ready before the Dotnet start, otherwise I have a kind of “Cold start” error on my containers.
  • I have already a workaround where I modify the c# code to keep trying to connect to for some seconds and then proceed with the startup, but this is not ideal for several reasons.


Linux HowTo: Setting an alias in an ssh session without using bashrc/profile

How can I set an alias on a remote machine over SSH with local ssh config (not editing the remote bashrc/profile files).

Background: I have a remote dev build machine that is shared with a few devs. We only use it for building software and publishing it, but occasionally, we need to make small changes in the code and git commit those. I don’t want ssh keys on the remote machine, so we are using SSH agent forwarding so our local SSH keys are available on the remote machine for git push. But we still need to set the git user.name and user.email on the remote machine. Currently, just after logging in, I execute alias git='git -c "user.name=My Name" -c "[email protected]"' so the git commits are attributed to me. Since we share the same account with a few devs, I don’t want to set that alias in the remote ~/.bashrc file. So is there a way to set that command in my local ~/.ssh/config for the host so it gets executed in the shell when I login?

Ok, from a bunch of different questions I did manage to piece a solution together.
First of all, you can use RemoteCommand and LocalCommand in your ~/.ssh/config file to execute a local command before logging in, and then, instead of just dropping to a shell, execute a different command (shell). I couldn’t make it work with only RemoteCommand (it seems there is no way to execute a remote command that includes setting an alias), but what you can do is provide a different .bashrc file.

So it works like this: copy a local .bashrc file to the remote machine under a different name in /tmp. Login, execute bash and specify the .bashrc file, and at logout, remove it again. Also, when you use RemoteCommand in ~/.ssh/config you need to tell ssh to force a pseudo-tty so you get a normal interactive shell.

This is my relevant section in ~/.ssh/config:

Host mymachine
    Hostname mymachine.mydomain.com
    ForwardAgent yes
    User someuser
    RequestTTY force
    PermitLocalCommand yes
    LocalCommand scp ~/.sshrc [email protected]:/tmp/.bashrc_someuser
    RemoteCommand bash --rcfile /tmp/.bashrc_someuser; rm /tmp/.bashrc_someuser

The ~/.sshrc filename is arbitrary. It’s just a file anywhere on your local machine. You can even make multiple versions for different remote ssh machines.

This is what my ~/.sshrc file looks like:

source ~/.bashrc
alias git='git -c "user.name=Me" -c "[email protected]"'

This will ensure the .bashrc of the remote machine is applied, but then you can add your own customizations afterwards. Now I just ssh there with ssh mymachine and the git command works and uses whatever keys I have available in my local machine.
