Server Bug Fix: How to fix postfix: Sender address rejected: need fully-qualified address (in reply to RCPT TO command)?

I’ve installed dovecot, postfix and roundcube on an Ubuntu 12.04 box.
The system is basically working, i.e. it is able to send and receive mail to and from other domains.

However, some domains cause the following error message in /var/log/mail.log

Jul 15 01:59:21 one postfix/smtp[2019]: 0D0399C025F: to=<[email protected]>,
      relay=sm01.destdomain.com[x.x.x.x]:25, delay=0.56, delays=0.4/0/0.06/0.1,
      dsn=5.5.2, status=bounced (host sm01.destdomain.com[x.x.x.x] said:
      504 5.5.2 <[email protected]>: Sender address rejected: need
      fully-qualified address (in reply to RCPT TO command))

Do you have any idea what’s wrong here? That is, how can I force Postfix to use “[email protected]” instead of “[email protected]” as the sender when connecting to another mail server?

Any hints are appreciated.

$mydomain is used as a default value for many other configuration parameters, but it does not set the sender domain.

Take a look at the $myorigin parameter in /etc/postfix/main.cf.

It specifies the domain that appears in mail that is posted on this machine. The default is $myhostname, which in turn defaults to the machine’s hostname, so mail can go out with an unqualified sender domain if the hostname isn’t fully qualified.

For more information, see Postfix basic configuration README
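
A minimal sketch of setting it, assuming the desired sender domain is mydomain.com (a placeholder):

postconf -e 'myorigin = mydomain.com'
service postfix reload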

Programs like “mail [email protected]” may not use myorigin. In that case, create /etc/postfix/canonical instead:

@local @realdomain.com

And activate it:

postmap /etc/postfix/canonical
service postfix restart
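
Note that Postfix only consults the map once it is referenced from main.cf; assuming the default hash map type, this wires it up (sender_canonical_maps is the narrower alternative if only envelope senders should be rewritten):

postconf -e 'canonical_maps = hash:/etc/postfix/canonical'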

Server Bug Fix: How do I set locale when building an Ubuntu Docker image with Packer?

I’m using Packer to build a Docker image based on Ubuntu 14.04, i.e., in my Packer template I have:

"builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true
}],

and I build it using:

$ packer build my.json

What do I need to put in the template to get a specific locale (say en_GB) to be set when I subsequently run the following?

$ sudo docker run %IMAGE_ID% locale

Additional info

As it stands, I get:

LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
...
LC_IDENTIFICATION="POSIX"
LC_ALL=

which causes a few problems for things I want to do next, like installing certain Python packages.

I’ve tried adding:

{
    "type": "shell",
    "inline": [
        "locale-gen en_GB.UTF-8",
        "update-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB.UTF-8 LC_ALL=en_GB.UTF-8"
    ]
}

but while that does set up the locale configuration, it doesn’t affect the environment used by docker run. Even if I add extra export lines like:

{
    "type": "shell",
    "inline": [
    ...
        "export LANG=en_GB.UTF-8"
    ]
}

they have no effect, presumably because a container started with docker run is not a child process of the shell that packer build used when running these commands initially, so exported variables don’t carry over.

As a workaround I can pass env vars to docker run, but I don’t want to have to do that each time, e.g.:

sudo docker run -e LANG=en_GB.UTF-8 -e LANGUAGE=en_GB.UTF-8 -e LC_ALL=en_GB.UTF-8 %IMAGE_ID% locale

I haven’t tried it, but according to the documentation, you should be able to do this using the docker-import post-processor: https://www.packer.io/docs/post-processors/docker-import.html

Example:

{
  "type": "docker-import",
  "repository": "local/ubuntu",
  "tag": "latest",
  "changes": [
    "ENV LC_ALL en_GB.UTF-8"
  ]
}
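
Alternatively, assuming a Packer version recent enough that the docker builder itself accepts a changes list of Dockerfile-style instructions, the environment can be baked in right next to the locale-gen provisioner:

"builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true,
    "changes": [
        "ENV LANG en_GB.UTF-8",
        "ENV LANGUAGE en_GB.UTF-8",
        "ENV LC_ALL en_GB.UTF-8"
    ]
}]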

Making Game: dovecot WILDCARD SSL (certificate.crt, private.key, intermediate.crt) not working

I have installed dovecot and postfix and got them working, but when I change the ssl_cert_file, ssl_key_file and ssl_ca_file dovecot configuration to my wildcard SSL certificate (which works on Apache), it simply does not work. I’m pretty sure it is not a file permission problem.

The output of the command

openssl s_client -connect localhost:993

is:

CONNECTED(00000003)
write:errno=104

no peer certificate available

No client certificate CA names sent

SSL handshake has read 0 bytes and written 263 bytes

New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE

When I use the self-signed SSL, I got the correct output and the thing works. Any ideas?
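
No confirmed fix is recorded here, but write:errno=104 (the connection being reset during the handshake) often means Dovecot failed to load the certificate at all; the Dovecot log (/var/log/mail.log, or doveadm log errors) should say why. Two common culprits worth ruling out, offered as guesses: on Dovecot 2.x the settings are named ssl_cert, ssl_key and ssl_ca and take a leading < before the path (the *_file names are the old 1.x ones), and a private key that doesn’t match the certificate. A quick mismatch check (the two hashes should be identical):

openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5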

Server Bug Fix: Docker for private server

I want to set up a server (Ubuntu V-Server) for a few people. I’m planning to run a few applications e.g. GitLab, NextCloud, and Seafile.

What are the advantages of orchestrating these apps (and their dependencies like PostgreSQL) in containers via docker-compose, vs installing them directly on “bare metal” and configuring them by hand?
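
For context, a minimal sketch of what the docker-compose option looks like for one of those apps; the images are the stock Docker Hub ones and every value below is a placeholder:

version: "3"
services:
  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example  # placeholder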

Server Bug Fix: Ansible become not working

I’m running this Ansible playbook:

playbook.yml

---
- hosts: servers
  remote_user: user
  become: True
  become_user: user
  become_method: sudo
  gather_facts: no

  tasks:
    - name: copy editProxy.sh
      copy:
        src: editProxy.sh
        dest: /tmp/editProxy.sh
        mode: 0755

    - name: run edit proxy settings for apt
      command: /tmp/editProxy.sh

editProxy.sh

#!/bin/bash

if grep -q "old_proxy" /etc/apt/apt.conf; then
    sed -i 's/old_proxy/new_proxy/g' /etc/apt/apt.conf;
fi

I run the playbook with:

ansible-playbook playbook.yml --extra-vars='ansible_become_pass=passwd'

The script is copied to the servers and no error is returned:

changed: [10.1.1.1]

But the changes do not happen on the server; if I run the script manually on the server, the changes take place. What could be the problem?

Rather than uploading a script and executing it to run sed to replace a string in a configuration file, have you considered using the Ansible native replace module? For instance, something along the lines of:

  - name: Modify proxy settings for apt
    replace:
      path: /etc/apt/apt.conf
      regexp: 'old_proxy'
      replace: 'new_proxy'
      backup: yes
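
One caveat with either approach: /etc/apt/apt.conf is normally root-owned, but the play escalates with become_user: user, i.e. sudo to the same unprivileged account, so the in-place write can fail. A sketch of the play header escalating to root instead (root is the default become_user, so that line can simply be dropped):

---
- hosts: servers
  remote_user: user
  become: true        # escalates to root, the default become_user
  become_method: sudo
  gather_facts: no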

Making Game: SiS 671/771 and Lubuntu 20.04 LTS

I have installed Lubuntu 20.04 LTS on an Asus K50C, which has a Silicon Integrated Systems 671/771 graphics adapter. It cannot boot successfully without setting nomodeset or GRUB_GFXMODE=1024x768 through GRUB. Also, the maximum supported resolution (checked with videoinfo at GRUB’s command line) is 1024×768, while the laptop supports up to 1366×768. As I have not found a working procedure: how can I install proper drivers for the SiS 671/771 on Lubuntu 20.04?
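
No confirmed driver fix is recorded here, but for reference, a sketch of making the nomodeset workaround permanent via the standard GRUB configuration, based on the settings already mentioned in the question:

# /etc/default/grub  (after editing, run: sudo update-grub)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
GRUB_GFXMODE=1024x768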

Linux HowTo: NVIDIA X server settings is displaying empty window

I have problems with running NVIDIA X server settings to adjust GPU fan speeds. When launching the utility it only opens an empty window like this:

Blank window

Below is additional info from nvidia-smi utility and lsb_release -a:

nvidia-smi image

Ubuntu version:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:    18.04
Codename:   bionic

I will be happy to hear any tips about what could be the reason for this kind of behaviour.

I had the same problem.
The reason was the version of the nvidia-settings utility.
I used https://launchpad.net/~oibaf/+archive/ubuntu/graphics-drivers to install drivers.
Run the command to list installed packages:

dpkg -l | grep nvidia-

it printed:

ii  nvidia-compute-utils-430       430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA compute utilities
ii  nvidia-dkms-430                430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA DKMS package
ii  nvidia-driver-430              430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA driver metapackage
ii  nvidia-kernel-common-430       430.26-0ubuntu0~gpu19.04.1  amd64  Shared files used with the kernel module
ii  nvidia-kernel-source-430       430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA kernel source package
ii  nvidia-prime                   0.8.10                      all    Tools to enable NVIDIA's Prime
ii  nvidia-settings                418.56-0ubuntu1             amd64  Tool for configuring the NVIDIA graphics driver
ii  nvidia-utils-430               430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA driver support binaries
ii  xserver-xorg-video-nvidia-430  430.26-0ubuntu0~gpu19.04.1  amd64  NVIDIA binary Xorg driver

The apt search nvidia-settings command confirmed that the only available version was nvidia-settings/disco,now 418.56-0ubuntu1 amd64, i.e. older than the 430 driver packages.

The solution: remove all NVIDIA packages (quoting the glob so the shell doesn’t expand it):

sudo apt purge '*nvidia*'

Download the latest driver from https://www.nvidia.com/object/unix.html.
After installing the driver from the .run file, nvidia-settings opens normally.
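
For reference, a typical invocation for a driver downloaded as a .run file; the file name here is a placeholder, and the display manager usually has to be stopped first:

sudo systemctl isolate multi-user.target    # leave the graphical session
sudo sh ./NVIDIA-Linux-x86_64-430.26.run    # placeholder file name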

I found a solution for this issue. The driver isn’t initializing, but Secure Boot (SB) doesn’t need to be disabled. If you configure SB correctly on your computer, a window opens after installing the driver asking for a password. Set your password. After finishing the setup, reboot your PC.

After a few seconds, a blue screen appears; press Enter and select the first option (I don’t remember exactly what it says, but it’s the MOK enrollment screen). It asks for a password; enter the one you set before, and reboot.

I think I have a solution for the problem; at least it worked for me.

The driver manager was not installing the driver properly, so in the Synaptic package manager I searched for the nvidia-340 driver that I had installed earlier via the “Additional drivers” app, marked its missing dependencies for installation, and restarted the PC.

The driver isn’t initializing. Disable Secure Boot in your BIOS.
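
Before heading into the BIOS, it may be worth confirming that Secure Boot is actually enabled; assuming the mokutil package is installed, this prints the current state:

mokutil --sb-state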

Server Bug Fix: Locked out of my own server: getting “Too many authentication failures” right away when connecting via ssh

I have an AWS EC2 Ubuntu instance for pet projects. When I tried logging in one day, this error results:

~$ ssh -i"/home/kona/.ssh/aws_kona_id" [email protected] -p22 
Enter passphrase for key '/home/kona/.ssh/aws_kona_id': 
Received disconnect from [IP address] port 22:2: Too many authentication failures
Disconnected from [IP address] port 22
~$

kona is the only account enabled on this server.

I’ve tried rebooting the server, changing my IP address, and waiting.

EDIT:

[email protected]:~$ ssh -o "IdentitiesOnly yes" -i"/home/kona/.ssh/aws_kona_id" -v [email protected] -p22 
OpenSSH_8.1p1 Debian-1, OpenSSL 1.1.1d  10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to ec2-3-17-146-113.us-east-2.compute.amazonaws.com [3.17.146.113] port 22.
debug1: Connection established.
debug1: identity file /home/kona/.ssh/aws_kona_id type -1
debug1: identity file /home/kona/.ssh/aws_kona_id-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1p1 Debian-1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug1: Authenticating to ec2-3-17-146-113.us-east-2.compute.amazonaws.com:22 as 'kona'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:D3sIum9dMyyHNjtnL7Pr4u5DhmP5aQ1jaZ8Adsdma9E
debug1: Host 'ec2-3-17-146-113.us-east-2.compute.amazonaws.com' is known and matches the ECDSA host key.
debug1: Found key in /home/kona/.ssh/known_hosts:41
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /home/kona/.ssh/aws_kona_id  explicit
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/kona/.ssh/aws_kona_id
Enter passphrase for key '/home/kona/.ssh/aws_kona_id': 
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
[email protected]: Permission denied (publickey).
[email protected]:~$ 

This error usually means that you’ve got too many keys loaded in your ssh-agent.

Explanation: your ssh client will attempt to use all the keys from ssh-agent one by one before it gets to the key specified with -i aws_kona_id. Yes, it’s a bit counter-intuitive. Because each such attempt counts as an authentication failure, and by default only 5 attempts are allowed by the SSH server, you are getting the error you see: Too many authentication failures.

You can view the identities (keys) attempted with ssh -v.

The solution is to tell ssh to only use the identities specified on the command line:

ssh -o "IdentitiesOnly yes" -i ~/.ssh/aws_kona_id -v [email protected]

If it doesn’t help, post the output of that command here.

I think MLu’s answer is possibly correct in this case. The way to validate this is to run an ssh command that explicitly specifies the correct key for the server.

ssh -i "keyfile.pem" [email protected]

If that doesn’t work, and in the general case of “I’ve been locked out of my server, help!”, the generally recommended approach is to mount the volume on another instance as a data volume (see the CLI sketch below):

  1. Stop the EC2 server.
  2. Mount the volume onto a new instance as a data volume.
  3. Do any investigation or repairs required (look at logs, add keys, etc). This can include creating new users and new keys, changing files on the file system, etc.
  4. Reattach the volume as the root volume on the original instance.

Repeat until you have access. If you can’t get access this at least gets you access to your data.
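
A hedged sketch of those detach/attach steps with the AWS CLI; every ID and device name below is a placeholder:

# stop the locked-out instance
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
# move its root volume to a rescue instance as a data volume
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb --instance-id i-0cccccccccccccccc --device /dev/sdf
# on the rescue instance: mount it and repair, e.g. fix authorized_keys
sudo mount /dev/xvdf1 /mnt
sudo nano /mnt/home/kona/.ssh/authorized_keys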

SSH by default tries all available SSH keys. It does so in a “random” order. Specifying the -i option simply tells SSH to add that keyfile to the list of keys to try.

It does not:

  • limit SSH to use only that key
  • tell SSH to try that key first

What ends up happening (quite often if you use many keys) is that SSH tries a couple of random keys that don’t work, and the server stops accepting authentication attempts from your client.

If you want to tell SSH to “use only this key” you must specify the IdentitiesOnly yes option:

ssh -o "IdentitiesOnly yes" -i"/home/kona/.ssh/aws_kona_id" [email protected] -p22 

IdentitiesOnly yes tells SSH only to use the explicitly specified keys (in this case only the key specified using -i).

This is why when I use custom keys for different hosts I always define the host configuration in .ssh/config. This allows me to use a simple alias and, more importantly, to specify IdentitiesOnly yes and which key to use to avoid this kind of mistake:

Host kona.server
    Hostname server.akona.me
    IdentityFile ~/.ssh/aws_kona_id
    IdentitiesOnly yes
    Port 22
    User kona

With the above in your .ssh/config you should be able to log in to your server with simply:

$ ssh kona.server

The verbose output you’ve just added shows that you get Permission denied for ~/.ssh/aws_kona_id.

That’s a completely different problem than Too many authentication failures.

Perhaps your aws_kona_id isn’t the right key for the user (and that’s why it kept trying all the other identities from the ssh-agent) or you should use the default EC2 user account, e.g. ec2-user or ubuntu or what have you.

Try those accounts, or try to find the right key for the kona user.

Ubuntu EC2 instances have an ubuntu user account preinstalled with your SSH key.

If you haven’t removed this account, you can still connect with:

ssh -i "/home/kona/.ssh/aws_kona_id" [email protected]

and fix your account problem, after sudo -i, by investigating /home/kona/.ssh/authorized_keys.

Just run your ssh-agent:

eval "$(ssh-agent -s)"

then add your RSA key:

ssh-add id_rsa

and try connecting to your server again.

Linux HowTo: How to recover important files after accidentally deleting the /etc folder on Ubuntu

I am in a desperate situation. I am (or was) running Ubuntu in VirtualBox with Windows as the host.

I accidentally deleted the /etc folder. I have files representing months of hard work. How can I retrieve them? I tried drag and drop, USB mounting, and connecting to the internet to share the files. Nothing worked. Please help.

As soon as possible, turn the VM off uncleanly and do nothing that would cause a write to its disk (don’t turn it on again).

Then you could use something like photorec on the VM disk image.
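
A minimal sketch of that approach, run from the host once the VM is off; the disk file name is a placeholder, and cloning first keeps the original image untouched:

# make a raw copy of the VM disk for the recovery tool to scan
# (newer VirtualBox versions call this subcommand clonemedium)
VBoxManage clonehd ubuntu.vdi ubuntu.raw --format RAW
# run photorec (from the testdisk package) against the copy
photorec ubuntu.raw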
