Server Bug Fix: Setting the Cookie header with the value of another header in an nginx reverse proxy

Original Source Link

We are working with a piece of hardware (a Chromecast) that seems to strip out cookies, but will happily pass along a custom header (which we signal as acceptable from the back end). One workaround we came up with is to stuff the cookie data into said custom header and put it back into the Cookie header on the proxy. This allows most clients (which respect normal CORS signaling) and the backend to remain unchanged. In my Nginx config I currently have a map defined like:

map cookie $cookie {
    default   $http_cookie;
    ""        $http_my_token;
}

which I’m later using as follows:

server {
  listen 443 ssl;
  #...
  location /myurl {
      proxy_pass http://nodeserver;
      proxy_set_header X-Forwarded-For $remote_addr;
      #...
      proxy_set_header Cookie $cookie;
 }
}

I expect this code to replace the Cookie header with the value supplied in the “My-Token” header, but what I see instead is that the values of both headers are passed through unchanged.

Your map specifies that the value you want to check is the literal string cookie. This is not what you said you want, and it’s pretty useless to check a literal string, since it can only ever match itself.

The first parameter to map is the value you want to test. Instead of the string cookie you should pass what you actually want to check. It seems you are looking for whether the user agent is sending a cookie. If that’s the case, you should use $http_cookie.

map $http_cookie $cookie {
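Putting the answer together with the fallback logic from the question, the corrected map would look like this (variable names are the question’s own):

```nginx
# Test the incoming Cookie header, not the literal string "cookie".
map $http_cookie $cookie {
    default   $http_cookie;    # a cookie was sent: pass it through unchanged
    ""        $http_my_token;  # no cookie: fall back to the My-Token header
}
```

With this map, the existing `proxy_set_header Cookie $cookie;` in the location block substitutes the My-Token value only when the client sent no Cookie header.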

Tagged :

Server Bug Fix: PHP App (Owncloud) Pages Requesting Incorrect Path for Assets


I have a VPS set up with php-fpm and nginx (with ssl). I have set up Tiny Tiny RSS already, and it works just fine. However, I recently attempted to set up Owncloud, and instantly hit a roadblock.

I visited the index page to do the initial set up, and there was absolutely no styling at all. I looked in Firefox’s console, and saw several 404 errors. Looking closely, I saw that all the paths to the assets were wrong. Instead of requesting http://mydomain.com/owncloud/some/important/component.js, it requested http://mydomain.com/usr/share/nginx/html/owncloud/some/important/component.js.

It would seem that php is doing something wrong when it’s processing the pages. I don’t have this problem with Tiny Tiny RSS, so I would assume it has something to do with the way Owncloud was written.

I’m assuming there’s a php.ini key I have to change. Any ideas?

The following is the content of my server block:

            listen 443 ssl;
            ssl_certificate /var/ssl/secret/sauce.key;
            ssl_certificate_key /var/ssl/secret/sauce.key;
            server_name localhost 127.0.0.1 mydomain.com;
            root /usr/share/nginx/html;
            index index.html index.htm index.php;
            client_max_body_size 1000M;

            location / {
                    try_files $uri $uri/ @webdav =404;
            }

            location ~ \.php$ {
                    include fastcgi_params;
                    fastcgi_index index.php;
                    try_files $1 = 404;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_param HTTPS on;
                    fastcgi_pass 127.0.0.1:9000;
            }

            location ~ ^/owncloud/(data|config|\.ht|db_structure\.xml|README) {
                    deny all;
            }

            location @webdav {
                    fastcgi_split_path_info ^(.+\.php)(/.*)$;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_param HTTPS on;
                    include fastcgi_params;
            }

I realise that this isn’t the complete configuration that the Owncloud documentation recommends, but I generally like to get a minimal working configuration and work up in order to learn how everything works. Reviewing the configuration options I left out, there didn’t seem to be any that affected php processing, so I’m assuming it’s fine. Otherwise, I would like to know what nginx directive I’m missing and why it’s important.

I’m assuming there’s a php.ini key I have to change. Any ideas?

No, your application is likely configured wrong – Owncloud’s PHP is generating those paths. Make sure you configured the URL path correctly in Owncloud.

Create a new virtual host for Owncloud and just edit the root and the upstream (either socket or port). I already have a functioning Owncloud server and it works fine.

Nginx configuration for owncloud

Replace the root line and the server in the upstream.

I’m assuming you know how to create a virtual server; if not, tell me and I can provide an explanation for that too.
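As a sketch only (the domain, paths, and PHP-FPM target below are placeholders, not taken from the original answer), a separate virtual host might look like:

```nginx
# Hypothetical minimal Owncloud vhost; adjust root, server_name,
# certificates, and the fastcgi_pass target (socket or port) for your setup.
server {
    listen 443 ssl;
    server_name cloud.mydomain.com;

    ssl_certificate     /var/ssl/secret/sauce.crt;
    ssl_certificate_key /var/ssl/secret/sauce.key;

    root /usr/share/nginx/html/owncloud;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
        fastcgi_pass 127.0.0.1:9000;  # or unix:/var/run/php-fpm.sock
    }
}
```

With `root` pointing directly at the Owncloud directory, the application no longer sees the filesystem prefix in its URLs.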

EDIT

About ssl, one IP, and one virtual server:

Not really. It depends on what your SSL certificate was bought for. If it’s a single-domain certificate, for example for example.com or store.example.com, you can have as many sites as you want on the same IP, but the certificate will only be valid for the one domain it was bought for; unless you bought a wildcard certificate, in which case it covers *.example.com.

Anyway, I too have SSL on my server, and it’s only for domain.com and www.domain.com. I used it on my cloud server, which is hosted on cloud.domain.com. The only downside is that you get a certificate warning because the domain doesn’t match the one the certificate was bought for. I tell the browser to ignore the warning and save the exception, and that’s it. The same goes for the sync client: it asked whether I wanted to ignore the warning, and it works just fine.

If you don’t want to face that warning, then yes, you need to create the Owncloud server under the same virtual host; not because of the IP, but because of the name the certificate was bought for.

Tell me which you want and I’ll help you with either.


Server Bug Fix: How to proxy /grafana with nginx?


I’ve setup and started default grafana and it works as expected on http://localhost:3000. I’m trying to proxy it with nginx where I have ssl installed. I’m trying to have it respond to https://localhost/grafana but it just serves the following:

{{alert.title}}

I have this in my nginx server block:

location /grafana {
     proxy_pass         http://localhost:3000;
     proxy_set_header   Host $host;
}

It seems nginx supports rewriting the requests to the proxied server so updating the config to this made it work:

location /grafana {
     proxy_pass         http://localhost:3000;
     rewrite  ^/grafana/(.*)  /$1 break;
     proxy_set_header   Host $host;
}

My grafana.ini also has an updated root:

[server]
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana

Adding on to @AXE-Labs’ answer, you don’t need to rewrite the URL.

nginx.conf

location /grafana/ {
     proxy_pass         http://localhost:3000/;
     proxy_set_header   Host $host;
}

grafana.ini update root:

[server]
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/

Notice the additional / in the location block and in proxy_pass; that makes all the difference.
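The trailing slashes matter because when proxy_pass carries a URI part, nginx replaces the matched location prefix with that URI. A sketch of the resulting mapping (the `/api/health` path is just an illustrative example):

```nginx
location /grafana/ {
    # A request for /grafana/api/health reaches the upstream as /api/health:
    # the /grafana/ location prefix is swapped for the trailing / in proxy_pass.
    proxy_pass http://localhost:3000/;
    proxy_set_header Host $host;
}
```

Without the URI part (`proxy_pass http://localhost:3000;`), the full original path, including `/grafana`, would be forwarded instead, which is why the rewrite was needed in the earlier answer.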

If you want to see the entire file, please visit https://gist.github.com/mvadu/5fbb7f5676ce31f2b1e6 where I have a reverse proxy setup for InfluxDB as well as Grafana.

I got the same problem when using nginx and Grafana on Docker, in two different containers. I passed the following options to docker-compose for the Grafana service, following http://docs.grafana.org/installation/behind_proxy/#nginx-configuration-with-sub-path:

- GF_SERVER_DOMAIN=foo.bar.com
- GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:/grafana

But it didn’t work, and my browser’s console shows: net::ERR_CONTENT_LENGTH_MISMATCH.

So, to fix it, I added the following line to my nginx config:

location /grafana/ {
  proxy_pass http://monitoring_grafana:3000/;
  proxy_max_temp_file_size 0; # THIS DID THE TRICK!
}

FYI:

root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana

causes proxy errors for some API calls. I find that the following works instead:

root_url = %(protocol)s://%(domain)s:/grafana

I struggled a bit with all the answers here. For completeness, and as documentation for myself, here is a full example that worked in my case.

/etc/grafana/grafana.ini:

... DEFAULT CONFIGURATION

#################################### Server ################################

... DEFAULT CONFIGURATION

root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/

... DEFAULT CONFIGURATION

nginx.conf looks like this:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        #

        ##
        # Logging Settings
        ##
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        ... DEFAULT SETTINGS ...

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;

        # note that I am not using sites-enabled:
        # include /etc/nginx/sites-enabled/*;
}

I put the NGINX configuration for grafana into a separate grafana.conf located in /etc/nginx/conf.d/:

server {

  listen 80;
  listen [::]:80;

  listen 443 ssl;
  listen [::]:443 ssl;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
  ssl_prefer_server_ciphers on;

  ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
  ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

  root /var/www/<my.hostname.xy>/html;
  index index.html index.htm;

  server_name <my.hostname.xy>;

  location /grafana/ {
    proxy_pass http://localhost:3000/;
    proxy_set_header Host $host;
  }
}


Server Bug Fix: redis2_query & proxy_pass together


I am trying to implement security to APIs via Nginx. Basically I will allow APIs only if a token exists in redis

location /api/ {
                if ($http_securitytoken = "") { return 403; }
                if ($http_securitytoken){
                             redis2_query get $http_securitytoken;
                             redis2_pass 127.0.0.1:6379;
                }

                proxy_pass http://127.0.0.1:9003;
        }

The issue is that I want to capture the redis2_query result in a variable; instead, the Redis response itself is sent to the client as the output, and my proxy_pass no longer takes effect.

How can I resolve this?
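This isn’t addressed in the thread, but one common approach (a sketch, assuming nginx is built with the OpenResty Lua modules rather than using redis2 directly) is to check the token in an access-phase handler, which runs before proxy_pass and can reject the request without replacing the response body:

```nginx
location /api/ {
    access_by_lua_block {
        local token = ngx.var.http_securitytoken
        if not token or token == "" then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
        -- lua-resty-redis client; host/port/timeouts are placeholders
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1 second
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local res, err = red:get(token)
        if not res or res == ngx.null then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
        red:set_keepalive(10000, 100)  -- return connection to the pool
    }
    proxy_pass http://127.0.0.1:9003;
}
```

Because the Redis lookup happens in the access phase, the proxied response is untouched; the request is simply denied with 403 when the token is missing or not found.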


Server Bug Fix: Nginx can’t get real ip address because realip_remote_addr and remote_addr have same value


$realip_remote_addr and $remote_addr have equal values for all combinations of the lines defined
inside the http block:

  • set_real_ip_from 192.168.2.1;
  • real_ip_header X-Real-IP; or real_ip_header X-Forwarded-For;
  • with or without: real_ip_recursive on;

with logging format:
'realip="$realip_remote_addr" '
'$remote_addr - $remote_user [$time_local] "$request"'

I always get the same values for $realip_remote_addr and $remote_addr, e.g.

“realip=”192.168.2.1” 192.168.2.1 – – [19/Jun/2020:09:32:23 +0200] “GET”…

I expect and want something like: “realip=”132.156.21.41” 192.168.2.1 – – [19/Jun/2020:09:32:23 +0200] “GET”…
What am I doing wrong?

I use Cloudflare and had the proxy status set to ‘DNS only’. Once I changed it to ‘Proxied’, the headers that Cloudflare adds could be used to obtain the real user IP address.
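As a sketch of the resulting configuration (the Cloudflare range shown is only one example; Cloudflare publishes the full list of its edge ranges, and each needs its own set_real_ip_from entry):

```nginx
# Trust the proxies that actually connect to us, then take the client
# address from the header Cloudflare adds in 'Proxied' mode.
set_real_ip_from 192.168.2.1;        # local reverse proxy (from the question)
set_real_ip_from 173.245.48.0/20;    # example Cloudflare range; add them all
real_ip_header CF-Connecting-IP;
```

Note that after the realip module runs, $remote_addr holds the recovered client address, while $realip_remote_addr keeps the address of the proxy that made the TCP connection.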


Server Bug Fix: Stripping index.html and .html from URLs with nginx


My basic goal is to serve the following clean URLs with nginx:

  • / serves /index.html
  • /abc/ serves /abc/index.html
  • /abc/def serves /abc/def.html
  • /abc redirects to /abc/

In order to have canonical names for each resource, I also want to normalize any URL with superfluous file names or extensions:

  • /index.html redirects to /
  • /abc/index.html redirects to /abc/
  • /abc/def.html redirects to /abc/def

The directives I thought would accomplish this:

index index.html;
try_files $uri.html $uri $uri/ =404;

# Redirect */index and */index.html to *.
rewrite ^(.*)/index(\.html)?$ $1 permanent;
# Redirect *.html to *.
rewrite ^(.+)\.html$          $1 permanent;

However, the result of this is different than I expected:

  • /, /index and /index.html all redirect to / (loop).
  • /abc and /abc/ both redirect to /abc/ (loop).

(It works as designed for /abc/def.html and /abc/def; only the directory URLs don’t work.)

I’m not sure what is happening here; maybe I’m misunderstanding how the rewrite works?

(I already tried using location blocks instead, but this also results in loops as try_files performs an internal redirect to the location block that sends the HTTP 301.)

Edit: Fundamentally, I need something like a location block that only matches the original request URI, but is ignored for the purpose of internal redirects, so it doesn’t create a loop in combination with the try_files directive.

You might be looking for a solution like the one explained here:

server {
    listen       80;
    server_name  mysite.com;

    index index.html;
    root /var/www/mysite/public;

    location / { 
        try_files $uri $uri/ @htmlext;
    }   

    location ~ \.html$ {
        try_files $uri =404;
    }   

    location @htmlext {
        rewrite ^(.*)$ $1.html last;
    }   
}

I believe I have found a solution, though I’m not experienced enough to tell if this could break in special cases or could be solved more easily in another way.

Basically, the problem is that a location ~ ... {} block is matched not only against the original request URI, but also against the result of try_files and other rewrites. So if I have a location block that strips off the index.html or .html with a redirect, it will not only run when a client requests index.html or abc.html directly, but also when the client requests / or abc and the server internally redirects these to /index.html and abc.html respectively, causing a redirect loop.

However, the rewrite module provides an if directive which can check the $request_uri variable, which remains unchanged by internal redirects:

index index.html;
try_files $uri $uri.html $uri/ =404;

# like "location ~", but only for matching the original request. 
if ($request_uri ~ /index(\.html)?$) {
  rewrite ^(.*/)index(\.html)?$ $1 permanent;
}
if ($request_uri ~ \.html$) {
  rewrite ^(.*)\.html$ $1 permanent;
}

(Note that all of these directives now exist in the server context, without any location blocks.)


Server Bug Fix: Poor application performance behind nginx reverse proxy


I started noticing very slow page loads on my Jira server. I figured out that this only happens when Jira is accessed through nginx, but if I use SSH port forwarding to the server and access the backend ports directly, page loads are instantaneous.

nginx config (/etc/nginx/sites-enabled/support.example.org.conf):

## Jira
##
## Modified from nginx http version
## Modified from https://confluence.atlassian.com/jirakb/integrating-jira-with-nginx-426115340.html
## Modified from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
##

server {
  listen 192.168.118.32:443 ssl;
  server_name support.example.org;
  server_tokens off;

  ## Strong SSL Security
  ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
  ssl on;
  ssl_certificate     /etc/letsencrypt/live/support.example.org/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/support.example.org/privkey.pem;

  ssl_protocols TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;
  ssl_session_timeout 1d;


  access_log  /var/log/nginx/support_access.log;
  error_log   /var/log/nginx/support_error.log;

  location /jira {
    gzip off;

    proxy_read_timeout      300;
    proxy_connect_timeout   300;
    # proxy_redirect          off;
    proxy_request_buffering off;
    proxy_buffering         off;

    proxy_set_header    X-Forwarded-Host    $host;
    proxy_set_header    X-Real-IP           $remote_addr;
    proxy_set_header    X-Forwarded-Ssl     on;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
    proxy_set_header    X-Forwarded-Proto   $scheme;
    proxy_pass http://localhost:8081/jira;

    client_max_body_size 2G;
  }


  include snippets/letsencrypt.conf;
}

Some of the proxy settings are things I tried already and they ranged from minor improvements to no improvements, but the performance is still abysmal.

Jira config: (/opt/atlassian/jira/conf/server.xml)

<?xml version="1.0" encoding="utf-8"?>
<Server port="8005" shutdown="SHUTDOWN">
    <Listener className="org.apache.catalina.startup.VersionLoggerListener"/>
    <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on"/>
    <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
    <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener"/>

    <Service name="Catalina">
        <!--
         ==============================================================================================================
         HTTPS - Proxying Jira via Apache or Nginx over HTTPS

         If you're proxying traffic to Jira over HTTPS, uncomment the below connector and comment out the others.
         Ensure the proxyName and proxyPort are updated with the appropriate information if necessary as per the docs.

         See the following for more information:

            Apache - https://confluence.atlassian.com/x/PTT3MQ
            nginx  - https://confluence.atlassian.com/x/DAFmGQ
         ==============================================================================================================
        -->

        <Connector port="8081" relaxedPathChars="[]|" relaxedQueryChars="[]|{}^&#x5c;&#x60;&quot;&lt;&gt;"
                   maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false"
                   maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443"
                   acceptCount="100" disableUploadTimeout="true" bindOnInit="false" secure="true" scheme="https"
                   proxyName="support.example.org" proxyPort="443"/>

        <Engine name="Catalina" defaultHost="localhost">
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

                <Context path="/jira" docBase="${catalina.home}/atlassian-jira" reloadable="false" useHttpOnly="true">
                    <Resource name="UserTransaction" auth="Container" type="javax.transaction.UserTransaction"
                              factory="org.objectweb.jotm.UserTransactionFactory" jotm.timeout="60"/>
                    <Manager pathname=""/>
                    <JarScanner scanManifest="false"/>
                    <Valve className="org.apache.catalina.valves.StuckThreadDetectionValve" threshold="120" />
                </Context>

            </Host>
            <Valve className="org.apache.catalina.valves.AccessLogValve"
                   pattern="%a %{jira.request.id}r %{jira.request.username}r %t &quot;%m %U%q %H&quot; %s %b %D &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot; &quot;%{jira.request.assession.id}r&quot;"/>
        </Engine>
    </Service>
</Server>

When I test directly, I enable the default connector:

<Connector port="8080" relaxedPathChars="[]|" relaxedQueryChars="[]|{}^&#x5c;&#x60;&quot;&lt;&gt;"
                   maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false"
                   maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443"
                   acceptCount="100" disableUploadTimeout="true" bindOnInit="false"/>

What am I doing wrong or how can I improve the performance?

Your nginx config looks okay, but why did you disable gzip?

Make sure gzip compression is set to ‘on’ in Jira under ‘Administration’ -> ‘Global Settings’ -> ‘General Configuration’. Then remove this config line from the nginx vhost:

gzip off;

and add e.g. this sample config:

gzip on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types application/javascript application/rss+xml application/vnd.ms-fontobject application/x-font application/x-font-opentype application/x-font-otf application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/opentype font/otf font/ttf image/svg+xml image/x-icon text/css text/javascript text/plain text/xml;

Switching to Apache HTTPD and mod_proxy_ajp, following a different page in Jira’s documentation, seems to fix the issue. A comment on the question suggests this is a flaw in nginx.
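For reference, the AJP route pairs an AJP connector in Tomcat with mod_proxy_ajp in Apache. A sketch only, not from the original answer (port 8009 is Tomcat’s conventional AJP port):

```apache
# Apache vhost fragment: forward /jira to Tomcat's AJP connector.
# Requires the modules: a2enmod proxy proxy_ajp
ProxyPass        /jira ajp://localhost:8009/jira
ProxyPassReverse /jira ajp://localhost:8009/jira
```

On the Tomcat side this assumes a matching connector in server.xml, e.g. `<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>`, in place of the HTTP connector.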


Server Bug Fix: User authentication: In HTTP Server vs. in Web Application


Management decided to switch the authentication backend from LDAP to Kerberos, as LDAP is deemed “obsolete and insecure”. They also want to switch from Apache to nginx for “performance and reliability”. The ultimate goal is to enable SPNEGO for single sign-on within the domain.

Previously, we happily used Apache’s mod_authnz_ldap. nginx however does not even seem to support authentication modules by default. I never worked with nginx before, so I might have missed something.

Asking local experts about this, I received the response “The HTTP server should not do the user authentication – that is the web application’s responsibility.” So now I am stuck with a bunch of services which were never designed to do user authentication themselves.

This made me think: What are the advantages of not having authentication in the HTTP server?

Performance might be one factor, but at what cost? Usually my stance is “never do it yourself”, especially when it comes to cryptography or, in this case, authentication schemes. Using the HTTP server’s features, all authentication is done in one place, and user information is simply forwarded to the server-side application. Without such a feature in the HTTP server, I would need to implement the authentication scheme in each and every application over and over again. As of today, I have failed to find ready-to-use modules for our ancient PHP-based applications. There is a Kerberos module for Flask; it was last updated six years ago and does not play nice with me at all. I have not even looked into the other services yet. It seems to be a massive increase in the maintenance required. I suppose there are upsides to this approach, but I fail to see them. What are the advantages?

The advantage of authenticating at the application is having this done independently of the OS and web server, you are not mixing your implementation layers (i.e. the application’s authentication and access controls don’t rely on information passed on from another piece of software).

Generally the advice of not authenticating at the web server level is a good one, since there’s limited control of granularity and it affects the concept of server side sessions. But it’s also a one-sided argument and there could be good reasons to use it. Context is critical.

Personally, at face value with the information you shared, I find it a poor decision to essentially break a functional setup and shoehorn a type of solution into multiple applications that may not be running on the most up-to-date platforms or receiving a sufficient amount of maintenance effort. It’s both fixing what isn’t broken and multiplying effort. If/once it’s done you’d probably be in a better position for the future, but getting there will become a problem.

I think that @Pedro has a very good and well balanced answer to your question. However, since I generally agree with your local experts, I think it is good to have some further context on why you might want to make this change. As Pedro mentioned, the issue is mixing implementation layers, and limited control becomes a real problem when you aren’t authenticating at the web server level. What does that actually mean though? Consider the following questions, which represent real-life (and in my experience, common) business needs that may be very difficult or nearly impossible to execute in your current circumstances. Note that some of these are based on assumptions about your hosting setup that may or may not be applicable.

  1. What happens if your hosting provider closes and you have trouble finding a new provider that supports Apache with the mod_authnz_ldap module enabled?
  2. What if internal application changes force a change of OS, and you have difficulty getting the mod_authnz_ldap module installed and running on the new OS?
  3. What if you need to change the application to allow in users who aren’t in LDAP?
  4. What if you need to migrate away from LDAP for reasons completely unrelated to this application anyway (hint: this is where you find yourself)
  5. What if you are tired of running your own server and need to migrate to load-balanced cloud infrastructure? Will you still have access to LDAP? Will this module work properly in a completely new environment?
  6. What if you want to ditch servers all together and run in Kubernetes or the like? Will this setup transition smoothly?
  7. What if Apache drops support for the mod_authnz_ldap module?
  8. What if you need to implement role based access instead of a global allow/disallow rule?

Many of these may not be applicable to you, but many of these are extremely common business needs. So while it sounds like the current reasons for these switches are not necessarily well thought out, and you may be able to put them off for a while, eventually there is going to be a compelling business need that forces this change and you will find yourself back right here. You definitely don’t want to rush an overhaul to the authentication system for an application, but at the same time it seems unlikely to me that you will be able to continue to use this authentication setup in the long term.

Also, it is quite possible that there is a middle ground between “Leave the current system as-is” and “Do it all yourself” (although, to be clear, the rule of “never roll your own” only goes so far – otherwise you wouldn’t be building your own web application in the first place). For instance, there are plenty of third-party authentication systems your application can integrate with to alleviate most of the burden. AWS Cognito and Auth0 would be two such examples, which I mention only for completeness and not as an endorsement.


Server Bug Fix: How to run nginx SSL on non-standard port


I realize this looks like a duplicate of at least a few other questions, but I have read them each several times and am still doing something wrong.

Following are the contents of my myexample.com nginx config file located in /etc/nginx/sites-available.

server {

  listen       443 ssl;
  listen       [::]:443 ssl;

  server_name myexample.com www.myexample.com;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem;

  #Configures the publicly served root directory
  #Configures the index file to be served
  root /var/www/myexample.com;
      index index.html index.htm;

}

It works, when I go to https://myexample.com the content is served and the connection is secure. So this config seems to be good.

Now if I change the ssl port to 9443 and reload the nginx config, the config reloads without error, but visiting https://myexample.com shows an error in the browser (This site can’t be reached / myexample.com refused to connect. ERR_CONNECTION_REFUSED)

I have tried suggestions and documentation here, here, and here (among others) but I always get the ERR_CONNECTION_REFUSED error.

I should note that I can use a non-standard port and then explicitly type that port into the URL, e.g., https://myexample.com:9443. But I don’t want to do that. What I want is for a user to be able to type myexample.com into any browser and have nginx redirect to the secure connection automatically.

Again, I have no issues whatsoever when I use the standard 443 SSL port.

Edit: I’m using nginx/1.6.2 on debian/jessie

In order to support typing “https://myexample.com” in your browser, and having it handled by the nginx config listening on port 9443, you will need an additional nginx config that still listens on port 443, since that is the IP port to which the browser connects.

Thus:

server {
  listen 443 ssl;
  listen [::]:443 ssl;

  server_name myexample.com www.myexample.com;
  ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem;

  # Redirect the browser to our port 9443 config
  return 301 $scheme://myexample.com:9443$request_uri;
}

server {
  listen 9443 ssl;
  listen [::]:9443 ssl;

  server_name myexample.com www.myexample.com;
  ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

  #Configures the publicly served root directory
  #Configures the index file to be served
  root /var/www/myexample.com;
  index index.html index.htm;
}

Notice that the same certificate/key is needed for both sections, since the certificate is usually tied to the DNS hostname, but not necessarily the port.

Hope this helps!

When you type https://example.com, the standard for the https:// scheme is to connect to port 443. In your case, you have moved your server so that it now listens on port 9443. You get the connection refused message because of this – nothing is listening on port 443.

You will need to arrange to have something listen on port 443 that redirects connections to port 9443 or use a port as part of the URL.

If you change the port to a non-standard one like 9443, you need to add a redirect from 443 to 9443. Set nginx to reverse proxy to that port.


Server Bug Fix: Nginx upstream keepalive with SNI


I’ve got Nginx setup to proxy two subdomains. SNI is used so each subdomain has a different SSL certificate. Nginx setup is roughly:

upstream a_example_443 {
    server 1.2.3.4:443;
    keepalive 128;
    keepalive_timeout 180s;
}
upstream b_example_443 {
    server 1.2.3.4:443;
    keepalive 128;
    keepalive_timeout 180s;
}
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
server {
    listen 443 ssl;
    server_name aproxy.example.com;
    location / {
        proxy_pass https://a_example_443;
    }
}
server {
    listen 443 ssl;
    server_name bproxy.example.com;
    location / {
        proxy_pass https://b_example_443;
    }
}

This works; the SNI names are a_example_443 and b_example_443, and the subdomains have aliases for those. However, is it bad that I use two upstreams?

I tried configuring it to use one upstream. After quite some effort, this works:

upstream example_443 {
    server 1.2.3.4:443;
    keepalive 128;
    keepalive_timeout 180s;
}
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
proxy_ssl_session_reuse off;
server {
    listen 443 ssl;
    server_name aproxy.example.com;
    location / {
        proxy_set_header HOST a.example.com;
        proxy_ssl_name a.example.com;
        proxy_pass https://example_443;
    }
}
server {
    listen 443 ssl;
    server_name bproxy.example.com;
    location / {
        proxy_set_header HOST b.example.com;
        proxy_ssl_name b.example.com;
        proxy_pass https://example_443;
    }
}

First I had to set the Host header and proxy_ssl_name to the SNI name. This is fine, except that when I added proxy_set_header HOST, I seemed to lose all the proxy_set_header directives I had at the http configuration level (not shown here). I’d love to know why, but OK, fine: I put those in a file and include it in each server. Edit: found out why; from the proxy_set_header docs:

These directives are inherited from the previous level if and only if there are no proxy_set_header directives defined on the current level.
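A minimal illustration of that inheritance rule (upstream name and paths are placeholders):

```nginx
http {
    # Inherited by a location ONLY if it defines no proxy_set_header of its own.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    server {
        location /a {
            # No proxy_set_header here: X-Forwarded-For is inherited.
            proxy_pass http://backend;
        }
        location /b {
            # One proxy_set_header here: ALL http-level ones are dropped,
            # so X-Forwarded-For must be repeated (or included from a file).
            proxy_set_header Host b.example.com;
            proxy_pass http://backend;
        }
    }
}
```

This all-or-nothing inheritance applies at each level (http, server, location), which is why moving the shared headers into an included file is the usual workaround.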

Next I had to set proxy_ssl_session_reuse off. What are the ramifications of this? I understand it disables abbreviated handshakes, but when are those needed? My guess is that with keepalive in use, not very often. Is that right?

Keepalive is where things become unclear for me about exactly how the upstream connections work. Nginx gets a request, opens an SSL connection to the upstream, and sends an SNI name. The upstream routes it to the right subdomain, uses that subdomain’s certificate, and so on. Later, when Nginx receives another request, can it reuse the previous SSL connection that is still open because of keepalive? If so, what if the second request is for the other SNI name? Does it just send the request and let the upstream use the Host header to route it?

Ultimately, should I use two upstreams or one?

It seems that the above configuration is wrong: SSL connections are reused, and if a reused connection doesn’t match the SNI name, then the upstream rejects the request. I suppose this means the answer is that two upstreams are required, one for each SNI name.
