
Nginx Admin's Handbook

My notes on NGINX administration basics, tips & tricks, caveats, and gotchas.



Hi-diddle-diddle, he played on his
fiddle and danced with lady pigs.
Number three said, "Nicks on tricks!
I'll build my house with EN-jin-EKS!".
The Three Little Pigs: Who's Afraid of the Big Bad Wolf?



Created by trimstray and contributors


Introduction



Before you start playing with NGINX, please read the official Beginner’s Guide. It's a great introduction for everyone.

Nginx (/ˌɛndʒɪnˈɛks/ EN-jin-EKS, stylized as NGINX or nginx) is an open-source HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. It was originally written by Igor Sysoev. For a long time, it has been running on many heavily loaded Russian sites, including Yandex, Mail.Ru, VK, and Rambler. As of April 2019, NGINX was the most commonly used HTTP server (see the Netcraft survey).

NGINX is a fast, light-weight and powerful web server that can also be used as a:

  • fast HTTP reverse proxy
  • reliable load balancer
  • high performance caching server
  • full-fledged web platform

Generally, it provides the core of complete web stacks and is designed to help build scalable web applications. When it comes to performance, NGINX can easily handle a huge amount of traffic. The other main advantage of NGINX is that it allows you to do the same thing in different ways.

For me, it is one of the best and most important services I have used in my SysAdmin career.


These essential documents should be the main source of knowledge for you:

In addition, I would like to recommend two great docs focused on the concept of the HTTP protocol:

If you love security, keep an eye on this one: Cryptology ePrint Archive. It provides access to recent research in cryptology and explores many subjects of security (e.g. ciphers, algorithms, SSL/TLS protocols).

General disclaimer

When I was studying the architecture of HTTP servers, I became interested in NGINX. I found a lot of information about it, but I never found one guide that covered the most important things in a suitable form. I was a little disappointed.

I was interested in everything: NGINX's internals, functions, security best practices, performance optimisations, tips & tricks, hacks and rules, but all the documents treated the subject lightly.

Of course, I know that we also have great resources like the Official Documentation, agentzh's Nginx Tutorials, Nginx Guts, or Emiller’s Advanced Topics In Nginx Module Development. These are definitely the best assets for us, and you should seek help there first.

For me, however, there wasn't a truly in-depth and reasonably simple cheatsheet that described a variety of configurations and important cross-cutting topics for HTTP servers. That's why I created this repository.

This handbook is a collection of rules, helpers, notes, papers, best practices, and recommendations collected and used by me (also in production environments). Many of them refer to external resources.

Throughout this handbook you will explore the many features and capabilities of NGINX. You'll find out, for example, how to test performance or how to debug problems. You will learn configuration guidelines, security design patterns, ways to handle common issues, and how to stay out of trouble.

This handbook also includes a set of guidelines and examples to help you administer the NGINX server. They also give insight into NGINX's internals.

If you do not have the time to read hundreds of articles (just like me), this multipurpose handbook may be useful. I created it in the hope that it will be especially useful for System Administrators and WebOps. I think it can also be a good complement to the official documentation.

I did my best to make this handbook a single, consistent resource. Of course, I still have a lot to improve and to do. I hope you enjoy it and have fun with it.

Before you start, remember the two most important things:

Do not follow guides just to get 100% of something. Think about what you actually do on your server!

These guidelines provide (in some places) recommendations for a very restrictive setup.

Contributing & Support

A real community, however, exists only when its members interact in a meaningful way that deepens their understanding of each other and leads to learning.

If you find something that doesn't make sense or doesn't seem right, please make a pull request and add valid, well-reasoned explanations for your changes or comments.

Before adding a pull request, please see the contributing guidelines.

If this project is useful and important to you, you can bring positive energy by sharing some good words or supporting it. Thank you!

ToDo list

New chapters:

  • Reverse Proxy
  • Caching
  • 3rd party modules
  • Web Application Firewall
  • ModSecurity
  • Debugging

Existing chapters:

Introduction
  • Checklist to rule them all
Books
  • ModSecurity 3.0 and NGINX: Quick Start Guide
  • Cisco ACE to NGINX: Migration Guide
External Resources
  • Nginx official
    • Nginx Official Forum
    • Nginx Official Mailing List
  • Presentations
    • NGINX: Basics and Best Practices
    • NGINX Installation and Tuning
    • Nginx Internals (by Joshua Zhu)
    • Nginx internals (by Liqiang Xu)
    • How to secure your web applications with NGINX
    • Tuning TCP and NGINX on EC2
    • Extending functionality in nginx, with modules!
    • Nginx - Tips and Tricks.
    • Nginx Scripting - Extending Nginx Functionalities with Lua
    • How to handle over 1,200,000 HTTPS Reqs/Min
    • Using ngx_lua / lua-nginx-module in pixiv
  • Static analyzers
    • nginx-minify-conf
  • Comparison reviews
  • Benchmarking tools
    • wrk2
    • httperf
    • slowloris
    • slowhttptest
    • GoldenEye
  • Debugging tools
    • strace
    • GDB
    • SystemTap
    • stapxx
    • htrace.sh
Helpers
  • Server blocks logic
    • rewrite vs return
    • try_files directive
    • if, break and set
  • Log files
    • Conditional logging
    • Manual log rotation
  • Configuration syntax
    • Comments
    • Variables & Strings
    • Directives, Blocks, and Contexts
    • External files
    • Measurement units
    • Enable syntax highlighting
  • Connection processing
    • Event-Driven architecture
    • Multiple processes
    • Simultaneous connections
    • Keepalive connections
  • Load balancing algorithms
    • Backend parameters
    • Round Robin
    • Weighted Round Robin
    • Least Connections
    • Weighted Least Connections
    • IP Hash
    • Generic Hash
    • Fair module
    • Other methods
  • Monitoring
    • CollectD, Prometheus, and Grafana
      • nginx-vts-exporter
    • CollectD, InfluxDB, and Grafana
    • Telegraf, InfluxDB, and Grafana
  • Testing
    • Send request and show response headers
    • Send request with http method, user-agent, follow redirects and show response headers
    • Send multiple requests
    • Testing SSL connection
    • Testing SSL connection with SNI support
    • Testing SSL connection with specific SSL version
    • Testing SSL connection with specific cipher
    • Load testing with ApacheBench (ab)
      • Standard test
      • Test with KeepAlive header
    • Load testing with wrk2
      • Standard scenarios
      • POST call (with Lua)
      • Random paths (with Lua)
      • Multiple paths (with Lua)
      • Random server address to each thread (with Lua)
      • Multiple json requests (with Lua)
      • Debug mode (with Lua)
      • Analyse data passed to and from the threads
      • Parse wrk results and generate a report
    • Load testing with locust
      • Multiple paths
      • Multiple paths with different user sessions
    • TCP SYN flood Denial of Service attack
    • HTTP Denial of Service attack
  • Debugging
    • Show information about processes
    • Check memory usage
    • Show open files
    • Dump configuration
    • Get the list of configure arguments
    • Check if the module has been compiled
    • Show the most requested urls with http methods
    • Show the most accessed response codes
    • Calculating requests per second with IP addresses and urls
    • Check that the gzip_static module is working
    • Which worker processing current request
    • Capture only http packets
    • Extract User Agent from the http packets
    • Capture only http GET and POST packets
    • Capture requests and filter by source ip and destination port
    • Dump a process's memory
    • GNU Debugger (gdb)
      • Dump configuration from a running process
      • Show debug log in memory
      • Core dump backtrace
    • SystemTap cheatsheet
      • stapxx
  • Errors & Issues
    • Common errors
  • Configuration snippets
    • Custom error pages
    • Adding and removing the www prefix
    • Redirect POST request with payload to external endpoint
    • Allow multiple cross-domains using the CORS headers
    • Tips and methods for high load traffic testing (cheatsheet)
  • Other snippets
    • Create a temporary static backend
    • Create a temporary static backend with SSL support
    • Generate private key without passphrase
    • Generate CSR
    • Generate CSR (metadata from existing certificate)
    • Generate CSR with -config param
    • Generate private key and CSR
    • Generate ECDSA private key
    • Generate private key with CSR (ECC)
    • Generate self-signed certificate
    • Generate self-signed certificate from existing private key
    • Generate self-signed certificate from existing private key and csr
    • Generate multidomain certificate
    • Generate wildcard certificate
    • Generate certificate with 4096 bit private key
    • Generate DH Param key
    • Convert DER to PEM
    • Convert PEM to DER
    • Verification of the private key
    • Verification of the public key
    • Verification of the certificate
    • Verification of the CSR
    • Check whether the private key and the certificate match
  • Installation from source
    • Add autoinstaller for RHEL/Debian like distributions
    • Add compiler and linker options
      • Debugging Symbols
    • Add SystemTap - Real-time analysis and diagnostics tools
    • Separation and improvement of installation methods
    • Add installation process on CentOS 7 for NGINX
    • Add installation process on CentOS 7 for OpenResty
    • Add installation process on FreeBSD 11.2
Base Rules
  • Format, prettify and indent your Nginx code
  • Never use a hostname in a listen directive
  • Making a rewrite absolute (with scheme)
  • Use return directive for URL redirection (301, 302)
  • Configure log rotation policy
Debugging
  • Disable all workers except one
  • Memory analysis from core dumps
  • Use mirror module to copy requests to another backend
  • Dynamic debugging with echo module
Performance
  • Use index directive in the http block
  • Avoid multiple "index" directives
  • Use $request_uri to avoid using regular expressions
  • Use try_files directive to ensure a file exists
  • Don't pass all requests to backends - use "try_files"
  • Use return directive instead of rewrite for redirects
  • Set proxy timeouts for normal load and under heavy load
  • Configure kernel parameters for high load traffic
Hardening
  • Keep NGINX up-to-date
  • Use only the latest supported OpenSSL version
  • Prevent caching of sensitive data
  • Set proper file and directory permissions (also with ACLs) on paths
  • Implement HTTPOnly and secure attributes on cookies
Reverse Proxy
  • Setting up FastCGI proxying
Others
  • Define security policies with security.txt

Other stuff:

  • Add static error pages generator to NGINX snippets directory

Reports: blkcipher.info

Many of these recipes have been applied to the configuration of my private website.

An example configuration is in configuration examples chapter. It's also based on this version of printable high-res hardening cheatsheets.

SSL Labs

Read about SSL Labs grading here (SSL Labs Grading 2018).

Short SSL Labs grades explanation:

A+ is clearly the desired grade, both A and B grades are acceptable and result in adequate commercial security. The B grade, in particular, may be applied to configurations designed to support very wide audiences (for old clients).

I finally got an A+ grade and the following scores:

  • Certificate = 100%
  • Protocol Support = 100%
  • Key Exchange = 90%
  • Cipher Strength = 90%

blkcipher_ssllabs_preview

Mozilla Observatory

Read about Mozilla Observatory here.

I also got the highest score from Mozilla:

blkcipher_mozilla_observatory_preview

Checklist to rule them all

This checklist contains all rules (54) from this handbook.

Generally, I think that each of these principles is important and should be considered. I tried, however, to separate them into four levels of priority which I hope will help guide your decision.

| PRIORITY | NAME | AMOUNT | DESCRIPTION |
| -------- | ---- | ------ | ----------- |
| high | critical | 22 | definitely use this rule; otherwise it will introduce high risks to your NGINX security, performance, and more |
| medium | major | 18 | also very important but not critical; should still be addressed at the earliest possible opportunity |
| low | normal | 9 | not strictly necessary, but worth considering because it can improve how NGINX works |
| info | minor | 5 | an option to implement or use (not required) |

Remember, these are only guidelines. My point of view may be different from yours, so if you feel these priority levels do not reflect your configuration's commitment to security, performance, or whatever else, you should adjust them as you see fit.

| RULE | CHAPTER | PRIORITY |
| ---- | ------- | -------- |
| Define the listen directives explicitly with address:port pair<br>Prevents soft mistakes which may be difficult to debug. | Base Rules | high |
| Prevent processing requests with undefined server names<br>Protects against configuration errors, e.g. passing traffic to incorrect backends. | Base Rules | high |
| Configure log rotation policy<br>Save yourself trouble with your web server: configure an appropriate logging policy. | Base Rules | high |
| Always keep NGINX up-to-date<br>Use the newest NGINX package to fix vulnerabilities and bugs, and to use new features. | Hardening | high |
| Run as an unprivileged user<br>Use the principle of least privilege. This way, only the master process runs as root. | Hardening | high |
| Protect sensitive resources<br>Hidden directories and files should never be web accessible. | Hardening | high |
| Hide upstream proxy headers<br>Don't expose what version of software is running on the server. | Hardening | high |
| Force all connections over TLS<br>Protects your website, especially when handling sensitive communications. | Hardening | high |
| Use min. 2048-bit private keys<br>2048-bit private keys are sufficient for commercial use. | Hardening | high |
| Keep only TLS 1.3 and TLS 1.2<br>Use TLS with modern cryptographic algorithms and without protocol weaknesses. | Hardening | high |
| Use only strong ciphers<br>Use only strong, non-vulnerable cipher suites. | Hardening | high |
| Use more secure ECDH Curve<br>Use ECDH curves according to NIST recommendations. | Hardening | high |
| Use strong Key Exchange<br>Establishes a shared secret between two parties that can be used for secret communication. | Hardening | high |
| Defend against the BEAST attack<br>The server's ciphers should be preferred over the client's ciphers. | Hardening | high |
| HTTP Strict Transport Security<br>Tells browsers that the site should only be accessed using HTTPS, never HTTP. | Hardening | high |
| Reduce XSS risks (Content-Security-Policy)<br>CSP is best used as defence-in-depth. It reduces the harm that a malicious injection can cause. | Hardening | high |
| Control the behaviour of the Referer header (Referrer-Policy)<br>The default behaviour of referrer leaking puts websites at risk of privacy and security breaches. | Hardening | high |
| Provide clickjacking protection (X-Frame-Options)<br>Defends against clickjacking attacks. | Hardening | high |
| Prevent some categories of XSS attacks (X-XSS-Protection)<br>Prevents pages from rendering if a potential reflected XSS attack is detected. | Hardening | high |
| Prevent Sniff Mimetype middleware (X-Content-Type-Options)<br>Tells browsers not to sniff MIME types. | Hardening | high |
| Reject unsafe HTTP methods<br>Only allow the HTTP methods for which you, in fact, provide services. | Hardening | high |
| Prevent caching of sensitive data<br>Helps prevent critical data (e.g. credit card details or usernames) from being leaked. | Hardening | high |
| Organising Nginx configuration | Base Rules | medium |
| Format, prettify and indent your Nginx code<br>Formatted code is easier to maintain and debug, and can be read and understood in a short amount of time. | Base Rules | medium |
| Use reload method to change configurations on the fly | Base Rules | medium |
| Use HTTP/2<br>HTTP/2 will make our applications faster, simpler, and more robust. | Performance | medium |
| Maintaining SSL sessions<br>Improves performance from the clients’ perspective. | Performance | medium |
| Use exact names in a server_name directive where possible | Performance | medium |
| Avoid checks server_name with if directive<br>Decreases NGINX processing requirements. | Performance | medium |
| Use try_files directive to ensure a file exists<br>Use it if you need to search for a file; it also saves code duplication. | Performance | medium |
| Use return directive instead of rewrite for redirects<br>The return directive gives a speedier response than rewrite. | Performance | medium |
| Disable unnecessary modules<br>Limits vulnerabilities and improves performance and memory efficiency. | Hardening | medium |
| Hide Nginx version number<br>Don't disclose sensitive information about NGINX. | Hardening | medium |
| Hide Nginx server signature<br>Don't disclose sensitive information about NGINX. | Hardening | medium |
| Use only the latest supported OpenSSL version | Hardening | medium |
| Mitigation of CRIME/BREACH attacks<br>Disable HTTP compression or compress only non-sensitive content. | Hardening | medium |
| Deny the use of browser features (Feature-Policy)<br>A mechanism to allow and deny the use of browser features. | Hardening | medium |
| Control Buffer Overflow attacks<br>Prevents errors characterised by the overwriting of memory fragments of the NGINX process. | Hardening | medium |
| Mitigating Slow HTTP DoS attacks (Closing Slow Connections)<br>Prevents attacks in which the attacker sends HTTP requests piece by piece, slowly. | Hardening | medium |
| Enable DNS CAA Policy<br>Allows domain name holders to indicate to CAs whether they are authorized to issue digital certificates. | Others | medium |
| Separate listen directives for 80 and 443 | Base Rules | low |
| Use only one SSL config for the listen directive | Base Rules | low |
| Use geo/map modules instead of allow/deny | Base Rules | low |
| Drop the same root inside location block | Base Rules | low |
| Adjust worker processes | Performance | low |
| Make an exact location match to speed up the selection process | Performance | low |
| Use limit_conn to improve limiting the download speed | Performance | low |
| Tweak passive health checks | Load Balancing | low |
| Define security policies with security.txt | Others | low |
| Map all the things... | Base Rules | info |
| Use debug mode to track down unexpected behaviour | Debugging | info |
| Use custom log formats | Debugging | info |
| Memory analysis from core dumps | Debugging | info |
| Don't disable backends by comments, use down parameter | Load Balancing | info |

Printable high-res hardening cheatsheets

I created two versions of printable posters with hardening cheatsheets (High-Res 5000x8200) based on recipes from this handbook:

For *.xcf and *.pdf formats please see this directory.

  • A+ with all 100%’s on @ssllabs and 120/100 on @mozilla observatory:

    It provides the highest scores of the SSL Labs test. The setup is very restrictive, with a 4096-bit private key, TLS 1.2 only, and modern strict TLS cipher suites (non-128-bit).

nginx-hardening-cheatsheet-100p

  • A+ on @ssllabs and 120/100 on @mozilla observatory with TLS 1.3 support:

    It provides a less restrictive setup with a 2048-bit private key, TLS 1.3 and 1.2, and modern strict TLS cipher suites (128/256-bit). The final grade is also in line with industry standards. I recommend using this configuration.

nginx-hardening-cheatsheet-tls13

Books

Authors: Valery Kholodkov

Excel in Nginx quickly by learning to use its most essential features in real-life applications.

  • Learn how to set up, configure, and operate an Nginx installation for day-to-day use
  • Explore the vast features of Nginx to manage it like a pro, and use them successfully to run your website
  • Example-based guide to get the best out of Nginx to reduce resource usage footprint

This short review comes from this book or the store.

Authors: Derek DeJonghe

You’ll find recipes for:

  • Traffic management and A/B testing
  • Managing programmability and automation with dynamic templating and the NGINX Plus API
  • Securing access through encrypted traffic, secure links, HTTP authentication subrequests, and more
  • Deploying NGINX to AWS, Azure, and Google cloud-computing services
  • Using Docker to deploy containers and microservices
  • Debugging and troubleshooting, performance tuning, and practical ops tips

This short review comes from this book or the store.

Authors: Martin Fjordvald, Clement Nedelcu

Harness the power of Nginx to make the most of your infrastructure and serve pages faster than ever.

  • Discover possible interactions between Nginx and Apache to get the best of both worlds
  • Learn to exploit the features offered by Nginx for your web applications
  • Get your hands on the most updated version of Nginx (1.13.2) to support all your web administration requirements

This short review comes from this book or the store.

Authors: Rahul Sharma

Optimize NGINX for high-performance, scalable web applications.

  • Configure Nginx for best performance, with configuration examples and explanations
  • Step–by-step tutorials for performance testing using open source software
  • Tune the TCP stack to make the most of the available infrastructure

This short review comes from this book or the store.

Authors: Dimitri Aivaliotis

Written for experienced systems administrators and engineers, this book teaches you from scratch how to configure Nginx for any situation. Step-by-step instructions and real-world code snippets clarify even the most complex areas.

This short review comes from this book or the store.

Authors: Faisal Memon, Owen Garrett, Michael Pleshakov

Learn in this ebook how to get started with ModSecurity, the world’s most widely deployed web application firewall (WAF), now available for NGINX and NGINX Plus.

This short review comes from this book or the store.

Authors: Faisal Memon

This ebook provides step-by-step instructions on replacing Cisco ACE with NGINX and off-the-shelf servers. NGINX helps you cut costs and modernize.

In this ebook you will learn:

  • How to migrate Cisco ACE configuration to NGINX, with detailed examples
  • Why you should go with a software load balancer, and not hardware

This short review comes from this book or the store.

External Resources

Nginx official

  :black_small_square: Nginx Project
  :black_small_square: Nginx Documentation
  :black_small_square: Nginx Wiki
  :black_small_square: Nginx Admin's Guide
  :black_small_square: Nginx Pitfalls and Common Mistakes
  :black_small_square: Nginx Forum
  :black_small_square: Nginx Mailing List
  :black_small_square: Nginx Read-only Mirror

Nginx distributions

  :black_small_square: OpenResty
  :black_small_square: The Tengine Web Server

Comparison reviews

  :black_small_square: NGINX vs. Apache (Pro/Con Review, Uses, & Hosting for Each)
  :black_small_square: Web cache server performance benchmark: nuster vs nginx vs varnish vs squid

Cheatsheets & References

  :black_small_square: agentzh's Nginx Tutorials
  :black_small_square: Nginx Guts
  :black_small_square: Nginx Cheatsheet
  :black_small_square: Nginx Tutorials, Linux Sysadmin Configuration & Optimizing Tips and Tricks
  :black_small_square: Nginx boilerplate configs
  :black_small_square: Awesome Nginx configuration template
  :black_small_square: Nginx Quick Reference
  :black_small_square: A collection of resources covering Nginx and more
  :black_small_square: A collection of useful Nginx configuration snippets

Performance & Hardening

  :black_small_square: Nginx Tuning For Best Performance by Denji
  :black_small_square: TLS has exactly one performance problem: it is not used widely enough
  :black_small_square: SSL/TLS Deployment Best Practices
  :black_small_square: SSL Server Rating Guide
  :black_small_square: How to Build a Tough NGINX Server in 15 Steps
  :black_small_square: Top 25 Nginx Web Server Best Security Practices
  :black_small_square: Nginx Secure Web Server
  :black_small_square: Strong ciphers for Apache, Nginx, Lighttpd and more
  :black_small_square: Strong SSL Security on Nginx
  :black_small_square: Enable cross-origin resource sharing (CORS)
  :black_small_square: NAXSI - WAF for Nginx
  :black_small_square: ModSecurity for Nginx
  :black_small_square: Transport Layer Protection Cheat Sheet
  :black_small_square: Security/Server Side TLS

Presentations

  :black_small_square: NGINX: Basics and Best Practices
  :black_small_square: NGINX Installation and Tuning
  :black_small_square: Nginx Internals (by Joshua Zhu)
  :black_small_square: Nginx internals (by Liqiang Xu)
  :black_small_square: How to secure your web applications with NGINX
  :black_small_square: Tuning TCP and NGINX on EC2
  :black_small_square: Extending functionality in nginx, with modules!
  :black_small_square: Nginx - Tips and Tricks.
  :black_small_square: Nginx Scripting - Extending Nginx Functionalities with Lua
  :black_small_square: How to handle over 1,200,000 HTTPS Reqs/Min
  :black_small_square: Using ngx_lua / lua-nginx-module in pixiv

Playgrounds

  :black_small_square: NGINX Rate Limit, Burst and nodelay sandbox

Config generators

  :black_small_square: Nginx config generator on steroids
  :black_small_square: Mozilla SSL Configuration Generator

Static analyzers

  :black_small_square: gixy - is a tool to analyze Nginx configuration to prevent security misconfiguration and automate flaw detection.
  :black_small_square: nginx-config-formatter - Nginx config file formatter/beautifier written in Python.
  :black_small_square: nginxbeautifier - format and beautify Nginx config files.
  :black_small_square: nginx-minify-conf - creates a minified version of a Nginx configuration.

Log analyzers

  :black_small_square: GoAccess - is a fast, terminal-based log analyzer (quickly analyze and view web server statistics in real time).
  :black_small_square: Graylog - is a leading centralized log management for capturing, storing, and enabling real-time analysis.
  :black_small_square: Logstash - is an open source, server-side data processing pipeline.

Performance analyzers

  :black_small_square: ngxtop - parses your Nginx access log and outputs useful, top-like, metrics of your Nginx server.

Benchmarking tools

  :black_small_square: ab - is a single-threaded command line tool for measuring the performance of HTTP web servers.
  :black_small_square: siege - is an http load testing and benchmarking utility.
  :black_small_square: wrk - is a modern HTTP benchmarking tool capable of generating significant load.
  :black_small_square: wrk2 - is a constant throughput, correct latency recording variant of wrk.
  :black_small_square: bombardier - is an HTTP(S) benchmarking tool.
  :black_small_square: gobench - is an HTTP/HTTPS load testing and benchmarking tool.
  :black_small_square: hey - is an HTTP load generator, an ApacheBench (ab) replacement, formerly known as rakyll/boom.
  :black_small_square: boom - is a script you can use to quickly smoke-test your web app deployment.
  :black_small_square: httperf - the httperf HTTP load generator.
  :black_small_square: JMeter™ - is designed to load test functional behavior and measure performance.
  :black_small_square: Gatling - is a powerful open-source load and performance testing tool for web applications.
  :black_small_square: locust - is an easy-to-use, distributed, user load testing tool.
  :black_small_square: slowloris - low bandwidth DoS tool. Slowloris rewrite in Python.
  :black_small_square: slowhttptest - application layer DoS attack simulator.
  :black_small_square: GoldenEye - GoldenEye Layer 7 (KeepAlive+NoCache) DoS test tool.

Debugging tools

  :black_small_square: strace - is a diagnostic, debugging and instructional userspace utility (linux syscall tracer) for Linux.
  :black_small_square: GDB - allows you to see what is going on 'inside' another program while it executes.
  :black_small_square: SystemTap - provides infrastructure to simplify the gathering of information about the running Linux system.
  :black_small_square: stapxx - simple macro language extensions to SystemTap.
  :black_small_square: htrace.sh - is a simple Swiss Army knife for http/https troubleshooting and profiling.

Development

  :black_small_square: Programming in Lua (first edition)
  :black_small_square: Emiller’s Guide To Nginx Module Development
  :black_small_square: An Introduction To OpenResty (nginx + lua) - Part 1
  :black_small_square: An Introduction To OpenResty - Part 2 - Concepts
  :black_small_square: An Introduction To OpenResty - Part 3

Online tools

  :black_small_square: SSL Server Test by SSL Labs
  :black_small_square: SSL/TLS Capabilities of Your Browser
  :black_small_square: Test SSL/TLS (PCI DSS, HIPAA and NIST)
  :black_small_square: SSL analyzer and certificate checker
  :black_small_square: Test your TLS server configuration (e.g. ciphers)
  :black_small_square: Scan your website for non-secure content
  :black_small_square: Public Diffie-Hellman Parameter Service/Tool
  :black_small_square: Analyse the HTTP response headers by Security Headers
  :black_small_square: Analyze your website by Mozilla Observatory
  :black_small_square: CAA Record Helper
  :black_small_square: Linting tool that will help you with your site's accessibility, speed, security and more
  :black_small_square: Service to scan and analyse websites
  :black_small_square: Tool from above to either encode or decode a string of text
  :black_small_square: Online translator for search queries on log data
  :black_small_square: Online regex tester and debugger: PHP, PCRE, Python, Golang and JavaScript
  :black_small_square: Online tool to learn, build, & test Regular Expressions
  :black_small_square: Online Regex Tester & Debugger
  :black_small_square: A web app for encryption, encoding, compression and data analysis
  :black_small_square: Nginx location match tester
  :black_small_square: Nginx location match visible

Other stuff

  :black_small_square: OWASP Cheat Sheet Series
  :black_small_square: Mozilla Web Security
  :black_small_square: Application Security Wiki
  :black_small_square: OWASP ASVS 4.0
  :black_small_square: Transport Layer Security (TLS) Parameters
  :black_small_square: Security/Server Side TLS by Mozilla
  :black_small_square: TLS Security 6: Examples of TLS Vulnerabilities and Attacks
  :black_small_square: Guidelines for Setting Security Headers
  :black_small_square: Secure your web application with these HTTP headers
  :black_small_square: Security HTTP Headers
  :black_small_square: Analysis of various reverse proxies, cache proxies, load balancers, etc.
  :black_small_square: TLS Redirection (and Virtual Host Confusion)
  :black_small_square: HTTPS on Stack Overflow: The End of a Long Road
  :black_small_square: The Architecture of Open Source Applications - Nginx
  :black_small_square: BBC Digital Media Distribution: How we improved throughput by 4x
  :black_small_square: The C10K problem by Dan Kegel
  :black_small_square: The Secret To 10 Million Concurrent Connections
  :black_small_square: High Performance Browser Networking
  :black_small_square: jemalloc vs tcmalloc vs dlmalloc

Helpers

Directories and files

If you compile the NGINX server from source, by default all files and directories are available under /usr/local/nginx.

For upstream NGINX packages, the paths can be as follows (it depends on the type of system):

  • /etc/nginx - is the default configuration root for the NGINX service

    • other locations: /usr/local/etc/nginx, /usr/local/nginx/conf
  • /etc/nginx/nginx.conf - is the default configuration entry point used by the NGINX service; it includes the top-level http block and all other configuration contexts and files

    • other locations: /usr/local/etc/nginx/nginx.conf, /usr/local/nginx/conf/nginx.conf
  • /usr/share/nginx - is the default root directory for requests, contains html directory and basic static files

  • /var/log/nginx - is the default log (access and error log) location for NGINX

    • other locations: logs/ in root directory
  • /var/cache/nginx - is the default temporary files location for NGINX

    • other locations: /var/lib/nginx
  • /etc/nginx/conf - contains custom/vhosts configuration files

    • other locations: /etc/nginx/conf.d, /etc/nginx/sites-enabled (I can't stand this debian-like convention...)
  • /var/run/nginx - contains information about NGINX process(es)

    • other locations: /usr/local/nginx/logs, logs/ in root directory

Commands

  • nginx -h - shows the help
  • nginx -v - shows the NGINX version
  • nginx -V - shows the extended information about NGINX: version, build parameters and configuration arguments
  • nginx -t - tests the NGINX configuration
  • nginx -c - sets configuration file (default: /etc/nginx/nginx.conf)
  • nginx -p - sets prefix path (default: /etc/nginx/)
  • nginx -T - tests the NGINX configuration and prints the validated configuration on the screen
  • nginx -s - sends a signal to the NGINX master process:
    • stop - discontinues the NGINX process immediately
    • quit - stops the NGINX process after it finishes processing inflight requests
    • reload - reloads the configuration without stopping processes
    • reopen - instructs NGINX to reopen log files
  • nginx -g - sets global directives outside of the configuration file
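
The -T option is especially handy for inspecting the final, validated configuration in one place. For example (the grep filter is just an illustration):

nginx -T | grep -E "listen|server_name"    # show all listen/server_name pairs across all included files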

Some useful snippets for process management:

  • testing configuration:
/usr/sbin/nginx -t -c /etc/nginx/nginx.conf
/usr/sbin/nginx -t -q -g 'daemon on; master_process on;' # ; echo $?
  • starting daemon:
/usr/sbin/nginx -g 'daemon on; master_process on;'

service nginx start
systemctl start nginx

# You can also start NGINX from start-stop-daemon script:
/sbin/start-stop-daemon --quiet --start --exec /usr/sbin/nginx --background --retry QUIT/5 --pidfile /run/nginx.pid
  • stopping daemon:
/usr/sbin/nginx -s quit     # graceful shutdown (waiting for the worker processes to finish serving current requests)
/usr/sbin/nginx -s stop     # fast shutdown (kill connections immediately)

service nginx stop
systemctl stop nginx

# You can also stop NGINX from start-stop-daemon script:
/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
  • reloading daemon:
/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload

service nginx reload
systemctl reload nginx

kill -HUP $(cat /var/run/nginx.pid)
kill -HUP $(pgrep -f "nginx: master")

Configuration syntax

NGINX uses a micro programming language in its configuration files. The language's design is heavily influenced by Perl and Bourne shell. For me, NGINX's configuration has a simple and very transparent structure.

Comments

NGINX's configuration files don't support comment blocks; they only accept single-line comments introduced with the # character.
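
A minimal illustration (the directive and its value are just an example):

# This whole line is a comment:
http {

  keepalive_timeout 65;  # a comment may also follow a directive on the same line

}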

End of lines

Lines containing directives must end with a semicolon (;), or NGINX will fail to load the configuration and report an error.

Variables & Strings

Variables start with $. Some modules introduce variables that can be used when setting directives.

There are some directives that do not support variables, e.g. access_log or error_log.

To assign a value to a variable, use the set directive:

set $var "value";

To learn more about variables see if, break and set section.

Some interesting things about variables in NGINX:

Make sure to read agentzh's Nginx Tutorials - they're all about NGINX tips & tricks. That guy is a guru and the creator of OpenResty. In these tutorials he describes, amongst other things, variables in great detail.

  • the scope of variables spreads out over the entire configuration
  • variable assignment occurs when requests are actually being served
  • a variable has exactly the same lifetime as the corresponding request
  • each request has its own version of all those variables' containers (different container values)
  • requests do not interfere with each other even if they reference a variable with the same name
  • the assignment operation is only performed in requests that access the location

Strings may be written without quotes unless they include blank spaces, semicolons, or curly braces; then they need to be escaped with backslashes or enclosed in single/double quotes.

Variables in quoted strings are expanded normally unless the $ is escaped.
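
A small illustration (names and values hypothetical):

set $greeting "hello world";             # quotes are needed because of the blank space
add_header X-Greeting "msg: $greeting";  # $greeting is expanded inside the quoted string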

Directives, Blocks, and Contexts

Read this great article about the NGINX configuration inheritance model by Martin Fjordvald.

Configuration options are called directives. We have four types of directives:

  • standard directive - one value per context:

    worker_connections 512;
  • array directive - multiple values per context:

    error_log /var/log/nginx/localhost/localhost-error.log warn;
  • action directive - something which does not just configure:

    rewrite ^(.*)$ /msie/$1 break;
  • try_files directive:

    try_files $uri $uri/ /test/index.html;

    If you want to review all directives see alphabetical index of directives.

Directives are organised into groups known as blocks or contexts. Generally, a context is a block directive that can have other directives inside its braces. The configuration appears to be organised in a tree-like structure, defined by sets of brackets - { and }.

As a general rule, if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
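
A minimal sketch of this inheritance (paths hypothetical):

http {

  root /var/www/default;        # inherited by every server context below...

  server {
    # ...this server serves from /var/www/default
  }

  server {
    root /var/www/example.com;  # ...while this one overrides the inherited value
  }

}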

Directives placed in the configuration file outside of any contexts are considered to be in the global/main context.

Contexts can be layered within one another (a level of inheritance). Their structure looks like this:

Global/Main Context
        |
        |
        +-----» Events Context
        |
        |
        +-----» HTTP Context
        |          |
        |          |
        |          +-----» Server Context
        |          |          |
        |          |          |
        |          |          +-----» Location Context
        |          |
        |          |
        |          +-----» Upstream Context
        |
        |
        +-----» Mail Context

External files

The include directive may appear inside any context. It attaches another file, or files matching the specified mask:

include /etc/nginx/proxy.conf;
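
For example, to pull in every virtual host file from a directory (the layout below is just a common convention):

include /etc/nginx/conf.d/*.conf;
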
Measurement units

Sizes can be specified in:

  • k or K: Kilobytes
  • m or M: Megabytes
  • g or G: Gigabytes
client_max_body_size 2M;

Time intervals can be specified in:

  • ms: Milliseconds
  • s: Seconds (default, without a suffix)
  • m: Minutes
  • h: Hours
  • d: Days
  • w: Weeks
  • M: Months (30 days)
  • y: Years (365 days)
proxy_read_timeout 20s;
Enable syntax highlighting
vi/vim
# 1) Download vim plugin for NGINX:

# Official NGINX vim plugin:
mkdir -p ~/.vim/syntax/

wget "http://www.vim.org/scripts/download_script.php?src_id=19394" -O ~/.vim/syntax/nginx.vim

# Improved NGINX vim plugin (incl. syntax highlighting) with Pathogen:
mkdir -p ~/.vim/{autoload,bundle}/

curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
echo -en "\nexecute pathogen#infect()\n" >> ~/.vimrc

git clone https://github.com/chr4/nginx.vim ~/.vim/bundle/nginx.vim

# 2) Set location of NGINX config files:
cat > ~/.vim/filetype.vim << __EOF__
au BufRead,BufNewFile /etc/nginx/*,/etc/nginx/conf.d/*,/usr/local/nginx/conf/*,*/conf/nginx.conf if &ft == '' | setfiletype nginx | endif
__EOF__

It may be interesting for you: Highlight insecure SSL configuration in Vim.

Sublime Text

Install cabal - a system for building and packaging Haskell libraries and programs (on Ubuntu):

add-apt-repository -y ppa:hvr/ghc
apt-get update

apt-get install -y cabal-install-1.22 ghc-7.10.2

# Add this to your shell main configuration file:
export PATH=$HOME/.cabal/bin:/opt/cabal/1.22/bin:/opt/ghc/7.10.2/bin:$PATH
source $HOME/.<shellrc>

cabal update
  • nginx-lint:

    git clone https://github.com/temoto/nginx-lint
    
    cd nginx-lint && cabal install --global
  • sublime-nginx + SublimeLinter-contrib-nginx-lint:

    Bring up the Command Palette and type install. Among the commands you should see Package Control: Install Package. Type nginx to install sublime-nginx, and after that repeat the above to install SublimeLinter-contrib-nginx-lint: type SublimeLinter-contrib-nginx-lint.

Processes

NGINX has one master process and one or more worker processes.

The main purpose of the master process is to read and evaluate configuration files, as well as to maintain the worker processes (respawning them when a worker dies), handle signals, notify workers, open log files, and, of course, bind to ports.

The master process should be started as root, because this allows NGINX to open sockets below 1024 (it needs to be able to listen on port 80 for HTTP and 443 for HTTPS).

The worker processes do the actual processing of requests and get commands from the master process. They run in an event loop (registering events and responding when one occurs), handle network connections, read and write content to disk, and communicate with upstream servers. They are spawned by the master process and run as the specified (unprivileged) user and group.

NGINX also has cache loader and cache manager processes, but only if you enable caching.
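
You can observe this process model on a running instance, e.g. (Linux; output varies between systems):

# Show the master process, its workers, and the cache processes (if caching is enabled):
ps -C nginx -o pid,ppid,user,args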

The following signals can be sent to the NGINX master process:

| SIGNAL | NUM | DESCRIPTION |
| ------ | --- | ----------- |
| TERM, INT | 15, 2 | quick shutdown |
| QUIT | 3 | graceful shutdown |
| KILL | 9 | halts a stubborn process |
| HUP | 1 | configuration reload; start new workers, gracefully shut down the old worker processes |
| USR1 | 10 | reopen the log files |
| USR2 | 12 | upgrade the executable on the fly |
| WINCH | 28 | gracefully shut down the worker processes |

There’s no need to control the worker processes yourself. However, they support some signals too:

| SIGNAL | NUM | DESCRIPTION |
| ------ | --- | ----------- |
| TERM, INT | 15, 2 | quick shutdown |
| QUIT | 3 | graceful shutdown |
| USR1 | 10 | reopen the log files |
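
For example, USR2 and WINCH are used together for a zero-downtime binary upgrade (a common sequence; the pid file paths assume the standard /var/run location):

# Spawn a new master process using the new binary:
kill -USR2 $(cat /var/run/nginx.pid)

# Gracefully shut down the old master's workers (the old pid file is renamed to .oldbin):
kill -WINCH $(cat /var/run/nginx.pid.oldbin)

# When the new binary proves itself, gracefully quit the old master:
kill -QUIT $(cat /var/run/nginx.pid.oldbin)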

Connection processing

NGINX supports a variety of connection processing methods, which depend on the platform used.

In general there are four types of event multiplexing:

  • select - an anachronism, not recommended, but available on all platforms as a fallback
  • poll - an anachronism and not recommended

And the most efficient implementations of non-blocking I/O:

  • epoll - recommended if you're using GNU/Linux
  • kqueue - recommended if you're using BSD (technically superior to epoll)
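
NGINX normally picks the best available method by itself, but you can force one with the use directive in the events context. A minimal sketch:

events {

  use epoll;  # explicit selection; usually unnecessary on GNU/Linux

}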

There are also great resources about them (including comparisons):

Look also at libevent benchmark (read about libevent – an event notification library):

libevent-benchmark

This infographic comes from daemonforums - An interesting benchmark (kqueue vs. epoll).

You may also want to see why big players use NGINX on FreeBSD instead of GNU/Linux:

Event-Driven architecture

Thread Pools in NGINX Boost Performance 9x! - this official article is an amazing explanation about thread pools and generally about handling connections. I also recommend Inside NGINX: How We Designed for Performance & Scale. Both are really great.

NGINX uses an Event-Driven architecture which heavily relies on Non-Blocking I/O. One advantage of non-blocking/asynchronous operations is that you maximize the usage of a single CPU as well as memory, because your thread can continue its work in parallel.

There is a perfectly good and brief summary about non-blocking I/O and multi-threaded blocking I/O by Werner Henze.

Look what the official documentation says about it:

It’s well known that NGINX uses an asynchronous, event‑driven approach to handling connections. This means that instead of creating another dedicated process or thread for each request (like servers with a traditional architecture), it handles multiple connections and requests in one worker process. To achieve this, NGINX works with sockets in a non‑blocking mode and uses efficient methods such as epoll and kqueue.

Because the number of full‑weight processes is small (usually only one per CPU core) and constant, much less memory is consumed and CPU cycles aren’t wasted on task switching. The advantages of such an approach are well‑known through the example of NGINX itself. It successfully handles millions of simultaneous requests and scales very well.

I must not forget to mention Non-Blocking I/O and 3rd party modules here (from the official documentation):

Unfortunately, many third‑party modules use blocking calls, and users (and sometimes even the developers of the modules) aren’t aware of the drawbacks. Blocking operations can ruin NGINX performance and must be avoided at all costs.

To handle concurrent requests with a single worker process, NGINX uses the reactor design pattern. Basically, it's single-threaded, but it can fork several processes to utilize multiple cores.

However, NGINX is not a single-threaded application. Each of its worker processes is single-threaded and can handle thousands of concurrent connections. NGINX does not create a new process/thread for each connection/request; instead, it starts several worker processes at startup. It handles requests asynchronously with one thread per worker, rather than using multi-threaded programming (it uses an event loop with asynchronous I/O).

That way, I/O and network operations are not a very big bottleneck (remember that your CPU would otherwise spend a lot of time waiting for your network interfaces, for example). This results from the fact that NGINX uses only one thread (per worker) to service all requests. When requests arrive at the server, they are serviced one at a time. However, when the code being serviced needs to wait for something else, it registers a callback on another queue and the main thread continues running (it doesn't wait).

Now you see why NGINX can handle a large amount of requests perfectly well (and without any problems).

For more information, take a look at the following resources:

Multiple processes

NGINX uses only asynchronous I/O, which makes blocking a non-issue. The only reason NGINX uses multiple processes is to make full use of multi-core, multi-CPU and hyper-threading systems. NGINX requires only enough worker processes to get the full benefit of symmetric multiprocessing (SMP).

From NGINX documentation:

The NGINX configuration recommended in most cases – running one worker process per CPU core – makes the most efficient use of hardware resources.

NGINX uses a custom event loop which was designed specifically for NGINX - all connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers.

Multiplexing works by using a loop to step through a program chunk by chunk, operating on one piece of data/new connection/whatever per loop iteration. It is all based on event multiplexing with epoll(), kqueue() or select(). Within each worker, NGINX can handle many thousands of concurrent connections and requests per second.

See the Nginx Internals presentation for a lot of great stuff about the internals of NGINX.

NGINX does not fork a process or thread per connection (like Apache does), so memory usage is very conservative and extremely efficient in the vast majority of cases. NGINX is faster and consumes less memory than Apache. It is also very CPU-friendly, because there's no ongoing create-destroy pattern for processes or threads.

Finally and in summary:

  • uses a Non-Blocking, Event-Driven architecture
  • uses the single-threaded reactor pattern to handle concurrent requests
  • uses a highly efficient loop for connection processing
  • is not a single-threaded application, because it starts multiple worker processes (to handle multiple connections and requests) at startup

Simultaneous connections

Okay, so how many simultaneous connections can be processed by NGINX?

worker_processes * worker_connections = max connections

According to this formula: if you are running 4 worker processes with 4096 worker connections per worker, you will be able to serve up to 16384 connections.
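
A minimal sketch of the corresponding configuration (values taken from the example above):

# nginx.conf - main context:
worker_processes 4;

events {

  # Per worker process:
  worker_connections 4096;

}

# 4 workers * 4096 connections = 16384 connections in total.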

I've seen some admins directly translate the product of worker_processes and worker_connections into the number of clients that can be served simultaneously. In my opinion, this is a mistake, because each client (e.g. a browser) opens a number of parallel connections to download the various components that compose a web page, for example images, scripts, and so on.

Additionally, you should know that the worker_connections directive includes all connections (e.g. connections with proxied servers, among others), not only connections with clients.

Be aware that every worker connection (even in the sleeping state) needs about 256 bytes of memory, so you can increase the limit easily.

The number of connections is limited by the maximum number of open files (RLIMIT_NOFILE) on your system. To change the limit of the maximum file descriptors that can be opened by a single worker process (as opposed to the limit of the user running NGINX), set the worker_rlimit_nofile directive (with this, there's no need to restart the main process).

A file descriptor is an opaque handle that is used in the interface between user and kernel space to identify file/socket resources.

I think the chance of running out of file descriptors is minimal. However, you should know the following important rules:

  • before increasing the number of worker_processes or worker_connections, verify the open file limit; the following commands will be useful:

    # List all file descriptors in kernel memory:
    #   first value:  <allocated file handles>
    #  second value:  <unused-but-allocated file handles>
    #   third value:  <the system-wide max number of file handles>
    sysctl fs.file-nr
    
    # Find out the system-wide maximum number of file handles:
    sysctl fs.file-max
    
    # Current open file descriptors per NGINX worker process:
    for _pid in $(pgrep -f "nginx: worker") ; do
    
      echo -en "\n\n##### per worker pid: $_pid #####\n\n"
    
      # List files from proc directory:
      #   - ls -l /proc/${_pid}/fd
      ls /proc/${_pid}/fd | wc -l
    
      # List all open files (files, memory mapped files):
      lsof -as -p $_pid | awk '{if(NR>1)print}'
    
    done
  • if you have SELinux enabled, you will need to run setsebool -P httpd_setrlimit 1 so that NGINX has permissions to set its rlimit. To diagnose SELinux denials and attempts you can use sealert -a /var/log/audit/audit.log.

  • worker_rlimit_nofile serves to dynamically change the maximum number of file descriptors the NGINX worker process can handle, which is typically set to the system's soft limit

  • worker_rlimit_nofile works only at the process level; it's capped by the system's hard limit (ulimit -Hn)

So if you don't set this directive manually, the OS settings will determine how many file descriptors NGINX can use.
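
In that case, you can check the effective limits for the user that runs NGINX, e.g.:

ulimit -Sn   # soft limit of open files for the current shell/user
ulimit -Hn   # hard limit of open files for the current shell/user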

Ok, so how many fds are opened by NGINX?

  • one file handler for the client's active connection
  • one file handler for an open file (e.g. a static file)
  • one file handler for the proxied connection (which opens a socket handling the request to the remote or local host/process)

Also important is:

NGINX can use up to two file descriptors per full-fledged connection.

Look also at these diagrams:

  • 1 file handler for the connection with the client and 1 file handler for the static file being served by NGINX:

                         +-----------------+
    +----------+         |                 |
    |          |    1    |                 |
    |  CLIENT <---------------> NGINX      |
    |          |         |        ^        |
    +----------+         |        |        |
                         |      2 |        |
                         |        |        |
                         |        |        |
                         | +------v------+ |
                         | | STATIC FILE | |
                         | +-------------+ |
                         +-----------------+
    
  • 1 file handler for the connection with the client and 1 file handler for an open socket to the remote or local host/process:

                         +-----------------+
    +----------+         |                 |         +-----------+
    |          |    1    |                 |    2    |           |
    |  CLIENT <---------------> NGINX <---------------> BACKEND  |
    |          |         |                 |         |           |
    +----------+         |                 |         +-----------+
                         +-----------------+
    
  • 2 file handlers for two simultaneous connections from the same client (1, 4), 1 file handler for the connection with the other client (3), 2 file handlers for static files (2, 5), and 1 file handler for an open socket to the remote or local host/process (6), so in total that is 6 file descriptors:

                      4
          +-----------------------+
          |              +--------|--------+
    +-----v----+         |        |        |
    |          |    1    |        v        |  6
    |  CLIENT <-----+---------> NGINX <---------------+
    |          |    |    |        ^        |    +-----v-----+
    +----------+    |    |        |        |    |           |
                  3 |    |      2 | 5      |    |  BACKEND  |
    +----------+    |    |        |        |    |           |
    |          |    |    |        |        |    +-----------+
    |  CLIENT  <----+    | +------v------+ |
    |          |         | | STATIC FILE | |
    +----------+         | +-------------+ |
                         +-----------------+
    

In the first two examples we can see that NGINX needs 2 file handlers for a full-fledged connection (and still uses 2 worker connections). In the third example NGINX likewise needs 2 file handlers for every full-fledged connection (also when the client uses parallel connections).

I think that the correct value of worker_rlimit_nofile for all the connections of a single worker is:

worker_rlimit_nofile = worker_connections

So the maximum number of open files for NGINX should be:

worker_processes * worker_rlimit_nofile + (shared libs, log files, event pool etc.) = max open files

To serve 16384 connections across all workers, and bearing in mind the other handlers used by NGINX, a reasonable value for the maximum number of file handlers in this case may be 20000. I think it's more than enough.

To change/improve the limits you should:

# Add to /etc/sysctl.d/99-fs.conf (system-wide value):
fs.file-max = 50000

# Add to /etc/security/limits.conf:
nginx       soft    nofile    10000
nginx       hard    nofile    20000

# Update worker_rlimit_nofile in nginx.conf within the main context:
worker_rlimit_nofile          20000;

You can test the hard and soft limits applying to the NGINX process with this: grep "Max open files" /proc/$(pgrep -f "nginx: master")/limits.

There is a great article about Optimizing Nginx for High Traffic Loads.

Request processing stages

There are altogether 11 phases when NGINX handles (processes) a request: post-read, server-rewrite, find-config, rewrite, post-rewrite, preaccess, access, post-access, precontent, content, and log (these are the phase names used in the NGINX sources).

You may feel lost now (me too...), so let me put this great and simple preview here:

request-flow

This infographic comes from Inside NGINX official library.

Server blocks logic

NGINX has server blocks (like virtual hosts in Apache) that use listen and server_name directives to bind to TCP sockets.

Before you start reading this chapter you should know what regular expressions are and how they work. I recommend two great and short write-ups about regular expressions created by Jonny Fox:

Why? Regular expressions can be used in both the server_name and location directives (and in others), and sometimes you need real skill at reading them. I think you should write the most readable regular expressions you can, so they do not become spaghetti code - impossible to debug and maintain.

Here is a short example of a server block context (two server blocks):

http {

  index index.html;
  root /var/www/example.com/default;

  server {

    listen 10.10.250.10:80;
    server_name www.example.com;

    access_log logs/example.access.log main;

    root /var/www/example.com/public;

    ...

  }

  server {

    listen 10.10.250.11:80;
    server_name "~^(api.)?example\.com api.de.example.com";

    access_log logs/example.access.log main;

    location / {

      proxy_pass http://localhost:8080;

    }

    ...

  }

}
Handle incoming connections

NGINX uses the following logic to determine which virtual server (server block) should be used:

  1. Match the address:port pair of the request to the listen directive - there can be multiple server blocks with listen directives of the same specificity that can handle the request

NGINX uses the address:port combination to handle incoming connections. This pair is assigned to the listen directive.

The listen directive can be set to:

  • an IP address/port combination (127.0.0.1:80;)

  • a lone IP address; if only an address is given, port 80 is used (127.0.0.1;) - becomes 127.0.0.1:80;

  • a lone port, which will listen on every interface on that port (80; or *:80;) - becomes 0.0.0.0:80;

  • the path to a UNIX-domain socket (unix:/var/run/nginx.sock;)

If the listen directive is not present then *:80 is used if NGINX runs with superuser privileges, or *:8000 otherwise.

The next steps are as follows:

- NGINX translates all incomplete `listen` directives by substituting missing values with their default values (see above)

- NGINX attempts to collect a list of the server blocks that match the request most specifically based on the `address:port`

- Any block that is functionally using `0.0.0.0` will not be selected if there are matching blocks that list a specific IP address

- If there is only one most specific match, that server block will be used to serve the request

- If there are multiple server blocks with the same level of matching, NGINX then begins to evaluate the `server_name` directive of each server block

Look at this short example:

# From client side:
GET / HTTP/1.0
Host: api.random.com

# From server side:
server {

  # This block will be processed:
  listen 192.168.252.10;  # --> 192.168.252.10:80

  ...

}

server {

  listen 80;  # --> *:80 --> 0.0.0.0:80
  server_name api.random.com;

  ...

}
  2. Match the Host header field against the server_name directive as a string (the exact names hash table)

  3. Match the Host header field against the server_name directive with a wildcard at the beginning of the string (the hash table with wildcard names starting with an asterisk)

If one is found, that block will be used to serve the request. If multiple matches are found, the longest match will be used to serve the request.

  4. Match the Host header field against the server_name directive with a wildcard at the end of the string (the hash table with wildcard names ending with an asterisk)

If one is found, that block is used to serve the request. If multiple matches are found, the longest match will be used to serve the request.

  5. Match the Host header field against the server_name directive as a regular expression

The first server_name with a regular expression that matches the Host header will be used to serve the request.

  6. If the Host header doesn't match anything, then direct to the listen directive marked as default_server

  7. If the Host header doesn't match and there is no default_server, direct to the first server with a listen directive that satisfies the first step

  8. Finally, NGINX goes to the location context

This list is based on Mastering Nginx - The virtual server section.
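To make step 6 explicit, here is a minimal sketch of a catch-all block (returning 444 is a common, though not mandatory, way to drop requests with an unknown Host header):

server {

  # Handle requests that match no other server block on *:80:
  listen 80 default_server;
  server_name _;

  # Close the connection without sending any response:
  return 444;

}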

Matching location

For each request, NGINX goes through a process to choose the best location block that will be used to serve that request.

The location syntax looks like:

location optional_modifier location_match { ... }

location_match in the above defines what NGINX should check the request URI against. The optional_modifier causes the associated location block to be interpreted as follows:

  • (none): if no modifiers are present, the location is interpreted as a prefix match. To determine a match, the location will now be matched against the beginning of the URI

  • =: is an exact match, without any wildcards, prefix matching or regular expressions; forces a literal match between the request URI and the location parameter

  • ~: if a tilde modifier is present, this location must be used for case sensitive matching (RE match)

  • ~*: if a tilde and asterisk modifier is used, the location must be used for case insensitive matching (RE match)

  • ^~: assuming this block is the best non-RE match, a caret followed by a tilde modifier means that RE matching will not take place

And now, a short introduction to how NGINX determines location priority:

  • the exact match has the highest priority (processed first); the search ends if it matches

  • the prefix match is the second priority; there are two types of prefix matches: ^~ and (none); if the match used the ^~ modifier, searching stops

  • the regular expression match has the lowest priority; there are two types of RE modifiers: ~ and ~*; matches are checked in the order they are defined in the configuration file

  • if regular expression searching yielded a match, that result is used; otherwise, the match from prefix searching is used

So look at this example, it comes from the Nginx documentation - ngx_http_core_module:

location = / {
  # Matches the query / only.
  [ configuration A ]
}
location / {
  # Matches any query, since all queries begin with /, but regular
  # expressions and any longer conventional blocks will be
  # matched first.
  [ configuration B ]
}
location /documents/ {
  # Matches any query beginning with /documents/ and continues searching,
  # so regular expressions will be checked. This will be matched only if
  # regular expressions don't find a match.
  [ configuration C ]
}
location ^~ /images/ {
  # Matches any query beginning with /images/ and halts searching,
  # so regular expressions will not be checked.
  [ configuration D ]
}
location ~* \.(gif|jpg|jpeg)$ {
  # Matches any request ending in gif, jpg, or jpeg. However, all
  # requests to the /images/ directory will be handled by
  # Configuration D.
  [ configuration E ]
}

To help you understand how location matching works:

The process of choosing NGINX location block is as follows (a detailed explanation):

  1. Prefix-based NGINX location matches (no regular expression). Each location will be checked against the request URI

  2. NGINX searches for an exact match. If a = modifier exactly matches the request URI, this specific location block is chosen right away

  3. If no exact (meaning no = modifier) location block is found, NGINX will continue with non-exact prefixes. It starts with the longest matching prefix location for this URI, with the following approach:

  • In case the longest matching prefix location has the ^~ modifier, NGINX will stop its search right away and choose this location.

  • Assuming the longest matching prefix location doesn’t use the ^~ modifier, the match is temporarily stored and the process continues.

  4. As soon as the longest matching prefix location is chosen and stored, NGINX continues to evaluate the case-sensitive and case-insensitive regular expression locations. The first regular expression location that fits the URI is selected right away to process the request

  5. If no regular expression location matches the request URI, the previously stored prefix location is selected to serve the request

In order to better understand how this process works, please see this short cheatsheet that will allow you to design your location blocks in a predictable way:

nginx-location-cheatsheet

I recommend using external tools for testing regular expressions. For more information please see the online tools chapter.

In conclusion, location picking order is as follows:

  1. = - exactly, e.g. location = /path

  2. ^~ - priority prefix match (stops the regex search), e.g. location ^~ /path

  3. ~ - regular expression case sensitive, e.g. location ~ /path/

  4. ~* - regular expression case insensitive, e.g. location ~* \.(jpg|png|svg)

  5. / - plain prefix match, e.g. location /path

Ok, so here's a more complicated configuration:

server {

 listen           80;
 server_name      xyz.com www.xyz.com;

 location ~ ^/(media|static)/ {
  root            /var/www/xyz.com/static;
  expires         10d;
 }

 location ~* ^/(media2|static2) {
  root            /var/www/xyz.com/static2;
  expires         20d;
 }

 location /static3 {
  root            /var/www/xyz.com/static3;
 }

 location ^~ /static4 {
  root            /var/www/xyz.com/static4;
 }

 location = /api {
  proxy_pass      http://127.0.0.1:8080;
 }

 location / {
  proxy_pass      http://127.0.0.1:8080;
 }

 location /backend {
  proxy_pass      http://127.0.0.1:8080;
 }

 location ~ logo.xcf$ {
  root            /var/www/logo;
  expires         48h;
 }

 location ~* .(png|ico|gif|xcf)$ {
  root            /var/www/img;
  expires         24h;
 }

 location ~ logo.ico$ {
  root            /var/www/logo;
  expires         96h;
 }

 location ~ logo.jpg$ {
  root            /var/www/logo;
  expires         48h;
 }

}

And here's the table with the results:

| URL | LOCATIONS FOUND | FINAL MATCH |
| --- | --- | --- |
| / | 1) prefix match for / | / |
| /css | 1) prefix match for / | / |
| /api | 1) exact match for /api | /api |
| /api/ | 1) prefix match for / | / |
| /backend | 1) prefix match for /<br>2) prefix match for /backend | /backend |
| /static | 1) prefix match for / | / |
| /static/header.png | 1) prefix match for /<br>2) case sensitive regex match for ^/(media\|static)/ | ^/(media\|static)/ |
| /static/logo.jpg | 1) prefix match for /<br>2) case sensitive regex match for ^/(media\|static)/ | ^/(media\|static)/ |
| /media2 | 1) prefix match for /<br>2) case insensitive regex match for ^/(media2\|static2) | ^/(media2\|static2) |
| /media2/ | 1) prefix match for /<br>2) case insensitive regex match for ^/(media2\|static2) | ^/(media2\|static2) |
| /static2/logo.jpg | 1) prefix match for /<br>2) case insensitive regex match for ^/(media2\|static2) | ^/(media2\|static2) |
| /static2/logo.png | 1) prefix match for /<br>2) case insensitive regex match for ^/(media2\|static2) | ^/(media2\|static2) |
| /static3/logo.jpg | 1) prefix match for /static3<br>2) prefix match for /<br>3) case sensitive regex match for logo.jpg$ | logo.jpg$ |
| /static3/logo.png | 1) prefix match for /static3<br>2) prefix match for /<br>3) case insensitive regex match for .(png\|ico\|gif\|xcf)$ | .(png\|ico\|gif\|xcf)$ |
| /static4/logo.jpg | 1) priority prefix match for /static4<br>2) prefix match for / | /static4 |
| /static4/logo.png | 1) priority prefix match for /static4<br>2) prefix match for / | /static4 |
| /static5/logo.jpg | 1) prefix match for /<br>2) case sensitive regex match for logo.jpg$ | logo.jpg$ |
| /static5/logo.png | 1) prefix match for /<br>2) case insensitive regex match for .(png\|ico\|gif\|xcf)$ | .(png\|ico\|gif\|xcf)$ |
| /static5/logo.xcf | 1) prefix match for /<br>2) case sensitive regex match for logo.xcf$ | logo.xcf$ |
| /static5/logo.ico | 1) prefix match for /<br>2) case insensitive regex match for .(png\|ico\|gif\|xcf)$ | .(png\|ico\|gif\|xcf)$ |
rewrite vs return

Generally there are two ways of implementing redirects in NGINX: with rewrite and return.

These directives come from the ngx_http_rewrite_module and are very useful, but (according to the NGINX documentation) the only 100% safe things which may be done inside if in a location context are:

  • return ...;
  • rewrite ... last;

Anything else may possibly cause unpredictable behaviour, including potential SIGSEGV.

rewrite directive

The rewrite directives are executed sequentially in order of their appearance in the configuration file. Rewriting is slower than a return (but still extremely fast) and returns HTTP 302 for external redirects unless you use the permanent flag, which returns HTTP 301.

Importantly, only the part of the original URL that matches the regex is rewritten. It can be used for temporary URL changes.

I sometimes use rewrite to capture elements in the original URL, change or add elements in the path, and in general when I do something more complex:

location / {

  ...

  rewrite   ^/users/(.*)$       /user.php?username=$1 last;

  # or:
  rewrite   ^/users/(.*)/items$ /user.php?username=$1&page=items last;

}

The rewrite directive accepts optional flags:

  • break - basically completes processing of rewrite directives, stops processing, and breaks the location lookup cycle by not doing any location lookup or internal jump at all

    • if you use break flag inside location block:

      • no more parsing of rewrite conditions
      • internal engine continues to parse the current location block

      Inside a location block, with break, NGINX stops processing any more rewrite conditions.

    • if you use break flag outside location block:

      • no more parsing of rewrite conditions
      • internal engine goes to the next phase (searching for location match)

      Outside a location block, with break, NGINX stops processing any more rewrite conditions.

  • last - basically completes processing of rewrite directives, stops processing, and starts a search for a new location matching the changed URI

    • if you use last flag inside location block:

      • no more parsing of rewrite conditions
      • internal engine starts to look for another location match based on the result of the rewrite
      • rewrite directives in the newly matched location are evaluated again (NGINX allows at most 10 such internal redirect cycles, then returns HTTP 500)

      Inside a location block, with last, NGINX stops processing any more rewrite conditions and then starts to look for a new matching location block! Watch out: rewrite directives in the new location block are processed again, so two locations that rewrite to each other with last will loop.

    • if you use last flag outside location block:

      • no more parsing of rewrite conditions
      • internal engine goes to the next phase (searching for location match)

      Outside a location block, with last, NGINX stops processing any more rewrite conditions.

  • redirect - returns a temporary redirect with the 302 HTTP response code

  • permanent - returns a permanent redirect with the 301 HTTP response code

Note:

  • that outside location blocks, last and break are effectively the same
  • processing of rewrite directives at server level may be stopped via break, but the location lookup will follow anyway

This explanation is based on the awesome answer by Pothi Kalimuthu to nginx url rewriting: difference between break and last.
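To see the difference in practice, here is a minimal sketch (the /break, /last and /test paths are made up for this example):

server {

  ...

  location /break {

    # break: the rewritten URI is served from this location:
    rewrite ^/break/(.*)$ /test/$1 break;
    root /var/www/html;

  }

  location /last {

    # last: NGINX performs a new location lookup for /test/...:
    rewrite ^/last/(.*)$ /test/$1 last;

  }

  location /test {

    # Requests rewritten with "last" end up here:
    default_type text/plain;
    return 200 "in /test\n";

  }

}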

The official documentation has great tutorials about Creating NGINX Rewrite Rules and Converting rewrite rules.

return directive

The other way is the return directive. It's faster than rewrite because there is no regexp to evaluate. It stops processing and returns the given HTTP code to the client (when used with just a URL, the 302 code is used), and the entire URL is rerouted to the URL specified.

I use return directive to:

  • force redirect from http to https:

    server {
    
      ...
    
      return  301 https://example.com$request_uri;
    
    }
  • redirect from www to non-www and vice versa:

    server {
    
      ...
    
      if ($host = www.domain.com) {
    
        return  301 https://domain.com$request_uri;
    
      }
    
    }
  • close the connection and log it internally:

    server {
    
      ...
    
      return 444;
    
    }
  • send 4xx HTTP response for a client without any other actions:

    server {
    
      ...
    
      if ($request_method = POST) {
    
        return 405;
    
      }
    
      # or:
      if ($invalid_referer) {
    
        return 403;
    
      }
    
      # or:
      if ($request_uri ~ "^/app/(.+)$") {
    
        return 403;
    
      }
    
      # or:
      location ~ ^/(data|storage) {
    
        return 403;
    
      }
    
    }
  • and sometimes to reply with an HTTP code without serving a file:

    server {
    
      ...
    
      # NGINX will not allow a 200 with no response body (200s need to return a resource in the response):
      return 204 "it's all okay";
      # Because the default Content-Type is application/octet-stream, the browser will offer to "save the file".
      # If you want to see the reply in the browser you should set a proper Content-Type, e.g.:
      # default_type text/plain;
    
    }
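A related pattern I find handy is a simple health-check endpoint (a sketch; the /health path and the message are arbitrary):

location = /health {

  # Don't clutter the access log with monitoring requests:
  access_log off;

  default_type text/plain;
  return 200 "OK\n";

}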
try_files directive

We have one more very interesting and important directive: try_files (from ngx_http_core_module). This directive tells NGINX to check for the existence of a named set of files or directories (checks files conditionally breaking on success).

I think the best explanation comes from the official documentation:

try_files checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context. The path to a file is constructed from the file parameter according to the root and alias directives. It is possible to check directory’s existence by specifying a slash at the end of a name, e.g. $uri/. If none of the files were found, an internal redirect to the uri specified in the last parameter is made.

Generally it may check files on disk, redirect to proxies or internal locations, and return error codes, all in one directive (see the named-location sketch at the end of this section).

Take a look at the following example:

server {

  ...

  root /var/www/example.com;

  location / {

    try_files $uri $uri/ /frontend/index.html;

  }

  location ^~ /images {

    root /var/www/static;
    try_files $uri $uri/ =404;

  }

  ...

}
  • the default root directory for all locations is /var/www/example.com

  • location / - matches all requests not caught by more specific locations, e.g. exact names

    • try_files $uri - when a URI matched by this block is received, try $uri first

      For example: https://example.com/tools/en.js - NGINX will check if a file called en.js exists inside /tools; if it finds it, it serves it in the first place.

    • try_files $uri $uri/ - if the first condition $uri is not found, try the URI as a directory

      For example: https://example.com/backend/ - NGINX will first check if a file called backend exists; if it can't find it, it goes to the second check $uri/ and sees if a directory called backend exists; if so, it will try serving it.

    • try_files $uri $uri/ /frontend/index.html - if neither a file nor a directory is found, NGINX serves /frontend/index.html

  • location ^~ /images - handles any query beginning with /images and halts searching

    • the default root directory for this location is /var/www/static

    • try_files $uri - when a URI matched by this block is received, try $uri first

      For example: https://example.com/images/01.gif - NGINX will check if a file called 01.gif exists inside /images; if it finds it, it serves it in the first place.

    • try_files $uri $uri/ - if the first condition $uri is not found, try the URI as a directory

      For example: https://example.com/images/ - NGINX will first check if a file called images exists; if it can't find it, it goes to the second check $uri/ and sees if a directory called images exists; if so, it will try serving it.

    • try_files $uri $uri/ =404 - if neither a file nor a directory is found, NGINX sends HTTP 404 (Not Found)
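And the named-location variant mentioned earlier - a minimal sketch where anything not found on disk falls through to a proxied application (the @backend name and upstream address are examples):

location / {

  # Serve the file if it exists, otherwise hand the request over to the app:
  try_files $uri $uri/ @backend;

}

location @backend {

  proxy_pass http://127.0.0.1:8080;

}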

if, break and set

The ngx_http_rewrite_module also provides additional directives:

  • break - stops processing; if it is specified inside a location, further processing of the request continues in this location:

    # It's useful for:
    if ($slow_resp) {
    
      limit_rate 50k;
      break;
    
    }
  • if - you can use if inside a server but not the other way around; also notice that you shouldn't use if inside location as it may not work as desired (see If Is Evil)

    The NGINX docs say also: There are cases where you simply cannot avoid using an if, for example if you need to test a variable which has no equivalent directive.

  • set - sets a value for the specified variable. The value can contain text, variables, and their combination

Example of using the if and set directives:

# It comes from: https://gist.github.com/jrom/1760790:
if ($request_uri = /) {
  set $test  A;
}

if ($host ~* example.com) {
  set $test  "${test}B";
}

if ($http_cookie !~* "auth_token") {
  set $test  "${test}C";
}

if ($test = ABC) {
  proxy_pass http://cms.example.com;
  break;
}

Log files

Log files are a critical part of NGINX management. NGINX writes information about client requests to the access log right after the request is processed (in the last phase: NGX_HTTP_LOG_PHASE).

By default:

  • the access log is located in logs/access.log, but I suggest you move it to the /var/log/nginx directory
  • data is written in the predefined combined format
Conditional logging

Sometimes certain entries are there just to fill up the logs or are cluttering them. I sometimes exclude requests - by client IP or whatever else - when I want to debug log files more effectively.

So in this example, if the $error_codes variable's value is 0, log nothing (default action), but if it is 1 (e.g. a 404 or 503 from the backend), save this request to the log:

# Define map in the http context:
http {

  ...

  map $status $error_codes {

    default   1;
    ~^[23]    0;

  }

  ...

  # Add if condition to access log:
  access_log /var/log/nginx/example.com-access.log combined if=$error_codes;

}
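The client-IP variant mentioned above works the same way (a sketch; 192.168.252.1 stands in for whatever noisy address you want to exclude):

http {

  ...

  map $remote_addr $log_client {

    default        1;
    192.168.252.1  0;

  }

  access_log /var/log/nginx/example.com-access.log combined if=$log_client;

}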
Manual log rotation

NGINX will re-open its logs in response to the USR1 signal:

cd /var/log/nginx

mv access.log access.log.0
kill -USR1 $(cat /var/run/nginx.pid) && sleep 1

# >= gzip-1.6:
gzip -k access.log.0
# With any version:
gzip < access.log.0 > access.log.0.gz

# Test the integrity of the archive and remove the original if the test passes:
gzip -t access.log.0.gz && rm -f access.log.0

You can also read about how to configure log rotation policy.
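If you prefer to automate rotation, a minimal logrotate sketch (assuming logs in /var/log/nginx and a pid file at /var/run/nginx.pid; adjust to your layout):

# /etc/logrotate.d/nginx:
/var/log/nginx/*.log {

  daily
  rotate 14
  missingok
  notifempty
  compress
  delaycompress
  sharedscripts
  postrotate
    # Same USR1 trick as above - ask NGINX to re-open its log files:
    [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
  endscript

}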

Error log severity levels

The following is a list of all severity levels:

| TYPE | DESCRIPTION |
| --- | --- |
| debug | information that can be useful to pinpoint where a problem is occurring |
| info | informational messages that aren't necessary to read but may be good to know |
| notice | something normal happened that is worth noting |
| warn | something unexpected happened, however it is not a cause for concern |
| error | something was unsuccessful; contains the action of limiting rules |
| crit | important problems that need to be addressed |
| alert | severe situation where action is needed promptly |
| emerg | the system is in an unusable state and requires immediate attention |

For example: if you set the crit error log level, messages of crit, alert, and emerg levels are logged.
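Setting the level is a single directive in the main or http context (path and level are up to you):

# Log warn and everything more severe (error, crit, alert, emerg):
error_log /var/log/nginx/error.log warn;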

Load balancing algorithms

Load balancing is in principle a wonderful thing. You appreciate it when you serve tens of thousands (or maybe more) of requests every second. Of course load balancing is not the only reason - think also about maintenance tasks without downtime, for example.

Generally load balancing is a technique used to distribute the workload across multiple computing resources and servers.

I think you should always use this technique, even if you have a simple app or whatever else you're sharing with others.

The configuration is very simple. NGINX includes a ngx_http_upstream_module to define backends (groups of servers or multiple server instances). More specifically, the upstream directive is responsible for this.

Backend parameters

Before we start talking about load balancing techniques, you should know something about the server directive. It defines the address and other parameters of the backend servers.

This directive accepts the following options (a combined example follows this list):

  • weight=<num> - sets the weight of the origin server, e.g. weight=10

  • max_conns=<num> - limits the maximum number of simultaneous active connections from the NGINX proxy server to an upstream server (default value: 0 = no limit), e.g. max_conns=8

    • if you set max_conns=4, the 5th connection will be rejected
    • if the server group does not reside in the shared memory (zone directive), the limitation works per each worker process
  • max_fails=<num> - the number of unsuccessful attempts to communicate with the backend (default value: 1, 0 disables the accounting of attempts), e.g. max_fails=3;

  • fail_timeout=<time> - the time during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable (default value: 10 seconds), e.g. fail_timeout=30s;

  • zone <name> <size> - defines shared memory zone that keeps the group’s configuration and run-time state that are shared between worker processes, e.g. zone backend 32k;

  • backup - if a server is marked as a backup server, it does not receive requests unless all of the primary servers are unavailable

  • down - marks the server as permanently unavailable
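Putting a few of these options together (a sketch with made-up addresses and values):

upstream bck_testing_01 {

  zone bck_testing_01 32k;

  server 192.168.250.220:8080 weight=3 max_fails=3 fail_timeout=30s;
  server 192.168.250.221:8080 max_conns=8;
  server 192.168.250.222:8080 backup;

}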

Round Robin

It's the simplest load balancing technique. Round Robin has a list of servers and forwards each request to each server from the list in order. Once it reaches the last server, the loop jumps back to the first server and starts again.

upstream bck_testing_01 {

  # with default weight for all (weight=1)
  server 192.168.250.220:8080;
  server 192.168.250.221:8080;
  server 192.168.250.222:8080;

}

round-robin

Weighted Round Robin

In the Weighted Round Robin load balancing algorithm, each server is allocated a weight based on its configuration and ability to process requests.

This method is similar to the Round Robin in a sense that the manner by which requests are assigned to the nodes is still cyclical, albeit with a twist. The node with the higher specs will be apportioned a greater number of requests.

upstream bck_testing_01 {

  server 192.168.250.220:8080   weight=3;
  server 192.168.250.221:8080;             # default weight=1
  server 192.168.250.222:8080;             # default weight=1

}

weighted-round-robin

Least Connections

This method tells the load balancer to look at the connections going to each server and send the next connection to the server with the least amount of connections.

upstream bck_testing_01 {

  least_conn;

  # with default weight for all (weight=1)
  server 192.168.250.220:8080;
  server 192.168.250.221:8080;
  server 192.168.250.222:8080;

}

For example: if clients D10, D11 and D12 attempt to connect after A4, C2 and C8 have already disconnected but A1, B3, B5, B6, C7 and A9 are still connected, the load balancer will assign client D10 to server 2 instead of server 1 or server 3. After that, client D11 will be assigned to server 1 and client D12 will be assigned to server 2.

least-conn

Weighted Least Connections

This is, in general, a very fair distribution method, as it uses the ratio of the number of connections and the weight of a server. The server in the cluster with the lowest ratio automatically receives the next request.

upstream bck_testing_01 {

  least_conn;

  server 192.168.250.220:8080   weight=3;
  server 192.168.250.221:8080;             # default weight=1
  server 192.168.250.222:8080;             # default weight=1

}

For example: if clients D10, D11 and D12 attempt to connect after A4, C2 and C8 have already disconnected but A1, B3, B5, B6, C7 and A9 are still connected, the load balancer will assign client D10 to server 2 or 3 (because they have the least active connections) instead of server 1. After that, clients D11 and D12 will be assigned to server 1 because it has the biggest weight parameter.

weighted-least-conn

IP Hash

The IP Hash method uses the IP of the client to create a unique hash key and associates the hash with one of the servers. This ensures that a user is sent to the same server in future sessions (a basic kind of session persistence) except when this server is unavailable. If one of the servers needs to be temporarily removed, it should be marked with the down parameter in order to preserve the current hashing of client IP addresses.

This technique is especially helpful if state between sessions has to be kept alive, e.g. products put in the shopping cart, or when the session state is of concern and not handled by shared memory of the application.

upstream bck_testing_01 {

  ip_hash;

  # with default weight for all (weight=1)
  server 192.168.250.220:8080;
  server 192.168.250.221:8080;
  server 192.168.250.222:8080;

}

ip-hash

Generic Hash

This technique is very similar to the IP Hash but for each request the load balancer calculates a hash that is based on the combination of a text string, variable, or a combination you specify, and associates the hash with one of the servers.

upstream bck_testing_01 {

  hash $request_uri;

  # with default weight for all (weight=1)
  server 192.168.250.220:8080;
  server 192.168.250.221:8080;
  server 192.168.250.222:8080;

}

For example: the load balancer calculates a hash from the full original request URI (with arguments). Clients A4, C7, C8 and A9 send requests to the /static location and will be assigned to server 1. Similarly, clients A1, C2 and B6, which request the /sitemap.xml resource, will be assigned to server 2. Clients B3 and B5 send requests to /api/v4 and will be assigned to server 3.

generic-hash

Other methods

This is similar to the Generic Hash method because you can also specify a unique hash identifier, but the assignment to the appropriate server is under your control. I think it's a somewhat primitive method and I wouldn't call it a full load balancing technique, but in some cases it is very useful.

Mainly this helps reduce the clutter in the configuration caused by a lot of location blocks with similar configurations.

First of all create a map:

map $request_uri $bck_testing_01 {

  default       "192.168.250.220:8080";

  /api/v4       "192.168.250.220:8080";
  /api/v3       "192.168.250.221:8080";
  /static       "192.168.250.222:8080";
  /sitemap.xml  "192.168.250.222:8080";

}

And add proxy_pass directive:

server {

  ...

  location / {

    proxy_pass    http://$bck_testing_01;

  }

  ...

}

Rate limiting

All rate limiting rules (definitions) should be added to the NGINX http context.

Rate limiting rules are useful for:

  • traffic shaping
  • traffic optimisation
  • slowing down the rate of incoming requests
  • protecting against HTTP request floods
  • protecting against slow HTTP attacks
  • preventing clients from consuming a lot of bandwidth
  • mitigating DDoS attacks
  • protecting against brute-force attacks

NGINX has the following variables (unique keys) that can be used in rate limiting rules:

| VARIABLE | DESCRIPTION |
| --- | --- |
| $remote_addr | client address |
| $binary_remote_addr | client address in a binary form; it is smaller and saves space compared to $remote_addr |
| $server_name | name of the server which accepted a request |
| $request_uri | full original request URI (with arguments) |
| $query_string | arguments in the request line |

Please see official doc for more information about variables.

NGINX also provides the following keys:

| KEY | DESCRIPTION |
| --- | --- |
| limit_req_zone | stores the current number of excessive requests |
| limit_conn_zone | stores the maximum allowed number of connections |

and directives:

| DIRECTIVE | DESCRIPTION |
| --- | --- |
| limit_req | sets the shared memory zone and the maximum burst size of requests |
| limit_conn | sets the shared memory zone and the maximum allowed number of connections to the server per client IP |

Keys are used to store the state of each IP address and how often it has accessed a limited object. This information is stored in shared memory available to all NGINX worker processes.

Both keys also provide response status parameters indicating too many requests or connections, with a specific HTTP code (default 503):

  • limit_req_status <value>
  • limit_conn_status <value>

For example, if you want to return a different HTTP code (and your own error page) for cases when the server limits the number of requests:

# Add this to http context:
limit_req_status 429;

# Set your own error page for 429 http code:
error_page 429 /rate_limit.html;
location = /rate_limit.html {

  root /usr/share/www/http-error-pages/sites/other;
  internal;

}

# And create this file:
cat > /usr/share/www/http-error-pages/sites/other/rate_limit.html << __EOF__
HTTP 429 Too Many Requests
__EOF__

Rate limiting rules also have zones that let you define a shared space in which to count the incoming requests or connections.

All requests or connections coming into the same space will be counted in the same rate limit. This is what allows you to limit per URL, per IP, or anything else.

The zone has two required parts:

  • <name> - is the zone identifier
  • <size> - is the zone size

Example:

<key> <variable> zone=<name>:<size>;

State information for about 16,000 IP addresses takes 1 megabyte, so a 1 kilobyte zone holds about 16 IP addresses.

Zones can be used in the following contexts:

  • http context

    http {
    
      ... zone=<name>;
    
  • server context

    server {
    
      ... zone=<name>;
    
  • location directive

    location /api {
    
      ... zone=<name>;
    

The limit_req_zone key also lets you set the rate parameter - it defines the maximum allowed rate of requests (e.g. rate=25r/s).

To enable the queue you should use the limit_req or limit_conn directives (see above). limit_req also provides optional parameters:

| PARAMETER | DESCRIPTION |
| --- | --- |
| burst=<num> | sets the maximum number of excessive requests that await to be processed in a timely manner; at most rate * burst requests in burst seconds |
| nodelay | imposes a rate limit without constraining the allowed spacing between requests; by default NGINX would return a 503 response and not handle excessive requests |

The nodelay parameter is only useful when you also set a burst.

Without the nodelay option, NGINX waits (no 503 response) and handles excessive requests with some delay.
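Putting it all together - a minimal sketch of a per-IP request limit (zone name, rate and sizes are examples to adjust):

http {

  # 10 megabytes is enough state for about 160,000 client addresses:
  limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=25r/s;
  limit_req_status 429;

  server {

    ...

    location /api {

      # Allow short bursts of up to 50 requests and serve them
      # immediately instead of delaying them:
      limit_req zone=req_per_ip burst=50 nodelay;

      proxy_pass http://localhost:8080;

    }

  }

}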

Analyse configuration

This is the essential way of testing the NGINX configuration:

nginx -t -c /etc/nginx/nginx.conf

An external tool for analysing the NGINX configuration is gixy:

gixy /etc/nginx/nginx.conf
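gixy is a Python tool; if it is not installed yet, you can get it from PyPI:

pip install gixy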

Monitoring

GoAccess

Standard paths to the configuration file:

  • /etc/goaccess.conf
  • /etc/goaccess/goaccess.conf
  • /usr/local/etc/goaccess.conf

Before starting GoAccess, enable these parameters:

time-format %H:%M:%S
date-format %d/%b/%Y
log-format  %h %^[%d:%t %^] "%r" %s %b "%R" "%u"
Build and install
# Ubuntu/Debian
apt-get install gcc make libncursesw5-dev libgeoip-dev libtokyocabinet-dev

# RHEL/CentOS
yum install gcc ncurses-devel geoip-devel tokyocabinet-devel

cd /usr/local/src/

wget -c https://tar.goaccess.io/goaccess-1.3.tar.gz && \
tar xzvfp goaccess-1.3.tar.gz

cd goaccess-1.3

./configure --enable-utf8 --enable-geoip=legacy --with-openssl=<path_to_openssl_sources> --sysconfdir=/etc/

make -j2 && make install

ln -s /usr/local/bin/goaccess /usr/bin/goaccess

You can always fetch default configuration from /usr/local/src/goaccess-<version>/config/goaccess.conf source tree.

Analyse log file and enable all recorded statistics
goaccess -f access.log -a
Analyse compressed log file
zcat access.log.1.gz | goaccess -a -p /etc/goaccess/goaccess.conf
Analyse log file remotely
ssh user@remote_host 'access.log' | goaccess -a
Analyse log file and generate html report
goaccess -p /etc/goaccess/goaccess.conf -f access.log --log-format=COMBINED -o /var/www/index.html
Ngxtop
Analyse log file
ngxtop -l access.log
Analyse log file and print requests with 4xx and 5xx
ngxtop -l access.log -i 'status >= 400' print request status
Analyse log file remotely
ssh user@remote_host tail -f access.log | ngxtop -f combined

Testing

You can change combinations and parameters of these commands.

Send request and show response headers
# 1)
curl -Iks <scheme>://<server_name>:<port>

# 2)
http -p Hh <scheme>://<server_name>:<port>

# 3)
htrace.sh -u <scheme>://<server_name>:<port> -h
Send request with http method, user-agent, follow redirects and show response headers
# 1)
curl -Iks --location -X GET -A "x-agent" <scheme>://<server_name>:<port>

# 2)
http -p Hh GET <scheme>://<server_name>:<port> User-Agent:x-agent --follow

# 3)
htrace.sh -u <scheme>://<server_name>:<port> -M GET --user-agent "x-agent" -h
Send multiple requests
# URL sequence substitution with a dummy query string:
curl -ks <scheme>://<server_name>:<port>?[1-20]

# With shell 'for' loop:
for i in {1..20} ; do curl -ks <scheme>://<server_name>:<port> ; done
Testing SSL connection
# 1)
echo | openssl s_client -connect <server_name>:<port>

# 2)
gnutls-cli --disable-sni -p 443 <server_name>
Testing SSL connection with SNI support
# 1)
echo | openssl s_client -servername <server_name> -connect <server_name>:<port>

# 2)
gnutls-cli -p 443 <server_name>
Testing SSL connection with specific SSL version
openssl s_client -tls1_2 -connect <server_name>:<port>
Testing SSL connection with specific cipher
openssl s_client -cipher 'AES128-SHA' -connect <server_name>:<port>
Load testing with ApacheBench (ab)

Project documentation: Apache HTTP server benchmarking tool

Installation:

# Debian like:
apt-get install -y apache2-utils

# RedHat like:
yum -y install httpd-tools

This is a great explanation about ApacheBench by Mamsaac:

The apache benchmark tool is very basic, and while it will give you a solid idea of some performance, it is a bad idea to only depend on it if you plan to have your site exposed to serious stress in production.

Standard test
ab -n 1000 -c 100 https://example.com/
Test with KeepAlive header
ab -n 5000 -c 100 -k -H "Accept-Encoding: gzip, deflate" https://example.com/index.php
Load testing with wrk2

Project documentation: wrk2

See this chapter to use the Lua API for wrk2. Also take a look at wrk2 scripts.

Installation:

# Debian like:
apt-get install -y build-essential libssl-dev git zlib1g-dev
git clone https://github.com/giltene/wrk2 && cd wrk2
make
sudo cp wrk /usr/local/bin

# RedHat like:
yum -y groupinstall 'Development Tools'
yum -y install openssl-devel git
git clone https://github.com/giltene/wrk2 && cd wrk2
make
sudo cp wrk /usr/local/bin
Standard scenarios
# 1)
wrk -c 1 -t 1 -d 2s -R 5 -H "Host: example.com" https://example.com
Running 2s test @ https://example.com
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.21ms   20.99ms 108.16ms   90.00%
    Req/Sec       -nan      -nan   0.00      0.00%
  10 requests in 2.01s, 61.69KB read
Requests/sec:      4.99
Transfer/sec:     30.76KB

# RPS:
6 09/Jul/2019:08:00:25
5 09/Jul/2019:08:00:26

# 2)
wrk -c 1 -t 1 -d 2s -R 25 -H "Host: example.com" https://example.com
Running 2s test @ https://example.com
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    64.40ms   24.26ms 110.46ms   48.00%
    Req/Sec       -nan      -nan   0.00      0.00%
  50 requests in 2.01s, 308.45KB read
Requests/sec:     24.93
Transfer/sec:    153.77KB

# RPS:
12 09/Jul/2019:08:02:09
26 09/Jul/2019:08:02:10
13 09/Jul/2019:08:02:11

# 3)
wrk -c 5 -t 5 -d 2s -R 25 -H "Host: example.com" https://example.com
Running 2s test @ https://example.com
  5 threads and 5 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    47.97ms   25.79ms 136.45ms   90.00%
    Req/Sec       -nan      -nan   0.00      0.00%
  50 requests in 2.01s, 308.45KB read
Requests/sec:     24.92
Transfer/sec:    153.75KB

# RPS:
25 09/Jul/2019:08:03:56
25 09/Jul/2019:08:03:57
 5 09/Jul/2019:08:03:58

# 4)
wrk -c 5 -t 5 -d 2s -R 50 -H "Host: example.com" https://example.com
Running 2s test @ https://example.com
  5 threads and 5 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.09ms   18.63ms 130.69ms   91.00%
    Req/Sec       -nan      -nan   0.00      0.00%
  100 requests in 2.01s, 616.89KB read
Requests/sec:     49.85
Transfer/sec:    307.50KB

# RPS:
35 09/Jul/2019:08:05:00
50 09/Jul/2019:08:05:01
20 09/Jul/2019:08:05:02

# 5)
wrk -c 24 -t 12 -d 30s -R 2500 -H "Host: example.com" https://example.com
Running 30s test @ https://example.com
  12 threads and 24 connections
  Thread calibration: mean lat.: 3866.673ms, rate sampling interval: 13615ms
  Thread calibration: mean lat.: 3880.487ms, rate sampling interval: 13606ms
  Thread calibration: mean lat.: 3890.279ms, rate sampling interval: 13615ms
  Thread calibration: mean lat.: 3872.985ms, rate sampling interval: 13606ms
  Thread calibration: mean lat.: 3876.076ms, rate sampling interval: 13615ms
  Thread calibration: mean lat.: 3883.463ms, rate sampling interval: 13606ms
  Thread calibration: mean lat.: 3870.145ms, rate sampling interval: 13623ms
  Thread calibration: mean lat.: 3873.675ms, rate sampling interval: 13623ms
  Thread calibration: mean lat.: 3898.842ms, rate sampling interval: 13672ms
  Thread calibration: mean lat.: 3890.278ms, rate sampling interval: 13615ms
  Thread calibration: mean lat.: 3882.429ms, rate sampling interval: 13631ms
  Thread calibration: mean lat.: 3896.333ms, rate sampling interval: 13639ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.01s     4.32s   22.46s    57.62%
    Req/Sec    52.00      0.00    52.00    100.00%
  18836 requests in 30.01s, 113.52MB read
Requests/sec:    627.59
Transfer/sec:      3.78MB

# RPS:
 98 09/Jul/2019:08:06:13
627 09/Jul/2019:08:06:14
624 09/Jul/2019:08:06:15
640 09/Jul/2019:08:06:16
629 09/Jul/2019:08:06:17
627 09/Jul/2019:08:06:18
648 09/Jul/2019:08:06:19
624 09/Jul/2019:08:06:20
624 09/Jul/2019:08:06:21
631 09/Jul/2019:08:06:22
641 09/Jul/2019:08:06:23
627 09/Jul/2019:08:06:24
633 09/Jul/2019:08:06:25
636 09/Jul/2019:08:06:26
648 09/Jul/2019:08:06:27
626 09/Jul/2019:08:06:28
617 09/Jul/2019:08:06:29
636 09/Jul/2019:08:06:30
640 09/Jul/2019:08:06:31
627 09/Jul/2019:08:06:32
635 09/Jul/2019:08:06:33
639 09/Jul/2019:08:06:34
633 09/Jul/2019:08:06:35
598 09/Jul/2019:08:06:36
644 09/Jul/2019:08:06:37
632 09/Jul/2019:08:06:38
635 09/Jul/2019:08:06:39
624 09/Jul/2019:08:06:40
643 09/Jul/2019:08:06:41
635 09/Jul/2019:08:06:42
431 09/Jul/2019:08:06:43

# Other examples:
wrk -c 24 -t 12 -d 30s -R 2500 --latency https://example.com/index.php
POST call (with Lua)

Based on:

Example 1:

-- lua/post-call.lua
request = function()

  wrk.method = "POST"
  wrk.body = "login=foo&password=bar"
  wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"

  return wrk.format(wrk.method)

end

Example 2:

-- lua/post-call.lua

request = function()

  path = "/forms/int/d/1FAI"

  wrk.method = "POST"
  wrk.body = "{\"hash\":\"ooJeiveenai6iequ\",\"timestamp\":\"1562585990\",\"data\":[[\"cache\",\"x-cache\",\"true\"]]}"
  wrk.headers["Content-Type"] = "application/json; charset=utf-8"
  wrk.headers["Accept"] = "application/json"

  return wrk.format(wrk.method, path)

end

Example 3:

-- lua/post-call.lua

request = function()

  path = "/login"

  wrk.method = "POST"
  wrk.body = [[{
    "hash": "ooJeiveenai6iequ",
    "timestamp": "1562585990",
    "data":
    {
      "login": "foo",
      "password": "bar"
    }
  }]]
  wrk.headers["Content-Type"] = "application/json; charset=utf-8"

  return wrk.format(wrk.method, path)

end

Command:

# The first example:
wrk -c 12 -t 12 -d 30s -R 12000 -s lua/post-call.lua https://example.com/login

# Second and third example:
wrk -c 12 -t 12 -d 30s -R 12000 -s lua/post-call.lua https://example.com
Random paths (with Lua)

Based on:

Example 1:

-- lua/random-paths.lua

math.randomseed(os.time())

request = function()

  url_path = "/search?q=" .. math.random(0,100000)

  -- print(url_path)

  return wrk.format("GET", url_path)

end

Example 2:

-- lua/random-paths.lua

math.randomseed(os.time())

local connected = false

local host = "example.com"
local path = "/search?q="
local url  = "https://" .. host .. path

wrk.headers["Host"] = host

function ranValue(length)

  local res = ""

  for i = 1, length do

    res = res .. string.char(math.random(97, 122))

  end

  return res

end

request = function()

  url_path = path .. ranValue(32)

  -- print(url_path)

   if not connected then

      connected = true
      return wrk.format("CONNECT", host)

   end

  return wrk.format("GET", url_path)

end

Example 3:

-- lua/random-paths.lua

math.randomseed(os.time())

counter = 0

function ranValue(length)

  local res = ""

  for i = 1, length do

    res = res .. string.char(math.random(97, 122))

  end

  return res

end

request = function()

  path = "/accounts/" .. counter

  rval = ranValue(32)

  wrk.method = "POST"
  -- Interpolate counter and rval into the JSON body (a plain [[...]]
  -- literal would send the variable names verbatim):
  wrk.body   = string.format([[{
    "user": %d,
    "action": "insert",
    "amount": "%s"
  }]], counter, rval)
  wrk.headers["Content-Type"] = "application/json"
  wrk.headers["Accept"] = "application/json"

  io.write(string.format("id: %4d, path: %14s,\tvalue: %s\n", counter, path, rval))

  counter = counter + 1
  if counter>500 then

    counter = 1

  end

  return wrk.format(wrk.method, path)

end

Command:

wrk -c 12 -t 12 -d 30s -R 12000 -s lua/random-paths.lua https://example.com/
Multiple paths (with Lua)

Example 1:

-- lua/multi-paths.lua

math.randomseed(os.time())
math.random(); math.random(); math.random()

function shuffle(paths)

  local j, k
  local n = #paths

  for i = 1, n do

    j, k = math.random(n), math.random(n)
    paths[j], paths[k] = paths[k], paths[j]

  end

  return paths

end

function load_url_paths_from_file(file)

  lines = {}

  local f=io.open(file,"r")
  if f~=nil then

    io.close(f)

  else

    return lines

  end

  for line in io.lines(file) do

    if not (line == '') then

      lines[#lines + 1] = line

    end

  end

  return shuffle(lines)

end

paths = load_url_paths_from_file("data/paths.list")

if #paths <= 0 then

  print("No paths found. You have to create a file data/paths.list with one path per line.")
  os.exit()

end

-- Lua tables are 1-indexed, so start at 1 and wrap after the last path:
counter = 1

request = function()

  url_path = paths[counter]

  counter = counter + 1

  if counter > #paths then

    counter = 1

  end

  return wrk.format(nil, url_path)

end
  • data/paths.list:

    / - it's not recommended, requests are being duplicated if you add only '/'
    /foo/bar
    /articles/id=25
    /3a06e672fad4bec2383748cfd82547ee.html
    

Command:

wrk -c 12 -t 12 -d 60s -R 200 -s lua/multi-paths.lua https://example.com
Random server address to each thread (with Lua)

Based on:

Example 1:

-- lua/resolve-host.lua

local addrs = nil

function setup(thread)

  if not addrs then

    -- addrs = wrk.lookup(wrk.host, "443" or "http")
    addrs = wrk.lookup(wrk.host, wrk.port or "http")

    for i = #addrs, 1, -1 do

      if not wrk.connect(addrs[i]) then

        table.remove(addrs, i)

      end

    end

  end

  thread.addr = addrs[math.random(#addrs)]

end

function init(args)

  local msg = "thread remote socket: %s"
  print(msg:format(wrk.thread.addr))

end

Command:

wrk -c 12 -t 12 -d 30s -R 600 -s lua/resolve-host.lua https://example.com
Multiple json requests (with Lua)

Based on:

You should install luarocks, lua, luajit and lua-cjson before using multi-req.lua:

# Debian like:
apt-get install lua5.1 libluajit-5.1-dev luarocks

# RedHat like:
yum install lua luajit-devel luarocks

# cjson:
luarocks install lua-cjson
-- lua/multi-req.lua

local cjson = require "cjson"
local cjson2 = cjson.new()
local cjson_safe = require "cjson.safe"

math.randomseed(os.time())
math.random(); math.random(); math.random()

function shuffle(paths)

  local j, k
  local n = #paths

  for i = 1, n do

    j, k = math.random(n), math.random(n)
    paths[j], paths[k] = paths[k], paths[j]

  end

  return paths

end

function load_request_objects_from_file(file)

  local data = {}
  local content

  local f=io.open(file,"r")
  if f~=nil then

    content = f:read("*all")
    io.close(f)

  else

    return data

  end

  data = cjson.decode(content)

  return shuffle(data)

end

requests = load_request_objects_from_file("data/requests.json")

if #requests <= 0 then

  print("No requests found. You have to create a file data/requests.json.")
  os.exit()

end

print(" " .. #requests .. " requests")

counter = 1

request = function()

  local request_object = requests[counter]

  counter = counter + 1

  if counter > #requests then

    counter = 1

  end

  return wrk.format(request_object.method, request_object.path, request_object.headers, request_object.body)

end
  • data/requests.json:

    [
      {
        "path": "/id/1",
        "body": "ceR1caesaed2nohJei",
        "method": "GET",
        "headers": {
          "X-Custom-Header-1": "foo",
          "X-Custom-Header-2": "bar"
        }
      },
      {
        "path": "/id/2",
        "body": "{\"field\":\"value\"}",
        "method": "POST",
        "headers": {
          "Content-Type": "application/json",
          "X-Custom-Header-1": "foo",
          "X-Custom-Header-2": "bar"
        }
      }
    ]

Command:

wrk -c 12 -t 12 -d 30s -R 200 -s lua/multi-req.lua https://example.com
Debug mode (with Lua)

Based on:

-- lua/debug.lua

local file = io.open("data/debug.log", "w")

file:write("\n----------------------------------------\n")
file:write(os.date("%m/%d/%Y %I:%M %p"))
file:write("\n----------------------------------------\n")
file:close()

local file = io.open("data/debug.log", "a")

function typeof(var)

  local _type = type(var);
  if(_type ~= "table" and _type ~= "userdata") then

    return _type;

  end

  local _meta = getmetatable(var);
  if(_meta ~= nil and _meta._NAME ~= nil) then

    return _meta._NAME;

  else

    return _type;

  end

end

local function string(o)

  return '"' .. tostring(o) .. '"'

end

local function recurse(o, indent)

  if indent == nil then indent = '' end

  local indent2 = indent .. '  '

  if type(o) == 'table' then

    local s = indent .. '{' .. '\n'
    local first = true

    for k,v in pairs(o) do

      if first == false then s = s .. ', \n' end
      if type(k) ~= 'number' then k = string(k) end
      s = s .. indent2 .. '[' .. k .. '] = ' .. recurse(v, indent2)
      first = false

    end

    return s .. '\n' .. indent .. '}'

  else

    return string(o)

  end

end

local function var_dump(...)

  local args = {...}
  if #args > 1 then

    var_dump(args)

  else

    print(recurse(args[1]))

  end

end

max_requests = 0
counter = 1
show_body = 0

function setup(thread)

  thread:set("id", counter)
  counter = counter + 1

end

response = function (status, headers, body)

  file:write("\n----------------------------------------\n")
  file:write("Response " .. counter .. " with status: " .. status .. " on thread " .. id)
  file:write("\n----------------------------------------\n")

  file:write("[response] Headers:\n")

  for key, value in pairs(headers) do

    file:write("[response]  - " .. key  .. ": " .. value .. "\n")

  end

  if (show_body == 1) then

    file:write("[response] Body:\n")
    file:write(body .. "\n")

  end

  if (max_requests > 0) and (counter > max_requests) then

    wrk.thread:stop()

  end

  counter = counter + 1

end

done = function ()

  file:close()

end

Command:

wrk -c 12 -t 12 -d 15s -R 200 -s lua/debug.lua https://example.com
Analyse data pass to and from the threads

Based on:

-- lua/threads.lua

local counter = 1
local threads = {}

function setup(thread)

  thread:set("id", counter)
  table.insert(threads, thread)

  counter = counter + 1

end

function init(args)

  requests  = 0
  responses = 0

  -- local msg = "thread %d created"
  -- print(msg:format(id))

end

function request()

  requests = requests + 1
  return wrk.request()

end

function response(status, headers, body)

  responses = responses + 1

end

function done(summary, latency, requests)

  io.write("\n----------------------------------------\n")
  io.write(" Summary")
  io.write("\n----------------------------------------\n")

  for index, thread in ipairs(threads) do

    local id        = thread:get("id")
    local requests  = thread:get("requests")
    local responses = thread:get("responses")

    local msg = "thread %d : %d req , %d res"

    print(msg:format(id, requests, responses))

  end

end

Command:

wrk -c 12 -t 12 -d 5s -R 5000 -s lua/threads.lua https://example.com
Parsing wrk result and generate report

Installation:

go get -u github.com/jgsqware/wrk-report

Command:

wrk -c 12 -t 12 -d 15s -R 500 --latency https://example.com | wrk-report > report.html

wrk-report-01

Load testing with locust

Project documentation: Locust Documentation

Installation:

# Python 2.x
python -m pip install locustio

# Python 3.x
python3 -m pip install locustio

About locust:

  • Number of users to simulate - the number of users testing your application. Each user opens a TCP connection to your application and tests it

  • Hatch rate (users spawned/second) - for each second, how many users will be added to the current users, until the total number of users is reached. For each hatched user, Locust calls the on_start function if you have one

For example:

  • Number of users: 1000
  • Hatch rate: 10

Each second, 10 users are added to the current users, starting from 0, so in 100 seconds you will have 1000 users. When it reaches the total number of users, the statistics will be reset.

Locust tries to emulate user behavior: it pauses each individual 'User' between min_wait and max_wait ms to simulate the time between normal user actions.

Each of tasks will be executed in a random order, with a delay of min_wait - max_wait between the beginning of each task.

Multiple paths
# python/multi-paths.py

import urllib3

from locust import HttpLocust, TaskSet, task

urllib3.disable_warnings()

multiheaders = """{
"Host": "example.com",
"User-Agent":"python-locust-test",
}
"""

self.client.get("/", headers=h)

def on_start(self):
  self.client.verify = False

class UserBehavior(TaskSet):

  @task
  class NonLoggedUserBehavior(TaskSet):

    # Home page
    @task(1)
    def index(self):
      self.client.get("/", headers=multiheaders, verify=False)

    # Status
    @task(1)
    def status(self):
      self.client.get("/status", verify=False)

    # Article
    @task(1)
    def article(self):
      self.client.get("/article/1044162375/", headers=multiheaders, verify=False)

    # About
    # Twice as much of requests:
    @task(2)
    def about(self):
      with self.client.get("/about", catch_response=True) as response:
        if response.text.find("[email protected]") != -1:
          response.success()
        else:
          response.failure("[email protected] not found in response")

class WebsiteUser(HttpLocust):

  task_set = UserBehavior
  min_wait = 1000 # ms, 1s
  max_wait = 5000 # ms, 5s

Command:

# Without web interface:
locust --host=https://example.com -f python/multi-paths.py -c 2000 -r 10 -t 1h30m --no-web --print-stats --only-summary

# With web interface
locust --host=https://example.com -f python/multi-paths.py --print-stats --only-summary
Multiple paths with different user sessions

Look also:

Create a file with user credentials:

# python/credentials.py

USER_CREDENTIALS = [

  ("user5", "ShaePhu8aen8"),
  ("user4", "Cei5ohcha3he"),
  ("user3", "iedie8booChu"),
  ("user2", "iCuo4es1ahzu"),
  ("user1", "eeSh0yi0woo8")

  # ...

]
# python/diff-users.py

import urllib3, logging, sys

from locust import HttpLocust, TaskSet, task
from credentials import USER_CREDENTIALS

urllib3.disable_warnings()

class UserBehavior(TaskSet):

  @task
  class LoggedUserBehavior(TaskSet):

    username = "default"
    password = "default"

    def on_start(self):
      if len(USER_CREDENTIALS) > 0:
        self.username, self.password = USER_CREDENTIALS.pop()

      self.client.post("/login", {
        'username': self.username, 'password': self.password
      })
      logging.info('username: %s, password: %s', self.username, self.password)

    def on_stop(self):
      self.client.post("/logout", verify=False)

    # Home page
    # 10x more often than other
    @task(10)
    def index(self):
      self.client.get("/", verify=False)

    # Enter specific url after client login
    @task(1)
    def random_gen(self):
      self.client.get("/random-generator", verify=False)

    # Client profile page
    @task(1)
    def profile(self):
      self.client.get("/profile", verify=False)

    # Contact page
    @task(1)
    def contact(self):
      self.client.post("/contact", {
        "email": "[email protected]",
        "subject": "GNU/Linux and BSD",
        "message": "Free software, Yeah!"
      })

class WebsiteUser(HttpLocust):

  host = "https://api.example.com"
  task_set = UserBehavior
  min_wait = 2000   # ms, 2s
  max_wait = 15000  # ms, 15s

Command:

# Without web interface (for 5 users, see credentials.py):
locust -f python/diff-users.py -c 5 -r 5 -t 30m --no-web --print-stats --only-summary

# With web interface (for 5 users, see credentials.py)
locust -f python/diff-users.py --print-stats --only-summary
TCP SYN flood Denial of Service attack
hping3 -V -c 1000000 -d 120 -S -w 64 -p 80 --flood --rand-source <remote_host>
HTTP Denial of Service attack
# 1)
slowhttptest -g -o http_dos.stats -H -c 1000 -i 15 -r 200 -t GET -x 24 -p 3 -u <scheme>://<server_name>/index.php

slowhttptest -g -o http_dos.stats -B -c 5000 -i 5 -r 200 -t POST -l 180 -x 5 -u <scheme>://<server_name>/service/login

# 2)
pip3 install slowloris
slowloris <server_name>

# 3)
git clone https://github.com/jseidl/GoldenEye && cd GoldenEye
./goldeneye.py  <scheme>://<server_name> -w 150 -s 75 -m GET

Debugging

You can change combinations and parameters of these commands. When carrying out the analysis, remember about the debug log and log formats.

Show information about processes

with ps:

# For all processes (master + workers):
ps axw -o pid,ppid,gid,user,etime,%cpu,%mem,vsz,rss,wchan,ni,command | egrep '([n]ginx|[P]ID)'

ps aux | grep [n]ginx
ps -lfC nginx

# For master process:
ps axw -o pid,ppid,gid,user,etime,%cpu,%mem,vsz,rss,wchan,ni,command | egrep '([n]ginx: master|[P]ID)'

ps aux | grep "[n]ginx: master"

# For worker/workers:
ps axw -o pid,ppid,gid,user,etime,%cpu,%mem,vsz,rss,wchan,ni,command | egrep '([n]ginx: worker|[P]ID)'

ps aux | grep "[n]ginx: worker"

# Show only pid, user and group for all NGINX processes:
ps -eo pid,comm,euser,supgrp | grep nginx

with top:

# For all processes (master + workers):
top -p $(pgrep -d , nginx)

# For master process:
top -p $(pgrep -f "nginx: master")
top -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: master") { print $1}')

# For one worker:
top -p $(pgrep -f "nginx: worker")
top -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}')

# For multiple workers:
top -p $(pgrep -f "nginx: worker" | sed '$!s/$/,/' | tr -d '\n')
top -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}' | sed '$!s/$/,/' | tr -d '\n')
Check memory usage

with ps_mem:

# For all processes (master + workers):
ps_mem -s -p $(pgrep -d , nginx)
ps_mem -d -p $(pgrep -d , nginx)

# For master process:
ps_mem -s -p $(pgrep -f "nginx: master")
ps_mem -s -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: master") { print $1}')

# For one worker:
ps_mem -s -p $(pgrep -f "nginx: worker")
ps_mem -s -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}')

# For multiple workers:
ps_mem -s -p $(pgrep -f "nginx: worker" | sed '$!s/$/,/' | tr -d '\n')
ps_mem -s -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}' | sed '$!s/$/,/' | tr -d '\n')

with pmap:

# For all processes (master + workers):
pmap $(pgrep -d ' ' nginx)
pmap $(pidof nginx)

# For master process:
pmap $(pgrep -f "nginx: master")
pmap $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: master") { print $1}')

# For one and multiple workers:
pmap $(pgrep -f "nginx: worker")
pmap $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}')
Show open files
# For all processes (master + workers):
lsof -n -p $(pgrep -d , nginx)

# For master process:
lsof -n -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: master") { print $1}')

# For one worker:
lsof -n -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}')

# For multiple workers:
lsof -n -p $(ps axw -o pid,command | awk '($2 " " $3 ~ "nginx: worker") { print $1}' | sed '$!s/$/,/' | tr -d '\n')
Dump configuration

From the configuration file and all attached files (from disk; only what a new process would load):

nginx -T
nginx -T -c /etc/nginx/nginx.conf

From a running process:

For more information please see GNU Debugger (gdb) - Dump configuration from a running process.

Get the list of configure arguments
nginx -V 2>&1 | grep arguments
Check if the module has been compiled
nginx -V 2>&1 | grep -- 'http_geoip_module'
Show the most accessed IP addresses
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s%20s\n%10s%20s\n' "AMOUNT" "IP_ADDRESS"
awk '{print $1}' access.log | sort | uniq -c | sort -nr
Show the top 5 visitors (IP addresses)
# - add this to the end for print header:
#   ... | xargs printf '%10s%10s%20s\n%10s%10s%20s\n' "NUM" "AMOUNT" "IP_ADDRESS"
cut -d ' ' -f1 access.log | sort | uniq -c | sort -nr | head -5 | nl
Show the most requested urls
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "URL"
awk -F\" '{print $2}' access.log | awk '{print $2}' | sort | uniq -c | sort -nr
Show the most requested urls containing 'string'
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "URL"
awk -F\" '($2 ~ "/string") { print $2}' access.log | awk '{print $2}' | sort | uniq -c | sort -nr
Show the most requested urls with http methods
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s %8s\t%s\n%10s %8s\t%s\n' "AMOUNT" "METHOD" "URL"
awk -F\" '{print $2}' access.log | awk '{print $1 "\t" $2}' | sort | uniq -c | sort -nr
Show the most accessed response codes
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "HTTP_CODE"
awk '{print $9}' access.log | sort | uniq -c | sort -nr
Analyse web server log and show only 2xx http codes
tail -n 100 -f access.log | grep "HTTP/[1-2].[0-1]\" [2]"
Analyse web server log and show only 5xx http codes
tail -n 100 -f access.log | grep "HTTP/[1-2].[0-1]\" [5]"
Show requests that returned 502, sorted by the number of requests per URL
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "URL"
awk '($9 ~ /502/)' access.log | awk '{print $7}' | sort | uniq -c | sort -nr
Show requests that returned 404 for PHP files, sorted by the number of requests per URL
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "URL"
awk '($9 ~ /404/)' access.log | awk -F\" '($2 ~ "\.php")' | awk '{print $7}' | sort | uniq -c | sort -nr
Calculating the number of http response codes
# For the minute that started one minute ago (based on the last 2000 requests):
tail -2000 access.log | awk -v date=$(date -d '1 minutes ago' +"%d/%b/%Y:%H:%M") '$4 ~ date' | cut -d '"' -f3 | cut -d ' ' -f2 | sort | uniq -c | sort -nr

# Last 2000 requests from log file:
# - add this to the end for print header:
#   ... | xargs printf '%10s\t%s\n%10s\t%s\n' "AMOUNT" "HTTP_CODE"
tail -2000 access.log | cut -d '"' -f3 | cut -d ' ' -f2 | sort | uniq -c | sort -nr
Calculating requests per second
# In real time:
tail -F access.log | pv -lr >/dev/null

# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s%24s%18s\n%10s%24s%18s\n' "AMOUNT" "DATE" "IP_ADDRESS"
awk '{print $4}' access.log | uniq -c | sort -nr | tr -d "["
Calculating requests per second with IP addresses
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s%24s%18s\n%10s%24s%18s\n' "AMOUNT" "DATE" "IP_ADDRESS"
awk '{print $4 " " $1}' access.log | uniq -c | sort -nr | tr -d "["
Calculating requests per second with IP addresses and urls
# - add `head -n X` to the end to limit the result
# - add this to the end for print header:
#   ... | xargs printf '%10s%24s%18s\t%s\n%10s%24s%18s\t%s\n' "AMOUNT" "DATE" "IP_ADDRESS" "URL"
awk '{print $4 " " $1 " " $7}' access.log | uniq -c | sort -nr | tr -d "["
Get entries within last n hours
awk -v _date=`date -d 'now-6 hours' +[%d/%b/%Y:%H:%M:%S` ' { if ($4 > _date) print $0}' access.log

# The date command prints output according to the current locale; to prevent this, set the LANG variable:
awk -v _date=$(LANG=en_US.UTF-8 date -d 'now-6 hours' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _date) print $0}' access.log

# or:
export LANG=en_US.UTF-8
awk -v _date=$(date -d 'now-6 hours' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _date) print $0}' access.log
Get entries between two timestamps (range of dates)
# 1)
awk '$4>"[05/Feb/2019:02:10" && $4<"[15/Feb/2019:08:20"' access.log

# 2)
# The date command prints output according to the current locale; to prevent this, set the LANG variable:
awk -v _dateB=$(LANG=en_US.UTF-8 date -d '10:20' +[%d/%b/%Y:%H:%M:%S) -v _dateE=$(LANG=en_US.UTF-8 date -d '20:30' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _dateB && $4 < _dateE) print $0}' access.log

# or:
export LANG=en_US.UTF-8
awk -v _dateB=$(date -d '10:20' +[%d/%b/%Y:%H:%M:%S) -v _dateE=$(date -d '20:30' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _dateB && $4 < _dateE) print $0}' access.log

# 3)
# The date command prints output according to the current locale; to prevent this, set the LANG variable:
awk -v _dateB=$(LANG=en_US.UTF-8 date -d 'now-12 hours' +[%d/%b/%Y:%H:%M:%S) -v _dateE=$(LANG=en_US.UTF-8 date -d 'now-2 hours' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _dateB && $4 < _dateE) print $0}' access.log

# or:
export LANG=en_US.UTF-8
awk -v _dateB=$(date -d 'now-12 hours' +[%d/%b/%Y:%H:%M:%S) -v _dateE=$(date -d 'now-2 hours' +[%d/%b/%Y:%H:%M:%S) ' { if ($4 > _dateB && $4 < _dateE) print $0}' access.log
Get line rates from web server log
tail -F access.log | pv -N RAW -lc 1>/dev/null
Trace network traffic for all processes
strace -q -e trace=network -p `pidof nginx | sed -e 's/ /,/g'`
List all files accessed by NGINX
strace -q -ff -e trace=file nginx 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print'
Check that the gzip_static module is working
strace -q -p `pidof nginx | sed -e 's/ /,/g'` 2>&1 | grep gz
Which worker is processing the current request

Example 1 (the more elegant way):

log_format debug-req-trace
                '$pid - "$request_method $scheme://$host$request_uri" '
                '$remote_addr:$remote_port $server_addr:$server_port '
                '$request_id';

# Output example:
31863 - "GET https://example.com/" 35.228.233.xxx:63784 10.240.20.2:443 be90154db5beb0e9dd13c5d91c8ecd4c

Example 2:

# Run strace in the background:
nohup strace -q -s 256 -p `pidof nginx | sed -e 's/ /,/g'` 2>&1 -o /tmp/nginx-req.trace </dev/null >/dev/null 2>/dev/null &

# Watch output file:
watch -n 0.1 "awk '/Host:/ {print \"pid: \" \$1 \", \" \"host: \" \$6}' /tmp/nginx-req.trace | sed 's/\\\r\\\n.*//'"

# Output example:
Every 0.1s: awk '/Host:/ {print "pid: " $1 ", " "host: " $6}' /tmp/nginx-req.trace | sed 's/\\r\\n.*//'

pid: 31863, host: example.com
Capture only http packets
ngrep -d eth0 -qt 'HTTP' 'tcp'
Extract User Agent from the http packets
tcpdump -ei eth0 -nn -A -s1500 -l | grep "User-Agent:"
Capture only http GET and POST packets
# 1)
tcpdump -ei eth0 -s 0 -A -vv \
'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420' or 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354'

# 2)
tcpdump -ei eth0 -s 0 -v -n -l | egrep -i "POST /|GET /|Host:"
Capture requests and filter by source ip and destination port
ngrep -d eth0 "<server_name>" src host 10.10.252.1 and dst port 80
Dump a process's memory

For more information about analyse core dumps please see GNU Debugger (gdb) - Core dump backtrace.

A core dump is a file containing a process's address space (memory) at the moment the process terminates unexpectedly. In other words, it is an instantaneous picture of a failing process at the moment it attempts to do something very wrong.

NGINX is unbelievably stable, but sometimes a running process can terminate unexpectedly.

I think the best practice for core dumps is to collect the core files properly, together with the associated information; with these we can often solve the problem, or at least extract valuable information about the failing process.

To enable core dumps from NGINX configuration you should:

# In main NGINX configuration file:
#   - specify the maximum possible size of the core dump for worker processes
#   - specify the maximum number of open files for worker processes
#   - specify a working directory in which a core dump file will be saved
#   - enable global debugging (optional)
worker_rlimit_core    500m;
worker_rlimit_nofile  65535;
working_directory     /var/dump/nginx;
error_log             /var/log/nginx/error.log debug;

# Make sure the /var/dump/nginx directory is writable:
chown nginx:nginx /var/dump/nginx
chmod 0770 /var/dump/nginx

# Disable the limit for the maximum size of a core dump file:
ulimit -c unlimited
# or:
sh -c "ulimit -c unlimited && exec su $LOGNAME"

# Enable core dumps for the setuid and setgid processes:
#   %e.%p.%h.%t - <executable_filename>.<pid>.<hostname>.<unix_time>
echo "/var/dump/nginx/core.%e.%p.%h.%t" | tee /proc/sys/kernel/core_pattern
sysctl -w fs.suid_dumpable=2 && sysctl -p
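
A quick, non-destructive sanity check of the settings above (it only reads back the values we just set):

cat /proc/sys/kernel/core_pattern   # should print /var/dump/nginx/core.%e.%p.%h.%t
ulimit -c                           # should print "unlimited"
sysctl fs.suid_dumpable             # should print fs.suid_dumpable = 2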

To generate a core dump of a running NGINX master process:

_pid=$(pgrep -f "nginx: master") ; gcore -o core.master $_pid

To generate a core dump of a running NGINX worker processes:

for _pid in $(pgrep -f "nginx: worker") ; do gcore -o core.worker $_pid ; done

Another solution for the above (dump the memory regions of a running NGINX process):

# Set pid of NGINX master process:
_pid=$(pgrep -f "nginx: master")

# Generate gdb commands from the process's memory mappings using awk:
cat /proc/$_pid/maps | \
awk '$6 !~ "^/" {split ($1,addrs,"-"); print "dump memory mem_" addrs[1] " 0x" addrs[1] " 0x" addrs[2] ;}END{print "quit"}' > gdb.args

# Use gdb with the -x option to dump these memory regions to mem_* files:
gdb -p $_pid -x gdb.args

# Look for some (any) nginx.conf text:
grep -a worker_connections mem_*
grep -a server_name mem_*

# or:
strings mem_* | grep worker_connections
strings mem_* | grep server_name
GNU Debugger (gdb)

You can use GDB to extract very useful information about NGINX instances, e.g. the in-memory log or the configuration from a running process.

Dump configuration from a running process

It's very useful when you need to verify which configuration has been loaded, or to restore a previous configuration if the version on disk has been accidentally removed or overwritten.

The ngx_conf_t is a type of structure used for configuration parsing. It only exists during configuration parsing, and obviously you can't access it after configuration parsing is complete. To dump the configuration from a running process you should use ngx_conf_dump_t.

# Save gdb arguments to a file, e.g. nginx.gdb:
set $cd = ngx_cycle->config_dump
set $nelts = $cd.nelts
set $elts = (ngx_conf_dump_t*)($cd.elts)
while ($nelts-- > 0)
  set $name = $elts[$nelts]->name.data
  printf "Dumping %s to nginx.conf.running\n", $name
append memory nginx.conf.running \
  $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
end

# Run gdb in a batch mode:
gdb -p $(pgrep -f "nginx: master") -batch -x nginx.gdb

# And open NGINX config:
less nginx.conf.running

or other solution:

# Save gdb functions to a file, e.g. nginx.gdb:
define dump_config
  set $cd = ngx_cycle->config_dump
  set $nelts = $cd.nelts
  set $elts = (ngx_conf_dump_t*)($cd.elts)
  while ($nelts-- > 0)
    set $name = $elts[$nelts]->name.data
    printf "Dumping %s to nginx.conf.running\n", $name
  append memory nginx.conf.running \
    $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
  end
end
document dump_config
  Dump NGINX configuration.
end

# Run gdb in a batch mode:
gdb -p $(pgrep -f "nginx: master") -iex "source nginx.gdb" -ex "dump_config" --batch

# And open NGINX config:
less nginx.conf.running
Show debug log in memory

First of all, a memory buffer for debug logging should be configured:

error_log   memory:64m debug;

and:

# Save gdb functions to a file, e.g. nginx.gdb:
define dump_debug_log
  set $log = ngx_cycle->log
  while ($log != 0) && ($log->writer != ngx_log_memory_writer)
    set $log = $log->next
  end
  if ($log->wdata != 0)
    set $buf = (ngx_log_memory_buf_t *) $log->wdata
    dump memory debug_mem.log $buf->start $buf->end
  end
end
document dump_debug_log
  Dump in memory debug log.
end

# Run gdb in a batch mode:
gdb -p $(pgrep -f "nginx: master") -iex "source nginx.gdb" -ex "dump_debug_log" --batch

# Strip trailing whitespace/padding from the dump:
sed -i 's/[[:space:]]*$//' debug_mem.log

# And open NGINX debug log:
less debug_mem.log
Core dump backtrace

The gdb functions discussed above can also be used with core files.

To backtrace core dumps saved in working_directory:

gdb /usr/sbin/nginx /var/dump/nginx/core.nginx.8125.x-9s-web01-prod.1561475764
(gdb) bt

You can use also this recipe:

gdb --core /var/dump/nginx/core.nginx.8125.x-9s-web01-prod.1561475764

Shell aliases

alias ng.test='nginx -t -c /etc/nginx/nginx.conf'

alias ng.stop='ng.test && systemctl stop nginx'

alias ng.reload='ng.test && systemctl reload nginx'
alias ng.reload='ng.test && kill -HUP $(cat /var/run/nginx.pid)'
#                       ... kill -HUP $(ps auxw | grep [n]ginx | grep master | awk '{print $2}')

alias ng.restart='ng.test && systemctl restart nginx'
alias ng.restart='ng.test && kill -QUIT $(cat /var/run/nginx.pid) && /usr/sbin/nginx'
#                        ... kill -QUIT $(ps auxw | grep [n]ginx | grep master | awk '{print $2}') ...

Configuration snippets

Restricting access with basic authentication
# 1) Generate file with htpasswd command:
htpasswd -c htpasswd_example.com.conf <username>

# 2) Include this file in specific context: (e.g. server):
server_name example.com;

  ...

  # These directives are optional, only if we need them:
  satisfy all;

  deny    10.255.10.0/24;
  allow   192.168.0.0/16;
  allow   127.0.0.1;
  deny    all;

  # It's important:
  auth_basic            "Restricted Area";
  auth_basic_user_file  /etc/nginx/acls/htpasswd_example.com.conf;

  location / {

    ...

  location /public/ {

    auth_basic off;

  }

  ...
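
How to test? A minimal sketch with curl (example.com and <username> are the placeholders from the snippet above); expect 401 without credentials, 200 with them, and 200 for the public location:

curl -s -o /dev/null -w '%{http_code}\n' https://example.com/
curl -s -o /dev/null -w '%{http_code}\n' -u <username>:<password> https://example.com/
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/public/
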
Blocking/allowing IP addresses

Example 1:

# 1) File: /etc/nginx/acls/allow.map.conf

# Map module:
# NOTE: map matches literal strings and regular expressions, not CIDR ranges;
# for CIDR-based ACLs use the geo module instead (see Example 2):
map $remote_addr $globals_internal_map_acl {

  # Status code:
  #  - 0 = false
  #  - 1 = true
  default 0;

  ### INTERNAL ###
  10.255.10.0/24 1;
  10.255.20.0/24 1;
  10.255.30.0/24 1;
  192.168.0.0/16 1;

}

# 2) Include this file in http context:
include /etc/nginx/acls/allow.map.conf;

# 3) Turn on in a specific context (e.g. location):
server_name example.com;

  ...

  location / {

    proxy_pass http://localhost:80;
    client_max_body_size 10m;

  }

  location ~ ^/(backend|api|admin) {

    if ($globals_internal_map_acl) {

      set $pass 1;

    }

    if ($pass = 1) {

      proxy_pass http://localhost:80;
      client_max_body_size 10m;

    }

    if ($pass != 1) {

      rewrite ^(.*) https://example.com;

    }

  ...

Example 2:

# 1) File: /etc/nginx/acls/allow.geo.conf

# Geo module:
geo $globals_internal_geo_acl {

  # Status code:
  #  - 0 = false
  #  - 1 = true
  default 0;

  ### INTERNAL ###
  10.255.10.0/24 1;
  10.255.20.0/24 1;
  10.255.30.0/24 1;
  192.168.0.0/16 1;

}

# 2) Include this file in http context:
include /etc/nginx/acls/allow.geo.conf;

# 3) Turn on in a specific context (e.g. location):
server_name example.com;

  ...

  location / {

    proxy_pass http://localhost:80;
    client_max_body_size 10m;

  }

  location ~ ^/(backend|api|admin) {

    if ($globals_internal_geo_acl = 0) {

      return 403;

    }

    proxy_pass http://localhost:80;
    client_max_body_size 10m;

  ...

Example 3:

# 1) File: /etc/nginx/acls/allow.conf

### INTERNAL ###
allow 10.255.10.0/24;
allow 10.255.20.0/24;
allow 10.255.30.0/24;
allow 192.168.0.0/16;

### EXTERNAL ###
allow 35.228.233.xxx;

# 2) Include this file in http context:
include /etc/nginx/acls/allow.conf;

# 3) Turn on in a specific context (e.g. server):
server_name example.com;

  include /etc/nginx/acls/allow.conf;
  allow   35.228.233.xxx;
  deny    all;

  ...
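
How to test? A sketch with curl: from a client outside the allowed ranges you should get 403 (Examples 2 and 3) or the redirect (Example 1):

# Expect 403 for Example 2/3 from a non-allowed address:
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/backend

# For Example 1 you should see the redirect instead:
curl -sI https://example.com/backend | grep -i '^HTTP\|^Location'
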
Blocking referrer spam

Example 1:

# 1) File: /etc/nginx/limits.conf
map $http_referer $invalid_referer {

  hostnames;

  default                   0;

  # Invalid referrers:
  "invalid.com"             1;
  "~*spamdomain4.com"       1;
  "~*.invalid\.org"         1;

}

# 2) Include this file in http context:
include /etc/nginx/limits.conf;

# 3) Turn on in a specific context (e.g. server):
server_name example.com;

  if ($invalid_referer) { return 403; }

  ...

Example 2:

# 1) Turn on in a specific context (e.g. location):
location /check_status {

  if ($http_referer ~ "spam1\.com|spam2\.com|spam3\.com") {

    return 444;

  }

  ...

How to test?

siege -b -r 2 -c 5 -v https://example.com/storage/img/header.jpg -H "Referer: https://spamdomain4.com/"
** SIEGE 4.0.4
** Preparing 5 concurrent users for battle.
The server is now under siege...
HTTP/1.1 403     0.11 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.12 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.18 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.18 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.19 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.10 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.11 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.11 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.12 secs:     124 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 403     0.12 secs:     124 bytes ==> GET  /storage/img/header.jpg

...
Limiting referrer spam

Example 1:

# 1) File: /etc/nginx/limits.conf
map $http_referer $limit_ip_key_by_referer {

  hostnames;

  # This is important: with a numeric default (e.g. 0) the rate limiting rule would catch all referrers;
  # requests whose key is an empty string are not rate limited:
  default                   "";

  # Invalid referrers (we restrict them):
  "invalid.com"             $binary_remote_addr;
  "~referer-xyz.com"        $binary_remote_addr;
  "~*spamdomain4.com"       $binary_remote_addr;
  "~*.invalid\.org"         $binary_remote_addr;

}

limit_req_zone $limit_ip_key_by_referer zone=req_for_remote_addr_by_referer:1m rate=5r/s;

# 2) Include this file in http context:
include /etc/nginx/limits.conf;

# 3) Turn on in a specific context (e.g. server):
server_name example.com;

  limit_req zone=req_for_remote_addr_by_referer burst=2;

  ...

How to test?

siege -b -r 2 -c 5 -v https://example.com/storage/img/header.jpg -H "Referer: https://spamdomain4.com/"
** SIEGE 4.0.4
** Preparing 5 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200     0.13 secs:    3174 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 503     0.14 secs:     206 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 503     0.15 secs:     206 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 503     0.10 secs:     206 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 503     0.10 secs:     206 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 503     0.10 secs:     206 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 200     0.63 secs:    3174 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 200     1.13 secs:    3174 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 200     1.00 secs:    3174 bytes ==> GET  /storage/img/header.jpg
HTTP/1.1 200     1.04 secs:    3174 bytes ==> GET  /storage/img/header.jpg

...
Limiting the rate of requests with burst mode
limit_req_zone $binary_remote_addr zone=req_for_remote_addr:64k rate=10r/m;
  • key/zone type: limit_req_zone
  • the unique key for limiter: $binary_remote_addr
    • limit requests per IP as follows
  • zone name: req_for_remote_addr
  • zone size: 64k (1024 IP addresses)
  • rate is 0.16 requests per second, i.e. 10 requests per minute (1 request every 6 seconds)

Example of use:

location ~ /stats {

  limit_req zone=req_for_remote_addr burst=5;

  ...
  • burst=5 allows a queue of up to 5 excess requests above the rate
    • queued requests are served at the defined rate (1 request every 6 seconds),
      so the 5 queued requests drain over ~30 seconds - see the delays in the test output below

Testing queue:

# siege -b -r 1 -c 12 -v https://x409.info/stats/
** SIEGE 4.0.4
** Preparing 12 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 *   0.20 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 503     0.20 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.20 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.21 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.22 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.22 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.23 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 200 *   6.22 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *  12.24 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *  18.27 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *  24.30 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *  30.32 secs:       2 bytes ==> GET  /stats/
             |
             - burst=5
             - 0,16r/s, 10r/m - 1r every 6 seconds

Transactions:              6 hits
Availability:          50.00 %
Elapsed time:          30.32 secs
Data transferred:       0.01 MB
Response time:         15.47 secs
Transaction rate:       0.20 trans/sec
Throughput:             0.00 MB/sec
Concurrency:            3.06
Successful transactions:   6
Failed transactions:       6
Longest transaction:   30.32
Shortest transaction:   0.20
Limiting the rate of requests with burst mode and nodelay
limit_req_zone $binary_remote_addr zone=req_for_remote_addr:50m rate=2r/s;
  • key/zone type: limit_req_zone
  • the unique key for limiter: $binary_remote_addr
    • limit requests per IP as follows
  • zone name: req_for_remote_addr
  • zone size: 50m (800,000 IP addresses)
  • rate is 2 requests per second, i.e. 120 requests per minute

Example of use:

location ~ /stats {

  limit_req zone=req_for_remote_addr burst=5 nodelay;

  ...
  • burst=5 allows a queue of up to 5 excess requests above the rate
  • with nodelay the queued (burst) slots are used immediately instead of being
    throttled to the defined rate; anything beyond rate + burst is rejected at once
    (see the test output below: 6 requests succeed immediately, the rest get 503)

Testing queue:

# siege -b -r 1 -c 12 -v https://x409.info/stats/
** SIEGE 4.0.4
** Preparing 12 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 *   0.18 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *   0.18 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *   0.19 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *   0.19 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *   0.19 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 200 *   0.19 secs:       2 bytes ==> GET  /stats/
HTTP/1.1 503     0.19 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.19 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.20 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.21 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.21 secs:    1501 bytes ==> GET  /stats/
HTTP/1.1 503     0.22 secs:    1501 bytes ==> GET  /stats/
             |
             - burst=5 with nodelay
             - 2r/s, 120r/m - 1r every 0.5 second

Transactions:              6 hits
Availability:          50.00 %
Elapsed time:           0.23 secs
Data transferred:       0.01 MB
Response time:          0.39 secs
Transaction rate:      26.09 trans/sec
Throughput:             0.04 MB/sec
Concurrency:           10.17
Successful transactions:   6
Failed transactions:       6
Longest transaction:    0.22
Shortest transaction:   0.18
Limiting the number of connections
limit_conn_zone $binary_remote_addr zone=conn_for_remote_addr:1m;
  • key/zone type: limit_conn_zone
  • the unique key for limiter: $binary_remote_addr
    • limit requests per IP as follows
  • zone name: conn_for_remote_addr
  • zone size: 1m (16,000 IP addresses)

Example of use:

location ~ /stats {

  limit_conn conn_for_remote_addr 1;

  ...
  • allow a single IP address to make no more than 1 concurrent connection

Testing queue:

# siege -b -r 1 -c 100 -t 10s --no-parser https://x409.info/stats/
defaulting to time-based testing: 10 seconds
** SIEGE 4.0.4
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege...
Transactions:            364 hits
Availability:          32.13 %
Elapsed time:           9.00 secs
Data transferred:       1.10 MB
Response time:          2.37 secs
Transaction rate:      40.44 trans/sec
Throughput:             0.12 MB/sec
Concurrency:           95.67
Successful transactions: 364
Failed transactions:     769
Longest transaction:    1.10
Shortest transaction:   0.38
Adding and removing the www prefix
  • www to non-www:
server {

  ...

  server_name www.domain.com;

  # $scheme will get the http or https protocol:
  return 301 $scheme://domain.com$request_uri;

}
  • non-www to www:
server {

  ...

  server_name domain.com;

  # $scheme will get the http or https protocol:
  return 301 $scheme://www.domain.com$request_uri;

}
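
A quick way to verify both redirects (domain.com is the placeholder used above); expect a 301 with the rewritten Location header:

curl -sI http://www.domain.com/foo | grep -i '^HTTP\|^Location'
curl -sI http://domain.com/foo | grep -i '^HTTP\|^Location'
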
Redirect POST request with payload to external endpoint

POST data is passed in the body of the request, which gets dropped if you do a standard redirect.

Look at this:

DESCRIPTION                                                   PERMANENT   TEMPORARY
allows changing the request method from POST to GET           301         302
does not allow changing the request method from POST to GET   308         307

You can try the HTTP status code 307; an RFC-compliant browser should repeat the POST request. You just need to write an NGINX rewrite rule with HTTP status code 307 or 308:

location /api {

  # HTTP 307 only for POST requests:
  if ($request_method = POST) {

    return 307 https://api.example.com$request_uri;

  }

  # You can keep this for non-POST requests:
  rewrite ^ https://api.example.com$request_uri permanent;

  client_max_body_size    10m;

  ...

}
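
How to test? A sketch with curl (api.example.com is the placeholder from the snippet above). A POST should come back as 307 with a Location header; curl preserves the request method across 307/308 redirects (unlike 301/302) when following them with -L:

curl -si -X POST -d 'key=value' https://example.com/api | grep -i '^HTTP\|^Location'

# Follow the redirect and repeat the POST against the target:
curl -siL -X POST -d 'key=value' https://example.com/api | grep -i '^HTTP'
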
Allow multiple cross-domains using the CORS headers

Example 1:

location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {

  if ( $http_origin ~* (https?://(.+\.)?(domain1|domain2|domain3)\.(?:me|co|com)$) ) {

    add_header "Access-Control-Allow-Origin" "$http_origin";

  }

}

Example 2 (a slightly more complex configuration; for GETs and POSTs):

location / {

  if ($http_origin ~* (^https?://([^/]+\.)*(domainone|domaintwo)\.com$)) {

    set $cors "true";

  }

  # Determine the HTTP request method used:
  if ($request_method = 'GET') {

    set $cors "${cors}get";

  }

  if ($request_method = 'POST') {

    set $cors "${cors}post";

  }

  if ($cors = "true") {

    # Catch all in case there's a request method we're not dealing with properly:
    add_header 'Access-Control-Allow-Origin' "$http_origin";

  }

  if ($cors = "trueget") {

    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

  }

  if ($cors = "truepost") {

    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

  }

}
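
How to test? A minimal sketch (domainone.com is one of the origins matched above); the Access-Control-Allow-* headers should appear only for a matching Origin:

curl -s -D - -o /dev/null -H 'Origin: https://domainone.com' https://example.com/ | grep -i '^access-control'
curl -s -D - -o /dev/null -H 'Origin: https://evil.example' https://example.com/ | grep -i '^access-control'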

Other snippets

Create a temporary static backend

Python 3.x:

python3 -m http.server 8000 --bind 127.0.0.1

Python 2.x:

python -m SimpleHTTPServer 8000
Create a temporary static backend with SSL support

Python 3.x:

from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl

httpd = HTTPServer(('localhost', 4443), SimpleHTTPRequestHandler)

httpd.socket = ssl.wrap_socket(httpd.socket,
        keyfile="path/to/key.pem",
        certfile='path/to/cert.pem', server_side=True)

httpd.serve_forever()

Python 2.x:

import BaseHTTPServer, SimpleHTTPServer
import ssl

httpd = BaseHTTPServer.HTTPServer(('localhost', 4443),
        SimpleHTTPServer.SimpleHTTPRequestHandler)

httpd.socket = ssl.wrap_socket (httpd.socket,
        keyfile="path/tp/key.pem",
        certfile='path/to/cert.pem', server_side=True)

httpd.serve_forever()
Generate private key without passphrase
# _len: 2048, 4096
( _fd="private.key" ; _len="4096" ; \
openssl genrsa -out ${_fd} ${_len} )
Generate CSR
( _fd="private.key" ; _fd_csr="request.csr" ; \
openssl req -out ${_fd_csr} -new -key ${_fd} )
Generate CSR (metadata from existing certificate)
( _fd="private.key" ; _fd_csr="request.csr" ; _fd_crt="cert.crt" ; \
openssl x509 -x509toreq -in ${_fd_crt} -out ${_fd_csr} -signkey ${_fd} )
Generate CSR with -config param
( _fd="private.key" ; _fd_csr="request.csr" ; \
openssl req -new -sha256 -key ${_fd} -out ${_fd_csr} \
-config <(
cat <<-EOF
[req]
default_bits        = 2048
default_md          = sha256
prompt              = no
distinguished_name  = dn
req_extensions      = req_ext

[ dn ]
C   = "<two-letter ISO abbreviation for your country>"
ST  = "<state or province where your organisation is legally located>"
L   = "<city where your organisation is legally located>"
O   = "<legal name of your organisation>"
OU  = "<section of the organisation>"
CN  = "<fully qualified domain name>"

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = <fully qualified domain name>
DNS.2 = <next domain>
DNS.3 = <next domain>
EOF
))

Other values in [ dn ]:

Look at this great explanation: How to create multidomain certificates using config files

countryName            = "DE"                     # C=
stateOrProvinceName    = "Hessen"                 # ST=
localityName           = "Keller"                 # L=
postalCode             = "424242"                 # L/postalcode=
streetAddress          = "Crater 1621"            # L/street=
organizationName       = "apfelboymschule"        # O=
organizationalUnitName = "IT Department"          # OU=
commonName             = "example.com"            # CN=
emailAddress           = "[email protected]"  # CN/emailAddress=
Generate private key and CSR
( _fd="private.key" ; _fd_csr="request.csr" ; _len="4096" ; \
openssl req -out ${_fd_csr} -new -newkey rsa:${_len} -nodes -keyout ${_fd} )
Generate ECDSA private key
# _curve: prime256v1, secp521r1, secp384r1
( _fd="private.key" ; _curve="prime256v1" ; \
openssl ecparam -out ${_fd} -name ${_curve} -genkey )

# _curve: X25519
( _fd="private.key" ; _curve="x25519" ; \
openssl genpkey -algorithm ${_curve} -out ${_fd} )
Generate private key with CSR (ECC)
# _curve: prime256v1, secp521r1, secp384r1
( _fd="domain.com.key" ; _fd_csr="domain.com.csr" ; _curve="prime256v1" ; \
openssl ecparam -out ${_fd} -name ${_curve} -genkey ; \
openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 )
Generate self-signed certificate
# _len: 2048, 4096
( _fd="domain.key" ; _fd_out="domain.crt" ; _len="4096" ; _days="365" ; \
openssl req -newkey rsa:${_len} -nodes \
-keyout ${_fd} -x509 -days ${_days} -out ${_fd_out} )
Generate self-signed certificate from existing private key
( _fd="domain.key" ; _fd_out="domain.crt" ; _days="365" ; \
openssl req -new -key ${_fd} -nodes \
-x509 -days ${_days} -out ${_fd_out} )
Generate self-signed certificate from existing private key and csr
( _fd="domain.key" ; _fd_csr="domain.csr" ; _fd_out="domain.crt" ; _days="365" ; \
openssl x509 -signkey ${_fd} \
-in ${_fd_csr} -req -days ${_days} -out ${_fd_out} )
Generate multidomain certificate
certbot certonly -d example.com -d www.example.com
Generate wildcard certificate
certbot certonly --manual --preferred-challenges=dns -d example.com -d *.example.com
Generate certificate with 4096 bit private key
certbot certonly -d example.com -d www.example.com --rsa-key-size 4096
Generate DH Param key
openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096
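
Then reference the generated file in the http or server context (ssl_dhparam is a standard ngx_http_ssl_module directive):

ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;
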
Extract private key from pfx
( _fd_pfx="cert.pfx" ; _fd_key="key.pem" ; \
openssl pkcs12 -in ${_fd_pfx} -nocerts -nodes -out ${_fd_key} )
Extract private key and certs from pfx
( _fd_pfx="cert.pfx" ; _fd_pem="key_certs.pem" ; \
openssl pkcs12 -in ${_fd_pfx} -nodes -out ${_fd_pem} )
Convert DER to PEM
( _fd_der="cert.crt" ; _fd_pem="cert.pem" ; \
openssl x509 -in ${_fd_der} -inform der -outform pem -out ${_fd_pem} )
Convert PEM to DER
( _fd_der="cert.crt" ; _fd_pem="cert.pem" ; \
openssl x509 -in ${_fd_pem} -outform der -out ${_fd_der} )
Verification of the private key
( _fd="private.key" ; \
openssl rsa -noout -text -in ${_fd} )
Verification of the public key
# 1)
( _fd="public.key" ; \
openssl pkey -noout -text -pubin -in ${_fd} )

# 2)
( _fd="private.key" ; \
openssl rsa -inform PEM -noout -in ${_fd} &> /dev/null ; \
if [ $? = 0 ] ; then echo -en "OK\n" ; fi )
Verification of the certificate
( _fd="certificate.crt" ; # format: pem, cer, crt \
openssl x509 -noout -text -in ${_fd} )
Verification of the CSR
( _fd_csr="request.csr" ; \
openssl req -text -noout -in ${_fd_csr} )
Check whether the private key and the certificate match
(openssl rsa -noout -modulus -in private.key | openssl md5 ; \
openssl x509 -noout -modulus -in certificate.crt | openssl md5) | uniq
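
The same modulus check can be extended to the CSR (all three digests must be identical if the key, CSR and certificate belong together):

(openssl rsa -noout -modulus -in private.key | openssl md5 ; \
openssl req -noout -modulus -in request.csr | openssl md5 ; \
openssl x509 -noout -modulus -in certificate.crt | openssl md5) | uniq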

Installation from prebuilt packages

RHEL7 or CentOS 7
From EPEL
# Install epel repository:
yum install epel-release
# or alternative:
#   wget -c https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
#   yum install epel-release-latest-7.noarch.rpm

# Install NGINX:
yum install nginx
From Software Collections
# Install and enable scl:
yum install centos-release-scl
yum-config-manager --enable rhel-server-rhscl-7-rpms

# Install NGINX (rh-nginx14, rh-nginx16, rh-nginx18):
yum install rh-nginx16

# Enable NGINX from SCL:
scl enable rh-nginx16 bash
From Official Repository
# Where:
#   - <os_type> is: rhel or centos
# The heredoc delimiter is quoted so that $releasever and $basearch remain literal in the repo file:
cat > /etc/yum.repos.d/nginx.repo << '__EOF__'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/<os_type>/$releasever/$basearch/
gpgcheck=0
enabled=1
__EOF__

# Install NGINX:
yum install nginx
Debian or Ubuntu

Check the available flavours of NGINX before installing. For more information please see this great answer by Thomas Ward.

From Debian/Ubuntu Repository
# Install NGINX:
apt-get install nginx
From Official Repository
# Where:
#   - <os_type> is: debian or ubuntu
#   - <os_release> is: xenial, bionic, jessie, stretch or other
cat > /etc/apt/sources.list.d/nginx.list << __EOF__
deb http://nginx.org/packages/<os_type>/ <os_release> nginx
deb-src http://nginx.org/packages/<os_type>/ <os_release> nginx
__EOF__

# Update packages list:
apt-get update

# Download the public key (or <pub_key> from your GPG error):
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <pub_key>

# Install NGINX:
apt-get update
apt-get install nginx

Installation from source

The build is configured using the configure command. The configure shell script attempts to guess correct values for various system-dependent variables used during compilation, and uses those values to create a Makefile. Of course, you can adjust certain environment variables so that configure is able to find packages such as zlib or openssl, and you can set many other options (paths, modules).
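
For illustration only, a minimal configure invocation might look like the sketch below; the full invocation used in this handbook appears later in this chapter:

cd nginx-1.17.0
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-http_ssl_module
make -j2
make install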

Before beginning the installation process, please read these important articles, which describe the entire installation process and the parameters of the configure command:

In this chapter I'll present three (very similar) methods of installation. They relate to:

Each of them is suited to high-performance as well as high-concurrency applications. They work great as high-end proxy servers too.

Have a look also at this short note about system locations; it can be useful too:

  • For booting the system, rescues and maintenance: /

    • /bin - user programs
    • /sbin - system programs
    • /lib - shared libraries
  • Full running environment: /usr

    • /usr/bin - user programs
    • /usr/sbin - system programs
    • /usr/lib - shared libraries
    • /usr/share - manual pages, data
  • Added packages: /usr/local

    • /usr/local/bin - user programs
    • /usr/local/sbin - system programs
    • /usr/local/lib - shared libraries
    • /usr/local/share - manual pages, data
Automatic installation

Installation from source consists of multiple steps. If you don't want to go through all of them manually, you can run an automated script that I created to facilitate the whole installation process.

It supports Debian and RHEL like distributions.

This tool is located in lib/ngx_installer.sh. The configuration file is in lib/ngx_installer.conf. By default, it shows a prompt to confirm each step, but you can disable this if you want:

cd lib/
export NGX_PROMPT=0 ; bash ngx_installer.sh
Nginx package

There are currently two versions of NGINX:

  • stable - recommended; it doesn't include all of the latest features, but has critical bug fixes from the mainline release
  • mainline - typically quite stable as well; it includes the latest features and bug fixes and is always up to date

You can download the NGINX source code from the official read-only mirrors:

Detailed instructions on how to download and compile the NGINX sources can be found later in the handbook.

Dependencies

Mandatory requirements:

Download, compile and install them, or install prebuilt packages from the repository of your distribution.

OpenResty's LuaJIT uses its own branch of LuaJIT with various important bug fixes and optimizations for OpenResty's use cases.

I also use the Cloudflare Zlib version due to its performance. See the articles below:

If you download and compile the above sources, it is a good idea to install additional packages (depending on the system version) before building NGINX:

Debian Like                       RedHat Like                 Comment
gcc make build-essential          gcc gcc-c++ kernel-devel
linux-headers* bison              bison
perl libperl-dev libphp-embed     perl perl-devel
                                  perl-ExtUtils-Embed
libssl-dev*                       openssl-devel*
zlib1g-dev*                       zlib-devel*
libpcre2-dev*                     pcre-devel*
libluajit-5.1-dev*                luajit-devel*
libxslt-dev                       libxslt libxslt-devel
libgd-dev                         gd gd-devel
libgeoip-dev                      GeoIP-devel
libxml2-dev                       libxml2-devel
libexpat-dev                      expat-devel
libgoogle-perftools-dev           gperftools-devel
libgoogle-perftools4
                                  cpio
                                  gettext-devel
autoconf                          autoconf                    for jemalloc from sources
libjemalloc1 libjemalloc-dev*     jemalloc jemalloc-devel*    for jemalloc
libpam0g-dev                      pam-devel                   for ngx_http_auth_pam_module
jq                                jq                          for http error pages generator

* If you don't use from sources.

Shell one-liners example:

# Ubuntu/Debian
apt-get install gcc make build-essential bison perl libperl-dev libphp-embed libssl-dev zlib1g-dev libpcre2-dev libluajit-5.1-dev libxslt-dev libgd-dev libgeoip-dev libxml2-dev libexpat-dev libgoogle-perftools-dev libgoogle-perftools4 autoconf jq

# RedHat/CentOS
yum install gcc gcc-c++ kernel-devel bison perl perl-devel perl-ExtUtils-Embed openssl-devel zlib-devel pcre-devel luajit-devel libxslt libxslt-devel gd gd-devel GeoIP-devel libxml2-devel expat-devel gperftools-devel cpio gettext-devel autoconf jq
3rd party modules

Not all external modules can work properly with your current NGINX version. You should read the documentation of each module before adding it to the modules list. You should also check which version of the module is compatible with your NGINX release.

Before installing external modules please read the Event-Driven architecture section to understand why poor quality 3rd party modules may reduce NGINX's performance.

Modules can be compiled as a shared object (*.so file) and then dynamically loaded into NGINX at runtime (--add-dynamic-module). Alternatively, you can build them into NGINX at compile time, linked to the NGINX binary statically (--add-module).

I mixed both variants because some of the modules are built in automatically even if I try to compile them as dynamic modules (they do not support dynamic linking).
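
To illustrate the two variants (module paths below are placeholders): a statically linked module becomes part of the binary, while a dynamic one produces an .so file that must be loaded explicitly:

./configure --add-module=/usr/local/src/some-static-module \
            --add-dynamic-module=/usr/local/src/some-dynamic-module ...

# A dynamic module is then loaded at the top of nginx.conf (main context):
#   load_module modules/ngx_http_some_module.so;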

You can download external modules from:

A short description of the modules that I used in this step-by-step tutorial:

* Available in Tengine Web Server (but these modules may have been updated/patched by Tengine Team).
** Is already being used in quite a few third party modules.

Compiler and linker

Something about compiler and linker options. Out of the box you probably do not need to provide any flags yourself; the configure script should automatically detect some reasonable defaults. However, in order to optimise for speed and/or security, you should probably provide a few compiler flags.

See these recommendations by RedHat. You should also read Compilation and Installation for OpenSSL.

Here are some examples:

# Example of use compiler options:
# 1)
--with-cc-opt="-I/usr/local/include -I${OPENSSL_INC} -I${LUAJIT_INC} -I${JEMALLOC_INC} -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC"
# 2)
--with-cc-opt="-I/usr/local/include -m64 -march=native -DTCP_FASTOPEN=23 -O3 -g -fstack-protector-strong -flto -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -gsplit-dwarf"

# Example of use linker options:
# 1)
--with-ld-opt="-Wl,-E -L/usr/local/lib -ljemalloc -lpcre -Wl,-rpath,/usr/local/lib,-z,relro -Wl,-z,now -pie"
# 2)
--with-ld-opt="-L/usr/local/lib -ljemalloc -Wl,-lpcre -Wl,-z,relro -Wl,-rpath,/usr/local/lib"
Debugging Symbols

Debugging symbols help obtain additional information for debugging, such as functions, variables, data structures, source file and line number information.

However, if you get the No symbol table info available error when you run a (gdb) backtrace, you should recompile NGINX with support for debugging symbols. For this it is essential to include debugging symbols with the -g flag and to make the debugger output easier to understand by disabling compiler optimization with the -O0 flag:

If you use -O0, remember to disable -D_FORTIFY_SOURCE=2; if you don't, you will get: error: #warning _FORTIFY_SOURCE requires compiling with optimization (-O).

./configure --with-debug --with-cc-opt='-O0 -g' ...

Also if you get errors similar to one of them:

Missing separate debuginfo for /usr/lib64/libluajit-5.1.so.2 ...
Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found) ...

You should also recompile the libraries with the -g compiler option and optionally with -O0. For more information please read 3.9 Options for Debugging Your Program.
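
To verify afterwards that the binary really contains debugging symbols, something like this should work:

file /usr/sbin/nginx   # should report "with debug_info, not stripped"
objdump -h /usr/sbin/nginx | grep -q debug && echo "debug sections present"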

SystemTap

SystemTap is a scripting language and tool for dynamically instrumenting running production Linux kernel-based operating systems. It's required for openresty-systemtap-toolkit for OpenResty.

This is a good all-in-one tutorial for installing and configuring SystemTap on CentOS 7/Ubuntu distributions. In case of problems please see this SystemTap document.

Hint: Do not specify --with-debug while profiling. It slows everything down significantly.

cd /opt

git clone --depth 1 https://github.com/openresty/openresty-systemtap-toolkit

# RHEL/CentOS
yum install yum-utils
yum --enablerepo=base-debuginfo install kernel-devel-$(uname -r) kernel-headers-$(uname -r) kernel-debuginfo-$(uname -r) kernel-debuginfo-common-x86_64-$(uname -r)
yum --enablerepo=base-debuginfo install systemtap systemtap-debuginfo

reboot

# Run this commands for testing SystemTap:
stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'
stap -v -e 'probe begin { printf("Hello, World!\n"); exit() }'

For installing SystemTap on Ubuntu/Debian:

stapxx

The author of OpenResty created a great and simple macro-language extension to SystemTap: stapxx.

Install Nginx on CentOS 7

Pre installation tasks

Set NGINX version (I use stable release):

export ngx_version="1.17.0"

Set temporary variables:

ngx_src="/usr/local/src"
ngx_base="${ngx_src}/nginx-${ngx_version}"
ngx_master="${ngx_base}/master"
ngx_modules="${ngx_base}/modules"

Create directories:

for i in "$ngx_base" "${ngx_master}" "$ngx_modules" ; do

  mkdir "$i"

done
Install or build dependencies

In my configuration I used all prebuilt dependencies except libssl-dev, zlib1g-dev, libluajit-5.1-dev and libpcre2-dev, because I compiled them manually - for TLS 1.3 support and following the OpenResty recommendation for LuaJIT.

Install prebuilt packages, export variables and set symbolic link:

# It's important and required, regardless of chosen sources:
yum install gcc gcc-c++ kernel-devel bison perl perl-devel perl-ExtUtils-Embed libxslt libxslt-devel gd gd-devel GeoIP-devel libxml2-devel expat-devel gperftools-devel cpio gettext-devel autoconf jq

# In this example we build the packages below from source, so we do NOT install them:
#   yum install openssl-devel zlib-devel pcre-devel luajit-devel

# For LuaJIT (libluajit-5.1-dev):
export LUAJIT_LIB="/usr/local/x86_64-linux-gnu"
export LUAJIT_INC="/usr/include/luajit-2.1"

ln -s /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2 /usr/local/lib/liblua.so

Remember to build sregex also if you use above steps.

Or download and compile them:

PCRE:

cd "${ngx_src}"

export pcre_version="8.42"

export PCRE_SRC="${ngx_src}/pcre-${pcre_version}"
export PCRE_LIB="/usr/local/lib"
export PCRE_INC="/usr/local/include"

wget https://ftp.pcre.org/pub/pcre/pcre-${pcre_version}.tar.gz && tar xzvf pcre-${pcre_version}.tar.gz

cd "$PCRE_SRC"

# Add to compile with debugging symbols:
#   CFLAGS='-O0 -g' ./configure
./configure

make -j2 && make test
make install

Zlib:

# I recommend using the Cloudflare Zlib version (cloudflare/zlib) instead of the original Zlib (zlib.net), but both installation methods are similar:
cd "${ngx_src}"

export ZLIB_SRC="${ngx_src}/zlib"
export ZLIB_LIB="/usr/local/lib"
export ZLIB_INC="/usr/local/include"

# For original Zlib:
#   export zlib_version="1.2.11"
#   wget http://www.zlib.net/zlib-${zlib_version}.tar.gz && tar xzvf zlib-${zlib_version}.tar.gz
#   cd "${ZLIB_SRC}-${zlib_version}"

# For Cloudflare Zlib:
git clone --depth 1 https://github.com/cloudflare/zlib

cd "$ZLIB_SRC"

./configure

make -j2 && make test
make install

OpenSSL:

cd "${ngx_src}"

export openssl_version="1.1.1b"

export OPENSSL_SRC="${ngx_src}/openssl-${openssl_version}"
export OPENSSL_DIR="/usr/local/openssl-${openssl_version}"
export OPENSSL_LIB="${OPENSSL_DIR}/lib"
export OPENSSL_INC="${OPENSSL_DIR}/include"

wget https://www.openssl.org/source/openssl-${openssl_version}.tar.gz && tar xzvf openssl-${openssl_version}.tar.gz

cd "${ngx_src}/openssl-${openssl_version}"

# Please run this and add as a compiler param:
export __GCC_SSL=("__SIZEOF_INT128__:enable-ec_nistp_64_gcc_128")

for _cc_opt in "${__GCC_SSL[@]}" ; do

    _cc_key=$(echo "$_cc_opt" | cut -d ":" -f1)
    _cc_value=$(echo "$_cc_opt" | cut -d ":" -f2)

  if gcc -dM -E - </dev/null | grep -q "$_cc_key" ; then

    echo -en "$_cc_value is supported on this machine\n"
    _openssl_gcc+="$_cc_value "

  fi

done

# Add to compile with debugging symbols:
#   ./config -d ...
./config --prefix="$OPENSSL_DIR" --openssldir="$OPENSSL_DIR" shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong "$_openssl_gcc"

make -j2 && make test
make install

# Setup PATH environment variables:
cat > /etc/profile.d/openssl.sh << __EOF__
#!/bin/sh
export PATH=${OPENSSL_DIR}/bin:${PATH}
export LD_LIBRARY_PATH=${OPENSSL_DIR}/lib:${LD_LIBRARY_PATH}
__EOF__

chmod +x /etc/profile.d/openssl.sh && source /etc/profile.d/openssl.sh

# To make the OpenSSL 1.1.1b version visible globally first:
mv /usr/bin/openssl /usr/bin/openssl-old
ln -s ${OPENSSL_DIR}/bin/openssl /usr/bin/openssl

cat > /etc/ld.so.conf.d/openssl.conf << __EOF__
${OPENSSL_DIR}/lib
__EOF__

LuaJIT:

# I recommend using OpenResty's branch (openresty/luajit2) instead of LuaJIT (LuaJIT/LuaJIT), but both installation methods are similar:
cd "${ngx_src}"

export LUAJIT_SRC="${ngx_src}/luajit2"
export LUAJIT_LIB="/usr/local/lib"
export LUAJIT_INC="/usr/local/include/luajit-2.1"

# For original LuaJIT:
#   git clone http://luajit.org/git/luajit-2.0 luajit2
#   cd "$LUAJIT_SRC"

# For OpenResty's LuaJIT:
git clone --depth 1 https://github.com/openresty/luajit2

cd "$LUAJIT_SRC"

# Add to compile with debugging symbols:
#   CFLAGS='-g' make ...
make && make install

ln -s /usr/local/lib/libluajit-5.1.so.2.1.0 /usr/local/lib/liblua.so

sregex:

Required for replace-filter-nginx-module module.

cd "${ngx_src}"

git clone --depth 1 https://github.com/openresty/sregex

cd "${ngx_src}/sregex"

make && make install

jemalloc:

To verify jemalloc in use: lsof -n | grep jemalloc.

cd "${ngx_src}"

export JEMALLOC_SRC="${ngx_src}/jemalloc"
export JEMALLOC_INC="/usr/local/include/jemalloc"

git clone --depth 1 https://github.com/jemalloc/jemalloc

cd "$JEMALLOC_SRC"

./autogen.sh

make && make install

Update the links and cache for the shared libraries (needed for both types of installation):

ldconfig
Get Nginx sources
cd "${ngx_base}"

wget https://nginx.org/download/nginx-${ngx_version}.tar.gz

# or alternative:
#   git clone --depth 1 https://github.com/nginx/nginx master

tar zxvf nginx-${ngx_version}.tar.gz -C "${ngx_master}" --strip 1
Download 3rd party modules
cd "${ngx_modules}"

for i in \
https://github.com/simplresty/ngx_devel_kit \
https://github.com/openresty/lua-nginx-module \
https://github.com/openresty/set-misc-nginx-module \
https://github.com/openresty/echo-nginx-module \
https://github.com/openresty/headers-more-nginx-module \
https://github.com/openresty/replace-filter-nginx-module \
https://github.com/openresty/array-var-nginx-module \
https://github.com/openresty/encrypted-session-nginx-module \
https://github.com/vozlt/nginx-module-sysguard \
https://github.com/nginx-clojure/nginx-access-plus \
https://github.com/yaoweibin/ngx_http_substitutions_filter_module \
https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng \
https://github.com/vozlt/nginx-module-vts \
https://github.com/google/ngx_brotli ; do

  git clone --depth 1 "$i"

done

wget http://mdounin.ru/hg/ngx_http_delay_module/archive/tip.tar.gz -O delay-module.tar.gz
mkdir delay-module && tar xzvf delay-module.tar.gz -C delay-module --strip 1

For ngx_brotli:

cd "${ngx_modules}/ngx_brotli"

git submodule update --init

I also use some modules from Tengine:

  • ngx_backtrace_module
  • ngx_debug_pool
  • ngx_debug_timer
  • ngx_http_upstream_check_module
  • ngx_http_footer_filter_module
  • ngx_slab_stat
cd "${ngx_modules}"

git clone --depth 1 https://github.com/alibaba/tengine

If you use NAXSI:

cd "${ngx_modules}"

git clone --depth 1 https://github.com/nbs-system/naxsi
Build Nginx
cd "${ngx_master}"

# - you can also build NGINX without 3rd party modules
# - remember about compiler and linker options
# - don't set values for --with-openssl, --with-pcre, and --with-zlib if you select prebuilt packages for them
# - add to compile with debugging symbols: -O0 -g
#   - and remove -D_FORTIFY_SOURCE=2 if you use above
./configure --prefix=/etc/nginx \
            --conf-path=/etc/nginx/nginx.conf \
            --sbin-path=/usr/sbin/nginx \
            --pid-path=/var/run/nginx.pid \
            --lock-path=/var/run/nginx.lock \
            --user=nginx \
            --group=nginx \
            --modules-path=/etc/nginx/modules \
            --error-log-path=/var/log/nginx/error.log \
            --http-log-path=/var/log/nginx/access.log \
            --http-client-body-temp-path=/var/cache/nginx/client_temp \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --with-compat \
            --with-debug \
            --with-file-aio \
            --with-threads \
            --with-stream \
            --with-stream_realip_module \
            --with-stream_ssl_module \
            --with-stream_ssl_preread_module \
            --with-http_addition_module \
            --with-http_auth_request_module \
            --with-http_degradation_module \
            --with-http_geoip_module \
            --with-http_gunzip_module \
            --with-http_gzip_static_module \
            --with-http_image_filter_module \
            --with-http_perl_module \
            --with-http_random_index_module \
            --with-http_realip_module \
            --with-http_secure_link_module \
            --with-http_ssl_module \
            --with-http_stub_status_module \
            --with-http_sub_module \
            --with-http_v2_module \
            --with-google_perftools_module \
            --with-openssl=${OPENSSL_SRC} \
            --with-openssl-opt="shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong ${_openssl_gcc}" \
            --with-pcre=${PCRE_SRC} \
            --with-pcre-jit \
            --with-zlib=${ZLIB_SRC} \
            --without-http-cache \
            --without-http_memcached_module \
            --without-mail_pop3_module \
            --without-mail_imap_module \
            --without-mail_smtp_module \
            --without-http_fastcgi_module \
            --without-http_scgi_module \
            --without-http_uwsgi_module \
            --add-module=${ngx_modules}/ngx_devel_kit \
            --add-module=${ngx_modules}/encrypted-session-nginx-module \
            --add-module=${ngx_modules}/nginx-access-plus/src/c \
            --add-module=${ngx_modules}/ngx_http_substitutions_filter_module \
            --add-module=${ngx_modules}/nginx-sticky-module-ng \
            --add-module=${ngx_modules}/nginx-module-vts \
            --add-module=${ngx_modules}/ngx_brotli \
            --add-module=${ngx_modules}/tengine/modules/ngx_backtrace_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_debug_pool \
            --add-module=${ngx_modules}/tengine/modules/ngx_debug_timer \
            --add-module=${ngx_modules}/tengine/modules/ngx_http_footer_filter_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_http_upstream_check_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_slab_stat \
            --add-dynamic-module=${ngx_modules}/lua-nginx-module \
            --add-dynamic-module=${ngx_modules}/set-misc-nginx-module \
            --add-dynamic-module=${ngx_modules}/echo-nginx-module \
            --add-dynamic-module=${ngx_modules}/headers-more-nginx-module \
            --add-dynamic-module=${ngx_modules}/replace-filter-nginx-module \
            --add-dynamic-module=${ngx_modules}/array-var-nginx-module \
            --add-dynamic-module=${ngx_modules}/nginx-module-sysguard \
            --add-dynamic-module=${ngx_modules}/delay-module \
            --add-dynamic-module=${ngx_modules}/naxsi/naxsi_src \
            --with-cc-opt="-I/usr/local/include -m64 -march=native -DTCP_FASTOPEN=23 -O2 -g -fstack-protector-strong -flto -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -gsplit-dwarf" \
            --with-ld-opt="-L/usr/local/lib -ljemalloc -Wl,-lpcre -Wl,-z,relro -Wl,-rpath,/usr/local/lib"

make -j2 && make test
make install

ldconfig

Check NGINX version:

nginx -v
nginx version: nginx/1.16.0

And list all files in /etc/nginx:

.
├── fastcgi.conf
├── fastcgi.conf.default
├── fastcgi_params
├── fastcgi_params.default
├── html
│   ├── 50x.html
│   └── index.html
├── koi-utf
├── koi-win
├── mime.types
├── mime.types.default
├── modules
│   ├── ngx_http_array_var_module.so
│   ├── ngx_http_delay_module.so
│   ├── ngx_http_echo_module.so
│   ├── ngx_http_headers_more_filter_module.so
│   ├── ngx_http_lua_module.so
│   ├── ngx_http_naxsi_module.so
│   ├── ngx_http_replace_filter_module.so
│   ├── ngx_http_set_misc_module.so
│   └── ngx_http_sysguard_module.so
├── nginx.conf
├── nginx.conf.default
├── scgi_params
├── scgi_params.default
├── uwsgi_params
├── uwsgi_params.default
└── win-utf

2 directories, 26 files
Post installation tasks

Create a system user/group:

# Ubuntu/Debian
adduser --system --home /non-existent --no-create-home --shell /usr/sbin/nologin --disabled-login --disabled-password --gecos "nginx user" --group nginx

# RedHat/CentOS
groupadd -r -g 920 nginx

useradd --system --home-dir /non-existent --no-create-home --shell /usr/sbin/nologin --uid 920 --gid nginx nginx

passwd -l nginx
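
To verify the account (a quick check; the exact IDs will vary by distribution):

id nginx
# uid=920(nginx) gid=920(nginx) groups=920(nginx)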

Create required directories:

for i in \
/var/www \
/var/log/nginx \
/var/cache/nginx ; do

  mkdir -p "$i" && chown -R nginx:nginx "$i"

done

Include the necessary error pages:

You can also define them, e.g. in /etc/nginx/errors.conf or another file, and attach it as needed in server contexts (see the sketch below).

  • default location: /etc/nginx/html
    50x.html  index.html
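
A minimal sketch of such a file, assuming you store it as /etc/nginx/errors.conf (the file name is only an example; the root matches the default location above):

# /etc/nginx/errors.conf - include it in a server context:
error_page 500 502 503 504 /50x.html;

location = /50x.html {

  root /etc/nginx/html;
  internal;

}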

Update the modules list and include modules.conf in your configuration:

_mod_dir="/etc/nginx/modules"
_mod_conf="/etc/nginx/modules.conf"

:>"${_mod_conf}"

for _module in "${_mod_dir}"/*.so ; do echo -e "load_module\t\t${_module};" >> "$_mod_conf" ; done
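
The generated modules.conf then contains one load_module line per module, for example (module names depend on your build):

load_module		/etc/nginx/modules/ngx_http_echo_module.so;
load_module		/etc/nginx/modules/ngx_http_lua_module.so;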

Create logrotate configuration:

cat > /etc/logrotate.d/nginx << __EOF__
/var/log/nginx/*.log {
  daily
  missingok
  rotate 14
  compress
  delaycompress
  notifempty
  create 0640 nginx nginx
  sharedscripts
  prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
      run-parts /etc/logrotate.d/httpd-prerotate; \
    fi \
  endscript
  postrotate
    invoke-rc.d nginx reload >/dev/null 2>&1
  endscript
}
__EOF__
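
You can check the new policy with a logrotate dry run before the first scheduled rotation:

logrotate -dv /etc/logrotate.d/nginx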

Add systemd service:

cat > /lib/systemd/system/nginx.service << __EOF__
# Stop dance for nginx
# =======================
#
# ExecStop sends SIGQUIT (graceful shutdown) to the nginx process.
# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control
# and sends SIGTERM (fast shutdown) to the main process.
# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends
# SIGKILL to all the remaining processes in the process group (KillMode=mixed).
#
# nginx signals reference doc:
# http://nginx.org/en/docs/control.html
#
[Unit]
Description=A high performance web server and a reverse proxy server
Documentation=man:nginx(8)
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target
__EOF__

Reload systemd manager configuration:

systemctl daemon-reload

Enable NGINX service:

systemctl enable nginx

Test NGINX configuration:

nginx -t -c /etc/nginx/nginx.conf
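
If the configuration is valid, you should see something like:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful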

Install OpenResty on CentOS 7

OpenResty is a full-fledged web application server that bundles the standard nginx core, lots of 3rd-party nginx modules, and most of their external dependencies.

This bundle is maintained by Yichun Zhang (agentzh).

OpenResty is more than a web server. I would call it a superset of the NGINX web server. OpenResty comes with LuaJIT, a just-in-time compiler for the Lua scripting language, many Lua libraries, lots of high quality 3rd-party NGINX modules, and most of their external dependencies.

OpenResty offers good quality and performance. For me, the ability to run Lua scripts from within the server is also really great.

Show step-by-step OpenResty installation
Pre installation tasks

Set the OpenResty version (I use the newest stable release):

export ngx_version="1.15.8.1"

Set temporary variables:

ngx_src="/usr/local/src"
ngx_base="${ngx_src}/openresty-${ngx_version}"
ngx_master="${ngx_base}/master"
ngx_modules="${ngx_base}/modules"

Create directories:

for i in "$ngx_base" "${ngx_master}" "$ngx_modules" ; do

  mkdir "$i"

done
Install or build dependencies

In my configuration I used all prebuilt dependencies except openssl-devel, zlib-devel, and pcre-devel, because I compiled those manually for TLS 1.3 support. In addition, LuaJIT comes with OpenResty.

Install prebuilt packages, export variables and set symbolic link:

# It's important and required, regardless of chosen sources:
yum install gcc gcc-c++ kernel-devel bison perl perl-devel perl-ExtUtils-Embed libxslt libxslt-devel gd gd-devel GeoIP-devel libxml2-devel expat-devel gperftools-devel cpio gettext-devel autoconf jq

# In this example we use sources for all the packages below, so we don't install them. Otherwise:
#   yum install openssl-devel zlib-devel pcre-devel

Remember to also build sregex if you follow the steps above.

Or download and compile them:

PCRE:

cd "${ngx_src}"

export pcre_version="8.42"

export PCRE_SRC="${ngx_base}/pcre-${pcre_version}"
export PCRE_LIB="/usr/local/lib"
export PCRE_INC="/usr/local/include"

wget https://ftp.pcre.org/pub/pcre/pcre-${pcre_version}.tar.gz && tar xzvf pcre-${pcre_version}.tar.gz

cd "$PCRE_SRC"

# Add to compile with debugging symbols:
#   CFLAGS='-O0 -g' ./configure
./configure

make -j2 && make test
make install

Zlib:

# I recommend using the Cloudflare Zlib fork (cloudflare/zlib) instead of the original Zlib (zlib.net); both installation methods are similar:
cd "${ngx_src}"

export ZLIB_SRC="${ngx_src}/zlib"
export ZLIB_LIB="/usr/local/lib"
export ZLIB_INC="/usr/local/include"

# For original Zlib:
#   export zlib_version="1.2.11"
#   wget http://www.zlib.net/zlib-${zlib_version}.tar.gz && tar xzvf zlib-${zlib_version}.tar.gz
#   cd "${ZLIB_SRC}-${zlib_version}"

# For Cloudflare Zlib:
git clone --depth 1 https://github.com/cloudflare/zlib

cd "$ZLIB_SRC"

./configure

make -j2 && make test
make install

OpenSSL:

cd "${ngx_src}"

export openssl_version="1.1.1b"

export OPENSSL_SRC="${ngx_src}/openssl-${openssl_version}"
export OPENSSL_DIR="/usr/local/openssl-${openssl_version}"
export OPENSSL_LIB="${OPENSSL_DIR}/lib"
export OPENSSL_INC="${OPENSSL_DIR}/include"

wget https://www.openssl.org/source/openssl-${openssl_version}.tar.gz && tar xzvf openssl-${openssl_version}.tar.gz

cd "${ngx_src}/openssl-${openssl_version}"

# Please run this and add as a compiler param:
export __GCC_SSL=("__SIZEOF_INT128__:enable-ec_nistp_64_gcc_128")

for _cc_opt in "${__GCC_SSL[@]}" ; do

    _cc_key=$(echo "$_cc_opt" | cut -d ":" -f1)
    _cc_value=$(echo "$_cc_opt" | cut -d ":" -f2)

  if gcc -dM -E - </dev/null | grep -q "$_cc_key" ; then

    echo -en "$_cc_value is supported on this machine\n"
    _openssl_gcc+="$_cc_value "

  fi

done

# Add to compile with debugging symbols:
#   ./config -d ...
./config --prefix="$OPENSSL_DIR" --openssldir="$OPENSSL_DIR" shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong "$_openssl_gcc"

make -j2 && make test
make install

# Setup PATH environment variables:
cat > /etc/profile.d/openssl.sh << __EOF__
#!/bin/sh
export PATH=${OPENSSL_DIR}/bin:${PATH}
export LD_LIBRARY_PATH=${OPENSSL_DIR}/lib:${LD_LIBRARY_PATH}
__EOF__

chmod +x /etc/profile.d/openssl.sh && source /etc/profile.d/openssl.sh

# To make the OpenSSL 1.1.1b version visible globally first:
mv /usr/bin/openssl /usr/bin/openssl-old
ln -s ${OPENSSL_DIR}/bin/openssl /usr/bin/openssl

cat > /etc/ld.so.conf.d/openssl.conf << __EOF__
${OPENSSL_DIR}/lib
__EOF__
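
After refreshing the linker cache with ldconfig (see below), you can verify which OpenSSL binary is in use:

openssl version
# OpenSSL 1.1.1b  26 Feb 2019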

sregex:

Required for replace-filter-nginx-module module.

cd "${ngx_src}"

git clone --depth 1 https://github.com/openresty/sregex

cd "${ngx_src}/sregex"

make && make install

jemalloc:

To verify jemalloc in use: lsof -n | grep jemalloc.

cd "${ngx_src}"

export JEMALLOC_SRC="/usr/local/src/jemalloc"
export JEMALLOC_INC="/usr/local/include/jemalloc"

git clone --depth 1 https://github.com/jemalloc/jemalloc

cd "$JEMALLOC_SRC"

./autogen.sh

make && make install

Update the links and cache for the shared libraries (needed for both types of installation):

ldconfig
Get OpenResty sources
cd "${ngx_base}"

wget https://openresty.org/download/openresty-${ngx_version}.tar.gz

tar zxvf openresty-${ngx_version}.tar.gz -C "${ngx_master}" --strip 1
Download 3rd party modules
cd "${ngx_modules}"

for i in \
https://github.com/openresty/replace-filter-nginx-module \
https://github.com/vozlt/nginx-module-sysguard \
https://github.com/nginx-clojure/nginx-access-plus \
https://github.com/yaoweibin/ngx_http_substitutions_filter_module \
https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng \
https://github.com/vozlt/nginx-module-vts \
https://github.com/google/ngx_brotli ; do

  git clone --depth 1 "$i"

done

wget http://mdounin.ru/hg/ngx_http_delay_module/archive/tip.tar.gz -O delay-module.tar.gz
mkdir delay-module && tar xzvf delay-module.tar.gz -C delay-module --strip 1

For ngx_brotli:

cd "${ngx_modules}/ngx_brotli"

git submodule update --init

I also use some modules from Tengine:

  • ngx_backtrace_module
  • ngx_debug_pool
  • ngx_debug_timer
  • ngx_http_upstream_check_module
  • ngx_http_footer_filter_module
cd "${ngx_modules}"

git clone --depth 1 https://github.com/alibaba/tengine

If you use NAXSI:

cd "${ngx_modules}"

git clone --depth 1 https://github.com/nbs-system/naxsi
Build OpenResty
cd "${ngx_master}"

# - you can also build OpenResty without 3rd party modules
# - remember about compiler and linker options
# - don't set values for --with-openssl, --with-pcre, and --with-zlib if you select prebuilt packages for them
# - add to compile with debugging symbols: -O0 -g
#   - and remove -D_FORTIFY_SOURCE=2 if you use above
./configure --prefix=/etc/nginx \
            --conf-path=/etc/nginx/nginx.conf \
            --sbin-path=/usr/sbin/nginx \
            --pid-path=/var/run/nginx.pid \
            --lock-path=/var/run/nginx.lock \
            --user=nginx \
            --group=nginx \
            --modules-path=/etc/nginx/modules \
            --error-log-path=/var/log/nginx/error.log \
            --http-log-path=/var/log/nginx/access.log \
            --http-client-body-temp-path=/var/cache/nginx/client_temp \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --with-compat \
            --with-debug \
            --with-file-aio \
            --with-threads \
            --with-stream \
            --with-stream_geoip_module \
            --with-stream_realip_module \
            --with-stream_ssl_module \
            --with-stream_ssl_preread_module \
            --with-http_addition_module \
            --with-http_auth_request_module \
            --with-http_degradation_module \
            --with-http_geoip_module \
            --with-http_gunzip_module \
            --with-http_gzip_static_module \
            --with-http_image_filter_module \
            --with-http_perl_module \
            --with-http_random_index_module \
            --with-http_realip_module \
            --with-http_secure_link_module \
            --with-http_slice_module \
            --with-http_ssl_module \
            --with-http_stub_status_module \
            --with-http_sub_module \
            --with-http_v2_module \
            --with-google_perftools_module \
            --with-luajit \
            --with-openssl=${OPENSSL_SRC} \
            --with-openssl-opt="shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong ${_openssl_gcc}" \
            --with-pcre=${PCRE_SRC} \
            --with-pcre-jit \
            --with-zlib=${ZLIB_SRC} \
            --without-http-cache \
            --without-http_memcached_module \
            --without-http_redis2_module \
            --without-http_redis_module \
            --without-http_rds_json_module \
            --without-http_rds_csv_module \
            --without-lua_redis_parser \
            --without-lua_rds_parser \
            --without-lua_resty_redis \
            --without-lua_resty_memcached \
            --without-lua_resty_mysql \
            --without-lua_resty_websocket \
            --without-mail_pop3_module \
            --without-mail_imap_module \
            --without-mail_smtp_module \
            --without-http_fastcgi_module \
            --without-http_scgi_module \
            --without-http_uwsgi_module \
            --add-module=${ngx_modules}/nginx-access-plus/src/c \
            --add-module=${ngx_modules}/ngx_http_substitutions_filter_module \
            --add-module=${ngx_modules}/nginx-module-vts \
            --add-module=${ngx_modules}/ngx_brotli \
            --add-module=${ngx_modules}/tengine/modules/ngx_backtrace_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_debug_pool \
            --add-module=${ngx_modules}/tengine/modules/ngx_debug_timer \
            --add-module=${ngx_modules}/tengine/modules/ngx_http_footer_filter_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_http_upstream_check_module \
            --add-module=${ngx_modules}/tengine/modules/ngx_slab_stat \
            --add-dynamic-module=${ngx_modules}/replace-filter-nginx-module \
            --add-dynamic-module=${ngx_modules}/nginx-module-sysguard \
            --add-dynamic-module=${ngx_modules}/delay-module \
            --add-dynamic-module=${ngx_modules}/naxsi/naxsi_src \
            --with-cc-opt="-I/usr/local/include -m64 -march=native -DTCP_FASTOPEN=23 -O2 -g -fstack-protector-strong -flto -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -gsplit-dwarf" \
            --with-ld-opt="-L/usr/local/lib -ljemalloc -Wl,-lpcre -Wl,-z,relro -Wl,-rpath,/usr/local/lib"

make && make test
make install

ldconfig

Check OpenResty version:

nginx -v
nginx version: openresty/1.15.8.1

And list all files in /etc/nginx:

.
├── bin
│   ├── md2pod.pl
│   ├── nginx-xml2pod
│   ├── openresty -> /usr/sbin/nginx
│   ├── opm
│   ├── resty
│   ├── restydoc
│   └── restydoc-index
├── COPYRIGHT
├── fastcgi.conf
├── fastcgi.conf.default
├── fastcgi_params
├── fastcgi_params.default
├── koi-utf
├── koi-win
├── luajit
│   ├── bin
│   │   ├── luajit -> luajit-2.1.0-beta3
│   │   └── luajit-2.1.0-beta3
│   ├── include
│   │   └── luajit-2.1
│   │       ├── lauxlib.h
│   │       ├── luaconf.h
│   │       ├── lua.h
│   │       ├── lua.hpp
│   │       ├── luajit.h
│   │       └── lualib.h
│   ├── lib
│   │   ├── libluajit-5.1.a
│   │   ├── libluajit-5.1.so -> libluajit-5.1.so.2.1.0
│   │   ├── libluajit-5.1.so.2 -> libluajit-5.1.so.2.1.0
│   │   ├── libluajit-5.1.so.2.1.0
│   │   ├── lua
│   │   │   └── 5.1
│   │   └── pkgconfig
│   │       └── luajit.pc
│   └── share
│       ├── lua
│       │   └── 5.1
│       ├── luajit-2.1.0-beta3
│       │   └── jit
│       │       ├── bc.lua
│       │       ├── bcsave.lua
│       │       ├── dis_arm64be.lua
│       │       ├── dis_arm64.lua
│       │       ├── dis_arm.lua
│       │       ├── dis_mips64el.lua
│       │       ├── dis_mips64.lua
│       │       ├── dis_mipsel.lua
│       │       ├── dis_mips.lua
│       │       ├── dis_ppc.lua
│       │       ├── dis_x64.lua
│       │       ├── dis_x86.lua
│       │       ├── dump.lua
│       │       ├── p.lua
│       │       ├── v.lua
│       │       ├── vmdef.lua
│       │       └── zone.lua
│       └── man
│           └── man1
│               └── luajit.1
├── lualib
│   ├── cjson.so
│   ├── librestysignal.so
│   ├── ngx
│   │   ├── balancer.lua
│   │   ├── base64.lua
│   │   ├── errlog.lua
│   │   ├── ocsp.lua
│   │   ├── pipe.lua
│   │   ├── process.lua
│   │   ├── re.lua
│   │   ├── resp.lua
│   │   ├── semaphore.lua
│   │   ├── ssl
│   │   │   └── session.lua
│   │   └── ssl.lua
│   ├── resty
│   │   ├── aes.lua
│   │   ├── core
│   │   │   ├── base64.lua
│   │   │   ├── base.lua
│   │   │   ├── ctx.lua
│   │   │   ├── exit.lua
│   │   │   ├── hash.lua
│   │   │   ├── misc.lua
│   │   │   ├── ndk.lua
│   │   │   ├── phase.lua
│   │   │   ├── regex.lua
│   │   │   ├── request.lua
│   │   │   ├── response.lua
│   │   │   ├── shdict.lua
│   │   │   ├── time.lua
│   │   │   ├── uri.lua
│   │   │   ├── utils.lua
│   │   │   ├── var.lua
│   │   │   └── worker.lua
│   │   ├── core.lua
│   │   ├── dns
│   │   │   └── resolver.lua
│   │   ├── limit
│   │   │   ├── conn.lua
│   │   │   ├── count.lua
│   │   │   ├── req.lua
│   │   │   └── traffic.lua
│   │   ├── lock.lua
│   │   ├── lrucache
│   │   │   └── pureffi.lua
│   │   ├── lrucache.lua
│   │   ├── md5.lua
│   │   ├── random.lua
│   │   ├── sha1.lua
│   │   ├── sha224.lua
│   │   ├── sha256.lua
│   │   ├── sha384.lua
│   │   ├── sha512.lua
│   │   ├── sha.lua
│   │   ├── shell.lua
│   │   ├── signal.lua
│   │   ├── string.lua
│   │   ├── upload.lua
│   │   └── upstream
│   │       └── healthcheck.lua
│   └── tablepool.lua
├── mime.types
├── mime.types.default
├── modules
│   ├── ngx_http_delay_module.so
│   ├── ngx_http_naxsi_module.so
│   ├── ngx_http_replace_filter_module.so
│   └── ngx_http_sysguard_module.so
├── nginx
│   └── html
│       ├── 50x.html
│       └── index.html
├── nginx.conf
├── nginx.conf.default
├── pod
│   ├── array-var-nginx-module-0.05
│   │   └── array-var-nginx-module-0.05.pod
│   ├── drizzle-nginx-module-0.1.11
│   │   └── drizzle-nginx-module-0.1.11.pod
│   ├── echo-nginx-module-0.61
│   │   └── echo-nginx-module-0.61.pod
│   ├── encrypted-session-nginx-module-0.08
│   │   └── encrypted-session-nginx-module-0.08.pod
│   ├── form-input-nginx-module-0.12
│   │   └── form-input-nginx-module-0.12.pod
│   ├── headers-more-nginx-module-0.33
│   │   └── headers-more-nginx-module-0.33.pod
│   ├── iconv-nginx-module-0.14
│   │   └── iconv-nginx-module-0.14.pod
│   ├── lua-5.1.5
│   │   └── lua-5.1.5.pod
│   ├── lua-cjson-2.1.0.7
│   │   └── lua-cjson-2.1.0.7.pod
│   ├── luajit-2.1
│   │   ├── changes.pod
│   │   ├── contact.pod
│   │   ├── ext_c_api.pod
│   │   ├── extensions.pod
│   │   ├── ext_ffi_api.pod
│   │   ├── ext_ffi.pod
│   │   ├── ext_ffi_semantics.pod
│   │   ├── ext_ffi_tutorial.pod
│   │   ├── ext_jit.pod
│   │   ├── ext_profiler.pod
│   │   ├── faq.pod
│   │   ├── install.pod
│   │   ├── luajit-2.1.pod
│   │   ├── running.pod
│   │   └── status.pod
│   ├── luajit-2.1-20190507
│   │   └── luajit-2.1-20190507.pod
│   ├── lua-rds-parser-0.06
│   ├── lua-redis-parser-0.13
│   │   └── lua-redis-parser-0.13.pod
│   ├── lua-resty-core-0.1.17
│   │   ├── lua-resty-core-0.1.17.pod
│   │   ├── ngx.balancer.pod
│   │   ├── ngx.base64.pod
│   │   ├── ngx.errlog.pod
│   │   ├── ngx.ocsp.pod
│   │   ├── ngx.pipe.pod
│   │   ├── ngx.process.pod
│   │   ├── ngx.re.pod
│   │   ├── ngx.resp.pod
│   │   ├── ngx.semaphore.pod
│   │   ├── ngx.ssl.pod
│   │   └── ngx.ssl.session.pod
│   ├── lua-resty-dns-0.21
│   │   └── lua-resty-dns-0.21.pod
│   ├── lua-resty-limit-traffic-0.06
│   │   ├── lua-resty-limit-traffic-0.06.pod
│   │   ├── resty.limit.conn.pod
│   │   ├── resty.limit.count.pod
│   │   ├── resty.limit.req.pod
│   │   └── resty.limit.traffic.pod
│   ├── lua-resty-lock-0.08
│   │   └── lua-resty-lock-0.08.pod
│   ├── lua-resty-lrucache-0.09
│   │   └── lua-resty-lrucache-0.09.pod
│   ├── lua-resty-memcached-0.14
│   │   └── lua-resty-memcached-0.14.pod
│   ├── lua-resty-mysql-0.21
│   │   └── lua-resty-mysql-0.21.pod
│   ├── lua-resty-redis-0.27
│   │   └── lua-resty-redis-0.27.pod
│   ├── lua-resty-shell-0.02
│   │   └── lua-resty-shell-0.02.pod
│   ├── lua-resty-signal-0.02
│   │   └── lua-resty-signal-0.02.pod
│   ├── lua-resty-string-0.11
│   │   └── lua-resty-string-0.11.pod
│   ├── lua-resty-upload-0.10
│   │   └── lua-resty-upload-0.10.pod
│   ├── lua-resty-upstream-healthcheck-0.06
│   │   └── lua-resty-upstream-healthcheck-0.06.pod
│   ├── lua-resty-websocket-0.07
│   │   └── lua-resty-websocket-0.07.pod
│   ├── lua-tablepool-0.01
│   │   └── lua-tablepool-0.01.pod
│   ├── memc-nginx-module-0.19
│   │   └── memc-nginx-module-0.19.pod
│   ├── nginx
│   │   ├── accept_failed.pod
│   │   ├── beginners_guide.pod
│   │   ├── chunked_encoding_from_backend.pod
│   │   ├── configure.pod
│   │   ├── configuring_https_servers.pod
│   │   ├── contributing_changes.pod
│   │   ├── control.pod
│   │   ├── converting_rewrite_rules.pod
│   │   ├── daemon_master_process_off.pod
│   │   ├── debugging_log.pod
│   │   ├── development_guide.pod
│   │   ├── events.pod
│   │   ├── example.pod
│   │   ├── faq.pod
│   │   ├── freebsd_tuning.pod
│   │   ├── hash.pod
│   │   ├── howto_build_on_win32.pod
│   │   ├── install.pod
│   │   ├── license_copyright.pod
│   │   ├── load_balancing.pod
│   │   ├── nginx_dtrace_pid_provider.pod
│   │   ├── nginx.pod
│   │   ├── ngx_core_module.pod
│   │   ├── ngx_google_perftools_module.pod
│   │   ├── ngx_http_access_module.pod
│   │   ├── ngx_http_addition_module.pod
│   │   ├── ngx_http_api_module_head.pod
│   │   ├── ngx_http_auth_basic_module.pod
│   │   ├── ngx_http_auth_jwt_module.pod
│   │   ├── ngx_http_auth_request_module.pod
│   │   ├── ngx_http_autoindex_module.pod
│   │   ├── ngx_http_browser_module.pod
│   │   ├── ngx_http_charset_module.pod
│   │   ├── ngx_http_core_module.pod
│   │   ├── ngx_http_dav_module.pod
│   │   ├── ngx_http_empty_gif_module.pod
│   │   ├── ngx_http_f4f_module.pod
│   │   ├── ngx_http_fastcgi_module.pod
│   │   ├── ngx_http_flv_module.pod
│   │   ├── ngx_http_geoip_module.pod
│   │   ├── ngx_http_geo_module.pod
│   │   ├── ngx_http_grpc_module.pod
│   │   ├── ngx_http_gunzip_module.pod
│   │   ├── ngx_http_gzip_module.pod
│   │   ├── ngx_http_gzip_static_module.pod
│   │   ├── ngx_http_headers_module.pod
│   │   ├── ngx_http_hls_module.pod
│   │   ├── ngx_http_image_filter_module.pod
│   │   ├── ngx_http_index_module.pod
│   │   ├── ngx_http_js_module.pod
│   │   ├── ngx_http_keyval_module.pod
│   │   ├── ngx_http_limit_conn_module.pod
│   │   ├── ngx_http_limit_req_module.pod
│   │   ├── ngx_http_log_module.pod
│   │   ├── ngx_http_map_module.pod
│   │   ├── ngx_http_memcached_module.pod
│   │   ├── ngx_http_mirror_module.pod
│   │   ├── ngx_http_mp4_module.pod
│   │   ├── ngx_http_perl_module.pod
│   │   ├── ngx_http_proxy_module.pod
│   │   ├── ngx_http_random_index_module.pod
│   │   ├── ngx_http_realip_module.pod
│   │   ├── ngx_http_referer_module.pod
│   │   ├── ngx_http_rewrite_module.pod
│   │   ├── ngx_http_scgi_module.pod
│   │   ├── ngx_http_secure_link_module.pod
│   │   ├── ngx_http_session_log_module.pod
│   │   ├── ngx_http_slice_module.pod
│   │   ├── ngx_http_spdy_module.pod
│   │   ├── ngx_http_split_clients_module.pod
│   │   ├── ngx_http_ssi_module.pod
│   │   ├── ngx_http_ssl_module.pod
│   │   ├── ngx_http_status_module.pod
│   │   ├── ngx_http_stub_status_module.pod
│   │   ├── ngx_http_sub_module.pod
│   │   ├── ngx_http_upstream_conf_module.pod
│   │   ├── ngx_http_upstream_hc_module.pod
│   │   ├── ngx_http_upstream_module.pod
│   │   ├── ngx_http_userid_module.pod
│   │   ├── ngx_http_uwsgi_module.pod
│   │   ├── ngx_http_v2_module.pod
│   │   ├── ngx_http_xslt_module.pod
│   │   ├── ngx_mail_auth_http_module.pod
│   │   ├── ngx_mail_core_module.pod
│   │   ├── ngx_mail_imap_module.pod
│   │   ├── ngx_mail_pop3_module.pod
│   │   ├── ngx_mail_proxy_module.pod
│   │   ├── ngx_mail_smtp_module.pod
│   │   ├── ngx_mail_ssl_module.pod
│   │   ├── ngx_stream_access_module.pod
│   │   ├── ngx_stream_core_module.pod
│   │   ├── ngx_stream_geoip_module.pod
│   │   ├── ngx_stream_geo_module.pod
│   │   ├── ngx_stream_js_module.pod
│   │   ├── ngx_stream_keyval_module.pod
│   │   ├── ngx_stream_limit_conn_module.pod
│   │   ├── ngx_stream_log_module.pod
│   │   ├── ngx_stream_map_module.pod
│   │   ├── ngx_stream_proxy_module.pod
│   │   ├── ngx_stream_realip_module.pod
│   │   ├── ngx_stream_return_module.pod
│   │   ├── ngx_stream_split_clients_module.pod
│   │   ├── ngx_stream_ssl_module.pod
│   │   ├── ngx_stream_ssl_preread_module.pod
│   │   ├── ngx_stream_upstream_hc_module.pod
│   │   ├── ngx_stream_upstream_module.pod
│   │   ├── ngx_stream_zone_sync_module.pod
│   │   ├── request_processing.pod
│   │   ├── server_names.pod
│   │   ├── stream_processing.pod
│   │   ├── switches.pod
│   │   ├── syntax.pod
│   │   ├── sys_errlist.pod
│   │   ├── syslog.pod
│   │   ├── variables_in_config.pod
│   │   ├── websocket.pod
│   │   ├── welcome_nginx_facebook.pod
│   │   └── windows.pod
│   ├── ngx_coolkit-0.2
│   ├── ngx_devel_kit-0.3.1rc1
│   │   └── ngx_devel_kit-0.3.1rc1.pod
│   ├── ngx_lua-0.10.15
│   │   └── ngx_lua-0.10.15.pod
│   ├── ngx_lua_upstream-0.07
│   │   └── ngx_lua_upstream-0.07.pod
│   ├── ngx_postgres-1.0
│   │   ├── ngx_postgres-1.0.pod
│   │   └── todo.pod
│   ├── ngx_stream_lua-0.0.7
│   │   ├── dev_notes.pod
│   │   └── ngx_stream_lua-0.0.7.pod
│   ├── opm-0.0.5
│   │   └── opm-0.0.5.pod
│   ├── rds-csv-nginx-module-0.09
│   │   └── rds-csv-nginx-module-0.09.pod
│   ├── rds-json-nginx-module-0.15
│   │   └── rds-json-nginx-module-0.15.pod
│   ├── redis2-nginx-module-0.15
│   │   └── redis2-nginx-module-0.15.pod
│   ├── redis-nginx-module-0.3.7
│   ├── resty-cli-0.24
│   │   └── resty-cli-0.24.pod
│   ├── set-misc-nginx-module-0.32
│   │   └── set-misc-nginx-module-0.32.pod
│   ├── srcache-nginx-module-0.31
│   │   └── srcache-nginx-module-0.31.pod
│   └── xss-nginx-module-0.06
│       └── xss-nginx-module-0.06.pod
├── resty.index
├── scgi_params
├── scgi_params.default
├── site
│   ├── lualib
│   ├── manifest
│   └── pod
├── uwsgi_params
├── uwsgi_params.default
└── win-utf

78 directories, 305 files
Post installation tasks

Check all post installation tasks from the Nginx on CentOS 7 - Post installation tasks section.

Install Tengine on Ubuntu 18.04

Tengine is a web server originated by Taobao, the largest e-commerce website in Asia. It is based on the NGINX HTTP server and has many advanced features. There are a lot of features in Tengine that do not (yet) exist in NGINX.

Generally, Tengine is a great solution: it includes many patches, improvements, and additional modules, and most importantly it is very actively maintained.

The build and installation process is very similar to Install Nginx on CentOS 7, so I will only describe the most important changes.

Show step-by-step Tengine installation
Pre installation tasks

Set the Tengine version (I use the newest stable release):

export ngx_version="2.3.0"

Set temporary variables:

ngx_src="/usr/local/src"
ngx_base="${ngx_src}/tengine-${ngx_version}"
ngx_master="${ngx_base}/master"
ngx_modules="${ngx_base}/modules"

Create directories:

for i in "$ngx_base" "${ngx_master}" "$ngx_modules" ; do

  mkdir "$i"

done
Install or build dependencies

Install prebuilt packages, export variables and set symbolic link:

apt-get install gcc make build-essential bison perl libperl-dev libphp-embed libxslt-dev libgd-dev libgeoip-dev libxml2-dev libexpat-dev libgoogle-perftools-dev libgoogle-perftools4 autoconf jq

# In this example we don't use zlib sources:
apt-get install zlib1g-dev

PCRE:

cd "${ngx_src}"

export pcre_version="8.42"

export PCRE_SRC="${ngx_base}/pcre-${pcre_version}"
export PCRE_LIB="/usr/local/lib"
export PCRE_INC="/usr/local/include"

wget https://ftp.pcre.org/pub/pcre/pcre-${pcre_version}.tar.gz && tar xzvf pcre-${pcre_version}.tar.gz

cd "$PCRE_SRC"

# Add to compile with debugging symbols:
#   CFLAGS='-O0 -g' ./configure
./configure

make -j2 && make test
make install

OpenSSL:

cd "${ngx_src}"

export openssl_version="1.1.1b"

export OPENSSL_SRC="${ngx_src}/openssl-${openssl_version}"
export OPENSSL_DIR="/usr/local/openssl-${openssl_version}"
export OPENSSL_LIB="${OPENSSL_DIR}/lib"
export OPENSSL_INC="${OPENSSL_DIR}/include"

wget https://www.openssl.org/source/openssl-${openssl_version}.tar.gz && tar xzvf openssl-${openssl_version}.tar.gz

cd "${ngx_src}/openssl-${openssl_version}"

# Please run this and add as a compiler param:
export __GCC_SSL=("__SIZEOF_INT128__:enable-ec_nistp_64_gcc_128")

for _cc_opt in "${__GCC_SSL[@]}" ; do

    _cc_key=$(echo "$_cc_opt" | cut -d ":" -f1)
    _cc_value=$(echo "$_cc_opt" | cut -d ":" -f2)

  if gcc -dM -E - </dev/null | grep -q "$_cc_key" ; then

    echo -en "$_cc_value is supported on this machine\n"
    _openssl_gcc+="$_cc_value "

  fi

done

# Add to compile with debugging symbols:
#   ./config -d ...
./config --prefix="$OPENSSL_DIR" --openssldir="$OPENSSL_DIR" shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong "$_openssl_gcc"

make -j2 && make test
make install

# Setup PATH environment variables:
cat > /etc/profile.d/openssl.sh << __EOF__
#!/bin/sh
export PATH=${OPENSSL_DIR}/bin:${PATH}
export LD_LIBRARY_PATH=${OPENSSL_DIR}/lib:${LD_LIBRARY_PATH}
__EOF__

chmod +x /etc/profile.d/openssl.sh && source /etc/profile.d/openssl.sh

# To make the OpenSSL 1.1.1b version visible globally first:
mv /usr/bin/openssl /usr/bin/openssl-old
ln -s ${OPENSSL_DIR}/bin/openssl /usr/bin/openssl

cat > /etc/ld.so.conf.d/openssl.conf << __EOF__
${OPENSSL_DIR}/lib
__EOF__

LuaJIT:

# I recommend using OpenResty's branch (openresty/luajit2) instead of the original LuaJIT (LuaJIT/LuaJIT), but both installation methods are similar:
cd "${ngx_src}"

export LUAJIT_SRC="${ngx_src}/luajit2"
export LUAJIT_LIB="/usr/local/lib"
export LUAJIT_INC="/usr/local/include/luajit-2.1"

# For original LuaJIT:
#   git clone http://luajit.org/git/luajit-2.0 luajit2
#   cd "$LUAJIT_SRC"

# For OpenResty's LuaJIT:
git clone --depth 1 https://github.com/openresty/luajit2

cd "$LUAJIT_SRC"

# Add to compile with debugging symbols:
#   CFLAGS='-g' make ...
make && make install

ln -s /usr/local/lib/libluajit-5.1.so.2.1.0 /usr/local/lib/liblua.so
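
To quickly verify the LuaJIT installation (the exact version string will vary):

luajit -v
# LuaJIT 2.1.0-beta3 ...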

sregex:

Required for replace-filter-nginx-module module.

cd "${ngx_src}"

git clone --depth 1 https://github.com/openresty/sregex

cd "${ngx_src}/sregex"

make && make install

jemalloc:

To verify jemalloc in use: lsof -n | grep jemalloc.

cd "${ngx_src}"

export JEMALLOC_SRC="/usr/local/src/jemalloc"
export JEMALLOC_INC="/usr/local/include/jemalloc"

git clone --depth 1 https://github.com/jemalloc/jemalloc

cd "$JEMALLOC_SRC"

./autogen.sh

make && make install

Update the links and cache for the shared libraries (needed for both types of installation):

ldconfig
Get Tengine sources
cd "${ngx_base}"

wget https://tengine.taobao.org/download/tengine-${ngx_version}.tar.gz

# or alternatively:
#   git clone --depth 1 https://github.com/alibaba/tengine master

tar zxvf tengine-${ngx_version}.tar.gz -C "${ngx_master}" --strip 1
Download 3rd party modules

Not all modules from this section work properly with Tengine (e.g. ndk_http_module and others that depend on it).

cd "${ngx_modules}"

for i in \
https://github.com/openresty/echo-nginx-module \
https://github.com/openresty/headers-more-nginx-module \
https://github.com/openresty/replace-filter-nginx-module \
https://github.com/nginx-clojure/nginx-access-plus \
https://github.com/yaoweibin/ngx_http_substitutions_filter_module \
https://github.com/vozlt/nginx-module-vts \
https://github.com/google/ngx_brotli ; do

  git clone --depth 1 "$i"

done

wget http://mdounin.ru/hg/ngx_http_delay_module/archive/tip.tar.gz -O delay-module.tar.gz
mkdir delay-module && tar xzvf delay-module.tar.gz -C delay-module --strip 1

For ngx_brotli:

cd "${ngx_modules}/ngx_brotli"

git submodule update --init

If you use NAXSI:

cd "${ngx_modules}"

git clone --depth 1 https://github.com/nbs-system/naxsi
Build Tengine
cd "${ngx_master}"

# - you can also build Tengine without 3rd party modules
# - remember about compiler and linker options
# - don't set values for --with-openssl, --with-pcre, and --with-zlib if you select prebuilt packages for them
# - add to compile with debugging symbols: -O0 -g
#   - and remove -D_FORTIFY_SOURCE=2 if you use above
./configure --prefix=/etc/nginx \
            --conf-path=/etc/nginx/nginx.conf \
            --sbin-path=/usr/sbin/nginx \
            --pid-path=/var/run/nginx.pid \
            --lock-path=/var/run/nginx.lock \
            --user=nginx \
            --group=nginx \
            --modules-path=/etc/nginx/modules \
            --error-log-path=/var/log/nginx/error.log \
            --http-log-path=/var/log/nginx/access.log \
            --http-client-body-temp-path=/var/cache/nginx/client_temp \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --with-compat \
            --with-debug \
            --with-file-aio \
            --with-threads \
            --with-stream \
            --with-stream_geoip_module \
            --with-stream_realip_module \
            --with-stream_ssl_module \
            --with-stream_ssl_preread_module \
            --with-http_addition_module \
            --with-http_auth_request_module \
            --with-http_degradation_module \
            --with-http_geoip_module \
            --with-http_gunzip_module \
            --with-http_gzip_static_module \
            --with-http_image_filter_module \
            --with-http_lua_module \
            --with-http_perl_module \
            --with-http_random_index_module \
            --with-http_realip_module \
            --with-http_secure_link_module \
            --with-http_ssl_module \
            --with-http_stub_status_module \
            --with-http_sub_module \
            --with-http_v2_module \
            --with-google_perftools_module \
            --with-openssl=${OPENSSL_SRC} \
            --with-openssl-opt="shared zlib no-ssl3 no-weak-ssl-ciphers -DOPENSSL_NO_HEARTBEATS -fstack-protector-strong ${_openssl_gcc}" \
            --with-pcre=${PCRE_SRC} \
            --with-pcre-jit \
            --with-jemalloc=${JEMALLOC_SRC} \
            --without-http-cache \
            --without-http_memcached_module \
            --without-mail_pop3_module \
            --without-mail_imap_module \
            --without-mail_smtp_module \
            --without-http_fastcgi_module \
            --without-http_scgi_module \
            --without-http_uwsgi_module \
            --without-http_upstream_keepalive_module \
            --add-module=${ngx_master}/modules/ngx_backtrace_module \
            --add-module=${ngx_master}/modules/ngx_debug_pool \
            --add-module=${ngx_master}/modules/ngx_debug_timer \
            --add-module=${ngx_master}/modules/ngx_http_footer_filter_module \
            --add-module=${ngx_master}/modules/ngx_http_lua_module \
            --add-module=${ngx_master}/modules/ngx_http_proxy_connect_module \
            --add-module=${ngx_master}/modules/ngx_http_reqstat_module \
            --add-module=${ngx_master}/modules/ngx_http_slice_module \
            --add-module=${ngx_master}/modules/ngx_http_sysguard_module \
            --add-module=${ngx_master}/modules/ngx_http_trim_filter_module \
            --add-module=${ngx_master}/modules/ngx_http_upstream_check_module \
            --add-module=${ngx_master}/modules/ngx_http_upstream_consistent_hash_module \
            --add-module=${ngx_master}/modules/ngx_http_upstream_dynamic_module \
            --add-module=${ngx_master}/modules/ngx_http_upstream_keepalive_module \
            --add-module=${ngx_master}/modules/ngx_http_upstream_session_sticky_module \
            --add-module=${ngx_master}/modules/ngx_http_user_agent_module \
            --add-module=${ngx_master}/modules/ngx_slab_stat \
            --add-module=${ngx_modules}/nginx-access-plus/src/c \
            --add-module=${ngx_modules}/ngx_http_substitutions_filter_module \
            --add-module=${ngx_modules}/nginx-module-vts \
            --add-module=${ngx_modules}/ngx_brotli \
            --add-dynamic-module=${ngx_modules}/echo-nginx-module \
            --add-dynamic-module=${ngx_modules}/headers-more-nginx-module \
            --add-dynamic-module=${ngx_modules}/replace-filter-nginx-module \
            --add-dynamic-module=${ngx_modules}/delay-module \
            --add-dynamic-module=${ngx_modules}/naxsi/naxsi_src \
            --with-cc-opt="-I/usr/local/include -I${OPENSSL_INC} -I${LUAJIT_INC} -I${JEMALLOC_INC} -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC" \
            --with-ld-opt="-Wl,-E -L/usr/local/lib -ljemalloc -lpcre -Wl,-rpath,/usr/local/lib/,-z,relro -Wl,-z,now -pie"

make -j2 && make test
make install

ldconfig

Check Tengine version:

nginx -v
Tengine version: Tengine/2.3.0
nginx version: nginx/1.15.9

And list all files in /etc/nginx:

tree
.
├── fastcgi.conf
├── fastcgi.conf.default
├── fastcgi_params
├── fastcgi_params.default
├── html
│   ├── 50x.html
│   └── index.html
├── koi-utf
├── koi-win
├── mime.types
├── mime.types.default
├── modules
│   ├── ngx_http_delay_module.so
│   ├── ngx_http_echo_module.so
│   ├── ngx_http_headers_more_filter_module.so
│   ├── ngx_http_naxsi_module.so
│   └── ngx_http_replace_filter_module.so
├── nginx.conf
├── nginx.conf.default
├── scgi_params
├── scgi_params.default
├── uwsgi_params
├── uwsgi_params.default
└── win-utf

2 directories, 22 files
Post installation tasks

Check all post installation tasks from the Nginx on CentOS 7 - Post installation tasks section.

Base Rules

This is the basic set of rules to keep NGINX in good condition.

🔰 Organising Nginx configuration

Rationale

As your NGINX configuration grows, the need to organise it will also grow. Well organised code is:

  • easier to understand
  • easier to maintain
  • easier to work with

Use the include directive to move common server settings into separate files and to attach your NGINX-specific code to the global config or to other contexts.

I always try to keep multiple directories in the root of the configuration tree. These directories store all configuration files which are attached to the main nginx.conf file. I prefer the following structure:

  • html - for default static files, e.g. global 5xx error page
  • master - for main configuration, e.g. acls, listen directives and domains
    • _acls - for access control lists (with geo or map modules)
    • _basic - for rate limiting rules, redirect maps or proxy params
    • _listen - for all listen directives; also stores SSL configuration
    • _server - for domains (localhost) configuration; also stores backends definitions
  • modules - for modules which are dynamically loading into NGINX
  • snippets - for NGINX's aliases, configuration of logrotate and other

I attach some of them, if necessary, to files which contain server directives.

Example
# Store this configuration in e.g. https-ssl-common.conf
listen 10.240.20.2:443 ssl;

root /etc/nginx/error-pages/other;

ssl_certificate /etc/nginx/domain.com/certs/nginx_domain.com_bundle.crt;
ssl_certificate_key /etc/nginx/domain.com/certs/domain.com.key;

# And include this file in server section:
server {

  include /etc/nginx/domain.com/commons/https-ssl-common.conf;

  server_name domain.com www.domain.com;

  ...
External resources

🔰 Format, prettify and indent your Nginx code

Rationale

Working with unreadable configuration files is terrible: if the syntax isn't very readable, it makes your eyes sore and gives you headaches.

When your code is formatted, it is significantly easier to maintain, debug, optimise, and can be read and understood in a short amount of time. You should eliminate code style violations from your NGINX configuration files.

Choose your formatter style and setup a common config for it. Some rules are universal, but the most important thing is to keep a consistent NGINX code style throughout your code base:

  • use whitespaces and blank lines to arrange and separate code blocks
  • use tabs for indents - they are consistent, customizable and allow mistakes to be more noticeable (unless you are a 4 space kind of guy)
  • use comments to explain why things are done not what is done
  • use meaningful naming conventions
  • simple is better than complex but complex is better than complicated

Of course, the NGINX configuration code is a micro programming language. Some would say that NGINX's files are written in their own language or syntax, so we should not overdo it. I think it's worth sticking to the general (programming) rules to make your life and the lives of other NGINX administrators easier.

Example
# Good code style:
http {

  # Attach global rules:
  include         /etc/nginx/proxy.conf;
  include         /etc/nginx/fastcgi.conf;

  index           index.html index.htm index.php;

  default_type    application/octet-stream;

  # Standard log format:
  log_format      main '$remote_addr - $remote_user [$time_local]  $status '
                       '"$request" $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';

  access_log      /var/log/nginx/access.log main;

  sendfile        on;
  tcp_nopush      on;

  # This seems to be required for some vhosts:
  server_names_hash_bucket_size 128;

  ...

# Bad code style:
http {
  include    nginx/proxy.conf;
  include    /etc/nginx/fastcgi.conf;
  index    index.html index.htm index.php;

  default_type application/octet-stream;
  log_format   main '$remote_addr - $remote_user [$time_local]  $status '
    '"$request" $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';
  access_log   logs/access.log    main;
  sendfile on;
  tcp_nopush   on;
  server_names_hash_bucket_size 128; # this seems to be required for some vhosts

  ...
External resources

🔰 Use reload method to change configurations on the fly

Rationale

Use the reload method of NGINX to achieve a graceful reload of the configuration without stopping the server and dropping any packets. This function of the master process allows NGINX to roll back the changes and continue working with the stable, old configuration.

This ability of NGINX is critical in high-uptime, dynamic environments for keeping the load balancer or standalone server online.

The master process checks the syntax validity of the new configuration and tries to apply all changes. If this succeeds, the master process creates new worker processes and sends shutdown messages to the old ones. Old workers stop accepting new connections after receiving the shutdown signal, but continue to process current requests. After that, the old workers exit.

When you restart NGINX, you might encounter a situation in which NGINX stops and won't start back again because of a syntax error. The reload method is safer than restarting because the new configuration file is parsed before the old process is terminated, and the whole operation is aborted if there are any problems with it.

To stop processes while waiting for the worker processes to finish serving current requests, use the nginx -s quit command. It's better than nginx -s stop, which performs a fast shutdown.

From NGINX's documentation:

In order for NGINX to re-read the configuration file, a HUP signal should be sent to the master process. The master process first checks the syntax validity, then tries to apply new configuration, that is, to open log files and new listen sockets. If this fails, it rolls back changes and continues to work with old configuration. If this succeeds, it starts new worker processes, and sends messages to old worker processes requesting them to shut down gracefully. Old worker processes close listen sockets and continue to service old clients. After all clients are serviced, old worker processes are shut down.

Example
# 1)
systemctl reload nginx

# 2)
service nginx reload

# 3)
/etc/init.d/nginx reload

# 4)
/usr/sbin/nginx -s reload

# 5)
kill -HUP $(cat /var/run/nginx.pid)
# or
kill -HUP $(pgrep -f "nginx: master")

# 6)
/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
External resources

🔰 Separate listen directives for 80 and 443

Rationale

If you serve HTTP and HTTPS with the exact same config (a single server that handles both HTTP and HTTPS requests), NGINX is intelligent enough to ignore the SSL directives for connections made over port 80.

I don't like duplicating rules, but separate listen directives certainly help you maintain and modify your configuration.

Example
# For HTTP:
server {

  listen 10.240.20.2:80;

  ...

}

# For HTTPS:
server {

  listen 10.240.20.2:443 ssl;

  ...

}

A single HTTP/HTTPS server:

server {

  listen 10.240.20.2:80;
  listen 10.240.20.2:443 ssl;

  ...

}
External resources

🔰 Define the listen directives explicitly with address:port pair

Rationale

NGINX translates all incomplete listen directives by substituting missing values with their default values.

NGINX will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level in the listen directive.

Set the IP address and port number explicitly to prevent soft mistakes which may be difficult to debug.

Example
server {

  # This block will be processed:
  listen 192.168.252.10;  # --> 192.168.252.10:80

  ...

}

server {

  listen 80;  # --> *:80 --> 0.0.0.0:80
  server_name api.random.com;

  ...

}
External resources

🔰 Prevent processing requests with undefined server names

Rationale

NGINX should prevent processing requests with undefined server names (also on an IP address). It also protects against configuration errors and avoids passing traffic to incorrect backends. The problem is easily solved by creating a default catch-all server config.

If none of the listen directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair (it means that NGINX always has a default server).

If someone makes a request using an IP address instead of a server name, the Host request header field will contain the IP address and the request can be handled using the IP address as the server name.

The server_name _ is not required in modern versions of NGINX. If a server with a matching listen and server_name cannot be found, NGINX will use the default server. If your configurations are spread across multiple files, their evaluation order will be ambiguous, so you need to mark the default server explicitly.

It is a simple procedure for all non-defined server names:

  • one server block, with...
  • complete listen directive, with...
  • default_server parameter, with...
  • only one server_name definition, and...
  • preventively I add it at the beginning of the configuration

Another good practice is return 444; for the default server name, because this will close the connection and log it internally, for any domain that isn't defined in NGINX.

Example
# Place it at the beginning of the configuration file to prevent mistakes.
server {

  # Add default_server to your listen directive in the server that you want to act as the default.
  listen 10.240.20.2:443 default_server ssl;

  # We catch:
  #   - invalid domain names
  #   - requests without the "Host" header
  #   - and all others (also due to the above setting)
  #   - default_server in server_name directive is not required - I add this for a better understanding and I think it's an unwritten standard
  # ...but you should know that it's irrelevant, really - you can put anything in there.
  server_name _ "" default_server;

  ...

  return 444;

  # We can also serve:
  # location / {

    # static file (error page):
    # root /etc/nginx/error-pages/404;
    # or redirect:
    # return 301 https://badssl.com;

    # return 444;

  # }

}

server {

  listen 10.240.20.2:443 ssl;

  server_name domain.com;

  ...

}

server {

  listen 10.240.20.2:443 ssl;

  server_name domain.org;

  ...

}
External resources

🔰 Use only one SSL config for the listen directive

Rationale

For sharing a single IP address between several HTTPS servers you should use one SSL config (e.g. protocols, ciphers, curves), because changes will only take effect for the default server.

Remember that regardless of SSL parameters you are able to use multiple SSL certificates on the same listen directive (IP address).

Another good idea is to move common server settings into a separate file, e.g. common/example.com.conf, and then include it in separate server blocks.

If you want to set up different SSL configurations for the same IP address, it will fail. This is important because the SSL configuration is taken from the default server: if none of the listen directives have the default_server parameter, then the first server in your configuration will be the default server. So you should use only one SSL setup with several names on the same IP address. This also prevents mistakes and configuration mismatches.

From NGINX's documentation:

This is caused by SSL protocol behaviour. The SSL connection is established before the browser sends an HTTP request and nginx does not know the name of the requested server. Therefore, it may only offer the default server’s certificate.

Also take a look at this:

A more generic solution for running several HTTPS servers on a single IP address is TLS Server Name Indication extension (SNI, RFC 6066), which allows a browser to pass a requested server name during the SSL handshake and, therefore, the server will know which certificate it should use for the connection.
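
To check whether your NGINX binary was built with SNI support:

nginx -V 2>&1 | grep 'TLS SNI'
# TLS SNI support enabled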

Example
# Store this configuration in e.g. https.conf
listen 192.168.252.10:443 default_server ssl http2;

ssl_protocols TLSv1.2;
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384";

ssl_prefer_server_ciphers on;

ssl_ecdh_curve secp521r1:secp384r1;

...

# Include this file to the server context (attach domain-a.com for specific listen directive)
server {

  include             /etc/nginx/https.conf;

  server_name         domain-a.com;

  ssl_certificate     domain-a.com.crt;
  ssl_certificate_key domain-a.com.key;

  ...

}

# Include this file to the server context (attach domain-b.com for specific listen directive)
server {

  include             /etc/nginx/https.conf;

  server_name         domain-b.com;

  ssl_certificate     domain-b.com.crt;
  ssl_certificate_key domain-b.com.key;

  ...

}
External resources

🔰 Use geo/map modules instead allow/deny

Rationale

Use the map or geo module (one of them) to prevent users from abusing your servers. They allow you to create variables with values depending on the client IP address.

Since variables are evaluated only when used, the mere existence of even a large number of declared variables (e.g. geo variables) does not cause any extra cost for request processing.

These directives provide a perfect way to block invalid visitors, e.g. with ngx_http_geoip_module (see the sketch after the examples below).

I use both modules for large lists. Think it over before you apply this rule, because it requires several if conditions; I think allow/deny directives are a better solution for simple lists, after all. Take a look at the example below:

# Allow/deny:
location /internal {

  include acls/internal.conf;
  allow   192.168.240.0/24;
  deny    all;

  ...

# vs geo/map:
location /internal {

  if ($globals_internal_map_acl) {
    set $pass 1;
  }

  if ($pass = 1) {
    proxy_pass http://localhost:80;
  }

  if ($pass != 1) {
    return 403;
  }

  ...

}
Example
# Map module:
# (the map directive matches strings and regular expressions, not CIDR
#  networks, so address ranges have to be written as regexes)
map $remote_addr $globals_internal_map_acl {

  # Status code:
  #  - 0 = false
  #  - 1 = true
  default 0;

  ### INTERNAL ###
  ~^10\.255\.10\.  1;
  ~^10\.255\.20\.  1;
  ~^10\.255\.30\.  1;
  ~^192\.168\.     1;

}

# Geo module:
geo $globals_internal_geo_acl {

  # Status code:
  #  - 0 = false
  #  - 1 = true
  default 0;

  ### INTERNAL ###
  10.255.10.0/24 1;
  10.255.20.0/24 1;
  10.255.30.0/24 1;
  192.168.0.0/16 1;

}
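
# GeoIP module - a sketch of a country-based ACL; it assumes the GeoIP
# country database is installed under the path shown:
geoip_country /usr/share/GeoIP/GeoIP.dat;

map $geoip_country_code $globals_country_acl {

  # Status code:
  #  - 0 = false
  #  - 1 = true
  default 0;

  ### ALLOWED COUNTRIES ###
  US 1;
  DE 1;

}

# Then, in a server or location context:
if ($globals_country_acl = 0) {

  return 403;

}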
External resources

🔰 Map all the things...

Rationale

Manage a large number of redirects with maps and use them to customise your key-value pairs.

The map directive maps strings, so it is possible to represent e.g. 192.168.144.0/24 as a regular expression and continue to use the map directive.

The map module provides a more elegant solution for clearly parsing a big list of regexes, e.g. User-Agents, Referrers.

You can also use the include directive for your maps so your config files look tidy (see the sketch after the example below).

Example
map $http_user_agent $device_redirect {

  default "desktop";

  ~(?i)ip(hone|od) "mobile";
  ~(?i)android.*(mobile|mini) "mobile";
  ~Mobile.+Firefox "mobile";
  ~^HTC "mobile";
  ~Fennec "mobile";
  ~IEMobile "mobile";
  ~BB10 "mobile";
  ~SymbianOS.*AppleWebKit "mobile";
  ~Opera\sMobi "mobile";

}

# Turn on in a specific context (e.g. location):
if ($device_redirect = "mobile") {

  return 301 https://m.domain.com$request_uri;

}
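
Maps can also live in separate files attached with include; a sketch, assuming you keep them e.g. in /etc/nginx/master/_basic (both the path and file name are illustrative, following the directory layout proposed earlier):

# In the http context:
include /etc/nginx/master/_basic/redirect-maps.conf;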
External resources

🔰 Drop the same root inside location block

Rationale

If you add a root to every location block, then a location block that isn't matched will have no root. Set a global root inside the server directive instead.

Example
server {

  server_name domain.com;

  root /var/www/domain.com/public;

  location / {

    ...

  }

  location /api {

    ...

  }

  location /static {

    root /var/www/domain.com/static;

    ...

  }

}
External resources

🔰 Configure log rotation policy

Rationale

Log files give you feedback about the activity and performance of the server as well as any problems that may be occurring. They record details about requests and NGINX internals. But logs also use more disk space.

You should define a process which periodically archives the current log file and starts a new one: it renames and optionally compresses the current log files, deletes old log files, and forces the logging system to begin using new log files.

I think the best tool for this is logrotate. I use it everywhere I want to manage logs automatically, and for a good night's sleep also. It is a simple program to rotate logs that runs from crontab. It's a scheduled job, not a daemon, so there is no need to reload its configuration.

Example
  • for manually rotation:

    # Check manually (all log files):
    logrotate -dv /etc/logrotate.conf
    
    # Check manually with force rotation (specific log file):
    logrotate -dv --force /etc/logrotate.d/nginx
  • for automate rotation:

    cat > /etc/logrotate.d/nginx << __EOF__
    /var/log/nginx/*.log {
      daily
      missingok
      rotate 14
      compress
      delaycompress
      notifempty
      create 0640 nginx nginx
      sharedscripts
      prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
          run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
      endscript
      postrotate
        # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
        invoke-rc.d nginx reload >/dev/null 2>&1
      endscript
    }
    
    /var/log/nginx/localhost/*.log {
      daily
      missingok
      rotate 14
      compress
      delaycompress
      notifempty
      create 0640 nginx nginx
      sharedscripts
      prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
          run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
      endscript
      postrotate
        # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
        invoke-rc.d nginx reload >/dev/null 2>&1
      endscript
    }
    
    /var/log/nginx/domains/example.com/*.log {
      daily
      missingok
      rotate 14
      compress
      delaycompress
      notifempty
      create 0640 nginx nginx
      sharedscripts
      prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
          run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
      endscript
      postrotate
        # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
        invoke-rc.d nginx reload >/dev/null 2>&1
      endscript
    }
    __EOF__
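
On systems without invoke-rc.d (used in the postrotate scripts above), the commented-out kill -USR1 line is the portable alternative: the USR1 signal tells the NGINX master process to reopen its log files. A hedged equivalent from the command line:

    # Ask the master process to reopen log files (pid file path may differ):
    kill -USR1 "$(cat /var/run/nginx.pid)"

    # Or use the binary's built-in signal helper:
    nginx -s reopen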
External resources

Debugging

NGINX has many methods for troubleshooting configuration problems. In this chapter I will present a few ways to deal with them.

🔰 Use debug mode to track down unexpected behaviour

Rationale

Debug logging probably produces more detail than you want, but it can sometimes be a lifesaver (beware, though: the log file grows rapidly on very high-traffic sites).

Generally, the error_log directive is specified in the main context, but you can specify it inside a particular server or location block; the global settings will then be overridden and such an error_log directive will set its own path to the log file and its own level of logging.

It is possible to enable the debugging log for a particular IP address or a range of IP addresses (see the examples).

An alternative method of storing the debug log is to keep it in memory (in a cyclic memory buffer). The memory buffer at the debug level does not have a significant impact on performance, even under high load.

If you want logging of ngx_http_rewrite_module (at the notice level) you should enable rewrite_log on; in an http, server, or location context (see the last example below).

Words of caution:

  • never leave debug logging to a file enabled in production
  • don't forget to revert the debug level for error_log on very high-traffic sites
  • absolutely use log rotation policy
Example
  • Debugging log to a file:
# Turn on in a specific context, e.g.:
#   - global    - for global logging
#   - http      - for http and all locations logging
#   - location  - for specific location
error_log /var/log/nginx/error-debug.log debug;
  • Debugging log to memory:

    error_log memory:32m debug;

    To learn how to analyse the error log kept in memory, read the Show debug log in memory chapter.

  • Debugging log for a IP address/range:

    events {
    
      debug_connection    192.168.252.15/32;
      debug_connection    10.10.10.0/24;
    
    }
  • Debugging log for each server:

    error_log /var/log/nginx/debug.log debug;
    
    ...
    
    http {
    
      server {
    
        # To enable debugging:
        error_log /var/log/nginx/domain.com/domain.com-debug.log debug;
        # To disable debugging:
        error_log /var/log/nginx/domain.com/domain.com-debug.log;
    
        ...
    
      }
    
    }
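  • Logging of the rewrite module, as mentioned in the rationale (a minimal sketch; the log path is illustrative, and notice is the level at which rewrite_log writes):

    server {

      error_log /var/log/nginx/rewrite-debug.log notice;
      rewrite_log on;

      ...

    }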
External resources

🔰 Use custom log formats

Rationale

Anything you can access as a variable in the NGINX config you can log, including non-standard HTTP headers, so it's a simple way to create your own log format for specific situations.

This is extremely helpful for debugging specific location directives.

Example
# Default main log format from NGINX repository:
log_format main
                '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

# Extended main log format:
log_format main-level-0
                '$remote_addr - $remote_user [$time_local] '
                '"$request_method $scheme://$host$request_uri '
                '$server_protocol" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '$request_time';

# Debug log formats:
log_format debug-level-0
                '$remote_addr - $remote_user [$time_local] '
                '"$request_method $scheme://$host$request_uri '
                '$server_protocol" $status $body_bytes_sent '
                '$request_id $pid $msec $request_time '
                '$upstream_connect_time $upstream_header_time '
                '$upstream_response_time "$request_filename" '
                '$request_completion';

log_format debug-level-1
                '$remote_addr - $remote_user [$time_local] '
                '"$request_method $scheme://$host$request_uri '
                '$server_protocol" $status $body_bytes_sent '
                '$request_id $pid $msec $request_time '
                '$upstream_connect_time $upstream_header_time '
                '$upstream_response_time "$request_filename" $request_length '
                '$request_completion $connection $connection_requests '
                '"$http_user_agent"';

log_format debug-level-2
                '$remote_addr - $remote_user [$time_local] '
                '"$request_method $scheme://$host$request_uri '
                '$server_protocol" $status $body_bytes_sent '
                '$request_id $pid $msec $request_time '
                '$upstream_connect_time $upstream_header_time '
                '$upstream_response_time "$request_filename" $request_length '
                '$request_completion $connection $connection_requests '
                '$remote_addr $remote_port $server_addr $server_port '
                '$http_x_forwarded_for "$http_referer" "$http_user_agent"';

# Debug log format for SSL:
log_format debug-ssl-level-0
                '$remote_addr - $remote_user [$time_local] '
                '"$request_method $scheme://$host$request_uri '
                '$server_protocol" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '$request_time '
                '$ssl_protocol $ssl_cipher';

# Log format for GeoIP module (ngx_http_geoip_module):
log_format geoip-level-0
                '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                '"$geoip_area_code $geoip_city_country_code $geoip_country_code"';
External resources

🔰 Memory analysis from core dumps

Rationale

A core dump is basically a snapshot of the memory when the program crashed.

NGINX is a very stable daemon, but sometimes a running NGINX process can terminate unexpectedly.

NGINX provides the directives below, which should be enabled if you want memory dumps to be saved; however, in order to properly handle memory dumps there are a few more things to do. For full information see Dump a process's memory (from this Handbook).

You should always enable core dumps when your NGINX instance receives an unexpected error or crashes.

Example
worker_rlimit_core    500m;
worker_rlimit_nofile  65535;
working_directory     /var/dump/nginx;
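
Note that the working_directory must exist and be writable by the worker processes, or nothing will be dumped; a hedged preparation sketch matching the path and user from the examples in this handbook:

# Create the dump directory and hand it over to the NGINX user:
mkdir -p /var/dump/nginx
chown nginx:nginx /var/dump/nginx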
External resources

Performance

NGINX is insanely fast, but you can adjust a few things to make sure it's as fast as possible for your use case.

🔰 Adjust worker processes

Rationale

The worker_processes directive is the sturdy spine of life for NGINX. This directive is responsible for letting our virtual server know how many workers to spawn once it has become bound to the proper IP and port(s).

Rule of thumb: if much time is spent blocked on I/O, worker processes should be increased further.

I think for high-load proxy servers (and standalone servers too) an interesting value is ALL_CORES - 1 (or more), because if you're running NGINX alongside other critical services on the same server, you're just going to thrash the CPUs with all the context switching required to manage all of those processes.

The official NGINX documentation says:

When one is in doubt, setting it to the number of available CPU cores would be a good start (the value "auto" will try to autodetect it). [...] running one worker process per CPU core – makes the most efficient use of hardware resources.

Example
# vCPU = 4, so: expr $(nproc --all) - 1 = 3
worker_processes 3;
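
Alternatively, as the documentation quoted above suggests, you can let NGINX autodetect the number of cores:

# Spawn one worker per detected CPU core:
worker_processes auto;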
External resources

🔰 Use HTTP/2

Rationale

The primary goals for HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimise protocol overhead via efficient compression of HTTP header fields, and add support for request prioritisation and server push.

HTTP/2 will make our applications faster, simpler, and more robust.

HTTP/2 is backwards-compatible with HTTP/1.1, so it is possible to ignore it completely and everything will continue to work as before: a client that does not support HTTP/2 will never ask the server for an HTTP/2 upgrade, and the communication between them will be plain HTTP/1.1.

Also include the ssl parameter, which is required because browsers do not support HTTP/2 without encryption.

HTTP/2 has an extremely large blacklist of old and insecure ciphers, so you should avoid them.

Example
# For https:
server {

  listen 10.240.20.2:443 ssl http2;

  ...
External resources

🔰 Maintaining SSL sessions

Rationale

This improves performance from the clients' perspective, because it eliminates the need for a new (and time-consuming) SSL handshake to be conducted each time a request is made.

Most servers do not purge sessions or ticket keys, thus increasing the risk that a server compromise would leak data from previous (and future) connections.

Set the SSL session timeout to 5 minutes to prevent long-lived session identifiers from being abused for tracking by advertisers like Google and Facebook.

Example
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
ssl_session_tickets off;
ssl_buffer_size 1400;
External resources

🔰 Use exact names in a server_name directive where possible

Rationale

Exact names, wildcard names starting with an asterisk, and wildcard names ending with an asterisk are stored in three hash tables bound to the listen ports.

The exact names hash table is searched first. If a name is not found, the hash table with wildcard names starting with an asterisk is searched. If the name is not found there, the hash table with wildcard names ending with an asterisk is searched. Searching wildcard names hash table is slower than searching exact names hash table because names are searched by domain parts.

Regular expressions are tested sequentially and therefore are the slowest method and are non-scalable. For these reasons, it is better to use exact names where possible.

Example
# It is more efficient to define them explicitly:
server {

    listen       80;

    server_name  example.org  www.example.org  *.example.org;

    ...

}

# than to use the simplified form:
server {

    listen       80;

    server_name  .example.org;

    ...

}
External resources

🔰 Avoid checking server_name with the if directive

Rationale

When NGINX receives a request, this if directive is always evaluated, no matter which subdomain is being requested, be it www.example.com or just the plain example.com: you are asking NGINX to check the Host header for every request, which is extremely inefficient.

Instead, use two server directives like the example below. This approach decreases NGINX's processing requirements.

Example

Bad configuration:

server {

  ...

  server_name                 domain.com www.domain.com;

  if ($host = www.domain.com) {

    return                    301 https://domain.com$request_uri;

  }

  server_name                 domain.com;

  ...

}

Good configuration:

server {

    server_name               www.domain.com;
    return                    301 $scheme://domain.com$request_uri;
    # If you force your web traffic to use HTTPS:
    #                         301 https://domain.com$request_uri;

}

server {

    listen                    80;

    server_name               domain.com;

    ...

}
External resources

🔰 Use try_files directive to ensure a file exists

Rationale

try_files is definitely very useful: you can use the try_files directive to check that a file exists, in a specified order.

You should use try_files instead of the if directive. It's a better way than using if for this, because the if directive is extremely inefficient since it is evaluated for every request.

The advantage of using try_files is that the behaviour switches immediately with one command. I think the code is also more readable.

try_files allows you:

  • to check if the file exists from a predefined list
  • to check if the file exists from a specified directory
  • to use an internal redirect if none of the files are found
Example

Bad configuration:

  ...

  root /var/www/example.com;

  location /images {

    if (-f $request_filename) {

      expires 30d;
      break;

    }

  ...

}

Good configuration:

  ...

  root /var/www/example.com;

  location /images {

    try_files $uri =404;

  ...

}
External resources

🔰 Use return directive instead of rewrite for redirects

Rationale

You should use server blocks and return statements, as they're far simpler and faster than evaluating regular expressions via location blocks. The return directive stops processing and returns the specified code to the client.

Example
server {

  ...

  if ($host = api.domain.com) {

    return                    403;
    # or other examples:
    # return                  301 https://domain.com$request_uri;
    # return                  301 $scheme://$host$request_uri;

  }

  ...
External resources

🔰 Make an exact location match to speed up the selection process

Rationale

Exact location matches are often used to speed up the selection process by immediately ending the execution of the algorithm.

Example
# Matches the query / only and stops searching:
location = / {

  ...

}

# Matches the query /v9 only and stops searching:
location = /v9 {

  ...

}

...

# Matches any query, since all queries begin with /,
# but regular expressions and any longer conventional blocks will be matched first:
location / {

  ...

}
External resources

🔰 Use limit_conn to improve limiting the download speed

Rationale

NGINX provides two directives for limiting download speed:

  • limit_rate_after - sets the amount of data transferred before the limit_rate directive takes effect
  • limit_rate - allows you to limit the transfer rate of individual client connections (once past the limit_rate_after threshold)

This limits the download speed per connection, so if one user opens multiple connections (e.g. to several video files), they will be able to download at X times that rate in total.

To prevent this situation, use the limit_conn_zone and limit_conn directives.

Example
# Create limit connection zone:
limit_conn_zone $binary_remote_addr zone=conn_for_remote_addr:1m;

# Add rules to limiting the download speed:
limit_rate_after 1m;  # run at maximum speed for the first 1 megabyte
limit_rate 250k;      # and set rate limit after 1 megabyte

# Enable queue:
location /videos {

  # Max amount of data by one client: 10 megabytes (limit_rate_after * 10)
  limit_conn conn_for_remote_addr 10;

  ...
External resources

Hardening

In this chapter I will talk about some of the NGINX hardening approaches and security standards.

🔰 Always keep NGINX up-to-date

Rationale

NGINX is very secure and stable, but vulnerabilities in the main binary itself do pop up from time to time. That is the main reason to keep NGINX as up-to-date as you can.

A very safe way to plan updates is to apply them once a new stable version is released, but for me the most common way to handle NGINX updates is to wait a few weeks after the stable release.

Before you update/upgrade NGINX, remember to test the change in a testing environment first.

Most modern GNU/Linux distros will not push the latest version of NGINX into their default package lists, so maybe you should consider installing it from source.

External resources

🔰 Run as an unprivileged user

Rationale

There is no real difference in security just from changing the process owner name. On the other hand, the principle of least privilege states that an entity should be given no more permission than necessary to accomplish its goals within a given system. This way only the master process runs as root.

This is the default NGINX behaviour, but remember to check it.

Example
# Edit nginx.conf:
user nginx;

# Set owner and group for root (app, default) directory:
chown -R nginx:nginx /var/www/domain.com
External resources

🔰 Disable unnecessary modules

Rationale

It is recommended to disable any modules which are not required, as this will minimise the risk of potential attacks by limiting the operations allowed by the web server.

The best way to unload unused modules is to use the configure options during installation.

If a module was statically linked in, you have to re-compile NGINX to remove it; dynamically loaded modules can simply be commented out.

Example
# During installation:
./configure --without-http_autoindex_module

# Comment modules in the configuration file e.g. modules.conf:
# load_module                 /usr/share/nginx/modules/ndk_http_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_auth_pam_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_cache_purge_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_dav_ext_module.so;
load_module                   /usr/share/nginx/modules/ngx_http_echo_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_fancyindex_module.so;
load_module                   /usr/share/nginx/modules/ngx_http_geoip_module.so;
load_module                   /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_image_filter_module.so;
# load_module                 /usr/share/nginx/modules/ngx_http_lua_module.so;
load_module                   /usr/share/nginx/modules/ngx_http_perl_module.so;
# load_module                 /usr/share/nginx/modules/ngx_mail_module.so;
# load_module                 /usr/share/nginx/modules/ngx_nchan_module.so;
# load_module                 /usr/share/nginx/modules/ngx_stream_module.so;
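
Before deciding what to disable, you may want to check which modules your binary was built with; a small sketch (nginx -V prints its configure arguments to stderr):

# List the configure-time module switches of the running binary:
nginx -V 2>&1 | tr ' ' '\n' | grep -E -- '--(with|without|add)'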
External resources

🔰 Protect sensitive resources

Rationale

Hidden directories and files should never be web accessible; sometimes critical data is published during application deploys. If you use a version control system you should definitely drop access to critical hidden directories like .git or .svn, to prevent exposing the source code of your application.

Sensitive resources contain items that abusers can use to fully recreate the source code used by the site and look for bugs, vulnerabilities, and exposed passwords.

Example
if ($request_uri ~ "/\.git") {

  return 403;

}

# or
location ~ /\.git {

  deny all;

}

# or
location ~* ^.*(\.(?:git|svn|htaccess))$ {

  return 403;

}

# or all . directories/files excepted .well-known
location ~ /\.(?!well-known\/) {

  deny all;

}
External resources

🔰 Hide Nginx version number

Rationale

Disclosing the version of NGINX running can be undesirable, particularly in environments sensitive to information disclosure.

But the "Official Apache Documentation (Apache Core Features)" (yep, it's not a joke...) say:

Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.

Example
server_tokens off;
External resources

🔰 Hide Nginx server signature

Rationale

In my opinion there is no real reason or need to show this much information about your server. It is easy to look up particular vulnerabilities once you know the version number.

You should compile NGINX from source with ngx_headers_more to be able to use the more_set_headers directive.

Example
more_set_headers "Server: Unknown";
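
A hedged sketch of adding the module at build time (the source path is illustrative; the module is openresty's headers-more-nginx-module):

# Static compilation:
./configure --add-module=/usr/local/src/headers-more-nginx-module

# Or, since NGINX 1.9.11, as a dynamic module:
./configure --add-dynamic-module=/usr/local/src/headers-more-nginx-module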
External resources

🔰 Hide upstream proxy headers

Rationale

When NGINX is used to proxy requests to an upstream server (such as a PHP-FPM instance), it can be beneficial to hide certain headers sent in the upstream response (e.g. the version of PHP running).

Example
proxy_hide_header X-Powered-By;
proxy_hide_header X-AspNetMvc-Version;
proxy_hide_header X-AspNet-Version;
proxy_hide_header X-Drupal-Cache;
External resources

🔰 Force all connections over TLS

Rationale

TLS provides two main services. For one, it validates the identity of the server that the user is connecting to. It also protects the transmission of sensitive information from the user to the server.

In my opinion you should always use HTTPS instead of HTTP to protect your website, even if it doesn't handle sensitive communications. The application can have many sensitive places that should be protected.

Always put login pages, registration forms, all subsequent authenticated pages, contact forms, and payment detail forms behind HTTPS to prevent injection and sniffing. They must be accessed only over TLS to ensure your traffic is secure.

If a page is available over TLS, it must be composed completely of content which is transmitted over TLS. Requesting subresources over the insecure HTTP protocol weakens the security of the entire page. Modern browsers should block or report all active mixed content delivered via HTTP by default.

Also remember to implement HTTP Strict Transport Security (HSTS).

We now have the first free and open CA, Let's Encrypt, so generating and implementing certificates has never been so easy. It was created to provide free and easy-to-use TLS and SSL certificates.

Example
  • force all traffic to use TLS:

    server {
    
      listen 10.240.20.2:80;
    
      server_name domain.com;
    
      return 301 https://$host$request_uri;
    
    }
    
    server {
    
      listen 10.240.20.2:443 ssl;
    
      server_name domain.com;
    
      ...
    
    }
  • force e.g. login page to use TLS:

    server {
    
      listen 10.240.20.2:80;
    
      server_name domain.com;
    
      ...
    
      location ^~ /login {
    
        return 301 https://domain.com$request_uri;
    
      }
    
    }
External resources

🔰 Use only the latest supported OpenSSL version

Rationale

Before start see Release Strategy Policies and Changelog on the OpenSSL website.

Criteria for choosing an OpenSSL version can vary; it all depends on your use case.

The latest versions of the major OpenSSL branches are (this may change):

  • the next version of OpenSSL will be 3.0.0
  • version 1.1.1 will be supported until 2023-09-11 (LTS)
    • last minor version: 1.1.1c (May 28, 2019)
  • version 1.1.0 will be supported until 2019-09-11
    • last minor version: 1.1.0k (May 28, 2019)
  • version 1.0.2 will be supported until 2019-12-31 (LTS)
    • last minor version: 1.0.2s (May 28, 2019)
  • any other versions are no longer supported

In my opinion the only safe way is based on the up-to-date and still supported version of the OpenSSL. And what's more, I recommend to hang on to the latest versions (e.g. 1.1.1).

If your system repositories do not have the newest OpenSSL, you can do the compilation process (see OpenSSL sub-section).

External resources

🔰 Use min. 2048-bit private keys

Rationale

Advisories recommend 2048 bits for now. Security experts project that 2048 bits will be sufficient for commercial use until around the year 2030 (as per NIST).

The latest version of FIPS-186 also says that the U.S. Federal Government generates (and uses) digital signatures with 1024, 2048, or 3072 bit key lengths.

Generally there is no compelling reason to choose 4096 bit keys over 2048, provided you use sane expiration intervals.

If you want to get A+ with 100%s on SSL Labs (for Key Exchange) you should definitely use 4096 bit private keys. That's the main reason to use them.

Longer keys take more time to generate and require more CPU (try openssl speed rsa on your server) and power when used for encrypting and decrypting; the SSL handshake at the start of each connection will also be slower. It also has a small impact on the client side (e.g. browsers).

An alternative solution is an ECC Certificate Signing Request (CSR): ECDSA certificates contain an ECC public key. ECC keys are better than RSA & DSA keys in that the ECC algorithm is harder to break.

The "SSL/TLS Deployment Best Practices" book say:

The cryptographic handshake, which is used to establish secure connections, is an operation whose cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in "too much" security and slow operation. For most web sites, using RSA keys stronger than 2048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2048 bits for DHE and 256 bits for ECDHE.

Konstantin Ryabitsev (Reddit):

Generally speaking, if we ever find ourselves in a world where 2048-bit keys are no longer good enough, it won't be because of improvements in brute-force capabilities of current computers, but because RSA will be made obsolete as a technology due to revolutionary computing advances. If that ever happens, 3072 or 4096 bits won't make much of a difference anyway. This is why anything above 2048 bits is generally regarded as a sort of feel-good hedging theatre.

My recommendation:

Use 2048-bit keys instead of 4096-bit at this moment.

Example
### Example (RSA):
( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )

# Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com --rsa-key-size 2048

### Example (ECC):
# _curve: prime256v1, secp521r1, secp384r1
( _fd="domain.com.key" ; _fd_csr="domain.com.csr" ; _curve="prime256v1" ; \
openssl ecparam -out ${_fd} -name ${_curve} -genkey ; \
openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 )

# Let's Encrypt (from above):
certbot --csr ${_fd_csr} -[other-args]

For x25519:

( _fd="private.key" ; _curve="x25519" ; \
openssl genpkey -algorithm ${_curve} -out ${_fd} )

  :arrow_right: ssllabs score: 100%

( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )

# Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com

  :arrow_right: ssllabs score: 90%

External resources

🔰 Keep only TLS 1.3 and TLS 1.2

Rationale

It is recommended to run TLS 1.2/1.3 and fully disable SSLv2, SSLv3, TLS 1.0 and TLS 1.1, which have protocol weaknesses and use older cipher suites that do not provide any modern cipher modes.

TLS 1.0 and TLS 1.1 must not be used (see Deprecating TLSv1.0 and TLSv1.1); they were superseded by TLS 1.2, which has now itself been superseded by TLS 1.3. They are also actively being deprecated in accordance with guidance from government agencies (e.g. NIST SP 800-52r2) and industry consortia such as the Payment Card Industry Association (PCI) [PCI-TLS1].

TLS 1.2 and TLS 1.3 are both without known security issues, and only these versions provide modern cryptographic algorithms. TLS 1.3 is a new TLS version that will power a faster and more secure web for the next few years. The TLS 1.0 and TLS 1.1 protocols will be removed from browsers at the beginning of 2020.

TLS 1.2 does require careful configuration to ensure obsolete cipher suites with identified vulnerabilities are not used in conjunction with it. TLS 1.3 removes the need to make these decisions and also improves on TLS 1.2's security, privacy, and performance.

Before enabling a specific protocol version, you should check which ciphers are supported by that protocol. So if you turn on both TLS 1.2 and TLS 1.3, remember to configure correct (and strong) ciphers to handle them; otherwise they will not work anyway (no TLS handshake will succeed).

I think the best way to deploy a secure configuration is: enable TLS 1.2 without any CBC ciphers (that is safe enough); only TLS 1.3 is safer, because of its handshake improvements and the exclusion of everything that became obsolete since TLS 1.2 came out.

If you tell NGINX to use TLS 1.3, it will use TLS 1.3 only where it is available. NGINX has supported TLS 1.3 since version 1.13.0 (released in April 2017), when built against OpenSSL 1.1.1 or later.

My recommendation:

Use only TLSv1.3 and TLSv1.2.

Example

TLS 1.3 + 1.2:

ssl_protocols TLSv1.3 TLSv1.2;

TLS 1.2:

ssl_protocols TLSv1.2;

  :arrow_right: ssllabs score: 100%

TLS 1.3 + 1.2 + 1.1:

ssl_protocols TLSv1.3 TLSv1.2 TLSv1.1;

TLS 1.2 + 1.1:

ssl_protocols TLSv1.2 TLSv1.1;

  :arrow_right: ssllabs score: 95%

External resources

🔰 Use only strong ciphers

Rationale

This parameter changes quite often; the recommended configuration for today may be out of date tomorrow.

To check the ciphers supported by OpenSSL on your server, run: openssl ciphers -s -v, openssl ciphers -s -v ECDHE or openssl ciphers -s -v DHE.

For more security use only strong and non-vulnerable cipher suites. Place ECDHE and DHE suites at the top of your list. The order is important: because ECDHE suites are faster, you want to use them whenever clients support them. Ephemeral DHE/ECDHE suites are recommended and support Perfect Forward Secrecy.

For backward compatibility with older software components you may need less restrictive ciphers. Note that you have to enable at least one special AES128 cipher for HTTP/2 support according to RFC 7540: TLS 1.2 Cipher Suites; you also have to allow prime256 elliptic curves, which reduces the score for key exchange by another 10% even if a secure server-preferred order is set.

Also, modern cipher suites (e.g. from the Mozilla recommendations) suffer from compatibility troubles, mainly because they drop SHA-1. Be careful if you want to use ciphers with HMAC-SHA-1, though - there's a perfectly good explanation why.

If you want to get A+ with 100%s on SSL Labs (for Cipher Strength) you should definitely disable 128-bit ciphers. That's the main reason not to use them.

In my opinion, though, 128-bit symmetric encryption is not less secure. For example, TLS 1.3 uses TLS_AES_128_GCM_SHA256 (0x1301) (for TLS-compliant applications). It is not possible to control the TLS 1.3 ciphers without client support for the new TLSv1.3 cipher-suite API, so at this moment they are always on (even if you disable a potentially weak cipher in NGINX). On the other hand, the ciphers in TLSv1.3 have been restricted to only a handful of completely secure ciphers by leading crypto experts.

For TLS 1.2 you should consider disabling weak ciphers without forward secrecy, like ciphers using the CBC algorithm. Using them also reduces the final grade because they don't use ephemeral keys. In my opinion you should use ciphers with AEAD encryption (TLS 1.3 supports only these suites) because they don't have any known weaknesses.

Disable TLS cipher modes that use RSA encryption (all ciphers that start with TLS_RSA) because they are vulnerable to the ROBOT attack. Not all servers that support RSA key exchange are vulnerable, but it is recommended to disable RSA key exchange ciphers anyway, as they do not support forward secrecy.

You should also absolutely disable weak ciphers regardless of the TLS version you use: those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null or anon in the name.

We have a nice online tool for testing cipher-suite compatibility with user agents: CryptCheck. I think it will be very helpful for you.

My recommendation:

Use only TLSv1.3 and TLSv1.2 with below cipher suites:

ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";
Example

Cipher suites for TLS 1.3:

ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384";

Cipher suites for TLS 1.2:

ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";

  :arrow_right: ssllabs score: 100%

Cipher suites for TLS 1.3:

ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";

Cipher suites for TLS 1.2:

# 1)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";

# 2)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";

# 3)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";

# 4)
ssl_ciphers "EECDH+CHACHA20:EDH+AESGCM:AES256+EECDH:AES256+EDH";

Cipher suites for TLS 1.1 + 1.2:

# 1)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";

# 2)
ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!AES256-GCM-SHA256:!AES256-GCM-SHA128:!aNULL:!MD5";

  :arrow_right: ssllabs score: 90%

This will also give a baseline for comparison with Mozilla SSL Configuration Generator:

  • Modern profile with OpenSSL 1.1.0b (TLSv1.2)
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
  • Intermediate profile with OpenSSL 1.1.0b (TLSv1, TLSv1.1 and TLSv1.2)
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
External resources

🔰 Use more secure ECDH Curve

Rationale

In my opinion your main source of knowledge should be The SafeCurves web site. This site reports security assessments of various specific curves.

For a SSL server certificate, an "elliptic curve" certificate will be used only with digital signatures (ECDSA algorithm).

x25519 is a more secure option (it also meets the SafeCurves requirements) but it is slightly less compatible. I think that to maximise interoperability with existing browsers and servers you should stick to the P-256 (prime256v1) and P-384 (secp384r1) curves. Of course, there are tons of different opinions about the P-256 and P-384 curves.

NSA Suite B says that NSA uses curves P-256 and P-384 (in OpenSSL, they are designated as, respectively, prime256v1 and secp384r1). There is nothing wrong with P-521, except that it is, in practice, useless. Arguably, P-384 is also useless, because the more efficient P-256 curve already provides security that cannot be broken through accumulation of computing power.

Bernstein and Lange believe that the NIST curves are not optimal and there are better (more secure) curves that work just as fast, e.g. x25519.

Keep an eye also on this:

Secure implementations of the standard curves are theoretically possible but very hard.

The SafeCurves say:

  • NIST P-224, NIST P-256 and NIST P-384 are UNSAFE

Of the curves described here, only x25519 meets all SafeCurves requirements.

I think you can use P-256 to minimise trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use P-384, but it will increase your computational and network costs.

If you use TLS 1.3 you should enable the prime256v1 signature algorithm; without it, SSL Labs reports the TLS_AES_128_GCM_SHA256 (0x1301) signature as weak.

If you do not set ssl_ecdh_curve, NGINX will use its default settings; Chrome, for example, will then prefer x25519. This is not recommended, because you cannot control the defaults (which seem to be P-256) from NGINX.

Explicitly setting ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1; decreases the Key Exchange SSL Labs rating.

Definitely do not use the secp112r1, secp112r2, secp128r1, secp128r2, secp160k1, secp160r1, secp160r2 or secp192k1 curves. They are too small for security applications according to NIST recommendations.

My recommendation:

Use only TLSv1.3 and TLSv1.2 and only strong ciphers with above curves:

ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
Example

Curves for TLS 1.2:

ssl_ecdh_curve secp521r1:secp384r1:prime256v1;

  :arrow_right: ssllabs score: 100%

# Alternative (this one doesn’t affect compatibility, by the way; it’s just a question of the preferred order).

# This setup downgrade Key Exchange score but is recommended for TLS 1.2 + 1.3:
ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
External resources

🔰 Use strong Key Exchange

Rationale

The DH parameters are only used if DHE ciphers are negotiated. Modern clients prefer ECDHE instead, and if your NGINX accepts this preference then the handshake will not use the DH params at all, since it will do an ECDHE rather than a DHE key exchange.

Most of the modern profiles from places like Mozilla's SSL config generator no longer recommend using this.

The default DH key size in OpenSSL is 1024 bits, which is vulnerable and breakable. For the best security configuration, use your own 4096-bit DH group, or use well-known, pre-defined DH groups from Mozilla (recommended).

Example
# To generate a DH key:
openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096

# To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_4096.pem 4096

# To generate a ECDH key:
openssl ecparam -out /etc/nginx/ssl/ecparam.pem -name prime256v1

# NGINX configuration:
ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;

  :arrow_right: ssllabs score: 100%

External resources

🔰 Defend against the BEAST attack

Rationale

Generally, the BEAST attack relies on a weakness in the way CBC mode is used in SSL/TLS.

More specifically, to successfully perform the BEAST attack, there are some conditions which need to be met:

  • a vulnerable version of SSL must be used, with a block cipher (CBC in particular)
  • JavaScript or a Java applet injection - it should be in the same origin as the web site
  • data sniffing of the network connection must be possible

To prevent BEAST attacks you should enable server-side protection, which makes the server's cipher preferences take precedence over the client's, and completely exclude TLS 1.0 from your protocol stack.

Example
ssl_prefer_server_ciphers on;
External resources

🔰 Mitigation of CRIME/BREACH attacks

Rationale

Disable HTTP compression, or compress only content that contains zero sensitive data.

You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway. Disabling SSL/TLS compression stops the attack very effectively. A deployment of HTTP/2 over TLS 1.2 must disable TLS compression (see RFC 7540: 9.2. Use of TLS Features).

CRIME exploits SSL/TLS compression, which has been disabled since nginx 1.3.2. BREACH exploits HTTP compression.

Some attacks are possible (e.g. the real BREACH attack is a complicated one) because of gzip (HTTP compression, not TLS compression) being enabled on SSL requests. In most cases, the best action is to simply disable gzip for SSL.

Compression is not the only requirement for the attack, so using it does not mean that the attack will succeed. Generally you should consider whether an accidental performance drop on HTTPS sites is better than HTTPS sites being accidentally vulnerable.

You shouldn't use HTTP compression on private responses when using TLS.

I would prioritise security over performance, but compression can (I think) be okay for publicly available static content like CSS or JS, and for HTML content with zero sensitive info (like an "About Us" page).

Remember: by default, NGINX doesn't compress image files with its per-request gzip module.

Gzip static module is better, for 2 reasons:

  • you don't have to gzip for each request
  • you can use a higher gzip level

You should put gzip_static on; inside the blocks that configure static files, but if you're only running one site, it's safe to just put it in the http block.

Example
# Disable dynamic HTTP compression:
gzip off;

# Enable dynamic HTTP compression for specific location context:
location / {

  gzip on;

  ...

}

# Enable static gzip compression:
location ^~ /assets/ {

  gzip_static on;

  ...

}
External resources

🔰 HTTP Strict Transport Security

Rationale

Generally, HSTS is a way for websites to tell browsers that the connection should only ever be encrypted. This prevents MITM attacks, downgrade attacks, and sending plain-text cookies and session ids.

The header indicates for how long a browser should unconditionally refuse to take part in unsecured HTTP connections for a specific domain.

You had better be pretty sure that your website is indeed all-HTTPS before you turn this on, because HSTS adds complexity to your rollback strategy. Google recommends enabling HSTS this way:

  1. Roll out your HTTPS pages without HSTS first
  2. Start sending HSTS headers with a short max-age. Monitor your traffic both from users and other clients, and also dependents' performance, such as ads
  3. Slowly increase the HSTS max-age
  4. If HSTS doesn't affect your users and search engines negatively, you can, if you wish, ask your site to be added to the HSTS preload list used by most major browsers
Example
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;

  :arrow_right: ssllabs score: A+

External resources

🔰 Reduce XSS risks (Content-Security-Policy)

Rationale

CSP reduces the risk and impact of XSS attacks in modern browsers.

Whitelisting known-good resource origins, refusing to execute potentially dangerous inline scripts, and banning the use of eval are all effective mechanisms for mitigating cross-site scripting attacks.

CSP is a good defence-in-depth measure that makes the exploitation of an accidental lapse less likely.

Before enabling this header you should discuss it with your developers. They are probably going to have to update your application to remove any inline scripts and styles, and make some additional modifications there.

Example
# This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load.
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" always;
External resources

🔰 Control the behaviour of the Referer header (Referrer-Policy)

Rationale

Determines what referrer information is sent along with requests.

Example
add_header Referrer-Policy "no-referrer";
External resources

🔰 Provide clickjacking protection (X-Frame-Options)

Rationale

Helps to protect your visitors against clickjacking attacks. It is recommended that you use the X-Frame-Options header on pages which should not be allowed to be rendered in a frame.

Example
add_header X-Frame-Options "SAMEORIGIN" always;
External resources

🔰 Prevent some categories of XSS attacks (X-XSS-Protection)

Rationale

Enable the cross-site scripting (XSS) filter built into modern web browsers.

Example
add_header X-XSS-Protection "1; mode=block" always;
External resources

🔰 Prevent Sniff Mimetype middleware (X-Content-Type-Options)

Rationale

It prevents the browser from doing MIME-type sniffing (prevents "mime" based attacks).

Example
add_header X-Content-Type-Options "nosniff" always;
External resources

🔰 Deny the use of browser features (Feature-Policy)

Rationale

This header protects your site from third parties using APIs that have security and privacy implications, and also from your own team adding outdated APIs or poorly optimised images.

Example
add_header Feature-Policy "geolocation 'none'; midi 'none'; notifications 'none'; push 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; vibrate 'none'; fullscreen 'none'; payment 'none'; usb 'none';";
External resources

🔰 Reject unsafe HTTP methods

Rationale

This is the set of methods supported by a resource. An ordinary web server supports the HEAD, GET and POST methods to retrieve static and dynamic content. Other methods (e.g. OPTIONS, TRACE) should not be supported on public web servers, as they increase the attack surface.

Example
add_header Allow "GET, POST, HEAD" always;

if ($request_method !~ ^(GET|POST|HEAD)$) {

  return 405;

}
External resources

🔰 Prevent caching of sensitive data

Rationale

This policy should be implemented by the application architect; however, I know from experience that this does not always happen.

Don't cache or persist sensitive data. As browsers have different default behaviour for caching HTTPS content, pages containing sensitive information should include a Cache-Control header to ensure that the contents are not cached.

One option is to add anticaching headers to the relevant HTTP/1.1 and HTTP/2 responses, e.g. Cache-Control: no-cache, no-store and Expires: 0.

To cover the various browser implementations, the full set of headers to prevent content from being cached is:

Cache-Control: no-cache, no-store, private, must-revalidate, max-age=0, no-transform
Pragma: no-cache
Expires: 0

Example
location /api {

  expires 0;
  add_header Cache-Control "no-cache, no-store";

}
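
A hedged variant of the example above that emits the fuller header set from the rationale (the always parameter makes NGINX add the headers on error responses too):

location /api {

  expires 0;
  add_header Cache-Control "no-cache, no-store, private, must-revalidate, max-age=0, no-transform" always;
  add_header Pragma "no-cache" always;

}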
External resources

🔰 Control Buffer Overflow attacks

Rationale

Buffer overflow attacks are made possible by writing data to a buffer, exceeding that buffer's boundary, and overwriting memory fragments of a process. To prevent this in NGINX we can set buffer size limitations for all clients.

Example
client_body_buffer_size 100k;
client_header_buffer_size 1k;
client_max_body_size 100k;
large_client_header_buffers 2 1k;
External resources

🔰 Mitigating Slow HTTP DoS attacks (Closing Slow Connections)

Rationale

Close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thus reducing the server's ability to accept new connections).

Example
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
send_timeout 10s;
External resources

Reverse Proxy

One of the frequent uses of NGINX is setting it up as a proxy server.

To be completed.

Load Balancing

Load balancing is a useful mechanism for distributing incoming traffic around several capable servers. Here are some rules for tuning NGINX as a load balancer.

🔰 Tweak passive health checks

Rationale

Monitoring for health is important on all types of load balancing, mainly for business continuity. Passive checks watch for failed or timed-out connections as they pass through NGINX as requested by a client.

This functionality is enabled by default, but the parameters mentioned here allow you to tweak its behaviour. The defaults are max_fails=1 and fail_timeout=10s.

Example
upstream backend {

  server bk01_node:80 max_fails=3 fail_timeout=5s;
  server bk02_node:80 max_fails=3 fail_timeout=5s;

}
External resources

🔰 Don't disable backends by comments, use down parameter

Rationale

Sometimes we need to turn off backends, e.g. at maintenance time. I think a good solution is to mark the server as permanently unavailable with the down parameter, even if the downtime takes only a short time.

It's also important if you use the IP Hash load-balancing technique: if one of the servers needs to be temporarily removed, it should be marked with this parameter in order to preserve the current hashing of client IP addresses.

Comments are good for permanently disabling servers, or if you want to leave information for historical purposes.

NGINX also provides a backup parameter which marks the server as a backup server; it will be passed requests when the primary servers are unavailable. I use this option rarely for the above purposes, and only if I am sure that the backends will work at maintenance time.

Example
upstream backend {

  server bk01_node:80 max_fails=3 fail_timeout=5s down;
  server bk02_node:80 max_fails=3 fail_timeout=5s;

}
External resources

Others

These rules aren't strictly related to NGINX, but in my opinion they're also a very important aspect of security.

🔰 Enable DNS CAA Policy

Rationale

A DNS CAA policy helps you control which Certificate Authorities are allowed to issue certificates for your domain, because if no CAA record is present, any CA is allowed to issue a certificate for the domain.

Example

Generic configuration (Google Cloud DNS, Route 53, OVH, and other hosted services) for Let's Encrypt:

example.com. CAA 0 issue "letsencrypt.org"

Standard Zone File (BIND, PowerDNS and Knot DNS) for Let's Encrypt:

example.com. IN CAA 0 issue "letsencrypt.org"
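
To verify the record once it has been published, a quick check with dig (assuming a version recent enough to know the CAA type):

# Query the CAA record for your domain:
dig +short CAA example.com

# Older dig releases may need the raw type number (CAA is type 257):
dig +short TYPE257 example.com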
External resources

🔰 Define security policies with security.txt

Rationale

The main purpose of security.txt is to help make things easier for companies and security researchers when trying to secure platforms. It also provides information to assist in disclosing security vulnerabilities.

When security researchers detect potential vulnerabilities in a page or application, they will try to contact someone "appropriate" to "responsibly" disclose the problem. It's worth making sure that they can reach the right address.

This file should be placed under the /.well-known/ path, e.g. /.well-known/security.txt (RFC5785) of a domain name or IP address for web properties.

Example
curl -ks https://example.com/.well-known/security.txt

Contact: [email protected]
Contact: +1-209-123-0123
Encryption: https://example.com/pgp.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy.html

And from Google:

curl -ks https://www.google.com/.well-known/security.txt

Contact: https://g.co/vulnz
Contact: mailto:[email protected]
Encryption: https://services.google.com/corporate/publickey.txt
Acknowledgements: https://bughunter.withgoogle.com/
Policy: https://g.co/vrp
Hiring: https://g.co/SecurityPrivacyEngJobs
# Flag: BountyCon{075e1e5eef2bc8d49bfe4a27cd17f0bf4b2b85cf}
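
On the NGINX side, serving the file can be as simple as an exact-match location; a minimal sketch (the filesystem path is illustrative):

location = /.well-known/security.txt {

  alias /var/www/security.txt;

}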
External resources

Configuration Examples

Remember to make a copy of the current configuration and all files/directories.

This chapter is still work in progress.

Installation

I used step-by-step tutorial from this handbook Installation from source.

Configuration

I used Google Cloud instance with following parameters:

| ITEM | VALUE | COMMENT |
| ---- | ----- | ------- |
| VM | Google Cloud Platform | |
| vCPU | 2x | |
| Memory | 4096MB | |
| HTTP | Varnish on port 80 | |
| HTTPS | NGINX on port 443 | |

Reverse Proxy

This chapter describes the basic configuration of my proxy server (for blkcipher.info domain).

Configuration is based on the installation from source chapter. If you go through the installation process step by step you can use the following configuration (minor adjustments may be required).

Import configuration

It's very simple: clone the repo, back up your current configuration, and perform a full directory sync:

git clone https://github.com/trimstray/nginx-admins-handbook

tar czvfp ~/nginx.etc.tgz /etc/nginx && mv /etc/nginx /etc/nginx.old

rsync -avur lib/nginx/ /etc/nginx/

If you compiled NGINX from source you should also update/refresh modules. All compiled modules are stored in /usr/local/src/nginx-${ngx_version}/master/objs and installed in accordance with the value of the --modules-path variable.

Set bind IP address

Find and replace 192.168.252.2 string in directory and file names
cd /etc/nginx
find . -depth -not -path '*/\.git*' -name '*192.168.252.2*' -execdir bash -c 'mv -v "$1" "${1//192.168.252.2/xxx.xxx.xxx.xxx}"' _ {} \;
Find and replace 192.168.252.2 string in configuration files
cd /etc/nginx
find . -not -path '*/\.git*' -type f -print0 | xargs -0 sed -i 's/192.168.252.2/xxx.xxx.xxx.xxx/g'

Set your domain name

Find and replace blkcipher.info string in directory and file names
cd /etc/nginx
find . -not -path '*/\.git*' -depth -name '*blkcipher.info*' -execdir bash -c 'mv -v "$1" "${1//blkcipher.info/example.com}"' _ {} \;
Find and replace blkcipher.info string in configuration files
cd /etc/nginx
find . -not -path '*/\.git*' -type f -print0 | xargs -0 sed -i 's/blkcipher_info/example_com/g'
find . -not -path '*/\.git*' -type f -print0 | xargs -0 sed -i 's/blkcipher.info/example.com/g'

Regenerate private keys and certs

For localhost
cd /etc/nginx/master/_server/localhost/certs

# Private key + Self-signed certificate:
( _fd="localhost.key" ; _fd_crt="nginx_localhost_bundle.crt" ; \
openssl req -x509 -newkey rsa:2048 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \
-subj "/C=X0/ST=localhost/L=localhost/O=localhost/OU=X00/CN=localhost" )
For default_server
cd /etc/nginx/master/_server/defaults/certs

# Private key + Self-signed certificate:
( _fd="defaults.key" ; _fd_crt="nginx_defaults_bundle.crt" ; \
openssl req -x509 -newkey rsa:2048 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \
-subj "/C=X1/ST=default/L=default/O=default/OU=X11/CN=default_server" )
For your domain (e.g. Let's Encrypt)
cd /etc/nginx/master/_server/example.com/certs

# For multidomain:
certbot certonly -d example.com -d www.example.com --rsa-key-size 2048

# For wildcard:
certbot certonly --manual --preferred-challenges=dns -d example.com -d *.example.com --rsa-key-size 2048

# Copy private key and chain:
cp /etc/letsencrypt/live/example.com/fullchain.pem nginx_example.com_bundle.crt
cp /etc/letsencrypt/live/example.com/privkey.pem example.com.key

Update modules list

Update modules list and include modules.conf to your configuration:

_mod_dir="/etc/nginx/modules"

:>"${_mod_dir}.conf"

for _module in $(ls "${_mod_dir}/") ; do echo -en "load_module\t\t${_mod_dir}/$_module;\n" >> "${_mod_dir}.conf" ; done

Generating the necessary error pages

In the example (lib/nginx) error pages are included from lib/nginx/master/_static/errors.conf file.

  • default location: /etc/nginx/html:
    50x.html  index.html
  • custom location: /usr/share/www:
    cd /etc/nginx/snippets/http-error-pages
    
    ./httpgen
    
    # You can also sync sites/ directory with /etc/nginx/html:
    #   rsync -var sites/ /etc/nginx/html/
    rsync -var sites/ /usr/share/www/

Add new domain

Updated nginx.conf
# At the end of the file (in 'IPS/DOMAINS' section):
include /etc/nginx/master/_server/domain.com/servers.conf;
include /etc/nginx/master/_server/domain.com/backends.conf;
Init domain directory
cd /etc/nginx/master/_server
cp -R example.com domain.com

cd domain.com
find . -not -path '*/\.git*' -depth -name '*example.com*' -execdir bash -c 'mv -v "$1" "${1//example.com/domain.com}"' _ {} \;
find . -not -path '*/\.git*' -type f -print0 | xargs -0 sed -i 's/example_com/domain_com/g'
find . -not -path '*/\.git*' -type f -print0 | xargs -0 sed -i 's/example.com/domain.com/g'

Create log directories

mkdir -p /var/log/nginx/localhost
mkdir -p /var/log/nginx/defaults
mkdir -p /var/log/nginx/others
mkdir -p /var/log/nginx/domains/blkcipher.info

chown -R nginx:nginx /var/log/nginx

Logrotate configuration

cp /etc/nginx/snippets/logrotate.d/nginx /etc/logrotate.d/

Test your configuration

nginx -t -c /etc/nginx/nginx.conf
