Source code (AGPL) for archcodes.com - fast parametric architecture generator focused on ease of use, design quality and privacy.
- Rhino
- Blender
- Houdini
If you choose to host “on the cloud ☁️” here are my recommendations for privacy-oriented providers. [2023-10-19 Thu]
Name | Location | Price |
---|---|---|
1984.hosting | Iceland | 77 € / year (27.49 € for 1st year) |
njal.la | Sweden (Malmo) | 15 € / month |
exoscale.com | Europe | 11.93 € / month |
Other recommendations: the Stack Overflow 2023 Survey.
- Debian 12
- first login with public ssh key shared with the hosting provider
- first login as root
- ownership of a DNS domain
ssh root@<SERVER-IP>
apt update && apt upgrade -y
adduser admin --gecos ""
usermod -aG sudo admin
cp -r /root/.ssh /home/admin
chown -R admin:admin /home/admin/.ssh
Log out of root and log in as admin.
Optionally silence the login message:
touch .hushlogin
sudo passwd -d root
sudo passwd -l root
On Debian, ufw may already be installed and preconfigured.
sudo apt install ufw -y
sudo ufw allow <CHOOSE-PORT-NUMBER-BETWEEN-1024-AND-65535>
# For example: sudo ufw allow 38461
sudo ufw enable
sudo vi /etc/ssh/sshd_config
Port <CHOOSE-PORT-BETWEEN-1024-AND-65535>
# For example: Port 38461
# Disable login as root
PermitRootLogin no
# No need for Graphics on a server
X11Forwarding no
# Use cryptographic keys instead of passwords
PasswordAuthentication no
# Only allow "admin" user to login via ssh
# If you named your user differently change it for that name
AllowUsers admin
sudo systemctl restart sshd
# Check if this file exists
sudo cat /etc/sudoers.d/90-cloud-init-users
# Remove it
sudo rm /etc/sudoers.d/90-cloud-init-users
# Check if this file exists
sudo cat /etc/ssh/sshd_config.d/50-cloud-init.conf
# Remove it
sudo rm /etc/ssh/sshd_config.d/50-cloud-init.conf
# Restart ssh
sudo systemctl restart sshd
Your server will be constantly scanned for vulnerabilities by various parties (good and bad). Fail2Ban blocks such attempts after a predefined number of unsuccessful connections.
sudo apt install fail2ban -y
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo vi /etc/fail2ban/jail.local
Sane defaults:
[DEFAULT]
bantime = 1000m
findtime = 100m
maxretry = 3
# In Debian 12 it became necessary to specify the systemd backend explicitly.
backend = systemd
⚠ If you failed to connect more than 3 times before setting this up, it may ban you. To fix:
sudo systemctl restart fail2ban.service
⚠ Advanced: Use a proxy/VPN/Tor to change your IP, reconnect to the server and unban your IP.
Start the tor service or connect via the Tor Browser. The easiest way to set up tor on a desktop is to download the Tor Browser and start it. It bundles a tor daemon and makes a SOCKS proxy available locally on port 9150 (a standalone tor service listens on 9050 instead).
ssh -o ProxyCommand='nc -x 127.0.0.1:9150 %h %p' <USER>@<SERVER-IP>
# check your IP
curl ifconfig.me
Unban your original IP.
A recommended .bashrc is included in this repo.
scp -P <SSH-PORT-NUMBER> <PATH-TO-THIS-REPO>/.bashrc <USER>@<SERVER-IP>:
Install the fzf fuzzy finder to search command history interactively (Ctrl+R).
sudo apt install git wget -y
git clone --depth 1 https://github.com/junegunn/fzf.git .fzf
./.fzf/install
sudo apt install fd-find exa bat ripgrep htop nethogs -y
The aliases below (e.g. b instead of batcat) were set in the above .bashrc file.
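As a sketch, the Debian-specific shorthands might be defined like this (hypothetical definitions — the actual ones live in the repo's .bashrc):

```shell
# Hypothetical alias definitions; check the repo's .bashrc for the real ones.
# Debian installs bat as "batcat" and fd as "fdfind" to avoid name clashes.
alias b='batcat'
alias fd='fdfind'
```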
Search files by name.
Find all directories (-td) that have system-wg somewhere in their name, searching only in /sys/fs/cgroup:
fd -td system-wg /sys/fs/cgroup
Search files by contents (not name).
Search for alias in .bashrc:
rg alias .bashrc
Pretty print directories.
View file contents.
View contents of .bashrc:
b .bashrc
View running processes.
htop
View running network connections.
nethogs -l -a -C
- -l: display command line
- -C: capture TCP and UDP
- -a: monitor all devices, even loopback/stopped ones
These may have been automatically set by your hosting provider.
sudo vi /etc/hostname
sudo vi /etc/hosts
127.0.1.1 hostname.example.com hostname
# or, with a static IP:
<STATIC-IP> hostname.example.com hostname
dnsdomainname
dnsdomainname -f
dnsdomainname --fqdn
sudo reboot
sudo fail2ban-client status sshd
sudo zgrep 'Ban' /var/log/fail2ban.log* | b
fail2ban-client set sshd unbanip IPADDRESSHERE
or unban all IPs
fail2ban-client unban --all
journalctl -u sshd
cat /var/log/fail2ban.log
last
zgrep . /var/log/apt/history.log*
uptime
uname --kernel-release
Make sure to take a snapshot/backup beforehand. It’s not always guaranteed to work.
sudo apt-get full-upgrade
sudo apt install nginx
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx --domain example.com --domain subdomain.example.com --email [email protected] --agree-tos --redirect --hsts --staple-ocsp
Update the root directive.
sudo vi /etc/nginx/sites-available/<DOMAIN-NAME>
sudo mkdir /var/www/<DOMAIN-NAME>/
sudo vi /etc/fail2ban/jail.local
[nginx-http-auth]
enabled = true
sudo systemctl restart fail2ban.service
sudo fail2ban-client status
sudo fail2ban-client status nginx-http-auth
sudo iptables -S | grep f2b
- Separate the webserver from compute nodes.
- Don’t save received models.
- Don’t log incoming connections.
TODO:
- read-only root / immutability / regular server reinstalls.
- containerization.
- Set security headers separately for each location context. If they are set in the server context and another header is added in a location, the inherited ones get erased.
- Create a new file and include it to simplify configuration.
location / {
include /etc/nginx/security-headers.conf;
}
sudo vi /etc/nginx/security-headers.conf
The python3-certbot-nginx plugin adds it automatically, but doesn’t include subdomains.
add_header Strict-Transport-Security "max-age=31449600; includeSubDomains" always;
“SSL stripping” is based on intercepting the first request to a website (before SSL encryption).
Protect against XSS (cross-site scripting). Restrict access to javascript files from other origins.
add_header Content-Security-Policy "object-src 'none'; script-src 'self'; script-src-elem 'self'; font-src 'self'; base-uri 'self'; require-trusted-types-for 'script'" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin" always;
add_header X-Xss-Protection "1; mode=block" always;
Allow others to use resources from your domain. No need to add this; informational only.
add_header Access-Control-Allow-Origin "example.com";
add_header Feature-Policy "microphone 'none'; geolocation 'none'; camera 'none'" always;
add_header Permissions-Policy "microphone=(); geolocation=(); camera=()" always;
Rate limit to 10 requests per second per client.
Not quite sure yet how to control it “globally” and how much of it will be needed.
When the limit is exceeded Nginx returns 503 by default; add limit_req_status 429; to send 429 (Too Many Requests) instead, and the browser will show the error to the user.
http {
limit_req_zone $binary_remote_addr zone=limit_zone:10m rate=10r/s;
server {
location @proxy_to_app {
limit_req zone=limit_zone burst=20;
proxy_pass http://app_server;
}
}
}
- There can be multiple server contexts (aka virtual servers).
- They are evaluated first by the network interface they listen on, then by server_name (aka domain name).
- server_name can also be a wildcard or a regular expression.
- The Host header field in the client’s HTTP request is used to match server_name.
- The default_server parameter can be used to catch requests that don’t match any server.
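A minimal sketch of how these matching rules combine (hypothetical domain names):

```nginx
# Catch-all for requests whose Host header matches no server_name.
server {
    listen 80 default_server;
    return 444;  # close the connection without responding
}

# Wildcard server_name matches e.g. www.example.com and api.example.com.
server {
    listen 80;
    server_name example.com *.example.com;
    # ...
}
```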
server {
location / {
proxy_pass http://localhost:8080/;
}
# match all URIs ending with .gif, .jpg, or .png
location ~ \.(gif|jpg|png)$ {
root /data/images;
}
}
Treat web applications as if they were already compromised. Protect the rest of the infrastructure. Separate applications into isolated machines (AND containers) and use nginx to load balance requests to them.
You can use a VPN (e.g. WireGuard) to connect the web server with compute nodes. WireGuard is a fast and modern VPN.
sudo apt install wireguard -y
sudo vi /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <YourPrivateKey>
Address = 10.0.0.1/24
ListenPort = <Port>
[Peer]
PublicKey = <PeerPublicKey>
AllowedIPs = 10.0.0.2/32
Endpoint = <PeerEndpoint>:<PeerPort>
wg genkey | tee /dev/tty | wg pubkey
sudo ufw allow <Port>
wg-quick up wg0
sudo systemctl enable wg-quick@wg0
upstream app_server {
server <WIREGUARD-PEER-IP> max_fails=3;
}
Other server parameters and their defaults:
- fail_timeout: 10s
- weight: 1
- max_fails: 1
- max_conns: 0 (no limit)
sudo vi /etc/nginx/nginx.conf
http {
server_tokens off;
}
Change cache durations between different locations. expires adds the Expires HTTP header; -1 tells clients not to cache.
location ~ /index.html {
expires -1;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
}
http {
keepalive_timeout 65;
}
Docs - Optimizing the Backlog Queue
sudo sysctl -w net.core.somaxconn=4096
To persist across reboots, add to /etc/sysctl.conf:
net.core.somaxconn = 4096
sudo vi /etc/nginx/sites-available/yourdomain.com
server {
listen 80 backlog=4096;
# ...
}
Gunicorn is simpler than uWSGI for small websites.
Prefer system packages over pip (unless a specific version is needed).
sudo apt install gunicorn
sudo vi /etc/nginx/sites-available/<DOMAIN-NAME>
# https://docs.gunicorn.org/en/stable/deploy.html#nginx-configuration
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server <WIREGUARD-PEER-IP>:<GUNICORN-PORT> fail_timeout=0;
# For example: server 10.0.0.2:8000 fail_timeout=0;
}
location @proxy_to_app {
expires -1;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
include /etc/nginx/proxy_params;
include /etc/nginx/security-headers.conf;
# limit max uploaded file size
client_max_body_size 10M;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header in /etc/nginx/proxy_params already.
proxy_redirect off;
proxy_pass http://app_server;
}
location / {
...
# nginx tries static files first and falls back to the proxy if none match.
# Note: appending =404 at the end breaks the @proxy_to_app fallback.
try_files $uri $uri/ @proxy_to_app;
}
sudo vi /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
Type=notify
# see http://0pointer.net/blog/dynamic-users-with-systemd.html
DynamicUser=yes
RuntimeDirectory=gunicorn
WorkingDirectory=/var/www/flask
ExecStart=/usr/bin/gunicorn app:app --workers 2
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
TimeoutStopSec=5
PrivateTmp=true
[Install]
WantedBy=multi-user.target
sudo vi /etc/systemd/system/gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=<WIREGUARD-PEER-IP>:<GUNICORN-PORT>
# For example: ListenStream=10.0.0.2:8000
# Accept=yes
[Install]
WantedBy=sockets.target
sudo systemctl enable --now gunicorn.socket
sudo ufw allow from 10.0.0.1 to 10.0.0.2 port 8000
Add executable permissions to app.py.
chmod +x <FLASK-ROOT>/app.py
rsync -avzhP -e "ssh -p <COMPUTE-NODE-SSH-PORT>" <PATH-TO-FLASK-ROOT> <USERNAME>@<SERVERNAME>:/srv/flask/
rsync -avzhP -e "ssh -p 12345" /home/user/archcodes/flask/ [email protected]:/srv/flask/
You don’t need to mitigate against CSRF if you don’t keep user sessions. If you do, use “Same-Site” cookies.
Remove special characters, spaces and directory traversal attempts (e.g. “../”) from user-supplied file names.
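A minimal shell sketch of such sanitization (hypothetical input; in practice this belongs in the application code):

```shell
# Sanitize a user-supplied file name before touching the filesystem.
raw='../../etc/pass wd*.dat'
# basename drops all directory components, defeating "../" traversal.
name=$(basename -- "$raw")
# Replace spaces and keep only a whitelist of safe characters.
safe=$(printf '%s' "$name" | tr ' ' '_' | tr -cd 'A-Za-z0-9._-')
echo "$safe"   # -> pass_wd.dat
```

Note that the result can still be empty (e.g. a name made only of special characters), so validate before use.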
Hugo doesn’t automatically clean the previous build.
rm -rf /home/user1/archidecks.com/public
hugo -s /home/user1/archidecks.com
Change <STATIC-FILES-LOCATION> ownership from root to your user.
sudo chown admin: /var/www/archcodes.com
rm -rf <PATH-TO-HUGO-BUILD> \
&& hugo -s <PATH-TO-HUGO-ROOT> \
&& rsync -avzhP -e "ssh -p <SSH-PORT>" <PATH-TO-HUGO-BUILD> <USERNAME>@<SERVERNAME>:<STATIC-FILES-LOCATION>
rm -rf /home/user/archcodes/public \
&& hugo -s /home/user/archcodes \
&& rsync -avzhP -e "ssh -p 12345" /home/user/archcodes/public/ [email protected]:/var/www/archcodes.com/
cd <PATH-TO-HUGO-ROOT>
hugo server
Org-mode to hugo markdown converter.
org-hugo-auto-export-mode updates only subtrees that changed.
org-hugo-export-to-md