Why?
Echoing my comments in https://lowendtalk.com/discussion/177687/nodejs-hosting-using-directadmin -
Since Apache can be used as a reverse proxy, why should we be limited by the environment provided by the host?
How?
Apache ProxyPass (nope)
Of course Apache can act as a reverse proxy natively - but this function is almost guaranteed to be disabled in a shared hosting environment.
CloudLinux + Python Selector + Passenger (not natively)
DirectAdmin, or cPanel, is extremely flexible since it's a wrapper on top of Apache (or LiteSpeed). Most hosts run CloudLinux, which usually comes with Python Selector; the logic behind it is to ensure full isolation between users and to allocate resources per user to prevent abuse.
Python (or Ruby/NodeJS) Selector is built on Passenger, a plugin for Apache: starting from version 6.0 it's possible to use ANY language with this plugin, as long as the app takes a port argument. However this does not seem possible with Apache, leaving us looking for other solutions.
CloudLinux + Python Selector + Passenger + Python WSGIProxy + flock
We can design a system like this:
- DirectAdmin use CloudLinux as host
- CloudLinux has Passenger installed exposing Python Selector
- Write a custom WSGI Proxy talking to the app locally
- Make the app listen to localhost and a port
- Ensure the app is always running
Steps
You want SSH access to the host.
Upload application to host
Upload the application you want to run to the host. Make it listen on 127.0.0.1 and a random local port. Use curl to ensure the app is actually running.
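The curl check can also be scripted. A minimal sketch of the same idea - the throwaway server below is a stand-in for your app, just to show the "listen on 127.0.0.1 plus a random port, then probe it" pattern:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to 127.0.0.1 with port 0 so the OS picks a free port -
# the same "localhost plus a random port" setup your app should use.
server = HTTPServer(("127.0.0.1", 0), Hello)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of: curl http://127.0.0.1:<port>/
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    body = resp.read()
server.shutdown()
print(body)  # b'ok'
```

If the probe fails, fix the app before wiring up the proxy - debugging through Passenger is much harder.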
Setup Python Selector
Use any Python 3 provided.
Setup files
Enter the virtualenv by copying and pasting the command provided.
passenger_wsgi.py
looks like:
import importlib.util
import os
import sys
# Make files next to this one importable regardless of working directory
sys.path.insert(0, os.path.dirname(__file__))
# The legacy imp module is deprecated (removed in Python 3.12);
# this is the importlib equivalent of imp.load_source('wsgi', 'app.py')
spec = importlib.util.spec_from_file_location('wsgi', 'app.py')
wsgi = importlib.util.module_from_spec(spec)
spec.loader.exec_module(wsgi)
application = wsgi.application
Create an app.py
with:
import sys, os
import webob # https://pypi.org/project/WebOb/
import wsgiproxy # https://pypi.org/project/WSGIProxy2/
BACKEND_HOST = os.getenv("ROCKET_ADDRESS", "localhost")
BACKEND_PORT = os.getenv("ROCKET_PORT", 30000)
BACKEND_URL = "http://{}:{}".format(BACKEND_HOST, BACKEND_PORT)
PROXY = wsgiproxy.HostProxy(BACKEND_URL)
def application(environ, start_response):
req = webob.Request(environ)
res = req.get_response(PROXY)
start_response(res.status, res.headerlist)
return [res.body]
Create a requirements.txt
with:
WebOb
WSGIProxy2
Change the environment variables with:
ROCKET_PORT
: the port your application shall listen on
Save the config. Use the panel to install dependencies, or use SSH to run pip install -r requirements.txt.
Test your site: it should be working.
Setup application monitoring
In the "Cron Job" section in the panel, setup
flock -nx ~/tmp/app.lock -c "THE_STARTUP_COMMAND_OF_YOUR_APP"
Disable Email, and set the cron to * * * * *
to check every minute.
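The single-instance guarantee that flock gives the cron job can also be reproduced from inside a Python app via fcntl, in case you want the process to guard itself. A sketch, with a demo lock path standing in for ~/tmp/app.lock:

```python
import fcntl
import os
import tempfile

def acquire_lock(path):
    """Take an exclusive, non-blocking lock (what `flock -nx` does).
    Returns the open file on success, or None if another holder exists."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

lock_path = os.path.join(tempfile.gettempdir(), "app.lock.demo")
first = acquire_lock(lock_path)   # first run takes the lock
second = acquire_lock(lock_path)  # a "second cron run" is refused immediately
print(first is not None, second is None)  # True True
```

The lock is released automatically when the holding process exits, which is exactly why the cron-based restart is safe: a dead app frees the lock, and the next minute's cron run starts a fresh instance.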
References
Why?
Although not problematic for most commercial Email providers (Gmail, Outlook, etc.), self-hosted Email is prone to inbound SPAM.
A couple of solutions exist: the most common one is to run SpamAssassin locally - but training the model takes time, there's no good corpus available, and spammers are much smarter.
There are a handful of hosted inbound SPAM filtering solutions, but they are quite expensive:
- Mailchannels - $20/month/5 domains, then hikes to $507/month/1000 domains.
- Spamtitan - pricing is "quotation only".
- Spydermail - $1.99/mailbox/month, starting from 15 mailboxes.
- MXguarddog - $0.25/mailbox/month.
- Spamhero - per mailbox per mail pricing.
- McAfee: pricing not available.
Use Mailgun
Mailgun is included in the GitHub student package. The Flex plan provides 1,000 Emails/month for free.
Code of connector:
import os
from flask import Flask, request, jsonify
from imap_tools import MailBox
app = Flask(__name__)
# HTTP param > environment variable > default
IMAP_USERNAME = os.getenv('IMAP_USERNAME', '')
IMAP_PASSWORD = os.getenv('IMAP_PASSWORD', '')
IMAP_SERVER = os.getenv('IMAP_SERVER', '')
MAILGUN_ANTISPAM_USE_BOOL = int(os.getenv('MAILGUN_ANTISPAM_USE_BOOL', '0') or 0)
MAILGUN_ANTISPAM_SSCORE_CUTOFF = int(os.getenv('MAILGUN_ANTISPAM_SSCORE_CUTOFF', '20') or 20)
def get_target_mailbox(data, use_bool=MAILGUN_ANTISPAM_USE_BOOL, sscore_cutoff=MAILGUN_ANTISPAM_SSCORE_CUTOFF):
"""
Return target mailbox from the data received from Mailgun.
See https://documentation.mailgun.com/en/latest/user_manual.html#spam-filter
for more details.
MAILGUN_ANTISPAM_USE_BOOL suppresses MAILGUN_ANTISPAM_CUTOFF.
:param junk_threshold: threshold for junk mail according to Mailgun
:type junk_threshold: float
:param data: mail data from Mailgun
:type data: dict
:return: INBOX/Junk
:rtype: str
"""
# No = Not spam, Yes = Spam
# "At the time of writing this, we are filtering spam at a score of around 5.0 but we are constantly calibrating
# this."
if use_bool:
if data['X-Mailgun-Sflag'][0] == 'No':
return 'INBOX'
return 'Junk'
# lower = less likely to be spam
# negative = very unlikely to be spam
# > 20 is very likely to be spam
if float(data['X-Mailgun-Sscore'][0]) < sscore_cutoff:
return 'INBOX'
return 'Junk'
@app.route('/post_mime', methods=['POST'])
def post_mime():
# TODO: check if the request is from Mailgun
data = request.form.to_dict(flat=False)
# HTTP params override everything else
imap_username = request.args.get('username', IMAP_USERNAME)
imap_password = request.args.get('password', IMAP_PASSWORD)
imap_server = request.args.get('server', IMAP_SERVER)
use_bool = int(request.args.get('use_bool', MAILGUN_ANTISPAM_USE_BOOL))
sscore_cutoff = int(request.args.get('sscore_cutoff', MAILGUN_ANTISPAM_SSCORE_CUTOFF))
# decide whether incoming mail is spam or not
try:
target_mailbox = get_target_mailbox(data, use_bool, sscore_cutoff)
except Exception as e:
return jsonify({'error': 'Cannot decide target: ' + str(e)}), 500
# directly deliver to target mailbox
try:
with MailBox(imap_server).login(imap_username, imap_password) as mailbox:
msg = '\n'.join(data['body-mime'])
mailbox.append(msg.encode(), target_mailbox, dt=None)
except Exception as e:
return jsonify({'error': str(e), 'status': -1}), 500
return jsonify({'status': 0})
if __name__ == '__main__':
app.run(host='localhost', port=8000, debug=True)
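To sanity-check the routing decision without Mailgun in the loop, the logic of get_target_mailbox can be replayed against made-up header values. A standalone mirror of the same rules (sample data below is invented for the demo):

```python
def target_mailbox(data, use_bool=False, sscore_cutoff=20.0):
    """Standalone mirror of get_target_mailbox() for testing."""
    if use_bool:
        # X-Mailgun-Sflag: 'No' means Mailgun thinks it's not spam
        return 'INBOX' if data['X-Mailgun-Sflag'][0] == 'No' else 'Junk'
    # X-Mailgun-Sscore: lower is less spammy; Mailgun flags around 5.0
    return 'INBOX' if float(data['X-Mailgun-Sscore'][0]) < sscore_cutoff else 'Junk'

# Header values below are made up for the demo
ham = {'X-Mailgun-Sflag': ['No'], 'X-Mailgun-Sscore': ['-1.2']}
spam = {'X-Mailgun-Sflag': ['Yes'], 'X-Mailgun-Sscore': ['23.7']}
print(target_mailbox(ham, use_bool=True),   # INBOX
      target_mailbox(spam))                 # Junk
```

Note the cutoff of 20 is deliberately lax: with forwarding, false positives (real mail in Junk) hurt more than the occasional spam reaching INBOX.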
Setup environment variables as follows:
Name Value
IMAP_PASSWORD your password
IMAP_SERVER the server
IMAP_USERNAME login
LANG en_US.UTF-8
LC_ALL en_US.UTF-8
LC_LANG en_US.UTF-8
MAILGUN_ANTISPAM_SSCORE_CUTOFF 20
MAILGUN_ANTISPAM_USE_BOOL 1
PYTHONIOENCODING UTF-8
Go to Mailgun, setup a Route with catch all forwarding to https://your-binded-domain/post_mime
.
Why?
Google Docs is great - but it's behind the GFW. Better to have something we can control.
Options?
Nextcloud has native support of Collabora Office.
ONLYOFFICE also supports Nextcloud.
Difference?
Collabora is LibreOffice in the browser. Harder on the server, since the server effectively renders a copy of LibreOffice for every client and transmits the differences after each operation. Works better if bandwidth (for both server and client) is high and the server is powerful.
ONLYOFFICE is more like Google Docs: the client's browser loads the full editor and syncs changes to the document with the server. Easier on the server, but the mobile client is not free.
Setting up Collabora
Get Nginx working
I use Nginx on the host directly since I have more than one service to run.
Setup SSL with anything you like - I use acme.sh:
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
server_name .com;
ssl_certificate /root/.acme.sh/.com/.com.cer;
ssl_certificate_key /root/.acme.sh/.com/.com.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
ssl_prefer_server_ciphers on;
keepalive_timeout 70;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
# static files
location ^~ /browser {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# WOPI discovery URL
location ^~ /hosting/discovery {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# Capabilities
location ^~ /hosting/capabilities {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# main websocket
location ~ ^/cool/(.*)/ws$ {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# download, presentation and image upload
location ~ ^/(c|l)ool {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Host $http_host;
}
# Admin Console websocket
location ^~ /cool/adminws {
proxy_pass https://127.0.0.1:9980;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
}
Setup Collabora CODE server
I really hate this setup since privileged is required - there's no easy way to bypass the limit:
version: '3'
services:
code:
image: collabora/code:latest
restart: always
privileged: true
environment:
- password=xxx
- username=xxx
- domain=nextcloud.your.domain
ports:
- '127.0.0.1:9980:9980'
So all traffic shall go through the reverse proxy.
Nextcloud
Install the App.
URL: https://username:password@domain.for.collabora.com
What Cache?
There are many layers of cache one can run on WordPress - ranging from farthest to closest:
- Browser: setup long expiry time to cache as much data as possible on user's machine.
- CDN: for static content like CSS, images, JS and rich media. Works wonders if the whole page can be cached altogether. Jetpack has a free CDN for images and CSS without purging possibility, while Cloudflare can be more flexible - but your mileage may vary. Distributed.
- Cache on Load Balancer: Some LBs like LiteSpeed and Nginx can have cache enabled. Normally not distributed.
- "Supercache": Caching the whole page to disk, DB or RAM to turn the dynamic site into static one. Works great for CMS, but won't work that much for sites with tons of interactions, like WooCommerce. Most famous plugins works on this level, like WP Super Cache, or the LiteSpeed plugin with crawler. Can be distributed.
- Object Cache: cache raw DB queries to ease DB load - serialize objects and save them in RAM, like the Redis cache covered in this article. Can be distributed, but this article only covers the most basic single-instance config.
- Opcode cache: PHP can cache compiled script bytecode in RAM to save CPU. Configured in php.ini. Not distributed.
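To make the object-cache layer concrete: conceptually it memoizes expensive DB lookups with a TTL, roughly what the Redis plugin does with WP_REDIS_MAXTTL. A toy sketch (class and query names below are made up for illustration):

```python
import time

class ObjectCache:
    """Toy object cache: memoize expensive lookups with a TTL,
    conceptually what WordPress does with Redis and WP_REDIS_MAXTTL."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expiry_timestamp, value)

    def get_or_compute(self, key, compute):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]          # cache hit: skip the DB entirely
        value = compute()            # cache miss: hit the DB once
        self.store[key] = (now + self.ttl, value)
        return value

calls = []
def fake_db_query():
    calls.append(1)
    return {"post_id": 1, "title": "hello"}

cache = ObjectCache(ttl_seconds=60)
a = cache.get_or_compute("post:1", fake_db_query)
b = cache.get_or_compute("post:1", fake_db_query)
print(len(calls))  # 1 - the second read was served from cache
```

The win is that repeated page loads stop issuing identical SQL queries, which is exactly the load the shared host's MySQL struggles with.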
Why go this far?
This task is trivial if you control the whole environment; on shared hosting, however, there are tons of restrictions. But if you have SSH access there are things you can take advantage of.
Setup
If your machine has gcc
Download latest Redis
SSH into your shared hosting box.
Download and unzip the latest version of Redis 6:
cd ~
wget http://download.redis.io/redis-stable.tar.gz
tar zxvf redis-stable.tar.gz
mv redis-stable redis
rm redis-stable.tar.gz
cd ~/redis
If you want an easier setup, download Redis 5 from https://download.redis.io/releases/redis-5.0.14.tar.gz.
Compile the server
make redis-server
Your redis-server binary should be available at ~/redis/src/redis-server.
If your machine does not have gcc
Find out your OS
Type uname -a. This will give you your kernel version, but might not mention the distribution you're running.
To find out which Linux distribution you're running (e.g. Ubuntu), try lsb_release -a, cat /etc/release, cat /etc/issue or cat /proc/version.
Compile Redis on that distro
Get a VM/container of that distro, follow similar steps and take redis-server
out.
Try at the target machine and see if everything's working. Redis is pretty easy to build.
Create Redis config file
Create a redis config file somewhere, say ~/redis/rediss.conf:
You can't use /tmp
as you are on shared hosting. Create a temporary folder - this folder exists on DirectAdmin by default:
mkdir ~/tmp
cd ~/tmp
pwd
and keep the absolute path of your tmp folder.
Content of file for Redis 6:
bind 0
protected-mode yes
port 0
unixsocket {TMP_FOLDER}/redis.sock
unixsocketperm 700
timeout 0
tcp-keepalive 300
daemonize yes
pidfile {TMP_FOLDER}/redis_6379.pid
dir {TMP_FOLDER}/
maxmemory 50M
And for Redis 5:
# create a unix domain socket to listen on
unixsocket {TMP_FOLDER}/redis.sock
# set permissions for the socket
unixsocketperm 775
# No password
# requirepass passwordtouse
# Do not listen on IP
port 0
daemonize yes
stop-writes-on-bgsave-error no
rdbcompression yes
# maximum memory allowed for redis - 50MB for small site, 128MB+ for high traffic
maxmemory 50M
# how redis will evict old objects - least recently used
maxmemory-policy allkeys-lru
Save it.
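The maxmemory-policy allkeys-lru line is the important knob: once maxmemory is reached, Redis evicts the least recently used key so the cache keeps serving hot data. The policy can be sketched like this (capacity in items stands in for maxmemory in bytes):

```python
from collections import OrderedDict

class LRUStore:
    """Sketch of maxmemory-policy allkeys-lru: when over capacity,
    evict the least recently used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # oldest entry first

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # drop the least recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

s = LRUStore(capacity=2)
s.set("a", 1); s.set("b", 2)
s.get("a")             # touch "a" so "b" becomes the LRU entry
s.set("c", 3)          # over capacity: "b" is evicted
print(sorted(s.data))  # ['a', 'c']
```

This is also why losing the Redis process is harmless here: evicted or lost keys are simply recomputed from the DB on the next request.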
Start the process
Most distros should come with flock, which ensures that only one instance of a process runs at a time. If not, refer to https://unix.stackexchange.com/questions/150778/prevent-a-second-instance-of-my-software-from-starting and make your own flock.
Run
flock -nx ~/tmp/redis.lock -c "~/redis/src/redis-server ~/redis/rediss.conf"
in the shell.
ls ~/tmp
and see whether the lock file and the socket file are generated. If everything's there, you should have a Redis instance running.
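If you want a health check beyond ls, note the instance speaks RESP (the Redis wire protocol) on that unix socket. A sketch of the encoding - the actual socket connect is left out so this runs anywhere, but a live server answers a PING with b"+PONG\r\n":

```python
def resp_encode(*args):
    """Encode a Redis command in RESP, the wire protocol the server
    speaks on the unix socket (e.g. for a manual PING health check)."""
    out = [f"*{len(args)}\r\n".encode()]  # array header: number of args
    for arg in args:
        data = arg.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

# These bytes are what a client would write to {TMP_FOLDER}/redis.sock
print(resp_encode("PING"))  # b'*1\r\n$4\r\nPING\r\n'
```

Or simply run ~/redis/src/redis-cli -s ~/tmp/redis.sock ping and look for PONG.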
Configure WordPress
In wp-config.php
:
/** Redis object cache */
define( 'WP_REDIS_SCHEME', 'unix' );
define( 'WP_REDIS_PATH', '{TMP_FOLDER}/redis.sock' );
// define( 'WP_REDIS_PASSWORD', 'secret' );
define( 'WP_REDIS_TIMEOUT', 1 );
define( 'WP_REDIS_READ_TIMEOUT', 1 );
// change the database for each site to avoid cache collisions
define( 'WP_REDIS_DATABASE', 0 );
define( 'WP_REDIS_MAXTTL', 60 * 60 * 24 * 7 );
// define( 'WP_REDIS_DISABLED', true );
Save it.
Enable Redis Object Cache
Go to Plugins, and install Redis Object Cache. Enable it. See whether it can talk to your local Redis.
Setup cronjob to ensure Redis is always up
Although we may not care about the contents of Redis, it's important to make sure it's always up.
In your shared hosting's control panel, add a cronjob for every minute, with content
flock -nx ~/tmp/redis.lock -c "~/redis/src/redis-server ~/redis/rediss.conf" >/dev/null 2>&1
to restart the process when needed without sending Email.
Now you should enjoy the new layer of caching.
What happened?
Sadly Google is shutting down the free version of G Suite for good.
What now?
There are some not-so-good alternatives:
- Self hosting: Mailcow is extremely memory hungry and delivery is definitely spotty unless you use a external SMTP provider like Mailgun.
- Yandex: Free, 10GB space, 1000 seats. Anti-SPAM is overly aggressive and sending can be limited for no good reason.
- VK Mail(biz.mail.ru): Free, unlimited space, 5000 seats. Delivery is spotty.
- MXRoute: Good service, reliable. Price can be as low as $10/yr/10GB during black friday. 1GB mailbox can be as low as $3/year, 2GB at $5/year. There's a $179/lifetime/10GB package. Check Nexusbytes - price is better with resellers.
- MS Exchange: $4/seat.
- Office 365: $99/yr/6 seats.
- Paying ransom to Google: $72/year.
- Apple: $1/seat/month. No unlimited alias.
- Zoho: Free, 10GB space but NO SMTP/IMAP support.
- Disroot: Paid at 0.15 Euro/GB/yr. Delivery can be problematic.
- Protonmail: $5/seat/month. NO SMTP support.
Solution
So what I am looking for should have:
- Large enough space(I've accumulated ~8GB of Emails in the last 15ish years)
- Good delivery
- SMTP support
- Unlimited aliases
- Unlimited seats
- Catch-all address
Gmail has a unique feature (not that unique, but outside Gmail I've only seen it in Outlook) - "send as alias". You can set up a preferred alias with custom SMTP credentials, and Gmail will use that SMTP server when sending outbound Email from that alias.
So what we can do is:
- Get Email forwarding service on the domain to catch Emails and forward to your personal Gmail
- Get SMTP working
- Add SMTP as alias in Gmail
Preparations
You will need
- An Email forwarder (will elaborate)
- Preferred SMTP provider
- Gmail account(can be free ones)
- DNS access of your domain
Email forwarder
An Email forwarder should act as your MX record, take inbound Email and forward it to a designated address.
There are quite a number of good free forwarders around:
- improvmx
- Cloudflare Email Routing. Note you need full zone(aka using Cloudflare as NS)
- Mailgun: no free plan though
- Your domain registrar probably provides this feature for free
- Self hosting
- Sparkpost with Heroku
SMTP provider
There's no lack of them with free plans:
- Mailchimp
- Sendgrid
- Sparkpost
- Amazon SES(note traffic is not free)
Steps
Make sure you stick with this sequence.
(Optional) Migrate existing Emails
You can do that in Settings.
Setup Email forwarding
Go to any forwarder and set up the MX and SPF records with your DNS provider: the MX record to receive Email, SPF to send Email on behalf of your domain so you can receive forwarded Email.
Setup SMTP
Use any SMTP provider and verify your domain. Add the necessary DMARC and SPF records: the SPF record may look like v=spf1 include:spf.improvmx.com include:_spf.somethingelse.com ~all
Note down the SMTP credentials. Using Sparkpost as example -
Host smtp.sparkpostmail.com
Port 587
Alternative Port 2525
Authentication AUTH LOGIN
Encryption STARTTLS
Username SMTP_Injection
Password an API key you created
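What Gmail does with these credentials is ordinary authenticated SMTP. A hedged sketch of the same send in Python - addresses are hypothetical, and the network call itself is commented out so nothing is actually sent:

```python
import smtplib  # only needed for the (commented-out) send
from email.message import EmailMessage

# Compose what Gmail would send through the alias's SMTP server.
# Addresses are hypothetical; credentials come from the table above.
msg = EmailMessage()
msg["From"] = "you@yourdomain.example"   # the alias
msg["To"] = "friend@example.com"
msg["Subject"] = "Hello from my domain"
msg.set_content("Sent via Sparkpost, replies come back via the forwarder.")

# Uncomment to actually send (STARTTLS on port 587, AUTH LOGIN):
# with smtplib.SMTP("smtp.sparkpostmail.com", 587) as s:
#     s.starttls()
#     s.login("SMTP_Injection", "YOUR_API_KEY")
#     s.send_message(msg)
print(msg["From"])  # you@yourdomain.example
```

This is also a handy way to verify the credentials work before trusting Gmail's setup wizard with them.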
Setup Gmail
- Go to Gmail's settings and select "send as alias". Put in your desired alias.
- Click next: Gmail may infer some IMAP settings - replace them with the SMTP credentials.
- Click next: Gmail will send an Email to that alias for verification - you should receive it in your Gmail since you've already set up forwarding. Put in the code.
- (Optional) Set the new alias as default.
Now you've got a reliable Domain Email, completely free.
There's a problem: Slack login is required for Slack integration, but sometimes we want to use OIDC for login - this makes the application vomit.
Steps to fix it:
Get a Postgres connector, connect to DB. Backup DB.
- Consider the Email domain you will use with OIDC: blah [at] cnbeining.com is cnbeining.com, test@gmail.com is gmail.com.
- Fill in all environment variables required for OIDC integration. Remove the ALLOWED_DOMAIN variable - otherwise Outline will not allow logins from Email whose domain is different from its own domain.
- Create a new entry in table authentication_providers: name is always oidc, domain is the domain of your Email, enabled is true, teamId is the same as the original one. Note self-hosted Outline can only have 1 team at a time.
- Go to table teams. The name is ALWAYS HARDCODED as Wiki. domain is empty. Failing to do so will cause Outline to complain "max number of teams reached".
Now your new OIDC should be working, but users are not associated across authentication providers by Email.
- In the user_authentications table, create a second entry for each user you want to associate: a random id, the same userId, with authenticationProviderId, scopes and providerId set to the new provider's values. Delete the newly created but disassociated user.
Now you should have more than 1 login method with users associated across the board by Email.
Outline what?
Outline is a nice personal wiki tool that
- Supports Markdown natively
- Has a self-hosting option
- Is fast
- Has a nice search function
- Is not overly complicated
- Supports live collaboration
making it great for hosting a personal wiki.
However there's no free official hosted version of Outline - but it's pretty simple to host on Heroku for free.
Preparations
Heroku
Register an account and bind a credit card: this gives you 1,000 hours of running time per month, which is good enough. A worker dyno is needed, so this leaves you 500 hours of usable time per month.
Slack
Slack app is required for authentication and integration.
Go to https://api.slack.com/apps and create an app - in a space you feel comfortable.
Other authentication
OIDC is supported but I have not tried it - Auth0 and Okta have free plans that support OAuth.
S3
S3 is needed for rich media storage.
There are a couple of free providers:
- Backblaze B2: 10GB space, 1GB download traffic per day. Technically should work with Cloudflare but I have not figured out how.
- Scaleway: 75GB space, 75GB/month traffic. DC located in EU.
- IBM: 25GB space, 5GB/month traffic.
- Storj DCS: 150GB space, 150GB traffic.
Create an account and collect all the details.
Get something that supports SMTP. Free options include:
Create an account, verify the domain and collect SMTP config.
(optional) Cloudflare
Use Cloudflare to use your own domain. A free plan is good enough but you may need a Page Rule for overriding SSL settings.
(optional) GitHub account
Use this to update the application in the future, or configure auto update.
Fork the repo;
Install wei/pull
, enable it for your forked repo so your personal fork is always up to date.
Setup
Instructions as follows:
Get it running first
Click here to deploy the application. Follow the instructions, and put in environment variables as requested. Generate UTILS_SECRET with openssl rand -hex 32. URL is the https://xxx.herokuapp.com for now.
Do not bother with Google OAuth - you need a paid account to get it working.
You should have a working app: try logging in. If in doubt, check logs under More
.
Remember to enable workers at Configure Dynos
under Overview
.
Cloudflare setup
The official documentation is not very clear - TLDR version:
- Add your desired domain to Heroku; don't bother with SSL - it's a paid addon.
- Go to Cloudflare, add a CNAME record from that domain to xxx.herokuapp.com - NOT the record Heroku gave you!
- If your SSL setting is NOT Strict, add a Page Rule to override it.
- Change URL in Heroku config to the new domain under Settings.
Now you should have access on the new domain. It may take a while for Cloudflare to issue the new SSL certificate.
Setting up app update
Go to Deploy, connect your GitHub, select your fork.
Enable auto update if you wish (de facto nightly builds), or conduct manual updates.
Database backup(for Heroku Postgres)
It seems that there's no way to set this up in the UI:
- Go to https://data.heroku.com/ and click into the Postgres DB
- Settings - View credentials
- Copy the Heroku CLI command, something like heroku pg:psql postgresql-blah-12345 --app outline-yourname
- In your local shell that has heroku set up, run heroku pg:backups:schedule postgresql-blah-12345 --at '02:00 America/Los_Angeles' --app outline-yourname, changing the time to your preference.
Limitation for free DB:
- Only 1 backup per day
- No point-in-time recovery
- Only the last 2 backups are preserved
OpenVZ - no Docker this time.
- install Nginx, php 7.4, and plugins https://www.cloudbooklet.com/upgrade-php-version-to-php-7-4-on-ubuntu/
- install mysql
- get a root password
- make sure the root password is indeed working, and wrong passwords are rejected
- restore backup
- copy all files to new box
- chown to
www-data:www-data
- In wp-config.php:
- disable cache
- change DB
- In wp-content:
- delete advanced-cache.php
- In Nginx conf:
- revert to no cache version
If error out:
- Test with phpinfo() file
- if 502, check php-fpm and Nginx connection
- Test with new WordPress
- If 500, check DB
- Rebuild cache
In light of the recent news that Docker Desktop for Windows and macOS is no longer free, efforts have been put into searching for a replacement.
Docker Desktop?
Docker Desktop's predecessor, Docker Toolbox, was based on VirtualBox. Docker Desktop is able to utilize Hyperkit on macOS, which makes performance much better compared to a Type II hypervisor.
In my mind a good in-place replacement should be:
- Runs single Dockerfile
- Supports volume mounting and port forwarding
- Runs docker-compose.yml
- Easy to setup
- Supports Windows and macOS(Docker Engine on Linux remains open-sourced)
- Works with built-in debugger for common IDEs(IntelliJ & VSCode)
- Works with IDE's built-in support that's based on Docker socket
- Supports ARM Mac
Common replacements proposed include:
- Podman: a Red Hat-backed solution that runs on OCI. Spins up a Podman-flavored VM. Very limited support for Docker-Compose. Probably requires reworking the Dockerfile. Volume mounting is painful. Cannot be used with an IDE's built-in support.
- Microk8s & k3s: Kubernetes. Won't work with Docker-Compose.
Multipass
This is where Multipass comes to shine:
- Utilizes Hyperkit, Hyper-V, and VirtualBox for the best performance
- Handles volume mounting nicely
- Easy to setup
- Networking is easy
- Volume mounting is simple – at least when using Hyperkit
- Native ARM support
Downsides include:
- Ubuntu. Not even Debian. Bad news for CentOS lovers.
- Overlapping function with Vagrant. Can't deny that.
The actual setup
Steps based on macOS Catalina, 10.15.7.
Install Multipass
Download the installer here. You DO NOT need to set the driver to VirtualBox – leave it as is and Hyperkit will be used by default!
You can also try brew install multipass.
Make sure that you give multipassd
Full Disk Access – otherwise mounting will fail!
Create and setup your instance
I am calling my instance fakedocker.
Note the default disk size is merely 5GiB – not enough for setting up Docker. To give it a bit more space:
multipass launch -n fakedocker -d 20G
Use -c
to add more CPU cores, -m
to add more RAM. Documentation
One-liner with cloud-init
You may want to change Docker-Compose's version to the latest.
Save this file as cloud-init.yaml,
and run:
multipass launch -n fakedocker -d 20G --cloud-init cloud-init.yaml
#cloud-config
apt:
sources:
docker.list:
source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
key: |
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
package_update: true
packages:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- software-properties-common
- docker-ce
- docker-ce-cli
- containerd.io
- avahi-daemon
- libnss-mdns
runcmd:
- sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- sudo systemctl daemon-reload
# create the docker group
groups:
- docker
# Add default auto created user to docker group
system_info:
default_user:
groups: [docker]
write_files:
- content: |
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
path: /lib/systemd/system/docker.service.d/overwrite.conf
owner: root:root
permissions: '0644'
power_state:
mode: reboot
message: Restarting after configuring Ubuntu for Docker and Docker-Compose
If you have any problems, run multipass shell fakedocker to SSH into the VM. Logs are located at /var/log/cloud-init.log.
Manually set up your instance
Create VM
Use multipass launch -n fakedocker -d 20G to create the instance.
Run multipass shell fakedocker
to get into the instance.
Setup Docker and other stuff
Follow this guide to install Docker: https://docs.docker.com/engine/install/ubuntu/
Also this post-install guide to setup groups: https://docs.docker.com/engine/install/linux-postinstall/
For installing Docker-Compose: https://docs.docker.com/compose/install/
Run sudo apt-get install -y avahi-daemon libnss-mdns to install Avahi to enable Bonjour, so you can access the box via fakedocker.local. This will come in handy in the following setup.
Edit Docker's systemd config to expose both a TCP port and the Unix socket: TCP is for the remote debugger on your local machine, the socket is for running Docker commands inside the VM.
Create a folder at /lib/systemd/system/docker.service.d/ and add a file overwrite.conf with the following content:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
then
sudo chmod 666 /var/run/docker.sock
sudo systemctl daemon-reload
sudo systemctl restart docker.service
to make sure that docker
command works on the VM.
Mounting local folder
Run multipass mount ~ fakedocker to attach your home directory to the new VM, so paths inside the VM mirror those on your machine and you don't need to work out where things are mounted.
Quick sanity test
On the fakedocker VM:
- Run docker ps: Docker should be able to run. You may need to log out and log back in for the group change to take effect.
- Run nc -zv 127.0.0.1 2375 to make sure that Docker Engine is taking traffic over TCP.
- Run ls /Users/<your_username> to make sure that volume attaching was successful: use cat to read a file, and write something random to make sure the attached volume is readable and writable.
On your local machine:
- Run nc -zv fakedocker.local 2375 to make sure that your local debugger can communicate with the Docker instance on fakedocker.
Now you've got a nice environment where you can use docker and docker-compose as if you were on your local machine.
Setting up Docker on Mac
Run export DOCKER_HOST="tcp://fakedocker.local:2375" or put it in your ~/.zshrc to make the VM your default Docker host.
Follow https://docs.docker.com/engine/install/binaries/ to set up the docker command locally, and https://github.com/docker/compose/releases to set up docker-compose.
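The nc -zv reachability check can also be done from Python if nc isn't handy. A small sketch - the throwaway local listener just makes the demo self-contained; against the VM you would probe fakedocker.local:2375:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Python equivalent of `nc -zv host port`: try a TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway local listener so the demo runs anywhere;
# against the VM you would call port_open("fakedocker.local", 2375).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
reachable = port_open("127.0.0.1", demo_port)
listener.close()
print(reachable)  # True
```

If the probe fails against the VM, check the dockerd ExecStart line and mDNS resolution first - those account for most failures here.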
Setting up remote debugger
I am using PyCharm with Docker-Compose and an attached volume in docker-compose.yml as an example:
Create a new Docker machine
PyCharm should be able to connect to this instance.
Setup a new Interpreter
Make sure that you set up Path mappings as shown: otherwise the debugger won't be able to run.
Setup debug config
Listen to 0.0.0.0
- otherwise, you won't be able to visit the web service.
That should be it! Trigger debugging - if you are listening to port 8000, go to http://fakedocker.local:8000
to visit your site. Setup port forwarding in docker-compose.yml
if you want to expose more services, like your database.
Common problems
Multipass VM stuck at "Starting"/ Multipass is stuck on any command
Run sudo pkill multipassd and redo your last step. multipassd is the daemon that manages the VMs: killing this process does not change the state of a VM. (So what's the point of making it a daemon?)
Multipass VM is botched
Run multipass delete <name_of_vm> && multipass purge
to completely remove the VM.
Multipass VM cannot read mounted folder on computer reboot
Reattach the volume:
multipass unmount ~ fakedocker && multipass mount ~ fakedocker
.
Cannot connect to remote Docker
Make sure that you've edited Docker's startup command to include TCP listening on 0.0.0.0.
Need to debug issues when using cloud-init
Logs are located at /var/log/cloud-init.log
.
docker command on the VM won't run
- Make sure that Docker's startup command also listens on the Unix socket.
- The socket file's permissions need to be 666.
IDE cannot connect to debugger: cannot find debugger file(especially Pycharm)
Mount your project home to /opt/project.
Reference
https://github.com/canonical/multipass/issues/913
https://youtrack.jetbrains.com/issue/PY-33489
https://serverfault.com/questions/843296/how-do-i-expose-the-docker-api-over-tcp
https://docs.docker.com/engine/install/ubuntu/
https://docs.docker.com/engine/install/linux-postinstall/
https://docs.docker.com/compose/install/
https://github.com/canonical/multipass/issues/1839
https://github.com/canonical/multipass/issues/1389
https://docs.docker.com/engine/install/binaries/
https://cloudinit.readthedocs.io/en/latest/topics/examples.html
https://cloudinit.readthedocs.io/en/latest/topics/boot.html
https://unix.stackexchange.com/questions/542343/docker-service-how-to-edit-systemd-service-file