Replacing free G Suite without losing Gmail, using Gmail, SMTP and a forwarder

What happened?

Sadly Google is shutting down the free version of G Suite for good.

What now?

There are some not-so-good alternatives:

  • Self hosting: Mailcow is extremely memory hungry and delivery is definitely spotty unless you use an external SMTP provider like Mailgun.
  • Yandex: Free, 10GB space, 1000 seats. The anti-spam filter is very aggressive and sending can be limited for no apparent reason.
  • VK Mail(biz.mail.ru): Free, unlimited space, 5000 seats. Delivery is spotty.
  • MXRoute: Good service, reliable. Price can be as low as $10/yr/10GB during Black Friday. A 1GB mailbox can be as low as $3/year, 2GB at $5/year. There's a $179/lifetime/10GB package. Check Nexusbytes - pricing is better with resellers.
  • MS Exchange: $4/seat.
  • Office 365: $99/yr/6 seats.
  • Paying ransom to Google: $72/year.
  • Apple: $1/seat/month. No unlimited aliases.
  • Zoho: Free, 10GB space but NO SMTP/IMAP support.
  • Disroot: Paid at 0.15 Euro/GB/yr. Delivery can be problematic.
  • Protonmail: $5/seat/month. NO SMTP support.

Solution

So what I am looking for should have:

  • Large enough space (I've accumulated ~8GB of Email over the last ~15 years)
  • Good delivery
  • SMTP support
  • Unlimited aliases
  • Unlimited seats
  • Catch-all address

Gmail has a unique feature (not that unique - I've only seen it in Outlook outside of Gmail): "send as alias". You can set up a preferred alias with custom SMTP credentials, and Gmail will use that SMTP server for outbound Email sent from that alias.
So what we can do is:

  • Get Email forwarding service on the domain to catch Emails and forward to your personal Gmail
  • Get SMTP working
  • Add SMTP as alias in Gmail

Preparations

You will need:

  • An Email forwarder (more on this below)
  • Preferred SMTP provider
  • A Gmail account (a free one works)
  • DNS access for your domain

Email forwarder

An Email forwarder acts as your MX target: it takes inbound Email and forwards it to a designated address.
There are quite a number of good free forwarders around:

SMTP provider

There's no lack of them with free plans:

  • Mailchimp
  • Sendgrid
  • Sparkpost
  • Amazon SES (note that traffic is not free)

Steps

Make sure you stick with this sequence.

(Optional) Migrate existing Emails

You can do that in Settings.

Setup Email forwarding

Pick any forwarder and set up the MX and SPF records with your DNS provider: the MX record lets the forwarder receive Email for your domain, and the SPF record authorizes it to send Email on behalf of your domain so the forwarded messages get delivered.
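
For example, with ImprovMX (the forwarder whose SPF include shows up later in this post), the records might look roughly like this - treat the hosts as placeholders and check your forwarder's own documentation:

@    MX    10    mx1.improvmx.com.
@    MX    20    mx2.improvmx.com.
@    TXT   "v=spf1 include:spf.improvmx.com ~all"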

Setup SMTP

Use any SMTP provider and verify your domain. Add the necessary DMARC and SPF records: the SPF record may look like v=spf1 include:spf.improvmx.com include:_spf.somethingelse.com ~all
Note down the SMTP credentials. Using Sparkpost as an example -

Host smtp.sparkpostmail.com
Port 587
Alternative Port 2525
Authentication AUTH LOGIN
Encryption STARTTLS
Username SMTP_Injection
Password an API key you created
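
Before touching Gmail, you can sanity-check these credentials with swaks (a generic SMTP testing tool, not something this guide requires; the addresses and key below are placeholders):

swaks --to you@gmail.com --from anything@yourdomain.com \
      --server smtp.sparkpostmail.com --port 587 --tls \
      --auth LOGIN --auth-user SMTP_Injection --auth-password <your_api_key>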

Setup Gmail

  1. Go to Gmail's settings and select "send as alias". Put in your desired alias.

  2. Click next: Gmail may infer some server settings - replace them with the SMTP credentials noted above.

  3. Click next: Gmail will send an Email to that alias for verification - you should receive it in your Gmail inbox since you've already set up forwarding. Put in the code.

  4. (Optional) Set the new alias as default.

Now you've got reliable Email on your own domain, completely free.

Hacking Outline Wiki: making Slack login work with other methods, like OIDC

The problem: Slack login is required for the Slack integration, but sometimes we want to use OIDC for login - this combination makes the application vomit.
Steps to fix it:

  1. Get a Postgres client and connect to the DB. Back up the DB.

  2. Consider the Email domain you will use with OIDC: blah [at] cnbeining.com is cnbeining.com, test@gmail.com is gmail.com.

  3. Fill in all environment variables required for the OIDC integration. Remove the ALLOWED_DOMAIN variable - otherwise Outline will not allow logins from Emails whose domain is different from its own domain.

  4. Create a new entry in the authentication_providers table: name is always oidc, domain is the domain of your Email, enabled is true, and teamId is the same as the original entry's (see the SQL sketch after this list). Note that self-hosted Outline can only have one team at a time.

  5. Go to the teams table. The name is ALWAYS HARDCODED as Wiki, and domain is empty. Failing to do so will cause Outline to complain that the "max number of teams" has been reached.

  6. Now your new OIDC login should be working, but users are not yet associated across authentication providers by Email.

  7. In the user_authentications table, create a second entry for each user you want to associate: a random id, the same userId, and authenticationProviderId, scopes and providerId set to the new provider's values. Delete the newly created but disassociated user.

  8. Now you should have more than 1 login method with users associated across the board by Email.
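
A rough SQL sketch for step 4. Column names follow the description above (your Outline version may spell them differently, e.g. quoted camelCase) and all values are placeholders - verify against your actual schema and keep that backup handy:

-- Add an OIDC provider row for the existing team
INSERT INTO authentication_providers (id, name, domain, enabled, "teamId")
VALUES (
  '11111111-1111-1111-1111-111111111111',   -- any random UUID
  'oidc',
  'cnbeining.com',                          -- the Email domain from step 2
  true,
  (SELECT id FROM teams LIMIT 1)            -- self-hosted Outline has a single team
);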

Free personal wiki with Outline on Heroku with personal domain via Cloudflare

Outline what?

Outline is a nice personal wiki tool that

  • Supports Markdown natively
  • Offers a self-hosting option
  • Is fast
  • Has a nice search function
  • Is not overly complicated
  • Supports live collaboration

making it great for hosting a personal wiki.
However, there's no free official hosted version of Outline - but it's pretty simple to host it on Heroku for free.

Preparations

Heroku

Register an account and add a credit card: this gives you 1,000 hours of free dyno time per month, which is good enough. A worker dyno is also needed, so in practice you get 500 hours of usable time per month.

Slack

A Slack app is required for authentication and integration.
Go to api.slack.com/apps and create an app - in a workspace you feel comfortable with.

Other authentication

OIDC is supported but I have not tried it - Auth0 and Okta have free plans that support OAuth.

S3

S3 is needed for rich media storage.
There are a couple of free providers:

  • Backblaze B2: 10GB space, 1GB download traffic per day. Technically it should work with Cloudflare but I have not figured out how.
  • Scaleway: 75GB space, 75GB/month traffic. DC located in EU.
  • IBM: 25GB space, 5GB/month traffic.
  • Storj DCS: 150GB space, 150GB traffic.

Create an account and collect all the details.

Email

Get something that supports SMTP. Free options include:

Create an account, verify the domain and collect SMTP config.

(optional) Cloudflare

Use Cloudflare to serve your own domain. The free plan is good enough, but you may need a Page Rule to override the SSL settings.

(optional) GitHub account

Use this to update the application in the future, or to configure auto-updates.
Fork the repo;
Install wei/pull and enable it for your forked repo so your personal fork stays up to date.

Setup

Instructions as follows:

Get it running first

Click here to deploy the application. Follow the instructions and put in the environment variables as requested. Generate UTILS_SECRET with openssl rand -hex 32. URL is https://xxx.herokuapp.com for now.
Do not bother with Google OAuth - you need a paid account to get it working.
You should now have a working app: try logging in. If in doubt, check the logs under More.
Remember to enable the worker dyno at Configure Dynos under Overview.
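
If you prefer the CLI over the dashboard, the same variables can be set with heroku config:set - the app name and URL below are placeholders, and SECRET_KEY is assumed from Outline's sample .env:

heroku config:set \
  URL=https://outline-yourname.herokuapp.com \
  SECRET_KEY=$(openssl rand -hex 32) \
  UTILS_SECRET=$(openssl rand -hex 32) \
  --app outline-yourname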

Cloudflare setup

The official documentation is not very clear - the TL;DR version:

  • Add your desired domain to Heroku; don't bother with Heroku's SSL - it's a paid add-on
  • Go to Cloudflare, add a CNAME record from that domain to xxx.herokuapp.com - NOT the record Heroku gave you!
  • If your Cloudflare SSL setting is NOT Strict, add a Page Rule to override it;
  • Change URL in Heroku config to the new domain under Settings.

Now you should have access on the new domain. It may take a while for Cloudflare to issue the new SSL certificate.

Setting up app update

Go to Deploy, connect your GitHub and select your fork.
Enable auto deploy as you wish (a de facto nightly build), or do manual updates.

Database backup(for Heroku Postgres)

It seems there's no way to set this up in the UI:

  • Go to data.heroku.com and click into the Postgres DB
  • Settings - View Credentials
  • Copy the Heroku CLI command, something like heroku pg:psql postgresql-blah-12345 --app outline-yourname
  • In a local shell with the heroku CLI set up, run heroku pg:backups:schedule postgresql-blah-12345 --at '02:00 America/Los_Angeles' --app outline-yourname, changing the time to your preference.

Limitations of the free DB:

  • Only 1 backup per day
  • No point-in-time recovery
  • Only the last 2 backups are preserved
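
To confirm the schedule took effect and to keep an extra copy locally, these Heroku CLI commands may help (the app name is a placeholder):

heroku pg:backups:schedules --app outline-yourname   # list configured schedules
heroku pg:backups --app outline-yourname             # list existing backups
heroku pg:backups:download --app outline-yourname    # saves the latest backup as latest.dump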

Note: Migrate WordPress by hand


OpenVZ - no Docker this time.

  • install Nginx, PHP 7.4, and extensions: www.cloudbooklet.com/upgrade-php-version-to-php-7-4-on-ubuntu/
  • install MySQL
    • set a root password
    • make sure the root password is indeed working, and wrong passwords are rejected
    • restore the backup
  • copy all files to new box
  • chown to www-data:www-data
  • In wp-config.php:
    • disable cache
    • change DB
  • In wp-content:
    • delete advanced-cache.php
  • In Nginx conf:
    • revert to no cache version

If it errors out:
- Test with a phpinfo() file
- If 502, check the php-fpm and Nginx connection
- Test with a fresh WordPress install
- If 500, check the DB

  • Rebuild cache
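
For the phpinfo() test mentioned above, a throwaway file is enough - assuming your web root is /var/www/html (adjust to match your Nginx config):

echo '<?php phpinfo();' | sudo tee /var/www/html/info.php
sudo chown www-data:www-data /var/www/html/info.php
# visit http://your-server/info.php, then remove the file:
sudo rm /var/www/html/info.php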

Using Docker and Docker-Compose on macOS with Multipass

In light of the recent news that Docker Desktop for Windows and macOS is no longer free, I've put some effort into searching for a replacement.

Docker Desktop?

Docker Desktop used to be Docker Toolbox, which was based on VirtualBox. Docker Desktop is able to utilize Hyperkit on macOS, which makes performance much better compared to a Type 2 hypervisor.
In my mind a good in-place replacement should be:

  • Runs single Dockerfile
  • Supports volume mounting and port forwarding
  • Runs docker-compose.yml
  • Easy to setup
  • Supports Windows and macOS (Docker Engine on Linux remains open source)
  • Works with built-in debugger for common IDEs(IntelliJ & VSCode)
  • Works with IDE's built-in support that's based on Docker socket
  • Supports ARM Mac

Commonly proposed replacements include:

  • Podman: a Red Hat-backed solution built on OCI. Spins up a Podman-flavored VM. Very limited support for Docker-Compose. Probably requires reworking your Dockerfile. Volume mounting is painful. Cannot be used with an IDE's built-in support.
  • Microk8s & k3s: Kubernetes. Won't work with Docker-Compose.

Multipass

This is where Multipass comes to shine:

  • Utilizes Hyperkit, Hyper-V, or VirtualBox for the best performance
  • Handles volume mounting nicely
  • Easy to setup
  • Networking is easy
  • Volume mounting is simple – at least when using Hyperkit
  • Native ARM support

Downsides include:

  • Ubuntu. Not even Debian. Bad news for CentOS lovers.
  • Overlapping function with Vagrant. Can't deny that.

The actual setup

Steps based on macOS Catalina, 10.15.7.

Install Multipass

Download the installer here. You DO NOT need to set the driver to VirtualBox – leave it as is and Hyperkit will be used by default!
You can try running brew install multipass.
Make sure that you give multipassd Full Disk Access – otherwise mounting will fail!

Create and setup your instance

I am calling my instance fakedocker.
Note the default disk size is merely 5GiB – not enough for setting up Docker. To give it a bit more space:
multipass launch -n fakedocker -d 20G
Use -c to add more CPU cores, -m to add more RAM. See the documentation for details.
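
For example, a somewhat beefier instance could be created like this (core count and RAM size are just suggestions):

multipass launch -n fakedocker -c 4 -m 8G -d 20G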

One-liner with cloud-init

You may want to change Docker-Compose's version in the runcmd section to the latest.
Save the file below as cloud-init.yaml, and run:
multipass launch -n fakedocker -d 20G --cloud-init cloud-init.yaml

#cloud-config
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
        lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
        38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
        L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
        UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
        cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
        ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
        vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
        G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
        XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
        q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
        tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
        BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
        v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
        tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
        jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
        6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
        XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
        FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
        g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
        ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
        9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
        G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
        FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
        EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
        M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
        Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
        w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
        z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
        eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
        VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
        1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
        zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
        pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
        ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
        BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
        1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
        YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
        mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
        KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
        JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
        cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
        6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
        U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
        VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
        irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
        SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
        QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
        9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
        24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
        dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
        Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
        H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
        /nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
        M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
        xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
        jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
        YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
        =0YYh
        -----END PGP PUBLIC KEY BLOCK-----
package_update: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - avahi-daemon
  - libnss-mdns
runcmd:
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose
  - sudo systemctl daemon-reload
# create the docker group
groups:
  - docker
# Add default auto created user to docker group
system_info:
  default_user:
    groups: [docker]
write_files:
  - content: |
      [Service]
      Type=notify
      # the default is not to use systemd for cgroups because the delegate issues still
      # exists and systemd currently does not support the cgroup feature set required
      # for containers run by docker
      ExecStart=
      ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
      ExecReload=/bin/kill -s HUP $MAINPID
      TimeoutSec=0
      RestartSec=2
      Restart=always
    path: /lib/systemd/system/docker.service.d/overwrite.conf
    owner: root:root
    permissions: '0644'
power_state:
  mode: reboot
  message: Restarting after configuring Ubuntu for Docker and Docker-Compose

If you run into any problems, run multipass shell fakedocker to SSH into the VM. Logs are located at /var/log/cloud-init.log.

Manually set up your instance

Create VM

Use multipass launch -n fakedocker -d 20G to create the instance.
Run multipass shell fakedocker to get into the instance.

Setup Docker and other stuff

Follow this guide to install Docker: docs.docker.com/engine/install/ubuntu/
Also this post-install guide to setup groups: docs.docker.com/engine/install/linux-postinstall/
For installing Docker-Compose: docs.docker.com/compose/install/
Run sudo apt-get install -y avahi-daemon libnss-mdns to install Avahi and enable Bonjour, so you can access the box at fakedocker.local. This will come in handy in the following setup.
Edit Docker's systemd config to expose both the TCP port and the Unix socket: TCP is for the remote debugger on your local machine, the socket is for running Docker commands locally.
Create a folder at /lib/systemd/system/docker.service.d/ and add a file overwrite.conf with the following content:

      [Service]
      Type=notify
      # the default is not to use systemd for cgroups because the delegate issues still
      # exists and systemd currently does not support the cgroup feature set required
      # for containers run by docker
      ExecStart=
      ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
      ExecReload=/bin/kill -s HUP $MAINPID
      TimeoutSec=0
      RestartSec=2
      Restart=always

then

sudo chmod 666 /var/run/docker.sock
sudo systemctl daemon-reload
sudo systemctl restart docker.service

to make sure that docker command works on the VM.

Mounting local folder

Run multipass mount ~ fakedocker to attach your home directory to the new VM. This makes sure you don't need to translate paths between the host and the VM.

Quick sanity test

On fakedocker VM,

  • Run the docker ps command: Docker should be able to run. You may need to log out and log back in for the group change to take effect.
  • Run nc -zv 127.0.0.1 2375 to make sure that Docker Engine is taking traffic over TCP.
  • Run ls /Users/<your_username> to make sure that the volume attachment is successful: use cat to read a file, and write something random to make sure the attached volume is readable and writable.

On your local machine:

  • Run nc -zv fakedocker.local 2375 to make sure that your local debugger can communicate with the Docker instance on fakedocker.

Now you've got a nice environment where you can use docker and docker-compose as if you were on your local machine.

Setting up Docker on Mac

Run export DOCKER_HOST="tcp://fakedocker.local:2375" or put it in your ~/.zshrc to make the VM your default Docker host.
Follow docs.docker.com/engine/install/binaries/ to set up the docker command locally, and github.com/docker/compose/releases to set up docker-compose.
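
A quick check that the Mac-side client really talks to the VM (assuming fakedocker.local resolves via the Avahi setup above):

export DOCKER_HOST="tcp://fakedocker.local:2375"
docker version           # should print both a Client and a Server section
docker-compose version   # confirms the local compose binary works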

Setting up remote debugger

I am using PyCharm with Docker-Compose (with an attached volume in docker-compose.yml) as the example:

Create a new Docker machine


PyCharm should be able to connect to this instance.

Setup a new Interpreter



Make sure that you set up the Path mappings as shown: otherwise, the debugger won't be able to run.

Setup debug config


Listen on 0.0.0.0 - otherwise, you won't be able to reach the web service.
That should be it! Trigger debugging - if you are listening on port 8000, go to http://fakedocker.local:8000 to visit your site. Set up port forwarding in docker-compose.yml if you want to expose more services, like your database.
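
For reference, a minimal docker-compose.yml for this kind of debugging setup might look like the sketch below - the service name, build context and command are placeholders; the parts that matter are listening on 0.0.0.0, the published port, and the volume that matches your PyCharm path mapping (here /opt/project, as suggested later in this post):

version: "3"
services:
  web:
    build: .
    # listen on 0.0.0.0 so the port is reachable from outside the container
    command: python -m http.server 8000 --bind 0.0.0.0
    volumes:
      - .:/opt/project          # matches the PyCharm path mapping above
    ports:
      - "8000:8000"             # exposed on the VM -> http://fakedocker.local:8000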

Common problems

Multipass VM stuck at "Starting"/ Multipass is stuck on any command

Run sudo pkill multipassd and redo your last step. multipassd is a daemon that monitors the VMs: killing this process does not change the state of the VM. (So what's the point of making it a daemon?)

Multipass VM is botched

Run multipass delete <name_of_vm> && multipass purge to completely remove the VM. 

Multipass VM cannot read mounted folder on computer reboot

Reattach the volume:
multipass unmount ~ fakedocker && multipass mount ~ fakedocker.

Cannot connect to remote Docker

Make sure that you've edited Docker's startup command to include TCP listening on 0.0.0.0.

Need to debug issues when using cloud-init

Logs are located at /var/log/cloud-init.log.

The docker command on the VM won't run

  • Make sure that Docker's startup command also listens on Unix socket.
  • That socket file needs to be 666.

IDE cannot connect to the debugger: cannot find the debugger file (especially PyCharm)

Mount your project home to /opt/project.

Reference

github.com/canonical/multipass/issues/913
youtrack.jetbrains.com/issue/PY-33489
serverfault.com/questions/843296/how-do-i-expose-the-docker-api-over-tcp
docs.docker.com/engine/install/ubuntu/
docs.docker.com/engine/install/linux-postinstall/
docs.docker.com/compose/install/
github.com/canonical/multipass/issues/1839
github.com/canonical/multipass/issues/1389
docs.docker.com/engine/install/binaries/
cloudinit.readthedocs.io/en/latest/topics/examples.html
cloudinit.readthedocs.io/en/latest/topics/boot.html
unix.stackexchange.com/questions/542343/docker-service-how-to-edit-systemd-service-file

Adding (free) storage to a small-disk VPS with Backblaze B2, Cloudflare and rclone

Small-disk boxes often run out of space: we can bring Backblaze B2's 10GB of free storage onto the box.
You will need:

  • A phone number and an Email address that can receive verification codes
  • (Optional) A top-level domain, for registering with Cloudflare

Background

Rclone

Rclone's main job is syncing files. It supports a wide variety of exotic backends and a lot of exotic features - including mounting remote storage as a FUSE filesystem.
Using a method similar to this post, you can also mount Google Drive, OneDrive, Dropbox, remote big-disk boxes and so on as a filesystem.

Backblaze B2

Backblaze's servers are in the western US. There's only one data center, but it's cheap: 5.12 USD per TB of storage per month.

Why freeload on B2 instead of other storage?

  1. Backblaze has joined Cloudflare's Bandwidth Alliance, so download traffic through Cloudflare is free; uploads are not charged either.

  2. Backblaze B2 is enterprise-grade storage: eight nines of durability and three nines of availability. It won't delete your files or rate-limit API requests like consumer cloud drives do.

  3. Every account comes with 10GB of free storage, 1GB of download traffic per day and 2,500 download requests per day. Plenty for freeloading.

Setup

Backblaze B2

Register for Backblaze B2

Go to www.backblaze.com/b2/sign-up.html to register.
After registering, log in with your Email and password: the first login requires binding a phone number. Use whatever works for you. +86 numbers should work.

Create a bucket

At https://secure.backblaze.com/b2_buckets.htm, click "Create a Bucket".

  • Bucket Unique Name: the bucket's name. Must be globally unique; letters and digits only.
  • Files in Bucket are: public or private bucket. Private is fine, unless you don't mind your files being public.
  • Object Lock: locks files against deletion for a period of time. No need to enable it.

Note down the bucket name. Click the blue Create a Bucket button to create the bucket.
Like all object storage, B2 keeps every version of a file by default. To change this, click the bucket's Lifecycle Settings:

  • Keep all versions of the file (default): keeps every version
  • Keep only the last version of the file: keeps only the latest version. I use this one to save space.
  • Keep prior versions for this number of days: keeps old versions for X days.
  • Use custom lifecycle rules: customize, per file name prefix, after how many days files are hidden and deleted.

Create an App Key

The default key cannot be used: create a new key at https://secure.backblaze.com/app_keys.htm.
Click Add a New Application Key:

  • Name of Key: a name, for your own reference
  • Allow access to Bucket(s): you can restrict the key to a single bucket.
  • Type of Access: read-only, write-only or read-write. Unless you know what you are doing, choose Read and Write.
  • File name prefix: only allows access to files with this prefix. Unless you know what you are doing, leave it empty.
  • Duration (seconds): the key's lifetime. Unless you know what you are doing, leave it empty.

Click Create New Key. You will see the newly created key: save it immediately, as this information is shown only once.
Note down the keyID and applicationKey for later.

Download endpoint

First, find your download endpoint: at https://tree-sac0-0001.backblaze.com/b2_browse_files2.htm, click the bucket you created.
Click Upload and upload a file - even a 1-byte file will do. After the upload succeeds, click the file.
You will see a number of links: find the Friendly URL and check its domain. For example, https://f002.backblazeb2.com/file/bntest/test.tgz means the endpoint is f002.backblazeb2.com.
Note down this endpoint.

Cloudflare

With my own domain (registered since 2012, and I intend to renew it forever) and CF Pro, I set up the following:

f000b2.cnbeining.com CNAME f000.backblazeb2.com
f001b2.cnbeining.com CNAME f001.backblazeb2.com
f002b2.cnbeining.com CNAME f002.backblazeb2.com
f003b2.cnbeining.com CNAME f003.backblazeb2.com

All of these records have Cloudflare proxying enabled.
If you want to use your own domain, follow the steps below to register with Cloudflare, add the domain and set up the CNAME. If you can't be bothered, just use the domains I provide - take the f00*.backblazeb2.com endpoint you noted above and pick the matching CNAME. In practice I should not be able to see the exact URLs and data you access through Cloudflare.
Note down the new endpoint domain.

(Optional) Register and add a domain

If you want to use your own domain:
Register at https://dash.cloudflare.com/sign-up.
Official guide: https://support.cloudflare.com/hc/zh-cn/articles/201720164-%E5%88%9B%E5%BB%BA-Cloudflare-%E5%B8%90%E6%88%B7%E5%B9%B6%E6%B7%BB%E5%8A%A0%E7%BD%91%E7%AB%99

Set up the CNAME

You noted the endpoint domain earlier when configuring B2.
In Cloudflare, go to DNS and add a CNAME record pointing to that endpoint domain. Make sure Cloudflare proxying is turned on!
Note down the new domain you set up, e.g. f002b2.cnbeining.com

Rclone

Installation

Rclone's installation guide is at https://rclone.org/install/.
The easiest way on a Linux box is to run curl https://rclone.org/install.sh | sudo bash
On a Windows box: find the package at https://rclone.org/downloads/, download and unzip it. Replace every rclone command below with rclone.exe.
On macOS: brew install rclone.

Configuration

Run rclone config in your terminal:

Current remotes:
Name                 Type
====                 ====
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Type n, then Enter.
Enter a nickname for the backend, e.g. b2. Press Enter.
Select Backblaze B2:

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
...
 5 / Backblaze B2
   \ "b2"

Find Backblaze B2 and type its number - here that's 5. Press Enter.
Refer to the App Key you created above: account is the keyID, and the application key is the applicationKey.
hard_delete chooses between hard delete (removing files for good) and soft delete (marking them hidden). I choose true to save space.
At Edit advanced config? (y/n), press y to enter the advanced settings.
Just keep pressing Enter to accept the defaults, unless you know what you are doing.

Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
This is probably only useful for a public bucket.
Leave blank if you want to use the endpoint provided by Backblaze.
Enter a string value. Press Enter for the default ("").
download_url>

Enter the new endpoint domain here: in my case, f002b2.cnbeining.com
Keep pressing Enter for the rest.
Give it a test:
rclone ls b2:<your_bucket_name> - you should see the contents of the bucket.

Mounting

Create a directory on the small-disk box to mount the remote storage into, e.g. sudo mkdir /b2
Start the mount: rclone mount b2:/<your_bucket_name> /b2 &
Now you can use the /b2 directory as if it were local. Heavy read/write workloads (creating temporary files, etc.) are not recommended, but everyday use is fine.
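
If you want the mount to come back automatically after a reboot, a minimal systemd unit sketch could look like the following - the unit name, the rclone path and the bucket placeholder are assumptions, adjust them to your box:

# /etc/systemd/system/rclone-b2.service
[Unit]
Description=rclone mount for Backblaze B2
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount b2:/<your_bucket_name> /b2 --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /b2
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now rclone-b2.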

Hacking SearX Docker, Nginx and Cloudflare together the wrong way

SearX is a great meta search engine that aggregates multiple engines' results, giving you privacy while searching.
A list of public instances can be found at searx.space/ - however, it's not possible to know what logging those public instances do. Some public instances use Cloudflare, which is OK - but some tend to set the sensitivity too high, which ruins the experience. Note that Cloudflare can see everything - but for personal use you do need it to stop bots.
A better solution is to create your own instance and share it with your friends. The sharing step is as important as the setup - otherwise it's effectively the same as using a single proxy. But think twice before setting up a public instance unless you know what you are doing.
SearX has an official Docker Compose repo at github.com/searx/searx-docker - but I am already running Nginx on 443, so I need to hack the setup to make my current setup work with the new containers. Make sure you read github.com/searx/searx-docker#what-is-included- and understand which part is for what.
Grab that repo, edit the .env file as instructed, and run ./start.sh once. Don't worry about issues: we will hack through them.

Hacking Caddyfile

I should not be using Caddy alongside Nginx, but to make it work:

  1. Remove all morty related content
  2. If you want to use Cloudflare, hack Content-Security-Policy and add https://ajax.cloudflare.com/ to script-src 'self'; otherwise Rocket Loader won't work.

Hacking searx/settings.yml

You need to change the Morty-related settings at the end. Hardcode your Morty URL in, like https://search.fancy.tld/morty.

Hacking docker-compose.yml

  1. For Caddy, bind port 80 to another port, like 4180:80.
  2. For morty, limit port 3000 to localhost only.
  3. For searx, hardcode the morty-related URL in (see the sketch after this list).
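
Roughly, the docker-compose.yml changes above could end up as a fragment like this - the service names follow the searx-docker repo, and the MORTY_URL variable name is an assumption, so reuse whatever names your copy of the file already has:

  caddy:
    ports:
      - "4180:80"                 # keep Caddy off port 80; Nginx stays in front
  morty:
    ports:
      - "127.0.0.1:3000:3000"     # morty only reachable from localhost / Nginx
  searx:
    environment:
      - MORTY_URL=https://search.fancy.tld/morty   # hardcoded morty URL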

Hacking .env

  1. Put localhost:4180 in the host so Caddy won't take port 80 from Nginx.
  2. Use HTTP only. We will do SSL with Nginx.

Hacking rules.json

Remove the block deflate part if you need Cloudflare.

Hacking Nginx

Try this setup:

upstream searx {
  server localhost:4180;
  keepalive 64;
}
upstream morty {
  server localhost:3000;
  keepalive 64;
}
server {
    listen       80;
    listen       [::]:80;
    listen       443 ssl;
    listen       [::]:443 ssl;
    server_name  fancy.search.tld;
    ssl_certificate /etc/nginx/ssl/fancy.search.tld.pem;
    ssl_certificate_key /etc/nginx/ssl/fancy.search.tld.key;
    ssl_session_timeout 5m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    ssl_prefer_server_ciphers on;
    keepalive_timeout 70;
    ssl_session_cache shared:SSL:10m;
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    location / {
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Need this or morty will complain
        proxy_pass http://searx;
    }
    location /morty {
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Need this or morty will complain
        proxy_pass http://morty;
    }
}

Note you must use an upstream block for the reverse proxy, or morty will complain.
With all this setup you should have something more or less usable. Wait for the checker to finish for an optimized list of engines to enable - and note Qwant and DDG both use Bing results, while Startpage is watered-down Google.
If you want to set your SearX instance as the default search engine in Chrome: visit your site, go to Chrome's search engine settings (chrome://settings/searchEngines) and your engine should be selectable. You may need to change the URL.

Customize Microsoft Sculpt Ergonomic Desktop on macOS

Recently I got 2 sets of the Microsoft Sculpt Ergonomic Desktop Keyboard & Mouse Bundle (www.microsoft.com/accessories/en-ca/products/keyboards/sculpt-ergonomic-desktop/l5v-00002) for use at the office and at home.
Quick review:

  • The wrist rest for the keyboard is soft, but it gets dirty easily
  • Note the F keys are buttons rather than ordinary keys. Do not purchase if you need F keys often.
  • The keys are easy to type on
  • The buttons on the mouse are soft
  • Note the keypad is separate. If you need the keypad often, consider www.microsoft.com/en-ca/p/surface-ergonomic-keyboard/90pnc9ljwpx9?activetab=pivot%3aoverviewtab
  • Note that if you purchase the mouse and the keyboard separately you will end up with 2 USB-A dongles, while the full set only requires 1 dongle. You cannot split the set: the dongle is not reprogrammable.

Another version of this keyboard exists as www.microsoft.com/en-ca/p/surface-ergonomic-keyboard/90pnc9ljwpx9?activetab=pivot%3aoverviewtab
To get this keyboard & mouse working on macOS you will need the following software (all of it open source):

  • Karabiner-Elements
  • Mos
  • SensibleSideButtons

Steps:

  1. Flip the switch at the right corner of the keyboard to Fn
  2. Go to Karabiner-Elements.app, select the keyboard, and
    • switch the left Command and Option keys
    • map right_gui to mouse button 5 (Mouse buttons - button5)
    • remap F7~F9 to match the keyboard symbols
  3. Open Mos.app. Adjust the scrolling as needed. Maybe invert the scrolling.
  4. Open SensibleSideButtons.app. Enable it.

Result:

  • F keys map to the media keys
  • Command and Option keys match Mac keyboards
  • Scrolling is smoothed
  • The Back button on the mouse is back; the Windows key is forward (at least in Chrome)

Missing:

  • Calculator key: it does not show up in keyboard events
  • Double-tap is missing, since no mouse supports tapping except the Magic Mouse 2