Hacking Outline Wiki: Making Slack login work alongside other methods, like OIDC

Slack login is required for the Slack integration, but sometimes we want to use OIDC for login: out of the box, this combination makes the application vomit.
Steps to fix it:

  1. Get a Postgres client and connect to the DB. Back up the DB first.

  2. Work out the email domain you will use with OIDC: blah [at] cnbeining.com gives cnbeining.com; test@gmail.com gives gmail.com.

  3. Fill in all environment variables required for the OIDC integration. Remove the ALLOWED_DOMAIN variable - otherwise Outline will not allow logins from emails whose domain differs from its own.

  4. Create a new entry in the authentication_providers table: name is always oidc, domain is the domain of your email, enabled is true, and teamId is the same as the original entry's. Note that self-hosted Outline can only have 1 team at a time.

  5. Go to the teams table. The name is ALWAYS HARDCODED as Wiki, and domain is empty. Failing to do so will cause Outline to complain "maximum number of teams reached".

  6. Your new OIDC login should now work, but users are not yet associated across authentication providers by email.

  7. In the user_authentications table, create a second entry for each user you want to associate: a random id, the same userId, and authenticationProviderId, scopes and providerId copied from the new provider's entry. Delete the newly created but disassociated user.

  8. You should now have more than 1 login method, with users associated across the board by email.
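As a sketch, steps 2, 4 and 7 above in Python: the domain is just the part after the @, and the DB edits are plain INSERTs. The column names follow the steps above but are unverified against the live Outline schema, and everything in angle brackets is a placeholder you fill in yourself:

```python
import uuid

def email_domain(email: str) -> str:
    """Step 2: the domain Outline matches on is the part after the '@'."""
    return email.rsplit("@", 1)[1]

team_id = "<existing-team-id>"   # step 4: reuse the original team's id
user_id = "<existing-user-id>"   # step 7: the user to associate

# Step 4: new entry in authentication_providers (column names per the steps above).
new_provider_sql = f"""
INSERT INTO authentication_providers (id, name, domain, enabled, "teamId")
VALUES ('{uuid.uuid4()}', 'oidc', '{email_domain("blah@cnbeining.com")}', true, '{team_id}');
"""

# Step 7: a second user_authentications row pointing at the new provider.
link_user_sql = f"""
INSERT INTO user_authentications
    (id, "userId", "authenticationProviderId", scopes, "providerId")
VALUES ('{uuid.uuid4()}', '{user_id}',
        '<new-auth-provider-id>', '<scopes-from-new-entry>', '<new-providerId>');
"""

print(new_provider_sql)
print(link_user_sql)
```

Run the generated statements in the psql session from step 1, after double-checking them against your own schema.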

Free personal wiki with Outline on Heroku with personal domain via Cloudflare

Outline what?

Outline is a nice personal wiki tool that

  • Supports Markdown natively
  • Self hosting option
  • Fast
  • Nice searching function
  • Not overly complicated
  • Supports live collaboration

making it great for hosting a personal wiki.
However, there's no official free hosted version of Outline - but it's pretty simple to host on Heroku for free.



Register an account and bind a credit card: this gives you 1,000 hours of running time per month, which is good enough. A worker dyno is also needed, so you end up with 500 usable hours per month.


A Slack app is required for authentication and integration.
Go to api.slack.com/apps and create an app - in a workspace you feel comfortable with.

Other authentication

OIDC is supported, but I have not tried it - Auth0 and Okta both have free plans that support OAuth.


S3 is needed for rich media storage.
There are a couple of free providers:

  • Backblaze B2: 10GB space, 1GB download traffic per day. Technically should work with Cloudflare but I have not figured out how.
  • Scaleway: 75GB space, 75GB/month traffic. DC located in EU.
  • IBM: 25GB space, 5GB/month traffic.
  • Storj DCS: 150GB space, 150GB traffic.

Create an account and collect all the details.


Get something that supports SMTP; several providers have free tiers.

Create an account, verify the domain, and collect the SMTP config.

(optional) Cloudflare

Use Cloudflare to use your own domain. A free plan is good enough, but you may need a Page Rule to override SSL settings.

(optional) GitHub account

Use this to update the application in the future, or to configure auto-update.
Fork the repo;
install wei/pull and enable it for your forked repo so your personal fork stays up to date.


Instructions as follows:

Get it running first

Click here to deploy the application. Follow the instructions and put in the environment variables as requested. Generate UTILS_SECRET with openssl rand -hex 32. Set URL to https://xxx.herokuapp.com for now.
Do not bother with Google OAuth - you need a paid account to get it working.
You should have a working app: try logging in. If in doubt, check logs under More.
Remember to enable workers at Configure Dynos under Overview.
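The UTILS_SECRET above is just 32 random bytes in hex; if openssl is not at hand, Python's standard library produces the equivalent:

```python
import secrets

# Equivalent of `openssl rand -hex 32`: 32 random bytes as 64 hex characters.
utils_secret = secrets.token_hex(32)
print(utils_secret)
```

Paste the output into the UTILS_SECRET environment variable.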

Cloudflare setup

The official documentation is not very clear - TL;DR version:

  • Add your desired domain to Heroku; don't bother with Heroku's SSL - it's a paid add-on
  • Go to Cloudflare and add a CNAME record from that domain to xxx.herokuapp.com - NOT the record Heroku gave you!
  • If your SSL setting is NOT Strict, add a Page Rule to override it;
  • Change URL in the Heroku config to the new domain under Settings.

Now you should have access on the new domain. It may take a while for Cloudflare to issue the new SSL certificate.

Setting up app update

Go to Deploy, connect your GitHub, select your fork.
Enable auto-update as you wish (a de facto nightly build), or conduct manual updates.

Database backup (for Heroku Postgres)

It seems that there's no way to set this up in the UI:

  • Go to data.heroku.com/ and click into the Postgres DB
  • Settings - View credentials
  • Copy the Heroku CLI command, something like heroku pg:psql postgresql-blah-12345 --app outline-yourname
  • In a local shell with heroku set up, run heroku pg:backups:schedule postgresql-blah-12345 --at '02:00 America/Los_Angeles' --app outline-yourname, changing the time to your preference.

Limitations of the free DB:

  • Only 1 backup per day
  • No point-in-time recovery
  • Only the last 2 backups are preserved

Note: Migrate WordPress by hand


OpenVZ - no Docker this time.

  • install Nginx, PHP 7.4, and plugins: www.cloudbooklet.com/upgrade-php-version-to-php-7-4-on-ubuntu/
  • install MySQL
    • set a root password
    • make sure the root password is indeed working, and wrong passwords are rejected
    • restore the backup
  • copy all files to new box
  • chown to www-data:www-data
  • In wp-config.php:
    • disable cache
    • change DB
  • In wp-content:
    • delete advanced-cache.php
  • In Nginx conf:
    • revert to no cache version

If it errors out:
- Test with a phpinfo() file
- If 502, check the php-fpm and Nginx connection
- Test with a fresh WordPress install
- If 500, check the DB

  • Rebuild cache

Using Docker and Docker-Compose on macOS with Multipass

In light of the recent news that
Docker Desktop for Windows and macOS is no longer free, efforts have been put into searching for a replacement.

Docker Desktop?

Docker Desktop used to be Docker Toolbox, which was based on VirtualBox. Docker Desktop can use Hyperkit on macOS, which makes performance much better than a Type II hypervisor.
In my mind a good in-place replacement should be:

  • Runs single Dockerfile
  • Supports volume mounting and port forwarding
  • Runs docker-compose.yml
  • Easy to setup
  • Supports Windows and macOS (Docker Engine on Linux remains open source)
  • Works with built-in debugger for common IDEs(IntelliJ & VSCode)
  • Works with IDE's built-in support that's based on Docker socket
  • Supports ARM Mac

Commonly proposed replacements include:

  • Podman: a Red Hat-backed solution that runs on OCI. Spins up a Podman-flavored VM. Very limited support for Docker-Compose. Probably requires reworking your Dockerfile. Volume mounting is painful. Cannot be used with an IDE's built-in support.
  • Microk8s & k3s: Kubernetes. Won't work with Docker-Compose.


This is where Multipass comes to shine:

  • Uses Hyperkit, Hyper-V, or VirtualBox for the best performance
  • Handles volume mounting nicely
  • Easy to setup
  • Networking is easy
  • Volume mounting is simple – at least when using Hyperkit
  • Native ARM support

Downsides include:

  • Ubuntu. Not even Debian. Bad news for CentOS lovers.
  • Overlapping functionality with Vagrant. Can't deny that.

The actual setup

Steps are based on macOS Catalina, 10.15.7.

Install Multipass

Download the installer
here. You DO NOT need to set the driver to VirtualBox – leave it as is and Hyperkit will be used by default!
You can also try running brew install multipass.
Make sure you give multipassd Full Disk Access – otherwise mounting will fail!

Create and setup your instance

I am calling my instance fakedocker.
Note the default disk size is merely 5GiB – not enough for setting up Docker. To give it a bit more space:
multipass launch -n fakedocker -d 20G
Use -c to add more CPU cores, -m to add more RAM. Documentation

One-liner with cloud-init

You may want to change Docker-Compose's version to the latest.
Save this file as cloud-init.yaml, and run:
multipass launch -n fakedocker -d 20G --cloud-init cloud-init.yaml

apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        -----END PGP PUBLIC KEY BLOCK-----
package_update: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - avahi-daemon
  - libnss-mdns
runcmd:
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose
  - sudo systemctl daemon-reload
# create the docker group
groups:
  - docker
# Add default auto created user to docker group
system_info:
  default_user:
    groups: [docker]
write_files:
  - content: |
      [Service]
      # the default is not to use systemd for cgroups because the delegate issues still
      # exists and systemd currently does not support the cgroup feature set required
      # for containers run by docker
      ExecStart=
      ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
      ExecReload=/bin/kill -s HUP $MAINPID
    path: /lib/systemd/system/docker.service.d/overwrite.conf
    owner: root:root
    permissions: '0644'
power_state:
  mode: reboot
  message: Restarting after configuring Ubuntu for Docker and Docker-Compose

If you have any problems, run multipass shell fakedocker to SSH into the VM. Logs are located at /var/log/cloud-init.log.

Manually set up your instance

Create VM

Use multipass launch -n fakedocker -d 20G to create the instance.
Run multipass shell fakedocker to get into the instance.

Setup Docker and other stuff

Follow this guide to install Docker: docs.docker.com/engine/install/ubuntu/
Also this post-install guide to setup groups: docs.docker.com/engine/install/linux-postinstall/
For installing Docker-Compose: docs.docker.com/compose/install/
Run sudo apt-get install -y avahi-daemon libnss-mdns to install Avahi and enable Bonjour, so you can access the box at fakedocker.local. This will come in handy in the following setup.
Edit Docker's systemd config to expose both a TCP port and the Unix socket: TCP is for the remote debugger on your local machine, the socket is for running Docker commands locally.
Create a folder at /lib/systemd/system/docker.service.d/ and add a file overwrite.conf with the following content:

      [Service]
      # the default is not to use systemd for cgroups because the delegate issues still
      # exists and systemd currently does not support the cgroup feature set required
      # for containers run by docker
      ExecStart=
      ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
      ExecReload=/bin/kill -s HUP $MAINPID


Then run:

sudo chmod 666 /var/run/docker.sock
sudo systemctl daemon-reload
sudo systemctl restart docker.service

to make sure that the docker command works on the VM.

Mounting local folder

Run multipass mount ~ fakedocker to attach your home directory to the new VM. This way you don't need to work out where things end up mounted.

Quick sanity test

On fakedocker VM,

  • Run the docker ps command: Docker should be able to run. You may need to log out and log back in for the group change to take effect.
  • Run nc -zv localhost 2375 to make sure that Docker Engine is taking traffic over TCP.
  • Run ls /Users/<your_username> to make sure that volume attaching was successful: use cat to read a file, and write something random to make sure the attached volume is readable and writable.

On your local machine:

  • do a nc -zv fakedocker.local 2375 to make sure that your local debugger can communicate with the Docker instance on fakedocker.
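Both nc checks boil down to "can I open a TCP connection to that host and port". A small Python equivalent, if nc is not available (the fakedocker.local host and port 2375 are from the setup above):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, like `nc -zv`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_reachable("fakedocker.local", 2375)
```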

Now you've got a nice environment where you can use docker and docker-compose
as if you were on your local machine.

Setting up Docker on Mac

Run export DOCKER_HOST="tcp://fakedocker.local:2375" or put it in your ~/.zshrc to make the VM your default Docker host.
Follow docs.docker.com/engine/install/binaries/ to setup docker command locally, and github.com/docker/compose/releases to setup docker-compose.

Setting up remote debugger

I am using PyCharm with Docker-Compose, with an attached volume in docker-compose.yml, as an example:

Create a new Docker machine

PyCharm should be able to connect to this instance.

Setup a new Interpreter

Make sure that you set up Path mappings as shown: otherwise, the debugger won't be able to run.

Setup debug config

Listen on 0.0.0.0 - otherwise, you won't be able to visit the web service.
That should be it! Trigger debugging - if you are listening on port 8000, go to http://fakedocker.local:8000 to visit your site. Set up port forwarding in docker-compose.yml if you want to expose more services, like your database.

Common problems

Multipass VM stuck at "Starting"/ Multipass is stuck on any command

sudo pkill multipassd
and redo your last step. multipassd is a daemon that watches the VMs: killing this process does not change the state of the VMs. (So what's the point of making it a daemon?)

Multipass VM is botched

Run multipass delete <name_of_vm> && multipass purge to completely remove the VM. 

Multipass VM cannot read mounted folder on computer reboot

Reattach the volume:
multipass unmount ~ fakedocker && multipass mount ~ fakedocker.

Cannot connect to remote Docker

Make sure that you've edited Docker's startup command to include TCP listening on port 2375.

Need to debug issues when using cloud-init

Logs are located at /var/log/cloud-init.log.

docker command on VM won't run

  • Make sure that Docker's startup command also listens on Unix socket.
  • That socket file needs to be 666.

IDE cannot connect to debugger: cannot find debugger file (especially PyCharm)

Mount your project home into the VM (see Mounting local folder above).



Adding free space to a small-disk box with Backblaze B2 + Cloudflare + rclone

Small-disk boxes often run out of space: we can bring Backblaze B2's 10GB of free space onto the box.

  • A phone number that can receive SMS, and an email address
  • (optional) A top-level domain, for registering with Cloudflare



Rclone's main job is syncing files. Rclone supports many exotic backends and many exotic features - including mounting remote storage as a FUSE filesystem.
With a method similar to this post's, you can also mount Google Drive, OneDrive, Dropbox, a remote big-disk box, and so on as a filesystem.

Backblaze B2

Backblaze's servers are in the US West; it only has one data center, but it's cheap: 5.12 USD per TB of storage per month.


  1. Backblaze has joined Cloudflare's Bandwidth Alliance, so download traffic is free; uploads are not charged either.

  2. Backblaze B2 is enterprise-grade storage: eight nines of durability, three nines of availability. It won't delete your files or throttle API requests the way personal cloud drives do.

  3. Each account comes with 10GB of storage, 1GB of download traffic per day, and 2,500 download requests. Plenty for a free setup.


Backblaze B2

Sign up for Backblaze B2

Go to www.backblaze.com/b2/sign-up.html and register.


At https://secure.backblaze.com/b2_buckets.htm, click "Create a Bucket".

  • Bucket Unique Name: the bucket's name. Must be globally unique; letters and digits only.
  • Files in Bucket are: public or private bucket. Private is fine, unless you don't mind your files being public.
  • Object Lock: locks files against deletion for a period of time. No need to enable it.

Note down the bucket name, then click the blue Create a Bucket to create it.
Like all object storage, B2 keeps every version of a file by default: to change this, click the bucket's Lifecycle Settings

  • Keep all versions of the file (default): keep every version
  • Keep only the last version of the file: keep only the latest version. I use this to save space.
  • Keep prior versions for this number of days: keep old versions for X days.
  • Use custom lifecycle rules: custom per-prefix rules for when files are hidden and when they are deleted.

Create an App Key

The default key cannot be used: create a new key at https://secure.backblaze.com/app_keys.htm.
Click Add a New Application Key

  • Name of Key: a name, for your own reference
  • Allow access to Bucket(s): you can restrict the key to a single bucket.
  • Type of Access: read-only, write-only, or read-write. Unless you know what you are doing, choose Read and Write.
  • File name prefix: only allow access to files with this prefix. Unless you know what you are doing, leave it blank.
  • Duration (seconds): validity period. Unless you know what you are doing, leave it blank.

Click Create New Key. You will see the newly created key: save it immediately - this information is only shown once.


First, find your download endpoint: at https://tree-sac0-0001.backblaze.com/b2_browse_files2.htm, click the bucket you created;
click Upload and upload a file - even a 1-byte one will do. Once it's uploaded, click the file.
You will see a number of links: find the Friendly URL and note its domain. For example, https://f002.backblazeb2.com/file/bntest/test.tgz means f002.backblazeb2.com


With my own domain (registered since 2012; should stay renewed indefinitely) and CF Pro, I set up the following:

f000b2.cnbeining.com CNAME f000.backblazeb2.com
f001b2.cnbeining.com CNAME f001.backblazeb2.com
f002b2.cnbeining.com CNAME f002.backblazeb2.com
f003b2.cnbeining.com CNAME f003.backblazeb2.com

If you want to use your own domain, follow the flow below to sign up for Cloudflare, bind your domain and set the CNAMEs. If you can't be bothered, just use the domains I provide - from the f00*.backblazeb2.com endpoint you noted above, find the matching CNAME. In practice I (should) not be able to see the exact URLs and data you access through Cloudflare.
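The endpoint-to-CNAME mapping above is mechanical: take the fNNN host from the Friendly URL and substitute the proxied domain. A sketch (the fNNNb2.cnbeining.com naming scheme is the one from the table above):

```python
from urllib.parse import urlparse

def via_cloudflare(friendly_url: str, cname_suffix: str = "b2.cnbeining.com") -> str:
    """Rewrite a B2 Friendly URL (https://fNNN.backblazeb2.com/file/...)
    to the Cloudflare-proxied CNAME (https://fNNNb2.cnbeining.com/file/...)."""
    parts = urlparse(friendly_url)
    pod = parts.netloc.split(".")[0]  # e.g. "f002"
    return f"https://{pod}{cname_suffix}{parts.path}"

print(via_cloudflare("https://f002.backblazeb2.com/file/bntest/test.tgz"))
# https://f002b2.cnbeining.com/file/bntest/test.tgz
```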


Sign up at https://dash.cloudflare.com/sign-up.





Rclone's installation guide is at https://rclone.org/install/.
The simplest way to install on a Linux box is to run curl https://rclone.org/install.sh | sudo bash
On a Windows box: find the package at https://rclone.org/downloads/, download it, and unzip. Replace every rclone command below with rclone.exe.
macOS: brew install rclone.


Type rclone config on the command line:

Current remotes:
Name                 Type
====                 ====
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Choose Backblaze B2:

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 5 / Backblaze B2
   \ "b2"

Find Backblaze B2 and type its number - here, 5. Press Enter.
Referring to the App Key details recorded above: account is the keyID, and applicationKey is the Application Key.
Edit advanced config? (y/n): press y here to enter the advanced settings.

Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
This is probably only useful for a public bucket.
Leave blank if you want to use the endpoint provided by Backblaze.
Enter a string value. Press Enter for the default ("").

Run rclone ls b2:<your-bucket-name>. You should see the bucket's contents.


Create a directory on the small-disk box and mount the remote storage onto it, e.g. sudo mkdir /b2
Start the mount: rclone mount b2:/<your-bucket-name> /b2 &

Hacking SearX Docker, Nginx and Cloudflare together the wrong way

SearX is a great meta search engine that aggregates multiple engines' results, giving you privacy while searching.
A list of public instances can be found at searx.space/ - however, it's impossible to know what logging those public instances are doing. Some public instances use Cloudflare, which is OK - but some tend to set the sensitivity too high, which ruins the experience. Note Cloudflare can see everything - but for a public-facing instance you do need it to stop bots.
A better solution is to create your own instance and share it with your friends. The sharing step is as important as the setup - otherwise it's effectively the same as using a single proxy. But think twice before setting up a public instance unless you know what you are doing.
SearX has an official Docker Compose repo at github.com/searx/searx-docker - but I am already running Nginx on 443, so I need to hack the setup to make my current setup work with the new containers. Make sure you read github.com/searx/searx-docker#what-is-included- and understand which part is for what.
Grab the repo, edit the .env file as instructed, and run ./start.sh once. Don't worry about errors: we will hack through them.

Hacking Caddyfile

I should not be using Caddy behind Nginx, but to make it work:

  1. Remove all morty related content
  2. If you want to use Cloudflare, hack Content-Security-Policy and add https://ajax.cloudflare.com/ to script-src 'self'; otherwise Rocket Loader won't work.

Hacking searx/settings.yml

You need to change the Morty-related settings at the end. Hardcode your Morty URL in, like https://search.fancy.tld/morty .

Hacking docker-compose.yml

  1. For Caddy, bind 80 to another port, like 4180:80.
  2. For morty: limit port 3000 to localhost only.
  3. For searx: hardcode the Morty-related URLs in.

Hacking .env

  1. Put localhost:4180 in host so Caddy won't take port 80 from Nginx.
  2. Use HTTP only. We shall do SSL with Nginx.

Hacking rules.json

Remove the block deflate part if you need Cloudflare.

Hacking Nginx

Try this setup:

upstream searx {
  server localhost:4180;
  keepalive 64;
}
upstream morty {
  server localhost:3000;
  keepalive 64;
}
server {
    listen       80;
    listen       [::]:80;
    listen       443 ssl;
    listen       [::]:443 ssl;
    server_name  fancy.search.tld;
    ssl_certificate /etc/nginx/ssl/fancy.search.tld.pem;
    ssl_certificate_key /etc/nginx/ssl/fancy.search.tld.key;
    ssl_session_timeout 5m;
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    keepalive_timeout 70;
    ssl_session_cache shared:SSL:10m;
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    location / {
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Need this or morty will complain
        proxy_pass http://searx;
    }
    location /morty {
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Need this or morty will complain
        proxy_pass http://morty;
    }
}
Note you must use an upstream for the reverse proxy or morty will complain.
With all of this set up you should have something more or less usable. Wait for the checker to finish for an optimized list of engines to enable - and note that Qwant and DDG both use Bing results, while Startpage is watered-down Google.
If you want to set your SearX as the default search engine in Chrome: visit your site, go to Chrome's search engine settings, and your engine should be selectable. You may need to change the URL.
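For a manual browser entry, the query URL is just your instance with the search term in the q parameter. A sketch (search.fancy.tld is the placeholder host used throughout this post):

```python
from urllib.parse import urlencode

def search_url(base: str, query: str) -> str:
    """Build a SearX query URL of the form https://host/?q=<term>."""
    return f"{base}/?{urlencode({'q': query})}"

print(search_url("https://search.fancy.tld", "hello world"))
# https://search.fancy.tld/?q=hello+world
```

In the browser's search-engine settings, the same URL with %s in place of the term serves as the template.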

Customize Microsoft Sculpt Ergonomic Desktop on macOS

Recently I got 2 sets of Microsoft Sculpt Ergonomic Desktop Keyboard & Mouse Bundle (www.microsoft.com/accessories/en-ca/products/keyboards/sculpt-ergonomic-desktop/l5v-00002) for use in office and at home.
Quick review:

  • The wrist rest on the keyboard is soft, but easy to get dirty
  • Note the F keys are buttons rather than ordinary keys. Do not purchase if you need the F keys often.
  • The keys are easy to type on
  • The buttons on the mouse are soft
  • Note the keypad is separate. If you need the keypad often, consider www.microsoft.com/en-ca/p/surface-ergonomic-keyboard/90pnc9ljwpx9?activetab=pivot%3aoverviewtab
  • Note if you purchase the mouse and the keyboard separately you will have 2 USB-A dongles, while the set version only requires 1 dongle. You cannot split the set: the dongle is not reprogrammable.

Another version of this keyboard exists as www.microsoft.com/en-ca/p/surface-ergonomic-keyboard/90pnc9ljwpx9?activetab=pivot%3aoverviewtab
To get this keyboard & mouse working on macOS you will need the following list of software:

all of them are open source.

  1. Change the switch on the right corner of the keyboard to Fn
  2. Go to Karabiner-Elements.app, select the keyboard,
    • switch the left command and option keys
    • map right_gui to mouse5 (Mouse buttons - button5)
    • remap F7~F9 to match the keyboard symbols.
  3. Open Mos.app. Adjust the scrolling as needed. Maybe inverse the scrolling.
  4. Open SensibleSideButtons.app. Enable it.


  • F keys map to the media keys
  • Command and Option keys match Mac keyboards
  • Scrolling is smoothed
  • The back key on the mouse is back; the Windows key is forward (at least in Chrome)


  • Calculator key: it does not show up in keyboard events
  • Double-tap is missing, since no mouse supports tapping except the Magic Mouse 2

V2Ray WebSocket+TLS+Web+Nginx+CDN




  • SSL: freessl.cn
  • Domain: https://www.freenom.com , or find one yourself




  "inbounds": [
      "port": 10001,  # 本地端口,不冲突即可
      "listen":"", # V2Ray只监听本机
      "protocol": "vmess",
      "settings": {
        "clients": [
            "id": "d111702d-8604-4358-b1fa-xxxxxxxxx",
            "alterId": 64
      "streamSettings": {
        "network": "ws",
        "wsSettings": {
        "path": "/ray/" # 注意最后的反斜杠必须和Nginx一致
  "outbounds": [
      "protocol": "freedom",
      "settings": {}
server {
  server_name           {subdomain.domain.tld};
  listen ssl;
  listen [2001::]:443 ssl; # 让Nginx也监听v6
  ssl on;
  ssl_certificate       /etc/nginx/ssl/xxx.pem; # SSL证书位置
  ssl_certificate_key   /etc/nginx/ssl/xxx.key;
  ssl_protocols         TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers           HIGH:!aNULL:!MD5;
  location /ray/ {
        proxy_redirect off;
        proxy_pass; # 注意端口和上面一致
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Show realip in v2ray access.log
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host; # 必须有这条
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;


  "log": {
    "error": "",
    "loglevel": "debug",
    "access": ""
  "inbounds": [
      "listen": "",
      "protocol": "socks",
      "settings": {
        "ip": "",
        "userLevel": 0,
        "timeout": 0,
        "udp": false,
        "auth": "noauth"
      "port": "8091" # SOCKS代理本地端口
  "outbounds": [
      "mux": {
        "enabled": false,
        "concurrency": 8
      "protocol": "vmess",
      "streamSettings": {
        "wsSettings": {
          "path": "/ray/",# 注意和上面一致
          "headers": {
            "host": ""
        "tlsSettings": {
          "allowInsecure": true
        "security": "tls",
        "network": "ws"
      "tag": "",
      "settings": {
        "vnext": [
            "address": "上面的domain",
            "users": [
                "id": "和上面的UUID一样,
                "alterId": 64,
                "level": 0,
                "security": "auto"
            "port": 443 # Nginx监听的端口
  "dns": {
    "servers": [
  "routing": {
    "strategy": "rules",
    "settings": {
      "domainStrategy": "IPIfNonMatch",
      "rules": [
          "outboundTag": "direct",
          "type": "field",
          "ip": [
          "domain": [
  "transport": {}


  • If behind Cloudflare:
    • If you use a self-signed certificate (really unnecessary), SSL must be set to Flexible: otherwise CF reports a certificate error
    • If you use a proper SSL certificate, SSL must be set to Full: otherwise Nginx may serve an arbitrary site
    • If the domain is new: wait until the SSL certificate is issued before proceeding
    • Cloudflare's free certificate only covers first-level subdomains (*.domain.tld): for deeper subdomains, pay up or arrange SSL differently.
  • Debug with curl. Under no circumstances should the error code be 404.
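The curl check can also be scripted: request the WebSocket path and look at the status code; anything but 404 is acceptable (a 404 means Nginx is not routing /ray/ to V2Ray at all). A sketch using only the standard library:

```python
import urllib.error
import urllib.request

def path_status(url: str) -> int:
    """Return the HTTP status code for url, even for error responses."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

# e.g. assert path_status("https://subdomain.domain.tld/ray/") != 404
```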

HTTP2 with Caddy



https://domain.tld:2053 {
    root /usr/share/nginx/html/
    tls /etc/nginx/ssl/public.pem /etc/nginx/ssl/private.key { # or let Caddy get its own cert from Let's Encrypt
        curves p384
        key_type p384
    }
    proxy /v2ray https://localhost:12000 { # the local port V2Ray listens on
        header_upstream X-Forwarded-Proto "https"
        header_upstream Host "domain.tld"
    }
    header / {
        Strict-Transport-Security "max-age=31536000;"
        X-XSS-Protection "1; mode=block"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }
}

{
  "inbounds": [
    {
      "port": 12000, # the local port to listen on
      "listen": "127.0.0.1", # listen locally only
      "protocol": "vmess",
      "settings": {
        "clients": [
          {
            "id": "UUID",
            "alterId": 64
          }
        ]
      },
      "streamSettings": {
        "network": "h2",
        "httpSettings": {
          "path": "/v2ray",
          "host": ["domain.tld"]
        },
        "security": "tls",
        "tlsSettings": {
          "certificates": [
            {
              "certificateFile": "/etc/nginx/ssl/public.pem",
              "keyFile": "/etc/nginx/ssl/private.key"
            }
          ]
        }
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "freedom",
      "settings": {}
    }
  ]
}


  "inbounds": [
      "port": 8091,
      "listen": "",
      "protocol": "socks"
  "outbounds": [
      "protocol": "vmess",
      "settings": {
        "vnext": [
            "address": "域名",
            "port": 2053, # Caddy监听的端口
            "users": [
                "id": "同一个UUID",
                "alterId": 64
      "streamSettings": {
        "network": "h2",
        "httpSettings": {
          "path": "/v2ray",
          "host": ["域名"]
        "security": "tls"


  • Nginx cannot proxy HTTP/2 to the upstream: the author thinks it's unnecessary. Caddy is the only option.
  • If you want to use a CDN: many CDNs support HTTP/2 (e.g. Cloudflare), but what we need is HTTP/2 back to the origin. I have not found one that offers this yet.