I recently got an M1 MacBook and have been playing around with it a bit. Many open source projects still haven’t added macOS on ARM64 to their support matrix, so a few extra steps are required to get them working properly, and V8 is no exception. Here are the steps I took to get V8 building on an M1 MacBook; hopefully they help someone else on the Internet.

Update: as of some time in 2022, the depot_tools workarounds described below are no longer necessary.

Setting up the build environment

First, download depot_tools and bootstrap it as usual. Assuming you place all your projects under a $WORKSPACE_DIR (which is what I tend to do):

cd $WORKSPACE_DIR
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

# To make the depot_tools commands, e.g. fetch, available
export PATH=$WORKSPACE_DIR/depot_tools:$PATH
# Optionally, add this to your ~/.zshrc if you are using zsh, or the
# equivalent for your shell
echo "export PATH=$WORKSPACE_DIR/depot_tools:\$PATH" >> ~/.zshrc

# Bootstrap depot_tools
gclient
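To double-check that depot_tools is on your PATH before moving on, something as simple as this will do:

# Both should resolve to somewhere under $WORKSPACE_DIR/depot_tools
which gclient
which fetch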

Next, fetch V8:

# Create the parent folder for V8
mkdir $WORKSPACE_DIR/v8

# The following are necessary to make `gclient sync` work on macOS ARM64;
# otherwise you'd see vpython errors
cd $WORKSPACE_DIR/v8
echo "mac-arm64" > .cipd_client_platform
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# Optionally, add this to your ~/.zshrc if you are using zsh, or the
# equivalent for your shell
echo "export VPYTHON_BYPASS=\"manually managed python not supported by chrome operations\"" >> ~/.zshrc

fetch v8
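Note that fetch creates the actual checkout at $WORKSPACE_DIR/v8/v8. To bring it and its dependencies up to date later, the usual workflow is a git pull followed by a gclient sync:

cd $WORKSPACE_DIR/v8/v8

# Pull the latest V8 revision and sync all dependencies
git pull && gclient sync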

Creating V8 build configs

tools/dev/v8gen.py doesn’t seem to work on macOS ARM64 yet - it keeps putting target_cpu = "x64" into the config, which conflicts with the v8_target_cpu = "arm64" it generates when you pass arm64 as the architecture to it. So I ended up just creating the configs manually myself.
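For reference, here’s roughly what v8gen.py generated for me (the exact contents may vary) - note the two conflicting cpu settings:

# args.gn as emitted by v8gen.py for "arm64.release" (approximately)
is_debug = false
target_cpu = "x64"       # an x64 default sneaks in on an arm64 host
v8_target_cpu = "arm64"  # conflicts with target_cpu above

For debug builds: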

mkdir -p out.gn/arm64.debug/
cat > out.gn/arm64.debug/args.gn <<EOF
is_debug = true
target_cpu = "arm64"
v8_enable_backtrace = true
v8_enable_slow_dchecks = true
v8_optimized_debug = false
v8_target_cpu = "arm64"

v8_enable_trace_ignition = true
cc_wrapper = "ccache"
EOF

# Generate the build files
gn gen out.gn/arm64.debug
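As an aside, if you’re curious what other arguments are available, gn can list all of them for a build directory together with their documentation:

# List every supported build argument, with defaults and docs
gn args out.gn/arm64.debug --list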

For release builds:

mkdir -p out.gn/arm64.release/
cat > out.gn/arm64.release/args.gn <<EOF
dcheck_always_on = false
is_debug = false
target_cpu = "arm64"
v8_target_cpu = "arm64"

cc_wrapper = "ccache"
EOF

# Generate the build files
gn gen out.gn/arm64.release

v8_enable_trace_ignition = true (which gives you a nice trace when you pass --trace-ignition to d8) and cc_wrapper = "ccache" (which enables ccache integration; see Chromium’s guide on how to use ccache) are what I tend to use myself, but neither is a must. For optimized debug builds, you just need to turn on v8_optimized_debug and tweak the other configs as you see fit.
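As a concrete example, here’s a sketch of what such an optimized debug config could look like - the arm64.optdebug directory name is just my own convention, and you may want a different set of flags:

mkdir -p out.gn/arm64.optdebug/
cat > out.gn/arm64.optdebug/args.gn <<EOF
is_debug = true
target_cpu = "arm64"
v8_target_cpu = "arm64"
v8_enable_backtrace = true
v8_optimized_debug = true
cc_wrapper = "ccache"
EOF

# Generate the build files
gn gen out.gn/arm64.optdebug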

Building V8

As explained earlier, I usually use ccache when building V8, so I’d first do:

export CCACHE_CPP2=yes
export CCACHE_SLOPPINESS=time_macros

# Optionally, add this to your ~/.zshrc if you are using zsh, or the
# equivalent for your shell
echo "export CCACHE_CPP2=yes" >> ~/.zshrc
echo "export CCACHE_SLOPPINESS=time_macros" >> ~/.zshrc

And then, per the instructions in the Chromium ccache guide, prepend Chromium’s bundled clang directory to $PATH when running ninja to build:

PATH=`pwd`/third_party/llvm-build/Release+Asserts/bin:$PATH ninja -C out.gn/arm64.release
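To verify that ccache is actually kicking in, its statistics should show hits accumulating when you rebuild:

# Show ccache hit/miss statistics
ccache -s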

Just to check that it’s working:

python2 tools/run-tests.py --outdir=out.gn/arm64.release --quickcheck
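And for a quicker smoke test, the freshly built d8 should be able to evaluate something trivial:

# -e evaluates the given script string
out.gn/arm64.release/d8 -e 'print("hello from arm64")'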