Invisible Node.js Setup with Docker
aryan02420
Last year I replaced my system package manager with Docker. It was a game-changer, but one challenge stood out: setting up tools so they work as if they were installed natively.
This is how I interact with node:

```shell
docker run -it --rm \
  -v "${PWD}:/workspace" \
  node:16-alpine
```
There are a few problems with this approach:

- The command is very verbose. Even for basic commands like `npm install` and `npm run dev`, I need to type a lot of docker boilerplate.
- With the `--rm` flag, a new container is created every time I run this command. I lose access to changes made inside the container, for example the repl history.
- If I don't use the `--rm` flag, I need to worry about stopping the containers, otherwise I run out of memory.
- Cold start on every command.
- Throwing away muscle memory and relearning new command syntax. This would also mean I would struggle with regular setups.
I can also use the shell inside the container directly, but I enjoy looking at my pretty prompt. I also cannot live without zsh tab suggestions.
First I need to figure out how my container should be configured and what it can access.
Start inside the project folder by default, using the `-w` or `--workdir` flag:

```shell
-w "/workspace"
```
When creating files inside the container, I need them to have the correct owner so I can also access them from the host. I can supply the user via the `-u` or `--user` flag. The `node` user is predefined in the image:

```shell
-u "node"
```
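One caveat worth adding (my note, not part of the original setup): the `node` user in the official image has UID/GID 1000. If your host user has different IDs, a common variant is to pass them explicitly so ownership still lines up:

```shell
# the node user in the official image is UID/GID 1000; if your host IDs
# differ, pass your own so files created in the container stay editable
docker run -it --rm \
  -u "$(id -u):$(id -g)" \
  -v "${PWD}:/workspace" \
  -w /workspace \
  node:16-alpine npm --version
```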
I need to preserve my repl history. I can simply mount the history file that would have been created during a regular installation:

```shell
-v "${HOME}/.node_repl_history:/home/node/.node_repl_history"
```
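One gotcha I'd add here (a note on Docker's bind-mount behavior, not from the original post): if the host file doesn't exist yet, Docker creates a *directory* at that path when it mounts, which breaks the history mount. So create the file up front:

```shell
# if this file is missing, docker creates a directory in its place at
# mount time, so make sure it exists before the first run
touch "${HOME}/.node_repl_history"
```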
I might want to publish a package on npm or download private packages. Same story:

```shell
-v "${HOME}/.npmrc:/home/node/.npmrc"
```
I still can't download packages or make `fetch` calls from the repl. The easiest way is to reuse the host network:

```shell
--network host
```
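A nice side effect (an illustration of mine, not from the original post): with `--network host` the container shares the host's network namespace, so a dev server started inside it is reachable at `localhost` on the host without any `-p` port publishing:

```shell
# no -p flags needed: ports bound inside the container are bound on the host
docker exec -it node npx http-server -p 8080   # hypothetical dev server
# ...then, from the host:
curl http://localhost:8080
```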
Easy stuff done. Maybe I can run a container in the background and execute each command individually inside it?
Have a container running in the background with the project mounted, doing nothing. The `-d` flag is for "detached" mode. We need to give the container a name so we can refer to it during `exec`. The last two lines just run `tail -f /dev/null`, which keeps the container running in the background doing nothing; it is equivalent to a `while (1)` loop without consuming the CPU.

```shell
docker run -it -d \
  -u "node" \
  -v "/some/awesome/project:/workspace" \
  -v "${HOME}/.npmrc:/home/node/.npmrc" \
  -v "${HOME}/.node_repl_history:/home/node/.node_repl_history" \
  -w "/workspace" \
  --name node \
  --network host \
  --entrypoint tail \
  node:16-alpine -f /dev/null
```
Executing commands inside the container:

```shell
alias node='docker exec -it node node'       # run the node repl
alias npm='docker exec -it node npm'         # npm install, npm publish, etc
alias npx='docker exec -it node npx'         # curl bashing with extra steps
alias node:shell='docker exec -it node sh'   # sometimes I need to debug
```
I can't remember the start command, and let's not hardcode the project path either. We also need commands to stop and restart the container when moving to a new project:

```shell
alias node:start='docker run -it -d \
  -u "node" \
  -v "${PWD}:/workspace/$(basename ${PWD})" \
  -v "${HOME}/.npmrc:/home/node/.npmrc" \
  -v "${HOME}/.node_repl_history:/home/node/.node_repl_history" \
  -w "/workspace/$(basename ${PWD})" \
  --name node \
  --network host \
  --entrypoint tail \
  node:16-alpine -f /dev/null'
alias node:stop='docker stop node; docker rm node; true'
alias node:restart='node:stop; node:start'
```
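Putting it together, a typical session might look like this (an illustrative sketch, assuming the aliases above are loaded; `my-app` is a made-up project):

```shell
cd ~/projects/my-app   # hypothetical project
node:start             # spin up the background container for this project
npm install            # runs inside the container via the alias
node                   # repl, with history persisted on the host
node:stop              # tear down when done
```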
Limitations:

- You can only execute commands from the project's root. If you `cd` in the host shell, the workdir inside the container will still be the root path.
- You have a single global node container. You won't be able to work on two projects simultaneously.
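The first limitation could be softened with `docker exec`'s `-w` flag, computing the container workdir from the host's current directory at call time. A sketch of mine, assuming the mount layout from `node:start` and that the project is a git repo (`nexec` is a name I made up):

```shell
# hypothetical: map the host cwd onto the matching path inside the container.
# ${PWD} is mounted at /workspace/<project>, so subdirectories line up.
nexec() {
  local root rel
  root="$(git rev-parse --show-toplevel)"   # host-side project root
  rel="${PWD#"$root"}"                      # path relative to that root
  docker exec -it -w "/workspace/$(basename "$root")${rel}" node "$@"
}
alias node='nexec node'
alias npm='nexec npm'
```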
Enhancements:

- We can create new containers using `node:start`, named `node` plus the project path converted to a slug. We can then define a new alias, `node:attach`, that sets a `$NODE_CONTAINER` env variable; running `node` will then exec inside the container whose name is stored in `$NODE_CONTAINER`.
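A minimal sketch of that idea (names like `slug` and `NODE_CONTAINER` are mine; the original only describes it):

```shell
# turn the project path into a container-name-safe slug
slug() { printf '%s' "$1" | tr -cs 'a-zA-Z0-9' '-'; }

# per-project container name: /home/me/proj -> node-home-me-proj
alias node:attach='export NODE_CONTAINER="node$(slug "${PWD}")"'
# exec inside whichever container node:attach selected
alias node='docker exec -it "${NODE_CONTAINER}" node'
```

`node:start` would similarly get `--name "node$(slug "${PWD}")"` in place of the fixed `--name node`.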
Edit: 2024-06-24

`nix-shell` is a much better solution.
P.S. A little horror story: while trying to sell Docker to a friend, I completely forgot I had a container running in the background with a different project directory mounted. I ended up accidentally `rm -rf`ing my project while trying to convince him that Docker keeps you safe from this exact scenario. 😅