Invisible Node.js Setup with Docker

#docker #nodejs

aryan02420

Last year I replaced my system package manager with Docker. It was a game-changer, but one challenge stood out: setting up tools to work as if they were installed natively.

This is how I interact with node.

docker run -it --rm \
  -v "${PWD}:/workspace" \
  node:16-alpine

There are a few problems with this approach:

  1. The command is very verbose. Even for basic commands like npm install and npm run dev, I need to type a lot of Docker boilerplate (see the example after this list).

  2. With the --rm flag, a new container is created every time I run this command. I lose access to changes made inside the container, for example the REPL history.

  3. If I don't use the --rm flag, I need to worry about stopping the containers, otherwise I run out of memory.

  4. Cold start on every command.

  5. Throwing away muscle memory and learning a new command syntax. This would also mean I would struggle with regular, non-Docker setups.

  6. I could also use the shell inside the container directly, but I enjoy looking at my pretty prompt, and I cannot live without zsh tab suggestions.
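
To see the first problem in action: a plain npm install ends up looking roughly like this (I even have to add a -w flag so npm runs inside the mounted folder instead of /):

    docker run -it --rm \
      -v "${PWD}:/workspace" \
      -w "/workspace" \
      node:16-alpine npm install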

First I need to figure out how my container should be configured and what it can access.

  1. Start inside the project folder by default, using the -w or --workdir flag.

    -w "/workspace"
  2. When creating files inside the container, I need them to have the correct owner so I can also access them from the host. I can supply the user via the -u or --user flag. The node user is predefined in the image.

    -u "node"
  3. I need to preserve my REPL history. I can simply mount the history file that would have been created by a regular installation.

    -v "${HOME}/.node_repl_history:/home/node/.node_repl_history"
  4. I might want to publish a package on npm or download private packages. Same story.

    -v "${HOME}/.npmrc:/home/node/.npmrc"
  5. I still can't download packages or make fetch calls from the REPL. The easiest way to fix that is to reuse the host network.

    --network host
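
Putting all of that together, the one-off command from the top becomes roughly:

    docker run -it --rm \
      -u "node" \
      -v "${PWD}:/workspace" \
      -v "${HOME}/.npmrc:/home/node/.npmrc" \
      -v "${HOME}/.node_repl_history:/home/node/.node_repl_history" \
      -w "/workspace" \
      --network host \
      node:16-alpine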

Easy stuff done. Maybe I can run a container in the background and execute each command individually inside it?

  1. Have a container running in the background with the project mounted, doing nothing. The -d flag is for "detached" mode. We need to give the container a name so we can refer to it during exec. The last two lines just run tail -f /dev/null, which keeps the container alive in the background doing nothing; it is equivalent to a while (1) loop that doesn't consume CPU.

    docker run -it -d \
      -u "node" \
      -v "/some/awesome/project:/workspace" \
      -v "${HOME}/.npmrc:/home/node/.npmrc" \
      -v "${HOME}/.node_repl_history:/home/node/.node_repl_history" \
      -w "/workspace" \
      --name node \
      --network host \
      --entrypoint tail \
      node:16-alpine -f /dev/null
  2. Executing commands inside the container.

    alias node='docker exec -it node node' # run the node repl
    alias npm='docker exec -it node npm' # npm install, npm publish, etc
    alias npx='docker exec -it node npx' # curl bashing with extra steps
    alias node:shell='docker exec -it node sh' # sometimes I need to debug
  3. I can't remember the start command, so let's alias that as well, and stop hardcoding the project path while we're at it. We also need commands to stop and restart the container when switching to a new project.

    alias node:start='docker run -it -d \
      -u "node" \
      -v "${PWD}:/workspace/$(basename ${PWD})" \
      -v "${HOME}/.npmrc:/home/node/.npmrc" \
      -v "${HOME}/.node_repl_history:/home/node/.node_repl_history" \
      -w "/workspace/$(basename ${PWD})" \
      --name node \
      --network host \
      --entrypoint tail \
      node:16-alpine -f /dev/null'
    alias node:stop='docker stop node; docker rm node; true'
    alias node:restart='node:stop; node:start'
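
With these aliases in my shell config, a session looks like a perfectly normal Node.js workflow:

    cd /some/awesome/project
    node:start      # one-time: spin up the background container
    npm install     # silently becomes a docker exec
    node            # REPL, with history surviving restarts
    node:stop       # tear down when done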

Limitations:

  1. You can only execute commands from the project's root. If you cd in the host shell, the workdir inside the container will still be the project root (a possible workaround is sketched after this list).

  2. You have a single global node container, so you won't be able to work on two projects simultaneously.
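
One untested sketch for the first limitation: have node:start also export the project root (the NODE_PROJECT_ROOT name is hypothetical), then replace the node alias with a function that passes a matching workdir to docker exec:

    # hypothetical sketch: make the container workdir follow the host cwd
    # assumes node:start also ran: export NODE_PROJECT_ROOT="${PWD}"
    node() {
      local rel="${PWD#${NODE_PROJECT_ROOT}}"  # cwd relative to the project root
      docker exec -it \
        -w "/workspace/$(basename "${NODE_PROJECT_ROOT}")${rel}" \
        node node "$@"
    }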

Enhancements:

  1. We can create per-project containers with node:start, naming each one after its project path converted to a slug. We can then define a new alias, node:attach, that sets a $NODE_CONTAINER environment variable; running node would then exec inside the container named in $NODE_CONTAINER.
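
A minimal sketch of that idea, using just the directory name where a proper path slug would go (all names here are illustrative):

    # hypothetical sketch: one container per project, named after the directory
    alias node:start='docker run -it -d \
      -u "node" \
      -v "${PWD}:/workspace/$(basename ${PWD})" \
      -w "/workspace/$(basename ${PWD})" \
      --name "node-$(basename ${PWD})" \
      --network host \
      --entrypoint tail \
      node:16-alpine -f /dev/null'
    alias node:attach='export NODE_CONTAINER="node-$(basename ${PWD})"'
    alias node='docker exec -it "${NODE_CONTAINER}" node'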

Edit: 2024-06-24
nix-shell is a much better solution.
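
For comparison, most of the setup above collapses into a single command (nodejs here being the nixpkgs package name):

    # drop into a shell with node on the PATH, no containers involved
    nix-shell -p nodejs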

P.S. A little horror story: While trying to sell Docker to a friend, I completely forgot I had a container running in the background with a different project directory mounted. I ended up accidentally rm -rfing my project while trying to convince him that Docker keeps you safe from this exact scenario. 😭