A detailed look at a CRAE app (create-react-app and express)

I've been working on numerous web projects lately and have been shifting towards building everything purely in JavaScript. One project in particular has absorbed most of my attention, and it has matured quite a bit since my last blog post. I'll describe its architecture and my development environment, and discuss some of the pros and cons of my current approach to web development.

In my previous post, A set of tools for a freelance webdev workflow, I described my architecture while it was still in the earliest stages of development. I mentioned building the server and client as separate projects, in case I would want to migrate from Node.js/Express towards a different solution in later stages. Over time, my confidence in using JavaScript on the server side has grown, so I combined the two projects into one. The complexity of the application has grown as well, and I needed to be able to pinpoint the source of bugs accurately; the old separation often caused a lot of confusion there. In this post, we'll take an in-depth look at a sensible CRAE app.

Folder Structure

Below is a (slightly simplified) overview of my new folder structure.

project
│   server.js
│   README.md
│   .env
│   .gitignore
│   package.json
│   ...
└───routes
│   │   user.js
│   │   admin.js
│   │   products.js
│   │   ...
└───client
│   │   package.json
│   │   gulpfile.js
│   └───public
│   |   │   index.html
│   |   │   favicon.ico
│   |   │   ...
│   └───src
│       └───components
│       |   |   Header.js
│       |   |   App.js
│       |   |   ...
│       └───scss
│       └───fonts

The server code lives in the root of the project. The client folder contains the React frontend, which was generated with create-react-app (CRA). It still hasn't been ejected, and I'm really happy about that: I'd like to avoid any extra configuration steps and keep getting updates from the community (the CRA contributors are a talented bunch, and the repository is maintained by Facebook). I don't intend to eject until I'm absolutely forced to. See the CRA documentation for more information on ejecting.

Simplified Workflow

I've got a number of helpful scripts in my root package.json file.

yarn start boots up both the API server in debug mode and the CRA development server concurrently, in a single command. Pretty neat! Here's the full command:

package.json:

"start": "concurrently \"cd client && yarn start\" \"yarn debug\"",

Concurrently is an npm package that can be used to run multiple commands simultaneously in a single terminal.

The debug script looks like this:

"debug": "nodemon --inspect=0.0.0.0:9229 ./bin/www",

I'm using nodemon to start the server and watch for code changes, which will trigger a server restart. The --inspect flag starts a debugging session which I can attach to in my editor (Visual Studio Code). More on that later.

Proxying

A clever trick for this sort of setup is the proxy property in a CRA's package.json file.

client/package.json:

"proxy": {
    "/api": {
        "target": "http://localhost:3001",
        "secure": false
    }
},

This allows me to define the base URL for all my API calls as /api, avoiding CORS issues while simplifying routing. The port in the target URL should point to the server's port. Here's an example API call from the CRA to my server:

client/src/utils/api.js:

import axios from 'axios';

const BASE_URL = '/api';

// getAccessToken (defined elsewhere in the app) returns the Auth0 access token
function getItemById(id) {
  const url = `${BASE_URL}/item/${id}`;
  return axios
    .get(url, { headers: { Authorization: `Bearer ${getAccessToken()}` } })
    .then(response => response.data);
}

Now it helps to keep the server-side API endpoints neatly RESTful and well organized. I'm sending an Authorization header because I'm using Auth0 to authenticate users and secure my endpoints. I might write another post about client/server authentication and security later.

In my server-side code, I've defined routes like this:
app.js:

var items = require('./routes/items');
...
    app.use('/api/item', exposeDb, items);
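The exposeDb middleware itself isn't shown in this post, so here's a hedged sketch of what it might look like; the names and details are assumptions, not my actual code. The idea is to connect to MongoDB once at startup and then attach the handle to every request, so route handlers can reach it via req.mongoDb:

```javascript
// Sketch (assumed implementation): share one MongoDB handle with all routes.
let db = null; // assigned once at startup, e.g. in MongoClient.connect()'s callback

function exposeDb(req, res, next) {
  req.mongoDb = db; // handlers call req.mongoDb.collection(...)
  next();
}

module.exports = exposeDb;
```

This keeps the database connection out of the route modules entirely; they only ever see whatever is on the request object.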

I handle the route and fetch an item from MongoDB in routes/items.js:

router.get('/:id', authCheck, checkReadScopes, function(req, res, next) {
  const query = { _id: ObjectId(req.params.id) };
  req.mongoDb.collection('items').findOne(query, function(err, result) {
    if (err) return next(err); // hand database errors to the Express error handler
    res.json(result);
  });
});

authCheck and checkReadScopes are JWT middleware that process the access token to verify that the request is legitimate: that it comes from a real, authenticated client user with the correct access scopes. Again, I'll do another post on OAuth and security later.
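To make that a bit more concrete, here's a hedged sketch of how a middleware like checkReadScopes could work. This isn't my actual implementation, and the "read:items" scope name is an assumption for illustration; authCheck is assumed to have already verified the JWT and placed its decoded payload on req.user (the behaviour of older express-jwt versions):

```javascript
// Sketch (assumed implementation): check that the verified token's
// space-separated scope claim includes the scope this route requires.
function checkReadScopes(req, res, next) {
  const scope = (req.user && req.user.scope) || '';
  if (scope.split(' ').includes('read:items')) {
    return next(); // token carries the required scope, continue
  }
  res.status(403).json({ message: 'Insufficient scope' });
}

module.exports = checkReadScopes;
```

Because it's just another piece of Express middleware, it can be dropped into any route's handler chain, exactly like in the route above.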

Unified git history

This was one of the big upsides to combining the server and client projects. I have the full git history for the whole stack in one place. Google stores the vast majority of its billions of lines of code in a single repository, so that seems like the way to go. The video below shows my history of code changes since I started the project last Christmas.

Better debugging

Having all my code in one place makes it a lot easier to debug the project, especially since it has grown in complexity recently. I'm using Visual Studio Code with a number of helpful extensions. I'll list my favorites here:

My VSCode project configurations are very minimal, but they get the job done. The Chrome entry allows me to attach to a Chrome browser session, set breakpoints and debug the frontend code from within VSCode. The second entry allows me to debug the server side, by first starting the server in debug mode with my yarn debug script and then attaching to the debug session.

Again, here's my debug entry in package.json:

"debug": "nodemon --inspect=0.0.0.0:9229 ./bin/www",

vscode project config:
.vscode/launch.json

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Chrome",
            "type": "chrome",
            "request": "launch",
            "url": "http://localhost:3000",
            "webRoot": "${workspaceRoot}/src"
        },
        {
            "name": "Attach to node",
            "type": "node",
            "request": "attach",
            "protocol": "auto",
            "restart": true,
            "port": 9229
        }
    ]
}

Preparing for an eventual public release

I'm still using Heroku to host my application. Now I only need one dyno for the whole stack. When I push my application to Heroku, it automatically runs npm install (or yarn install) to install the root dependencies, followed by whatever build script is defined in package.json. You can also define postinstall, prebuild, postbuild, and so on, which run automatically before or after each step.

My postinstall script just cd's into the client-side app and installs the node dependencies there as well.

package.json:

"postinstall": "cd client && yarn install",

Remember to include your yarn.lock and client/yarn.lock files in git so that they get pushed to Heroku; Heroku needs them to install the exact dependency versions.

My build scripts work similarly: they build a production version of the server, followed by the client:
package.json

"build": "babel . --ignore node_modules,build,client,docs --out-dir build",
"postbuild": "cd client && yarn build",

The client has a prebuild step, which compiles the SCSS styles using gulp (see my previous post for the complete gulpfile), and then runs the included CRA production build tool:

client/package.json

"prebuild": "gulp sass",
"build": "react-scripts build",

Finally, we need a script to start running a production server.

"start:prod": "NODE_ENV=production node ./bin/www",

We can tell Heroku which start script to use by creating a Procfile in the root of the project, containing this line:

web: yarn start:prod

By defining these scripts, the only thing I have to do to deploy a production-optimized build is push my changes to the Heroku git remote.