Development
November 27, 2021

From Auth to API - How We Rethought Our Approach

This is part four in a series of articles documenting the process of building our first Shopify app. Click here for part three.

From Humble Beginnings…

If you’ve been following our story, you know that we wanted to rethink the process of authentication for our Shopify Apps and create a reusable and scalable Authentication Service. We did just that.

However, as we continued to build our first app, we realised that we had a growing need for additional reusable services, such as fetching/storing app settings, handling webhook requests, verifying additional tokens etc.

In the interest of keeping our services aligned with our app development philosophy, we discussed setting up a separate service for handling these new requests. It would be more akin to a traditional API, and all of our apps would interface with it to fetch app-specific data, or store settings.

While this made perfect sense, and was going to be an API Service which sat alongside the Auth Service we had developed, we realised that we would be doubling up on functionality and codebase in several areas.

Both services would be Serverless projects, using the same libraries, accessing the same database, and just generally reusing a lot of code which could be shared. While we could abstract this code out into external libraries and import them into both projects, we ultimately decided it wasn’t worth the hassle, and would create more issues in the long run.

For these reasons, we made the decision to change our Authentication Service into a more generic API Service, with authentication becoming one part of that service.

From Auth to API

Our first step in changing the service was to rename a lot of things and rearrange the codebase to resemble a more modular structure, with Auth being one component of the service.

As part of this process, we rewrote a lot of our classes to be more generic to allow them to be extended when needed. For example, our Database class initially only needed to interact with a single table, and had a getShop() function which would fetch the shop record.

Moving forward with our app, we now have several tables which can all benefit from the one generic getter and setter. So what used to be:
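The shop-specific version looked something like this (a sketch rather than the original snippet; the DynamoDB document client and table layout are assumptions for illustration):

```javascript
// database.js - the original, shop-only version (illustrative sketch)
const AWS = require('aws-sdk');

class Database {
  constructor() {
    this.client = new AWS.DynamoDB.DocumentClient();
  }

  // Fetch the record for a single shop from the one table we had at the time
  async getShop(shopDomain) {
    const result = await this.client
      .get({ TableName: 'shops', Key: { shop: shopDomain } })
      .promise();
    return result.Item;
  }
}

module.exports = Database;
```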

Has now become:
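A generic version along these lines (again a sketch; the method names and model layout are assumptions):

```javascript
// database.js - a generic getter/setter shared by every model (illustrative sketch)
const AWS = require('aws-sdk');

class Database {
  constructor(tableName) {
    this.tableName = tableName;
    this.client = new AWS.DynamoDB.DocumentClient();
  }

  // Generic getter: fetch a single record by its key
  async get(key) {
    const result = await this.client
      .get({ TableName: this.tableName, Key: key })
      .promise();
    return result.Item;
  }

  // Generic setter: write (or overwrite) a record
  async set(item) {
    await this.client
      .put({ TableName: this.tableName, Item: item })
      .promise();
    return item;
  }
}

// models/shop.js - one model per table, each extending the abstract Database class
class Shop extends Database {
  constructor() {
    super('shops');
  }

  getShop(shopDomain) {
    return this.get({ shop: shopDomain });
  }
}

module.exports = { Database, Shop };
```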

Where Shop is a model class for the shop table, which extends our abstract Database class.

This is just one example of how we have refactored the service, but we made many similar changes to expand its functionality. Our current handler routes are now:
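As a rough illustration of the shape (the app name and exact paths below are placeholders, not our real routes):

```yaml
functions:
  auth:
    handler: handlers/auth.handler
    events:
      - http: GET auth/{appName}          # dynamic - shared across all apps
      - http: GET auth/{appName}/verify   # dynamic - OAuth callback
  webhooks:
    handler: handlers/webhooks.handler
    events:
      - http: POST webhooks/{appName}     # dynamic - shared across all apps
  myAppGetSettings:
    handler: handlers/my-app/settings.get
    events:
      - http: GET my-app/settings         # static - defined per app
  myAppSetSettings:
    handler: handlers/my-app/settings.set
    events:
      - http: POST my-app/settings        # static - defined per app
```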

Note that we use dynamic routes for authentication and webhooks (the appName is passed in), but for settings we have static routes for each app.

This is by design, as every app we build will be different. While they may all have a get/set route for settings, the handlers themselves may need to differ per app, so we statically define each route for clarity and future-proofing.

Of course, they all use the exact same auth and webhook processes, so those can be shared handlers attached to dynamic routes.

Malcolm in the Middleware

Another expansion to our service is the inclusion of several middleware functions we wrote to validate/verify requests before we waste any time running handler code.

We use the serverless-middleware plugin for Serverless to give us this functionality. It transforms our function config from this:
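Something along these lines (a simplified sketch of a single function definition):

```yaml
functions:
  auth:
    handler: handlers/auth.handler
    events:
      - http: GET auth/{appName}
```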

To this:
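The middleware chain version might look like this (a sketch; the exact syntax can vary between plugin versions):

```yaml
plugins:
  - serverless-middleware

functions:
  auth:
    handler:
      - middleware/getConfig.handler
      - middleware/validateAuthRequest.handler
      - handlers/auth.handler
    events:
      - http: GET auth/{appName}
```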

Here we are running two middleware functions before hitting our auth handler, getConfig and validateAuthRequest.

Let’s briefly look at a few examples to see how the middleware operates.

/middleware/getConfig.js

Take a glance at the code, then we’ll go over it.
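What follows is a sketch rather than the verbatim file; in particular, the Config helper methods (load() and setParams()) are assumed names for illustration.

```javascript
// middleware/getConfig.js (illustrative sketch)
const config = require('../lib/config'); // the shared singleton instance

module.exports.handler = async (event) => {
  // Pull the querystring parameters off the incoming request
  const params = event.queryStringParameters || {};

  // appName tells the Config class which app's environment variables to load
  // (API keys, database connection details, app URLs, webhooks, etc.)
  config.load(params.appName);

  // Keep the remaining parameters around for later handlers
  config.setParams(params);

  // No return value: the middleware chain simply continues to the next handler
};
```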

On the surface, all this middleware does is pull querystring parameters and set them on our config object. That's true, but the reason it has to run before anything else is that the Config class uses the appName provided to load all of the environment variables for the specific app being requested (API keys, database connection details, app URLs, webhooks, etc.).

Running it first also makes the config data globally accessible to our handlers, as we instantiate it as a singleton. Make a note of this, as it will be important later 😄

Let’s look at one more example of a middleware function; one that runs on the /auth/verify route as part of the OAuth installation process:

/middleware/validateHMAC.js
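Here's a sketch of how that verification might look (the Response helper and the config.get('apiSecret') accessor are assumptions for illustration, not our exact code):

```javascript
// middleware/validateHMAC.js (illustrative sketch)
const crypto = require('crypto');
const config = require('../lib/config');     // already populated by getConfig
const Response = require('../lib/response');

module.exports.handler = async (event, context) => {
  const params = { ...(event.queryStringParameters || {}) };
  const { hmac } = params;

  if (!hmac) {
    context.end(); // halt the middleware chain
    return Response.error('Missing hmac parameter');
  }

  // Per Shopify's docs: remove hmac, sort the remaining parameters,
  // and rebuild them as a query string
  delete params.hmac;
  const message = Object.keys(params)
    .sort()
    .map((key) => `${key}=${params[key]}`)
    .join('&');

  // Sign the message with the app's shared secret and compare digests
  const digest = crypto
    .createHmac('sha256', config.get('apiSecret'))
    .update(message)
    .digest('hex');

  const valid =
    digest.length === hmac.length &&
    crypto.timingSafeEqual(Buffer.from(digest), Buffer.from(hmac));

  if (!valid) {
    context.end(); // halt the middleware chain
    return Response.error('HMAC validation failed');
  }

  // No return value: the request is valid, continue to the next handler
};
```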

This middleware handles the verification process outlined here in the Shopify Dev Docs, to ensure the request is coming from Shopify themselves and hasn’t been tampered with.

Simple enough: we check that an hmac parameter is present, then perform the steps outlined by Shopify to verify it is a valid HMAC.

If everything looks good, we simply proceed with no return value and the middleware will continue on to the next handler. If there is an error, we use context.end() to halt execution of the lambda function and return an error response with the specific error message.

We use several different middleware functions for different types of verifications.

Stop! Error Time

As much as I’d like to pretend that everything worked perfectly with no hiccups, we all know that is never true. We had a few big challenges along the way, so we decided to lay out the two main ones we faced, in the hopes of helping others who may face similar issues.

Of CORS, We Forgot

For most web developers, CORS (or Cross-Origin Resource Sharing) is basically that annoying (but important) mechanism which blocks AJAX requests to a domain other than your origin unless that domain explicitly allows them.

There’s a little more to it, and if you’re unfamiliar with CORS, it might be worth doing a little reading on how it works, what a preflight is etc.

This is the first issue we faced when attempting to access our API for the first time to load app settings. The request would be blocked, as it was coming from our development app, which was essentially an embedded iframe inside the Shopify admin.

We needed to allow the domain to access our API, which included enabling specific headers as well. In Serverless, this can be done easily enough with a change to the YML config:
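Roughly like so (a sketch; the domains are placeholders, and the available cors options depend on your Serverless version):

```yaml
functions:
  myAppGetSettings:
    handler: handlers/my-app/settings.get
    events:
      - http:
          path: my-app/settings
          method: get
          cors:
            origins:
              - https://my-app.loca.lt      # local tunnel domain (placeholder)
              - https://app.example.com     # production app URL (placeholder)
            headers:
              - Content-Type
              - Authorization
            allowCredentials: true
```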

There’s an important caveat you should know!

Be warned that if you enable the Authorization header or the allowCredentials setting, you cannot use a wildcard origin such as *.

You must specify each domain you wish to allow. This was a problem for us in development, as we initially used ngrok to create a local tunnel while building our app, and the public domain it generates for you changes each time.

We didn’t want to have to redeploy our service each time we ran ngrok, so we started using localtunnel instead, as it allows us to set a static domain, which is the loca.lt domain you see in the config (masked in the snippet for security). We also included our production URL where the app will eventually live, as that will also be static.

The last step was to make sure we’re outputting the correct headers in our lambda response, which is a simple change to our Response library. Here’s the full class definition with the new headers:
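What follows is a reconstruction in the spirit of that class rather than the verbatim original; the config.get('appUrl') accessor and the response JSON structure are assumptions:

```javascript
// lib/response.js (illustrative sketch)
const config = require('./config'); // singleton holding the allowed app URL

class Response {
  static success(data, statusCode = 200) {
    return Response.build(statusCode, { success: true, data });
  }

  static error(message, statusCode = 400) {
    return Response.build(statusCode, { success: false, error: message });
  }

  static build(statusCode, body) {
    return {
      statusCode,
      headers: {
        'Content-Type': 'application/json',
        // CORS headers must also be present on the Lambda response itself,
        // not just in the API Gateway (YML) config
        'Access-Control-Allow-Origin': config.get('appUrl'),
        'Access-Control-Allow-Credentials': 'true',
      },
      body: JSON.stringify(body),
    };
  }
}

module.exports = Response;
```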

Feel free to use it in your own serverless projects if you want (you’ll just have to remove the config reference and potentially change the output JSON structure to suit your preference).

Lambda Functions Stay Warm!

Something we were aware of, but whose consequences we didn't fully understand, was the fact that Lambda functions will stay warm. Let's look at what it means to stay warm (if you also live in Australia, you are well-acquainted with staying warm 🌞).

When someone requests your lambda function for the first time, it begrudgingly gets out of bed, spins up an instance, loads the required libraries, executes your handler code and hands you a response. Great!

For a short period of time afterwards, the instance sticks around to see if anyone else needs its services before it goes back to sleep. Before it heads back to bed, someone else walks in with another request. To save time and energy, the function realises it has already spun up the entire environment it needs, so it simply re-runs the handler code and hands you the response. Cool!

This sounds great in principle, and it is, making subsequent requests much faster than the first one that runs from a cold start. However, it is important to keep in mind that anything loaded or initialised outside the handler function will persist across invocations.

Again, this should be no problem, as all of our logic and definitions happen inside the handler functions… right? Well, if you remember, we actually instantiate our Config object as a singleton, e.g.:
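Along these lines (a sketch; the real class holds much more, but the key detail is the last line):

```javascript
// lib/config.js (illustrative sketch)
class Config {
  constructor() {
    this.data = {};
  }

  load(appName) {
    this.appName = appName;
    // ...load the app-specific environment variables (API keys, DB details, URLs, webhooks)
  }

  get(key) {
    return this.data[key];
  }
}

// Exporting an instance (not the class) makes this a singleton:
// every require() of this file receives the same object.
module.exports = new Config();
```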

This means that when we import it in other files, we are always importing this same instance of the object, rather than instantiating a new one each time. This is great for giving us some global config we can access in different areas, but it also means our config can be reused across different invocations, which is dangerous and insecure, as the Config object contains database connection details, API keys and other secrets, and is used to determine which app is being requested.

So how do we solve this?

Well, if we changed the Config object from a singleton into a regular class, we could solve it, but we would lose the global functionality on which our app relies. We did consider doing this and trying to pass the config along to each handler that needed it, but that proved difficult with our middleware setup.

We thought about attaching the config data to the event or context objects which each handler receives as function parameters, but it felt wrong and seemed like a disaster waiting to happen.

Ultimately, the solution we settled on, which required very little code change, was to create a reset function in our Config class that returns the singleton to its default state. We simply call it as the very first step of every request made to our service (this happens via middleware).
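In code, that looks roughly like this (a sketch of just the relevant addition):

```javascript
// lib/config.js (addition - illustrative sketch)
class Config {
  constructor() {
    this.reset();
  }

  // Return the singleton to its default state. Called as the very first step
  // of every request (via middleware), so nothing from a previous warm
  // invocation can leak into the current one.
  reset() {
    this.appName = null;
    this.data = {};
  }

  // ...existing load/get/set methods...
}

module.exports = new Config();
```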

Perhaps not the most elegant solution, but one which involved no refactoring, and let us retain the benefits of our service structure. We are very interested in hearing other solutions you may have if you’ve faced similar issues in the past!

Can We Get Back to Applications Please Yo

Wow! That was a lot! But we now have a fully operational API service, with an integrated Auth component, all running in a scalable serverless environment!

It’s pretty cool.

It may seem like a lot of work to set up this service, and be no closer to having built an actual app, but now we can focus on building useful apps without worrying about any of the backend architecture!

As always, we’d love to hear your thoughts on this approach to Shopify apps, or if you’ve had any similar challenges using AWS Lambda!

Part five of this series is now available, explaining how we implemented the Polaris guidelines and framework into our app.
