Blog by Aliaksei Belski

Developing a "reactions bar" using Javascript and AWS Services

Published: October 28th, 2021
Tags: javascript, Netlify, Gatsby, Reactions Bar, AWS Lambda, DynamoDB

I’m a child of the ’90s and remember the rise of the early Internet community. Wanting visitors to engage with their websites, owners set up guestbooks and bulletin boards, or made it possible to comment on posts. Nowadays, partly because the Internet has consolidated into social networks and endless content feeds, it has become noticeable that very few visitors leave comments. That’s why collecting reactions has become critical for marketing and analytics purposes.

I personally made many attempts to involve readers in a discussion, but with a small number of visits it was largely ineffective (though the last platform I used, Commento, was quite good in itself). So this time I decided to collect reactions instead, and to implement the solution on my own.


The implementation can be found on GitHub. It involves a couple of AWS services, such as Lambda functions and DynamoDB, and supports basic configuration options to stay within a budget.


It was planned as a quick solution, since the workflow is as simple as it gets: just three buttons that appear under each article and count clicks. No session identifiers, and no limitations either: click as many times as you want (Medium has a similar concept with its claps).

Reactions bar - box and shapes

But don’t be fooled by the straightforward UI: things get knotty under the hood. As the reaction tool is not supposed to be used all the time, a dedicated server is definitely not something I would want to pay for. This is an ideal case where a serverless implementation makes a difference: you don’t spend money when no one uses it. The implementation involves three AWS services:

  1. DynamoDB, to store counts;
  2. Lambda functions, to allow creating, listing, and updating records;
  3. API Gateway, to access AWS resources via the Internet.

Reactions bar - AWS diagram

“OK, what are we waiting for? Let’s go and allocate some resources in AWS”, you may say. At first glance, that looks like the easiest way to start. However, if you think about the implementation process itself, you can expect frequent changes to the architecture while developing (additional configuration, limitations, etc.). The project also needs to run locally somehow, and using production resources for that is a bad idea. The Serverless Framework was designed to handle all of that. The idea behind it is to describe the configuration in a YAML file, so the project can be run offline or deployed to any number of stages. The settings have a strict structure and usually include the following information:

  • Project settings, such as the application name, schema version, and environment prerequisites;
  • Resources that need to be allocated;
  • Their configuration settings, including a source of keys, operations allowed, and limitations.

The final configuration file can be found here. We’ll discuss some bits of information from this config later on.
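As a rough sketch of that structure, a trimmed-down serverless.yml for this kind of project could look like the following (the service, function, and table names here are placeholders, not the values from the real config):

```yaml
# A minimal sketch, not the project's actual configuration
service: reactions-bar

provider:
  name: aws
  runtime: nodejs14.x
  region: eu-west-1
  environment:
    TABLE_NAME: votes

functions:
  vote:
    handler: src/vote.handler
    events:
      - http:
          path: api/v1/vote/{articleId}
          method: post

resources:
  Resources:
    VotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: votes
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: articleId
            AttributeType: S
        KeySchema:
          - AttributeName: articleId
            KeyType: HASH
```

With a file like this in place, `serverless deploy --stage prod` allocates the resources, and plugins such as serverless-offline can emulate the same setup locally.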


Once the decision is made on WHAT should be done, and it’s clear HOW it’ll be delivered, it’s time for the part that many developers try to do first: coding. It involves the service implementation and all the connections with the existing website.

As the initial plan was to define three endpoints, let’s stick to it. Their logic is simple and somewhat repetitive, so I’ll focus on just one of them. Each function needs to:

  1. Prepare input for a DB call based on incoming parameters;
  2. Query the database;
  3. Correctly handle the response.

A possible implementation of a simple ‘reaction’ endpoint is displayed below.

const DDB = require('@aws-sdk/client-dynamodb');
const { Marshaller } = require('@aws/dynamodb-auto-marshaller');

// Marshaller helps to transform DynamoDB records into flat objects and back
const marshaller = new Marshaller();

// The table name is used both in configuration and in code,
// so we're moving that name out to keep it in one place
const TableName = process.env.TABLE_NAME;

// Database connection settings, stored separately to be safe
const dbParams = {
  region: process.env.AWS_VOTES_REGION,
  endpoint: process.env.AWS_VOTES_ENDPOINT
};

const dbClient = new DDB.DynamoDBClient(dbParams);

/**
 * Prepares a request to upvote in a specific category
 * @param {string} articleId Unique article code to refer to
 * @param {string} action Type of action, e.g. `likes`
 * @returns {Object} Object to patch the database
 */
const voteIncrementParams = (articleId, action) => {
  return {
    TableName,
    UpdateExpression: `SET #action = #action + :action`,
    ExpressionAttributeNames: { "#action": action },
    ExpressionAttributeValues: { ":action": { "N": "1" } },
    ReturnValues: 'ALL_NEW',
    Key: marshaller.marshallItem({ articleId })
  };
};

/**
 * Saves a vote to the storage
 * @param {Object} event AWS Lambda event
 * @param {Object} event.pathParameters Contains the article identifier
 * @param {string} event.body Stringified payload
 * @returns {{ statusCode: number, body: string }}
 */
module.exports.handler = async ({ pathParameters, body = '{}' }) => {
  const { articleId } = pathParameters;
  const { action } = JSON.parse(body);

  const params = voteIncrementParams(articleId, action);
  const command = new DDB.UpdateItemCommand(params);

  try {
    const result = await dbClient.send(command);
    console.log(`[Votes] Successful increment of action[${action}] for articleID[${articleId}]`, result);
    return {
      statusCode: 200,
      body: JSON.stringify(marshaller.unmarshallItem(result.Attributes))
    };
  } catch (ex) {
    console.error(`[Votes] Error while incrementing action[${action}] for articleID[${articleId}]`, ex);
    const { name: errorCode, message } = ex;
    return {
      statusCode: 400,
      body: JSON.stringify({ errorCode, message })
    };
  }
};
After executing this code for a test article that you didn’t like, the response will look similar to the example below, which is exactly what we expect.

{
  "shares": 0,
  "articleId": "test",
  "dislikes": 1,
  "likes": 0
}


Creating a simple version of an endpoint is not enough when it comes to a real production environment, and the reason is simple: in AWS, you’re billed per request and per every 100 ms of execution time, so making sure the Lambda is called only when needed is essential (otherwise anyone can bankrupt you in a snap).

To avoid that, our Lambda should have at least two levels of security:

  1. Requests are allowed only with an API key, securely stored under the hood;
  2. Even with a key, the endpoint has a quota (e.g. no more than X requests per day) as well as concurrency boundaries. This helps to control the usage, for example by turning the reactions bar off if there are unexpectedly more reactions than visitors.

Restricted Access

Technically, it’s really simple to reproduce a request: anyone can open the Developer Console, find the request in the list, and replay it. That’s why it is so important to have a hidden part that tells the server to never start processing until it matches on both sides. In AWS, this can be implemented with the so-called x-api-key: a special header that tells correct requesters from fake ones.
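Conceptually, the gateway’s job boils down to a comparison like the one below. API Gateway performs this check for you before the Lambda is ever invoked; this sketch is only an illustration of the idea:

```javascript
// Illustration only: reject a request before any processing happens
// unless the secret header matches the key on the server side.
const isAuthorized = (headers, expectedKey) =>
  typeof headers['x-api-key'] === 'string' &&
  headers['x-api-key'] === expectedKey;

console.log(isAuthorized({ 'x-api-key': 'secret' }, 'secret')); // true
console.log(isAuthorized({}, 'secret')); // false
```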

To tell Lambda that this parameter is required, the setting functions.<name>.events.http.private in the Serverless config should be set to true. After that, your endpoint will never return data until API Gateway verifies the key, which is either set by you in the provider.apiGateway.apiKeys array or auto-generated by AWS.

provider:
  apiGateway:
    apiKeys:
      - value: 'ANY_LONG_RANDOM_API_KEY'

functions:
  vote: # your function name
    events:
      - http:
          private: true

Limited Usage

But authentication is not the only thing that matters. Technically, you can be “robbed” by AWS even when all the requests come from your own website, so limiting API usage is important. There are several ways of doing that, and they perform best when combined:

  1. Limit provisioned concurrency for the Lambda underneath an API call. This creates an artificial bottleneck, controlled by you, as by default AWS allows up to 1000 concurrent executions.
  2. Set a usage limit for a particular API key. This helps if someone, for example, starts putting pressure on your website: requests over the daily/weekly/monthly limit get a 429 TOO MANY REQUESTS response.
  3. Set throttling at the API Gateway level. This is particularly helpful if you use multiple API keys and want to limit the overall throughput. The default is 10000 requests per second.

Reactions bar - limits

After applying all the changes, your configuration may look similar to the example below.

provider:
  # other provider's settings...
  apiGateway:
    apiKeys:
      - value: 'ANY_LONG_RANDOM_API_KEY'
    usagePlan:
      # usage plan for the API key above
      quota:
        limit: 1000
        period: DAY
      throttle:
        burstLimit: 5
        rateLimit: 10

functions:
  vote:
    handler: src/
    provisionedConcurrency: 1
    # rest of settings...

custom:
  apiGatewayThrottling:
    maxRequestsPerSecond: 10
    maxConcurrentRequests: 5

plugins:
  - serverless-api-gateway-throttling
  # other plugins...

Embed new API to the Gatsby blog

As the design of the blog is (historically) minimal, the reaction bar had to be minimal as well. After a couple of tries, I came to this compromise between my design skills and functionality:

Reactions bar - implementation

These three buttons under the article represent the reactions we want to count. Of course, frankly speaking, the share button is not a reaction, but it is still good to track whether someone decided to share an article with others. Usually, it’s even better than an actual like :).
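On the client side, a button click only has to send the chosen action to the endpoint. A minimal helper could look like the sketch below; the /api/v1/vote path and the payload shape are assumptions mirroring the proxy and redirect setup discussed in this post, not the exact routes of the real project:

```javascript
// Builds the request a reaction button sends; path and payload shape
// are illustrative assumptions, adjust them to your own routes.
const buildVoteRequest = (articleId, action) => ({
  url: `/api/v1/vote/${encodeURIComponent(articleId)}`,
  options: {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action })
  }
});

// Usage from a button's onClick handler:
// const { url, options } = buildVoteRequest('test', 'likes');
// fetch(url, options).then(res => res.json()).then(updateCounters);
```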

In development mode, Gatsby provides a convenient way to embed third-party APIs into the app using its proxy feature. There are two possible ways to use it: the simplest, with a list of hosts/prefixes, and the advanced one, with an expanded toolset. As we need to pass a custom header to the proxy, the second one fits better. Luckily, as far as the development proxy is concerned, Gatsby exposes the entire Express.js app, so it’s easy to connect a proxy middleware and add a tiny configuration to it:

const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = {
  developMiddleware: app => {
    app.use(
      '/api/v1',
      createProxyMiddleware({
        target: process.env.REDONE_VOTE_HOST,
        headers: {
          'x-api-key': process.env.API_KEY
        }
      })
    );
  },
  // ...rest of the Gatsby config
};

As you may have noticed, both values (the host and the API key) are hidden in environment variables, which gives a better chance of protecting them.

Alright. Now everything works seamlessly in development mode, and we can use our endpoint via the proxy. But what about production? Since it’s Gatsby, the production bundle is just a set of static files with no web server of its own underneath. How do we make it work anyway?

You can try to utilize redirects if your hosting provides this option. I use Netlify, which accepts redirect rules declaratively and does all the job. All that is needed is to tell Netlify that requests coming to /api/v1 should be redirected to the reactions endpoint, with an additional authentication header attached.

[[redirects]]
  from = "/api/v1/*"
  to = "$BLD_HST/prod/api/v1/:splat"
  status = 200
  force = true
  headers = {x-api-key = "$BLD_TKN"}

Did you notice the strange variables called $BLD_HST and $BLD_TKN? These are the places where the host and token values should appear before a deployment. The only problem is that the netlify.toml file doesn’t recognize environment variables, and for security reasons we don’t want to put the real values right in the file and store them in the repository. That’s why we need to take a step back and utilize another Gatsby feature: hooks.

The Gatsby Lifecycle APIs include a set of hooks that allow you to step into the build process and make additional changes. For this case, at the moment of writing, the documentation is still not ready, but the sources show that there are onPreBuild and onPostBuild hooks we can utilize to rewrite our Netlify configuration right on the Gatsby Cloud server. To do so, let’s define a separate yarn command (you can use npm, it doesn’t really matter) which replaces substrings in the file on the fly:

{
  "scripts": {
    "netlify-build": "sed -i \"s|\$BLD_TKN|${API_KEY}|g; s|\$BLD_HST|${REDONE_VOTE_HOST}|g\" static/netlify.toml"
  }
}

This script will be called by the code placed at the bottom of the gatsby-node.js file.

const util = require("util");
const child_process = require("child_process");
const exec = util.promisify(child_process.exec);

exports.onPreBuild = async ({ reporter }) => {
  const reportOut = (report) => {
    const { stderr, stdout } = report;
    if (stderr) reporter.error(stderr);
    if (stdout) reporter.info(stdout);
  };

  reportOut(await exec("yarn netlify-build"));
};
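To check the substitution in isolation before wiring it into a build, the same sed command can be run against a scratch file. All paths and values below are placeholders:

```shell
# Sketch of what the netlify-build script does: replace the $BLD_*
# placeholders with values taken from environment variables.
# (GNU sed syntax; on macOS use `sed -i ''` instead of `sed -i`.)
export API_KEY='ANY_LONG_RANDOM_API_KEY'
export REDONE_VOTE_HOST='https://example.execute-api.eu-west-1.amazonaws.com'

printf '%s\n' \
  'to = "$BLD_HST/prod/api/v1/:splat"' \
  'headers = {x-api-key = "$BLD_TKN"}' > /tmp/netlify_example.toml

sed -i "s|\$BLD_TKN|${API_KEY}|g; s|\$BLD_HST|${REDONE_VOTE_HOST}|g" /tmp/netlify_example.toml
cat /tmp/netlify_example.toml
```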

Quite confusing, right? Let’s recap the steps:

  1. The Gatsby build server downloads the sources from GitHub and gets ready to build the assets;
  2. Before the assets are built, Gatsby runs the yarn netlify-build script, which, based on the environment variables provided, replaces the placeholders with real data. The file is changed on the remote server and never committed back to Git, so it’s safe;
  3. Gatsby prepares a build folder and deploys it to Netlify;
  4. Netlify recognizes a netlify.toml file with configuration and enables redirects;
  5. All set! Calling a particular URL will upvote a particular action in the database.

Implementing new API-based functionality for a static website may seem simple. But since the website is static and has no computational capabilities of its own, a number of additional steps are needed to ensure security. Still, going serverless is not just a trend these days: when properly set up, it allows huge flexibility and better control of costs.