
How to Build A Powerful, Scalable and Auto-Managed Notification Service

Almost any modern project demands flexibility and scalability from its architecture. This is especially true in chatbot development, where the customer may want to broaden functionality, bring in fresh ideas, and offer users new, handy features. Push notifications are not a brand-new feature, but they are something that can be built in an architecturally sound way. In this article, we’ll show you how.



In an era of rapidly evolving technologies, it’s virtually impossible to keep up and always choose the most current tools. But whether you are the developer or the project owner, you should clearly understand that choosing a solid “starter pack” from the very beginning is the key to success.

So, I hope this write-up, drawn from my own best practices, is worth reading at least for comparison purposes (and certainly, it is not the only possible solution).


The Problem

Imagine that your retail chatbot sells sports gear and provides customers with a pretty nice service: it answers all the typical questions, accepts and processes orders, and automates a lot of manual work in general. But the bot still feels somewhat raw: it lacks auto-mailing and a system for long-running processes. Your employees still write out messages about each product delivery manually, and customers have started subscribing to another service that offers them more than just online purchases: reminders, news, interesting updates, discount notifications, and alerts when a wished-for product is back in stock.

Isn’t a push notification service exactly what your project is missing?


Tools to tackle it

No doubt a student could set up a cron scheduler on the server that executes an action at a specific time on a recurring basis – a very quick, but primitive, solution for such a worthy project. Among the main problems:

  • Cron is a system-level process that runs in RAM. It’s not only about the limits you may face, but also about accuracy. Moreover, who cleans up after a server reboot or an unhandled failure?
  • Not being an application process, cron tangles up development. Cron may even run at a different time than expected after the server timezone changes – something developers shouldn’t have to worry about, but would.
  • Smallest resolution is 1 minute – you can’t schedule a task that needs to run every 30 seconds.
  • No queueing – you cannot specify the order in which jobs complete, divide them logically, or make sure they work independently.
  • Dynamic params – imagine you want to send some specific text to all users this evening, in 5 hours. With cron, you would have to create a new cron job, hardcode the text, build, and deploy – sense the difference?


You may investigate the weak sides of the cron scheduler further elsewhere, both to be aware of the issues and to keep pace with updates on them.


Bull is one of the most rational solutions right now: integrating it solves the problems highlighted above and gives you powerful task scheduling and management.


Here is the list of tools I prefer for one possible architecture. I won’t dive into AWS service launch configuration details or possible Redis setup problems, but you will definitely get the general idea and a short hands-on from the article. For this “recipe” you’ll need:

  1. Node.js server
  2. Bull + Redis
  3. AWS EC2 + start scripts
  4. AWS Auto Scaling Group and AWS Elastic Load Balancer


Into the stack


1. Node.js server

You may use anything to build the API that will be responsible for managing queues – from Express.js to hapi, whatever you prefer. The routing system under /queue/:name should include:


| HTTP method | Route(s) | Description |
| --- | --- | --- |
| POST | /job/:type | Define a delayed job, or create & start a repeatable job |
| GET | /jobs, /jobs/:id | Get the list of all jobs (or a single one); may accept statistics, count, etc. params |
| PUT | /pause, /resume, /empty | Manage any queue type in general; empty removes all jobs from the waiting list |
| PUT | /retry/job/delayed/:id, /promote/job/delayed/:id | Manage the queue of delayed jobs – promote (force-start a job from the waiting list) or retry one that has failed |
| PATCH | /job/repeatable, /job/delayed/:id | Change the data or opts fields to modify or reschedule a job |
| DELETE | /job/repeatable, /job/delayed/:id | Remove a single delayed job by id, or all repeatable jobs from the queue (as they share common time settings) |

where :type is the queue type (repeatable/delayed). It’s important to highlight that Bull has different queue and job methods for editing and deleting different job types – so it’s important to separate them by route. For instance, to update a delayed job you should fetch the target job through queue.getJob and then call job.update(newData), while for a repeatable one the order of actions is queue.removeRepeatableByKey -> queue.add.
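To make that concrete, here is a minimal sketch of the two update flows. The helper names and payload shapes are hypothetical, and a Bull queue instance created elsewhere is passed in, so the sketch isn’t tied to a live Redis connection:

```javascript
// Sketch of the two update flows described above (hypothetical helpers).
// `queue` is assumed to be an existing Bull queue instance.

// Delayed job: fetch it by id, then overwrite its data in place.
async function updateDelayedJob(queue, jobId, newData) {
  const job = await queue.getJob(jobId);
  if (!job) throw new Error(`Job ${jobId} not found`);
  await job.update(newData);
  return job;
}

// Repeatable job: there is no in-place update - remove the old
// schedule by its repeat key, then add the job again with new opts.
async function rescheduleRepeatableJob(queue, repeatKey, data, newRepeatOpts) {
  await queue.removeRepeatableByKey(repeatKey);
  return queue.add(data, { repeat: newRepeatOpts });
}
```

This is also why the routes are split per job type: each path above maps onto a different pair of Bull calls.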


By the way, a nice solution is to write a wrapper around Bull. QueuesHandler is a “list” of queue handlers imported from the folders /queues/delayed and /queues/repeatable. If you’re keen on TypeScript, such a wrapper will look even nicer:
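The original snippet didn’t survive here, so below is a rough sketch of what such a wrapper could look like. All names (QueuesHandler, the folder layout, the injected factory) are assumptions; in real code `createQueue` would be something like `(name) => new Queue(name, REDIS_URL)` with `Queue = require('bull')`:

```javascript
// Hypothetical QueuesHandler wrapper - a registry of named Bull queues.
// `createQueue` builds a queue by name; `handlers` maps a queue name to
// its process function, e.g. the modules imported from /queues/delayed
// and /queues/repeatable.
class QueuesHandler {
  constructor(createQueue, handlers) {
    this.queues = new Map();
    for (const [name, processFn] of Object.entries(handlers)) {
      const queue = createQueue(name);
      queue.process(processFn); // register the executor for this queue
      this.queues.set(name, queue);
    }
  }

  get(name) {
    const queue = this.queues.get(name);
    if (!queue) throw new Error(`Unknown queue: ${name}`);
    return queue;
  }

  // Convenience pass-through used by the API routes.
  add(name, data, opts) {
    return this.get(name).add(data, opts);
  }
}
```

The API layer then only ever talks to the wrapper, never to raw Bull instances.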



Bull offers a great list of queue events, so you can be sure you will miss nothing.
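For instance, a minimal sketch of wiring up a few of those events ('completed', 'failed', and 'stalled' are real Bull event names; the logger parameter is an assumption):

```javascript
// Attach listeners for some of the most useful Bull queue events.
// `queue` is assumed to be a Bull queue; `log` stands in for your logger.
function attachQueueListeners(queue, log = console) {
  queue.on('completed', (job, result) => log.info(`job ${job.id} completed`, result));
  queue.on('failed', (job, err) => log.error(`job ${job.id} failed: ${err.message}`));
  queue.on('stalled', (job) => log.warn(`job ${job.id} stalled`));
  return queue;
}
```

The 'failed' handler is a natural place to notify your team, as described in the error-handling advantage below.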


2. Bull


Bull is a Redis-based queue service for Node.js (if you are not yet familiar with the basics of Node.js, we strongly recommend going through a Node.js tutorial first).


Previously there was a lighter alternative, Kue, but it is no longer maintained. Bull, meanwhile, offers a list of advantages such as repeatable jobs, atomic ops, and rate limiters. There is even a web GUI for Bull queues called Bull Arena.


The advantages of Bull:
  • Low CPU use and high performance. For huge mailing runs, AWS ASG will automatically take care of the load.
  • Allows executing asynchronous functions
  • No limit on queue creation – just divide and conquer. Create as many queues as you need, with custom time rules for each, and manage them easily.
  • Update, remove, promote, pause, and resume – play with a queue however you need
  • Job history – you can review all added, executed, and even failed jobs along with the saved error message. A statistics method lets you see a queue summary
  • Error handling – there are events that catch errors, failures, and the cases when a queue is stalled or drained. Define a convenient handler for these cases, and you can be sure your developers receive an email and your Admin bot sends you a nice message saying the planned New Year congratulation for your customers didn’t finish – so your team can quickly check what happened.


  • Of course, no service is 100-percent perfect. The project is maintained and constantly improved; you can find out more about current bugs and their statuses on the project’s GitHub page.


What is a queue? It is an imaginary bucket into which we gather jobs, each with its own time settings (in 5 minutes, tomorrow at 10 AM, every Sunday); for all jobs in this bucket there is a predefined process handler (the function to execute). The handler is called when the time “comes” – either for all jobs in the queue (repeatable) or for a single job from the queue of delayed ones. We really recommend keeping separate queues for each repeatable job and for delayed purposes, even if they share the same executor but serve a different purpose:



You may think of queues as buckets of laundry sorted by color and texture, each intended for a different detergent; or as your daily tasks sorted into work/study/sport/food/sleep blocks – whatever you like. The main idea: the virtual alarm (Redis) says “oh, it’s exactly timestamp X now (the execution time configured in the job opts); you definitely have something scheduled in queue Y, please check” – then the app quickly “remembers” (via the Bull process listener) the action (function) and executes it. It doesn’t poll every second or millisecond – the alarm knows the exact time points and reacts only when the current time equals one of them.


Time settings effectively divide Bull jobs into 2 types: delayed and repeatable. A delayed job is a scheduled job executed once, at a specific time in the future. For instance, “We have just sent your order to the post office. Here is your invoice number; please wait for the next notification, which will update your buyer score.” Bull’s add call here literally means creating a job in the queue with a delay field in the opts object, making the job in this queue “wait” for its appointed time (we’ll see the difference with repeatable jobs below).
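As a hedged sketch, scheduling such a one-off notification could look like this. The queue instance, message shape, and helper name are assumptions; `delay` and `attempts` are real Bull job options:

```javascript
// Schedule a one-off "order shipped" notification for 5 hours from now.
// `queue` is assumed to be a Bull queue dedicated to delayed jobs.
const FIVE_HOURS_MS = 5 * 60 * 60 * 1000;

function scheduleShippedNotification(queue, userId, invoiceNumber) {
  return queue.add(
    { userId, invoiceNumber, text: 'Your order is on its way!' },
    { delay: FIVE_HOURS_MS, attempts: 3 } // retry up to 3 times on failure
  );
}
```

Note how the dynamic-params problem from the cron section disappears: the text and delay are plain runtime arguments, no redeploy needed.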


| Value \ Queue type | Delayed-jobs queue | Repeatable-jobs queue |
| --- | --- | --- |
| Job data | different | common |
| Time settings | different | common |
| Process handler | common | common |
| Execution | once | endlessly (until terminated) |

A repeatable job is a job configured for endless execution every X days/hours/minutes – whatever you choose. In contrast to delayed jobs, Bull’s add call in this case means “create this job once, and it will run endlessly every XXX milliseconds (under the hood) until the queue is emptied (by your request or by another scheduler).” For example: a daily sports news digest, feedback gathering every 3rd Friday, or a special 50% discount promo code for regular customers every January 1st. You may pass one of the following options in the opts.repeat field when adding the job:
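As a sketch, a daily digest could be registered like this. Bull’s opts.repeat really accepts either a cron expression or an `every` interval in milliseconds; the queue instance, payload, and timezone are assumptions:

```javascript
// Register a repeatable job: a sports-news digest every day at 9 AM.
// `queue` is assumed to be a Bull queue dedicated to this repeatable job.
function registerDailyDigest(queue) {
  return queue.add(
    { template: 'daily-digest' },
    { repeat: { cron: '0 9 * * *', tz: 'Europe/Kiev' } }
  );
}

// Alternatively, repeat by a fixed interval - something cron alone can't do:
function registerEvery30Seconds(queue, data) {
  return queue.add(data, { repeat: { every: 30 * 1000 } });
}
```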




3. Redis

Redis is an in-memory (RAM-based) key-value store used to reduce database load and increase app performance – the main competitor to Memcached and, for now, the leader of the caching ring.


Here is a great tutorial on manually setting up Redis on a Linux machine – choose the Amazon Linux 2 AMI for your EC2 instance.


Redis plays a background role in this service: we just need to prepare it for Bull, and we don’t actually use it directly. In simple terms, under the hood Redis stores the job’s time and queue name, and then the Queue.process(job => /* manipulation */) handler takes everything into its own hands:
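A minimal sketch of such a process handler (the queue instance and the notification-sending function are assumptions):

```javascript
// Define the executor for a queue: Bull calls this for each due job.
// `queue` is assumed to be a Bull queue; `sendPush` stands in for your
// actual push/notification delivery function.
function registerProcessor(queue, sendPush) {
  queue.process(async (job) => {
    const { userId, text } = job.data;
    await sendPush(userId, text); // the actual "manipulation"
    return { deliveredAt: Date.now() }; // stored as the job's result
  });
}
```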



4. AWS EC2 machine


It’s no secret that AWS EC2 is the most popular AWS offering. Go to the EC2 Dashboard, choose the closest region, and launch an instance. Configure the instance however you need, but pay special attention to the Security Groups configuration – they control inbound and outbound traffic. In the Configure Instance tab you’ll see the Advanced Details section – there you’ll find the User Data input field. Prepare your build & start scripts and paste them right there.


The #!/bin/bash script should be responsible for:

  1. Installing updates
  2. Installing the software (node.js in our case)
  3. Cloning (downloading) the project
  4. Installing dependencies
  5. Starting the server
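As an illustrative sketch, a User Data script covering those five steps might look like this. Everything specific here – the repo URL, app path, Node version, and the pm2 process manager – is a placeholder assumption, not the article’s actual script:

```shell
#!/bin/bash
# Hypothetical EC2 User Data script for Amazon Linux 2.
set -e

# 1. Install updates
yum update -y

# 2. Install the software (Node.js and git)
curl -sL https://rpm.nodesource.com/setup_14.x | bash -
yum install -y nodejs git

# 3. Clone (download) the project - placeholder repo URL
git clone https://github.com/your-org/notification-service.git /opt/app

# 4. Install dependencies
cd /opt/app
npm ci --production

# 5. Start the server (pm2 restarts it if it crashes)
npm install -g pm2
pm2 start server.js --name notification-service
```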


Furthermore, consider safe storage for your .env file – it may contain important credentials for accessing your database, as well as API keys. We highly recommend thinking about AWS KMS encryption beforehand, and paying attention to AWS Secrets Manager.


For continuous delivery purposes, it is worth configuring AWS CodePipeline for your cloud virtual machine to automate the build and deploy stages, but this article doesn’t cover that in detail.


5. AWS Auto Scaling Group


AWS Auto Scaling Group – this service will help us scale out or scale in automatically, depending on the load. Given that we assumed your retail bot has (or plans to have) a billion customers, this service will definitely make a positive impact on our notification service.


For instance, if the load increases according to some rule – even a custom metric – the ASG will automatically register new instances and start them based on the script you created in step 4. You might define average CPU usage as the Auto Scaling rule (for example, CPU should stay <= 40%), and a CloudWatch alarm will monitor it. For a custom metric, use the AWS PutMetricData API and create the CloudWatch metric manually.


You should define the maximum and minimum (e.g. min = 1) number of running instances, and the ASG will never exceed those limits. Another great advantage of the ASG is that it will automatically restart an instance if it gets terminated, and will replace unhealthy instances (health checks are highlighted in the section below).


In fact, Auto Scaling Groups are free; you are billed only for the launched instances.



6. AWS Elastic Load Balancer


AWS Elastic Load Balancer – the name speaks for itself: this service spreads the load across multiple downstream instances. Moreover, it exposes a single DNS name and redirects traffic in case of failures – the fantastic power of AWS!


What is important to know: you can configure a health check for the instances – which route should be called, what response to expect, how many successful attempts count as healthy, and how many failures mark an instance as unhealthy. Add one to your app to let AWS manage unhealthy instances on its own.


Moreover, knowing Security Group features well, you can easily restrict inbound traffic so that it reaches your app only from the Load Balancer – so no one can use the EC2 IP(s) directly.



I hope the article was interesting for you and that you learned something new. Feel free to write comments, ask questions, and liven up the discussion – we’ll be grateful for your feedback. The architecture diagram should serve as a good summary of the article:




How To Register a Chatbot In WhatsApp With Twilio [Step-By-Step Guide]

This step-by-step guide was created for business owners and project managers who want to get a verified Twilio number. Mostly, business owners do not want to share Facebook Business Manager and payment card details – so you can use this guide and get the number yourself 🙂

The first step is to create a free Twilio account on the Twilio website.

Create a Facebook Business Account and a Facebook Business Manager. Below is a brief guide on how to do this and connect them with a Twilio phone number. Our recommendation is to use a well-maintained business account (not an empty one without any info).


Create a Business Manager.


Go to the Facebook Business Manager page and click Create Account.



Enter a name for your business, select the primary Page and enter your name and work email address. Note: If you don’t yet have a Page for your business, create one.





Twilio WhatsApp request access



When you have finished with the Facebook Business Account and Facebook Business Manager, upgrade your Twilio account and make a payment. You can’t get a number with a free Twilio account. Attention! Sometimes (depending on the country dial code) Twilio asks for business documents to confirm that the business is real.


After that, make a request from Twilio to WhatsApp (provided you have a phone number, a Facebook Business Account, and a Facebook Business Manager). In the form, enter the new phone number you already bought. Pay attention to the ‘Twilio account SID’ line (you can find the SID in the Twilio console) and the Business Manager ID.


Wait for the notification regarding account approval.




How To Build A Scalable Chatbot Architecture From Scratch in 2019

In this publication series, we’re going to cover the best practices we use when developing IT projects. If you’ve ever thought about developing a chatbot and conversational platform for your business, you’re in the right place, because today we’re going to start from the very beginning of any project: architecture. We hope everyone will find something useful and valuable in this publication.

Moving right along, we strongly recommend separating the chatbot module and conversation logic from the rest of your back-end system. Later we will find out why it’s important and prudent, and how it can benefit your project.

Let’s imagine that our imaginary chatbot project’s main goal is to deliver visualization of stock trading data. In this case, we will need modules for fetching, storing, and visualizing information.

A microservice architecture will be most beneficial, as it ensures decentralization and the ability to easily connect separate entities. Moreover, scalability and speed are two other key factors that will definitely impact chatbot performance. Therefore, separating each module as a microservice in our architecture makes obvious sense – and it’s important to make each module scalable and resistant to high loads.

In fact, a conversational interface – a mobile app, messenger, or even a custom web chat – is one possible future-proof solution that ensures handy user interaction. Now imagine you have defined everything in one module: how would you scale and encapsulate the business logic when it’s mixed with conversation flow? That’s why it’s wise to separate the REST API and NLU/NLP modules to give the chatbot architecture higher flexibility, as in the example below:

As you can see, it’s pretty easy to add new interfaces once you divide the project into 3 or more major modules: the REST API, the conversational flow, and the NLP/NLU module. Now let’s talk about each architecture module in detail.

Chatbot Interfaces (front-end)

Surely the frontend of the bot plays an indispensable part in our project. A web chat on the website serves as the interface through which users can interact and talk with the bot. We really prefer using our own web widget, which has lots of advantages, among which are:


Certainly, Facebook, WhatsApp, Slack, and many other platforms are widely used, but they all impose many restrictions on the controls you may use. The user will see only predefined, limited calendars, buttons, notifications, file uploaders, and viewers, designed identically for each concrete platform.
In contrast, we can create as many of our own custom elements as needed, designed in whatever colors, forms, and sizes our imagination allows.

Dynamic interaction

It’s a matter of fact: users want to get the most out of bots. Sending not a static picture, but a rendered HTML chart – one that looks like a picture but has clickable inner elements – is the feature that attracts users and makes our bot stand out from the rest.

AWS Lambda + AWS API Gateway

In our system, these services form the core API.
It’s no secret that one of the biggest strengths of AWS Lambda functions is the reduced cost of execution. In a traditional web application, with code hosted on and accessible through an EC2 instance in AWS, you pay for server usage regardless of whether your API is actually in use. The cost of idle time can be very high, depending on the instance type you’re working with.
Since our project is not hosted on a specific server, we also considerably reduce the risk of a machine breaking down. This is easily substantiated: we aren’t obliged to rely on a single machine to serve the app and execute the code. If one machine goes down, AWS handles replacing it automatically. In other words, AWS “cares” for the problem instantly, and our code doesn’t miss a beat.

AWS ElasticCache Redis

In our project, it’s used as the highly scalable caching system that stores temporary data (cache, tokens, API call counters, etc.). Thanks to it, making the same API calls or hitting databases with repetitive queries dozens or hundreds of times a day is not so “expensive.” However, it can be challenging to get it working with AWS Lambda. We’ll prepare an entire publication on this topic in the next installment. Stay tuned 🙂

AWS DynamoDB

It offers built-in security, backup and restore, as well as in-memory caching. Furthermore, it was extremely easy to integrate into our current system.

Main reasons why we use it:

  • Fast, offering single-digit-millisecond latency
  • Integrated with most of AWS Services such as IAM, CloudWatch, etc.
  • Autoscale and elastic in nature
  • Virtually infinite storage 


Although we are cutting its usage down as much as possible, AWS EC2 is still used for static image generation due to AWS Lambda limitations. Its main use is FusionCharts rendering and picture uploading, as well as infographic generation. This point is highlighted in the next chapter.

In the first version of the chart, targeted at static image generation, we used the Export and Upload service developed by the FusionExport team. The rendered HTML is literally screenshotted and uploaded to AWS S3, which prevails over alternatives thanks to its security, low cost, and scalability. For the same reasons, AWS S3 was used to store the widget plugins and admin pages for our project.

Let’s draw an inference from all of the above:
To a certain extent, using AWS services frees the developer’s mind of redundant anxiety about the machine’s “life.”
Thinking the architecture through in advance plays a pivotal role in project development.
Using Lambda + API Gateway definitely prevails over EC2 machines, as AWS automatically provisions new instances when needed, so the application effectively lives forever.

In the next installment, we’ll talk about using different chart services, the caching process, our experience, and a comparison of a couple of other platforms we used. Stay frosty!


WhatsApp Business API [Common Questions for Chatbot Implementation]

WhatsApp is on everybody’s A-list. It’s nothing new that WhatsApp is hyped, and most companies want to implement a bot on it for their business. Statistics say that today the messenger has 1.6 billion monthly active users across 170+ countries, and 80% of messages sent to WhatsApp are seen within 5 minutes, compared to traditional SMS. Looks impressive, but there are still a lot of open questions that come up again and again.

So, I gathered the questions most important for development and implementation that cover the WhatsApp Business API. While creating the article, I also checked the Quora questions I’ve previously answered on the topic. Please check the following list, and write in with the questions you’d like answered.

WhatsApp bot


What is a WhatsApp bot?

For now, a WhatsApp bot is a fairly simple AI solution. A bot for this messenger communicates only through text chat and images; buttons and additional tools are not available, unlike on other messengers.

How safe are WhatsApp bots?

WhatsApp bots are very safe. Compared with other platforms, WhatsApp really encrypts messages: they reach the WhatsApp server in encrypted form. Check WhatsApp Availability and Scaling to understand how the service architecture provides this.

How do I build a simple Whatsapp bot?

To create a simple bot for WhatsApp without any additional help, you can use the Twilio API, which lets you start building and prototyping in a sandbox. Visit the Twilio API for WhatsApp to understand how it works.

Is it possible to integrate a chatbot in Whatsapp which replies to my messages?

As mentioned previously, you can already build a bot for WhatsApp. First of all, install and set up the WhatsApp Business API client. Then you will get access to the API that allows a bot to start working.

Warning: the bot will only work if it is created with programming languages, not with cloud (no-code) platforms.

You can integrate a chatbot with WhatsApp, but you’ll need to create the integration yourself.

Work through the following guides:

Getting Started — Quick outline of how to get started and next steps

Network Requirements — Network requirements necessary for the WhatsApp Business API client to connect to WhatsApp servers

Phone Number — Verify and register your business’s phone number

Verified Name — Display your business’s name when talking to your customers

Volumes — Securely store WhatsApp data while using Docker

Installation and Upgrading — Installation scripts and guide to get the WhatsApp Business API client running

HTTPS Setup — Set up and use HTTPS endpoints

Users endpoint — Create, manage, and use access tokens and user accounts for additional security

Backup and Restore Settings — How to backup and restore your WhatsApp Business API client

Deploying WhatsApp on AWS — Create an Amazon Web Services cloud to run the WhatsApp Business API client

Rate Limits — WhatsApp Business API endpoint rate limits scenarios

Understanding How to Get Opt-in for WhatsApp — How to present your users with an opt-in for messaging

Postman Collection — Import our WAE Developer Collection into Postman to quickly test the WhatsApp Business API

Then create the integration code between your AI and WhatsApp. Check out the API Reference to understand how to build it. There you’ll find information on the WhatsApp Business API root nodes and an example of the WhatsApp Business API format using the contacts node.

How do I create a computer bot that can automatically reply to all our Facebook and Whatsapp messages?

As far as I know, one solution is to use the Twilio API for WhatsApp via a simple REST API. The Twilio API for WhatsApp allows developers to start building and prototyping in a sandbox. To launch apps in production, you can request access to enable WhatsApp on your Twilio number. WhatsApp is currently opening up access in a Limited Availability program, where WhatsApp approval is required for all customers who wish to create their own profiles. Check out this link to start working with the Twilio API.

Other platforms, for example Clare.AI and Hubtype, are still in beta testing; you can use them once they launch.
For now, start learning how to make your own WhatsApp bot using the WhatsApp Business API. So the solution is to study the guidelines and tutorials and develop the bot yourself, or hand the work over to an agency.

How can I connect a WhatsApp group with a Telegram Group using a bot?

As of now, there isn’t any way to work with groups in WhatsApp, unfortunately. As far as I know, only p2p (one-to-one) communication is possible.

How accessible is the WhatsApp API?

The WhatsApp API is accessible after you deploy the Business API client on your servers. Check the Installation and Upgrading guide to understand how it works in practice.

How can I get Whatsapp API?

First of all, install and update the WhatsApp Business API client using Docker Compose. Follow all the steps, and you will get the WhatsApp API as you need it.

Make sure you have gotten approval for your business’s phone number and have a Verified Name certificate before attempting installation.

Initial Setup

Before continuing, please read through the other WhatsApp Business API guides.

This guide requires Docker, a container platform that lets you run the WhatsApp Business API Client. Docker Compose is also required. Docker Compose is bundled with Docker for macOS and Windows but requires a separate installation on Linux.

1. Install Docker on your system.
2. If Docker Compose is not bundled with your Docker installation, install it.
3. Download the docker-compose.yml and db.env configuration files: (find it here)
4. Open a console and navigate to the directory where you saved the downloaded files.
5. If you have a MySQL installation running, change the values in the db.env file to reflect your MySQL configuration. If you do not have MySQL installed, the docker-compose.yml and db.env files have a default configuration to bring up an instance in a local container.
6. Run the following command in the console:

docker-compose up

You will get some output while the script downloads the Docker images and sets everything up. To run the containers in the background, use the -d parameter:

docker-compose up -d

Once you have completed these steps, ensure that the containers are running with the following command:

docker-compose ps

By default, the Web app container will be running on port 9090.

You can download and configure our Postman Collection to easily interact with the WhatsApp Business API.

Refer to the Registration documentation for next steps.


Please note that this list may be supplemented with other related questions. If you have anything to add, please share it with us and we’ll include it in the existing list. Follow me and stay on top of WhatsApp chatbot novelties.
