
How to Build a Scalable Chatbot Architecture from Scratch in 2019

Marta Bobyk

Yuriy Zahreva

July 22, 2019

In this publication series, we're going to cover the best practices we use when developing IT projects. If you've ever thought about building a chatbot and conversational platform for your business, you're in the right place, because today we're starting from the very beginning of any project: the architecture. We hope everyone will find something useful and valuable in this publication.

Moving right along, we strongly recommend separating the chatbot module and conversation logic from the rest of your back-end system. Later we'll explain why this is important and how it can benefit your project.

Let's imagine a chatbot project whose main goal is to visualize stock trading data. In this case, we will need modules for fetching, storing, and visualizing information.

A microservice architecture is the more beneficial choice here, as it ensures decentralization and the ability to easily connect separate components. Scalability and speed are two other key factors that directly impact chatbot performance. It therefore makes sense to separate each module into its own microservice, and it's important to keep every module scalable and resistant to high loads.

A conversational interface, whether a mobile app, a messenger, or even a custom web chat, is one possible solution that ensures convenient user interaction. Now imagine you have defined everything in one module: how would you scale and encapsulate the business logic when it's mixed with the conversation flow? That's why it pays to separate the REST API and NLU/NLP modules, giving the chatbot architecture more flexibility, as in the example below:

As you can see, adding new interfaces is pretty easy once the project is divided into three (or more) major modules: the REST API, the conversational flow, and the NLP/NLU layer. A minimal code sketch of this separation follows, and then we'll talk about each architecture module in detail.
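To make the split concrete, here is a rough sketch in plain Python with hypothetical function names; in a real deployment each function would live in its own microservice behind its own API:

```python
# A minimal sketch of the three-module split; all names are illustrative.

def parse_intent(text: str) -> dict:
    """NLU/NLP module: turn raw user text into an intent (stubbed here)."""
    if "price" in text.lower():
        return {"intent": "get_stock_price", "symbol": "AAPL"}
    return {"intent": "fallback"}

def run_conversation(intent: dict) -> str:
    """Conversation-flow module: decide what the bot should say next."""
    if intent["intent"] == "get_stock_price":
        return f"Fetching the latest price for {intent['symbol']}..."
    return "Sorry, I didn't get that. Could you rephrase?"

def handle_message(text: str) -> str:
    """REST API module: the only entry point the interfaces talk to."""
    return run_conversation(parse_intent(text))

if __name__ == "__main__":
    print(handle_message("What's the price of AAPL?"))
```

Because the interfaces only ever call the REST API entry point, adding a new messenger or web widget doesn't touch the conversation or NLU code at all.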

Chatbot Interfaces (front-end)

Surely the front end of the bot plays an indispensable part in our project. The web chat on the website serves as the interface through which users interact and talk with the bot. We prefer using our own web widget, which has lots of advantages, among them:

Customization

Certainly, Facebook, WhatsApp, Slack, and many other platforms are widely used, but they all impose restrictions on the controls you may use. The user will see only the predefined, limited calendars, buttons, notifications, file uploaders, and viewers designed for that particular platform.
In contrast, with our own widget we can create as many custom elements as we need, designed in whatever colors, shapes, and sizes our imagination allows.

Dynamic interaction

Users want to get the most out of bots. Sending not a static picture but a rendered HTML chart, one that looks like a picture yet has clickable inner elements, is the kind of feature that attracts users and makes our bot stand out from the rest.

AWS Lambda + AWS API Gateway

These two services form the core API of our system.
It's no secret that one of the biggest strengths of AWS Lambda functions is the reduced cost of execution. In a traditional web application, with code hosted on and accessible through an EC2 instance in AWS, you pay for the server whether or not your API is actually in use. Depending on the instance type you're working with, that idle time can be very expensive.
Since our project is not hosted on a specific server, we also considerably reduce the risk of a machine breaking down: we don't rely on a single machine to serve the app and execute the code. If one machine goes down, AWS replaces it automatically. In other words, AWS takes care of the problem instantly and our code doesn't miss a beat.
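As an illustration, here is a minimal Lambda handler behind an API Gateway proxy integration; the payload fields are our own placeholders, not taken from the original project:

```python
import json

# Minimal AWS Lambda handler using the standard API Gateway
# proxy-integration contract: the request body arrives as a JSON
# string, and the response must carry statusCode/headers/body.

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    message = body.get("message", "")

    # ...route the message to the conversation logic here...

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": f"Echo: {message}"}),
    }
```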

AWS ElastiCache Redis

In our project, it's used as a highly scalable caching system for storing temporary data (cache entries, tokens, API call counters, etc.). Thanks to it, we don't have to pay the cost of making the same API calls or hitting the database with repetitive queries dozens or hundreds of times a day. However, it can be challenging to get it working with AWS Lambda, mainly because ElastiCache lives inside a VPC, so the functions need VPC access. We'll dedicate an entire publication to this topic in the next series. Stay tuned 🙂
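The caching pattern itself is simple. Here is a rough sketch using the redis-py client; the endpoint hostname, key scheme, and upstream fetch function are all placeholders:

```python
import json
import redis  # redis-py client

# The ElastiCache endpoint below is a placeholder.
cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

def fetch_quote_from_provider(symbol):
    # Stand-in for the real upstream stock-data API call.
    return {"symbol": symbol, "price": 123.45}

def get_stock_quote(symbol):
    key = f"quote:{symbol}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the API call

    quote = fetch_quote_from_provider(symbol)
    cache.setex(key, 60, json.dumps(quote))  # expire after 60 seconds
    return quote
```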

AWS DynamoDB

It offers built-in security, backup and restore, and in-memory caching. Furthermore, it was extremely easy to plug into our existing system.

Main reasons why we use it (a short usage sketch follows the list):

  • Fast, with single-digit millisecond latency
  • Integrated with most AWS services, such as IAM, CloudWatch, etc.
  • Auto-scaling and elastic in nature
  • Virtually infinite storage
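For a sense of how little code this takes, here is a minimal read/write sketch with boto3; the table name, key schema, and attributes are assumptions for illustration:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("chatbot-sessions")  # hypothetical table

def save_session(user_id, state):
    # Write (or overwrite) the user's conversation state.
    table.put_item(Item={"user_id": user_id, "state": state})

def load_session(user_id):
    # Read it back; returns None if the session doesn't exist yet.
    response = table.get_item(Key={"user_id": user_id})
    return response.get("Item")
```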

AWS EC2

Although we are cutting its usage down as much as possible, AWS EC2 is still used for static image generation due to AWS Lambda's limitations. We mainly use it to render FusionCharts output, upload the resulting pictures, and generate infographics. We'll expand on this point in the next section.

In the first version of chart generation, targeted at static images, we used the Export and Upload service developed by the FusionExport team. The rendered HTML is literally screenshotted and uploaded to AWS S3, which prevails over the alternatives thanks to its security, low cost, and scalability. For the same reasons, we used AWS S3 to store the widget plugins and admin pages for our project.
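The upload step is a one-liner with boto3; in this sketch the bucket name and key layout are placeholders of our own:

```python
import boto3

s3 = boto3.client("s3")

def upload_chart(local_path, symbol):
    # Push the rendered chart screenshot to S3 as a public asset.
    key = f"charts/{symbol}.png"
    s3.upload_file(local_path, "my-chatbot-assets", key,
                   ExtraArgs={"ContentType": "image/png"})
    # The messenger can then send this object's URL as an image attachment.
    return f"https://my-chatbot-assets.s3.amazonaws.com/{key}"
```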

Let's draw some conclusions from all of the above:

  • To a certain extent, using AWS services frees the developer's mind from redundant anxiety about the machine's "life."
  • Thinking the architecture through in advance plays a pivotal role in project development.
  • Using Lambda + API Gateway definitely prevails over EC2 machines, as AWS automatically spins up new instances when needed, so the application effectively never goes down.

In the next series, we'll talk about using different chart services, the caching process, our experience, and a comparison of a couple of other platforms we used. Stay frosty!

