
8 Topics Every Node.js Microservice Developer Should Know

This article explores 8 topics that NodeJS microservice developers should be aware of when transitioning from monolithic application development.

The Bitovi Team


When you design a microservice system, there are some key topics and tools you should be familiar with. Designing a successful microservice system differs from developing a successful monolithic application in several key ways, and the better you understand these differences, the better you'll be able to ensure that your environment is reliable, secure, and consistent. In this article, we'll discuss eight topics that NodeJS microservice developers should be familiar with.


These topics are:

1. Service Separation
2. Data Security
3. Documentation
4. Effective Testing
5. Versioning
6. Containerization
7. Queues/Eventual Consistency
8. Data Lakes and Bug Tracking

1. Service Separation


NodeJS microservice developers should think of services as self-contained applications, often supported and managed by different development teams. The primary advantage of using microservices is that they can be developed and released independently, decreasing development time through quicker testing cycles.

There's no requirement that services within a system be written in the same programming language or use the same underlying technologies. Ultimately, microservices function as black boxes: service consumers don't need to know what's going on under the hood of the microservice; they just need to see inputs and outputs.

Microservice APIs are commonly accessed by other servers, not just by clients or UIs. Developers need to account for this type of access when designing services, taking into account information flow for both “client-to-server” and “server-to-server” instances. Sessions are rarely used for these services; instead, services should be as stateless as possible.
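As a minimal sketch of what "stateless" looks like in practice, the endpoint below keeps no session between requests; every caller, whether a UI or another service, supplies its credentials on each request. Express and the route shape are illustrative assumptions, not something this article prescribes.

```js
// A minimal sketch of a stateless endpoint (Express is an assumption here).
// No session state is kept between requests; each request carries everything
// needed to handle it, so any caller, UI or server, can use the API the same way.
const express = require('express');
const app = express();

app.get('/orders/:id', (req, res) => {
  // Credentials arrive with the request itself instead of living in a session.
  const token = req.headers.authorization; // e.g. "Bearer <token>"
  if (!token) {
    return res.status(401).json({ error: 'Missing credentials' });
  }
  // Hypothetical response; a real service would validate the token and query a data store.
  res.json({ id: req.params.id, status: 'shipped' });
});

app.listen(3000);
```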

The Short Version:

  • Services are self-contained applications, developed and released independently. 
  • Services don’t need to be written in the same language or use the same underlying technologies. Service consumers only need to see inputs and outputs. 
  • Microservice APIs are commonly accessed by other servers, and devs should take into account information flow for both “client to server” and “server to server” instances. 
  • Services should be as stateless as possible.

2. Data Security

When designing a monolithic application that will interface with a server, traditional authentication and authorization mechanisms work just fine. However, NodeJS microservices often have multiple applications and servers accessing their data, meaning that a modified authorization and authentication schema is required.

When transitioning to a microservice architecture, it's typical to create a microservice that's specifically intended to handle authorization, to connect to external authorization systems, or both. External authorization systems may take the form of SSO (Single Sign-On) systems or social authentication systems that let users reuse existing logins from providers like Google or Facebook.

A common method of handling authentication for microservices is OAuth/OpenID Connect, which enables users to give applications permission to access data on their behalf (often referred to as delegated authorization). Simple bearer tokens often come up short in these designs; JSON Web Tokens (JWTs) commonly fill these gaps by encoding scope and other metadata into the token.
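As a rough sketch of how a service might check the scope a JWT carries, the middleware below uses the jsonwebtoken package. The secret, claim names, and required scope are illustrative only; a real authorization server defines what goes into the token.

```js
// A sketch of JWT-based authorization middleware using the jsonwebtoken package.
// The secret, claim names, and scope values are illustrative.
const jwt = require('jsonwebtoken');

function requireScope(requiredScope) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const token = header.replace(/^Bearer /, '');
    try {
      // Verifies the signature and decodes the payload, including whatever
      // scopes the authorization server encoded into the token.
      const payload = jwt.verify(token, process.env.JWT_SECRET);
      if (!payload.scope || !payload.scope.includes(requiredScope)) {
        return res.status(403).json({ error: 'Insufficient scope' });
      }
      req.user = payload;
      return next();
    } catch (err) {
      return res.status(401).json({ error: 'Invalid or expired token' });
    }
  };
}

// Usage with a hypothetical Express route:
// app.get('/invoices', requireScope('invoices:read'), listInvoices);
```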

As always, be sure to encrypt data in transit using SSL/TLS, hash passwords, and encrypt other sensitive data, like contact info, at rest. It's also extremely important to pay attention to what data might show up in access logs. Because interservice communication occurs so often within a microservice architecture, data is bound to show up in many servers, so it must be treated judiciously.
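A small sketch of those last two points, assuming the bcrypt package; the `db` data-access object and field names are hypothetical:

```js
// Protecting sensitive data at rest and keeping it out of logs.
// `db` is a hypothetical data-access object; bcrypt is an assumed dependency.
const bcrypt = require('bcrypt');

async function createUser(db, { email, password }) {
  // Store a one-way hash rather than the password itself.
  const passwordHash = await bcrypt.hash(password, 10);
  const user = await db.users.insert({ email, passwordHash });

  // Log only non-sensitive fields so credentials never land in access logs.
  console.log('user created', { id: user.id });
  return user;
}
```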

The Short Version:

  • Microservices require a more mature authorization and authentication schema than monolithic applications.
  • Authorization can be handled by one or more of the following: your own service, external services (SSO), or social platforms.
  • OAuth/OpenID Connect enables users to give applications permission to access data on their behalf.

3. Documentation

Documentation is critical for the development of any application, but it’s especially important for microservice systems, regardless of whether you’re developing with NodeJS or another environment. The success of a microservice-based application relies on the ability of microservices to integrate with each other. While different development teams will be overseeing different microservices, it’s important that any given microservice be able to integrate seamlessly with any other microservice. 

Well-documented microservice APIs enable clients to interface with them consistently and predictably. Documentation should drive development, and docs should follow standards like the OpenAPI Specification. Inconsistent documentation and engineering will prevent individual microservices from being able to communicate with each other. To address this problem, the OpenAPI Specification sets out standards for data types, document structure, and schemas for interfacing with your API's different object types.
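For illustration, here is a minimal OpenAPI 3.0 document sketched as a plain JavaScript object; the service name, path, and schema are made up. A document like this could be served as interactive API docs with tools such as swagger-ui-express.

```js
// A minimal OpenAPI 3.0 document; the path and schema are illustrative.
const openApiSpec = {
  openapi: '3.0.0',
  info: { title: 'Orders Service', version: '1.2.0' },
  paths: {
    '/orders/{id}': {
      get: {
        summary: 'Fetch a single order',
        parameters: [
          { name: 'id', in: 'path', required: true, schema: { type: 'string' } },
        ],
        responses: {
          200: {
            description: 'The requested order',
            content: {
              'application/json': {
                schema: { $ref: '#/components/schemas/Order' },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      Order: {
        type: 'object',
        properties: {
          id: { type: 'string' },
          status: { type: 'string', enum: ['pending', 'shipped', 'delivered'] },
        },
      },
    },
  },
};

module.exports = openApiSpec;
```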

In addition to any typical inline comments that exist in a codebase, events and unseen processes also need to be documented. CRON jobs and other automated processes should have their own documentation outlining the tasks that are part of the job.

The Short Version:

  • Documentation helps microservices integrate seamlessly with any other microservice. 
  • Documentation should drive development, and docs should follow standards like the OpenAPI Specification.
  • Preserve inline code comments.
  • Document unseen processes like events and CRON jobs.

4. Effective Testing

When developing a microservice system in NodeJS, you need to test with careful consideration. Ensure that the tests provide truly valuable assurance regarding the reliability of your microservices. 

Many developers use code coverage as a benchmark when evaluating the quality of their tests. However, while code coverage can be a useful metric for assessing the completeness of tests, it should never be the only metric. Code coverage can be deceptive because it only tells you how many lines of code your tests have touched overall, not whether you have tested the cases that might break your code. Don't just test to increase coverage; be sure that you are proactively thinking of and testing edge cases that might cause your code to fail.
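As a sketch of the difference, the tests below use Jest (the framework choice and the example calculateDiscount function are both illustrative, not from this article). The happy-path test alone can push coverage up; the edge-case tests are what actually guard against breakage.

```js
// Illustrative function under test; defined inline so the example is self-contained.
function calculateDiscount(price, discount) {
  if (price < 0) throw new Error('price must be non-negative');
  if (discount < 0 || discount > 1) throw new Error('discount must be between 0 and 1');
  return price * (1 - discount);
}

describe('calculateDiscount', () => {
  // A single happy-path test can make line coverage look good without proving much.
  test('applies the discount on the happy path', () => {
    expect(calculateDiscount(100, 0.5)).toBe(50);
  });

  // The edge cases are what catch the inputs that would break the code in production.
  test('rejects negative prices', () => {
    expect(() => calculateDiscount(-5, 0.5)).toThrow();
  });

  test('rejects discounts greater than 100%', () => {
    expect(() => calculateDiscount(100, 1.5)).toThrow();
  });
});
```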

Microservices often rely on each other to operate as intended, so every microservice within the system should be rigorously tested to detect as many bugs as possible. It’s especially important to test thoroughly and catch bugs before they show up in production, as debugging an issue in a distributed microservice system can prove difficult.

Contract testing is a good way to ensure that messages can move from consumer to provider and vice versa. The goal of a contract test is to determine if two separate microservices are compatible with one another. It does this by logging the interactions that the microservices have with each other and storing them in a contract which both services must adhere to. 

Contract tests can be used to ensure that both consumer and provider possess an accurate understanding of the request-response relationship, and when they are combined with traditional, functional tests that check inputs and outputs, you can be much more confident in the reliability of your entire microservice system. Contract testing can be done with frameworks like Pact.
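A rough sketch of a consumer-side contract test using Pact's JavaScript DSL is shown below. The service names, port, and interaction are made up, and the API shape shown here follows the classic pact-js style, which may differ between versions; global fetch assumes Node 18 or later.

```js
// A consumer-driven contract test sketch with @pact-foundation/pact (classic DSL).
// The consumer/provider names and interaction are illustrative.
const { Pact } = require('@pact-foundation/pact');

const provider = new Pact({
  consumer: 'WebApp',
  provider: 'OrdersService',
  port: 1234,
});

describe('orders contract', () => {
  beforeAll(() => provider.setup());      // start the mock provider
  afterEach(() => provider.verify());     // check every declared interaction was exercised
  afterAll(() => provider.finalize());    // write the pact file for the provider to verify

  test('provider returns an order by id', async () => {
    await provider.addInteraction({
      state: 'an order with id 42 exists',
      uponReceiving: 'a request for order 42',
      withRequest: { method: 'GET', path: '/orders/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: '42', status: 'shipped' },
      },
    });

    // The consumer code under test talks to the mock provider on port 1234.
    const res = await fetch('http://localhost:1234/orders/42');
    expect(res.status).toBe(200);
  });
});
```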

The Short Version:

  • Be sure that you are truly testing edge cases that might break your code, not just testing to increase coverage.
  • Use contract testing, with frameworks like Pact, to ensure that messages can move from consumer to provider and vice versa.

5. Versioning

Microservices should always be managed with versioning. In fact, versioning is one of the most critical parts of maintaining a microservice system. Unlike when designing a monolithic system, microservice APIs are written and maintained independently. Proper versioning ensures that microservices which are working continue to work even if changes to other microservices are made.

This means that each service should be updated only when necessary. You shouldn't force a microservice to adhere to new changes as soon as they are made; rather, services should be updated according to semantic versioning standards, which follow a “MAJOR.MINOR.PATCH” schema.

The MAJOR portion of the version number is only updated when a breaking change has been made that isn’t backwards compatible. The MINOR portion is changed when backwards compatible changes are introduced to the system. Finally, the PATCH portion of the version number is updated whenever patches or bug fixes are released. 
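As a small sketch of how a consuming team might act on those rules, the helper below decides whether a new release of a dependency's API or client package is safe to pick up automatically. The version strings and the policy itself are illustrative, not prescribed by the article.

```js
// A sketch of applying MAJOR.MINOR.PATCH semantics when deciding on upgrades.
function isSafeUpgrade(current, candidate) {
  const [curMajor] = current.split('.').map(Number);
  const [candMajor] = candidate.split('.').map(Number);

  // A MAJOR bump signals a breaking, non-backwards-compatible change:
  // consumers should not pick it up automatically.
  if (candMajor !== curMajor) return false;

  // MINOR and PATCH bumps are backwards compatible by convention,
  // so adopting them automatically is generally safe.
  return true;
}

console.log(isSafeUpgrade('1.4.2', '1.5.0')); // true  (MINOR bump)
console.log(isSafeUpgrade('1.4.2', '2.0.0')); // false (MAJOR bump)
```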

The Short Version:

  • Proper versioning helps ensure that microservices continue to work even if changes to other microservices are made.
  • Don’t force microservices to adhere to new changes as soon as they are made, update them according to semantic versioning standards.

6. Containerization

After transitioning from a monolithic application to an agile, microservice-based architecture, you will almost always need some form of automated deployment. NodeJS developers can accomplish this with DevOps tools and techniques such as Kubernetes, CircleCI, or AWS CodeBuild. Developing and deploying with containers is a common strategy for ensuring consistency in this area.

Containers are essentially bundles of everything a service or application needs to run. Container engines can be used to quickly create new instances of a microservice or system component, or to destroy these components if you no longer need them. Another reason that containers are so useful is that they are vendor agnostic, and they can be deployed on any commonly used container hosting platform. 

Containers can also assist with local development by reducing the risk of errors in production, letting you install and remove tools in a controlled environment without having to worry about cleanup. Docker is by far the most commonly used container engine, but other container technologies, such as OpenVZ and Oracle's container tooling, exist.
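For context, here is a sketch of what a Dockerfile for a NodeJS microservice might look like; the base image tag, port, and start command are illustrative.

```dockerfile
# A sketch of a Dockerfile for a NodeJS microservice; values are illustrative.
FROM node:18-alpine

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the service source and document the port it listens on.
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```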

The Short Version:

  • Service containers bundle everything a service needs to run together. Container engines, like Docker, can be used to run your microservices.
  • CI/CD tools like Jenkins, typically triggered by changes in Git, can be used to automate deployment of containers.

7. Queues / Eventual Consistency

One of the defining features of a microservice-based system is that when one microservice goes down, other microservices remain operable. Synchronous result delivery is often expected in monolithic systems, but in a microservice environment you can't rely on this. You have to have some way of ensuring that when one microservice fails, the entire chain doesn't break. One way to guard against synchronous failures is by using queues.

When a microservice is configured to run asynchronously, it may transact the data in the target service synchronously, while queueing the transaction for downstream services asynchronously.

Adding transactions to queues preserves them even if a microservice fails. If a necessary microservice goes down, the transaction will remain in the queue until the microservice is restored and the requests are completed. Popular message queue tools include Kafka, RabbitMQ, and Amazon SQS.
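As a rough sketch of queueing a transaction for downstream services, the snippet below uses amqplib, a RabbitMQ client; the queue name, message shape, and connection URL are illustrative.

```js
// Publishing a durable message to RabbitMQ with amqplib (names are illustrative).
const amqp = require('amqplib');

async function publishOrderCreated(order) {
  const connection = await amqp.connect(process.env.AMQP_URL);
  const channel = await connection.createChannel();

  // A durable queue plus persistent messages means the transaction survives
  // broker restarts and waits until a downstream consumer is available.
  await channel.assertQueue('order.created', { durable: true });
  channel.sendToQueue(
    'order.created',
    Buffer.from(JSON.stringify(order)),
    { persistent: true }
  );

  await channel.close();
  await connection.close();
}
```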

The Short Version:

  • You can protect against failures of synchronous result delivery by using queues for asynchronous delivery to downstream services.
  • Queues preserve transactions even if a microservice fails, and they can be managed with tools like Kafka, RabbitMQ, and Amazon SQS.

8. Data Lakes and Bug Tracking

When transitioning to a NodeJS microservice design pattern from a monolithic design pattern, you’ll need effective methods of reporting data and debugging errors. 

Because data is distributed in a microservice architecture, a tool for centralized reporting is necessary. Data lakes, like those created by Snowflake, help report data for large, complex systems where data comes from many different sources. Data lakes are repositories that let you store structured and unstructured data at any scale you want. Data lakes can hold different formats/structures of data and enable retrieval with a single interface.

Because bugs can spread across multiple microservices, it’s a good idea to have tools that can perform centralized error monitoring. Tools like Sentry assist in tracking which components of a microservice interface with parts of another microservice, enabling easier, more efficient debugging.
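A minimal sketch of wiring a NodeJS service into centralized error monitoring with @sentry/node is shown below; the DSN comes from your own Sentry project, and the handler is a made-up example.

```js
// Centralized error monitoring with @sentry/node; the DSN is a placeholder.
const Sentry = require('@sentry/node');

Sentry.init({ dsn: process.env.SENTRY_DSN });

async function handlePayment(req, res) {
  try {
    // ... call other microservices, write to the database, etc.
  } catch (err) {
    // Errors from every service land in one place, making cross-service
    // bugs easier to trace.
    Sentry.captureException(err);
    res.status(500).json({ error: 'Payment failed' });
  }
}
```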

The Short Version:

  • Data lakes are tools for centralized reporting that let you report data originating from many different sources.  
  • Centralized error monitoring tools like Sentry help make tracing and debugging of cross-service bugs easier.