How to implement logging in your REST service by using Elasticsearch - PART 2
Note: This article is the second part of the How to implement logging in your REST service by using Elasticsearch article series. Click the link below for part 1 of this series, or if you have already gone through it, you are good to proceed with part 2.
How to implement logging in your REST service by using Elasticsearch - PART 1
In part 1 of this article series, I introduced the ELK stack and explained how it can be used to implement logging in your REST service. You can find the first part of this series on medium, dev, and hashnode. In this part, we will discuss how to install, configure, and use the ELK stack.
For logging purposes, Elasticsearch comes with two other tools, Kibana and Logstash. Together they form the ELK stack introduced in part one of this series. To avoid confusion and reading fatigue, I will divide this second part into two sections as well:
Part 2.A: Install and configure Elasticsearch
Part 2.B: Install and configure Kibana and Logstash as well as how to use the ELK stack
Logging Process with ELK stack
Part 2.A: Install and configure Elasticsearch
First, we need to download Elasticsearch, Kibana, and Logstash. Below are the links where you may download these tools. Currently, I am using version 8.3.1 in a Linux environment, but the version might vary for different environments and times. In this article, we will be using APT (Advanced Package Tool), a Linux package manager, to download and install all the tools.
Note: In this article, we mainly focus on accessing logs written to an external file or files. This means you need to implement logging to a file or files in the REST architecture of your choice, so that all logs can be read from a specific file or files.
Step 1 - Installing Elasticsearch
To begin, we need to configure the Ubuntu package repository by adding Elastic’s package source list in order to download and install Elasticsearch. This is not configured by default, so we need to do it manually.
Note: All of the packages are signed with the Elasticsearch public GPG key in order to protect your system from package spoofing. Packages authenticated with this key are considered trusted by your package manager.
a. Open the terminal and use cURL, a command-line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. We use the -fsSL flags to silence progress output and most errors (except a server failure) and to let cURL follow a redirect to a new location:
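A sketch of the import command, assuming the standard Elastic key URL and the keyring location used by recent Ubuntu releases (adjust the keyring path if your setup differs):

```bash
# Download Elastic's public GPG key and store it in a dearmored keyring that APT can use
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
```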
b. Add the Elastic source list to the sources.list.d directory, where APT will look for new sources:
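A sketch, assuming you are targeting the 8.x package repository and the keyring path from the previous step:

```bash
# Register the Elastic 8.x APT repository, signed by the key imported above
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
```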
c. Update your package lists so APT will read the new Elastic source:
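On an APT-based system this is simply:

```bash
# Refresh APT's package index so the new Elastic repository is picked up
sudo apt update
```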
d. Use the following command to install Elasticsearch:
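Something along these lines should work:

```bash
# Install the Elasticsearch package from the Elastic repository
sudo apt install elasticsearch
```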
If you have reached this far without any error, that means Elasticsearch is now installed and ready to be configured. 🎉
Step 2 - Configuring Elasticsearch
All Elasticsearch configuration goes into the elasticsearch.yml file.
a. Use the following command to open the elasticsearch.yml file:
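A sketch, assuming the package was installed via APT so the configuration lives under /etc/elasticsearch, and that you are editing with nano as in the rest of this article:

```bash
# Open the main Elasticsearch configuration file in nano
sudo nano /etc/elasticsearch/elasticsearch.yml
```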
There is a lot you can configure in Elasticsearch, such as the cluster, node, path, memory, network, discovery, and gateway settings. Most of these are already preconfigured in the file, but you can change them as you see fit. For the sake of this tutorial, we will only change the network host setting to restrict access to a single server.
Elasticsearch listens for traffic from everywhere on port 9200. For this reason, you may want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API.
In order to accomplish this, find the line that specifies network.host, uncomment it, and replace its value with a custom IP address like this:
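For example, to restrict access to the local machine only (an illustrative value; substitute the address you want Elasticsearch to bind to):

```yaml
# /etc/elasticsearch/elasticsearch.yml
network.host: localhost
```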
b. If you opened the configuration file with nano, use the following key combination to save and close the file: CTRL+X (or ⌘ + X on Macintosh), followed by Y, and then ENTER.
Step 3 - Starting Elasticsearch
We use the systemctl command to start the Elasticsearch service. This lets systemd initialize Elasticsearch properly; otherwise it may run into errors and fail to start.
a. Open the terminal and run the following command:
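On a systemd-based distribution this is:

```bash
# Start the Elasticsearch service via systemd
sudo systemctl start elasticsearch
```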
b. You can also enable Elasticsearch to run automatically on every system boot:
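For example:

```bash
# Enable the Elasticsearch service to start automatically at boot
sudo systemctl enable elasticsearch
```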
c. Run the following command to test your Elasticsearch instance. Note that in my case Elasticsearch is running on localhost:9200; you may need to specify the IP:Port address your own Elasticsearch instance is using.
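A sketch of the request, assuming Elasticsearch is reachable over plain HTTP on localhost:9200 (with the default security settings in 8.x you may instead need to use HTTPS and provide credentials):

```bash
# Send a GET request to the Elasticsearch root endpoint to confirm it is running
curl -X GET 'http://localhost:9200'
```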
If everything went well, you will see a response showing some basic information about your local node, similar to this:
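An illustrative example of the response shape; the actual name, cluster UUID, and version details will differ on your machine:

```json
{
  "name": "your-hostname",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "a-random-uuid",
  "version": {
    "number": "8.3.1"
  },
  "tagline": "You Know, for Search"
}
```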
Now that Elasticsearch is up and running, in the next section, Part 2.B of this article series, we will install Kibana and Logstash and test our logging configuration.
Just in case you don't know, there are many more articles like this at ClickPesa on hashnode, ClickPesa on dev.to, and ClickPesa on medium. You will thank me later.