Guide to Installing Elasticsearch, Logstash, and Kibana on Amazon Linux 2023

Balaji

Are you looking to harness the power of Elasticsearch, Logstash, and Kibana for effective log management and data analysis?
This step-by-step guide will walk you through the process of installing and configuring this powerful trio on Amazon Linux 2023. Elasticsearch enables lightning-fast searches and data storage, Logstash efficiently processes logs, and Kibana delivers stunning visualizations. Let's dive right in!

Prerequisites:

  • Amazon Linux 2023 instance

  • A security group allowing inbound traffic on ports 9200 (Elasticsearch), 5601 (Kibana), and 5044 (Beats); for a quick test you can allow all traffic, but avoid that in production

  • 2 vCPUs and 4 GB RAM (e.g. t3.medium)
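Rather than opening all traffic, the security group can be scoped to just the ports the stack uses. A minimal sketch with the AWS CLI — the group ID below is a placeholder, and you should narrow the source CIDR to your own network:

```shell
# Placeholder security group ID -- replace with your own.
SG_ID="sg-0123456789abcdef0"

# Open only the ports the stack needs, instead of all traffic:
#   9200 - Elasticsearch HTTP API
#   5601 - Kibana web UI
#   5044 - Logstash Beats input
for PORT in 9200 5601 5044; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$PORT" \
    --cidr 0.0.0.0/0 || echo "could not open port $PORT (check AWS credentials)"
done
```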

    Step 1: Installing Elasticsearch

  • Install Java 11 (Amazon Corretto):

      sudo yum install java-11-amazon-corretto -y
    

Configure the Elasticsearch repository:

https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html

Create a repo file for Elasticsearch:

   vi /etc/yum.repos.d/elasticsearch.repo

Add the following (enabled=1 lets yum install from the repository without an extra --enablerepo flag):

[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

To verify:

cat /etc/yum.repos.d/elasticsearch.repo
  1. Import the GPG key:

     sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    

  2. Update and refresh the repository:

     sudo yum clean all
     sudo yum makecache
    

  3. Install Elasticsearch on Amazon Linux 2023.

     sudo yum -y install elasticsearch
    

  4. Verify that Elasticsearch has been installed successfully:

     rpm -qi elasticsearch
    

  5. Start and enable the Elasticsearch service:

     sudo systemctl start elasticsearch.service
     sudo systemctl enable elasticsearch.service
    
  6. Configure Elasticsearch on Amazon Linux 2023

    After the installation, you may need to configure Elasticsearch. Edit the file /etc/elasticsearch/elasticsearch.yml

    Set network.host to 0.0.0.0, http.port to 9200, and discovery.seed_hosts to []. To disable the security features for this test setup, set xpack.security.enabled: false (do not do this in production).

     vi /etc/elasticsearch/elasticsearch.yml
    

  7. Restart Elasticsearch:

     sudo systemctl restart elasticsearch.service
    

    Verify the service status:

     systemctl status elasticsearch.service

  8. Testing Elasticsearch:

    Let's test Elasticsearch by sending an HTTP request with curl:

     curl -X GET "localhost:9200"
    

  9. Or check it in a browser: <instance-public-ip>:9200
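As an extra sanity check, the cluster health API reports what state the node is in. A small sketch, assuming Elasticsearch is listening on localhost:9200 with security disabled as configured above:

```shell
ES_URL="http://localhost:9200"

# _cluster/health returns a JSON document whose "status" field is
# green, yellow, or red; a fresh single-node cluster is often yellow.
curl -s "$ES_URL/_cluster/health?pretty" || echo "Elasticsearch is not reachable at $ES_URL"
```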

    Step 2: Installing Logstash

    After a successful installation and configuration of Elasticsearch on Amazon Linux 2023, we now proceed to the next element, which is Logstash.

    1. Install Logstash:

       sudo yum install logstash -y
      

    2. Edit the Logstash configuration file to add the input and output parameters.

       vi /etc/logstash/conf.d/logstash.conf
      

      Paste the following (Beats input on port 5044, Elasticsearch output on port 9200):

       input {
         beats {
           port => 5044
         }
       }
       output {
         elasticsearch {
           hosts => ["localhost:9200"]
           manage_template => false
           index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
         }
       }
    3. Start and enable the Logstash service:

       sudo systemctl start logstash.service
       sudo systemctl enable logstash.service
       systemctl status logstash.service
      

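The index option in the output block expands per event: the beat name and version come from the event's @metadata fields, and %{+YYYY.MM.dd} formats the event timestamp, so each day's events land in a fresh index. A quick shell sketch of how such a name is assembled (the beat name and version here are illustrative):

```shell
# Illustrative values -- at runtime Logstash fills these in from the
# event's @metadata fields, which the Beats input sets automatically.
BEAT="filebeat"
VERSION="8.10.4"

# %{+YYYY.MM.dd} formats the event timestamp as year.month.day.
INDEX="${BEAT}-${VERSION}-$(date +%Y.%m.%d)"
echo "$INDEX"
```

So a Filebeat 8.10.4 event would be written to an index like filebeat-8.10.4-2024.01.15.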
Step 3: Installing and Configuring Kibana

Kibana is available from the Elastic repository we configured earlier, so we can install the package directly.

  1. Install Kibana

     sudo yum -y install kibana
    
  2. Start and enable the Kibana service:

     sudo systemctl start kibana.service
     sudo systemctl enable kibana.service
     systemctl status kibana.service
    

  3. Configure Kibana by modifying /etc/kibana/kibana.yml. Uncomment and set server.host to "0.0.0.0" and elasticsearch.hosts to ["http://localhost:9200"].

     vi /etc/kibana/kibana.yml
    

  4. Restart Kibana:

     sudo systemctl restart kibana.service
    
  5. Access the Kibana dashboard at <server-ip>:5601
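Before opening the dashboard, you can confirm from the instance itself that Kibana is up via its status API. A minimal sketch, assuming Kibana is listening on localhost:5601 as configured above:

```shell
KIBANA_URL="http://localhost:5601"

# /api/status returns JSON describing Kibana's overall state; while it
# is still starting you may see a "Kibana server is not ready" message.
curl -s "$KIBANA_URL/api/status" || echo "Kibana is not reachable at $KIBANA_URL"
```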

You can now start adding your data and shipping logs using beats such as Filebeat, Metricbeat, etc.

Step 4: Installing and Configuring Filebeat

  1. Install Filebeat:

     sudo yum install filebeat -y
    
  2. Start and enable the Filebeat service:

     sudo systemctl start filebeat.service
     sudo systemctl enable filebeat.service
    
  3. Configure Filebeat

    By default, Filebeat sends data directly to Elasticsearch; in this setup we point it at Logstash instead. Open the configuration file:

     vi /etc/filebeat/filebeat.yml

    Comment out the Elasticsearch output section and enable the Logstash output:

     # ---------------------------- Elasticsearch Output ----------------------------
     #output.elasticsearch:
       # Array of hosts to connect to.
       #hosts: ["localhost:9200"]

     # ------------------------------ Logstash Output -------------------------------
     output.logstash:
       # The Logstash hosts
       hosts: ["localhost:5044"]
    

  4. Restart Filebeat:

     sudo systemctl restart filebeat.service
    

    Enable Filebeat modules. Modules tell Filebeat which applications' logs to collect and ship to Elasticsearch. To list the available modules, run the command below:

     sudo filebeat modules list
    
  5. Enable a module, such as the system module:

     sudo filebeat modules enable system
    

  6. Restart Filebeat:

     sudo systemctl restart filebeat.service
    
  7. Load the index template:

    note: replace <instance ip> in the command below with your instance's IP address

     sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<instance ip>:9200"]'
    

  8. Restart the Filebeat service so the changes take effect:

     sudo systemctl restart filebeat.service
    
  9. Verify that Elasticsearch is receiving data:

     curl -XGET "http://<instance-ip>:9200/_cat/indices?v"
    

  10. Or check in a browser:

    <instance-ip>:9200/_cat/indices?v
    
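If events never show up in the indices above, Filebeat's built-in self-checks help narrow things down: filebeat test config validates the configuration file, and filebeat test output attempts a connection to the configured output (Logstash on port 5044 in this setup). A quick sketch:

```shell
FB_CONFIG="/etc/filebeat/filebeat.yml"

# Validate the configuration file syntax:
sudo filebeat test config -c "$FB_CONFIG" || echo "config test failed (is Filebeat installed?)"

# Check connectivity to the configured output (Logstash on 5044 here):
sudo filebeat test output -c "$FB_CONFIG" || echo "output test failed (is Logstash running?)"
```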

You're all set! With Elasticsearch, Logstash, and Kibana up and running, you have a powerful platform for managing, analyzing, and visualizing your data. Customize the setup according to your needs, create stunning visualizations, and dive into the world of actionable insights.

Remember, this guide provides a basic setup. For production environments, consider security, scaling, and optimization. Happy analyzing!
