Go Concurrency: Goroutines & Channels Simplified

What are Goroutines?

Imagine you are working on a school project with a group of friends. Instead of doing everything by yourself, you give each friend a specific task to work on at the same time, and the project gets done faster. In Go programming, these friends are like Goroutines. A Goroutine is a function that runs concurrently with other functions, letting your program do multiple things at once.

Why Use Goroutines?

Using Goroutines is like having multiple friends helping you with different tasks at the same time, making your work faster and more efficient. If you had to do everything yourself, it would take much longer. Goroutines are also very lightweight compared to operating-system threads, so a Go program can comfortably run thousands of them at once.

How to Create a Goroutine?

Starting a Goroutine is very easy. You just use the go keyword before the function call. Here’s a simple example:

package main

import (
    "fmt"
    "time"
)

func sayHello() {
    fmt.Println("Hello, world!")
}

func main() {
    go sayHello() // This starts the sayHello function in a new Goroutine
    time.Sleep(1 * time.Second) // Wait for a second to let the Goroutine finish
}

In this example, the sayHello function runs in a separate Goroutine. The time.Sleep line gives the Goroutine enough time to print "Hello, world!" before the program ends; without it, main would return immediately and the program could exit before the Goroutine ever runs.
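Sleeping for a fixed amount of time works for a quick demo, but it is fragile: you are guessing how long the Goroutine needs. A more reliable pattern is to use a sync.WaitGroup to wait until the Goroutine signals that it is finished. Here is a minimal sketch (passing the WaitGroup into sayHello is just one way to wire it up):

package main

import (
    "fmt"
    "sync"
)

func sayHello(wg *sync.WaitGroup) {
    defer wg.Done() // Signal that this Goroutine is finished
    fmt.Println("Hello, world!")
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)        // Tell the WaitGroup to expect one Goroutine
    go sayHello(&wg) // Start the Goroutine
    wg.Wait()        // Block until wg.Done() has been called
}

The web crawler example later in this article uses this same pattern.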

Channels

What are Channels?

Now, let's say you and your friends are working on different parts of a project, and you need a way to share information with each other. Channels in Go are like walkie-talkies that you and your friends use to communicate. They let Goroutines safely send messages to and receive messages from one another.

Why Use Channels?

Channels help Goroutines communicate without getting in each other's way. They make sure messages are passed correctly and safely, avoiding the mix-ups (known as race conditions) that can happen when two Goroutines touch the same data at the same time.

How to Create and Use Channels?

Creating a channel is like setting up a new walkie-talkie line. You can then send and receive messages using the <- operator. Here’s a simple example:

package main

import "fmt"

func main() {
    messages := make(chan string) // Create a new channel for strings

    // Start a new Goroutine
    go func() {
        messages <- "H
 // Send a message to the channel
    }()

    // Receive the message from the channel
    msg := <-messages
    fmt.Println(msg)
}

In this example:

  1. We create a channel named messages that can carry text messages (strings).

  2. We start a Goroutine that sends the message "Hello from Goroutine!" to the messages channel.

  3. The main Goroutine (the main part of the program) receives the message from the channel and prints it.
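A channel can also carry more than one message. Here is a minimal sketch (the jobs channel and the numbers sent are just for illustration) showing two ideas the web crawler example below relies on: the sender closes the channel when it has nothing more to send, and the receiver loops over the channel with range until it is closed:

package main

import "fmt"

func main() {
    jobs := make(chan int, 3) // A buffered channel that can hold 3 values

    // One Goroutine sends several messages, then closes the channel
    go func() {
        for i := 1; i <= 3; i++ {
            jobs <- i
        }
        close(jobs) // Tell receivers that no more values are coming
    }()

    // range keeps receiving until the channel is closed
    for job := range jobs {
        fmt.Println("Received job", job)
    }
}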

Using Goroutines & Channels in a Real-World Project: A Web Crawler

A web crawler is a program that visits web pages and extracts information. Let's say you want to build a web crawler that visits a list of URLs, extracts the title of each page, and prints them. Using Goroutines and Channels, we can make this process efficient by visiting multiple URLs concurrently.

Steps

  1. Fetch URLs Concurrently: Use Goroutines to fetch the content of multiple web pages at the same time.

  2. Communicate Results: Use Channels to pass the fetched titles back to the main Goroutine for printing.

Code Example

Here’s how you might implement a simple web crawler using Goroutines and Channels in Go:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "regexp"
    "sync"
)

// Function to fetch the title of a web page
func fetchTitle(url string, wg *sync.WaitGroup, results chan<- string) {
    defer wg.Done() // Signal that this Goroutine is done
    resp, err := http.Get(url)
    if err != nil {
        log.Println(err)
        results <- fmt.Sprintf("Error fetching %s", url)
        return
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Println(err)
        results <- fmt.Sprintf("Error reading %s", url)
        return
    }

    // Regex to find the title tag
    re := regexp.MustCompile("<title>(.*?)</title>")
    matches := re.FindStringSubmatch(string(body))
    if len(matches) > 1 {
        results <- fmt.Sprintf("Title of %s: %s", url, matches[1])
    } else {
        results <- fmt.Sprintf("No title found for %s", url)
    }
}

func main() {
    urls := []string{
        "https://www.google.com",
        "https://www.github.com",
        "https://www.stackoverflow.com",
    }

    results := make(chan string, len(urls)) // Channel to collect results
    var wg sync.WaitGroup // WaitGroup to wait for all Goroutines to finish

    // Start a Goroutine for each URL
    for _, url := range urls {
        wg.Add(1)
        go fetchTitle(url, &wg, results)
    }

    // Wait for all Goroutines to finish
    go func() {
        wg.Wait()
        close(results) // Close the channel when all Goroutines are done
    }()

    // Collect and print results
    for result := range results {
        fmt.Println(result)
    }
}

Explanation

  1. Fetch Titles Concurrently:

    • fetchTitle is a function that fetches the title of a given URL.

    • It uses http.Get to make an HTTP request to the URL.

    • It reads the response body and uses a regular expression to extract the title tag. A regular expression is a handy shortcut for a demo; a sketch after this list shows a more robust approach with a real HTML parser.

  2. WaitGroup and Channels:

    • We use a sync.WaitGroup to keep track of all the Goroutines. Each Goroutine calls wg.Done() when it completes.

    • The results channel is used to send the titles back to the main Goroutine.

  3. Starting Goroutines:

    • We start a Goroutine for each URL in the list. Each Goroutine fetches the title of a URL and sends it to the results channel.

  4. Collecting Results:

    • We start another Goroutine that waits for all the fetch Goroutines to finish (wg.Wait()) and then closes the results channel.

    • The main function reads from the results channel and prints the titles until the channel is closed.
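As mentioned above, the regular expression is only a demo shortcut and can be fooled by unusual markup. For a more robust approach, here is a minimal sketch using the golang.org/x/net/html package (an external package you would fetch with go get; the findTitle helper is written here for illustration):

package main

import (
    "fmt"
    "log"
    "net/http"

    "golang.org/x/net/html"
)

// findTitle walks the parsed HTML tree looking for the <title> element
func findTitle(n *html.Node) string {
    if n.Type == html.ElementNode && n.Data == "title" && n.FirstChild != nil {
        return n.FirstChild.Data
    }
    for c := n.FirstChild; c != nil; c = c.NextSibling {
        if title := findTitle(c); title != "" {
            return title
        }
    }
    return ""
}

func main() {
    resp, err := http.Get("https://www.google.com")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    doc, err := html.Parse(resp.Body) // Parse the whole page into a tree
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Title:", findTitle(doc))
}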

Benefits

  • Concurrency: Multiple URLs are fetched at the same time, making the process much faster than fetching them one by one.

  • Synchronization: Using Channels and WaitGroups ensures that all results are collected and printed correctly.
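One caveat: the crawler above starts one Goroutine per URL, which is fine for three URLs but could overwhelm your machine (or the target servers) if you had thousands. A common refinement is to use a buffered channel as a semaphore to cap how many fetches run at once. Here is a minimal sketch (maxWorkers and the placeholder URLs are hypothetical):

package main

import (
    "fmt"
    "sync"
)

func main() {
    urls := []string{"url1", "url2", "url3", "url4", "url5"} // Placeholder URLs

    maxWorkers := 2                        // Hypothetical cap on concurrent fetches
    sem := make(chan struct{}, maxWorkers) // Buffered channel used as a semaphore
    var wg sync.WaitGroup

    for _, url := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            sem <- struct{}{}          // Acquire a slot (blocks while maxWorkers are busy)
            defer func() { <-sem }()   // Release the slot when done
            fmt.Println("Fetching", u) // A real crawler would call fetchTitle here
        }(url)
    }
    wg.Wait()
}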

Real-World Use Cases

  • Search Engines: Web crawlers are a core component of search engines like Google.

  • Scraping Data: Collecting data from multiple websites for analysis, such as price comparisons, news aggregation, or social media monitoring.

  • Monitoring: Continuously checking the status of multiple websites or services for uptime and performance.

This example demonstrates how Goroutines and Channels can be used to build an efficient web crawler, making it easy to perform multiple tasks concurrently and handle their results in a synchronized manner.
