Common Mistakes in Embedded C Development - Why malloc() Breaks Real-Time Systems

Ilya Katlinski

Article photo by Jorge Ramirez on Unsplash

📘 Introduction

Welcome to Part 3 of our series on common mistakes in Embedded C development. In Part 2, we explored how unsafe pointer use can lead to instability and crashes.

This article focuses on why dynamic memory (malloc/free) is risky in real-time systems and how you can structure your code to stay deterministic, even when working with cloud payloads, sensor buffers, or concurrent data streams.

🧠 Using malloc/free in Real-Time Code

🐞 The Problem

Dynamic memory allocation (malloc/free) may seem convenient, especially when dealing with variable-sized data, such as cloud messages or sensor batches. But using it inside real-time or time-sensitive code can lead to major reliability problems in embedded systems.

Why it’s dangerous:

  • malloc() may take unpredictable time due to fragmentation.

  • Memory exhaustion is hard to detect and often leads to crashes.

  • free() doesn’t always immediately reclaim memory.

  • Fragmentation grows silently, especially with mixed-size allocations.
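The fragmentation point is worth seeing concretely. Below is a host-side simulation, not a real allocator: a first-fit scan over a tiny 1 KiB "heap" shows how a mixed alloc/free pattern can leave plenty of free memory in total, but no contiguous run large enough for a request. `sim_alloc`, `sim_free`, and `fragmentationDemo` are illustrative names invented for this sketch.

```c
#include <stdbool.h>

/* Host-side simulation of first-fit allocation over a tiny "heap".
   sim_alloc/sim_free are illustrative only, not a real allocator. */

#define HEAP_BYTES 1024
static bool heapUsed[HEAP_BYTES];

/* Return the offset of a free run of `size` bytes, or -1 if none. */
static int sim_alloc(int size) {
    int run = 0;
    for (int i = 0; i < HEAP_BYTES; i++) {
        run = heapUsed[i] ? 0 : run + 1;
        if (run == size) {
            int start = i - size + 1;
            for (int j = start; j <= i; j++) heapUsed[j] = true;
            return start;
        }
    }
    return -1;
}

static void sim_free(int offset, int size) {
    for (int i = offset; i < offset + size; i++) heapUsed[i] = false;
}

/* Fill the heap with eight 128-byte blocks, free every other one
   (so 512 bytes are free in total), then ask for 256 contiguous
   bytes. The request fails: the largest free run is only 128. */
int fragmentationDemo(void) {
    int offsets[8];
    for (int i = 0; i < 8; i++) offsets[i] = sim_alloc(128);
    for (int i = 0; i < 8; i += 2) sim_free(offsets[i], 128);
    return sim_alloc(256);  /* -1: no contiguous 256-byte run */
}
```

Half the heap is free, yet the 256-byte request fails. A real malloc() suffers the same effect, only less predictably, which is why long-running firmware can die after days of apparently healthy operation.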

Let’s say you’re building a JSON payload with a couple of sensor readings and sending it to the cloud:

void reportSensorData() {
    float temp = readTemperature();
    float humidity = readHumidity();

    char* payload = malloc(256);  // ⛔️ bad idea
    if (payload) {
        snprintf(payload, 256,
                 "{ \"temp\": %.2f, \"humidity\": %.2f }",
                 temp, humidity);
        mqttSend("sensors/environment", payload);
        free(payload);
    } else {
        log_error("Failed to allocate payload buffer!");
    }
}

This might work during testing. But over time, under real load, you risk:

  • Random crashes

  • Missed deadlines

  • Hard-to-debug memory leaks

✅ Solution 1: Use Static Buffers (When Payload Size is Known)

If you can estimate a reasonable maximum payload size, use a static buffer instead of dynamic allocation:

#define PAYLOAD_SIZE 256

static char payloadBuffer[PAYLOAD_SIZE];

void reportSensorData() {
    float temp = readTemperature();
    float humidity = readHumidity();

    snprintf(payloadBuffer, PAYLOAD_SIZE,
             "{ \"temp\": %.2f, \"humidity\": %.2f }",
             temp, humidity);
    mqttSend("sensors/environment", payloadBuffer);
}

Warning: Static memory is safe in simple, single-threaded systems. In concurrent environments, always protect shared static data with synchronisation to avoid race conditions or corruption.
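In a multi-tasking build, that warning can be made concrete. Below is a hedged host-side sketch that guards the shared buffer with a pthread mutex, standing in for whatever primitive your RTOS actually provides (e.g. a FreeRTOS mutex); `readTemperature()`, `readHumidity()`, and `mqttSend()` are stubs invented for the example, not real APIs.

```c
#include <pthread.h>
#include <stdio.h>

#define PAYLOAD_SIZE 256

static char payloadBuffer[PAYLOAD_SIZE];
static pthread_mutex_t payloadLock = PTHREAD_MUTEX_INITIALIZER;

/* Stubs standing in for real sensor and transport code. */
static float readTemperature(void) { return 21.5f; }
static float readHumidity(void)    { return 48.0f; }
static void  mqttSend(const char* topic, const char* msg) {
    printf("%s: %s\n", topic, msg);
}

void reportSensorData(void) {
    float temp = readTemperature();
    float humidity = readHumidity();

    /* Hold the lock while the shared buffer is being written
       and sent, so a concurrent caller can't corrupt it. */
    pthread_mutex_lock(&payloadLock);
    snprintf(payloadBuffer, PAYLOAD_SIZE,
             "{ \"temp\": %.2f, \"humidity\": %.2f }",
             temp, humidity);
    mqttSend("sensors/environment", payloadBuffer);
    pthread_mutex_unlock(&payloadLock);
}
```

The lock covers the send as well as the snprintf; releasing it between the two would let another task overwrite the buffer mid-transmission.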

✅ Solution 2: Pre-Allocate During Init, Reuse in Runtime

For more flexibility but still safe memory control, allocate during system init and reuse the buffer later:

#define CLOUD_BUF_SIZE 256

static char* cloudBuffer = NULL;

void appInit() {
    cloudBuffer = malloc(CLOUD_BUF_SIZE);  // ✅ allocate once
    if (!cloudBuffer) {
        log_error("Fatal: cannot allocate cloudBuffer");
        systemHalt();
    }
}

void reportSensorData() {
    float temp = readTemperature();
    float humidity = readHumidity();

    snprintf(cloudBuffer, CLOUD_BUF_SIZE,
             "{ \"temp\": %.2f, \"humidity\": %.2f }",
             temp, humidity);
    mqttSend("sensors/environment", cloudBuffer);
}

✅ Solution 3: Use a Memory Pool (for Concurrent or Variable-Size Use)

Let’s say your MQTT library queues multiple telemetry messages, each needing a buffer. Instead of malloc, use a memory pool with fixed-size blocks:

#define BLOCK_SIZE  256
#define BLOCK_COUNT 5

static char telemetryPool[BLOCK_COUNT][BLOCK_SIZE];
static bool blockUsed[BLOCK_COUNT];

char* allocBlock() {
    // In concurrent use, wrap this scan in a critical section
    // (disable interrupts or take a mutex) so two contexts
    // can't claim the same block.
    for (int i = 0; i < BLOCK_COUNT; i++) {
        if (!blockUsed[i]) {
            blockUsed[i] = true;
            return telemetryPool[i];
        }
    }
    return NULL; // No free block
}

void freeBlock(char* block) {
    for (int i = 0; i < BLOCK_COUNT; i++) {
        if (telemetryPool[i] == block) {
            blockUsed[i] = false;
            return;
        }
    }
}

void reportSensorData() {
    char* buf = allocBlock();  // ✅ fast, safe
    if (!buf) {
        log_warn("No buffer available");
        return;
    }

    float temp = readTemperature();
    float hum = readHumidity();

    snprintf(buf, BLOCK_SIZE,
             "{ \"temp\": %.1f, \"hum\": %.1f }", temp, hum);

    mqttSendAsync("sensors/environment", buf);
}

void onMqttSendComplete(char* buf) {
    freeBlock(buf);  // ✅ reuse block
}

✅ Solution 4: Avoid Dynamic Memory in ISR/Callback/RTOS Task

Never call malloc() inside:

  • ISRs

  • MQTT or BLE callbacks

  • RTOS timer handlers

Suppose you handle incoming MQTT commands and update the config:

void mqttOnCommand(const char* topic, const char* payload) {
    char* copy = malloc(strlen(payload) + 1);  // ⛔️ risky in callback
    if (!copy) return;

    strcpy(copy, payload);
    handleCommand(copy);  // parse/update config
    free(copy);
}

Instead, use a static buffer or queue to defer processing to a safe context:

#define MAX_CMD_SIZE 256
static char commandBuffer[MAX_CMD_SIZE];

void mqttOnCommand(const char* topic, const char* payload) {
    strncpy(commandBuffer, payload, MAX_CMD_SIZE);
    commandBuffer[MAX_CMD_SIZE - 1] = '\0';
    enqueueCommand(commandBuffer);  // queue must copy the data:
                                    // this buffer is reused on the
                                    // next callback
}
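What might enqueueCommand() look like? One option, sketched below under assumptions (the names `enqueueCommand`/`dequeueCommand` come from the example above, not a real API), is a single-producer/single-consumer ring of fixed-size slots. Copying the payload into a slot keeps each command intact even if more callbacks arrive before the main task catches up:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_CMD_SIZE  256
#define CMD_QUEUE_LEN 4   /* holds CMD_QUEUE_LEN - 1 commands */

static char cmdQueue[CMD_QUEUE_LEN][MAX_CMD_SIZE];
static volatile int qHead = 0;  /* advanced by the callback (producer) */
static volatile int qTail = 0;  /* advanced by the main task (consumer) */

/* Called from the callback context: copies the payload into the
   next free slot. No allocation, bounded time. */
bool enqueueCommand(const char* payload) {
    int next = (qHead + 1) % CMD_QUEUE_LEN;
    if (next == qTail) {
        return false;  /* queue full: drop (and ideally count) */
    }
    strncpy(cmdQueue[qHead], payload, MAX_CMD_SIZE - 1);
    cmdQueue[qHead][MAX_CMD_SIZE - 1] = '\0';
    qHead = next;
    return true;
}

/* Called from the main task: copies the oldest command out. */
bool dequeueCommand(char* out, size_t outSize) {
    if (qTail == qHead) {
        return false;  /* nothing pending */
    }
    strncpy(out, cmdQueue[qTail], outSize - 1);
    out[outSize - 1] = '\0';
    qTail = (qTail + 1) % CMD_QUEUE_LEN;
    return true;
}
```

With one producer (the callback) and one consumer (the main task), the head and tail indices are each written from only one side, which is what makes this ring safe without a lock; anything more concurrent needs a critical section, as with the memory pool.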

💡 Takeaway

If it runs repeatedly or needs to be fast, keep malloc() out of it.

Avoid:

  • malloc() inside RTOS tasks, ISRs, or cloud callbacks

  • Freeing memory across task boundaries

  • Allocating per-loop or per-message memory dynamically


At Itransition, we build IoT solutions with all these challenges in mind, ensuring our clients receive reliable, scalable systems with minimal maintenance overhead. Learn more about our approach at https://www.itransition.com/iot.
