Applying AIMD to save kindle-clippings from my friends!

Pratyay Dhond
5 min read

I am blessed with great people as friends. They might not all be supportive and Disney-movie-like, but they sure help me see where I am going wrong, and they are the first to break my website.
This is an intriguing start to a tech blog, even for someone like me. But this idea began with Abhishek, a friend of mine from my hometown. I had just launched Kindle-clippings a few hours earlier, and I was happy to have hosted an end-to-end application. I was on a walk with two of my friends, tuktuk and Abhishek, when I showed them the website on their own phones ;).

That was supposed to be my flex. Then Abhishek started going through it and breaking it: uploading an image into a .txt file field, doing random actions, trying to log in with an unsupported email type. And I was into it.

There was a phase of my life when I believed "if it works, don't touch it", but not anymore. I would rather have my code broken in front of my face than behind my back by a user I don't know.

Today, I was writing code to reduce API calls and cost for Kindle-clippings. The idea was to switch to primarily serving data from cache, and to replace auto-sync with a manual, user-driven sync button.

  • This would be a bad UX for sure, but given that this is a free project meant to help people organise kindle highlights, one click for manual sync instead of having to pay a premium doesn’t sound like a bad deal to me.

But then it dawned on me, while thinking this: I have friends like Abhishek, and they would spam the hell out of the refresh-cache button.

"I would just add a timeout, say 1 sec or 5 sec, to the cache button."

That is a good idea, but my friends are persistent (that's why I like them); they would go out of their way to use Selenium or some automation tool to sit AFK and spam the API calls.
The things these people would do to say “I broke your website ;)” ~ feels like love to me!

What should I do then? How do I protect it from attackers without making it a bad user experience?

When I asked myself this, my mind went back to Computer Networks: AIMD, Additive Increase Multiplicative Decrease. That sounded like something I could work with.

Additive Increase Multiplicative Decrease
  • This algorithm is used all over the Internet: if you have ever wondered why a download starts slow and then ramps up in speed until it settles at a stable point, you have experienced AIMD.

  • But AIMD deserves an entire blog of its own; it is beautiful ;)

    The problem was that I couldn't use AIMD as it is; it doesn't quite fit:

    1. I could add a penalty for every refresh (say 1 s, 2 s, 3 s, …).

    2. But what would I use as the multiplicative-decrease condition? Successful API calls? Unsuccessful API calls? Neither helps against my friends spamming my website.

That's where the idea for an AIMD-inspired penalty came from.
  • I would set a constant multiplier: on every refresh the user makes, the timeout before they can sync again is multiplied by it.

  • If an API call fails, the timeout is cut in half, just as the classic AIMD algorithm does.

So I only changed the additive-increase rule to a multiplicative one, but the advantage of this is surreal.
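As a sketch, the two update rules above might look like this (the function names, the multiplier of 2, and the 1-second base are my assumptions, not the real Kindle-clippings code):

```typescript
const MULTIPLIER = 2; // assumed constant multiplier
const BASE_TIMEOUT_S = 1; // assumed 1-second base timeout

// Multiplicative *increase*: every refresh multiplies the wait
// before the next sync is allowed.
function onRefresh(timeoutS: number): number {
  return timeoutS * MULTIPLIER;
}

// Classic AIMD decrease: a failed API call cuts the penalty in half,
// never dropping below the base timeout.
function onApiFailure(timeoutS: number): number {
  return Math.max(BASE_TIMEOUT_S, timeoutS / 2);
}
```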

  • Let's say my friend could make one refresh call per second to the backend without this rate-limiting.

  • With the AIMD-inspired rate-limiting in place, they could still make the first backend call after 1 second, but by the time they hit the 20th refresh API call, it would have taken them 2,097,151 seconds (24.27 days).

  • That is the power of exponential growth.
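The figure above falls out of a geometric series: assuming a 1-second base and a ×2 multiplier (my assumptions), summing 21 doubling waits gives 2^21 − 1 seconds, which is exactly the number quoted:

```typescript
// Total wait accumulated when every refresh doubles a 1-second
// base timeout (assumed multiplier of 2; illustrative only).
function cumulativeWaitSeconds(refreshes: number): number {
  let timeout = 1;
  let total = 0;
  for (let i = 0; i < refreshes; i++) {
    total += timeout; // wait out the current penalty
    timeout *= 2; // then it doubles for the next refresh
  }
  return total; // geometric series: 2^refreshes - 1
}

cumulativeWaitSeconds(21); // 2_097_151 seconds ≈ 24.27 days
```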

But now there is another problem: I need a condition under which this time delay can be reset or reduced.

  • I could cut the time multiplicatively if the user behaves for 'X' amount of time, but my friends would figure that out.

  • How about we cap the maximum time limit, the maximum penalty a user can get?

    • My friends would still be able to spam, but at a rate I decide, not one they choose. I like this power.
What did we settle for then?
  • MAX_PENALTY is capped at 1 hour (3,600 seconds) per user.

  • The penalty is written to the user's localStorage and kept in the website's current state.

  • There is also a backend layer: with each API call, the user's last API-call timestamp is recorded along with the penalty.

    • If there is a mismatch between the client's values and the backend's, the penalty is reset to MAX_PENALTY, irrespective of the penalty time left.
  • What if people genuinely hit that limit?

    • It shouldn't happen, as failed API calls won't penalize users.

    • And if the API call worked, a regular user has no reason to spam.
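Putting the settled design together, a sketch might look like the following; every name, the storage key, and the shape of the backend record are my assumptions, not the actual Kindle-clippings implementation:

```typescript
const MAX_PENALTY_S = 3600; // 1-hour cap per user

// Frontend side: compute the next penalty, capped at MAX_PENALTY_S.
// In the real app the result would be persisted to localStorage
// alongside the last-sync timestamp.
function nextPenalty(currentS: number, multiplier = 2): number {
  return Math.min(currentS * multiplier, MAX_PENALTY_S);
}

// Backend side: the server keeps its own (timestamp, penalty) pair per
// user and compares it with what the client reports on each call.
interface SyncRecord {
  lastCallTs: number;
  penaltyS: number;
}

function validateSync(server: SyncRecord, reported: SyncRecord): number {
  const mismatch =
    server.lastCallTs !== reported.lastCallTs ||
    server.penaltyS !== reported.penaltyS;
  // Tampering (e.g. edited localStorage) resets straight to the cap,
  // irrespective of the penalty time left.
  return mismatch ? MAX_PENALTY_S : server.penaltyS;
}
```

The cap is what makes the scheme safe to leave unattended: no matter how hard someone spams, the worst they can do is lock themselves out for an hour at a time.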

What if someone clears their cache by mistake?
  • Sadly, there will always be some false positives: genuine users who land on the wrong side of the river.

  • But for them, if the cache is empty, the first data fetch is automatic; no manual button press is needed, so this problem would never occur.

    • That is, they won't need the refresh button, as their data is synced from the origin (the centralised data store) itself.

That's settled; this is what I am implementing.

If you are interested in seeing how this is implemented, check out the following links:
- Kindle-Clippings Website
- Kindle-Clippings Backend Code
- Kindle-Clippings Frontend Code


If you haven't checked out Kindle-Clippings yet, go and test it. We have added a sample file so you can try the application even if you don't use a Kindle.


Written by Pratyay Dhond

Learning from my experiences while joking about the stupid mistakes I made in my prior code!