DeepSeek-R1 thinks in Cursor


Today, I was trying to create an Obsidian plugin to add AI chat with local models via Ollama - something I had never done before and knew nothing about.
I created a new folder for my project in Cursor (my code editor/IDE) and added a blank Python file, which is what I mostly do nowadays for new AI-based projects I know nothing about. I called it obsidian_ollama_plugin.py.
I then launched Cursor Composer with DeepSeek-R1 and asked it to give me a boilerplate Obsidian plugin.
DeepSeek began doing its thing. Thinking aloud.
And it kept thinking. And thinking. And thinking. ...
Maybe it was the boredom of waiting, or a sense that it was thinking too much for this simple question, or just mild curiosity - for some reason, I started reading the thought process.
My eyes widened, then darted around looking for the "Cancel generation" button in the Composer chat window.
I found it and hit it with as much force as I could, for emphasis.
DeepSeek-R1 stopped its seeking.
I needed to redo my prompt, but all was well.
Here's what I saw:
Before proceeding, make sure you read this^ end to end. Then go back and read it once more.
It turns out (as DeepSeek-R1's deep thoughts showed) that Obsidian plugins are written in JavaScript/TypeScript - my naive and arrogant assumption that it involved Python, and the even more arrogant assumption that it was a single Python file, had sent DeepSeek-R1 into panic mode.
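For reference, here is a minimal sketch of what an Obsidian plugin boilerplate actually looks like - not DeepSeek's output, just my own illustration. A plugin is a TypeScript class in main.ts (compiled to main.js), paired with a manifest.json; the class name, command id, and model name below are placeholder assumptions, as is the choice to call Ollama's local HTTP API on its default port:

```typescript
import { Plugin, Notice, requestUrl } from "obsidian";

// An Obsidian plugin is TypeScript, not Python. Obsidian loads the
// compiled main.js alongside a manifest.json describing the plugin.
export default class OllamaChatPlugin extends Plugin {
  async onload() {
    // Register a command in Obsidian's command palette.
    this.addCommand({
      id: "ask-local-model",
      name: "Ask local model via Ollama",
      callback: () => this.askOllama("Say hello in five words."),
    });
  }

  private async askOllama(prompt: string) {
    // Ollama serves a local HTTP API on port 11434 by default;
    // "llama3" is a placeholder for whichever model is pulled locally.
    const res = await requestUrl({
      url: "http://localhost:11434/api/generate",
      method: "POST",
      body: JSON.stringify({ model: "llama3", prompt, stream: false }),
    });
    new Notice(res.json.response);
  }
}
```

A single .py file was, in other words, wrong on every axis: wrong language, wrong file layout, wrong runtime.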
Now, why did the model's panic make me panic? Not that I had any emotional connection to an AI model (well maybe I do, but that wasn't the reason why).
The thing is, the most dreaded situation in using AI as a coding assistant is when it adds wrong code that goes undetected. There was a time when I used to check every line of code generated by AI - out of fascination, to build trust, to avoid forgetting how to code - whatever. But as I learnt to talk to the AI properly, and as I saw it get most things right most of the time, checking its outputs became more and more a waste of time.
AI would generate wrong code, of course, but fixing it was simple in most cases: run the code; if there is an error, paste it into the AI chat; if the output is not as expected, ask the AI. Most issues would be fixed this way.
Now, in very rare cases, these fixes would not work. There would still be errors. This would require me to get completely familiar with all parts of the AI-generated code and debug it with logging and breakpoints - the most time-consuming and often frustrating part of a programmer's day. And when a lot of code had been generated since I last looked at it, this could take hours or even days of manual work.
So, this was the most dreaded scenario, the one to be avoided preemptively, by stopping the AI when it seems like it may go off track.
And it is known that AI makes mistakes when it gets confused, and conflicting instructions confuse it.
That is why the model's panic transferred to me so quickly. And why I paused it.
The fix here was obvious - remove the Python file and ask again.
The key takeaway from this experience was that by seeing how DeepSeek-R1 was thinking, I was able to stop it in its tracks before it went on to do damage that could have cost days of work.
Another takeaway is that I will pay much more attention to the thought process of AI models.
There is a voyeuristic pleasure in seeing someone's innermost thoughts laid bare, even if it is an AI model. Having seen the thinking process of past reasoning models before DeepSeek-R1 (marco-o1, qwq, o1, o3), I had never seen anything quite like the way DeepSeek-R1 thinks. I know o1 and o3 don't share their full actual thought process, but in my experiments, the other two models (marco-o1 and qwq) never produced a thought process compelling enough to read seriously either. There is an impatience with AI models - you want the actual response so you can move on - especially with reasoning models, which spend more time thinking. For the first time, the thought process was not only useful; it gave me a renewed respect for reading it.
Descartes said, famously: cogito ergo sum - I think, therefore I am. Now that AI models think, and think so well, are they? (sentient? conscious? do they have feelings?)
Other possible takeaways from this experience - ones I would never follow myself, but thought of anyway, so sharing them here:
rtfm
Don't start coding before you have finished your first coffee of the day.
Support me: The best way to support my continued work (articles here, research, new AI-based experiences) is via ko-fi.