Aww: Key Takeaways from the Second Week of Development

Roberto Lupi

Here's a concise summary of the key points:

  1. Data Alone is Insufficient: Simply acquiring data ("sterile data acquisition") does not guarantee positive or meaningful ("wholesome") changes.

  2. Aww is a Flexible Toolset: "Aww" should be understood as a collection of tools, not a pre-packaged solution. This flexibility is an advantage, though it carries a steeper adoption curve. I am fine with that, since I am likely the only user at this point in time.

  3. Aww Enables Simple LLM Operations: The "Aww OODA loop" framework allows for using Large Language Models (LLMs) in operational processes in a straightforward, low-complexity manner.

I started Augmented Awareness (GitHub) to create a tool for more balanced living. It also helps me gain insights into how integrating LLMs into software processes requires us to change our thinking and approach to problems. Additionally, it allows me to experiment more boldly with AI-assisted software development than I can do at work.

Sterile data acquisition doesn’t lead to wholesome changes

Simply gathering data, a process we can term "sterile data acquisition," does not inherently lead to beneficial or "wholesome" changes within an organization or system. Data in its raw form—spreadsheets filled with numbers, logs brimming with timestamps, databases storing user clicks—lacks intrinsic meaning or direction. It represents potential, not progress. Consider a fitness tracker diligently logging steps, heart rate, and sleep duration. If the user never reviews this data, reflects on patterns (like poor sleep after late-night meals), or adjusts their behavior (like setting an earlier bedtime), the tracker merely accumulates digits without contributing to improved health. The data remains sterile.

Achieving meaningful improvements requires moving beyond mere collection. It involves a deliberate process:

  • Contextualization: Understanding where the data came from, how it was collected, and what it represents (metadata). Is a sales dip seasonal, or does it indicate a deeper issue? Did I feel grumpy because of my own mistakes, like not following a good sleep routine for over a week, or was it due to external factors?

  • Analysis: Employing methods ranging from simple sorting and visualization to complex statistical modeling to identify patterns, trends, correlations, anomalies, and root causes. This transforms raw data into information. For instance, analyzing website traffic data might reveal that users consistently abandon their shopping carts at the payment stage.

  • Interpretation: Translating these patterns into actionable insights. The high cart abandonment rate might suggest user friction, perhaps due to unexpected shipping costs or a complicated checkout process. This step converts information into knowledge.

  • Decision and Action: Formulating a plan based on the insights and implementing changes. The business might decide to display shipping costs earlier or simplify the payment form. This is where knowledge spurs tangible change.

  • Feedback Loop: Measuring the impact of the actions taken by collecting and analyzing new data, thus restarting the cycle. Did simplifying the checkout reduce cart abandonment?

Without this complete cycle, data acquisition remains an academic exercise, consuming resources without delivering tangible benefits or fostering positive evolution. Wholesome changes arise not from the data itself, but from the informed actions it inspires.
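The cycle above can be sketched as a small program. This is purely illustrative: the function names and the cart-abandonment data are assumptions for the example, not part of aww's actual API.

```python
# A minimal sketch of the data-to-action feedback cycle described above.
# All names here are illustrative; aww does not expose this API.

def feedback_cycle(observations, analyze, interpret, decide):
    """Run one pass of the collect -> analyze -> interpret -> act loop."""
    information = analyze(observations)   # patterns, trends, anomalies
    insight = interpret(information)      # actionable knowledge
    action = decide(insight)              # a concrete change to try
    return action                         # measure its effect next cycle

# Example data: the cart-abandonment scenario from the text.
observations = [
    {"stage": "payment", "abandoned": True},
    {"stage": "payment", "abandoned": True},
    {"stage": "browse", "abandoned": False},
]

def analyze(obs):
    abandoned = [o for o in obs if o["abandoned"]]
    return {
        "abandonment_rate": len(abandoned) / len(obs),
        "hotspot": abandoned[0]["stage"] if abandoned else None,
    }

def interpret(info):
    if info["hotspot"] == "payment" and info["abandonment_rate"] > 0.5:
        return "friction at checkout, possibly hidden shipping costs"
    return "no clear issue"

def decide(insight):
    if "checkout" in insight:
        return "show shipping costs earlier"
    return "keep observing"

print(feedback_cycle(observations, analyze, interpret, decide))
```

The point of the sketch is structural: each stage consumes the previous stage's output, and the returned action is what the next round of data collection measures, closing the loop.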

After using aww, in its current schedule-focused state, for a couple of weeks, I realized that my current emphasis on meticulously recording precise times in my journal appears to be counterproductive to my goal of living a more wholesome life. This practice, while perhaps intended to foster discipline or provide data for reflection, seems to be subtly shifting my focus in ways that detract from overall well-being.

Specifically, I observe a tangible decline in the frequency and perhaps depth of my yoga practice. The amount of time dedicated to yoga has decreased, leading to a noticeable increase in physical stiffness. This physical manifestation runs parallel to a diminished focus on spiritual growth. I find myself less inclined or able to connect with metaphysical ideas, experiencing a drift back towards perceiving only the immediate, physical reality as the entirety of existence. My impatience with #meditation has grown; sustaining focus during practices like Shambhavi Mahamudra Kriya feels increasingly difficult, my mind wandering excessively. While acknowledging Sadhguru's advice to practice for its own sake, like an offering, the pull towards quantifiable results seems to interfere. Interestingly, guided practices like Isha Kriya (via Miracle of Mind app) remain more accessible, suggesting that structure itself isn't the issue, but perhaps the type of structure imposed by rigid time-tracking is.

I am actively questioning the root of this shift. It's unclear whether this is primarily an effect of my deep intellectual engagement with developing concepts like the aww OODA loop and aww-agents, aww’s agentic framework. These frameworks, drawing inspiration from models emphasizing rapid observation, orientation, decision, and action cycles, inherently involve time sensitivity and optimization. The focus here is often on efficiency and tempo, which, while valuable in certain contexts, might inadvertently train my mind to prioritize speed and measurement over presence and experience. As I noted in my reflections on OODA before starting development, a faster tempo isn't always useful; sometimes it just means doing "stupid things faster with more energy."

Alternatively, could the issue stem directly from using the Aww software I've developed? Its current design, perhaps unintentionally, seems to guide my attention towards quantifying the time spent on activities, rather than enriching the quality or meaning of those activities themselves. The very act of logging start and end times might implicitly elevate duration as the primary metric of value. This resonates with ideas explored around software profiling, where the focus is on identifying bottlenecks and optimizing resource usage (time being a key resource). While applying such optimization principles to life seemed intellectually appealing – focusing on the largest potential wins, measuring change, choosing data sources carefully – the practical outcome seems to be a reductionist view where the clock dictates value. Even features like the "life in weeks" visualizer, intended to foster mindfulness about life's finitude, could inadvertently reinforce this quantification of existence.

This experience contrasts sharply with the positive outcomes from deliberately implementing small, qualitative actions. Engaging in a 5-minute mindful start, a 10-minute micro-yoga flow, and brief "disconnect" moments away from screens demonstrably improved concentration, reduced stress, and enhanced mental clarity without a primary focus on the exact minutes elapsed. These activities prioritized the experience – grounding, movement, presence – over precise time logging.

Therefore, the evidence suggests that while tools and frameworks for tracking and optimization, like Aww and OODA-inspired thinking, hold intellectual appeal and potential benefits (e.g., using aww-pomodoro for work focus), their current application, particularly the emphasis on precise time recording, may be hindering, rather than helping, my pursuit of a balanced and wholesome life. The challenge lies in refining these tools and my relationship with them to support qualitative well-being, not just quantitative measurement.

Aww is a flexible toolset and this is a strength

Characterizing "Aww" as a toolset, rather than a monolithic solution, highlights a fundamental strength: its inherent flexibility and adaptability. Think of the difference between a fully equipped, pre-assembled kitchen ("solution") versus a comprehensive set of high-quality kitchen implements—knives, pans, mixers, measuring tools ("toolset"). The pre-assembled kitchen might perfectly suit one specific cooking style but prove restrictive for others. The toolset, however, empowers the chef to select and combine tools to prepare a vast array of dishes, tailored precisely to the ingredients, the desired outcome, and their personal technique.

Similarly, Aww provides a collection of distinct capabilities or components. Users can select, combine, and configure these components to address their unique challenges and integrate them into their specific operational contexts. This approach offers several advantages over a rigid, pre-packaged solution:

  • Tailored Application: Users are not forced into a one-size-fits-all framework. They can assemble Aww's components to precisely match their existing workflows, data sources, and objectives.

  • Seamless Integration: Toolsets generally integrate more smoothly with existing systems. Instead of replacing entire processes, users can introduce Aww components incrementally to enhance specific parts of their workflow, minimizing disruption and leveraging existing investments.

  • Adaptive Scalability: Users can start small, employing only the necessary tools for an immediate task, and gradually incorporate more components as their needs evolve or complexity increases. This contrasts with the often significant upfront commitment required by a comprehensive solution.

  • Fostering Innovation: A toolset encourages users to think creatively about how components can be combined to solve novel problems or optimize processes in ways the original designers might not have explicitly foreseen.

This flexibility means Aww empowers users to build their solution, rather than adopting a potentially ill-fitting one. Its nature as a toolset is a strategic advantage, prioritizing adaptability and user-driven configuration over prescriptive rigidity.

My approach to building augmented awareness centers on creating a collection of adaptable, loosely connected components instead of a single, pre-packaged application. This design philosophy provides significant advantages, particularly in flexibility and the ability to integrate discoveries organically during development, making it a strength rather than a limitation.

A practical example of this modular benefit emerged unexpectedly. While journaling, I experimented with using a Large Language Model (LLM) directly within my note-taking application, Obsidian, via the Local GPT plugin. I fed it scattered thoughts and diary entries, and the LLM (specifically, Google's Gemini 2.5 Pro model) synthesized these into a coherent discourse. This process strikingly mirrors the concept of Niklas Luhmann's Zettelkasten method – a system for networked thought where individual notes connect to generate emergent ideas, a topic I'm exploring through the book A System for Writing. The LLM acted as a powerful tool for this kind of knowledge discovery, connecting ideas I hadn't explicitly linked myself. This capability, however, also raises an important question as I reflected: does relying on LLMs for synthesis risk a form of mental laziness, potentially diminishing my own ability to formulate and pursue complex questions systematically?

If I were committed to building augmented awareness as a monolithic, shrink-wrapped system, integrating such a sophisticated journal-analysis feature would require significant dedicated development. This might involve building a complex Retrieval-Augmented Generation (RAG) system – a technique where the LLM retrieves relevant information from a knowledge base (like my Obsidian notes) before generating a response – specifically for my notes. Undertaking such a task could span weeks or months, contradicting my earlier thoughts on avoiding premature complexity by starting simple. While this integrated analysis is a desirable long-term goal for the aww-agents component, the modular, toolset-based approach allows me to immediately leverage the existing capabilities of Local GPT within Obsidian. I can experiment with its output now and build upon it incrementally. For example, future steps could involve refining aww’s input by passing entire Markdown journal pages instead of just specific events, or enhancing my own LLM integrations to execute external tools or navigate wiki-links within my notes. This effectively extends the system's capabilities without needing to replicate the core LLM synthesis function from scratch at this stage.
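The "pass a whole Markdown page instead of individual events" idea can be sketched without any RAG machinery at all: wrap the raw page in a synthesis prompt and hand it to whatever model backend is configured. The function names below are illustrative assumptions, and the LLM call is stubbed out rather than tied to any real plugin API.

```python
# Illustrative sketch: feed an entire Markdown journal page to an LLM for
# synthesis, instead of extracting individual events first. The LLM call is
# a stub; a real version would route to a local or remote model.

def build_synthesis_prompt(journal_markdown: str) -> str:
    """Wrap a raw journal page in a Zettelkasten-style synthesis prompt."""
    return (
        "Below is a Markdown journal page with scattered thoughts.\n"
        "Connect the ideas into a short coherent discourse, and list any\n"
        "implicit links between entries (like Zettelkasten note links).\n\n"
        f"---\n{journal_markdown}\n---"
    )

def call_llm(prompt: str) -> str:
    # Stub: stands in for a call to a configured model backend.
    lines = prompt.count("\n") + 1
    return f"[synthesis of a {lines}-line prompt]"

page = "# 2025-04-12\n- slept badly\n- skipped yoga\n- irritable in meeting"
print(call_llm(build_synthesis_prompt(page)))
```

Because the prompt builder and the model call are separate functions, swapping the stub for a real backend later changes nothing upstream, which is exactly the incremental path the toolset approach favors.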

This strategy aligns directly with the principle of using low-fidelity prototypes before committing to extensive development. By treating components like the LLM integration as experiments built on existing tools, I can quickly assess their usefulness, refine the requirements (like choosing what data is truly valuable to record), and manage complexity before investing heavily in custom implementations. It allows the system to evolve based on practical needs and discoveries, rather than a rigid upfront design.

The Aww OODA Loop: Simplifying LLM Integration for Practical Operations

Aww leverages the OODA loop (Observe, Orient, Decide, Act)—a framework renowned for facilitating rapid decision-making in dynamic situations—to enable the practical, straightforward use of Large Language Models (LLMs) within operational cycles. This approach deliberately aims for a "low-fi, low-complexity" implementation, making advanced AI capabilities accessible without demanding deep technical expertise or complex infrastructure overhauls.

Here’s how Aww maps LLM operations onto the OODA loop:

  1. Observe: The cycle begins with gathering relevant information from the operational environment.

  2. Orient: This crucial phase involves making sense of the observed data. Here, Aww facilitates interaction with an LLM. It sends the observed data, potentially enriched with relevant context, along with a carefully crafted prompt (instructions) to an LLM. The LLM processes this input and generates an output—perhaps a concise summary, a suggested classification, a draft response, potential root causes for an alert, or an identified anomaly. The LLM acts as a powerful assistant in this orientation phase, rapidly synthesizing information or suggesting interpretations that might take significant time, effort, and, crucially, attention to develop on my own.

  3. Decide: Armed with the LLM's output and other relevant factors (like business rules or situational context), a decision is made. This decision might be executed by a human operator (e.g., approving the LLM-drafted email reply, confirming the LLM's suggested ticket routing) or triggered by automated rules within the Aww framework (e.g., automatically escalating an alert if the LLM identifies it as critical).

  4. Act: The decision translates into action within the operational environment. Aww components execute the chosen course of action, such as triggering an automated remediation script or logging the event.

The "low-fi, low-complexity" aspect means Aww abstracts much of the technical difficulty typically associated with deploying AI models. Users interact with the LLM through the structured OODA process facilitated by Aww, focusing on defining the task for the LLM (via prompts and context) rather than managing the underlying model infrastructure, complex fine-tuning, or intricate API integrations.
