digital consciousness spans platforms: II


Note: If you haven’t already, do check out the previous segment here.
Mobile
Unlike a desktop or web environment where an application has ample resources and a stable state, a mobile OS treats your application as a temporary guest in a highly resource-constrained environment.
Limited Resources: Memory (RAM), CPU cycles, and especially battery are precious. The OS will not hesitate to kill your application if it's in the background and another app (like the Camera or a phone call) needs resources.
Volatile Lifecycle: Your app can be paused, stopped, and terminated at any moment with little warning. You cannot assume it will just keep running. This makes state management paramount (a minimal lifecycle sketch follows this list).
Unreliable Connectivity: You must design for a world where the network is slow, intermittent, or completely unavailable. Offline-first is a key strategy.
Diverse Hardware (Fragmentation): Especially on Android, you're targeting a vast range of screen sizes, resolutions, CPU architectures, and OS versions.
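To make the lifecycle point concrete, here is a minimal Flutter sketch (Flutter is the stack we adopt later in this article) of observing lifecycle transitions so you can persist state before the OS suspends or kills the process. The _persistState helper is purely hypothetical; a real app would write to disk or a local database.
import 'package:flutter/material.dart';
/// Minimal sketch: a State object that listens to app lifecycle changes
/// so it can persist data before the OS pauses or kills the process.
class LifecycleAwarePage extends StatefulWidget {
  const LifecycleAwarePage({super.key});
  @override
  State<LifecycleAwarePage> createState() => _LifecycleAwarePageState();
}
class _LifecycleAwarePageState extends State<LifecycleAwarePage>
    with WidgetsBindingObserver {
  @override
  void initState() {
    super.initState();
    WidgetsBinding.instance.addObserver(this);
  }
  @override
  void dispose() {
    WidgetsBinding.instance.removeObserver(this);
    super.dispose();
  }
  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    // paused/detached are the last reliable moments to save state;
    // the OS may terminate the process without further notice.
    if (state == AppLifecycleState.paused || state == AppLifecycleState.detached) {
      _persistState(); // hypothetical helper: write critical state to storage
    }
  }
  void _persistState() {
    // e.g. shared_preferences, a local database, or a file
  }
  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}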
At the outset, take note of the basic concepts of developing a mobile application: ways of creating the UI, kernel-level optimizations, register-based versus stack-based memory, intermediate formats such as DEX (executed by the Android Runtime, ART), compilation into native binaries for the local mobile architecture, and specific points to keep in mind when we port a web/desktop application to a mobile OS.
A. Full Native:
Android: You use Kotlin (or Java) and the Android UI Toolkit. You build your UI with XML layouts or, more modernly, with a declarative framework called Jetpack Compose.
iOS: You use Swift (or Objective-C) and Apple's UI frameworks. You build your UI with Storyboards or, more modernly, with a declarative framework called SwiftUI.
Pros: Best possible performance, immediate access to all new OS features and APIs, platform-standard look and feel.
Cons: Two separate codebases, two teams (or one team with two skillsets), expensive, slow to develop.
B. Cross-Platform (Single Codebase):
React Native: Uses JavaScript/TypeScript and React paradigms. Your React code doesn't render to a DOM; it renders to native UI components. A "bridge" communicates between your JS code and the native platform.
Flutter (from Google): Uses the Dart language. Flutter does not use native UI components. It brings its own high-performance rendering engine (Skia) and draws every pixel on the screen itself. This gives it incredible control and consistency.
Tauri Mobile (Emerging): The same principle as Tauri Desktop. It uses the OS's native WebView to render your web-based UI (React, etc.). Your Rust code is compiled into a native library that the WebView can communicate with. This is a great option if your core logic is already in Rust/Wasm.
Pros: Single codebase, faster development, broader team skill compatibility (especially for web devs with React Native/Tauri).
Cons: Can lag behind native in performance, might not have immediate access to brand-new OS features, potential for an "uncanny valley" look if not done carefully.
Compilation and Execution
Android
Android runs as the OS and runtime on mobile devices with widely varying architectures, so shipping a fully compiled application for each one is not really efficient. Here comes the concept of the Android Runtime (ART).
Source Code: You write your app in Kotlin or Java.
Bytecode (.dex): The Kotlin/Java compiler compiles your code not to machine code, but to an intermediate bytecode format called DEX (Dalvik Executable). Your entire app is bundled into one or more .dex files.
Ahead-of-Time (AOT) Compilation: When the user installs your app from the Play Store, ART performs Ahead-of-Time (AOT) compilation. It translates the most frequently used parts of your DEX bytecode into native machine code specific to that device's architecture (e.g., ARM64). This makes the app launch faster and run more smoothly from the start.
Just-in-Time (JIT) Compilation: For code paths that were not AOT-compiled, or for newly downloaded code, ART can use a Just-in-Time (JIT) compiler. As the code runs, the JIT identifies "hot" paths and compiles them to native machine code on the fly.
Garbage Collection: ART also manages memory for you, automatically freeing up objects that are no longer in use.
iOS
Since iOS applications run only on Apple devices, there is no need to make some intermediate representation publicly available. The attitude here is all about shipping compute-efficient machine code, much like general C++ development.
Source Code: You write your app in Swift or Objective-C.
LLVM Compilation: The code is fed into the LLVM compiler toolchain. You can compile many languages with this toolchain, check it out. It's great!
Native Machine Code: LLVM compiles your Swift code directly into a native executable binary for the target ARM architecture. There is no intermediate bytecode or runtime like ART. What you ship to the App Store is already machine code.
Memory Management (ARC): Swift uses Automatic Reference Counting (ARC). The compiler inserts retain and release calls (to increment/decrement a reference counter for each object) into the compiled code for you. When an object's reference count hits zero, it's deallocated. This is deterministic and more predictable than garbage collection, but can be tricky to manage (e.g., avoiding "retain cycles").
Points to note
The real difference for mobile is the constraints:
Smaller Stack: Mobile OSes typically allocate a much smaller stack size for each thread. If you have a deeply recursive function, you're more likely to get a "stack overflow" error on mobile than on desktop (a short sketch follows this list).
Less Heap (RAM): The total amount of available heap memory is drastically lower. Loading a massive 50MB image into memory might be fine on a desktop with 16GB of RAM, but it could get your app killed on a phone with 4GB of RAM where other apps are also running.
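As a tiny illustration of the stack point (in Dart, the language used later in this article), a deeply recursive routine is safer rewritten iteratively; the functions below are purely illustrative.
// Deeply recursive: each element adds a stack frame; on a mobile
// thread's smaller stack this can overflow for large inputs.
int sumRecursive(List<int> xs, [int i = 0]) =>
    i == xs.length ? 0 : xs[i] + sumRecursive(xs, i + 1);
// Iterative: constant stack usage regardless of input size.
int sumIterative(List<int> xs) {
  var total = 0;
  for (final x in xs) {
    total += x;
  }
  return total;
}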
Implementation
For the purpose of this article, for the backend compute, we will go ahead with Rust, since, well, why not…
(it is a systems language, optimized for heavy compute)
For the UI, we will opt for the Dart language and build the interface on the Flutter framework. (Flutter draws every pixel on the screen itself rather than relying on native components. This can be a performance concern in heavy applications, but a light application will fare just fine. Many industry applications are built on Flutter!)
Flow
The user will choose an image (specific app permissions will be required and declarations have to be configured in the app)
We will also set up state management with a provider-based library like Riverpod (think about persisting the app state when switching between apps; a minimal sketch follows this flow).
Introduce the Rust backend logic as a C-compatible dynamic library (.so for Android, .dylib for iOS).
We will need to call these library functions in a standard format, so we will set up a mapper-like layer; here, we will use dart:ffi to provide the interface (a manual dart:ffi sketch appears a little further below).
It is extremely important to note that no functionality may occupy the main thread for more than a few milliseconds, which is where we add the concept of Dart Isolates. Think of an isolate as an individual container with its own thread of execution (Dart is single-threaded within an isolate, so figure out how this works, it will be fun!).
We call the Rust function via this pipeline, set the UI state to mirror it, and return the new image and store it somewhere (again, specific permissions will be required).
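Since the UI example later in this piece keeps things simple with setState, here is, separately, a minimal sketch of what the Riverpod flavour could look like. The provider names are illustrative, and api.applyGreyscale refers to the bridge-generated function introduced further below.
import 'dart:typed_data';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:image_processor_app/api.dart'; // the bridge-generated API
// Holds the bytes of the image the user picked from the gallery.
final originalImageProvider = StateProvider<Uint8List?>((ref) => null);
// Derives the processed image from the original by calling into Rust.
// The bridge call returns a Future, so an async provider fits naturally,
// and the result survives widget rebuilds.
final greyscaleImageProvider = FutureProvider<Uint8List?>((ref) async {
  final original = ref.watch(originalImageProvider);
  if (original == null) return null;
  return api.applyGreyscale(imageBytes: original);
});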
Throughout this example, we will be using flutter_rust_bridge.
Now, the basic principle is: we define our logic in the language of our choice, here Rust, and then expose those functions so they are available to the Dart code we write for the application. This is the concept of the Foreign Function Interface (FFI); think of it as a mapper. We will generate an API structure in Dart (we could write it manually as well, but here we make use of the ever-so-handy flutter_rust_bridge), call the Rust functions through this Dart interface, hold the UI state as a Future (running all the Rust functionality off the main isolate), and return the result.
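For intuition about what that mapper does under the hood, here is a rough sketch of the manual dart:ffi route. Everything about the exported symbol (the name greyscale_in_place and its in-place byte-buffer signature) is hypothetical; flutter_rust_bridge generates and manages this glue for you.
import 'dart:ffi' as ffi;
import 'dart:io' show Platform;
import 'dart:isolate';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
// Hypothetical exported symbol: fn greyscale_in_place(bytes: *mut u8, len: usize)
typedef _GreyscaleNative = ffi.Void Function(ffi.Pointer<ffi.Uint8>, ffi.IntPtr);
typedef _GreyscaleDart = void Function(ffi.Pointer<ffi.Uint8>, int);
Uint8List _greyscaleSync(Uint8List pixels) {
  // Open the compiled Rust library (.so on Android; on iOS the statically
  // linked symbols live in the process itself).
  final lib = Platform.isAndroid
      ? ffi.DynamicLibrary.open('libimage_processor.so')
      : ffi.DynamicLibrary.process();
  final greyscale =
      lib.lookupFunction<_GreyscaleNative, _GreyscaleDart>('greyscale_in_place');
  // Copy the Dart bytes into native memory, call Rust, copy the result back.
  final ptr = malloc<ffi.Uint8>(pixels.length);
  ptr.asTypedList(pixels.length).setAll(0, pixels);
  greyscale(ptr, pixels.length);
  final out = Uint8List.fromList(ptr.asTypedList(pixels.length));
  malloc.free(ptr);
  return out;
}
// A raw FFI call like this is synchronous, so run it in a separate isolate
// to keep the UI thread responsive.
Future<Uint8List> greyscale(Uint8List pixels) =>
    Isolate.run(() => _greyscaleSync(pixels));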
Note: If you intend to run this application on your machine, take care to follow due diligence. I will not be wireframing the entire application structure here, as that would be out of this article's scope.
Below, I illustrate a general code example of how one might achieve the expected outcome.
[package]
name = "image-processor"
version = "0.1.0"
edition = "2024"
[lib]
crate-type = ["cdylib"]
[dependencies]
flutter_rust_bridge = "2.0.0-dev.32"
# Add the image crate with features for the formats you'll use
image = { version = "0.24", features = ["png", "jpeg"] }
anyhow = "1.0"
// rust/native/src/api.rs
use image::{imageops, DynamicImage};
// This is a more idiomatic way to handle errors with the bridge
// The alias simplifies the function signatures
pub type anyhow_Result<T> = Result<T, anyhow::Error>;
/// Applies a greyscale filter to an image.
/// Takes raw image bytes and returns raw image bytes (PNG format).
pub fn apply_greyscale(image_bytes: Vec<u8>) -> anyhow_Result<Vec<u8>> {
// 1. Load the image from the byte vector. The bridge handles the conversion.
let img = image::load_from_memory(&image_bytes)?;
// 2. Perform the operation
let greyscale_img = img.grayscale();
// 3. Encode the result back into a byte vector (PNG format)
Ok(encode_image_to_png(greyscale_img)?)
}
/// Overlays a watermark onto a main image.
/// Takes two sets of raw image bytes and returns the result (PNG format).
pub fn apply_watermark(
main_image_bytes: Vec<u8>,
watermark_image_bytes: Vec<u8>,
) -> anyhow_Result<Vec<u8>> {
let mut main_img = image::load_from_memory(&main_image_bytes)?;
let watermark_img = image::load_from_memory(&watermark_image_bytes)?;
// Calculate position for bottom-right corner
let x_pos = main_img.width() as i64 - watermark_img.width() as i64;
let y_pos = main_img.height() as i64 - watermark_img.height() as i64;
// Overlay the watermark
imageops::overlay(&mut main_img, &watermark_img, x_pos, y_pos);
Ok(encode_image_to_png(main_img)?)
}
// Helper function to keep code DRY
fn encode_image_to_png(image: DynamicImage) -> anyhow_Result<Vec<u8>> {
let mut buffer = Vec::new();
image.write_to(
&mut std::io::Cursor::new(&mut buffer),
image::ImageOutputFormat::Png,
)?;
Ok(buffer)
}
The Rust-specific snippets above form the core image-processing functionality. The Rust folder can (and preferably should, otherwise path- and context-related issues may arise) be placed inside the parent Flutter application folder.
Below is the command to generate the bridge bindings that make the Rust code accessible from the Dart application.
flutter_rust_bridge_codegen \
--rust-input ./rust/native/src/api.rs \
--dart-output ./lib/bridge_generated.dart \
--c-output ./android/app/src/main/jniLibs/
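The generated file (lib/bridge_generated.dart) still has to be wired to the compiled library and exposed to the widgets. A small hand-written lib/api.dart can do this; note that the class names emitted by flutter_rust_bridge differ between versions, so treat ImageProcessorImpl below as a placeholder for whatever the generator actually produced from api.rs.
// lib/api.dart (hand-written wrapper around the generated bindings)
import 'dart:ffi';
import 'dart:io' show Platform;
import 'bridge_generated.dart';
// On Android the Rust crate is bundled as libimage_processor.so inside
// jniLibs; on iOS the symbols are linked into the app binary itself.
final DynamicLibrary _dylib = Platform.isAndroid
    ? DynamicLibrary.open('libimage_processor.so')
    : DynamicLibrary.process();
// `ImageProcessorImpl` stands in for the class the generator produced from
// rust/native/src/api.rs; it exposes applyGreyscale / applyWatermark as
// Futures that already run off the UI thread.
final api = ImageProcessorImpl(_dylib);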
A typical UI for such an application would be structured as below.
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:image_picker/image_picker.dart';
import 'package:image_processor_app/api.dart'; // Our API
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Rust Image Processor',
theme: ThemeData(
primarySwatch: Colors.blue,
useMaterial3: true,
),
home: const ImageProcessingPage(),
);
}
}
class ImageProcessingPage extends StatefulWidget {
const ImageProcessingPage({super.key});
@override
State<ImageProcessingPage> createState() => _ImageProcessingPageState();
}
class _ImageProcessingPageState extends State<ImageProcessingPage> {
Uint8List? _originalImageBytes;
Uint8List? _processedImageBytes;
bool _isLoading = false;
Future<void> _pickImage() async {
final picker = ImagePicker();
final pickedFile = await picker.pickImage(source: ImageSource.gallery);
if (pickedFile != null) {
final bytes = await pickedFile.readAsBytes();
setState(() {
_originalImageBytes = bytes;
_processedImageBytes = null; // Clear previous result
});
}
}
Future<void> _runGreyscale() async {
if (_originalImageBytes == null) return;
setState(() => _isLoading = true);
try {
final result = await api.applyGreyscale(imageBytes: _originalImageBytes!);
setState(() => _processedImageBytes = result);
} catch (e) {
_showError(e.toString());
} finally {
setState(() => _isLoading = false);
}
}
Future<void> _runWatermark() async {
if (_originalImageBytes == null) return;
setState(() => _isLoading = true);
try {
// Load watermark from assets
final watermarkBytes = (await rootBundle.load('assets/watermark.png')).buffer.asUint8List();
final result = await api.applyWatermark(
mainImageBytes: _originalImageBytes!,
watermarkImageBytes: watermarkBytes,
);
setState(() => _processedImageBytes = result);
} catch (e) {
_showError(e.toString());
} finally {
setState(() => _isLoading = false);
}
}
void _showError(String message) {
ScaffoldMessenger.of(context).showSnackBar(SnackBar(
content: Text(message),
backgroundColor: Colors.red,
));
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Rust Image Processor')),
body: Center(
child: SingleChildScrollView(
padding: const EdgeInsets.all(16.0),
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
if (_originalImageBytes != null) ...[
const Text('Original', style: TextStyle(fontWeight: FontWeight.bold)),
Image.memory(_originalImageBytes!, height: 200),
const SizedBox(height: 20),
],
if (_isLoading)
const CircularProgressIndicator()
else if (_processedImageBytes != null) ...[
const Text('Processed with Rust', style: TextStyle(fontWeight: FontWeight.bold)),
Image.memory(_processedImageBytes!, height: 200),
],
const SizedBox(height: 30),
ElevatedButton.icon(
onPressed: _pickImage,
icon: const Icon(Icons.image),
label: const Text('Pick an Image'),
),
const SizedBox(height: 10),
if (_originalImageBytes != null)
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
FilledButton(onPressed: _runGreyscale, child: const Text('Apply Greyscale')),
const SizedBox(width: 10),
FilledButton(onPressed: _runWatermark, child: const Text('Apply Watermark')),
],
),
],
),
),
),
);
}
}
Reflection
When you create a Flutter application, you will notice it generates boilerplate folders for android, ios, linux, macos, windows and web.
One might wonder how it meets base requirements such as OS-specific resource management and compute optimization across platforms. In outline: your Dart code, bundled with the Flutter engine that renders it, is encapsulated in platform-specific wrappers.
So you have your core functionality in Dart (Flutter); whenever the Dart code needs to interact with the machine in some way (display something, make a syscall), it does so through these native wrappers.
For example, on Android, the Java/Kotlin wrapper is responsible for creating a FlutterEngine instance, which serves as the entry point into the Flutter runtime. The wrapper also provides a FlutterView instance, an Android View that renders the Flutter UI.
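To see that wrapper boundary from the Dart side, here is a minimal platform-channel sketch. The channel and method names are made up; the Kotlin wrapper would register a matching MethodChannel handler on its FlutterEngine.
import 'package:flutter/services.dart';
// Hypothetical channel: the host (Java/Kotlin or Swift) side registers a
// handler with the same name on its FlutterEngine / FlutterViewController.
const _platform = MethodChannel('app.example/device');
// Dart asks the native wrapper to do something only the OS can do,
// e.g. query the battery level via platform APIs.
Future<int> getBatteryLevel() async {
  final level = await _platform.invokeMethod<int>('getBatteryLevel');
  return level ?? -1;
}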
Important
So much for technical jargon and niche code examples. But there is something I would like to stress on a regular basis.
The purpose of the above segment, and indeed of this very article, is not to delve into the technical, code-centric intricacies of various languages, but to understand the general, universal flow of making a request, exposing some functionality to real-time dynamic input, and serving that request with time and resource efficiency, and how this attitude carries across platforms.
Watch
I wanted to dive into development considerations for a smartwatch, since it represents a paradigm shift in the very chain of thought around an application.
Developing for a smartwatch is not about shrinking a phone app. It's a fundamental shift in design philosophy. The user's interaction model is completely different, defined by brevity and context. A user looks at their watch for a few seconds, not minutes. Therefore, the entire application design must be built around this constraint.
Things to keep note of
The core philosophy: Glance, Triage, Act
i. Display relevant information within the resolution of the screen.
ii. Triage tasks instead of performing complex computations, e.g. archiving an email (not writing one).
iii. The use case for such an application is contextual awareness. Consider the potential access to location, time, and the wearer's physical and health status, and a list of problem statements comes to mind.
UI/UX Constraints/Opportunities
i. Small screen. Simplicity all the way. A long, scrollable list of information is the way to go.
ii. Simple binary taps should be prioritized. Multi-character input must be avoided if at all possible, or structured some other way.
Performance
i. Instant action/reaction. No/very low latency.
ii. Resource optimized. Minimized background processes, network connections and prolonged use of GPS or sensors.
iii. Independent vs ecosystem-driven.
Let's consider an application for a smartwatch, keeping in mind that it must be resource-optimized, able to sync with a mobile application, and able to drive the watch screen, the sensors and the sound system.
The concept is a simple, guided breathing and mindfulness app that uses biofeedback from the watch's sensors to show the user the tangible effects of a short relaxation session.
You begin a break by tapping a button on the screen; the watch face becomes a breathing guide (visual effects signify inhale and exhale), the app takes initial and final measurements, and it displays the result.
We can have a mobile companion app setup, for customization, data synchronization and activity history.
Settings : Phone → Watch
Activity: Watch → Phone
For the Apple ecosystem, we would use SwiftUI. For this article, we will consider Google's Wear OS and write the application in Kotlin, with Jetpack Compose for the UI and the Wear OS APIs for sensor access and data syncing.
The below code snippet can serve as the main screen.
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
// Initialize managers
val healthServicesManager = HealthServicesManager(this)
val dataLayerManager = DataLayerManager(this)
setContent {
// Using a NavController to handle screen transitions
val navController = rememberSwipeDismissableNavController()
SwipeDismissableNavHost(
navController = navController,
startDestination = "start"
) {
// Route 1: The starting screen
composable("start") {
StartScreen(onBeginClick = { navController.navigate("session") })
}
// Route 2: The breathing session
composable("session") {
SessionScreen(
healthServicesManager = healthServicesManager,
onSessionEnd = { startHR, endHR ->
// Send data to phone when session ends
dataLayerManager.sendSessionResult(startHR, endHR)
navController.navigate("results/$startHR/$endHR")
}
)
}
// Route 3: The results screen
composable("results/{startHR}/{endHR}") { backStackEntry ->
val startHR = backStackEntry.arguments?.getString("startHR")?.toFloat() ?: 0f
val endHR = backStackEntry.arguments?.getString("endHR")?.toFloat() ?: 0f
ResultsScreen(
startHR = startHR,
endHR = endHR,
onDone = { navController.popBackStack() }
)
}
}
}
}
}
There will be a class to handle access to the sensors; the permission declaration below is required in the Android manifest file.
<uses-permission android:name="android.permission.BODY_SENSORS"/>
There will also be a class to handle syncing data with the companion app.
The mobile application must also have the specific permissions and configuration for setting up communication with the wearable device upon initialization.
<service
android:name=".DataListenerService"
android:exported="true">
<intent-filter>
<action android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" />
<data android:scheme="wear" android:host="*" android:pathPrefix="/session_result" />
</intent-filter>
</service>
Miscellaneous mentions
OS for TV
What comes to mind when you want to build an application for a smart TV?
What would the purpose be for such an application? How would it interact with the TV specific OS? Will there be a concept of efficient resource utilization and traffic congestion avoidance?
General thought process with zero context: how such applications interact with a TV-specific OS, what to keep in mind regarding resources, and what the purpose and typical applications are.
Designing for televisions is a fascinating and unique challenge that perfectly illustrates the multi-platform problem.
Common Application Categories:
Video-on-Demand (VOD) & Streaming: This is the killer app for smart TVs. Think Netflix, YouTube, Disney+, Hulu. Their entire purpose is to browse and consume video content.
Live TV / Broadcast: Apps from traditional cable companies or services like YouTube TV and Sling TV that replicate the live broadcast experience.
Gaming: From simple, remote-controlled casual games to full-fledged cloud gaming services like NVIDIA GeForce NOW and Xbox Cloud Gaming, which turn the TV into a console.
Shopping / "T-commerce": Browsing and purchasing products, often integrated directly with what's being shown on screen.
Music playback, fitness ventures and basic information utilities (news, weather) also have potential.
Resource Constraints (CPU, Memory, Storage)
Underpowered Processors: TV CPUs are significantly weaker than those in modern smartphones. Complex animations, heavy computations, or inefficient code will lead to a laggy, frustrating experience.
Limited RAM: Memory is scarce. Apps must be diligent about memory management. Loading huge images or failing to release resources will cause the app (or the whole OS) to crash.
Minimal Storage: On-device storage is very limited. Apps themselves must have a small footprint. Any user data or media should be streamed from the cloud, not stored locally unless explicitly for an "offline viewing" feature.
No Touchscreen: Navigation is D-pad only: up, down, left and right. So the primary element is focus. The user must be able to see clearly, from a distance, which component on the screen is currently selected (a small focus-handling sketch follows).
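Focus handling lives in each platform's native UI toolkit (Leanback on Android TV, the focus engine on tvOS). Purely as an illustration, and staying in the Dart/Flutter stack used earlier in this article, a D-pad-friendly, focus-aware tile might look like this (widget names are illustrative).
import 'package:flutter/material.dart';
/// A tile that visibly reacts to D-pad focus: it scales up and gains a
/// border when focused, so it reads clearly from across the room.
class TvTile extends StatefulWidget {
  const TvTile({super.key, required this.label, required this.onSelect});
  final String label;
  final VoidCallback onSelect;
  @override
  State<TvTile> createState() => _TvTileState();
}
class _TvTileState extends State<TvTile> {
  bool _focused = false;
  @override
  Widget build(BuildContext context) {
    return FocusableActionDetector(
      onShowFocusHighlight: (focused) => setState(() => _focused = focused),
      actions: <Type, Action<Intent>>{
        ActivateIntent: CallbackAction<ActivateIntent>(
          onInvoke: (_) {
            widget.onSelect(); // fired by the remote's select/OK button
            return null;
          },
        ),
      },
      child: AnimatedScale(
        scale: _focused ? 1.1 : 1.0,
        duration: const Duration(milliseconds: 150),
        child: Container(
          padding: const EdgeInsets.all(24),
          decoration: BoxDecoration(
            border: Border.all(
              color: _focused ? Colors.white : Colors.transparent,
              width: 3,
            ),
            color: Colors.blueGrey.shade800,
          ),
          child: Text(widget.label, style: const TextStyle(color: Colors.white)),
        ),
      ),
    );
  }
}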
Check out this site for Android TV development concepts and tidbits, and visit here for tvOS-specific development.
Gaming Consoles
First and foremost, I am fascinated by these.
General thought process with zero background context: games usually require a lot of space and compute, rely on different kinds of drivers, and make very large numbers of OS-specific calls in any given period. What should you keep in mind while developing for gaming consoles? What separates their OS from other kinds of OS, and how do you account for it in whatever you want to create for consoles?
What separates a console OS (like Sony's Orbis OS on PlayStation or Microsoft's Game OS on Xbox) from a general-purpose OS (like Windows, macOS, or Linux) is its philosophy.
You may want to design some application for a console, someday. (Maybe tomorrow even..!)
Things to keep in mind:
Access is a Privilege: Unlike PC or mobile, you can't just download a free SDK and start coding. You must be approved by the platform holder (Sony, Microsoft, Nintendo) as a licensed developer. This involves signing NDAs (Non-Disclosure Agreements) and proving you have a legitimate project.
Devkits are Mandatory: You cannot develop a game on a retail console. You must purchase or lease expensive, specialized hardware called a "devkit" or "testkit." These have more RAM, advanced debugging tools, and the ability to run unsigned code.
The paradox of console development: the hardware never changes (a blessing and a curse!). With this information, you can hyper-optimize your application in a way that is best suited for the device, a potential not possible when developing for a PC.
Close to bare metal: You don't make generic calls. More often than not, you are directly managing GPU command buffers, scheduling async compute tasks, and handling real-time CPU-GPU synchronization. This is liberating in one sense but unforgiving in another.
Conclusion
The device's purpose dictates the OS's philosophy, and the OS's philosophy dictates the rules of engagement.
You take a problem. You define its requirements. You define its scope. You define its relevance. You design the solution.
This article as a whole was not designed simply to be long; there is a purpose to it. More than the technical specifics, I would like you to focus on the general flow of information. There is much to notice. Every device that a user interacts with, and which processes and outputs intelligence, has its own internal niches and caveats, admittedly. It goes without saying that native considerations will have to be made for an application to run efficiently on a given device. However, with this article, I have championed what I believe is, and should be, the attitude with which we approach a problem for a device.
The end user is the soul of everything. What you intend to provide to the end customer, and the manner in which you aim to provide it, should determine the architecture you establish. All other optimizations can fall into place, consequently.
There are no hard and fast rules. Nothing was ordained to us as oracle. Eventually it all comes down to what fits your need best. Everything you see is just templates. Take your pick and put it together the way you want!!