Over the last six months I created Analog Photography Assistant, an Android app to help photographers shoot old film cameras. The app provides a bunch of utilities, such as a light meter for measuring the amount of light in a scene and a calculator to factor in the reciprocity failure characteristic of film.
Before this, I’d never written an Android application and the extent of my Kotlin experience was using the ungodly Kotlin DSL for TeamCity CI/CD pipelines. I’m now at the first major release of the app and am reflecting on how I’ve written the code. This is the first time I’ve used an LLM to drive development, with Claude writing ~95% of the code in the app.
Abstractions, Learning
My biggest takeaway from these LLMs is their ability to translate abstractions or concepts. Going into this project:
- I was a Kotlin newbie, but had a strong base of F#, C#, Python, TypeScript, &c.;
- I had never used the Material 3 Design System, but had worked with the CSS framework Bootstrap and with corporate design systems in my day job;
- I didn’t know Jetpack Compose, but had worked with ReactJS for years and knew the concepts inside out.
Starter template aside, I was prompting the LLM to generate code based on concepts from different tech stacks and programming languages. When I started I didn’t know what Jetpack Compose was (part declarative UI toolkit, part application framework for native Android apps). I didn’t know where the edges of Compose were – where it ended and stock-standard Android, Kotlin, or other libraries began – but that didn’t matter.
My prompts to the LLM took concepts that I knew – and mixed them. I could declare types in F#, express a state management concept in React, and tell the LLM to generate idiomatic Kotlin code for an Android application.
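For a flavour of what that mixing produced, here’s a minimal, hypothetical sketch (the names are illustrative, not the app’s actual code): a sealed interface standing in for an F#-style discriminated union, with React-style local state handled via remember and mutableStateOf.

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

// Kotlin's sealed hierarchies map closely onto F# discriminated unions.
sealed interface MeterResult {
    object Idle : MeterResult
    data class Reading(val ev: Double) : MeterResult
    data class Failed(val reason: String) : MeterResult
}

@Composable
fun MeterStatus() {
    // The React useState concept, translated: remember { mutableStateOf(...) }.
    // Assigning to `result` (e.g. from a click handler) triggers recomposition.
    var result by remember { mutableStateOf<MeterResult>(MeterResult.Idle) }

    when (val r = result) {
        MeterResult.Idle -> Text("Tap to meter")
        is MeterResult.Reading -> Text("EV %.1f".format(r.ev))
        is MeterResult.Failed -> Text("Error: ${r.reason}")
    }
}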
Over time I learned Kotlin, Jetpack Compose, the Android APIs, and Material 3. But I learned these in a different order than on other projects I’ve worked on. In day-job settings I’ve written apps in Python & Django, C# & .NET Core, and TypeScript & React. In all cases I started with the documentation, and recall almost permanently having the official project documentation, cheatsheets, or an API reference on the screen. I needed to learn the language and framework in order to write code.
On this project it was the other way around. I (well, the LLM) wrote the code to the point of a functioning state. From there I could navigate through API documentation within the IDE and prompt the LLM to tell me more about a given abstraction, concept, or class. I found myself returning to official project documentation to properly understand high-level concepts and abstractions. An example of this was when I was implementing edge-to-edge display within the app. The Android docs[1] do an excellent job of presenting exactly what an engineer needs to know – starting with a conceptual overview, illustrating it with graphics and video, and finishing by linking to the APIs needed to implement it. Once I started implementing the edge-to-edge functionality, I once again used the LLM to generate the code.
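For context, the core of that API surface is small. A minimal sketch – not the app’s actual activity, and assuming androidx.activity 1.8+ for enableEdgeToEdge() plus a Material 3 Scaffold – looks something like this:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.enableEdgeToEdge
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Scaffold
import androidx.compose.material3.Text
import androidx.compose.ui.Modifier

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Allow the app to draw behind the status and navigation bars.
        enableEdgeToEdge()
        setContent {
            // Scaffold surfaces the system bar insets as innerPadding so
            // content isn't obscured by the bars it now draws behind.
            Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding ->
                Text("Hello film", modifier = Modifier.padding(innerPadding))
            }
        }
    }
}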
Having worked this way for six months, I feel as if I have an intermediate level of knowledge around Android development. Critically, I have the language and concepts in my head that I need to navigate architectures, debug, and write new code.
Architecture to fit the Prompt
My workflow relied on Claude’s browser chat interface – I never integrated it with my IDE. Any small issue within the code I fixed myself, relying on my expertise and the error description (the majority were import errors, type mismatches, passing the correct function arguments, &c.). I used LLM prompts to do the heavy lifting – generating hundreds of lines of Jetpack Compose code for the user interface, writing unit tests that would verify an implementation it had written, or simply to “maximise UI more idiomatic Material 3 design system”. Apart from my grammar and correctness suffering, I found that the architecture of my application evolved so that I could easily include the relevant units of code within the context window.
LLMs have finite context windows, which in theory limit the amount of text or code you can include in your prompt, but in practice limit the number of back-and-forths you can have with the LLM, as each prompt and reply gets included within the context of the next prompt. My prompting strategy adjusted to this limited context window. A typical prompt would be an English description of what I wanted, along with the code it should operate on.
- Generating code from scratch was easy.
- Modifying a small block of existing code was also fine as it was easy to copy and paste it into the prompt.
- Architecturally significant or complex changes were more difficult. All of the code relevant to the change needs to be included in the prompt, which is fiddly and time-consuming: you may need to selectively upload the utilities, components, and functions that enable a particular feature. And because the change is complex, if you don’t include everything that’s relevant, the LLM may suggest changes that aren’t accurate or applicable.
To enable the third case, I found that my application architecture and folder structure evolved in such a way that code relevant to a particular area was self-contained within a directory. This meant that when prompting I could just drag and drop a directory at whatever level of depth suited the change I was making.
logan@MBP apa % tree -d
.
├── screens
│   ├── bwpreview
│   ├── compass
│   ├── guide
│   ├── lightmeter
│   │   └── ui
│   ├── notes
│   │   └── actions
│   ├── reciprocitycalculator
│   └── settings
│       └── ui
└── ui
    ├── composables
    └── theme
I found myself duplicating implementations between screen modules just so that it was easy to quickly refactor and iterate. As the functionality of a given screen got closer to completion, I did more classical software-engineering refactors to reduce duplication and make the code more concise.
Type definitions and function signatures became even more important. Combined with semantic, verbose symbol names, the LLM could ‘guess’ what a symbol I hadn’t declared in the prompt did, and prompts would yield results without me needing to feed in full implementations. Of course, sometimes this didn’t work. The most common failure in this area was the LLM not knowing about newly released Android or library APIs that superseded old ones. Within a back-and-forth exchange, it might sneakily replace a new API that I had used with the now-deprecated way of doing things.
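As a hypothetical illustration (the names here are invented, not the app’s actual code), a prompt might include only declarations like these, with the implementations left out of the context window entirely:

data class SceneLuminance(val averageLux: Double, val spotLux: Double)

/** Reads the most recently captured frame and estimates scene luminance. */
suspend fun measureSceneLuminance(): SceneLuminance =
    TODO("implementation not pasted into the prompt")

/** Converts a luminance reading into an exposure value at the given ISO. */
fun exposureValueFrom(luminance: SceneLuminance, iso: Int): Double =
    TODO("implementation not pasted into the prompt")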
This idea of ‘architecture to fit the prompt’ has a shelf life. The code generated by the latest LLMs is trending towards flawless. The amount of context and ‘working memory’ that LLMs can juggle must be the next big area of competition.
Big blocks of code
The Kotlin code needed for metering a scene for light was necessarily complex. The app needs to take a photograph of the scene, for which it must have permissions. The photo must be temporarily written somewhere (more permissions, file I/O) before an algorithm is run over it and the file is deleted. The data generated by the algorithm is then presented back to the user. All of those tasks need to happen asynchronously so as not to block the UI, and the UI itself needs to be declared.
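To make the shape of that concrete, here’s a rough, hypothetical sketch of the flow – not the app’s actual implementation – with the camera capture and metering algorithm stubbed out, and permission handling assumed to have already happened further up in the UI layer:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.io.File

// Hypothetical stand-ins for the camera capture and the metering algorithm.
suspend fun captureSceneTo(target: File) { /* camera capture elided */ }
fun estimateExposureValue(photo: File): Double = TODO("pixel analysis elided")

// Capture a frame to a temporary file, run the metering algorithm over it,
// and clean up – all off the main thread so the UI stays responsive.
suspend fun meterScene(cacheDir: File): Double = withContext(Dispatchers.IO) {
    val photo = File.createTempFile("meter_capture", ".jpg", cacheDir)
    try {
        captureSceneTo(photo)
        estimateExposureValue(photo)
    } finally {
        photo.delete()
    }
}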
When I started working with the LLM, I saw the generated code as the thing I needed to iterate on. The LLM would generate code, and then I needed to debug, iterate, or extend what was made – sometimes with the help of the LLM itself. Over time I found myself focusing less on iterating on the generated code, and more on iterating on the prompt so that it would generate the code I needed in one shot. The light meter was a good example of this. The further I got, the more requirements I discovered and the better I understood how I wanted the light metering algorithm to work. This parallels the software engineering truism that ‘writing code is the easy part’. With the aid of an LLM, coding became not just easy, but fast.
With the LLM generating code rather than me, I wasn’t actively making decisions about which APIs to use, how to design the code, or how to implement a particular construct. As such, I wasn’t forming a mental model of how the software worked. I used this to my advantage. Rather than getting the LLM to create abstractions, an API, or some greater framework (that I would not remember), I instead prompted it to generate large blocks of procedural code. I iterated on prompts until I got the code working, and then approached it as a refactoring exercise: creating abstractions that I designed, or meeting Jetpack Compose conventions.
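A hypothetical illustration of the direction that refactoring usually took (again, not the app’s actual code): folding the working procedural logic into a plain state-holder class exposed through a remember* factory, in line with common Compose conventions.

import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

// A plain state holder: results that previously lived in loose local variables
// inside one giant composable get a named home.
class LightMeterState(val iso: Int) {
    var lastExposureValue: Double? by mutableStateOf<Double?>(null)
        private set

    fun record(ev: Double) {
        lastExposureValue = ev
    }
}

// Factory following the remember* naming convention used across Compose.
@Composable
fun rememberLightMeterState(iso: Int): LightMeterState =
    remember(iso) { LightMeterState(iso) }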
–
During the six or so months I spent writing the app, the LLM I was using was updated several times. Each time, the code it generated got better – informally measured by fewer errors. I now have Claude improving code that an earlier model wrote months ago.
For small hobby projects, LLM-aided development has been a game changer for me. I can make meaningful progress on the project, even when I’m tired or only have half an hour available.
The app is now available in the Play Store. If you shoot film cameras, you might find it valuable: