Domain Modeling Made Functional

May 2019 · 7 minute read · tech fsharp linux dotnet book

Domain Modeling Made Functional by Scott Wlaschin is a book which guides you through the design and implementation of an e-commerce ordering system. It’s a real world application with non-trivial business requirements. The project is implemented in F#, but any other language with a powerful type system that lets you write in a functional paradigm could be used. Scott is the author of the popular website fsharpforfunandprofit.com and a highly regarded speaker in the F# community.

Domain Modeling Made Functional Cover

The book assumes some knowledge of software design and that you’ve written software of some sort before, but it assumes no knowledge of domain driven design or F#. This was ideal for me as I’m an F# hobbyist and know very little about domain driven design. One of my struggles in trying to learn functional programming has been finding resources on actually implementing a real world application. Tutorials describing monads or how to parse a simple calculator grammar don’t show you how to deal with the sorts of problems you face when writing business applications, specifically:

  • A lot of IO, with different systems (file system, databases, remote APIs) which have different latencies and uptimes.
  • Using OOP libraries (ASP.NET Core, database abstraction libraries, …) in a functional manner.
  • Handling errors through multiple layers of abstraction.
  • Ever changing business requirements, and how to structure the application to handle this.

There are great ‘success story’ conference talks about the latest bank that was written in Haskell, but unfortunately they never go into actionable detail. If you took the advice of a lot of blog posts and tutorials on the internet about writing functional applications you would know it’s a simple 3 step process:

  1. Gather requirements
  2. Write the application functionally
  3. Done

These tutorials tell you everything about step #3, and nothing about #1 and #2. Domain Modeling Made Functional teaches you how to write your application using a functional architecture. It’s split into 3 sections which largely mimic the development process: gathering requirements, designing the architecture, and implementing the code. The book concludes with making some non-trivial modifications to the application, simulating a change in business requirements. I cover the 3 sections below.

I. Understanding the Domain

The first 55 pages of the book don’t contain any F# code so you can skip them. No! No! No! Gathering requirements and then modelling the domain are critical parts of any software project. Scott describes a domain specific language for capturing requirements and organising them into domains. In other words, a shared set of vocabulary, diagrams, and questions that developers and domain experts can use to build a common understanding of what is being built: a ubiquitous language.

Good thing I have experience gathering software requirements and already know how to do it. Again, No! No! No! Scott emphasises that requirements should be gathered and organised in a domain driven manner. I learned an enormous amount from this book, but I think this section was the most valuable to me. It challenged the principles and knowledge I thought I already had around requirements gathering and software design. It demonstrated that if you develop an understanding of user requirements and domains in the context of domain driven design, then a functional architecture emerges almost naturally. In contrast, if you develop that understanding in the traditional OOP manner (database driven design or class driven design), then trying to retrofit a functional architecture or functional language on top will be complicated, clumsy, and difficult to understand. I suspect for most developers the unconscious default is to gather requirements in an OOP manner.

II. Modelling the Domain

The section opens with a brief primer on some of the best features of the F# language: the type system, algebraic data types, composing types, Option, and Result. This quickly brings anyone not familiar with F# up to speed. It then dives into modelling the order taking system using the F# type system, and clearly shows how the type system can be used to almost perfectly model the domain (check out Scott’s article on making illegal states unrepresentable).

Below is an example from the book of capturing business rules in the F# type system. The UnitQuantity of a product is the number of copies of the product being ordered. This could be modelled as an int, but we don’t really want to allow the user to order between -2,147,483,648 and 2,147,483,647 items of the product. In the book’s example we want this value to be between 1 and 1000. So how do we model this in the code?

We start by declaring our own type with a private constructor

type UnitQuantity = private UnitQuantity of int

The private constructor prevents us from creating a UnitQuantity value directly, so we can’t accidentally do

let x = UnitQuantity -500 // compile error: the constructor is not accessible outside the enclosing module

We’ll define our own module with a function called create. It holds our validation logic and is the only way to make new values of type UnitQuantity

module UnitQuantity = 

    let create qty =
        if qty < 1 then
            Error "UnitQuantity can not be negative"
        else if qty > 1000 then
            Error "UnitQuantity can not be more than 1000"
        else
            Ok (UnitQuantity qty)

    let value (UnitQuantity qty) = qty

We can then see the function in action

match UnitQuantity.create -1000 with
| Error msg -> printfn "Failure: %A" msg
| Ok qty    -> printfn "Success. Value is %A" (UnitQuantity.value qty)
 
match UnitQuantity.create 50 with
| Error msg -> printfn "Failure: %A" msg
| Ok qty    -> printfn "Success. Value is %A" (UnitQuantity.value qty)

This example of primitive wrapping is really trivial and something you’ve likely seen before. The book starts with this, but also describes how to achieve safety with much more complex types, such as those which span multiple domains or are aggregates. I found the real value of this book is in how it achieves the latter; Scott explains the details and the process really well, and it’s a pattern you can pull out when writing your own applications.
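
To give a flavour of what that composition looks like, here is a minimal sketch (my own simplification, not code from the book) of how constrained simple types compose into a larger aggregate. The ProductCode, OrderId, OrderLine, and Order names are illustrative.

// Each simple type gets the same private-constructor-plus-create treatment as UnitQuantity
type ProductCode = private ProductCode of string
type OrderId = private OrderId of int

type OrderLine = {
    ProductCode : ProductCode
    Quantity    : UnitQuantity
}

type Order = {
    Id    : OrderId
    Lines : OrderLine list
}

Because every field is itself a constrained type, any Order value that exists is already valid: it’s impossible to construct an OrderLine with a quantity of -500.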

But it’s the real world. If this was used in a website or public API then in reality our UnitQuantity can only ever arrive as a JSON integer. Using the bounded context approach we have our ‘unsafe outside world’ of JSON filled with primitives like int which contain untrusted values. Our application has an explicit serialisation step which converts between the untrusted public DTOs (data transfer objects) and our safely typed internal representation. Any errors at this stage are returned to the user appropriately. In the ‘safe inside world’, types such as UnitQuantity, along with F# algebraic data types and units of measure, are used throughout our business logic. Details on how to write this are covered in Chapter 11.
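
As a rough illustration of that boundary (my own sketch, not the book’s Chapter 11 code), a DTO carries raw primitives and a conversion function promotes them into domain types, surfacing any validation error to the caller:

type OrderLineDto = {
    ProductCode : string
    Quantity    : int
}

module OrderLineDto =

    // Promote the untrusted DTO into safely typed domain values. The
    // ProductCode string would get the same treatment via its own create
    // function; only the quantity is shown to keep the sketch short.
    let toDomain (dto: OrderLineDto) : Result<UnitQuantity, string> =
        UnitQuantity.create dto.Quantity

Deserialising JSON into OrderLineDto can be left to a standard JSON library because it contains only primitives; all of the domain rules live in the conversion step, so nothing invalid ever crosses into the inside world.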

III. Implementing the Model

The last section of the book describes how the application is implemented using a functional pipeline. But not just any pipeline. It’s a pipeline that describes application workflows which handle:

  • Dependency injection
  • Async error handling
  • Application events
  • DTO conversion
  • Data persistence (database)

In other words, everything that is needed in a real world application. All code from the book is available on GitHub. It’s well commented and can be read and understood without having read the book. I think it’s an incredible testament to F# as a language and to Scott as an author, and it’s a perfect architecture on which to base your own application.
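
The general shape of such a workflow is a function whose dependencies are passed in as ordinary parameters (partial application at startup takes the place of a dependency injection container) and whose output is an async Result. A minimal sketch with made up types, rather than the book’s actual code:

// Illustrative stand-ins for the book's much richer order types
type UnvalidatedOrder = { RawQuantity : int }
type ValidatedOrder   = { Quantity : UnitQuantity }
type PricedOrder      = { Quantity : UnitQuantity; Price : decimal }

// Dependencies are just function types...
type ValidateOrder = UnvalidatedOrder -> Async<Result<ValidatedOrder, string>>
type PriceOrder    = ValidatedOrder -> Result<PricedOrder, string>

// ...and the workflow takes them as its first parameters. Partially applying
// them at startup leaves the web layer with a plain
// UnvalidatedOrder -> Async<Result<PricedOrder, string>> function to call.
let placeOrder (validateOrder: ValidateOrder) (priceOrder: PriceOrder) unvalidatedOrder =
    async {
        match! validateOrder unvalidatedOrder with
        | Error e      -> return Error e
        | Ok validated -> return priceOrder validated
    }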

There are also chapters on computation expressions for more complex error handling, techniques on serialising complex F# algebraic data types to JSON, and how to best model persistence in a database.
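
For a taste of the error handling chapter, here is a stripped down result computation expression of the kind the book builds up (this sketch is mine, and far simpler than the book’s):

type ResultBuilder() =
    member __.Return(x) = Ok x
    member __.ReturnFrom(m: Result<_, _>) = m
    member __.Bind(m, f) = Result.bind f m

let result = ResultBuilder()

// Each let! unwraps an Ok value or short-circuits on the first Error.
let combined =
    result {
        let! small = UnitQuantity.create 50
        let! big   = UnitQuantity.create 2000   // Error, so 'combined' is this Error
        return UnitQuantity.value small + UnitQuantity.value big
    }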

Conclusion

This book describes how you can architect and build an application in a functional programming language using domain driven principles. Scott writes with clarity and describes the problems or gotchas that can occur before explaining how to solve them. The diagrams in the book illustrate the pipelines and transformations that need to occur. The F# code printed in the book was easy to understand and never overwhelming. We’ve all seen Java textbooks where application code is printed over several pages.

I learned a huge amount from Domain Modeling Made Functional and will keep referring back to it: techniques for modelling requirements, the architecture of functional applications, tricks with the F# type system to gain application safety, error handling in pipelines, and the use of computation expressions, to name just a few.

If you’re interested in what I’ve written about and would like to learn more, then I recommend watching Scott’s talk Domain Modeling Made Functional, checking out his website fsharpforfunandprofit.com, and then buying the book.


The code with UnitQuantity can be played around with here.


SQLite Database with Dapper and F#

May 2019 · 3 minute read · tech fsharp linux dotnet sql

This is a tutorial on using F# with Dapper to query a SQLite database on .NET Core. It uses the fsharp-dapper library written by Alex Troshkin. While this example uses SQLite, the general pattern could be used for any other SQL database. All code from this post is on Github and can be run locally.

Overview

At its core this pattern allows you to separate your SQL queries from your domain types. You write a small API layer which binds query results to types, and then call its methods to query the database asynchronously. You are free to define your own interface and use your own naming conventions. Here is how you might interact with the API in your application.

With a defined business type

type Bird = {
    Id : int64
    Name : string
    Alias : string option
}

We can create the database table and insert records using

Queries.Schema.CreateTables
    |> Async.RunSynchronously
    |> ignore

Queries.Bird.New "Kereru" "Wood Pigeon" |> Async.RunSynchronously |> ignore
Queries.Bird.New "Kea" None |> Async.RunSynchronously |> ignore
Queries.Bird.New "Doose" None |> Async.RunSynchronously |> ignore

Find and return multiple records in an F# sequence (IEnumerable)

Queries.Bird.GetAll()
    |> Async.RunSynchronously
    |> Seq.iter (printfn "%A")

Update records

Queries.Bird.UpdateAliasByName "Kea" "Mountain Jester" |> Async.RunSynchronously |> ignore

Query for single records, returning a Bird option

match Queries.Bird.GetSingleByName "Kea" |> Async.RunSynchronously with
| Some(bird) -> printfn "Kea alias is now %s" bird.Alias.Value
| None -> printfn "Kea record does not exist"

And finally delete records

Queries.Bird.DeleteByName "Doose" |> Async.RunSynchronously |> ignore

Implementation

As it is a simple example with a single domain (Bird) I have kept all code in a single file, Database.fs. For larger applications you’ll want to split these modules out into their own files and likely have one file per domain for queries. Below are some of the interesting bits.
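
One detail worth noting: the queries below refer to the Bird record as Types.Bird, so in Database.fs the record is presumably wrapped in a Types module, along these lines:

module Types =

    type Bird = {
        Id : int64
        Name : string
        Alias : string option
    }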

Connection

This module establishes connections to the database. This is the only SQLite specific part of the code so you could easily drop in an NpgsqlConnection if you wanted to query PostgreSQL, or the appropriate connection class for any other type of SQL database.

module Connection =

    let private mkOnDiskConnectionString (dataSource: string) =
        sprintf
            "Data Source = %s;"
            dataSource

    let mkOnDisk () = new SqliteConnection (mkOnDiskConnectionString "./example.db")

Queries

We instantiate the database connection and bind it to the query API.

module Queries = 
    let private connectionF () = Connection.SqliteConnection (Connection.mkOnDisk())

    let querySeqAsync<'R>    = querySeqAsync<'R> (connectionF)
    let querySingleAsync<'R> = querySingleOptionAsync<'R> (connectionF)

Schema

Queries which create and modify the database schema.

module Schema = 
    let CreateTables = querySingleAsync<int> {
        script """
            DROP TABLE IF EXISTS Bird;
            CREATE TABLE Bird (
                Id INTEGER PRIMARY KEY,
                Name VARCHAR(255) NOT NULL,
                Alias VARCHAR(255) NULL
            );
        """
    }

Domain Queries

Queries specific to each domain of your application. Using the query wrappers we defined above these query the database and bind the result to one of the domain types. This is the high level API that the rest of your code would interact with.

module Bird = 

    let New name alias = querySingleAsync<int> {
        script "INSERT INTO Bird (Name, Alias) VALUES (@Name, @Alias)"
        parameters (dict ["Name", box name; "Alias", box alias])
    }

    let GetSingleByName name = querySingleAsync<Types.Bird> {
        script "SELECT * FROM Bird WHERE Name = @Name LIMIT 1"
        parameters (dict ["Name", box name])
    }

    let GetAll() = querySeqAsync<Types.Bird> { script "SELECT * FROM Bird" } 

    let UpdateAliasByName name alias = querySingleAsync<int> {
        script "UPDATE Bird SET Alias = @Alias WHERE Name = @Name"
        parameters (dict ["Alias", box alias; "Name", box name])
    }

    let DeleteByName name = querySingleAsync<int> {
        script "DELETE FROM Bird WHERE Name = @Name"
        parameters (dict ["Name", box name])
    }

Thoughts on Accelerate - Building and Scaling High Performing Technology Organizations

April 2019 · 8 minute read · tech book lean devops practices

I recently read Accelerate - Building and Scaling High Performing Technology Organizations. Here I give a brief overview of the book and share some thoughts I had while reading it.

Accelerate - Building and Scaling High Performing Technology Organizations

The authors are looking to answer questions such as "Do technical practices and automation impact software delivery?" and "Is a Lean approach to product management an important aspect of software development and delivery?". They’ve done this through several rounds of surveys of people who know what Devops is, but don’t necessarily have to be following it. They got 23,000 responses from around 2,000 different organisations sized between 5 and 10,000 employees. The survey asked Likert-type questions, where the respondent chooses from a range of options, ie. ‘Strongly disagree’ through ‘Strongly agree’. For example, respondents were asked how often their organisation deploys code for the primary service or application they work on, and were offered a choice of six options ranging from "On demand (multiple deploys per day)" through to "Fewer than once every six months". It’s important to remember that this data was sourced from a closed question survey, so it’s in no way a definitive list, nor necessarily causal. It is a list of practices which high performing companies happen to follow.

Measuring Software Delivery Performance

The authors came up with these four metrics which could be used to measure the performance of software delivery:

  • Lead Time - Time taken from when a feature is agreed upon until it is running in production. Keeping this short allows more features to be developed, as well as a short turnaround time on bugs.
  • Deployment Frequency - How frequently is software deployed?
  • Mean Time to Restore (MTTR) - After a failure in production how quickly is service restored?
  • Change Fail Percentage - What percentage of deployments to production fail?

Lead Time and Deployment Frequency measure the velocity of delivery whereas MTTR and Change Fail Percentage measure the stability of a delivery. This is an agreeable definition of the term Software Delivery and achieves a nice balance. All four measures fit in the Lean approach discussed in Eric Ries’ book The Lean Startup (another worthy read). Businesses can’t run experiments (think A/B tests) if they can’t quickly and reliably deliver software. Speed and stability are not a trade-off:

[Survey results] demonstrate that there is no trade-off between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures. This is precisely what the Agile and Lean movements predict, but much dogma still rests on the false assumption that moving faster means trading off against other performance goals, rather than enabling and reinforcing them.

Capabilities to drive improvement

Freshly copied from Appendix A, these are capabilities that the authors noted high performing companies execute to a high level.

Continuous Delivery Capabilities

  • Use version control for all production code and configuration.
  • Automate your deployment process.
  • Implement Continuous Integration.
  • Use trunk based development methods - keeping feature branches few and short lived.
  • Implement automated tests.
  • Support test data management - integration tests often need test data to run. This needs to be properly managed.
  • Shift left on security - involve security early in the development process.
  • Implement Continuous Delivery.

Architecture Capabilities

  • Use a loosely coupled architecture - allowing deployments to occur independently of other teams and services.
  • Architect for empowered teams - Let teams pick their own CI and CD tooling.

Product and Process Capabilities

  • Gather and implement customer feedback.
  • Make the flow of work visible through the value stream - teams should know why it is they’re working on a particular feature.
  • Work in small batches.
  • Foster and enable team experimentation.

Lean Management and Monitoring Capabilities

  • Have a lightweight change approval process - peer review is appropriate.
  • Application and infrastructure monitoring should inform business decisions.
  • Check system health proactively.
  • Implement ‘Work in Process’ limits - reduces task switching overhead.
  • Visualize work to monitor quality and communicate throughout the team - Dashboards, dashboards, dashboards.

Cultural Capabilities

  • Support a generative culture - as outlined by Westrum.
  • Encourage and support learning - have a learning culture, invest in the employees.
  • Support and facilitate collaboration among teams - prevent siloing.
  • Provide resources and tools that make work meaningful.
  • Support or embody transformational leadership - enabling these other 23 things starts at the top.

On the face of it these all seem sensible. CI/CD? This is what we’ve been reading about on Hacker News for the last 6 years. Regularly gathering customer feedback? Startup 101. Supportive and learning culture? Yeah of course, no one wants the opposite. So is this list helpful at all? I think it is, as long as the people within the team aren’t the ones assessing their own progress against it. Some of these items, such as ‘automated deployment’, are just giant. I suspect the majority of developers within large companies would say they have an automated deployment pipeline. It’s just that some also have manual deployment steps in and around the automated ones.

When you’re on a team you quickly get normalised to the way things are done. The automated deployment pipeline is good enough, tests cover almost all critical parts of the application, we can usually focus on an item of work and not be constantly context switching. But your normal drifts. Two years later you’ve had to add a few more manual steps to the pipeline, the developer who wrote a new feature left before they wrote tests, and you’re juggling varying types of work. But it still feels normal and you’re still relatively happy with it. At my last job I would have said we had an automated deployment pipeline. Jenkins would deploy the app, secrets, and server configs to production. Sure, you had to manage the infrastructure, write the post release documentation, and do a few other manual steps, but it was all automated… right? When I joined my new team and saw how they used Jenkins, the deployment pipeline at my old job suddenly looked like a Reliant Robin. Sure, I’d been reading about deployment pipelines for years on Hacker News, but in retrospect it must have been with the lens that what we had implemented was good and normal. Maybe what we set up was good at the time and had just aged poorly, but it took a new point of view to see how wrong I was.

The same eye opening feeling struck at a Devops gathering a few weeks ago. A colleague was giving a demo of the Jenkins and Octopus pipeline he had been working on. The features, particularly around automated post-release documentation and notifications, were amazing. I suspect that unless an implementation is causing a team pain (ie. they are performing the capability poorly) then they’re unlikely to critically assess it for potential improvement.

So how does a team get around the trap of normal? How do they really know if they’re executing any of those capabilities at a high level or just using them in a middling sort of way? One way would be assessment: experts in these domains evaluating how teams are doing. But by that point teams have already put a lot of work into an apparently substandard implementation, which may cause tension, so I don’t like this idea. Standardising the common implementations (CI, CD, architecture, test data) into templates is essential, but is only of immediate value to new teams. Templates would also only cover some of the technical capabilities; they wouldn’t help a team develop a learning or sharing culture. Project owners, people leads, and agile team facilitators are all oriented around the process, lean management, and cultural capabilities of the team. I don’t know enough about this area to speculate, so won’t offend those wonderful people by doing so :-). One of the core messages of the book is that this is a learning and continual development process. A team should never be in a situation where they say ‘we have automated deployments’ or ‘we encourage and support learning’ and stop paying attention to it. It should always be thinking about how it can improve the processes it has implemented. It’s not a destination, it’s a …

Journey album cover

In this post I’ve focused on technical capabilities but it’s important to note that these are enabled by the capabilities in the realms of process, team structure, and management. All of these are discussed in detail in the book. Accelerate is very easy to read and I highly recommend it. Finally, a few quotes from the book:

On culture, something we should all remember, from Google, 2015.

Who is on a team matters less than how the team members interact, structure their work, and view their contributions

Failure Demand exists for software delivery as well.

Seddon invented the concept when he discovered that the movement of ‘telephone work’ from local bank branches to call centres in the 1980s caused an explosion in the volumes of demand – the number of phone calls soared.[3] He found that the rise in call volumes was attributable to the creation of ‘failure demand’, i.e. customers calling a second time because the bank had failed to solve their problem on the first call.

On burnout and unhappiness at work.

Maslach found that most organisations try to fix the person and ignore the work environment, even though her research shows that fixing the environment has a high likelihood of success