Write code that is easy to delete, not easy to extend.
“Every line of code is written without reason, maintained out of weakness, and deleted by chance” Jean-Paul Sartre’s Programming in ANSI C.
Every line of code written comes at a price: maintenance. To avoid paying for a lot of code, we build reusable software. The problem with code re-use is that it gets in the way of changing your mind later on.
The more consumers of an API you have, the more code you must rewrite to introduce changes. Similarly, the more you rely on a third-party API, the more you suffer when it changes. Managing how the code fits together, or which parts depend on others, is a significant problem in large-scale systems, and it gets harder as your project grows older.
My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent". - E. W. Dijkstra, EWD 1036
If we see ‘lines of code’ as ‘lines spent’, then when we delete lines of code, we are lowering the cost of maintenance. Instead of building re-usable software, we should try to build disposable software.
I don’t need to tell you that deleting code is more fun than writing it.
To write code that’s easy to delete: repeat yourself to avoid creating dependencies, but don’t repeat yourself to manage them. Layer your code too: build simple-to-use APIs out of simpler-to-implement but clumsy-to-use parts. Split your code: isolate the hard-to-write and the likely-to-change parts from the rest of the code, and each other. Don’t hard code every choice, and maybe allow changing a few at runtime. Don’t try to do all of these things at the same time, and maybe don’t write so much code in the first place.
Step 0: Don’t write code
The number of lines of code doesn't tell us much on its own, but the magnitude does: 50, 500, 5,000, 10,000, 25,000, etc. A million-line monolith is going to be more annoying than a ten-thousand-line one, and will take significantly more time, money, and effort to replace.
Although the more code you have the harder it is to get rid of, saving one line of code saves absolutely nothing on its own.
Even so, the easiest code to delete is the code you avoided writing in the first place.
Step 1: Copy-paste code
Building reusable code is easier to do in hindsight, with a couple of examples of use in the code base, than with foresight of the ones you might want later. On the plus side, you're probably re-using a lot of code already just by using the file system, so why worry that much? A little redundancy is healthy.
It’s good to copy-paste code a couple of times, rather than making a library function, just to get a handle on how it will be used. Once you make something a shared API, you make it harder to change.
The code that calls your function will rely on both the intentional and the unintentional behaviours of the implementation behind it. The programmers using your function will not rely on what you document, but what they observe.
It’s simpler to delete the code inside a function than it is to delete a function.
Step 2: Don’t copy paste code
When you've copied and pasted something enough times, maybe it's time to pull it up into a function. This is the "save me from my standard library" stuff: the "open a config file and give me a hash table", or "delete this directory". This includes functions without any state, or functions with a little bit of global knowledge, like environment variables. The stuff that ends up in a file called "util".
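For concreteness, here is a minimal sketch (in Python, with made-up names) of the kind of thing that ends up in util:

```python
import json
import shutil

def load_config(path):
    """'Open a config file and give me a hash table.'"""
    with open(path) as f:
        return json.load(f)

def delete_directory(path):
    """'Delete this directory', and don't complain if it's already gone."""
    shutil.rmtree(path, ignore_errors=True)
```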
Aside: Make a util directory and keep different utilities in different files. A single util file will always grow until it is too big and yet too hard to split apart. Using a single util file is unhygienic.
The less specific the code is to your application or project, the easier it is to re-use, and the less likely it is to change or be deleted. Library code like logging, third-party APIs, file handles, or processes. Other good examples of code you're not going to delete are lists, hash tables, and other collections: not because they often have very simple interfaces, but because they don't grow in scope over time.
Instead of making code easy-to-delete, we are trying to keep the hard-to-delete parts as far away as possible from the easy-to-delete parts.
Step 3: Write more boilerplate
Despite writing libraries to avoid copy-pasting, we often end up writing a lot more code through copy-paste to use them, but we give it a different name: boilerplate. Boilerplate is a lot like copy-pasting, but you change some of the code in a different place each time, rather than the same bit over and over.
Like with copy-paste, we duplicate parts of the code to avoid introducing dependencies and to gain flexibility, and we pay for it in verbosity.
Libraries that require boilerplate are often stuff like network protocols, wire formats, or parsing kits, stuff where it’s hard to interweave policy (what a program should do), and protocol (what a program can do) together without limiting the options. This code is hard to delete: it’s often a requirement for talking to another computer or handling different files, and the last thing we want to do is litter it with business logic.
This is not an exercise in code reuse: we're trying to keep the parts that change frequently away from the parts that are relatively static, minimising the dependencies and responsibilities of library code, even if we have to write boilerplate to use it.
You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.
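As a rough illustration (a sketch, not a prescription), this is the sort of logging boilerplate that gets repeated at the top of every program, with a different name and format each time, precisely so the logging library itself stays free of opinions:

```python
import logging
import sys

# Repeated, with small variations, in every entry point: the library stays
# flexible, and the program spells out its own policy in boilerplate.
logger = logging.getLogger("billing")          # hypothetical component name
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("starting up")
```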
Step 4: Don’t write boilerplate
Boilerplate works best when libraries are expected to cater to all tastes, but sometimes there is just too much duplication. It’s time to wrap your flexible library with one that has opinions on policy, workflow, and state. Building simple-to-use APIs is about turning your boilerplate into a library.
This isn't as uncommon as you might think: one of the most popular and beloved Python HTTP clients, requests, is a successful example of providing a simpler interface, powered by the more verbose-to-use urllib3 underneath. requests caters to common workflows when using HTTP, and hides many practical details from the user. Meanwhile, urllib3 does the pipelining and connection management, and does not hide anything from the user.
It is not so much that we are hiding detail when we wrap one library in another, but we are separating concerns: requests is about popular http adventures, urllib3 is about giving you the tools to choose your own adventure.
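To make the contrast concrete, here's a rough sketch of the same request in both libraries (the URL is made up; error handling omitted):

```python
import urllib3
import requests

# urllib3: you manage the connection pool and see the moving parts.
pool = urllib3.PoolManager()
raw = pool.request("GET", "https://api.example.com/users", fields={"page": "1"})
users = raw.data  # bytes; decoding, and decisions about retries, are up to you

# requests: the common workflow, wrapped up with opinions included.
users = requests.get("https://api.example.com/users", params={"page": 1}).json()
```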
I’m not advocating you go out and create a /protocol/ and a /policy/ directory, but you do want to try and keep your util directory free of business logic, and build simpler-to-use libraries on top of simpler-to-implement ones. You don’t have to finish writing one library to start writing another atop.
It's often good to wrap third-party libraries too, even if they aren't protocol-esque. You can build a library that suits your code, rather than locking in your choice across the project. Building a pleasant-to-use API and building an extensible API are often at odds with each other.
This split of concerns allows us to make some users happy without making things impossible for other users. Layering is easiest when you start with a good API, but writing a good API on top of a bad one is unpleasantly hard. Good APIs are designed with empathy for the programmers who will use them, and layering is realising we can't please everyone at once.
Layering is less about writing code we can delete later than about making the hard-to-delete code pleasant to use (without contaminating it with business logic).
Step 5: Write a big lump of code
You’ve copy-pasted, you’ve refactored, you’ve layered, you’ve composed, but the code still has to do something at the end of the day. Sometimes it’s best just to give up and write a substantial amount of trashy code to hold the rest together.
Business logic is code characterised by a never ending series of edge cases and quick and dirty hacks. This is fine. I am ok with this. Other styles like ‘game code’, or ‘founder code’ are the same thing: cutting corners to save a considerable amount of time.
The reason? Sometimes it's easier to delete one big mistake than to delete 18 smaller interleaved mistakes. A lot of programming is exploratory, and it's quicker to get it wrong a few times and iterate than to try to get it right the first time.
This is especially true of more fun or creative endeavours. If you’re writing your first game: don’t write an engine. Similarly, don’t write a web framework before writing an application. Go and write a mess the first time. Unless you’re psychic you won’t know how to split it up.
Monorepos are a similar tradeoff: You won’t know how to split up your code in advance, and frankly one large mistake is easier to deploy than 20 tightly coupled ones.
When you know what code is going to be abandoned soon, deleted, or easily replaced, you can cut a lot more corners. Especially if you make one-off client sites or event web pages: anything where you have a template and stamp out copies, or where you fill in the gaps left by a framework.
I’m not suggesting you write the same ball of mud ten times over, perfecting your mistakes. To quote Perlis: “Everything should be built top-down, except the first time”. You should be trying to make new mistakes each time, take new risks, and slowly build up through iteration.
Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes. You learn nothing from success. It is not that you know what good code looks like, but the scars of bad code are fresh in your mind.
Projects either fail or become legacy code eventually anyway. Failure happens more than success. It’s quicker to write ten big balls of mud and see where it gets you than try to polish a single turd.
It’s easier to delete all of the code than to delete it piecewise.
Step 6: Break your code into pieces
Big balls of mud are the easiest to build but the most expensive to maintain. What feels like a simple change ends up touching almost every part of the code base in an ad-hoc fashion. What was easy to delete as a whole is now impossible to delete piecewise.
In the same way we have layered our code to separate responsibilities, from platform-specific to domain-specific, we need to find a means to tease apart the logic atop.
[Start] with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others. - D. Parnas
Instead of breaking code into parts with common functionality, we break code apart by what it does not share with the rest. We isolate the most frustrating parts to write, maintain, or delete away from each other.
We are not building modules around being able to re-use them, but being able to change them.
Unfortunately, some problems are more intertwined and harder to separate than others. Although the single responsibility principle suggests that 'each module should only handle one hard problem', it is more important that 'each hard problem is only handled by one module'.
When a module does two things, it is usually because changing one part requires changing the other. It is often easier to have one awful component with a simple interface, than two components requiring a careful co-ordination between them.
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["loose coupling"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the code base involved in this case is not that. - SCOTUS Justice Stewart
A system where you can delete parts without rewriting others is often called loosely coupled, but it’s a lot easier to explain what one looks like rather than how to build it in the first place.
Even hardcoding a variable in just one place can be loose coupling, as can using a command-line flag instead of a hardcoded value. Loose coupling is about being able to change your mind without changing too much code.
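A small sketch of what that looks like in practice (the flag name is made up): a value that would otherwise be hardcoded becomes something you can change without touching the code.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--cache-ttl", type=int, default=300,
                    help="seconds before cache entries expire (used to be a hardcoded constant)")
args = parser.parse_args()

print(f"cache entries will expire after {args.cache_ttl} seconds")
```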
For example, Microsoft Windows has internal and external APIs for this very purpose. The external APIs are tied to the lifecycle of desktop programs, and the internal API is tied to the underlying kernel. Hiding these APIs away gives Microsoft flexibility without breaking too much software in the process.
HTTP has examples of loose coupling too: Putting a cache in front of your HTTP server. Moving your images to a CDN and just changing the links to them. Neither breaks the browser.
HTTP's error codes are another example of loose coupling: common problems across web servers have unique codes. When you get a 400 error, doing it again will get the same result. A 500 may change. As a result, HTTP clients can handle many errors on the programmer's behalf.
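A hedged sketch of what that buys you, using the requests library (URL and retry policy are made up): the status code, not the server's internals, tells the client whether retrying is worthwhile.

```python
import time
import requests

def get_with_retries(url, attempts=3):
    """Retry on 5xx (the server might recover); give up straight away on 4xx (we won't)."""
    response = None
    for attempt in range(attempts):
        response = requests.get(url, timeout=5)
        if response.status_code < 500:
            return response          # success, or a 4xx that retrying won't fix
        time.sleep(2 ** attempt)     # back off before asking again
    return response
```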
How your software handles failure must be taken into account when decomposing it into smaller pieces. Doing so is easier said than done.
"I have decided, reluctantly to use LaTeX." - Joe Armstrong, Making reliable distributed systems in the presence of software errors, 2003
Erlang/OTP is relatively unique in how it chooses to handle failure: supervision trees. Roughly, each process in an Erlang system is started by and watched by a supervisor. When a process encounters a problem, it exits. When a process exits, it is restarted by the supervisor.
(These supervisors are started by a bootstrap process, and when a supervisor encounters a fault, it is restarted by the bootstrap process)
The key idea is that it is quicker to fail-fast and restart than it is to handle errors. Error handling like this may seem counter-intuitive, gaining reliability by giving up when errors happen, but turning things off-and-on again has a knack for suppressing transient faults.
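This is nothing like OTP, but a toy Python sketch of the restart-on-failure idea looks roughly like this (supervise whatever worker command you like):

```python
import subprocess
import time

def supervise(cmd, max_restarts=5):
    """Run a child process and restart it whenever it exits abnormally."""
    restarts = 0
    while True:
        child = subprocess.run(cmd)
        if child.returncode == 0:
            return                   # clean exit: nothing left to do
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError(f"{cmd!r} kept crashing; giving up")
        time.sleep(1)                # fail fast, restart fast

# supervise(["python", "worker.py"])   # hypothetical worker
```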
Error handling and recovery are best done at the outer layers of your code base. This is known as the end-to-end principle: it is easier to handle failure at the far ends of a connection than anywhere in the middle. If you have any handling inside, you still have to do the final top-level check; if every layer atop must handle errors anyway, why bother handling them on the inside?
Error handling is one of the many ways in which a system can be tightly bound together. There are many other examples of tight coupling, but it is a little unfair to single one out as being badly designed. Except for IMAP.
In IMAP almost every operation is a snowflake, with unique options and handling. Error handling is painful: errors can come halfway through the result of another operation.
Instead of UUIDs, IMAP generates unique tokens to identify each message. These can change halfway through the result of an operation too. Many operations are not atomic. It took more than 25 years to get a way to move email from one folder to another that reliably works. There is a special UTF-7 encoding, and a unique base64 encoding too.
I am not making any of this up.
By comparison, both file systems and databases make much better examples of remote storage. With a file system, you have a fixed set of operations, but a multitude of objects you can operate on.
Although SQL may seem like a much broader interface than a filesystem, it follows the same pattern: a number of operations on sets, and a multitude of rows to operate on. Although you can't always swap out one database for another, it is easier to find something that works with SQL than with any homebrew query language.
Other examples of loose coupling are systems with middleware, or with filters and pipelines. For example, Twitter's Finagle uses a common API for services, and this allows generic timeout handling, retry mechanisms, and authentication checks to be added effortlessly to client and server code.
(I’m sure if I didn’t mention the UNIX pipeline here someone would complain at me)
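In the same spirit (not Finagle's actual API, just a Python sketch of the filter idea), generic behaviour like retries and timing can be layered onto a service call as composable wrappers:

```python
import functools
import time

def with_retries(attempts=3):
    """Wrap any 'call the service' function with generic retry behaviour."""
    def wrap(call):
        @functools.wraps(call)
        def wrapped(request):
            for attempt in range(attempts):
                try:
                    return call(request)
                except ConnectionError:          # stand-in for a transient failure
                    if attempt == attempts - 1:
                        raise
                    time.sleep(2 ** attempt)
        return wrapped
    return wrap

def with_timing(call):
    """Measure how long each call takes, without touching the service itself."""
    @functools.wraps(call)
    def wrapped(request):
        start = time.monotonic()
        try:
            return call(request)
        finally:
            print(f"{call.__name__} took {time.monotonic() - start:.3f}s")
    return wrapped

@with_timing
@with_retries(attempts=3)
def get_user(request):
    ...  # imagine the real network call here
```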
First we layered our code, but now some of those layers share an interface: a common set of behaviours and operations with a variety of implementations. Good examples of loose coupling are often examples of uniform interfaces.
A healthy code base doesn’t have to be perfectly modular. The modular bit makes it way more fun to write code, in the same way that Lego bricks are fun because they all fit together. A healthy code base has some verbosity, some redundancy, and just enough distance between the moving parts so you won’t trap your hands inside.
Code that is loosely coupled isn’t necessarily easy-to-delete, but it is much easier to replace, and much easier to change too.
Step 7: Keep writing code
Being able to write new code without dealing with old code makes it far easier to experiment with new ideas. It isn’t so much that you should write microservices and not monoliths, but your system should be capable of supporting one or two experiments atop while you work out what you’re doing.
Feature flags are one way to change your mind later. Although feature flags are seen as a way to experiment with features, they also allow you to turn changes on and off without re-deploying your software.
Google Chrome is a spectacular example of the benefits feature flags bring. The Chrome team found that the hardest part of keeping a regular release cycle was the time it took to merge long-lived feature branches in.
By being able to turn the new code on and off without recompiling, larger changes could be broken down into smaller merges without impacting existing code. With new features appearing earlier in the same code base, it became more obvious when long-running feature development would impact other parts of the code.
A feature flag isn’t just a command line switch, it’s a way of decoupling feature releases from merging branches, and decoupling feature releases from deploying code. Being able to change your mind at runtime becomes increasingly important when it can take hours, days, or weeks to roll out new software. Ask any SRE: Any system that can wake you up at night is one worth being able to control at runtime.
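A minimal sketch of the idea (the file name and flag name are hypothetical): the flag is read at runtime, so flipping it needs neither a rebuild nor a re-deploy.

```python
import json
import os

FLAGS_FILE = os.environ.get("FEATURE_FLAGS_FILE", "flags.json")   # hypothetical path

def flag_enabled(name, default=False):
    """Re-read the flags file on every check, so flags can change while the program runs."""
    try:
        with open(FLAGS_FILE) as f:
            return bool(json.load(f).get(name, default))
    except (OSError, ValueError):
        return default               # missing or broken file: fall back to the default

if flag_enabled("new_checkout_flow"):
    ...  # new code path, merged early but switched off in production
else:
    ...  # existing behaviour
```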
It isn't so much that you're iterating, but that you have a feedback loop. It is not so much that you are building modules to re-use, but isolating components for change. Handling change is not just developing new features but getting rid of old ones too. Writing extensible code is hoping that in three months' time, you got everything right. Writing code you can delete is working on the opposite assumption.
The strategies I've talked about (layering, isolation, common interfaces, composition) are not about writing good software, but about how to build software that can change over time.
The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. […] Hence plan to throw one away; you will, anyhow. - Fred Brooks
You don’t need to throw it all away but you will need to delete some of it. Good code isn’t about getting it right the first time. Good code is just legacy code that doesn’t get in the way.
Good code is easy to delete.
Acknowledgments
Thank you to all of my proof readers for your time, patience, and effort.
Further Reading
Layering/Decomposition
On the Criteria To Be Used in Decomposing Systems into Modules, D.L. Parnas.
How To Design A Good API and Why it Matters, J. Bloch.
The Little Manual of API Design, J. Blanchette.
Python for Humans, K. Reitz.
Common Interfaces
The Design of the MH Mail System, a Rand technical report.
The Styx Architecture for Distributed Systems
Your Server as a Function, M. Eriksen.
Feedback loops/Operations lifecycle
Chrome Release Cycle, A. Laforge.
Why Do Computers Stop and What Can Be Done About It?, J. Gray.
How Complex Systems Fail, R. I. Cook.
The technical is social before it is technical.
All Late Projects Are the Same, Software Engineering: An Idea Whose Time Has Come and Gone?, T. DeMarco.
Epigrams in Programming, A. Perlis.
How Do Committees Invent?, M.E. Conway.
The Tyranny of Structurelessness, J. Freeman
Other posts I’ve written about software.
(Added 2019-07-22)
Repeat yourself, do more than one thing, and rewrite everything.
How do you cut a monolith in half?
Write code that’s easy to delete, and easy to debug too.
Contributed Translations
Пишите код, который легко удалять, а не дополнять.
要写易删除,而不是易扩展的代码.
확장하기 쉬운 코드가 아니라 삭제하기 쉬운 코드를 작성하자.
San Francisco for Londoners
Below is what I've passed on to a few friends who have asked about getting around San Francisco. Repeated here in the hope it may be useful.
The basics:
- Get a clipper card
- Wear sensible shoes, because it’s hilly as fuck. Also, because the floor is lava.
- One Beer, One Dollar Tip.
Getting around and finding your way.
No-one uses street numbers, because the streets are ridiculously long, the numbers don't match up on parallel streets, and everyone uses the address of the nearest cross-street. So you'll hear "Folsom and 12th" and not "1582 Folsom". Learn the cross-street of your accommodation.
SF is mostly based on the grid system, with the notable exception of Market (which divides the centre into South of Market (SoMa) and North of Market), but many of the roads are so long that they curve.
It is very, very unlikely that you will go into the Sunset, the Presidio, or Noe Valley, as the further you get away from Market, the more suburban things get.
Walking
There are a fuck ton of hills. No matter which way you walk, you will be going uphill and downhill, and uphill again.
Traffic intersections work differently to the UK
Unsignalled crossroads work like zebra crossings: cars will generally yield to pedestrians.
Signalled crossroads work very differently: you get to cross when the traffic is going in the same direction as you.
Crossings alternate between left-right traffic and north-south traffic.
Cars can turn right through a red light, and not every crossroad has pedestrian signalling either.
You will eternally be confused by things being on the wrong side of the road.
Public transport.
If you plan to get on busses or trams, get a clipper card. It’s very similar to oyster.
There is MUNI and BART. BART is really only useful for getting to Oakland and SFO airport; MUNI is limited to SF.
MUNI is a $2.25 flat fare. You only have to touch in, not touch out. You have to step down to open the rear bus doors.
MUNI runs on its own idea of time. It will always be late. It will always be slow. It will often be smelly. The K and the T line are currently the same line.
Bus stops don’t always have signposts, signage, or timetables. Often bus stops are just poles with a small yellow strip indicating which routes stop there.
There are also the cable cars, and the vintage F route if you want to travel on vintage trams and cars. If you want to go on the cable cars, get a day pass. Expect to queue for them (Americans call this a line), and always try and stand on the edges - it's way more fun.
Always look down before you sit down
Taxis.
Taxis have their roof light on when they are working, so you have no idea if you can hail them or not. There is a smaller, impossible-to-see light that tells you if you can hail them.
Taxis are cheap, and Uber/Lyft are ubiquitous. The ridesharing apps are usually much faster.
Food & Drink
US Coke tastes different. Mexican Coke tastes like it does in the UK.
SF is excellent for pho and burritos, but terrible for curry.
Get a burrito in the Mission (between 14th and 24th). Preferably during the day.
Tip is normally between 15 and 20%. Tips are how people pay for healthcare and earn a living wage. Tip generously.
Beer, Bars, Dive Bars
There is no weights and measures act. Spirits are free-pour.
Pretty much every beer is hopped to fuck. Pints are smaller than in the UK, but craft beers are generally stronger, averaging between 6 and 12%. There is self-serve water at every bar. Use it. It is really, really, really easy to get drunk. It's quite common to see people far more drunk than you would in the UK.
The difference between a bar and a dive bar is that you really don't want to use the toilets in a dive bar. Some of the best bars, and usually all of the dive bars, are cash only.
Always tip: the rule is one beer, one dollar. You will usually be given change with enough to tip, but having spare dollars will help.
Brunch
Do yourself a favour and get brunch with bottomless mimosas. Brunch is a religious thing in SF, and bars will pack out more on a Saturday afternoon than on a Friday night.
Brunch is its own section because I have never encountered a place that takes brunch so seriously.
Smoking
Cigarettes are cheap as fuck. No-one smokes rollups. You can't smoke indoors. You must smoke outside, often by the kerb (or curb, as the Americans call it), or at least 15 feet away from the exit. They are more anal about cigarette smoke than they are about weed.
Much of SF smells of weed, and people will happily try and sell you it on the street. Bear in mind that medicinal marijuana is state legal here, but not federally legal. It is still a crime, and unless you are carrying a medicinal card, you are taking a bit of a risk, especially as a foreigner.
The weed is incredibly strong, far stronger than it is in the UK and Europe. If you end up smoking in SF I guarantee you it will be too much. The same goes for brownies.
Gentrification, Poverty, and Crime.
If you’re not sure about an area, ask someone.
Like London, watch your stuff. Unlike London, SF is a bizarro world of poverty and wealth. Imagine compressing the inequalities of London down into a tiny city, and then ramping them up. You can walk one block and everything changes. There are microclimates of wealth and poverty.
For example, Valencia is gentrified as fuck, and the next block over, Mission, is slowly being gentrified, but still rough around mid-Market and between 16th and 24th. The latter is where the best burritos are. In six months this will have changed, so ask a local.
Similarly to London, poverty-ridden areas tend to have higher crime rates. The Tenderloin is where all the crack and meth generally are. You may encounter more dodginess under the freeway, because it's dry and sheltered from the occasional rainfall. There are countless people on the streets who are there because there isn't really any healthcare or support for mental health issues. There is even an underclass of people who sort out the recycling and rubbish, and it's common to see people collecting cans and bottles so they can redeem them for pennies.
The inequality will shock you and continue to shock you, even if you're used to London. People who have lived in SF for a while become numb to it, often taking the poverty as a point of pride for the city: "At least they won't die out on the streets. Unlike other cities, we're much less heavy-handed about using the police to clear them out of the city." The Californian liberalism is more of a passive-aggressive "fuck you, got mine".
modules + network = microservices
Introduction
Microservices are a recent trend in software architecture, but the ideas behind them are as old as the dawn of time (1 Jan 1970). To understand microservices, we need to understand why we decompose software into services, and in turn, why we decompose services into modules.
The tradeoffs involved in building modular software apply in both the large and the small, but we must not confuse the goal with the methods. We use modularity to reduce complexity, but we often end up enabling it instead.
Modules
To save time, I'll skip straight to quoting Parnas' "On the Criteria To Be Used in Decomposing Systems into Modules":
We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.
Parnas argues that the point of modularity is not one of reuse, but one of concealment, or abstraction: hiding assumptions from the rest of the program. Another way to look at this is how easily an implementation could be grown, deleted, rewritten, or swapped with a different system altogether, without changing the rest of the system.
Unfortunately, decomposition is genuinely hard: breaking your code into pieces does not always mean that the assumptions end up in different parts: it’s very easy to build a system out of modules that tightly depend on each other. Learning how to decompose software is a hard thing to do, and you will have to make a lot of mistakes before you start to get it right.
It is a tradeoff: a module brings extra overhead, and it can be harder to see where it fits into the larger system, but it can bring simplicity and easier maintenance too.
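As a hedged sketch of Parnas' point (the names are made up): the module hides the decision of where users are stored, so changing that decision later doesn't ripple through the callers.

```python
import json

class UserStore:
    """Callers see save/load; only this module knows storage is a JSON file today."""

    def __init__(self, path="users.json"):
        self._path = path            # the hidden, likely-to-change decision

    def save(self, users):
        with open(self._path, "w") as f:
            json.dump(users, f)

    def load(self):
        try:
            with open(self._path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}
```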
Distributed systems
Decomposition, like many things in life, gets harder when you have more computers involved. You must decide how to split up the code, and also decide how to split it across the computers involved. Like with bits of a game world spread across a litany of global variables, spreading bits of state across a network is a similar world of pain and suffering.
Splitting things across a network means that the system will have to tolerate latency, and partial failure, and it is impossible to tell a slow component from a broken one. Keeping data in sync across a network while tolerating failure is an incredibly hard engineering problem known as consensus.
In my experience, all distributed consensus algorithms are either:
1: Paxos,
2: Paxos with extra unnecessary cruft, or
3: broken. - Mike Burrows
Although consensus can be avoided, the underlying problems cannot. Decomposing a system into parts that run on different machines is neither straightforward nor easy, but far more treacherous. There are many techniques to make it easier, like statelessness, idempotence, and process supervision, and many others worth discovering too, but one technique stands out above all: uniformity.
It’s easier to handle talking to a bunch of machines if they can be expected to behave in a similar manner. Having a common interface was one of the major design principles behind Plan 9, which connected the operating system together through the filesystem.
Another distributed operating system, Amoeba, was built as a microkernel glued together from services using a common rpc mechanism. Once an interface for a service had been defined, client stubs would be generated to use the service.
Erlang is yet another platform for distributed systems, but unlike the former two, it uses asynchronous message passing to communicate: the code is forced to handle the possibility of latency, but can now achieve parallelism and other forms of concurrency. Similarly, Twitter's Finagle library uses futures to achieve the same end: a uniform approach to connecting services together asynchronously.
Exposing the asynchronous nature of a network call can seem counterintuitive to Parnas’ advice on decomposition: surely the network is hard and likely to change and therefore worth hiding? Almost. The nature of the network protocol involved, and the particular machine involved are worth hiding, but hiding that the network is unreliable does not let code deal with it effectively.
A common interface, sync or async, allows easier interoperability between components of a distributed system, as well as being able to reuse code, code generation tools, and many other tools involved in deploying, monitoring, and debugging systems. Like with modules, the existence of a common interface does not guarantee a loosely coupled system, but it can be a step in the right direction.
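A rough sketch of that uniform, asynchronous shape in Python (the service and request are made up): every call returns a future, so latency and failure stay visible at the call site, while the transport details stay hidden behind `call`.

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=8)

def call(service, request):
    """Uniform interface: every service call returns a future."""
    return executor.submit(service, request)

def user_service(request):
    ...  # imagine a real RPC over the network here
    return {"id": request["id"], "name": "example"}

future = call(user_service, {"id": 1})
result = future.result(timeout=2.0)   # the caller must face latency and failure explicitly
```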
Services
Once you have a distributed system built from modules, you almost have one built from services: your large program has been broken into smaller communicating parts. Even the simplest web app is often broken into a database, a file store, and an HTTP server.
The real difference between a module within a distributed system and a service is that a service runs separately and independently of the system that is using it. Like with a good module, a good service handles a hard or changing problem, and like any module, a service comes with maintenance costs.
Running one service is a burden; keeping more of them running is a full-time job. Each new service must be configured to find, authenticate, and communicate with the others. Although splitting a system up allows for the possibility of surviving partial failure, in practice it's often just another thing that can go wrong.
Successfully deploying a system built from multiple services is both its own reward and punishment.
On the other hand, a service done well can allow extensive reuse, reimplementation, and better failure handling, but the real reasons for services are often social: there are two services because there are two teams building them.
Microservices
One good example for microservices is prototypes. A new feature can be developed alongside an existing system, without disrupting or changing the older code. Prototypes can often turn into bad examples of microservices too — the service is abandoned, or no-one knows how to run it any more — but prototypes can always be merged back in.
Really, it is more important to build a system that admits microservices than it is to build one entirely out of them. Once you admit that something is running across the network, it isn't much of a stretch to let it run as a separate service entirely. Without a common framework or ecosystem for microservices, the maintenance burden will outweigh many of the potential benefits.
A well-engineered distributed system will likely have some elements of loose coupling, uniformity, and modularity, all essential for making microservices successful. The real question is not "should I write my system as microservices?", but "what sort of modules should I break my system into?" and "what benefit is there from running them as distinct services?"
Conclusion
Decomposition, be it into modules or services, is a hard task, and often far easier in hindsight. There is no obviously right or wrong answer, only a series of tradeoffs that either work for or against you, and which can change over time too.
Over time your problem will change, and your software will have to follow. Loose coupling, and by extension microservices, gives your software more opportunities to grow, but it is up to you to work out whether it is worth doing.