Ten Tips on Becoming a Better Writer

A colleague asked me an interesting question today: “how can I become a better writer?”

I’m not exactly an expert but I have opinions, and opinions are what matter these days, so I sent him an email full of ‘RGB insights’. Here is a lightly edited version of what I sent him.

I present ten tips on becoming a better writer.

Writing is thinking.

The act of writing is the act of thinking. So don’t beat yourself up if it feels painful. Thinking is hard!

You should expect to feel despair when you have a dozen half-completed paragraphs staring out at you, none of which make sense and none of which link to each other.

Don’t worry about it… just get more of your thoughts down on paper and then start editing… move stuff around… see what happens. If two paragraphs should logically be next to each other but one doesn’t flow to the other, maybe there’s a key step in your argument missing? Are there really two different points you’re trying to make? Maybe just choose one and save the other for another piece?

The lesson I’ve learned is not to give up at this point… that chaotic process is the process.  So keep going and keep reminding yourself: it’s like this for everybody (at least everybody who aspires to be good).

Writing and speaking are surprisingly similar, so practise both

Good speeches and good pieces of writing are basically the same thing: an uninterrupted, linear, coherent sequence of sentences and paragraphs, that the audience is expected to consume and understand.

In both cases, you have to know what you think, how you’re going to express it, what you want your audience to do as a result, and how best to say it, in what sequence, to get the intended effect.

I’m always brutally reminded of this in the days before I have to give an important presentation. Yes… I’ve prepared the ‘slides’ well in advance, but that’s the easy bit. The hard part is the rehearsal.

So I go into an empty room and start to present, as if live.

And it’s invariably a disaster.  

Yes – I can read the bullets… who can’t?  But trying to speak, uninterrupted, from beginning to end of the slideshow, cold, with no preparation, hitting the key points on each chart, setting each one up clearly, segueing seamlessly to the next page, and landing on clear concluding points the first time I do the rehearsal…?

Not a chance.  Not the first time I try. Not the third.

I have to go over it again and again… really asking myself why each slide is there… why it’s at that point in the deck… what is it trying to say? 

I find myself reordering slides, deleting some, realising there are some entirely missing slides, going backwards and forwards in my mind ensuring I can construct a clear narrative from beginning to end.

That, right there, is the writing process. It’s the exact same thing.

It’s the process of converting vague thoughts into precise, clear, logical and sequential sentences.  There is no short-cut. So don’t stress out when it feels like everybody else finds it easy and that somehow you’re bad at it. That isn’t what’s going on at all.

“Keep it short” is the wrong advice. Write as much as you can… and then delete most of it.

How long is your ‘final’ version? How many words did you delete from your drafts in the process of getting there? The latter should exceed the former, or you ain’t done yet.

You can hide a lot of vague nonsense in wordy documents and verbose presentations. After all, if you throw enough words at the wall, some of them will make sense to some people and you’ll get away with it, or so you think. But you won’t influence anybody. Nobody will learn much.

To make a difference, it needs to be tight.

This is because when you cut it down, you have to make choices… which sentences stay? Which ones go? Again: that’s the act of thinking right there. You’re forcing yourself to decide which parts of the prose actually matter and which are just filler. Deletion is often more important than insertion.

Learn from other writers

Look at how writers you admire do it. The medium doesn’t matter… novelists, bloggers, newspaper columnists… whomever. Next time you find yourself reading something from beginning to end and really enjoying it or feeling engrossed, go look at how they did it.  What tricks did they use?

Here are some I’ve stolen from others:

Start in the middle, or give the punchline first. You can fill in the blanks later. Readers will only invest in your article if they think it will be worth it. So you have to grab them quickly. Have you noticed how boring the first 30 mins ‘proper’ of each James Bond movie is as they set up all the plot? That’s why they have the exciting pre-title sequences… they know you’d walk out of the cinema if they’d forced you to sit through half an hour of exposition first. And there are lots of variations you can try here. Begin with a question to get them intrigued. Or start with an introduction that is plausible but wrong. When they realise they’ve been fooled, they’ll be intrigued and will want to stick around to find out why.

Play with style. Mix short and long sentences. Include info-boxes and quotes. Add diagrams, but be sure to give them meaningful captions and reference them from the prose.

Steal from other genres. I once wrote a paper that began as a parody of Aesop’s Fable of the scorpion and the frog. Have fun!

If you’re selling an idea, write like a (good) salesperson

At the heart of pretty much every ‘call to action’ is the SCIPAB structure. Once you know it, you can see it everywhere. If in doubt about how to structure a paper that demands action, just use this.

  • What is the present SITUATION?
    • Topical example: “Central Banks have a monopoly in the issuance of cash”
  • What new COMPLICATION explains why we’re here?
    • “Transactions are going electronic; cash will soon be irrelevant”
  • What is the IMPLICATION of this?
    • “Central banks could lose relevance and find it harder to discharge their responsibilities”
  • What do you PROPOSE?
    • “Central Banks should issue a digital currency”
  • What ACTION must be taken?
    • “Hire my firm to build it!”
  • What BENEFITS will you gain as a result?
    • “You can continue to achieve your policy goals in a changing world”

Who are you writing for? If you don’t know, write for your past self

You must have a reader in mind. Who are they? What do they know? What don’t they know? What misconceptions do they have? What biases do they have? What do you need to say, in what order, to reprogram their brain?

If you don’t know who you’re writing for, how can you possibly write something that will resonate with them or correct their misconceptions? 

The good news is: if you don’t know, then write for yourself.  What did you use to think about this topic that turned out to be wrong? What ‘a ha’ moment do you wish you’d had three years ago?

This also makes it easy to edit your document at the end… keep editing until you find it engaging and interesting!

Keep it simple

Lots of writers think the mark of intelligence is to use lots of long words. They act like confusing their readers means they must be super smart. The reality is that the opposite is true.  

So, when you think you’re done, go back and look at your long sentences… how can you make them shorter?  Are you using a complex word when a simpler one would do? Are you speaking in riddles or opaque metaphors? Remove them.

Say what you mean!

Write directly, clearly and simply. And if you can’t do this, don’t beat yourself up… it’s just a sign that you need to keep going… WRITING IS THINKING… so it just means you need to keep thinking until the concepts and arguments have fully baked.

Reuse, Recycle and Plagiarise (your own work)

Writing something well is a lot of work. But once you’ve done it, you can reuse it. After all, it’s not that much extra effort to turn an email into a blog post, or a blog post into a white-paper. Indeed, “email to blog post” is exactly what I’ve done here.

So the ‘delta’ in effort to produce impactful content with longevity is often way less than you think. Think back to the last ‘important’ email you wrote where you made a case or explained something complex. Could you now use it as the basis of an internal blog post? You’ll find your thinking evolves a bit more in the process. And then you could probably also use it as the basis of an external paper. Your thinking will be even more refined.

More often than you’d expect, a well-formed, impactful white-paper is actually the third or fourth iteration of a piece of work. So the writer didn’t leap from ‘zero’ to ‘published white-paper’… there were two or three stepping stones along the way, each of them only requiring a bit of effort, and each of them having value along the way.

Nobody is born a good writer… so don’t beat yourself up. 

The more you do it, the better you get. So just get writing. And remember that writing is one of the few things you can do where you incur a cost once – writing the darn thing – but obtain the benefit forever: it exists for all time and can have impact forever!

But the cliché about “today’s newspaper is tomorrow’s trash” is also true… sometimes you REALLY want what you write to be perfect. But don’t underestimate the power of ‘good enough’…. DONE BEATS PERFECT.  So just write it already, hit publish and move on. 


Endnote: did you spot that there were only nine tips? That’s my final insight: most of the time, most people aren’t paying attention. So you can get away with way, way more than you’d expect!

The World Needs a B2B Application Development Platform

We’ve been building business-to-business applications wrong all this time!

Imagine a firm needs a new way for employees to book vacation. Or maybe customers want to be able to track their orders online. It’s your job to build the application. What design would you go with?

Chances are you’ll build it in much the same way as everybody else does. There will be a database. There will be a business logic layer – you have a lot of choices here. And there will be a front-end – perhaps a mobile app or web-site, or a set of APIs. And then you have to worry about hosting it. Perhaps the HR department will host it in-house. Perhaps the supplier will host it in the cloud.

Fundamentally, there aren’t really many distinct ways to build these things.

But did you notice something? Did you see how it never even crossed your mind that the employee should write or run the vacation booking application, or that the customer should write or run the online order tracking app? It’s so obvious that it should be the employer and the supplier, right?

It’s ‘obvious’ because there’s an asymmetry in many business interactions, and this makes it easy to figure out who should be responsible for what. There’s usually a ‘user’ of a service, and a ‘provider’. And this usually maps nicely to the idea of ‘clients’ and ‘servers’ in the world of IT.

And this is very helpful, because if we can assume there is a single party providing a service, who is the ‘owner’ of the data and controls changes to it, then it becomes very simple to design the application.

This is certainly the case in our examples above: we’d expect the HR department to be responsible for information about employees, and it makes sense that it’s a supplier who keeps track of what has been ordered, by whom. Modern application development frameworks bake the assumption that there’s an ‘authoritative party’ into their designs as a result.

One size does not fit all

However there’s a big problem. The problem is that lots of business interactions do not fit this pattern.

  • Imagine you had to automate reconciliations between a network of banks. Which banks are the ‘clients’ and which is the ‘server’?
  • What about a new platform to facilitate the negotiation of contracts between buyers and sellers? Who should run that application and control all the data as a result?

It’s not obvious, is it?

Yet, all too often when faced with a market with no obvious ‘clients’ or ‘servers’, the IT industry’s response has been to insist that there must be a server if you want to use its products! And if that means reshaping how the market works to introduce a brand new intermediary, then so be it. This is surely nuts. Why do we think it is normal that when a market doesn’t conform to IT vendors’ ideas of what it should look like, it’s the industry that has to change, not the software?

The result of this process is that manual work that was happily performed bilaterally is often controlled by a third party once the market has ‘digitised’ itself. Is this progress?

When exactly was it we decided we had to change the structure of an existing market to solve a simple business problem? How did it become acceptable that IT design assumptions should have the power to reshape the dynamics of entire industries?

If you think this is a philosophical problem of no practical concern, consider the following examples:

Market Places

When buyers and sellers get together at a physical market to trade with each other, they follow a collectively agreed set of rules, usually overseen by a market ‘operator’ that they collectively govern. Yet when buyers and sellers trade with each other electronically, the market operator’s role is usually far more significant. Somehow, when markets ‘go digital’ they end up entrenching the market operator in an ever-more privileged position. An impartial ‘arbiter’ often becomes the most powerful and profitable entity in the ecosystem.

Maybe we should ask ourselves: how much of this is a natural market outcome, and how much of it was driven by the hidden assumptions inside the software development tools we use? Have our tools forced us to model a peer-to-peer market as a client-server architecture?


Reconciliations

When banks and large companies need to check the accuracy of their accounting records, their auditors reach out directly to their trading partners to verify that their books and records are in sync. This is an inherently decentralised, point-to-point style of interaction. Yet when IT systems were built to automate these processes, they were invariably built as centralised, hub-and-spoke systems, often run by new third parties.

Was this a conscious decision to change how the industry worked? Or was it an unintentional consequence of an IT design decision?

Global Trade

International trade still relies on a large amount of paper. Buyers, sellers, shippers, banks and others interact on a daily basis to facilitate the flow of goods, and payments for them, around the world. Yet there has been surprisingly little progress in digitising the process of trade finance: issuance of letters of credit and so forth. There’s so much paper!

Could this be because a centralised application with an all-powerful operator is fundamentally incompatible with the messy, decentralised, international trade finance industry? Who is the ‘client’ and who is the ‘server’ when a large UK retailer buys some goods from a Chinese manufacturer? Could the failure of the trade finance industry to digitise until recently actually be a sign of their sophisticated understanding of market dynamics and the IT industry’s failure to understand them?

When you start thinking in this way, you can see examples of it everywhere.

The bottom line is that the application development platforms we’ve been using for the last twenty years are designed for a hub-and-spoke model where the users are supplicants of an all-powerful application owner at the centre. This is perfect for many scenarios, especially in the ‘business to consumer’ and ‘employer to employee’ space.

But huge numbers of situations – particularly in the ‘business to business (B2B)’ space – do not look like this. B2B interactions are almost always interactions between peers and counterparts: buyers and sellers, each acting with full agency, neither superior to the other.

If you asked two traders in a market which of them was the service provider and which of them was merely the ‘client’, they’d look at you as if you were mad! So why are we building applications that embed this assumption into the heart of their architecture?

The world needs a modern B2B application platform

It’s time we challenged ourselves to do better. So can we at least write down some requirements for a business-to-business application development platform that does respect existing market dynamics and doesn’t force new parties into the mix? For example, what are some of the problems you need to solve when building a true B2B market-style application?

Record-Level Data Ownership: fundamentally, we’d probably start with the idea that different records in the system should be controlled or ‘owned’ by different parties. If one party controlled everything then we’d have the status quo. This immediately creates a need for some notion of…

Identity: if different records are owned by different parties, we need a way to refer to these parties. And it’s likely that each of them will want to control who has access to which records. So this means we need a way to manage…

Data sharing, reconciliation and synchronisation: each record will have one or more owners, but there will be other parties who need to have a copy, and they need to be sure their copy is the most recent version and that it has been validly created. Where did it come from? If it was updated, who did it, when, and was it done in accordance with any pre-agreed rules? After all, these records may represent real legal agreements. The need for updates that might require sign-off creates a requirement for…

Workflow: whenever firms are interacting with each other, invariably there are complex processes that need to be orchestrated. Sign-offs and approvals need to be obtained. And the inevitable delays and glitches have to be resolved. And these workflows, critically, need to be managed across and between all the different firms involved in the process… so the workflow layer needs to have an inherent and deep understanding of identity, and an ability to coordinate between firms, not just within a single firm. But if every single update needs manual sign-off, we’d never get anything done. So this creates a need for:

Shared, distributed, verifiable business logic execution: applications always contain business logic but in B2B scenarios, the question of who is expected to calculate what, for whom and when, is surprisingly subtle. So there needs to be a way for firms to verify some computations for themselves, and to specify whom they trust to calculate other things for them. After all, that’s how things work today: some things can be taken on trust, other things must explicitly be verified. And it’s rare in the real world to just give some central party complete power to do everything. The B2B application platform needs to reflect and support this real-world sophistication. But a system that is in sync with its peers in other companies but out of sync with the applications in your own company solves nothing. So we also need rich support for:

Integration: the system needs to play well with other internal systems and processes… we have to remember that the firms who need this sort of platform probably already have hundreds of existing applications, each of which is probably a system of record for something or other. So, in addition to synchronising information between parties, we need a sophisticated way to ensure this point of ‘inter-firm truth’ is in sync with all the other places it exists inside each firm. Finally, this entire architecture needs to be built to run between firms, across the internet, so we need a modern approach to…

Security: the system must understand that events, data updates and business process invocations could be arriving from both inside the firm and from outside the firm. Records in your database could be updated automatically if their owner is some other firm. We’ll be communicating non-stop with the outside about critically important business data. We can’t just pretend we’re sitting behind a strong firewall that protects us from the outside. So we need to implement cryptographic protections into the heart of the application: all interactions must be authenticated, data updates need to be controlled by pre-agreed constraints, records need to be fingerprinted in order to detect malicious mutation, and more. In short: these applications need to rely wholeheartedly on applied cryptography, or ‘trust technology’ as we sometimes call it to avoid scaring business people 🙂
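Two of the requirements above – record-level ownership and fingerprinting records to detect malicious mutation – can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not how Corda or any real platform works: the names (`SharedRecord`, `fingerprint`) are invented for this example, and a production system would use digital signatures and agreed update rules rather than a bare hash.

```python
# Toy sketch: a shared record with an owner and a tamper-evident
# fingerprint. A SHA-256 hash over a canonical encoding of the record
# stands in for the cryptographic protections described above.
import hashlib
import json
from dataclasses import dataclass, field


def fingerprint(payload: dict) -> str:
    """Hash a canonical (sorted-key) JSON encoding of the record."""
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


@dataclass
class SharedRecord:
    owner: str      # the party entitled to update this record
    payload: dict   # the business data each party holds a copy of
    digest: str = field(init=False)

    def __post_init__(self):
        self.digest = fingerprint(self.payload)

    def update(self, party: str, new_payload: dict) -> None:
        # Record-level ownership: only the owner may mutate the record.
        if party != self.owner:
            raise PermissionError(f"{party} does not own this record")
        self.payload = new_payload
        self.digest = fingerprint(new_payload)

    def verify(self) -> bool:
        # A counterparty recomputes the fingerprint over its copy and
        # compares digests to detect malicious mutation.
        return self.digest == fingerprint(self.payload)


# Usage: the seller owns the invoice; the buyer merely holds a copy.
invoice = SharedRecord(owner="SellerCo",
                       payload={"amount": 100, "currency": "GBP"})
assert invoice.verify()
invoice.update("SellerCo", {"amount": 120, "currency": "GBP"})  # allowed
# invoice.update("BuyerCo", ...) would raise PermissionError
```

Even this toy makes the architectural point: once ownership and verification live inside the record itself, no single central operator needs ‘god mode’ over everybody’s data.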

There are, of course, many more requirements. But my experience from building a system that implements the vision I’m outlining here is that these are the requirements that drive most of the architecturally-significant decisions.

Put the power back in the hands of businesses, not the IT vendors

At its heart, the problem we’re trying to solve is one of power dynamics: how can firms come together to improve how their markets work, but without accidentally introducing a ‘monster’ that turns round and eats them all? We’ve all seen what happens when regular centralised platforms obtain a position of elevated power: they get to control everything that is happening on their platform and, as the platform grows, it becomes ever more valuable, attracting more users, whilst simultaneously making it harder for existing users to leave, increasing the pricing power of the operator. This is a problem we’re trying to avoid.

We also need to be pragmatic, however. Every firm wants to believe they’re in control, but not all of them want to incur the cost of running their own infrastructure, at least not at first. So we need an architecture that enables some participants to run their own piece of the shared infrastructure, whilst others experience it as a cloud service. The power comes from having the option to ‘repatriate’ data and control should they feel a central operator or other service provider is becoming over-bearing. You don’t need to exercise that option for it to have power. We might call this empowering such firms with ‘digital sovereignty’.

The good news is that we’re on the cusp of solving these problems: B2B application development platforms now exist. We just haven’t been describing them in this way.

So what have we been calling them?

B2B Application Development Platforms are… Enterprise Blockchains!

The answer is that a world-class B2B application development platform has been engineered in plain sight over the last seven years. It goes by the name of Corda, but you may also know it as an example of an ‘enterprise blockchain.’ This term is used because many of the techniques being applied were first popularised by public blockchain platforms, even though the public and private blockchain communities would be the first to admit their networks do quite different things.

I imagine some readers have an eyebrow raised at this point. If so, I’d encourage you to re-read this article and then look at what platforms such as Corda actually do, and what they’re being used for (as opposed to the jaw-dropping amount of hype and bluster that infects this space)…

If you have a Business-to-Business problem that needs an IT solution, and are wondering why the standard development tools don’t quite hit the mark, it’s OK… you’re not the only one. It’s a real problem. The good news is that there’s also a solution: Corda.

In this piece, my objective was to motivate the need for a purpose-built B2B application development platform. But now that you are, I hope, intrigued, you can find out more about how Corda attempts to support this concept by reading the intro whitepaper, heading over to our docs or, most powerfully of all, reading about real-life production projects that are using the platform right now.

Why all the Web3 Hate?

Crypto is rebranding itself as ‘web3′ and the mainstream tech community don’t like it one bit. But they’re missing the point: for good or ill, crypto’s mission to redefine finance is where the constructive feedback and critical thought should be focused, not a strawman about building a decentralised Facebook.

Ryan Selkis (@twobitidiot) made an interesting statement the other day:

Why the ‘viscerally negative’ reaction to ‘web3’?

It’s possible Selkis is referring to negative reactions from within crypto. But there’s also no shortage of pushback from elsewhere. Stephen Diehl’s ‘take no prisoners’ posts are a good place to start. Or you could look here, here, here, here or here.

The pushback is occurring along a wide front but it’s important to note that Selkis specifically references ‘web3’, not ‘DeFi’ and not ‘crypto’. And that’s important. After all, there has never been a shortage of critics of permissionless blockchains over the years, including me at times. And plenty of people feel queasy at what they see as rampant speculation and fraud in pockets of that community. But Selkis is right that, in the last few weeks, it feels like the nature, and volume, of the criticism has changed, perhaps decisively.  

Why? What was the trigger?

If I were to ask Diehl, I suspect he’d point me to a tweet like this one:

But it’s not as if Diehl has only just started criticising crypto this week. So why is his message only really being heard now? Why not three years ago? It seems like all the hate started to resonate and coalesce just last month. 

Why all the hate now?

The reality is that most technologists at most firms are using regular technology to solve regular business problems. So those in the blockchain space probably overestimate the extent to which the rest of the tech industry is paying any attention at all. That’s definitely the case for the ‘permissioned’ blockchain ecosystem I inhabit and, contrary to what you’d think from watching the Twitter echo-chamber, I suspect it’s true for the permissionless wild-west world of crypto too.

To the extent mainstream technologists pay any attention to the permissionless cryptocurrency world at all, it’s through the lens of ‘other’. I think the general thought process is this:

“There’s that thing happening over there. Some of the people seem to be getting very rich. They certainly make a lot of noise. Some of them seem to be a bit infra dig, some are probably scammers. Maybe I’m a bit annoyed at not having got in on it when there was money to be made. And my gut feel as a technologist is that some of the claims just don’t stack up. But, whatever…. If I argued with everybody I thought was wrong on the internet, I’d never get anything done. It doesn’t affect me or my work. So I’ll just ignore it and get on with my life.”

That, I think, explains the ‘traditional’ tech world’s view of blockchains until a few weeks ago.

And then somebody – in either the most brilliant rebranding in history or a truly insane and hubristic act of overreach – decided that building a new financial system wasn’t enough, and that permissionless blockchains were somehow also going to be the solution to the dominance of the large tech firms.

Hey… if you can reinvent money, why not also fix the world’s political system and broken social structures too?! So somebody dusted off Gavin Wood’s old ‘Web 3.0’ thesis and announced to the world that the future of the world wide web was the public permissionless blockchain tech stack.

Oh dear…

Suddenly, as in literally overnight, the calculation being made by normal technologists doing normal work changed. The weirdos doing crypto stuff were no longer an interesting curiosity. They were now loudly and aggressively stating that their ideas and their technology were what all those normal developers and firms were going to be using in the future.

Oh, and that it was their tokens that everybody else would have to buy in order to participate.

Now, if you responded every time some crazies on the internet said they were coming for you, you’d go mad. But when they’re as well-funded, vocal and – yes – influential as the crypto community, one does feel a need to react.

So, suddenly, anybody in a leadership position in the existing tech world had lost any ability to remain neutral. Anybody who did not believe that this was the correct direction of travel felt they had an obligation to say so.

And, guess what? It turns out there are a lot of people who don’t believe that public permissionless blockchains have anything to contribute to the gnarly societal problems created by the exposure of the human race to global social media for the first time in our history as a species.

They were perfectly fine to sit on the sidelines whilst it didn’t affect them.  But, now that it does, they’ve come off the fence, and not just about the web3 angle, but about the whole edifice.

So it’s entirely unsurprising that the ‘cryptocurrencies and decentralised finance are the future of the web’ meme has run into such violent opposition!

It’s as if the regular tech world was entirely happy for crypto to carve its own furrow as long as it was well away from anything important, as they saw it. But, once their territory had been invaded, they had no choice but to fight back. And fighting back is what they’re doing.

The mystery to me is why anybody in the permissionless space is surprised?!

Perhaps they’re not surprised. Maybe some of them quite like it. After all, as the saying goes, ‘then they fight you’ is only one step from ‘and then you win’.

But I can’t help thinking some of the critics we’re hearing from today are missing the point. There is a deep criticism we could make, but it’s nothing to do with ‘web3’.

Maybe this is what Chris Dixon was thinking of when he tweeted this:

So in the remainder of this piece I’ll sketch out what I think a valuable critique might look like. But first a short interlude.

Interlude: never annoy a pedant

There’s a fun alternative explanation for what’s going on that I’d regret not sharing.

It’s possible that the entire backlash against the ‘web3’ movement stems from the fact that technologists are pathologically pedantic, and those advocating web3 have misunderstood what Web 2.0 was! 

Web 2.0 was nothing to do with Facebook, Google and Twitter’s dominance. Web 2.0 was all about architecture and, in particular, the emergence of Ajax techniques and the API-level integrations between sites, enabled by the widespread adoption of REST, that turned out to be needed to make it all work. This is basically the point that Tim O’Reilly recently made.

But the fact that the web3 narrative doesn’t acknowledge this is the kind of thing that can really annoy pedantic people. Somebody literally is wrong on the internet!

So don’t discount the possibility that the web3 blowback is caused by nothing more than a few technologists getting really upset that some people don’t know their history…

Never annoy a pedant.

The case for the defence? The real critique?

If permissionless blockchains are not the future of social media, does that mean, as Diehl argues, that they are worthless, parasitic, negative-externality generators? Maybe. But it’s not obvious to me that this is the case.

And I write this as somebody whose day job at R3 is building and selling solutions based on private or permissioned versions of this technology. The permissionless blockchain world does not usually see me as a friend.

But if one strips away all the hype and complexity, there’s a very simple story one can tell about permissionless crypto. It goes like this:

First, there was a business requirement: Satoshi set out to implement a system of digital cash that is censorship resistant.

The entire architecture of Bitcoin emerges from that requirement. I make no value judgement as to whether the requirement is legitimate.  But “censorship-resistant digital cash” is the business problem for which Bitcoin is the solution.

But that was just the starting point. The explanation for the present Decentralised Finance scene requires a couple more steps.

First, censorship resistance turns out to require a system that is permissionless. After all, if you need somebody’s permission to use the system then in what way is it censorship resistant?

And permissionless platforms for the exchange of value turn out to enable more than just digital payments. Witness the emergence of projects, primarily on Ethereum, offering lending, trading, financial derivatives, fundraising and more. It’s as if everything that the investment banking industry spent the twentieth century building is being rebuilt in the permissionless crypto sphere. And, with the emergence of stablecoins, there isn’t really anything you can do with traditional finance that you can’t, in principle, do in the crypto world.

And so it’s not hard to imagine most of what you can do in the regular financial system being replicated, for good or ill, in the crypto space, but with two crucial differences.

  • The first difference is the unfathomably greater complexity. And, for Bitcoin and Ethereum, monumentally greater energy consumption.
    • That’s what permissionlessness costs.
  • And the second difference is the almost complete lack of regulation.
    • That’s what permissionlessness means.

I’m 100% with Diehl that there is an argument to be made that either of those points could cause many people to want to steer well clear. Either could indeed be a reason to want to burn the whole thing to the ground.

But let’s suspend our moral reasoning for just a little while longer. And let’s imagine we fast forward a few years. What we could have on our hands is a vast parallel financial system, where AML, KYC and CTF rules are not applied. A system where no investor protection rules apply. A system where no accredited investor rules are in force. A world, in other words, that looks just like the existing world would look if we simply woke up one day and the last fifty years of financial regulation had been rolled back.

What happens then?

The debate usually ends up with one side saying ‘the regulators’ will intervene, and the other side saying that the nature of the technology means they can’t. But a more interesting possibility is to consider what happens if ‘the regulators’ choose not to intervene.

We know that rolling back regulation is devilishly difficult. Who would vote for a sensible reform of AML rules if they feared being labelled a ‘terrorist sympathiser’ by an opponent with a vested interest in the status quo? This is why regulations are almost always like ratchets: more and more get added. Few are ever removed.

However, societies have an ‘antidote’ to this problem, which is simply to let old regulations become irrelevant. Witness the rise of the Money Market Fund in the US in the 1970s. Regulation said that banks couldn’t pay interest on current accounts. Inflation was sky high. It was too hard to change the regulation. So smart bankers created the money market fund instead. The regulation was simply made irrelevant. And the establishment just kind of accepted it. The law was, in effect, changed by the creation of facts on the ground rather than through the legislative process.

It’s entirely possible the same thing could be playing out in finance, in plain sight. There are few people who would argue that the present state of financial regulation is in any way optimal. But nor is there any reasonable path to reforming it. So we could find ourselves in a few years in a situation where the present system is generally acknowledged to be broken and the only plausible alternative is… the one the crypto community has built.

Diehl’s argument is that this is terrifying. And this should give pause to anybody in the ‘mainstream’ who believes the present system is broken. Where’s your alternative model? What’s your proposal for how we transition to it?

What happens if we reach a point where consumers find the new system so much more convenient that governments don’t dare reimpose the old rules, to avoid incurring the wrath of their voters?  And might ‘the regulators’ by that point actually be quietly pleased that they had, in effect, a day zero from which to start again?

If the idea that crypto is the future of finance excites you, you’re probably already out there building. (Sorry, buidling). But if it terrifies you, why are you wasting your time arguing against a web3 strawman about imaginary clones of Facebook when there’s a far bigger picture to focus on?

After all, isn’t it most likely that permissionless crypto’s best chance of success is with the problem for which it was actually designed?

If so, I’d humbly suggest that this is where the fine minds presently arguing against the web3 strawman should be spending their time.

Viral Vector Vaccines – More Fun Things I’ve Learned

In this post, I shared what I had learned about how the AstraZeneca ‘viral vector’ vaccine worked and how endlessly fascinating I found it. But I was struggling to understand why it takes so long for the immune system to ‘learn’ about the new threat that the vaccine is trying to teach it about… why it seems to take “weeks” to gain any protection. I still don’t understand but I’ve learned a few more things that others might find interesting too!

Recap of the basic idea

The basic idea behind Viral Vector vaccines such as the AstraZeneca and Johnson & Johnson products is that a harmless virus is genetically modified so that, if it were to infect a human cell, the cell would start producing protein fragments that look a lot like the spike proteins on real Covid viruses. These ‘fake’ spikes become visible to your immune system, which mounts a response. Thus, if you subsequently become infected with the ‘real’ Covid, your immune system is already primed to destroy it.

Do those fifty billion viruses replicate once inside me? Answer: no

When I read that the vaccine relies on injecting a harmless ‘carrier’ virus, my immediate thought was: “I wonder if that virus is able to replicate like normal viruses do?” I saw on my ‘vaccine record’ that there were fifty billion copies of it in my dose, which made me suspect not… after all, why would you need to inject so many if each one had the ability to replicate? The human body only has about 15 trillion cells, so 50 billion viruses is enough for one in 300 cells! Surely you’d need far fewer if each one could trigger the creation of many many copies?
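That back-of-the-envelope arithmetic is easy to sanity-check (a quick sketch using the fifty-billion and fifteen-trillion figures quoted above):

```python
# Sanity-checking the dose arithmetic quoted above.
viral_particles = 50e9   # particles per dose, per my vaccination record
human_cells = 15e12      # rough cell-count estimate used above

cells_per_particle = human_cells / viral_particles
print(cells_per_particle)  # 300.0 -> roughly one particle per 300 cells
```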

Turns out I was right about this: the modified ‘adenovirus’ that was injected into me is unable to replicate; those fifty billion copies are the only ones I’ll ever have as a result of that shot.

This infographic (click through for a larger image) from the Royal Society of Chemistry has a nice explainer on CompoundChem:

As that article explains, it seems the use of a non-replicating virus was a deliberate choice, presumably on safety and public-acceptance grounds: it would have been possible to design a vaccine where the virus could replicate, and some vaccines for other diseases do work that way. The advantage of the latter, I guess, is that far fewer copies would have had to be injected to start with. It’s interesting to speculate (based on absolutely zero knowledge of the science or of where the bottleneck in production actually is…) whether vaccine rollouts could have been quicker if they’d been based on replicating viruses. Would it have meant a given quantity of production could have been spread more broadly?

Note: I’m still not yet clear on what happens to my cells that are infected by one of these non-replicating vector viruses… are these cells then destroyed by my immune system because they present the spike protein? Or are they allowed to live? Can they divide? If so, do their daughters also produce the spike protein?

What happens if my body already has antibodies to the vector virus?

I made a throwaway comment in my last post about how the ‘carrier’ virus has to be carefully selected: if you’ve been exposed to it – or something like it – in the past, your body will attack the virus particles before they have a chance to infect your cells… and so you’ll produce no (or fewer) spike proteins and you’ll presumably develop weaker protection against Covid than would otherwise have been the case. This piece in The Scientist explains more: this, it turns out, is why the AstraZeneca vaccine uses a modified chimp virus – it’s far less likely that the average human has seen it before. And it points out that there’s a downstream consequence: that virus can’t now be used for a malaria vaccine. You really do have to use a different one for each vaccine.

There were a few other interesting tidbits in that article. It was the first time I’d seen an argument that one possible reason for milder side-effects from the AZ vaccine amongst older people is that the older you are the more pathogens you’ve been exposed to and so the more chance there is that your immune system has seen something like the vector virus before. And so relatively more of the fifty billion particles will be destroyed before entering a cell. So I’m even more pleased about my feverish sleepless night now!

Why are vaccines injected into muscle? How long does it take for the virus particles to get to work?

The question that triggered my attempts to learn about this stuff was this: why does it take weeks for me to gain any meaningful protection from the vaccine when it’s clear that my body was fully responding to the onslaught after barely twelve hours?

It got me wondering whether the mechanism of injection had anything to do with it. For example, if the vector virus is injected into a muscle, how long does it take for all fifty billion virus particles to get to work? And where do they operate? In that muscle? Or do they circulate round the body?

Was the first night reaction in response to all fifty billion viruses going to work at once? Or were only a few of them at work that night and it wasn’t yet enough to persuade my immune system that this is something it should lay down ‘memories’ about? Perhaps it’s going to take a few more weeks until they’ve all infected a cell and enough spike proteins have been produced to get my immune system finally to say “Fine! You win! Stop already! I’ll file this one with all the others on the ‘things we should be on alert for in the future’ shelf. Now stop bothering me!!”?

I was surprised how little definitive information there is about this sort of stuff online. I guess because it’s ‘obvious’ to medical professionals, and they don’t learn their trade from quick skims of Wikipedia and Quora. (I hope).

From what I can tell, the main reason vaccines are injected into muscle is convenience: the shoulder is at just the right height for a clinician to reach without much effort, it’s an easy target to hit, there’s no need to mess around trying to find a vein, and the risk of complications (eg inflammation of a vein or whatnot) is lower. This literature review makes for an interesting skim.

I’d also wondered if injection into muscle, rather than veins, results in the vaccine having a localised effect… eg is it only my shoulder muscle that churns out the spike proteins? Turns out the answer to that is no: muscle is chosen over, say, fat precisely because it is rich in blood vessels. The vaccine designers want the vaccine vector virus to enter the bloodstream and rush round the body.

And I’d wondered if injection into muscle was in order to create a ‘slow drip drip’ of vaccine into the bloodstream over time and perhaps that would explain why it took so long for the body to develop full immunity. Turns out the answer to that is also ‘no’. It seems that injections into the deltoid muscle (shoulder) are absorbed quicker than those into other commonly used injection sites. Implication: if the manufacturers wanted slow absorption, they wouldn’t be telling doctors to stab patients in the shoulder!

So when I bring all that together, I still remain confused… injecting the vaccine into my shoulder results in quick absorption, and my body was in full ‘fightback’ mode after twelve hours, so it’s hard to imagine there was any meaningful amount of vaccine lingering in my shoulder after, say, 24 hours… it must, by then, all surely have been whizzing round my veins and happily infecting my cells.

So what gives? Why does it take weeks after billions of my cells have been turned into zombie spike protein factories and my immune system has gone on a frenzied counterattack for me to have a meaningful level of ‘protection’ against Covid? (I’m ignoring the relevance of the ‘second dose’ here for simplicity)

I guess the answer must be ‘because that’s just how the immune system works!’

The mathematics of COVID vaccines

My AZ vaccine dose contained FIFTY BILLION virus particles. Wow!

I was fortunate to receive my first COVID vaccine dose yesterday; I received the AstraZeneca product and all seemed to go well. As seems to be common, it made me feel feverish overnight and I slept badly. This was reassuring in a way as it made me feel like it was ‘working’… it was exciting, in a strange sort of way, to imagine the billions of ‘virus fragments’ racing round my body infecting my cells and turning them into zombie spike protein factories!

However, my inability to sleep also made me realise that I had absolutely no idea how it really worked. So as I was struggling to sleep, I started reading more and more articles about these ‘viral vector’ vaccines. They really are quite fascinating. And these articles did answer my first wave of questions… but they also then triggered more questions, to which I couldn’t find any answers at all. I’m not sure I’m going to be particularly productive at work today so thought why not write down what I discovered and list out all my questions. Perhaps my readers know the answers?

Viral Vector Vaccines: Key Concepts

Most descriptions I found online were hopelessly confused or went into so much extraneous detail about various immune system cell types that they were useless in imparting any real intuition. However, what I did seem to discover, at a very high level, was something like the following:

  • A harmless virus is used as the starting point.
    • Interesting detail: the virus needs to be somewhat obscure, to reduce the risk that patients’ bodies have been exposed to it before and thus already have antibodies that would destroy it before it’s had a chance to do its magic.
  • It is then genetically engineered so that when it invades a human cell it triggers the cell to start churning out chunks of protein (the famous spike protein) that look a bit like the spikes on Covid-19 viruses.
  • These spikes eventually become visible to the immune system, which mounts a vigorous response and, in so doing, learns to be on the lookout for the same thing in the future.

In essence, we infect ourselves with a harmless virus that causes our body’s own cells to start churning out little proteins that look similar enough to COVID that our body will be on high alert should a real COVID infection try to take hold.

Or, at least, I think that’s what’s going on.

So many extraneous details

Now, most of the articles I read then go on to talk about things like the following:

  • Technical detail about how several little fragments then have to be assembled to make one ‘spike’.
    • Important but not really critical to understanding the concepts from what I can see
  • The role of different types of immune cells.
    • The only thing these kinds of articles taught me was that it’s clear that immunologists have no idea how the immune system works and their attempts at explaining it to lay readers just makes this painfully obvious 🙂
  • Endless ‘reassuring’ paragraphs about safety.
    • I understand why they do this but it is somewhat depressing that every article has to be written this way, and I can’t help thinking that it may even be counterproductive.

Once and done or an ongoing process?

However, I found the descriptions unsatisfactory in several ways, and maybe my readers know the answers.

The literature talks about how the genetically modified virus cannot replicate. I assume this is because the modification that causes infected cells to churn out spike proteins means that the cell isn’t churning out copies of the virus, as would normally happen? That would make sense if so.

And it would also explain why my ‘vaccination record’ revealed that my dose contained fifty billion viral particles! That’s one for every three hundred cells in my body! Truly mindblowing.

That said, I have no idea what a ‘viral particle’ is. Is that the same as a single copy of a virus?

And all fifty billion of those virus particles fit into just 0.5ml of liquid!

Anyhow, if the virus can’t replicate once inside my body, then the only modified virus particles that will ever infect my cells are the ones that were injected yesterday.

And so I guess the next question is: how long does it take to start invading my cells to turn them into zombie spike protein factories?

Well: the evidence of my own fever was that it was barely twelve hours before my entire body had put itself on a war footing. And in those twelve hours, presumably a lot had to happen. First, enough virus particles had to invade enough cells. And then those infected cells had to start churning out spike proteins in sufficient quantity to catch the attention of my immune system. The invasion must already have been underway before I left the medical centre!

And I guess a related question is: what happens after those fifty billion viruses were injected into my right deltoid muscle? Do they just start invading cells in that region and so my shoulder muscle becomes my body’s spike protein factory? Or do they migrate all over my body and enlist all my different cell types in some sort of collective endeavour? How long does this migration take if so? Is this what explains the time lag from “body is on a war footing after twelve hours” to “you ain’t got no protection for at least three weeks”? Are the majority of the particles floating around for days or weeks before invading a cell? Or is the full invasion done within hours of injection?

Put another way: if the only vaccine virus my body will ever see are the fifty billion copies that were injected yesterday and if after twenty four hours my body already seems back to normal from a fever perspective, what is actually going on over the next few weeks?

I did wonder if perhaps there is some reproduction going on… but not of the virus, but of the cells that have been invaded. That is: imagine the vaccine virus invades one of my cells and forces it to start churning out spike proteins. Presumably that cell will itself periodically divide. What happens here? Does the cell get killed by the immune system before it has a chance to replicate (because it’s presenting the spike protein)? Or do many of these cells actually replicate and hence create daughter cells? Do those daughter cells also churn out spike proteins? That process would explain a multi-generational, multi-week process, I guess? But it wouldn’t be consistent with statements that the vaccine doesn’t persist in your DNA.

Or is the lag all to do with the immune system itself, and there’s some process that takes weeks to transition the body from “wow… we now know there’s a new enemy to be on the lookout for” to “we’re now completely ready for it should it ever strike”?

As you can probably tell, I’m hopelessly confused, but also fascinated. If anybody can point me to explainers about the mathematics of vaccination, I’d be enormously grateful. For example:

  • How many of the 50 billion virus particles are typically expected to successfully enter cells?
  • How long does this invasion process take? Hours? Weeks?
  • Is there any replication going on (either of the virus or of the cells it infects)?
  • Do the virus particles diffuse across the body from the injection site? If so, how? And how long does it take?
  • Or is the time lag all to do with the immune system’s own processes?

I didn’t expect to end up learning (or learning I didn’t know) so much!

UPDATE: Follow-up post here.

A brief history of middleware and why it matters today


This blog post is a lightly edited reproduction of a series of tweets I wrote recently
  • This tweetstorm is a mini history of enterprise middleware. It argues that the problems we solved for firms 10-20 years ago are ones we can now solve for markets today. This blog post elaborates on a worked example with @Cordablockchain (1/29)
  • I sometimes take perverse pleasure in annoying my colleagues by using analogies from ancient enterprise software segments to explain what’s going on with enterprise blockchains today… But why should they be the only ones that suffer? Now you can too! (2/29)
  • Back in the late 90s and early 2000s, people began to notice that big companies around the world had a problem: they’d built or installed dozens or hundreds of applications on which they ran their businesses… and none of these systems talked to each other properly… (3/29)
  • IT systems in firms back then were all out of sync… armies of people were re-keying information left, right and centre. A total mess and colossal expense (4/29)
  • The solution to this problem began modestly, with products like Tibco Rendezvous and IBM MQSeries. This was software that sat in the _middle_ connecting applications to each other… if something interesting happened in one application it would be forwarded to the other one (5/29)
  • Tibco Rendezvous and IBM MQSeries were like “email for machines”. No more rekeying. A new industry began to take shape: “enterprise middleware”. It may seem quaint now but in the early 2000s, this industry was HOT. (6/29)
  • But sometimes the formats of data in systems were different. So you needed to transform it. Or you had to use some intelligence to figure out where to route any particular piece of data. Enterprise Application Integration was born: message routing and transformation. (7/29)
  • Fast forward a few years and we had “Enterprise Service Buses” (ESBs – the new name for EAI) and Service Oriented Architecture (SOA). Now, OK… SOA was a dead end and part of the reason middleware has a bad name today in some quarters. (8/29)
  • But thanks to Tibco, IBM, CrossWorlds, Mercator, SeeBeyond and literally dozens of other firms, middleware was transforming the efficiency of pretty much every big firm on the planet. “Middleware” gets a bad name today but the impact of ESB/EAI/MQ technologies was profound (9/29)
  • Some vendors then took it even further and realised that what all these parcels of data flying around represented were steps in _business processes_. And these business processes invariably included steps performed by systems _and_ people. (10/29)
  • The worlds of management consulting (“business process re-engineering”) and enterprise software began to converge and a new segment took shape: Business Process Management. (11/29)
  • The management consultants helped clients figure out where the inefficiencies in their processes were, and the technologists provided the software to automate them away. (12/29)
  • In other words, BPM was just fancy-speak for “figure out all the routine things that happen in the firm, automate those that can be automated, make sure the information flows where it should, when it should, and put some monitoring and management around the humans” (13/29)
  • Unfortunately, Business Process Management was often oversold – the tech was mostly just not up to it at that point – and its reputation is still somewhat tarnished (another reason “middleware” is a dirty word!) (14/29)
  • But, even given these mis-steps, the arc of progress from “systems that can barely talk to each other” to “systems and people that are orchestrated to achieve an optimised business outcome” was truly astounding. (15/29)
  • Anyway… the point is: this was mostly happening at the _level of the firm_. The effect of the enterprise middleware revolution was to help individual firms optimise the hell out of themselves. (16/29)
  • But few back then even thought about the markets in which those firms operated. How could we have? None of the software was designed to do anything other than join together systems deployed in the same IT estate. (17/29)
  • So now let’s fast forward to today. Firms are at the end of their middleware-focused optimisation journeys and are embarking on the next, as they migrate to the cloud. But the question of inefficiencies between firms remains open. (18/29)
  • Take the most trivial example in payments: “I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?” (19/29)
  • How can we be almost a fifth of the way through the 21st century and lost payments are still a daily occurrence? How can it be that if you and I agree that I owe you some money, we can still get into such a mess when I actually try to pay you? (20/29)
  • As I argue in this post, the problems we solved for firms over the last two decades within their four walls are precisely the ones that are still making inter-firm business so inefficient (21/29)
  • What stopped us solving this 20 years ago with the emerging “B2B” tech? We lacked easy inter-firm routing between legal entities (the internet was scary…), broad availability of crypto techniques to protect data, orchestration of workflows between firms without a central controller, etc (22/29)
  • But we also hadn’t yet realised: it’s not enough merely to move data. You need to agree how it will be processed and what it means. This was a Bitcoin insight applied to the enterprise and my colleague @jwgcarlyle drew this seminal diagram that captures it so well. (23/29)
  • And my point is that the journey individual firms went on: messaging… integration… orchestration… process optimisation – is now a journey that entire markets can go on. The problems we couldn’t solve back then are ones we now can solve. (24/29)
  • What has changed? Lazy answer: “enterprise blockchain”… lazy because not all ent blockchains are designed for same thing, plus the enabling tech and environment (maturation of crypto techniques, consensus algorithms, emergence of industry consortia, etc) is not all new (25/29)
  • But the explosion of interest in blockchain technology was a catalyst and made us realise that maybe we could move to common data processing and not just data sharing at the level of markets and, in so doing, utterly transform them for the better. (26/29)
  • In my blog post I make this idea concrete by talking about something called the Corda Settler. In truth, it is an early – and modest – example of this… a small business process optimisation (27/29)
  • The Corda Settler optimisation is simple: move from asking a payee to confirm receipt once sent and instead pre-commit before the payment is even made what proof will convince them it was done, all enabled through secure inter-legal-entity-level communication and workflow (28/29)
  • But the Settler is also profound… because it’s a sign the touchpaper has truly been lit on the next middleware revolution… but this time focused on entire markets, not just individual firms (29/29)


Process Improvement and Blockchain: A Payments Example

“I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?”


The cheque is in the post…

How can we be almost a fifth of the way through the twenty first century and this is still a daily occurrence? How can we be in a world where, even if you and I agree that I owe you some money, it’s still a basically manual, goodwill-based process to shepherd that payment through to completion?!

The “Settler pattern” reduces opportunities for error and dispute in any payments process – and does so by changing the process

This was the problem we set out to solve when we built the Corda Settler. And I was reminded about this when I overheard some colleagues discussing it the other day. One of them wondered why we don’t include the recipient of a payment in the set of parties that must agree that a payment has actually been made. Isn’t that a bit of an oversight?!


The Corda Settler pattern works by moving all possible sources of disagreement in a payment process to the start

As I sketched out the answer, I realised I was also describing some concepts from the distant past… from my days in the middleware industry. In particular, it reminded me of when I used to work on Business Process Management solutions.

And there’s a really important insight from those days that explains why, despite all the stupid claims being made about the magical powers of blockchains and the justifiable cynicism in many quarters, those of us solving customer problems with Corda and some other enterprise-focused blockchain platforms are doing something a little bit different… and its impact is going to surprise a lot of people.

Now… I was in two minds about writing this blog post because words like “middleware” and “business process management” are guaranteed to send most readers to the “close tab” button… Indeed, I fear I am a figure of fun amongst some of my R3 colleagues… what on earth is our CTO – our CTO of all people! – doing talking about boring concepts from twenty years ago?!

But, to be fair, I get laughed at in the office by pretty much everybody some days… especially those when I describe Corda as “like an application server but one where you deploy it for a whole market, not just a single firm” or when I say “it’s like middleware for optimising a whole industry, not just one company”.

“Application Servers? Middleware? You’re a dinosaur! It’s all about micro-services and cloud and acronyms you can’t even spell these days, Richard… Get with the programme, Grandad!”

Anyway… the Corda Settler discussion reminded me I had come up with yet another way to send my colleagues round the bend…  because I realised a good way to explain what we’re building with Corda – and enterprise blockchains in general – isn’t just “industry level middleware” or “next generation application servers”… it’s also a new generation of Business Process Management platform…  and many successful projects in this space are actually disguised Industry Process Re-Engineering exercises.

Assuming you haven’t already fallen asleep, here’s what I mean.

Enterprise Blockchains like Corda enable entire markets to move to shared processes

Think back to the promise we’re making with enterprise blockchains and what motivated the design of Corda:

“Imagine if we could apply the lessons of Bitcoin and other cryptocurrencies in how they keep disparate parties in sync about facts they care about to the world of regular business…  imagine if we could bring people who want to transact with each other to a state where they are in consensus about their contracts and trades and agreements… where we knew for sure that What You See Is What I See – WYSIWIS. Think of how much cost we could eliminate through fewer breaks, fewer reconciliation failures and greater data quality… and how much more business we could do together when we can move at pace because we can trust our information”

And that’s exactly what we’ve built. But… and sorry if this shocks anybody… Corda is not based on magic spells and pixie dust…  Instead, it works in part because we drive everybody who uses it to a far greater degree of commonality.

Because if you’re going to move from a world where everybody builds and runs their own distinct applications, which are endlessly out of sync, to one where everybody is using a shared market-level application, what you’re actually saying is: these parties have agreed in some way to align their shared business processes, as embodied in this new shared application.  And when you look at it through that lens, it’s hardly surprising that this approach would drive down deviations and errors…!

I mean: we’re documenting, in deterministically executed code, for each fact we jointly care about: who can update which records, when, and in what ways. And to do that we have to identify and ruthlessly eliminate all the places where disagreements can enter the process.
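
As a purely illustrative sketch (this is not Corda code; all names and rules here are invented for the example), “documenting who can update which records” amounts to a shared, deterministic validation function that every party runs over every proposed update:

```python
# Illustrative sketch only: a shared, deterministic rule set that every party
# runs to decide whether a proposed update to a jointly-held record is valid.

from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentObligation:
    payer: str
    payee: str
    amount: int        # minor units, e.g. cents
    settled: bool

def verify_update(before: PaymentObligation,
                  after: PaymentObligation,
                  signers: set[str]) -> bool:
    """Every node runs the same checks on the same inputs, so every node
    reaches the same verdict: who may update which fields, and how."""
    # The core economic terms may never change once agreed.
    if (before.payer, before.payee, before.amount) != \
       (after.payer, after.payee, after.amount):
        return False
    # The only permitted transition is unsettled -> settled...
    if not (before.settled is False and after.settled is True):
        return False
    # ...and only the payer may make it (they hold the proof of payment).
    return before.payer in signers
```

Because the function is deterministic, two parties who run it over the same proposed update cannot disagree about whether it is valid – which is the whole point.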

Because if we know we have eliminated all areas of ambiguity, doubt and disagreement up-front, then we can be sure the rest of our work will execute like a train on rails.

Just like trains, if two of them start in the same place and follow the same track… they’ll end up in the same place at the end.

Reducing friction in payments: a worked example

So, for payments, what are those things – the things that, if we don’t get them right up front, lead to the “I haven’t received your payment” saga I outlined at the start of the post?

Well, there’s the obvious ones like:

  • How much needs to be paid?
  • By whom?
  • To whom?
  • In what kind of money/asset?

There are trickier ones such as:

  • Over what settlement rail should the payment be made?
  • To which destination account or address must the money be sent?
  • What reference information should accompany it?

These are trickier since there is probably a bit of automated negotiation that needs to happen at that point… we need to find a network common to us both… and the format of the routing strings differs for each network, and so forth. But if you have the ability to manage a back-and-forth negotiation (as Corda does, with the Flow Framework) then it’s pretty simple.
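
The shape of that negotiation can be sketched in a few lines of Python (hypothetical names throughout; in a real deployment Corda’s Flow Framework manages the session between the two parties):

```python
# Hypothetical sketch of the back-and-forth described above: each party
# advertises the settlement rails it supports, mapped to its routing string
# on that rail, and both sides deterministically converge on a common one.

def negotiate_rail(sender_rails: dict[str, str],
                   recipient_rails: dict[str, str]) -> tuple[str, str]:
    """Find a settlement network common to both parties and return it
    together with the recipient's routing string for that network."""
    common = set(sender_rails) & set(recipient_rails)
    if not common:
        raise ValueError("no settlement rail in common - escalate to humans")
    # Deterministic tie-break so both sides independently pick the same rail.
    rail = sorted(common)[0]
    return rail, recipient_rails[rail]
```

If the two parties share no rail at all, the negotiation fails loudly up-front – at the start of the process, where it is cheap to fix – rather than at settlement time.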

But that still leaves a problem… even if we get all of these things right, we’re still left hanging at the end. Because even if I have paid you the right amount to the right account at the right time and with the right reference, I don’t know that you’ve received it.

And so there’s always that little bit of doubt. Until you’ve acknowledged it you could always turn around in the future and play annoying games with me by claiming not to have received it and force us into dispute… and we’d be back to square one! We’d be in exactly the same position as before: parties who are not in consensus and are instead seeing different information.

And it struck us as a bit mad to be building blockchain solutions that kept everybody in sync about really complicated business processes in multiple industries, only for the prize to be stolen from our grasp at the last moment… when we discover that the payment – invariably the final step of pretty much every process – hasn’t actually been acknowledged.

It would be as if our carefully tuned train had jumped off the rails and crashed down the embankment just at the last moment. Calamity!

So we added a crucial extra step when we designed the Corda Settler. We said: not only do you need to agree on all the stuff above, you also need to agree: what will the recipient accept from the sender as irrefutable proof that the payment has been made?

And with one bound, we were free!

Because we can now… wait for it… re-engineer the payment process. We can eliminate the need for the recipient to acknowledge receipt. Because if the sender can secure the proof that the recipient has already said they will accept irrefutably then there is no need to actually ask them… simply presenting them with the proof is enough, by prior agreement.

And this proof may be a digital signature from the recipient bank, or an SPV proof from the Bitcoin network that a particular transaction is buried under sufficient work… or whatever the relevant payment network’s standard of evidence actually is.
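
To make the idea concrete, here is a deliberately simplified Python sketch – not the real Corda Settler API; every name is invented – in which the parties pin down the verification function at agreement time, with an HMAC standing in for the recipient bank’s digital signature:

```python
# Hypothetical sketch of the Settler idea: at agreement time the parties fix
# which verification function counts as irrefutable proof of payment. Later,
# the sender simply presents evidence satisfying that function - no
# acknowledgement from the recipient is needed.

import hashlib
import hmac

def make_signature_verifier(recipient_bank_key: bytes):
    """Pre-agreed proof standard: an HMAC 'signature' from the recipient's
    bank over the payment reference (standing in for a real digital
    signature or an SPV proof)."""
    def verify(payment_ref: str, evidence: bytes) -> bool:
        expected = hmac.new(recipient_bank_key,
                            payment_ref.encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, evidence)
    return verify

def settle(payment_ref: str, evidence: bytes, agreed_verifier) -> bool:
    """The obligation is marked settled iff the evidence meets the
    pre-agreed standard - so, by prior agreement, nobody can dispute it."""
    return agreed_verifier(payment_ref, evidence)
```

The crucial design point is that `agreed_verifier` is fixed before any money moves: the recipient has already committed to what they will accept, so their later cooperation is no longer required.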

But the key point is: we’ve agreed it all up front and made it the sender’s problem… because they have the incentive to mark the payment as “done”. As opposed to today, where it’s the recipient who must confirm receipt but has no incentive to do so, and may have an incentive to delay or lie.

But building on this notion of cryptographic proof of payment, the Corda Settler pattern has allowed us to identify a source of deviation in the payment process and move it from the end of the process, where it is annoying and expensive and makes everybody sad, to the start – and, in so doing, to keep the train on the rails.

And this approach is universal. Take SWIFT, for example. The innovations delivered with their gpi initiative are a perfect match for the payment process improvements enabled by the Settler pattern.

The APIs made available by Open Banking are also a great match to this approach.

Middleware for markets, Business Process Management for ecosystems, Application Servers for industries..!

And this is what I mean when I say platforms like Corda actually achieve some of their magic because they make it possible to make seemingly trivial improvements to inter-firm business processes and, in so doing, drive up levels of automation and consensus.

So this is why I sometimes say “Corda is middleware for markets”.

It’s as if the first sixty years of IT were all about optimising the operations of individual firms… and that the future of IT will be about optimising entire markets.

Busting the Myth of Public Blockchains for Business

It’s time to talk about transaction finality. Last week’s 51% attack demonstrates that Ethereum-style blockchains are not ready for business

A belief took hold amongst some of the tech community in 2018: “If you have an enterprise blockchain use-case you should build it on a platform based on Ethereum.”

The argument was well constructed and relied on several plausible-sounding claims, so it’s understandable that it seemed convincing. However, as 2018 unfolded, these claims began to be challenged. And as we enter 2019, the final remaining argument has been undermined by a public demonstration of how the lack of settlement finality in public blockchains such as Ethereum renders their immutability and security guarantees worthless for business.

In this piece, I will argue that it is now time to conclude that Ethereum’s core technologies are the wrong foundation upon which to build business blockchain solutions. My argument is: 1) the core Ethereum technologies are due for abandonment, leaving businesses at risk of technology dead-ends; 2) the Ethereum developer skill-pool has been massively overstated and is in fact far smaller than that for the purpose-built business blockchains based on existing languages; and 3) the idea of building on Ethereum in order to securely ‘anchor’ private blockchains to a public chain is now discredited.

In short, business blockchain applications should be built on technologies designed for the enterprise, not Ethereum.

What was the argument for why businesses should build on Ethereum?

To understand how we reached this point as a community, it’s helpful to review the thinking that led here. Here’s how the argument for why businesses should build on Ethereum went:

  • “Go where the skills and innovation are: Ethereum has the largest community and the broadest availability of skills.”
  • “Use the tools that will best let you interoperate with the public chain: Even if you’re not using the public Ethereum network you should use platforms that are based on the EVM, and use languages like Solidity so you can inherit the innovation from the public chain and maximise the chances of interoperability in the future”
  • “Overcome the ‘weak’ security of private chains by ‘anchoring’ in the public chain: Public chains are more immutable than ‘insecure’ private networks and so you should ‘anchor’ your private transactions to prevent malicious parties rolling back your transactions behind your back.”

By the end of 2018, there was ample evidence to debunk the first two claims, but the third claim persisted. Indeed, this third claim, that a public blockchain such as Ethereum offers a degree of transaction confirmation permanence that is otherwise unobtainable, has been repeated over and over again, even as late as December 2018.

Until last week, that is, when a 51% attack against the Classic (original) Ethereum network demonstrated for real what we already knew in theory: that history on a public blockchain like Ethereum can be arbitrarily rewound, money double-spent and network participants defrauded.

The rest of this article will review each of the three claims above in depth to explain why they are incorrect and how that makes Ethereum – and Ethereum-based platforms – unsuitable for business. But it’s important to note that the purpose of this blog post is actually to deliver a positive message. Because the broader picture is actually one of success: Ethereum is proving to be a valuable tool for a wide range of isolated social and economic experiments. And plenty of blockchains purpose-built to solve business problems, such as Hyperledger and Corda, are live and are changing the world of commerce.

So my key message is that it’s the inappropriate application of Ethereum technologies to the unforgiving world of real business problems, for which it was not designed, that we need to guard against. These two worlds have very different requirements.

It’s time to declare in public what has been openly discussed in private: Ethereum is currently unsuited to the world of business and we should have the courage as a community to say so.

So let’s now review the arguments for using Ethereum in the enterprise that have now been shown to be incorrect.

Claim 1: “Go where the skills and innovation are: Ethereum has the largest community and the broadest availability of skills.”

This argument starts well. For example, ConsenSys claim that the “Ethereum developer community” has 250,000 members, by which they presumably mean the number of people who can code using Solidity, the language in which almost all Ethereum apps are coded.

But when you scratch the surface, reality begins to intrude:

  • Hundreds of thousands of Solidity developers sounds like a big number until you realise that there are over a million developers with the knowledge to build applications for Hyperledger Fabric using the language Go and twelve million developers with the knowledge to build applications for Corda using Java. In the latter case, our experience shows that any competent Java developer can pick up the Corda library and be productive in a couple of days. This means the Hyperledger and Corda developer skillpools are at least one, maybe even two, orders of magnitude bigger, even using ConsenSys’s figures.
  • But we need to challenge ConsenSys’s figures, small as they now seem, because there is minimal evidence to support even the 250k number. The claim seems to be based on counting how many people have downloaded one of the development tools that pretty much every Ethereum developer has to use, and assuming half of them became Ethereum developers. But that methodology doesn’t work. To see why, let’s apply the same logic to the Java ecosystem and check whether it reproduces the known figure of twelve million Java developers. Now, we know that one tool for developing Java applications, IntelliJ, had almost twenty-five million downloads in 2017 alone, and that product had barely ten percent of the huge and diverse market for Java development tools (Eclipse, Android Studio and NetBeans were all larger). So we can estimate there were at least 250 million downloads of Java development tools in 2017, which by ConsenSys’s logic would mean over 125 million Java developers. Except there aren’t… the correct number is about twelve million. The methodology overestimates by a factor of ten. So the true number of people with Ethereum skills is almost certainly much smaller than 250k; I would be surprised if it were even 50k, and it may well be closer to 10k – a rounding error in the world of developer communities. And the number of those who can write Solidity contracts securely, critical to avoiding another DAO-style bug, is smaller still.
  • And on top of this, we also need to add the huge productivity gains that come from being part of established ecosystems. For example, the range of development environments, debuggers, testing frameworks, profilers and libraries available for the Java ecosystem is staggeringly larger than that for the Ethereum and Solidity ecosystems.
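
The sanity check in the second bullet above reduces to a few lines of arithmetic (the figures are the ones quoted in the text):

```python
# Apply the "downloads / 2 = developers" methodology to the Java ecosystem
# and see how far off it lands, using the figures quoted in the text.

intellij_downloads_2017 = 25_000_000   # ~25M IntelliJ downloads in 2017
market_multiplier = 10                 # IntelliJ had ~10% of the Java tools market

total_java_tool_downloads = intellij_downloads_2017 * market_multiplier  # 250M
estimated_java_devs = total_java_tool_downloads // 2  # the disputed methodology

actual_java_devs = 12_000_000
overestimate_factor = estimated_java_devs / actual_java_devs  # roughly 10x too high
```

The methodology predicts 125 million Java developers against a real figure of about twelve million – an overestimate of roughly an order of magnitude, which is why the 250k Solidity figure deserves the same discount.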

The reality is that the developer ecosystem and momentum is with the Hyperledger and Corda communities, not Ethereum. So it’s perhaps no surprise that the overwhelming majority of truly ground-breaking, successful enterprise blockchain deployments to date run on Hyperledger Fabric and Corda, not Ethereum.

Claim 2: “Use the tools that will best let you interoperate with the public chain: Even if you’re not using the public Ethereum network you should use platforms that are based on the Ethereum Virtual Machine (EVM) so you can inherit the ‘innovation’ from the public chain and maximise the chances of interoperability in the future”

This argument is more pernicious than the previous one. It says to developers: “even if you’ve correctly determined that a public Ethereum network is wrong for you, you should still use the Ethereum toolset for your private project.” It is an argument that plays on people’s deep fears: stick with the crowd; after all, you won’t be fired if you make the same mistake that everybody else made!

The problem is: as we demonstrated above, there is no crowd and the Ethereum community plans to throw all the current technology away in any case: the EVM is set for total replacement. The plan, “Ethereum 2.0”, is to build a new design from scratch.

So the world faces the possibility that, long after the public Ethereum community have moved on to something new, business leaders will wake up one day to discover critical parts of their business are running on technology that isn’t even being used any more for the purpose for which it was built. Talk about buyer’s remorse…

This might be OK if the Ethereum Virtual Machine were a sound technology but, as the team from Kadena documented, the EVM is “fundamentally unsafe”. The team at Aion independently reached a similar conclusion and have written eloquently about why they rejected the EVM and chose the Java ecosystem instead. And yet consultants, some from reputable firms, are pushing this technology hard into organisations that don’t always possess the technical expertise to realise the advice may not be appropriate.

Genuinely ground-breaking work is, of course, being done by some very talented and committed people on the public Ethereum network, but it is being done – and should continue to be done – safely away from the back offices of the businesses upon whose data integrity the world depends.

However, 2018 ended with one last, killer plank in the argument for why businesses should nevertheless build on Ethereum rather than a platform like Hyperledger Fabric or Hyperledger Sawtooth or Corda.

And it was this last argument that was severely undermined this week.

Claim 3: “Overcome the ‘weak’ security of private chains by ‘anchoring’ in the public chain: Public chains are more immutable than insecure private networks and so you should ‘anchor’ your private transactions to prevent malicious parties rolling back your transactions behind your back.”

This argument was actually pretty clever. Here’s how it went:

  • ‘The security of public blockchains is “backed” by the work performed by billions of dollars worth of mining equipment and electricity. To reverse a “confirmed” transaction would be economically infeasible and, since only public blockchains use proof of work, only public blockchains can provide this “immutability” guarantee.’
  • ‘By contrast, blockchains that rely instead on identifiable parties to provide consensus cannot deliver this level of security and immutability; there is always the chance that parties could “collude” to reverse a transaction.’

And so, the proponents of Ethereum for the enterprise propose a clever idea: by all means, use a peer-reviewed fault-tolerant algorithm for your business transactions – you need rapid and final confirmation, after all.

But then, as an additional layer of safety, “anchor” a summary of your transactions in the public Ethereum network – the network that is supposedly massively more secure and resistant to mutation. Its proponents even claim this would provide ‘greater “proof of settlement finality”’ and that ‘any chance of counterparty disputes about membership is eliminated’.

This sounds perfect: the privacy, performance and settlement finality of a private chain and the security and immutability of a public chain!

Except… there was always a problem with this argument: finality.

In short, the two unanswered questions were:

  • If your enterprise blockchain needs settlement finality but the chain into which it is ‘anchored’ provides only probabilistic finality, when is it safe to tell a user of the private chain their transaction has been confirmed? What happens if two conflicting hashes might be vying for inclusion at the same time? Are users expected to constantly monitor the underlying chain to check the private chain hasn’t gone bad? And what exactly are they supposed to do at that point in any case?
  • If the ‘anchor’ gets washed away by a ‘reorganisation’ of the underlying public probabilistic blockchain, what are you supposed to do then?
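
The depth of the problem behind those questions can be quantified. Here is a minimal Python sketch of the attacker-catch-up model from the original Bitcoin whitepaper (the function name is mine), which shows why confirmation on a proof-of-work chain is only ever probabilistic – and why a majority attacker, as in the Ethereum Classic case, succeeds at any depth:

```python
# Back-of-envelope from the Bitcoin whitepaper's attacker-catch-up model:
# the probability that an attacker controlling fraction q of the hash power
# eventually overtakes the honest chain, starting z blocks behind.

from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Nakamoto's formula: P = 1 - sum_{k=0..z} Poisson(k; lam) * (1 - (q/p)^(z-k)),
    where p = 1 - q and lam = z * q / p."""
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob
```

With 10% of the hash power an attacker almost never rewrites six blocks; with a majority, the success probability is 1.0 no matter how deep the “anchor” is buried. “More confirmations” buys probability, never finality.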

The problem is: technically savvy people knew these questions made the concept highly suspect, but because there had never been a high-profile example of it actually going wrong, nobody seemed to care. And the concepts were complicated in any case – probabilistic settlement, reorganisations… all too abstract! So the response seemed to be: “sure… this could happen in theory, but it never happens in practice, so who cares?”.

Until last week.

When a high-profile Ethereum network suffered a devastating and unprecedented attack that caused transactions over one hundred blocks deep to go from “confirmed” to “unconfirmed”. Any “anchor” that had been in one of those hundred blocks would have been washed away, opening up the possibility that a simultaneous attack on the private network could result in a conflicting anchor taking its place.

In other words, the trivial ease with which the supposedly secure and immutable chain was rewritten means it failed in its one and only purpose for an enterprise deployment.

The right approach to settlement finality for business blockchains is to acknowledge that things can go wrong and to plan for them up-front: accept that you need to know the identity of the consensus providers (which also ensures provider diversity, in contrast to increasingly centralised mining pools), and that you need a governance process and a dispute-resolution forum for problems that cannot be solved solely with clever math or novel technology.


So, here at the start of January 2019, what is left of the “Ethereum in business” story?

  • The number of developers with skills in Ethereum is far lower than Ethereum’s proponents claim and is orders of magnitude smaller than the programming language ecosystems supporting Hyperledger and Corda
  • The core ‘engine’ of Ethereum, the EVM, has been publicly disowned by the communities that spawned it and the platform is being expensively rewritten, yet enterprise Ethereum vendors continue to push tools based on this dead-end into unsuspecting businesses.
  • And the only remaining plausible argument for using Ethereum in the enterprise, that it somehow makes it easier to secure your network by ‘anchoring’ into the public network, has been shown by the Ethereum Classic debacle to be false.

Be in no doubt: blockchain for the enterprise is real and it is here to stay. But if you’re doing it on Ethereum, you’re doing it wrong.


[Update 2019-01-14 Reworded subtitle to clarify I’m making a broader point about probabilistic finality]

Corda: Open Source Community Update

The Corda open source community is getting big… it’s time for a dedicated corda-dev mailing list, a co-maintainer for the project, a refreshed whitepaper, expanded contribution guidelines, and more..!


It feels like Corda took on board some rocket fuel over the last few months. Corda’s open source community is now getting so big and growing so fast that it’s just not possible to keep up with everything any more — a nice problem to have, of course. And I think this is a sign that we’re reaching a tipping point as an industry as people make their choices and the enterprise blockchain platforms consolidate down to what I think we’ll come to describe as “the big three”.

Read the rest of this post over at the Corda Medium blog…

Introducing the Corda Technical Advisory Council

I’m delighted to announce the formation of the Corda Technical Advisory Council (the Corda TAC). This is a group of technical leaders in our community — who most of you know well — who have volunteered to commit their time over and above their existing contributions to the Corda ecosystem to provide advice and guidance to the Corda maintainers.

Members of the TAC are invited by the maintainers of the Corda open source project (Mike and Joel) and will change over time — the inaugural members are listed below. If you’re also interested in contributing to the TAC, please do let us know — most usefully through your technical leadership and contribution to the ecosystem!

Read the rest of this post over at the Corda medium blog…!