Why all the Web3 Hate?

Crypto is rebranding itself as ‘web3’ and the mainstream tech community don’t like it one bit. But they’re missing the point: for good or ill, crypto’s mission to redefine finance is where the constructive feedback and critical thought should be focused, not a strawman about building a decentralised Facebook.

Ryan Selkis (@twobitidiot) made an interesting statement the other day:

Why the ‘viscerally negative’ reaction to ‘web3’?

It’s possible Selkis is referring to negative reactions from within crypto. But there’s also no shortage of pushback from elsewhere. Stephen Diehl’s ‘take no prisoners’ posts are a good place to start. Or you could look here, here, here, here or here.

The pushback is occurring along a wide front but it’s important to note that Selkis specifically references ‘web3’, not ‘DeFi’ and not ‘crypto’. And that’s important. After all, there has never been a shortage of critics of permissionless blockchains over the years, including me at times. And plenty of people feel queasy at what they see as rampant speculation and fraud in pockets of that community. But Selkis is right that, in the last few weeks, it feels like the nature, and volume, of the criticism has changed, perhaps decisively.  

Why? What was the trigger?

If I were to ask Diehl, I suspect he’d point me to a tweet like this one:

But it’s not as if Diehl has only just started criticising crypto this week. So why is his message only really being heard now? Why not three years ago? It seems like all the hate started to resonate and coalesce just last month. 

Why all the hate now?

The reality is that most technologists at most firms are using regular technology to solve regular business problems. So those in the blockchain space probably overestimate the extent to which the rest of the tech industry is paying any attention at all. That’s definitely the case for the ‘permissioned’ blockchain ecosystem I inhabit and, contrary to what you’d think from watching the Twitter echo-chamber, I suspect it’s true for the permissionless wild-west world of crypto too.

To the extent mainstream technologists pay any attention to the permissionless cryptocurrency world at all, it’s through the lens of ‘other’. I think the general thought process is this:

“There’s that thing happening over there. Some of the people seem to be getting very rich. They certainly make a lot of noise. Some of them seem to be a bit infra dig, some are probably scammers. Maybe I’m a bit annoyed at not having got in on it when there was money to be made. And my gut feel as a technologist is that some of the claims just don’t stack up. But, whatever…. If I argued with everybody I thought was wrong on the internet, I’d never get anything done. It doesn’t affect me or my work. So I’ll just ignore it and get on with my life.”

That, I think, explains the ‘traditional’ tech world’s view of blockchains until a few weeks ago.

And then somebody – in either the most brilliant rebranding in history or a truly insane and hubristic act of overreach – decided that building a new financial system wasn’t enough, and that permissionless blockchains were somehow also going to be the solution to the dominance of the large tech firms.

Hey… if you can reinvent money, why not also fix the world’s political system and broken social structures too?! So somebody dusted off Gavin Wood’s old ‘Web 3.0’ thesis and announced to the world that the future of the world wide web was the public permissionless blockchain tech stack.

Oh dear…

Suddenly, as in literally overnight, the calculation being made by normal technologists doing normal work changed. The weirdos doing crypto stuff were no longer an interesting curiosity. They were now loudly and aggressively stating that their ideas and their technology were what all those normal developers and firms were going to be using in the future.

Oh, and that it was their tokens that everybody else would have to buy in order to participate.

Now, if you responded every time some crazies on the internet said they were coming for you, you’d go mad. But when they’re as well-funded, vocal and – yes – influential as the crypto community, one does feel a need to react.

So, suddenly, anybody in a leadership position in the existing tech world had lost any ability to remain neutral. Anybody who did not believe that this was the correct direction of travel felt they had an obligation to say so.

And, guess what? It turns out there are a lot of people who don’t believe that public permissionless blockchains have anything to contribute to the gnarly societal problems thrown up by exposing the human race to global social media for the first time in our history as a species.

They were perfectly fine to sit on the sidelines whilst it didn’t affect them.  But, now that it does, they’ve come off the fence, and not just about the web3 angle, but about the whole edifice.

So it’s entirely unsurprising that the ‘cryptocurrencies and decentralised finance are the future of the web’ meme has run into such violent opposition!

It’s as if the regular tech world was entirely happy for crypto to plough its own furrow as long as it was well away from anything important, as they saw it. But, once their territory had been invaded, they had no choice but to fight back. And fighting back is what they’re doing.

The mystery to me is why anybody in the permissionless space is surprised?!

Perhaps they’re not surprised. Maybe some of them quite like it. After all, as the saying goes, ‘then they fight you’ is only one step from ‘and then you win’.

But I can’t help thinking some of the critics we’re hearing from today are missing the point. There is a deep criticism we could make, but it’s nothing to do with ‘web3’.

Maybe this is what Chris Dixon was thinking of when he tweeted this:

So in the remainder of this piece I’ll sketch out what I think a valuable critique might look like. But first a short interlude.

Interlude: never annoy a pedant

There’s a fun alternative explanation for what’s going on that I’d regret not sharing.

It’s possible that the entire backlash against the ‘web3’ movement stems from the fact that technologists are pathologically pedantic, and those advocating web3 have misunderstood what Web 2.0 was! 

Web 2.0 had nothing to do with Facebook, Google and Twitter’s dominance. Web 2.0 was all about architecture: in particular, the emergence of Ajax techniques and the API-level integrations between sites, enabled by the widespread adoption of REST, that turned out to be needed to make it all work. This is basically the point that Tim O’Reilly recently made.

But the fact that the web3 narrative doesn’t acknowledge this is the kind of thing that can really annoy pedantic people. Somebody literally is wrong on the internet!

So don’t discount the possibility that the web3 blowback is caused by nothing more than a few technologists getting really upset that some people don’t know their history…

Never annoy a pedant.

The case for the defence? The real critique?

If permissionless blockchains are not the future of social media, does that mean, as Diehl argues, that they are worthless, parasitic, negative-externality generators? Maybe. But it’s not obvious to me that this is the case.

And I write this as somebody whose day job at R3 is building and selling solutions based on private or permissioned versions of this technology. The permissionless blockchain world do not usually see me as a friend.

But if one strips away all the hype and complexity, there’s a very simple story one can tell about permissionless crypto. It goes like this:

First, there was a business requirement: Satoshi set out to implement a system of digital cash that is censorship resistant.

The entire architecture of Bitcoin emerges from that requirement. I make no value judgement as to whether the requirement is legitimate.  But “censorship-resistant digital cash” is the business problem for which Bitcoin is the solution.

But that was just the starting point. The explanation for the present Decentralised Finance scene requires a couple more steps.

First, censorship resistance turns out to require a system that is permissionless. After all, if you need somebody’s permission to use the system then in what way is it censorship resistant?

And permissionless platforms for the exchange of value turn out to enable more than just digital payments. Witness the emergence of projects, primarily on Ethereum, offering lending, trading, financial derivatives, fundraising and more. It’s as if everything that the investment banking industry spent the twentieth century building is being rebuilt in the permissionless crypto sphere. And, with the emergence of stablecoins, there isn’t really anything you can do in traditional finance that you can’t, in principle, do in the crypto world.

And so it’s not hard to imagine most of what you can do in the regular financial system being replicated, for good or ill, in the crypto space, but with two crucial differences.

  • The first difference is the unfathomably greater complexity. And, for Bitcoin and Ethereum, monumentally greater energy consumption.
    • That’s what permissionlessness costs
  • And the second difference is the almost complete lack of regulation. 
    • That’s what permissionlessness means

I’m 100% with Diehl that there is an argument to be made that either of those points could cause many people to want to steer well clear. Either could indeed be reasons to want to burn the whole thing to the ground.

But let’s suspend our moral reasoning for just a little while longer, and imagine we fast-forward a few years. What we could have on our hands is a vast parallel financial system where AML, KYC and CTF rules are not applied. A system where no investor protection rules apply. A system where no accredited investor rules are in force. A world, in other words, that looks just like the existing world would look if we simply woke up one day and the last fifty years of financial regulation had been rolled back.

What happens then?

The debate usually ends up with one side saying ‘the regulators’ will intervene, and the other side saying that the nature of the technology means they can’t. But a more interesting possibility is to consider what happens if ‘the regulators’ choose not to intervene.

We know that rolling back regulation is devilishly difficult. Who would vote for a sensible reform of AML rules if they feared being labelled a ‘terrorist sympathiser’ by an opponent with a vested interest in the status quo? This is why regulations are almost always like ratchets: more and more get added. Few are ever removed.

However, societies have an ‘antidote’ to this problem, which is to simply let old regulations become irrelevant. Witness the rise of the Money Market Fund in the US in the 1970s. Regulation said that banks couldn’t pay interest on current accounts. Inflation was sky high. It was too hard to change the regulation. So smart bankers created the money market fund instead. The regulation was simply made irrelevant. And the establishment just kind of accepted it. The law was, in effect, changed by the creation of facts on the ground rather than through the legislative process.

It’s entirely possible the same thing could be playing out in finance, in plain sight. There are few people who would argue that the present state of financial regulation is in any way optimal. But nor is there any reasonable path to reforming it. So we could find ourselves in a few years in a situation where the present system is generally acknowledged to be broken and the only plausible alternative is… the one the crypto community has built.

Diehl’s argument is that this is terrifying. And this should give pause to anybody in the ‘mainstream’ who believes the present system is broken. Where’s your alternative model? What’s your proposal for how we transition to it?

What happens if we reach a point where consumers find the new system so much more convenient that governments don’t dare reimpose the old rules, to avoid incurring the wrath of their voters?  And might ‘the regulators’ by that point actually be quietly pleased that they had, in effect, a day zero from which to start again?

If the idea that crypto is the future of finance excites you, you’re probably already out there building. (Sorry, buidling). But if it terrifies you, why are you wasting your time arguing against a web3 strawman argument about imaginary clones of Facebook when there’s a far bigger picture to focus on?

After all, isn’t it most likely that permissionless crypto’s best chance of success is with the problem for which it was actually designed?

If so, I’d humbly suggest that this is where the fine minds presently arguing against the web3 strawman should be spending their time.

Viral Vector Vaccines – More Fun Things I’ve Learned

In this post, I shared what I had learned about how the AstraZeneca ‘viral vector’ vaccine worked and how endlessly fascinating I found it. But I was struggling to understand why it takes so long for the immune system to ‘learn’ about the new threat that the vaccine is trying to teach it about… why it seems to take “weeks” to gain any protection. I still don’t understand but I’ve learned a few more things that others might find interesting too!

Recap of the basic idea

The basic idea behind Viral Vector vaccines such as the AstraZeneca and Johnson & Johnson products is that a harmless virus is genetically modified so that, if it were to infect a human cell, the cell would start producing protein fragments that look a lot like the spike proteins on real Covid viruses. These ‘fake’ spikes become visible to your immune system, which mounts a response. Thus, if you subsequently become infected with the ‘real’ Covid, your immune system is already primed to destroy it.

Do those fifty billion viruses replicate once inside me? Answer: no

When I read that the vaccine relies on injecting a harmless ‘carrier’ virus, my immediate thought was: “I wonder if that virus is able to replicate like normal viruses do?” I saw on my ‘vaccine record’ that there were fifty billion copies of it in my dose, which made me suspect not… after all, why would you need to inject so many if each one had the ability to replicate? The human body only has about 15 trillion cells, so 50 billion viruses is enough for one in 300 cells! Surely you’d need far fewer if each one could trigger the creation of many many copies?
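Just to spell out that back-of-envelope ratio, using the post’s own figures (fifty billion particles in a dose, roughly fifteen trillion cells in a body):

$$\frac{15 \times 10^{12}\ \text{cells in the body}}{50 \times 10^{9}\ \text{particles in the dose}} = 300\ \text{cells per particle}$$

i.e. roughly one injected particle for every 300 cells.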

Turns out I was right about this: the modified ‘adenovirus’ that is injected into me is unable to replicate: those fifty billion copies are the only ones I’ll ever have as a result of that shot.

This infographic, from The Royal Society of Chemistry, has a nice explainer on CompoundChem:

As that article explains, the use of a non-replicating virus was a deliberate choice, presumably on safety and public-acceptance grounds: it would have been possible to design a vaccine where the virus could replicate, and some vaccines for other diseases do work that way. The advantage of the latter, I guess, is that far fewer copies would have had to be injected to start with. It’s interesting to speculate (based on absolutely zero knowledge of the science or of where the bottleneck in production actually is…) whether vaccine rollouts could have been quicker if they’d been based on replicating viruses. Would it have meant a given quantity of production could have been spread more broadly?

Note: I’m still not yet clear on what happens to my cells that are infected by one of these non-replicating vector viruses… are these cells then destroyed by my immune system because they present the spike protein? Or are they allowed to live? Can they divide? If so, do their daughters also produce the spike protein?

What happens if my body already has antibodies to the vector virus?

I made a throwaway comment in my last post about how the ‘carrier’ virus has to be carefully selected: if you’ve been exposed to it – or something like it – in the past, your body will attack the virus particles before they have a chance to infect your cells… and so you’ll produce no (or fewer) spike proteins and you’ll presumably develop weaker protection against Covid than would otherwise have been the case. This piece in The Scientist explains more: this is why the AstraZeneca vaccine uses a modified chimp virus – it’s far less likely the average human has seen it before. And it points out that there’s a downstream consequence: that virus can’t now be used for a malaria vaccine. You really do have to use a different vector for each vaccine.

There were a few other interesting tidbits in that article. It was the first time I’d seen the argument that one possible reason for milder side-effects from the AZ vaccine amongst older people is that the older you are, the more pathogens you’ve been exposed to, and so the greater the chance your immune system has seen something like the vector virus before. And so relatively more of the fifty billion particles will be destroyed before entering a cell. So I’m even more pleased about my feverish sleepless night now!

Why are vaccines injected into muscle? How long does it take for the virus particles to get to work?

The question that triggered my attempts to learn about this stuff was: why does it take weeks for me to gain any meaningful protection from the vaccine when it’s clear that my body was fully responding to the onslaught after barely twelve hours?

It got me wondering whether the mechanism of injection had anything to do with it. For example, if the vector virus is injected into a muscle, how long does it take for all fifty billion virus particles to get to work? And where do they operate? In that muscle? Or do they circulate round the body?

Was the first night reaction in response to all fifty billion viruses going to work at once? Or were only a few of them at work that night and it wasn’t yet enough to persuade my immune system that this is something it should lay down ‘memories’ about? Perhaps it’s going to take a few more weeks until they’ve all infected a cell and enough spike proteins have been produced to get my immune system finally to say “Fine! You win! Stop already! I’ll file this one with all the others on the ‘things we should be on alert for in the future’ shelf. Now stop bothering me!!”?

I was surprised how little definitive information there is about this sort of stuff online. I guess because it’s ‘obvious’ to medical professionals, and they don’t learn their trade from quick skims of Wikipedia and Quora. (I hope).

From what I can tell, the main reason vaccines are injected into muscle is convenience: the shoulder is at just the right height for a clinician to reach without much effort, it’s an easy target to hit, there’s no need to mess around trying to find a vein, and the risk of complications (eg inflammation of a vein or whatnot) is lower. This literature review makes for an interesting skim.

I’d also wondered if injection into muscle, rather than veins, results in the vaccine having a localised effect… eg is it only my shoulder muscle that churns out the spike proteins? Turns out the answer to that is no: muscle is chosen over, say, fat precisely because it is rich in blood vessels. The vaccine designers want the vaccine vector virus to enter the bloodstream and rush round the body.

And I’d wondered if injection into muscle was in order to create a ‘slow drip drip’ of vaccine into the bloodstream over time and perhaps that would explain why it took so long for the body to develop full immunity. Turns out the answer to that is also ‘no’. It seems that injections into the deltoid muscle (shoulder) are absorbed quicker than those into other commonly used injection sites. Implication: if the manufacturers wanted slow absorption, they wouldn’t be telling doctors to stab patients in the shoulder!

So when I bring all that together, I still remain confused… injecting the vaccine into my shoulder results in quick absorption, and my body was in full ‘fightback’ mode after twelve hours, so it’s hard to imagine there was any meaningful amount of vaccine lingering in my shoulder after, say, 24 hours… it must, by then, all surely have been whizzing round my veins and happily infecting my cells.

So what gives? Why does it take weeks after billions of my cells have been turned into zombie spike protein factories and my immune system has gone on a frenzied counterattack for me to have a meaningful level of ‘protection’ against Covid? (I’m ignoring the relevance of the ‘second dose’ here for simplicity)

I guess the answer must be ‘because that’s just how the immune system works!’

The mathematics of COVID vaccines

My AZ vaccine dose contained FIFTY BILLION virus particles. Wow!

I was fortunate to receive my first COVID vaccine dose yesterday; I received the AstraZeneca product and all seemed to go well. As seems to be common, it made me feel feverish overnight and I slept badly. This was reassuring in a way as it made me feel like it was ‘working’… it was exciting, in a strange sort of way, to imagine the billions of ‘virus fragments’ racing round my body infecting my cells and turning them into zombie spike protein factories!

However, my inability to sleep also made me realise that I had absolutely no idea how it really worked. So as I was struggling to sleep, I started reading more and more articles about these ‘viral vector’ vaccines. They really are quite fascinating. And these articles did answer my first wave of questions… but they also then triggered more questions, to which I couldn’t find any answers at all. I’m not sure I’m going to be particularly productive at work today so thought why not write down what I discovered and list out all my questions. Perhaps my readers know the answers?

Viral Vector Vaccines: Key Concepts

Most descriptions I found online were hopelessly confused or went into so much extraneous detail about various immune system cell types that they were useless in imparting any real intuition. However, what I did seem to discover, at a very high level, was something like the following:

  • A harmless virus is used as the starting point.
    • Interesting detail: the virus needs to be somewhat obscure to reduce the risk that patients’ bodies have been exposed to it before and thus already have antibodies that would destroy it before it’s had a chance to do its magic
  • It is then genetically engineered so that when it invades a human cell it triggers the cell to start churning out chunks of protein (the famous spike protein) that look a bit like Covid-19 viruses.
  • These spikes eventually become visible to the immune system, which mounts a vigorous response and, in so doing, learns to be on the lookout for the same thing in the future.

In essence, we infect ourselves with a harmless virus that causes our body’s own cells to start churning out little proteins that look similar enough to COVID that our body will be on high alert should a real COVID infection try to take hold.

Or, at least, I think that’s what’s going on.

So many extraneous details

Now, most of the articles I read then go on to talk about things like the following:

  • Technical detail about how several little fragments then have to be assembled to make one ‘spike’.
    • Important but not really critical to understanding the concepts from what I can see
  • The role of different types of immune cells.
    • The only thing these kinds of articles taught me was that it’s clear that immunologists have no idea how the immune system works and their attempts at explaining it to lay readers just make this painfully obvious 🙂
  • Endless ‘reassuring’ paragraphs about safety.
    • I understand why they do this but it is somewhat depressing that every article has to be written this way, and I can’t help thinking that it may even be counterproductive.

Once and done or an ongoing process?

However, I found the descriptions unsatisfactory in several ways, and maybe my readers know the answers.

The literature talks about how the genetically modified virus cannot replicate. I assume this is because the modification that causes infected cells to churn out spike proteins means that the cell isn’t churning out copies of the virus, as would normally happen? That would make sense if so.

And it would also explain why my ‘vaccination record’ revealed that my dose contained fifty billion viral particles! That’s one for every three hundred cells in my body! Truly mindblowing.

That said, I have no idea what a ‘viral particle’ is. Is that the same as a single copy of a virus?

It’s mindblowing to imagine that 0.5ml of liquid could contain fifty billion virus particles!

Anyhow, if the virus can’t replicate once inside my body, then the only modified virus particles that will ever infect my cells are the ones that were injected yesterday.

And so I guess the next question is: how long does it take for them to start invading my cells and turning them into zombie spike protein factories?

Well: the evidence of my own fever was that it was barely twelve hours before my entire body had put itself on a war footing. And in those twelve hours, presumably a lot had to happen. First, enough virus particles had to invade enough cells. And, secondly, those infected cells had to have started to churn out spike proteins in sufficient quantity to catch the attention of my immune system. The invasion must already have been underway before I left the medical centre!

And I guess a related question is: what happens after those fifty billion viruses were injected into my right deltoid muscle? Do they just start invading cells in that region and so my shoulder muscle becomes my body’s spike protein factory? Or do they migrate all over my body and enlist all my different cell types in some sort of collective endeavour? How long does this migration take if so? Is this what explains the time lag from “body is on a war footing after twelve hours” to “you ain’t got no protection for at least three weeks”? Are the majority of the particles floating around for days or weeks before invading a cell? Or is the full invasion done within hours of injection?

Put another way: if the only vaccine virus my body will ever see are the fifty billion copies that were injected yesterday and if after twenty four hours my body already seems back to normal from a fever perspective, what is actually going on over the next few weeks?

I did wonder if perhaps there is some reproduction going on… not of the virus, but of the cells that have been invaded. That is: imagine the vaccine virus invades one of my cells and forces it to start churning out spike proteins. Presumably that cell will itself periodically divide. What happens here? Does the cell get killed by the immune system before it has a chance to replicate (because it’s presenting the spike protein)? Or do many of these cells actually replicate and hence create daughter cells? Do those daughter cells also churn out spike proteins? That would explain a multi-generational, multi-week process, I guess? But it wouldn’t be consistent with statements that the vaccine doesn’t persist in your DNA.

Or is the lag all to do with the immune system itself, and there’s some process that takes weeks to transition the body from “wow… we now know there’s a new enemy to be on the look out for” to “we’re now completely ready for it should it ever strike”?

As you can probably tell, I’m hopelessly confused, but also fascinated. If anybody can point me to explainers about the mathematics of vaccination, I’d be enormously grateful. For example:

  • How many of the 50 billion virus particles are typically expected to successfully enter cells?
  • How long does this invasion process take? Hours? Weeks?
  • Is there any replication going on (either of the virus or of the cells they infect)?
  • Do the virus particles diffuse across the body from the injection site? If so, how? And how long does it take?
  • Or is the time lag all to do with the immune system’s own processes?

I didn’t expect to end up learning (or learning I didn’t know) so much!

UPDATE: Follow-up post here.

A brief history of middleware and why it matters today

 

This blog post is a lightly edited reproduction of a series of tweets I wrote recently:
  • This tweetstorm is a mini history of enterprise middleware. It argues that the problems we solved for firms 10-20 years ago are ones we can now solve for markets today. This blog post elaborates on a worked example with @Cordablockchain (1/29)
  • I sometimes take perverse pleasure in annoying my colleagues by using analogies from ancient enterprise software segments to explain what’s going on with enterprise blockchains today… But why should they be the only ones that suffer? Now you can too! (2/29)
  • Back in the late 90s and early 2000s, people began to notice that big companies in the world had a problem: they’d built or installed dozens or hundreds of applications on which they ran their businesses… and none of these systems talked to each other properly… (3/29)
  • IT systems in firms back then were all out of sync… armies of people were re-keying information left, right and centre. A total mess and colossal expense (4/29)
  • The solution to this problem began modestly, with products like Tibco Rendezvous and IBM MQSeries. This was software that sat in the _middle_ connecting applications to each other… if something interesting happened in one application it would be forwarded to the other one (5/29)
  • Tibco Rendezvous and IBM MQSeries were like “email for machines”. No more rekeying. A new industry began to take shape: “enterprise middleware”. It may seem quaint now but in the early 2000s, this industry was HOT. (6/29)
  • But sometimes the formats of data in systems were different. So you needed to transform it. Or you had to use some intelligence to figure out where to route any particular piece of data. Enterprise Application Integration was born: message routing and transformation. (7/29)
  • Fast forward a few years and we had “Enterprise Service Buses” (ESBs – the new name for EAI) and Service Oriented Architecture (SOA). Now, OK… SOA was a dead end and part of the reason middleware has a bad name today in some quarters. (8/29)
  • But thanks to Tibco, IBM, CrossWorlds, Mercator, SeeBeyond and literally dozens of other firms, middleware was transforming the efficiency of pretty much every big firm on the planet. “Middleware” gets a bad name today but the impact of ESB/EAI/MQ technologies was profound (9/29)
  • Some vendors then took it even further and realised that what all these parcels of data flying around represented were steps in _business processes_. And these business processes invariably included steps performed by systems _and_ people. (10/29)
  • The worlds of management consulting (“business process re-engineering”) and enterprise software began to converge and a new segment took shape: Business Process Management. (11/29)
  • The management consultants helped clients figure out where the inefficiencies in their processes were, and the technologists provided the software to automate them away. (12/29)
  • In other words, BPM was just fancy-speak for “figure out all the routine things that happen in the firm, automate those that can be automated, make sure the information flows where it should, when it should, and put some monitoring and management around the humans” (13/29)
  • Unfortunately, Business Process Management was often oversold – the tech was mostly just not up to it at that point – and its reputation is still somewhat tarnished (another reason “middleware” is a dirty word!) (14/29)
  • But, even given these mis-steps, the arc of progress from “systems that can barely talk to each other” to “systems and people that are orchestrated to achieve an optimised business outcome” was truly astounding. (15/29)
  • Anyway… the point is: this was mostly happening at the _level of the firm_. The effect of the enterprise middleware revolution was to help individual firms optimise the hell out of themselves. (16/29)
  • But few back then even thought about the markets in which those firms operated. How could we have? None of the software was designed to do anything other than join together systems deployed in the same IT estate. (17/29)
  • So now let’s fast forward to today. Firms are at the end of their middleware-focused optimisation journeys and are embarking on the next, as they migrate to the cloud. But the question of inefficiencies between firms remains open. (18/29)
  • Take the most trivial example in payments: “I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?” (19/29)
  • How can we be almost a fifth of the way through the 21st century and lost payments are still a daily occurrence? How can it be that if you and I agree that I owe you some money, we can still get into such a mess when I actually try to pay you? (20/29)
  • As I argue in this post, the problems we solved for firms over the last two decades within their four walls are precisely the ones that are still making inter-firm business so inefficient (21/29)
  • What stopped us solving this 20 years ago with the emerging “B2B” tech? We lacked easy inter-firm routing between legal entities (the internet was scary…), broad availability of crypto techniques to protect data, orchestration of workflows between firms without a central controller, etc (22/29)
  • But we also hadn’t yet realised: it’s not enough merely to move data. You need to agree how it will be processed and what it means. This was a Bitcoin insight applied to the enterprise and my colleague @jwgcarlyle drew this seminal diagram that captures it so well. (23/29)
  • And my point is that the journey individual firms went on: messaging… integration… orchestration… process optimisation – is now a journey that entire markets can go on. The problems we couldn’t solve back then are ones we now can solve. (24/29)
  • What has changed? Lazy answer: “enterprise blockchain”… lazy because not all ent blockchains are designed for same thing, plus the enabling tech and environment (maturation of crypto techniques, consensus algorithms, emergence of industry consortia, etc) is not all new (25/29)
  • But the explosion of interest in blockchain technology was a catalyst and made us realise that maybe we could move to common data processing and not just data sharing at the level of markets and, in so doing, utterly transform them for the better. (26/29)
  • In my blog post I make this idea concrete by talking about something called the Corda Settler. In truth, it is an early – and modest – example of this… a small business process optimisation (27/29)
  • The Corda Settler optimisation is simple: instead of asking the payee to confirm receipt after the payment is sent, the parties pre-commit, before the payment is even made, to what proof will convince them it was done – all enabled through secure inter-legal-entity-level communication and workflow (28/29)
  • But the Settler is also profound… because it’s a sign the touchpaper has truly been lit on the next middleware revolution… but this time focused on entire markets, not just individual firms (29/29)

 

Process Improvement and Blockchain: A Payments Example

“I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?”


The cheque is in the post…

How can it be that we’re almost a fifth of the way through the twenty-first century and this is still a daily occurrence? How can we be in a world where, even if you and I agree that I owe you some money, it’s still an essentially manual, goodwill-based process to shepherd that payment through to completion?!

The “Settler pattern” reduces opportunities for error and dispute in any payments process – and does so by changing the process

This was the problem we set out to solve when we built the Corda Settler. And I was reminded about this when I overheard some colleagues discussing it the other day. One of them wondered why we don’t include the recipient of a payment in the set of parties that must agree that a payment has actually been made. Isn’t that a bit of an oversight?!


The Corda Settler pattern works by moving all possible sources of disagreement in a payment process to the start

As I sketched out the answer, I realised I was also describing some concepts from the distant past… from my days in the middleware industry. In particular, it reminded me of when I used to work on Business Process Management solutions.

And there’s a really important insight from those days that explains why, despite all the stupid claims being made about the magical powers of blockchains and the justifiable cynicism in many quarters, those of us solving customer problems with Corda and some other enterprise-focused blockchain platforms are doing something a little bit different… and its impact is going to surprise a lot of people.

Now… I was in two minds about writing this blog post because words like “middleware” and “business process management” are guaranteed to send most readers to the “close tab” button… Indeed, I fear I am a figure of fun amongst some of my R3 colleagues… what on earth is our CTO – our CTO of all people! – doing talking about boring concepts from twenty years ago?!

But, to be fair, I get laughed at in the office by pretty much everybody some days… especially those when I describe Corda as “like an application server but one where you deploy it for a whole market, not just a single firm” or when I say “it’s like middleware for optimising a whole industry, not just one company.”

“Application Servers? Middleware? You’re a dinosaur! It’s all about micro-services and cloud and acronyms you can’t even spell these days, Richard… Get with the programme, Grandad!”

Anyway… the Corda Settler discussion reminded me I had come up with yet another way to send my colleagues round the bend…  because I realised a good way to explain what we’re building with Corda – and enterprise blockchains in general – isn’t just “industry level middleware” or “next generation application servers”… it’s also a new generation of Business Process Management platform…  and many successful projects in this space are actually disguised Industry Process Re-Engineering exercises.

Assuming you haven’t already fallen asleep, here’s what I mean.

Enterprise Blockchains like Corda enable entire markets to move to shared processes

Think back to the promise we’re making with enterprise blockchains and what motivated the design of Corda:

“Imagine if we could apply the lessons of Bitcoin and other cryptocurrencies – how they keep disparate parties in sync about facts they care about – to the world of regular business… imagine if we could bring people who want to transact with each other to a state where they are in consensus about their contracts and trades and agreements… where we knew for sure that What You See Is What I See – WYSIWIS. Think of how much cost we could eliminate through fewer breaks, fewer reconciliation failures and greater data quality… and how much more business we could do together when we can move at pace because we can trust our information”

And that’s exactly what we’ve built. But… and sorry if this shocks anybody… Corda is not based on magic spells and pixie dust…  Instead, it works in part because we drive everybody who uses it to a far greater degree of commonality.

Because if you’re going to move from a world where everybody builds and runs their own distinct applications, which are endlessly out of sync, to one where everybody is using a shared market-level application, what you’re actually saying is: these parties have agreed in some way to align their shared business processes, as embodied in this new shared application.  And when you look at it through that lens, it’s hardly surprising that this approach would drive down deviations and errors…!

I mean: we’re documenting, in deterministically executed code, for each fact we jointly care about, who can update which records, when and in what ways. And to do that we have to identify and ruthlessly eliminate all the places where disagreements can enter the process.
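To make that concrete, here’s a minimal sketch – plain Java, with invented names, and emphatically not Corda’s actual API – of what ‘the rules, written down in deterministically executed code’ might look like for one shared fact:

```java
// Illustrative only: a hand-rolled stand-in for a shared, deterministically
// executed rule set. Names are invented for this post; this is not Corda's API.
import java.math.BigDecimal;

final class SharedInvoiceRules {

    record Invoice(String issuer, String payer, BigDecimal amount, boolean paid) {}

    // Every participant runs exactly this code against a proposed update,
    // so everyone reaches the same verdict about whether it is allowed.
    static void verifyUpdate(Invoice before, Invoice after, String requestedBy) {
        if (!before.issuer().equals(after.issuer()) || !before.payer().equals(after.payer()))
            throw new IllegalStateException("Issuer and payer can never be changed");

        if (after.amount().compareTo(before.amount()) != 0 && !requestedBy.equals(before.issuer()))
            throw new IllegalStateException("Only the issuer may amend the amount");

        if (before.paid() && !after.paid())
            throw new IllegalStateException("A paid invoice can never become unpaid");
    }
}
```

Run the same checks, over the same data, everywhere, and a disallowed update is rejected identically by every party – which is all ‘consensus about who can update which records’ really means here.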

Because if we know we have eliminated all areas of ambiguity, doubt and disagreement up-front, then we can be sure the rest of our work will execute like a train on rails.

Just like trains, if two of them start in the same place and follow the same track… they’ll end up in the same place at the end.

Reducing friction in payments: a worked example

So, for payments, what are those things – the things that, if we don’t get them right up front, can lead to the “I haven’t received your payment” saga I outlined at the start of the post?

Well, there are the obvious ones, like:

  • How much needs to be paid?
  • By whom?
  • To whom?
  • In what kind of money/asset?

There are trickier ones such as:

  • Over what settlement rail should I pay?
  • To which destination must we pay the money?
  • With what reference information?

These are trickier since there is probably a bit of automated negotiation that needs to happen at that point… we need to find a network common to us both… and the format of the routing strings is different for each, and so forth. But if you have the ability to manage a back-and-forth negotiation (as Corda does, with the Flow Framework) then it’s pretty simple.
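Pulling those two lists together, the terms that have to be pinned down up front look something like the following sketch. This is plain Java with invented field names, purely for illustration – it is not the Corda Settler’s actual data model:

```java
// Illustrative sketch of the terms two parties might agree up front, before any
// money moves. Field names are invented for this post, not taken from Corda.
import java.math.BigDecimal;
import java.util.Currency;

record AgreedPaymentTerms(
        String payer,               // by whom?
        String payee,               // to whom?
        BigDecimal amount,          // how much needs to be paid?
        Currency currency,          // in what kind of money/asset?
        String settlementRail,      // the rail both parties support, found by negotiation
        String destinationAccount,  // exactly where the money must land
        String paymentReference     // the reference the payee expects to see
) {}
```

If every one of those fields is agreed before anything is sent, most of the classic “did you get it?” failure modes never get a chance to arise.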

But that still leaves a problem… even if we get all of these things right, we’re still left hanging at the end. Because even if I have paid you the right amount to the right account at the right time and with the right reference, I don’t know that you’ve received it.

And so there’s always that little bit of doubt. Until you’ve acknowledged it you could always turn around in the future and play annoying games with me by claiming not to have received it and force us into dispute… and we’d be back to square one! We’d be in exactly the same position as before: parties who are not in consensus and are instead seeing different information.

And it struck us as a bit mad to be building blockchain solutions that kept everybody in sync about really complicated business processes in multiple industries, only for the prize to be stolen from our grasp at the last moment… when we discover that the payment – invariably the thing that needs to happen at the end of pretty much every process – hasn’t actually been acknowledged.

It would be as if our carefully tuned train had jumped off the rails and crashed down the embankment just at the last moment. Calamity!

So we added a crucial extra step when we designed the Corda Settler. We said: not only do you need to agree on all the stuff above, you also need to agree: what will the recipient accept from the sender as irrefutable proof that the payment has been made?

And with one bound, we were free!

Because we can now… wait for it… re-engineer the payment process. We can eliminate the need for the recipient to acknowledge receipt. Because if the sender can secure the proof that the recipient has already said they will accept irrefutably then there is no need to actually ask them… simply presenting them with the proof is enough, by prior agreement.

And this proof may be a digital signature from the recipient bank, or an SPV proof from the Bitcoin network that a particular transaction is buried under sufficient work… or whatever the relevant payment network’s standard of evidence actually is.

But the key point is: we’ve agreed it all up front and made it the sender’s problem… because they have the incentive to mark the payment as “done”. As opposed to today, where it’s the recipient who must confirm receipt but has no incentive to do so, and may have an incentive to delay or lie.
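Building on the AgreedPaymentTerms sketch above, the extra step amounts to one more thing being agreed before the payment is made: the test the recipient has pre-committed to accept as proof. Again, this is an illustrative sketch with invented names, not the Corda Settler’s actual API:

```java
// Illustrative sketch only (not the Corda Settler's API): the single addition is
// the proof test that the recipient agreed to before any money moved.
import java.util.function.Predicate;

record PaymentEvidence(String railTransactionId, byte[] proofBlob) {}

record SettlementAgreement(
        AgreedPaymentTerms terms,                  // everything from the earlier sketch
        Predicate<PaymentEvidence> acceptableProof // agreed up front by the recipient
) {}

final class Sender {
    // It is now the sender's job - and in the sender's interest - to gather the
    // evidence and mark the obligation settled. No acknowledgement is needed from
    // the recipient, because they already said exactly what proof they'd accept.
    static boolean settle(SettlementAgreement agreement, PaymentEvidence evidence) {
        return agreement.acceptableProof().test(evidence);
    }
}
```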

But, building on this notion of cryptographic proof of payment, the Corda Settler pattern has allowed us to identify a source of deviation in the payment process and move it from the end of the process, where it is annoying and expensive and makes everybody sad, to the start – and, in so doing, to keep the train on the rails.

And this approach is universal. Take SWIFT, for example. The innovations delivered with their gpi initiative are a perfect match for the payment process improvements enabled by the Settler pattern.

The APIs made available by Open Banking are also a great match to this approach.

Middleware for markets, Business Process Management for ecosystems, Application Servers for industries…!

And this is what I mean when I say platforms like Corda actually achieve some of their magic because they make it possible to make seemingly trivial improvements to inter-firm business processes and, in so doing, drive up levels of automation and consensus.

So this is why I sometimes say “Corda is middleware for markets”.

It’s as if the first sixty years of IT were all about optimising the operations of individual firms… and that the future of IT will be about optimising entire markets.

Busting the Myth of Public Blockchains for Business

It’s time to talk about transaction finality. Last week’s 51% attack demonstrates that Ethereum-style blockchains are not ready for business

A belief took hold amongst some of the tech community in 2018: “If you have an enterprise blockchain use-case you should build it on a platform based on Ethereum.”

The argument was well constructed and relied on several plausible-sounding claims, so it’s understandable that it seemed convincing. However, as 2018 unfolded, these claims began to be challenged. And as we enter 2019, the final remaining argument has been undermined by a public demonstration of how the lack of settlement finality in public blockchains such as Ethereum renders their immutability and security guarantees worthless for business.

In this piece, I will argue that it is now time to conclude that Ethereum’s core technologies are the wrong foundation upon which to build business blockchain solutions. My argument is: 1) the core Ethereum technologies are due for abandonment, leaving businesses at risk of technology dead-ends, 2) the Ethereum developer skill-pool has been massively overstated and is in fact far tinier than that for the purpose-built business blockchains based on existing languages, and 3) the idea of building on Ethereum in order to securely ‘anchor’ private blockchains to a public chain is now discredited.

In short, business blockchain applications should be built on technologies designed for the enterprise, not Ethereum.

What was the argument for why businesses should build on Ethereum?

To understand how we reached this point as a community, it’s helpful to review the thinking that led here. Here’s how the argument for why businesses should build on Ethereum went:

  • “Go where the skills and innovation are: Ethereum has the largest community and the broadest availability of skills.”
  • “Use the tools that will best let you interoperate with the public chain: Even if you’re not using the public Ethereum network you should use platforms that are based on the EVM, and use languages like Solidity so you can inherit the innovation from the public chain and maximise the chances of interoperability in the future”
  • “Overcome the ‘weak’ security of private chains by ‘anchoring’ in the public chain: Public chains are more immutable than ‘insecure’ private networks and so you should ‘anchor’ your private transactions to prevent malicious parties rolling back your transactions behind your back.”

By the end of 2018, there was ample evidence to debunk the first two claims, but the third claim persisted. Indeed, this third claim, that a public blockchain such as Ethereum offers a degree of transaction confirmation permanence that is otherwise unobtainable, has been repeated over and over again, even as late as December 2018.

Until last week, that is, when a 51% attack against the Ethereum Classic network (the original Ethereum chain) demonstrated for real what we already knew in theory: that history on a public blockchain like Ethereum can be arbitrarily rewound, money double-spent and network participants defrauded.

The rest of this article will review each of the three claims above in depth to explain why they are incorrect and how that makes Ethereum – and Ethereum-based platforms – unsuitable for business. But it’s important to note that the purpose of this blog post is actually to make a positive message. Because the broader picture is actually one of success: Ethereum is proving to be a valuable tool for a wide range of isolated social and economic experiments. And plenty of blockchains purpose-built to solve business problems, such as Hyperledger and Corda, are live and are changing the world of commerce.

So my key message is that it’s the inappropriate application of Ethereum technologies to the unforgiving world of real business problems, for which it was not designed, that we need to guard against. These two worlds have very different requirements.

It’s time to declare in public what has been openly discussed in private: Ethereum is currently unsuited to the world of business and we should have the courage as a community to say so.

So let’s now review the arguments for using Ethereum in the enterprise, which have now been shown to be incorrect.

Claim 1: “Go where the skills and innovation are: Ethereum has the largest community and the broadest availability of skills.”

This argument starts well. For example, ConsenSys claim that the “Ethereum developer community” has 250,000 members, by which they presumably mean the number of people who can code using Solidity, the language in which almost all Ethereum apps are coded.

But when you scratch the surface, reality begins to intrude:

  • Hundreds of thousands of Solidity developers sounds like a big number until you realise that there are over a million developers with the knowledge to build applications for Hyperledger Fabric using the language Go and twelve million developers with the knowledge to build applications for Corda using Java. In the latter case, our experience shows that any competent Java developer can pick up the Corda library and be productive in a couple of days. This means the Hyperledger and Corda developer skill pools are at least one, maybe even two, orders of magnitude bigger, even using ConsenSys’s figures.
  • But we need to challenge ConsenSys’s figures, small as they now seem. This is because there is minimal evidence to support even the 250k figure. The claim seems to be based on looking at how many people have downloaded one of the development tools that pretty much every Ethereum developer has to use, and assuming half of them became Ethereum developers. But that methodology doesn’t work. To see why, let’s apply the same logic to the Java ecosystem to generate an estimate for how many developers there are and see if it matches the correct figure, twelve million. Now, we know that one tool for developing Java applications, IntelliJ, had almost twenty five million downloads in 2017 alone, and that product had barely ten percent of the huge and diverse market for Java development tools (Eclipse, Android Studio and NetBeans were all larger). This means we can estimate there were at least 250 million downloads of Java development tools in 2017, which would mean there must be over 125 million Java developers by ConsenSys’s logic. Except there aren’t… we know the correct number is about twelve million. The methodology overestimates by a factor of ten (the arithmetic is spelled out just after this list). So the true number of people with Ethereum skills is almost certainly much smaller than 250k; I would be surprised if it was even 50k – perhaps not even 10k – a rounding error in the world of developer communities. And the number of those who can write Solidity contracts securely, critical to avoiding another DAO-style bug, is smaller still.
  • And on top of this, we also need to add the huge productivity gains that come from being part of established ecosystems. For example, the range of development environments, debuggers, testing frameworks, profilers and libraries available for the Java ecosystem is staggeringly larger than that for the Ethereum and Solidity ecosystems.
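For clarity, here is that back-of-envelope check spelled out, using only the figures quoted in the list above:

$$25\,\text{million IntelliJ downloads} \times 10 \;(\text{IntelliJ} \approx 10\%\text{ of the tools market}) \times \tfrac{1}{2} \approx 125\,\text{million implied Java developers}$$

$$\frac{125\,\text{million implied}}{12\,\text{million actual}} \approx 10$$

So the “downloads divided by two” methodology overstates the Java developer population by roughly an order of magnitude, and there is no reason to believe it does any better for Ethereum.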

The reality is that the developer ecosystem and momentum is with the Hyperledger and Corda communities, not Ethereum. So it’s perhaps no surprise that the overwhelming majority of truly ground-breaking, successful enterprise blockchain deployments to date run on Hyperledger Fabric and Corda, not Ethereum.

Claim 2: “Use the tools that will best let you interoperate with the public chain: Even if you’re not using the public Ethereum network you should use platforms that are based on the Ethereum Virtual Machine (EVM) so you can inherit the ‘innovation’ from the public chain and maximise the chances of interoperability in the future”

This argument is more pernicious than the previous one. It says to developers: “even if you’ve correctly determined that a public Ethereum network is wrong for you, you should still use the Ethereum toolset for your private project.” It is an argument that plays on people’s deep fears: stick with the crowd; after all, you won’t be fired if you make the same mistake that everybody else made!

The problem is: as we demonstrated above, there is no crowd and the Ethereum community plans to throw all the current technology away in any case: the EVM is set for total replacement. The plan, “Ethereum 2.0”, is to build a new design from scratch.

So the world faces the possibility that, long after the public Ethereum community have moved on to something new, business leaders will wake up one day to discover critical parts of their business are running on technology that isn’t even being used any more for the purpose for which it was built. Talk about buyer’s remorse…

This might be OK if the Ethereum Virtual Machine was a sound technology but, as the team from Kadena documented, the EVM is “fundamentally unsafe”. The team at Aion independently reached a similar conclusion and have written eloquently about why they didn’t use the EVM and chose the Java ecosystem instead. And yet consultants, some from reputable firms, are pushing this technology hard into organisations that don’t always possess the technical expertise to realise the advice may not be appropriate.

Genuinely ground-breaking work is, of course, being done by some very talented and committed people in the Ethereum community on the public Ethereum network, but it is being done – and should continue to be done – safely away from the back offices of the businesses upon whose data integrity the world depends.

However, 2018 ended with one last, killer plank in the argument for why businesses should nevertheless build on Ethereum rather than on a platform like Hyperledger Fabric, Hyperledger Sawtooth or Corda.

And it was this last argument that was severely undermined last week.

Claim 3: “Overcome the ‘weak’ security of private chains by ‘anchoring’ in the public chain: Public chains are more immutable than insecure private networks and so you should ‘anchor’ your private transactions to prevent malicious parties rolling back your transactions behind your back.”

This argument was actually pretty clever. Here’s how it went:

  • ‘The security of public blockchains is “backed” by the work performed by billions of dollars worth of mining equipment and electricity. To reverse a “confirmed” transaction would be economically infeasible and, since only public blockchains use proof of work, only public blockchains can provide this “immutability” guarantee.’
  • ‘By contrast, blockchains that rely instead on identifiable parties to provide consensus cannot deliver this level of security and immutability; there is always the chance that parties could “collude” to reverse a transaction.’

And so, the proponents of Ethereum for the enterprise propose a clever idea: by all means, use a peer-reviewed fault-tolerant algorithm for your business transactions – you need rapid and final confirmation, after all.

But then, as an additional layer of safety, “anchor” a summary of your transactions in the public Ethereum network. The network that is massively more secure and resistant to mutation. Its proponents even claim this would provide ‘greater “proof of settlement finality”’ and that ‘any chance of counterparty disputes about membership is eliminated’.

This sounds perfect: the privacy, performance and settlement finality of a private chain and the security and immutability of a public chain!
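To make the mechanics concrete, here is a minimal sketch of what ‘anchoring’ amounts to – plain Java with invented names, not any vendor’s actual product – and of the check the whole scheme quietly depends on:

```java
// Illustrative sketch of the "anchoring" idea (invented names, not a real API).
// A private network periodically writes a hash of its ledger into a public
// chain, then treats the anchor as settled once it is buried deeply enough.
import java.util.Optional;

interface PublicChain {
    String submitTransaction(byte[] payload);          // returns a transaction id
    Optional<Integer> confirmationsFor(String txId);   // empty if the tx is no longer in the canonical chain
}

final class Anchorer {
    private static final int REQUIRED_CONFIRMATIONS = 30; // arbitrary threshold for this sketch

    static String anchor(PublicChain chain, byte[] privateLedgerHash) {
        return chain.submitTransaction(privateLedgerHash);
    }

    // The scheme rests on this staying true forever. A reorganisation deeper than
    // the anchor makes confirmationsFor() shrink or vanish, and the supposedly
    // immutable anchor is simply gone.
    static boolean anchorStillSettled(PublicChain chain, String anchorTxId) {
        return chain.confirmationsFor(anchorTxId)
                    .map(c -> c >= REQUIRED_CONFIRMATIONS)
                    .orElse(false);
    }
}
```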

Except… there was always a problem with this argument: finality.

In short, the two unanswered questions were:

  • If your enterprise blockchain needs settlement finality but the chain into which it is ‘anchored’ provides only probabilistic finality, when is it safe to tell a user of the private chain that their transaction has been confirmed? What happens if two conflicting anchor hashes are vying for inclusion at the same time? Are users expected to constantly monitor the underlying chain to check the private chain hasn’t gone bad (a sketch of what that monitoring would actually involve follows this list)? And what exactly are they supposed to do at that point in any case?
  • If the ‘anchor’ gets washed away by a ‘reorganisation’ of the underlying public probabilistic blockchain, what are you supposed to do then?
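To see why the first question is so awkward, here is a hedged sketch, again in Kotlin, of what “constantly monitoring the underlying chain” would actually mean. The PublicChainClient interface and the depth threshold are invented for illustration; they stand in for whatever RPC client and confirmation policy a real deployment would use.

```kotlin
// Invented stand-in for whatever RPC client you would use against the public
// network; this is not a real library interface.
interface PublicChainClient {
    fun currentHeight(): Long
    // Canonical block hash at a height; this can change after a reorganisation.
    fun blockHashAt(height: Long): String?
}

data class Anchor(
    val digestHex: String,        // the digest we embedded in the public chain
    val includedAtHeight: Long,   // where we saw it confirmed
    val includedInBlock: String   // the block hash it was confirmed in
)

sealed class AnchorStatus {
    data class Pending(val confirmations: Long) : AnchorStatus()
    data class Settled(val confirmations: Long) : AnchorStatus()
    object WashedAway : AnchorStatus()  // the anchor's block is no longer canonical
}

// With probabilistic finality, "settled" can only ever mean "deep enough that
// we choose to stop worrying", and even that judgement can later be invalidated.
fun checkAnchor(client: PublicChainClient, anchor: Anchor, requiredDepth: Long = 30L): AnchorStatus {
    val canonicalHash = client.blockHashAt(anchor.includedAtHeight)
    if (canonicalHash != anchor.includedInBlock) return AnchorStatus.WashedAway
    val confirmations = client.currentHeight() - anchor.includedAtHeight + 1
    return if (confirmations >= requiredDepth) AnchorStatus.Settled(confirmations)
           else AnchorStatus.Pending(confirmations)
}
```

Note that even this only detects the problem: if the status ever comes back as “washed away”, the code has nothing sensible to do next, which is precisely the second question.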

The problem is: technically savvy people knew these questions made the concept highly suspect, but since there had never been a high-profile example of it actually going wrong, nobody seemed to care. And the concepts were complicated in any case – probabilistic settlement, reorganisations. All too abstract! So the response seemed to be: “sure… this could happen in theory but it never happens in practice, so who cares?”

Until last week.

When a high-profile Ethereum network (Ethereum Classic) suffered a devastating and unprecedented attack that caused transactions over one hundred blocks deep to go from “confirmed” to “unconfirmed”. Any “anchor” that had been included in one of those hundred blocks would have been washed away, opening up the possibility that a simultaneous attack on the private network could result in a conflicting anchor taking its place.

In other words, the trivial ease with which the supposedly secure and immutable chain was rewritten means it failed at the one and only purpose it served in an enterprise deployment.

The right approach to settlement finality for business blockchains is to acknowledge that things can go wrong and to plan for them up-front: accept that you need to know the identity of your consensus providers, which also allows for provider diversity rather than reliance on increasingly centralised mining pools; and accept that you need a governance process and a dispute-resolution forum for problems that cannot be solved solely with clever math or novel technology.

Conclusion

So, here at the start of January 2019, what is left of the “Ethereum in business” story?

  • The number of developers with skills in Ethereum is far lower than Ethereum’s proponents claim, and it is orders of magnitude smaller than the developer pools of the mainstream programming-language ecosystems supporting Hyperledger and Corda.
  • The core ‘engine’ of Ethereum, the EVM, has been publicly disowned by the communities that spawned it and the platform is being expensively rewritten, yet enterprise Ethereum vendors continue to push tools based on this dead-end into unsuspecting businesses.
  • And the only remaining plausible argument for using Ethereum in the enterprise, that it somehow makes it easier to secure your network by ‘anchoring’ into the public network, has been shown by the Ethereum Classic debacle to be false.

Be in no doubt: blockchain for the enterprise is real and it is here to stay. But if you’re doing it on Ethereum, you’re doing it wrong.

 

[Update 2019-01-14: Reworded the subtitle to clarify that I’m making a broader point about probabilistic finality]

Corda: Open Source Community Update

The Corda open source community is getting big… it’s time for a dedicated corda-dev mailing list, a co-maintainer for the project, a refreshed whitepaper, expanded contribution guidelines, and more…!

 

It feels like Corda took on board some rocket fuel over the last few months. Corda’s open source community is now getting so big and growing so fast that it’s just not possible to keep up with everything any more — a nice problem to have, of course. And I think this is a sign that we’re reaching a tipping point as an industry as people make their choices and the enterprise blockchain platforms consolidate down to what I think we’ll come to describe as “the big three”.

Read the rest of this post over at the Corda Medium blog…

Introducing the Corda Technical Advisory Council

I’m delighted to announce the formation of the Corda Technical Advisory Council (the Corda TAC). This is a group of technical leaders in our community — who most of you know well — who have volunteered to commit their time over and above their existing contributions to the Corda ecosystem to provide advice and guidance to the Corda maintainers.

Members of the TAC are invited by the maintainers of the Corda open source project (Mike and Joel) and will change over time — the inaugural members are listed below. If you’re also interested in contributing to the TAC, please do let us know — most usefully through your technical leadership and contribution to the ecosystem!

Read the rest of this post over at the Corda Medium blog…!

Universal Interoperability: Why Enterprise Blockchain Applications Should be Deployed to Shared Networks

Business needs the universal interoperability of public networks but with the privacy of private networks. Only the Corda network can deliver this.

The tl;dr of this post is:

  • Most permissioned blockchains use isolated networks for each application, and these are unable to interoperate. This makes no sense.
  • We should instead aspire to deploy multiple business applications to an open, shared network. But this needs the right technology with the right privacy model.
  • Corda, the open source blockchain platform we and our community are building, was designed for just this from day one. But there was a piece missing until now: the global Corda network.
  • In this post I describe the global Corda network for the first time in public and how it will be opened up to the entire Corda community in the coming months.
  • If you’re building blockchain solutions for business, you need to read this post…

Think back to how excited you were (well, I was!) when you first heard about Ethereum. The idea of a platform for smart contract applications, all running across a common network, with interoperability between all these different applications written by different people for different purposes. It was mind-blowing.

And it’s not just a vision, of course. The public Ethereum community have actually delivered it! Indeed, emerging standards such as ERC20 are a demonstration of the power of a shared, interoperable network and the power of standardisation.

So the question we asked ourselves at R3 back in 2015 was: imagine if you could apply that idea to business… imagine if different groups of people, each deploying applications for their own commercial purposes, woke up one day and discovered that those apps could be reassembled and connected in ways unimaginable to their creators, but in a way that respected privacy and could be deployed in real-world businesses, with all the complexity that entails.

It seemed obvious to us that this was the right vision. And that it would require a universal, shared, open network, the topic of this post.
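To give a flavour of what “shared network, private data” means in practice, here is a minimal sketch using Corda’s Kotlin API. The IOUState class itself is invented for illustration (and assumes corda-core on the classpath), but participants is the real mechanism by which Corda limits the distribution of a state to the parties named in it.

```kotlin
// Invented example state, assuming corda-core is on the classpath.
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty

// A simple obligation that two different applications deployed to the same
// shared network could both recognise and build on, because they share the type.
data class IOUState(
    val lender: AbstractParty,
    val borrower: AbstractParty,
    val amount: Long
) : ContractState {
    // Only the parties listed here receive this state: distribution is
    // point-to-point, not broadcast to the whole network.
    override val participants: List<AbstractParty> get() = listOf(lender, borrower)
}
```

The design choice that matters is that distribution follows the participants of each state rather than broadcasting everything to everyone; that is what makes a single, shared network compatible with business privacy.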

But it dawned on me recently that this is not how everybody in the permissioned blockchain space sees it. The consequences for users could be serious.

The rest of this post is continued at our Medium site here!


What problem are we trying to solve with Corda?

Todd recently pointed me at a great piece by Bloomberg’s Matt Levine about “crypto finance” versus “regular finance”.

I thought he did a good job of nailing the essential contradiction that arises if one naively tries to apply Bitcoin or Ethereum principles directly to traditional finance. He uses the example of an Interest Rate Swap (IRS) and how a fully pre-funded model would rather defeat the point: the whole value of a swap is to exchange uncertain future cashflows without locking up the full notional in advance…
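To put rough numbers on that (these figures are mine, purely illustrative, not Levine’s):

```kotlin
// Hypothetical IRS period, chosen only to illustrate the scale mismatch.
fun main() {
    val notional = 10_000_000.0   // the notional of an IRS is never exchanged
    val fixedRate = 0.03          // party A pays fixed
    val floatingRate = 0.035      // party B pays floating, set at this period's fixing
    val yearFraction = 0.5        // a semi-annual payment period

    // Only the net difference between the two legs is actually paid.
    val netPayment = notional * (floatingRate - fixedRate) * yearFraction
    println("Cash that actually moves this period: $netPayment")  // roughly 25,000

    // A fully pre-funded model would demand the whole notional be locked up
    // in advance: hundreds of times the cashflow it secures in this example.
    println("Capital a fully pre-funded model would tie up: $notional")
}
```

That capital-efficiency gap is, I think, exactly the mismatch Levine is pointing at.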

This caught my attention because an IRS was the very first project we ever did on Corda! So it’s something I know a little about… Anyway, I think the key to understanding the mismatch is captured in a post of mine from 2015 about how two revolutions are playing out in parallel.

Anyway… full details over on my Medium page.