Viral Vector Vaccines – More Fun Things I’ve Learned

In my last post, I shared what I had learned about how the AstraZeneca ‘viral vector’ vaccine works and how endlessly fascinating I found it. But I was struggling to understand why it takes so long for the immune system to ‘learn’ about the new threat the vaccine is trying to teach it about… why it seems to take “weeks” to gain any protection. I still don’t fully understand, but I’ve learned a few more things that others might find interesting too!

Recap of the basic idea

The basic idea behind Viral Vector vaccines such as the AstraZeneca and Johnson & Johnson products is that a harmless virus is genetically modified so that, if it were to infect a human cell, the cell would start producing protein fragments that look a lot like the spike proteins on real Covid viruses. These ‘fake’ spikes become visible to your immune system, which mounts a response. Thus, if you subsequently become infected with the ‘real’ Covid, your immune system is already primed to destroy it.

Do those fifty billion viruses replicate once inside me? Answer: no

When I read that the vaccine relies on injecting a harmless ‘carrier’ virus, my immediate thought was: “I wonder if that virus is able to replicate like normal viruses do?” I saw on my ‘vaccine record’ that there were fifty billion copies of it in my dose, which made me suspect not… after all, why would you need to inject so many if each one had the ability to replicate? The human body has something like 30 trillion cells, so fifty billion viruses is enough for roughly one in every six hundred! Surely you’d need far fewer if each one could trigger the creation of many many copies?

Turns out I was right about this: the modified ‘adenovirus’ that is injected into me is unable to replicate: those fifty billion copies are the only ones I’ll ever have as a result of that shot.
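Just to sanity-check the arithmetic, here’s the back-of-envelope calculation (the cell-count figure is a rough, commonly cited estimate; published numbers vary quite a bit):

```kotlin
// Back-of-envelope only: the 30 trillion figure is a commonly cited estimate
// of the number of cells in a human body, and estimates vary quite a bit.
fun main() {
    val particlesPerDose = 50e9   // from my vaccination record
    val cellsInBody = 30e12       // rough estimate
    val doseVolumeMl = 0.5        // one AZ dose

    println("One particle per ${(cellsInBody / particlesPerDose).toInt()} cells") // ~600
    println("Particles per ml: ${particlesPerDose / doseVolumeMl}")              // 1.0E11
}
```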

This infographic, from The Royal Society of Chemistry, has a nice explainer on CompoundChem:

As that article explains, the decision to use a non-replicating virus was a deliberate choice, presumably on safety and public-acceptance grounds: it would have been possible to design a vaccine where the virus could replicate, and some vaccines for other diseases do work that way. The advantage of a replicating virus, I guess, is that far fewer copies would need to be injected to start with. It’s interesting to speculate (based on absolutely zero knowledge of the science or of where the bottleneck in production actually is…) whether vaccine rollouts could have been quicker if they’d been based on replicating viruses. Would it have meant any given quantity of production could have been spread more broadly?

Note: I’m still not yet clear on what happens to my cells that are infected by one of these non-replicating vector viruses… are these cells then destroyed by my immune system because they present the spike protein? Or are they allowed to live? Can they divide? If so, do their daughters also produce the spike protein?

What happens if my body already has antibodies to the vector virus?

I made a throwaway comment in my last post about how the ‘carrier’ virus has to be carefully selected: if you’ve been exposed to it – or something like it – in the past, your body will attack the virus particles before they have a chance to infect your cells… and so you’ll produce no (or fewer) spike proteins and you’ll presumably develop weaker protection against Covid than would otherwise have been the case. This piece in The Scientist explains more: it’s why the AstraZeneca vaccine uses a modified chimp virus – the average human is far less likely to have seen it before. And it points out that there’s a downstream consequence: that virus can’t now be used for a malaria vaccine. You really do have to use a different one for each vaccine.

There were a few other interesting tidbits in that article. It was the first time I’d seen the argument that one possible reason for milder side-effects from the AZ vaccine amongst older people is that the older you are, the more pathogens you’ve been exposed to, and so the greater the chance your immune system has seen something like the vector virus before – meaning relatively more of the fifty billion particles will be destroyed before entering a cell. So I’m even more pleased about my feverish sleepless night now!

Why are vaccines injected into muscle? How long does it take for the virus particles to get to work?

The question that triggered my attempts to learn about this stuff was this: why does it take weeks for me to gain any meaningful protection from the vaccine when it’s clear my body was fully responding to the onslaught after barely twelve hours?

It got me wondering whether the mechanism of injection had anything to do with it. For example, if the vector virus is injected into a muscle, how long does it take for all fifty billion virus particles to get to work? And where do they operate? In that muscle? Or do they circulate round the body?

Was the first night reaction in response to all fifty billion viruses going to work at once? Or were only a few of them at work that night and it wasn’t yet enough to persuade my immune system that this is something it should lay down ‘memories’ about? Perhaps it’s going to take a few more weeks until they’ve all infected a cell and enough spike proteins have been produced to get my immune system finally to say “Fine! You win! Stop already! I’ll file this one with all the others on the ‘things we should be on alert for in the future’ shelf. Now stop bothering me!!”?

I was surprised how little definitive information there is about this sort of stuff online. I guess because it’s ‘obvious’ to medical professionals, and they don’t learn their trade from quick skims of Wikipedia and Quora. (I hope).

From what I can tell, the main reason vaccines are injected into muscle is convenience: the shoulder is at just the right height for a clinician to reach without much effort, it’s an easy target to hit, there’s no need to mess around trying to find a vein, and the risk of complications (eg inflammation of a vein or whatnot) is lower. This literature review makes for an interesting skim.

I’d also wondered if injection into muscle, rather than veins, results in the vaccine having a localised effect… eg is it only my shoulder muscle that churns out the spike proteins? Turns out the answer to that is no: muscle is chosen over, say, fat precisely because it is rich in blood vessels. The vaccine designers want the vaccine vector virus to enter the bloodstream and rush round the body.

And I’d wondered if injection into muscle was in order to create a ‘slow drip drip’ of vaccine into the bloodstream over time and perhaps that would explain why it took so long for the body to develop full immunity. Turns out the answer to that is also ‘no’. It seems that injections into the deltoid muscle (shoulder) are absorbed quicker than those into other commonly used injection sites. Implication: if the manufacturers wanted slow absorption, they wouldn’t be telling doctors to stab patients in the shoulder!

So when I bring all that together, I still remain confused… injecting the vaccine into my shoulder results in quick absorption, and my body was in full ‘fightback’ mode after twelve hours, so it’s hard to imagine there was any meaningful amount of vaccine lingering in my shoulder after, say, 24 hours… it must, by then, all surely have been whizzing round my veins and happily infecting my cells.

So what gives? Why does it take weeks after billions of my cells have been turned into zombie spike protein factories and my immune system has gone on a frenzied counterattack for me to have a meaningful level of ‘protection’ against Covid? (I’m ignoring the relevance of the ‘second dose’ here for simplicity)

I guess the answer must be ‘because that’s just how the immune system works!’

The mathematics of COVID vaccines

My AZ vaccine dose contained FIFTY BILLION virus particles. Wow!

I was fortunate to receive my first COVID vaccine dose yesterday – the AstraZeneca product – and all seemed to go well. As seems to be common, it made me feel feverish overnight and I slept badly. This was reassuring in a way, as it made me feel like it was ‘working’… it was exciting, in a strange sort of way, to imagine the billions of ‘virus fragments’ racing round my body, infecting my cells and turning them into zombie spike protein factories!

However, my inability to sleep also made me realise that I had absolutely no idea how it really worked. So as I was struggling to sleep, I started reading more and more articles about these ‘viral vector’ vaccines. They really are quite fascinating. And these articles did answer my first wave of questions… but they also then triggered more questions, to which I couldn’t find any answers at all. I’m not sure I’m going to be particularly productive at work today so thought why not write down what I discovered and list out all my questions. Perhaps my readers know the answers?

Viral Vector Vaccines: Key Concepts

Most descriptions I found online were hopelessly confused or went into so much extraneous detail about various immune system cell types that they were useless in imparting any real intuition. However, what I did seem to discover, at a very high level, was something like the following:

  • A harmless virus is used as the starting point.
    • Interesting detail: the virus needs to be somewhat obscure to reduce the risk patients’ bodies have been exposed to it before and thus already have antibodies that would destroy it before it’s had a chance to do its magic
  • It is then genetically engineered so that when it invades a human cell it triggers the cell to start churning out chunks of protein (the famous spike protein) that look a lot like the spikes on the surface of real Covid-19 viruses.
  • These spikes eventually become visible to the immune system, which mounts a vigorous response and, in so doing, learns to be on the lookout for the same thing in the future.

In essence, we infect ourselves with a harmless virus that causes our body’s own cells to start churning out little proteins that look similar enough to COVID that our body will be on high alert should a real COVID infection try to take hold.

Or, at least, I think that’s what’s going on.

So many extraneous details

Now, most of the articles I read then go on to talk about things like the following:

  • Technical detail about how several little fragments then have to be assembled to make one ‘spike’.
    • Important but not really critical to understanding the concepts from what I can see
  • The role of different types of immune cells.
    • The only thing these kinds of articles taught me was that it’s clear that immunologists have no idea how the immune system works and their attempts at explaining it to lay readers just makes this painfully obvious 🙂
  • Endless ‘reassuring’ paragraphs about safety.
    • I understand why they do this but it is somewhat depressing that every article has to be written this way, and I can’t help thinking that it may even be counterproductive.

Once and done or an ongoing process?

However, I found the descriptions unsatisfactory in several ways, and maybe my readers know the answers.

The literature talks about how the genetically modified virus cannot replicate. I assume this is because the modification that causes infected cells to churn out spike proteins means that the cell isn’t churning out copies of the virus, as would normally happen? That would make sense if so.

And it would also explain why my ‘vaccination record’ revealed that my dose contained fifty billion viral particles! That’s roughly one for every six hundred cells in my body! Truly mindblowing.

That said, I have no idea what a ‘viral particle’ is. Is that the same as a single copy of a virus?

It’s mindblowing to imagine that 0.5ml of liquid could contain fifty billion virus particles!

Anyhow, if the virus can’t replicate once inside my body, then the only modified virus particles that will ever infect my cells are the ones that were injected yesterday.

And so I guess the next question is: how long does it take for those particles to start invading my cells and turning them into zombie spike protein factories?

Well: the evidence of my own fever was that it was barely twelve hours before my entire body had put itself on a war footing. And in those twelve hours, presumably a lot had to happen. First, enough virus particles had to invade enough cells. And then, secondly, those infected cells then had to have started to churn out spike proteins in sufficient quantity to catch the attention of my immune system. The invasion must already have been underway before I left the medical centre!

And I guess a related question is: what happens after those fifty billion viruses were injected into my right deltoid muscle? Do they just start invading cells in that region and so my shoulder muscle becomes my body’s spike protein factory? Or do they migrate all over my body and enlist all my different cell types in some sort of collective endeavour? How long does this migration take if so? Is this what explains the time lag from “body is on a war footing after twelve hours” to “you ain’t got no protection for at least three weeks”? Are the majority of the particles floating around for days or weeks before invading a cell? Or is the full invasion done within hours of injection?

Put another way: if the only vaccine virus my body will ever see are the fifty billion copies that were injected yesterday and if after twenty four hours my body already seems back to normal from a fever perspective, what is actually going on over the next few weeks?

I did wonder if perhaps there is some reproduction going on… not of the virus, but of the cells that have been invaded. That is: imagine the vaccine virus invades one of my cells and forces it to start churning out spike proteins. Presumably that cell will itself periodically divide. What happens here? Does the cell get killed by the immune system before it has a chance to divide (because it’s presenting the spike protein)? Or do many of these cells actually divide and hence create daughter cells? Do those daughter cells also churn out spike proteins? That process would explain a multi-generational, multi-week process, I guess? But it wouldn’t be consistent with statements that the vaccine doesn’t persist in your DNA.

Or is the lag all to do with the immune system itself, and there’s some process that takes weeks to transition the body from “wow… we now know there’s a new enemy to be on the look out for” to “we’re now completely ready for it should it ever strike”?

As you can probably tell, I’m hopelessly confused, but also fascinated. If anybody can point me to explainers about the mathematics of vaccination, I’d be enormously grateful (I’ve sketched the kind of toy model I have in mind just after this list). For example:

  • How many of the 50 billion virus particles are typically expected to successfully enter cells?
  • How long does this invasion process take? Hours? Weeks?
  • Is there any replication going on (either of the virus or of the cells they infect?)
  • Do the virus particles diffuse across the body from the injection site? If so, how? And how long does it take?
  • Or is the time lag all to do with the immune system’s own processes?
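For what it’s worth, here’s the kind of toy model I have in mind – pure speculation with a made-up uptake half-life, written only to see what fast uptake would imply. If anything like this is right, the particles are all inside cells within days, and the weeks-long lag must be down to the immune system itself:

```kotlin
import kotlin.math.exp
import kotlin.math.ln

// Pure speculation: a toy first-order uptake model with an invented half-life,
// written only to see what "fast uptake" would imply for the timeline.
fun main() {
    val n0 = 50e9               // particles injected
    val halfLifeHours = 6.0     // made up: time for half the free particles to enter cells
    val k = ln(2.0) / halfLifeHours

    for (t in listOf(1, 6, 12, 24, 72)) {
        val stillFree = n0 * exp(-k * t)
        println("after ${t}h: %.2e free, %.2e inside cells".format(stillFree, n0 - stillFree))
    }
}
```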

I didn’t expect to end up learning (or learning I didn’t know) so much!

UPDATE: Follow-up post here.

A brief history of middleware and why it matters today

This blog post is a lightly edited reproduction of a series of tweets I wrote recently:
  • This tweetstorm is a mini history of enterprise middleware. It argues that the problems we solved for firms 10-20 years ago are ones we can now solve for markets today. This blog post elaborates on a worked example with @Cordablockchain (1/29)
  • I sometimes take perverse pleasure in annoying my colleagues by using analogies from ancient enterprise software segments to explain what’s going on with enterprise blockchains today… But why should they be the only ones that suffer? Now you can too! (2/29)
  • Back in the late 90s and early 2000s, people began to notice that the world’s big companies had a problem: they’d built or installed dozens or hundreds of applications on which they ran their businesses… and none of these systems talked to each other properly… (3/29)
  • IT systems in firms back then were all out of sync… armies of people were re-keying information left, right and centre. A total mess and colossal expense (4/29)
  • The solution to this problem began modestly, with products like Tibco Rendezvous and IBM MQSeries. This was software that sat in the _middle_ connecting applications to each other… if something interesting happened in one application it would be forwarded to the other one (5/29)
  • Tibco Rendezvous and IBM MQSeries were like “email for machines”. No more rekeying. A new industry began to take shape: “enterprise middleware”. It may seem quaint now but in the early 2000s, this industry was HOT. (6/29)
  • But sometimes the formats of data in systems were different. So you needed to transform it. Or you had to use some intelligence to figure out where to route any particular piece of data. Enterprise Application Integration was born: message routing and transformation. (7/29)
  • Fast forward a few years and we had “Enterprise Service Buses” (ESBs – the new name for EAI) and Service Oriented Architecture (SOA). Now, OK… SOA was a dead end and part of the reason middleware has a bad name today in some quarters. (8/29)
  • But thanks to Tibco, IBM, CrossWorlds, Mercator, SeeBeyond and literally dozens of other firms, middleware was transforming the efficiency of pretty much every big firm on the planet. “Middleware” gets a bad name today but the impact of ESB/EAI/MQ technologies was profound (9/29)
  • Some vendors then took it even further and realised that what all these parcels of data flying around represented were steps in _business processes_. And these business processes invariably included steps performed by systems _and_ people. (10/29)
  • The worlds of management consulting (“business process re-engineering”) and enterprise software began to converge and a new segment took shape: Business Process Management. (11/29)
  • The management consultants helped clients figure out where the inefficiencies in their processes were, and the technologists provided the software to automate them away. (12/29)
  • In other words, BPM was just fancy-speak for “figure out all the routine things that happen in the firm, automate those that can be automated, make sure the information flows where it should, when it should, and put some monitoring and management around the humans” (13/29)
  • Unfortunately, Business Process Management was often oversold – the tech was mostly just not up to it at that point – and its reputation is still somewhat tarnished (another reason “middleware” is a dirty word!) (14/29)
  • But, even given these mis-steps, the arc of progress from “systems that can barely talk to each other” to “systems and people that are orchestrated to achieve an optimised business outcome” was truly astounding. (15/29)
  • Anyway… the point is: this was mostly happening at the _level of the firm_. The effect of the enterprise middleware revolution was to help individual firms optimise the hell out of themselves. (16/29)
  • But few back then even thought about the markets in which those firms operated. How could we have? None of the software was designed to do anything other than join together systems deployed in the same IT estate. (17/29)
  • So now let’s fast forward to today. Firms are at the end of their middleware-focused optimisation journeys and are embarking on the next, as they migrate to the cloud. But the question of inefficiencies between firms remains open. (18/29)
  • Take the most trivial example in payments: “I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?” (19/29)
  • How can we be almost a fifth of the way through the 21st century and lost payments are still a daily occurrence? How can it be that if you and I agree that I owe you some money, we can still get into such a mess when I actually try to pay you? (20/29)
  • As I argue in this post, the problems we solved for firms over the last two decades within their four walls are precisely the ones that are still making inter-firm business so inefficient (21/29)
  • What stopped us solving this 20 years ago with the emerging “B2B” tech? We lacked easy inter-firm routing between legal entities (the internet was scary…), broad availability of crypto techniques to protect data, orchestration of workflows between firms without a central controller, etc (22/29)
  • But we also hadn’t yet realised: it’s not enough merely to move data. You need to agree how it will be processed and what it means. This was a Bitcoin insight applied to the enterprise and my colleague @jwgcarlyle drew this seminal diagram that captures it so well. (23/29)
  • And my point is that the journey individual firms went on: messaging… integration… orchestration… process optimisation – is now a journey that entire markets can go on. The problems we couldn’t solve back then are ones we now can solve. (24/29)
  • What has changed? Lazy answer: “enterprise blockchain”… lazy because not all enterprise blockchains are designed for the same thing, plus the enabling tech and environment (maturation of crypto techniques, consensus algorithms, emergence of industry consortia, etc) is not all new (25/29)
  • But the explosion of interest in blockchain technology was a catalyst and made us realise that maybe we could move to common data processing and not just data sharing at the level of markets and, in so doing, utterly transform them for the better. (26/29)
  • In my blog post I make this idea concrete by talking about something called the Corda Settler. In truth, it is an early – and modest – example of this… a small business process optimisation (27/29)
  • The Corda Settler optimisation is simple: instead of asking the payee to confirm receipt after a payment is sent, the parties pre-commit – before the payment is even made – to what proof will convince the payee it was done, all enabled through secure inter-legal-entity-level communication and workflow (28/29)
  • But the Settler is also profound… because it’s a sign the touchpaper has truly been lit on the next middleware revolution… but this time focused on entire markets, not just individual firms (29/29)

Process Improvement and Blockchain: A Payments Example

“I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?”

The cheque is in the post…

How can we be almost a fifth of the way through the twenty-first century and this is still a daily occurrence? How can we be in a world where, even if you and I agree that I owe you some money, it’s still a basically manual, goodwill-based process to shepherd that payment through to completion?!

The “Settler pattern” reduces opportunities for error and dispute in any payments process – and does so by changing the process

This was the problem we set out to solve when we built the Corda Settler. And I was reminded about this when I overheard some colleagues discussing it the other day. One of them wondered why we don’t include the recipient of a payment in the set of parties that must agree that a payment has actually been made. Isn’t that kinda a bit of an oversight?!

The Corda Settler pattern works by moving all possible sources of disagreement in a payment process to the start

As I sketched out the answer, I realised I was also describing some concepts from the distant past… from my days in the middleware industry. In particular, it reminded me of when I used to work on Business Process Management solutions.

And there’s a really important insight from those days that explains why, despite all the stupid claims being made about the magical powers of blockchains and the justifiable cynicism in many quarters, those of us solving customer problems with Corda and some other enterprise-focused blockchain platforms are doing something a little bit different… and its impact is going to surprise a lot of people.

Now… I was in two minds about writing this blog post because words like “middleware” and “business process management” are guaranteed to send most readers to the “close tab” button… Indeed, I fear I am a figure of fun amongst some of my R3 colleagues… what on earth is our CTO – our CTO of all people! – doing talking about boring concepts from twenty years ago?!

But, to be fair, I get laughed at in the office by pretty much everybody some days… especially those when I describe Corda as “like an application server but one where you deploy it for a whole market, not just a single firm” or when I say “it’s like middleware for optimising a whole industry, not just one company”.

“Application Servers? Middleware? You’re a dinosaur! It’s all about micro-services and cloud and acronyms you can’t even spell these days, Richard… Get with the programme, Grandad!”

Anyway… the Corda Settler discussion reminded me I had come up with yet another way to send my colleagues round the bend…  because I realised a good way to explain what we’re building with Corda – and enterprise blockchains in general – isn’t just “industry level middleware” or “next generation application servers”… it’s also a new generation of Business Process Management platform…  and many successful projects in this space are actually disguised Industry Process Re-Engineering exercises.

Assuming you haven’t already fallen asleep, here’s what I mean.

Enterprise Blockchains like Corda enable entire markets to move to shared processes

Think back to the promise we’re making with enterprise blockchains and what motivated the design of Corda:

“Imagine if we could apply the lessons of Bitcoin and other cryptocurrencies in how they keep disparate parties in sync about facts they care about to the world of regular business…  imagine if we could bring people who want to transact with each other to a state where they are in consensus about their contracts and trades and agreements… where we knew for sure that What You See Is What I See – WYSIWIS. Think of how much cost we could eliminate through fewer breaks, fewer reconciliation failures and greater data quality… and how much more business we could do together when we can move at pace because we can trust our information”

And that’s exactly what we’ve built. But… and sorry if this shocks anybody… Corda is not based on magic spells and pixie dust…  Instead, it works in part because we drive everybody who uses it to a far greater degree of commonality.

Because if you’re going to move from a world where everybody builds and runs their own distinct applications, which are endlessly out of sync, to one where everybody is using a shared market-level application, what you’re actually saying is: these parties have agreed in some way to align their shared business processes, as embodied in this new shared application.  And when you look at it through that lens, it’s hardly surprising that this approach would drive down deviations and errors…!

I mean: we’re documenting, in deterministically executed code and for each fact we jointly care about, who can update which records, when, and in what ways. And to do that we have to identify and ruthlessly eliminate all the places where disagreements can enter the process.

Because if we know we have eliminated all areas of ambiguity, doubt and disagreement up-front, then we can be sure the rest of our work will execute like a train on rails.

Just like trains, if two of them start in the same place and follow the same track… they’ll end up in the same place at the end.
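To make this concrete, here’s a minimal sketch of what those rules can look like as deterministically executed code on Corda. The state and contract are invented for illustration – this is emphatically not the real Settler code:

```kotlin
import net.corda.core.contracts.CommandData
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.requireSingleCommand
import net.corda.core.contracts.requireThat
import net.corda.core.identity.Party
import net.corda.core.transactions.LedgerTransaction

// Illustrative only: a toy obligation, not the real Settler code.
data class ObligationState(
    val obligor: Party,
    val beneficiary: Party,
    val amountOwed: Long
) : ContractState {
    override val participants get() = listOf(obligor, beneficiary)
}

class ObligationContract : Contract {
    interface Commands : CommandData { class Settle : Commands }

    // Every node runs exactly this code, deterministically, before accepting an update.
    override fun verify(tx: LedgerTransaction) {
        val cmd = tx.commands.requireSingleCommand<Commands>()
        val input = tx.inputsOfType<ObligationState>().single()
        requireThat {
            "Only a settlement may consume an obligation" using (cmd.value is Commands.Settle)
            "There must be something owed to settle" using (input.amountOwed > 0)
            "Both obligor and beneficiary must sign" using
                cmd.signers.containsAll(listOf(input.obligor.owningKey, input.beneficiary.owningKey))
        }
    }
}
```

Because the verification logic is shared and deterministic, there is simply nothing left for the parties to disagree about later.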

Reducing friction in payments: a worked example

So, for payments, what are those things – the things that, if we don’t get them right up front, can lead to the “I haven’t received your payment” saga I outlined at the start of the post?

Well, there are the obvious ones like:

  • How much needs to be paid?
  • By whom?
  • To whom?
  • In what kind of money/asset?

There are trickier ones such as:

  • Over what settlement rail should I pay?
  • To which destination must I pay the money?
  • With what reference information?

These are trickier since there is probably a bit of automated negotiation that needs to happen at that point… we need to find a network common to us both… and the format of the routing strings is different for each and so forth. But if you have an ability to manage a back-and-forth negotiation (as Corda does, with the Flow Framework) then it’s pretty simple.
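To give a flavour, here’s a hypothetical sketch of that negotiation using the Flow Framework. The SettlementInstruction class, the flow and the rail names are all invented; only the flow APIs are real Corda:

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.Party
import net.corda.core.serialization.CordaSerializable
import net.corda.core.utilities.unwrap

// Hypothetical shape of the "trickier" details that need negotiating up front.
@CordaSerializable
data class SettlementInstruction(
    val rail: String,          // e.g. "SWIFT gpi", "Faster Payments", "Bitcoin"
    val destination: String,   // account / address, in the rail's own format
    val reference: String?     // whatever reference string the payee wants quoted
)

// The payer proposes the rails it supports; the payee picks one and supplies
// its routing details. A sketch of the back-and-forth, not the real Settler code.
@InitiatingFlow
class NegotiateSettlementFlow(private val payee: Party) : FlowLogic<SettlementInstruction>() {
    @Suspendable
    override fun call(): SettlementInstruction {
        val session = initiateFlow(payee)
        val railsWeSupport = listOf("SWIFT gpi", "Faster Payments")
        return session.sendAndReceive<SettlementInstruction>(railsWeSupport)
            .unwrap { require(it.rail in railsWeSupport); it }  // never trust the wire blindly
    }
}
```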

But that still leaves a problem… even if we get all of these things right, we’re still left hanging at the end. Because even if I have paid you the right amount to the right account at the right time and with the right reference, I don’t know that you’ve received it.

And so there’s always that little bit of doubt. Until you’ve acknowledged it you could always turn around in the future and play annoying games with me by claiming not to have received it and force us into dispute… and we’d be back to square one! We’d be in exactly the same position as before: parties who are not in consensus and are instead seeing different information.

And it struck us as a bit mad to be building blockchain solutions that kept everybody in sync about really complicated business processes in multiple industries, only for the prize to be stolen from our grasp at the last moment… when we discover that the payment – invariably the thing that needs to happen at the end of pretty much every process – hasn’t actually been acknowledged.

It would be as if our carefully tuned train had jumped off the rails and crashed down the embankment just at the last moment. Calamity!

So we added a crucial extra step when we designed the Corda Settler. We said: not only do you need to agree on all the stuff above, you also need to agree: what will the recipient accept from the sender as irrefutable proof that the payment has been made?

And with one bound, we were free!

Because we can now… wait for it… re-engineer the payment process. We can eliminate the need for the recipient to acknowledge receipt. Because if the sender can secure the proof that the recipient has already said they will accept irrefutably then there is no need to actually ask them… simply presenting them with the proof is enough, by prior agreement.

And this proof may be a digital signature from the recipient bank, or an SPV proof from the Bitcoin network that a particular transaction is buried under sufficient work… or whatever the relevant payment network’s standard of evidence actually is.

But the key point is: we’ve agreed it all up front and made it the sender’s problem… because they have the incentive to mark the payment as “done”. As opposed to today, where it’s the recipient who must confirm receipt but has no incentive to do so, and may have an incentive to delay or lie.
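In code terms, I think of the pre-agreed proof as an interface the sender must satisfy, with one implementation per standard of evidence. The shape below is my own illustration, not the real Settler API:

```kotlin
import java.security.PublicKey
import java.security.Signature

// Names are mine, for illustration; the real Settler defines its own types.
// The idea: the payee commits up front to a verify() test the payer can run.
interface SettlementProof {
    fun verify(): Boolean
}

// "A digital signature from the recipient bank" as proof: the payer just checks
// the bank's signature over the receipt; no need to ask the payee anything.
class BankSignatureProof(
    private val receiptBytes: ByteArray,
    private val signatureBytes: ByteArray,
    private val bankKey: PublicKey
) : SettlementProof {
    override fun verify(): Boolean =
        Signature.getInstance("SHA256withECDSA").run {
            initVerify(bankKey)
            update(receiptBytes)
            verify(signatureBytes)
        }
}
// An SPV proof from the Bitcoin network would be another implementation:
// check the transaction's Merkle branch against enough accumulated work.
```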

But, building on this notion of cryptographic proof of payment, the Corda Settler pattern has allowed us to identify a source of deviation in the payment process and move it from the end of the process – where it is annoying and expensive and makes everybody sad – to the start and, in so doing, keep the train on the rails.

And this approach is universal. Take SWIFT, for example. The innovations delivered with their gpi initiative are a perfect match for the payment process improvements enabled by the Settler pattern.

The APIs made available by Open Banking are also a great match to this approach.

Middleware for markets, Business Process Management for ecosystems, Application Servers for industries..!

And this is what I mean when I say platforms like Corda actually achieve some of their magic because they make it possible to make seemingly trivial improvements to inter-firm business processes and, in so doing, drive up levels of automation and consensus.

So this is why I sometimes say “Corda is middleware for markets”.

It’s as if the first sixty years of IT were all about optimising the operations of individual firms… and that the future of IT will be about optimising entire markets.

Corda: Open Source Community Update

The Corda open source community is getting big… it’s time for a dedicated corda-dev mailing list, a co-maintainer for the project, a refreshed whitepaper, expanded contribution guidelines, and more..!

It feels like Corda took on board some rocket fuel over the last few months. Corda’s open source community is now getting so big and growing so fast that it’s just not possible to keep up with everything any more — a nice problem to have, of course. And I think this is a sign that we’re reaching a tipping point as an industry as people make their choices and the enterprise blockchain platforms consolidate down to what I think we’ll come to describe as “the big three”.

Read the rest of this post over at the Corda Medium blog…

Introducing the Corda Technical Advisory Council

I’m delighted to announce the formation of the Corda Technical Advisory Council (the Corda TAC). This is a group of technical leaders in our community — who most of you know well — who have volunteered to commit their time over and above their existing contributions to the Corda ecosystem to provide advice and guidance to the Corda maintainers.

Members of the TAC are invited by the maintainers of the Corda open source project (Mike and Joel) and will change over time — the inaugural members are listed below. If you’re also interested in contributing to the TAC, please do let us know — most usefully through your technical leadership and contribution to the ecosystem!

Read the rest of this post over at the Corda Medium blog…!

Universal Interoperability: Why Enterprise Blockchain Applications Should be Deployed to Shared Networks

Business needs the universal interoperability of public networks but with the privacy of private networks. Only the Corda network can deliver this.

The tl;dr of this post is:

  • Most permissioned blockchains use isolated networks for each application, and these are unable to interoperate. This makes no sense.
  • We should instead aspire to deploy multiple business applications to an open, shared network. But this needs the right technology with the right privacy model.
  • Corda, the open source blockchain platform we and our community are building, was designed for just this from day one. But there was a piece missing until now: the global Corda network.
  • In this post I describe the global Corda network for the first time in public and how it will be opened up to the entire Corda community in the coming months.
  • If you’re building blockchain solutions for business, you need to read this post…

Think back to how excited you were (well, I was!) when you first heard about Ethereum. The idea of a platform for smart contract applications, all running across a common network, with interoperability between all these different applications written by different people for different purposes. It was mind-blowing.

And it’s not just a vision, of course. The public Ethereum community have actually delivered it! Indeed, emerging standards such as ERC20 are a demonstration of the power of a shared, interoperable network and the power of standardisation.

So the question we asked ourselves at R3 back in 2015 was: imagine if you could apply that idea to business… imagine if different groups of people, each deploying applications for their own commercial purposes, woke up one day and discovered that those apps could be reassembled and connected in ways unimaginable to their creators but in a way that respected privacy and which could be deployed in real-world businesses with all the complexity that entails.

It seemed obvious to us that this was the right vision. And that it would require a universal, shared, open network, the topic of this post.

But it dawned on me recently that this is not how everybody in the permissioned blockchain space sees it. The consequences for users could be serious.

The rest of this post is continued at our medium site here!

What problem are we trying to solve with Corda?

Todd pointed me at a great piece about “crypto finance” versus “regular finance” by Bloomberg’s Matt Levine earlier.

I thought he did a good job of nailing the essential contradiction that arises if one tries naively to apply Bitcoin or Ethereum principles directly to traditional finance. He uses the example of an Interest Rate Swap (IRS) and how a fully pre-funded model would kind of defeat the point…

This caught my attention because an IRS was the very first project we ever did on Corda! So it’s something I know a little about… Anyway, I think the key to understanding the mismatch is captured in a post of mine from 2015 about how two revolutions are playing out in parallel.

Anyway… full details over on my Medium page.

New to Corda? Start here!

Are you just hearing about Corda for the first time? Want to understand how Corda differs from other platforms and how its unique architecture is perfectly suited to address the real problems faced by today’s businesses?

I just posted to the Corda Medium page with a list of links and background info that should help answer that question…

Not all business blockchain platforms are alike. To succeed they need to reimagine business computing.

Lessons from the world’s largest multi-year collaborative Blockchain research programme

If you’re not part of the blockchain bubble, you probably think people in the blockchain world are all mad! Especially those of us in the enterprise space. How on earth could we believe a technology, originally built by a group of idealistic anarchists to solve a problem no financial institution actually has, could possibly be relevant to the challenges facing those trying to build and maintain complex IT infrastructures across the world?

In this article, I explain how the work we’ve done at R3 over the last two years with hundreds of senior technologists from dozens of leading financial institutions has led me to some fundamental conclusions:

  • The promise of blockchain technology is real: new business solutions will soon be deployed which can eliminate huge amounts of cost, redundancy, error and needless reconciliation across entire business ecosystems, as well as opening up previously hidden new revenue opportunities. I provide concrete examples below.
  • This will be achieved by applying the fundamental blockchain insight: “I know that what I see is what you see”. This is the key to helping us move from a world of isolated, custom, inconsistent IT infrastructures in each firm, to one based on shared business logic, securely shared data, and common processes.
  • The new IT architecture that is needed to capture these opportunities is one that converges the complex world of application servers, messaging engines, workflow managers and databases into a unified, secure and private design that works seamlessly within and across enterprises.
  • But not all blockchain platforms are alike: Only some designs will be architecturally suited to the challenge.
  • Our research has revealed several key characteristics that are needed to succeed. An enterprise blockchain platform must:
    • Provide an integrated application, messaging, workflow and data management architecture.
    • Build on an existing technology “mega-ecosystem” to maximise skills and code reuse.
    • Eliminate unnecessary data “silos” to unlock new business opportunities by allowing real-world assets to move freely between all legitimate potential owners.
    • Embed legal entity-level identification into the programming model to enable legally enforceable and secure transactions.
    • Support inter-firm workflows for negotiating updates to the shared ledger to support real-world business scenarios.
    • Enable the inevitable move to the public cloud without requiring high-risk technology bets.

For the last two years, I’ve been privileged to lead an intense research and development effort amongst the membership of the most vibrant consortium the financial world has seen. Hundreds of senior technologists from dozens of companies have worked with R3’s engineers to identify where blockchain technology can make a difference in large enterprises and also how it needs to be designed to achieve this potential.

This work has convinced me that the judicious application of key blockchain principles is key to rescuing the world’s banks and other large companies from the corner into which they have painted themselves after decades of investment in previous generations of technology.

In particular, we could be looking at the holy grail of enterprise software: convergence of application servers, messaging engines, workflow managers and databases in a way that works seamlessly within and across businesses.

The result is a model that delivers on the promise of blockchain in the enterprise: a world where applications can be securely developed and deployed across trading partners, enabling them to transact securely and accurately and without endless reconciliation, inconsistent data, and duplication. It is why we are now able to think about building or replacing payments systems, syndicated loans processing systems, asset issuance and custody systems, FX matching business processes and much more. And these are just examples from the banking sector being addressed by just one platform, Corda, which itself turns out to be applicable far more broadly, in areas as diverse as healthcare and identity management.

But our belief is that it is only by converging previously diverse technologies that we can dramatically simplify how companies develop, deploy and manage applications that manage their business relationships with each other.

Here’s what I mean. Through one lens, a blockchain is like an application server: it hosts business logic and ensures it runs at the right time and for the right reasons. But a blockchain is also like a messaging engine: it allows people and, importantly, their computers to exchange information and ensure that the right information gets to the right places in the right order – and to do so between companies as well as within. And some blockchain platforms also have characteristics of workflow managers; they help coordinate the activity of different parties across time and space to achieve some business outcome. And, of course, they can be akin to databases.

If we could somehow devise a platform that built on this insight and combined the function of these hitherto diverse technologies – and could do so in today’s adversarial security environment, with the necessary levels of privacy, and in a way that was easy to use and could reuse rather than replace what already exists – imagine how much simpler our future enterprise IT landscape would look!

But building a platform that combines these separate concepts is easier said than done. There’s a reason existing vendors haven’t already built something that delivers on this vision.  In what follows, I’ll share the output of the last two years of our work, which confirmed to me that, by judiciously weaving together existing technologies and key advances from the blockchain world, it is indeed possible. I highlight some of the specific, key design choices you need to make in order to achieve this convergence and hence unlock the massive opportunity to transform the IT estates of existing companies.

I will refer at times to a key output from our work: the open-source Corda platform. We developed it in parallel with the research effort I describe above and, as a result, it benefited from a huge amount of expert input from across the R3 membership. No other enterprise blockchain platform has enjoyed remotely the same level of expert input. But I also refer to other enterprise blockchains where contrasts are helpful, paying particular attention to one that has come to prominence of late, Quorum, because it makes some very different choices and hence acts as a useful comparison point. This is intended to help make some of the points concrete and draw readers’ attention to areas of legitimate disagreement.

The ideal enterprise blockchain is a next-generation application server…

First and foremost, the opportunity that blockchain technology presents to enterprises is the ability to write applications that are shared between those who use them to transact. The key idea is this: rather than you and all of your trading partners each writing an expensive application to manage your participation in that trading network, you write it once. We thus share the cost and effort, whilst each running our own instance of the application in order to maintain our own books and records.

But to enable this vision, we concluded from our research that we need features that are unavailable in traditional application servers but which are common in the blockchain world. One such feature is cryptographic chains of provenance for all data in the system. When freely sharing relevant data with your trading partners, you must take nothing on trust but verify all proposed updates to the shared application state. This is also why all consensus-critical code on the platform needs to be digitally signed so you know that the code that is running is the code that should be running.

And we need to enable as much reuse of skills and existing technology as possible: you don’t save money by throwing away all the existing technology you have or by forcing all your staff to retrain! So you should be able to write applications in languages that have as large a population of developers as possible. Working with our diverse membership, we concluded that Java is by far the most widely deployed language across the business world and so we focused on that ecosystem. There are almost ten million people in the world who have these skills. But we also found that large enterprises hunger to bring their infrastructures right up to the contemporary cutting edge, so support for techniques such as reactive programming, approaches like functional programming and new languages like Kotlin and others are also required.

In short, the ideal application server for the modern era is one built for security, productivity, data integrity and cost minimisation.

But don’t all enterprise blockchains do this? Actually, no. To see why, we can compare the approach we took with Corda – the result of our two-year journey – with a competing blockchain platform such as Quorum. Although superficially similar to Corda in the way it describes itself, it is, in fact, a relatively small fork of an existing public blockchain platform called Ethereum – which was designed to solve a completely different set of problems. Quorum inherits all of the associated advantages and disadvantages as a result.

For example, rather than supporting a full range of modern and mature languages to suit different needs, for which there are abundant skills and existing libraries, as Corda does through its support for the Java Virtual Machine, Quorum only supports languages that run on something called the Ethereum Virtual Machine. The EVM is a young, purpose-built virtual machine maintained by a community primarily focused on executing cryptocurrency transactions and associated business logic. No mainstream languages run on the Ethereum Virtual Machine. This raises obvious questions around skills and integration with heritage systems. But it also opens up serious new risks for those who try to apply Quorum to real-world business problems. For example, the most popular language for writing applications on Quorum has been repeatedly shown to be impossible to use safely, suffering from countless bizarre language features that have led to people losing real money.

Not all enterprise blockchains are alike.

The ideal enterprise blockchain is also a next-generation messaging platform…

However, as I argued above, the exciting realisation during our research effort with our member banks was that good blockchain platforms are also an inspiration for how to rethink enterprise messaging systems. Messaging systems are used by big companies to link computers and applications to each other securely and reliably.

The need for this becomes acute when you talk about deploying shared applications between trading partners. The applications running on different firms’ nodes need to be able to communicate. We need all parties to be securely identifiable and to be able to communicate with each other near-instantly over the internet or a private network, addressing each other by legal name. It’s not good enough to take existing enterprise messaging platforms that let you communicate with an address or a queue; you need to be able to talk to a named legal entity, anywhere in the world.

In our work with the diverse group of technologists from across our membership, we identified that the way to achieve this was to take existing technologies – in this case, TLS, X.500 and more – and make them consumable and accessible to today’s application developers. It seems obvious when you think about it, yet no mainstream platform offered it in this way until we supported it in Corda. It was necessary to weave together insights from the cryptography community, where public key infrastructure is well understood, and the banking community, where financial networks between identifiable peers are common, to arrive at “send to legal entity” as the natural and obvious way to communicate between parties.
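In Corda terms, it looks something like this. “Example Bank” is invented, but CordaX500Name, the identity service lookup and flow sessions are the real mechanisms:

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.CordaX500Name
import net.corda.core.identity.Party

// "Example Bank" is invented; the APIs are Corda's legal-entity addressing.
@InitiatingFlow
class GreetBankFlow : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val name = CordaX500Name(organisation = "Example Bank", locality = "London", country = "GB")
        val bank: Party = serviceHub.identityService.wellKnownPartyFromX500Name(name)
            ?: throw IllegalArgumentException("No party named $name in the network map")
        initiateFlow(bank).send("hello")  // addressed to a legal entity, not a queue or an IP
    }
}
```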

A separate finding from our work was that large institutions are allergic to anything that looks like it could create new islands of connectivity. They needed a platform that would allow multiple independent Business Networks to operate within a single global namespace – perhaps something we could call a Compatibility Zone. No more tolerance for isolated silos of communication and data distribution (unless you want that, of course…). And this turned out to be key to delivering the holy grail: true representation of real-world assets on a shared network, with no artificial barriers to exchange or transfer, which I talk about more below.

Achieving this level of seamless interoperability across multiple business networks, whilst retaining flat, transparent, legal-entity-level addressing is hard. But I think we’ve achieved it with Corda.  And it’s something that is lacking in platforms like Quorum, where network membership is defined in text files which must be copied to each node, and where behaviour is undefined (or, rather, may have “adverse effects”) if they don’t match perfectly.

Identity and legal-entity addressing can’t be an after-thought in a text-file: they need to be designed in from the start.

The ideal enterprise blockchain is also a next-generation workflow manager…

When I worked with clients in my previous jobs, I would find they deployed workflow management tools, also sometimes called “Business Process Management” platforms, inside the organisation to control and optimise the flow of work between people and systems.

But all too often these systems would stop at the edge of the firm. The “other side” was treated as a black box. And yet it is in the interaction between firms that the risk and cost creep in. Did they receive the message? Are they working on it? Did they understand it and process it correctly?

Do they see what I see?

This isn’t actually something that existing blockchain platforms provide much help for. And yet it is a real need even for some simple cryptocurrency transactions such as implementing an escrow arrangement or telling somebody how to pay you! In essence, for every meaningful transaction on a blockchain platform, there is usually an out-of-band negotiation flow that needs to take place.  Just like in real business.

And as we worked through countless use-cases during our research, what emerged was a need to make it easy for developers to write this back-and-forth business logic for the collaborative negotiation of a deal or construction of a transaction.

We discovered a need for an inter-firm workflow automation technology where developers don’t have to worry about any of the complex technical work necessary to orchestrate such co-operation at scale across a global network. In particular, it needs to be possible to define business processes that flow between, within and across firms, yet which can be coded in a simple and natural programming style, where you can express what each party in a transaction should do and have that work automated within and across your firms.

Unfortunately, developing something so ambitious would be hard if implemented from scratch. So it provided further weight to our emerging belief back then, a certainty today, that a successful blockchain platform for the enterprise has to build on the work of an existing, massive ecosystem, where it can reuse as much existing infrastructure and code as possible.

If we now turn to the specific case of Corda, we were able to take cutting edge technologies from the Java ecosystem and add some seriously clever engineering by the R3 engineering team to deliver something unique called the Flow Framework. In particular, recent advances in automatic checkpointing of business logic, with restore across system restarts, means developers can write what looks like entirely normal straight-line logic and yet, behind the scenes, a complex international workflow is being orchestrated. The first time you use it, it feels a bit like magic.  This is an example of why it’s so important to build on the foundations of an existing and robust ecosystem, such as that provided by Java, whilst at the same time bringing the programming model and tools right up to the contemporary cutting edge.
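Here’s an invented example of that straight-line style. Each receive is, behind the scenes, a durable checkpoint, so a node can be restarted mid-conversation and the workflow resumes where it left off:

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatedBy
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.Party
import net.corda.core.utilities.unwrap

// Invented example: a two-party price agreement. It reads as ordinary
// sequential code, but each receive is a durable checkpoint behind the scenes.
@InitiatingFlow
class AgreePriceFlow(private val buyer: Party, private val askPrice: Long) : FlowLogic<Boolean>() {
    @Suspendable
    override fun call(): Boolean {
        val session = initiateFlow(buyer)
        val counterOffer = session.sendAndReceive<Long>(askPrice).unwrap { it }
        return counterOffer >= askPrice * 95 / 100  // accept anything within 5% of the ask
    }
}

@InitiatedBy(AgreePriceFlow::class)
class AgreePriceResponder(private val session: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val ask = session.receive<Long>().unwrap { it }
        session.send(ask - 1)  // real logic would consult local pricing policy
    }
}
```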

The ideal enterprise blockchain is also a next-generation asset ledger…

A repeated theme from our work with members was that the real wins come when the new enterprise blockchain applications become systems of record for real transactions.

This means they need to provide legal certainty.  Application developers need to link their contract code to associated contract legal prose to give certainty about how disputes will be managed.  But, for this to work, one must go further. Just as a successful enterprise blockchain identity layer needs to be based on legal names, the actual nodes – the machines running the business logic – also need to be clearly associated with a specific legal entity.

Such a framework, provided by the legal prose, identity-driven messaging and legal entity association with nodes, means that contracts signed on platforms that support it can be legally binding if you choose. It’s a prerequisite to being safely able, at scale, to support direct issuance of assets and the direct formation of contracts.

It means that a bank or other firm can issue assets (cash, equities, even commercial paper or syndicated loans) on to the ledger and have them be directly owned and seamlessly transferred to other members of an appropriate business network in near-real-time and with settlement finality.

And make no mistake: that is a hard trick to pull off.  For dematerialised assets to have full utility, they must be easily transferrable to any and all legitimate potential owners: isolated islands won’t do. So it turns out that this also has strong implications for the most fundamental aspects of the design: how data on the blockchain is managed, distributed and evolved, which I describe more below.

The issue we found during our research was that if you get this wrong, you risk trapping assets in silos, where they can only be transferred amongst members of a preconfigured list or through the involvement of third-party market-makers, or by asking the issuer to impose friction by cancelling and reissuing them.

It’s a tough problem to crack, as the link above makes clear in its analysis of another platform. The criticism in that link also applies to Quorum: they both suffer from this stranded asset problem and it means apps you write for those platforms will almost certainly have to be extensively rewritten in the future when those platforms are redesigned to correct these shortcomings. A costly prospect.

It’s one of the reasons we were so focused on delivering a version of Corda with “API Stability”: members and other developers told us loudly and clearly that they highly valued knowing they wouldn’t have to redesign their apps in the future.

And the ideal enterprise blockchain is also a first-of-a-kind decentralised database…

At the heart of the enterprise blockchain vision is the idea that, if two or more parties are transacting with each other, then they should all have an identical copy of the associated data.

“I know that what I see is what you see”.

To achieve this, we have to ensure that all parties have fully validated that their shared state was indeed calculated correctly.  At heart, it is this that distinguishes blockchains from what went before: we don’t take things on trust; we verify.  And to verify we need to run the logic for ourselves. We have to check the provenance of data.

What we want is, in effect, a globally shared database, but where each party only has the rows of the database that pertain to transactions they’re part of.  So we need to make sure that you receive the rows you need, that anybody who tries to change them can only do so if the rules are followed, and that you get to sign off on the change and/or be notified afterwards, as appropriate. To get an idea of how this works under one architecture, see this simple analogy.

The design we concluded that best achieves this is one that builds on and heavily extends and generalises ideas that originated in Bitcoin rather than Ethereum. This model, which Corda adopts, encourages developers to think of the fundamental units of information in their business problem and how they can evolve over time in isolation. Corda then adds the powerful flows I described earlier to orchestrate their updates via transactions that specify precisely which data units – “state objects” – are being referenced, created or replaced.
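Here is a minimal sketch of what one of those data units looks like in Kotlin. The IOU example is hypothetical, but the shape follows Corda’s ContractState interface: the state itself declares who its participants are, which is exactly what lets the platform work out who needs to receive it.

```kotlin
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party

// Hypothetical state object: one immutable shared fact, an IOU between two parties.
// A transaction later consumes this state and creates a replacement (say, after a
// partial repayment), leaving an auditable chain of provenance behind it.
data class IOUState(val value: Long,
                    val lender: Party,
                    val borrower: Party) : ContractState {
    // Only these parties ever need to receive and store this state.
    override val participants: List<AbstractParty> get() = listOf(lender, borrower)
}
```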

This approach allows arbitrarily rich scenarios, sophisticated negotiations and business processes to be modelled with ease, whilst allowing the system to determine precisely who needs to receive what data and when, revealing nothing more than is needed to prove provenance.

Other platforms take a different approach, based on an Ethereum-like model.  They typically start with the idea of an “everybody sees everything” full broadcast blockchain, representing a globally shared computer, and go fractal in order to correct for the privacy issues that result: they spin up tens or hundreds or thousands of separate mini shared-computers, one for each group that wants to transact in privacy. This approach can work but our analysis, confirmed by later experience, was that, for real world scenarios, it gets very complex, very quickly.

In the case of Quorum, each “confidential contract” can be thought of as a mini Ethereum universe that is visible only to the participants in that contract. This idea of millions of mini-blockchains works for some scenarios but, as I described in this post, the approach, which some other platforms use too, fails completely if you ever need to support asset mobility. As soon as you need to prove the provenance of one piece of data in a confidential contract to somebody else, you have to show them everything in that contract. Game over: you either lose privacy in the hunt for provenance, or your assets are stranded, with their provenance impossible to demonstrate to outsiders.

And… the ideal enterprise blockchain must also be designed for the cloud

The world doesn’t stand still. The enterprise software market in 2017 is different to the market in 2000…  In particular, most new applications that get deployed in the future will run in the cloud, increasingly in a public cloud.

So this was inevitably a hot topic of debate throughout the consortium’s deliberations over the last couple of years: how can we support application developers who need to write applications that will form legally binding contracts and be the basis of their audited books and records, yet which will run on other people’s computers in a potentially hostile environment?

We considered the obvious options: zero-knowledge proofs, homomorphic encryption, secure hardware and more, and combinations of them all.

Our first conclusion was that the answer needs to be multi-layered: deterministic sandboxes, signing of all code and transactions, everything underpinned by legal-entity-level addressing, support only for a well-understood and tested managed runtime (in Corda’s case, the JVM, which we heavily lock down) and so forth.

But we also made a decisive choice: when weighing up risk, availability of skills, time to market and a collection of other factors, we concluded that the right initial approach is first to support hardware security technologies, and Intel’s SGX technology in particular.

This allows users to deploy Corda applications to other people’s computers (ie on public clouds) in a way that prevents those operators from interfering in transaction verification or accessing historical transaction information yet with no material limitation on the business logic that can be run. And we found that making this work well required it to be designed in from the ground up, which we did.

We concluded that this approach contrasted favourably with a higher-risk one being taken by platforms such as Quorum, which today recommend the use of zero-knowledge proofs. Zero-knowledge proofs are a very fine piece of engineering and mathematics. They’re almost certainly part of the long-term future of privacy on blockchains – and Corda is designed to adopt them when they have matured.  But, today, the story is different.

First, and as Mike and Kostas in our engineering team discussed at CordaCon in September, the world of zero-knowledge proofs is young: very few people have the skills, it is almost impossible to know what one of these things actually does once written, and the security assumptions they rely on are unproven. Indeed, the “zk-SNARK” technology being promoted in February this year by the inventor of Ethereum is now being challenged by something new, called zk-STARKs. This is a promising sign of a vital research community, rapidly advancing a young field.  But it’s a risky foundation upon which to build enterprise solutions before it has matured more.

Secondly, the integrity of the financial system rests on the idea of atomicity: related activities should either all happen or not happen at all. Inconsistent, half-complete exchanges just won’t do.

This requirement was hammered home to us by our members during the design process for Corda and so the ability to do complex atomic “payment versus payment” and “delivery versus payment” transactions is a fundamental part of the architecture.
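In Corda this works because both legs of the exchange live in a single transaction, which is committed or rejected as a unit. Here is a minimal sketch, built inside a flow; the states, contracts and constants are hypothetical, but the builder calls follow Corda’s TransactionBuilder API.

```kotlin
// Hypothetical delivery-versus-payment transaction. Because both legs are
// inputs and outputs of ONE transaction, they are committed atomically:
// either both happen or neither does.
val builder = TransactionBuilder(notary)
builder.addInputState(sellerBond)                        // delivery leg: bond leaves the seller
builder.addOutputState(bondForBuyer, BOND_CONTRACT_ID)   // ...and is reissued to the buyer
builder.addInputState(buyerCash)                         // payment leg: cash leaves the buyer
builder.addOutputState(cashForSeller, CASH_CONTRACT_ID)  // ...and is reissued to the seller
builder.addCommand(BondContract.Commands.Move(), seller.owningKey, buyer.owningKey)
builder.addCommand(CashContract.Commands.Move(), buyer.owningKey, seller.owningKey)
// Both parties sign, then the notary signs: settlement finality for the whole exchange.
```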

Unfortunately, Quorum, which needs to use zero-knowledge proofs to achieve acceptable privacy in their system, cannot, according to their own documentation, achieve delivery-versus-payment:

“… the POC solution will not support cryptographically-assured DvP (i.e. atomic exchange of assets). This is because ZSL currently has no means of supporting shielded DvP-style functionality.”

This is why it’s so important to distinguish between the generic promise of enterprise blockchain technology and the practical reality of specific implementations.

Thirdly, we need to remember at all times that this whole emerging industry is about consensus, about being sure your counterparts really do see the same things you do and that you agree on all important data, when it happened and in what order.

But we learned from our members that how you reach this consensus may need to differ based on the business context. Maybe you’re trading amongst a group of peers you know well: you may each need to participate in the decentralised consensus-forming process by running a node as part of a consensus cluster, reaching consensus quickly and with finality amongst yourselves.  But, perhaps for some other line of business, you’re transacting with people all over the world and you want an independent, decentralised group of impartial observers to timestamp the transactions and help you reach agreement about ordering. That must also be supported: you need to be able to use a fully byzantine fault-tolerant decentralised cluster of consensus nodes, operated by mutually distrusting entities. A successful platform needs to support all these modes – and more – and, here’s the key part: on the same network, at the same time!

So when we designed Corda, we engineered it so that you’re not forced to pick one consensus model and expect everybody to accept a one-size-fits-all for every transaction they perform on their blockchain network. That would be the road to disconnected, isolated blockchain networks… and the fast lane to stranded assets.
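Concretely, in Corda the consensus service, the notary, is chosen per transaction rather than per network. A minimal sketch of what that looks like inside a flow (the notary names are hypothetical; the lookup uses the node’s network map cache):

```kotlin
// Hypothetical sketch: two notary clusters with different consensus models
// coexist on the same network, and each transaction picks the one it needs.
val notaries = serviceHub.networkMapCache.notaryIdentities
val fastNotary = notaries.first { it.name.organisation == "FastRaftNotary" }
val bftNotary = notaries.first { it.name.organisation == "BFTNotaryCluster" }

// A routine trade amongst a well-known group can use the fast cluster...
val intraGroupTx = TransactionBuilder(notary = fastNotary)
// ...while a trade with distant counterparties uses the byzantine fault-tolerant one.
val openMarketTx = TransactionBuilder(notary = bftNotary)
```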

Why I believe Corda is the future of enterprise software

This piece began by outlining what I consider to be some of the critical findings of our journey so far. And we’ve embedded as many of them as possible into the design of the platform we jointly defined with this unprecedented consortium of institutions and the hundreds of technologists who so willingly gave their time to help us get it right.

It’s why I am so careful to stress that not all enterprise blockchain platforms are the same.

There is a reason Corda looks different to platforms like Fabric and Ethereum.  It’s because we’re not merely trying to take public blockchain technology – designed for a completely different purpose – and force it inappropriately into the enterprise. 

Instead, we’re doing something altogether more exciting and ambitious: we’re building the future of enterprise software. We’re tapping into one of the largest technology ecosystems ever seen, the Java ecosystem, and building on the work of countless thousands of skilled engineers to deliver an enterprise blockchain that’s designed directly to solve real problems faced by businesses today.

It’s why I believe the Corda enterprise blockchain is the future of enterprise software.

Indeed, firms like Finastra, Calypso, HPE and others discovered Corda for themselves after reaching similar conclusions to ours about what the right design for a consensus system between identifiable parties in a regulated environment needs to look like; and our large and growing roster of partners, including Microsoft and Intel, is testament to the momentum.

Perhaps I shouldn’t be surprised: we were immensely fortunate to have literally hundreds of senior technologists from across dozens of the world’s largest companies contribute their deep and insightful input to the design. I hope one day to be able to list each and every one of them.

Indeed, I hope we will look back on Corda in a few decades as one of the largest and most successful collaborative design efforts in the history of enterprise technology!

Finally: a note on terminology. In this post I used the word ‘blockchain’ to describe Corda.  I tried for a long time to retain engineering purity (“Corda is very much like a blockchain and has chains of transactions but, strictly speaking, it doesn’t batch them into blocks – it’s real-time – so we probably shouldn’t call it a blockchain!”) But the reality is that the market uses the term Blockchain to describe all distributed ledger technologies, including ours. So I’m not going to fight it any more…!