The World Needs a B2B Application Development Platform

We’ve been building business-to-business applications wrong all this time!

Imagine a firm needs a new way for employees to book vacation. Or maybe customers want to be able to track their orders online. It’s your job to build the application. What design would you go with?

Chances are you’ll build it in much the same way as everybody else does. There will be a database. There will be a business logic layer – you have a lot of choices here. And there will be a front-end – perhaps a mobile app or website, or a set of APIs. And then you have to worry about hosting it. Perhaps the HR department will host it in-house. Perhaps the supplier will host it in the cloud.

Fundamentally, there aren’t really many distinct ways to build these things.

But did you notice something? Did you see how it never even crossed your mind that the employee should write or run the vacation booking application, or that the customer should write or run the online order tracking app? It’s so obvious that it should be the employer and the supplier, right?

It’s ‘obvious’ because there’s an asymmetry in many business interactions, and this makes it easy to figure out who should be responsible for what. There’s usually a ‘user’ of a service, and a ‘provider’. And this usually maps nicely to the idea of ‘clients’ and ‘servers’ in the world of IT.

And this is very helpful, because if we can assume there is a single party providing a service, who is the ‘owner’ of the data and controls changes to it, then it becomes very simple to design the application.

This is certainly the case in our examples above: we’d expect the HR department to be responsible for information about employees, and it makes sense that it’s a supplier who keeps track of what has been ordered, by whom. Modern application development frameworks bake the assumption that there’s an ‘authoritative party’ into their designs as a result.

One size does not fit all

However, there’s a big problem: lots of business interactions simply do not fit this pattern.

  • Imagine you had to automate reconciliations between a network of banks. Which banks are the ‘clients’ and which is the ‘server’?
  • What about a new platform to facilitate the negotiation of contracts between buyers and sellers? Who should run that application and control all the data as a result?

It’s not obvious, is it?

Yet, all too often, when faced with a market with no obvious ‘clients’ or ‘servers’, the IT industry’s response has been to insist that there must be a server if you want to use its products! And if that means reshaping how the market works to introduce a brand new intermediary, then so be it. This is surely nuts. Why do we think it is normal that when a market doesn’t conform to IT vendors’ ideas of what it should look like, it’s the industry that has to change, not the software?

The result of this process is that manual work that was happily performed bilaterally is often controlled by a third party once the market has ‘digitised’ itself. Is this progress?

When exactly was it we decided we had to change the structure of an existing market to solve a simple business problem? How did it become acceptable that IT design assumptions should have the power to reshape the dynamics of entire industries?

If you think this is a philosophical problem of no practical concern, consider the following examples:

Market Places

When buyers and sellers get together at a physical market to trade with each other, they follow a collectively agreed set of rules, usually overseen by a market ‘operator’ that they collectively govern. Yet when buyers and sellers trade with each other electronically, the market operator’s role is usually far more significant. Somehow, when markets ‘go digital’ they end up entrenching the market operator in an ever-more privileged position. An impartial ‘arbiter’ often becomes the most powerful and profitable entity in the ecosystem.

Maybe we should ask ourselves: how much of this is a natural market outcome, and how much of it was driven by the hidden assumptions inside the software development tools we use? Have our tools forced us to model a peer-to-peer market as a client-server architecture?

Reconciliation

When banks and large companies need to check the accuracy of their accounting records, their auditors reach out directly to their trading partners to verify that their books and records are in sync. This is an inherently decentralised, point-to-point style of interaction. Yet when IT systems were built to automate these processes, they were invariably built as centralised, hub-and-spoke systems, often run by new third parties.

Was this a conscious decision to change how the industry worked? Or was it an unintentional consequence of an IT design decision?

Global Trade

International trade still relies on a large amount of paper. Buyers, sellers, shippers, banks and others interact on a daily basis to facilitate the flow of goods, and payments for them, around the world. Yet there has been surprisingly little progress in digitising the process of trade finance: issuance of letters of credit and so forth. There’s so much paper!

Could this be because a centralised application with an all-powerful operator is fundamentally incompatible with the messy, decentralised, international trade finance industry? Who is the ‘client’ and who is the ‘server’ when a large UK retailer buys some goods from a Chinese manufacturer? Could the failure of the trade finance industry to digitise until recently actually be a sign of their sophisticated understanding of market dynamics and the IT industry’s failure to understand them?

When you start thinking in this way, you can see examples of it everywhere.

The bottom line is that the application development platforms we’ve been using for the last twenty years are designed for a hub-and-spoke model where the users are supplicants of an all-powerful application owner at the centre. This is perfect for many scenarios, especially in the ‘business to consumer’ and ‘employer to employee’ space.

But huge numbers of situations – particularly in the ‘business to business’ (B2B) space – do not look like this. B2B interactions are almost always interactions between peers and counterparts: buyers and sellers, each acting with full agency, neither superior to the other.

If you asked two traders in a market which of them was the service provider and which of them was merely the ‘client’, they’d look at you as if you were mad! So why are we building applications that embed this assumption into the heart of their architecture?

The world needs a modern B2B application platform

It’s time we challenged ourselves to do better. So can we at least write down some requirements for a business-to-business application development platform that does respect existing market dynamics and doesn’t force new parties into the mix? For example, what are some of the problems you need to solve when building a true B2B market-style application?

Record-Level Data Ownership: fundamentally, we’d probably start with the idea that different records in the system should be controlled or ‘owned’ by different parties. If one party controlled everything then we’d have the status quo. This immediately creates a need for some notion of…
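As a sketch of what record-level ownership might look like in code (all names here are invented for illustration, not any real platform’s API):

```python
from dataclasses import dataclass

# Hypothetical sketch: every record carries its own 'owner', rather than
# the application as a whole having a single all-powerful owner.
@dataclass(frozen=True)
class SharedRecord:
    record_id: str
    owner: str          # the identity of the party who controls changes
    payload: dict

def can_update(record: SharedRecord, requesting_party: str) -> bool:
    # Only the record's owner may authorise a change to it.
    return requesting_party == record.owner

invoice = SharedRecord("inv-001", owner="SellerCorp", payload={"amount": 100})
print(can_update(invoice, "SellerCorp"))  # True
print(can_update(invoice, "BuyerCorp"))   # False
```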

Identity: if different records are owned by different parties, we need a way to refer to these parties. And it’s likely that each of them will want to control who has access to which records. So this means we need a way to manage…

Data sharing, reconciliation and synchronisation: each record will have one or more owners, but there will be other parties who need to have a copy, and they need to be sure their copy is the most recent version and that it has been validly created. Where did it come from? If it was updated, who did it, when, and was it done in accordance with any pre-agreed rules? After all, these records may represent real legal agreements. The need for updates that might require sign-off creates a requirement for…
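The provenance question above can be pictured as a hash-linked version history, where each version commits to its predecessor, so any party holding a copy can check that the history it was sent is intact (a simplified illustration, not any particular platform’s design):

```python
import hashlib
import json

# Hypothetical sketch: each new version of a shared record embeds the hash
# of the previous version, so tampering or missing versions are detectable.
def version_hash(payload: dict, prev_hash: str) -> str:
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def verify_history(versions: list) -> bool:
    prev = "genesis"
    for v in versions:
        if v["hash"] != version_hash(v["payload"], prev):
            return False  # a version was altered, or one is missing
        prev = v["hash"]
    return True

# Build a two-version history and check it.
h1 = version_hash({"qty": 10}, "genesis")
h2 = version_hash({"qty": 12}, h1)
history = [{"payload": {"qty": 10}, "hash": h1},
           {"payload": {"qty": 12}, "hash": h2}]
print(verify_history(history))  # True
```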

Workflow: whenever firms are interacting with each other, invariably there are complex processes that need to be orchestrated. Sign-offs and approvals need to be obtained. And the inevitable delays and glitches have to be resolved. And these workflows, critically, need to be managed across and between all the different firms involved in the process… so the workflow layer needs to have an inherent and deep understanding of identity, and an ability to coordinate between firms, not just within a single firm. But if every single update needs manual sign-off, we’d never get anything done. So this creates a need for:
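A cross-firm workflow of this kind can be sketched as an explicit state machine in which each transition names the party whose sign-off it requires (party names are invented for illustration):

```python
# Hypothetical sketch: each allowed transition is keyed by the current state
# and the identity of the firm signing off, so approvals are coordinated
# between firms, not just within one.
TRANSITIONS = {
    ("PROPOSED", "BuyerCorp"): "APPROVED_BY_BUYER",
    ("APPROVED_BY_BUYER", "SellerCorp"): "FINALISED",
}

def advance(state: str, signing_party: str) -> str:
    next_state = TRANSITIONS.get((state, signing_party))
    if next_state is None:
        raise ValueError(f"{signing_party} cannot sign off in state {state}")
    return next_state

state = "PROPOSED"
state = advance(state, "BuyerCorp")   # the buyer approves first...
state = advance(state, "SellerCorp")  # ...then the seller finalises
print(state)  # FINALISED
```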

Shared, distributed, verifiable business logic execution: applications always contain business logic but in B2B scenarios, the question of who is expected to calculate what, for whom and when, is surprisingly subtle. So there needs to be a way for firms to verify some computations for themselves, and to specify whom they trust to calculate other things for them. After all, that’s how things work today: some things can be taken on trust, other things must explicitly be verified. And it’s rare in the real world to just give some central party complete power to do everything. The B2B application platform needs to reflect and support this real-world sophistication. But a system that is in sync with its peers in other companies but out of sync with the applications in your own company solves nothing. So we also need rich support for:
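One way to picture ‘verify some computations for themselves’ is deterministic re-execution: re-run the pre-agreed logic on your own copy of the inputs and compare it with the result the other firm claims (a toy illustration with made-up numbers):

```python
# Hypothetical sketch: both firms agreed this deterministic logic in advance,
# so either side can recompute a counterparty's claim instead of trusting it.
def settlement_amount(notional: int, rate_bps: int) -> int:
    # integer arithmetic keeps the result identical on every machine
    return notional * rate_bps // 10_000

def verify_claim(notional: int, rate_bps: int, claimed: int) -> bool:
    return settlement_amount(notional, rate_bps) == claimed

print(verify_claim(1_000_000, 25, 2_500))  # True: the claim checks out
print(verify_claim(1_000_000, 25, 9_999))  # False: reject the claim
```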

Integration: the system needs to play well with other internal systems and processes… we have to remember that the firms who need this sort of platform probably already have hundreds of existing applications, each of which is probably a system of record for something or other. So, in addition to synchronising information between parties, we need a sophisticated way to ensure this point of ‘inter-firm truth’ is in sync with all the other places it exists inside each firm. Finally, this entire architecture needs to be built to run between firms, across the internet, so we need a modern approach to…

Security: the system must understand that events, data updates and business process invocations could be arriving from both inside the firm and from outside the firm. Records in your database could be updated automatically if their owner is some other firm. We’ll be communicating non-stop with the outside about critically important business data. We can’t just pretend we’re sitting behind a strong firewall that protects us from the outside. So we need to implement cryptographic protections into the heart of the application: all interactions must be authenticated, data updates need to be controlled by pre-agreed constraints, records need to be fingerprinted in order to detect malicious mutation, and more. In short: these applications need to rely wholeheartedly on applied cryptography, or ‘trust technology’ as we sometimes call it to avoid scaring business people 🙂
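A toy illustration of the fingerprinting and authentication ideas, using a pre-shared key for brevity (a production system would use public-key signatures, and none of this is any specific platform’s API):

```python
import hashlib
import hmac

# Hypothetical sketch: fingerprint each record, and authenticate updates
# arriving from outside the firm against a pre-agreed shared secret.
SHARED_KEY = b"pre-agreed-secret"

def fingerprint(record: bytes) -> str:
    # a stable hash lets any party detect malicious mutation of its copy
    return hashlib.sha256(record).hexdigest()

def sign_update(update: bytes) -> str:
    return hmac.new(SHARED_KEY, update, hashlib.sha256).hexdigest()

def accept_update(update: bytes, mac: str) -> bool:
    # constant-time comparison: reject anything the counterparty didn't sign
    return hmac.compare_digest(sign_update(update), mac)

update = b"inv-001: amount=120"
mac = sign_update(update)
print(accept_update(update, mac))            # True: authentic update
print(accept_update(b"amount=999999", mac))  # False: tampered update
```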

There are, of course, many more requirements. But my experience from building a system that implements the vision I’m outlining here is that these are the requirements that drive most of the architecturally-significant decisions.

Put the power back in the hands of businesses, not the IT vendors

At its heart, the problem we’re trying to solve is one of power dynamics: how can firms come together to improve how their markets work, but without accidentally introducing a ‘monster’ that turns round and eats them all? We’ve all seen what happens when regular centralised platforms obtain a position of elevated power: they get to control everything that is happening on their platform and, as the platform grows, it becomes ever more valuable, attracting more users, whilst simultaneously making it harder for existing users to leave, increasing the pricing power of the operator. This is a problem we’re trying to avoid.

We also need to be pragmatic, however. Every firm wants to believe they’re in control, but not all of them want to incur the cost of running their own infrastructure, at least not at first. So we need an architecture that enables some participants to run their own piece of the shared infrastructure, whilst others experience it as a cloud service. The power comes from having the option to ‘repatriate’ data and control should they feel a central operator or other service provider is becoming over-bearing. You don’t need to exercise that option for it to have power. We might call this empowering such firms with ‘digital sovereignty’.

The good news is that we’re on the cusp of solving these problems: B2B application development platforms now exist. We just haven’t been describing them in this way.

So what have we been calling them?

B2B Application Development Platforms are… Enterprise Blockchains!

The answer is that a world-class B2B application development platform has been engineered in plain sight over the last seven years. It goes by the name of Corda, but you may also know it as an example of an ‘enterprise blockchain.’ This term is used because many of the techniques being applied were first popularised by public blockchain platforms, even though the public and private blockchain communities would be the first to admit their networks do some really quite different things.

I imagine some readers have an eyebrow raised at this point. If so, I’d encourage you to re-read this article and then look at what platforms such as Corda actually do, and what they’re being used for (as opposed to the jaw-dropping amount of hype and bluster that infects this space)…

If you have a Business-to-Business problem that needs an IT solution, and are wondering why the standard development tools don’t quite hit the mark, it’s OK… you’re not the only one. It’s a real problem. The good news is that there’s also a solution: Corda.

In this piece, my objective was to motivate the need for a purpose-built B2B application development platform. But now that you are, I hope, intrigued, you can find out more about how Corda attempts to support this concept by reading the intro whitepaper, heading over to our docs or, most powerfully of all, reading about real-life production projects that are using the platform right now.

Viral Vector Vaccines – More Fun Things I’ve Learned

In this post, I shared what I had learned about how the AstraZeneca ‘viral vector’ vaccine worked and how endlessly fascinating I found it. But I was struggling to understand why it takes so long for the immune system to ‘learn’ about the new threat that the vaccine is trying to teach it about… why it seems to take “weeks” to gain any protection. I still don’t understand but I’ve learned a few more things that others might find interesting too!

Recap of the basic idea

The basic idea behind Viral Vector vaccines such as the AstraZeneca and Johnson & Johnson products is that a harmless virus is genetically modified so that, if it were to infect a human cell, the cell would start producing protein fragments that look a lot like the spike proteins on real Covid viruses. These ‘fake’ spikes become visible to your immune system, which mounts a response. Thus, if you subsequently become infected with the ‘real’ Covid, your immune system is already primed to destroy it.

Do those fifty billion viruses replicate once inside me? Answer: no

When I read that the vaccine relies on injecting a harmless ‘carrier’ virus, my immediate thought was: “I wonder if that virus is able to replicate like normal viruses do?” I saw on my ‘vaccine record’ that there were fifty billion copies of it in my dose, which made me suspect not… after all, why would you need to inject so many if each one had the ability to replicate? The human body only has about 15 trillion cells, so 50 billion viruses is enough for one in 300 cells! Surely you’d need far fewer if each one could trigger the creation of many, many copies?
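The back-of-the-envelope arithmetic, using the figures quoted above, is easy to check:

```python
# Dose-to-cells ratio using the figures in the text above.
virus_particles = 50_000_000_000      # 50 billion particles per dose
body_cells = 15_000_000_000_000       # ~15 trillion cells, as quoted above
print(body_cells // virus_particles)  # 300 -> one particle per ~300 cells
```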

Turns out I was right about this: the modified ‘adenovirus’ that is injected into me is unable to replicate: those fifty billion copies are the only ones I’ll ever have as a result of that shot.

This infographic, from The Royal Society of Chemistry, has a nice explainer on CompoundChem:

As that article explains, it seems like the decision to use a non-replicating virus was a choice, presumably on safety and public acceptance grounds: it would have been possible to design a vaccine where the virus could replicate, and some vaccines for other diseases do work that way. The advantage of the latter, I guess, is that far fewer copies would have had to be injected to start with. It’s interesting to speculate (based on absolutely zero knowledge of the science or of where the bottleneck in production actually is…) whether vaccine rollouts could have been quicker if they’d been based on replicating viruses. Would it have meant any quantity of production could have been spread more broadly?

Note: I’m still not yet clear on what happens to my cells that are infected by one of these non-replicating vector viruses… are these cells then destroyed by my immune system because they present the spike protein? Or are they allowed to live? Can they divide? If so, do their daughters also produce the spike protein?

What happens if my body already has antibodies to the vector virus?

I made a throwaway comment in my last post about how the ‘carrier’ virus has to be carefully selected: if you’ve been exposed to it – or something like it – in the past, your body will attack the virus particles before they have a chance to infect your cells… and so you’ll produce no (or fewer) spike proteins and you’ll presumably develop weaker protection against Covid than would otherwise have been the case. This piece in The Scientist explains more. It explains that this was why the AstraZeneca vaccine uses a modified chimp virus – it’s far less likely the average human has seen it before. And it points out that there’s a downstream consequence: that virus can’t now be used for a malaria vaccine. You really do have to use a different one for each vaccine.

There were a few other interesting tidbits in that article. It was the first time I’d seen an argument that one possible reason for milder side-effects from the AZ vaccine amongst older people is that the older you are the more pathogens you’ve been exposed to and so the more chance there is that your immune system has seen something like the vector virus before. And so relatively more of the fifty billion particles will be destroyed before entering a cell. So I’m even more pleased about my feverish sleepless night now!

Why are vaccines injected into muscle? How long does it take for the virus particles to get to work?

The question that triggered my attempts to learn about this stuff was why does it take weeks for me to gain any meaningful protection from the vaccine when it’s clear that my body was fully responding to the onslaught after barely twelve hours?

It got me wondering whether the mechanism of injection had anything to do with it. For example, if the vector virus is injected into a muscle, how long does it take for all fifty billion virus particles to get to work? And where do they operate? In that muscle? Or do they circulate round the body?

Was the first night reaction in response to all fifty billion viruses going to work at once? Or were only a few of them at work that night and it wasn’t yet enough to persuade my immune system that this is something it should lay down ‘memories’ about? Perhaps it’s going to take a few more weeks until they’ve all infected a cell and enough spike proteins have been produced to get my immune system finally to say “Fine! You win! Stop already! I’ll file this one with all the others on the ‘things we should be on alert for in the future’ shelf. Now stop bothering me!!”?

I was surprised how little definitive information there is about this sort of stuff online. I guess because it’s ‘obvious’ to medical professionals, and they don’t learn their trade from quick skims of Wikipedia and Quora. (I hope).

From what I can tell, the main reason vaccines are injected into the muscle is convenience: the shoulder is at just the right height for a clinician to reach without much effort, it’s an easy target to hit, there’s no need to mess around trying to find a vein, and the risk of complications (eg inflammation of a vein or whatnot) is lower. This literature review makes for an interesting skim.

I’d also wondered if injection into muscle, rather than veins, results in the vaccine having a localised effect… eg is it only my shoulder muscle that churns out the spike proteins? Turns out the answer to that is no: muscle is chosen over, say, fat precisely because it is rich in blood vessels. The vaccine designers want the vaccine vector virus to enter the bloodstream and rush round the body.

And I’d wondered if injection into muscle was in order to create a ‘slow drip drip’ of vaccine into the bloodstream over time and perhaps that would explain why it took so long for the body to develop full immunity. Turns out the answer to that is also ‘no’. It seems that injections into the deltoid muscle (shoulder) are absorbed quicker than those into other commonly used injection sites. Implication: if the manufacturers wanted slow absorption, they wouldn’t be telling doctors to stab patients in the shoulder!

So when I bring all that together, I still remain confused… injecting the vaccine into my shoulder results in quick absorption, and my body was in full ‘fightback’ mode after twelve hours, so it’s hard to imagine there was any meaningful amount of vaccine lingering in my shoulder after, say, 24 hours… it must, by then, all surely have been whizzing round my veins and happily infecting my cells.

So what gives? Why does it take weeks after billions of my cells have been turned into zombie spike protein factories and my immune system has gone on a frenzied counterattack for me to have a meaningful level of ‘protection’ against Covid? (I’m ignoring the relevance of the ‘second dose’ here for simplicity)

I guess the answer must be ‘because that’s just how the immune system works!’

The mathematics of COVID vaccines

My AZ vaccine dose contained FIFTY BILLION virus particles. Wow!

I was fortunate to receive my first COVID vaccine dose yesterday; I received the AstraZeneca product and all seemed to go well. As seems to be common, it made me feel feverish overnight and I slept badly. This was reassuring in a way as it made me feel like it was ‘working’… it was exciting, in a strange sort of way, to imagine the billions of ‘virus fragments’ racing round my body infecting my cells and turning them into zombie spike protein factories!

However, my inability to sleep also made me realise that I had absolutely no idea how it really worked. So as I was struggling to sleep, I started reading more and more articles about these ‘viral vector’ vaccines. They really are quite fascinating. And these articles did answer my first wave of questions… but they also then triggered more questions, to which I couldn’t find any answers at all. I’m not sure I’m going to be particularly productive at work today so thought why not write down what I discovered and list out all my questions. Perhaps my readers know the answers?

Viral Vector Vaccines: Key Concepts

Most descriptions I found online were hopelessly confused or went into so much extraneous detail about various immune system cell types that they were useless in imparting any real intuition. However, what I did seem to discover, at a very high level, was something like the following:

  • A harmless virus is used as the starting point.
    • Interesting detail: the virus needs to be somewhat obscure, to reduce the risk that patients’ bodies have been exposed to it before and thus already have antibodies that would destroy it before it’s had a chance to do its magic
  • It is then genetically engineered so that when it invades a human cell it triggers the cell to start churning out chunks of protein (the famous spike protein) that look a bit like Covid-19 viruses.
  • These spikes eventually become visible to the immune system, which mounts a vigorous response and, in so doing, learns to be on the lookout for the same thing in the future.

In essence, we infect ourselves with a harmless virus that causes our body’s own cells to start churning out little proteins that look similar enough to COVID that our body will be on high alert should a real COVID infection try to take hold.

Or, at least, I think that’s what’s going on.

So many extraneous details

Now, most of the articles I read then go on to talk about things like the following:

  • Technical detail about how several little fragments then have to be assembled to make one ‘spike’.
    • Important but not really critical to understanding the concepts from what I can see
  • The role of different types of immune cells.
    • The only thing these kinds of articles taught me was that it’s clear that immunologists have no idea how the immune system works and their attempts at explaining it to lay readers just makes this painfully obvious 🙂
  • Endless ‘reassuring’ paragraphs about safety.
    • I understand why they do this but it is somewhat depressing that every article has to be written this way, and I can’t help thinking that it may even be counterproductive.

Once and done or an ongoing process?

However, I found the descriptions unsatisfactory in several ways, and maybe my readers know the answers.

The literature talks about how the genetically modified virus cannot replicate. I assume this is because the modification that causes infected cells to churn out spike proteins means that the cell isn’t churning out copies of the virus, as would normally happen? That would make sense if so.

And it would also explain why my ‘vaccination record’ revealed that my dose contained fifty billion viral particles! That’s one for every three hundred cells in my body! Truly mindblowing.

That said, I have no idea what a ‘viral particle’ is. Is that the same as a single copy of a virus?

It’s mindblowing to imagine that 0.5ml of liquid could contain fifty billion virus particles!

Anyhow, if the virus can’t replicate once inside my body, then the only modified virus particles that will ever infect my cells are the ones that were injected yesterday.

And so I guess the next question is: how long does the virus take to start invading my cells and turning them into zombie spike protein factories?

Well: the evidence of my own fever was that it was barely twelve hours before my entire body had put itself on a war footing. And in those twelve hours, presumably a lot had to happen. First, enough virus particles had to invade enough cells. And then, secondly, those infected cells then had to have started to churn out spike proteins in sufficient quantity to catch the attention of my immune system. The invasion must already have been underway before I left the medical centre!

And I guess a related question is: what happens after those fifty billion viruses were injected into my right deltoid muscle? Do they just start invading cells in that region and so my shoulder muscle becomes my body’s spike protein factory? Or do they migrate all over my body and enlist all my different cell types in some sort of collective endeavour? How long does this migration take if so? Is this what explains the time lag from “body is on a war footing after twelve hours” to “you ain’t got no protection for at least three weeks”? Are the majority of the particles floating around for days or weeks before invading a cell? Or is the full invasion done within hours of injection?

Put another way: if the only vaccine virus my body will ever see are the fifty billion copies that were injected yesterday and if after twenty four hours my body already seems back to normal from a fever perspective, what is actually going on over the next few weeks?

I did wonder if perhaps there is some reproduction going on… but not of the virus, but of the cells that have been invaded. That is: imagine the vaccine virus invades one of my cells and forces it to start churning out spike proteins. Presumably that cell will itself periodically divide. What happens here? Does the cell get killed by the immune system before it has a chance to replicate (because it’s presenting the spike protein)? Or do many of these cells actually replicate and hence create daughter cells? Do those daughter cells also churn out spike proteins? That process would explain a multi-generational multi-week process, I guess? But it wouldn’t be consistent with statements that the vaccine doesn’t persist in your DNA.

Or is the lag all to do with the immune system itself, and there’s some process that takes weeks to transition the body from “wow… we now know there’s a new enemy to be on the look out for” to “we’re now completely ready for it should it ever strike”?

As you can probably tell, I’m hopelessly confused, but also fascinated. If anybody can point me to explainers about the mathematics of vaccination, I’d be enormously grateful. For example:

  • How many of the 50 billion virus particles are typically expected to successfully enter cells?
  • How long does this invasion process take? Hours? Weeks?
  • Is there any replication going on (either of the virus or of the cells they infect?)
  • Do the virus particles diffuse across the body from the injection site? If so, how? And how long does it take?
  • Or is the time lag all to do with the immune system’s own processes?

I didn’t expect to end up learning (or learning I didn’t know) so much!

UPDATE: Follow-up post here.

A brief history of middleware and why it matters today

 

This blog post is a lightly edited reproduction of a series of tweets I wrote recently.
  • This tweetstorm is a mini history of enterprise middleware. It argues that the problems we solved for firms 10-20 years ago are ones we can now solve for markets today. This blog post elaborates on a worked example with @Cordablockchain (1/29)
  • I sometimes take perverse pleasure in annoying my colleagues by using analogies from ancient enterprise software segments to explain what’s going on with enterprise blockchains today… But why should they be the only ones that suffer? Now you can too! (2/29)
  • Back in the late 90s and early 2000s, people began to notice that big companies in the world had a problem: they’d built or installed dozens or hundreds of applications on which they ran their businesses… and none of these systems talked to each other properly… (3/29)
  • IT systems in firms back then were all out of sync… armies of people were re-keying information left, right and centre. A total mess and colossal expense (4/29)
  • The solution to this problem began modestly, with products like Tibco Rendezvous and IBM MQSeries. This was software that sat in the _middle_ connecting applications to each other… if something interesting happened in one application it would be forwarded to the other one (5/29)
  • Tibco Rendezvous and IBM MQSeries were like “email for machines”. No more rekeying. A new industry began to take shape: “enterprise middleware”. It may seem quaint now but in the early 2000s, this industry was HOT. (6/29)
  • But sometimes the formats of data in systems were different. So you needed to transform it. Or you had to use some intelligence to figure out where to route any particular piece of data. Enterprise Application Integration was born: message routing and transformation. (7/29)
  • Fast forward a few years and we had “Enterprise Service Buses” (ESBs – the new name for EAI) and Service Oriented Architecture (SOA). Now, OK… SOA was a dead end and part of the reason middleware has a bad name today in some quarters. (8/29)
  • But thanks to Tibco, IBM, CrossWorlds, Mercator, SeeBeyond and literally dozens of other firms, middleware was transforming the efficiency of pretty much every big firm on the planet. “Middleware” gets a bad name today but the impact of ESB/EAI/MQ technologies was profound (9/29)
  • Some vendors then took it even further and realised that what all these parcels of data flying around represented were steps in _business processes_. And these business processes invariably included steps performed by systems _and_ people. (10/29)
  • The worlds of management consulting (“business process re-engineering”) and enterprise software began to converge and a new segment took shape: Business Process Management. (11/29)
  • The management consultants helped clients figure out where the inefficiencies in their processes were, and the technologists provided the software to automate them away. (12/29)
  • In other words, BPM was just fancy-speak for “figure out all the routine things that happen in the firm, automate those that can be automated, make sure the information flows where it should, when it should, and put some monitoring and management around the humans” (13/29)
  • Unfortunately, Business Process Management was often oversold – the tech was mostly just not up to it at that point – and its reputation is still somewhat tarnished (another reason “middleware” is a dirty word!) (14/29)
  • But, even given these mis-steps, the arc of progress from “systems that can barely talk to each other” to “systems and people that are orchestrated to achieve an optimised business outcome” was truly astounding. (15/29)
  • Anyway… the point is: this was mostly happening at the _level of the firm_. The effect of the enterprise middleware revolution was to help individual firms optimise the hell out of themselves. (16/29)
  • But few back then even thought about the markets in which those firms operated. How could we have? None of the software was designed to do anything other than join together systems deployed in the same IT estate. (17/29)
  • So now let’s fast forward to today. Firms are at the end of their middleware-focused optimisation journeys and are embarking on the next, as they migrate to the cloud. But the question of inefficiencies between firms remains open. (18/29)
  • Take the most trivial example in payments: “I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?” (19/29)
  • How can we be almost a fifth of the way through the 21st century and lost payments are still a daily occurrence? How can it be that if you and I agree that I owe you some money, we can still get into such a mess when I actually try to pay you? (20/29)
  • As I argue in this post, the problems we solved for firms over the last two decades within their four walls are precisely the ones that are still making inter-firm business so inefficient (21/29)
  • What stopped us solving this 20 years ago with the emerging “B2B” tech? Easy inter-firm routing b/w legal entities (the internet was scary…), broad availability of crypto techniques to protect data, orchestration of workflows between firms without central controller etc (22/29)
  • But we also hadn’t yet realised: it’s not enough merely to move data. You need to agree how it will be processed and what it means. This was a Bitcoin insight applied to the enterprise and my colleague @jwgcarlyle drew this seminal diagram that captures it so well. (23/29)
  • And my point is that the journey individual firms went on: messaging… integration… orchestration… process optimisation – is now a journey that entire markets can go on. The problems we couldn’t solve back then are ones we now can solve. (24/29)
  • What has changed? Lazy answer: “enterprise blockchain”… lazy because not all ent blockchains are designed for same thing, plus the enabling tech and environment (maturation of crypto techniques, consensus algorithms, emergence of industry consortia, etc) is not all new (25/29)
  • But the explosion of interest in blockchain technology was a catalyst and made us realise that maybe we could move to common data processing and not just data sharing at the level of markets and, in so doing, utterly transform them for the better. (26/29)
  • In my blog post I make this idea concrete by talking about something called the Corda Settler. In truth, it is an early – and modest – example of this… a small business process optimisation (27/29)
  • The Corda Settler optimisation is simple: move from asking a payee to confirm receipt once sent and instead pre-commit before the payment is even made what proof will convince them it was done, all enabled through secure inter-legal-entity-level communication and workflow (28/29)
  • But the Settler is also profound… because it’s a sign the touchpaper has truly been lit on the next middleware revolution… but this time focused on entire markets, not just individual firms (29/29)


Process Improvement and Blockchain: A Payments Example

“I just wired you the funds; did you get them?”… “No… I can’t see them. Which account did you send them to? Which reference did you use? Can you ask your bank to chase?”


The cheque is in the post…

How can we be almost a fifth of the way through the twenty-first century and this is still a daily occurrence? How can we be in a world where, even if you and I agree that I owe you some money, it’s still a basically manual, goodwill-based process to shepherd that payment through to completion?!

The “Settler pattern” reduces opportunities for error and dispute in any payments process – and does so by changing the process

This was the problem we set out to solve when we built the Corda Settler. And I was reminded about this when I overheard some colleagues discussing it the other day. One of them wondered why we don’t include the recipient of a payment in the set of parties that must agree that a payment has actually been made. Isn’t that a bit of an oversight?!


The Corda Settler pattern works by moving all possible sources of disagreement in a payment process to the start

As I sketched out the answer, I realised I was also describing some concepts from the distant past… from my days in the middleware industry. In particular, it reminded me of when I used to work on Business Process Management solutions.

And there’s a really important insight from those days that explains why, despite all the stupid claims being made about the magical powers of blockchains and the justifiable cynicism in many quarters, those of us solving customer problems with Corda and some other enterprise-focused blockchain platforms are doing something a little bit different… and its impact is going to surprise a lot of people.

Now… I was in two minds about writing this blog post because words like “middleware” and “business process management” are guaranteed to send most readers to the “close tab” button… Indeed, I fear I am a figure of fun amongst some of my R3 colleagues… what on earth is our CTO – our CTO of all people! – doing talking about boring concepts from twenty years ago?!

But, to be fair, I get laughed at in the office by pretty much everybody some days… especially those when I describe Corda as “like an application server but one where you deploy it for a whole market, not just a single firm” or when I say “it’s like middleware for optimising a whole industry, not just one company.”

“Application Servers? Middleware? You’re a dinosaur! It’s all about micro-services and cloud and acronyms you can’t even spell these days, Richard… Get with the programme, Grandad!”

Anyway… the Corda Settler discussion reminded me I had come up with yet another way to send my colleagues round the bend…  because I realised a good way to explain what we’re building with Corda – and enterprise blockchains in general – isn’t just “industry level middleware” or “next generation application servers”… it’s also a new generation of Business Process Management platform…  and many successful projects in this space are actually disguised Industry Process Re-Engineering exercises.

Assuming you haven’t already fallen asleep, here’s what I mean.

Enterprise Blockchains like Corda enable entire markets to move to shared processes

Think back to the promise we’re making with enterprise blockchains and what motivated the design of Corda:

“Imagine if we could apply the lessons of Bitcoin and other cryptocurrencies in how they keep disparate parties in sync about facts they care about to the world of regular business…  imagine if we could bring people who want to transact with each other to a state where they are in consensus about their contracts and trades and agreements… where we knew for sure that What You See Is What I See – WYSIWIS. Think of how much cost we could eliminate through fewer breaks, fewer reconciliation failures and greater data quality… and how much more business we could do together when we can move at pace because we can trust our information”

And that’s exactly what we’ve built. But… and sorry if this shocks anybody… Corda is not based on magic spells and pixie dust…  Instead, it works in part because we drive everybody who uses it to a far greater degree of commonality.

Because if you’re going to move from a world where everybody builds and runs their own distinct applications, which are endlessly out of sync, to one where everybody is using a shared market-level application, what you’re actually saying is: these parties have agreed in some way to align their shared business processes, as embodied in this new shared application.  And when you look at it through that lens, it’s hardly surprising that this approach would drive down deviations and errors…!

I mean: we’re documenting – in deterministically executed code, and for each fact we jointly care about – who can update which records, when and in what ways. And to do that we have to identify and ruthlessly eliminate all the places where disagreements can enter the process.

Because if we know we have eliminated all areas of ambiguity, doubt and disagreement up-front, then we can be sure the rest of our work will execute like a train on rails.

Just like trains, if two of them start in the same place and follow the same track… they’ll end up in the same place at the end.
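The train analogy can be made concrete with a toy sketch (illustrative only: the names and structure here are my own, not Corda’s actual API). Both parties run the identical, deterministic verification rules over a proposed update to a shared fact, so they necessarily reach the same verdict:

```python
# Toy illustration of deterministic shared rules (not Corda's real API):
# every participant runs the same verify function on the same inputs,
# so all of them agree on whether a proposed update is valid.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligation:
    payer: str
    payee: str
    amount: int          # minor units, e.g. cents
    settled: bool = False

def verify_settle(before: Obligation, after: Obligation, signer: str) -> bool:
    """Shared rules: only the payer may mark the obligation settled,
    it must not already be settled, and nothing else may change."""
    return (
        signer == before.payer
        and not before.settled
        and after == Obligation(before.payer, before.payee, before.amount, True)
    )

iou = Obligation("alice", "bob", 100_00)
proposed = Obligation("alice", "bob", 100_00, settled=True)

# Each party evaluates the identical rules and reaches the same verdict.
assert verify_settle(iou, proposed, signer="alice")
assert not verify_settle(iou, proposed, signer="bob")
```

Because the rules are agreed and deterministic, there is no step at which the two parties’ views of the fact can drift apart.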

Reducing friction in payments: a worked example

So, for payments, what are those things? What are the things that, if we don’t get them right up front, can lead to the “I haven’t received your payment” saga I outlined at the start of the post?

Well, there’s the obvious ones like:

  • How much needs to be paid?
  • By whom?
  • To whom?
  • In what kind of money/asset?

There are trickier ones such as:

  • Over what settlement rail should the payment be made?
  • To which destination account must the money be sent?
  • What reference information should accompany it?

These are trickier since there is probably a bit of automated negotiation that needs to happen at that point… we need to find a network common to us both… and the format of the routing strings is different for each and so forth. But if you have an ability to manage a back-and-forth negotiation (as Corda does, with the Flow Framework) then it’s pretty simple.
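As a sketch of what that up-front negotiation produces (every name here is hypothetical and for illustration only; this is not the real Flow Framework API), the two sides might intersect their supported rails and assemble a single, unambiguous payment instruction before any money moves:

```python
# Hypothetical sketch of the up-front negotiation step (illustrative
# names, not Corda's Flow Framework): pick a rail common to both
# parties, then fix every detail of the payment in one agreed record.
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentInstruction:
    amount: int            # minor units
    currency: str
    payer: str
    payee: str
    rail: str              # settlement network both sides support
    destination: str       # rail-specific routing/account string
    reference: str

def negotiate_rail(payer_rails: set[str], payee_rails: set[str]) -> str:
    """Pick a settlement rail common to both parties."""
    common = payer_rails & payee_rails
    if not common:
        raise ValueError("no common settlement rail")
    return sorted(common)[0]   # deterministic tie-break

rail = negotiate_rail({"SWIFT", "SEPA"}, {"SEPA", "Faster Payments"})
instruction = PaymentInstruction(
    amount=100_00, currency="EUR", payer="alice", payee="bob",
    rail=rail, destination="DE89370400440532013000",  # illustrative IBAN
    reference="INV-2019-001",
)
```

The point is that once this record exists and both parties have signed up to it, none of the questions above can become a source of dispute later.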

But that still leaves a problem… even if we get all of these things right, we’re still left hanging at the end. Because even if I have paid you the right amount to the right account at the right time and with the right reference, I don’t know that you’ve received it.

And so there’s always that little bit of doubt. Until you’ve acknowledged it you could always turn around in the future and play annoying games with me by claiming not to have received it and force us into dispute… and we’d be back to square one! We’d be in exactly the same position as before: parties who are not in consensus and are instead seeing different information.

And it struck us as a bit mad to be building blockchain solutions that kept everybody in sync about really complicated business processes in multiple industries, only for the prize to be stolen from our grasp at the last moment… when we discover that the payment – invariably the thing that needs to happen at the end of pretty much every process – hasn’t actually been acknowledged.

It would be as if our carefully tuned train had jumped off the rails and crashed down the embankment just at the last moment. Calamity!

So we added a crucial extra step when we designed the Corda Settler. We said: not only do you need to agree on all the stuff above, you also need to agree: what will the recipient accept from the sender as irrefutable proof that the payment has been made?

And with one bound, we were free!

Because we can now… wait for it… re-engineer the payment process. We can eliminate the need for the recipient to acknowledge receipt. Because if the sender can secure the proof that the recipient has already said they will accept irrefutably then there is no need to actually ask them… simply presenting them with the proof is enough, by prior agreement.

And this proof may be a digital signature from the recipient bank, or an SPV proof from the Bitcoin network that a particular transaction is buried under sufficient work… or whatever the relevant payment network’s standard of evidence actually is.
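A minimal sketch of the idea (illustrative only: a stdlib HMAC from the payee’s bank stands in for a real digital signature or SPV proof, and none of these names come from the actual Corda Settler):

```python
# Sketch of pre-agreed proof of payment (not the real Corda Settler
# API): the recipient commits, before payment, to a predicate that
# defines what counts as irrefutable proof. An HMAC keyed by the
# payee's bank stands in for a real signature or SPV proof.
import hashlib
import hmac

BANK_KEY = b"recipient-bank-signing-key"   # held by the payee's bank

def bank_receipt(payment_details: bytes) -> bytes:
    """What the payee's bank issues when the funds actually arrive."""
    return hmac.new(BANK_KEY, payment_details, hashlib.sha256).digest()

# Agreed up front: this predicate, not a manual acknowledgement from
# the payee, is what settles the obligation.
def accepted_proof(payment_details: bytes, proof: bytes) -> bool:
    return hmac.compare_digest(proof, bank_receipt(payment_details))

details = b"EUR 100.00 alice->bob ref=INV-2019-001"
proof = bank_receipt(details)   # the sender obtains this from the rail

# The sender can now mark the payment settled without asking the payee.
assert accepted_proof(details, proof)
assert not accepted_proof(details, b"forged")
```

Since the predicate was fixed before the payment was made, presenting a proof that satisfies it leaves the recipient nothing to dispute.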

But the key point is: we’ve agreed it all up front and made it the sender’s problem… because they have the incentive to mark the payment as “done”. As opposed to today, where it’s the recipient who must confirm receipt but has no incentive to do so, and may have an incentive to delay or lie.

But building on this notion of cryptographic proof of payment, the Corda Settler pattern has allowed us to identify a source of deviation in the payment process and move it from the end of the process, where it is annoying and expensive and makes everybody sad, to the start and, in so doing, keep the train on the rails.

And this approach is universal. Take SWIFT, for example. The innovations delivered with their gpi initiative are a perfect match for the payment process improvements enabled by the Settler pattern.

The APIs made available by Open Banking are also a great match for this approach.

Middleware for markets, Business Process Management for ecosystems, Application Servers for industries..!

And this is what I mean when I say platforms like Corda actually achieve some of their magic because they make it possible to deliver seemingly trivial improvements to inter-firm business processes and, in so doing, drive up levels of automation and consensus.

So this is why I sometimes say “Corda is middleware for markets”.

It’s as if the first sixty years of IT were all about optimising the operations of individual firms… and that the future of IT will be about optimising entire markets.

Corda: Open Source Community Update

The Corda open source community is getting big… it’s time for a dedicated corda-dev mailing list, a co-maintainer for the project, a refreshed whitepaper, expanded contribution guidelines, and more..!

 

It feels like Corda took on board some rocket fuel over the last few months. Corda’s open source community is now getting so big and growing so fast that it’s just not possible to keep up with everything any more — a nice problem to have, of course. And I think this is a sign that we’re reaching a tipping point as an industry as people make their choices and the enterprise blockchain platforms consolidate down to what I think we’ll come to describe as “the big three”.

Read the rest of this post over at the Corda Medium blog…

Introducing the Corda Technical Advisory Council

I’m delighted to announce the formation of the Corda Technical Advisory Council (the Corda TAC). This is a group of technical leaders in our community — who most of you know well — who have volunteered to commit their time over and above their existing contributions to the Corda ecosystem to provide advice and guidance to the Corda maintainers.

Members of the TAC are invited by the maintainers of the Corda open source project (Mike and Joel) and will change over time — the inaugural members are listed below. If you’re also interested in contributing to the TAC, please do let us know — most usefully through your technical leadership and contribution to the ecosystem!

Read the rest of this post over at the Corda Medium blog…!

Universal Interoperability: Why Enterprise Blockchain Applications Should be Deployed to Shared Networks

Business needs the universal interoperability of public networks but with the privacy of private networks. Only the Corda network can deliver this.

The tl;dr of this post is:

  • Most permissioned blockchains use isolated networks for each application, and these are unable to interoperate. This makes no sense.
  • We should instead aspire to deploy multiple business applications to an open, shared network. But this needs the right technology with the right privacy model.
  • Corda, the open source blockchain platform we and our community are building, was designed for just this from day one. But there was a piece missing until now: the global Corda network.
  • In this post I describe the global Corda network for the first time in public and how it will be opened up to the entire Corda community in the coming months.
  • If you’re building blockchain solutions for business, you need to read this post…

Think back to how excited you were (well, I was!) when you first heard about Ethereum. The idea of a platform for smart contract applications, all running across a common network, with interoperability between all these different applications written by different people for different purposes. It was mind-blowing.

And it’s not just a vision, of course. The public Ethereum community have actually delivered it! Indeed, emerging standards such as ERC20 are a demonstration of the power of a shared, interoperable network and the power of standardisation.

So the question we asked ourselves at R3 back in 2015 was: imagine if you could apply that idea to business… imagine if different groups of people, each deploying applications for their own commercial purposes, woke up one day and discovered that those apps could be reassembled and connected in ways unimaginable to their creators but in a way that respected privacy and which could be deployed in real-world businesses with all the complexity that entails.

It seemed obvious to us that this was the right vision. And that it would require a universal, shared, open network, the topic of this post.

But it dawned on me recently that this is not how everybody in the permissioned blockchain space sees it. The consequences for users could be serious.

The rest of this post is continued at our medium site here!


What problem are we trying to solve with Corda?

Todd pointed me at a great piece about “crypto finance” versus “regular finance” by Bloomberg’s Matt Levine earlier.

I thought he did a good job of nailing the essential contradiction that arises if one tries naively to apply Bitcoin or Ethereum principles directly to traditional finance. He uses the example of an Interest Rate Swap (IRS) and how a fully pre-funded model would kind of defeat the point…

This caught my attention because an IRS was the very first project we ever did on Corda! So it’s something I know a little about… Anyway, I think the key to understanding the mismatch is captured in a post of mine from 2015 about how two revolutions are playing out in parallel.

Anyway… full details over on my Medium page.

New to Corda? Start here!

Are you just hearing about Corda for the first time? Want to understand how Corda differs from other platforms and how its unique architecture is perfectly suited to address the real problems faced by today’s businesses?

I just posted to the Corda Medium page with a list of links and background info that should help answer that question…