I integrated AI into a blockchain Digital Currency solution. Here’s what I learned

Cats and dogs can never be friends. And blockchains and AI should always be kept apart. Or so I thought until I tried to implement ‘purpose bound money.’

A key selling point for some Central Bank Digital Currency initiatives is ‘purpose bound money’, a concept that could enable innovations such as conditional cash transfers to citizens. Reasonable people can disagree about the political wisdom of such ideas, but how would you actually implement one? Here’s what I learned when I ran my own experiment.

The CTO of a tech firm is invariably the first person asked whenever some cool new technology hits the news. Does our product use this amazing new breakthrough? And if not, why not?

Your job as CTO is to politely explain why the new technology would be a distraction from the mission and can be safely ignored: focus and execution are the name of the game. But… it pays to remain open-minded; sometimes even the most overhyped ideas turn out to have value. “Integrating blockchains with machine learning models” may be a perfect example of just this.

And this would be extremely surprising if true because, on its face, the idea of integrating blockchains and AI models is absurd.

Blockchains are deterministic; LLMs are non-deterministic. So how can they ever be friends?

At their heart, blockchains are about determinism and certainty. Indeed, the fundamental purpose of ‘Enterprise DLT’ platforms like the Corda platform my firm builds is to give participants in a market the rock-solid assurance that ‘what you see is what I see’ (WYSIWIS). Literally: “I know that my books and records – my data about loans, trades and deals we’ve done with each other – are identical to yours.” This is an extremely valuable idea. If I know my systems are in sync with yours, we can transact with confidence, make decisions with certainty, and escape from the tyranny of reconciliation and broken trades.

But the reason blockchains can deliver this WYSIWIS promise is because the market participants execute the same code with respect to the same data in the same deterministic context. Same. Same. Same. Everything the same. Annihilate inconsistency. Dispatch with divergence.

That’s how we know we will reach the same conclusion about the outcome: if we start in the same place, and then do exactly the same things, then we’ll end up in the same place! Precision, determinism, repeatability and strict rules are at the heart of the enterprise DLT story.

That world couldn’t be more divorced from the way modern Machine Learning models work. Blockchains are deterministic… AI models not so much.

AI models hallucinate. They give you a different answer each time you ask them the same question. They’re biased, and they’re opaque. It’s hard to think of a technology less suited to the problem of deterministically and reliably keeping people in perfect sync with each other.

To see what I mean, take a look at the screenshots below. I asked ChatGPT the same question twice. Here is the first time I did it:

I asked ChatGPT to concatenate “Hello” and “World” and it said “HelloWorld”. So far so good.

And now for the second:

I then asked ChatGPT the same question a second time. And it gave me a totally different answer…

A cutting edge AI model literally cannot reliably achieve ‘Hello World’!

Imagine trying to build a system whose results could be reliably replayed and replicated on this foundation. 

Now, I should say that this phenomenon is by design, and for good reason. Most questions have many possible correct answers, and it would be a very boring world if ChatGPT only ever chose one.

So these models use randomness so that the full range of possibilities can be explored. But it’s a problem in the context of a deterministic blockchain.
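To make that source of randomness concrete, here is a minimal sketch of how a language model samples its next token from a probability distribution. This is a toy illustration, not OpenAI’s actual implementation: at temperature zero the model always picks the most likely token, while at higher temperatures repeated runs can diverge.

```python
import math
import random

def sample_next_token(probs, temperature, rng):
    """Pick the next token from a {token: probability} distribution.

    temperature == 0 means greedy (deterministic) decoding; higher
    temperatures flatten the distribution and let the random number
    generator pick less likely tokens.
    """
    if temperature == 0:
        return max(probs, key=probs.get)  # always the same answer
    # Re-weight each probability by the temperature, then sample.
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token

# A toy distribution over possible next answers.
probs = {"HelloWorld": 0.6, "WorldHello": 0.3, "Hello World": 0.1}

print(sample_next_token(probs, 0, random.Random()))    # greedy: always "HelloWorld"
print(sample_next_token(probs, 1.0, random.Random()))  # sampled: could be any of the three
```

The second call is exactly the behaviour I ran into above: ask twice, and the sampler is free to wander.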

But the questions kept coming. So, sceptical as I was, I kept an open mind.

“Purpose Bound Money” in a way that could actually be delivered

And then, one day, I was asked for advice on a ‘Central Bank Digital Currency’ project. The client wanted to explore ‘conditional transfers’. Think of ‘digital food vouchers’, say, or vulnerable people giving their carers an allowance that can only be spent on groceries.

Many of these projects are being built on blockchains. This feels like a good technology fit: there’s a need to keep various parties in sync – the Central Bank, commercial banks, retailers and so on – and the underlying security requirements can be a good match for these platforms too.

And when you first look at the ‘conditional transfer’ problem, it also looks suited to a blockchain approach. Surely we want determinism here: the system shouldn’t say ‘yes’ to a purchase one day and ‘no’ to the same purchase the next day, right?

But… are we sure we could completely capture all the nuance? How on earth would you program such a thing? What human could possibly enumerate every type of purchase?

Could you really encode the idea that a child can buy one Snickers bar, but ten is probably too much? What happens if your system knows that broccoli is a vegetable, but a retailer has mis-spelled it as brocolli?

How do you reliably teach a deterministic system that one candy bar is probably OK for a child to buy, but they shouldn’t be allowed to buy dozens of them?!

Deterministic systems are hopeless at this sort of thing. But it then struck me: maybe this is something that a large language model would be good at? So I thought I’d give it a try.

I started by writing a prompt for ChatGPT that I could then interact with using the OpenAI API:

A simple prompt to explore the idea of an AI-powered ‘purpose bound money’ engine
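For readers who can’t see the screenshot, the gist was to ask the model to act as a purchase-approval engine. Here is a hedged reconstruction in Python: the exact prompt wording, the example purpose and item, and the model name are my illustrative choices here, not necessarily what appeared in the original experiment.

```python
def build_messages(purpose: str, item: str, quantity: int) -> list[dict]:
    """Construct a chat prompt asking the model to approve or reject a purchase."""
    system = (
        "You are a 'purpose bound money' engine. A payer has restricted "
        f"their funds to the following purpose: {purpose}. "
        "Given an item and a quantity, answer APPROVE or REJECT, "
        "followed by a one-sentence reason."
    )
    user = f"Item: {item}. Quantity: {quantity}."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def ask_model(purpose: str, item: str, quantity: int, model: str = "gpt-4o-mini") -> str:
    """Send the prompt via the OpenAI Chat Completions API.

    Requires the `openai` package and an OPENAI_API_KEY in the
    environment; the model name is an illustrative choice.
    """
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(purpose, item, quantity),
    )
    return response.choices[0].message.content

# e.g. ask_model("groceries for a child", "Snickers bar", 10)
```

Nothing clever is happening here; all of the intelligence lives in the model on the other side of the API call.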

And then I tested it with some examples. The results were superb:

ChatGPT not only knew that books with adult themes might be unsuitable for children… it also had enough cultural knowledge to understand that Roald Dahl’s work typically didn’t fall into this category, but Stephen King’s did.

So it’s tempting to say that integrating blockchains and AI could be a perfect marriage. Except… nothing in life is ever that simple.

We still need determinism

The problem here is the original blockchain insight: we need determinism. We need each party to reach the same conclusion. It’s no good if the retailer thinks a Snickers bar is OK but the bank thinks it is not. We need the model to give a consistent answer. But, as we saw above, this is the one thing these models, by design, don’t routinely do.

As we learned above, if you ask a large language model the same question twice, you might get a different answer. So how would anybody ever verify that the decisions enforced by the blockchain were correct – or, at least, were sourced from a trained and well-governed model?

The answer is that we need to import some cryptographic techniques into AI. Two in particular:

First, when we send a query to an artificial intelligence API, we need it to digitally sign the answer. That way we can subsequently prove where the answer came from. This solves part of the determinism problem: if a retailer can prove the ‘OK’ came from a genuine model, then the bank doesn’t need to ask the same question themselves; they can trust the signed response from the model.
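Today’s model APIs don’t actually sign their responses, so what follows is purely a sketch of what such a scheme could look like. I use an HMAC as a dependency-free stand-in for a real signature; a production scheme would use an asymmetric signature (Ed25519, say) so that verifiers never hold the secret key.

```python
import hmac
import hashlib

# Stand-in for the model operator's signing key (illustrative only).
SIGNING_KEY = b"model-operator-secret"

def sign_response(query: str, answer: str) -> str:
    """The model operator binds the answer to the exact query that was asked."""
    payload = f"{query}\x00{answer}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_response(query: str, answer: str, tag: str) -> bool:
    """A relying party checks the answer is genuine and untampered."""
    expected = sign_response(query, answer)
    return hmac.compare_digest(expected, tag)

tag = sign_response("May a child buy 10 Snickers bars?", "REJECT")
assert verify_response("May a child buy 10 Snickers bars?", "REJECT", tag)
assert not verify_response("May a child buy 10 Snickers bars?", "APPROVE", tag)
```

The key design point is that the signature covers the query as well as the answer, so a signed ‘OK’ for one purchase can’t be replayed against a different one.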

Secondly, we have to go back to the fundamental source of the non-determinism in the first place, randomness, and deal with it head on so we can reliably re-run the query should we need to. To do this we need the model to tell us what sources of non-determinism it was relying on when it answered the question. We need it to ‘commit’ to its inputs, in cryptographic terminology. And if we have that, then we can reliably re-run the query and get the same answer.
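In code, ‘committing to the inputs’ could be as simple as hashing a canonical encoding of everything that influenced the answer: the model identifier, the full prompt, the sampling temperature and the random seed. This is a sketch of the idea rather than a production protocol; the point is that anyone re-running the query against the same committed inputs can reproduce, and therefore verify, the answer.

```python
import hashlib
import json

def commit_to_inputs(model: str, prompt: str, temperature: float, seed: int) -> str:
    """Hash a canonical JSON encoding of every input that shaped the answer."""
    canonical = json.dumps(
        {"model": model, "prompt": prompt, "temperature": temperature, "seed": seed},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# The commitment published alongside the model's answer...
original = commit_to_inputs("gpt-4o-mini", "May a child buy 10 Snickers bars?", 0.7, 12345)

# ...lets a verifier confirm they are re-running the *same* query...
replay = commit_to_inputs("gpt-4o-mini", "May a child buy 10 Snickers bars?", 0.7, 12345)
assert replay == original

# ...and any change to an input (here, the seed) is immediately visible.
assert commit_to_inputs("gpt-4o-mini", "May a child buy 10 Snickers bars?", 0.7, 99) != original
```

The canonical JSON encoding matters: both sides must serialise the inputs byte-for-byte identically, or the commitments won’t match even when the inputs do.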

OpenAI already provides some of what we need. For example, in the screenshot below, we can detect that the model we’re interacting with has changed, which is one important reason answers can vary over time. And my simple example could easily be extended to capture the randomness as well.
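OpenAI’s Chat Completions API illustrates the point: requests can pass a `seed` parameter, and each response carries a `system_fingerprint` field identifying the backend configuration that served it. Comparing fingerprints is a cheap way to detect the model-change case just described. A minimal verifier might look like this; the dict shape and the fingerprint values are my own illustration.

```python
def verify_replay(stored: dict, replayed: dict) -> str:
    """Compare a stored (fingerprint, answer) pair against a re-run.

    The `system_fingerprint` key mirrors the Chat Completions response
    field of the same name; the surrounding dict shape is illustrative.
    """
    if stored["system_fingerprint"] != replayed["system_fingerprint"]:
        return "MODEL_CHANGED"  # backend config differs: replay not comparable
    if stored["answer"] != replayed["answer"]:
        return "MISMATCH"       # same config and seed, yet a different answer
    return "VERIFIED"

# Fingerprints in the style OpenAI returns (e.g. "fp_..."), invented here.
record = {"system_fingerprint": "fp_abc123", "answer": "REJECT"}
print(verify_replay(record, {"system_fingerprint": "fp_abc123", "answer": "REJECT"}))
print(verify_replay(record, {"system_fingerprint": "fp_def456", "answer": "REJECT"}))
```

A ‘MODEL_CHANGED’ result isn’t proof of foul play; it simply tells the verifier that byte-for-byte reproduction is no longer guaranteed, even with the same seed.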

So there we have it: with a little bit of cryptographic insight, Blockchains and AI can live together in harmony.

3 thoughts on “I integrated AI into a blockchain Digital Currency solution. Here’s what I learned”

  1. LLMs can be deterministic. The only true nondeterminism in them comes from the sampling process, which is the part that chooses the next token from a distribution. There are a few different ways this process can be made deterministic, albeit with some tradeoffs. For example, always taking the highest-probability token, or just re-using the same random seed at generation time.

    There is another aspect to their behaviour which might “look” nondeterministic: the outputs can be somewhat chaotic with respect to the inputs, in the sense that a small perturbation to the input text could lead to a large difference in output.

  2. Exactly… that’s the point I was getting at in the ‘commit to the inputs’ part… I’m not trying to constrain how the model works (eg how it picks the next token or which random seed to use)… just saying it would be super-useful if, in the API response, the model specified precisely which choices/settings/params it had used so that somebody wanting to re-run the same query for themselves (against the same model and with the same prompt, etc) could verify that the result was indeed a genuine possible output from the model

  3. deterministic = fixed; while with more and more advanced computing resources in place, it can always be evolving (un-fixed)… every second brings new feedback, with those factors being weighted continuously…
