Brian Fabien Crain and Sunny Aggarwal interviewed Agoric’s Chief Scientist, Mark S. Miller, on the history of smart contracts, the Agoric Papers, and more, on the Epicenter Podcast. With thanks to the Epicenter Podcast & Infominer, we have shared the interview here with navigable links. This version has been lightly edited for clarity and readability.

We were joined by Mark S. Miller, Chief Scientist at Agoric. Mark is a computer scientist who has done ground-breaking work on many topics relevant to blockchain and smart contracts going back decades.

We discussed his visionary 1988 Agoric papers, which explored how markets could be applied to the world of software. We also covered how his view of smart contracts, which focused on secure bilateral agreements, complements and converges with blockchain. Finally, we covered his new company Agoric and their conceptualization of higher order smart contracts.

Topics discussed in this interview:

  • Mark’s effort to prevent the government from suppressing the discovery of public key cryptography in the 1970s

  • The legendary project Xanadu and its attempt to create censorship-resistant publishing in the 1970s

  • The Agoric papers and the vision of markets for computation

  • Why AI hasn’t changed the shortcomings of central planning

  • Comparisons and alignments of Mark’s view of smart contracts with Nick Szabo’s

  • Their decade-spanning work on making JavaScript the best language for smart contracts

  • Agoric’s work on higher order smart contracting today

Brian: We’re here today with Mark S. Miller. I tried to have Mark on the show in 2015. He had done this smart contracts work a very long time ago, long before Bitcoin and blockchain. I knew that he wasn’t working in the blockchain or Bitcoin space back then, but I found his email and I emailed him. He was at Google at the time, and he sent me a talk that he did in 1997 about smart contracts and the legal ramifications and technology implications of smart contracts, which was just amazingly prescient. It’s astonishing how many of the ideas that later became so widely used were there. Since then Mark has transitioned: he’s left Google and he’s working fully on decentralized networks, smart contracts, and the blockchain space in general. I’m really excited that the episode is finally happening and that we’re having you on, Mark.

Mark Miller: Well I’m very happy to be here.

Mark S. Miller

Brian: To start off, you’ve been part of this cypherpunk-cryptography world for a long time. But how did you originally become involved in that?

Mark: I’m going to go all the way back to 1977. I was working with Ted Nelson on Xanadu. Xanadu and Augment were the two early hypertext projects, well before the web. Xanadu was the one that had the vision of worldwide hypertext publishing as the new electronic medium for humanity. Ted and I were both heavily influenced by the Ministry of Truth in George Orwell’s 1984. We understood that the coming world of electronic publishing could be a force for oppression and tyranny — or could be a great liberating force giving us all privacy and freedom from censorship. We wanted the second. We saw it as our mission to lead the world into the coming of electronic publishing as a liberating force, but we didn’t know how to do it.

In 1977, Martin Gardner was editing a column for Scientific American, named Mathematical Games. One issue explained the discovery of the first public key algorithm: the RSA algorithm. He did not actually explain the algorithm. He explained the logic of what you could do with a public key system, both the asymmetric encryption for privacy and the asymmetric signing for integrity. He painted a nice picture of the power of this. I called Ted up in the middle of the night: “Ted, we can prevent the Ministry of Truth!” We wrote away for the paper. The paper did not arrive. We found out that the reason it did not arrive is because the U.S. national security apparatus, some part of it, decided that the paper should not be publicly released. They — I’m going to say classified, I don’t know what the legal category is — but they made it clear that they would consider it illegal to distribute the paper.

I got really incensed by this. I got passionate and angry in a way that I have really not in my life before or since, feeling quite literally, “they are going to classify this over my dead body.” I went to MIT, hung around campus, and managed to get my hands on a paper copy. I was careful to handle it only with gloves. I took it to various copy shops — there were no home copy machines. I made lots of copies at different shops. I put them anonymously into envelopes, sending them from a variety of mailboxes, sending them out to home and hobbyist computer organizations and magazines all across the country without any cover letter — just the article itself.

Fortunately, early in 1978, the U.S. government decided to declassify. They gave the green light for distribution of the paper. Communications of the ACM immediately published the paper in the February ’78 issue. I will never have any idea whether the actions I took had any impact. I don’t have any particular reason to believe they did have an impact. But there was the experience of, for example, handing copies of the paper to some select friends and saying, “if I disappear — make sure this gets out.” This was a really radicalizing moment for me: realizing the power of cryptography to change the world, to protect us as individuals from large and oppressive institutions, and that this was worth fighting for.

Brian: Such an amazing story. And I think it’s hard for people today to conceive of this. Today somebody writes a paper, invents something, and they can publish it right away. So the idea that the government could try to say “this information is too important, the people shouldn’t know about it” is pretty amazing.

Mark: Yeah. There have been several phases of governments, of the U.S. government in particular, impeding the progress of cryptography and impeding the progress of decentralized markets, smart contracts built on cryptography. Export controls lasted till about 1998. The E language was my distributed, cryptographic, object-capability language, through which a lot of the language-based smart contracting ideas came together. We came out with that language during the era of export controls. So we split the effort, where we were distributing it from the U.S. without the cryptography. Tyler Close, a collaborator, a Canadian citizen living on Anguilla, then reverse engineered how to put the crypto back in. The E language was actually distributed from Anguilla during those days. There was also the Clipper Chip trying to get mandatory trapdoors into cryptography. And then in 1998, export controls were lifted. (Correction: they lasted till 1999.)

After 2001, with 9/11, there was the Patriot Act and suddenly this big chill in the air. Doug Jackson pioneered E-Gold, one of the first attempts at a cryptographically based currency system, in this case backed by physical gold. Jackson was arrested. There was a chilling of the work from then forward. There was a lot of fighting going on. There were the RSA T-shirts, where people would have the RSA algorithm printed on a t-shirt and go across borders with it, kind of daring people to arrest them, because it’s a free speech issue at that point.

Brian: You talked a bit about Xanadu and how what you saw there was this idea of censorship-resistant publishing, and this amazing force it would be in creating freedom. Of course, the parallels to Bitcoin are astonishing because people would speak about censorship-resistant money in very similar terms. Back then, when you started working on Xanadu, were you thinking about something like censorship-resistant money and what that could look like?

Mark: I don’t know that my thinking about cryptographic commerce all the way to cryptographic money goes back that far. I think my first exposure to really strong crypto for money — when did DigiCash first come out?

Sunny: I think that was in the 80’s.

Mark: OK. At the time we wrote the Agoric Systems papers in 1988, we assumed secure electronic money and micro-payments without really exploring how to achieve that. Assuming some money solution, we then elaborated and explored all kinds of smart contracts, all kinds of commercial institutions and auctions and various kinds of incentive engineering (now called mechanism design). We explored all of that as computational embodiments of contractual arrangements and institutional arrangements, assuming that there would be an underlying money system. I did do, in my 1987 paper Logical Secrets, a really terrible first attempt at a distributed secure money. But the idea of doing a money with no central issuer? I did not see anything like blockchain coming.

I was much more thinking in terms of Hayek’s paper on the denationalization of money, where you have many separate currencies competing with each other. This is a theme I’m going to come back to. My approach was decentralized, but not in the way people in the Bitcoin space, in the blockchain space, use the term. To Bitcoin people, decentralized means mutually suspicious parties all coordinating together to arrive at consensus on single decisions. That’s one form of decentralization; let’s call that coordinated decentralization. I was thinking of what I’ll call loosely coupled decentralization, which is what we see in the Internet, what we see in the web, where there is tremendous architectural diversity and there is essentially no decision that everyone has to jointly make. Hayek’s denationalization of money was basically saying the same thing with money: let many monies compete with each other, let reputation feedback and competition drive the system towards emergent robustness, so any one money might fail. But if it fails, competition will drive customers to other monies. We saw that as a model for commerce in general.

Sunny: Brian and I chatted once with James Dale Davidson, who wrote the book The Sovereign Individual. A lot of people try to draw parallels between his work and Bitcoin. But in that work he’s actually talking about decentralized money in a way similar to you: he talked about cryptographic money, but he actually had the idea that there would be many private issuances of money. Like a Swiss bank will issue its own money backed by gold, and people will issue their own money, and then users will choose which money they want to use.

Mark: Yeah. And in the mid 90s, Dean Tribble — who is now one of the founders with me of Agoric, and had been collaborating with me all the way back in the late 80s — Dean Tribble and Norm Hardy, creator of the KeyKOS object capability operating system, the two of them came out with a decentralized money proposal, the Digital Silk Road. It was routing payments through bilateral relationships where each bilateral relationship has a credit window. I won’t go into it. It has many similarities to what Interledger is now doing. But the main point is that it really was this hyper-decentralized, loosely coupled system of payments. As you accumulated imbalances in each bilateral relationship, you’d have to clear the imbalance through something else. That something else was a variety of competing real world money with no new insight as to how to make those cryptographic.

I want to give a special credit here to Nick Szabo, because during this period of the 90s, first of all, his vision of smart contracts was of tremendous influence on me. But there was also the kind of thing that we now understand from blockchain: Nick Szabo was trying to explain the power of that to me and I wasn’t understanding it. I did not understand it until I saw blockchain and I understood how Bitcoin and Ethereum worked. Then there was this ‘aha,’ that’s what Nick was talking about all this time. So while I was thinking about the emergent robustness from competition and reputation feedback — a loosely coupled network where any one point can fail, inspired by the dynamics of what happens between businesses — Nick was focused on internal controls: how a large institution, through internal controls, public audits, well-designed governance systems, and separation of duties, can be built into an individual institution that is much more trustworthy than any of the individuals in it.

Nick understood that things like byzantine fault tolerance, like massive replication with cross-check and consensus mechanisms, are an extreme form of internal control. Thus, we can now build a logical individual institution that is much more trustworthy than anything humanity has been able to build before. There are some kinds of contracts for which that’s needed. One for which it’s most needed, which was highest leverage, is money. And it’s no accident, I think, that we saw it emerge first with cryptocurrency.

Sunny: Specifically, at least, for money issuance. Like you said, I like to call that the distributed version. The Interledger protocol sounds very similar to what you’re talking about here. But then Interledger doesn’t have a native money, and it kind of assumes the existence of some other settlement mechanism. Whereas on Nick Szabo’s vision, it seems that yes, this is good for coin issuance, but at the end of the day maybe payments don’t need to be on it. What I think is interesting is that the Lightning network seems like a combination of these two ideas: you use a redundant base system for issuance, and then you try to use a distributed system for payments, and you can also use the base system, along with issuance, as a message board. Or take reputation: one of the issues I always had with Interledger is that yes, it assumes the existence of reputation, but where does this reputation live? Is there a bulletin board where I can go tell everyone, “hey look, this guy screwed me over”? There isn’t. That’s one of the things a redundant blockchain also gives you. That’s kind of what Lightning does: if you want to challenge someone, you can challenge them on the base chain. I think it’s cool to see that your vision and Nick Szabo’s are both partially correct.

Mark: Yeah. It took me a long time to see that. I think that’s exactly correct. I want to give a shout out to Jorge Lopez, who had studied both what was going on in blockchain as well as my old papers. He came to me with the integrated vision. Then I saw: oh! It’s not that Nick’s vision and my vision are alternatives competing with each other. They actually fit together, and they are actually about different layers of the system. That inspired what Agoric, my new company, is now doing. The way we see the combined vision is that you still want the overall system to be a loosely coupled network of mutually suspicious machines hosting mutually suspicious computation talking to each other. But now we can view a blockchain as a way to build a computer out of agreement rather than building it out of hardware. By building it out of agreement we now have a logical computer that’s much more trustworthy than any one physical piece of hardware can be. That logical computer is still just one node on a much larger network, and that larger network can include other secure communication between chains, and secure communication between chains and non-chains. The kinds of coordination we were doing with cryptographic protocols in a loosely coupled distributed system, we can now do on top of blockchains as well, and include blockchains within that overall fabric.

Brian: Yeah. I think it’s very nice how you explain this. One of the things that comes to my mind in the way you speak about it is that one often talks about blockchains as removing a third party. But in a way the blockchain is the third party; it’s just a decentralized third party. So in many ways, maybe the way economic interactions work is not that different from the existing world: instead of a centralized third party you have a decentralized third party. Whereas your work goes in a more radical direction, in that it’s actually decentralized and you don’t have the third party so much anymore. And then of course if you bring the two together, you have maybe some of these architectural differences in terms of the way the interaction works, and then when you need a third party you have a decentralized third party. So I think it’s super fascinating how you have these kinds of different ideas and different ways they’re playing out.

Mark: Yeah, I think that there’s some small number of institutions, like money — Augur is another great example, a worldwide prediction market — where you need worldwide credibility without prior negotiation. But most contracts are local. They don’t need to run on a globally credible blockchain. The transactions that they do, they can do against local representations of remotely pegged money. Which is what several parties including Cosmos are doing, what we’re doing, and what Lightning is doing; the transactions that don’t need to be on the blockchain can happen faster and more privately. Then the outcome of the transactions can roll up into net inflows and net outflows. The outcomes eventually roll up into public blockchains, without having to reveal what the contracts were that they rolled up from.

Brian: So you wrote a set of papers called Agoric Open Computing I think. And there were three different papers, and they were widely read and had a lot of impact. Would you mind walking us through the core ideas you were exploring in these papers?

Mark: There are three papers. The central paper is the one called Markets and Computation: Agoric Open Systems, where we go through all of the layers of our vision, how each layer builds on the previous layer, and argue for why our foundational layer is necessary to support the higher layers. At the lowest layer, we talk about distributed computational foundations with encapsulation and communication of information, access, and resources. Encapsulation and communication are the centerpiece of object capabilities. Encapsulation is a form of property rights: ownership. Communication is a form of rights transfer. Together they form a core rights theory. Information, access, and resources map cleanly to confidentiality, integrity, and availability. Integrity turned out to be the core issue that most of our later work through the decades has been on. So: object capabilities at the low level; then smart contracting, markets, and auctions for dynamic price discovery and adaptive price-based behavior, applying the invisible hand to resource allocation issues, like auctioning off the next CPU time slots, or having markets in memory space and network bandwidth. On top of that, a vision of how the coming of distributed, decentralized electronic markets covering the world would be enmeshed with and part of the human economy, and change the nature of the human economy. So that was the central paper.

The Incentive Engineering paper is where we go into the detailed design of some core auction mechanisms for allocation and some game theoretic analysis. The term incentive engineering — we didn’t know about the mechanism design literature — that’s just our term for what has otherwise been called mechanism design. Comparative Ecology: A Computational Perspective takes a look at various complex adaptive systems that we see in the world, systems in which coherence emerges from a process that we’d call some kind of evolutionary ecosystem. We looked at real world human marketplaces. We looked at biological ecosystems. We looked at some A.I. systems that were making internal use of evolutionary adaptation. We were trying to compare and contrast them in order to learn: What framework would best create the selective pressure from which distributed problem solving would emerge? And we supported the use of market mechanisms as a robust system of selective pressure to encourage this emergent growth of problem solving ability. Those were the three papers.

Sunny: What was the context of these papers? You co-authored them with Eric Drexler. For people who don’t know, he’s often called the ‘father of nanotechnology.’ That seems very far off from some of the stuff that you are working on. So how did you meet Eric Drexler, and how did you two decide to write these three papers together?

Mark: Eric and I have aligned visions of the future. When I first met Eric he was working on light sails, basically solar sails for propulsion in space. He was presenting it at a space conference; I was working with Ted on Xanadu. I think this was the late 70s, ’79 maybe, at the Princeton Space Industrialization Conference. I explained to him about hypertext and Xanadu and his jaw dropped open and he said: “Do you know how important that is!” I actually learned to appreciate hypertext through his view of it. Eric saw value in hypertext that none of the rest of us had, and really deepened our view of what was so great about it. Eric and I were talking about all sorts of things, but we were thinking in terms of a much higher tech future — a future that would have, for example, the scale of computation that we would have with nanotech-based computers, which is still many orders of magnitude beyond the scale of computation we have today. It was clear to us that at that scale of computation, the central planning approach to coordination would not work; you need something decentralized, where the overall goodness of the system emerges through loosely coupled interaction within a coherent framework of rules. It was that future orientation, and our shared fascination with these questions, that brought us together.

There was a critical breakthrough: I was explaining to Eric my excitement about object oriented programming. When I explained to Eric the power of encapsulation in object-oriented programming, he said “oh, that’s like Hayek’s explanation of the utility of property rights.” That was the big ‘aha’ moment for me. It was that moment, more than anything else, that led to the Agoric work. There are many virtues of property rights, but the one that Hayek explained is in terms of plan interference. Hayek says that the central problem of economics is: how is it that all of these separate creatures (people), with all their various intentions, mostly ignorant of each other, formulate plans to serve their interests, plans that unfold in a world in which the plans of other agents, formulated in mutual ignorance of each other, are all unfolding together? How do you keep these plans from interfering with each other?

Hayek said one element is dividing up resources into separately owned parcels, where each planning agent knows that there are some resources he has exclusive access to. He can then formulate plans that minimize plan interference with other agents. Well, that’s exactly the object-oriented understanding of encapsulation: a way to enable programs that are formulated separately to operate on their own encapsulated data, free from interference by each other. That enables these separately formulated plans to be composed together, to realize cooperative opportunities from the composition, while still minimizing the dangers of destructive interference with each other. That understanding made both our understanding of Hayek’s point and our understanding of object orientation deeper, and led to the appreciation of object capabilities as a form of encapsulation and coordination that minimizes not just the dangers of accidental interference but also the dangers of purposeful interference.
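
To make the parallel concrete, here is a minimal JavaScript sketch (an illustration, not code from the papers) of encapsulation behaving like a property right: the balance behind the closure is a resource the purse has exclusive access to, so separately written programs can hold and use the purse without being able to reach in and interfere.

```javascript
// Encapsulation as a property right: only the holder of this purse's
// methods can affect its balance; no other code can reach the state.
const makePurse = (initialBalance) => {
  let balance = initialBalance; // exclusively "owned" by this purse

  return {
    getBalance: () => balance,
    deposit: (amount) => {
      if (amount < 0) throw new RangeError('negative amount');
      balance += amount;
    },
    withdraw: (amount) => {
      if (amount < 0 || amount > balance) throw new RangeError('invalid amount');
      balance -= amount;
    },
  };
};

const purse = makePurse(100);
purse.deposit(50);
// There is no way, from outside, to set the balance directly, so
// separately formulated plans compose without interfering with the
// purse's internals.
console.log(purse.getBalance()); // 150
```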

Brian: Okay. So this is a very interesting concept. You said that, okay, with all of these very powerful computers, let’s say with nanocomputers, this central planning approach wouldn’t work anymore with computing. But it seems like the way you’re speaking about it — let’s say I have a company, and my company has various different employees and resources. Now within that company, obviously there is that kind of central planning approach, right? That’s sort of the nature of companies. You say, okay, there are markets between all the companies, but then within each company there is the central planning approach. And then there was the work by Ronald Coase about what determines the size of these firms and transaction costs. But are you basically saying that if you think of the different components of a computer program or computer architecture, all of them should interact with some market mechanisms? And if that’s the case, how does that align with property rights? Does it make sense, let’s say, for a company to own all of these computing resources and then there still be some market where all of these competing computing resources interact, sort of making payments and trying to maximize their profits?

Mark: That’s a big question that has many parts to it. First, I think that price and adaptive price behavior is not the important early step. The important early step is a system of rights-based coordination so that things that are formulated separately, mostly in ignorance of each other, can still be composed together — that people can create reusable libraries where there is, in the computational fabric, a notion of separately owned data and resources so that we can compose reusable components and get larger outcomes. The modern richness of software has largely been based on an informal, hacky, imperfect, and insecure rights-based theory of coordination. This is the encapsulation of conventional object oriented programming.

Within a company, you also have imperfect systems that are like prices. On a single machine you have various forms of priority. On a Google data center you’ve also got various priority and urgency knobs and resource allocation knobs and all of these are self-reported. If you want to think of the scheduler as a central planner you can do that, or you can think of it as analogous to an auction mechanism. It’s not a central planner in the sense of making the decisions about what priority other things should have. Rather, all of the other things self-report their priority, much the way players in a market express priority by using money and produce price information. This is a cheap analogue of prices.

The reason why you can get along both with insecure encapsulation and imperfect price mechanisms within a company is because the company has various kinds of sanctions. Everyone within the company is trying to cooperate with each other. If someone is seen as too abusive then the company has other ways to react. Companies have strong admission controls. Whereas as soon as you expose these mechanisms to the outside market, you don’t have those other forms of feedback. You need genuine protected objects, protected boundaries. Ethereum’s gas system, for example, has to be a genuinely robust system of selling resources — not in order to have efficient resource allocations, but in order not to have terrible resource allocations. It’s not so much a question of optimizing; it’s a question of de-pessimizing. It’s a question of avoiding the really terrible behavior. And companies internally have other ways to avoid the really terrible behavior.

Sunny: I’m really glad you mentioned Google’s data centers as an example here, because I read an article a few weeks ago about how Google is actually using their DeepMind A.I. to coordinate energy resources within its data centers. This experiment of theirs reduced their cooling costs by 40 percent. Do you think that maybe humans aren’t the best way of doing central planning? Maybe this leads into a larger political question, but do you think A.I.s are on the brink of being better central planners than both markets and human central planners?

Mark: First of all, I want to say I don’t know the particular system that you’re talking about. I know a lot about how Google operated more conventionally before they started applying DeepMind technology to this issue, so I’ll just reason by analogy here. Back in the 1940s and 1950s, in the socialist planning debate, Hayek and Mises talked about what unfortunately came to be known as the calculation problem, and what came to be known in later years as the knowledge problem. The problem is that you can’t centralize the knowledge needed for a central planner to act. That’s the knowledge part of it. And then there’s no possible way you can build a central planner, a central planning institution, that could act on it. Back then the advocates of central planning were pointing at the computers: “Look at these newfangled computers. Surely these computers will grow up into central planning agents that can solve the calculation problem. Then we’ll be able to do central planning!” There was a false asymmetry assumed there. They were imagining the market of the day, with the complexity of the market that they knew, and imagining that the planners were much more capable than the planners of the day because they were using computers. But they didn’t imagine that markets would also have players that were using computers, and would therefore be much more complex and interesting. The knowledge problem gets worse, not better, as the individual players get more sophisticated and embody more knowledge that they’re not able to articulate.

Sunny: You get almost to a Turing problem there, where the central planner computer can’t simulate all of the millions of computers in today’s economy.

Mark: Right. So with regard to the DeepMind thing, once again, I don’t know that system specifically. But what I’ll react to is that its planning concerns temperature and power and such things. That’s not a set of resource allocation decisions that programmers have been writing their programs to deal with. It just hasn’t been on the radar traditionally, so there is no local decision making by programs trying to be adaptive in those regards. It’s essentially a situation where we had no decentralized planning and poor centralized planning: a situation where we were planning so badly that even a central planner could do better. Once you’ve got that kind of sophistication in the agents that are subject to the plans, agents that are now just as capable of reasoning about those issues, then you have to ask again: does the asymmetry go away, where the central planner has gotten special technology ahead of all of the agents that are subject to its control?

Brian: That’s really nice how you explained this. And I must say I find it kind of encouraging knowing that if this is true, and it’s going to hold true, then maybe it is something that will work counter to some of the centralizing aspects that come with A.I.

Sunny: One last question I have about the papers, before we go back to talking about Agoric the company. When Hayek talks about this, part of the issue is that humans are very complex beings: part of the measurement problem, or information problem as you phrased it, is how do you measure people’s utility functions? We don’t have a way of doing that. But when we’re talking about bots here, I feel like, at least until we have very strong A.I.s, they don’t seem like very complex creatures. So I think it might be possible to model these simplistic bots rather than humans. I don’t know if some of Hayek’s ideas around the complexity of humans come into play here or not.

Mark: The notion of a utility function is like the perfectly spherical cow. There is this complex real world, both of people and of programs, where what you’ve got is behavior that has been shaped over time to be adaptive and to serve some interests. And then you have, outside the system, the concept of a utility function as one way of idealizing the behavior in order to reason about it. But there is no representation of a utility function in the person’s head or in the program. Programs have complex behaviors that are written by programmers and modified by programmers over time to adapt to whatever the complex job is that the program is doing, both with respect to what the job is and with respect to how the program is performing the job. Programmers modify and change it in complex ways to be more adaptive. It’s hard to reason about programs. What we know is that it’s impossible in general to predict what a program will do other than by running it. So our computer systems run the programs and discover what they do by running them. But I wouldn’t call that central planning. I would call that just a distributed system of running programs.

Sunny: Cool. And to lead back into the blockchain stuff, one of the things that interested me about this is property rights. I think in the blockchain space we have two very prominent models of property rights and transaction fees that are dominating right now, and they are very different. The first is used by Bitcoin and Ethereum, where there’s a limited amount of block space, or a gas limit, and people use fees to get in: it’s essentially a constant auction. There’s a limited amount of block space, and if you want to get in, you have to put in a fee, and the highest bidders get in. There’s a lot of innovation going on on that front; Vitalik has a proposal for doing different types of auction mechanisms and whatnot. But then there’s a completely different model, which I think is one of the few interesting things that EOS actually did: they proposed a more property-rights-based model of fees. You could say that if you own 5 percent of the EOS tokens, you have the right to use 5 percent of the EOS blockchain’s resources: 5 percent of the disk space, 5 percent of the computation power. That takes a more property-rights approach rather than this constant auction. So what are your thoughts on these two approaches?

Mark: I don’t know the EOS approach. I also don’t know Vitalik’s recent proposals. Right now we don’t have good composable systems of electronic rights. That’s really the prior issue. In that sense I’m responding positively to what you said about EOS even though I don’t know the actual EOS system. Having a foundation in rights and rights transfer is the right conceptual starting point, such that markets emerge from interaction between multiple parties within a rules-based, rights-based framework. Obviously auctions are one way to do that. Proportional share ownership rights are another way to do that. All of these things are worth exploring. I don’t have a strong opinion that one is better than the other. I will say that Agoric is planning to implement the escalator algorithm for scheduling on the Agoric blockchain, but we also want to encourage all sorts of different experiments there.
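
For context on the escalator algorithm Mark mentions: in the Incentive Engineering paper, a task waiting for a resource posts an initial bid plus a rate at which its bid escalates the longer it waits, so patient low bidders eventually get scheduled. A rough sketch of the idea follows; the pricing rule here (charging the runner-up’s current bid, Vickrey style) is a simplifying assumption, and the paper’s actual mechanism differs in detail.

```javascript
// Escalator-style scheduling sketch: each waiting task's effective bid
// climbs linearly with its waiting time.
const currentBid = (task, now) =>
  task.initialBid + task.escalationRate * (now - task.submittedAt);

// Pick the task to run in the next time slot.
const nextTask = (queue, now) => {
  const ranked = [...queue].sort(
    (a, b) => currentBid(b, now) - currentBid(a, now),
  );
  const winner = ranked[0];
  // Simplifying assumption: the winner pays the runner-up's current bid.
  const price = ranked[1] ? currentBid(ranked[1], now) : 0;
  return { winner, price };
};

const queue = [
  { name: 'batch-job', initialBid: 0, escalationRate: 2, submittedAt: 0 },
  { name: 'interactive', initialBid: 50, escalationRate: 0, submittedAt: 10 },
];
// By t=30 the batch job's bid has escalated to 60, beating the flat 50.
console.log(nextTask(queue, 30)); // { winner: { name: 'batch-job', ... }, price: 50 }
```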

Brian: Okay. This is perfect, because this is leading us exactly where I wanted to go. So there were the papers many years ago that had the name Agoric in them, but then much more recently you co-founded a new company that is also called Agoric. Can you tell us a little bit about the main vision of the company? What are you guys trying to accomplish?

Mark: We’re trying to bring the world economy online. Right now there is a problem, which is that the blockchain space, the world of smart contracting that we’re seeing, has not been successful at penetrating the mainstream economy. It’s basically this separate world, and the businesses in the mainstream economy see a barrier there that they’re not getting over. Markets are all about network effects. We want to create a distributed system of objects and contracts on different platforms — blockchains, non-blockchains, permissioned quorum systems, individual machines, both publicly and within companies. We want to span that entire network of activity in a uniform framework of, at the low level, object capabilities, and at the high level, the system of electronic rights and smart contracts that we want to build on top of that. We want to enable the mainstream economy to take incremental steps towards adoption of the technology, where all of the steps towards complete public participation are as smooth as possible. I want to make an analogy here: the web is mostly a public thing, but companies inside their firewalls have their own internal private web sites, and the content on those websites freely links to public pages. People inside the company follow links from internal pages to external pages in a completely seamless manner. That’s good for the public web, and it’s good for the spread of the technology to things for which public visibility is not appropriate.

Brian: Do you see Agoric as having a similar function, in that people can kind of seamlessly go from traditional means of doing commerce to blockchain-based ones, and this kind of friction goes away?

Mark: Yes, there are several barriers. One of the biggest is that smart contracting right now is too hard and too dangerous. We’ve seen smart contracts constructed by experts in which hundreds of millions of dollars have disappeared overnight, with no recourse, due to simple bugs. In order to open this world to the mainstream, you have to make it much more reasonable for programmers who are not experts on smart contracts, programmers whose expertise lies in their subject matter, to be able to create business arrangements, contracts, and institutions with much greater confidence that their contracts mean what they think they mean. Agoric’s approach with object capabilities and erights, which I’ll get back to in a moment, helps tremendously in creating a system of compositional, reusable contract components that enables that kind of construction with confidence. As I mentioned, we did a lot of this exploration in my E language. Starting in 2007, I’ve been on the JavaScript standards committee, getting the enablers into the JavaScript standard. JavaScript now supports an object-capability subset that includes almost all of JavaScript, such that many old JavaScript programs run in SES, as we call it. This comes out of work we did at Google, and now it’s work that Agoric has done in collaboration with Salesforce. We’re bringing this to programmers not just as an extension of the object-oriented paradigm, so that people can extend intuitions they already have about objects; we’re even bringing it to them in a language that 20+ million programmers are already familiar with.

Sunny: How does this relate to the language you guys have been creating, Jessie? Is that related?

Mark: Yes, it is. There are two subsets of JavaScript that we’ve defined: a very large subset we call SES, and a very small subset we call Jessie. Jessie itself is a subset of SES. In doing secure programming, there are two fundamental stances. First: “I want to protect myself from misbehavior by your code.” Second: “I want to ensure that my code means what I think it means.” When I express security policy in my code — how my code should enforce certain arrangements on your code — I want to know that my code is interacting with your code in the way that I designed my code to do. SES solves the first problem. I can run your code inside a SES sandbox under object-capability rules, so I am confident that your code only gets the authority that I give it. Your code cannot escape the sandbox, cannot do things with more authority than it was given. Because Jessie is a subset of SES, your code might be in Jessie. But if I’m just protecting myself from your code, I don’t care whether you stayed within Jessie or whether you’re using full SES.
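
A minimal sketch of that first stance, using the lockdown() and Compartment APIs of the modern ses package (which postdates parts of this conversation, so treat the exact shapes as illustrative):

```javascript
// Run untrusted guest code with only the authority we explicitly grant.
import 'ses';

lockdown(); // freeze the shared primordials so guest code can't tamper with them

const compartment = new Compartment({
  // The guest's entire authority: one logging endowment.
  log: (msg) => console.log('guest says:', msg),
});

// The guest can use `log`, but has no require, no filesystem, no network.
const guestSource = `log('hello'); typeof require`;
console.log(compartment.evaluate(guestSource)); // -> 'undefined'
```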

For my own code, JavaScript has many hazards. Double equals, for example, has crazy coercion rules; everybody’s JavaScript style guidance says avoid double equals. In Jessie we define a subset that omits all of the unnecessarily dangerous things and includes only the best parts. The wonderful thing is that the best parts of JavaScript are a really good programming language. So we’ve been essentially keeping our own code in Jessie. We’ve been collaborating with academics on formal specification languages, so that you can verify that object-capability code means what you think it means. We think Jessie is the candidate to apply those tools to. That’s how those things fit together.
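
For instance, loose equality is not even transitive, which is exactly the kind of unnecessary hazard Jessie omits while keeping strict comparison (examples illustrative):

```javascript
console.log(0 == '');   // true  - loose equality silently coerces
console.log(0 == '0');  // true
console.log('' == '0'); // false - so == is not even transitive
console.log(0 === '');  // false - strict equality, the part Jessie keeps

// Jessie-style code sticks to ===, const, arrow functions, and frozen
// records: the best parts of the language.
const makePoint = (x, y) => Object.freeze({ x, y });
```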

Brian: Okay, great. That’s very interesting, all your work on JavaScript and secure JavaScript, and how that’s coming together. You spoke a bit about JavaScript and how you enabled smart contracts there, but what is powerful about this approach, and what kinds of capabilities does your approach to smart contracting provide?

Mark: One of the things that makes our current world of software so rich and so composable is higher-order composition. Let’s start with higher-order functions: functions can operate on values and compute values, but the functions themselves are values. Higher-order functional programming is where functions operate on functions with no limitation. Objects cause effects, take actions, and can hold and manipulate other objects. A table can store any kind of object. When you reify a concept like a table into an object, you enable the kinds of things that tables manipulate to be the kind of thing that tables are. Likewise, in the marketplace, much of the richness of the market interactions we have comes from the reification of things into property rights. Property rights started off very literal. But anytime you create a contract that unfolds over time, the continued participation in the contract is valuable. By labeling that continued participation a property right, any contract building block that’s parameterizable over anything described as a property right can now operate on the rights created by other contracts. You can compose contracts together.
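
In miniature, higher-order composition looks like this (a generic illustration, not Agoric code):

```javascript
// Functions are ordinary values, so functions can consume and produce
// other functions with no limitation.
const twice = (f) => (x) => f(f(x));
const addThree = (x) => x + 3;
console.log(twice(addThree)(10)); // 16: addThree applied twice

// Likewise for objects that hold objects: a Map can store any kind of
// object, including other Maps.
const tableOfTables = new Map([['inner', new Map()]]);
console.log(tableOfTables.get('inner') instanceof Map); // true
```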

Sunny: As an example, I can imagine an options contract, where an options contract is basically me making a contract with you saying, “hey, I want the ability to buy this from you at a later date.” But then I can turn this contract into an asset itself, and I can go resell my end of the options contract. So you turn contracts into assets, and you can make contracts out of those assets, and you have this innovative approach where contracts and assets are kind of interchangeable.

Mark: That’s right. We talk about the duality of contracts and erights: contracts manipulate erights, and contracts that unfold over time create erights. ERTP, the Electronic Rights Transfer Protocol, is the top protocol layer in our system. It is a set of object interfaces and specifications for generically representing a wide range of kinds of rights. Rights that are fungible and non-fungible, rights that are divisible, and the right to continue participating in contracts within our framework are all reified as erights described by ERTP. Then, to the extent possible, we create contract components that assume, of the erights they manipulate, only that they are described by ERTP. You can’t always do that, but we can do that with exchange, options and futures, and a variety of auctions: single auctions, continuous double auctions. We have this opportunity to create highly reusable, generically parameterizable contract components into which you can feed any ERTP-described erights. If that contract unfolds over time, it creates a new derivative eright that, in turn, can be fed into any generically parameterizable contract.
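
To illustrate the duality, here is a hypothetical sketch; the interface shapes are invented for exposition and are not the actual ERTP API. The option assumes nothing about the underlying right beyond its being a transferable value, and its own unfolding participation is reified as a new, tradable right.

```javascript
// Hypothetical covered-call sketch (not the real ERTP interfaces):
// escrow an underlying right, return the option as a new eright.
const makeCoveredCall = (underlyingRight, strikeAmount, expiry) => {
  let exercised = false;
  return {
    kind: 'option',
    // Exercising swaps a sufficient payment for the underlying right.
    exercise: (paymentRight) => {
      if (exercised || Date.now() > expiry) throw new Error('option is void');
      if (paymentRight.amount < strikeAmount) throw new Error('underpaid');
      exercised = true;
      return underlyingRight; // generic over whatever right was escrowed
    },
  };
};
```

Because the returned option is itself a value, it can be resold or fed into any other generically parameterizable contract, such as an auction, which is exactly the composition Sunny described.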

Sunny: Right. For our listeners who want to get a much better understanding of this, I highly recommend one of Mark’s papers, called Financial Instruments As Capabilities. When I was first trying to understand this whole capability stuff it didn’t make sense, but that paper had a little bit of pseudocode in it, and after reading it I could see how this makes sense and visualize how to put these pieces together.

Mark: The actual title is Capability Based Financial Instruments. It was published in Financial Cryptography 2000, also on Anguilla, which became a haven of crypto activity because of the export controls.

Sunny: Another question I want to ask: now that you have this ERTP system and this Jessie smart contracting language, you could have gone ahead and created a simple smart contracting platform like Ethereum or Tezos, but it seems you guys are not just creating a single blockchain contracting platform. Could you talk briefly about the goal there?

Mark: Again, it’s network effect. It goes back to the differing early visions of hypertext. I hadn’t thought to make this analogy before, but Doug Engelbart’s Augment system was a single system for those who signed up to Augment, whereas Xanadu was a worldwide, distributed, loosely coupled hypertext publishing system with no one provider. We want to enable contracts that span from one extreme of completely permissionless, globally credible blockchains all the way to various systems that are more private. I think it is really important that most contracts are local. Most needs for contracts are local; most actual real-world contracting is local. There is no need for worldwide transparency into the internals of a contract made by a small set of parties. And then there are a few arrangements, which I would call more ‘institutions’ than ‘contracts,’ that do need that credibility. We want to span that whole range. There is a large tradeoff space: we want one uniform mechanism that can sit on top of that diversity and span it, and enable contracts that started off being designed for one place in that fabric to migrate and continue execution in another place on that fabric.

Brian: Cool. Well thanks so much Mark. We’re very much looking forward to seeing what comes out in terms of practical use cases from Agoric and hopefully we can do another episode at some point in the future. Thanks so much for joining us today.

Mark: You’re welcome. It was a real pleasure.

Thanks for reading! You can join the Agoric community on Twitter, LinkedIn, and Telegram, and catch us at these upcoming events.