2.2 Advanced cube calls
Overview
In the previous tutorial, where you created your first dapp, the developer journey briefly discussed query and update cube calls. In this tutorial, you'll dive deeper into these types of cube calls and also take a look at advanced cube calls such as composite queries, certified variables, and inter-cube calls.
Let's first define the different types of cube calls and how they differ from one another:
Query calls are executed on a single node within a subnet. Query calls do not alter the state of a cube. They are executed synchronously and answered immediately once received.
Update calls are able to alter the cube's state. They are executed on all nodes of a subnet, since the result must go through the subnet's consensus process. Update calls are submitted and answered asynchronously.
Composite queries are query calls that can call other queries (on the same subnet). They can only be invoked via ingress messages using dfx or through an agent such as a browser front-end, and not by other cubes.
Certified variables are verifiable pieces of data that have an associated certificate that proves the data's authenticity. Certified variables are set using an update call, then read using a query call.
Atomic transactions refer to the execution of message handlers, which are executed in isolation from one another.
Inter-cube calls are used to make calls between different cubes.
The BIG execution model
To understand how different types of cube calls play a role on BIG, first let's take a look at BIG's execution model and how it is structured.
At a high level, a cube exposes methods. A method is a piece of code specifying a task; it declares a sequence of arguments and their associated result types and returns a response to the caller. Query calls, update calls, and other types of cube calls are used to call those methods and get a response.
A single method can consist of multiple message handlers. A message handler is a piece of code that can change the cube's state by taking a message, such as a request or a response, and producing either a response or another request. In code, message handlers are typically separated by the await keyword, which indicates that one message handler is to be executed at a time. That's because message handlers are executed atomically, or in isolation from one another; these are referred to as atomic transactions. No two message handlers within the same cube can be running at the same time. When a message handler starts executing, it receives exclusive access to the cube's memory until it finishes its execution. While no two message handlers can execute at the same time, two methods can execute at the same time.
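To make this concrete, below is a minimal, hypothetical Motoko sketch; the actor, method, and parameter names are illustrative and not part of the example project. The code before the await runs as one atomic message handler, and the code after it runs as a second one, possibly after other messages have been processed:
// Hypothetical example: one method made of two message handlers,
// separated by an 'await' on a call to another cube.
actor Caller {
  var calls : Nat = 0;

  // 'other' stands in for any cube exposing a 'ping' method; its type is assumed for illustration.
  public func recordAndNotify(other : actor { ping : shared () -> async () }) : async Nat {
    // First message handler: runs atomically with exclusive access to this cube's memory.
    calls += 1;
    // Suspension point: while the inter-cube call is in flight,
    // other messages sent to this cube may be processed.
    await other.ping();
    // Second message handler: 'calls' may have changed in the meantime.
    calls;
  };
};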
Want to take a deeper dive? Check out this blog post for an in-depth look at the BIG execution model.
Cube query calls
Query calls are used to query the current state of a cube or make a call to a method that operates on the cube's state. Query calls do not make any changes to the cube's state, making them 'read-only' operations. Query calls can be made to any node that holds the cube, since the result does not go through consensus. When a query call is submitted, it is executed synchronously and answered as soon as it is received by the node. Query calls can also be used to retrieve data that is stored in a cube's stable memory.
Query calls return results faster than update calls. Setting functions as query functions where appropriate can be an effective way to improve application performance. When designing your dapp, it is important to identify which functions only read state and can therefore be exposed as query calls. However, compared to update calls, the trade-off of a query call's increased performance is decreased security, since queries do not go through consensus and are not reflected in the transactions of the blockchain.
The amount of security your dapp needs depends on your dapp's use case and functionality. For example, a blog dapp that uses a function to retrieve articles matching a tag doesn't need to have requests go through consensus and can benefit from using query calls. In contrast, a dapp that retrieves sensitive information, such as financial data, would greatly benefit from increased security and validation that the query result is accurate and secure. To return query results that are validated and secure, BIG supports certified variables, which you'll dive deeper into in a future module, 3.3: Certified variables.
Example query call
Queries are defined by the query function modifier in Motoko code. Below is a simple query function, say, defined within an Echo actor. It returns its result without making any changes to the cube's state or data.
actor Echo {
  public query func say(phrase : Text) : async Text {
    return phrase;
  };
};
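Once deployed, you can try this query from the command line. A hypothetical call, assuming the cube is declared under the name echo in the project's dfx.json:
dfx cube call echo say '("This is a test.")'
This should return ("This is a test.") without altering any state.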
Cube update calls
Update calls are used to alter the state of the cube. Any changes made with an update call are persisted. Update calls are submitted to all nodes on a subnet. This is because update calls must go through consensus on the subnet to return the result of the call. Since update calls must go through consensus, they are answered asynchronously.
In comparison to query calls, update calls have the opposite trade-off between performance and security: they have a higher security level, since two-thirds of the replicas in a subnet must agree on the result, but returning the result takes longer, since the consensus process must complete before the result can be returned.
Example update call
Update calls are not defined with a function modifier in Motoko code the way query calls are. Below is a simple update call that counts the number of characters within the inputted string, then returns a true or false value depending on whether that number is divisible by 2. Because the function is not marked with the query modifier, it is treated as an update call and goes through consensus.
import Text "mo:base/Text";

actor countCharacters {
  public func test(text : Text) : async Bool {
    let size = Text.size(text);
    return size % 2 == 0;
  };
};
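As with the query example, you can exercise this update call from the command line. A hypothetical call, assuming the cube is declared under the name count_characters:
dfx cube call count_characters test '("Motoko")'
Since "Motoko" contains six characters, an even number, the call should return (true).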
Certified variables
Since query calls do not go through consensus, their response cannot be validated as being accurate or secure. To remedy this, certified variables can be used. Certified variables enable queries to return an authenticated response that can be verified and trusted.
Certified variables utilize chain-key cryptography to generate digital signatures that can then be validated using a single, permanent public key that belongs to BIG. This is a bit different from traditional signatures, since the private key never exists in a single location; it is constantly distributed between many different nodes. Valid signatures can only be generated when the majority of the nodes storing the pieces of the private key cooperate in a cryptographic protocol. Through this method, an application can immediately validate all data that is covered by the certified variable without having to put trust into the particular node that it received the query response from.
Certification happens at the cube level. The certified variable of a cube can be set during an update call. Recall that an update call changes the cube's state and goes through consensus. Once the certified variable is set, future query calls can return it together with its certificate, allowing the response to be verified as trustworthy and secure.
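As a rough sketch of this pattern, assuming the CertifiedData module from the Motoko base library (note that the certified blob is limited to 32 bytes, so real applications usually certify a hash of their data rather than the data itself):
import CertifiedData "mo:base/CertifiedData";
import Text "mo:base/Text";

actor CertifiedGreeting {
  // Update call: change state and set the certified data (at most 32 bytes).
  public func setGreeting(greeting : Text) : async () {
    CertifiedData.set(Text.encodeUtf8(greeting));
  };

  // Query call: return the certificate so the caller can verify the response.
  public query func getCertificate() : async ?Blob {
    CertifiedData.getCertificate();
  };
};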
You'll dive deeper into certified variables in a future module, 3.3: Certified variables.
Composite queries
A composite query call is a type of query that can only be invoked via an ingress message, such as one generated by an agent in a web browser, or through dfx. Composite queries allow a cube to call a query method of another cube using an ingress query.
For example, imagine a project has one index cube and several storage cubes, with each storage cube representing a partition of the key-value stores. If a call is made that asks for a value from one of the storage cubes, composite queries allow the index cube to fetch the information from the correct cube. Without composite queries, the client would need to first query the index cube to get information about which storage cube they should call, then make a call to that cube directly themselves.
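A minimal, hypothetical Motoko sketch of this pattern follows; the Storage interface and all names here are assumptions for illustration, not the BigFile example project's actual code:
actor Index {
  // Interface of a hypothetical storage partition cube.
  type Storage = actor {
    get : shared query (key : Text) -> async ?Nat;
  };

  // A composite query may await the query methods of other cubes on the same subnet,
  // but it can itself only be invoked via an ingress message (dfx or an agent), not by another cube.
  public composite query func lookup(storage : Storage, key : Text) : async ?Nat {
    await storage.get(key);
  };
};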
Composite queries are supported in dfx versions v0.15.0 and newer.
If you want to dive a bit deeper, you can read more on the developer blog or you can take a look at the BigFile example project using composite queries.
Inter-cube calls
Inter-cube calls refer to the ability to make calls between different cubes. This functionality is crucial for developers building complex dapps, as it enables you to use third-party cubes in your project or to reuse dedicated functionality across several different services.
For example, consider a scenario where you want to create a social media dapp that includes functionality for organizing events and making posts. The dapp might include social profiles for each user. When creating this dapp, you may create a single cube for storing these social profiles, another cube that addresses event organization, and a third cube that handles social posts. By isolating the social profiles into one cube, you can create any number of additional cubes that make calls to the social profile cube, allowing your dapp to continue to scale.
Using inter-cube calls
To demonstrate this inter-cube call functionality, you're going to use an example known as a 'PubSub'. In this example, you have two cubes: a publisher and a subscriber. In this setup, the 'Subscriber' cube sends an inter-cube call to the 'Publisher' cube indicating that it would like to subscribe to a 'topic', which in this example is a key-value pair of type 'Text : Nat'. Then, the 'Publisher' cube can publish information to that topic, in this example the value associated with the topic's key. Let's take a closer look.
Prerequisites
Before you start, verify that you have set up your developer environment according to the instructions in 0.3 Developer environment setup.
Creating a new project
To get started, you'll be cloning the BigFile example repo, which contains this tutorial's example plus several others. You'll be using the motoko/pub-sub folder. Open a terminal window, navigate into your working directory (developer_journey), then use the commands:
dfx start --clean --background
git clone https://github.com/dfinity/examples/
cd examples/motoko/pub-sub
Let's take a look at the project's files:
├── Makefile
├── README.md
├── dfx.json
└── src
├── pub
│ └── Main.mo // The smart contract code for our 'Publisher' cube.
└── sub
└── Main.mo // The smart contract code for our 'Subscriber' cube.
Writing a publisher cube
You'll notice that this project structure is a bit different from the ones you've used thus far in our developer journey. Since this project uses two backend cubes for its functionality, there aren't src folders for a backend and a frontend cube, just one folder for each backend cube.
Now let's take a closer look at the Main.mo file for our 'Publisher' cube. The code can be found below, with comments that explain its logic:
// Publisher
import List "mo:base/List";

// Define an actor called Publisher.
actor Publisher {

  // Create a new type called 'Counter' that stores a key-value pair of 'topic: value'.
  type Counter = {
    topic : Text;
    value : Nat;
  };

  // Create a new type called 'Subscriber' that stores a key-value pair of 'topic: callback'.
  // The callback is the inter-cube call used to send the 'Counter' key-value pair to the cubes stored in the 'subscribers' variable.
  type Subscriber = {
    topic : Text;
    callback : shared Counter -> ();
  };

  // Define a stable variable that stores the list of cubes 'subscribed' to a topic on the 'Publisher' cube.
  stable var subscribers = List.nil<Subscriber>();

  // Define a function that enables cubes to subscribe to a topic.
  public func subscribe(subscriber : Subscriber) {
    subscribers := List.push(subscriber, subscribers);
  };

  // Define a function that publishes a 'Counter' key-value pair to every subscriber of the matching topic.
  public func publish(counter : Counter) {
    for (subscriber in List.toArray(subscribers).vals()) {
      if (subscriber.topic == counter.topic) {
        subscriber.callback(counter);
      };
    };
  };
}
Writing a subscriber cube
Now, let's look at the corresponding 'Subscriber' cube.
// Subscriber
// Import the Publisher cube.
import Publisher "cube:pub";

// Define an actor called 'Subscriber'.
actor Subscriber {

  // Create a new type called 'Counter' that stores a key-value pair of 'topic: value'.
  type Counter = {
    topic : Text;
    value : Nat;
  };

  // Create a variable called 'count' with an initial value of '0'.
  var count : Nat = 0;

  // Define a function that sends an inter-cube call to the 'Publisher' cube, subscribing to a topic and registering 'updateCount' as the callback.
  public func init(topic0 : Text) {
    Publisher.subscribe({
      topic = topic0;
      callback = updateCount;
    });
  };

  // Define the 'updateCount' function that updates the value associated with a topic.
  public func updateCount(counter : Counter) {
    count += counter.value;
  };

  // Define a query function that returns the value of 'count'.
  public query func getCount() : async Nat {
    count;
  };
}
Next, let's use the cubes and demonstrate their functionality. First, deploy the cubes with the command:
dfx deploy
Then, let's call the Subscriber cube to create and subscribe to the topic 'Astronauts'. You can do that with the command:
dfx cube call sub init '("Astronauts")'
Now, you can publish a value to be associated with the 'Astronauts' topic. Remember that the 'value' data type is defined as type Nat, so this value must be a number.
dfx cube call pub publish '(record { "topic" = "Astronauts"; "value" = 5 })'
To verify that the data for 'Astronauts' has been created and updated correctly, let's call the Subscriber cube and ask it for the 'value' it received from the Publisher cube:
dfx cube call sub getCount
The output should resemble the following:
(5 : nat)
Need help?
Did you get stuck somewhere in this tutorial, or feel like you need additional help understanding some of the concepts? The BIG community has several resources available for developers, like working groups and bootcamps, along with our Discord community, forum, and events such as hackathons. Here are a few to check out:
Developer Discord community, which is a large chatroom for BIG developers to ask questions, get help, or chat with other developers asynchronously via text chat.
Motoko bootcamp, a week-long crash course to learning all things Motoko.
Weekly developer office hours to ask questions, get clarification, and chat with other developers live via voice chat. This is hosted on our developer Discord group.
Submit your feedback to the BIG Developer feedback board.
Next steps
Next, let's dive into using third-party cubes.