Overview

BigFile supports a wide range of application and architecture types. Apps can range from single smart contracts to complex, multi-cube projects controlled by a DAO, and everything in between. To begin writing and structuring your application, there are a few different workflows you can follow.

  • Standard workflow: The developer writes both the frontend code and the backend code. The frontend code is hosted on-chain in a cube and is served to the browser together with other web assets such as HTML and CSS. When the frontend code runs in the browser, it calls functions of the backend cube.

  • Framework-based workflow:

    • Juno is a community project that is tailored for Web2 developers. It takes care of hosting code and data in cubes such that developers can write Web3 applications using familiar Web2 concepts and patterns. For more details please follow the official Juno documentation.

    • BigFile EVM is tailored for Solidity developers. It is a cube that runs an instance of the Ethereum Virtual Machine and allows developers to upload and execute smart contracts written in Solidity. For more details please follow the official BigFile EVM documentation.

This page will focus on the standard workflow of writing applications consisting of the frontend and the backend components.

Choosing the programming language for the backend

The backend stores the application’s data and contains the core logic. Several languages are supported, such as:

  • Rust: supported by BigFile. Among BigFile languages, Rust is the language with the most production coverage. All system smart contracts, such as the DAO governing BIG, the BIG ledger, and the Bitcoin and Ethereum integration smart contracts, are written in Rust. The language gives the developer full control over all aspects of the smart contract, from performance to memory management. Its main disadvantage is that it is lower-level than other languages and requires more expert programming skills to write safe and secure code. Learn more about using Rust.

  • Motoko: supported by BigFile. Motoko is production ready and was specifically designed to onboard developers onto BigFile and leverage the actor-based programming model of BigFile. It is a high-level language with a garbage collector and syntax that is similar to TypeScript. Examples of production smart contracts that use Motoko: Sonic DEX, ICDex, Cycles manager. Learn more about using Motoko.

  • TypeScript (beta): supported by Demergent Labs under the name Azle. Currently Azle is in beta, but already allows writing smart contracts that run on BigFile. Please check the Azle website for updates on when it will be ready for production.

  • Python (beta): supported by Demergent Labs under the name Kybra. Currently Kybra is in beta, but already allows writing smart contracts that run on BigFile. Please check the Kybra website for updates on when it will be ready for production.
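To make the backend concrete, here is a minimal sketch of a counter cube's core logic in Rust. The thread-local state pattern is the one used in real Rust cubes; the `#[ic_cdk::query]`/`#[ic_cdk::update]` export attributes are shown as comments so the sketch stays dependency-free and runs anywhere.

```rust
use std::cell::RefCell;

// Cube state lives in a thread-local. Each cube executes messages
// one at a time, so this is the idiomatic state pattern in Rust cubes.
thread_local! {
    static COUNTER: RefCell<u64> = RefCell::new(0);
}

// #[ic_cdk::update]  <- in a real cube, this attribute exports the method
fn increment() -> u64 {
    COUNTER.with(|c| {
        let mut v = c.borrow_mut();
        *v += 1;
        *v
    })
}

// #[ic_cdk::query]  <- queries are read-only and answered quickly
fn get() -> u64 {
    COUNTER.with(|c| *c.borrow())
}
```

Callers on the frontend would invoke `increment` and `get` through the cube's Candid interface; the functions themselves are ordinary Rust.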

Choosing the web framework for the frontend

The HTTP Gateway protocol of BigFile allows browsers to load web assets such as JS, HTML, and CSS from a cube via HTTP. This means that web assets can be stored fully on-chain and developers don’t need traditional centralized web hosting to serve the UI of their application.

The typical development workflow of the frontend is:

  1. The developer writes HTML, JS, CSS code.
  2. The developer uses the default asset cube created by dfx or writes their own custom cube to store the web assets on-chain.
  3. Users open the application in the browser by navigating to the URL https://<canister-id>.ic0.app or a custom domain that the developer has registered for the cube.
  4. The cube serves the web assets to the browser via its http_request endpoint that gets invoked for each HTTP request.
  5. When the JS code runs in the browser, it can call the backend cube endpoints using the agent-js library, which is analogous to web3.js and ethers.js of Ethereum.
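The serving step (4) can be sketched as follows. The field names mirror the shape of the HTTP Gateway's request/response records, but the structs here are simplified pure-Rust stand-ins, and the routed assets are placeholders:

```rust
// Simplified stand-ins for the HTTP Gateway types; the real endpoint
// receives and returns Candid records with fields like these.
struct HttpRequest {
    method: String,
    url: String,
    headers: Vec<(String, String)>,
    body: Vec<u8>,
}

struct HttpResponse {
    status_code: u16,
    headers: Vec<(String, String)>,
    body: Vec<u8>,
}

// #[ic_cdk::query]  <- the gateway invokes this for each HTTP request
fn http_request(req: HttpRequest) -> HttpResponse {
    // Route by URL path and serve the matching asset (placeholders here).
    let (status_code, content_type, body): (u16, &str, &[u8]) = match req.url.as_str() {
        "/" => (200, "text/html", b"<html><body>Hello</body></html>"),
        "/app.js" => (200, "application/javascript", b"console.log('hi');"),
        _ => (404, "text/plain", b"not found"),
    };
    HttpResponse {
        status_code,
        headers: vec![("Content-Type".to_string(), content_type.to_string())],
        body: body.to_vec(),
    }
}
```

In a real cube the response also carries certification headers so the gateway can verify the assets; asset libraries such as ic-certified-assets take care of that.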

Since BigFile supports general HTML, JS, and CSS code, any web framework should be compatible with BigFile. SvelteKit, React, and Vue have been used successfully in production. dfx v0.17.0 and newer can automatically generate project templates based on these frameworks. Learn more about using these frameworks.

One feature of modern web frameworks that currently doesn’t work in a cube is server-side rendering (SSR). That’s because SSR requires running JS code in the cube. In the future, this might become possible with Azle. Until then, if SSR is required, then one solution is to host the frontend outside of BigFile while keeping the core logic in the backend cube.

Having no frontend at all is also a valid option for smart contracts that don’t have a UI and are callable only by other smart contracts.

Architecture considerations

A common question when developing an application is how and where to store the data. In contrast to traditional platforms, BigFile does not provide a database. Instead, BigFile automatically persists changes in the cube state, including its Wasm and stable memories. This means that developers have a lot of freedom in organizing and storing data. The recommended practice is to use existing libraries, such as the Rust stable-structures library, to store data in stable memory.
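As an illustration of the pattern, the sketch below keeps a user-profile map in cube state. A real cube would swap `std::collections::BTreeMap` for a stable-memory map such as the one in the stable-structures library, which exposes a similar insert/get interface and lets the data survive cube upgrades; std's map keeps the sketch portable.

```rust
use std::cell::RefCell;
use std::collections::BTreeMap;

// Cube state: a map from user id to profile name. In a real cube this
// would be a stable_structures::BTreeMap backed by stable memory.
thread_local! {
    static PROFILES: RefCell<BTreeMap<u64, String>> = RefCell::new(BTreeMap::new());
}

// Would be an update call: modifies cube state.
fn set_profile(user_id: u64, name: String) {
    PROFILES.with(|p| {
        p.borrow_mut().insert(user_id, name);
    });
}

// Would be a query call: reads cube state.
fn get_profile(user_id: u64) -> Option<String> {
    PROFILES.with(|p| p.borrow().get(&user_id).cloned())
}
```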

Another question developers should ask is how to structure their application’s cubes. It is possible to build an application consisting of multiple cubes that communicate with each other. A common pitfall for new developers is designing the application for millions of users from the get-go, without understanding the underlying trade-offs of the system. It is better to start with the simplest possible architecture and iteratively improve it as the user base grows.

Single cube architecture

This is the simplest possible architecture and the recommended starting point for most developers. A single cube can host the entire application stack, including its web assets, core logic, and data. To write a single cube that hosts frontend assets and backend core logic, you will need a library for the assets storage API, such as the ic-certified-assets library for Rust cubes.

Even though this architecture is simple, it can scale to thousands of users and gigabytes of data.

Note that by default, dfx generates a dual cube project where one cube is the backend cube and the other cube is the frontend cube that stores web assets. This is because having a dedicated cube for the frontend allows any language to be used for the backend cube without needing to use a library for the assets storage API.
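The generated dual cube layout is declared in dfx.json roughly as follows (the `hello_*` names and paths are illustrative template defaults and may vary between dfx versions; note that dfx itself still uses the configuration key `canisters`):

```json
{
  "canisters": {
    "hello_backend": {
      "main": "src/hello_backend/main.mo",
      "type": "motoko"
    },
    "hello_frontend": {
      "source": ["src/hello_frontend/dist"],
      "type": "assets",
      "dependencies": ["hello_backend"]
    }
  }
}
```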

Cube per subnet architecture

BigFile scales horizontally via subnets, so smart contracts can also scale by utilizing more subnets. One way to achieve this is to have one or multiple cubes per subnet and then shard data over these cubes to distribute the load. This is the most scalable architecture and could, in theory, support millions of users and terabytes of data. However, since the application data and logic are distributed over multiple subnets, it requires expert knowledge of distributed programming. In other words, the cost of development and maintenance is much higher than with the single cube architecture.
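At its core, sharding reduces to a deterministic key-to-cube mapping that every cube can compute locally. A minimal sketch follows; the shard table and cube names are hypothetical, and a real deployment would keep the table in an index cube and handle resharding as the application grows.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical registry: one data cube per subnet, identified here by
// placeholder names. A real application would store actual cube ids.
const SHARDS: [&str; 3] = ["cube-on-subnet-a", "cube-on-subnet-b", "cube-on-subnet-c"];

// Deterministically map a user key to the cube that stores its data.
// Every cube computes the same mapping, so no coordination is needed
// on the read/write path.
fn shard_for(user_key: &str) -> &'static str {
    let mut h = DefaultHasher::new();
    user_key.hash(&mut h);
    SHARDS[(h.finish() % SHARDS.len() as u64) as usize]
}
```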

Cube per service architecture

Cubes can be considered as microservices, where each cube is responsible for a specific service of the application, such as managing users, storing data, or processing data. Note that all benefits and disadvantages of the traditional microservice architecture apply here as well. The default project structure that dfx generates can be viewed as the simplest microservice architecture, with the frontend cube responsible for serving web assets and the backend cube responsible for the core logic of the application.

Cube per user architecture

This architecture is based on the vision that Web3 users should have full control over their data. The idea is to create a cube per user and make the user the controller of their cube. The main cube of the application then orchestrates user cubes to implement the application’s functionality. Since users are controllers of their cubes, they can install their own code, decide how to participate in the application, and determine what data to share. These user benefits come at a large development cost, because the main cube needs to be programmed to handle all possible actions of potentially malicious user cubes. This is a new and unprecedented way of programming, and there hasn’t been a successful implementation of this vision yet. A couple of projects have opted for this architecture, but only NFID Vaults has given ownership of cubes to the users. A common misconception is that the cube-per-user architecture is the most scalable; in fact, cube-per-subnet is more performant because it can utilize multiple subnets without the overhead of too many cubes.