Prove
Giza provides two methods for proving Orion Cairo programs: through the CLI, or directly after running inference on the Giza Platform. Below are detailed instructions for both methods.
Option 1: Prove a Model After Running Inference
Deploying Your Model
After deploying your model on the Giza Platform, you will receive a URL for your deployed model. Refer to the Endpoints section for more details on deploying models.
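As an illustrative sketch, a deployment typically looks like the following (the model and version IDs are placeholders; the exact flags may vary by CLI version, so treat the Endpoints section as the authoritative reference):

```
giza endpoints deploy --model-id 1 --version-id 1
```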
Running Inference
To run inference, use the /cairo_run endpoint of your deployed model's URL. For example:
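A minimal sketch of an inference request, assuming the endpoint accepts a JSON payload with an `args` field (the URL and input values below are placeholders):

```
curl -X POST https://your-endpoint-url/cairo_run \
     -H "Content-Type: application/json" \
     -d '{"args": "[1 2]"}'
```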
This action will execute the inference, generate Trace and Memory files on the platform, and initiate a proving job. The inference process will return the output result along with a request ID.
Checking Proof Status
To check the status of your proof, use the following command:
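A sketch of the status check, assuming the `giza endpoints get-proof` command with placeholder IDs (use the request ID returned by the inference call):

```
giza endpoints get-proof --endpoint-id 1 --proof-id "your-request-id"
```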
Downloading Your Proof
Once the proof is ready, you can download it using:
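A sketch of the download step, assuming the `giza endpoints download-proof` command; the IDs and output path are placeholders:

```
giza endpoints download-proof --endpoint-id 1 --proof-id "your-request-id" --output-path zk.proof
```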
You can find an extensive example in the Giza Actions tutorial.
Option 2: Prove a Model Directly from the CLI
Alternatively, you can prove a model directly using the CLI without deploying the model for inference. This method requires providing Trace and Memory files, which can only be obtained by running CairoVM in proof mode.
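As an illustrative sketch only (the binary name and flag spellings follow the LambdaClass cairo-vm CLI and may differ between versions), generating the Trace and Memory files in proof mode might look like:

```
cairo-vm-cli program.json --proof_mode --layout all_cairo \
    --trace_file trace.bin --memory_file memory.bin
```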
Running the Prove Command
Execute the following command to prove your model:
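A sketch of the prove step, assuming a `giza prove` command that takes the Trace and Memory files produced above (the file paths are placeholders):

```
giza prove --trace trace.bin --memory memory.bin --output-path zk.proof
```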
This option is less preferred because it requires working with CairoVM directly. If you opt for this method, ensure you use the following commit of CairoVM: 1a78237.