You can deploy an image to gain shell access and run commands on the GPU. This example uses Ubuntu, but you can find many other images on Docker Hub.

One-click deploy (Recommended)

<aside> 💡

Click here to deploy Ubuntu with access to 1xH100 GPU.

</aside>


Access your shell

You can set up the Northflank CLI and SSH remote access by following this guide:

Using SSH to access your services

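The linked guide covers the full setup; as a rough sketch, remote access starts from an SSH key pair on your machine (the key file name below is illustrative, and the steps for registering the public key with your account are in the guide):

```shell
# Create an SSH key pair to register with Northflank (file name is an example).
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/northflank -N "" -q

# Print the public key, ready to paste into your account's SSH key settings.
cat ~/.ssh/northflank.pub
```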

Create a deployment

  1. Create a new deployment service via create new › service › deployment service
  2. Name your service, and select external image in the deployment section
  3. Enter the path for the image, for example pytorch/pytorch:2.8.0-cuda12.6-cudnn9-runtime
  4. Choose a deployment plan with sufficient resources for your requirements
    1. Make sure to choose a larger ephemeral storage size if you’re likely to download large files
  5. Select the GPU type and how many to make available to the deployment
  6. In advanced options, set the Docker runtime mode to custom command and enter sleep 1d to keep the deployment running
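Once the deployment is running and you have a shell, it is worth sanity-checking that the GPU is actually visible inside the container. A minimal sketch, assuming the NVIDIA driver tools (nvidia-smi) are available in GPU containers, which the steps above make likely but do not guarantee for every image:

```python
import shutil
import subprocess

def gpu_visible() -> bool:
    """Return True if nvidia-smi is present and reports at least one GPU."""
    # If the driver tools aren't on PATH, the GPU isn't exposed to this container.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # "nvidia-smi -L" lists GPUs, one per line, e.g. "GPU 0: NVIDIA H100 ..."
        result = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True, timeout=10
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    print("GPU visible:", gpu_visible())
```

If this prints False inside your deployment, re-check that a GPU type was selected in step 5 and that the image includes CUDA-compatible tooling.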

Add a persistent volume

Since containers on Northflank are ephemeral, any data written to the container’s filesystem will be lost when the container restarts. To use persistent storage, you will need to add a volume to your service. To do this: