gradient.api_sdk.clients package

Submodules

gradient.api_sdk.clients.base_client module

class gradient.api_sdk.clients.base_client.BaseClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

__init__(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Base class. All client classes inherit from it.

An API key can be created at paperspace.com after you sign in to your account. After obtaining it, you can set it in the CLI using the command:

gradient apiKey XXXXXXXXXXXXXXXXXXX

or you can provide your API key in any command, for example:

gradient experiments run ... --apiKey XXXXXXXXXXXXXXXXXXX
Parameters
  • api_key (str) – your API key

  • logger (sdk_logger.Logger) –

gradient.api_sdk.clients.deployment_client module

Deployment related client handler logic.

Remember that in code snippets, all highlighted lines are required; other lines are optional.

class gradient.api_sdk.clients.deployment_client.DeploymentsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

Client to handle deployment related actions.

How to create instance of deployment client:

from gradient import DeploymentsClient

deployment_client = DeploymentsClient(
    api_key='your_api_key_here'
)
HOST_URL = 'https://dev-api.paperspace.io'
create(deployment_type, model_id, name, machine_type, image_url, instance_count, cluster_id=None, use_vpc=False)

Method to create a Deployment instance.

To create a new Deployment, you must first create a Model. With a Model available, use the create method and specify all of the following parameters: deployment type, base image, name, machine type, and container image for serving, as well as the instance count:

from gradient import DeploymentsClient

deployment_client = DeploymentsClient(
    api_key='your_api_key_here'
)

To obtain your Model ID, run the command gradient models list and copy the target Model ID from your list of available Models.

Parameters
  • deployment_type (str) – Model deployment type. Only TensorFlow Model deployment type is currently supported [required]

  • model_id (str) – ID of a trained model [required]

  • name (str) – Human-friendly name for new model deployment [required]

  • machine_type (str) – [G1|G6|G12|K80|P100|GV100] Type of machine for new deployment [required]

  • image_url (str) – Docker image for model deployment [required]

  • instance_count (int) – Number of machine instances [required]

  • cluster_id (str) – cluster ID

  • use_vpc (bool) –

Returns

Created deployment id

Return type

str

start(deployment_id, use_vpc=False)

Start deployment

EXAMPLE:

gradient deployments start --id <your-deployment-id>
Parameters
  • deployment_id (str) – Deployment ID

  • use_vpc (bool) –

stop(deployment_id, use_vpc=False)

Stop deployment

EXAMPLE:

gradient deployments stop --id <your-deployment-id>
Parameters
  • deployment_id – Deployment ID

  • use_vpc (bool) –

list(state=None, project_id=None, model_id=None, use_vpc=False)

List deployments with optional filtering

Parameters
  • state (str) –

  • project_id (str) –

  • model_id (str) –

  • use_vpc (bool) –

gradient.api_sdk.clients.experiment_client module

class gradient.api_sdk.clients.experiment_client.ExperimentsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create_single_node(name, project_id, machine_type, command, ports=None, workspace_url=None, workspace_username=None, workspace_password=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, is_preemptible=False, container=None, container_user=None, registry_username=None, registry_password=None, registry_url=None, use_vpc=False)

Create single node experiment

gradient experiments create singlenode
--projectId <your-project-id>
--name singleEx
--experimentEnv '{"EPOCHS_EVAL":5,"TRAIN_EPOCHS":10,"MAX_STEPS":1000,"EVAL_SECS":10}'
--container tensorflow/tensorflow:1.13.1-gpu-py3
--machineType K80
--command "python mnist.py"
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--workspaceUsername example-username
--workspacePassword example-password
--modelType Tensorflow
--modelPath /artifacts

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • machine_type (str) – Machine type [required]

  • command (str) – Container entrypoint command [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • workspace_username (str) – Project git repository username

  • workspace_password (str) – Project git repository password

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • is_preemptible (bool) – Is preemptible

  • container (str) – Container (dockerfile) [required]

  • container_user (str) – Container user for running the specified command in the container. If no containerUser is specified, the user will default to ‘root’ in the container.

  • registry_username (str) – Registry username for accessing private docker registry container if necessary

  • registry_password (str) – Registry password for accessing private docker registry container if necessary

  • registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Returns

experiment handle

Return type

str

create_multi_node(name, project_id, experiment_type_id, worker_container, worker_machine_type, worker_command, worker_count, parameter_server_container, parameter_server_machine_type, parameter_server_command, parameter_server_count, ports=None, workspace_url=None, workspace_username=None, workspace_password=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, is_preemptible=False, worker_container_user=None, worker_registry_username=None, worker_registry_password=None, worker_registry_url=None, parameter_server_container_user=None, parameter_server_registry_username=None, parameter_server_registry_password=None, parameter_server_registry_url=None, use_vpc=False)

Create multinode experiment

EXAMPLE:

gradient experiments create multinode
--name multiEx
--projectId <your-project-id>
--experimentType GRPC
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand "python mnist.py"
--workerCount 2
--parameterServerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--parameterServerMachineType K80
--parameterServerCommand "python mnist.py"
--parameterServerCount 1
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--workspaceUsername example-username
--workspacePassword example-password
--modelType Tensorflow

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • experiment_type_id (int) – Experiment Type ID [required]

  • worker_container (str) – Worker container (dockerfile) [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • parameter_server_container (str) – Parameter server container [required]

  • parameter_server_machine_type (str) – Parameter server machine type [required]

  • parameter_server_command (str) – Parameter server command [required]

  • parameter_server_count (int) – Parameter server count [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • workspace_username (str) – Project git repository username

  • workspace_password (str) – Project git repository password

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • is_preemptible (bool) – Is preemptible

  • worker_container_user (str) – Worker container user

  • worker_registry_username (str) – Registry username for accessing private docker registry container if necessary

  • worker_registry_password (str) – Registry password for accessing private docker registry container if necessary

  • worker_registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • parameter_server_container_user (str) – Parameter server container user

  • parameter_server_registry_username (str) – Registry username for accessing private docker registry container if necessary

  • parameter_server_registry_password (str) – Registry password for accessing private docker registry container if necessary

  • parameter_server_registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Returns

experiment handle

Return type

str

run_single_node(name, project_id, machine_type, command, ports=None, workspace_url=None, workspace_username=None, workspace_password=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, is_preemptible=False, container=None, container_user=None, registry_username=None, registry_password=None, registry_url=None, use_vpc=False)

Create and start single node experiment

EXAMPLE:

gradient experiments run singlenode
--projectId <your-project-id>
--name singleEx
--experimentEnv '{"EPOCHS_EVAL":5,"TRAIN_EPOCHS":10,"MAX_STEPS":1000,"EVAL_SECS":10}'
--container tensorflow/tensorflow:1.13.1-gpu-py3
--machineType K80
--command "python mnist.py"
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--workspaceUsername example-username
--workspacePassword example-password
--modelType Tensorflow
--modelPath /artifacts

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • machine_type (str) – Machine type [required]

  • command (str) – Container entrypoint command [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • workspace_username (str) – Project git repository username

  • workspace_password (str) – Project git repository password

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • is_preemptible (bool) – Is preemptible

  • container (str) – Container (dockerfile) [required]

  • container_user (str) – Container user for running the specified command in the container. If no containerUser is specified, the user will default to ‘root’ in the container.

  • registry_username (str) – Registry username for accessing private docker registry container if necessary

  • registry_password (str) – Registry password for accessing private docker registry container if necessary

  • registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Returns

experiment handle

Return type

str

run_multi_node(name, project_id, experiment_type_id, worker_container, worker_machine_type, worker_command, worker_count, parameter_server_container, parameter_server_machine_type, parameter_server_command, parameter_server_count, ports=None, workspace_url=None, workspace_username=None, workspace_password=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, is_preemptible=False, worker_container_user=None, worker_registry_username=None, worker_registry_password=None, worker_registry_url=None, parameter_server_container_user=None, parameter_server_registry_username=None, parameter_server_registry_password=None, parameter_server_registry_url=None, use_vpc=False)

Create and start multinode experiment

The following command creates and starts a multinode experiment called multiEx and places it within the Gradient Project identified by the --projectId option. (Note: in some early versions of the CLI this option was called --projectHandle.)

EXAMPLE:

gradient experiments run multinode
--name multiEx
--projectId <your-project-id>
--experimentType GRPC
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand "python mnist.py"
--workerCount 2
--parameterServerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--parameterServerMachineType K80
--parameterServerCommand "python mnist.py"
--parameterServerCount 1
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--workspaceUsername example-username
--workspacePassword example-password
--modelType Tensorflow

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • experiment_type_id (int) – Experiment Type ID [required]

  • worker_container (str) – Worker container (dockerfile) [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • parameter_server_container (str) – Parameter server container [required]

  • parameter_server_machine_type (str) – Parameter server machine type [required]

  • parameter_server_command (str) – Parameter server command [required]

  • parameter_server_count (int) – Parameter server count [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • workspace_username (str) – Project git repository username

  • workspace_password (str) – Project git repository password

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • is_preemptible (bool) – Is preemptible

  • worker_container_user (str) – Worker container user

  • worker_registry_username (str) – Registry username for accessing private docker registry container if necessary

  • worker_registry_password (str) – Registry password for accessing private docker registry container if necessary

  • worker_registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • parameter_server_container_user (str) – Parameter server container user

  • parameter_server_registry_username (str) – Registry username for accessing private docker registry container if necessary

  • parameter_server_registry_password (str) – Registry password for accessing private docker registry container if necessary

  • parameter_server_registry_url (str) – Registry server URL for accessing private docker registry container if necessary

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Returns

experiment handle

Return type

str

start(experiment_id, use_vpc=False)

Start existing experiment that has not run

EXAMPLE:

gradient experiments start <experiment_id>
Parameters
  • experiment_id (str) – Experiment ID

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Raises

exceptions.GradientSdkError

stop(experiment_id, use_vpc=False)

Stop running experiment

EXAMPLE:

gradient experiments stop <experiment_id>
Parameters
  • experiment_id (str) – Experiment ID

  • use_vpc (bool) – Set to True when using Virtual Private Cloud

Raises

exceptions.GradientSdkError

list(project_id=None)

Get a list of experiments. Optionally filter by project ID

EXAMPLE:

gradient experiments list

EXAMPLE RETURN:

+-----------------------------+----------------+----------+
| Name                        | ID             | Status   |
+-----------------------------+----------------+----------+
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | failed   |
| mnist-multinode             | experiment-id  | created  |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist                       | experiment-id  | stopped  |
+-----------------------------+----------------+----------+
Parameters

project_id (str|list|None) – Project ID or list of project IDs to filter by

Returns

experiments

Return type

list[models.SingleNodeExperiment|models.MultiNodeExperiment]

get(experiment_id)

Get experiment instance

Parameters

experiment_id (str) – Experiment ID

Return type

models.SingleNodeExperiment|models.MultiNodeExperiment

logs(experiment_id, line=0, limit=10000)

Show list of latest logs from the specified experiment.

EXAMPLE:

gradient experiments logs --experimentId <your-experiment-id>
Parameters
  • experiment_id (str) – Experiment ID

  • line (int) – line number at which logs start to display on screen

  • limit (int) – maximum number of lines displayed on screen; default set to 10000

Returns

list of LogRows

Return type

list[models.LogRow]

yield_logs(experiment_id, line=0, limit=10000)

Get log generator. Polls the API for new logs

Parameters
  • experiment_id (str) –

  • line (int) – line number at which logs start to display on screen

  • limit (int) – maximum number of lines displayed on screen; default set to 10000

Returns

generator yielding LogRow instances

Return type

Iterator[models.LogRow]

gradient.api_sdk.clients.http_client module

class gradient.api_sdk.clients.http_client.API(api_url, headers=None, api_key=None, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

__init__(api_url, headers=None, api_key=None, logger=<gradient.api_sdk.logger.MuteLogger object>)
property api_key
get_path(url)
post(url, json=None, params=None, files=None, data=None)
put(url, json=None, params=None)
get(url, json=None, params=None)
delete(url, json=None, params=None)
class gradient.api_sdk.clients.http_client.GradientResponse(body, code, headers, data)

Bases: object

__init__(body, code, headers, data)

Initialize self. See help(type(self)) for accurate signature.

property ok
classmethod interpret_response(response)
Return type

GradientResponse

gradient.api_sdk.clients.hyperparameter_client module

class gradient.api_sdk.clients.hyperparameter_client.HyperparameterJobsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create(name, project_id, tuning_command, worker_container, worker_machine_type, worker_command, worker_count, worker_container_user=None, worker_registry_username=None, worker_registry_password=None, is_preemptible=False, ports=None, workspace_url=None, artifact_directory=None, cluster_id=None, experiment_env=None, trigger_event_id=None, model_type=None, model_path=None, dockerfile_path=None, hyperparameter_server_registry_username=None, hyperparameter_server_registry_password=None, hyperparameter_server_container=None, hyperparameter_server_container_user=None, hyperparameter_server_machine_type=None, working_directory=None, use_dockerfile=False)

Create hyperparameter tuning job

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • tuning_command (str) – Tuning command [required]

  • worker_container (str) – Worker container [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • worker_container_user (str) – Worker container user

  • worker_registry_username (str) – Worker registry username

  • worker_registry_password (str) – Worker registry password

  • is_preemptible (bool) – Flag: is preemptible

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables (in JSON)

  • trigger_event_id (str) – GradientCI trigger event id

  • model_type (str) – Model type

  • model_path (str) – Model path

  • dockerfile_path (str) – Path to dockerfile in project

  • hyperparameter_server_registry_username (str) – Hyperparameter server registry username

  • hyperparameter_server_registry_password (str) – Hyperparameter server registry password

  • hyperparameter_server_container (str) – Hyperparameter server container

  • hyperparameter_server_container_user (str) – Hyperparameter server container user

  • hyperparameter_server_machine_type (str) – Hyperparameter server machine type

  • working_directory (str) – Working directory for the experiment

  • use_dockerfile (bool) – Flag: use dockerfile

Returns

ID of a new job

Return type

str

run(name, project_id, tuning_command, worker_container, worker_machine_type, worker_command, worker_count, worker_registry_username=None, worker_registry_password=None, worker_container_user=None, is_preemptible=False, ports=None, workspace_url=None, artifact_directory=None, cluster_id=None, experiment_env=None, trigger_event_id=None, model_type=None, model_path=None, dockerfile_path=None, hyperparameter_server_registry_username=None, hyperparameter_server_registry_password=None, hyperparameter_server_container_user=None, hyperparameter_server_container=None, hyperparameter_server_machine_type=None, working_directory=None, use_dockerfile=False)

Create and start hyperparameter tuning job

EXAMPLE:

gradient hyperparameters run
--name HyperoptKerasExperimentCLI1
--projectId <your-project-id>
--tuningCommand 'make run_hyperopt'
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand 'make run_hyperopt_worker'
--workerCount 2
--workspaceUrl git+https://github.com/Paperspace/hyperopt-keras-sample
Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • tuning_command (str) – Tuning command [required]

  • worker_container (str) – Worker container [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • worker_container_user (str) – Worker container user

  • worker_registry_password – Worker registry password

  • worker_registry_username – Worker registry username

  • is_preemptible (bool) – Flag: is preemptible

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables (in JSON)

  • trigger_event_id (str) – GradientCI trigger event id

  • model_type (str) – Model type

  • model_path (str) – Model path

  • dockerfile_path (str) – Path to dockerfile

  • hyperparameter_server_registry_username (str) – container registry username

  • hyperparameter_server_registry_password (str) – container registry password

  • hyperparameter_server_container_user (str) – hps container user

  • hyperparameter_server_container (str) – hps container

  • hyperparameter_server_machine_type (str) – hps machine type

  • working_directory (str) – Working directory for the experiment

  • use_dockerfile (bool) – Flag: use dockerfile

Returns

ID of a new job

Return type

str

get(id)

Get a hyperparameter tuning job instance

Parameters

id (str) – Hyperparameter job id

Returns

instance of Hyperparameter

Return type

models.Hyperparameter

start(id)

Start existing hyperparameter tuning job

Parameters

id (str) – Hyperparameter job id

Raises

exceptions.GradientSdkError

list()

Get a list of hyperparameter tuning jobs

EXAMPLE:

gradient hyperparameters list

EXAMPLE RETURN:

+--------------------------------+----------------+------------+
| Name                           | ID             | Project ID |
+--------------------------------+----------------+------------+
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
+--------------------------------+----------------+------------+
Return type

list[models.Hyperparameter]

gradient.api_sdk.clients.job_client module

Jobs related client handler logic.

Remember that in code snippets, all highlighted lines are required; other lines are optional.

class gradient.api_sdk.clients.job_client.JobsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

Client to handle job related actions.

How to create instance of job client:

from gradient import JobsClient

job_client = JobsClient(
    api_key='your_api_key_here'
)
create(machine_type, container, project_id, data=None, name=None, command=None, ports=None, is_public=None, workspace=None, workspace_archive=None, workspace_url=None, working_directory=None, ignore_files=None, experiment_id=None, job_env=None, use_dockerfile=None, is_preemptible=None, project=None, started_by_user_id=None, rel_dockerfile_path=None, registry_username=None, registry_password=None, cluster=None, cluster_id=None, node_attrs=None, workspace_file_name=None)

Method to create and start a job in Paperspace Gradient.

Example create job:

job = job_client.create(
    machine_type='K80',
    container='tensorflow/tensorflow:1.13.1-gpu-py3',
    project_id='Som3ProjecTiD',
    data=data,
    name='Example job',
    command='pip install -r requirements.txt && python mnist.py',
    ports='5000:5000',
    workspace_url='git+https://github.com/Paperspace/mnist-sample.git',
    job_env={
        'CUSTOM_ENV': 'Some value that will be set as system environment',
    }
)
Parameters
  • machine_type (str) –

    Type of machine on which the job should run. This field is required.

    We recommend choosing one of these:

    K80
    P100
    TPU
    GV100
    GV100x8
    G1
    G6
    G12
    

  • container (str) –

    name of the Docker container that should be used to run the job. This field is required.

    Example value: tensorflow/tensorflow:1.13.1-gpu-py3

  • project_id (str) – ID of the project the job should be connected to. This field is required.

  • data (None|MultipartEncoderMonitor) – None if there is no data to upload, or encoded multipart data information with the files to upload.

  • name (str) – name for the job. If not provided, it will be autogenerated.

  • command (str) – custom command to run instead of the default command from the Docker image

  • ports (str) –

    string with comma (,) separated mapped ports.

    Example value: 5000:5000,8080:8080

  • is_public (bool) – flag to select whether the job should be publicly available. Default: None

  • workspace (str) – this field is used with the CLI to upload a folder as your workspace. Provide the path you wish to upload. (Soon it will also support a path to a workspace archive or a git repository URL.)

  • workspace_archive (str) – path to a workspace archive. (Deprecated; scheduled for removal in an upcoming version.)

  • workspace_url (str) – URL to the repository with code to run inside the job. (Deprecated; scheduled for removal in an upcoming version.)

  • working_directory (str) – location of the code to run. Default: /paperspace

  • ignore_files (str) – this field is used with the CLI to upload your workspace from your computer while excluding specified files. Provide a comma (,) separated string of file names that should be excluded from the workspace upload.

  • experiment_id (str) – ID of the experiment the job should be connected to. If not provided, a new experiment will be created for this job.

  • job_env (dict) – key-value collection of environment variables that are used in the code

  • use_dockerfile (bool) – determines whether to build from a Dockerfile (default: False). Do not include a --container argument when using this flag.

  • is_preemptible (bool) – flag set if you want to use a preemptible (spot) instance. Default: False

  • project (str) – name of the project that the job is linked to.

  • started_by_user_id (str) – ID of the user that started the job. By default it is taken from the access token or API key.

  • rel_dockerfile_path (str) – relative path to your Dockerfile. Default: ./Dockerfile

  • registry_username (str) – username for custom docker registry

  • registry_password (str) – password for custom docker registry

  • cluster (str) – name of cluster that job should be run on.

  • cluster_id (str) – ID of the cluster the job should run on. If you use one of the recommended machine types, a cluster will be chosen automatically, so you do not need to provide it.

  • node_attrs (dict) –

  • workspace_file_name (str) –

Returns

Job handle

Return type

str

delete(job_id)

Method to remove a job.

job_client.delete(
    job_id='Your_job_id_here'
)
Parameters

job_id (str) – ID of the job that you want to remove

Raises

exceptions.GradientSdkError

stop(job_id)

Method to stop a running job.

job_client.stop(
    job_id='Your_job_id_here'
)
Parameters

job_id – ID of the job that you want to stop

Raises

exceptions.GradientSdkError

list(project_id=None, project=None, experiment_id=None)

Method to list jobs.

To retrieve all user jobs:

jobs = job_client.list()

To list jobs from a project:

job = job_client.list(
    project_id="Your_project_id_here",
)
Parameters
  • project_id (str) – ID of the project whose jobs you want to list

  • project (str) – name of the project whose jobs you want to list

  • experiment_id (str) – ID of the experiment whose jobs you want to list

Returns

list of job models

Return type

list

logs(job_id, line=0, limit=10000)

Method to retrieve job logs.

job_logs = job_client.logs(
    job_id='Your_job_id_here',
    line=100,
    limit=100
)
Parameters
  • job_id (str) – ID of the job whose logs you want to retrieve

  • line (int) – Line number from which to start retrieving logs. Default 0

  • limit (int) – Maximum number of log lines to retrieve. Default 10000

Returns

list of formatted logs lines

Return type

list

yield_logs(job_id, line=0, limit=10000)

Get a log generator that polls the API for new logs.

job_logs_generator = job_client.yield_logs(
    job_id='Your_job_id_here',
    line=100,
    limit=100
)
Parameters
  • job_id (str) –

  • line (int) – Line number from which to start yielding logs

  • limit (int) – Maximum number of log lines to yield. Default 10000

Returns

generator yielding LogRow instances

Return type

Iterator[models.LogRow]
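Conceptually, yield_logs pages through the log and resumes from the last line it has seen. The generator below is an illustrative sketch of that pattern only; fetch_page is a stand-in, not an SDK function:

```python
def follow_logs(fetch_page, line=0, limit=10000):
    """Yield log rows page by page, resuming from the last line seen.

    fetch_page(line, limit) stands in for the underlying log request; it
    should return a list of rows starting at `line`, or an empty list when
    no new rows are available.
    """
    while True:
        rows = fetch_page(line, limit)
        if not rows:
            break  # a live poller would sleep and retry here instead
        for row in rows:
            yield row
        line += len(rows)  # resume after the last row we yielded
```

With the real client, consuming the generator looks like: for row in job_client.yield_logs(job_id='Your_job_id_here'): print(row).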

artifacts_delete(job_id, files=None)

Method to delete job artifact.

job_client.artifacts_delete(
    job_id='Your_job_id_here',
    files=files,
)
Parameters
  • job_id (str) – ID of the job whose artifacts you want to delete

  • files (str) – To remove only some files from the artifacts, pass a comma-separated string of their names

Raises

exceptions.GradientSdkError
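Note that files is a single comma-separated string rather than a list; one way to build it:

```python
# Build the comma-separated string expected by the `files` argument
files = ','.join(['model.h5', 'training_log.csv'])
print(files)  # model.h5,training_log.csv
```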

artifacts_get(job_id)

Method to retrieve federated access information for job artifacts.

artifacts = job_client.artifacts_get(
    job_id='your_job_id_here',
)
Parameters

job_id (str) – ID of the job whose artifact location information you want to retrieve

Returns

Information about the artifact location

Return type

dict

artifacts_list(job_id, files=None, size=False, links=True)

Method to retrieve all artifacts files.

artifacts = job_client.artifacts_list(
    job_id='your_job_id_here',
    files='your_files,here',
    size=False,
    links=True
)
Parameters
  • job_id (str) – ID of the job whose artifacts to list.

  • files (str) – Limit results to the file names provided. The * wildcard is supported.

  • size (bool) – Flag to show file size. Default False.

  • links (bool) – Flag to show file URL. Default True.

Returns

List of files from the job artifacts, with details if requested.

Return type

list
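The * wildcard in files appears to follow shell-style globbing; Python's fnmatch module illustrates the matching semantics assumed here (file names are illustrative):

```python
import fnmatch

# 'model*' matches every name beginning with "model", shell-glob style
names = ['model.h5', 'model.json', 'stats.csv']
matched = fnmatch.filter(names, 'model*')
print(matched)  # ['model.h5', 'model.json']
```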

gradient.api_sdk.clients.machines_client module

class gradient.api_sdk.clients.machines_client.MachinesClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create(name, machine_type, region, size, billing_type, template_id, assign_public_ip=None, dynamic_public_ip=None, network_id=None, team_id=None, user_id=None, email=None, password=None, first_name=None, last_name=None, notification_email=None, script_id=None)

Create new machine

Parameters
  • name (str) – A memorable name for this machine [required]

  • machine_type (str) – Machine type [required]

  • region (str) – Name of the region [required]

  • size (str) – Storage size for the machine in GB [required]

  • billing_type (str) – Either ‘monthly’ or ‘hourly’ billing [required]

  • template_id (str) – Template id of the template to use for creating this machine [required]

  • assign_public_ip (bool) – Assign a new public ip address. Cannot be used with dynamic_public_ip

  • dynamic_public_ip (bool) – Temporarily assign a new public ip address on machine. Cannot be used with assign_public_ip

  • network_id (str) – If creating on a specific network, specify its id

  • team_id (str) – If creating the machine for a team, specify the team id

  • user_id (str) – If assigning to an existing user other than yourself, specify the user id (mutually exclusive with email, password, first_name, last_name)

  • email (str) – If creating a new user for this machine, specify their email address (mutually exclusive with user_id)

  • password (str) – If creating a new user, specify their password (mutually exclusive with user_id)

  • first_name (str) – If creating a new user, specify their first name (mutually exclusive with user_id)

  • last_name (str) – If creating a new user, specify their last name (mutually exclusive with user_id)

  • notification_email (str) – Send a notification to this email address when complete

  • script_id (str) – The script id of a script to be run on startup

Returns

ID of created machine

Return type

str
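Several of the parameters above are mutually exclusive: assign_public_ip with dynamic_public_ip, and user_id with the new-user fields. A minimal pre-flight check sketching those documented rules (illustrative only, not part of the SDK):

```python
def check_machine_params(params):
    """Raise early on the mutual-exclusion rules documented above."""
    # assign_public_ip cannot be combined with dynamic_public_ip
    if params.get('assign_public_ip') and params.get('dynamic_public_ip'):
        raise ValueError('assign_public_ip cannot be used with dynamic_public_ip')
    # user_id is mutually exclusive with the new-user fields
    new_user_fields = {'email', 'password', 'first_name', 'last_name'}
    if params.get('user_id') and new_user_fields & params.keys():
        raise ValueError('user_id is mutually exclusive with email, password, '
                         'first_name and last_name')
    return params
```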

get(id)

Get machine instance

Parameters

id (str) – ID of a machine [required]

Returns

Machine instance

Return type

models.Machine

is_available(machine_type, region)

Check if the specified machine type is available in a given region

Parameters
  • machine_type (str) – Machine type [required]

  • region (str) – Name of the region [required]

Returns

Whether the specified machine type is available in the region

Return type

bool

restart(id)

Restart machine

Parameters

id (str) – ID of a machine [required]

start(id)

Start machine

Parameters

id (str) – ID of a machine [required]

stop(id)

Stop machine

Parameters

id (str) – ID of a machine [required]

update(id, name=None, shutdown_timeout_in_hours=None, shutdown_timeout_forces=None, perform_auto_snapshot=None, auto_snapshot_frequency=None, auto_snapshot_save_count=None, dynamic_public_ip=None)

Update machine instance

Parameters
  • id (str) – Id of the machine to update [required]

  • name (str) – New name for the machine

  • shutdown_timeout_in_hours (int) – Number of hours before machine is shutdown if no one is logged in via the Paperspace client

  • shutdown_timeout_forces (bool) – Force shutdown at shutdown timeout, even if there is a Paperspace client connection

  • perform_auto_snapshot (bool) – Perform auto snapshots

  • auto_snapshot_frequency (str) – One of ‘hour’, ‘day’, ‘week’, or None

  • auto_snapshot_save_count (int) – Number of snapshots to save

  • dynamic_public_ip (str) – If true, assigns a new public ip address on machine start and releases it from the account on machine stop

get_utilization(id, billing_month)
Parameters
  • id – ID of the machine

  • billing_month – Billing month in “YYYY-MM” format

Returns

Machine utilization info

Return type

models.MachineUtilization
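billing_month must be a "YYYY-MM" string; for the current month it can be built with strftime:

```python
import datetime

# Format the current month as "YYYY-MM" for get_utilization
billing_month = datetime.date.today().strftime('%Y-%m')
print(billing_month)  # e.g. '2019-08'
```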

delete(machine_id, release_public_ip=False)

Destroy machine with given ID

Parameters
  • machine_id (str) – ID of the machine

  • release_public_ip (bool) – If the assigned public IP should be released

wait_for_state(machine_id, state, interval=5)

Wait for defined machine state

Parameters
  • machine_id (str) – ID of the machine

  • state (str) – State of machine to wait for

  • interval (int) – interval between polls
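wait_for_state amounts to a simple polling loop. The sketch below mirrors that behavior with a stand-in state getter (get_state, wait_until and max_polls are illustrative, not SDK names):

```python
import time

def wait_until(get_state, target_state, interval=5, max_polls=120):
    """Poll get_state() every `interval` seconds until it returns target_state."""
    for _ in range(max_polls):
        if get_state() == target_state:
            return True  # machine reached the desired state
        time.sleep(interval)
    return False  # gave up after max_polls attempts
```

With the real client this corresponds to machine_client.wait_for_state(machine_id='your_machine_id_here', state='ready', interval=5), where the state string is whatever state name the API reports.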

list(id=None, name=None, os=None, ram=None, cpus=None, gpu=None, storage_total=None, storage_used=None, usage_rate=None, shutdown_timeout_in_hours=None, perform_auto_snapshot=None, auto_snapshot_frequency=None, auto_snapshot_save_count=None, agent_type=None, created_timestamp=None, state=None, updates_pending=None, network_id=None, private_ip_address=None, public_ip_address=None, region=None, user_id=None, team_id=None, last_run_timestamp=None)
Parameters
  • id (str) – Optional machine id to match on

  • name (str) – Filter by machine name

  • os (str) – Filter by os used

  • ram (int) – Filter by machine RAM (in bytes)

  • cpus (int) – Filter by CPU count

  • gpu (str) – Filter by GPU type

  • storage_total (str) – Filter by total storage

  • storage_used (str) – Filter by storage used

  • usage_rate (str) – Filter by usage rate

  • shutdown_timeout_in_hours (int) – Filter by shutdown timeout

  • perform_auto_snapshot (bool) – Filter by performAutoSnapshot flag

  • auto_snapshot_frequency (str) – Filter by autoSnapshotFrequency flag

  • auto_snapshot_save_count (int) – Filter by auto snapshots count

  • agent_type (str) – Filter by agent type

  • created_timestamp (datetime) – Filter by date created

  • state (str) – Filter by state

  • updates_pending (str) – Filter by updates pending

  • network_id (str) – Filter by network ID

  • private_ip_address (str) – Filter by private IP address

  • public_ip_address (str) – Filter by public IP address

  • region (str) – Filter by region. One of {CA, NY2, AMS1}

  • user_id (str) – Filter by user ID

  • team_id (str) – Filter by team ID

  • last_run_timestamp (str) – Filter by last run date

Returns

List of machines

Return type

list[models.Machine]
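All of the filters above are optional. If you wrap list() in your own helper, a common pattern is to drop unset values so only explicitly chosen filters are passed through (illustrative helper, not part of the SDK):

```python
def build_filters(**kwargs):
    """Keep only the filters that were explicitly set (non-None)."""
    return {key: value for key, value in kwargs.items() if value is not None}

# Only region and state survive; the unset gpu filter is dropped
filters = build_filters(region='NY2', state='ready', gpu=None)
print(filters)  # {'region': 'NY2', 'state': 'ready'}
```

The resulting dict could then be expanded into the call as machines_client.list(**filters).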

gradient.api_sdk.clients.model_client module

class gradient.api_sdk.clients.model_client.ModelsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

list(experiment_id=None, project_id=None)

Get list of models

Parameters
  • experiment_id (str) – Experiment ID

  • project_id (str) – Project ID

Return type

list[models.Model]

gradient.api_sdk.clients.notebook_client module

class gradient.api_sdk.clients.notebook_client.NotebooksClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create(vm_type_id, container_id, cluster_id, container_name=None, name=None, registry_username=None, registry_password=None, default_entrypoint=None, container_user=None, shutdown_timeout=None, is_preemptible=None)

Create new notebook

Parameters
  • vm_type_id (int) –

  • container_id (int) –

  • cluster_id (int) –

  • container_name (str) –

  • name (str) –

  • registry_username (str) –

  • registry_password (str) –

  • default_entrypoint (str) –

  • container_user (str) –

  • shutdown_timeout (int|float) –

  • is_preemptible (bool) –

Returns

Notebook ID

Return type

str

get(id)

Get Notebook

Parameters

id (str) – Notebook ID

Return type

models.Notebook

delete(id)

Delete existing notebook

Parameters

id (str) – Notebook ID

list()

Get list of Notebooks

Return type

list[models.Notebook]

gradient.api_sdk.clients.project_client module

class gradient.api_sdk.clients.project_client.ProjectsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create(name, repository_name=None, repository_url=None)

Create new project

EXAMPLE:

gradient projects create --name new-project

EXAMPLE RETURN:

Project created with ID: <your-project-id>

In the SDK:

from gradient.api_sdk.clients import ProjectsClient

api_key = 'your-api-key'
projects_client = ProjectsClient(api_key)

new_project = projects_client.create('your-project-name')

print(new_project)
Parameters
  • name (str) – Name of new project [required]

  • repository_name (str) – Name of the repository

  • repository_url (str) – URL to the repository

Returns

project ID

Return type

str

list()

Get list of your projects

EXAMPLE:

gradient projects list

EXAMPLE RETURN:

+-----------+------------------+------------+----------------------------+
| ID        | Name             | Repository | Created                    |
+-----------+------------------+------------+----------------------------+
| project-id| <name-of-project>| None       | 2019-06-28 10:38:57.874000 |
| project-id| <name-of-project>| None       | 2019-07-17 13:17:34.493000 |
| project-id| <name-of-project>| None       | 2019-07-17 13:21:12.770000 |
| project-id| <name-of-project>| None       | 2019-07-29 09:26:49.105000 |
+-----------+------------------+------------+----------------------------+

In the SDK:

from gradient.api_sdk.clients import ProjectsClient

api_key = 'your-api-key'
projects_client = ProjectsClient(api_key)

projects_list = projects_client.list()

for project in projects_list:
    print(project)
Returns

list of projects

Return type

list[models.Project]

gradient.api_sdk.clients.sdk_client module

class gradient.api_sdk.clients.sdk_client.SdkClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

__init__(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)
Parameters
  • api_key (str) – API key

  • logger (sdk_logger.Logger) –

gradient.api_sdk.clients.tensorboards_client module

Tensorboard logic related client handler.

Remember that in code snippets all highlighted lines are required other lines are optional.

class gradient.api_sdk.clients.tensorboards_client.TensorboardClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

Client to handle tensorboard related actions.

How to create instance of tensorboard client:

from gradient import TensorboardClient

tb_client = TensorboardClient(
    api_key='your_api_key_here'
)
create(image=None, username=None, password=None, instance_type=None, instance_size=None, instances_count=None, experiments=None)

Method to create a tensorboard in Paperspace Gradient.

Example of creating a tensorboard:

tb_id = tb_client.create(
    experiments=['some_experiment_id'],
    image='tensorflow/tensorflow:latest-py3',
    username='your_username',
    password='your_password',
    instance_type='cpu',
    instance_size='small',
    instances_count=1
)
Parameters
  • image (str) – Your tensorboard will run with this image. Defaults to tensorflow/tensorflow:latest-py3

  • username (str) – Provide a username to restrict access to your tensorboard with basic auth

  • password (str) – Provide a password to restrict access to your tensorboard with basic auth

  • instance_type (str) –

    Type of instance on which to run the tensorboard. Available choices:

    cpu
    gpu
    

    Defaults to cpu.

  • instance_size (str) –

    Size of instance on which to run the tensorboard. Available choices:

    small
    medium
    large
    

    Defaults to small.

  • instances_count (int) – Number of machines on which to run the tensorboard. Default 1.

  • experiments (list) – List of experiment IDs to add to the tensorboard. At least one experiment ID is required.

Returns

Tensorboard ID

Return type

str

Raises

ResourceFetchingError: When there is a problem with the response from the API

get(id)

Method to get tensorboard details.

Example of getting tensorboard details:

tb = tb_client.get(
    id='your_tb_id'
)
Parameters

id (str) – ID of the tensorboard whose details you want to get

Returns

Tensorboard object if found

Return type

None|Tensorboard

Raises

ResourceFetchingError: When there is a problem with the response from the API

list()

Method to list your active tensorboards.

Example usage:

tb_list = tb_client.list()
Returns

list of active tensorboards

Return type

list

Raises

ResourceFetchingError: When there is a problem with the response from the API

add_experiments(id, added_experiments)

Method to add experiments to existing tensorboard.

Example usage:

tb = tb_client.add_experiments(
    id='your_tb_id',
    added_experiments=['new_experiment_id', 'next_new_experiment_id']
)
Parameters
  • id (str) – ID of the tensorboard to which you want to add experiments

  • added_experiments (list) – List of experiment IDs to add to the tensorboard

Returns

updated tensorboard

Return type

Tensorboard

Raises

ResourceFetchingError: When there is a problem with the response from the API

remove_experiments(id, removed_experiments)

Method to remove experiments from existing tensorboard.

Example usage:

tb = tb_client.remove_experiments(
    id='your_tb_id',
    removed_experiments=['experiment_id', 'next_experiment_id']
)
Parameters
  • id (str) – ID of the tensorboard from which you want to remove experiments

  • removed_experiments (list) – List of experiment IDs to remove from the tensorboard

Returns

updated tensorboard

Return type

Tensorboard

Raises

ResourceFetchingError: When there is a problem with the response from the API

delete(id)

Method to delete tensorboard.

Example usage:

tb_client.delete(
    id='your_tb_id'
)
Parameters

id (str) – ID of the tensorboard you want to delete

Module contents