gradient.api_sdk.clients package

Submodules

gradient.api_sdk.clients.base_client module

class gradient.api_sdk.clients.base_client.BaseClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

HOST_URL = 'https://services.paperspace.io/experiments/v1/'
__init__(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Base class. All client classes inherit from it.

An API key can be created at paperspace.com after you sign in to your account. After obtaining it, you can set it in the CLI using the command:

gradient apiKey XXXXXXXXXXXXXXXXXXX

or you can provide your API key in any command, for example:

gradient experiments run ... --apiKey XXXXXXXXXXXXXXXXXXX
Parameters
  • api_key (str) – your API key

  • logger (sdk_logger.Logger) –

gradient.api_sdk.clients.deployment_client module

Deployment related client handler logic.

Remember that in code snippets all highlighted lines are required; other lines are optional.

class gradient.api_sdk.clients.deployment_client.DeploymentsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

Client to handle deployment related actions.

How to create instance of deployment client:

from gradient import DeploymentsClient

deployment_client = DeploymentsClient(
    api_key='your_api_key_here'
)
HOST_URL = 'https://api.paperspace.io'
create(deployment_type, model_id, name, machine_type, image_url, instance_count)

Method to create a Deployment instance.

To create a new Deployment, you must first create a Model. With a Model available, use the create method (or the CLI create subcommand) and specify all of the following parameters: deployment type, base image, name, machine type, and container image for serving, as well as the instance count:

from gradient import DeploymentsClient

deployment_client = DeploymentsClient(
    api_key='your_api_key_here'
)

To obtain your Model ID, you can run the command gradient models list and copy the target Model ID from your available Models.

Parameters
  • deployment_type – Model deployment type. Only TensorFlow Model deployment type is currently supported [required]

  • model_id – ID of a trained model [required]

  • name – Human-friendly name for new model deployment [required]

  • machine_type – [G1|G6|G12|K80|P100|GV100] Type of machine for new deployment [required]

  • image_url – Docker image for model deployment [required]

  • instance_count – Number of machine instances [required]

Returns

Created deployment id

Return type

str
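The CLI flags above map onto create()'s snake_case keyword arguments. A minimal SDK sketch, written as a helper taking a DeploymentsClient instance created as shown earlier; every argument value is a placeholder, and the deployment_type string in particular is an assumption to check against your version of the CLI:

```python
# Assumes a client created as in the snippet above:
#   from gradient import DeploymentsClient
#   deployment_client = DeploymentsClient(api_key='your_api_key_here')

def create_sample_deployment(deployment_client):
    """Create a deployment; every argument value here is a placeholder."""
    return deployment_client.create(
        deployment_type='Tensorflow Serving on K8s',  # assumed type string
        model_id='your-model-id',  # from `gradient models list`
        name='sample-deployment',
        machine_type='K80',
        image_url='tensorflow/serving:latest-gpu',  # placeholder image
        instance_count=1,
    )
```

create() returns the new deployment's ID as a string, which can then be passed to start().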

start(deployment_id)

Start deployment

EXAMPLE:

gradient deployments start --id <your-deployment-id>
Parameters

deployment_id (str) – Deployment ID

stop(deployment_id)

Stop deployment

EXAMPLE:

gradient deployments stop --id <your-deployment-id>
Parameters

deployment_id – Deployment ID
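In the SDK, the two calls above are plain methods taking the deployment ID. A small sketch combining them (the ID comes from create() or gradient deployments list):

```python
def restart_deployment(deployment_client, deployment_id):
    """Stop a deployment, then start it again with the same ID."""
    deployment_client.stop(deployment_id)
    deployment_client.start(deployment_id)
```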

list(filters)

List deployments with optional filtering

To view all running deployments in your team, run:

gradient deployments list --state RUNNING

Options:

--state [BUILDING|PROVISIONING|STARTING|RUNNING|STOPPING|STOPPED|ERROR] Filter by deployment state
--projectId TEXT Use to filter by project ID
--modelId TEXT Use to filter by model ID
Parameters

filters (state|projectId|modelId) –
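An SDK sketch of the same filtered listing; treating filters as a dict whose keys mirror the CLI option names is an assumption:

```python
def running_deployments(deployment_client):
    """List only deployments currently in the RUNNING state."""
    # The dict shape of `filters` is an assumption; keys mirror the CLI options.
    return deployment_client.list(filters={'state': 'RUNNING'})
```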

gradient.api_sdk.clients.experiment_client module

class gradient.api_sdk.clients.experiment_client.ExperimentsClient(api_key, *args, **kwargs)

Bases: gradient.api_sdk.clients.base_client.BaseClient

HOST_URL = 'https://services.paperspace.io/experiments/v1/'
LOG_HOST_URL = 'https://logs.paperspace.io'
__init__(api_key, *args, **kwargs)

Base class. All client classes inherit from it.

An API key can be created at paperspace.com after you sign in to your account. After obtaining it, you can set it in the CLI using the command:

gradient apiKey XXXXXXXXXXXXXXXXXXX

or you can provide your API key in any command, for example:

gradient experiments run ... --apiKey XXXXXXXXXXXXXXXXXXX
Parameters
  • api_key (str) – your API key

  • logger (sdk_logger.Logger) –

create_single_node(name, project_id, machine_type, command, ports=None, workspace_url=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, container=None, container_user=None, registry_username=None, registry_password=None)

Create single node experiment

gradient experiments create singlenode
--projectId <your-project-id>
--name singleEx
--experimentEnv '{"EPOCHS_EVAL":5,"TRAIN_EPOCHS":10,"MAX_STEPS":1000,"EVAL_SECS":10}'
--container tensorflow/tensorflow:1.13.1-gpu-py3
--machineType K80
--command "python mnist.py"
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--modelType Tensorflow
--modelPath /artifacts

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • machine_type (str) – Machine type [required]

  • command (str) – Container entrypoint command [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • container (str) – Container (dockerfile) [required]

  • container_user (str) – Container user for running the specified command in the container. If no containerUser is specified, the user will default to ‘root’ in the container.

  • registry_username (str) – Registry username for accessing a private Docker registry, if necessary

  • registry_password (str) – Registry password for accessing a private Docker registry, if necessary

Returns

experiment handle

Return type

str
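The CLI example above translates to create_single_node() as follows; all values are placeholders copied from that example:

```python
# Assumes: experiment_client = ExperimentsClient(api_key='your_api_key_here')

def create_mnist_experiment(experiment_client):
    """Create (but do not start) a single-node experiment."""
    return experiment_client.create_single_node(
        name='singleEx',
        project_id='your-project-id',
        machine_type='K80',
        command='python mnist.py',
        container='tensorflow/tensorflow:1.13.1-gpu-py3',
        workspace_url='https://github.com/Paperspace/mnist-sample.git',
        experiment_env={'EPOCHS_EVAL': 5, 'TRAIN_EPOCHS': 10,
                        'MAX_STEPS': 1000, 'EVAL_SECS': 10},
        model_type='Tensorflow',  # required to deploy the model later
        model_path='/artifacts',
    )
```

The returned experiment handle can then be passed to start(), logs(), or get().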

create_multi_node(name, project_id, experiment_type_id, worker_container, worker_machine_type, worker_command, worker_count, parameter_server_container, parameter_server_machine_type, parameter_server_command, parameter_server_count, ports=None, workspace_url=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, worker_container_user=None, worker_registry_username=None, worker_registry_password=None, parameter_server_container_user=None, parameter_server_registry_container_user=None, parameter_server_registry_password=None)

Create multinode experiment

EXAMPLE:

gradient experiments create multinode
--name multiEx
--projectId <your-project-id>
--experimentType GRPC
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand "python mnist.py"
--workerCount 2
--parameterServerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--parameterServerMachineType K80
--parameterServerCommand "python mnist.py"
--parameterServerCount 1
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--modelType Tensorflow

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • experiment_type_id (str) – Experiment Type ID [GRPC|MPI] [required]

  • worker_container (str) – Worker container (dockerfile) [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • parameter_server_container (str) – Parameter server container [required]

  • parameter_server_machine_type (str) – Parameter server machine type [required]

  • parameter_server_command (str) – Parameter server command [required]

  • parameter_server_count (int) – Parameter server count [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • worker_container_user (str) – Worker container user

  • worker_registry_username (str) – Registry username for accessing a private Docker registry, if necessary

  • worker_registry_password (str) – Registry password for accessing a private Docker registry, if necessary

  • parameter_server_container_user (str) – Parameter server container user

  • parameter_server_registry_container_user (str) – Registry username for accessing a private Docker registry, if necessary

  • parameter_server_registry_password (str) – Registry password for accessing a private Docker registry, if necessary

Returns

experiment handle

Return type

str
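And the multinode CLI example as a create_multi_node() call; placeholder values throughout:

```python
def create_mnist_multinode(experiment_client):
    """Create (but do not start) a GRPC multinode experiment."""
    return experiment_client.create_multi_node(
        name='multiEx',
        project_id='your-project-id',
        experiment_type_id='GRPC',  # or 'MPI'
        worker_container='tensorflow/tensorflow:1.13.1-gpu-py3',
        worker_machine_type='K80',
        worker_command='python mnist.py',
        worker_count=2,
        parameter_server_container='tensorflow/tensorflow:1.13.1-gpu-py3',
        parameter_server_machine_type='K80',
        parameter_server_command='python mnist.py',
        parameter_server_count=1,
        workspace_url='https://github.com/Paperspace/mnist-sample.git',
        model_type='Tensorflow',
    )
```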

run_single_node(name, project_id, machine_type, command, ports=None, workspace_url=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, container=None, container_user=None, registry_username=None, registry_password=None)

Create and start single node experiment

EXAMPLE:

gradient experiments run singlenode
--projectId <your-project-id>
--name singleEx
--experimentEnv '{"EPOCHS_EVAL":5,"TRAIN_EPOCHS":10,"MAX_STEPS":1000,"EVAL_SECS":10}'
--container tensorflow/tensorflow:1.13.1-gpu-py3
--machineType K80
--command "python mnist.py"
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--modelType Tensorflow
--modelPath /artifacts

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models. Also, --modelPath /artifacts is currently required for singlenode experiments if you need your model to appear in your Model Repository so that you can deploy it using Deployments.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • machine_type (str) – Machine type [required]

  • command (str) – Container entrypoint command [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • container (str) – Container (dockerfile) [required]

  • container_user (str) – Container user for running the specified command in the container. If no containerUser is specified, the user will default to ‘root’ in the container.

  • registry_username (str) – Registry username for accessing a private Docker registry, if necessary

  • registry_password (str) – Registry password for accessing a private Docker registry, if necessary

Returns

experiment handle

Return type

str
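run_single_node() accepts the same keyword arguments as create_single_node() but also starts the experiment; a shorter sketch with placeholder values:

```python
def run_mnist_experiment(experiment_client):
    """Create and start a single-node experiment in one call."""
    return experiment_client.run_single_node(
        name='singleEx',
        project_id='your-project-id',
        machine_type='K80',
        command='python mnist.py',
        container='tensorflow/tensorflow:1.13.1-gpu-py3',
        workspace_url='https://github.com/Paperspace/mnist-sample.git',
    )
```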

run_multi_node(name, project_id, experiment_type_id, worker_container, worker_machine_type, worker_command, worker_count, parameter_server_container, parameter_server_machine_type, parameter_server_command, parameter_server_count, ports=None, workspace_url=None, working_directory=None, artifact_directory=None, cluster_id=None, experiment_env=None, model_type=None, model_path=None, worker_container_user=None, worker_registry_username=None, worker_registry_password=None, parameter_server_container_user=None, parameter_server_registry_container_user=None, parameter_server_registry_password=None)

Create and start multinode experiment

The following command creates and starts a multinode experiment called multiEx and places it within the Gradient Project identified by the --projectId option. (Note: in some early versions of the CLI this option was called --projectHandle.)

EXAMPLE:

gradient experiments run multinode
--name multiEx
--projectId <your-project-id>
--experimentType GRPC
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand "python mnist.py"
--workerCount 2
--parameterServerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--parameterServerMachineType K80
--parameterServerCommand "python mnist.py"
--parameterServerCount 1
--workspaceUrl https://github.com/Paperspace/mnist-sample.git
--modelType Tensorflow

Note: --modelType Tensorflow is currently required if you wish to create a Deployment from your model, since Deployments currently only use Tensorflow Serving to serve models.

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • experiment_type_id (str) – Experiment Type ID [GRPC|MPI] [required]

  • worker_container (str) – Worker container (dockerfile) [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • parameter_server_container (str) – Parameter server container [required]

  • parameter_server_machine_type (str) – Parameter server machine type [required]

  • parameter_server_command (str) – Parameter server command [required]

  • parameter_server_count (int) – Parameter server count [required]

  • ports (str) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • working_directory (str) – Working directory for the experiment

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables in a JSON

  • model_type (str) – defines the type of model that is being generated by the experiment. Model type must be one of Tensorflow, ONNX, or Custom

  • model_path (str) – Model path

  • worker_container_user (str) – Worker container user

  • worker_registry_username (str) – Registry username for accessing a private Docker registry, if necessary

  • worker_registry_password (str) – Registry password for accessing a private Docker registry, if necessary

  • parameter_server_container_user (str) – Parameter server container user

  • parameter_server_registry_container_user (str) – Registry username for accessing a private Docker registry, if necessary

  • parameter_server_registry_password (str) – Registry password for accessing a private Docker registry, if necessary

Returns

experiment handle

Return type

str

start(experiment_id)

Start an existing experiment that has not yet been run

EXAMPLE:

gradient experiments start <experiment_id>
Parameters

experiment_id (str) – Experiment ID

Raises

exceptions.GradientSdkError
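Because start() raises an exception on failure, callers may want to convert it into a boolean; a sketch (catching a broad Exception here for brevity, where real code would catch the exceptions.GradientSdkError named in the Raises entry above):

```python
def try_start(experiment_client, experiment_id):
    """Start an experiment; return True on success, False on SDK error."""
    try:
        experiment_client.start(experiment_id)
    except Exception:  # real code: except exceptions.GradientSdkError
        return False
    return True
```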

stop(experiment_id)

Stop a running experiment

EXAMPLE:

gradient experiments stop <experiment_id>
Parameters

experiment_id (str) – Experiment ID

Raises

exceptions.GradientSdkError

list(project_id=None)

Get a list of experiments. Optionally filter by project ID

EXAMPLE:

gradient experiments list

EXAMPLE RETURN:

+-----------------------------+----------------+----------+
| Name                        | ID             | Status   |
+-----------------------------+----------------+----------+
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | failed   |
| mnist-multinode             | experiment-id  | created  |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist-multinode             | experiment-id  | canceled |
| mnist                       | experiment-id  | stopped  |
+-----------------------------+----------------+----------+
Parameters

project_id (str|list|None) –

Returns

experiments

Return type

list[models.SingleNodeExperiment|models.MultiNodeExperiment]
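The same listing through the SDK; assuming, as the table above suggests, that each returned experiment model exposes a name attribute:

```python
def experiment_names(experiment_client, project_id=None):
    """Return names of experiments, optionally filtered by project ID."""
    experiments = experiment_client.list(project_id=project_id)
    return [experiment.name for experiment in experiments]
```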

get(experiment_id)

Get experiment instance

Parameters

experiment_id (str) – Experiment ID

Return type

models.SingleNodeExperiment|models.MultiNodeExperiment

logs(experiment_id, line=0, limit=10000)

Show a list of the latest logs from the specified experiment.

EXAMPLE:

gradient experiments logs --experimentId <your-experiment-id>
Parameters
  • experiment_id (str) – Experiment ID

  • line (int) – line number from which to start displaying logs

  • limit (int) – maximum number of lines displayed; defaults to 10000

Returns

list of LogRows

Return type

list[models.LogRow]
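A small SDK sketch that collects log text from the returned rows; the message attribute on LogRow is an assumption:

```python
def log_messages(experiment_client, experiment_id, limit=100):
    """Fetch up to `limit` log rows and return their message text."""
    rows = experiment_client.logs(experiment_id, line=0, limit=limit)
    return [row.message for row in rows]  # `message` attribute is assumed
```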

yield_logs(experiment_id, line=0, limit=10000)

Get log generator. Polls the API for new logs

Parameters
  • experiment_id (str) –

  • line (int) – line number from which to start displaying logs

  • limit (int) – maximum number of lines displayed; defaults to 10000

Returns

generator yielding LogRow instances

Return type

Iterator[models.LogRow]
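Since yield_logs() returns a polling generator, it suits live tailing; a sketch that stops after a fixed number of rows:

```python
def tail_logs(experiment_client, experiment_id, max_rows=50):
    """Consume up to `max_rows` rows from the polling log generator."""
    rows = []
    for row in experiment_client.yield_logs(experiment_id):
        rows.append(row)
        if len(rows) >= max_rows:
            break
    return rows
```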

gradient.api_sdk.clients.http_client module

class gradient.api_sdk.clients.http_client.API(api_url, headers=None, api_key=None, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

__init__(api_url, headers=None, api_key=None, logger=<gradient.api_sdk.logger.MuteLogger object>)
property api_key
get_path(url)
post(url, json=None, params=None, files=None, data=None)
put(url, json=None, params=None)
get(url, json=None, params=None)
delete(url, json=None, params=None)
class gradient.api_sdk.clients.http_client.GradientResponse(body, code, headers, data)

Bases: object

__init__(body, code, headers, data)

Initialize self. See help(type(self)) for accurate signature.

property ok
classmethod interpret_response(response)
Return type

GradientResponse

gradient.api_sdk.clients.hyperparameter_client module

class gradient.api_sdk.clients.hyperparameter_client.HyperparameterJobsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

create(name, project_id, tuning_command, worker_container, worker_machine_type, worker_command, worker_count, is_preemptible=True, ports=None, workspace_url=None, artifact_directory=None, cluster_id=None, experiment_env=None, trigger_event_id=None, model_type=None, model_path=None, dockerfile_path=None, registry_username=None, registry_password=None, container_user=None, working_directory=None, use_dockerfile=False)

Create hyperparameter tuning job

Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • tuning_command (str) – Tuning command [required]

  • worker_container (str) – Worker container [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (int) – Worker count [required]

  • is_preemptible (bool) – Flag: is preemptible

  • ports (list[str]) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables (in JSON)

  • trigger_event_id (str) – GradientCI trigger event id

  • model_type (str) – Model type

  • model_path (str) – Model path

  • dockerfile_path (str) – Path to dockerfile in project

  • registry_username (str) – Hyperparameter server registry username

  • registry_password (str) – Hyperparameter server registry password

  • container_user (str) – Hyperparameter server container user

  • working_directory (str) – Working directory for the experiment

  • use_dockerfile (bool) – Flag: use dockerfile

Returns

ID of a new job

Return type

str

run(name, project_id, tuning_command, worker_container, worker_machine_type, worker_command, worker_count, is_preemptible=True, ports=None, workspace_url=None, artifact_directory=None, cluster_id=None, experiment_env=None, trigger_event_id=None, model_type=None, model_path=None, dockerfile_path=None, registry_username=None, registry_password=None, container_user=None, working_directory=None, use_dockerfile=False)

Create and start hyperparameter tuning job

EXAMPLE:

gradient hyperparameters run
--name HyperoptKerasExperimentCLI1
--projectId <your-project-id>
--tuningCommand 'make run_hyperopt'
--workerContainer tensorflow/tensorflow:1.13.1-gpu-py3
--workerMachineType K80
--workerCommand 'make run_hyperopt_worker'
--workerCount 2
--workspaceUrl git+https://github.com/Paperspace/hyperopt-keras-sample
Parameters
  • name (str) – Name of new experiment [required]

  • project_id (str) – Project ID [required]

  • tuning_command (str) – Tuning command [required]

  • worker_container (str) – Worker container [required]

  • worker_machine_type (str) – Worker machine type [required]

  • worker_command (str) – Worker command [required]

  • worker_count (str) – Worker count [required]

  • is_preemptible (bool) – Flag: is preemptible

  • ports (list[str]) – Port to use in new experiment

  • workspace_url (str) – Project git repository url

  • artifact_directory (str) – Artifacts directory

  • cluster_id (str) – Cluster ID

  • experiment_env (dict) – Environment variables (in JSON)

  • trigger_event_id (str) – GradientCI trigger event id

  • model_type (str) – Model type

  • model_path (str) – Model path

  • dockerfile_path (str) – Path to dockerfile

  • registry_username (str) – container registry username

  • registry_password (str) – container registry password

  • container_user (str) – container user

  • working_directory (str) – Working directory for the experiment

  • use_dockerfile (bool) – Flag: use dockerfile

Returns

ID of a new job

Return type

str
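The CLI example above maps onto run() like this; all values are placeholders taken from that example:

```python
# Assumes a HyperparameterJobsClient created like the other clients.

def run_hyperopt_job(hyperparameter_client):
    """Create and start a hyperparameter tuning job."""
    return hyperparameter_client.run(
        name='HyperoptKerasExperimentCLI1',
        project_id='your-project-id',
        tuning_command='make run_hyperopt',
        worker_container='tensorflow/tensorflow:1.13.1-gpu-py3',
        worker_machine_type='K80',
        worker_command='make run_hyperopt_worker',
        worker_count=2,
        workspace_url='git+https://github.com/Paperspace/hyperopt-keras-sample',
    )
```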

get(id_)

Get Hyperparameter tuning job’s instance

Parameters

id (str) – Hyperparameter job id

Returns

instance of Hyperparameter

Return type

models.Hyperparameter

start(id_)

Start existing hyperparameter tuning job

Parameters

id (str) – Hyperparameter job id

Raises

exceptions.GradientSdkError

list()

Get a list of hyperparameter tuning jobs

EXAMPLE:

gradient hyperparameters list

EXAMPLE RETURN:

+--------------------------------+----------------+------------+
| Name                           | ID             | Project ID |
+--------------------------------+----------------+------------+
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
| name-of-your-experiment-job    | job-id         | project-id |
+--------------------------------+----------------+------------+
Return type

list[models.Hyperparameter]

gradient.api_sdk.clients.job_client module

Jobs related client handler logic.

Remember that in code snippets all highlighted lines are required; other lines are optional.

class gradient.api_sdk.clients.job_client.JobsClient(*args, **kwargs)

Bases: gradient.api_sdk.clients.base_client.BaseClient

Client to handle job related actions.

How to create instance of job client:

from gradient import JobsClient

job_client = JobsClient(
    api_key='your_api_key_here'
)
HOST_URL = 'https://api.paperspace.io'
__init__(*args, **kwargs)

Base class. All client classes inherit from it.

An API key can be created at paperspace.com after you sign in to your account. After obtaining it, you can set it in the CLI using the command:

gradient apiKey XXXXXXXXXXXXXXXXXXX

or you can provide your API key in any command, for example:

gradient experiments run ... --apiKey XXXXXXXXXXXXXXXXXXX
Parameters
  • api_key (str) – your API key

  • logger (sdk_logger.Logger) –

create(machine_type, container, project_id, data=None, name=None, command=None, ports=None, is_public=None, workspace=None, workspace_archive=None, workspace_url=None, working_directory=None, ignore_files=None, experiment_id=None, job_env=None, use_dockerfile=None, is_preemptible=None, project=None, started_by_user_id=None, rel_dockerfile_path=None, registry_username=None, registry_password=None, cluster=None, cluster_id=None, node_attrs=None, workspace_file_name=None)

Method to create and start a job in Paperspace Gradient.

Example create job:

job = job_client.create(
    machine_type='K80',
    container='tensorflow/tensorflow:1.13.1-gpu-py3',
    project_id='Som3ProjecTiD',
    data=data,
    name='Example job',
    command='pip install -r requirements.txt && python mnist.py',
    ports='5000:5000',
    workspace_url='git+https://github.com/Paperspace/mnist-sample.git',
    job_env={
        'CUSTOM_ENV': 'Some value that will be set as system environment',
    }
)
Parameters
  • machine_type (str) –

    Type of machine on which the job should run. This field is required.

    We recommend choosing one of these:

    K80
    P100
    TPU
    GV100
    GV100x8
    G1
    G6
    G12
    

  • container (str) –

    name of the Docker container image that should be used to run the job. This field is required.

    Example value: tensorflow/tensorflow:1.13.1-gpu-py3

  • project_id (str) – Identify to which project job should be connected. This field is required.

  • data (None|MultipartEncoderMonitor) – None if there is no data to upload, or multipart-encoded data with the files to upload.

  • name (str) – name for the job. If not provided, it will be autogenerated.

  • command (str) – custom command to run instead of the default command from the Docker image

  • ports (str) –

    comma-separated string of mapped ports.

    Example value: 5000:5000,8080:8080

  • is_public (bool) – flag specifying whether the job should be publicly available. Defaults to None.

  • workspace (str) – this field is used with the CLI to upload a folder as your workspace. Provide the path you wish to upload. (Soon this will also support a path to a workspace archive or a git repository URL.)

  • workspace_archive (str) – Path to a workspace archive. (Being deprecated in an upcoming version.)

  • workspace_url (str) – URL to a repository with code to run inside the job. (Being deprecated in an upcoming version.)

  • working_directory (str) – location of the code to run. Defaults to /paperspace

  • ignore_files (str) – used with the CLI to upload a workspace from your computer while excluding specified files. Provide a comma-separated string of file names to ignore during the workspace upload.

  • experiment_id (str) – ID of the experiment to which the job should be connected. If not provided, a new experiment is created for this job.

  • job_env (dict) – key-value collection of environment variables used in the code

  • use_dockerfile (bool) – determines whether to build from a Dockerfile (default false). Do not include a --container argument when using this flag.

  • is_preemptible (bool) – flag indicating whether to use a preemptible (spot) instance. Defaults to False

  • project (str) – name of the project that the job is linked to.

  • started_by_user_id (str) – ID of the user that started the job. By default, it is taken from the access token or API key.

  • rel_dockerfile_path (str) – relative location of your Dockerfile. Defaults to ./Dockerfile

  • registry_username (str) – username for a custom Docker registry

  • registry_password (str) – password for a custom Docker registry

  • cluster (str) – name of the cluster that the job should run on.

  • cluster_id (str) – ID of the cluster that the job should run on. If you use one of the recommended machine types, a cluster is chosen automatically, so you do not need to provide it.

  • node_attrs (dict) –

  • workspace_file_name (str) –

Returns

Job handle

Return type

str

delete(job_id)

Method to remove job.

job_client.delete(
    job_id='Your_job_id_here'
)
Parameters

job_id (str) – ID of the job that you want to remove

Raises

exceptions.GradientSdkError

stop(job_id)

Method to stop a running job

job_client.stop(
    job_id='Your_job_id_here'
)
Parameters

job_id – ID of the job that you want to stop

Raises

exceptions.GradientSdkError

list(project_id=None, project=None, experiment_id=None)

Method to list jobs.

To retrieve all user jobs:

jobs = job_client.list()

To list jobs from a project:

job = job_client.list(
    project_id="Your_project_id_here",
)
Parameters
  • project_id (str) – ID of the project whose jobs you want to list

  • project (str) – name of the project whose jobs you want to list

  • experiment_id (str) – ID of the experiment whose jobs you want to list

Returns

list of job models

Return type

list

logs(job_id, line=0, limit=10000)

Method to retrieve job logs.

job = job_client.logs(
    job_id='Your_job_id_here',
    line=100,
    limit=100
)
Parameters
  • job_id (str) – ID of the job whose logs you want to retrieve

  • line (int) – line number from which to start retrieving logs. Defaults to 0

  • limit (int) – maximum number of log lines to retrieve. Defaults to 10000

Returns

list of formatted logs lines

Return type

list

artifacts_delete(job_id, files=None)

Method to delete job artifact.

job_client.artifacts_delete(
    job_id='Your_job_id_here',
    files=files,
)
Parameters
  • job_id (str) – ID of the job whose artifacts you want to delete

  • files (str) – to remove only some files from the artifacts, pass a comma-separated string of their names

Raises

exceptions.GradientSdkError

artifacts_get(job_id)

Method to retrieve federated access information for job artifacts.

artifacts = job_client.artifacts_get(
    job_id='your_job_id_here',
)
Parameters

job_id – ID of the job whose artifact location information you want to retrieve

Returns

Information about the artifact location

Return type

dict

artifacts_list(job_id, files=None, size=False, links=True)

Method to retrieve all artifacts files.

artifacts = job_client.artifacts_list(
    job_id='your_job_id_here',
    files='your_files,here',
    size=False,
    links=True
)
Parameters
  • job_id (str) – ID of the job whose artifacts you want to list.

  • files (str) – limit the result to the file names provided. The wildcard * is supported.

  • size (bool) – flag to show file size. Defaults to False.

  • links (bool) – flag to show file URL. Defaults to True.

Returns

list of files from the job's artifacts, with descriptions if specified.

Return type

list

gradient.api_sdk.clients.model_client module

class gradient.api_sdk.clients.model_client.ModelsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

HOST_URL = 'https://api.paperspace.io'
list(experiment_id=None, project_id=None)

Get list of models

Parameters
  • experiment_id (str) – Experiment ID

  • project_id (str) – Project ID

Return type

list[models.Model]
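A minimal SDK sketch, mirroring the client construction used elsewhere in this module:

```python
# Assumes: from gradient.api_sdk.clients import ModelsClient
#          model_client = ModelsClient(api_key='your_api_key_here')

def models_for_experiment(model_client, experiment_id):
    """List the models produced by a given experiment."""
    return model_client.list(experiment_id=experiment_id)
```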

gradient.api_sdk.clients.project_client module

class gradient.api_sdk.clients.project_client.ProjectsClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: gradient.api_sdk.clients.base_client.BaseClient

HOST_URL = 'https://api.paperspace.io'
create(name, repository_name=None, repository_url=None)

Create new project

EXAMPLE:

gradient projects create --name new-project

EXAMPLE RETURN:

Project created with ID: <your-project-id>

In the SDK:

from gradient.api_sdk.clients import ProjectsClient

api_key = 'your-api-key'
projects_client = ProjectsClient(api_key)

new_project = projects_client.create('your-project-name')

print(new_project)
Parameters
  • name (str) – Name of new project [required]

  • repository_name (str) – Name of the repository

  • repository_url (str) – URL to the repository

Returns

project ID

Return type

str

list()

Get list of your projects

EXAMPLE:

gradient projects list

EXAMPLE RETURN:

+-----------+------------------+------------+----------------------------+
| ID        | Name             | Repository | Created                    |
+-----------+------------------+------------+----------------------------+
| project-id| <name-of-project>| None       | 2019-06-28 10:38:57.874000 |
| project-id| <name-of-project>| None       | 2019-07-17 13:17:34.493000 |
| project-id| <name-of-project>| None       | 2019-07-17 13:21:12.770000 |
| project-id| <name-of-project>| None       | 2019-07-29 09:26:49.105000 |
+-----------+------------------+------------+----------------------------+

In the SDK:

from gradient.api_sdk.clients import ProjectsClient

api_key = 'your-api-key'
projects_client = ProjectsClient(api_key)

projects_list = projects_client.list()

for project in projects_list:
    print(project)
Returns

list of projects

Return type

list[models.Project]

gradient.api_sdk.clients.sdk_client module

class gradient.api_sdk.clients.sdk_client.SdkClient(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)

Bases: object

__init__(api_key, logger=<gradient.api_sdk.logger.MuteLogger object>)
Parameters
  • api_key (str) – API key

  • logger (sdk_logger.Logger) –

Module contents