Python
- 1: Analytics and Query API
- 1.1: api
- 1.2: artifacts
- 1.3: files
- 1.4: history
- 1.5: jobs
- 1.6: projects
- 1.7: query_generator
- 1.8: reports
- 1.9: runs
- 1.10: sweeps
- 1.11: teams
- 1.12: users
- 2: Data Types
- 2.1: Audio
- 2.2: box3d
- 2.3: Histogram
- 2.4: Html
- 2.5: Image
- 2.6: Molecule
- 2.7: Object3D
- 2.8: Plotly
- 2.9: Table
- 2.10: Video
- 3: Launch Library
- 3.1: create_and_run_agent
- 3.2: launch
- 3.3: launch_add
- 3.4: LaunchAgent
- 3.5: load_wandb_config
- 3.6: manage_config_file
- 3.7: manage_wandb_config
- 4: SDK
- 4.1: agent
- 4.2: Artifact
- 4.3: controller
- 4.4: define_metric
- 4.5: Error
- 4.6: finish
- 4.7: init
- 4.8: link_model
- 4.9: log
- 4.10: log_artifact
- 4.11: log_model
- 4.12: login
- 4.13: plot
- 4.14: plot_table
- 4.15: restore
- 4.16: save
- 4.17: setup
- 4.18: sweep
- 4.19: termwarn
- 4.20: unwatch
- 4.21: use_artifact
- 4.22: use_model
- 4.23: watch
1 - Analytics and Query API
Query and analyze data logged to W&B.
1.1 - api
module wandb.apis.public
Use the Public API to export or update data that you have saved to W&B.
Before using this API, you’ll want to log data from your script — check the Quickstart for more details.
You might use the Public API to
- update metadata or metrics for an experiment after it has been completed,
- pull down your results as a dataframe for post-hoc analysis in a Jupyter notebook, or
- check your saved model artifacts for those tagged as ready-to-deploy.
For more on using the Public API, check out our guide.
class RetryingClient
method RetryingClient.__init__
__init__(client: wandb_gql.client.Client)
property RetryingClient.app_url
property RetryingClient.server_info
method RetryingClient.execute
execute(*args, **kwargs)
method RetryingClient.version_supported
version_supported(min_version: str) → bool
class Api
Used for querying the wandb server.
Examples:
Most common way to initialize wandb.Api()
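A minimal sketch of the common pattern (the entity, project, and server URL are placeholders):
import wandb
# Initialize with defaults from your environment and settings
api = wandb.Api()
# Or override the base URL and default entity/project, e.g. for a self-hosted server
api = wandb.Api(
    overrides={
        "base_url": "https://api.wandb.ai",  # placeholder; use your server URL
        "entity": "my_entity",
        "project": "my_project",
    }
)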
Args:
- overrides: (dict) You can set base_url if you are using a wandb server other than https://api.wandb.ai. You can also set defaults for entity, project, and run.
method Api.__init__
__init__(
overrides: Optional[Dict[str, Any]] = None,
timeout: Optional[int] = None,
api_key: Optional[str] = None
) → None
property Api.api_key
property Api.client
property Api.default_entity
property Api.user_agent
property Api.viewer
method Api.artifact
artifact(name: str, type: Optional[str] = None)
Return a single artifact by parsing path in the form project/name or entity/project/name.
Args:
- name: (str) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting's entity is used. Valid names can be in the following forms: name:version or name:alias.
- type: (str, optional) The type of artifact to fetch.
Returns:
An Artifact object.
Raises:
- ValueError: If the artifact name is not specified.
- ValueError: If the artifact type is specified but does not match the type of the fetched artifact.
Note:
This method is intended for external use only. Do not call api.artifact() within the wandb repository code.
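For example, a minimal sketch of fetching one artifact version (the path and type are placeholders):
import wandb
api = wandb.Api()
# name:alias form; name:version (e.g. my_dataset:v3) also works
artifact = api.artifact("my_entity/my_project/my_dataset:latest", type="dataset")
print(artifact.name, artifact.version)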
method Api.artifact_collection
artifact_collection(type_name: str, name: str) → public.ArtifactCollection
Return a single artifact collection by type and parsing path in the form entity/project/name.
Args:
- type_name: (str) The type of artifact collection to fetch.
- name: (str) An artifact collection name. May be prefixed with entity/project.
Returns:
An ArtifactCollection object.
method Api.artifact_collection_exists
artifact_collection_exists(name: str, type: str) → bool
Return whether an artifact collection exists within a specified project and entity.
Args:
- name: (str) An artifact collection name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to "uncategorized".
- type: (str) The type of artifact collection.
Returns: True if the artifact collection exists, False otherwise.
method Api.artifact_collections
artifact_collections(
project_name: str,
type_name: str,
per_page: int = 50
) → public.ArtifactCollections
Return a collection of matching artifact collections.
Args:
- project_name: (str) The name of the project to filter on.
- type_name: (str) The name of the artifact type to filter on.
- per_page: (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns:
An iterable ArtifactCollections object.
method Api.artifact_exists
artifact_exists(name: str, type: Optional[str] = None) → bool
Return whether an artifact version exists within a specified project and entity.
Args:
- name: (str) An artifact name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to "uncategorized". Valid names can be in the following forms: name:version or name:alias.
- type: (str, optional) The type of artifact.
Returns: True if the artifact version exists, False otherwise.
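For example, a minimal sketch of checking both name forms (all names are placeholders):
import wandb
api = wandb.Api()
if api.artifact_exists("my_entity/my_project/my_dataset:v0"):
    print("version v0 exists")
if api.artifact_exists("my_dataset:latest", type="dataset"):
    print("latest alias exists")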
method Api.artifact_type
artifact_type(
type_name: str,
project: Optional[str] = None
) → public.ArtifactType
Return the matching ArtifactType.
Args:
- type_name: (str) The name of the artifact type to retrieve.
- project: (str, optional) If given, a project name or path to filter on.
Returns:
An ArtifactType object.
method Api.artifact_types
artifact_types(project: Optional[str] = None) → public.ArtifactTypes
Return a collection of matching artifact types.
Args:
- project: (str, optional) If given, a project name or path to filter on.
Returns:
An iterable ArtifactTypes object.
method Api.artifact_versions
artifact_versions(type_name, name, per_page=50)
Deprecated, use artifacts(type_name, name) instead.
method Api.artifacts
artifacts(
type_name: str,
name: str,
per_page: int = 50,
tags: Optional[List[str]] = None
) → public.Artifacts
Return an Artifacts collection from the given parameters.
Args:
- type_name: (str) The type of artifacts to fetch.
- name: (str) An artifact collection name. May be prefixed with entity/project.
- per_page: (int) Sets the page size for query pagination. Usually there is no reason to change this.
- tags: (list[str], optional) Only return artifacts with all of these tags.
Returns:
An iterable Artifacts object.
method Api.create_project
create_project(name: str, entity: str) → None
Create a new project.
Args:
- name: (str) The name of the new project.
- entity: (str) The entity of the new project.
method Api.create_run
create_run(
run_id: Optional[str] = None,
project: Optional[str] = None,
entity: Optional[str] = None
) → public.Run
Create a new run.
Args:
- run_id: (str, optional) The ID to assign to the run, if given. The run ID is automatically generated by default, so in general, you do not need to specify this and should only do so at your own risk.
- project: (str, optional) If given, the project of the new run.
- entity: (str, optional) If given, the entity of the new run.
Returns:
The newly created Run.
method Api.create_run_queue
create_run_queue(
name: str,
type: 'public.RunQueueResourceType',
entity: Optional[str] = None,
prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None,
config: Optional[dict] = None,
template_variables: Optional[dict] = None
) → public.RunQueue
Create a new run queue (launch).
Args:
- name: (str) Name of the queue to create.
- type: (str) Type of resource to be used for the queue. One of "local-container", "local-process", "kubernetes", "sagemaker", or "gcp-vertex".
- entity: (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
- prioritization_mode: (str) Optional version of prioritization to use. Either "V0" or None.
- config: (dict) Optional default resource configuration to be used for the queue. Use handlebars (e.g. {{var}}) to specify template variables.
- template_variables: (dict) A dictionary of template variable schemas to be used with the config. Expected format: {"var-name": {"schema": {"type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"]}}}
Returns:
The newly created RunQueue.
Raises:
- ValueError: If any of the parameters are invalid.
- wandb.Error: On wandb API errors.
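A minimal sketch of creating a queue with one template variable, following the argument formats described above (queue name, entity, and variable names are placeholders):
import wandb
api = wandb.Api()
queue = api.create_run_queue(
    name="my-k8s-queue",
    type="kubernetes",  # one of the resource strings listed above
    entity="my_team",
    config={"namespace": "{{ns}}"},  # handlebars reference to a template variable
    template_variables={"ns": {"schema": {"type": "string", "default": "default"}}},
)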
method Api.create_team
create_team(team, admin_username=None)
Create a new team.
Args:
- team: (str) The name of the team.
- admin_username: (str) Optional username of the admin user of the team. Defaults to the current user.
Returns:
A Team object.
method Api.create_user
create_user(email, admin=False)
Create a new user.
Args:
- email: (str) The email address of the user.
- admin: (bool) Whether this user should be a global instance admin.
Returns:
A User object.
method Api.flush
flush()
Flush the local cache.
The API object keeps a local cache of runs, so if the state of a run may change while your script executes, you must clear the local cache with api.flush() to get the latest values associated with the run.
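For example, a minimal sketch of re-reading a run after its state may have changed (the run path is a placeholder):
import wandb
api = wandb.Api()
run = api.run("my_entity/my_project/run_id")
print(run.state)
# ... the run finishes or is updated elsewhere ...
api.flush()  # clear the local cache
run = api.run("my_entity/my_project/run_id")  # re-fetch to see the latest values
print(run.state)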
method Api.from_path
from_path(path)
Return a run, sweep, project or report from a path.
Examples:
project = api.from_path("my_project")
team_project = api.from_path("my_team/my_project")
run = api.from_path("my_team/my_project/runs/id")
sweep = api.from_path("my_team/my_project/sweeps/id")
report = api.from_path("my_team/my_project/reports/My-Report-Vm11dsdf")
Args:
- path: (str) The path to the project, run, sweep, or report.
Returns:
A Project, Run, Sweep, or BetaReport instance.
Raises:
wandb.Error if path is invalid or the object doesn't exist.
method Api.job
job(name: Optional[str], path: Optional[str] = None) → public.Job
Return a Job from the given parameters.
Args:
- name: (str) The job name.
- path: (str, optional) If given, the root path in which to download the job artifact.
Returns:
A Job object.
method Api.list_jobs
list_jobs(entity: str, project: str) → List[Dict[str, Any]]
Return a list of jobs, if any, for the given entity and project.
Args:
- entity: (str) The entity for the listed job(s).
- project: (str) The project for the listed job(s).
Returns: A list of matching jobs.
method Api.project
project(name: str, entity: Optional[str] = None) → public.Project
Return the Project with the given name (and entity, if given).
Args:
- name: (str) The project name.
- entity: (str) Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.
Returns:
A Project object.
method Api.projects
projects(entity: Optional[str] = None, per_page: int = 200) → public.Projects
Get projects for a given entity.
Args:
- entity: (str) Name of the entity requested. If None, will fall back to the default entity passed to Api. If no default entity, will raise a ValueError.
- per_page: (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns:
A Projects object, which is an iterable collection of Project objects.
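For example, a minimal sketch of listing an entity's projects (the entity name is a placeholder):
import wandb
api = wandb.Api()
for project in api.projects(entity="my_entity"):
    print(project.name, project.url)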
method Api.queued_run
queued_run(
entity,
project,
queue_name,
run_queue_item_id,
project_queue=None,
priority=None
)
Return a single queued run based on the path.
Parses paths of the form entity/project/queue_id/run_queue_item_id.
method Api.registries
registries(
organization: Optional[str] = None,
filter: Optional[Dict[str, Any]] = None
) → Registries
Returns a Registry iterator.
Use the iterator to search and filter registries, collections, or artifact versions across your organization’s registry.
Examples:
Find all registries with names that contain "model":
import wandb
api = wandb.Api()  # specify an org if your entity belongs to multiple orgs
api.registries(filter={"name": {"$regex": "model"}})
Find all collections in the registries with the name "my_collection" and the tag "my_tag":
api.registries().collections(filter={"name": "my_collection", "tag": "my_tag"})
Find all artifact versions in the registries with a collection name that contains "my_collection" and a version that has the alias "best":
api.registries().collections(
    filter={"name": {"$regex": "my_collection"}}
).versions(filter={"alias": "best"})
Find all artifact versions in the registries that contain "model" and have the tag "prod" or alias "best":
api.registries(filter={"name": {"$regex": "model"}}).versions(
    filter={"$or": [{"tag": "prod"}, {"alias": "best"}]}
)
Args:
- organization: (str, optional) The organization of the registry to fetch. If not specified, use the organization specified in the user's settings.
- filter: (dict, optional) MongoDB-style filter to apply to each object in the registry iterator. Fields available to filter for registries are name, description, created_at, and updated_at. Fields available to filter for collections are name, tag, description, created_at, and updated_at. Fields available to filter for versions are tag, alias, created_at, updated_at, and metadata.
Returns: A registry iterator.
method Api.reports
reports(
path: str = '',
name: Optional[str] = None,
per_page: int = 50
) → public.Reports
Get reports for a given project path.
WARNING: This api is in beta and will likely change in a future release
Args:
- path: (str) Path to the project the report resides in; should be in the form "entity/project".
- name: (str, optional) Optional name of the report requested.
- per_page: (int) Sets the page size for query pagination. Usually there is no reason to change this.
Returns:
A Reports object, which is an iterable collection of BetaReport objects.
method Api.run
run(path='')
Return a single run by parsing path in the form entity/project/run_id.
Args:
- path: (str) Path to run in the form entity/project/run_id. If api.entity is set, this can be in the form project/run_id, and if api.project is set, this can just be the run_id.
Returns:
A Run object.
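For example, a minimal sketch of fetching one run (the path is a placeholder):
import wandb
api = wandb.Api()
run = api.run("my_entity/my_project/a1b2cdef")
print(run.name, run.state)
print(run.summary)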
method Api.run_queue
run_queue(entity, name)
Return the named RunQueue for entity.
To create a new RunQueue, use wandb.Api().create_run_queue(...).
method Api.runs
runs(
path: Optional[str] = None,
filters: Optional[Dict[str, Any]] = None,
order: str = '+created_at',
per_page: int = 50,
include_sweeps: bool = True
)
Return a set of runs from a project that match the filters provided.
Fields you can filter by include:
- createdAt: The timestamp when the run was created. (in ISO 8601 format, e.g. "2023-01-01T12:00:00Z")
- displayName: The human-readable display name of the run. (e.g. "eager-fox-1")
- duration: The total runtime of the run in seconds.
- group: The group name used to organize related runs together.
- host: The hostname where the run was executed.
- jobType: The type of job or purpose of the run.
- name: The unique identifier of the run. (e.g. "a1b2cdef")
- state: The current state of the run.
- tags: The tags associated with the run.
- username: The username of the user who initiated the run.
Additionally, you can filter by items in the run config or summary metrics, such as config.experiment_name, summary_metrics.loss, etc.
For more complex filtering, you can use MongoDB query operators. For details, see: https://docs.mongodb.com/manual/reference/operator/query The following operations are supported:
$and
$or
$nor
$eq
$ne
$gt
$gte
$lt
$lte
$in
$nin
$exists
$regex
Examples:
Find runs in my_project where config.experiment_name has been set to "foo":
api.runs(
    path="my_entity/my_project",
    filters={"config.experiment_name": "foo"},
)
Find runs in my_project where config.experiment_name has been set to "foo" or "bar":
api.runs(
    path="my_entity/my_project",
    filters={
        "$or": [
            {"config.experiment_name": "foo"},
            {"config.experiment_name": "bar"},
        ]
    },
)
Find runs in my_project where config.experiment_name matches a regex (anchors are not supported):
api.runs(
    path="my_entity/my_project",
    filters={"config.experiment_name": {"$regex": "b.*"}},
)
Find runs in my_project where the run name matches a regex (anchors are not supported):
api.runs(
    path="my_entity/my_project",
    filters={"display_name": {"$regex": "^foo.*"}},
)
Find runs in my_project where config.experiment contains a nested field "category" with value "testing":
api.runs(
    path="my_entity/my_project",
    filters={"config.experiment.category": "testing"},
)
Find runs in my_project with a loss value of 0.5 nested in a dictionary under model1 in the summary metrics:
api.runs(
    path="my_entity/my_project",
    filters={"summary_metrics.model1.loss": 0.5},
)
Find runs in my_project sorted by ascending loss:
api.runs(path="my_entity/my_project", order="+summary_metrics.loss")
Args:
- path: (str) Path to project, should be in the form "entity/project".
- filters: (dict) Queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example: {"config.experiment_name": "foo"} would find runs with a config entry of experiment name set to "foo".
- order: (str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. If you prepend order with a +, order is ascending (default). If you prepend order with a -, order is descending. The default order is run.created_at from oldest to newest.
- per_page: (int) Sets the page size for query pagination.
- include_sweeps: (bool) Whether to include the sweep runs in the results.
Returns:
A Runs object, which is an iterable collection of Run objects.
method Api.sweep
sweep(path='')
Return a sweep by parsing path in the form entity/project/sweep_id.
Args:
- path: (str, optional) Path to sweep in the form entity/project/sweep_id. If api.entity is set, this can be in the form project/sweep_id, and if api.project is set, this can just be the sweep_id.
Returns:
A Sweep object.
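For example, a minimal sketch of inspecting a sweep and its runs (the path is a placeholder):
import wandb
api = wandb.Api()
sweep = api.sweep("my_entity/my_project/sweep_id")
for run in sweep.runs:
    print(run.name, run.state)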
method Api.sync_tensorboard
sync_tensorboard(root_dir, run_id=None, project=None, entity=None)
Sync a local directory containing tfevent files to wandb.
method Api.team
team(team: str) → public.Team
Return the matching Team with the given name.
Args:
- team: (str) The name of the team.
Returns:
A Team object.
method Api.upsert_run_queue
upsert_run_queue(
name: str,
resource_config: dict,
resource_type: 'public.RunQueueResourceType',
entity: Optional[str] = None,
template_variables: Optional[dict] = None,
external_links: Optional[dict] = None,
prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None
)
Upsert a run queue (launch).
Args:
- name: (str) Name of the queue to create.
- entity: (str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
- resource_config: (dict) Optional default resource configuration to be used for the queue. Use handlebars (e.g. {{var}}) to specify template variables.
- resource_type: (str) Type of resource to be used for the queue. One of "local-container", "local-process", "kubernetes", "sagemaker", or "gcp-vertex".
- template_variables: (dict) A dictionary of template variable schemas to be used with the config. Expected format: {"var-name": {"schema": {"type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"]}}}
- external_links: (dict) Optional dictionary of external links to be used with the queue. Expected format: {"name": "url"}
- prioritization_mode: (str) Optional version of prioritization to use. Either "V0" or None.
Returns:
The upserted RunQueue.
Raises:
- ValueError: If any of the parameters are invalid.
- wandb.Error: On wandb API errors.
method Api.user
user(username_or_email: str) → Optional[ForwardRef('public.User')]
Return a user from a username or email address.
Note: This function only works for Local Admins. If you are trying to get your own user object, please use api.viewer.
Args:
- username_or_email: (str) The username or email address of the user.
Returns:
A User object, or None if a user couldn't be found.
method Api.users
users(username_or_email: str) → List[ForwardRef('public.User')]
Return all users from a partial username or email address query.
Note: This function only works for Local Admins. If you are trying to get your own user object, please use api.viewer.
Args:
- username_or_email: (str) The prefix or suffix of the user you want to find.
Returns:
An array of User objects.
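For example, a minimal sketch of looking up users on a server where you are an admin (the email and prefix are placeholders):
import wandb
api = wandb.Api()
user = api.user("someone@example.com")
if user is not None:
    print(user)
matches = api.users("someone")
print(len(matches))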
1.2 - artifacts
module wandb.apis.public
W&B Public API for Artifact Management.
This module provides classes for interacting with W&B artifacts and their collections. Classes include:
ArtifactTypes: A paginated collection of artifact types in a project
- List and query artifact types
- Access type metadata
- Create new types
ArtifactCollection: A collection of related artifacts
- Manage artifact collections
- Update metadata and descriptions
- Work with tags and aliases
- Change collection types
Artifacts: A paginated collection of artifact versions
- Filter and query artifacts
- Access artifact metadata
- Download artifacts
ArtifactFiles: A paginated collection of files within an artifact
- List and query artifact files
- Access file metadata
- Download individual files
function server_supports_artifact_collections_gql_edges
server_supports_artifact_collections_gql_edges(
client: 'RetryingClient',
warn: bool = False
) → bool
Check if W&B server supports GraphQL edges for artifact collections.
function artifact_collection_edge_name
artifact_collection_edge_name(server_supports_artifact_collections: bool) → str
Return the GraphQL edge name for artifact collections or sequences.
function artifact_collection_plural_edge_name
artifact_collection_plural_edge_name(
server_supports_artifact_collections: bool
) → str
Return the GraphQL edge name for artifact collections or sequences.
class ArtifactTypes
An iterable collection of artifact types associated with a project.
Args:
- client: The client instance to use for querying W&B.
- entity: The entity (user or team) that owns the project.
- project: The name of the project to query for artifact types.
- per_page: The number of artifact types to fetch per page. Default is 50.
method ArtifactTypes.__init__
__init__(
client: wandb_gql.client.Client,
entity: str,
project: str,
per_page: int = 50
)
property ArtifactTypes.cursor
Returns the cursor position for pagination of file results.
property ArtifactTypes.length
Returns None.
property ArtifactTypes.more
Returns True if there are more artifact types to fetch. Returns False if there are no more to fetch.
method ArtifactTypes.convert_objects
convert_objects()
Converts GraphQL edges to ArtifactType objects.
method ArtifactTypes.update_variables
update_variables()
Updates the variables dictionary with the cursor.
class ArtifactType
An artifact object that satisfies query based on the specified type.
Args:
- client: The client instance to use for querying W&B.
- entity: The entity (user or team) that owns the project.
- project: The name of the project to query for artifact types.
- type_name: The name of the artifact type.
- attrs: Optional mapping of attributes to initialize the artifact type. If not provided, the object will load its attributes from W&B upon initialization.
method ArtifactType.__init__
__init__(
client: wandb_gql.client.Client,
entity: str,
project: str,
type_name: str,
attrs: Optional[Mapping[str, Any]] = None
)
property ArtifactType.id
The unique identifier of the artifact type.
property ArtifactType.name
The name of the artifact type.
method ArtifactType.collection
collection(name)
Get a specific artifact collection by name.
Args:
- name (str): The name of the artifact collection to retrieve.
method ArtifactType.collections
collections(per_page=50)
Get all artifact collections associated with this artifact type.
Args:
- per_page (int): The number of artifact collections to fetch per page. Default is 50.
method ArtifactType.load
load()
Load the artifact type metadata.
class ArtifactCollections
An iterable collection of artifact collections associated with a project and artifact type.
Args:
- client: The client instance to use for querying W&B.
- entity: The entity (user or team) that owns the project.
- project: The name of the project to query for artifact collections.
- type_name: The name of the artifact type for which to fetch collections.
- per_page: The number of artifact collections to fetch per page. Default is 50.
method ArtifactCollections.__init__
__init__(
client: wandb_gql.client.Client,
entity: str,
project: str,
type_name: str,
per_page: int = 50
)
property ArtifactCollections.cursor
Returns the cursor position for pagination of file results.
property ArtifactCollections.length
Returns the number of artifact collections.
property ArtifactCollections.more
Returns True if there are more artifact collections to fetch. Returns False if there are no more to fetch.
method ArtifactCollections.convert_objects
convert_objects()
Converts GraphQL edges to ArtifactCollection objects.
method ArtifactCollections.update_variables
update_variables()
Updates the variables dictionary with the cursor.
class ArtifactCollection
An artifact collection that represents a group of related artifacts.
Args:
- client: The client instance to use for querying W&B.
- entity: The entity (user or team) that owns the project.
- project: The name of the project to query for artifact collections.
- name: The name of the artifact collection.
- type: The type of the artifact collection (e.g., "dataset", "model").
- organization: Optional organization name if applicable.
- attrs: Optional mapping of attributes to initialize the artifact collection. If not provided, the object will load its attributes from W&B upon initialization.
method ArtifactCollection.__init__
__init__(
client: wandb_gql.client.Client,
entity: str,
project: str,
name: str,
type: str,
organization: Optional[str] = None,
attrs: Optional[Mapping[str, Any]] = None
)
property ArtifactCollection.aliases
The aliases associated with the artifact collection.
property ArtifactCollection.created_at
The creation timestamp of the artifact collection.
property ArtifactCollection.description
A description of the artifact collection.
property ArtifactCollection.id
The unique identifier of the artifact collection.
property ArtifactCollection.name
The name of the artifact collection.
property ArtifactCollection.tags
The tags associated with the artifact collection.
property ArtifactCollection.type
Returns the type of the artifact collection.
method ArtifactCollection.artifacts
artifacts(per_page=50)
Get all artifact versions associated with this artifact collection.
method ArtifactCollection.change_type
change_type(new_type: str) → None
Deprecated, change the type directly with save instead.
method ArtifactCollection.delete
delete()
Delete the entire artifact collection.
method ArtifactCollection.is_sequence
is_sequence() → bool
Return whether the artifact collection is a sequence.
method ArtifactCollection.load
load()
Load the artifact collection metadata.
method ArtifactCollection.save
save() → None
Persist any changes made to the artifact collection.
class Artifacts
An iterable collection of artifact versions associated with a project.
Optionally pass in filters to narrow down the results based on specific criteria.
Args:
- client: The client instance to use for querying W&B.
- entity: The entity (user or team) that owns the project.
- project: The name of the project to query for artifacts.
- collection_name: The name of the artifact collection to query.
- type: The type of the artifacts to query. Common examples include "dataset" or "model".
- filters: Optional mapping of filters to apply to the query.
- order: Optional string to specify the order of the results.
- per_page: The number of artifact versions to fetch per page. Default is 50.
- tags: Optional string or list of strings to filter artifacts by tags.
method Artifacts.__init__
__init__(
client: wandb_gql.client.Client,
entity: str,
project: str,
collection_name: str,
type: str,
filters: Optional[Mapping[str, Any]] = None,
order: Optional[str] = None,
per_page: int = 50,
tags: Optional[Union[str, List[str]]] = None
)
property Artifacts.cursor
Returns the cursor position for pagination of file results.
property Artifacts.length
Returns the number of artifact versions.
property Artifacts.more
Returns True if there are more artifact versions to fetch.
method Artifacts.convert_objects
convert_objects()
Converts GraphQL edges to Artifact objects.
class RunArtifacts
An iterable collection of artifacts associated with a run.
Args:
- client: The client instance to use for querying W&B.
- run: The run object to query for artifacts.
- mode: The mode of artifacts to fetch, either "logged" (output artifacts) or "used" (input artifacts). Default is "logged".
- per_page: The number of artifacts to fetch per page. Default is 50.
method RunArtifacts.__init__
__init__(
client: wandb_gql.client.Client,
run: 'Run',
mode='logged',
per_page: int = 50
)
property RunArtifacts.cursor
Returns the cursor position for pagination of file results.
property RunArtifacts.length
Returns the number of artifacts associated with the run.
property RunArtifacts.more
Returns True if there are more artifacts to fetch.
method RunArtifacts.convert_objects
convert_objects()
Converts GraphQL edges to Artifact objects.
class ArtifactFiles
An iterable collection of files associated with an artifact version.
Args:
- client: The client instance to use for querying W&B.
- artifact: The artifact object to query for files.
- names: Optional sequence of file names to filter the results by. If None, all files will be returned.
- per_page: The number of files to fetch per page. Default is 50.
method ArtifactFiles.__init__
__init__(
client: wandb_gql.client.Client,
artifact: 'wandb.Artifact',
names: Optional[Sequence[str]] = None,
per_page: int = 50
)
property ArtifactFiles.cursor
Returns the cursor position for pagination of file results.
property ArtifactFiles.length
Returns the number of files in the artifact.
property ArtifactFiles.more
Returns True if there are more files to fetch. Returns False if there are no more files to fetch.
property ArtifactFiles.path
Returns the path of the artifact.
The path is a list containing the entity, project name, and artifact name.
method ArtifactFiles.convert_objects
convert_objects()
Converts GraphQL edges to File objects.
method ArtifactFiles.update_variables
update_variables()
Updates the variables dictionary with the cursor and limit.
1.3 - files
module wandb.apis.public
W&B Public API for File Management.
This module provides classes for interacting with files stored in W&B. Classes include:
Files: A paginated collection of files associated with a run
- Iterate through files with automatic pagination
- Filter files by name
- Access file metadata and properties
- Download multiple files
File: A single file stored in W&B
- Access file metadata (size, mimetype, URLs)
- Download files to local storage
- Delete files from W&B
- Work with S3 URIs for direct access
Example:
from wandb.apis.public import Api
# Initialize API
api = Api()
# Get files from a specific run
run = api.run("entity/project/run_id")
files = run.files()
# Work with files
for file in files:
print(f"File: {file.name}")
print(f"Size: {file.size} bytes")
print(f"Type: {file.mimetype}")
# Download file
if file.size < 1000000: # Less than 1MB
file.download(root="./downloads")
# Get S3 URI for large files
if file.size >= 1000000:
print(f"S3 URI: {file.path_uri}")
Note:
This module is part of the W&B Public API and provides methods to access, download, and manage files stored in W&B. Files are typically associated with specific runs and can include model weights, datasets, visualizations, and other artifacts.
class Files
An iterable collection of File objects.
Access and manage files uploaded to W&B during a run. Handles pagination automatically when iterating through large collections of files.
Args:
- client: The API client instance to use.
- run: The run object that contains the files.
- names (list, optional): A list of file names to filter the files.
- per_page (int, optional): The number of files to fetch per page.
- upload (bool, optional): If True, fetch the upload URL for each file.
Example:
from wandb.apis.public.files import Files
from wandb.apis.public.api import Api
# Initialize the API client
api = Api()
# Example run object
run = api.run("entity/project/run-id")
# Create a Files object to iterate over files in the run
files = Files(api.client, run)
# Iterate over files
for file in files:
print(file.name)
print(file.url)
print(file.size)
# Download the file
file.download(root="download_directory", replace=True)
method Files.__init__
__init__(client, run, names=None, per_page=50, upload=False)
property Files.cursor
Returns the cursor position for pagination of file results.
property Files.length
The number of files saved to the specified run.
property Files.more
Returns True if there are more files to fetch. Returns False if there are no more files to fetch.
method Files.convert_objects
convert_objects()
Converts GraphQL edges to File objects.
method Files.update_variables
update_variables()
Updates the GraphQL query variables for pagination.
class File
File saved to W&B.
Represents a single file stored in W&B. Includes access to file metadata. Files are associated with a specific run and can include text files, model weights, datasets, visualizations, and other artifacts. You can download the file, delete the file, and access file properties.
Specify one or more attributes in a dictionary to find a specific file logged to a specific run. You can search using the following keys:
- id (str): The ID of the run that contains the file
- name (str): Name of the file
- url (str): path to file
- direct_url (str): path to file in the bucket
- sizeBytes (int): size of file in bytes
- md5 (str): md5 of file
- mimetype (str): mimetype of file
- updated_at (str): timestamp of last update
- path_uri (str): path to file in the bucket, currently only available for files stored in S3
Args:
- client: The API client instance to use.
- attrs (dict): A dictionary of attributes that define the file.
- run: The run object that contains the file.
Example:
from wandb.apis.public.files import File
from wandb.apis.public.api import Api
# Initialize the API client
api = Api()
# Example attributes dictionary
file_attrs = {
"id": "file-id",
"name": "example_file.txt",
"url": "https://example.com/file",
"direct_url": "https://example.com/direct_file",
"sizeBytes": 1024,
"mimetype": "text/plain",
"updated_at": "2025-03-25T21:43:51Z",
"md5": "d41d8cd98f00b204e9800998ecf8427e",
}
# Example run object
run = api.run("entity/project/run-id")
# Create a File object
file = File(api.client, file_attrs, run)
# Access some of the attributes
print("File ID:", file.id)
print("File Name:", file.name)
print("File URL:", file.url)
print("File MIME Type:", file.mimetype)
print("File Updated At:", file.updated_at)
# Access File properties
print("File Size:", file.size)
print("File Path URI:", file.path_uri)
# Download the file
file.download(root="download_directory", replace=True)
# Delete the file
file.delete()
method File.__init__
__init__(client, attrs, run=None)
property File.path_uri
Returns the URI path to the file in the storage bucket.
property File.size
Returns the size of the file in bytes.
method File.delete
delete()
Deletes the file from the W&B server.
method File.download
download(
root: str = '.',
replace: bool = False,
exist_ok: bool = False,
api: Optional[wandb.apis.public.api.Api] = None
) → TextIOWrapper
Downloads a file previously saved by a run from the wandb server.
Args:
- root: Local directory to save the file. Defaults to ".".
- replace: If True, download will overwrite a local file if it exists. Defaults to False.
- exist_ok: If True, will not raise ValueError if the file already exists and will not re-download unless replace=True. Defaults to False.
- api: If specified, the Api instance used to download the file.
Raises:
ValueError if the file already exists, replace=False, and exist_ok=False.
1.4 - history
module wandb.apis.public
W&B Public API for Run History.
This module provides classes for efficiently scanning and sampling run history data. Classes include:
HistoryScan: Iterator for scanning complete run history
- Paginated access to all metrics
- Configure step ranges and page sizes
- Raw access to all logged data
SampledHistoryScan: Iterator for sampling run history data
- Efficient access to downsampled metrics
- Filter by specific keys
- Control sample size and step ranges
Note:
This module is part of the W&B Public API and provides methods to access run history data. It handles pagination automatically and offers both complete and sampled access to metrics logged during training runs.
class HistoryScan
Iterator for scanning complete run history.
Args:
- client: (wandb.apis.internal.Api) The client instance to use.
- run: (wandb.sdk.internal.Run) The run object to scan history for.
- min_step: (int) The minimum step to start scanning from.
- max_step: (int) The maximum step to scan up to.
- page_size: (int) Number of samples per page (default is 1000).
method HistoryScan.__init__
__init__(client, run, min_step, max_step, page_size=1000)
class SampledHistoryScan
Iterator for sampling run history data.
Args:
- client: (wandb.apis.internal.Api) The client instance to use.
- run: (wandb.sdk.internal.Run) The run object to sample history from.
- keys: (list) List of keys to sample from the history.
- min_step: (int) The minimum step to start sampling from.
- max_step: (int) The maximum step to sample up to.
- page_size: (int) Number of samples per page (default is 1000).
method SampledHistoryScan.__init__
__init__(client, run, keys, min_step, max_step, page_size=1000)
1.5 - jobs
module wandb.apis.public
W&B Public API for Job Management and Queuing.
This module provides classes for managing W&B jobs, queued runs, and run queues. Classes include:
Job: Manage W&B job definitions and execution
- Load and configure jobs from artifacts
- Set entrypoints and runtime configurations
- Execute jobs with different resource types
- Handle notebook and container-based jobs
QueuedRun: Track and manage individual queued runs
- Monitor run state and execution
- Wait for run completion
- Access run results and artifacts
- Delete queued runs
RunQueue: Manage job queues and execution resources
- Create and configure run queues
- Set resource types and configurations
- Monitor queue items and status
- Control queue access and priorities
class Job
method Job.__init__
__init__(api: 'Api', name, path: Optional[str] = None) → None
property Job.name
The name of the job.
method Job.call
call(
config,
project=None,
entity=None,
queue=None,
resource='local-container',
resource_args=None,
template_variables=None,
project_queue=None,
priority=None
)
Call the job with the given configuration.
Args:
- config (dict): The configuration to pass to the job. This should be a dictionary containing key-value pairs that match the input types defined in the job.
- project (str, optional): The project to log the run to. Defaults to the job's project.
- entity (str, optional): The entity to log the run under. Defaults to the job's entity.
- queue (str, optional): The name of the queue to enqueue the job to. Defaults to None.
- resource (str, optional): The resource type to use for execution. Defaults to "local-container".
- resource_args (dict, optional): Additional arguments for the resource type. Defaults to None.
- template_variables (dict, optional): Template variables to use for the job. Defaults to None.
- project_queue (str, optional): The project that manages the queue. Defaults to None.
- priority (int, optional): The priority of the queued run. Defaults to None.
method Job.set_entrypoint
set_entrypoint(entrypoint: List[str])
Set the entrypoint for the job.
class QueuedRun
A single queued run associated with an entity and project.
Args:
- entity: The entity associated with the queued run.
- project (str): The project where runs executed by the queue are logged to.
- queue_name (str): The name of the queue.
- run_queue_item_id (int): The id of the run queue item.
- project_queue (str): The project that manages the queue.
- priority (str): The priority of the queued run.
Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.
method QueuedRun.__init__
__init__(
client,
entity,
project,
queue_name,
run_queue_item_id,
project_queue='model-registry',
priority=None
)
property QueuedRun.entity
The entity associated with the queued run.
property QueuedRun.id
The id of the queued run.
property QueuedRun.project
The project associated with the queued run.
property QueuedRun.queue_name
The name of the queue.
property QueuedRun.state
The state of the queued run.
method QueuedRun.delete
delete(delete_artifacts=False)
Delete the given queued run from the wandb backend.
method QueuedRun.wait_until_finished
wait_until_finished()
Wait for the queued run to complete and return the finished run.
method QueuedRun.wait_until_running
wait_until_running()
Wait until the queued run is running and return the run.
class RunQueue
Class that represents a run queue in W&B.
Args:
- client: W&B API client instance.
- name: Name of the run queue.
- entity: The entity (user or team) that owns this queue.
- prioritization_mode: Queue priority mode. Can be "DISABLED" or "V0". Defaults to None.
- _access: Access level for the queue. Can be "project" or "user". Defaults to None.
- _default_resource_config_id: ID of the default resource config.
- _default_resource_config: Default resource configuration.
method RunQueue.__init__
__init__(
client: 'RetryingClient',
name: str,
entity: str,
prioritization_mode: Optional[Literal['DISABLED', 'V0']] = None,
_access: Optional[Literal['project', 'user']] = None,
_default_resource_config_id: Optional[int] = None,
_default_resource_config: Optional[dict] = None
) → None
property RunQueue.access
The access level of the queue.
property RunQueue.default_resource_config
The default configuration for resources.
property RunQueue.entity
The entity that owns the queue.
property RunQueue.external_links
External resource links for the queue.
property RunQueue.id
The id of the queue.
property RunQueue.items
Up to the first 100 queued runs. Modifying this list will not modify the queue or any enqueued items!
property RunQueue.name
The name of the queue.
property RunQueue.prioritization_mode
The prioritization mode of the queue.
Can be set to “DISABLED” or “V0”.
property RunQueue.template_variables
Variables for resource templates.
property RunQueue.type
The resource type for execution.
classmethod RunQueue.create
create(
name: str,
resource: 'RunQueueResourceType',
entity: Optional[str] = None,
prioritization_mode: Optional[ForwardRef('RunQueuePrioritizationMode')] = None,
config: Optional[dict] = None,
template_variables: Optional[dict] = None
) → RunQueue
Create a RunQueue.
Args:
- name: The name of the run queue to create.
- resource: The resource type for execution.
- entity: The entity (user or team) that will own the queue. Defaults to the default entity of the API client.
- prioritization_mode: The prioritization mode for the queue. Can be "DISABLED" or "V0". Defaults to None.
- config: Optional dictionary for the default resource configuration. Defaults to None.
- template_variables: Optional dictionary for template variables used in the resource configuration.
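A minimal sketch of creating a queue with the classmethod, assuming the resource is passed as one of the resource type strings listed for Api.create_run_queue above (queue and entity names are placeholders):
from wandb.apis.public import RunQueue
queue = RunQueue.create(
    name="my-queue",
    resource="local-container",  # assumed string form of the resource type
    entity="my_team",
)
print(queue.name, queue.type)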
method RunQueue.delete
delete()
Delete the run queue from the wandb backend.
1.6 - projects
module wandb.apis.public
W&B Public API for Projects.
This module provides classes for interacting with W&B projects and their associated data. Classes include:
Projects: A paginated collection of projects associated with an entity
- Iterate through all projects
- Access project metadata
- Query project information
Project: A single project that serves as a namespace for runs
- Access project properties
- Work with artifacts and their types
- Manage sweeps
- Generate HTML representations for Jupyter
Example:
from wandb.apis.public import Api
# Initialize API
api = Api()
# Get all projects for an entity
projects = api.projects("entity")
# Access project data
for project in projects:
print(f"Project: {project.name}")
print(f"URL: {project.url}")
# Get artifact types
for artifact_type in project.artifacts_types():
print(f"Artifact Type: {artifact_type.name}")
# Get sweeps
for sweep in project.sweeps():
print(f"Sweep ID: {sweep.id}")
print(f"State: {sweep.state}")
Note:
This module is part of the W&B Public API and provides methods to access and manage projects. For creating new projects, use wandb.init() with a new project name.
class Projects
An iterable collection of Project objects.
An iterable interface to access projects created and saved by the entity.
Args:
- client (wandb.apis.internal.Api): The API client instance to use.
- entity (str): The entity name (username or team) to fetch projects for.
- per_page (int): Number of projects to fetch per request (default is 50).
Example:
from wandb.apis.public.api import Api
# Initialize the API client
api = Api()
# Find projects that belong to this entity
projects = api.projects(entity="entity")
# Iterate over projects
for project in projects:
print(f"Project: {project.name}")
print(f"- URL: {project.url}")
print(f"- Created at: {project.created_at}")
print(f"- Is benchmark: {project.is_benchmark}")
method Projects.__init__
__init__(client, entity, per_page=50)
property Projects.cursor
Returns the cursor position for pagination of project results.
property Projects.length
Returns the total number of projects.
Note: This property is not available for projects.
property Projects.more
Returns True if there are more projects to fetch. Returns False if there are no more projects to fetch.
method Projects.convert_objects
convert_objects()
Converts GraphQL edges to File objects.
class Project
A project is a namespace for runs.
Args:
- client: W&B API client instance.
- name (str): The name of the project.
- entity (str): The entity name that owns the project.
method Project.__init__
__init__(client, entity, project, attrs)
property Project.path
Returns the path of the project. The path is a list containing the entity and project name.
property Project.url
Returns the URL of the project.
method Project.artifacts_types
artifacts_types(per_page=50)
Returns all artifact types associated with this project.
method Project.sweeps
sweeps()
Fetches all sweeps associated with the project.
method Project.to_html
to_html(height=420, hidden=False)
Generate HTML containing an iframe displaying this project.
1.7 - query_generator
module wandb.apis.public
method QueryGenerator.filter_to_mongo
filter_to_mongo(filter)
Returns dictionary with filter format converted to MongoDB filter.
classmethod QueryGenerator.format_order_key
format_order_key(key: str)
Format a key for sorting.
method QueryGenerator.key_to_server_path
key_to_server_path(key)
Convert a key dictionary to the corresponding server path string.
method QueryGenerator.keys_to_order
keys_to_order(keys)
Convert a list of key dictionaries to an order string.
method QueryGenerator.mongo_to_filter
mongo_to_filter(filter)
Returns dictionary with MongoDB filter converted to filter format.
method QueryGenerator.order_to_keys
order_to_keys(order)
Convert an order string to a list of key dictionaries.
method QueryGenerator.server_path_to_key
server_path_to_key(path)
Convert a server path string to the corresponding key dictionary.
1.8 - reports
module wandb.apis.public
Public API: reports.
class Reports
Reports is an iterable collection of BetaReport objects.
Args:
- client (wandb.apis.internal.Api): The API client instance to use.
- project (wandb.sdk.internal.Project): The project to fetch reports from.
- name (str, optional): The name of the report to filter by. If None, fetches all reports.
- entity (str, optional): The entity name for the project. Defaults to the project entity.
- per_page (int): Number of reports to fetch per page (default is 50).
method Reports.__init__
__init__(client, project, name=None, entity=None, per_page=50)
property Reports.cursor
Returns the cursor position for pagination of file results.
property Reports.length
The number of reports in the project.
property Reports.more
Returns True if there are more reports to fetch. Returns False if there are no more reports to fetch.
method Reports.convert_objects
convert_objects()
Converts GraphQL edges to File objects.
method Reports.update_variables
update_variables()
Updates the GraphQL query variables for pagination.
class BetaReport
BetaReport is a class associated with reports created in wandb.
WARNING: this API will likely change in a future release
Attributes:
- name (string): report name
- description (string): report description
- user (User): the user that created the report
- spec (dict): the spec of the report
- updated_at (string): timestamp of last update
method BetaReport.__init__
__init__(client, attrs, entity=None, project=None)
property BetaReport.sections
Get the panel sections (groups) from the report.
property BetaReport.updated_at
Timestamp of last update
property BetaReport.url
URL of the report.
Contains the entity, project, display name, and id.
method BetaReport.runs
runs(section, per_page=50, only_selected=True)
Get runs associated with a section of the report.
method BetaReport.to_html
to_html(height=1024, hidden=False)
Generate HTML containing an iframe displaying this report.
1.9 - runs
module wandb.apis.public
W&B Public API for ML Runs.
This module provides classes for interacting with W&B runs and their associated data. Classes include:
Runs: A paginated collection of runs associated with a project
- Filter and query runs
- Access run histories and metrics
- Export data in various formats (pandas, polars)
Run: A single machine learning training run
- Access run metadata, configs, and metrics
- Upload and download files
- Work with artifacts
- Query run history
- Update run information
Example:
from wandb.apis.public import Api
# Initialize API
api = Api()
# Get runs matching filters
runs = api.runs(
path="entity/project", filters={"state": "finished", "config.batch_size": 32}
)
# Access run data
for run in runs:
print(f"Run: {run.name}")
print(f"Config: {run.config}")
print(f"Metrics: {run.summary}")
# Get history with pandas
history_df = run.history(keys=["loss", "accuracy"], pandas=True)
# Work with artifacts
for artifact in run.logged_artifacts():
print(f"Artifact: {artifact.name}")
Note:
This module is part of the W&B Public API and provides read/write access to run data. For logging new runs, use the wandb.init() function from the main wandb package.
class Runs
An iterable collection of runs associated with a project and optional filter.
This is generally used indirectly using the Api.runs namespace.
Args:
- client: (wandb.apis.public.RetryingClient) The API client to use for requests.
- entity: (str) The entity (username or team) that owns the project.
- project: (str) The name of the project to fetch runs from.
- filters: (Optional[Dict[str, Any]]) A dictionary of filters to apply to the runs query.
- order: (Optional[str]) The order of the runs; can be "asc" or "desc". Defaults to "desc".
- per_page: (int) The number of runs to fetch per request (default is 50).
- include_sweeps: (bool) Whether to include sweep information in the runs. Defaults to True.
Examples:
from wandb.apis.public.runs import Runs
from wandb.apis.public import Api
# Initialize the API client
api = Api()
# Get all runs from a project that satisfy the filters
filters = {"state": "finished", "config.optimizer": "adam"}
runs = Runs(
client=api.client,
entity="entity",
project="project_name",
filters=filters,
)
# Iterate over runs and print details
for run in runs:
print(f"Run name: {run.name}")
print(f"Run ID: {run.id}")
print(f"Run URL: {run.url}")
print(f"Run state: {run.state}")
print(f"Run config: {run.config}")
print(f"Run summary: {run.summary}")
print(f"Run history (samples=5): {run.history(samples=5)}")
print("----------")
# Get histories for all runs with specific metrics
histories_df = runs.histories(
samples=100, # Number of samples per run
keys=["loss", "accuracy"], # Metrics to fetch
x_axis="_step", # X-axis metric
format="pandas", # Return as pandas DataFrame
)
method Runs.__init__
__init__(
client: 'RetryingClient',
entity: str,
project: str,
filters: Optional[Dict[str, Any]] = None,
order: Optional[str] = None,
per_page: int = 50,
include_sweeps: bool = True
)
property Runs.cursor
Returns the cursor position for pagination of runs results.
property Runs.length
Returns the total number of runs.
property Runs.more
Returns True if there are more runs to fetch. Returns False if there are no more runs to fetch.
method Runs.convert_objects
convert_objects()
Converts GraphQL edges to Runs objects.
method Runs.histories
histories(
samples: int = 500,
keys: Optional[List[str]] = None,
x_axis: str = '_step',
format: Literal['default', 'pandas', 'polars'] = 'default',
stream: Literal['default', 'system'] = 'default'
)
Return sampled history metrics for all runs that fit the filters conditions.
Args:
- samples: The number of samples to return per run.
- keys: Only return metrics for specific keys.
- x_axis: Use this metric as the xAxis. Defaults to _step.
- format: Format to return data in; options are "default", "pandas", "polars".
- stream: "default" for metrics, "system" for machine metrics.
Returns:
- pandas.DataFrame: If format="pandas", returns a pandas.DataFrame of history metrics.
- polars.DataFrame: If format="polars", returns a polars.DataFrame of history metrics.
- list of dicts: If format="default", returns a list of dicts containing history metrics with a run_id key.
class Run
A single run associated with an entity and project.
Args:
- client: The W&B API client.
- entity: The entity associated with the run.
- project: The project associated with the run.
- run_id: The unique identifier for the run.
- attrs: The attributes of the run.
- include_sweeps: Whether to include sweeps in the run.
Attributes:
- tags ([str]): a list of tags associated with the run
- url (str): the url of this run
- id (str): unique identifier for the run (defaults to eight characters)
- name (str): the name of the run
- state (str): one of: running, finished, crashed, killed, preempting, preempted
- config (dict): a dict of hyperparameters associated with the run
- created_at (str): ISO timestamp when the run was started
- system_metrics (dict): the latest system metrics recorded for the run
- summary (dict): A mutable dict-like property that holds the current summary. Calling update will persist any changes.
- project (str): the project associated with the run
- entity (str): the name of the entity associated with the run
- project_internal_id (int): the internal id of the project
- user (str): the name of the user who created the run
- path (str): Unique identifier [entity]/[project]/[run_id]
- notes (str): Notes about the run
- read_only (boolean): Whether the run is editable
- history_keys (str): Keys of the history metrics that have been logged with wandb.log({key: value})
- metadata (str): Metadata about the run from wandb-metadata.json
method Run.__init__
__init__(
client: 'RetryingClient',
entity: str,
project: str,
run_id: str,
attrs: Optional[Mapping] = None,
include_sweeps: bool = True
)
Initialize a Run object.
Run is always initialized by calling api.runs() where api is an instance of wandb.Api.
property Run.entity
The entity associated with the run.
property Run.id
The unique identifier for the run.
property Run.json_config
property Run.lastHistoryStep
Returns the last step logged in the run’s history.
property Run.metadata
Metadata about the run from wandb-metadata.json.
Metadata includes the run’s description, tags, start time, memory usage and more.
property Run.name
The name of the run.
property Run.path
The path of the run. The path is a list containing the entity, project, and run_id.
property Run.state
The state of the run. Can be one of: Finished, Failed, Crashed, or Running.
property Run.storage_id
The unique storage identifier for the run.
property Run.summary
A mutable dict-like property that holds summary values associated with the run.
property Run.url
The URL of the run.
The run URL is generated from the entity, project, and run_id. For SaaS users, it takes the form of https://wandb.ai/entity/project/run_id
.
property Run.username
This API is deprecated. Use entity instead.
classmethod Run.create
create(api, run_id=None, project=None, entity=None)
Create a run for the given project.
method Run.delete
delete(delete_artifacts=False)
Delete the given run from the wandb backend.
Args:
- delete_artifacts (bool, optional): Whether to delete the artifacts associated with the run.
method Run.file
file(name)
Return the path of a file with a given name in the artifact.
Args:
- name (str): Name of the requested file.
Returns:
A File matching the name argument.
method Run.files
files(names=None, per_page=50)
Return a file path for each file named.
Args:
names
(list): names of the requested files, if empty returns all filesper_page
(int): number of results per page.
Returns:
A Files
object, which is an iterator over File
objects.
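For illustration, here is a minimal sketch of listing and downloading a run's files; the run path and file name are placeholders, not values from the reference above.
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")  # placeholder run path
# Iterate over all files stored for the run
for f in run.files():
    print(f.name, f.size)
# Download a single file by name (replace with a file your run actually saved)
run.file("output.log").download(replace=True)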
method Run.history
history(samples=500, keys=None, x_axis='_step', pandas=True, stream='default')
Return sampled history metrics for a run.
This is simpler and faster if you are ok with the history records being sampled.
Args:
samples: (int, optional) The number of samples to return.
pandas: (bool, optional) Return a pandas DataFrame.
keys: (list, optional) Only return metrics for specific keys.
x_axis: (str, optional) Use this metric as the x-axis. Defaults to _step.
stream: (str, optional) "default" for metrics, "system" for machine metrics.
Returns:
pandas.DataFrame: If pandas=True, returns a pandas.DataFrame of history metrics.
list of dicts: If pandas=False, returns a list of dicts of history metrics.
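As a brief sketch (the run path and metric key are placeholders), sampled history can be pulled into a DataFrame for analysis:
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")  # placeholder run path
# Sampled history as a pandas.DataFrame
df = run.history(samples=200, keys=["loss"], x_axis="_step")
print(df.head())
# The same call with pandas=False returns a list of dicts instead
rows = run.history(samples=200, keys=["loss"], pandas=False)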
method Run.load
load(force=False)
Fetch and update run data from GraphQL database.
Ensures run data is up to date.
Args:
force
(bool): Whether to force a refresh of the run data.
method Run.log_artifact
log_artifact(
artifact: 'wandb.Artifact',
aliases: Optional[Collection[str]] = None,
tags: Optional[Collection[str]] = None
)
Declare an artifact as output of a run.
Args:
artifact (Artifact): An artifact returned from wandb.Api().artifact(name).
aliases (list, optional): Aliases to apply to this artifact.
tags (list, optional): Tags to apply to this artifact, if any.
Returns:
An Artifact object.
method Run.logged_artifacts
logged_artifacts(per_page: int = 100) → RunArtifacts
Fetches all artifacts logged by this run.
Retrieves all output artifacts that were logged during the run. Returns a paginated result that can be iterated over or collected into a single list.
Args:
per_page
: Number of artifacts to fetch per API request.
Returns: An iterable collection of all Artifact objects logged as outputs during this run.
Example:
import wandb
import tempfile
with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as tmp:
    tmp.write("This is a test artifact")
    tmp_path = tmp.name
run = wandb.init(project="artifact-example")
artifact = wandb.Artifact("test_artifact", type="dataset")
artifact.add_file(tmp_path)
run.log_artifact(artifact)
run.finish()
api = wandb.Api()
finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
for logged_artifact in finished_run.logged_artifacts():
    print(logged_artifact.name)
method Run.save
save()
Persist changes to the run object to the W&B backend.
method Run.scan_history
scan_history(keys=None, page_size=1000, min_step=None, max_step=None)
Returns an iterable collection of all history records for a run.
Args:
keys ([str], optional): Only fetch these keys, and only fetch rows that have all of the keys defined.
page_size (int, optional): Size of pages to fetch from the API.
min_step (int, optional): The minimum step to include when scanning history.
max_step (int, optional): The maximum step to include when scanning history.
Returns: An iterable collection over history records (dict).
Example: Export all the loss values for an example run
run = api.run("entity/project-name/run-id")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
method Run.to_html
to_html(height=420, hidden=False)
Generate HTML containing an iframe displaying this run.
method Run.update
update()
Persist changes to the run object to the wandb backend.
method Run.upload_file
upload_file(path, root='.')
Uploads a local file to W&B, associating it with this run.
Args:
path (str): Path to the file to upload. Can be absolute or relative.
root (str): The root path to save the file relative to. For example, if you want the file saved in the run as "my_dir/file.txt" and you're currently in "my_dir", you would set root to "../". Defaults to the current directory (".").
Returns:
A File
object representing the uploaded file.
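A short sketch, assuming a local file named results.txt exists in the current directory and using a placeholder run path:
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")  # placeholder run path
# Upload results.txt so it appears at the root of the run's files
uploaded = run.upload_file("results.txt")
print(uploaded.name)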
method Run.use_artifact
use_artifact(artifact, use_as=None)
Declare an artifact as an input to a run.
Args:
artifact (Artifact): An artifact returned from wandb.Api().artifact(name).
use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run when using the beta wandb launch feature's artifact swapping functionality.
Returns:
An Artifact object.
method Run.used_artifacts
used_artifacts(per_page: int = 100) → RunArtifacts
Fetches artifacts explicitly used by this run.
Retrieves only the input artifacts that were explicitly declared as used during the run, typically via run.use_artifact()
. Returns a paginated result that can be iterated over or collected into a single list.
Args:
per_page
: Number of artifacts to fetch per API request.
Returns: An iterable collection of Artifact objects explicitly used as inputs in this run.
Example:
import wandb
run = wandb.init(project="artifact-example")
run.use_artifact("test_artifact:latest")
run.finish()
api = wandb.Api()
finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
for used_artifact in finished_run.used_artifacts():
    print(used_artifact.name)
test_artifact
method Run.wait_until_finished
wait_until_finished()
Check the state of the run until it is finished.
1.10 - sweeps
module wandb.apis.public
W&B Public API for Hyperparameter Sweeps.
This module provides classes for interacting with W&B hyperparameter optimization sweeps. Classes include:
Sweep: Represents a hyperparameter optimization sweep, providing access to:
- Sweep configuration and state
- Associated runs and their metrics
- Best performing runs
- URLs for visualization
Example:
from wandb.apis.public import Api
# Initialize API
api = Api()
# Get a specific sweep
sweep = api.sweep("entity/project/sweep_id")
# Access sweep properties
print(f"Sweep: {sweep.name}")
print(f"State: {sweep.state}")
print(f"Best Loss: {sweep.best_loss}")
# Get best performing run
best_run = sweep.best_run()
print(f"Best Run: {best_run.name}")
print(f"Metrics: {best_run.summary}")
Note:
This module is part of the W&B Public API and provides read-only access to sweep data. For creating and controlling sweeps, use the wandb.sweep() and wandb.agent() functions from the main wandb package.
class Sweep
The set of runs associated with the sweep.
Attributes:
runs (Runs): List of runs.
id (str): Sweep ID.
project (str): The name of the project the sweep belongs to.
config (dict): Dictionary containing the sweep configuration.
state (str): The state of the sweep. Can be "Finished", "Failed", "Crashed", or "Running".
expected_run_count (int): The number of expected runs for the sweep.
method Sweep.__init__
__init__(client, entity, project, sweep_id, attrs=None)
property Sweep.config
The sweep configuration used for the sweep.
property Sweep.entity
The entity associated with the sweep.
property Sweep.expected_run_count
Return the number of expected runs in the sweep or None for infinite runs.
property Sweep.name
The name of the sweep.
If the sweep has a name, it will be returned. Otherwise, the sweep ID will be returned.
property Sweep.order
Return the order key for the sweep.
property Sweep.path
Returns the path of the sweep.
The path is a list containing the entity, project name, and sweep ID.
property Sweep.url
The URL of the sweep.
The sweep URL is generated from the entity, project, the term "sweeps", and the sweep ID. For SaaS users, it takes the form of https://wandb.ai/entity/project/sweeps/sweep_ID.
property Sweep.username
Note: Deprecated. Use entity instead.
method Sweep.best_run
best_run(order=None)
Return the best run sorted by the metric defined in config or the order passed in.
classmethod Sweep.get
get(
client,
entity=None,
project=None,
sid=None,
order=None,
query=None,
**kwargs
)
Execute a query against the cloud backend.
method Sweep.load
load(force: bool = False)
Fetch and update sweep data from the GraphQL database.
method Sweep.to_html
to_html(height=420, hidden=False)
Generate HTML containing an iframe displaying this sweep.
1.11 - teams
module wandb.apis.public
W&B Public API for managing teams and team members.
This module provides classes for managing W&B teams and their members. Classes include:
Team: Manage W&B teams and their settings
- Create new teams
- Invite team members
- Create service accounts
- Manage team permissions and settings
Member: Represent and manage team members
- Access member information
- Delete members
- Manage member permissions
Note:
This module is part of the W&B Public API and provides methods to manage teams and their members. Team management operations require appropriate permissions.
class Member
A member of a team.
Args:
client (wandb.apis.internal.Api): The client instance to use.
team (str): The name of the team this member belongs to.
attrs (dict): The member attributes.
method Member.__init__
__init__(client, team, attrs)
method Member.delete
delete()
Remove a member from a team.
Returns: Boolean indicating success
class Team
A class that represents a W&B team.
This class provides methods to manage W&B teams, including creating teams, inviting members, and managing service accounts. It inherits from Attrs to handle team attributes.
Args:
client (wandb.apis.public.Api): The API instance to use.
name (str): The name of the team.
attrs (dict): Optional dictionary of team attributes.
Note:
Team management requires appropriate permissions.
method Team.__init__
__init__(client, name, attrs=None)
classmethod Team.create
create(api, team, admin_username=None)
Create a new team.
Args:
api: (Api) The API instance to use.
team: (str) The name of the team.
admin_username: (str, optional) Username of the admin user of the team. Defaults to the current user.
Returns:
A Team object.
method Team.create_service_account
create_service_account(description)
Create a service account for the team.
Args:
description
: (str) A description for this service account
Returns:
The service account Member
object, or None on failure
method Team.invite
invite(username_or_email, admin=False)
Invite a user to a team.
Args:
username_or_email: (str) The username or email address of the user you want to invite.
admin: (bool) Whether to make this user a team admin. Defaults to False.
Returns: True on success, False if user was already invited or didn’t exist
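A sketch of typical team management calls; the team name and email address are placeholders, and these operations require appropriate admin permissions on your W&B instance.
from wandb.apis.public import Api, Team
api = Api()
# Create a team and invite a member (placeholder names)
team = Team.create(api, "my-team")
team.invite("colleague@example.com", admin=False)
# Create a service account for automation; returns a Member or None on failure
service_account = team.create_service_account("CI pipeline")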
method Team.load
load(force=False)
Return members that belong to a team.
1.12 - users
module wandb.apis.public
W&B Public API for User Management.
This module provides classes for managing W&B users and their API keys. Classes include:
User: Manage W&B user accounts and authentication
- Create new users
- Generate and manage API keys
- Access team memberships
- Handle user properties and permissions
Note:
This module is part of the W&B Public API and provides methods to manage users and their authentication. Some operations require admin privileges.
class User
A class representing a W&B user with authentication and management capabilities.
This class provides methods to manage W&B users, including creating users, managing API keys, and accessing team memberships. It inherits from Attrs to handle user attributes.
Args:
client: (wandb.apis.internal.Api) The client instance to use.
attrs: (dict) The user attributes.
Note:
Some operations require admin privileges
method User.__init__
__init__(client, attrs)
property User.api_keys
List of API key names associated with the user.
Returns:
list[str]
: Names of API keys associated with the user. Empty list if user has no API keys or if API key data hasn’t been loaded.
property User.teams
List of team names that the user is a member of.
Returns:
list
(list): Names of teams the user belongs to. Empty list if user has no team memberships or if teams data hasn’t been loaded.
property User.user_api
An instance of the api using credentials from the user.
classmethod User.create
create(api, email, admin=False)
Create a new user.
Args:
api (Api): The API instance to use.
email (str): The email address of the user.
admin (bool): Whether this user should be a global instance admin.
Returns:
A User object.
method User.delete_api_key
delete_api_key(api_key)
Delete a user’s api key.
Args:
api_key (str): The name of the API key to delete. This should be one of the names returned by the api_keys property.
Returns: Boolean indicating success
Raises: ValueError if the api_key couldn’t be found
method User.generate_api_key
generate_api_key(description=None)
Generate a new api key.
Args:
description
(str, optional): A description for the new API key. This can be used to identify the purpose of the API key.
Returns: The new api key, or None on failure
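As an illustrative sketch (the email address is a placeholder; some of these calls require admin privileges), a user's teams and API keys can be inspected and a new key generated:
from wandb.apis.public import Api
api = Api()
user = api.user("user@example.com")  # placeholder email
print(user.teams)
print(user.api_keys)
# Generate a new key; returns the key string, or None on failure
new_key = user.generate_api_key(description="temporary key")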
2 - Data Types
Defines Data Types for logging interactive visualizations to W&B.
2.1 - Audio
class Audio
W&B class for audio clips.
Attributes:
data_or_path (string or numpy array): A path to an audio file or a numpy array of audio data.
sample_rate (int): Sample rate, required when passing in a raw numpy array of audio data.
caption (string): Caption to display with the audio.
method Audio.__init__
__init__(data_or_path, sample_rate=None, caption=None)
Accept a path to an audio file or a numpy array of audio data.
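For instance, a minimal sketch using synthetic audio data (the project name is a placeholder, and encoding a raw numpy array assumes the optional soundfile dependency is installed):
import numpy as np
import wandb
with wandb.init(project="audio-example") as run:
    # One second of a 440 Hz sine wave at a 16 kHz sample rate
    sample_rate = 16000
    t = np.linspace(0, 1, sample_rate)
    waveform = np.sin(2 * np.pi * 440 * t)
    run.log({"tone": wandb.Audio(waveform, sample_rate=sample_rate, caption="440 Hz sine")})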
2.2 - box3d
function box3d
box3d(
center: 'npt.ArrayLike',
size: 'npt.ArrayLike',
orientation: 'npt.ArrayLike',
color: 'RGBColor',
label: 'Optional[str]' = None,
score: 'Optional[numeric]' = None
) → Box3D
Returns a Box3D.
Args:
center: The center point of the box as a length-3 ndarray.
size: The box's X, Y, and Z dimensions as a length-3 ndarray.
orientation: The rotation transforming global XYZ coordinates into the box's local XYZ coordinates, given as a length-4 ndarray [r, x, y, z] corresponding to the non-zero quaternion r + xi + yj + zk.
color: The box's color as an (r, g, b) tuple with 0 <= r, g, b <= 1.
label: An optional label for the box.
score: An optional score for the box.
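A hedged sketch of constructing a box with placeholder values, assuming box3d is exposed as wandb.box3d like the other data types on this page; the resulting Box3D is typically logged as part of an Object3D point-cloud scene.
import wandb
# A unit box at the origin with no rotation (placeholder values)
box = wandb.box3d(
    center=(0.0, 0.0, 0.0),
    size=(1.0, 1.0, 1.0),
    orientation=(1.0, 0.0, 0.0, 0.0),  # identity quaternion r + xi + yj + zk
    color=(0.0, 1.0, 0.0),
    label="example box",
    score=0.9,
)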
2.3 - Histogram
class Histogram
W&B class for histograms.
This object works just like numpy’s histogram function https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
Args:
sequence: Input data for the histogram.
np_histogram: Alternative input of a precomputed histogram.
num_bins: Number of bins for the histogram. The default number of bins is 64. The maximum number of bins is 512.
Attributes:
bins ([float]): Edges of the bins.
histogram ([int]): Number of elements falling in each bin.
Examples: Generate histogram from a sequence.
import wandb
wandb.Histogram([1, 2, 3])
Efficiently initialize from np.histogram.
import numpy as np
import wandb
data = np.random.randint(0, 10, size=100)
hist = np.histogram(data)
wandb.Histogram(np_histogram=hist)
method Histogram.__init__
__init__(
sequence: Optional[Sequence] = None,
np_histogram: Optional[ForwardRef('NumpyHistogram')] = None,
num_bins: int = 64
) → None
2.4 - Html
class Html
W&B class for arbitrary html.
Args:
data: HTML to display in wandb.
inject: Add a stylesheet to the HTML object. If set to False, the HTML will pass through unchanged.
method Html.__init__
__init__(data: Union[str, ForwardRef('TextIO')], inject: bool = True) → None
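For example, a minimal sketch with a placeholder project name:
import wandb
with wandb.init(project="html-example") as run:
    # Log a small HTML snippet; inject=True adds W&B's default stylesheet
    run.log({"report": wandb.Html("<h1>Hello, W&B</h1><p>Custom HTML panel.</p>")})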
2.5 - Image
class Image
Format images for logging to W&B.
See https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes for more information on modes.
Args:
data_or_path: Accepts a numpy array of image data, or a PIL image. The class attempts to infer the data format and converts it.
mode: The PIL mode for an image. Most common are "L", "RGB", "RGBA".
caption: Label for display of the image.
When logging a torch.Tensor
as a wandb.Image
, images are normalized. If you do not want to normalize your images, convert your tensors to a PIL Image.
Examples:
# Create a wandb.Image from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})
# Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(
            low=0, high=256, size=(100, 100, 3), dtype=np.uint8
        )
        pil_image = PILImage.fromarray(pixels, mode="RGB")
        image = wandb.Image(pil_image, caption=f"random field {i}")
        examples.append(image)
    run.log({"examples": examples})
# log .jpg rather than .png (default)
import numpy as np
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
        pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
        image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
        examples.append(image)
    run.log({"examples": examples})
method Image.__init__
__init__(
data_or_path: 'ImageDataOrPathType',
mode: Optional[str] = None,
caption: Optional[str] = None,
grouping: Optional[int] = None,
classes: Optional[Union[ForwardRef('Classes'), Sequence[dict]]] = None,
boxes: Optional[Union[Dict[str, ForwardRef('BoundingBoxes2D')], Dict[str, dict]]] = None,
masks: Optional[Union[Dict[str, ForwardRef('ImageMask')], Dict[str, dict]]] = None,
file_type: Optional[str] = None
) → None
method Image.guess_mode
guess_mode(
data: Union[ForwardRef('np.ndarray'), ForwardRef('torch.Tensor')],
file_type: Optional[str] = None
) → str
Guess what type of image the np.array is representing.
2.6 - Molecule
class Molecule
W&B class for 3D Molecular data.
Args:
data_or_path: Molecule can be initialized from a file name or an io object.
caption: Caption associated with the molecule for display.
method Molecule.__init__
__init__(
data_or_path: Union[str, ForwardRef('TextIO')],
caption: Optional[str] = None,
**kwargs: str
) → None
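A minimal sketch, assuming a local PDB file at the hypothetical path molecule.pdb and a placeholder project name:
import wandb
with wandb.init(project="molecule-example") as run:
    # Log a molecule from a local file (path is a placeholder)
    run.log({"protein": wandb.Molecule("molecule.pdb", caption="example structure")})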
2.7 - Object3D
class Object3D
W&B class for 3D point clouds.
Args:
data_or_path
: Object3D can be initialized from a file or a NumPy array. You can pass a path to a file or an io object and a file_type which must be one of SUPPORTED_TYPES.
Examples: The shape of the numpy array must be one of either
[[x y z], ...] nx3
[[x y z c], ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where r, g, b is the color
method Object3D.__init__
__init__(
data_or_path: Union[ForwardRef('np.ndarray'), str, ForwardRef('TextIO'), dict],
caption: Optional[str] = None,
**kwargs: Optional[Union[str, ForwardRef('FileFormat3D')]]
) → None
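For instance, a sketch logging a random point cloud (the project name is a placeholder):
import numpy as np
import wandb
with wandb.init(project="point-cloud-example") as run:
    # nx3 array of (x, y, z) points
    points = np.random.uniform(-1, 1, size=(500, 3))
    run.log({"point_cloud": wandb.Object3D(points)})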
2.8 - Plotly
class Plotly
W&B class for Plotly plots.
Args:
val
: Matplotlib or Plotly figure.
method Plotly.__init__
__init__(
val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
)
classmethod Plotly.get_media_subdir
get_media_subdir() → str
classmethod Plotly.make_plot_media
make_plot_media(
val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
) → Union[wandb.sdk.data_types.image.Image, ForwardRef('Plotly')]
method Plotly.to_json
to_json(
run_or_artifact: Union[ForwardRef('LocalRun'), ForwardRef('Artifact')]
) → dict
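A short sketch, assuming plotly is installed and using a placeholder project name:
import plotly.graph_objects as go
import wandb
with wandb.init(project="plotly-example") as run:
    fig = go.Figure(data=go.Scatter(y=[1, 3, 2, 4]))
    # Wrap the figure so it renders as an interactive Plotly panel
    run.log({"line_plot": wandb.Plotly(fig)})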
2.9 - Table
class Table
The Table class used to display and analyze tabular data.
Unlike traditional spreadsheets, Tables support numerous types of data: scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media
. This means you can embed Images
, Video
, Audio
, and other sorts of rich, annotated media directly in Tables, alongside other traditional scalar values.
This class is the primary class used to generate the Table Visualizer in the UI: https://docs.wandb.ai/guides/data-vis/tables.
Attributes:
columns (List[str]): Names of the columns in the table. Defaults to ["Input", "Output", "Expected"].
data (List[List[any]]): 2D row-oriented array of values.
dataframe (pandas.DataFrame): DataFrame object used to create the table. When set, data and columns arguments are ignored.
optional (Union[bool, List[bool]]): Determines if None values are allowed. Defaults to True. A single bool value applies to all columns specified at construction time; a list of bool values applies to each respective column and should be the same length as columns.
allow_mixed_types (bool): Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.
method Table.__init__
__init__(
columns=None,
data=None,
rows=None,
dataframe=None,
dtype=None,
optional=True,
allow_mixed_types=False
)
Initializes a Table object.
The rows argument is available for legacy reasons and should not be used; the Table class uses data to mimic the Pandas API.
method Table.add_column
add_column(name, data, optional=False)
Adds a column of data to the table.
Args:
name: (str) The unique name of the column.
data: (list | np.array) A column of homogeneous data.
optional: (bool) Whether null-like values are permitted.
method Table.add_computed_columns
add_computed_columns(fn)
Adds one or more computed columns based on existing data.
Args:
fn
: A function which accepts one or two parameters, ndx (int) and row (dict), which is expected to return a dict representing new columns for that row, keyed by the new column names.
ndx
is an integer representing the index of the row. Only included if include_ndx
is set to True
.
row
is a dictionary keyed by existing columns
method Table.add_data
add_data(*data)
Adds a new row of data to the table.
The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS.
The length of the data should match the number of columns in the table.
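As an illustrative sketch (the column names, values, and project name are placeholders):
import wandb
with wandb.init(project="table-example") as run:
    table = wandb.Table(columns=["id", "prediction", "score"])
    # Each add_data call appends one row; the number of values must match the columns
    table.add_data(1, "cat", 0.92)
    table.add_data(2, "dog", 0.87)
    run.log({"predictions": table})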
method Table.add_row
add_row(*row)
Deprecated; use add_data instead.
method Table.cast
cast(col_name, dtype, optional=False)
Casts a column to a specific data type.
This can be one of the normal python classes, an internal W&B type, or an example object, like an instance of wandb.Image or wandb.Classes.
Args:
col_name (str): The name of the column to cast.
dtype (class, wandb.wandb_sdk.interface._dtypes.Type, any): The target dtype.
optional (bool): Whether the column should allow Nones.
method Table.get_column
get_column(name, convert_to=None)
Retrieves a column from the table and optionally converts it to a NumPy object.
Args:
name: (str) The name of the column.
convert_to: (str, optional) "numpy" converts the underlying data to a numpy object.
method Table.get_dataframe
get_dataframe()
Returns a pandas.DataFrame
of the table.
method Table.get_index
get_index()
Returns an array of row indexes for use in other tables to create links.
2.10 - Video
class Video
Format a video for logging to W&B.
Args:
data_or_path: Video can be initialized with a path to a file or an io object. The format must be "gif", "mp4", "webm", or "ogg" and must be specified with the format argument. Video can also be initialized with a numpy tensor, which must be either 4-dimensional or 5-dimensional. The dimensions should be (time, channel, height, width) or (batch, time, channel, height, width).
caption: Caption associated with the video for display.
fps: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string or bytes.
format: Format of the video, necessary if initializing with a path or io object.
Examples: Log a numpy array as a video
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
run.log({"video": wandb.Video(frames, fps=4)})
method Video.__init__
__init__(
data_or_path: Union[ForwardRef('np.ndarray'), str, ForwardRef('TextIO'), ForwardRef('BytesIO')],
caption: Optional[str] = None,
fps: Optional[int] = None,
format: Optional[str] = None
)
3 - Launch Library
A collection of launch APIs for W&B.
3.1 - create_and_run_agent
function create_and_run_agent
create_and_run_agent(
api: wandb.apis.internal.Api,
config: Dict[str, Any]
) → None
3.2 - launch
function launch
launch(
api: wandb.apis.internal.Api,
job: Optional[str] = None,
entry_point: Optional[List[str]] = None,
version: Optional[str] = None,
name: Optional[str] = None,
resource: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
docker_image: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
synchronous: Optional[bool] = True,
run_id: Optional[str] = None,
repository: Optional[str] = None
) → AbstractRun
Launch a W&B launch experiment.
Arguments:
job: String reference to a wandb.Job, e.g. wandb/test/my-job:latest.
api: An instance of a wandb Api from wandb.apis.internal.
entry_point: Entry point to run within the project. Defaults to the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
version: For Git-based projects, either a commit hash or a branch name.
name: Name under which to launch the run.
resource: Execution backend for the run.
resource_args: Resource-related arguments for launching runs onto a remote backend. Stored on the constructed launch config under resource_args.
project: Target project to send the launched run to.
entity: Target entity to send the launched run to.
config: A dictionary containing the configuration for the run. May also contain resource-specific arguments under the key "resource_args".
synchronous: Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is "local-container", this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well.
run_id: ID for the run (to ultimately replace the :name: field).
repository: String name of the repository path for a remote registry.
Example:
import wandb
from wandb.sdk.launch import launch
job = "wandb/jobs/Hello World:latest"
params = {"epochs": 5}
# Run W&B project and create a reproducible docker environment
# on a local host; hyperparameter overrides are passed via the
# launch config's run_config overrides
api = wandb.apis.internal.Api()
launch(api, job=job, config={"overrides": {"run_config": params}})
Returns:
An instance of wandb.launch.SubmittedRun exposing information (e.g. run ID) about the launched run.
Raises:
wandb.exceptions.ExecutionError: If a run launched in blocking mode is unsuccessful.
3.3 - launch_add
function launch_add
launch_add(
uri: Optional[str] = None,
job: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
template_variables: Optional[Dict[str, Union[float, int, str]]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
queue_name: Optional[str] = None,
resource: Optional[str] = None,
entry_point: Optional[List[str]] = None,
name: Optional[str] = None,
version: Optional[str] = None,
docker_image: Optional[str] = None,
project_queue: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
run_id: Optional[str] = None,
build: Optional[bool] = False,
repository: Optional[str] = None,
sweep_id: Optional[str] = None,
author: Optional[str] = None,
priority: Optional[int] = None
) → public.QueuedRun
Enqueue a W&B launch experiment. With either a source uri, job or docker_image.
Arguments:
uri: URI of the experiment to run. A wandb run URI or a Git repository URI.
job: String reference to a wandb.Job, e.g. wandb/test/my-job:latest.
config: A dictionary containing the configuration for the run. May also contain resource-specific arguments under the key "resource_args".
template_variables: A dictionary containing values of template variables for a run queue. Expected format: {"VAR_NAME": VAR_VALUE}.
project: Target project to send the launched run to.
entity: Target entity to send the launched run to.
queue_name: The name of the queue to enqueue the run to.
priority: The priority level of the job, where 1 is the highest priority.
resource: Execution backend for the run. W&B provides built-in support for the "local-container" backend.
entry_point: Entry point to run within the project. Defaults to the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
name: Name under which to launch the run.
version: For Git-based projects, either a commit hash or a branch name.
docker_image: The name of the docker image to use for the run.
resource_args: Resource-related arguments for launching runs onto a remote backend. Stored on the constructed launch config under resource_args.
run_id: Optional string indicating the ID of the launched run.
build: Optional flag, defaults to False. If True, requires a queue to be set; an image is created, a job artifact is created, and a reference to that job artifact is pushed to the queue.
repository: Optional string to control the name of the remote repository, used when pushing images to a registry.
project_queue: Optional string to control the name of the project for the queue. Primarily used for backward compatibility with project-scoped queues.
Example:
from wandb.sdk.launch import launch_add
project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}
# Queue a W&B launch run and create a reproducible docker environment
# on a local host; hyperparameter overrides are passed via the
# launch config's run_config overrides
launch_add(uri=project_uri, config={"overrides": {"run_config": params}})
Returns:
An instance of wandb.api.public.QueuedRun which gives information about the queued run, or, if wait_until_started or wait_until_finished are called, gives access to the underlying Run information.
Raises:
wandb.exceptions.LaunchError
if unsuccessful
3.4 - LaunchAgent
class LaunchAgent
Launch agent class which polls the given run queues and launches runs for wandb launch.
method LaunchAgent.__init__
__init__(api: wandb.apis.internal.Api, config: Dict[str, Any])
Initialize a launch agent.
Arguments:
api: Api object to use for making requests to the backend.
config: Config dictionary for the agent.
property LaunchAgent.num_running_jobs
Return the number of jobs not including schedulers.
property LaunchAgent.num_running_schedulers
Return just the number of schedulers.
property LaunchAgent.thread_ids
Returns a list of the thread IDs of the agent's running jobs.
method LaunchAgent.check_sweep_state
check_sweep_state(
launch_spec: Dict[str, Any],
api: wandb.apis.internal.Api
) → None
Check the state of a sweep before launching a run for the sweep.
method LaunchAgent.fail_run_queue_item
fail_run_queue_item(
run_queue_item_id: str,
message: str,
phase: str,
files: Optional[List[str]] = None
) → None
method LaunchAgent.finish_thread_id
finish_thread_id(
thread_id: int,
exception: Optional[Union[Exception, wandb.sdk.launch.errors.LaunchDockerError]] = None
) → None
Removes the job from our list for now.
method LaunchAgent.get_job_and_queue
get_job_and_queue() → Optional[wandb.sdk.launch.agent.agent.JobSpecAndQueue]
classmethod LaunchAgent.initialized
initialized() → bool
Return whether the agent is initialized.
method LaunchAgent.loop
loop() → None
Loop infinitely to poll for jobs and run them.
Raises:
KeyboardInterrupt
: if the agent is requested to stop.
classmethod LaunchAgent.name
name() → str
Return the name of the agent.
method LaunchAgent.pop_from_queue
pop_from_queue(queue: str) → Any
Pops an item off the runqueue to run as a job.
Arguments:
queue
: Queue to pop from.
Returns: Item popped off the queue.
Raises:
Exception
: if there is an error popping from the queue.
method LaunchAgent.print_status
print_status() → None
Prints the current status of the agent.
method LaunchAgent.run_job
run_job(
job: Dict[str, Any],
queue: str,
file_saver: wandb.sdk.launch.agent.run_queue_item_file_saver.RunQueueItemFileSaver
) → None
Set up project and run the job.
Arguments:
job
: Job to run.
method LaunchAgent.task_run_job
task_run_job(
launch_spec: Dict[str, Any],
job: Dict[str, Any],
default_config: Dict[str, Any],
api: wandb.apis.internal.Api,
job_tracker: wandb.sdk.launch.agent.job_status_tracker.JobAndRunStatusTracker
) → None
method LaunchAgent.update_status
update_status(status: str) → None
Update the status of the agent.
Arguments:
status
: Status to update the agent to.
3.5 - load_wandb_config
function load_wandb_config
load_wandb_config() → Config
Load wandb config from WANDB_CONFIG environment variable(s).
The WANDB_CONFIG environment variable is a json string that can contain multiple config keys. The WANDB_CONFIG_[0-9]+ environment variables are used for environments where there is a limit on the length of environment variables. In that case, we shard the contents of WANDB_CONFIG into multiple environment variables numbered from 0.
Returns: A dictionary of wandb config values.
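A hedged sketch of the sharding behavior described above, assuming load_wandb_config is importable from wandb.sdk.launch like the other Launch Library functions on this page:
import json
import os
from wandb.sdk.launch import load_wandb_config  # import path assumed from this page's module listing
# A single WANDB_CONFIG variable holding JSON...
os.environ["WANDB_CONFIG"] = json.dumps({"learning_rate": 0.001})
# ...or, where env var length is limited, the same JSON sharded across
# WANDB_CONFIG_0, WANDB_CONFIG_1, and so on
config = load_wandb_config()
print(config)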
3.6 - manage_config_file
function manage_config_file
manage_config_file(
path: str,
include: Optional[List[str]] = None,
exclude: Optional[List[str]] = None,
schema: Optional[Any] = None
)
Declare an overridable configuration file for a launch job.
If a new job version is created from the active run, the configuration file will be added to the job's inputs. If the job is launched and overrides have been provided for the configuration file, this function will detect the overrides from the environment and update the configuration file on disk. Note that these overrides will only be applied in ephemeral containers. include and exclude are lists of dot-separated paths within the config. The paths are used to filter subtrees of the configuration file out of the job's inputs.
For example, given the following configuration file:
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
Passing include=['model'] will only include the model subtree in the job's inputs. Passing exclude=['model.layers'] will exclude the layers key from the model subtree. Note that exclude takes precedence over include.
A dot (.) is used as a separator for nested keys. If a key contains a dot, it should be escaped with a backslash, e.g. include=[r'model\.layers']. Note the use of r to denote a raw string when using escape characters.
Args:
path (str): The path to the configuration file. This path must be relative and must not contain backwards traversal, i.e. "..".
include (List[str]): A list of keys to include in the configuration file.
exclude (List[str]): A list of keys to exclude from the configuration file.
schema (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts both an instance of a Pydantic BaseModel class or the BaseModel class itself.
Raises:
LaunchError
: If the path is not valid, or if there is no active run.
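A sketch of declaring an overridable config file from an active run, using the hypothetical file config.yaml from the example above; the import path is assumed to match this page's Launch Library module.
import wandb
from wandb.sdk.launch import manage_config_file  # import path assumed
with wandb.init(project="launch-inputs-example") as run:
    # Expose config.yaml as a job input, keeping only the model subtree
    # and hiding model.layers from overrides
    manage_config_file(
        "config.yaml",
        include=["model"],
        exclude=["model.layers"],
    )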
3.7 - manage_wandb_config
function manage_wandb_config
manage_wandb_config(
include: Optional[List[str]] = None,
exclude: Optional[List[str]] = None,
schema: Optional[Any] = None
)
Declare wandb.config as an overridable configuration for a launch job.
If a new job version is created from the active run, the run config (wandb.config) will become an overridable input of the job. If the job is launched and overrides have been provided for the run config, the overrides will be applied to the run config when wandb.init is called. include and exclude are lists of dot-separated paths within the config. The paths are used to filter subtrees of the run config out of the job's inputs.
For example, given the following run config contents:
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
Passing include=['model'] will only include the model subtree in the job's inputs. Passing exclude=['model.layers'] will exclude the layers key from the model subtree. Note that exclude takes precedence over include.
A dot (.) is used as a separator for nested keys. If a key contains a dot, it should be escaped with a backslash, e.g. include=[r'model\.layers']. Note the use of r to denote a raw string when using escape characters.
Args:
include (List[str]): A list of subtrees to include in the configuration.
exclude (List[str]): A list of subtrees to exclude from the configuration.
schema (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts both an instance of a Pydantic BaseModel class or the BaseModel class itself.
Raises:
LaunchError
: If there is no active run.
4 - SDK
Use during training to log experiments, track metrics, and save model artifacts.
4.1 - agent
function agent
agent(
sweep_id: str,
function: Optional[Callable] = None,
entity: Optional[str] = None,
project: Optional[str] = None,
count: Optional[int] = None
) → None
Start one or more sweep agents.
The sweep agent uses the sweep_id
to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.
Args:
sweep_id: The unique identifier for a sweep. A sweep ID is generated by the W&B CLI or Python SDK.
function: A function to call instead of the "program" specified in the sweep config.
entity: The username or team name where you want to send W&B runs created by the sweep. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
project: The name of the project where W&B runs created from the sweep are sent. If the project is not specified, the run is sent to a project labeled "Uncategorized".
count: The number of sweep config trials to try.
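For example, a minimal sketch; the sweep configuration, metric names, and project name are placeholders.
import wandb

def train():
    with wandb.init() as run:
        # Hyperparameters are injected into run.config by the sweep
        lr = run.config.get("learning_rate", 0.01)
        run.log({"loss": 1.0 / (1.0 + lr)})

sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"values": [0.001, 0.01, 0.1]}},
}
sweep_id = wandb.sweep(sweep_config, project="sweep-example")
wandb.agent(sweep_id, function=train, count=3)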
4.2 - Artifact
class Artifact
Flexible and lightweight building block for dataset and model versioning.
Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with add. Once the artifact has all the desired files, you can call wandb.log_artifact() to log it.
Args:
name (str): A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
type (str): The artifact's type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model. Include model within your type string if you want to link the artifact to the W&B Model Registry.
description (str | None) = None: A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact's description programmatically with the Artifact.description attribute or in the W&B App UI. W&B renders the description as markdown in the W&B App.
metadata (dict[str, Any] | None) = None: Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
incremental: Use the Artifact.new_draft() method instead to modify an existing artifact.
use_as: W&B Launch specific parameter. Not recommended for general use.
Returns:
An Artifact
object.
method Artifact.__init__
__init__(
name: 'str',
type: 'str',
description: 'str | None' = None,
metadata: 'dict[str, Any] | None' = None,
incremental: 'bool' = False,
use_as: 'str | None' = None
) → None
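A minimal sketch of constructing and logging an artifact; the names, metadata, and file path are placeholders, and data.csv is assumed to exist locally.
import wandb
with wandb.init(project="artifact-example") as run:
    artifact = wandb.Artifact(
        name="training-dataset",
        type="dataset",
        description="Example dataset artifact",
        metadata={"rows": 1000},
    )
    # Add a local file that is assumed to exist
    artifact.add_file("data.csv")
    run.log_artifact(artifact)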
property Artifact.aliases
List of one or more semantically-friendly references or identifying "nicknames" assigned to an artifact version.
Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.
property Artifact.collection
The collection this artifact was retrieved from.
A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.
property Artifact.commit_hash
The hash returned when this artifact was committed.
property Artifact.created_at
Timestamp when the artifact was created.
property Artifact.description
A description of the artifact.
property Artifact.digest
The logical digest of the artifact.
The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest
version, then log_artifact
is a no-op.
property Artifact.distributed_id
property Artifact.entity
The name of the entity of the secondary (portfolio) artifact collection.
property Artifact.file_count
The number of files (including references).
property Artifact.id
The artifact’s ID.
property Artifact.incremental
property Artifact.manifest
The artifact’s manifest.
The manifest lists all of its contents, and can’t be changed once the artifact has been logged.
property Artifact.metadata
User-defined artifact metadata.
Structured data associated with the artifact.
property Artifact.name
The artifact name and version in its secondary (portfolio) collection.
A string with the format {collection}:{alias}
. Before the artifact is saved, contains only the name since the version is not yet known.
property Artifact.project
The name of the project of the secondary (portfolio) artifact collection.
property Artifact.qualified_name
The entity/project/name of the secondary (portfolio) collection.
property Artifact.size
The total size of the artifact in bytes.
Includes any references tracked by this artifact.
property Artifact.source_collection
The artifact’s primary (sequence) collection.
property Artifact.source_entity
The name of the entity of the primary (sequence) artifact collection.
property Artifact.source_name
The artifact name and version in its primary (sequence) collection.
A string with the format {collection}:{alias}
. Before the artifact is saved, contains only the name since the version is not yet known.
property Artifact.source_project
The name of the project of the primary (sequence) artifact collection.
property Artifact.source_qualified_name
The entity/project/name of the primary (sequence) collection.
property Artifact.source_version
The artifact’s version in its primary (sequence) collection.
A string with the format v{number}
.
property Artifact.state
The status of the artifact. One of: “PENDING”, “COMMITTED”, or “DELETED”.
property Artifact.tags
List of one or more tags assigned to this artifact version.
property Artifact.ttl
The time-to-live (TTL) policy of an artifact.
Artifacts are deleted shortly after a TTL policy's duration passes. If set to None, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on the artifact.
Raises:
ArtifactNotLoggedError
: Unable to fetch inherited TTL if the artifact has not been logged or saved.
property Artifact.type
The artifact’s type. Common types include dataset
or model
.
property Artifact.updated_at
The time when the artifact was last updated.
property Artifact.url
Constructs the URL of the artifact.
Returns:
str
: The URL of the artifact.
property Artifact.use_as
property Artifact.version
The artifact’s version in its secondary (portfolio) collection.
method Artifact.add
add(
obj: 'WBValue',
name: 'StrPath',
overwrite: 'bool' = False
) → ArtifactManifestEntry
Add wandb.WBValue obj
to the artifact.
Args:
obj: The object to add. Currently supports one of Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D.
name: The path within the artifact to add the object.
overwrite: If True, overwrite existing objects with the same file path, if applicable.
Returns: The added manifest entry
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
method Artifact.add_dir
add_dir(
local_path: 'str',
name: 'str | None' = None,
skip_cache: 'bool | None' = False,
policy: "Literal['mutable', 'immutable'] | None" = 'mutable'
) → None
Add a local directory to the artifact.
Args:
local_path: The path of the local directory.
name: The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by the artifact's type. Defaults to the root of the artifact.
skip_cache: If set to True, W&B will not copy/move files to the cache while uploading.
policy: "mutable" by default. "mutable" creates a temporary copy of the file to prevent corruption during upload. "immutable" disables protection and relies on the user not to delete or change the file.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.ValueError
: Policy must be “mutable” or “immutable”
method Artifact.add_file
add_file(
local_path: 'str',
name: 'str | None' = None,
is_tmp: 'bool | None' = False,
skip_cache: 'bool | None' = False,
policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
overwrite: 'bool' = False
) → ArtifactManifestEntry
Add a local file to the artifact.
Args:
local_path: The path to the file being added.
name: The path within the artifact to use for the file being added. Defaults to the basename of the file.
is_tmp: If true, the file is renamed deterministically to avoid collisions.
skip_cache: If True, do not copy files to the cache after uploading.
policy: "mutable" by default. If set to "mutable", create a temporary copy of the file to prevent corruption during upload. If set to "immutable", disable protection and rely on the user not to delete or change the file.
overwrite: If True, overwrite the file if it already exists.
Returns: The added manifest entry.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.ValueError
: Policy must be “mutable” or “immutable”
method Artifact.add_reference
add_reference(
uri: 'ArtifactManifestEntry | str',
name: 'StrPath | None' = None,
checksum: 'bool' = True,
max_objects: 'int | None' = None
) → Sequence[ArtifactManifestEntry]
Add a reference denoted by a URI to the artifact.
Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.
By default, the following schemes are supported:
- http(s): The size and digest of the file will be inferred from the Content-Length and ETag response headers returned by the server.
- s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- https, domain matching *.blob.core.windows.net (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
- file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.
For any other scheme, the digest is just a hash of the URI and the size is left blank.
Args:
uri: The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact's entry.
name: The path within the artifact to place the contents of this reference.
checksum: Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes.
max_objects: The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.
Returns: The added manifest entries.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
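A hedged sketch of tracking external files by reference; the bucket URI and names are placeholders.
import wandb
with wandb.init(project="artifact-example") as run:
    artifact = wandb.Artifact("raw-data", type="dataset")
    # Track objects under an S3 prefix without uploading them to W&B
    artifact.add_reference("s3://my-bucket/datasets/train/", name="train")
    run.log_artifact(artifact)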
method Artifact.checkout
checkout(root: 'str | None' = None) → str
Replace the specified root directory with the contents of the artifact.
WARNING: This will delete all files in root
that are not included in the artifact.
Args:
root
: The directory to replace with this artifact’s files.
Returns: The path of the checked out contents.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.delete
delete(delete_aliases: 'bool' = False) → None
Delete an artifact and its files.
If called on a linked artifact, only the link is deleted, and the source artifact is unaffected.
Args:
delete_aliases
: If set toTrue
, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (a member of a portfolio collection).
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.download
download(
root: 'StrPath | None' = None,
allow_missing_references: 'bool' = False,
skip_cache: 'bool | None' = None,
path_prefix: 'StrPath | None' = None
) → FilePathStr
Download the contents of the artifact to the specified root directory.
Existing files located within root
are not modified. Explicitly delete root
before you call download
if you want the contents of root
to exactly match the artifact.
Args:
root: The directory where W&B stores the artifact's files.
allow_missing_references: If set to True, any invalid reference paths will be ignored while downloading referenced files.
skip_cache: If set to True, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
path_prefix: If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
Returns: The path to the downloaded contents.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.RuntimeError
: If the artifact is attempted to be downloaded in offline mode.
method Artifact.file
file(root: 'str | None' = None) → StrPath
Download a single file artifact to the directory you specify with root
.
Args:
root: The root directory to store the file. Defaults to ./artifacts/self.name/.
Returns: The full path of the downloaded file.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.ValueError
: If the artifact contains more than one file.
method Artifact.files
files(names: 'list[str] | None' = None, per_page: 'int' = 50) → ArtifactFiles
Iterate over all files stored in this artifact.
Args:
names: The filename paths relative to the root of the artifact you wish to list.
per_page: The number of files to return per request.
Returns:
An iterator containing File
objects.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.finalize
finalize() → None
Finalize the artifact version.
You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with log_artifact
.
method Artifact.get
get(name: 'str') → WBValue | None
Get the WBValue object located at the artifact relative name
.
Args:
name
: The artifact relative name to retrieve.
Returns:
W&B object that can be logged with wandb.log()
and visualized in the W&B UI.
Raises:
ArtifactNotLoggedError
: if the artifact isn’t logged or the run is offline.
method Artifact.get_added_local_path_name
get_added_local_path_name(local_path: 'str') → str | None
Get the artifact relative name of a file added by a local filesystem path.
Args:
local_path
: The local path to resolve into an artifact relative name.
Returns: The artifact relative name.
method Artifact.get_entry
get_entry(name: 'StrPath') → ArtifactManifestEntry
Get the entry with the given name.
Args:
name
: The artifact relative name to get
Returns:
A W&B
object.
Raises:
ArtifactNotLoggedError
: if the artifact isn’t logged or the run is offline.KeyError
: if the artifact doesn’t contain an entry with the given name.
method Artifact.get_path
get_path(name: 'StrPath') → ArtifactManifestEntry
Deprecated. Use get_entry(name)
.
method Artifact.is_draft
is_draft() → bool
Check if artifact is not saved.
Returns:
Boolean. False
if artifact is saved. True
if artifact is not saved.
method Artifact.json_encode
json_encode() → dict[str, Any]
Returns the artifact encoded to the JSON format.
Returns:
A dict
with string
keys representing attributes of the artifact.
method Artifact.link
link(target_path: 'str', aliases: 'list[str] | None' = None) → None
Link this artifact to a portfolio (a promoted collection of artifacts).
Args:
target_path: The path to the portfolio inside a project. The target path must adhere to one of the following schemas: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema: {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name}.
aliases: A list of strings that uniquely identifies the artifact inside the specified portfolio.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
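A sketch of linking a logged model artifact to the Model Registry; the artifact name, file path, registered model name, and alias are placeholders.
import wandb
with wandb.init(project="artifact-example") as run:
    artifact = wandb.Artifact("my-model", type="model")
    artifact.add_file("model.pt")  # assumed to exist locally
    logged = run.log_artifact(artifact)
    logged.wait()  # ensure the artifact is committed before linking
    logged.link("model-registry/Example Registered Model", aliases=["staging"])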
method Artifact.logged_by
logged_by() → Run | None
Get the W&B run that originally logged the artifact.
Returns: The W&B run that originally logged the artifact, if any.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.new_draft
new_draft() → Artifact
Create a new draft artifact with the same content as this committed artifact.
Modifying an existing artifact creates a new artifact version known as an “incremental artifact”. The artifact returned can be extended or modified and logged as a new version.
Returns:
An Artifact
object.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.new_file
new_file(
name: 'str',
mode: 'str' = 'x',
encoding: 'str | None' = None
) → Iterator[IO]
Open a new temporary file and add it to the artifact.
Args:
name: The name of the new file to add to the artifact.
mode: The file access mode to use to open the new file.
encoding: The encoding used to open the new file.
Returns: A new file object that can be written to. Upon closing, the file is automatically added to the artifact.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
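For example, a brief sketch with placeholder names:
import wandb
with wandb.init(project="artifact-example") as run:
    artifact = wandb.Artifact("notes", type="dataset")
    # new_file opens a temporary file that is added to the artifact on close
    with artifact.new_file("readme.txt", mode="w") as f:
        f.write("Generated inside the artifact.")
    run.log_artifact(artifact)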
method Artifact.remove
remove(item: 'StrPath | ArtifactManifestEntry') → None
Remove an item from the artifact.
Args:
item
: The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory all items in that directory will be removed.
Raises:
ArtifactFinalizedError
: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.FileNotFoundError
: If the item isn’t found in the artifact.
method Artifact.save
save(
project: 'str | None' = None,
settings: 'wandb.Settings | None' = None
) → None
Persist any changes made to the artifact.
If currently in a run, that run will log this artifact. If not currently in a run, a run of type “auto” is created to track this artifact.
Args:
project: A project to use for the artifact in the case that a run is not already in context.
settings: A settings object to use when initializing an automatic run. Most commonly used in a testing harness.
method Artifact.unlink
unlink() → None
Unlink this artifact if it is currently a member of a promoted collection of artifacts.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.ValueError
: If the artifact is not linked, in other words, it is not a member of a portfolio collection.
method Artifact.used_by
used_by() → list[Run]
Get a list of the runs that have used this artifact.
Returns:
A list of Run
objects.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.
method Artifact.verify
verify(root: 'str | None' = None) → None
Verify that the contents of an artifact match the manifest.
All files in the directory are checksummed and the checksums are then cross-referenced against the artifact’s manifest. References are not verified.
Args:
root: The directory to verify. If None, the artifact will be downloaded to ./artifacts/self.name/.
Raises:
ArtifactNotLoggedError
: If the artifact is not logged.ValueError
: If the verification fails.
method Artifact.wait
wait(timeout: 'int | None' = None) → Artifact
If needed, wait for this artifact to finish logging.
Args:
timeout
: The time, in seconds, to wait.
Returns:
An Artifact
object.
4.3 - controller
function controller
controller(
sweep_id_or_config: Optional[Union[str, Dict]] = None,
entity: Optional[str] = None,
project: Optional[str] = None
) → _WandbController
Public sweep controller constructor.
Examples:
import wandb
tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)
4.4 - define_metric
function wandb.define_metric
wandb.define_metric(
name: 'str',
step_metric: 'str | wandb_metric.Metric | None' = None,
step_sync: 'bool | None' = None,
hidden: 'bool | None' = None,
summary: 'str | None' = None,
goal: 'str | None' = None,
overwrite: 'bool | None' = None
) → wandb_metric.Metric
Customize metrics logged with wandb.log().
Args:
- name: The name of the metric to customize.
- step_metric: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
- step_sync: Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
- hidden: Hide this metric from automatic plots.
- summary: Specify aggregate metrics added to summary. Supported aggregations include "min", "max", "mean", "last", "best", "copy" and "none". "best" is used together with the goal parameter. "none" prevents a summary from being generated. "copy" is deprecated and should not be used.
- goal: Specify how to interpret the "best" summary type. Supported options are "minimize" and "maximize".
- overwrite: If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
Returns: An object that represents this call but can otherwise be discarded.
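As a sketch, using define_metric to plot a metric against a custom step and keep only its minimum in the summary; the project and metric names are placeholders:
import wandb

with wandb.init(project="my-project") as run:  # placeholder project
    run.define_metric("epoch")
    run.define_metric("val/loss", step_metric="epoch", summary="min", goal="minimize")

    for epoch in range(10):
        val_loss = 1.0 / (epoch + 1)  # placeholder validation loss
        run.log({"epoch": epoch, "val/loss": val_loss})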
4.5 - Error
class Error
Base W&B Error.
method Error.__init__
__init__(message, context: Optional[dict] = None) → None
4.6 - finish
function finish
finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.
Run States:
- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (exit_code=0) with all data synced.
- Failed: Run completed with errors (exit_code!=0).
Args:
- exit_code: Integer indicating the run's exit status. Use 0 for success; any other value marks the run as failed.
- quiet: Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).
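For illustration, a minimal sketch that marks a run as failed when the surrounding code raises; the project name is a placeholder:
import wandb

run = wandb.init(project="my-project")  # placeholder project
try:
    run.log({"loss": 0.1})  # placeholder training step
    run.finish()            # exit_code defaults to 0, marking the run as finished
except Exception:
    run.finish(exit_code=1)  # a non-zero exit code marks the run as failed
    raise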
4.7 - init
function init
init(
entity: 'str | None' = None,
project: 'str | None' = None,
dir: 'StrPath | None' = None,
id: 'str | None' = None,
name: 'str | None' = None,
notes: 'str | None' = None,
tags: 'Sequence[str] | None' = None,
config: 'dict[str, Any] | str | None' = None,
config_exclude_keys: 'list[str] | None' = None,
config_include_keys: 'list[str] | None' = None,
allow_val_change: 'bool | None' = None,
group: 'str | None' = None,
job_type: 'str | None' = None,
mode: "Literal['online', 'offline', 'disabled'] | None" = None,
force: 'bool | None' = None,
anonymous: "Literal['never', 'allow', 'must'] | None" = None,
reinit: "bool | Literal[None, 'default', 'return_previous', 'finish_previous']" = None,
resume: "bool | Literal['allow', 'never', 'must', 'auto'] | None" = None,
resume_from: 'str | None' = None,
fork_from: 'str | None' = None,
save_code: 'bool | None' = None,
tensorboard: 'bool | None' = None,
sync_tensorboard: 'bool | None' = None,
monitor_gym: 'bool | None' = None,
settings: 'Settings | dict[str, Any] | None' = None
) → Run
Start a new run to track and log to W&B.
In an ML training pipeline, you could add wandb.init()
to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.
wandb.init()
spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time. When you’re done logging data, call wandb.finish()
to end the run. If you don’t call run.finish()
, the run will end when your script exits.
Run IDs must not contain any of the following special characters / \ # ? % :
Args:
- entity: The username or team name the runs are logged to. The entity must already exist, so ensure you create your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to your settings and update the "Default location to create new projects" under "Default team".
- project: The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can't infer the project name, the project will default to "uncategorized".
- dir: The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the ./wandb directory. Note that this does not affect the location where artifacts are stored when calling download().
- id: A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config.
- name: A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, allowing easy cross-referencing of runs from tables to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field.
- notes: A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
- tags: A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like "baseline" or "production". You can easily add or remove tags, or filter by tags, in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ["new_tag"] after calling run = wandb.init().
- config: Sets wandb.config, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI on an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace, or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config.
- config_exclude_keys: A list of specific keys to exclude from wandb.config.
- config_include_keys: A list of specific keys to include in wandb.config.
- allow_val_change: Controls whether config values can be modified after they are first set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in notebook environments.
- group: Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment.
- job_type: Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as "train" and "eval". Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
- mode: Specifies how run data is managed, with the following options:
  - "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations.
  - "offline": Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing.
  - "disabled": Disables all W&B functionality, making the run's methods no-ops. Typically used in testing to bypass W&B operations.
- force: Determines if a W&B login is required to run the script. If True, the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
- anonymous: Specifies the level of control over anonymous data logging. Available options are:
  - "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account.
  - "allow": Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI.
  - "must": Forces the run to be logged to an anonymous account, even if the user is logged in.
- reinit: Shorthand for the "reinit" setting. Determines the behavior of wandb.init() when a run is active.
- resume: Controls the behavior when resuming a run with the specified id. Available options are:
  - "allow": If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created.
  - "never": If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created.
  - "must": If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised.
  - "auto": Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run.
  - True: Deprecated. Use "auto" instead.
  - False: Deprecated. Use the default behavior (leaving resume unset) to always start a new run.
  If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run.
- resume_from: Specifies a moment in a previous run to resume from, using the format {run_id}?_step={step}. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume, resume_from, and fork_from cannot be used together; only one of them can be used at a time. Note that this feature is in beta and may change in the future.
- fork_from: Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step}. This creates a new run that resumes logging from the specified step in the target run's history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument; an error will be raised if they are the same. resume, resume_from, and fork_from cannot be used together; only one of them can be used at a time. Note that this feature is in beta and may change in the future.
- save_code: Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enabled on your settings page.
- tensorboard: Deprecated. Use sync_tensorboard instead.
- sync_tensorboard: Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False)
- monitor_gym: Enables automatic logging of videos of the environment when using OpenAI Gym.
- settings: Specifies a dictionary or wandb.Settings object with advanced settings for the run.
Raises:
- Error: If some unknown or internal error happened during the run initialization.
- AuthenticationError: If the user failed to provide valid credentials.
- CommError: If there was a problem communicating with the W&B server.
- UsageError: If the user provided invalid arguments.
- KeyboardInterrupt: If the user interrupts the run.
Returns: A Run object.
Examples:
wandb.init() returns a run object, and you can also access the run object with wandb.run:
import wandb
config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
run.config.update({"architecture": "resnet", "depth": 34})
# ... your training code here ...
4.8 - link_model
function wandb.link_model
wandb.link_model(
path: 'StrPath',
registered_model_name: 'str',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Log a model artifact version and link it to a registered model in the model registry.
The linked model version will be visible in the UI for the specified registered model.
First, check whether a model artifact named name has been logged. If so, use the artifact version that matches the files located at path, or log a new version. Otherwise, log the files under path as a new model artifact named name of type "model".
Next, check whether a registered model with the name registered_model_name exists in the 'model-registry' project. If not, create a new registered model with the name registered_model_name.
Then:
- Link the version of the model artifact name to the registered model registered_model_name.
- Attach the aliases from the aliases list to the newly linked model artifact version.
Args:
- path: A path to the contents of this model, which can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
- registered_model_name: The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team's specific ML task. The entity that this registered model belongs to will be derived from the run.
- name: The name of the model artifact that files in path will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
- aliases: Alias(es) that will only be applied on this linked artifact inside the registered model. The alias "latest" will always be applied to the latest version of an artifact that is linked.
Raises:
- AssertionError: If registered_model_name is a path, or if the model artifact name is of a type that does not contain the substring 'model'.
- ValueError: If name has invalid special characters.
Returns: None
Examples:
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.link_model(
path="/local/directory",
registered_model_name="my_entity/my_project/my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
4.9 - log
function wandb.log
wandb.log(
data: 'dict[str, Any]',
step: 'int | None' = None,
commit: 'bool | None' = None,
sync: 'bool | None' = None
) → None
Upload run data.
Use log
to log data from runs, such as scalars, images, video, histograms, plots, and tables.
See our guides to logging for live examples, code snippets, best practices, and more.
The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9})
. This will save the loss and accuracy to the run’s history and update the summary values for these metrics.
Visualize logged data in the workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, e.g. in Jupyter notebooks, with our API.
Logged values don’t have to be scalars. Logging any wandb object is supported. For example run.log({"example": wandb.Image("myimage.jpg")})
will log an example image which will be displayed nicely in the W&B UI. See the reference documentation for all of the different supported types or check out our guides to logging for examples, from 3D molecular structures and segmentation masks to PR curves and histograms. You can use wandb.Table
to log structured data. See our guide to logging tables for details.
The W&B UI organizes metrics with a forward slash (/
) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported; run.log({"a/b/c": 1})
produces a section named “a/b”.
run.log
is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
With basic usage, each call to log
creates a new “step”. The step must always increase, and it is not possible to log to a previous step.
Note that you can use any metric as the X axis in charts. In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
See also define_metric.
It is possible to use multiple log
invocations to log to the same step with the step
and commit
parameters. The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args:
- data: A dict with str keys and values that are serializable Python objects, including: int, float, and string; any of the wandb.data_types; lists, tuples, and NumPy arrays of serializable Python objects; other dicts of this structure.
- step: The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
- commit: If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
- sync: This argument is deprecated and does nothing.
Raises:
- wandb.Error: If called before wandb.init.
- ValueError: If invalid data is passed.
Examples:
# Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
# Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
# Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
# Image from numpy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# Video from numpy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
)
run.log({"video": wandb.Video(frames, fps=4)})
# Matplotlib Plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
# PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
# 3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
For more examples and more detail, see our guides to logging.
4.10 - log_artifact
function wandb.log_artifact
wandb.log_artifact(
artifact_or_path: 'Artifact | StrPath',
name: 'str | None' = None,
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
tags: 'list[str] | None' = None
) → Artifact
Declare an artifact as an output of a run.
Args:
- artifact_or_path: A path to the contents of this artifact, which can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
- name (Optional[str]): An artifact name. Defaults to the basename of the path prepended with the current run id if not specified. Valid names can be in the following forms:
  - name:version
  - name:alias
  - digest
- type: The type of artifact to log; examples include dataset, model.
- aliases: Aliases to apply to this artifact, defaults to ["latest"].
- tags: Tags to apply to this artifact, if any.
Returns: An Artifact object.
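As a sketch, logging a local directory as a dataset artifact with an extra alias and tag; the project, directory, and artifact names are placeholders:
import wandb

with wandb.init(project="my-project") as run:  # placeholder project
    artifact = run.log_artifact(
        "./data",              # assumes this directory exists locally
        name="training-data",
        type="dataset",
        aliases=["latest", "baseline"],
        tags=["tabular"],
    )
    artifact.wait()  # optionally block until the upload finishes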
4.11 - log_model
function wandb.log_model
wandb.log_model(
path: 'StrPath',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Logs a model artifact containing the contents inside the path
to a run and marks it as an output to this run.
Args:
- path: A path to the contents of this model, which can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
- name: A name to assign to the model artifact that the file contents will be added to. The string may contain only alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
- aliases: Aliases to apply to the created model artifact, defaults to ["latest"].
Returns: None
Raises:
- ValueError: If name has invalid special characters.
Examples:
run.log_model(
path="/local/directory",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.log_model(
path="/local/directory",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
4.12 - login
function login
login(
anonymous: Optional[Literal['must', 'allow', 'never']] = None,
key: Optional[str] = None,
relogin: Optional[bool] = None,
host: Optional[str] = None,
force: Optional[bool] = None,
timeout: Optional[int] = None,
verify: bool = False,
referrer: Optional[str] = None
) → bool
Set up W&B login credentials.
By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True
.
Args:
- anonymous: Set to "must", "allow", or "never". If set to "must", always log a user in anonymously. If set to "allow", only create an anonymous user if the user isn't already logged in. If set to "never", never log a user anonymously. Default set to "never".
- key: The API key to use.
- relogin: If true, will re-prompt for the API key.
- host: The host to connect to.
- force: If true, will force a relogin.
- timeout: Number of seconds to wait for user input.
- verify: Verify the credentials with the W&B server.
- referrer: The referrer to use in the URL login request.
Returns:
- bool: True if the API key is configured.
Raises:
- AuthenticationError: If api_key fails verification with the server.
- UsageError: If api_key cannot be configured and there is no tty.
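For example, a minimal sketch that stores an API key and verifies it against the server before continuing; the key value is a placeholder:
import wandb

# verify=True checks the key with the W&B server instead of only storing it locally.
if wandb.login(key="YOUR_API_KEY", verify=True):  # placeholder key
    print("credentials configured")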
4.13 - plot
module wandb
Chart Visualization Utilities
This module offers a collection of predefined chart types, along with functionality for creating custom charts, enabling flexible visualization of your data beyond the built-in options.
Global Variables
- custom_chart
- utils
- viz
4.14 - plot_table
function plot_table
plot_table(
vega_spec_name: 'str',
data_table: 'wandb.Table',
fields: 'dict[str, Any]',
string_fields: 'dict[str, Any] | None' = None,
split_table: 'bool' = False
) → CustomChart
Create a custom chart using a Vega-Lite specification and a wandb.Table.
This function creates a custom chart based on a Vega-Lite specification and a data table represented by a wandb.Table
object. The specification needs to be predefined and stored in the W&B backend. The function returns a custom chart object that can be logged to W&B using wandb.log()
.
Args:
- vega_spec_name: The name or identifier of the Vega-Lite spec that defines the visualization structure.
- data_table: A wandb.Table object containing the data to be visualized.
- fields: A mapping between the fields in the Vega-Lite spec and the corresponding columns in the data table to be visualized.
- string_fields: A dictionary for providing values for any string constants required by the custom visualization.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.
Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().
Raises:
- wandb.Error: If data_table is not a wandb.Table object.
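As a sketch, building a custom line chart from a wandb.Table; the project name, spec name, and field mapping are assumptions for illustration:
import wandb

with wandb.init(project="my-project") as run:  # placeholder project
    table = wandb.Table(data=[[1, 2], [2, 5], [3, 7]], columns=["step", "loss"])
    chart = wandb.plot_table(
        vega_spec_name="wandb/line/v0",  # assumed built-in line-chart spec
        data_table=table,
        fields={"x": "step", "y": "loss"},
        string_fields={"title": "loss over steps"},
    )
    run.log({"loss_chart": chart})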
4.15 - restore
function restore
restore(
name: 'str',
run_path: 'str | None' = None,
replace: 'bool' = False,
root: 'str | None' = None
) → None | TextIO
Download the specified file from cloud storage.
The file is placed into the current directory or run directory. By default, the file is only downloaded if it doesn't already exist.
Args:
- name: The name of the file.
- run_path: Path to a run to pull files from, in the form username/project_name/run_id. If wandb.init has not been called, this is required.
- replace: Whether to download the file even if it already exists locally.
- root: The directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.
Returns: None if it can’t find the file, otherwise a file object open for reading
Raises:
- wandb.CommError: If we can't connect to the wandb backend.
- ValueError: If the file is not found, or if run_path cannot be found.
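For illustration, a minimal sketch of restoring a file from a previous run; the file name and run path are placeholders:
import wandb

restored = wandb.restore(
    "model.h5",                              # placeholder file name
    run_path="my-entity/my-project/abc123",  # placeholder run path
)
if restored is not None:
    print(restored.name)  # local path of the downloaded file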
4.16 - save
function wandb.save
wandb.save(
glob_str: 'str | os.PathLike | None' = None,
base_path: 'str | os.PathLike | None' = None,
policy: 'PolicyName' = 'live'
) → bool | list[str]
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save
is called regardless of the policy
. In particular, new files are not picked up automatically.
A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved. It's best understood through the examples below.
Note: when given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.
Args:
- glob_str: A relative or absolute path or Unix glob.
- base_path: A path to use to infer a directory structure; see examples.
- policy: One of live, now, or end.
  - live: upload the file as it changes, overwriting the previous version
  - now: upload the file once now
  - end: upload the file when the run ends
Returns: Paths to the symlinks created for the matched files.
For historical reasons, this may return a boolean in legacy code.
Examples:
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
4.17 - setup
function setup
setup(settings: 'Settings | None' = None) → _WandbSetup
Prepares W&B for use in the current process and its children.
You can usually ignore this as it is implicitly called by wandb.init()
.
When using wandb in multiple processes, calling wandb.setup()
in the parent process before starting child processes may improve performance and resource utilization.
Note that wandb.setup()
modifies os.environ
, and it is important that child processes inherit the modified environment variables.
See also wandb.teardown()
.
Args:
- settings: Configuration settings to apply globally. These can be overridden by subsequent wandb.init() calls.
Example:
import multiprocessing
import wandb
def run_experiment(params):
with wandb.init(config=params):
# Run experiment
pass
if __name__ == "__main__":
# Start backend and set global config
wandb.setup(settings={"project": "my_project"})
# Define experiment parameters
experiment_params = [
{"learning_rate": 0.01, "epochs": 10},
{"learning_rate": 0.001, "epochs": 20},
]
# Start multiple processes, each running a separate experiment
processes = []
for params in experiment_params:
p = multiprocessing.Process(target=run_experiment, args=(params,))
p.start()
processes.append(p)
# Wait for all processes to complete
for p in processes:
p.join()
# Optional: Explicitly shut down the backend
wandb.teardown()
4.18 - sweep
function sweep
sweep(
sweep: Union[dict, Callable],
entity: Optional[str] = None,
project: Optional[str] = None,
prior_runs: Optional[List[str]] = None
) → str
Initialize a hyperparameter sweep.
Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Make note of the unique identifier, sweep_id, that is returned. At a later step, provide the sweep_id to a sweep agent.
See Sweep configuration structure for information on how to define your sweep.
Args:
- sweep: The configuration of a hyperparameter search (or a configuration generator). If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
- entity: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
- project: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled 'Uncategorized'.
- prior_runs: The run IDs of existing runs to add to this sweep.
Returns:
- sweep_id: str. A unique identifier for the sweep.
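As a sketch, defining a sweep configuration, registering it, and running an agent against it; the project name, metric, and parameter ranges are placeholders:
import wandb

sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="my-project")  # placeholder project

def train():
    with wandb.init() as run:
        # Placeholder objective: report a value for the sweep metric.
        run.log({"val_loss": run.config.learning_rate})

wandb.agent(sweep_id, function=train, count=5)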
4.19 - termwarn
function termwarn
termwarn(
string: 'str',
newline: 'bool' = True,
repeat: 'bool' = True,
prefix: 'bool' = True
) → None
Log a warning to stderr.
The arguments are the same as for termlog().
4.20 - unwatch
function wandb.unwatch
wandb.unwatch(
models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None
Remove PyTorch model topology, gradient, and parameter hooks.
Args:
- models: Optional list of PyTorch models that have had watch called on them.
4.21 - use_artifact
function wandb.use_artifact
wandb.use_artifact(
artifact_or_name: 'str | Artifact',
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
use_as: 'str | None' = None
) → Artifact
Declare an artifact as an input to a run.
Call download
or file
on the returned object to get the contents locally.
Args:
- artifact_or_name: An artifact name. May be prefixed with project/ or entity/project/. You can also pass an Artifact object created by calling wandb.Artifact. If no entity is specified in the name, the Run or API setting's entity is used. Valid names can be in the following forms:
  - name:version
  - name:alias
- type: The type of artifact to use.
- aliases: Aliases to apply to this artifact.
- use_as: Optional string indicating what purpose the artifact was used with. Will be shown in the UI.
Returns: An Artifact object.
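For example, a minimal sketch that declares a dataset artifact as an input and downloads its contents; the project and artifact names are placeholders:
import wandb

with wandb.init(project="my-project") as run:  # placeholder project
    artifact = run.use_artifact("training-data:latest", type="dataset")
    data_dir = artifact.download()  # local directory containing the artifact files
    print(data_dir)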
4.22 - use_model
function wandb.use_model
wandb.use_model(name: 'str') → FilePathStr
Download the files logged in a model artifact name.
Args:
- name: A model artifact name. name must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms:
  - model_artifact_name:version
  - model_artifact_name:alias
Raises:
- AssertionError: If the model artifact name is of a type that does not contain the substring 'model'.
Returns:
- path: Path to the downloaded model artifact file(s).
Examples:
run.use_model(
name="my_model_artifact:latest",
)
run.use_model(
name="my_project/my_model_artifact:v0",
)
run.use_model(
name="my_entity/my_project/my_model_artifact:<digest>",
)
Invalid usage
run.use_model(
name="my_entity/my_project/my_model_artifact",
)
4.23 - watch
function wandb.watch
wandb.watch(
models: 'torch.nn.Module | Sequence[torch.nn.Module]',
criterion: 'torch.F | None' = None,
log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
log_freq: 'int' = 1000,
idx: 'int | None' = None,
log_graph: 'bool' = False
) → None
Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training. It should be extended to support arbitrary machine learning models in the future.
Args:
- models: A single model or a sequence of models to be monitored.
- criterion: The loss function being optimized (optional).
- log: Specifies whether to log "gradients", "parameters", or "all". Set to None to disable logging. (default="gradients")
- log_freq: Frequency (in batches) to log gradients and parameters. (default=1000)
- idx: Index used when tracking multiple models with wandb.watch. (default=None)
- log_graph: Whether to log the model's computational graph. (default=False)
Raises:
- ValueError: If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.
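As a sketch, watching a PyTorch model during a short training loop and removing the hooks afterwards with unwatch; the project name and model are placeholders:
import torch
import torch.nn as nn
import wandb

model = nn.Linear(10, 1)  # placeholder model
run = wandb.init(project="my-project")  # placeholder project

# Log gradients and parameters every 100 batches.
run.watch(model, log="all", log_freq=100)

for step in range(500):
    x = torch.randn(32, 10)
    loss = model(x).pow(2).mean()  # placeholder loss
    loss.backward()
    run.log({"loss": loss.item()})
    model.zero_grad()

run.unwatch(model)  # remove the gradient and parameter hooks
run.finish()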