Clients API Reference
API documentation for Azure service clients.
Types
aio_azure_clients_toolbox.clients.types
Azure Blob Storage
aio_azure_clients_toolbox.clients.azure_blobs
AzureBlobStorageClient(az_storage_url, container_name, credentials)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `az_storage_url` | `str` | The URI to the storage account. | *required* |
| `container_name` | `str` | The container name for the blob. | *required* |
| `credentials` | `DefaultAzureCredential` | The credentials with which to authenticate. | *required* |
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
delete_blob(blob_name)
async
Delete a blob from the container.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |

Raises:
AzureBlobError: If the blob cannot be deleted.
Returns:
None
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
download_blob(blob_name)
async
Download a blob from the container into bytes in memory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |

Raises:
AzureBlobError: If the blob cannot be downloaded.
Returns:
bytes: All bytes of the blob.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
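As a usage sketch (the account URL, container name, and blob name below are placeholders, and the import path follows the "Source code in" references above):

```python
async def fetch_report() -> bytes:
    # Imports are kept inside the function so this sketch stays
    # self-contained; run it only where Azure credentials resolve.
    from azure.identity.aio import DefaultAzureCredential
    from aio_azure_clients_toolbox.clients.azure_blobs import AzureBlobStorageClient

    client = AzureBlobStorageClient(
        "https://example.blob.core.windows.net",  # placeholder account URL
        "reports",                                # placeholder container
        DefaultAzureCredential(),
    )
    # Raises AzureBlobError if the blob cannot be downloaded.
    return await client.download_blob("2024/summary.pdf")
```

Invoke with `asyncio.run(fetch_report())` in an environment where the credential can authenticate.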
download_blob_to_dir(workspace_dir, blob_name)
async
Download Blob to a workspace_dir.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `workspace_dir` | `str` | The directory to save the blob. | *required* |
| `blob_name` | `str` | The name of the blob. | *required* |

Raises:
AzureBlobError: If the blob cannot be downloaded.
Returns:
str: The path to the saved blob.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
get_blob_client(blob_name)
async
Simple async context manager to get a BlobClient.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
Raises:
AttributeError: If az_storage_url is not configured.
AzureBlobError: If the blob cannot be accessed.
Returns:
BlobClient: The blob client.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
get_blob_sas_token(blob_name, expiry=None)
async
Returns a read-only sas token for the blob with an automatically generated
user delegation key. For multiple blobs, it is more efficient to call
get_blob_sas_token_list (below).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
| `expiry` | `Optional[datetime]` | The expiry time of the token. | `None` |

Returns:
str: The sas token.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
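The `expiry` parameter accepts a datetime; a small helper for building a timezone-aware one (the one-hour window is an arbitrary choice for illustration, not a documented library default):

```python
from datetime import datetime, timedelta, timezone

def sas_expiry(hours: int = 1) -> datetime:
    # Use UTC so the expiry is unambiguous to the storage service.
    return datetime.now(timezone.utc) + timedelta(hours=hours)
```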
get_blob_sas_token_list(blob_names, expiry=None)
async
Returns a dict of blob-name -> read-only sas tokens using an automatically generated user delegation key.
This function reuses a single BlobServiceClient for all generated tokens, so it is much faster than creating a new BlobServiceClient for each name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_names` | `List[str]` | A list of blob names. | *required* |
| `expiry` | `Optional[datetime]` | The expiry time of the token. | `None` |

Returns:
dict: A dict of blob-name -> sas token.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
get_blob_sas_url(blob_name, expiry=None)
async
Returns a full download URL with sas token
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
| `expiry` | `Optional[datetime]` | The expiry time of the token. | `None` |

Returns:
str: The full download URL with sas token.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
get_blob_sas_url_list(blob_names, expiry=None)
async
Returns a dict of blob-name -> download URL with sas token
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_names` | `List[str]` | A list of blob names. | *required* |
| `expiry` | `Optional[datetime]` | The expiry time of the token. | `None` |

Returns:
dict: A dict of blob-name -> download-URL-with-sas-token.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
get_blob_service_client()
Simple method to construct BlobServiceClient.
Note: calling `async with blob_service_client() ...` opens
a pipeline which exits afterward. You therefore need to either
open and close a single instance manually, or discard it
after every async context manager session.
Returns: BlobServiceClient: The blob service client.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
list_blobs(prefix=None, **kwargs)
async
List blobs in the container: a convenience wrapper around ContainerClient.list_blobs.
Parameters:
prefix (Optional[str]): The prefix to filter blobs.
Returns:
AsyncGenerator[BlobProperties]: A generator of blob properties.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
safe_blob_name(blob_name, quoting=False)
staticmethod
Run a filter on blob names to make them 'safer'.
The most reliable blob names are urlencoded, but this is only strictly required inside sas-token URLs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
| `quoting` | `bool` | Whether to urlsafe encode the name. | `False` |

Returns:
str: The 'safer' blob name.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
upload_blob(blob_name, file_data, **kwargs)
async
Upload a blob to the container.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
| `file_data` | `Union[bytes, str, Iterable, AsyncIterable, IO]` | The data to upload. | *required* |
| `**kwargs` | | Additional keyword arguments (passed through to the underlying upload call). | `{}` |

Raises:

| Type | Description |
|---|---|
| `AzureBlobError` | If the blob cannot be uploaded. |

Returns:
tuple[bool, dict]: A tuple of a boolean indicating success and the result.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
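A minimal upload sketch (account URL, container, and blob name are placeholders; the `(ok, result)` unpacking follows the documented `tuple[bool, dict]` return type):

```python
async def store_bytes(data: bytes) -> dict:
    # Inner imports keep the sketch self-contained; run only where
    # Azure credentials resolve.
    from azure.identity.aio import DefaultAzureCredential
    from aio_azure_clients_toolbox.clients.azure_blobs import AzureBlobStorageClient

    client = AzureBlobStorageClient(
        "https://example.blob.core.windows.net",  # placeholder account URL
        "uploads",                                # placeholder container
        DefaultAzureCredential(),
    )
    ok, result = await client.upload_blob("incoming/data.bin", data)
    if not ok:
        raise RuntimeError(f"upload failed: {result}")
    return result
```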
upload_blob_from_url(blob_name, file_url, overwrite=True)
async
Upload a blob from another URL (which can be a blob URL with a sas token).
Note: upload_blob_from_url will overwrite the destination if it exists!
The result usually looks like this:
```
{
    "etag": "\"0x8DBBAF4B8A6017C\"",
    "last_modified": "2023-09-21T22:47:23+00:00",
    "content_md5": null,
    "client_request_id": "d3e9c022-58d0-11ee-9777-422808c7c565",
    "request_id": "b855e9cc-701e-0035-7ddd-ec4cc0000000",
    "version": "2023-08-03",
    "version_id": "2023-09-21T22:47:23.5730812Z",
    "date": "2023-09-21T22:47:23+00:00",
    "request_server_encrypted": true,
    "encryption_key_sha256": null,
    "encryption_scope": null
}
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `blob_name` | `str` | The name of the blob. | *required* |
| `file_url` | `str` | The URL of the file to upload. | *required* |
| `overwrite` | `bool` | Whether to overwrite the destination if it exists. | `True` |

Raises:
AzureBlobError: If the blob cannot be uploaded.
Returns:
dict: The result of the upload request.
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
blobify_filename(name, quoting=False)
See: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction#blobs

- A blob name can contain any combination of characters.
- A blob name must be at least one character long and cannot be more than 1,024 characters long.
- Blob names are case-sensitive.
- Reserved URL characters must be properly escaped.*
- If your account does not have a hierarchical namespace, the number of path segments comprising the blob name cannot exceed 254. A path segment is the string between consecutive delimiter characters (e.g., the forward slash '/') that corresponds to the name of a virtual directory.
- Avoid blob names that end with a dot, a forward slash, a backslash, or any sequence or combination of these. No path segment should end with a dot.

\* urlsafe-encode any blob URLs, not the names!
Source code in aio_azure_clients_toolbox/clients/azure_blobs.py
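The escaping rule above can be illustrated with the standard library (a sketch of the behavior, not the library's exact implementation):

```python
from urllib.parse import quote

def url_safe(name: str) -> str:
    # Percent-encode reserved characters, but keep '/' so virtual
    # directory path segments survive intact.
    return quote(name, safe="/")
```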
Cosmos DB
aio_azure_clients_toolbox.clients.cosmos
Cosmos(endpoint, dbname, container_name, credential_factory, cosmos_client_ttl_seconds=CLIENT_TTL_SECONDS_DEFAULT)
Applications can subclass this class to interact with their container
Source code in aio_azure_clients_toolbox/clients/cosmos.py
get_container_client()
async
This async context manager will yield a container client.
Because making connections is expensive, we'd like to preserve them for a while.
Source code in aio_azure_clients_toolbox/clients/cosmos.py
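A usage sketch of the context manager (endpoint, database, and container names are placeholders; `read_item` is the standard `azure-cosmos` aio container call):

```python
async def read_item(item_id: str, partition_key: str) -> dict:
    # Inner imports keep the sketch self-contained; run only where
    # Azure credentials resolve.
    from azure.identity.aio import DefaultAzureCredential
    from aio_azure_clients_toolbox.clients.cosmos import Cosmos

    cosmos = Cosmos(
        "https://example.documents.azure.com",  # placeholder endpoint
        "appdb",                                # placeholder database
        "items",                                # placeholder container
        DefaultAzureCredential,                 # credential_factory: a callable
    )
    async with cosmos.get_container_client() as container:
        return await container.read_item(item_id, partition_key=partition_key)
```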
ManagedCosmos(endpoint, dbname, container_name, credential_factory, client_limit=connection_pooling.DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT, max_size=connection_pooling.DEFAULT_MAX_SIZE, max_idle_seconds=CLIENT_IDLE_SECONDS_DEFAULT, max_lifespan_seconds=CLIENT_TTL_SECONDS_DEFAULT, pool_connection_create_timeout=10, pool_get_timeout=60)
Bases: AbstractorConnector
"Managed" version of the above: uses a connection pool to keep connections alive.
Applications can subclass this class to interact with their container
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `endpoint` | `str` | A string URL of the Cosmos server. | *required* |
| `dbname` | `str` | Cosmos database name. | *required* |
| `container_name` | `str` | Cosmos container name. | *required* |
| `credential_factory` | `CredentialFactory` | A callable that returns an async DefaultAzureCredential which may be used to authenticate to the container. | *required* |
| `client_limit` | `int` | Client limit per connection (default: 100). | `DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT` |
| `max_size` | `int` | Connection pool size (default: 10). | `DEFAULT_MAX_SIZE` |
| `max_idle_seconds` | `int` | Maximum duration allowed for an idle connection before recycling it. | `CLIENT_IDLE_SECONDS_DEFAULT` |
| `max_lifespan_seconds` | `int` | Optional setting which controls how long a connection lives before recycling. | `CLIENT_TTL_SECONDS_DEFAULT` |
| `pool_connection_create_timeout` | `int` | Timeout for creating a connection in the pool (default: 10 seconds). | `10` |
| `pool_get_timeout` | `int` | Timeout for getting a connection from the pool (default: 60 seconds). | `60` |
Source code in aio_azure_clients_toolbox/clients/cosmos.py
close()
async
create()
async
Creates a new connection for our pool
Source code in aio_azure_clients_toolbox/clients/cosmos.py
get_container_client()
async
This async context manager will yield a container client.
Because making connections is expensive, we'd like to preserve them for a while.
Source code in aio_azure_clients_toolbox/clients/cosmos.py
SimpleCosmos(endpoint, dbname, container_name, credential)
Applications can subclass this class to keep async connections open
Source code in aio_azure_clients_toolbox/clients/cosmos.py
get_container_client()
async
This method will return a container client.
Source code in aio_azure_clients_toolbox/clients/cosmos.py
ConnectionManager(endpoint, dbname, container_name, credential_factory, lifespan_enabled=False, cosmos_client_ttl_seconds=CLIENT_TTL_SECONDS_DEFAULT)
Source code in aio_azure_clients_toolbox/clients/cosmos.py
is_container_closed
property
Check whether any of the client attributes are set to None.
__aenter__()
async
Here we manage our connection:
- if still alive, we return
- if it needs recycling, we recycle and create
- if not created, we create
Source code in aio_azure_clients_toolbox/clients/cosmos.py
__aexit__(exc_type, exc, tb)
async
get_container_client()
async
This method will return a container client.
Because making connections is expensive, we'd like to preserve them for a while.
Source code in aio_azure_clients_toolbox/clients/cosmos.py
PatchOp
Bases: str, Enum
Following is an example of patch operations:
```
operations = [
    {"op": "add", "path": "/favorite_color", "value": "red"},
    {"op": "remove", "path": "/ttl"},
    {"op": "replace", "path": "/tax_amount", "value": 14},
    {"op": "set", "path": "/items/0/discount", "value": 20.0512},
    {"op": "incr", "path": "/total_due", "value": 5},
    {"op": "move", "from": "/freight", "path": "/service_addition"},
]
```
Note: the set operation adds a property if it does not already exist
(except if there was an array), while the replace operation fails if
the property does not exist.
as_op(path, value)
These arguments make sense for all operations except move.
Source code in aio_azure_clients_toolbox/clients/cosmos.py
Operation(op, path, value)
dataclass
For turning patch Operations into instructions Cosmos understands
Event Grid
aio_azure_clients_toolbox.clients.eventgrid
EventGridClient(config, credential=None, async_credential=None)
A generic eventgrid client
This generic eventgrid client provides a few nice features on top of the native azure python client. Primarily it provides a convenient way to configure publishing to multiple topics using a single client.
Example:
```
topic1 = EventGridTopicConfig("topic1", "https://azure.net/topic1")
topic2 = EventGridTopicConfig("topic2", "https://azure.net/topic2")
client_config = EventGridConfig([topic1, topic2])
managed_identity_credential = DefaultAzureCredential()
client = EventGridClient(client_config, credential=managed_identity_credential)
```
The client can run asynchronously or synchronously. To run the client async,
provide the async_credential arg when creating the client and use the
async methods, e.g. client.async_emit_event().
```
from azure.identity.aio import DefaultAzureCredential
credential = DefaultAzureCredential()
topic = EventGridTopicConfig("topic", "https://azure.net/topic")
config = EventGridConfig(topic)
client = EventGridClient(config, async_credential=credential)
await client.async_emit_event("topic", "ident", {}, "event-type", "subject")
```
To run the client synchronously, provide the credential arg when
creating the client and call non-prefixed functions.
```
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
topic = EventGridTopicConfig("topic", "https://azure.net/topic")
config = EventGridConfig(topic)
client = EventGridClient(config,redential=credential)
client.emit_event"topic", "ident", {}, "event-type", "subject")
```
Internally sync/async versions of the azure eventgrid clients will be called accordingly.
Source code in aio_azure_clients_toolbox/clients/eventgrid.py
async_emit_event(topic, event_type, subject, data, data_version='v1', **kwargs)
async
Emit an event grid asynchronously.
Raises:
HttpResponseError: If the event fails to emit.
Source code in aio_azure_clients_toolbox/clients/eventgrid.py
emit_event(topic, event_type, subject, data, data_version='v1', **kwargs)
Emit an event grid synchronously.
Raises:
HttpResponseError: If the event fails to emit.
Source code in aio_azure_clients_toolbox/clients/eventgrid.py
get_async_client(topic)
get_client(topic)
EventGridConfig(topic_configs)
Configuration for all topics available to a single event grid client
Source code in aio_azure_clients_toolbox/clients/eventgrid.py
config(topic)
topics()
EventGridTopicConfig(name, url)
dataclass
Configuration for one event grid topic subscription
Event Hub
aio_azure_clients_toolbox.clients.eventhub
Eventhub(eventhub_namespace, eventhub_name, credential, eventhub_transport_type=TRANSPORT_PURE_AMQP)
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_event(event, partition_key=None)
async
Send a single EventHub event. See send_events_batch for
sending multiple events.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
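A send sketch showing the partition_key behavior described above (the namespace, hub name, and record identifier are placeholders):

```python
async def send_order_event(order_json: str, salesforce_id: str) -> None:
    # Inner imports keep the sketch self-contained; run only where
    # Azure credentials resolve.
    from azure.identity.aio import DefaultAzureCredential
    from aio_azure_clients_toolbox.clients.eventhub import Eventhub

    hub = Eventhub(
        "example-ns.servicebus.windows.net",  # placeholder namespace
        "orders",                             # placeholder eventhub name
        DefaultAzureCredential(),
    )
    # All events sharing this salesforce_id land on the same partition,
    # so one consumer sees the whole stream for that record.
    await hub.send_event(order_json, partition_key=salesforce_id)
```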
send_event_data(event, partition_key=None)
async
Send a single EventHub event which is already encoded as EventData.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_events_batch(events_list, partition_key=None)
async
Sending events in a batch is more performant than sending individual events.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_events_data_batch(event_data_batch)
async
Sending events in a batch is more performant than sending individual events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
ManagedAzureEventhubProducer(eventhub_namespace, eventhub_name, credential_factory, eventhub_transport_type=TRANSPORT_PURE_AMQP, client_limit=connection_pooling.DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT, max_size=connection_pooling.DEFAULT_MAX_SIZE, max_idle_seconds=EVENTHUB_SEND_TTL_SECONDS, ready_message='Connection established', max_lifespan_seconds=None, pool_connection_create_timeout=10, pool_get_timeout=60)
Bases: AbstractorConnector
Azure Eventhub Producer client with connection pooling built in.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `eventhub_namespace` | `str` | String representing the Eventhub namespace. | *required* |
| `eventhub_name` | `str` | Eventhub name (the "topic"). | *required* |
| `credential_factory` | `CredentialFactory` | A callable that returns an async DefaultAzureCredential which may be used to authenticate to the container. | *required* |
| `client_limit` | `int` | Client limit per connection (default: 100). | `DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT` |
| `max_size` | `int` | Connection pool size (default: 10). | `DEFAULT_MAX_SIZE` |
| `max_idle_seconds` | `int` | Maximum duration allowed for an idle connection before recycling it. | `EVENTHUB_SEND_TTL_SECONDS` |
| `max_lifespan_seconds` | `int \| None` | Optional setting which controls how long a connection lives before recycling. | `None` |
| `pool_connection_create_timeout` | `int` | Timeout for creating a connection in the pool (default: 10 seconds). | `10` |
| `pool_get_timeout` | `int` | Timeout for getting a connection from the pool (default: 60 seconds). | `60` |
| `ready_message` | `str \| bytes \| EventData` | A string, bytes, or EventData object representing the first "ready" message sent to establish connection. | `'Connection established'` |
Source code in aio_azure_clients_toolbox/clients/eventhub.py
close()
async
create()
async
Creates a new connection for our pool
Source code in aio_azure_clients_toolbox/clients/eventhub.py
ready(conn)
async
Establishes readiness for a new connection
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_event(event, partition_key=None)
async
Send a single EventHub event. See send_events_batch for
sending multiple events.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_event_data(event, partition_key=None)
async
Send a single EventHub event which is already encoded as EventData.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_events_batch(events_list, partition_key=None)
async
Sending events in a batch is more performant than sending individual events.
partition_key will make a particular string identifier
"sticky" for a particular partition.
For instance, if you use a Salesforce record identifier as the partition_key
then you can ensure that a particular consumer always receives those events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
send_events_data_batch(event_data_batch)
async
Sending events in a batch is more performant than sending individual events.
Source code in aio_azure_clients_toolbox/clients/eventhub.py
Service Bus
aio_azure_clients_toolbox.clients.service_bus
service_bus.py
Wrapper class around a ServiceBusClient which allows sending messages or
subscribing to a queue.
AzureServiceBus(service_bus_namespace_url, service_bus_queue_name, credential_factory, socket_timeout=1)
Basic AzureServiceBus client without connection pooling.
For connection pooling see ManagedAzureServiceBus below.
Source code in aio_azure_clients_toolbox/clients/service_bus.py
send_message(msg, delay=0, unique_msg_id=None, **msg_kwargs)
async
Schedule a message for delivery.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msg` | `str` | Message body to send. | *required* |
| `delay` | `int` | Delay in seconds before the message is available for delivery. | `0` |
| `unique_msg_id` | `str \| None` | Optional unique Service Bus message identifier. | `None` |
| `**msg_kwargs` | | Additional keyword arguments forwarded directly to the underlying message constructor. | `{}` |
Source code in aio_azure_clients_toolbox/clients/service_bus.py
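A scheduling sketch (namespace URL and queue name are placeholders; `delay=60` makes the message available one minute after sending):

```python
async def enqueue_welcome(user_id: str) -> None:
    # Inner imports keep the sketch self-contained; run only where
    # Azure credentials resolve.
    import json
    from azure.identity.aio import DefaultAzureCredential
    from aio_azure_clients_toolbox.clients.service_bus import AzureServiceBus

    bus = AzureServiceBus(
        "https://example-ns.servicebus.windows.net",  # placeholder namespace URL
        "welcome-emails",                             # placeholder queue name
        DefaultAzureCredential,                       # credential_factory: a callable
    )
    body = json.dumps({"user_id": user_id})
    await bus.send_message(body, delay=60)
```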
ManagedAzureServiceBusSender(service_bus_namespace_url, service_bus_queue_name, credential_factory, client_limit=connection_pooling.DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT, max_size=connection_pooling.DEFAULT_MAX_SIZE, max_idle_seconds=SERVICE_BUS_SEND_TTL_SECONDS, max_lifespan_seconds=None, ready_message='Connection established', pool_connection_create_timeout=10, pool_get_timeout=60)
Bases: AbstractorConnector
Azure ServiceBus Sender client with connection pooling built in.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `service_bus_namespace_url` | `str` | String representing the ServiceBus namespace URL. | *required* |
| `service_bus_queue_name` | `str` | Queue name (the "topic"). | *required* |
| `credential_factory` | `CredentialFactory` | A callable that returns an async DefaultAzureCredential which may be used to authenticate to the container. | *required* |
| `client_limit` | `int` | Client limit per connection (default: 100). | `DEFAULT_SHARED_TRANSPORT_CLIENT_LIMIT` |
| `max_size` | `int` | Connection pool size (default: 10). | `DEFAULT_MAX_SIZE` |
| `max_idle_seconds` | `int` | Maximum duration allowed for an idle connection before recycling it. | `SERVICE_BUS_SEND_TTL_SECONDS` |
| `max_lifespan_seconds` | `int \| None` | Optional setting which controls how long a connection lives before recycling. | `None` |
| `pool_connection_create_timeout` | `int` | Timeout for creating a connection in the pool (default: 10 seconds). | `10` |
| `pool_get_timeout` | `int` | Timeout for getting a connection from the pool (default: 60 seconds). | `60` |
| `ready_message` | `str \| bytes` | A string or bytes representing the first "ready" message sent to establish connection. | `'Connection established'` |
Source code in aio_azure_clients_toolbox/clients/service_bus.py
close()
async
create()
async
get_receiver()
Proxy for AzureServiceBus.get_receiver. Provided for consistency with the class above.
Source code in aio_azure_clients_toolbox/clients/service_bus.py
ready(conn)
async
Establishes readiness for a new connection
Source code in aio_azure_clients_toolbox/clients/service_bus.py
send_message(msg, delay=0, unique_msg_id=None, **msg_kwargs)
async
Schedule a message for delivery using a pooled sender connection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msg` | `str` | Message body to send. | *required* |
| `delay` | `int` | Delay in seconds before the message is available for delivery. | `0` |
| `unique_msg_id` | `str \| None` | Optional unique Service Bus message identifier. | `None` |
| `**msg_kwargs` | | Additional keyword arguments forwarded directly to the underlying message constructor. | `{}` |