Container

class cloudstorage.base.Container(name: str, driver: cloudstorage.base.Driver, acl: str = None, meta_data: typing.Dict[str, str] = None, created_at: datetime.datetime = None) → None[source]

Represents a container (bucket or folder) which contains blobs.

container = storage.get_container('container-name')
container.name
# container-name
container.created_at
# 2017-04-11 08:58:12-04:00
len(container)
# 20

Todo

Add option to delete blobs before deleting the container.

Todo

Support extra headers like Content-Encoding.

Parameters:
  • name (str) – Container name (must be unique).
  • driver (Driver) – Reference to this container’s driver.
  • acl (str or None) –

    (optional) Container’s canned Access Control List (ACL). If None, defaults to the storage backend’s default (see the sketch after this parameter list).

    • private
    • public-read
    • public-read-write
    • authenticated-read
    • bucket-owner-read
    • bucket-owner-full-control
    • aws-exec-read (Amazon S3)
    • project-private (Google Cloud Storage)
  • meta_data (Dict[str, str] or None) – (optional) Metadata stored with this container.
  • created_at (datetime.datetime or None) – Creation time of this container.
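
These parameters are normally supplied through the driver when the container is created. A minimal sketch, assuming the driver's create_container() accepts acl and meta_data arguments (canned ACL support varies by backend):

meta_data = {'owner-id': '1'}

container = storage.create_container('container-name',
    acl='public-read', meta_data=meta_data)
container.meta_data
# {'owner-id': '1'}
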
__contains__(blob: cloudstorage.base.Blob) → bool[source]

Determine whether the blob exists in this container.

container = storage.get_container('container-name')
picture_blob = container.get_blob('picture.png')
picture_blob in container
# True
'picture.png' in container
# True
Parameters:blob (str or Blob) – Blob or Blob name.
Returns:True if the blob exists.
Return type:bool
__iter__() → typing.Iterable[cloudstorage.base.Blob][source]

Get all blobs associated with this container.

container = storage.get_container('container-name')
for blob in container:
    blob.name
    # blob-1.ext, blob-2.ext
Returns:Iterable of all blobs belonging to this container.
Return type:Iterable[Blob]
__len__() → int[source]

Total number of blobs in this container.

Returns:Blob count in this container.
Return type:int
cdn_url

The Content Delivery Network URL for this container.

https://container-name.storage.com/

Returns:The CDN URL for this container.
Return type:str
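
For example (the host shown is illustrative; the actual URL depends on the driver and whether CDN is enabled):

container = storage.get_container('container-name')
container.cdn_url
# 'https://container-name.storage.com/'
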
patch() → None[source]

Saves all changed attributes for this container.

Warning

Not supported by all drivers yet.

Returns:NoneType
Return type:None
Raises:NotFoundError – If the container doesn’t exist.
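
A minimal sketch, assuming the driver supports patching and that changed attributes such as meta_data are modified in place before the call (the attribute value is illustrative):

container = storage.get_container('container-name')
container.meta_data['owner-id'] = '1'
container.patch()
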
delete() → None[source]

Delete this container.

Important

All blob objects in the container must be deleted before the container itself can be deleted.

container = storage.get_container('container-name')
container.delete()
container in storage
# False
Returns:NoneType
Return type:None
Raises:IsNotEmptyError – If the container is not empty.
NotFoundError – If the container doesn’t exist.
upload_blob(filename: typing.Union[str, typing.IO[_io.BytesIO], _io.BytesIO, _io.FileIO, _io.TextIOWrapper], blob_name: str = None, acl: str = None, meta_data: typing.Dict[str, str] = None, content_type: str = None, content_disposition: str = None, extra: typing.Dict[str, str] = None) → cloudstorage.base.Blob[source]

Upload a filename or file-like object to this container.

If content_type is None, Cloud Storage will attempt to guess the standard MIME type using the python-magic or mimetypes packages. If that fails, Cloud Storage leaves it to the storage backend to guess it.
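
For reference, the mimetypes fallback is roughly equivalent to the following (a sketch, not the library's exact code):

import mimetypes

content_type, _ = mimetypes.guess_type('/path/picture.png')
content_type
# 'image/png'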

Warning

The effect of uploading to an existing blob depends on the “versioning” and “lifecycle” policies defined on the blob’s container. In the absence of those policies, upload will overwrite any existing contents.

Basic example:

container = storage.get_container('container-name')
picture_blob = container.upload_blob('/path/picture.png')
# <Blob picture.png container-name S3>

Set Content-Type example:

container = storage.get_container('container-name')
with open('/path/resume.doc', 'rb') as resume_file:
    resume_blob = container.upload_blob(resume_file, 
        content_type='application/msword')
    resume_blob.content_type
    # 'application/msword'

Set Metadata and ACL:

picture_file = open('/path/picture.png', 'rb')

meta_data = {
    'owner-email': 'user.one@startup.com',
    'owner-id': '1'
}

container = storage.get_container('container-name')
picture_blob = container.upload_blob(picture_file,
    acl='public-read', meta_data=meta_data)
picture_blob.meta_data
# {'owner-id': '1', 'owner-email': 'user.one@startup.com'}
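
Set Content-Disposition example (a sketch; the file path and header value are illustrative):

container = storage.get_container('container-name')
report_blob = container.upload_blob('/path/report.pdf',
    content_disposition='attachment; filename=report.pdf')
report_blob.content_disposition
# 'attachment; filename=report.pdf'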


Parameters:
  • filename (file or str) – A file handle open for reading or the path to the file.
  • acl (str or None) –

    (optional) Blob canned Access Control List (ACL). If None, defaults to the storage backend’s default.

    • private
    • public-read
    • public-read-write
    • authenticated-read
    • bucket-owner-read
    • bucket-owner-full-control
    • aws-exec-read (Amazon S3)
    • project-private (Google Cloud Storage)
  • blob_name (str or None) – (optional) Override the blob’s name. If not set, defaults to the file name from the path or the name of the file object.
  • meta_data (Dict[str, str] or None) – (optional) A map of metadata to store with the blob.
  • content_type (str or None) – (optional) A standard MIME type describing the format of the object data.
  • content_disposition (str or None) – (optional) Specifies presentational information for the blob.
  • extra (Dict[str, str] or None) – (optional) Extra parameters for the request.
Returns:The uploaded blob.
Return type:Blob

get_blob(blob_name: str) → cloudstorage.base.Blob[source]

Get a blob object by name.

container = storage.get_container('container-name')
picture_blob = container.get_blob('picture.png')
# <Blob picture.png container-name S3>
Parameters:blob_name (str) – The name of the blob to retrieve.
Returns:The blob object if it exists.
Return type:Blob
Raises:NotFoundError – If the blob object doesn’t exist.
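
If the blob does not exist, NotFoundError is raised. A minimal sketch of handling it, assuming the exception is importable from cloudstorage.exceptions:

from cloudstorage.exceptions import NotFoundError

try:
    picture_blob = container.get_blob('missing.png')
except NotFoundError:
    picture_blob = None
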
generate_upload_url(blob_name: str, expires: int = 3600, acl: str = None, meta_data: typing.Dict[str, str] = None, content_disposition: str = None, content_length: typing.Tuple[int, int] = None, content_type: str = None, extra: typing.Dict[str, str] = None) → typing.Dict[str, typing.Dict[str, str]][source]

Generate a signature and policy for uploading objects to this container.

This method gives your website a way to upload objects to a container through a web form without giving the user direct write access.

Basic example:

import requests

picture_file = open('/path/picture.png', 'rb')

container = storage.get_container('container-name')
form_post = container.generate_upload_url('avatar-user-1.png')

url = form_post['url']
fields = form_post['fields']
multipart_form_data = {
    'file': ('avatar.png', picture_file, 'image/png'),
}

resp = requests.post(url, data=fields, files=multipart_form_data)
# <Response [201]> or <Response [204]> 

avatar_blob = container.get_blob('avatar-user-1.png')
# <Blob avatar-user-1.png container-name S3>

Form example:

container = storage.get_container('container-name')
form_post = container.generate_upload_url('avatar-user-1.png')

# Generate an upload form using the form fields and url
fields = [
    '<input type="hidden" name="{name}" value="{value}" />'.format(
        name=name, value=value)
    for name, value in form_post['fields'].items()
]

upload_form = [
    '<form action="{url}" method="post" '
    'enctype="multipart/form-data">'.format(
        url=form_post['url']),
    *fields,
    '<input name="file" type="file" />',
    '<input type="submit" value="Upload" />',
    '</form>',
]

print('\n'.join(upload_form))
<!--Google Cloud Storage Generated Form-->
<form action="https://container-name.storage.googleapis.com" 
      method="post" enctype="multipart/form-data">
<input type="hidden" name="key" value="avatar-user-1.png" />
<input type="hidden" name="bucket" value="container-name" />
<input type="hidden" name="GoogleAccessId" value="<my-access-id>" />
<input type="hidden" name="policy" value="<generated-policy>" />
<input type="hidden" name="signature" value="<generated-sig>" />
<input name="file" type="file" />
<input type="submit" value="Upload" />
</form>

Content-Disposition and Metadata example:

import requests

params = {
    'blob_name': 'avatar-user-1.png',
    'meta_data': {
        'owner-id': '1',
        'owner-email': 'user.one@startup.com'
    },
    'content_type': 'image/png',
    'content_disposition': 'attachment; filename=attachment.png'
}
form_post = container.generate_upload_url(**params)

url = form_post['url']
fields = form_post['fields']
multipart_form_data = {
    'file': open('/path/picture.png', 'rb'),
}

resp = requests.post(url, data=fields, files=multipart_form_data)
# <Response [201]> or <Response [204]>

avatar_blob = container.get_blob('avatar-user-1.png')
avatar_blob.content_disposition
# 'attachment; filename=attachment.png'


Parameters:
  • blob_name (str or None) – The blob’s name, prefix, or '' if a user is providing a file name. Note that Rackspace Cloud Files only supports prefixes.
  • expires (int) – (optional) Expiration in seconds.
  • acl (str or None) –

    (optional) Container canned Access Control List (ACL). If None, defaults to the storage backend’s default.

    • private
    • public-read
    • public-read-write
    • authenticated-read
    • bucket-owner-read
    • bucket-owner-full-control
    • aws-exec-read (Amazon S3)
    • project-private (Google Cloud Storage)
  • meta_data (Dict[str, str] or None) – (optional) A map of metadata to store with the blob.
  • content_disposition (str or None) – (optional) Specifies presentational information for the blob.
  • content_type (str or None) – (optional) A standard MIME type describing the format of the object data.
  • content_length (tuple[int, int] or None) – (optional) Restricts uploads to a size range in bytes: (<min>, <max>). See the sketch after this method’s return type.
  • extra (Dict[str, str] or None) –

    (optional) Extra parameters for the request.

    • success_action_redirect (str) – A URL that users are redirected to when an upload is successful. If you do not provide a URL, Cloud Storage responds with the status code that you specified in success_action_status.
    • success_action_status (str) – The status code that you want Cloud Storage to respond with when an upload is successful. The default is 204.
Returns:Dictionary with URL and form fields (includes signature or policy).
Return type:Dict[str, Dict[str, str]]
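
A sketch combining content_length and extra to cap the upload size and set the success status (the values are illustrative; backend support for these options varies):

form_post = container.generate_upload_url(
    'avatar-user-1.png',
    content_length=(1, 5 * 1024 * 1024),  # between 1 byte and 5 MiB
    extra={'success_action_status': '201'})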

enable_cdn() → bool[source]

Enable Content Delivery Network (CDN) for this container.

Returns:True if successful, False if not supported.
Return type:bool
disable_cdn() → bool[source]

Disable Content Delivery Network (CDN) for this container.

Returns:True if successful, False if not supported.
Return type:bool
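
A minimal sketch of toggling CDN support (drivers without CDN support return False):

container = storage.get_container('container-name')
container.enable_cdn()
# True
container.cdn_url
# 'https://container-name.storage.com/'
container.disable_cdn()
# True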