Container¶
class cloudstorage.base.Container(name, driver, acl=None, meta_data=None, created_at=None)[source]¶

Represents a container (bucket or folder) which contains blobs.
container = storage.get_container('container-name')
container.name
# container-name
container.created_at
# 2017-04-11 08:58:12-04:00
len(container)
# 20
Todo
Add option to delete blobs before deleting the container.
Todo
Support extra headers like Content-Encoding.
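Until the first Todo is implemented, a small helper can empty a container before deleting it. This is only a sketch of the call pattern: `empty_container`, `FakeBlob`, and `FakeContainer` are hypothetical, not part of the library; a real Container comes from `storage.get_container()`.

```python
def empty_container(container):
    """Delete every blob so the container itself can then be deleted."""
    # Copy to a list first: deleting while iterating a live listing
    # may skip blobs on some backends.
    for blob in list(container):
        blob.delete()

# Tiny in-memory stand-ins to illustrate the call pattern only.
class FakeBlob:
    def __init__(self, name, owner):
        self.name, self._owner = name, owner

    def delete(self):
        self._owner.remove(self)

class FakeContainer(list):
    pass

container = FakeContainer()
container += [FakeBlob('a.png', container), FakeBlob('b.png', container)]
empty_container(container)
print(len(container))  # 0
```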
Parameters:
- name (str) – Container name (must be unique).
- driver (Driver) – Reference to this container’s driver.
- acl (str or None) – (optional) Container’s canned Access Control List (ACL). If None, defaults to the storage backend default. Options:
  - private
  - public-read
  - public-read-write
  - authenticated-read
  - bucket-owner-read
  - bucket-owner-full-control
  - aws-exec-read (Amazon S3)
  - project-private (Google Cloud Storage)
- meta_data (Dict[str, str] or None) – (optional) Metadata stored with this container.
- created_at (datetime.datetime or None) – Creation time of this container.
__contains__(blob)[source]¶

Determines whether the blob exists in this container.
container = storage.get_container('container-name')
picture_blob = container.get_blob('picture.png')
picture_blob in container
# True
'picture.png' in container
# True
Parameters: blob (str or Blob) – Blob or Blob name.
Returns: True if the blob exists.
Return type: bool
__iter__()[source]¶

Get all blobs associated with the container.
container = storage.get_container('container-name')
for blob in container:
    blob.name
    # blob-1.ext, blob-2.ext
Returns: Iterable of all blobs belonging to this container.
Return type: Iterable[Blob]
__len__()[source]¶

Total number of blobs in this container.
Returns: Blob count in this container.
Return type: int
cdn_url¶

The Content Delivery Network URL for this container.
https://container-name.storage.com/
Returns: The CDN URL for this container.
Return type: str
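A blob’s public URL can then be composed from the CDN URL with the stdlib urllib.parse.urljoin. This is a sketch only: the CDN URL below is a hypothetical value, and the exact URL layout varies by storage backend.

```python
from urllib.parse import urljoin

# Hypothetical value, in the shape container.cdn_url might return it.
cdn_url = 'https://container-name.storage.com/'

# Join the container's CDN URL with a blob name to form a public URL.
blob_url = urljoin(cdn_url, 'picture.png')
print(blob_url)  # https://container-name.storage.com/picture.png
```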
patch()[source]¶

Saves all changed attributes for this container.
Warning
Not supported by all drivers yet.
Returns: None
Return type: NoneType
Raises: NotFoundError – If the container doesn’t exist.
delete()[source]¶

Delete this container.
Important
All blob objects in the container must be deleted before the container itself can be deleted.
container = storage.get_container('container-name')
container.delete()
container in storage
# False
Returns: None
Return type: NoneType
Raises:
- IsNotEmptyError – If the container is not empty.
- NotFoundError – If the container doesn’t exist.
upload_blob(filename, blob_name=None, acl=None, meta_data=None, content_type=None, content_disposition=None, cache_control=None, chunk_size=1024, extra=None)[source]¶

Upload a filename or file-like object to a container.
If content_type is None, Cloud Storage will attempt to guess the standard MIME type using the packages python-magic or mimetypes. If that fails, Cloud Storage will leave it up to the storage backend to guess it.

Warning

The effect of uploading to an existing blob depends on the “versioning” and “lifecycle” policies defined on the blob’s container. In the absence of those policies, upload will overwrite any existing contents.
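The extension-based fallback described above can be reproduced with the stdlib mimetypes module, one of the two packages consulted:

```python
import mimetypes

# Guess a MIME type from the file extension, as the fallback does
# when content_type is None. Returns (type, encoding).
content_type, _ = mimetypes.guess_type('/path/picture.png')
print(content_type)  # image/png
```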
Basic example:
container = storage.get_container('container-name')
picture_blob = container.upload_blob('/path/picture.png')
# <Blob picture.png container-name S3>
Set Content-Type example:
container = storage.get_container('container-name')
with open('/path/resume.doc', 'rb') as resume_file:
    resume_blob = container.upload_blob(resume_file, content_type='application/msword')
resume_blob.content_type
# 'application/msword'
Set Metadata and ACL:
picture_file = open('/path/picture.png', 'rb')
meta_data = {
    'owner-email': 'user.one@startup.com',
    'owner-id': '1',
}
container = storage.get_container('container-name')
picture_blob = container.upload_blob(picture_file,
                                     acl='public-read',
                                     meta_data=meta_data)
picture_blob.meta_data
# {'owner-id': '1', 'owner-email': 'user.one@startup.com'}
References:
- Boto 3: PUT Object
- Google Cloud Storage: upload_from_file / upload_from_filename
- Rackspace Cloud Files: Create or update object
Parameters: - filename (file or str) – A file handle open for reading or the path to the file.
- acl (str or None) – (optional) Blob canned Access Control List (ACL). If None, defaults to the storage backend default. Options:
  - private
  - public-read
  - public-read-write
  - authenticated-read
  - bucket-owner-read
  - bucket-owner-full-control
  - aws-exec-read (Amazon S3)
  - project-private (Google Cloud Storage)
- blob_name (str or None) – (optional) Override the blob’s name. If not set, will default to the filename from path or filename of iterator object.
- meta_data (Dict[str, str] or None) – (optional) A map of metadata to store with the blob.
- content_type (str or None) – (optional) A standard MIME type describing the format of the object data.
- content_disposition (str or None) – (optional) Specifies presentational information for the blob.
- cache_control (str or None) – (optional) Specify directives for caching mechanisms for the blob.
- chunk_size (int) – (optional) Optional chunk size for streaming a transfer.
- extra (Dict[str, str] or None) – (optional) Extra parameters for the request.
Returns: The uploaded blob.
Return type: Blob
get_blob(blob_name)[source]¶

Get a blob object by name.
container = storage.get_container('container-name')
picture_blob = container.get_blob('picture.png')
# <Blob picture.png container-name S3>
Parameters: blob_name (str) – The name of the blob to retrieve.
Returns: The blob object if it exists.
Return type: Blob
Raises: NotFoundError – If the blob object doesn’t exist.
generate_upload_url(blob_name, expires=3600, acl=None, meta_data=None, content_disposition=None, content_length=None, content_type=None, cache_control=None, extra=None)[source]¶

Generate a signature and policy for uploading objects to this container.
This method gives your website a way to upload objects to a container through a web form without giving the user direct write access.
Basic example:
import requests

picture_file = open('/path/picture.png', 'rb')
container = storage.get_container('container-name')

form_post = container.generate_upload_url('avatar-user-1.png')
url = form_post['url']
fields = form_post['fields']
multipart_form_data = {
    'file': ('avatar.png', picture_file, 'image/png'),
}
resp = requests.post(url, data=fields, files=multipart_form_data)
# <Response [201]> or <Response [204]>

avatar_blob = container.get_blob('avatar-user-1.png')
# <Blob avatar-user-1.png container-name S3>
Form example:
container = storage.get_container('container-name')
form_post = container.generate_upload_url('avatar-user-1.png')

# Generate an upload form using the form fields and url.
fields = [
    '<input type="hidden" name="{name}" value="{value}" />'.format(
        name=name, value=value)
    for name, value in form_post['fields'].items()
]
upload_form = [
    '<form action="{url}" method="post" '
    'enctype="multipart/form-data">'.format(url=form_post['url']),
    *fields,
    '<input name="file" type="file" />',
    '<input type="submit" value="Upload" />',
    '</form>',
]
print('\n'.join(upload_form))
<!-- Google Cloud Storage Generated Form -->
<form action="https://container-name.storage.googleapis.com"
      method="post" enctype="multipart/form-data">
  <input type="hidden" name="key" value="avatar-user-1.png" />
  <input type="hidden" name="bucket" value="container-name" />
  <input type="hidden" name="GoogleAccessId" value="<my-access-id>" />
  <input type="hidden" name="policy" value="<generated-policy>" />
  <input type="hidden" name="signature" value="<generated-sig>" />
  <input name="file" type="file" />
  <input type="submit" value="Upload" />
</form>
Content-Disposition and Metadata example:
import requests

params = {
    'blob_name': 'avatar-user-1.png',
    'meta_data': {
        'owner-id': '1',
        'owner-email': 'user.one@startup.com',
    },
    'content_type': 'image/png',
    'content_disposition': 'attachment; filename=attachment.png',
}
form_post = container.generate_upload_url(**params)
url = form_post['url']
fields = form_post['fields']
multipart_form_data = {
    'file': open('/path/picture.png', 'rb'),
}
resp = requests.post(url, data=fields, files=multipart_form_data)
# <Response [201]> or <Response [204]>

avatar_blob = container.get_blob('avatar-user-1.png')
avatar_blob.content_disposition
# 'attachment; filename=attachment.png'
References:
- Boto 3: S3.Client.generate_presigned_post
- Google Cloud Storage: POST Object
- Rackspace Cloud Files: FormPost
Parameters:
- blob_name (str or None) – The blob’s name, prefix, or '' if a user is providing a file name. Note: Rackspace Cloud Files only supports prefixes.
- expires (int) – (optional) Expiration in seconds.
- acl (str or None) – (optional) Container canned Access Control List (ACL). If None, defaults to the storage backend default. Options:
  - private
  - public-read
  - public-read-write
  - authenticated-read
  - bucket-owner-read
  - bucket-owner-full-control
  - aws-exec-read (Amazon S3)
  - project-private (Google Cloud Storage)
- meta_data (Dict[str, str] or None) – (optional) A map of metadata to store with the blob.
- content_disposition (str or None) – (optional) Specifies presentational information for the blob.
- content_length (tuple[int, int] or None) – (optional) Specifies that uploaded files must fall within a size range in bytes: (<min>, <max>).
- content_type (str or None) – (optional) A standard MIME type describing the format of the object data.
- cache_control (str or None) – (optional) Specify directives for caching mechanisms for the blob.
- extra (Dict[str, str] or None) – (optional) Extra parameters for the request.
  - success_action_redirect (str) – A URL that users are redirected to when an upload is successful. If you do not provide a URL, Cloud Storage responds with the status code that you specified in success_action_status.
  - success_action_status (str) – The status code that you want Cloud Storage to respond with when an upload is successful. The default is 204.
Returns: Dictionary with URL and form fields (includes signature or policy).
Return type: Dict[Any, Any]