Using curl

  1. Create your credentials

    For a step-by-step guide, see the getting started article "Generating S3 credentials".

    Make sure you save the keys locally right after creating them. The secret key cannot be viewed again later, neither via the Cloud Console nor via the API.

 

  2. Check curl version

    • curl version 7.86 or above

    On Debian, you have to enable the bookworm-backports apt repository and install curl from there.

    You can run curl -V to check the version.

    This guide uses the --aws-sigv4 option to sign requests (see the curl documentation).
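
    For example, the following prints only the first line of the version output, which contains the version number (a minimal check; the exact output varies between builds):

    curl -V | head -n 1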

 

  3. Set up environment variables

    Replace <access_key>, <secret_key>, and <region> (e.g. fsn1) with your actual information, and run the command below to add the environment variables to ~/.bashrc.

    cat << 'EOF' >> ~/.bashrc
    export ACCESS_KEY="<access_key>"
    export SECRET_KEY="<secret_key>"
    export REGION="<region>"
    export ENDPOINT="${REGION}.your-objectstorage.com"
    EOF

    Now load the new environment variables into your current shell:

    source ~/.bashrc
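
    To confirm the variables are available in your current shell, you can print them (the secret key is deliberately left out):

    echo "Access key: ${ACCESS_KEY}"
    echo "Endpoint:   ${ENDPOINT}"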

 

  4. Manage your Buckets

    • List all Buckets

      curl -sS "https://${ENDPOINT}/" \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3" \
        | grep -oP "<Name>\K[^<]+"

    • Create a Bucket

      Replace <bucket_name> with an available name.

      curl \
        -X PUT \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3" \
        "https://${ENDPOINT}/<bucket_name>"

    • Delete a Bucket

      Replace <bucket_name> with the actual name.

      curl \
        -X DELETE \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3" \
        "https://${ENDPOINT}/<bucket_name>"

 

  5. Manage your objects

    • List objects

      Replace <bucket_name> with the actual name.

      curl -sS "https://<bucket_name>.${ENDPOINT}" \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3" \
        | grep -oP "<Key>\K[^<]+"

    • Upload objects

      Replace <bucket_name>, <filename>, and <local_filename> with the actual names.

      curl "https://<bucket_name>.${ENDPOINT}/<filename>" \
        -T "<local_filename>" \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3"

    • Download objects

      Replace <bucket_name>, <filename>, and <local_filename> with the actual names.

      curl "https://<bucket_name>.${ENDPOINT}/<filename>" \
        -o "<local_filename>" \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3"

    • Delete objects

      Replace <bucket_name> and <filename> with the actual names.

      curl "https://<bucket_name>.${ENDPOINT}/<filename>" \
        -X DELETE \
        --user "${ACCESS_KEY}:${SECRET_KEY}" \
        --aws-sigv4 "aws:amz:${REGION}:s3"
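
    • Upload multiple objects (optional)

      As a small sketch, the loop below uploads every regular file from a hypothetical local directory ./backup, using each file name as the object key. Replace <bucket_name> and the directory with your actual names.

      for file in ./backup/*; do
        [ -f "$file" ] || continue   # skip anything that is not a regular file
        curl "https://<bucket_name>.${ENDPOINT}/$(basename "$file")" \
          -T "$file" \
          --user "${ACCESS_KEY}:${SECRET_KEY}" \
          --aws-sigv4 "aws:amz:${REGION}:s3"
      done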

 

You should now be ready to manage your Buckets. For more information about the available actions, see the article "List of supported actions".

 
