305

I have been searching the web for over two days now and have probably looked through most of the documented scenarios and workarounds, but nothing has worked for me so far.

I am on AWS SDK for PHP V2.8.7 running on PHP 5.3.

I am trying to connect to my Amazon S3 bucket with the following code:

// Create an `Aws` object using a configuration file
$aws = Aws::factory('config.php');

// Get the client from the service locator by namespace
$s3Client = $aws->get('s3');

$bucket = "xxx";
$keyname = "xxx";

try {
    $result = $s3Client->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'Body' => 'Hello World!'
    ));

    $file_error = false;
} catch (Exception $e) {
    $file_error = true;

    echo $e->getMessage();

    die();
}

My config.php file is as follows:

return [
    // Bootstrap the configuration file with AWS specific features
    'includes' => ['_aws'],
    'services' => [
        // All AWS clients extend from 'default_settings'. Here we are
        // overriding 'default_settings' with our default credentials and
        // providing a default region setting.
        'default_settings' => [
            'params' => [
                'credentials' => [
                    'key'    => 'key',
                    'secret' => 'secret'
                ]
            ]
        ]
    ]
];

It is producing the following error:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I've already checked my access key and secret at least 20 times, generated new ones, and used different methods to pass in the information (e.g. a profile and including the credentials in code), but nothing is working at the moment.

5
  • 5
    So, the AWS SDK just implements a bunch of direct API calls. With AWS, every single call you make takes your private key (or secret above) and uses it to calculate a signature based on your access key, the current timestamp, plus a bunch of other factors. See docs.aws.amazon.com/general/latest/gr/…. It's a long shot, but given that the timestamp is included, perhaps your local environment's time is off? (A quick way to rule that out is sketched right after these comments.) Commented May 29, 2015 at 0:58
  • Happened when we had passed an incorrect size (Content-Length) in object metadata. (Long version: we were directly passing the input stream from a Java HttpServletRequest to the S3 client, and passing in request.getContentLength() as Content-Length via metadata; when the servlet was (randomly) receiving chunked requests (Transfer-Encoding: chunked), getContentLength() was returning -1 - which led putObject to fail (randomly). Obscure; but clearly our fault because we were passing an incorrect object size.) Commented Jul 5, 2020 at 2:35
  • 4
    First-time visitors: please go through the many answers; there are many scenarios in which you will get this error, and various solutions are given on this page Commented Sep 13, 2021 at 14:43
  • In my case, with OpenSearch, I had given different info in the path and the URL...
    – coolman
    Commented Sep 17, 2021 at 4:43
  • This error was occurring for me because my query params were not included as part of my path when signing. So it should be path: "/default/resource?param1=1111&param2=11111", not just path: "/default/resource" Commented Apr 6, 2023 at 18:21
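
To quickly rule out the clock-skew theory from the first comment, here is a rough sketch in plain PHP (nothing SDK-specific; the public S3 endpoint is used only to fetch a Date header) that compares your local clock with the time AWS reports:

// Rough sketch: compare the local clock with the Date header returned by S3.
// A skew of more than a few minutes will break request signing.
$headers = get_headers('https://s3.amazonaws.com', 1);
// If the request was redirected, the Date header may appear once per response.
$date    = is_array($headers['Date']) ? end($headers['Date']) : $headers['Date'];
echo 'Clock skew in seconds: ' . abs(time() - strtotime($date)) . PHP_EOL;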

69 Answers

196

After two days of debugging, I finally discovered the problem...

The key I was assigning to the object started with a period, i.e. ..\images\ABC.jpg, and this caused the error to occur.

I wish the API provided a more meaningful and relevant error message; alas, I hope this will help someone else out there!
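
If you want to guard against this programmatically, here is a minimal sketch (normalizeS3Key() is a hypothetical helper, not part of the SDK) that strips the characters which commonly break the signature before the key reaches putObject():

// Hypothetical helper, not part of the SDK: normalize a key before putObject().
function normalizeS3Key($key) {
    // Use forward slashes only and drop stray whitespace
    $key = str_replace('\\', '/', trim($key));
    // Strip leading "./", "../" and "/" segments, which tend to break the signature
    return preg_replace('#^(\./|\.\./|/)+#', '', $key);
}

$keyname = normalizeS3Key('..\images\ABC.jpg'); // "images/ABC.jpg"

$result = $s3Client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body'   => 'Hello World!'
));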

27
  • 42
    A leading slash also caused this issue for me. You need just path/to/file, not /path/to/file
    – Graham
    Commented Jan 17, 2019 at 22:39
  • 17
    And for me the issue was whitespace inside the key
    – Adam Szmyd
    Commented May 27, 2019 at 9:59
  • 16
    To add to this, I was getting this error message when having a plus sign + in my key.
    – LCC
    Commented Jul 25, 2019 at 11:58
  • 19
    In my case this was caused by having a path in the bucket parameter. Instead of bucket = "bucketname", I had bucket = "bucketname/something". This also gives the Signature does not match error. Commented Nov 14, 2019 at 16:27
  • 9
    I was getting this when I did not provide the Content-Type header in my file upload request Commented Mar 11, 2020 at 7:24
77

I got this error because of wrong credentials. I think there were invisible characters when I pasted them originally.
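
One way to guard against this is to trim whatever you paste or read from the environment before handing it to the SDK. A minimal sketch in plain PHP (the environment variable names are the commonly used ones, but treat them as an assumption):

// Sketch: strip invisible whitespace/newlines that sneak in when pasting keys.
$key    = trim((string) getenv('AWS_ACCESS_KEY_ID'));
$secret = trim((string) getenv('AWS_SECRET_ACCESS_KEY'));

// AWS keys should contain no whitespace at all, so fail loudly if any remains.
if (preg_match('/\s/', $key . $secret)) {
    die('Credentials still contain whitespace - re-copy them.');
}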

7
  • 16
    I simply double-clicked on key_hash_lala/key_hash_continues and it selected only one part. Alas, how hard is it to tell the user "wrong password, dude!"?
    – Ufos
    Commented May 10, 2019 at 10:34
  • The first time, I had issues copying the key from the downloadable CSV. For the second key I created, I just copied it from the browser and didn't have any issues
    – nthaxis
    Commented Jul 11, 2019 at 19:49
  • +1 to @nthaxis - copying from the .csv caused a failure - copying directly from the browser works a treat
    – NKCampbell
    Commented Sep 15, 2019 at 3:38
  • For me, it was a result of wrong credentials as well. I missed a character in my credentials. Commented Jan 8, 2021 at 4:44
  • For all of us who use double-click to select and copy: it won't copy trailing "+" chars!
    – Cesc
    Commented May 18, 2022 at 10:15
44

I had the same error in Node.js, but adding signatureVersion to the S3 constructor helped me:

const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
});
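
For anyone on the PHP SDK from the question, there appears to be an equivalent knob. A hedged sketch, assuming AWS SDK for PHP v2 (v3 reportedly takes 'signature_version' => 'v4' in the client config instead):

// Sketch for AWS SDK for PHP v2: force Signature Version 4 explicitly.
$s3Client = Aws\S3\S3Client::factory(array(
    'key'       => 'key',
    'secret'    => 'secret',
    'region'    => 'eu-central-1',
    'signature' => 'v4',
));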
5
  • 3
    Tried many things before I stumbled onto this! This was the answer for me.
    – DavidG
    Commented Nov 13, 2020 at 10:27
  • Worked for me. The file path was OK and everything else was OK; the same function is currently in use in another app and never gives this error there. Thanks, Oleg Commented Jul 2, 2021 at 0:09
  • 3
    This solved it for me too. Commented May 11, 2022 at 18:48
  • 1
    This (signatureVersion) is what worked for me as well. It would have been helpful if the documentation had mentioned this: docs.aws.amazon.com/sdk-for-php/v3/developer-guide/…
    – Manas
    Commented Feb 1, 2023 at 9:03
  • You are a lifesaver. I had been facing this error for the past 8 hours Commented Jun 13, 2023 at 18:34
38

I've just encountered this and, I'm a little embarrassed to say, it was because I was using an HTTP POST request instead of PUT.

Despite my embarrassment, I thought I'd share in case it saves somebody an hour of head scratching.

3
  • 8
    lol, I'm so glad you shared this -- I did the same thing and didn't even think to check that! Commented Dec 9, 2022 at 22:03
  • 3
    You should not be embarrassed, saviour of my day! Commented Feb 3 at 12:48
  • 1
    It saved my time and day
    – Nirav Shah
    Commented Jul 8 at 8:11
34

I had the same problem when I tried to copy an object with some UTF-8 characters in its key. Below is a JS example:

var s3 = new AWS.S3();

s3.copyObject({
    Bucket: 'somebucket',
    CopySource: 'path/to/Weird_file_name_ðÓpíu.jpg',
    Key: 'destination/key.jpg',
    ACL: 'authenticated-read'
}, cb);

Solved by encoding the CopySource with encodeURIComponent()

1
  • Thanks, it worked for me! I also tried to encode the "Key", since the key also contains UTF-8 characters, but it ended up in the wrong directory. Encoding only the CopySource works just fine.
    – Eric Fu
    Commented Jan 28, 2022 at 13:30
27

My access key had some special characters in it that were not properly escaped.

I didn't check for special characters when I did the copy/paste of the keys. Tripped me up for a few mins.

A simple backslash fixed it. Example (not my real access key obviously):

secretAccessKey: 'Gk/JCK77STMU6VWGrVYa1rmZiq+Mn98OdpJRNV614tM'

becomes

secretAccessKey: 'Gk\/JCK77STMU6VWGrVYa1rmZiq\+Mn98OdpJRNV614tM'

2
  • 2
    This is a good catch; I have it in mine too, but this didn't solve my issue either Commented Apr 13, 2023 at 2:32
  • A quick double-click copy and paste misses the / and the ending; happened to me. Commented Mar 13 at 11:07
25

This error seems to occur mostly if there is a space before or after your secret key

2
  • 1
    Had the same problem. Skype sometimes copies values with blank lines. Just paste it into Notepad and then copy it without the whitespace. Commented Aug 11, 2020 at 18:04
  • 1
    Yes! Also check whether you have spaces in any other headers. Commented Aug 31, 2020 at 9:41
16

For Python (boto3), set signature_version to s3v4:

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    aws_access_key_id='AKIAIO5FODNN7EXAMPLE',
    aws_secret_access_key='ABCDEF+c2L7yXeGvUyrPgYsDnWRRC1AYEXAMPLE',
    config=Config(signature_version='s3v4')
)
14

In my case I was using s3.getSignedUrl('getObject') when I needed to be using s3.getSignedUrl('putObject') (because I'm using a PUT to upload my file), which is why the signatures didn't match.

2
  • 1
    Thank you! I was using POST instead of PUT... using PUT just worked.
    – Jk33
    Commented Apr 8, 2022 at 4:13
  • 1
    This also fixed my problem. ChatGPT gave me wrong code =P
    – Stefan T
    Commented Feb 17 at 21:06
13

In a previous version of the aws-php-sdk, prior to the deprecation of the S3Client::factory() method, you were allowed to place part of the file path (or Key, as it is called in the S3Client->putObject() parameters) in the Bucket parameter. I had a file manager in production use, built on the v2 SDK. Since the factory method still worked, I did not revisit this module after updating to ~3.70.0. Today I spent the better part of two hours debugging why I had started receiving this error, and it ended up being due to the parameters I was passing (which used to work):

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures/catsinhats',
    'Key' => 'whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

I had to move the catsinhats portion of my bucket/key path to the Key parameter, like so:

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures',
    'Key' => 'catsinhats/whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

What I believe is happening is that the Bucket name is now being URL-encoded. After further inspection of the exact message I was receiving from the SDK, I found this:

Error executing PutObject on https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png

AWS HTTP error: Client error: PUT https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png resulted in a 403 Forbidden

This shows that the / I provided to my Bucket parameter has been through urlencode() and is now %2F.

The way the signature works is fairly complicated, but the issue boils down to this: the bucket and key are used to generate the signature. If they do not match exactly on both the calling client and within AWS, the request will be denied with a 403. The error message does actually point out the issue:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

So, my Key was wrong because my Bucket was wrong.

1
  • Thank you for posting this. When I saw "check your key" I was thinking the access key or secret key was wrong; in my case it was the object key (and bucket). So moving the bucket and object key values around as you describe worked. Amazon needs to clarify which key they're complaining about, IMO. Thanks again
    – tbone
    Commented Dec 30, 2021 at 20:44
9

Actually, in Java I was getting the same error. After spending 4 hours debugging it, I found that the problem was in the metadata of the S3 objects: there was a space when setting the Cache-Control value on the S3 files. This space was allowed in the 1.6.* version of the SDK, but in 1.11.* it is disallowed, and thus it was throwing the signature mismatch error.

1
  • Also happens if you pass an incorrect Content-Length in the metadata Commented Jul 5, 2020 at 2:37
8

For me, I used axios, and by default it sends the header

content-type: application/x-www-form-urlencoded

so I changed it to send:

content-type: application/octet-stream

and I also had to add this Content-Type to the AWS signature

const params = {
    Bucket: bucket,
    Key: key,
    Expires: expires,
    ContentType: 'application/octet-stream'
}

const s3 = new AWS.S3()
s3.getSignedUrl('putObject', params)
1
  • Same, changing content-type did the trick.
    – Toto Briac
    Commented Jul 10, 2021 at 8:56
7

Another possible issue might be that the metadata values contain non US-ASCII characters. For me it helped to UrlEncode the values when adding them to the putRequest:

request.Metadata.Add(AmzMetaPrefix + "artist", HttpUtility.UrlEncode(song.Artist));
request.Metadata.Add(AmzMetaPrefix + "title", HttpUtility.UrlEncode(song.Title));
7

I had the same issue; the problem was that I had imported the wrong environment variable, which meant that my AWS secret key was wrong. Based on reading all the answers, I would verify that your access key ID and secret key are right and that there are no additional characters or anything.

7

If none of the other mentioned solutions works for you, then try running

aws configure

This command (see Getting started with the AWS CLI) will prompt you for your keys, region and output format.

Hope this helps!
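
After running it, the PHP SDK should be able to pick the credentials up from the shared credentials file instead of them being hard-coded. A hedged sketch using the SDK v3 'profile' option:

// Sketch for SDK v3: use the credentials written by `aws configure`
// to ~/.aws/credentials instead of embedding them in code.
$s3Client = new Aws\S3\S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-1',
    'profile' => 'default',
]);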

5

In my case I parsed an S3 URL into its components.

For example:

Url:    s3://bucket-name/path/to/file

Was parsed into:

Bucket: bucket-name
Path:   /path/to/file

Having a leading '/' in the path part caused the request to fail.
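
A minimal sketch of that fix in PHP, assuming the URL really is of the form s3://bucket/key (parse_url() splits it happily; the key just needs its leading slash stripped):

// Sketch: split an s3:// URL and strip the leading slash from the key.
$url    = 's3://bucket-name/path/to/file';
$parts  = parse_url($url);
$bucket = $parts['host'];              // "bucket-name"
$key    = ltrim($parts['path'], '/');  // "path/to/file", not "/path/to/file"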

5

I had the same issue. I had the default method, PUT, set when defining the pre-signed URL, but was trying to perform a GET. The error was due to the method mismatch.
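
For the PHP SDK v3, a hedged sketch of the same rule: the command you pre-sign determines the HTTP verb the signature covers, so the resulting URL must be used with that exact verb (the bucket and key below are placeholders):

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-1',
]);

// PutObject -> the URL must be used with PUT; GetObject -> with GET.
$cmd     = $s3Client->getCommand('PutObject', [
    'Bucket' => 'xxx',
    'Key'    => 'uploads/file.txt',
]);
$request = $s3Client->createPresignedRequest($cmd, '+15 minutes');
$url     = (string) $request->getUri();

If the URL was signed for PutObject, upload with an HTTP PUT; hitting it with GET or POST will produce exactly this signature error.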

2
  • This worked for me. The HTTP verb (PUT, POST) used to generate the signed URL must be the same as the verb used when performing an upload with that URL. Commented May 27, 2020 at 21:46
  • It was the opposite for me, i.e. I was using GET to define the presigned URL and then was trying to use the URL with the PUT method, which obviously resulted in a 403. Commented Feb 12 at 10:00
5

When I knowingly gave a wrong secret key, with the literal value "secret", it gave this error. I was expecting some more meaningful error message, like "authentication failed" or something.

4

Most of the time it happens because of a wrong key (AWS_SECRET_ACCESS_KEY). Please cross-check your AWS_SECRET_ACCESS_KEY. Hope it will work...

4

This issue happened to me because I was accidentally assigning the value of the ACCESS_KEY_ID to SECRET_ACCESS_KEY_ID. Once this was fixed everything worked fine.

3

I just experienced this uploading an image to S3 using the AWS SDK with React Native. It turned out to be caused by the ContentEncoding parameter.

Removing that parameter "fixed" the issue.

3

Generating a fresh access key worked for me.

1
  • A fresh access key worked for me too - thankfully I got the hint from reading github.com/aws/aws-sdk-js/issues/86#issuecomment-153433220, and in my case it was SQS that was throwing the exception in the title. The keys I was using earlier (when getting the exception) were 97 days old, with an exclamation mark in the IAM dashboard
    – gawkface
    Commented Apr 27, 2021 at 1:59
3

After debugging and spending a lot of time, in my case the issue was with the access_key_id and secret_access_key. Just double-check your credentials (or generate new ones if possible) and make sure you are passing the credentials in the params.

1
  • When I read the above answer, I double-checked my secret key and realized that I had added a / at the end. Commented Jul 22, 2020 at 12:45
2

Like others, I had a similar issue, but with the Java SDK v1. For me, the two fixes below helped.

  1. My object key looked like this: /path/to/obj/. First, I removed the / at the beginning.
  2. Point 1 alone did not solve the issue, so I also upgraded my SDK version from 1.9.x to 1.11.x.

After applying both fixes, it worked. So my suggestion is: don't slog it out. If nothing else is working, just try upgrading the library.

2

I spent 8 hours trying to fix this issue. For me, everything mentioned in all the answers was fine: the keys were correct and tested through the CLI, and I was using SDK v3, which is the latest and doesn't need the signature version. It finally turned out to be caused by passing the wrong kind of object in the Body (neither text nor an array buffer). Yes, it's one of the most stupid error messages I have ever seen in my 16-year career. AWS sometimes drives me crazy.

1
  • I literally spent 1.5 days trying to fix this issue; this suggestion helped. I was uploading a blob object earlier, but then I switched to an array buffer with the content type 'application/octet-stream' and it worked. Commented Jan 26 at 7:42
1

I had a similar error, but for me it seemed to be caused by re-using an IAM user to work with S3 in two different Elastic Beanstalk environments. I treated the symptom by creating an identically permissioned IAM user for each environment and that made the error go away.

1

I don't know if anyone else came to this issue while trying to test the generated URL in a browser, but if you are using Postman and copy the generated AWS URL from the Raw tab, you are going to get the above error because of escaped backslashes.

Use the Pretty tab to copy and paste the URL to see if it actually works.

I ran into this issue recently, and this solution solved it for me. It's for testing purposes, to see whether you can actually retrieve the data through the URL.

This answer is a reference for those who try to generate a temporary download link from AWS, or generally generate a URL from AWS to use.

1
  • Can you please tell me how you solved that issue? It is working fine in Postman but not in Node.js Commented Jun 13, 2021 at 13:25
1

If you are an Android developer and are using the signature function from the AWS sample code, you are most likely wondering why ListS3Object works but GetS3Object does not. This is because when you set setDoOutput(true) and use the GET HTTP method, Android's HttpURLConnection switches the request to a POST, thus invalidating your signature. Check my original post of the issue.

1

I was getting this error in our shared environment where the SDK was being used, but with the same key/secret and the AWS CLI it worked fine. The build system script had a space after the access key, secret key and session key values, which the code read in as well. So the fix for me was to adjust the build script to remove the spaces after the variables being used.

Just adding this for anyone who might miss that frustrating invisible space at the end of their creds.

1

I encountered the same error message when using the Amazon SES SDK to instantiate an AmazonSimpleEmailServiceClient object and subsequently call GetSendStatistics.

I was using my administrative-level IAM user's credentials to connect, which failed with the familiar error: "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."

I resolved this by creating an access key under My Security Credentials for my IAM user. When I used the credentials from the new access key, my connection to Amazon SES via the SDK worked.
