Hostname/IP does not match certificate's altnames: Host
I am trying to upload files to Linode Object Storage using Node.js & aws-sdk, but I am getting the following error:
NetworkingError [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames
Code:
const fs = require("fs");
const aws = require("aws-sdk");
const path = require("path");
const joinURL = require("url-join");

const uploadFile = () => {
  const fileName = "contacts.csv";
  const region = "";
  const client = new aws.S3({
    accessKeyId: "",
    secretAccessKey: "",
    endpoint: new aws.Endpoint(
      joinURL("https://" + `${region}` + "linodeobjects.com")
    ),
  });

  fs.readFile(path.resolve(fileName), {}, async (error, data) => {
    if (error)
      return console.log("There was a error reading your file\n" + error);
    console.log("Uploading...");
    try {
      await client
        .putObject({
          Bucket: "",
          Key: fileName,
          Body: JSON.stringify(data, null, 2),
        })
        .promise();
      console.log(`Uploaded ${file}`);
    } catch (e) {
      console.error("Error in uploading\n" + e);
      process.exit(1);
    }
  });
};

uploadFile();
13 Replies
✓ Best Answer
@andysh Actually joinURL will contain https://my-bucket-name.my-region.linodeobjects.com
I am referring to this npm package's code here: https://github.com/ghostdevv/linode-object-upload/blob/stable/bin/linode-object-upload.js#L32
The code you reference is different:
joinURL('https://', region + '.linodeobjects.com'),
This passes 2 arguments to joinURL - 'https://' and '[region].linodeobjects.com' (note the comma that separates the 2 arguments). I'm not familiar with what the joinURL function does, so this may or may not be a factor.
Your code only passes 1 argument - 'https://[bucket-name].[region].linodeobjects.com' (if the region variable also contains the bucket name and not just the region). Note there is no comma, so this is a single concatenated string.
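To illustrate the difference, here is a small sketch contrasting the two calls, assuming the url-join package normalizes the protocol separator the way the referenced package relies on, and a placeholder region of "eu-central-1":

const joinURL = require("url-join");

const region = "eu-central-1";

// Two arguments, with a "." before "linodeobjects.com", as in the referenced package:
console.log(joinURL("https://", region + ".linodeobjects.com"));
// -> https://eu-central-1.linodeobjects.com

// One concatenated argument, as in the original code (no comma, no "."):
console.log(joinURL("https://" + `${region}` + "linodeobjects.com"));
// -> https://eu-central-1linodeobjects.com
// If region also carried the bucket name, the result would be the bucket hostname
// rather than the cluster hostname.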
joinURL("https://" +
${region}
+ "linodeobjects.com")In join url if I don't mention bucket name then it gives me this message
The "endpoint" parameter to the S3 client should not contain your bucket name - it should be 'https://eu-central-1.linodeobjects.com' for Frankfurt, or 'https://us-east-1.linodeobjects.com' for Newark.
Your bucket name should be in the putObject call, here:
.putObject({
Bucket: bucket,
Key: fileName,
Body: data,
})
You communicate with the cluster (the eu-central-1… or us-east-1…) hostname, not the bucket hostname.
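For reference, a minimal sketch of that layout, assuming aws-sdk v2 and the Frankfurt cluster (the bucket name, key, and credential variables are placeholders):

const aws = require("aws-sdk");

const client = new aws.S3({
  accessKeyId: process.env.LINODE_ACCESS_KEY,
  secretAccessKey: process.env.LINODE_SECRET_KEY,
  // Cluster hostname only - no bucket name in the endpoint.
  endpoint: new aws.Endpoint("https://eu-central-1.linodeobjects.com"),
});

async function upload(data) {
  await client
    .putObject({
      Bucket: "my-bucket", // the bucket name goes on the request instead
      Key: "contacts.csv",
      Body: data,
    })
    .promise();
}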
Error in uploading UnexpectedParameter: Unexpected key 'rejectUnauthorized' found in params
This suggests somewhere you are passing a parameter called "rejectUnauthorized" where it should not be passed.
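As an aside, if TLS verification ever genuinely needed to be relaxed (it should not be needed against Linode's own Object Storage certificate), in aws-sdk v2 that would be a client-level HTTP option rather than a putObject parameter - a sketch only, and not something to leave enabled outside debugging:

const https = require("https");
const aws = require("aws-sdk");

const client = new aws.S3({
  endpoint: new aws.Endpoint("https://eu-central-1.linodeobjects.com"),
  httpOptions: {
    // Disables certificate verification for this client only - debugging use only.
    agent: new https.Agent({ rejectUnauthorized: false }),
  },
});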
Now I tried this GitHub code: https://github.com/dcoles/acme-linode-objectstorage but I don't understand why CNAME and DNS configuration is required if I am just going to upload/download files from object storage. Is DNS configuration a mandatory step?
No, you only need a custom certificate and DNS if you are hosting a static website off your bucket and want to use your own domain name - simply uploading a file to a bucket doesn't require either.
Look at your cert with:
openssl x509 -in certificate.crt -text -noout
and make sure everything matches. If it doesn't, either fix your program or acquire a new cert with the correct information in it.
-- sw
I did not find any Node.js code examples for uploading files to and downloading files from Linode Object Storage. Do you know of any good resources I can refer to? Thanks
joinURL("https://" +
${region}
+ "linodeobjects.com")
Is this your actual code?
Assuming your region variable expands to, for example “eu-central-1”, your endpoint would be “https://eu-central-1linodeobjects.com”.
It looks like you’re missing a “.” just before “linodeobjects.com”, although I would have expected this to give an error along the lines of invalid hostname.
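A quick illustration of that concatenation, assuming region expands to "eu-central-1":

const region = "eu-central-1";
console.log("https://" + `${region}` + "linodeobjects.com");  // https://eu-central-1linodeobjects.com
console.log("https://" + `${region}` + ".linodeobjects.com"); // https://eu-central-1.linodeobjects.com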
@andysh Actually joinURL will contain https://my-bucket-name.my-region.linodeobjects.com
I am referring to this npm package's code here: https://github.com/ghostdevv/linode-object-upload/blob/stable/bin/linode-object-upload.js#L32
In joinURL, if I don't mention the bucket name then it gives me this message: Error in uploading UnexpectedParameter: Unexpected key 'rejectUnauthorized' found in params
I was going through some posts on the Linode community and it seems like I have to add a certificate, so for that I was referring to this: https://www.linode.com/community/questions/20937/feature-request-automatic-ssltls-certificates-for-object-storage-lets-encrypt
Now I tried this GitHub code: https://github.com/dcoles/acme-linode-objectstorage but I don't understand why CNAME and DNS configuration is required if I am just going to upload/download files from object storage. Is DNS configuration a mandatory step?
If I add this command:
py -m acme_linode_objectstorage -k account_key.pem mybucket-Name --cluster region --agree-to-terms-of-service
it says no matching bucket found. Any idea why? What am I missing?
I changed these lines and it worked for me
const region = "my-region" //hardcoded the value here
joinURL("https://", region + ".linodeobjects.com")
Also changed putObject to
.putObject({
  Bucket: "",
  Key: fileName,
  Body: JSON.stringify(data, null, 2),
})
instead of using this:
.putObject({
  Bucket: "",
  Key: fileName,
  Body: JSON.stringify(data, null, 2),
  rejectUnauthorized: false,
})
I missed that comma in joinURL. I was thinking I needed to add a certificate, configure DNS and all that stuff. I think I should leave programming :)
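For anyone looking for the download side asked about earlier in the thread, here is a rough companion sketch using the same client configuration that worked above (bucket, key, and destination are placeholders):

const fs = require("fs");
const aws = require("aws-sdk");
const joinURL = require("url-join");

const region = "my-region"; // e.g. "eu-central-1"
const client = new aws.S3({
  accessKeyId: process.env.LINODE_ACCESS_KEY,
  secretAccessKey: process.env.LINODE_SECRET_KEY,
  endpoint: new aws.Endpoint(joinURL("https://", region + ".linodeobjects.com")),
});

// Fetch an object and write it back to disk; data.Body is a Buffer in aws-sdk v2.
async function download(bucket, key, destination) {
  const data = await client.getObject({ Bucket: bucket, Key: key }).promise();
  fs.writeFileSync(destination, data.Body);
}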
I changed these lines and it worked for me
Brilliant!
I think I should leave programming :)
Don’t be daft; this is all part of the fun :)
Is it possible to make objects inside a bucket publicly accessible? I mean, can I set them to public read so that anyone can download the file from a URL? I have already set the bucket access to public read, but can we do the same thing for each individual object?
Is it possible to make objects inside a bucket publicly accessible?
I have already set the bucket access to public read, but can we do the same thing for each individual object?
That is actually how the permissions work - they are applied per object.
The bucket permission you see in the Manager dictates what permissions an object gets when it is uploaded but no permission is specified on the object itself.
To give an object read permission for everyone, specify “ACL: ‘public-read’” in your PutObject call.
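A small sketch of that, assuming aws-sdk v2 and an already-configured client as in the earlier examples (the bucket name is a placeholder):

async function uploadPublic(client, fileName, data) {
  await client
    .putObject({
      Bucket: "my-bucket",
      Key: fileName,
      Body: data,
      ACL: "public-read", // per-object permission, applied at upload time
    })
    .promise();
}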
@andysh I tried using ACL: "public-read" but it did not work.
code:
var s3 = new aws.S3({
  accessKeyId: "",
  secretAccessKey: "",
  endpoint: new aws.Endpoint(
    joinURL("https://", region + ".linodeobjects.com")
  ),
  ACL: "public-read",
});

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: "",
    metadata: function (req, file, cb) {
      cb(null, { fieldName: file.fieldname });
    },
    key: function (req, file, cb) {
      cb(null, Date.now().toString());
    },
  }),
});

app.post("/upload", upload.single("image"), function (req, res, next) {
  console.log("req", req);
  res.send("Successfully uploaded " + " files!");
});
I checked a few articles/blogs on the internet but no luck. Do we need to explicitly set a bucket ACL policy using boto3 for Linode Object Storage, or is it not required?
I don’t know what library or language you are using so I’m afraid I cannot give you specific guidance based on the code sample you provided.
However the ACL property is usually set when you actually upload the object - in the PutObject call. You’re specifying the ACL property on the connection to S3, which is not valid.
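Judging by the code sample, the library looks like multer-s3 on top of aws-sdk v2; if so, a rough sketch of moving the ACL onto the upload itself might look like this (region, bucket name, and key logic are placeholders):

const aws = require("aws-sdk");
const multer = require("multer");
const multerS3 = require("multer-s3");
const joinURL = require("url-join");

const s3 = new aws.S3({
  accessKeyId: process.env.LINODE_ACCESS_KEY,
  secretAccessKey: process.env.LINODE_SECRET_KEY,
  endpoint: new aws.Endpoint(joinURL("https://", "eu-central-1.linodeobjects.com")),
  // No ACL here - it is not a connection-level option.
});

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: "my-bucket",
    acl: "public-read", // applied to each object as it is uploaded
    key: function (req, file, cb) {
      cb(null, Date.now().toString());
    },
  }),
});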