File Storage

This section gives details on how ScaffoldHub implements file storage.

For setup, refer to Setup > File Storage.

Configuration

Each type of file upload has its own configuration, and they are all defined in:

  • frontend/src/security/storage.ts

  • backend/src/security/storage.ts

/**
 * Storage permissions.
 *
 * @id - Used to identify the rule on permissions and upload.
 * @folder - Folder where the files will be saved
 * @maxSizeInBytes - Max allowed size in bytes
 * @bypassWritingPermissions - Does not validate if the user has permission to write
 * @publicRead - The file can be publicly accessed via the URL without the need for a signed token
 */
export default class Storage {
  static get values() {
    return {
      userAvatarsProfiles: {
        id: 'userAvatarsProfiles',
        folder: 'user/avatars/profile/:userId',
        maxSizeInBytes: 10 * 1024 * 1024,
        bypassWritingPermissions: true,
        publicRead: true,
      },
      settingsLogos: {
        id: 'settingsLogos',
        folder: 'tenant/:tenantId/settings/logos',
        maxSizeInBytes: 10 * 1024 * 1024,
        publicRead: true,
      },
      settingsBackgroundImages: {
        id: 'settingsBackgroundImages',
        folder: 'tenant/:tenantId/settings/backgroundImages',
        maxSizeInBytes: 10 * 1024 * 1024,
        publicRead: true,
      },
      productPhotos: {
        id: 'productPhotos',
        folder: 'tenant/:tenantId/product/photos',
        maxSizeInBytes: 1000000,
      },
      orderAttachments: {
        id: 'orderAttachments',
        folder: 'tenant/:tenantId/order/attachments',
        maxSizeInBytes: 1000000,
      },
    };
  }
}
  • id: Used to identify the rule on permissions and upload.

  • folder: Folder where the files will be saved. It accepts two parameters, :userId and :tenantId, which are replaced by their real values when the file is saved.

  • maxSizeInBytes: Max allowed size in bytes.

  • bypassWritingPermissions: Does not validate if the user has permission to write. This is usually used when the user id is part of the path, so users can only access their own folder.

  • publicRead: The file can be publicly accessed via the URL without the need for a signed token.
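As a sketch, a new upload rule can be added by appending one more entry to Storage.values. The customerDocuments rule below is hypothetical; its name, folder, and size limit are assumptions for illustration only:

```typescript
// Hypothetical new rule; the name, folder, and size are illustrative only.
const customerDocuments = {
  id: 'customerDocuments',
  folder: 'tenant/:tenantId/customer/documents',
  maxSizeInBytes: 5 * 1024 * 1024, // 5 MB
  // no publicRead: downloads require a signed token
};

// At upload time, the backend replaces the placeholders with real ids:
const privateFolder = customerDocuments.folder.replace(
  ':tenantId',
  'tenant-123',
);
// privateFolder === 'tenant/tenant-123/customer/documents'
```

Because no publicRead flag is set, files saved under this rule would only be downloadable through a signed URL.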

Credentials

For Amazon S3 and Google Cloud Storage, the uploaded files do not pass through the backend. The backend creates credentials that allow the frontend to submit directly to the file storage provider.

Before sending the credentials to the frontend, the backend validates if the user has all the needed permissions.

import PermissionChecker from '../../services/user/permissionChecker';
import Storage from '../../security/storage';
import FileStorage from '../../services/file/fileStorage';
import ApiResponseHandler from '../apiResponseHandler';
import Error403 from '../../errors/Error403';

export default async (req, res) => {
  try {
    const permissionChecker = new PermissionChecker(req);

    const filename = req.query.filename;
    const storageId = req.query.storageId;

    if (!req.currentUser || !req.currentUser.id) {
      throw new Error403();
    }

    if (!req.currentTenant || !req.currentTenant.id) {
      throw new Error403();
    }

    // The storage config has the information on where
    // to store the file and the max allowed size
    const config = Storage.values[storageId];

    if (!config) {
      throw new Error403();
    }

    if (
      // Some permissions are related to the user itself,
      // not to any entity; that's why there is a permissions bypass
      !config.bypassWritingPermissions &&
      !permissionChecker.hasStorage(storageId)
    ) {
      throw new Error403();
    }

    // The private URL is the path relative to the bucket/file system folder
    let privateUrl = `${config.folder}/${filename}`;
    privateUrl = privateUrl.replace(':tenantId', req.currentTenant.id);
    privateUrl = privateUrl.replace(':userId', req.currentUser.id);

    const maxSizeInBytes = config.maxSizeInBytes;
    const publicRead = Boolean(config.publicRead);

    const downloadUrl = await FileStorage.downloadUrl(privateUrl, publicRead);

    /**
     * Upload credentials have the URL and the fields to be sent
     * to the upload server.
     */
    const uploadCredentials = await FileStorage.uploadCredentials(
      privateUrl,
      maxSizeInBytes,
      publicRead,
    );

    await ApiResponseHandler.success(req, res, {
      privateUrl,
      downloadUrl,
      uploadCredentials,
    });
  } catch (error) {
    await ApiResponseHandler.error(req, res, error);
  }
};
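For reference, a success payload from this endpoint has the shape sketched below. The values are illustrative assumptions, not real credentials; the exact upload fields depend on the storage provider:

```typescript
// Illustrative payload for the productPhotos rule; all values are fake.
const exampleResponse = {
  // Path relative to the bucket/file system folder
  privateUrl: 'tenant/tenant-123/product/photos/photo.png',
  // Signed (or public) URL the frontend can use to read the file
  downloadUrl:
    'https://example-bucket.s3.amazonaws.com/tenant/tenant-123/product/photos/photo.png?X-Amz-Signature=abc',
  uploadCredentials: {
    // The frontend POSTs the file directly to this URL,
    // appending each entry of `fields` to the form data.
    url: 'https://example-bucket.s3.amazonaws.com',
    fields: {
      key: 'tenant/tenant-123/product/photos/photo.png',
      policy: 'base64-encoded-policy',
    },
  },
};
```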

Localhost

Localhost storage works a bit differently from Amazon S3 and Google Cloud Storage. Instead of generating a provider token, the backend uses the JWT token of the current application: the frontend passes it to a dedicated endpoint that validates the token and handles the upload to the localhost server.
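A minimal sketch of how such a localhost download URL could be assembled. The endpoint path and parameter names here are assumptions for illustration, not ScaffoldHub's actual routes:

```typescript
// Sketch only: builds a localhost download URL carrying the app's own JWT.
// The `/file/download` path and parameter names are assumed, not real routes.
function localhostDownloadUrl(
  baseUrl: string,
  privateUrl: string,
  jwtToken: string,
): string {
  const params = new URLSearchParams({ privateUrl, token: jwtToken });
  return `${baseUrl}/file/download?${params.toString()}`;
}

const url = localhostDownloadUrl(
  'http://localhost:8080/api',
  'tenant/tenant-123/product/photos/photo.png',
  'jwt-token',
);
// The endpoint only needs to validate `token` and then stream the file at
// `privateUrl` from the local file system folder.
```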

Frontend

The frontend builds the upload form using the credentials created by the backend.

static async uploadToServer(file, uploadCredentials) {
  try {
    const url = uploadCredentials.url;

    const formData = new FormData();

    if (uploadCredentials.fields) {
      for (const [key, value] of Object.entries(
        uploadCredentials.fields,
      )) {
        formData.append(key, value as string);
      }
    }

    formData.append('file', file);

    return axios.post(url, formData, {
      headers: {
        'Content-Type': 'multipart/form-data',
      },
    });
  } catch (error) {
    console.error(error);
    throw error;
  }
}
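One detail worth noting: for S3-style POST uploads, the file field must be appended after the credential fields, since S3 ignores form fields that appear after the file. That is why uploadToServer appends the file last. The helper below is a hypothetical sketch of that append order, not ScaffoldHub code:

```typescript
// Hypothetical shape of the credentials returned by the backend.
interface UploadCredentials {
  url: string;
  fields?: Record<string, string>;
}

// Sketch mirroring uploadToServer's append order:
// provider fields first, the file entry last.
function formEntries(
  credentials: UploadCredentials,
  filename: string,
): Array<[string, string]> {
  const entries: Array<[string, string]> = Object.entries(
    credentials.fields || {},
  );
  entries.push(['file', filename]);
  return entries;
}

const entries = formEntries(
  {
    url: 'https://example-bucket.s3.amazonaws.com',
    fields: { key: 'a/b.png' },
  },
  'b.png',
);
// entries === [['key', 'a/b.png'], ['file', 'b.png']]
```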