File storage
Laravel has a filesystem abstraction that lets us easily change where files are stored.
When running on Lambda, you will need to use the s3 adapter to store files on AWS S3.
Quick setup with Lift
The easiest way to set up S3 storage is using Serverless Lift:
First install the Lift plugin:
```bash
serverless plugin install -n serverless-lift
```

Then use the storage construct in `serverless.yml`:
```yaml
provider:
    # ...
    environment:
        # environment variable for Laravel
        FILESYSTEM_DISK: s3
        AWS_BUCKET: ${construct:storage.bucketName}

constructs:
    storage:
        type: storage
        extensions:
            bucket:
                Properties:
                    OwnershipControls:
                        Rules:
                            - ObjectOwnership: BucketOwnerPreferred
```

S3 ACLs: Since April 2023, S3 buckets have ACLs disabled by default. However, Laravel's storage layer (Flysystem) sends ACL headers on every S3 operation (put, copy, move…). Without the OwnershipControls configuration above, these operations fail silently: data is not written to S3, but no error is raised.
To avoid silent failures, we also recommend setting `'throw' => true` on your S3 disk in `config/filesystems.php`:
```php
's3' => [
    'driver' => 's3',
    // ...
    'throw' => true,
],
```

The BucketOwnerPreferred setting lets the bucket accept ACL headers while keeping the bucket owner in full control. Read more in the AWS documentation.
That's it! Lift automatically:
- Creates the S3 bucket
- Grants IAM permissions to your Lambda functions
- Exposes the bucket name via `${construct:storage.bucketName}`
The AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN) are set automatically by AWS Lambda; you don't have to define them.
Uploading files
Small files (< 6 MB)
For files under 6 MB, you can upload directly through your Laravel application as usual:
```php
$request->file('document')->store('documents');
```

Large files

AWS Lambda has a 6 MB request payload limit. For larger files, you must upload directly to S3 from the browser using presigned URLs.
Since the browser uploads directly to S3 (cross-origin), you need to configure CORS on the bucket and add a lifecycle rule to clean up temporary files. Here is a complete storage construct configuration:
```yaml
constructs:
    storage:
        type: storage
        lifecycleRules:
            # Temporary upload files will be cleaned after 1 day
            - prefix: tmp/
              expirationInDays: 1
        extensions:
            bucket:
                Properties:
                    OwnershipControls:
                        Rules:
                            - ObjectOwnership: BucketOwnerPreferred
                    CorsConfiguration:
                        CorsRules:
                            - AllowedOrigins:
                                  - ${construct:website.url}
                              AllowedHeaders:
                                  - '*'
                              AllowedMethods:
                                  - PUT
```

If you are not using the website construct, replace `${construct:website.url}` with your application's URL, or use `'*'` during development.
How it works:
- Your frontend requests a presigned upload URL from your backend
- Your backend generates a temporary presigned URL using Laravel's Storage
- The frontend uploads the file directly to S3
- The frontend sends the S3 key back to your backend to save in the database
Backend - Generate presigned URL:
temporaryUploadUrl returns an array with the URL and the headers that must be forwarded to S3 (they contain the request signature):
```php
use Illuminate\Http\JsonResponse;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

public function presignedUploadUrl(): JsonResponse
{
    $key = 'tmp/' . Str::uuid() . '.pdf';

    // Generate a presigned PUT URL valid for 15 minutes
    $uploadUrl = Storage::temporaryUploadUrl($key, now()->addMinutes(15), [
        // Optional: we restrict to PDF files here
        'ContentType' => 'application/pdf',
    ]);

    // PSR-7 headers are string[] values and include Host, which browsers forbid
    $headers = collect($uploadUrl['headers'])
        ->except(['Host'])
        ->map(fn (array $values): string => implode(', ', $values))
        ->all();

    return response()->json([
        'url' => $uploadUrl['url'],
        'headers' => $headers,
        'key' => $key,
    ]);
}
```

Frontend - Upload to S3:
```js
// 1. Get presigned URL from your backend
const { url, headers, key } = await fetch('/api/presigned-upload-url', {
    method: 'POST',
    headers: { 'X-CSRF-TOKEN': csrfToken },
}).then(r => r.json());

// 2. Upload directly to S3, forwarding the presigned headers
await fetch(url, {
    method: 'PUT',
    body: file,
    headers: {
        'Content-Type': file.type,
        ...headers,
    },
});

// 3. Send the S3 key to your backend (via a form field, API call, etc.)
await fetch('/api/documents', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'X-CSRF-TOKEN': csrfToken,
    },
    body: JSON.stringify({ file_key: key }),
});
```

Backend - Move file to final location:
```php
public function store(Request $request)
{
    $validated = $request->validate([
        'file_key' => 'required|string',
    ]);

    // Create the database record (adapt to your own model)
    $document = Document::create();

    // Move from temporary location to final location
    $finalPath = "documents/{$document->id}.pdf";
    Storage::move($validated['file_key'], $finalPath);

    // Save the final path in the database
    $document->update(['file_path' => $finalPath]);
}
```

Downloading files
For private files, generate temporary presigned URLs:
```php
// Generate a presigned download URL valid for 15 minutes
$url = Storage::temporaryUrl($document->file_path, now()->addMinutes(15));

return response()->json(['download_url' => $url]);
```

The URL can be used directly in the browser or in an `<a>` tag.
For local development, Laravel's filesystem abstraction lets you use the same code locally and in production. Laravel supports temporary URLs for local files since version 9.
Public files
Laravel has a special disk called public: this disk stores files that we want to make public, like uploaded photos, generated PDF files, etc.
Again, those files cannot be stored on Lambda, i.e. they cannot be stored in the default storage/app/public directory. You need to store those files on S3.
Do not run php artisan storage:link in AWS Lambda: it is unnecessary (public files live on S3, not in local storage), and it will fail because the filesystem is read-only in Lambda.
To store public files on S3, you could replace the disk in the code:
```diff
- Storage::disk('public')->put('avatars/1', $fileContents);
+ Storage::disk('s3')->put('avatars/1', $fileContents);
```

but doing this will break your application locally. A better, though more complex, solution is to make the public disk configurable. Let's change the following lines in `config/filesystems.php`:
```php
/*
|--------------------------------------------------------------------------
| Default Public Filesystem Disk
|--------------------------------------------------------------------------
*/

'public' => env('FILESYSTEM_DISK_PUBLIC', 'public_local'),

// ...

'disks' => [

    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],

    'public_local' => [ // Rename `public` to `public_local`
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],

    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'token' => env('AWS_SESSION_TOKEN'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        'url' => env('AWS_URL'),
    ],

    's3_public' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'token' => env('AWS_SESSION_TOKEN'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_PUBLIC_BUCKET'),
        'url' => env('AWS_URL'),
    ],

],
```

You can now configure the public disk to use S3 by changing `serverless.yml` or your production `.env`:
```dotenv
FILESYSTEM_DISK=s3
FILESYSTEM_DISK_PUBLIC=s3
```
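With the public disk configurable, application code can stay the same in both environments. A minimal usage sketch (the `avatars/1.png` path and `$fileContents` variable are illustrative, not from a real application):

```php
use Illuminate\Support\Facades\Storage;

// Locally this writes to storage/app/public; on Lambda it writes to S3
Storage::disk('public')->put('avatars/1.png', $fileContents);

// Returns a public URL in both environments
$url = Storage::disk('public')->url('avatars/1.png');
```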