
File storage

Laravel has a filesystem abstraction that lets us easily change where files are stored.

When running on Lambda, you will need to use the s3 adapter to store files on AWS S3.

Quick setup with Lift

The easiest way to set up S3 storage is using Serverless Lift:

First install the Lift plugin:

serverless plugin install -n serverless-lift

Then use the storage construct in serverless.yml:

serverless.yml
provider:
    # ...
    environment:
        # environment variable for Laravel
        FILESYSTEM_DISK: s3
        AWS_BUCKET: ${construct:storage.bucketName}
 
constructs:
    storage:
        type: storage
        allowAcl: true

The allowAcl: true configuration is needed because S3 buckets have had ACLs disabled by default since April 2023. Many tools and libraries (including PHP's Flysystem, used by Laravel) send ACL headers on S3 operations, which fail on buckets with ACLs disabled. The allowAcl: true setting lets the bucket accept these headers without errors. Note that files in the bucket remain completely private: this setting does not change the security of the bucket.

💡

To avoid silent failures, we also recommend setting 'throw' => true on your S3 disk in config/filesystems.php:

config/filesystems.php
's3' => [
    'driver' => 's3',
    // ...
    'throw' => true,
],

That's it! Lift automatically:

  • Creates the S3 bucket
  • Grants IAM permissions to your Lambda functions
  • Exposes the bucket name via ${construct:storage.bucketName}

The AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN) are set automatically in AWS Lambda; you don't have to define them.

Uploading files

Small files (< 6 MB)

For files under 6 MB, you can upload directly through your Laravel application as usual:

$request->file('document')->store('documents');
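For example, a minimal controller action for such an upload could look like this (the `upload` method name, field name, and size limit are illustrative assumptions, not part of this guide):

```php
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;

public function upload(Request $request): JsonResponse
{
    // Validate the upload: here a PDF of at most 5 MB, safely under the Lambda limit
    $request->validate([
        'document' => 'required|file|mimes:pdf|max:5120',
    ]);

    // Store on the default disk (S3 when FILESYSTEM_DISK=s3)
    $path = $request->file('document')->store('documents');

    return response()->json(['path' => $path]);
}
```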

Large files

AWS Lambda has a 6 MB request payload limit. For larger files, you must upload directly to S3 from the browser using presigned URLs.

Since the browser uploads directly to S3 (cross-origin), you need to configure CORS on the bucket and add a lifecycle rule to clean up temporary files. Here is a complete storage construct configuration:

serverless.yml
constructs:
    storage:
        type: storage
        lifecycleRules:
            # Temporary upload files will be cleaned after 1 day
            -   prefix: tmp/
                expirationInDays: 1
        allowAcl: true
        # CORS is required for uploading files from the browser via presigned URLs, put the URL of your website here
        # See https://github.com/getlift/lift/blob/master/docs/storage.md#cors
        cors: ${construct:website.url}

💡

If you are not using the website construct, replace ${construct:website.url} with your application's URL, or use '*' during development.

How it works:

  1. Your frontend requests a presigned upload URL from your backend
  2. Your backend generates a temporary presigned URL using Laravel's Storage
  3. The frontend uploads the file directly to S3
  4. The frontend sends the S3 key back to your backend to save in the database

Backend - Generate presigned URL:

temporaryUploadUrl returns an array with the URL and the headers that must be forwarded to S3 (they contain the request signature):

use Illuminate\Http\JsonResponse;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
 
public function presignedUploadUrl(): JsonResponse
{
    $key = 'tmp/' . Str::uuid() . '.pdf';
 
    // Generate a presigned PUT URL valid for 15 minutes
    $uploadUrl = Storage::temporaryUploadUrl($key, now()->addMinutes(15), [
        // Optional: we restrict to PDF files here
        'ContentType' => 'application/pdf',
    ]);
 
    // PSR-7 headers are string[] values and include Host, which browsers forbid
    $headers = collect($uploadUrl['headers'])
        ->except(['Host'])
        ->map(fn (array $values): string => implode(', ', $values))
        ->all();
 
    return response()->json([
        'url' => $uploadUrl['url'],
        'headers' => $headers,
        'key' => $key,
    ]);
}

Frontend - Upload to S3:

// 1. Get presigned URL from your backend
const { url, headers, key } = await fetch('/api/presigned-upload-url', {
    method: 'POST',
    headers: { 'X-CSRF-TOKEN': csrfToken },
}).then(r => r.json());
 
// 2. Upload directly to S3, forwarding the presigned headers
await fetch(url, {
    method: 'PUT',
    body: file,
    headers: {
        'Content-Type': file.type,
        ...headers,
    },
});
 
// 3. Send the S3 key to your backend (via a form field, API call, etc.)
await fetch('/api/documents', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'X-CSRF-TOKEN': csrfToken,
    },
    body: JSON.stringify({ file_key: key }),
});

Backend - Move file to final location:

public function store(Request $request)
{
    $validated = $request->validate([
        'file_key' => 'required|string',
    ]);
 
    // Create the database record first so we have an ID for the final path
    $document = Document::create();
 
    // Move from the temporary location to the final location
    $finalPath = "documents/{$document->id}.pdf";
    Storage::move($validated['file_key'], $finalPath);
 
    // Save the final path in the database
    $document->update(['file_path' => $finalPath]);
}

Downloading files

For private files, generate temporary presigned URLs:

// Generate a presigned download URL valid for 15 minutes
$url = Storage::temporaryUrl($document->file_path, now()->addMinutes(15));
 
return response()->json(['download_url' => $url]);

The URL can be used directly in the browser or in an <a> tag.
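For instance, in a Blade template (the variable name is an assumption):

```html
<a href="{{ $downloadUrl }}">Download the document</a>
```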

For local development, Laravel's filesystem abstraction lets you use the same code locally and in production. Laravel supports temporary URLs for local files since version 9.

Public files

Laravel has a special disk called public: this disk stores files that we want to make public, like uploaded photos, generated PDF files, etc.

Again, those files cannot be stored on Lambda's filesystem, i.e. they cannot be stored in the default storage/app/public directory. You need to store them on S3.

💡

Do not run php artisan storage:link in AWS Lambda: it is unnecessary there, and it will fail because the Lambda filesystem is read-only.

To store public files on S3, you could replace the disk in the code:

- Storage::disk('public')->put('avatars/1', $fileContents);
+ Storage::disk('s3')->put('avatars/1', $fileContents);

but then the code no longer works locally, where you may want to keep using local storage. A better, though more complex, solution is to make the public disk configurable. Let's change the following lines in config/filesystems.php:

config/filesystems.php
    /*
    |--------------------------------------------------------------------------
    | Default Public Filesystem Disk
    |--------------------------------------------------------------------------
    */
 
    'public' => env('FILESYSTEM_DISK_PUBLIC', 'public_local'),
 
    ...
 
    'disks' => [
 
        'local' => [
            'driver' => 'local',
            'root' => storage_path('app'),
        ],
 
        'public_local' => [ // Rename `public` to `public_local`
            'driver' => 'local',
            'root' => storage_path('app/public'),
            'url' => env('APP_URL').'/storage',
            'visibility' => 'public',
        ],
 
        's3' => [
            'driver' => 's3',
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'token' => env('AWS_SESSION_TOKEN'),
            'region' => env('AWS_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET'),
            'url' => env('AWS_URL'),
        ],
 
        's3_public' => [
            'driver' => 's3',
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'token' => env('AWS_SESSION_TOKEN'),
            'region' => env('AWS_DEFAULT_REGION'),
            'bucket' => env('AWS_PUBLIC_BUCKET'),
            'url' => env('AWS_URL'),
        ],
 
    ],
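With this configuration, application code can keep targeting the public disk unchanged; the environment decides whether it resolves to local storage or S3. A sketch (the file name and contents are illustrative):

```php
use Illuminate\Support\Facades\Storage;

// Locally this writes to storage/app/public; on Lambda it writes to S3,
// depending on the FILESYSTEM_DISK_PUBLIC environment variable.
Storage::disk('public')->put('avatars/1.jpg', $fileContents);

// Returns the matching public URL for the active disk
$url = Storage::disk('public')->url('avatars/1.jpg');
```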

You can now configure the public disk to use S3 by changing serverless.yml or your production .env:

.env
FILESYSTEM_DISK=s3
FILESYSTEM_DISK_PUBLIC=s3

With these values, the public disk reuses the same s3 disk (and bucket). To store public files in a separate bucket, set FILESYSTEM_DISK_PUBLIC=s3_public and define the AWS_PUBLIC_BUCKET environment variable.