const stream = require('stream');

const passtrough = new stream.PassThrough();



Next, we need to write data to the stream. This is done by the youtube-dl library.

const youtubedl = require('youtube-dl');

const dl = youtubedl(event.videoUrl, ['--format=best[ext=mp4]'], { maxBuffer: Infinity });

dl.pipe(passtrough);



And finally, we need to upload the stream to S3. We make use of the Multipart Upload feature of S3, which allows us to upload a big file in smaller chunks. This way, we only have to buffer a small chunk (64 MB in this case) in memory instead of the whole file.

const AWS = require('aws-sdk');

const upload = new AWS.S3.ManagedUpload({
  params: {
    Bucket: process.env.BUCKET_NAME,
    Key: 'video.mp4',
    Body: passtrough
  },
  partSize: 1024 * 1024 * 64
});

upload.send((err) => {
  if (err) {
    console.log('error', err);
  } else {
    console.log('done');
  }
});
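A side note on the part size: S3 multipart uploads are limited to 10,000 parts, so the configured `partSize` also caps the maximum object size. With 64 MB parts, that cap is far beyond any YouTube video:

```javascript
// S3 allows at most 10,000 parts per multipart upload.
const maxParts = 10000;
const partSize = 1024 * 1024 * 64; // 64 MiB, as configured above

const maxObjectBytes = maxParts * partSize;
console.log(maxObjectBytes / (1024 ** 3)); // prints 625 (GiB)
```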



That’s it. Now you can download YouTube videos of any size with Lambda and upload them to S3. I recommend running the code in a “big” Lambda function with 3008 MB of memory for better network performance.

You can find the full source code on GitHub including a SAM template to provision the AWS resources. Have fun!

This is a shorter article. Do you prefer longer or shorter reads? Let me know! michael@widdix.de, LinkedIn, or @hellomichibye.