How to bypass the Netlify serverless function timeout?

Damian Wróblewski - June 2024

Table of contents

  1. My case
  2. What are the available solutions?

Imagine developing the next groundbreaking AI-powered app, one whose success will let you spend the rest of your life counting subscription profits. Everything works perfectly in your local development environment, so you decide it's time for the world to experience this divine creation. From among the available options (Netlify, Vercel, etc.), you choose a cloud platform to release your application, and after the first tests you discover that requests to the external API are being aborted due to the timeout restriction on serverless functions.

In this post, I describe how I dealt with this problem while working on a recent side project.

My case

Recently, I've been working on GymCraft, a SvelteKit application powered by an LLM (gpt-4o) that lets users create personalized training plans based on the data they provide. Under the hood, it configures the AI model accordingly and enriches that data using prompt-engineering techniques to produce the best-quality results.

At some point, the application sends a tailored request to the OpenAI Chat Completions API. It takes quite a long time for the model to generate a response, and this is where the problem arises.
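For context, the call in question looks roughly like this. This is only a sketch: the endpoint and model name come from the post, but the message contents and helper names are placeholders, not GymCraft's actual code.

```typescript
// Sketch of the server-side call to the OpenAI Chat Completions API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Assembles the request payload; the enriched prompt would be built
// elsewhere and passed in as `messages`.
function buildCompletionRequest(messages: ChatMessage[]) {
  return { model: "gpt-4o", messages };
}

// Inside a SvelteKit server route, this long-running call is where
// the serverless timeout bites:
// const res = await fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//   },
//   body: JSON.stringify(buildCompletionRequest(messages)),
// });
```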

The serverless functions on Netlify (and similar platforms) have a default time limit of 10 seconds, after which they return a timeout error.

What are the available solutions?

Upgrading the plan

One option is to upgrade your Netlify subscription plan. On the Pro and Enterprise plans, Netlify allows you to increase the limit to 26 seconds on request. Unfortunately, this is unlikely to be enough when working with a generative AI API, as the model can easily take longer than that to generate a response.

Paid subscription plans also allow the use of so-called Background Functions: asynchronous functions that can run for up to 15 minutes. This solution could work, but I assume that as an indie hacker you prefer no-cost solutions 😄
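For the curious: a Background Function is just a regular Netlify function whose filename ends in `-background` — Netlify responds with a 202 immediately and lets the work continue. A minimal sketch, where the filename, payload shape, and persistence step are my assumptions, not GymCraft's actual code:

```typescript
// netlify/functions/generate-plan-background.ts
// The "-background" suffix in the filename is what makes Netlify run
// this as a Background Function (up to 15 minutes, paid plans only).
// Its return value is ignored; the client receives a 202 right away.
export async function handler(event: { body: string | null }) {
  const { userData } = JSON.parse(event.body ?? "{}");

  // Long-running work is fine here: call the LLM, then persist the
  // result somewhere the client can poll for it (a database, blob
  // storage, etc.) — both steps are placeholders in this sketch.
  void userData;
}
```

The catch is that the client never gets the result directly: you need a second endpoint (or storage bucket) it can poll, which adds complexity on top of the paid plan requirement.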

Netlify Edge Functions

Another option is to use Netlify Edge Functions in conjunction with Server-Sent Events (SSE). Compared to standard serverless functions, Edge Functions have significantly higher limits on response time. The problem here, however, is the runtime environment: for performance reasons, Edge Functions run on Deno rather than Node. This is a blocker in my case because some of the native Node packages that are peer dependencies of the packages I use in the application are not available in Deno.
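For completeness, here is roughly what the Edge Function + SSE approach would look like. This is a sketch under assumptions: the file name and the streamed chunks are placeholders — in a real function the stream would relay tokens arriving from the LLM.

```typescript
// netlify/edge-functions/stream-plan.ts
// Edge Functions export a default fetch-style handler and run on Deno.
// Returning a ReadableStream with the text/event-stream content type
// keeps the connection open while chunks arrive, sidestepping the
// synchronous function timeout.
export default async function handler(_request: Request): Promise<Response> {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // Placeholder chunks — in reality you'd pipe LLM tokens through here.
      controller.enqueue(encoder.encode("data: first chunk\n\n"));
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```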

Sending a request from the client

Another alternative is to simply send the request from the browser, i.e. the frontend layer. Unfortunately, this won't work in my case, since I need to attach a secret API key to the request sent to the OpenAI API, and that key cannot be exposed for security reasons. And while it is possible to fool bots with a simple trick (store the key in parts and concatenate it at runtime), a human could still steal such a key without any trouble. Better not to take the risk.
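To illustrate why that trick only stops crude scrapers (the parts below are fake placeholder values, not a real key):

```typescript
// Naive obfuscation: ship the key in pieces and join them at runtime.
// Scanners that grep bundles for "sk-..." patterns will miss it, but
// anyone watching the browser's Network tab still sees the assembled
// key in the Authorization header of the outgoing request.
const KEY_PARTS = ["sk-", "fake", "-key", "-1234"]; // placeholder values

function assembleKey(parts: string[]): string {
  return parts.join("");
}
```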

Detaching the API fragment to a separate service

Another possible solution is to split this particular API route out into a separate backend application: a regular Node server that handles only this problematic endpoint. As this approach seemed reasonable, I created a proxy server and deployed it on the Render platform (which I recommend as a free alternative to Heroku). The obvious disadvantage is that the request now has to pass through one more layer, which adds latency; compared to the time the model needs to generate a response, however, the difference is negligible.
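A minimal version of such a proxy can be sketched with nothing but Node's built-in HTTP server. The route name and environment variables below are assumptions for illustration, not GymCraft's real ones:

```typescript
// proxy-server.ts — a standalone Node service (deployable on Render)
// that forwards the slow request to OpenAI. Running as a regular
// server, it is not subject to any serverless timeout.
import { createServer } from "node:http";

const PORT = Number(process.env.PORT ?? 3000);

const server = createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/api/generate-plan") {
    // Collect the request body forwarded by the frontend.
    let body = "";
    for await (const chunk of req) body += chunk;

    // The secret key stays on the server side, never in the browser.
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body,
    });
    res.writeHead(upstream.status, { "Content-Type": "application/json" });
    res.end(await upstream.text());
  } else {
    res.writeHead(404).end();
  }
});

server.listen(PORT);
```

The SvelteKit frontend then calls this endpoint instead of a Netlify function; if the proxy lives on a different origin than the app, you would also need to add the appropriate CORS headers.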

Do you know other ways to get around the above problem? Share them in the comments or send me a DM.

Stay in touch with me on Twitter or LinkedIn if you're interested in the world of web applications. 😉
