On 16-Dec-2021 we deployed changes to our API Rate Limiter.
- We have split some limits; for instance, the Status call limit is now split into 2 separate limits:
1. for the single Status call
2. for the multi Status call (the search call)
- We have analyzed the logs of the past months, taking into account high peaks such as Black Friday, and concluded that some limits were still set too wide. We have lowered those limits to make sure we can deliver the promised platform uptime and availability for all of our customers.
In case you run into the rate limiter (error 429), make sure you lower the frequency and size of the calls you are making. How to do this can be read in the previous Release Note, published on 30-Sep-2021.
Release note 30-Sep-2021: Rate Limiter implemented in API services.
What is API rate limiting?
“To prevent an API from being overwhelmed, API owners often enforce a limit on the number of requests, or the quantity of data clients can consume. This is called Application Rate Limiting. If a user sends too many requests, API rate limiting can throttle client connections instead of disconnecting them immediately.”
What did Transsmart change?
We have set up rate limits per account, per API service type (shipment service, status service, report service, etc.) and per time frame. This means that, for example, only 10 report calls can be made per 60 seconds. If more calls are made within those 60 seconds, the following message will appear:
“Error occurred for account: <<ACCOUNTCODE>>
429: Your call has been blocked by the rate limiter due to sending in too many requests in a given amount of time. Please contact Transsmart support.”
The moment you stop sending calls, the 'bucket' empties again, so after 60 seconds you can once more make (at most) 10 calls.
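As an illustration, you can guard against this bucket on the client side by tracking your own recent calls in a rolling window. The sketch below is hypothetical (the class name is our own, and the 10-calls-per-60-seconds default matches the report-service example above, not a guaranteed limit for every account or service):

```python
import time
from collections import deque


class ClientThrottle:
    """Client-side guard that stays under max_calls per window_s seconds.

    The defaults (10 calls / 60 s) follow the report-service example in the
    release note; adjust them per service and per account as needed.
    """

    def __init__(self, max_calls=10, window_s=60.0):
        self.max_calls = max_calls
        self.window_s = window_s
        self.sent = deque()  # timestamps of recent calls

    def wait_time(self, now=None):
        """Seconds to wait before the next call is safe (0 if it can go now)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen outside the rolling window.
        while self.sent and now - self.sent[0] >= self.window_s:
            self.sent.popleft()
        if len(self.sent) < self.max_calls:
            return 0.0
        # The oldest call must age out of the window before a new one fits.
        return self.window_s - (now - self.sent[0])

    def record(self, now=None):
        """Register a call that was just sent."""
        self.sent.append(time.monotonic() if now is None else now)
```

Before each API call you would check `wait_time()`, sleep that long if it is non-zero, and then `record()` the call once it is sent.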
Note: at this moment the time frame is set to 60 seconds for all services, but this may change in the future based on experience. We keep monitoring our platform, and if we notice the time frame is too short or too long, we may change it.
We apply a Fair Use Policy and will adjust the values (and in exceptional cases even block requesters) if we notice abuse in submitting or requesting too much data.
When does this change take place?
On 6-Sep-2021 this change was deployed to our Acceptance environment.
On 30-Sep-2021 this change will be deployed to our Production environment.
Why did Transsmart do this?
We have done this to prevent our platform from being overloaded by bursts of API calls. We guarantee a reliable platform, and by making this adjustment we remain in control, which of course benefits all users of our platform.
It is important to understand that under normal circumstances you will not notice anything of this change. Only when a high volume of calls is made in a very short time frame could there be an impact. If that is the case, your configuration/integration needs to be verified by you or your (ERP/WMS) partner to avoid hitting our rate limiter. Most probably the number of calls can be reduced by adjusting the configuration for sending API calls for (all) shipments.
What is the impact?
One scenario deserves extra attention when you receive this error message: it is possible that your system does not recognize the 429 error. Handling an error response is always something that has to be set up in the calling system, but we have seen examples where this was not done correctly. The requesting system then kept sending the same API request over and over again, bursting not only our platform but also yours.
The best way to implement a correct process for handling 429 errors is to act on the information we send back in the Response Headers:
- X-RateLimit-Remaining: the number of remaining calls (if still allowed)
- X-RateLimit-Reset: the number of milliseconds before the next call may be made (in case of a blocked call)
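For illustration, a minimal handler acting on these two headers could look like the following Python sketch. The function name and fallback defaults are our own assumptions; the header semantics follow the descriptions above (note that X-RateLimit-Reset is in milliseconds, so it is converted to seconds here):

```python
def handle_rate_limit(status_code, headers):
    """Return the number of seconds to pause before the next API call.

    Hypothetical helper: status_code and headers would come from the HTTP
    response of your API client. The header names are those documented in
    the release note; the fallback values are our own assumptions.
    """
    if status_code == 429:
        # Blocked call: X-RateLimit-Reset tells us, in milliseconds,
        # how long until the limiter's bucket resets.
        reset_ms = int(headers.get("X-RateLimit-Reset", "60000"))
        return reset_ms / 1000.0
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining == 0:
        # Allowed, but the very next call would be blocked: pause briefly.
        return 1.0
    return 0.0
```

A caller would simply `time.sleep()` for the returned number of seconds before sending the next request.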
There are two other options: either simply wait longer before sending a retry (as mentioned in the note above, the current time frame is 60 seconds), or implement exponential backoff. That means you do not retry a request every 0.1 second, but instead retry 1 second later, then 10 seconds later, then 1 minute later, and so on, until you get a successful response.
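The exponential backoff described above can be sketched as follows. The helper name and the callable interface are assumptions for illustration; the 1 s / 10 s / 60 s schedule is taken from the example in the text:

```python
import time


def call_with_backoff(send_request, delays=(1, 10, 60)):
    """Retry a request with exponential backoff on HTTP 429.

    send_request is any zero-argument callable returning an object with a
    status_code attribute (e.g. a lambda wrapping your HTTP client call).
    The default delays follow the example above: 1 s, then 10 s, then 60 s.
    """
    response = send_request()
    for delay in delays:
        if response.status_code != 429:
            return response
        time.sleep(delay)  # back off before retrying
        response = send_request()
    if response.status_code == 429:
        raise RuntimeError("still rate limited after all retries")
    return response
```

Because the delays grow with each attempt, a temporarily blocked integration recovers on its own instead of hammering both platforms with immediate retries.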