# API Rate Limiting Best Practices for High-Volume Tracking
When you’re tracking thousands of parcels daily, understanding and working with API rate limits becomes critical. This guide covers strategies to maximize throughput while staying within limits.
## Understanding WhereParcel Rate Limits
WhereParcel’s rate limits are based on your plan:
| Plan | Requests/minute | Requests/day | Batch size |
|---|---|---|---|
| Free | 10 | 500 | 1 |
| Starter | 60 | 10,000 | 10 |
| Business | 300 | 100,000 | 50 |
| Enterprise | Custom | Custom | 100 |
Rate limit information is included in the headers of every response:

```text
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 297
X-RateLimit-Reset: 1706200000
```
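These headers can be parsed into a small status object for use by your client code. The helper below is a sketch, not part of any official SDK; it assumes a `Headers`-style object with a `get()` method, as returned by `fetch`:

```javascript
// Hypothetical helper: turn WhereParcel rate-limit headers into numbers.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    reset: parseInt(headers.get('X-RateLimit-Reset'), 10), // Unix epoch seconds
  };
}
```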
## Strategy 1: Use Batch Tracking
Instead of making individual requests, use the batch endpoint:
```javascript
// ❌ Bad: 50 individual requests
for (const parcel of parcels) {
  await fetch('/v2/track', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.WHEREPARCEL_API_KEY}:${process.env.WHEREPARCEL_SECRET_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      carrier: parcel.carrier,
      trackingNumber: parcel.trackingNumber,
    }),
  });
}

// ✅ Good: 1 batch request
await fetch('/v2/track/batch', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.WHEREPARCEL_API_KEY}:${process.env.WHEREPARCEL_SECRET_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    parcels: parcels.map(p => ({
      carrier: p.carrier,
      trackingNumber: p.trackingNumber,
    })),
  }),
});
```
Batch requests count as 1 request regardless of how many parcels are included (up to your plan’s batch size limit).
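If you have more parcels than your plan's batch size allows, split them into appropriately sized groups before sending. `chunk` below is a hypothetical helper, not part of the API client:

```javascript
// Split a parcel list into batches no larger than your plan's batch
// size (e.g. 50 on the Business plan).
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each resulting batch is then one request against your rate limit, so 120 parcels on a Business plan cost three requests instead of 120.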
## Strategy 2: Implement Smart Caching
Not every tracking request needs to hit the API. Cache results based on the parcel’s current status:
```javascript
function getCacheDuration(status) {
  switch (status) {
    case 'delivered':
      return 24 * 60 * 60; // 24 hours - won't change again
    case 'out_for_delivery':
      return 5 * 60; // 5 minutes - changing soon
    case 'in_transit':
      return 30 * 60; // 30 minutes
    case 'picked_up':
      return 60 * 60; // 1 hour
    default:
      return 15 * 60; // 15 minutes
  }
}

async function getTracking(carrier, trackingNumber) {
  const cacheKey = `tracking:${carrier}:${trackingNumber}`;
  const cached = await cache.get(cacheKey);
  if (cached) return cached;

  const result = await whereparcel.track(carrier, trackingNumber);
  const ttl = getCacheDuration(result.status); // TTL in seconds
  await cache.set(cacheKey, result, ttl);
  return result;
}
```
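The `cache` object above is assumed. In production a shared store like Redis is the better fit, but a minimal in-memory stand-in with the same `get`/`set(key, value, ttlSeconds)` shape might look like this:

```javascript
// Minimal in-memory TTL cache; entries expire lazily on read.
class TtlCache {
  constructor() {
    this.store = new Map();
  }

  async get(key) {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired - drop it
      return null;
    }
    return entry.value;
  }

  async set(key, value, ttlSeconds) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}
```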
## Strategy 3: Implement Exponential Backoff
When you hit a rate limit (HTTP 429), use exponential backoff:
```javascript
async function trackWithRetry(carrier, trackingNumber, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch('/v2/track', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.WHEREPARCEL_API_KEY}:${process.env.WHEREPARCEL_SECRET_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ carrier, trackingNumber }),
      });

      if (response.status === 429) {
        // Honor Retry-After as a minimum, backing off exponentially beyond it
        const retryAfter = parseInt(response.headers.get('Retry-After') ?? '60', 10);
        const delay = Math.max(retryAfter * 1000, 2 ** attempt * 1000);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      return await response.json();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, 2 ** attempt * 1000));
    }
  }
  throw new Error('Rate limited: retries exhausted');
}
```
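A common refinement, not required by the WhereParcel API but widely used, is to add random jitter so that many clients rate-limited at the same moment don't all retry in lockstep:

```javascript
// "Full jitter" backoff: pick a random delay between 0 and the capped
// exponential value for this attempt.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}
```

Swapping this in for the fixed `2 ** attempt * 1000` delay spreads retries evenly across the backoff window.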
## Strategy 4: Use Webhooks Instead of Polling
The most effective way to reduce API calls is to stop polling entirely. Register webhooks and let updates come to you:
```javascript
// ❌ Bad: Polling 1,000 parcels every 30 minutes = 48,000 requests/day
setInterval(async () => {
  for (const parcel of activeParcels) {
    await trackParcel(parcel);
  }
}, 30 * 60 * 1000);

// ✅ Good: Register webhook once, receive updates automatically
await fetch('/v2/webhooks', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.WHEREPARCEL_API_KEY}:${process.env.WHEREPARCEL_SECRET_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://yourapp.com/webhooks/tracking',
    events: ['status_changed'],
  }),
});

// Result: ~3-5 webhook calls per parcel over its lifetime
For 1,000 active parcels, this reduces daily API usage from 48,000 to ~500 requests.
## Strategy 5: Prioritize Active Parcels
Not all parcels need the same polling frequency:
```javascript
function getPollingInterval(parcel) {
  // getDaysSince is an assumed helper returning whole days since a date
  const daysSinceShipped = getDaysSince(parcel.shippedDate);

  if (parcel.status === 'delivered') return null; // Stop polling
  if (parcel.status === 'out_for_delivery') return 5; // 5 min
  if (daysSinceShipped <= 1) return 30; // 30 min
  if (daysSinceShipped <= 5) return 60; // 1 hour
  if (daysSinceShipped <= 14) return 180; // 3 hours
  return 720; // 12 hours
}
```
## Monitoring Your Usage
Keep track of your API usage to avoid surprises:
```javascript
class RateLimitMonitor {
  constructor() {
    this.remaining = Infinity;
    this.resetTime = 0;
  }

  updateFromResponse(headers) {
    this.remaining = parseInt(headers['x-ratelimit-remaining'], 10);
    this.resetTime = parseInt(headers['x-ratelimit-reset'], 10);

    if (this.remaining < 10) {
      console.warn(`Rate limit warning: ${this.remaining} requests remaining`);
    }
  }

  canMakeRequest() {
    if (this.remaining <= 0) {
      const now = Math.floor(Date.now() / 1000);
      return now >= this.resetTime;
    }
    return true;
  }
}
```
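One way to wire the monitor into outgoing calls is to gate each request on `canMakeRequest()` and sleep until the reset time otherwise. `throttledTrack` below is a sketch of that policy, not part of any SDK; a proper request queue is another option:

```javascript
// Wait (in short increments) until the monitor allows a request,
// then run the actual tracking call.
async function throttledTrack(monitor, trackFn) {
  while (!monitor.canMakeRequest()) {
    const waitMs = Math.max(0, monitor.resetTime * 1000 - Date.now());
    await new Promise(resolve => setTimeout(resolve, Math.min(waitMs, 1000)));
  }
  return trackFn();
}
```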
## Summary
| Strategy | API Call Reduction |
|---|---|
| Batch tracking | Up to 50x |
| Smart caching | 40-70% |
| Webhooks vs polling | 95%+ |
| Priority-based polling | 50-80% |
Combining these strategies, you can handle tens of thousands of parcels while staying well within your rate limits. For high-volume needs, contact our team about Enterprise plans with custom limits.