Building PackSmart: An AI Packing List Generator That Actually Checks the Weather
How I built a free AI-powered packing list tool as part of the Vient Apps travel ecosystem, and what I learned about wrangling LLM output into reliable JSON.
After launching Roamly, I wanted to build out more travel tools under the Vient Apps umbrella. Not everything needs to be a full SaaS product with auth and billing. Sometimes you just need a simple, useful thing that solves one problem well. PackSmart came from that mindset: a free packing list generator that uses AI and real weather data to tell you exactly what to bring on your trip.
What it does
You punch in your destination, travel dates, trip type, luggage size, and planned activities. PackSmart geocodes the destination, pulls weather data for those exact dates, then feeds everything to Claude Haiku to generate a personalized packing list. The list is categorized, quantity-adjusted for your trip length, and comes with destination-specific travel tips.
You can check items off as you pack, and it saves your progress to localStorage so you can come back later. No account needed, no data stored on a server. Try it out at PackSmart.
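The persistence pattern is simple enough to sketch. This is an illustrative version, not PackSmart's actual code: the storage key, type names, and helper names are mine, and the storage object is injected so the logic degrades gracefully when localStorage is unavailable (SSR, private browsing).

```typescript
// Sketch of checkbox persistence via localStorage. STORAGE_KEY and the
// helper names are illustrative, not PackSmart's real identifiers.
const STORAGE_KEY = "packsmart:progress";

type PackingProgress = Record<string, boolean>; // itemId -> checked

// Minimal storage interface so the sketch works with window.localStorage
// or any in-memory stand-in.
interface SimpleStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export function loadProgress(storage: SimpleStorage): PackingProgress {
  try {
    const raw = storage.getItem(STORAGE_KEY);
    return raw ? (JSON.parse(raw) as PackingProgress) : {};
  } catch {
    return {}; // corrupt JSON or storage blocked: start fresh
  }
}

export function toggleItem(
  progress: PackingProgress,
  itemId: string,
  storage: SimpleStorage
): PackingProgress {
  const next = { ...progress, [itemId]: !progress[itemId] };
  try {
    storage.setItem(STORAGE_KEY, JSON.stringify(next));
  } catch {
    // quota exceeded or storage blocked; keep in-memory state only
  }
  return next;
}
```

Injecting the storage object also makes the logic trivially testable without a browser.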
The stack
Next.js 16 with App Router, deployed to Cloudflare Workers via OpenNext.js. React 19, Tailwind v4, shadcn/ui components. Claude Haiku 4.5 for the AI generation. Open-Meteo for weather data (free, no API key required). Cloudflare KV for distributed rate limiting.
I went with the same Next.js-on-Workers setup as Roamly because I already had the deployment pipeline figured out. For a tool like this, the edge deployment actually matters. The generate endpoint calls three APIs in sequence (geocoding, weather, Claude), so being physically closer to the user shaves real time off the round trip.
Open-Meteo was a great find. Free, no auth, solid data. The catch is that their forecast API only goes 16 days out.
The weather problem
Most packing list generators ignore weather entirely or ask you to describe it yourself. I wanted real data. But Open-Meteo’s forecast horizon is 16 days. If someone’s planning a trip three months from now, I can’t get a forecast.
The solution: a hybrid approach that blends real forecasts with historical data from the same dates last year.
import { differenceInDays, format } from "date-fns";

const FORECAST_HORIZON_DAYS = 16; // Open-Meteo's forecast limit

export async function getWeather(
  lat: number, lng: number,
  startDate: string, endDate: string
): Promise<WeatherData> {
  const today = new Date();
  const start = new Date(startDate);
  const daysUntilStart = differenceInDays(start, today);

  if (daysUntilStart <= FORECAST_HORIZON_DAYS) {
    const forecastEnd = new Date(today);
    forecastEnd.setDate(forecastEnd.getDate() + FORECAST_HORIZON_DAYS);
    const end = new Date(endDate);

    if (end <= forecastEnd) {
      const data = await fetchForecast(lat, lng, startDate, endDate);
      return { ...parseResponse(data), isHistorical: false };
    }

    // Trip spans past the forecast horizon: blend both sources.
    // Historical data starts the day after the forecast ends so the
    // boundary day isn't counted twice.
    const forecastEndStr = format(forecastEnd, "yyyy-MM-dd");
    const historicalStart = new Date(forecastEnd);
    historicalStart.setDate(historicalStart.getDate() + 1);
    const historicalStartStr = format(historicalStart, "yyyy-MM-dd");

    const [forecastData, historicalData] = await Promise.all([
      fetchForecast(lat, lng, startDate, forecastEndStr),
      fetchHistorical(lat, lng, historicalStartStr, endDate),
    ]);
    const forecast = parseResponse(forecastData);
    const historical = parseResponse(historicalData);

    return {
      temperatureMin: [...forecast.temperatureMin, ...historical.temperatureMin],
      temperatureMax: [...forecast.temperatureMax, ...historical.temperatureMax],
      precipitationProbability: [
        ...forecast.precipitationProbability,
        ...historical.precipitationProbability,
      ],
      weatherCode: [...forecast.weatherCode, ...historical.weatherCode],
      isHistorical: true,
    };
  }

  // Beyond forecast range entirely: use last year's data
  const data = await fetchHistorical(lat, lng, startDate, endDate);
  return { ...parseResponse(data), isHistorical: true };
}
Three paths: if the trip is within 16 days, use the real forecast. If the trip starts soon but extends past the horizon, blend forecast and historical data in parallel. If the trip is months away, pull last year’s weather for the same dates. The UI flags when historical data is being used so people know it’s an estimate.
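The historical fetch has to map the requested dates back one calendar year before querying the archive. The post doesn't show that helper, so here is a hedged sketch of what the mapping might look like (the function name is mine); the one real edge case is February 29, which has no counterpart in a non-leap year.

```typescript
// Hypothetical helper: shift an ISO date string back one calendar year,
// so a request for 2025-07-10 reads last year's 2024-07-10 archive.
// Feb 29 is clamped to Feb 28 when last year wasn't a leap year.
export function shiftBackOneYear(isoDate: string): string {
  const [y, m, d] = isoDate.split("-").map(Number);
  const target = new Date(Date.UTC(y - 1, m - 1, d));
  // If the day rolled over (Feb 29 -> Mar 1), clamp to the month's last day
  if (target.getUTCMonth() !== m - 1) {
    target.setUTCDate(0); // day 0 = last day of the previous month
  }
  return target.toISOString().slice(0, 10);
}
```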
The hard part: making Claude return clean JSON
This is where most of my time went. Claude Haiku is fast and cheap, which makes it great for a free tool. But getting consistent, parseable JSON out of any LLM is a battle.
The system prompt is explicit about output format. No markdown, no explanation, just JSON matching a specific schema. And yet.
// Strip markdown code fences if the model wraps the JSON
let jsonText = textBlock.text.trim();
if (jsonText.startsWith("```")) {
  jsonText = jsonText
    .replace(/^```(?:json)?\s*\n?/, "")
    .replace(/\n?```\s*$/, "");
}

// Fix common JSON issues from LLMs: trailing commas before } or ]
jsonText = jsonText.replace(/,\s*([}\]])/g, "$1");
Even with clear instructions, models will occasionally wrap their output in markdown code fences or leave trailing commas inside arrays and objects. These two fixes handle the most common failures. After that, the response goes through Zod validation to make sure the structure matches what the frontend expects.
I also sanitize user inputs before they hit the prompt. Destination names are user-provided text that gets interpolated into the prompt, so stripping special characters is a basic defense against prompt injection:
function sanitizeInput(text: string): string {
  return text.replace(/[{}<>|\\`]/g, "").trim();
}
What happens when the AI fails
Sometimes the API is slow, sometimes it returns garbage, sometimes it’s just down. I didn’t want users staring at an error screen, so there’s a deterministic fallback generator that builds a reasonable packing list from templates:
const fallbackList = generateFallbackList(
  data.tripType,
  data.luggageType,
  data.activities,
  data.travelers
);

// Use real weather summary if we have weather data
if (weather.temperatureMax.length > 0) {
  const { formatWeatherSummary } = await import("@/lib/weather");
  fallbackList.weatherSummary = formatWeatherSummary(weather);
}

return NextResponse.json({
  ...fallbackList,
  fallback: true,
});
The fallback doesn’t count against the rate limit (3 lists per day per IP). If Claude fails, that’s not the user’s fault. The fallback also still uses real weather data if geocoding succeeded, so even the backup list is somewhat personalized.
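The fallback generator is template-driven, which is what makes it deterministic. Here is a stripped-down sketch of the idea; the item tables and the simplified signature are invented for illustration, and PackSmart's real templates are richer (luggage-size and traveler-count adjustments, for instance).

```typescript
// Invented, minimal template tables; the real ones would be larger.
const BASE_ITEMS: Record<string, string[]> = {
  beach: ["Swimsuit", "Sunscreen", "Sandals"],
  business: ["Blazer", "Dress shoes", "Laptop charger"],
  adventure: ["Hiking boots", "Rain jacket", "First aid kit"],
};

const ACTIVITY_ITEMS: Record<string, string[]> = {
  hiking: ["Trail snacks", "Water bottle"],
  swimming: ["Goggles", "Quick-dry towel"],
};

export function generateFallbackList(
  tripType: string,
  activities: string[]
): string[] {
  const essentials = ["Passport/ID", "Phone charger", "Toiletries"];
  const base = BASE_ITEMS[tripType] ?? [];
  const extras = activities.flatMap((a) => ACTIVITY_ITEMS[a] ?? []);
  // De-dupe while preserving insertion order
  return [...new Set([...essentials, ...base, ...extras])];
}
```

Because the output is a plain lookup-and-merge, it costs nothing per request and can never produce unparseable output, which is exactly what you want from a last-resort path.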
Rate limiting on the edge
Since this is a free tool hitting a paid API, rate limiting is essential. Cloudflare KV makes this surprisingly clean for a serverless setup:
interface RateLimitEntry {
  count: number;
  resetAt: string; // ISO timestamp of the next UTC midnight
}

export async function incrementRateLimit(
  ip: string, kv: KVNamespace
): Promise<void> {
  const key = `rate:${ip}`;
  const entry = await kv.get<RateLimitEntry>(key, "json");
  const now = new Date();

  if (!entry || new Date(entry.resetAt) < now) {
    // First request of the window (or the window expired): fresh count
    const resetAt = new Date(now);
    resetAt.setUTCHours(24, 0, 0, 0); // next UTC midnight
    await kv.put(
      key,
      JSON.stringify({ count: 1, resetAt: resetAt.toISOString() }),
      { expirationTtl: 86400 }
    );
  } else {
    await kv.put(
      key,
      JSON.stringify({ count: entry.count + 1, resetAt: entry.resetAt }),
      { expirationTtl: 86400 }
    );
  }
}
The expirationTtl on each KV entry means old rate limit records clean themselves up. No cron jobs, no database maintenance. Resets happen at UTC midnight so the behavior is consistent globally.
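The read side of the limiter isn't shown above. A sketch of what the check might look like, with a minimal KV interface standing in for Cloudflare's `KVNamespace` so the snippet is self-contained; `DAILY_LIMIT` reflects the 3-lists-per-day rule mentioned earlier, and the function name is mine.

```typescript
// Minimal KV interface for the sketch; Cloudflare's KVNamespace is the
// real type in production.
interface RateLimitKV {
  get(key: string, type: "json"): Promise<{ count: number; resetAt: string } | null>;
}

const DAILY_LIMIT = 3; // free lists per IP per UTC day

export async function isRateLimited(ip: string, kv: RateLimitKV): Promise<boolean> {
  const entry = await kv.get(`rate:${ip}`, "json");
  if (!entry) return false;                         // no record yet
  if (new Date(entry.resetAt) < new Date()) return false; // window expired
  return entry.count >= DAILY_LIMIT;
}
```

Checking before calling Claude, and only incrementing after a successful generation, is what makes the "fallback doesn't count against the limit" behavior fall out naturally.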
Where it is now
PackSmart is live and working. It generates solid packing lists, the weather integration adds real value over generic generators, and the fallback system means it never leaves users empty-handed. It’s doing what I built it for: being a useful free tool in the Vient Apps ecosystem.
What I’d do differently
Spend more time on prompt engineering upfront. I went through a lot of iterations on the system prompt, fixing issues as they came up in production. Things like quantity calculations (7-day trip = 7 pairs of socks), shared vs. personal items, and keeping item names short. Most of those rules in the prompt were added reactively after seeing bad output. A more systematic approach to prompt design from the start would have saved time.
Use structured outputs from the beginning. The JSON sanitization code works, but it’s a band-aid. Anthropic’s API can force a tool call against a JSON schema, which would eliminate the markdown-fence and trailing-comma problems entirely. I should have started there instead of adding post-processing hacks.
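Concretely, one route with the Anthropic Messages API is to declare a single tool and force it via `tool_choice`, so the model's answer arrives as schema-conforming tool input rather than free text. A sketch of the request fragment; the tool name and schema here are my own placeholders, not PackSmart's.

```typescript
// Request fragment for Anthropic's Messages API. Forcing tool_choice
// onto a declared tool makes the model respond with structured tool
// input: no prose, no code fences. Tool name and schema are illustrative.
const packingListTool = {
  name: "emit_packing_list",
  description: "Return the generated packing list",
  input_schema: {
    type: "object" as const,
    properties: {
      categories: { type: "array", items: { type: "object" } },
      tips: { type: "array", items: { type: "string" } },
    },
    required: ["categories", "tips"],
  },
};

const requestFragment = {
  model: "claude-haiku-4-5",
  max_tokens: 2048,
  tools: [packingListTool],
  // The forced tool must reference a tool declared in `tools`
  tool_choice: { type: "tool" as const, name: "emit_packing_list" },
};
```

The structured result then lands in the response's `tool_use` content block, ready for the same Zod validation, minus the fence-stripping and comma-fixing.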
Test with more edge cases early. Trips to places with unusual weather patterns, very long trips, destinations with non-English names. The tool handles these now, but each one was a surprise in production rather than something I caught in testing.