Client Area

Integrating OpenAI and Claude APIs in Your Web App (PHP, Node.js, Python)

ByDomain India Team·DomainIndia Engineering
6 min read24 Apr 20263 views
# Integrating OpenAI and Claude APIs in Your Web App (PHP, Node.js, Python)
TL;DR
Add AI features — chat, summarisation, image generation, semantic search — to any PHP, Node.js, or Python app using the OpenAI or Anthropic (Claude) API. This guide covers authentication, first request, streaming responses, cost control, and deployment on DomainIndia hosting.
## What you can build LLM APIs turn natural-language prompts into text, code, structured JSON, or embeddings. Common web-app features: - AI-powered search (semantic matching, not keyword) - Chatbot or helpdesk assistant - Content summariser for long articles - Automatic product descriptions, alt text, meta tags - Translation + paraphrasing - Code review, bug detection - Image generation (DALL·E, Claude vision) - Document Q&A (RAG — Retrieval-Augmented Generation) Two major providers matter today:
FeatureOpenAI (GPT-4o, GPT-5)Anthropic (Claude Opus, Sonnet, Haiku)
StrengthsImage generation, voice, wide ecosystemLonger context (1M tokens), better at structured output + coding
Pricing (2026)From $0.15 / 1M input tokens (GPT-5 Nano)From $0.25 / 1M (Haiku 4.5)
Free tierNone; $5 minimum purchaseLimited web-chat free; API paid
India card supportYesYes (Razorpay-backed gateway)
Both accept HTTP POST with a Bearer token. You don't need their official SDK unless you want streaming niceties. ## Step 1 — Get an API key
1
Sign up
2
Load credits ($5–$20 is enough to experiment — text costs fractions of a cent per request)
3
Copy the key — starts with sk-proj-... (OpenAI) or sk-ant-... (Anthropic). You only see it once.
4
Save to your app's environment variables (never commit to git)
Warning

Never hardcode API keys. Use .env files, cPanel's environment variable tool, or DomainIndia's App Platform secrets. An exposed key on GitHub is scraped within minutes and drained — we've seen customers lose ₹5,000+ in hours.

## Step 2 — First request (PHP) Using plain cURL, no SDK: ```php 'gpt-4o-mini', 'messages' => [ ['role' => 'system', 'content' => 'You are a helpful assistant.'], ['role' => 'user', 'content' => 'Summarise in 2 sentences: ' . $inputText], ], 'temperature' => 0.3, 'max_tokens' => 200, ]; $ch = curl_init('https://api.openai.com/v1/chat/completions'); curl_setopt_array($ch, [ CURLOPT_POST => true, CURLOPT_POSTFIELDS => json_encode($payload), CURLOPT_HTTPHEADER => [ 'Content-Type: application/json', 'Authorization: Bearer ' . $apiKey, ], CURLOPT_RETURNTRANSFER => true, CURLOPT_TIMEOUT => 60, ]); $response = curl_exec($ch); $data = json_decode($response, true); echo $data['choices'][0]['message']['content']; ``` For Anthropic Claude, change the URL and header: ```php $ch = curl_init('https://api.anthropic.com/v1/messages'); // headers: 'x-api-key: ' . $apiKey, 'anthropic-version: 2023-06-01', // payload model: 'claude-sonnet-4-6' (or claude-haiku-4-5 for cheaper) ``` ## Step 3 — Node.js / Express ```javascript const OpenAI = require('openai'); const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); app.post('/summarise', async (req, res) => { const completion = await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [ { role: 'system', content: 'Summarise in 2 sentences.' }, { role: 'user', content: req.body.text }, ], max_tokens: 200, }); res.json({ summary: completion.choices[0].message.content }); }); ``` Anthropic SDK: ```javascript const Anthropic = require('@anthropic-ai/sdk'); const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }); const message = await client.messages.create({ model: 'claude-sonnet-4-6', max_tokens: 200, messages: [{ role: 'user', content: 'Summarise: ' + req.body.text }], }); res.json({ summary: message.content[0].text }); ``` ## Step 4 — Python (Flask/Django) ```python from openai import OpenAI import os client = OpenAI(api_key=os.environ['OPENAI_API_KEY']) response = client.chat.completions.create( model='gpt-4o-mini', messages=[ {'role': 'system', 'content': 'Summarise in 2 sentences.'}, {'role': 'user', 'content': input_text}, ], max_tokens=200, ) print(response.choices[0].message.content) ``` ## Streaming responses (better UX) For chatbots, stream tokens as they're generated instead of waiting for the full response. Set `stream: true` in the request body. The response is a Server-Sent Events (SSE) stream — read chunk by chunk. Node.js streaming example: ```javascript const stream = await client.chat.completions.create({ model: 'gpt-4o-mini', messages: [{ role: 'user', content: prompt }], stream: true, }); res.setHeader('Content-Type', 'text/event-stream'); for await (const chunk of stream) { const token = chunk.choices[0]?.delta?.content || ''; res.write(`data: ${JSON.stringify({ token })} `); } res.end(); ``` On the browser side, use `EventSource`: ```javascript const es = new EventSource('/chat-stream?q=' + encodeURIComponent(query)); es.onmessage = (e) => { const { token } = JSON.parse(e.data); chatBox.innerText += token; }; ``` ## Hosting considerations on DomainIndia
PlanAI API works?Streaming?Long-running requests
Shared cPanel/DAYes (outbound HTTPS allowed)Yes, with careMax 120s CGI timeout — keep per-request under 60s
VPSYesYesNo timeout limits; full control
App Platform (PaaS)YesYes60s HTTP timeout (configurable per service)
Info

For chatbots with long conversations, use a VPS. Shared hosting works for one-shot summarisation / meta-tag generation, but heavy streaming + long-running jobs need a VPS where you control timeouts.

## Cost control — don't get surprised LLM costs add up if you're not careful. Essential guardrails: 1. **Cap `max_tokens`** — every request should set this. 200–500 is plenty for summaries. 2. **Cache responses** — same prompt → same answer. Store in Redis or MySQL for 1 day. 3. **Rate-limit by user** — max 20 requests per IP per hour. Stops bots. 4. **Use cheap models by default** — GPT-4o-mini and Claude Haiku are 10–20× cheaper than their flagship siblings. Use expensive models only when quality matters. 5. **Log every request** — store prompt, response, token count, cost in a `ai_calls` table. Bill-shock prevention. 6. **Set a hard monthly budget** — OpenAI and Anthropic both let you cap spend in dashboard. ## Security checklist ## FAQ
Q Do I need a GPU?

No. You're calling a hosted API — the provider runs the model. Your server just sends HTTPS requests. Shared hosting and basic VPS are fine.

Q Which is cheaper — OpenAI or Claude?

For identical quality, Claude Haiku tends to be slightly cheaper per output token, and Claude has the 1M-token context window advantage for long documents. For images, OpenAI's DALL·E is more mature. Both are priced in USD and billable via any Indian international card.

Q Can I use this on DomainIndia shared hosting?

Yes — outbound HTTPS works on all our plans. For streaming/chatbot workloads expect responses under 120 seconds per request, or upgrade to VPS.

Q What happens if the API is down?

Wrap every call in try/catch, timeout at 30–60 seconds, and show a friendly fallback message. For critical features, consider a second provider (Claude as fallback to OpenAI, or vice versa).

Q How do I prevent prompt injection?

Treat user input as data, not instructions. Put user content after a clear delimiter (e.g. <user_message>...</user_message>) and write the system prompt to ignore any "ignore previous instructions" attempts.

Ready to add AI features? Start with a DomainIndia VPS for full control. Get a VPS plan

Was this article helpful?

Your feedback helps us improve our documentation

Still need help? Submit a support ticket