Steps to reproduce the behavior, please provide code snippets or a repository:
export default async (req, res) => {
  let intervalID = null

  res.setHeader('Content-Type', 'text/event-stream')
  res.write('data: CONNECTION ESTABLISHED\n')

  const end = () => {
    if (intervalID) {
      clearTimeout(intervalID)
    }
  }

  req.on('aborted', end)
  req.on('close', end)

  const sendData = () => {
    const timestamp = (new Date).toISOString()
    res.write(`data: ${timestamp}\n`)
  }

  intervalID = setInterval(sendData, 1000)
}
Connect to the route with a tool that supports Server-Sent Events (e.g. Postwoman).
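Or, for a quick check from the browser, a minimal client-side sketch works too (the /api/sse path here is just an assumed mount point for the handler above):

// Minimal browser-side check; adjust the URL to wherever the API route is mounted
const source = new EventSource('/api/sse')
source.onmessage = (event) => console.log(event.data)
source.onerror = (event) => console.error('SSE error', event)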
Expected behavior
The route sends a new event to the connection every second.
Actual behavior
The route doesn't send any data to the connection unless a call to res.end() is added to the route.
System information
OS: macOS
Version of Next.js: 9.1.5
Additional context
When using other HTTP frameworks (Express, Koa, http, etc.) this method works as expected. It's explicitly supported by Node's http.IncomingMessage and http.ServerResponse classes which, from what I understand, Next uses as a base for the req and res that are passed into Next API routes.
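For comparison, here's a minimal sketch (my own, not taken from Next) of the same pattern against a bare Node http server, where each res.write() is flushed to the client without waiting for res.end():

// Bare Node http server: each res.write() streams an SSE event immediately
const http = require('http')

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/event-stream' })
  const intervalID = setInterval(() => {
    res.write(`data: ${new Date().toISOString()}\n\n`)
  }, 1000)
  req.on('close', () => clearInterval(intervalID))
}).listen(3000)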
I'd hazard a guess that #5855 was caused by the same issue, but considered unrelated because the issue was obscured by the express-sse library.
There are also two Spectrum topics about this (here and here) that haven't garnered much attention yet.
Supporting WebSockets and SSE in Next API routes may be related, but fixing support for SSE should be a lower barrier than adding support for WebSockets. All of the inner workings are there; we just need to get the plumbing repaired.
For those stumbling onto this through Google, this is working as of Next.js 13 + Route Handlers:
// app/api/route.ts
import { Configuration, OpenAIApi } from 'openai';

export const runtime = 'nodejs';
// This is required to enable streaming
export const dynamic = 'force-dynamic';

export async function GET() {
  const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  });
  const openai = new OpenAIApi(configuration);

  let responseStream = new TransformStream();
  const writer = responseStream.writable.getWriter();
  const encoder = new TextEncoder();

  writer.write(encoder.encode('Vercel is a platform for....'));

  try {
    const openaiRes = await openai.c…
You can use a custom server.js to work around this for now:
require('dotenv').config();
const app = require('express')();
const server = require('http').Server(app);
const next = require('next');

const DSN = process.env.DSN || 'postgres://postgres:postgres@localhost/db';
const dev = process.env.NODE_ENV !== 'production';
const nextApp = next({ dev });
const nextHandler = nextApp.getRequestHandler();

nextApp.prepare().then(() => {
  app.get('*', (req, res) => {
    if (req.url === '/stream') {
      res.writeHead(200, {
        Connection: 'keep-alive',
        'Cache-Control': 'no-cache',
        'Content-Type': 'text/event-stream',
      });
      res.write('data: Processing...\n\n');
      setTimeout(() => {
        res.write('data: Processing2...\n\n');
      }, 10000);
    } else {
      return nextHandler(req, res);
    }
  });

  require('../websocket/initWebSocketServer')(server, DSN);

  const port = 8080;
  server.listen(port, err => {
    if (err) throw err;
    console.log('> Ready on http://localhost:' + port);
  });
});
componentDidMount() {
  this.source = new EventSource('/stream')
  this.source.onmessage = function(e) {
    console.log(e)
  }
}
I would still recommend keeping any server-sent event and WebSocket handlers in separate processes in production. It's very likely that the frequency of updates to those parts of the business logic is quite different. Your front-end most likely changes more often than the types of events you handle / need to push to the clients from the servers. If you only make changes to one, you probably don't want to restart the processes responsible for the other(s). Better to keep the connections alive rather than cause a flood of reconnections / server restarts for changes which have no effect.
@msand The main reason I'm trying to avoid using a custom server is that I'm deploying to Now. Using a custom server would break all of the wonderful serverless functionality I get there.
Your second point is fair. What I'm trying to do is create an SSE stream for data that would otherwise be handled with basic polling. The server is already dealing with constant reconnections in that case, so an SSE stream actually results in fewer reconnections.
I suppose I could set up a small webserver in the same repo that just uses a separate Now builder. That would allow the processes to remain separate, though it'd still cause all of the SSE connections to abort and reconnect when there are any changes to the project.
Even with those points, I can see plenty of scenarios in which it makes sense to be able to run an SSE endpoint from one of Next's API routes. Additionally, in the docs it's specifically stated that...
req: An instance of http.IncomingMessage, plus some pre-built middlewares you can see here
res: An instance of http.ServerResponse, plus some helper functions you can see here
Since it's specifically stated that res is an instance of http.ServerResponse, I'd expect it to behave exactly the way http.ServerResponse behaves in any other circumstance. Either the documentation should change to reflect the quirks of the implementation or, preferably, res.write should be fixed to behave the way it does in any other circumstance.
@trezy It seems the issue is that the middleware adds a gzip encoding which the browser has negotiated using the header:
Accept-Encoding: gzip, deflate, br
If you add Content-Encoding: none then it seems to work:
res.writeHead(200, {
  Connection: 'keep-alive',
  'Content-Encoding': 'none',
  'Cache-Control': 'no-cache',
  'Content-Type': 'text/event-stream',
});
Actually, this seems to be documented here: https://github.com/expressjs/compression#server-sent-events
Have to call res.flush() when you think there's enough data for the compression to work efficiently
export default (req, res) => {
  res.writeHead(200, {
    'Cache-Control': 'no-cache',
    'Content-Type': 'text/event-stream',
  });
  res.write('data: Processing...');
  /* https://github.com/expressjs/compression#server-sent-events
     Because of the nature of compression this module does not work out of the box with
     server-sent events. To compress content, a window of the output needs to be
     buffered up in order to get good compression. Typically when using server-sent
     events, there are certain block of data that need to reach the client.
     You can achieve this by calling res.flush() when you need the data written to
     actually make it to the client. */
  res.flush();
  setTimeout(() => {
    res.write('data: Processing2...');
    res.flush();
  }, 1000);
};
I have switched to using a custom express server. That's the only way I could get it to work. I guess that's cool since I can do more with express. Before deciding to integrate express, I had tried the things mentioned above; none worked.
1. Turned off gzip compression by setting the option in next.config.js (the setting is shown below). The behavior remained the same. I inspected the headers on the client (using Postman) and confirmed the gzip encoding was removed, but that didn't seem to fix the problem.
2. Calling res.flush had no effect either. Instead I get a warning in the console that flush is deprecated and to use flushHeaders instead. But that's not what I want.
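For reference, this is the compression option I mean (the documented next.config.js setting); disabling it removed the gzip header for me but didn't change the streaming behavior:

// next.config.js -- disables Next's built-in gzip compression
module.exports = {
  compress: false,
}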
This is a rather strange bug.. 😔
On Thursday, 9 January 2020, Mikael Sand wrote:
> It then applies gzip compression for you
I have been trying to get SSE to work in Next.js, but could not get it working. With a custom server and native Node httpServer req/res it works, but with the Next.js res, no messages are sent to the client.
I started using Next.js to get the advantages of server-side rendering, SLS, and having the server and client code together. Using express defeats the purpose.
Any pointers on how this could work? This is a blocking problem for me.
Hey @kavuri. It is possible to integrate a custom Node.js server (e.g. using Express) with your Next.js app. That way, you can still get server-side rendering without these Next.js limitations.
See this page of the official documentation for details: https://nextjs.org/docs/advanced-features/custom-server
Also, check out how I implemented this in my own app which I mentioned in the comment above yours: https://github.com/uxFeranmi/react-woocommerce/blob/master/server.js
@uxFeranmi I could use the custom server method as mentioned here https://nextjs.org/docs/advanced-features/custom-server to write messages as res.write(...). But in the Next app, I do not see any messages in my page
I have created a sample page index.js and a React component App.js in the pages dir as under:
import React from 'react'
import EventSource from 'eventsource'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.events = new EventSource('http://localhost:3000/test')
    this.events.onopen = function() {
      console.log('connection is opened');
    }
    this.events.onerror = function() {
      console.log('error in opening conn.');
    }
  }

  componentDidMount() {
    this.events.onmessage = (event) => {
      console.log('got message..', event)
      this.data = JSON.parse(event.data)
    }
  }

  componentWillUnmount() {
    // cleanup
  }

  render() {
    return (
      <h1>{this.data}</h1>
    )
  }
}

export default App
index.js

import App from './App.js'

function HomePage() {
  return <div><App /></div>
}

export default HomePage
My custom server.js
const { createServer } = require('http')
const { parse } = require('url')
const next = require('next')
const fs = require('fs')

const port = parseInt(process.env.PORT, 10) || 3000
const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

console.log('starting server...')

function listen(req, res) {
  console.log('listening for incoming orders...');
  // Create a change stream. The 'change' event gets emitted when there's a
  // change in the database
  fs.watch('./', (eventType, filename) => {
    if (filename) {
      var obj = {"text": filename}
      console.log('sending:', obj);
      res.write('data:' + JSON.stringify(obj));
    }
  })
  res.on('close', () => {
    console.log('closing connection');
  })
}

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url, true)
    const { pathname, query } = parsedUrl
    if (pathname === '/test') {
      const headers = {
        'Content-Type': 'text/event-stream',
        'Connection': 'keep-alive',
        'Cache-Control': 'no-cache'
      }
      res.writeHead(200, headers);
      res.write('\n')
      listen(req, res)
    } else {
      handle(req, res, parsedUrl)
    }
  }).listen(port, err => {
    if (err) throw err
    console.log(`> Ready on http://localhost:${port}`)
  })
})
I am not getting any message in the index page. But if I open the url http://localhost:3000/test, I get the messages, which means that the EventSource itself is working, but the Next server-side rendering for the EventSource is not. Or maybe I am doing something wrong! Any pointers?
Hey everyone, I wrote a little library that hopefully will get people started with some basic SSE implementation. Feel free to either use the lib directly or copy the code!
https://github.com/michaelangeloio/ts-sse
Does this ts-sse work in a deployed Next.js TypeScript application?
On Wed, Oct 4, 2023, 2:10 AM Michael Angelo Rivera wrote:
> Also, I just posted an article on making SSE work with Nextjs! Hope it helps!
> https://michaelangelo.io/blog/server-sent-events
Does this also work in an application deployed to Vercel?
On Wed, Oct 4, 2023, 7:09 AM Michael Angelo Rivera wrote:
> It's working great in production right now on my app!
> https://activitystreak.app
After many many many many many hours, I've finally gotten this to work on my setup. Hopefully this can help someone 🎉
I'm using the App Router along with an API route.ts that "proxies" SSE from my actual backend server through the Next.js server. The only issue that I ended up having is that when the UI refreshes, the server does not know that it should close the current EventSource and move to a new one. The following solution solves this issue.
The following code belongs inside the route.ts
const stream = new TransformStream()
const writer = stream.writable.getWriter()
const encoder = new TextEncoder();
const resp = new EventSource('<your url>');

req.signal.addEventListener("abort", async () => {
  console.log("abort");
  resp.close();
  await writer.close();
});
req is the NextRequest object (the first argument) passed to the GET route handler. This works because every time the UI refreshes, an abort signal is broadcast on the request before it dies, thus giving us an opportunity to close the EventSource.
This SSE works in local development, but when I publish to Vercel it returns the whole response only after the OpenAI SSE is completed. Can someone please help me?
This code is developed in Next.js.
Backend code:
import OpenAI from "openai";

export default async function handler(req, res) {
  if (req.method === "POST") {
    /// parse the request object
    const body = JSON.parse(req.body);
    const { similaritySearchResults, messages, userQuery } = body;

    // Set response headers for SSE
    res.setHeader("Content-Type", "text/event-stream");
    res.setHeader("Cache-Control", "no-cache");
    res.setHeader("Connection", "keep-alive");
    res.setHeader("Content-Encoding", "none");

    // Fetch the response from the OpenAI API
    const openai = new OpenAI({
      apiKey: process.env.NEXT_PUBLIC_OPENAI_KEY,
    });
    const stream = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-16k",
      temperature: 0,
      top_p: 1,
      messages: [
        {
          role: "system",
          content: `Use the following pieces of context to answer the users question.
            If you don't know the answer, just say that you don't know, don't try to make up an answer.
            ----------------
            context:
            ${similaritySearchResults}
            Answer user query and include images write respect to each line if available`,
        },
        ...messages,
        {
          role: "user",
          content: `
            Strictly write all the response in html format with only raw text and img tags.
            Answer user query and include images in response if available in the given context
            query: ${userQuery}`,
        },
      ],
      stream: true,
    });

    for await (const part of stream) {
      res.write(part.choices[0]?.delta?.content || "");
    }
    res.end();
  }
}
front-end code:
const responseFromBackend: any = await fetch(
  `${process.env.NEXT_PUBLIC_WEBSITE_URL}api/chat`,
  {
    method: "POST",
    body: JSON.stringify({
      similaritySearchResults,
      messages,
      userQuery,
    }),
    headers: {
      "Content-Type": "text/event-stream",
    },
  }
);

let resptext = "";
const reader = responseFromBackend.body
  .pipeThrough(new TextDecoderStream())
  .getReader();

while (true) {
  const { value, done } = await reader.read();
  if (done) {
    /// setting the response when completed
    setMessages((prev: any) => [
      ...prev,
      { role: "assistant", content: resptext },
    ]);
    /// store the chathistory
    setResponse("");
    setLoading(false);
    break;
  }
  resptext += value;
  setResponse(resptext);
}
Please help me resolve this
Trying to implement something where I send the status of the server to the client as it works through its logic. I'm using the App Router.
My problem is that I have to use all these nested `.then()` statements to get it working. Using await anywhere either doesn't stream any response with the connection open, or it sends the full stream after everything is processed instead of in chunks.
Does anyone know what I am doing wrong?
Here is my page.tsx
'use client'

import { Button } from '@/components/ui/button'
import EventSource from 'eventsource'

export default function Stream() {
  function SSE() {
    // Create a new EventSource instance
    const eventSource = new EventSource('http://localhost:3000/api/stream')

    // Handle an open event
    eventSource.onopen = (e) => {
      console.log('Connection to server opened')
    }

    // Handle a message event
    eventSource.onmessage = (e) => {
      const data = JSON.parse(e.data)
      console.log('New message from server:', data)
    }

    // Handle an error event (or close)
    eventSource.onerror = (e) => {
      console.log('EventSource closed:', e)
      eventSource.close() // Close the connection if an error occurs
    }

    // Cleanup function
    return () => {
      eventSource.close()
    }
  }

  return (
    <div>
      <h1>Server-Sent Events (SSE) Demo</h1>
      <Button onClick={SSE}>Initiate</Button>
    </div>
  )
}
Here is my route.ts:
import { NextRequest } from 'next/server'

export const runtime = 'nodejs'
// This is required to enable streaming
export const dynamic = 'force-dynamic'

export async function GET(request: NextRequest) {
  let responseStream = new TransformStream()
  const writer = responseStream.writable.getWriter()
  const encoder = new TextEncoder()

  // Close if client disconnects
  request.signal.onabort = () => {
    console.log('closing writer')
    writer.close()
  }

  // Function to send data to the client
  function sendData(data: any) {
    const formattedData = `data: ${JSON.stringify(data)}\n\n`
    writer.write(encoder.encode(formattedData))
  }

  // Initial Progress
  sendData({ progress: '0%' })

  // 50% done
  const Note = getNote()
  Note.then((Note) => {
    sendData({ progress: '50%' })

    // 100% done
    const Todo = getTodo()
    Todo.then((Todo) => {
      sendData({ progress: '100%' })
      // close writer
      writer.close()
    })
  })

  return new Response(responseStream.readable, {
    headers: {
      'Content-Type': 'text/event-stream',
      Connection: 'keep-alive',
      'Cache-Control': 'no-cache, no-transform'
    }
  })
}

async function getNote() {
  await delay(3000)
  return 'I am a Note'
}

async function getTodo() {
  await delay(3000)
  return 'I am a Todo'
}

function delay(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms))
}
@0x-Legend If you replace your promises with awaits, then you're delaying your return of the response.
You can fix this by wrapping your async stuff in a setImmediate like so:
setImmediate(async () => {
  sendData({ progress: "0%" });
  const Note = await getNote();
  sendData({ progress: "50%" });
  const Todo = await getTodo();
  sendData({ progress: "100%" });
  writer.close();
});

return new Response(responseStream.readable, {
  headers: {
    "Content-Type": "text/event-stream",
    Connection: "keep-alive",
    "Cache-Control": "no-cache, no-transform",
  },
});
Now you'll instantly return a response (streaming starts) then start running your async code a (very) small amount of time later.
You should return a response first to establish the EventSource connection; then you can send messages over that connection. Here's a simple SSE controller below:
import { NextRequest } from "next/server";
import { ChatResponse, MookResponse, ResponseUnexcepted } from "../config";
export const dynamic = "force-dynamic"; // defaults to auto
const sleep = (delay) =>
new Promise<void>((resolve) => setTimeout(() => resolve(), delay));
export async function POST(request: NextRequest) {
const json = await request.json();
const responseStream = new TransformStream();
const writer = responseStream.writable.getWriter();
const encoder = new TextEncoder();
let messageList: ChatResponse[];
if (MookResponse[json.text]) {
messageList = MookResponse[json.text].message;
} else {
messageList = ResponseUnexcepted.message;
(async function () {
for (let i = 0; i < messageList.length; i++) {
await sleep(1000);
writer.write(
encoder.encode(
`event: message\ndata: ${JSON.stringify(messageList[i])}\n\n`
writer.close();
})();
return new Response(responseStream.readable, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache, no-transform",
Maybe someone finds it helpful and relevant. I wasn't able to get this to work based on the above.
My case was that I had an endpoint sending event-stream formatted data, which I had to proxy through a route handler to the client (this way I could get rid of the NEXT_PUBLIC_ prefix, which was giving me a headache in the pipeline and production).
// src/app/system/status/SystemStatus/index.tsx
export const SystemStatus = () => {
  const [data, setData] = useState<TSystemStatus>()

  useEffect(() => {
    const eventSource: EventSource = new EventSource(
      '/api/system/status-telemetry-stream',
    )

    eventSource.addEventListener('telemetry', (event: MessageEvent) => {
      const telemetry = JSON.parse(event.data)
      setData(telemetry)
    })

    return () => eventSource.close()
  }, [])

  return ( ....
This is my "proxy" route
// src/app/api/system/status-telemetry-stream/route.ts
import { NextResponse } from 'next/server'
import { DASH_SYSTEM_TELEMETRY_STREAM } from '@/app/constants'

export const dynamic = 'force-dynamic'

export async function GET(req: Request) {
  const origin = req.headers.get('origin')
  const { body } = await fetch(DASH_SYSTEM_TELEMETRY_STREAM)

  return new NextResponse(body, {
    status: 200,
    headers: {
      'Access-Control-Allow-Origin': origin || '*',
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache, no-transform',
      Connection: 'keep-alive',
    },
  })
}
Also, a middleware.ts is needed in src/ (or in the root if you're not using src/, just app/):
import { NextResponse } from 'next/server'

export function middleware() {
  return NextResponse.next()
}

export const config = {
  matcher: '/api/:path*',
}
Hi all! I know this is quite an old thread, but anyway... I got SSE working on Next 14 (App Router), but when I build and start the app, the server kinda hangs with the warning MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [Server]. The thing is, I don't use any EventEmitters, so I don't know where this comes from. It works fine in dev. I have searched all of the Internet and AI bots for an answer, but I have come up with nothing...
Here is my route.ts; if anyone does have any suggestions I would take them :)
import { NextRequest, NextResponse } from 'next/server';
import leaderboardState from '@/utils/leaderboard-state';

const eventName = 'leaderboard-event';

export const dynamic = 'force-dynamic';

export async function GET(request: NextRequest) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  const encoder = new TextEncoder();

  const sendEvent = (data: EventState) => {
    void writer.write(encoder.encode(`event: ${eventName}\ndata: ${JSON.stringify(data)}\n\n`));
  };

  let lastSent = 0;

  const checkForUpdates = () => {
    const lastMessage = leaderboardState.getEventState();
    const fifteenSeconds = 15000;
    if (lastMessage.timestamp && lastMessage.timestamp > lastSent - fifteenSeconds) {
      // If the incoming message has a timestamp larger than lastSent minus 15 seconds, send it - this message comes via POST
      sendEvent(lastMessage);
      lastSent = lastMessage.timestamp + fifteenSeconds;
    } else if (Date.now() > lastSent) {
      // If fifteen seconds have passed, send the current message
      sendEvent(lastMessage);
      lastSent = Date.now() + fifteenSeconds; // Add 15 seconds, so as not to 'spam' the events
    }
  };

  // Check for new updates on an interval
  const intervalId = setInterval(checkForUpdates, 1000); // Adjust interval as needed

  request.signal.onabort = () => {
    clearInterval(intervalId);
    writer.close();
  };

  // Send initial data
  checkForUpdates();

  const response = new NextResponse(stream.readable, {
    headers: {
      'Content-Type': 'text/event-stream; charset=utf-8',
      Connection: 'keep-alive',
      'Cache-Control': 'no-cache, no-transform',
    },
  });
  return response;
}

export async function POST() {
  leaderboardState.setEventState('Update page', Date.now());
  return new NextResponse('Ok', { status: 200 });
}
Ok, I made this work using Upstash Redis as an intermediary and it works great. A few things I missed initially:
The event data can only be in the format data: [your data]\n\n -- do not forget the \n\n at the end! Otherwise it does not work.
Managing the EventSource connection on the client side -- check out this example: https://upstash.com/blog/realtime-notifications
API route example: https://github.com/rishi-raj-jain/upstash-nextjs-publish-messages-with-sse-example/blob/master/app/api/stream/route.js
Client handling example: https://github.com/rishi-raj-jain/upstash-nextjs-publish-messages-with-sse-example/blob/master/app/components/chat.client.jsx (see connectToStream)
This will work if you do export const runtime = 'nodejs', even outside of Vercel's edge runtime.
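To make the first point concrete, here's a small sketch of my own (not from the Upstash example) of a helper that always appends the terminating blank line before writing an event:

// Frame an SSE message; the trailing "\n\n" is what makes EventSource dispatch it.
const encoder = new TextEncoder()

function formatSSE(data: unknown, event?: string): Uint8Array {
  const payload =
    (event ? `event: ${event}\n` : '') + `data: ${JSON.stringify(data)}\n\n`
  return encoder.encode(payload)
}

// Usage with a TransformStream writer, as in the route handlers above:
// writer.write(formatSSE({ progress: '50%' }, 'progress'))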
Hi everyone, here is some demo code; it works, I tried it. It seems much smoother using SSE compared with the Vercel ai package's StreamableValue based on RSC. Here is the code: app/api/chat/route.ts. For the client you can check here: features/chat-bot/hooks/use-sse-message.tsx
import { getOpenaiClient } from '@/features/chat-bot/utils/openai-client'
import { logger } from '@/lib/shared'

export const runtime = 'edge'
// Prevents this route's response from being cached
export const dynamic = 'force-dynamic'

type RequestData = {
  currentModel: string
  // message: { context: string; role: 'user' | 'assistant' }[]
  message: any
}

export async function POST(request: Request) {
  const { message } = (await request.json()) as RequestData
  logger.trace({ message }, 'post message')

  if (!message || !Array.isArray(message)) {
    return new Response('No message in the request', { status: 400 })
  }

  try {
    const openai = getOpenaiClient()
    const completionStream = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [
        {
          role: 'system',
          content: 'You are a smart AI bot and you can answer anything!',
        },
        ...message,
      ],
      max_tokens: 4096,
      stream: true,
    })

    const responseStream = new ReadableStream({
      async start(controller) {
        const encoder = new TextEncoder()
        for await (const part of completionStream) {
          const text = part.choices[0]?.delta.content ?? ''
          // todo add event
          const chunk = encoder.encode(`data: ${text}\n\n`)
          controller.enqueue(chunk)
        }
        controller.close()
      },
    })

    return new Response(responseStream, {
      headers: {
        'Content-Type': 'text/event-stream; charset=utf-8',
        Connection: 'keep-alive',
        'Cache-Control': 'no-cache, no-transform',
      },
    })
  } catch (error) {
    console.error('An error occurred during OpenAI request', error)
    return new Response('An error occurred during OpenAI request', {
      status: 500,
    })
  }
}
I don't know if anybody is still struggling with this, but I had the same sort of issue with streaming audio from OpenAI text-to-voice (and I would assume it would be an issue with all streaming audio/video). What I ended up doing was setting up a sort of virtual stream; in my case, it made sense to do some of it on the client side because of routing issues. So what I did was use a regex to split a long article (maybe 50k characters) into chunks of roughly 1,000 characters (more or less, because I don't want to break words or sentences up) on the client side, and then send those all to the server continuously, which converts them to audio and then sends them back to the client in chunks (a rough sketch of the chunking step is below). The client starts to play the audio once the first chunk is received and compiles the rest of the chunks in the background. There is some lag, but that's just due to the lag in the OpenAI API.
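Here's my own rough sketch of that chunking step (the exact regex and chunk size are illustrative assumptions, not the code from that app):

// Split a long article into ~1000-character chunks on sentence boundaries so no
// sentence or word is cut in half before each chunk goes to the text-to-speech endpoint.
function chunkText(text, maxLen = 1000) {
  const sentences = text.match(/[^.!?]+[.!?]+\s*|[^.!?]+$/g) || [text]
  const chunks = []
  let current = ''
  for (const sentence of sentences) {
    if (current && (current + sentence).length > maxLen) {
      chunks.push(current)
      current = ''
    }
    current += sentence
  }
  if (current) chunks.push(current)
  return chunks
}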
I've been working a bit with this recently and wrote a blog post on how to stream JSON and text in a single request (e.g. for RAG applications where you want to include sources alongside your generated answer). Hopefully it can be of help:
https://erci.sh/posts/streaming-json-and-text-in-one-request/
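As a quick illustration of the idea (my own sketch, not necessarily how the post implements it), one option is to multiplex named SSE events, e.g. a sources event carrying JSON and token events carrying text chunks:

// Illustrative only: JSON metadata and text share one SSE response via event names.
function sseEvent(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`
}

// Inside a route handler's ReadableStream, for example:
//   controller.enqueue(encoder.encode(sseEvent('sources', { urls: ['https://example.com'] })))
//   controller.enqueue(encoder.encode(sseEvent('token', 'partial answer text')))
// On the client, EventSource.addEventListener('sources', ...) and ('token', ...)
// read the two channels separately.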