With pictures and code, this article will teach you step by step how to use the OpenAI interface with React + Markdown rendering to achieve the ChatGPT typewriter effect.

Preliminary preparation

  • A front-end project
  • A backend interface (the OpenAI API is sufficient)

Start a new React project

  • If you already have a project, you can skip this step and go directly to the next one~
  • Next.js is a full-stack React framework. It’s versatile and allows you to create React apps of any size – from static blogs to complex dynamic apps. To create a new Next.js project, run:
npx create-next-app@latest

Download dependencies
cd xiaojin-react-chatgpt

npm i

Run the project
npm run dev


Introduce antd

  • antd official website
Install and import antd
npm install antd --save

Basic page preparation

  • Let’s first implement the effect with some simple code
  • Modify the src\app\page.js code as follows
"use client";
import { useState } from "react";
import { Input, Button } from "antd";

const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT typewriter effect</h2>
      <TextArea rows={17} value={outputValue} />
      <Button>Send request</Button>
    </main>
  );
}

The page effect is as follows

Interface preparation

  • Register an OpenAI account (or use other interfaces)
Interface document example
  • Refer to OpenAI Chinese documentation

Chat completion object: request parameter description
export interface RequestModel {
    /**
     * Defaults to 0. A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim. See more information on frequency and presence penalties.
     */
    frequency_penalty?: number;
    /**
     * Modify the likelihood of specified tokens appearing in the completion.
     *
     * Accepts a JSON object that maps tokens (token IDs from the tokenizer) to an associated
     * bias value (-100 to 100). Mathematically, the bias is added to the logits generated by the model prior to sampling.
     * The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
     */
    logit_bias?: { [key: string]: number } | null;
    /**
     * Defaults to inf.
     * The maximum number of tokens to generate in the chat completion.
     *
     * The total length of input tokens and generated tokens is limited by the model's context length.
     */
    max_tokens?: number;
    /**
     * A list of messages comprising the conversation so far.
     */
    messages: Message[];
    /**
     * The ID of the model to use. For more information about which models can be used with the Chat API, see the model endpoint compatibility table.
     */
    model: string;
    /**
     * Default is 1
     * How many chat completion choices are generated for each input message.
     */
    n?: number;
    /**
     * A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics.
     * [View more information on frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details)
     */
    presence_penalty?: number;
    /**
     * An object specifying the format the model must output. Setting { "type": "json_object" } enables JSON mode, which guarantees that the message the model generates is valid JSON.
     * Important: when using JSON mode, you must also instruct the model to produce JSON via a system or user message.
     * Without this, the model may generate an unending stream of whitespace until the token limit is reached, resulting in increased latency and the appearance of a "stuck" request. Also note that if
     * finish_reason="length", the message content may be partially cut off, meaning the generation exceeded max_tokens or the conversation exceeded the maximum context length.
     */
    response_format?: { [key: string]: any };
    /**
     * This feature is in beta. If specified, our system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters should return the same result.
     * Determinism is not guaranteed; refer to the system_fingerprint response parameter to monitor backend changes.
     */
    seed?: number;
    /**
     * Defaults to null. Up to 4 sequences where the API will stop generating further tokens.
     */
    stop?: string | string[];
    /**
     * Defaults to false. If set, partial message deltas will be sent, as in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream
     * terminated by a data: [DONE] message.
     */
    stream?: boolean;
    /**
     * What sampling temperature to use, between 0 and 2. A higher value (like 0.8) will make the output more random, while a lower value (like 0.2) will make the output more focused and deterministic.
     * We generally recommend changing this or `top_p` but not both.
     */
    temperature?: number;
    /**
     * Controls which (if any) function the model calls. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message and calling a function. Passing {"type":
     * "function", "function": {"name": "my_function"}} forces the model to call that function.
     * none is the default when no functions are present; auto is the default if functions are present.
     */
    tool_choice?: { [key: string]: any };
    /**
     * A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
     */
    tools?: { [key: string]: any }[];
    /**
     * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% of probability mass are considered.
     * We generally recommend changing this or `temperature` but not both.
     */
    top_p?: number;
    /**
     * A unique identifier representing your end user, which can help OpenAI
     * monitor and detect abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).
     */
    user?: string;
    [property: string]: any;
}

export interface Message {
    content?: string;
    role?: string;
    [property: string]: any;
}
Prepare interface parameters
const data = {
  model: "XXX",
  messages: [
    {
      role: "user",
      content: "Write a 1,000-word essay about spring",
    },
  ],
  temperature: 0.75,
  stream: true,
};

Option 1: Use fetch to process the stream to achieve the typewriter effect

Use streams to process Fetch
  • mdn document

  • The Fetch API allows you to fetch resources across the network, providing a modern API to replace XHR. One of its really nice features is that browsers have recently added support for consuming a fetch response as a readable stream.

  • The Request.body and Response.body properties expose the body contents as a readable stream getter.

Chat completion object: response parameter description
Parameter            Type      Description
id                   string    Unique identifier for the chat completion
choices              array     List of chat completion choices (can be more than one if n is greater than 1)
created              integer   Unix timestamp (in seconds) of when the chat completion was created
model                string    The model used for the chat completion
system_fingerprint   string    Fingerprint representing the backend configuration the model runs with
object               string    Object type, always chat.completion
usage                object    Usage statistics for the completion request
  completion_tokens  integer   Number of tokens in the generated completion
  prompt_tokens      integer   Number of tokens in the prompt
  total_tokens       integer   Total number of tokens used in the request (prompt + completion)
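To make the table concrete, here is a hypothetical chat.completion object matching the fields above (the id, timestamp, fingerprint, and token counts are all made-up illustrative values):

```javascript
// An illustrative chat.completion object matching the table above.
// All values are fabricated for demonstration purposes.
const completion = {
  id: "chatcmpl-xxxxxxxx",
  object: "chat.completion",
  created: 1700000000,
  model: "chatglm2-6b",
  system_fingerprint: "fp_xxxxxxxx",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Hello!" },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 9, completion_tokens: 3, total_tokens: 12 },
};

// total_tokens is the sum of prompt and completion tokens.
console.log(
  completion.usage.total_tokens ===
    completion.usage.prompt_tokens + completion.usage.completion_tokens
); // true
```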
Call code example
 const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log("**********************done");
        console.log(value);
        break;
      }
      console.log("--------------------value");
      console.log(value);
    }
  • In this function, we lock a reader to the stream with response.body.getReader(), then follow the familiar pattern: read a chunk with read(), check whether done is true, and if so stop processing; otherwise process the chunk and call read() again.
  • The transferred data is collected chunk by chunk through the loop
Write page logic code
  • For now we simulate the request with fixed parameters
  • Here is a simple demo
"use client";
import { useState } from "react";
import { Input, Button } from "antd";
const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  const send = async () => {
    const url = "http://xxxxxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "Write a 1,000-word essay about spring",
        },
      ],
      temperature: 0.75,
      stream: true,
    };
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log("**********************done");
        console.log(value);
        break;
      }
      console.log("--------------------value");
      console.log(value);
    }
  };
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT typewriter effect</h2>
      <TextArea rows={17} value={outputValue} />
      <Button onClick={send}>Send request</Button>
    </main>
  );
}

Click the button to view the print results

  • We can see that what gets printed are raw buffers (Uint8Array chunks); we need to decode and parse them to get the final result.
Parse buffer

const encode = new TextDecoder("utf-8");
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  const text = encode.decode(value);
  if (done) {
    console.log("**********************done");
    console.log(text);
    break;
  }
  console.log("--------------------value");
  console.log(text);
}
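One caveat worth knowing: decoding each chunk independently, as above, can garble multi-byte UTF-8 characters (Chinese text, for example) when a character is split across two network chunks. TextDecoder supports a { stream: true } option that buffers the incomplete sequence until the next call. A minimal sketch of the difference:

```javascript
// "é" is encoded in UTF-8 as two bytes: 0xC3 0xA9. Simulate it
// arriving split across two network chunks.
const chunk1 = new Uint8Array([0xc3]);
const chunk2 = new Uint8Array([0xa9]);

// Decoding each chunk independently flushes the incomplete sequence,
// producing replacement characters instead of "é".
const plain = new TextDecoder("utf-8");
const broken = plain.decode(chunk1) + plain.decode(chunk2);

// With { stream: true }, the decoder buffers the partial byte
// sequence until the rest of the character arrives.
const streaming = new TextDecoder("utf-8");
const ok =
  streaming.decode(chunk1, { stream: true }) +
  streaming.decode(chunk2, { stream: true });

console.log(broken === "é"); // false (two replacement characters)
console.log(ok); // "é"
```

Passing { stream: true } in the read loop (and a final plain decode() once done is true) makes the decoding robust to chunk boundaries.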
View analysis

We can see that the parsing result format is as follows

data: {"id": "chatcmpl-3zmRJUd4TTpm9xP9NbQVHw", "model": "chatglm2-6b", "choices": [{"index": 0, "delta": {"content": "hope"}, "finish_reason": null}]}
Observe the returned data
  • We can see that the returned data is a series of strings; the number of lines varies from chunk to chunk, but the structure of each line is fixed. We can use a regular expression to parse each returned chunk into an array of data: lines, then concatenate the extracted content~~
  • If other friends have better methods, please leave a message~

Use regular expressions to parse data

We write a function~~and then print the data

 const getReaderText = (str) => {
    let matchStr = "";
    try {
      const result = str.match(/data:\s*({.*?})\s*\n/g);
      result.forEach((_) => {
        const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
        const data = JSON.parse(matchStrItem);
        matchStr += data?.choices[0].delta?.content || "";
      });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };
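To sanity-check the parser before wiring it into the page, you can run it in Node against a fabricated chunk (the id and content below are made up, but the line shape matches the stream format shown above):

```javascript
// Extracts the incremental content from one decoded SSE chunk. Each
// line looks like `data: {...}\n`; parse the JSON payload of every
// line and concatenate the delta content.
const getReaderText = (str) => {
  let matchStr = "";
  try {
    const result = str.match(/data:\s*({.*?})\s*\n/g) || [];
    result.forEach((line) => {
      const payload = line.match(/data:\s*({.*?})\s*\n/)[1];
      const data = JSON.parse(payload);
      matchStr += data?.choices[0].delta?.content || "";
    });
  } catch (e) {
    console.log(e);
  }
  return matchStr;
};

// A fabricated two-line chunk in the same shape as the real stream.
const chunk =
  'data: {"id": "x", "choices": [{"index": 0, "delta": {"content": "Hello"}}]}\n' +
  'data: {"id": "x", "choices": [{"index": 0, "delta": {"content": " world"}}]}\n';

console.log(getReaderText(chunk)); // "Hello world"
```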

Assign data to the text box


Preliminary realization of simple typewriter effect

Basic version of typewriter effect code (almost no dependencies)
"use client";
import { useState } from "react";
import { Input, Button } from "antd";
const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let result = str.match(/data:\s*({.*?})\s*\n/g);
      result.forEach((_) => {
        const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
        const data = JSON.parse(matchStrItem);
        matchStr += data?.choices[0].delta?.content || "";
      });
      });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };

  const send = async () => {
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "Write me a 2000-word English article about spring",
        },
      ],
      temperature: 0.75,
      stream: true,
    };
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });
    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      if (done) {
        console.log(decodeText);
        break;
      }
      setOutputValue((str) => str + getReaderText(decodeText));
    }
  };
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT typewriter effect</h2>
      <TextArea rows={24} value={outputValue} />
      <Button onClick={send}>Send request</Button>
    </main>
  );
}

Auto scroll
import { useRef } from "react";


const ref = useRef();

//Add after text box assignment:
ref.current &&
      (ref.current.resizableTextArea.textArea.scrollTop =
        ref.current.resizableTextArea.textArea.scrollHeight);


html

<TextArea rows={24} value={outputValue} ref={ref}/>

What if you want a slower typewriter effect?
  • Because several characters are parsed at a time, the output sometimes doesn't appear character by character. We can solve this with the following approach.
  • Approach: save the received data into a string, split it into individual characters, and reveal one character per tick with setTimeout at a fixed interval, updating the DOM on every tick.
  • The following is only a demonstration. It is not recommended to write it this way, and I eventually removed this code~~~

Complete custom speed typewriter code

"use client";
import { useState, useRef, useEffect } from "react";
import { Input, Button } from "antd";
import "./index.css";
const { TextArea } = Input;
let testDataString = "";
export default function Home() {
  const ref = useRef();
  let [outputValue, setOutputValue] = useState("");

  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let result = str.match(/data:\s*({.*?})\s*\n/g);
      result &&
        result.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          matchStr += data?.choices[0].delta?.content || "";
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };
  const writing = (index) => {
    // Re-split the accumulated string on every tick; testDataString keeps
    // growing while the stream is still arriving.
    const data = testDataString.split("");
    if (index < data.length) {
      setOutputValue((str) => str + data[index]);
    }
    ref.current &&
      (ref.current.resizableTextArea.textArea.scrollTop =
        ref.current.resizableTextArea.textArea.scrollHeight);
    // Only advance the index once its character has been consumed, so a
    // slow stream doesn't cause characters to be skipped.
    setTimeout(writing, 100, index < data.length ? index + 1 : index);
  };
  const send = async () => {
    setOutputValue("");
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "hello",
        },
      ],
      temperature: 0.75,
      stream: true,
    };

    testDataString = "";
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      const isFirstChunk = testDataString.length === 0;
      testDataString += getReaderText(decodeText);
      if (isFirstChunk) {
        // Start the typewriter loop once the first chunk arrives
        writing(0);
      }
      }
      if (done) {
        console.log(decodeText);
        break;
      }
    }
  };
  return (
    <main className="chat-container flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT typewriter effect</h2>
      <TextArea rows={3} value={outputValue} ref={ref} />
      <Button onClick={send}>Send request</Button>
    </main>
  );
}

Code block support (to be added)

Download dependencies
npm i @uiw/react-md-editor
Add key code
import MDEditor from '@uiw/react-md-editor';

html

<MDEditor.Markdown source={outputValue} className="markdown-body" ref={ref}/>
Configuration style
  • I found a style example online and copied it straight into the project; you can use it for reference~~
  • Click here to go directly: github-markdown-css
View the effect

The core code is as follows
"use client";
import { useState, useRef, useEffect } from "react";
import MDEditor from '@uiw/react-md-editor';
import { Input, Button } from "antd";
import "./index.css";
import './md.css'
const { TextArea } = Input;
let testDataString = "";
export default function Home() {
  const ref = useRef();
  let [outputValue, setOutputValue] = useState("");

  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let resultList = str.match(/data:\s*({.*?})\s*\n/g);
      resultList &&
        resultList.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          matchStr += data?.choices[0].delta?.content || "";
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };

  const send = async () => {
    setOutputValue("");
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "Please implement a login function",
        },
      ],
      temperature: 0.75,
      stream: true,
    };

    testDataString = "";
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      if (done) {
        console.log(decodeText);
        break;
      }
      setOutputValue((str) => str + getReaderText(decodeText));
      ref.current &&
        (ref.current.mdp.current.scrollTop =
          ref.current.mdp.current.scrollHeight);
    }
  };
  return (
    <main className="chat-container flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT typewriter effect</h2>
      <MDEditor.Markdown source={outputValue} className="markdown-body" ref={ref} />
      <Button onClick={send}>Send request</Button>
    </main>
  );
}

Option 2: axios request method (this method is not suitable for browsers and can be used in Node.js code)

  • When making a stream-type request with axios in the browser, axios uses the XMLHttpRequest object under the hood. XMLHttpRequestResponseType does not support stream, so the following warning is reported:
The provided value 'stream' is not a valid enum value of type XMLHttpRequestResponseType.
Complete example of using an axios stream to call OpenAI in Node.js

Next, let's see how to write it with axios.

const axios = require("axios");
let testDataString = "";
const getReaderText = (str) => {
  let matchStr = "";
  try {
    let resultList = str.match(/data:\s*({.*?})\s*\n/g);
    resultList &&
      resultList.forEach((_) => {
        const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
        const data = JSON.parse(matchStrItem);
        matchStr += data?.choices[0].delta?.content || "";
      });
  } catch (e) {
    console.log(e);
  }
  return matchStr;
};
const url = "http://10.169.112.194:7100/v1/chat/completions";
const data = {
  model: "chatglm2-6b",
  messages: [
    {
      role: "user",
      content: "Please implement a login function",
    },
  ],
  temperature: 0.75,
  stream: true,
};
const encode = new TextDecoder("utf-8");
axios
  .post(url, data, {
    responseType: "stream",
    headers: { "Content-Type": "application/json" },
  })
  .then((response) => {
    response.data.on("data", (value) => {
      const currentString = getReaderText(encode.decode(value));
      testDataString += currentString;
      console.log(currentString);
    });
    response.data.on("end", () => {
      console.log(testDataString);
    });
  });

Calling effect

Code repository

  • Gitee repository
That’s all for today~
  • Friends, ( ̄ω ̄( ̄ω ̄〃 ( ̄ω ̄〃)ゝSee you tomorrow~~
  • Everyone, please be happy every day

Everyone is welcome to point out anything in the article that needs correcting~
There is no end to learning; let's all win together

Welcome the brothers and sisters passing by to leave better suggestions~~