“Amazon Cloud Technology Product Review” Event Call for Papers | Using vue + element ui to access a large language model

Authorization statement: The author grants Amazon Cloud Technology the right to forward and adapt this article through its official channels, including but not limited to the Developer Centre, Zhihu, self-media platforms, and third-party developer media.

Foreword

Some time ago, ChatGPT became hugely popular. Everyone was talking about ChatGPT, everyone was using ChatGPT, and it felt as if the whole world had dropped everything to try it. To this day, ChatGPT remains popular.

At that time, I wondered: could I build a ChatGPT of my own?

The answer was no. Without an AI model of my own, there was no way to build one.

But now, some large language models can be deployed on Amazon Cloud Technology servers. Using these large language models, we can build our own ChatGPT.

Front-end project

If you want to build your own ChatGPT, you first need to build a website. So how do we build this website? Next, I will introduce how to build a ChatGPT-style website.

Here, I plan to use the vue + element ui technology stack to build our front-end website.

Create project

Here, I plan to use Vue 2. Therefore, we can use Vue's official scaffolding tool, vue-cli, to create the project.

We use the following command to create a Vue project:

vue init webpack my-project

Open a terminal (cmd) and run the command above. vue-cli will then walk you through a series of prompts; follow them step by step to complete the creation of the Vue project.

After creation, the project is still essentially empty, and we need to develop it further.

The UI library I am using here is Element UI, so I need to install it.

The Element UI official website has an installation page that explains how to add Element UI to a project. We simply follow those instructions to install it.

After installation, follow the Element UI documentation to import and configure it in the project; a minimal example is sketched below.
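For reference, a minimal main.js wiring Element UI into a vue-cli webpack project might look like this (assuming Element UI was installed with npm i element-ui -S, and that App.vue and the router live in the default locations):

```js
// main.js — a minimal sketch of registering Element UI in a Vue 2 project
import Vue from 'vue'
import ElementUI from 'element-ui'
import 'element-ui/lib/theme-chalk/index.css' // Element UI styles
import App from './App'
import router from './router'

Vue.use(ElementUI)

new Vue({
  el: '#app',
  router,
  components: { App },
  template: '<App/>'
})
```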

Page development

Once the project is created and the dependencies are installed, we can start development.

Routing configuration

Let's configure the routing first. Since we are only developing one page, we only need to configure one route.

Create a router folder and, inside it, an index.js file; the routing configuration for this single page goes in that file.

My routing configuration looks roughly like the sketch below; you can use it as a reference.
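A minimal single-route setup with vue-router for Vue 2 could look like this (ChatPage is a placeholder name for the chat page component; adjust the import path to match your project):

```js
// router/index.js — a single-route configuration for vue-router (Vue 2)
import Vue from 'vue'
import Router from 'vue-router'
// ChatPage is a placeholder name for the chat page component
import ChatPage from '@/components/ChatPage'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'ChatPage',
      component: ChatPage
    }
  ]
})
```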

Page development

After the routing configuration is completed, we can develop the page.

Page development mainly involves the page layout plus style adjustments for the corresponding DOM elements. This is fairly basic but rather lengthy, so I won't go through it in detail here.

The core of my page code looks roughly like the sketch below.
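This is an illustrative sketch of the chat page component rather than my exact code; the component name, data fields, and layout are simplified placeholders:

```vue
<!-- ChatPage.vue — an illustrative sketch, not the exact original code -->
<template>
  <div class="chat-page">
    <!-- conversation history -->
    <div class="message-list">
      <div v-for="(msg, index) in messages" :key="index" :class="msg.role">
        {{ msg.content }}
      </div>
    </div>
    <!-- input area: an Element UI input plus a send button -->
    <div class="input-area">
      <el-input v-model="question" placeholder="Ask a question" @keyup.enter.native="send" />
      <el-button type="primary" :loading="loading" @click="send">Send</el-button>
    </div>
  </div>
</template>

<script>
export default {
  name: 'ChatPage',
  data () {
    return {
      question: '',
      loading: false,
      messages: [] // each item: { role: 'user' | 'assistant', content: '...' }
    }
  },
  methods: {
    send () {
      if (!this.question) return
      this.messages.push({ role: 'user', content: this.question })
      // the request to the back end is covered in the debugging section below
      this.question = ''
    }
  }
}
</script>
```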

The finished page looks very similar to ChatGPT's own interface, because I used the ChatGPT page as a reference while developing it.

The page layout is not very complicated, so I believe you will complete it quickly.

Large model deployment

With the front-end page finished, we can now deploy the large model.

Go to the Amazon Cloud Technology official website, log in, and open the console page.

In the search bar at the top, search for sagemaker. Amazon SageMaker appears in the search results; click it.

This takes you to the Amazon SageMaker console page.

We click the Studio option in the left sidebar to enter the Studio console page. Then click the Open Studio button to enter the Studio page.

Then, in the left sidebar, click SageMaker JumpStart and, under it, Models, notebooks, solutions to open the language model selection page.

The language model we will use here is Llama-2-7b-chat. In the search bar, search for 2-7b-chat.

The matching models appear in the results below. Since we are using the Llama-2-7b-chat model, select the first one.

This opens the Llama-2-7b-chat model console page. Click the Deploy button to deploy the model.

Deployment takes about 5 to 10 minutes, so we just need to wait. After a short wait, the deployment completes successfully.

Next, we enter the Lambda console and create a function.

You can name it whatever you like; here I named it myChat.

After creating the function, we add the code that calls the model endpoint and deploy the function. Once the deployment is complete, the server side is roughly done.
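As a rough sketch, assuming a Node.js runtime and that the front end sends the question in the request body as JSON, the handler could forward it to the SageMaker endpoint along these lines (the endpoint name is a placeholder for the one JumpStart created; the payload shape follows the Llama 2 chat format used by JumpStart endpoints and may vary by model version):

```js
// index.mjs — a sketch of a Lambda handler that forwards the question to SageMaker
import { SageMakerRuntimeClient, InvokeEndpointCommand } from '@aws-sdk/client-sagemaker-runtime'

const client = new SageMakerRuntimeClient({})
// placeholder: replace with the endpoint name SageMaker JumpStart created for you
const ENDPOINT_NAME = 'your-llama-2-7b-chat-endpoint'

export const handler = async (event) => {
  // assumes the request body looks like { "question": "..." }
  const { question } = JSON.parse(event.body || '{}')

  // Llama 2 chat endpoints expect a dialog-style payload
  const payload = {
    inputs: [[{ role: 'user', content: question }]],
    parameters: { max_new_tokens: 256, temperature: 0.6 }
  }

  const response = await client.send(new InvokeEndpointCommand({
    EndpointName: ENDPOINT_NAME,
    ContentType: 'application/json',
    CustomAttributes: 'accept_eula=true', // Llama 2 requires accepting the EULA
    Body: JSON.stringify(payload)
  }))

  // the response body is a byte array containing JSON; format may vary by model version
  const result = JSON.parse(Buffer.from(response.Body).toString('utf-8'))
  const answer = result[0] && result[0].generation ? result[0].generation.content : ''

  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' }, // allow the front-end page to call this
    body: JSON.stringify({ answer })
  }
}
```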

Debugging

The front-end page has been developed, and we have just deployed the large language model on the server side.

Next, you can perform front-end and back-end debugging.

On the front-end page, we enter a question, for example: What is peanut oil, and then click Send.

At this point, the front-end page sends a request, passing the question to the server as a parameter.

After the server receives the request, it processes the parameters and calls the Llama 2 model. The model takes the question, runs it through inference, and produces an answer. The answer is then returned to the client in the response, and the client displays it on the page. A sketch of the front-end request is shown below.
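Continuing the page sketch from earlier, the send() method could make the request with fetch along these lines (the URL is a placeholder for whatever address your Lambda function is exposed at, for example through a function URL or API Gateway):

```js
// A sketch of the send() method making the request; the URL is a placeholder.
async send () {
  if (!this.question) return
  const question = this.question
  this.messages.push({ role: 'user', content: question })
  this.question = ''
  this.loading = true
  try {
    const res = await fetch('https://your-lambda-endpoint.example.com/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ question })
    })
    const data = await res.json()
    // the Lambda sketch above returns { answer: '...' }
    this.messages.push({ role: 'assistant', content: data.answer })
  } catch (err) {
    this.messages.push({ role: 'assistant', content: 'Request failed: ' + err.message })
  } finally {
    this.loading = false
  }
}
```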

We can see that when we ask a question, the Llama 2 model returns an answer. The joint debugging of the front end and back end is now complete.

With that, this introduction to using vue + element ui to access a large language model is complete. I hope this article has given you a basic understanding of how to do so.

If you are interested in accessing a large language model, feel free to use this article as a reference and try it yourself.