Current status of knowledge representation models related to symbolic representation

The concept of the Semantic Web was proposed by Tim Berners-Lee, the inventor of the World Wide Web, in 1996. Its goal is to express current information in a machine-understandable form. The Semantic Web is not an independent network but an extension of the current Web: it gives information a well-defined meaning, making it […]

diffusers – Understanding models and schedulers

https://huggingface.co/docs/diffusers/using-diffusers/write_own_pipeline diffusers has three modules: diffusion pipelines, noise schedulers, and models. The library is very well designed, and its design ideas are comparable to those of the mmlab series. The mm series' generation algorithms live in mmagic, but it is not as rich as diffusers. Furthermore, almost all new algorithm training and inference will use the standard […]
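The three-module split described above can be sketched in plain Python without the diffusers dependency; the classes, the noise-prediction rule, and the update formula below are all toy stand-ins chosen for illustration, not the actual diffusers API:

```python
import numpy as np

class ToyModel:
    """Stands in for a UNet: predicts the noise present in a sample."""
    def __call__(self, sample, t):
        return 0.1 * sample  # hypothetical noise prediction

class ToyScheduler:
    """Stands in for a noise scheduler: turns a noise prediction into a denoising step."""
    def __init__(self, num_steps=10):
        self.timesteps = list(range(num_steps - 1, -1, -1))  # reverse time order

    def step(self, noise_pred, t, sample):
        return sample - noise_pred / (t + 1)  # hypothetical update rule

def toy_pipeline(model, scheduler, sample):
    """A pipeline is just the loop that wires model and scheduler together."""
    for t in scheduler.timesteps:
        noise_pred = model(sample, t)
        sample = scheduler.step(noise_pred, t, sample)
    return sample

result = toy_pipeline(ToyModel(), ToyScheduler(), np.ones(4))
```

The point of the separation is that the same model can be paired with different schedulers (and vice versa) by swapping one object, which is the design idea the article praises.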

DevChat: AI intelligent programming assistant based on large models in VSCode

Which #AI programming assistant is the best? DevChat is “really” easy to use. Article directory: 1. Preface 2. Installation 2.1 Register a new user 2.2 Install the DevChat plug-in in VSCode 2.3 Set the Access Key 3. Practical use 3.1 Code writing 3.2 Project creation 3.3 Code explanation 4. Summary. 1. Preface: DevChat is an AI intelligent […]

Generative AI – application architecture and solutions based on large models

This article explores the process of building large language model (LLM)-based applications using document loaders, embeddings, vector stores, and prompt templates. Because of their ability to generate coherent, context-aware text, LLMs are becoming increasingly popular in natural language processing tasks. The article discusses the importance of LLMs, compares fine-tuning with context-injection approaches, introduces LangChain, […]

LoRA and QLoRA fine-tuning large language models: insights from hundreds of experiments

LoRA is a parameter-efficient fine-tuning technique for training custom LLMs. Sebastian Raschka, the author of this article, distills practical insights into fine-tuning LLMs with LoRA and QLoRA from hundreds of experiments, covering memory savings, selecting the best configuration, and more. Sebastian is an assistant professor of statistics at the University of Wisconsin-Madison and an LLM […]
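The core of LoRA is to freeze the pretrained weight W and learn a low-rank update (alpha / r) * B @ A in its place. A minimal NumPy sketch, with illustrative sizes and random data (not from the article's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4          # hidden size, LoRA rank, scaling -- illustrative values

W = rng.normal(size=(d, d))    # frozen pretrained weight
A = rng.normal(size=(r, d))    # trainable down-projection
B = np.zeros((d, r))           # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))

# With B = 0 the adapter contributes nothing: output equals the frozen layer.
assert np.allclose(lora_forward(x), x @ W.T)

# After training perturbs B, the merged weight W + (alpha/r) * B @ A
# reproduces the adapted forward pass exactly, so inference adds no overhead.
B = rng.normal(size=(d, r))
merged = W + (alpha / r) * B @ A
assert np.allclose(lora_forward(x), x @ merged.T)
```

Only A and B (2 * r * d parameters) are trained instead of the d * d base weight, which is where the memory savings discussed in the article come from.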

Four examples of automated testing models and their advantages and disadvantages

1. Linear testing. 1. Concept: a linear script generated by recording, or by writing out, the steps performed in the application; it simply simulates the user’s complete operation scenario, with operations, repeated operations, and data all mixed together. 2. Advantages: each script is relatively independent and creates no dependencies on, or calls to, other scripts. 3. Disadvantages: the development cost is […]
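What a linear script looks like in practice can be sketched with a hypothetical app under test; the steps and data below are invented for illustration, hard-coded in order exactly as a record-and-playback tool would capture them:

```python
# Hypothetical app under test: a tiny in-memory shopping cart.
cart = []

# Linear script: every step and every piece of data is written inline,
# in execution order, with no reusable functions or shared data files.
cart.append({"item": "book", "qty": 1})   # step 1: add an item
cart.append({"item": "pen", "qty": 3})    # step 2: add another item
cart[1]["qty"] = 2                        # step 3: edit a quantity
total = sum(entry["qty"] for entry in cart)
assert total == 3                         # step 4: verify the result
```

Because operations and data are fused, any UI or data change forces the whole script to be re-recorded, which is the maintenance cost the article alludes to.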

Python automated testing five models

Python automated testing: five models. Table of contents: 1. Preface 2. Linear model 3. Modular-driven model 4. Data-driven model 5. Keyword-driven model 6. Behavior-driven model. 1. Preface: In automated testing, we often classify automated scripts by which framework model they belong to, such as the keyword-driven model. This article lists the five models […]
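Of the five models listed, the keyword-driven one is the least obvious, so a toy sketch may help: the test case is pure data (keyword plus arguments), and a small interpreter maps keywords to actions. All names and steps below are invented for illustration:

```python
# Simulated application state touched by the actions.
state = {}

def open_page(url):
    state["page"] = url

def type_text(field, value):
    state[field] = value

def click(button):
    state["clicked"] = button

# The keyword table: maps a keyword string to the action it performs.
KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# The test case itself is just a table of rows -- no Python logic,
# so non-programmers can write or edit it.
test_case = [
    ("open", "https://example.com/login"),
    ("type", "username", "alice"),
    ("click", "submit"),
]

# The driver interprets each row.
for keyword, *args in test_case:
    KEYWORDS[keyword](*args)
```

The data-driven model is the same idea restricted to varying inputs; the keyword-driven model also makes the actions themselves data.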

Prompt design and fine-tuning of large language models

This article mainly introduces the application of prompt design, large language model SFT, and LLMs in the mobile Tmall AI shopping-guide assistant project. The basic principles of ChatGPT (“conversational AI”, “intelligent agent”) can be briefly summarized in the following steps. Text preprocessing: the input text to ChatGPT needs to be preprocessed. Input encoding: ChatGPT […]
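Prompt design in such a project typically means a template with named slots filled per request; the slot names and wording below are hypothetical, not taken from the Tmall project:

```python
# Hypothetical prompt template for a shopping-guide assistant.
TEMPLATE = (
    "You are a shopping-guide assistant.\n"
    "User profile: {profile}\n"
    "Question: {question}\n"
    "Answer concisely."
)

def build_prompt(profile, question):
    """Fill the template's slots to produce the text sent to the model."""
    return TEMPLATE.format(profile=profile, question=question)

prompt = build_prompt("prefers budget options", "which laptop should I buy?")
```

SFT then trains the model on pairs of such prompts and desired answers, so template design and fine-tuning data are developed together.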

Network programming – five IO models

Article directory: Preface 1. What should the network module handle? 1. Blocking IO 2. Non-blocking IO 3. IO multiplexing 1. SELECT […]
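IO multiplexing with SELECT, the model the directory ends on, can be shown with Python's standard `select` module; a connected socket pair stands in for two network peers:

```python
import select
import socket

# A connected pair of sockets stands in for a client/server connection.
a, b = socket.socketpair()
b.sendall(b"ping")

# select() blocks until at least one watched socket is readable
# (or the 1-second timeout expires), letting one thread watch many sockets.
readable, _, _ = select.select([a], [], [], 1.0)
data = a.recv(4) if a in readable else b""

a.close()
b.close()
```

This is the key difference from blocking IO: the wait happens in `select()` across many descriptors at once, and `recv()` is only called on sockets known to be ready.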