
Zhengxiao Du

ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback

Apr 03, 2024

ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline

Apr 03, 2024

Understanding Emergent Abilities of Language Models from the Loss Perspective

Mar 30, 2024

SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning

Jan 15, 2024

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding

Aug 28, 2023

AgentBench: Evaluating LLMs as Agents

Aug 07, 2023

WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences

Jun 13, 2023

GLM-130B: An Open Bilingual Pre-trained Model

Oct 05, 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks

Oct 18, 2021

GPT Understands, Too

Mar 18, 2021