ENSPIRING.ai: Reinforcement Learning from Human Feedback (RLHF) Explained

Reinforcement learning from human feedback (RLHF) is a prominent method for improving the performance of AI systems and aligning them with human preferences and values. The technique involves training AI models to respond in ways that align more closely with human values by incorporating human feedback into the learning process. The video addresses the components of reinforcement learning, such as the state space, action space, reward function, and policy, as well as the challenges of creating a good reward function for complex tasks.

The RLHF process in large language models occurs in four phases: starting with a pre-trained model, followed by supervised fine-tuning, reward model training, and policy optimization. These phases involve input from human experts to guide models in generating appropriate responses. The use of a reward model based on human feedback helps translate preferences into numerical values, which guides the AI model's learning. However, the process requires careful policy optimization to ensure that models do not overly adjust their weights and produce irrelevant outputs.

Main takeaways from the video:

💡 RLHF enhances AI models by aligning their responses with human values via human feedback.
💡 The process includes pre-training, supervised fine-tuning, reward model training, and policy optimization.
💡 Limitations such as the cost of human input, subjectivity, potential biases, and risks of adversarial feedback highlight the challenges of RLHF.

Key Vocabularies and Common Phrases:

1. reinforcement learning [ˌriːɪnˈfɔːrsmənt ˈlɜrnɪŋ] - (noun) - A type of machine learning where agents learn to make decisions by receiving rewards or penalties. - Synonyms: (machine learning, AI training, deep learning)

Now, conceptually, reinforcement learning aims to emulate the way that human beings learn.

2. state space [steɪt speɪs] - (noun) - All the information available about a task that is relevant to AI agents' decision-making processes. - Synonyms: (information set, decision context, environment)

So first of all, we have a component called the state space, which is all available information about the task at hand that is relevant to decisions the AI agent might make.

3. action space [ˈækʃən speɪs] - (noun) - All the potential decisions or actions that an AI agent can make in a given scenario. - Synonyms: (decision set, choice spectrum, option range)

Another component is the action space. The action space contains all of the decisions the AI agent might make.

4. reward function [rɪˈwɔrd ˈfʌŋkʃən] - (noun) - A mathematical measure of success or progress, used to motivate AI agents. - Synonyms: (incentive measure, success metric, progress gauge)

Another component is the reward function, and this one really is key to reinforcement learning.

5. policy [ˈpɒləsi] - (noun) - The strategy or algorithm driving an AI agent's behavior. - Synonyms: (strategy, plan, approach)

Policy is essentially the strategy, or the thought process, that drives an AI agent's behavior.

6. supervised fine-tuning [ˈsuːpəvaɪzd faɪn ˈtjuːnɪŋ] - (noun) - A phase where pre-trained models are optimized further with labeled examples to generate desired outputs. - Synonyms: (model adjustment, targeted training, output optimization)

Now, supervised fine-tuning is used to prime the model to generate its responses in the format expected by users.

7. human feedback [ˈhjuːmən ˈfiːdˌbæk] - (noun) - Input or responses from humans used to guide the learning of AI models. - Synonyms: (user responses, human input, evaluator's feedback)

So enter us human beings with RLHF, with its ability to capture nuance and subjectivity by using positive human feedback.

8. Elo rating system [ˈiːləʊ ˈreɪtɪŋ ˈsɪstəm] - (noun) - A system used to calculate the relative skill levels of players in competitive games. - Synonyms: (rating algorithm, ranking system, player evaluation method)

Now, often this is done by having users compare two text sequences, like the outputs of two different large language models responding to the same prompt, in head-to-head matchups, and then using an Elo rating system to generate an aggregated ranking of each bit of generated text relative to one another.

9. proximal policy optimization [ˈprɒksɪməl ˈpɒləsi ˌɒptɪmaɪˈzeɪʃən] - (noun) - An algorithm that limits the extent to which an AI model can update its policy at each training iteration. - Synonyms: (policy correction method, strategic adjustment, control mechanism)

Now, an algorithm such as PPO or proximal policy optimization limits how much the policy can be updated in each training iteration.

10. adversarial input [ədˈvɜrsəriəl ˈɪnpʊt] - (noun) - Malicious, misleading, or harmful data input into a system to undermine its functionality or outcome. - Synonyms: (malicious data, misleading input, sabotage information)

Now, adversarial input could be entered into this process, where human guidance to the model is not always provided in good faith.

Reinforcement Learning from Human Feedback (RLHF) Explained

It's a mouthful, but you've almost certainly seen the impact of reinforcement learning from human feedback. That's abbreviated to RLHF, and you've seen it whenever you interact with a large language model. RLHF is a technique used to enhance the performance and alignment of AI systems with human preferences and values. You see, LLMs are trained on all sorts of data, and they learn all sorts of stuff, and we need to be careful how some of that stuff surfaces to the user.

So for example, if I ask an LLM, how can I get revenge on somebody who's wronged me? Well, without the benefit of RLHF, we might get a response that says something like spread rumors about them to their friends, but it's much more likely an LLM will respond with something like this. Now this is a bit more of a boring standard LLM response, but it is better aligned to human values. That's the impact of RLHF.

So let's get into what RLHF is, how it works, and where it can be helpful or a hindrance. And we'll start by defining the RL in RLHF, which is reinforcement learning. Now, conceptually, reinforcement learning aims to emulate the way that human beings learn. AI agents learn holistically through trial and error, motivated by strong incentives to succeed. It's actually a mathematical framework which consists of a few components, so let's take a look at some of those.

So first of all, we have a component called the state space, which is all available information about the task at hand that is relevant to decisions the AI agent might make. The state space usually changes with each decision the agent makes. Another component is the action space. The action space contains all of the decisions the AI agent might make.

Now, in the context of, let's say, a board game, the action space is discrete and well defined. It's all the legal moves available to the AI player at a given moment. For text generation, well, the action space is massive, the entire vocabulary of all of the tokens available to a large language model.

Another component is the reward function, and this one really is key to reinforcement learning. It's the measure of success or progress that incentivizes the AI agent. So for the board game, it's to win the game. Easy enough. But when the definition of success is nebulous, designing an effective reward function can be a bit of a challenge. There are also constraints that we need to be concerned about here.

Constraints where the reward function could be supplemented by penalties for actions deemed counterproductive to the task at hand, like the chatbot telling its users to spread rumors. And then, underlying all of this, we have policy. Policy is essentially the strategy, or the thought process, that drives an AI agent's behavior. In mathematical terms, a policy is a function that takes a state as input and returns an action.
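
To make these four components concrete, here is a minimal sketch in Python. The 1-D "walk to the goal" task, the reward values, and all names are invented for illustration and are not from the video.

```python
import random

# Toy reinforcement-learning setup illustrating the four components above.
STATE_SPACE = range(0, 10)      # positions 0..9 the agent can occupy
ACTION_SPACE = [-1, +1]         # step left or step right
GOAL = 9

def reward_function(state):
    """Measure of success: +10 for reaching the goal, -1 per step otherwise."""
    return 10 if state == GOAL else -1

def policy(state):
    """Strategy: a function that takes a state and returns an action.
    Here it picks randomly; an RL algorithm would optimize it for maximum reward."""
    return random.choice(ACTION_SPACE)

state, total_reward = 0, 0
for _ in range(100):            # one trial-and-error episode
    action = policy(state)
    state = min(max(state + action, 0), 9)   # stay inside the state space
    total_reward += reward_function(state)
    if state == GOAL:
        break

print(f"Final state: {state}, total reward: {total_reward}")
```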

The goal of an RL algorithm is to optimize a policy to yield maximum reward. Now, conventional RL has achieved impressive real-world results in many fields, but it can struggle to construct a good reward function for complex tasks where a clear-cut definition of success is hard to establish. So enter us human beings with RLHF, with its ability to capture nuance and subjectivity by using positive human feedback in lieu of formally defined objectives.

So how does RLHF actually work? Well, in the realm of large language models, RLHF typically occurs in four phases, so let's take a brief look at each one of those. Now, phase one, where we're going to start here, is with a pre-trained model. We can't really perform this process without it.
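
As a rough illustration of phase one, assuming the Hugging Face transformers library, starting from an existing pre-trained model might look like the sketch below; the checkpoint name is a placeholder, not a real model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Phase 1: start from an existing pre-trained language model.
# "my-org/pretrained-llm" is a placeholder name, not a real checkpoint.
checkpoint = "my-org/pretrained-llm"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The pre-trained model is optimized for completion: it predicts the next
# tokens in a sequence. RLHF will later fine-tune and align this model.
inputs = tokenizer("Teach me how to make a resume", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```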

Now, RLHF is generally employed to fine-tune and optimize an existing pre-trained model, rather than as an end-to-end training method. With a pre-trained model at the ready, we can move on to the next phase, which is supervised fine-tuning of this model.

Now, supervised fine-tuning is used to prime the model to generate its responses in the format expected by users. The LLM pre-training process optimizes models for completion, predicting the next words in a sequence. Sometimes, though, LLMs won't complete a sequence in a way that the user wants. So, for example, if a user's prompt is "teach me how to make a resume," the LLM might respond with "using Microsoft Word." I mean, it's valid, but it's not really aligned with the user's goal.
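
A minimal sketch of supervised fine-tuning on human-written demonstrations (the labeled examples described in the next paragraph), again assuming the transformers library; the checkpoint name and the two demonstration pairs are invented, and the loop is deliberately simplified.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; in practice this is the pre-trained model from phase one.
checkpoint = "my-org/pretrained-llm"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Human experts write labeled demonstrations: a prompt plus the kind of
# response the model should give. These two pairs are invented examples.
demonstrations = [
    {"prompt": "Teach me how to make a resume.",
     "response": "Start with your contact details, then list your experience..."},
    {"prompt": "Summarize: The meeting covered Q3 results and hiring plans.",
     "response": "The meeting reviewed Q3 results and upcoming hiring plans."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for example in demonstrations:
    # Concatenate prompt and target response into one training sequence.
    text = example["prompt"] + "\n" + example["response"]
    batch = tokenizer(text, return_tensors="pt")

    # Standard next-token cross-entropy loss on the demonstration primes the
    # model to answer in the format users expect.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```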

Supervised fine-tuning trains models to respond appropriately to different kinds of prompts. And this is where the humans come in, because human experts create labeled examples to demonstrate how to respond to prompts for different use cases, like question answering or summarization or translation. Then we move to reward model training. So now we're actually going to train a model: here we need a reward model to translate human preferences into a numerical reward signal.

The main purpose of this phase is to provide the reward model with sufficient training data, and what I mean by that is direct feedback from human evaluators. That will help the model learn to mimic the way that human preferences allocate rewards to different kinds of model responses. This lets training continue offline without the human in the loop.

Now, a reward model must intake a sequence of text and output a single reward value that predicts numerically how much a user would reward or penalize that text. Now, while it might seem intuitive to simply have human evaluators express their opinion of each model response with a rating scale of, let's say, one for worst and ten for best, it's difficult to get all human raters aligned on the relative value of a given score.
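
As a rough sketch of that shape, one text sequence in, one scalar out, a reward model might be built by putting a single-output classification head on a language model. The checkpoint name is again a placeholder, and this is only one plausible way to do it, assuming the transformers library.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# A reward model takes in a sequence of text and outputs a single number
# predicting how much a human would reward or penalize that text.
checkpoint = "my-org/pretrained-llm"   # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=1           # one output head = one scalar reward
)

def reward(text: str) -> float:
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return reward_model(**batch).logits.item()

print(reward("Here is a calm, constructive way to handle the situation..."))
```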

Instead of absolute scores, a rating system is usually built by comparing human feedback for different model outputs. Now, often this is done by having users compare two text sequences, like the outputs of two different large language models responding to the same prompt, in head-to-head matchups, and then using an Elo rating system to generate an aggregated ranking of each bit of generated text relative to one another.
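
For a sense of how Elo works in this setting, here is a minimal sketch of one rating update after a head-to-head matchup; the starting ratings and K-factor are arbitrary illustration values.

```python
def elo_update(rating_a, rating_b, a_wins, k=32):
    """Update two outputs' ratings after one head-to-head comparison.
    a_wins is 1.0 if raters preferred output A, 0.0 if they preferred B."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    expected_b = 1 - expected_a
    new_a = rating_a + k * (a_wins - expected_a)
    new_b = rating_b + k * ((1 - a_wins) - expected_b)
    return new_a, new_b

# Two model outputs start with equal ratings; raters preferred output A.
rating_a, rating_b = 1000, 1000
rating_a, rating_b = elo_update(rating_a, rating_b, a_wins=1.0)
print(rating_a, rating_b)   # A's rating rises, B's falls
```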

Now, a simple system might allow users to thumbs up or thumbs down each output, with outputs then being ranked by their relative favorability. More complex systems might ask labelers to provide an overall rating and answer categorical questions about the flaws of each response, then aggregate this feedback into weighted quality scores. But either way, the outcomes of the ranking systems are ultimately normalized into a reward signal to inform reward model training.
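
One common way to turn those comparisons into a training signal, widely used in RLHF reward-model training though not spelled out in the video, is a pairwise loss that pushes the reward of the preferred ("chosen") output above the rejected one. A minimal sketch with made-up reward values:

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: the reward model is trained so the
    human-preferred output scores higher than the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative scalar rewards the model currently assigns to two outputs
# that human raters compared head to head.
reward_chosen = torch.tensor([0.3], requires_grad=True)
reward_rejected = torch.tensor([0.9], requires_grad=True)

loss = pairwise_reward_loss(reward_chosen, reward_rejected)
loss.backward()   # gradients push the chosen reward up, the rejected one down
print(loss.item())
```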

Now, the final hurdle of RLHF is determining how, and how much, the reward model should be used to update the AI agent's policy, and that is called policy optimization. We want to maximize reward, but if the reward function is used to train the LLM without any guardrails, the language model may dramatically change its weights, to the point of outputting gibberish in an effort to game the reward system.

Now, an algorithm such as PPO or proximal policy optimization limits how much the policy can be updated in each training iteration. Okay? Now, though RLHF models have demonstrated impressive results in training AI agents for all sorts of complex tasks, from robotics and video games to NLP, using RLHF is not without its limitations. So let's think about some of those.
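
Before getting to those limitations, here is a minimal sketch of the clipped surrogate objective at the heart of PPO; the probability ratios and advantages below are made-up numbers for illustration.

```python
import torch

def ppo_clipped_objective(ratio: torch.Tensor,
                          advantage: torch.Tensor,
                          epsilon: float = 0.2) -> torch.Tensor:
    """PPO surrogate objective. `ratio` is new_policy_prob / old_policy_prob
    for each taken action; clipping it to [1 - eps, 1 + eps] limits how much
    the policy can change in a single training iteration."""
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon)
    return torch.min(ratio * advantage, clipped * advantage).mean()

# Illustrative values: the new policy made one action 3x more likely, but
# clipping keeps that action from dominating the update.
ratio = torch.tensor([0.9, 1.1, 3.0])
advantage = torch.tensor([1.0, -0.5, 2.0])
print(ppo_clipped_objective(ratio, advantage))   # objective to be maximized
```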

Now, gathering all of this firsthand human input, I think it's pretty obvious to say it could be quite expensive to do that, and it can create a costly bottleneck that limits model scalability. Also, you know, us humans and our feedback, it's highly subjective, so we need to consider that as well.

It's difficult, if not impossible, to establish firm consensus on what constitutes high-quality output. Because human annotators will often disagree on what high-quality model behavior actually means, there is no human ground truth against which the model can be judged.

Now, we also have to be concerned about bad actors. Adversarial input could be entered into this process, where human guidance to the model is not always provided in good faith. That would essentially be RLHF trolling.

And RLHF also has risks of overfitting and bias, which, you know, we talk about a lot with machine learning. In this case, if human feedback is gathered from a narrow demographic, the model may demonstrate performance issues when used by different groups or prompted on subject matters for which the human evaluators hold certain biases.

Now, all of these limitations do beg a question: can AI perform reinforcement learning for us? Can it do it without the humans? There are proposed methods for something called RLAIF, which stands for reinforcement learning from AI feedback. It replaces some or all of the human feedback by having another large language model evaluate model responses, and it may help overcome some or all of these limitations.

But at least for now, reinforcement learning from human feedback remains a popular and effective method for improving the behavior and performance of models, aligning them closer to our own desired human behaviors.

Artificial Intelligence, Technology, Innovation, Reinforcement Learning, Machine Learning, Human Feedback, IBM Technology