HuggingFace introduces TextEnvironments: an orchestrator between a machine learning model and a set of tools (Python functions) that the model can call to solve a given task.

Supervised Fine-tuning (SFT), Reward Modeling (RM), and Proximal Policy Optimization (PPO) are all part of TRL. This full-stack library provides tools to train transformer language models and stable diffusion models with reinforcement learning. The library is an extension of Hugging Face's transformers collection, so language models can be loaded directly via transformers after they have been pre-trained. Most decoder and encoder-decoder architectures are currently supported. For code snippets and instructions on how to use these tools, please consult the documentation or the examples/ subdirectory.

Highlights

- Easily tune language models or adapters on a custom dataset with SFTTrainer, a lightweight and user-friendly wrapper around the Transformers Trainer.
- To quickly and precisely adapt language models to human preferences (reward modeling), you can use RewardTrainer, a lightweight wrapper over the Transformers Trainer.
- To optimize a language model, PPOTrainer only requires (query, response, reward) triplets.
- AutoModelForCausalLMWithValueHead and AutoModelForSeq2SeqLMWithValueHead provide a transformer model with an additional scalar output for each token that can be used as a value function in reinforcement learning.
- Examples include training GPT2 to write favourable movie reviews using a BERT sentiment classifier, implementing full RLHF using only adapters, making GPT-j less toxic, the stack-llama example, and more.

How does TRL work?

In TRL, a transformer language model is trained to optimize a reward signal. Human experts or reward models determine the nature of that signal; a reward model is an ML model that estimates the reward for a given sequence of outputs. Proximal Policy Optimization (PPO) is the reinforcement learning technique TRL uses to train the transformer language model. Because it is a policy gradient method, PPO learns by modifying the transformer language model's policy, which can be thought of as a function that maps one sequence of inputs to another.

Using PPO, a language model is fine-tuned in three main steps (a minimal sketch of the loop follows the list):

- Rollout: the language model generates a response, for example a sentence continuation, in answer to a query.
- Evaluation: the query/response pair is scored with a function, a model, human judgment, or a mixture of these. Each query/response pair should ultimately result in a single numeric value.
- Optimization: this is undoubtedly the most difficult step. The log-probabilities of the tokens in the sequences are computed from the query/response pairs, using both the trained model and a reference model (often the pre-trained model before tuning). The KL divergence between the two outputs serves as an additional reward signal, ensuring that the generated replies do not drift too far from the reference language model. PPO is then used to train the active language model.
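The three steps above map directly onto the PPOTrainer API. Below is a minimal sketch adapted from the TRL quickstart of the time; the gpt2 base model and the constant reward are placeholders, and exact argument names may differ across TRL versions.

```python
# pip install trl
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer, create_reference_model
from trl.core import respond_to_batch

# Policy model with a scalar value head, plus a frozen reference copy for the KL penalty.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = create_reference_model(model)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model, tokenizer)

# 1. Rollout: the model continues a query.
query_tensor = tokenizer.encode("This morning I went to the ", return_tensors="pt")
response_tensor = respond_to_batch(model, query_tensor)

# 2. Evaluation: score the query/response pair; the constant stands in for
#    a sentiment classifier, a reward model, or human feedback.
reward = [torch.tensor(1.0)]

# 3. Optimization: one PPO step on the (query, response, reward) triplet;
#    the KL divergence against ref_model is added to the reward internally.
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```

In a real run, the rollout and scoring would loop over batches of queries, with the reward coming from a classifier or preference model rather than a constant.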
Key features

Compared to more conventional approaches to training transformer language models, TRL has several advantages:

- In addition to text generation, translation, and summarization, TRL can train transformer language models for a wide range of other tasks.
- Training transformer language models with TRL is more efficient than conventional techniques such as supervised learning alone.
- Transformer language models trained with TRL are more robust to noise and adversarial inputs than those trained with more conventional approaches.
- TextEnvironments is a new feature in TRL.

The TextEnvironments in TRL are a set of resources for developing RL-based transformer language models. They orchestrate the communication between the transformer language model and the tools it can call, and the resulting interactions can be used to fine-tune the model's performance (a sketch follows below). TRL represents TextEnvironments with classes; classes in this hierarchy stand for various text-based contexts, for example text generation, translation, and summarization contexts. TRL has been employed to train transformer language models for several such tasks.

Compared to text produced by models trained with more conventional methods, TRL-trained transformer language models produce more creative and informative writing. They have also been shown to outperform conventionally trained models at translating text from one language to another, and TRL has been used to train models that summarize text more precisely and concisely than those trained with more conventional methods.
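To give a flavour of the new feature, the sketch below wires a model, a tokenizer, and a calculator tool into a TextEnvironment. It is loosely based on the tool-use examples in the TRL repository; the tool, prompt, and reward function here are illustrative assumptions rather than a fixed API, and details may differ between releases.

```python
import torch
from transformers import AutoTokenizer, load_tool
from trl import AutoModelForCausalLMWithValueHead, TextEnvironment

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def exact_match_reward(responses, answers):
    # Reward 1.0 when the expected answer appears in the generated response.
    return [torch.tensor(1.0 if answer in response else 0.0)
            for response, answer in zip(responses, answers)]

# The environment orchestrates turns between the model and the calculator tool.
env = TextEnvironment(
    model=model,
    tokenizer=tokenizer,
    tools={"SimpleCalculatorTool": load_tool("ybelkada/simple-calculator")},
    reward_fn=exact_match_reward,
    prompt="Answer the question, calling the calculator tool when arithmetic is needed.\n",
    max_turns=2,
)

# env.run alternates model generation and tool calls, then returns everything
# a PPOTrainer.step call needs: queries, responses, masks, and rewards.
queries, responses, masks, rewards, histories = env.run(
    ["What is 13 + 29?"], answers=["42"]
)
```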

For more details, see the GitHub page: https://github.com/huggingface/trl
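As a quick start for the SFTTrainer mentioned in the Highlights above, supervised fine-tuning on a text dataset looks roughly like this. The sketch follows the TRL documentation of the time; the imdb dataset and opt-350m model are chosen purely for illustration, and argument names may differ across versions.

```python
from datasets import load_dataset
from trl import SFTTrainer

# Any dataset with a plain-text column works; imdb is used here only as an example.
dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",        # model name or an already-loaded model
    train_dataset=dataset,
    dataset_text_field="text",  # column containing the raw text
    max_seq_length=512,
)
trainer.train()
```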

In summary:

TRL is an effective way to train transformer language models with RL. Compared with models trained using more conventional methods, transformer language models trained with TRL are more adaptable, efficient, and robust. TRL can be used to train transformer language models for tasks such as text generation, translation, and summarization.
