Note

efficient computing (1)

LoRA: Low-Rank Adaptation of Large Language Models

Reference: LoRA, ICLR 2022 (Github)

TL;DR: Replace dense layers with rank decomposition matrices.

Full fine-tuning of large language models (LLMs) for specific downstream tasks is often not feasible (e.g., fine-tuning GPT-3 175B for document summarization under a short deadline or with limited GPUs). "Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by a factor of 10,000 ...

ML and DL 2025.05.26
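
The TL;DR lends itself to a short sketch. Below is a minimal PyTorch illustration of the idea (my own sketch, not code from the post; the rank r, scaling alpha, and layer sizes are illustrative assumptions): the pretrained weight W is frozen, and only the low-rank factors B and A are trained, so the effective weight becomes W + (alpha/r)·BA.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen pretrained weight W and a trainable
    low-rank update BA: h = x W^T + (alpha / r) * x (BA)^T."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pretrained weight (stands in for a loaded checkpoint here).
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # A: small random init; B: zeros, so the update BA is zero at the
        # start of fine-tuning (as in the paper).
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank path; only A and B receive gradients.
        return x @ self.weight.T + self.scaling * ((x @ self.lora_A.T) @ self.lora_B.T)

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable: {trainable:,} vs frozen: {4096 * 4096:,}")  # 65,536 vs 16,777,216
```

With a 4096 × 4096 layer and r = 8, the trainable parameters drop from roughly 16.8M to 2 × 4096 × 8 = 65,536 for that layer; applied across a model with the base weights frozen, this is the kind of reduction the quoted TL;DR refers to.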