Power AI Model Hub Portal

Text classification
ulmfit/en/sp35k_uncased
ULMFiT language model (English, 35k tokens, uncased) trained on the Wikipedia corpus.
  • Publisher: Edrone
  • Updated: 11/25/2021
  • License: Apache-2.0
  • Architecture: ULMFiT
  • Dataset: Wikipedia
  • Language: English
  • Overall usage data: 143 Downloads
Model formats
TF2.0 Saved Model (v1)
  • Fine-tunable: Yes
  • License: Apache-2.0
  • Last updated: 11/25/2021
  • Format: TF2.0 Saved Model
  • Usage data: 143 Downloads
  • Visibility: Public
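
The format card above advertises a fine-tunable TF2.0 SavedModel. A minimal sketch of how such a module is typically consumed with tensorflow_hub follows; the module handle, input convention, and classifier head are illustrative assumptions, not details taken from this page:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical module handle -- substitute the actual URL from this portal.
MODULE_HANDLE = "https://example.com/ulmfit/en/sp35k_uncased/1"

# Wrap the SavedModel as a Keras layer. trainable=True makes the encoder
# weights updatable, matching the "Fine-tunable: Yes" flag above.
encoder = hub.KerasLayer(MODULE_HANDLE, trainable=True)

# Assumed input convention: a batch of integer token ids. The page does not
# document the exact signature, so treat this as a sketch only.
token_ids = tf.keras.Input(shape=(None,), dtype=tf.int32)
states = encoder(token_ids)                        # per-token encoder states
pooled = tf.keras.layers.GlobalAveragePooling1D()(states)
logits = tf.keras.layers.Dense(2)(pooled)          # e.g. binary text classification
model = tf.keras.Model(token_ids, logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```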
Overview and model architecture
This module provides pretrained weights for the ULMFiT language model encoder. The architecture is a 3-layer unidirectional LSTM network trained with several regularization techniques. It was trained using the FastAI framework, and its weights were then exported to a TensorFlow SavedModel. We verified that the exported model's outputs are numerically compatible at inference time with the outputs of the original FastAI model.
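
The numerical-compatibility check described above can be pictured as follows. The paths, tensor shapes, and tolerance in this sketch are assumptions for illustration, not the publisher's actual verification script:

```python
import numpy as np
import tensorflow as tf

# Hypothetical paths -- the reference activations would be dumped from the
# original FastAI model on the same batch of inputs.
tf_encoder = tf.saved_model.load("/path/to/ulmfit_en_sp35k_uncased")
token_ids = np.load("token_ids.npy")                      # (batch, time) int ids
fastai_out = np.load("fastai_reference_activations.npy")  # (batch, time, dim)

# Run the TF export on the same inputs; this assumes the SavedModel
# exposes a default __call__ signature.
tf_out = tf_encoder(tf.constant(token_ids, dtype=tf.int32)).numpy()

# "Numerically compatible" means agreement within a small tolerance, not
# bit-exact equality, since the two frameworks implement ops differently.
assert np.allclose(tf_out, fastai_out, atol=1e-4)
```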