Mirror of https://huggingface.co/LargeWorldModel/LWM-Text-Chat-128K (synced 2025-03-17).
Update README.md

commit e3f71d1246 · parent 4097b00611 · README.md: 28 changed lines
---
license: apache-2.0
inference: false
---

<br>
<br>

# LWM-Text-Chat-128K Model Card

## Model details

**Model type:**
LWM-Text-Chat-128K is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 dataset. It is an auto-regressive language model based on the transformer architecture.
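The snippet below is a minimal, illustrative sketch of loading this checkpoint for text generation with the Hugging Face `transformers` library; it assumes the repository ships transformers-compatible (LLaMA-style) weights, so treat it as a starting point rather than the official inference code, which is linked from the project page below.

```python
# Illustrative sketch only: assumes the Hugging Face repo exposes
# transformers-compatible (LLaMA-style) weights for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LargeWorldModel/LWM-Text-Chat-128K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Short prompt shown for brevity; the model is built for contexts up to 128K tokens.
prompt = "Question: What is the capital of France? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```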
**Model date:**
LWM-Text-Chat-128K was trained in December 2023.

**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 92K subset of Books3 documents with 100K to 200K tokens
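- As an illustration only, the sketch below shows one way to select documents in the stated 100K to 200K token range; the tokenizer choice and helper function are assumptions for demonstration, not the authors' preprocessing pipeline.

```python
# Illustrative sketch only: not the authors' data pipeline. The LLaMA-2
# tokenizer repo used here is an assumption for measuring document length.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def in_target_range(text: str, lo: int = 100_000, hi: int = 200_000) -> bool:
    """Return True if the document falls inside the 100K-200K token window."""
    n_tokens = len(tokenizer(text, add_special_tokens=False)["input_ids"])
    return lo <= n_tokens <= hi

# Hypothetical corpus of raw book-length documents.
documents = ["..."]
kept = [doc for doc in documents if in_target_range(doc)]
```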