Bug 1912595 (Closed) - Switch from max_length to max_new_tokens in Examples
Opened 2 months ago; closed 2 months ago
Categories: Core :: Machine Learning, enhancement

Tracking
Status: RESOLVED FIXED
Target Milestone: 131 Branch

| Release    | Tracking | Status |
|------------|----------|--------|
| firefox131 | ---      | fixed  |
People: Reporter: atossou, Assigned: atossou
Whiteboard: [genai]
Attachments: 1 file
Description • 2 months ago

In transformers.js, max_length refers to the total length of both the prompt and the response. When using a non-seq2seq model, such as "Xenova/distilgpt2", the prompt alone can consume the entire max_length budget, so no new tokens are generated as output. To address this, we should switch to using max_new_tokens, which caps only the number of newly generated tokens.
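For context, a minimal sketch of the difference using the transformers.js text-generation pipeline. The model name is the one cited above; the prompt text and token budgets are invented for illustration:

    import { pipeline } from '@xenova/transformers';

    // Load a text-generation pipeline with a non-seq2seq (decoder-only) model.
    const generator = await pipeline('text-generation', 'Xenova/distilgpt2');

    const prompt = 'Once upon a time, there was a';

    // max_length counts prompt + response tokens together. If the prompt
    // already tokenizes to max_length tokens or more, no new tokens are
    // generated, which is the failure mode described in this bug.
    const truncated = await generator(prompt, { max_length: 8 });

    // max_new_tokens counts only newly generated tokens, so the model
    // produces output regardless of how long the prompt is.
    const ok = await generator(prompt, { max_new_tokens: 30 });

    console.log(truncated[0].generated_text); // may be just the prompt back
    console.log(ok[0].generated_text);        // prompt plus up to 30 new tokens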
Comment 1 • 2 months ago (Assignee)
Updated • 2 months ago
Whiteboard: [genai]
Comment 2 • 2 months ago

Pushed by atossou@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/049fea96cde9
Switch from max_length to max_new_tokens in Examples - r=tarek,Mardak
Comment 3 • 2 months ago

bugherder
Status: ASSIGNED → RESOLVED
Closed: 2 months ago
Resolution: --- → FIXED
Target Milestone: --- → 131 Branch