fix: guard against None chat_template in _post_process_chat_template #1371
yeyu-nvidia wants to merge 1 commit into main
Conversation
Codecov Report

Additional details and impacted files:

@@ Coverage Diff @@
## main #1371 +/- ##
==========================================
+ Coverage 76.48% 76.97% +0.48%
==========================================
Files 471 471
Lines 50487 50489 +2
==========================================
+ Hits 38617 38862 +245
+ Misses 11870 11627 -243
When a tokenizer has no chat_template (e.g. base Llama-3.2 models),
_post_process_chat_template() crashed with:
AttributeError: 'NoneType' object has no attribute 'replace'
Add an early return when chat_template is None. The existing check at
line 164 will then raise a clear ValueError("No valid chat template!")
if no template is available after post-processing.
Fixes NVBug 6120958
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
Force-pushed from 0fb9ead to 378d3e9
Problem
When training with a model that has no `chat_template` in its tokenizer (e.g. base Llama-3.2 models), `_post_process_chat_template()` crashes:

AttributeError: 'NoneType' object has no attribute 'replace'

The DeepSeek WAR at the top of `_post_process_chat_template` called `.replace()` directly on `self.tokenizer.chat_template` without checking for `None` first.

Fixes NVBug 6120958
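The failure mode described above can be sketched as follows. `FakeTokenizer` and `deepseek_war` are hypothetical stand-ins (not the real classes in this repo) for a Hugging Face tokenizer whose `chat_template` is unset, as with base (non-instruct) checkpoints; the `.replace()` argument is illustrative only:

```python
# Hypothetical stand-in for a tokenizer that ships no chat template,
# as base Llama-3.2 models do.
class FakeTokenizer:
    chat_template = None

def deepseek_war(tokenizer):
    # Mirrors the pre-fix behavior: .replace() is called on the template
    # with no None check, so base models crash here.
    return tokenizer.chat_template.replace("<think>", "")

try:
    deepseek_war(FakeTokenizer())
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'replace'
```

Any string method invoked on `None` raises this same `AttributeError`, which is why the crash surfaces at the first `.replace()` call rather than at a clearer validation point.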
Fix
Add an early return when `chat_template` is `None`. The existing check at line 164 (`if self.tokenizer.chat_template is None: raise ValueError`) still provides a clear error message if no valid template is available after post-processing.
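The shape of the fix can be sketched like this. The class and function names below are hypothetical stand-ins for the real ones, and the `.replace()` rewrite is illustrative; only the early-return guard and the trailing `None` check reflect the behavior described in this PR:

```python
# Hypothetical stand-in tokenizer; a base model leaves chat_template unset.
class FakeTokenizer:
    def __init__(self, chat_template=None):
        self.chat_template = chat_template

def post_process_chat_template(tokenizer):
    if tokenizer.chat_template is None:
        return  # early return added by the fix; nothing to post-process
    # The DeepSeek WAR and other rewrites now run only when a template exists.
    tokenizer.chat_template = tokenizer.chat_template.replace("<think>", "")

def validate_chat_template(tokenizer):
    # Mirrors the existing downstream check: a clear error instead of an
    # AttributeError deep inside post-processing.
    if tokenizer.chat_template is None:
        raise ValueError("No valid chat template!")

tok = FakeTokenizer()            # base model: no template
post_process_chat_template(tok)  # no longer raises AttributeError
```

With the guard in place, a tokenizer without a template passes through post-processing untouched, and the caller's `ValueError("No valid chat template!")` becomes the single, explicit failure point.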