When you see a message saying your lyrics or prompt don’t meet our guidelines, it means the AI has detected something it considers sensitive or inappropriate. We know it’s frustrating when real artistic expression gets tangled in AI caution tape. The filters can’t be turned off right now, but a few workarounds can help you get the result you want.
To clarify: Songer no longer applies its own moderation to user submissions, but the AI model itself still has built-in safety filters. That means even though we’re not blocking your content directly, the model may still reject or flag certain lyrics or prompts if they include:
- Sensitive topics (like death, funerals, or violent imagery)
- Copyrighted or trademarked material
⚠️ Common Triggers
❌ Direct references to death, coffins, or burials often trip the AI’s filters.
❌ Lyrics containing copyrighted phrases or song titles might also get flagged.
✅ What Usually Works
✅ Abstract or symbolic wording tends to pass without issue.
✅ Using your own lyrics, or content generated through the "Generate" or "Custom Lyrics" tabs, helps ensure compatibility.
Even figurative language or slang can sometimes confuse the model, so if your lyrics get flagged, try softening the language a bit. For example:
| Instead of... | Try... |
| --- | --- |
| “kill” | “hurt” or “destroy” |
| “gun” | “weapon” or “shadow” |
| “die” | “fade away” |
| “suicide” | “losing hope” or “giving up inside” |
| “blood everywhere” | “a crimson scene” or “a heavy silence” |
These small tweaks often make the difference between a blocked line and a brilliant verse.