I'm currently optimizing a large website and suspect duplicate documents are hurting our rankings. I'm looking for efficient ways to find duplicates across many pages, or even across entire domains. I've tried some basic tools, but they're either too slow or they only catch exact copies and miss near-duplicates (pages that are mostly identical apart from boilerplate or small wording changes).
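To illustrate what I mean by "basic": the kind of check I've been able to do so far amounts to hashing the extracted text of each page and grouping identical digests, roughly like the sketch below (the URLs are placeholders, and it assumes `requests` and `beautifulsoup4` are installed). It finds byte-identical pages fine, but two pages that differ by a single sentence come out as different hashes.

```python
# Illustrative only: exact-duplicate detection by hashing normalized page text.
import hashlib
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

urls = [
    "https://example.com/page-a",  # placeholder URLs
    "https://example.com/page-b",
]

groups = defaultdict(list)
for url in urls:
    html = requests.get(url, timeout=10).text
    # Strip tags and collapse whitespace so trivial markup changes don't matter.
    text = " ".join(BeautifulSoup(html, "html.parser").get_text().split())
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    groups[digest].append(url)

# Any digest shared by more than one URL is an exact duplicate group.
for digest, dupes in groups.items():
    if len(dupes) > 1:
        print(digest[:12], dupes)
```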
Has anyone here used advanced tools, scripts, or SEO techniques that work well for detecting duplicate content at scale? I'm also curious whether this can be automated as part of a crawl or during regular audits.
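From what I've read, near-duplicate detection is usually done with shingling plus a similarity measure (or MinHash/SimHash to make it scale). Something like the toy comparison below is roughly what I imagine, though the shingle size and threshold are guesses on my part, and comparing every pair of pages obviously won't scale to a whole domain, which is a big part of what I'm asking about.

```python
# Toy near-duplicate check: word shingles compared with Jaccard similarity.
# Shingle size (k=5) and the ~0.8 "duplicate" threshold are assumptions.
def shingles(text: str, k: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc_a = "Cheap widgets for sale in London with free next day delivery on all orders"
doc_b = "Cheap widgets for sale in Leeds with free next day delivery on all orders"

sim = jaccard(shingles(doc_a), shingles(doc_b))
print(f"similarity: {sim:.2f}")  # pages above roughly 0.8 would be flagged as near-duplicates
```

Is this the right general direction, or do the bigger crawlers/audit tools handle it some other way?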