

Any good web crawler has limits.
Yeah. Like, literally just:
- Keep track of which URLs you’ve been to
- Avoid going back to the same URL
- Set a soft limit; once you've hit it, start comparing the contents of each page against ones you've already seen (to avoid dynamic URLs that keep serving the same content)
- Set a hard limit; once you hit it, leave the domain altogether
What kind of lazy-ass crawler doesn’t even do that?
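A minimal sketch of those rules, for the skeptics (class name, limits, and hashing strategy are all my own choices, not some standard):

```python
import hashlib
from urllib.parse import urlparse

class CrawlLimiter:
    """Tracks visited URLs, a per-domain soft limit (start deduping
    by page content), and a per-domain hard limit (leave the domain).
    Illustrative sketch only; the limit values are arbitrary."""

    def __init__(self, soft_limit=100, hard_limit=1000):
        self.soft_limit = soft_limit
        self.hard_limit = hard_limit
        self.visited = set()        # URLs we've already fetched
        self.page_counts = {}       # pages fetched per domain
        self.content_hashes = {}    # per-domain hashes of page bodies

    def should_fetch(self, url):
        if url in self.visited:
            return False            # never revisit the same URL
        domain = urlparse(url).netloc
        if self.page_counts.get(domain, 0) >= self.hard_limit:
            return False            # hard limit hit: abandon the domain
        return True

    def record(self, url, body):
        """Record a fetched page. Returns False if we're past the soft
        limit and the content duplicates something we've already seen."""
        domain = urlparse(url).netloc
        self.visited.add(url)
        count = self.page_counts.get(domain, 0) + 1
        self.page_counts[domain] = count
        h = hashlib.sha256(body.encode()).hexdigest()
        seen = self.content_hashes.setdefault(domain, set())
        duplicate = h in seen
        seen.add(h)
        # Only start rejecting duplicate content past the soft limit
        return not (count > self.soft_limit and duplicate)
```

Usage: call `should_fetch` before hitting a URL, then `record` with the response body; a `False` from `record` means you fetched a dynamic-URL duplicate and should stop following links from it.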
Not daily, but their canvas feature lets you embed previews of your files into the flow charts you make. It’s pretty nice, since shorter files can be entirely visible alongside everything else. Makes it pretty good for software development and project management, in my experience.
Careful not to go overboard with it, though. I feel like a lot of people fall down the “productivity pipeline” when using it, where they procrastinate by trying to optimize every little thing and end up doing nothing at all.