I recently finished a project where I hit a roadblock for a few days. Part of the project involved serving the end user a deny page that Squid generated, plus some of my own code to let a user log in. The problem? After the user logged in, they were greeted with the same deny page if they tried to access a URL that had previously generated it.
Example: you browse to www.yahoo.com and get our login page. You log in to our system and then continue to browse. But if you try to hit www.yahoo.com again, you get the same login page. It's being cached somewhere.
I was already setting the appropriate meta tags to tell Squid not to cache the page, and I verified that the document was not in the Squid cache. Squid generates error pages not as HTTP 200s but as 403s. That's fine, but the catch is that Squid caches HTTP errors (404s, 403s, etc.) in memory for a configurable period of time, defaulting to 5 minutes.
Squid has a directive to customize this: negative_ttl. If your users are being authenticated through an external_acl, you'll need to add the negative_ttl option to that helper definition as well, since failed lookups are cached there separately.
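To make that concrete, here's a minimal squid.conf sketch. The helper name and script path are placeholders for illustration; only the negative_ttl settings are the point:

```
# squid.conf

# Don't cache negative responses (HTTP errors such as 403/404).
# The default is 5 minutes; 0 disables negative caching entirely.
negative_ttl 0 seconds

# If authentication runs through an external ACL helper, failed
# lookups are cached by the helper's own negative_ttl option, so
# set it there too. Helper name and command are hypothetical.
external_acl_type my_auth_helper ttl=300 negative_ttl=0 %SRC /usr/local/bin/check_auth.sh
acl authed external my_auth_helper
http_access allow authed
```

With negative_ttl=0 on the external_acl_type line, a user who just logged in is re-checked immediately instead of being served the cached "denied" result from before they authenticated.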