
1 Aug 12, 2005 22:39    

This was moved from the "spam: an IP based approach" thread:
http://forums.b2evolution.net/viewtopic.php?t=4876&highlight=antispam

I am thinking of password-protecting all of my blogs and requiring anyone who wants access to at least use a generic username and password to enter, kind of like DriverGuide does to keep bots out. If anyone has done this successfully, please let me know.

EdB said we needed a new thread so I am starting one.

2 Aug 12, 2005 23:02

I'm thinking of two ways you could do that.

1. Set all posts to protected status; then users would have to have a user account on your b2evo install to view posts. I guess they would still see the front page, but it would say 'no posts' or something. You could create a user account that doesn't have any permission to post, but is set as a member of the blog.

2. Use [url=http://www.htmlite.com/HTA006.php].htaccess[/url] to lock up your index.php file, your stub file, or your whole blogs folder. Then you could create a generic username and password for people to use to get in.
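Here's roughly what that .htaccess could look like, assuming Apache with Basic Auth (the paths are just examples):

[code]
# Protect just the stub file (drop the <Files> wrapper to lock the whole folder):
<Files "index.php">
AuthType Basic
AuthName "Members only"
AuthUserFile /full/path/to/.htpasswd
Require valid-user
</Files>
[/code]

You'd create the password file once with something like [code]htpasswd -c /full/path/to/.htpasswd bingo[/code] and then hand out that generic login.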

3 Aug 12, 2005 23:18

As a follow-up to personman's idea 1: you could make a public post saying you need to log in to see the blog. Then, if I follow you correctly, you could simply say in that post "you can log in with the generic username 'bingo' and the password 'yahtzee'". That is, if you want everyone and their brother to see the blog but don't want any non-humans to get in. You then set up bingo as a member of the blog with no posting permissions. bingo will then be able to see the posts but not make any.

This will also stop search engines from getting in because they're not humans, but that's your choice to make.

You could even make the login trick a footnote under your blog posts, so that you could occasionally post publicly without losing the trick that lets regular mortals see your blog.

4 Aug 13, 2005 00:04

Since you complained your 700-byte "403" (access denied) page is already too big, anything other than .htaccess (or an equivalent PHP script sending HTTP directives) doesn't appear to be a good way to password-protect your blog: your login page would simply become the spammers' new target.

You might move your logging function to the top of your skin and call the PHP die() function (or send an HTTP redirect) when a spammer is identified. However, I noticed the hit_log() function costs about half of the page processing time (tested against about 3,000 spammers...). That doesn't look like an efficient way to fight huge amounts of spam.
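For illustration, the early-exit idea would look something like this at the very top of the skin (a hypothetical sketch, not b2evolution code; the blacklist array here is just an example, in practice it would come from your antispam data):

[code]
<?php
// Hypothetical sketch, not b2evolution code: the blacklist array is just
// an example; in practice it would come from your antispam data.
$blacklist = array( '192.0.2.1', '192.0.2.2' );
if( in_array( $_SERVER['REMOTE_ADDR'], $blacklist ) )
{	// Refuse the request before any skin or MySQL work is done:
	header( 'HTTP/1.0 403 Forbidden' );
	die( 'Access denied.' ); // a few bytes instead of a full dynamic page
}
?>
[/code]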

You already applied the [url=http://forums.b2evolution.net/viewtopic.php?t=4876&highlight=antispam]BlockUntrustedVisitors()[/url] hack discussed in the [url=http://forums.b2evolution.net/viewtopic.php?t=4876&highlight=antispam]spam: an IP based approach[/url] thread. As you've already noticed, it prevents some spammers from reaching your blog.

Finally, fighting spam is starting to cost a lot. I've begun some tests and about 30 to 70% of my server resources are dedicated (mainly) to fighting spam. That is getting ridiculous. I am wondering about writing a "real" cache system for [url=http://b2evolution.net]b2evolution[/url] aimed at generating raw, static HTML pages up to several hundred times quicker for the web server to process than the original heavy (X)HTML+PHP+MySQL dynamic pages... That wouldn't fight spam, but it would make spam far less relevant to server resource usage...
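To illustrate the principle (just a sketch, nothing from b2evolution; the file naming and lifetime are arbitrary):

[code]
<?php
// Sketch of a full-page file cache: serve a static copy when it is fresh,
// otherwise generate the page once and save it for the next visitor.
$cache_file = 'cache/'.md5( $_SERVER['REQUEST_URI'] ).'.html';
$lifetime = 600; // seconds

if( file_exists( $cache_file ) && time() - filemtime( $cache_file ) < $lifetime )
{	// Serve the raw, static copy: no templating, no MySQL.
	readfile( $cache_file );
	exit;
}

ob_start();
// ... generate the page normally here (skin, MySQL queries, etc.) ...
$html = ob_get_flush(); // send the page and grab a copy of it
file_put_contents( $cache_file, $html ); // save the copy for the next visitor
?>
[/code]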

5 Dec 13, 2005 15:32

kwa wrote:

I am wondering about writing a "real" cache system for [url=http://b2evolution.net]b2evolution[/url] aimed at generating raw, static HTML pages up to several hundred times quicker for the web server to process than the original heavy (X)HTML+PHP+MySQL dynamic pages...

That would be just peachy. I liked the idea of SimpleCache and this seems even better.

kwa wrote:

That wouldn't fight spam, but it would make spam far less relevant to server resource usage...

Fighting referral spam is exactly that: not posting their links and not letting them drain (for people like me) precious server bandwidth. I don't think any blacklist, no matter how good, can keep up with spammers.

6 Dec 13, 2005 18:17

kwa wrote:

I am wondering about writing a "real" cache system for [url=http://b2evolution.net]b2evolution[/url] aimed at generating raw, static HTML pages up to several hundred times quicker for the web server to process than the original heavy (X)HTML+PHP+MySQL dynamic pages... That wouldn't fight spam, but it would make spam far less relevant to server resource usage...

I've already implemented memcached support for anonymous pages. It's in the post-phoenix branch and I use it on a regular basis (for local tests). Benchmarks are indeed very promising: 70-80 ms/page instead of 700-1000 ms.
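Roughly, the idea works like this (a simplified sketch using the PECL Memcache extension, not the actual post-phoenix code; the key naming and lifetime are made up):

[code]
<?php
// Simplified sketch of memcached page caching, NOT the actual branch code.
$mc = new Memcache();
$mc->connect( 'localhost', 11211 );

$key = 'page_'.md5( $_SERVER['REQUEST_URI'] );
$html = $mc->get( $key );
if( $html !== false )
{	// Cache hit: serve the stored HTML, skip PHP templating and MySQL.
	echo $html;
	exit;
}

ob_start();
// ... generate the page normally here (skin, MySQL queries, etc.) ...
$html = ob_get_clean();
$mc->set( $key, $html, 0, 300 ); // keep for 5 minutes
echo $html;
?>
[/code]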

7 Dec 14, 2005 12:44

blueyed wrote:

I've already implemented memcached support for anonymous pages. It's in the post-phoenix branch and I use it on a regular basis (for local tests). Benchmarks are indeed very promising: 70-80 ms/page instead of 700-1000 ms.

I'm scared of memcached :-/
Plus it's something with .htaccess, right? And you have to ask your host to install something, right? If they're already mad at you for your bandwidth usage, just think what they'll say when you ask for sketchy server add-ons.

And then some people don't even have the option of doing this.

My host has taken my site down a few times in the past 6 months, and I have a feeling it's because of b2evo.
I had it installed on another server, but the spam sucked up all the site's bandwidth, so we had to temporarily disable it.
I'm hoping that Phoenix solves some problems. Then again, a good static HTML caching system would solve some things too.

8 Dec 14, 2005 14:58

ilsott, are you using version 0.9.1 now? It does a lot to help reduce the bandwidth that spammers use.

9 Dec 14, 2005 23:55

blueyed wrote:

I've already implemented memcached support for anonymous pages. It's in the post-phoenix branch and I use it on a regular basis (for local tests). Benchmarks are indeed very promising: 70-80 ms/page instead of 700-1000 ms.

Supporting [url=http://www.danga.com/memcached/]memcached[/url] is a great idea; it's exactly what people might have dreamt about. However, in reality some hosts don't support memcached on their servers; mine won't install it. In such a case, a file-based SQL request cache modeled on the memcached implementation might help increase speed. (Still to be tested; it's not so obvious.)
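Something like this could be a starting point (a hypothetical sketch only; the function names and file layout are made up, and the table name in the usage part is just an example):

[code]
<?php
// Hypothetical sketch: a file-based query cache with a memcached-like
// get/set interface. Returns false on a miss, same convention as memcached.
function query_cache_get( $sql, $max_age = 300 )
{
	$file = 'cache/sql_'.md5( $sql ).'.ser';
	if( file_exists( $file ) && time() - filemtime( $file ) < $max_age )
	{
		return unserialize( file_get_contents( $file ) );
	}
	return false;
}

function query_cache_set( $sql, $rows )
{
	file_put_contents( 'cache/sql_'.md5( $sql ).'.ser', serialize( $rows ) );
}

// Usage (table name is just an example):
$sql = 'SELECT post_title FROM evo_posts';
if( ($rows = query_cache_get( $sql )) === false )
{	// Cache miss: run the real query, then store the result.
	$res = mysql_query( $sql );
	$rows = array();
	while( $row = mysql_fetch_assoc( $res ) )
	{
		$rows[] = $row;
	}
	query_cache_set( $sql, $rows );
}
?>
[/code]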

10 Dec 15, 2005 11:44

memcached does not seem to have much support from web hosts.
The most common cache/accelerators used are eAccelerator (buggy and losing favor) and IonCube PHP Accelerator.
Having Zend Optimizer installed helps as well.

