How much can misconfigured WordPress plugins hurt your server’s performance

A couple of days ago I met a guy who has a high-traffic blog about tech stuff. He was telling me that he had hosting problems and that his blog was getting slower and slower by the day. I offered to help him by providing hosting on one of the servers that I administer. After making the transition from his old hosting to my server, which was not an easy thing to do due to the latin1 to utf8 conversions that had to be made (that deserves a post of its own), I started to notice increased load on my server. Sure, his blog had heavy traffic… but could it be that bad?

Applications Meme

Following the request of comzeradd

answer the following questions with fewer than 50 words each (mind you, only desktop/GUI applications):
1. which desktop manager do you use most often?
2. which desktop application would you not like to see implemented again on Linux, and why?
3. which desktop application would you definitely like to see implemented on Linux? Describe it briefly or point to a similar application.
4. write the name of the last project (not the very best, the last!) that made you wish to thank its developers, so you can thank them now! 🙂
5. (optional) link the blogs of 1-3 people you’d like to take part in this meme (no more than three). You can skip this question if you like.

and here are my answers:

1. fluxbox
2. Audio players
3. There isn’t a single desktop application that I use on another OS and think needs implementing on Linux. Sure, applications like CAD and serious audio/video editing tools for professionals are missing, but I am sure that, from a psychological point of view, if MS Office worked on Linux a lot of people would feel much more comfortable switching to it, just like what happens with Mac OS X.
4. KeePassX: an excellent cross-platform tool to manage one’s passwords
5. stsimb, agorf, adamo

macports error

While trying to update my installed ports, I got errors like:

Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_textproc_libiconv/work/libiconv-1.12" && make CC= -f Makefile.devel && make all " returned error 2
Command output: O lib/genaliases.c -o genaliases
make: O: Command not found
make: [lib/aliases.h] Error 127 (ignored)
./genaliases > lib/aliases.gperf 3> canonical.sh 4> canonical_local.sh
/bin/sh: ./genaliases: No such file or directory
make: *** [lib/aliases.h] Error 127

The reason was that I was using an old MacPorts version, 1.6.0 instead of 1.7.0. A sudo port selfupdate fixed the problem. I hope that in the future MacPorts will be upgradable through normal upgrades, the way Gentoo’s portage and Debian’s apt-get tools do it, so we don’t need to “selfupdate” every once in a while…
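
For reference, the sequence that got things building again was roughly the following (the second step is optional, depending on how much you want to upgrade at once):

# update MacPorts itself and the ports tree
sudo port selfupdate
# then upgrade whatever is outdated
sudo port upgrade outdated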

MacOSX: Vodafone Mobile Connect not opening

Today I had a very unpleasant surprise with Vodafone Mobile Connect on Mac OS X. After a normal laptop standby, the application refused to open. Upon starting it, it peaked at 100% CPU usage but no GUI ever appeared, so I had to kill it after a while… No messages in the console either. The solution was to (re)move the /Library/Application Support/nova media and /Library/Application Support/Vodafone folders to another location.

This way you lose your stats (data transferred, time used), but at least you can get back on the net… pheeeewwww
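
In case it helps, the move might look something like this (the backup directory is just an example; any location outside /Library/Application Support will do):

# keep the old folders around in case you want the stats back
mkdir -p ~/vmc-backup
sudo mv "/Library/Application Support/nova media" "/Library/Application Support/Vodafone" ~/vmc-backup/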

a small tip for more efficient command line usage on debian

Debian is one of the few distros where, by default, you can’t search the bash history backward or forward for past commands with the PgUp/PgDn keys.
To change that behaviour you need to uncomment two lines inside /etc/inputrc.

Change:

# alternate mappings for "page up" and "page down" to search the history
# "\e[5~": history-search-backward
# "\e[6~": history-search-forward

To:

# alternate mappings for "page up" and "page down" to search the history
"\e[5~": history-search-backward
"\e[6~": history-search-forward

Example usage:
To search through your old commands that started with “ssh” (e.g. ssh -p 551 koko@lala.gr, ssh foo@bar.gr, ssh test@koko.gr -L1111:1.2.3.4:9876), just type ssh and hit PgUp; you will see the previous ssh commands appearing on the command line.
$ ssh[PgUp] becomes $ ssh -p 551 koko@lala.gr
Hit PgUp again and it becomes $ ssh foo@bar.gr
One more PgUp and it becomes $ ssh test@koko.gr -L1111:1.2.3.4:9876
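
If you’d rather not touch the system-wide file, the same two bindings can also go into a per-user ~/.inputrc, which readline reads for just your account:

"\e[5~": history-search-backward
"\e[6~": history-search-forward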

Help needed on apache2 segfaults

Dear Internet,

I need your help!
I have a Debian stable (4.0) server running apache2 (version 2.2.3-4+etch6) which is hosting more than 10 different sites. The problem is that in the apache2 error log I can see a lot of segfaults. All the sites, though, continue to work properly and nobody has ever complained about them.

Some logs:

[Tue Feb 03 18:30:36 2009] [notice] child pid 1353 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:37 2009] [notice] child pid 29343 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:37 2009] [notice] child pid 1350 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:38 2009] [notice] child pid 1349 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:38 2009] [notice] child pid 1352 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:39 2009] [notice] child pid 1354 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:41 2009] [notice] child pid 1380 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:42 2009] [notice] child pid 1378 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:42 2009] [notice] child pid 1714 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:44 2009] [notice] child pid 1715 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:44 2009] [notice] child pid 1718 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:45 2009] [notice] child pid 1720 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:45 2009] [notice] child pid 1721 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:46 2009] [notice] child pid 1723 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:47 2009] [notice] child pid 1724 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:47 2009] [notice] child pid 1725 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:49 2009] [notice] child pid 1726 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:49 2009] [notice] child pid 1728 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:50 2009] [notice] child pid 1729 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:50 2009] [notice] child pid 1730 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:51 2009] [notice] child pid 1358 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:51 2009] [notice] child pid 1733 exit signal Segmentation fault (11)

In order to find out what causes the segfaults I have enabled the following option inside /etc/apache2/apache2.conf:
CoreDumpDirectory /tmp-apache/
$ ls -Fla / | grep tmp-apache
drwxrwxrwx 2 www-data www-data 4096 2009-01-31 11:01 tmp-apache/

I have changed the ulimit settings inside /etc/security/limits.conf
* soft core unlimited
* hard core unlimited

I have even added a ulimit -c unlimited setting inside /etc/init.d/apache2.
But I still get no core dumps inside /tmp-apache/ from the segfaulting children.
If I manually kill -11 an apache pid, then I can see a core file inside /tmp-apache/.

I have only seen one or two core dumps generated by apache, and using gdb I could see that they both “blamed” a function in /usr/lib/apache2/modules/libphp5.so. In my quest to find which site/code causes the segfaults I have recompiled apache2 to enable mod_whatkilledus. But no core dump has been created in /tmp-apache/ for more than a week, even though the segfaults keep happening.
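
For the record, inspecting the few cores I did get looked roughly like this (the core file name depends on your kernel’s core_pattern setting):

$ gdb /usr/sbin/apache2 /tmp-apache/core
(gdb) bt full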

I have reduced my modules, removed mod_python, mod_perl, etc., and still these segfaults keep occurring, but no core dumps. I suspect that the only time I got a core was when a parent and not a child process segfaulted. I don’t think my apache2 children dump core when they segfault.

Is there anything I could have done that I haven’t done? Is there a way to force apache2 children to dump core, or any other way to determine what causes these segfaults? All this without, of course, closing down the sites one by one to see when the segfaults stop…
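
For completeness, one approach I haven’t tried yet (so treat it as an untested sketch) is attaching gdb to a running child and waiting for it to crash, using one of the child pids from the error log:

$ gdb -p 1353
(gdb) continue
# ... wait until the child receives SIGSEGV ...
(gdb) bt full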

Thanks in advance to anyone that replies!

P.S. The blog’s database is playing some tricks on me… I hope it’s OK now and the post is fully published.

Gentoo’s epic phail

As some people already know, I joined the army two months ago, which makes it somewhat difficult for me to keep up with the latest updates for every machine I use.
Today I tried to upgrade a machine running stable (x86) Gentoo Linux, more than 15 days after the last upgrade, and I was confronted with an epic Gentoo failure. The problem is clearly described on Odi’s blog. While trying to update e2fsprogs you have to uninstall the old version (so far so good), remove sys-libs/ss (that’s also acceptable) and remove sys-libs/com_err, which is a dependency of MANY MANY programs, wget among them. So when you remove all these packages and try to install e2fsprogs-libs, wget can’t work anymore due to the missing libcom_err.so.2 file, so you can’t download the updates! You can’t open a new ssh connection to the machine either, so you can’t sftp the libcom_err.so.2 file from another machine! I had success by placing libcom_err.so.2 on an NFS share on another machine, mounting that share from the broken machine, and re-emerging e2fsprogs-libs, since wget could then work.
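
A rough sketch of the rescue, with made-up hostnames and paths, and assuming the share is already exported (copying the library back into /usr/lib is just one way to do it; pointing LD_LIBRARY_PATH at the mount should also work):

# on a healthy machine that still has the library
mkdir -p /export/rescue
cp /usr/lib/libcom_err.so.2* /export/rescue/

# on the broken machine, over the ssh session that is still open
mount -t nfs healthy-box:/export/rescue /mnt
cp /mnt/libcom_err.so.2* /usr/lib/
ldconfig
emerge --oneshot e2fsprogs-libs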

You can read more about the bug on Gentoo’s bugzilla, here and here.

a) Since portage is NOT ready to handle these kinds of situations (according to bugzilla), why did the maintainers mark e2fsprogs-libs as stable? Why didn’t they do ANY testing at all on the stable branch?
b) Why wasn’t there a warning/alert/whatever on Gentoo’s website?
c) It’s been more than 10 days since the problem first appeared and there still hasn’t been an official solution, either via a portage upgrade or via package masking.
d) Should I have googled or searched the forums before upgrading? Possibly yes… but Gentoo didn’t have such upgrade problems before. I could accept it if the problem were caused by an upstream ABI breakage, like the one dev-libs/expat had not so long ago, but this looks to me like a purely Gentoo-related problem and not an upstream one.

I award Gentoo and the e2fsprogs-libs maintainers a sad trombone. Sorry people, but this is an epic failure. You deserve it.

Mac OS X tips/reminders

3 simple tips/reminders for stuff I had to deal with while using Mac OS X over the last two days…

To get the arrow keys working inside vim on a remote server, one needs to change Mac OS X’s terminal type:
$ cat .profile
TERM=linux
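
If you don’t want to change the terminal type globally, you can also set it for a single session only (the host is just an example); ssh passes the TERM value along when it allocates the remote pty:

$ TERM=linux ssh user@remote.host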

If you use the push "redirect-gateway" option in an OpenVPN server configuration file, you need to add redirect-gateway def1 to your client’s configuration file when using OpenVPN’s Mac OS X client (Tunnelblick), or else the previous default route is not restored when you close the VPN.
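
In other words, the client-side config just needs to carry this one extra line (the rest of the client configuration stays whatever you already use):

# client config (e.g. client.conf / .ovpn)
redirect-gateway def1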

To check the signal quality of nearby access points, get AP Grapher.

rxvt-unicode 256 color support with vim

Following my previous post on minimizing the resources that urxvt needs on Gentoo, I tried applying some more patches to it that I found in Gentoo’s bugzilla.
Since that happened only a few days ago, there was no ebuild for version 9.05 yet, so I created one and applied the patch for 256-color support.

Here’s my rxvt-unicode ebuild for version 9.05 with 256 color support: rxvt-unicode-9.05.ebuild.
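
If you want to try it, a rough sketch of installing a custom ebuild through a local overlay (assuming PORTDIR_OVERLAY=/usr/local/portage in make.conf; the 256-color patch itself goes into a files/ subdirectory next to the ebuild):

mkdir -p /usr/local/portage/x11-terms/rxvt-unicode
cp rxvt-unicode-9.05.ebuild /usr/local/portage/x11-terms/rxvt-unicode/
cd /usr/local/portage/x11-terms/rxvt-unicode
ebuild rxvt-unicode-9.05.ebuild digest
emerge -av rxvt-unicode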

Euro 2008 open source tour

451 CAOS Theory has a mini review of what’s going on with open source among the countries that compete in Euro 2008.
It’s quite interesting.
Here’s the link about Greece. It has quite a point…Things don’t look very promising…

The quest for a better rxvt-unicode on Gentoo

Today, while studying, I decided to manually run prelink on my system. For no good reason, just boredom I guess. The results were pretty interesting though.

Among the output there was a line that made a very big impression on me:
prelink: /usr/bin/urxvt: Cannot prelink against non-PIC shared library //usr//lib/opengl/nvidia/lib/libGL.so.1
Why oh why is libGL.so.1 among the shared libraries of a terminal emulator???
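
You can check the same thing on your own box with something like the following (the nvidia path in the output obviously depends on which OpenGL implementation you have installed):

$ ldd /usr/bin/urxvt | grep -i libgl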

Speed up multiple ssh connections to the same destination

When you make multiple ssh connections to the same host, there’s a way to speed them up by multiplexing them. When you open the first connection, a special control socket is created, and all subsequent connections to the destination machine pass through that first network connection instead of opening new ones. All of this is done via the ControlMaster and ControlPath settings of ssh_config.

Example usage:
Inside /etc/ssh/ssh_config
ControlMaster auto
ControlPath /tmp/%r@%h:%p

First ssh connection:
% ssh foobar@foo.bar.gr
Password:
Linux foo.bar.gr 2.6.20.1-1-686 #1 SMP Sun Mar 4 12:44:55 UTC 2007 i686 GNU/Linux
foobar@foo:~$

Second ssh connection:
% ssh -p 22 foobar@foo.bar.gr
Linux foo.bar.gr 2.6.20.1-1-686 #1 SMP Sun Mar 4 12:44:55 UTC 2007 i686 GNU/Linux
foobar@foo:~$

No password is asked for and the connection opens immediately.
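
If you’d rather not enable this for every host system-wide, the same settings can be scoped to a single host in your ~/.ssh/config (the hostname here is just an example):

Host foo.bar.gr
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p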

kudos to apoikos for telling me about this neat feature at fosscomm 🙂

Analyzing an attack on a honeypot

Δημήτρης has a pretty good analysis of an attack on a honeypot he has set up for experimentation. It’s worth taking a look…

They finally came after us

44Mbit of multicast traffic can cause a lot more problems than you might think

I was reading my mail today and bumped into some problems that Internet2 routers faced a couple of days ago with multicast traffic sent from a host in France. Apparently the host was sending 44Mbit of traffic to a multicast group, and that was more than enough to cause very high load on some routers and problems for some firewalls too. Their solution was to either blacklist the host or disable SAP listen on their routers.

To read more you can check the thread “Another SAP Storm?” on the wg-multicast@internet2.edu mailing list (“All things related to multicast”).

The same problem appeared on GrNET routers too, but unfortunately there are no public archives of the mails exchanged about it. The only way to look at this problem from the GrNET point of view is to check the GrNET router status page, click on the load of some routers, and look at the spike that appears on Wednesday night in the weekly graph.

Quite interesting…

Mobile view of the internet

This might be old news to most people but I didn’t know it…
You can use a special Google URL to view websites the way mobile phones do. Try this for example:
http://google.com/gwt/n?u=http://void.gr/kargig/blog/.
It’s quite useful when you want to see how your site looks from a mobile phone, or when you want to use a terminal browser like lynx or links (I know you don’t use these browsers, but sometimes I do…).
To begin browsing in “mobile view” just go to http://www.google.com/gwt/n and all links you click afterwards will be passed through the proxy.
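
For the terminal-browser case, usage is as simple as this (assuming lynx is installed):

$ lynx 'http://google.com/gwt/n?u=http://void.gr/kargig/blog/'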

And another link I liked was this: http://www.google.com/xhtml, a mobile view of Google’s search.