Auto-deleting Gmail emails after 4 days

I’m working on a project that needs an external MySQL backup sent to an email address as an attachment. This is all working fine. I needed to install mutt on the server and then add this to the end of a successful backup:

echo "Hi, This is the database backup" | mutt -a /backups/mysql/a.sql.gz -s "Database backup" -- a@b.com

I needed to add set copy=no to /etc/Muttrc, otherwise I got an error message and the backup wasn’t sent.
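For reference, the whole backup job only needs a few lines of shell. This is a minimal sketch; the database name, credentials and output path are placeholders for whatever your setup uses, and only the mutt call matches the one above.

#!/bin/sh
# Placeholder values - adjust for your own setup
DB=mydb
OUT=/backups/mysql/a.sql.gz

# Dump and compress, then email the result if the dump succeeded
mysqldump -u backupuser -pSECRET "$DB" | gzip > "$OUT" \
  && echo "Hi, This is the database backup" \
     | mutt -a "$OUT" -s "Database backup" -- a@b.com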

This is going to fill up the Gmail inbox pretty quickly, so I set up a script to delete all emails older than 4 days.

First I went to http://www.google.com/script/start/ to create a script. I created a blank project, copied in the code below, saved the project and then went to Resources -> current project’s triggers. I set the script to run every 12 hours, which means there are always a few recent backups in the inbox if anything should happen to the server.

/**
 * Adapted from http://www.addictivetips.com/web/set-gmail-to-auto-delete-emails-older-than-a-set-number-days/
 */
function cleanUp() {
  var delayDays = 4; // Number of days before messages are moved to trash
  var maxDate = new Date();
  maxDate.setDate(maxDate.getDate() - delayDays);
  var threads = GmailApp.getInboxThreads();
  for (var i = 0; i < threads.length; i++) {
    if (threads[i].getLastMessageDate() < maxDate) {
      threads[i].moveToTrash();
      Utilities.sleep(500); // throttle to stay within Gmail quota limits
    }
  }
}

Installing Redis to use as Laravel 4 cache

The following commands will install redis on a server running CentOS 6.4.

First, install the epel repo

sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm 

Next, install the remi repo

sudo rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm

Now, you should be able to install redis using the yum package manager.

yum install redis -y

(via http://codybonney.com/installing-redis-on-centos-6-4/)
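With the package installed, it’s worth starting the service and checking that it responds (redis-cli ships with the redis package):

sudo service redis start
redis-cli ping
# should reply with PONG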

To browse the Redis keys, I also set up phpRedisAdmin and symlinked it into the Vagrant project’s public directory:

mkdir /usr/share/redis
cd /usr/share/redis
git clone https://github.com/ErikDubbelboer/phpRedisAdmin.git
cd phpRedisAdmin
git clone https://github.com/nrk/predis.git vendor
ln -s /usr/share/redis/phpRedisAdmin /vagrant/public/redis

In local/config/database.php:

'redis' => array(
	'cluster' => false,
	'default' => array(
		'host'     => '127.0.0.1',
		'port'     => 6379,
		'database' => 0,
	),
),

and in app/config/cache.php, switch the cache driver over to Redis:

'driver' => 'redis',
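A quick way to confirm that Laravel is actually writing to Redis is to look for cache keys in database 0 (this assumes the default 'laravel' cache prefix from app/config/cache.php):

redis-cli -n 0 keys 'laravel*'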
To have Redis start on boot:

  • Find the name of the service’s init script in the /etc/init.d/ directory, e.g. redis
  • Add it to chkconfig
    sudo /sbin/chkconfig --add redis
  • Make sure it is registered with chkconfig
    sudo /sbin/chkconfig --list redis
  • Set it to start automatically
    sudo /sbin/chkconfig redis on


Updating PHP memory limits with Nginx

I needed to update the PHP memory limits.

Running phpinfo() showed that the php.ini file was at /etc/php.ini, so I updated the file and restarted Nginx.

Nothing happened; the memory limit stayed the same.

It turned out that Nginx just proxies requests to PHP-FPM, which only reads php.ini at startup, so it’s the php-fpm service that needs restarting:

service php-fpm restart
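A quick sanity check after the restart (note that the CLI can load a different php.ini than PHP-FPM, so re-running phpinfo() in the browser is the definitive test):

php -r "echo ini_get('memory_limit'), PHP_EOL;"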

It’s also worth reading this article for more details

https://rtcamp.com/tutorials/php/increase-script-execution-time/

Tomcat 6 slow shutdown problems

I’m running Tomcat 6 for a client and the server has had a history of really slow shutdowns. The server admin people couldn’t find a reason, but I finally tracked down the problem.

The Ubuntu syslog in /var/log/syslog was showing the following entry:

org.apache.catalina.connector.Connector pause SEVERE: Protocol handler pause failed java.net.ConnectException: Connection timed out

The cause became apparent when I ran:

hostname

The result was “www”, which was not very helpful.

I updated the /etc/hostname file and then ran

/etc/init.d/hostname.sh start

The stop time dropped from around 3 minutes to 30 seconds.
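While in there, it’s worth making sure the new hostname actually resolves locally, since an unresolvable hostname is a classic cause of that “Connection timed out” pause. A hypothetical /etc/hosts entry (myserver stands in for whatever is in /etc/hostname):

127.0.0.1	localhost
127.0.1.1	myserver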

Tomcat6 on Ubuntu, heap size and perm gen size

I’ve been having some trouble with a server running Tomcat6 on Ubuntu with a Spring/Hibernate set-up. It also does some image resizing via ImageMagick.

The problem was that the perm gen space for the Hibernate objects was too small and the JVM ended up running out of memory.

No problem: on Ubuntu, the fix was to open /etc/default/tomcat6 and edit the JAVA_OPTS variable.

I started off with

JAVA_OPTS="-XX:MaxPermSize=512m"

That solved the Hibernate problem, but I was soon getting out of memory errors for image uploads, so I had to extend the line to:

JAVA_OPTS="-XX:MaxPermSize=512m -Xms256m -Xmx512m"

Running jmap gave me some useful feedback about the memory that Java was actually using:

jmap -heap <pid>

There are some useful resources out there for anyone interested in reading more:

http://diegobenna.blogspot.co.uk/2011/02/how-to-increase-heap-size-in-tomcat-6.html

http://www.yaronco.com/tomcat6-and-java-heap-size/

PHP upload limits

Every now and then I need to reset the PHP upload limits in the PHP ini file.

The obvious setting is

upload_max_filesize = 1024M

The less obvious settings are:

file_uploads = On
max_file_uploads = 20
post_max_size = 1024M

I’ve found it easy to forget to change post_max_size, which needs to be the same as or greater than upload_max_filesize.
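A quick way to double-check the pair without trawling through phpinfo() output (the usual CLI-vs-FPM php.ini caveat applies):

php -r "echo ini_get('upload_max_filesize'), ' / ', ini_get('post_max_size'), PHP_EOL;"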

Exporting large databases with MySQL dump

I had to export a 7GB database recently using mysqldump.

I started out with the usual export call

mysqldump -u admin -p sugar > sugar_28_may.sql

However, the users complained that they couldn’t access the system while it was being exported.

The culprit was mysqldump’s habit of locking tables while it exports them. The solution was this:

mysqldump -u admin -p --lock-tables=false sugar > sugar_28_may.sql
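If the tables are InnoDB, --single-transaction is worth knowing about too: it takes a consistent snapshot without locking anything. A sketch using the same database as above (it doesn’t help for MyISAM tables, which is where --lock-tables=false comes in):

mysqldump -u admin -p --single-transaction sugar > sugar_28_may.sql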

The other problem I hit was this message:

mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `emails_text` at row: 84538

That one meant waiting 15 minutes while the dump grew to 2GB, only for the error to kick in.

I checked the max_allowed_packet by running

mysqldump --help
---
max_allowed_packet                25165824

It turned out mysqldump was running with a 24MB packet limit, so I increased it to 1GB:

mysqldump -u admin -p --max_allowed_packet=1073741824 --lock-tables=false sugar >sugar_28may12.sql

Note that the --max_allowed_packet option on MySQL 5.0.x seems to be --max-allowed-packet on MySQL 5.5.x. Running mysqldump --help will show the correct syntax to use.
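It’s also worth remembering that the figure mysqldump --help reports is the client-side default; the server keeps its own max_allowed_packet, which can be checked directly:

mysql -u admin -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"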

Apache OSX Lion 403 Forbidden Problem

I had to set up multiple development sites using the Apache virtual hosting. All were coming back with the dreaded 403 page.

I did a lot of research and nothing seemed to work.

Finally, I found something that did. My virtual hosts were all running from directories I had set up outside the /Library/WebServer location. I didn’t want to move the directories there and I didn’t want to symlink them there either.

I found these lines in the httpd.conf file:

User _www
Group _www

I changed them to:

User kevinsaunders
Group staff

All fine, except that PHP was then not allowed to write its session files in /var/tmp:

sudo chown -R kevinsaunders:staff /var/tmp/*

That seemed to fix it all.
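For anyone who would rather not change the Apache user, the more conventional fix is a <Directory> block granting access to the development tree. A sketch with a hypothetical path, using the Apache 2.2 syntax that ships with Lion (the _www user also needs execute permission on every directory along the path):

<Directory "/Users/kevinsaunders/Sites">
    Options Indexes FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>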

Image Magick bash script to resize and crop images

Resizing images is something that almost every web developer has to do at some point.

This is a bash script which resizes and crops images to a perfect 72×72 square.

#!/bin/sh

# Script to resize and crop images to a 72x72 square

dir="/Users/data/files/candidate/photo/"
resume=resume
orig=_orig
cd "$dir$orig"
for k in *.jpg
do
	h=`identify -format "%h" "$k"`
	w=`identify -format "%w" "$k"`
	if [ "$w" -ge "$h" ]
	then
		# Landscape: fit the height, then crop the width
		convert "$k" -resize x72 -crop 72x72+0+0 "$dir$resume/$k"
	else
		# Portrait: fit the width, then crop the height
		convert "$k" -resize 72x -crop 72x72+0+0 "$dir$resume/$k"
	fi
done
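For a one-off image, ImageMagick can do the same fill-and-crop in a single command using the ^ geometry flag (input.jpg and output.jpg are placeholders). Note that -gravity center crops from the middle rather than the top-left corner as the script above does:

convert input.jpg -resize '72x72^' -gravity center -extent 72x72 output.jpg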