Debugging mysql crash on a low-memory VPS

Recently I had a run-in with a seemingly random, occasional crash of mysql on a system with only 512MB of memory. My suspicion was that mysql occasionally runs some cleanup task, or something along those lines, that causes memory usage to spike and ultimately crashes the server. So I wrote this quick-and-dirty script to log 48 hours of memory usage:

#!/usr/bin/env python
import os
import datetime
import psutil
# This script dumps the current memory state and appends it to mem.log.
# If more than 2880 entries are in the log file, it starts removing
# old ones in order to keep the log size down.
# To use this script, first install psutil: sudo pip install psutil
# then run: crontab -e
# and add this line at the bottom (adjust the path to where the script is):
# * * * * * cd /home/madmaze/trash; /usr/bin/python /home/madmaze/trash/
# This adds a new entry to mem.log every minute and keeps 48hrs of records.

def getUsage():
    info = {}
    memkeys = ["total", "available", "percent", "used", "free", "active", "inactive", "buffers", "cached"]
    swapkeys = ["total", "used", "free", "percent", "sin", "sout"]
    # MEM: pair each field of virtual_memory() with its name; skip the
    # fields that are not byte counts (percent) or that we don't need
    for name, val in zip(memkeys, psutil.virtual_memory()):
        if name not in ["percent", "active", "inactive"]:
            info["mem_" + name] = str(val / 1024 / 1024) + " MB"
    # SWAP: only total, used and free are byte counts we care about
    for name, val in zip(swapkeys, psutil.swap_memory()):
        if name in ["total", "used", "free"]:
            info["swap_" + name] = str(val / 1024 / 1024) + " MB"
    return info

maxlen = 2 * 24 * 60  # 2 days * 24 hr * 60 min
lines = []
if os.path.exists("mem.log"):
    with open("mem.log") as f_in:
        lines = f_in.readlines()
totalLen = len(lines)

if totalLen > maxlen + 100:
    # Crop file to length: rewrite it keeping only the newest entries
    f_out = open("mem.log", "w")
    for n, l in enumerate(lines):
        if (totalLen - n) < maxlen:
            f_out.write(l)
else:
    # Just append
    f_out = open("mem.log", "a")

# Actually append the next line of data
f_out.write(str(datetime.datetime.now()) + " " + str(getUsage()) + "\n")
f_out.close()
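To actually spot the spike in the log afterwards, a small helper like this works. It is a sketch, not part of the original script: the `peak_usage` name is my own, and it assumes the `<timestamp> {<dict>}` line format that the logger above writes to mem.log.

```python
import ast

def peak_usage(path="mem.log"):
    """Return (timestamp, used_MB) for the line with the highest mem_used."""
    peak_ts, peak_mb = None, -1.0
    with open(path) as f:
        for line in f:
            brace = line.find("{")
            if brace == -1:
                continue  # skip malformed lines
            ts = line[:brace].strip()
            info = ast.literal_eval(line[brace:])
            # values look like "412 MB"; take the numeric part
            used = float(info["mem_used"].split()[0])
            if used > peak_mb:
                peak_ts, peak_mb = ts, used
    return peak_ts, peak_mb
```

Running this over a couple of days of data points straight at the minute where memory consumption peaks.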

Judging by the recorded memory usage pattern, it seems that at 3AM EST a series of database-intensive cron jobs all kicked off at the same time, causing mysql's memory footprint to grow until it crashed. Long story short, I spaced the cron jobs out so that each one has plenty of time to complete before the next one begins.
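The fix in crontab terms looks roughly like this. The job names here are hypothetical placeholders, not my actual scripts, but they illustrate the staggering:

```shell
# Before: everything fired at 3AM and competed for the same 512MB
# 0 3 * * * /usr/local/bin/backup_db.sh
# 0 3 * * * /usr/local/bin/optimize_tables.sh
# 0 3 * * * /usr/local/bin/rotate_logs.sh

# After: spaced an hour apart so each job finishes before the next starts
0 3 * * * /usr/local/bin/backup_db.sh
0 4 * * * /usr/local/bin/optimize_tables.sh
0 5 * * * /usr/local/bin/rotate_logs.sh
```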