
Blog
mysql / replicating repair table
From the MySQL 5.1 manual, section 15.4.1.16 (“Replication and REPAIR TABLE”):

“When used on a corrupted or otherwise damaged table, it is possible for the REPAIR TABLE statement to delete rows that cannot be recovered. However, any such modifications of table data performed by this statement are not replicated, which can cause master and slave to lose synchronization. For this reason, in the event that a table on the master becomes damaged and you use REPAIR TABLE to repair it, you should first stop replication (if it is still running) before using REPAIR TABLE, then afterward compare the master’s and slave’s copies of the table and be prepared to correct any discrepancies manually, before restarting replication.”
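A minimal sketch of that procedure, with made-up host names and a hypothetical mydb.mytable (CHECKSUM TABLE is only a first-order comparison, but it is cheap):

```shell
# 1. Stop replication on the slave first.
mysql -h slave.example.com -e 'STOP SLAVE;'

# 2. Repair on the master; this may silently delete unrecoverable rows.
mysql -h master.example.com -e 'REPAIR TABLE mydb.mytable;'

# 3. Compare both copies and fix any discrepancies by hand.
mysql -h master.example.com -e 'CHECKSUM TABLE mydb.mytable;'
mysql -h slave.example.com  -e 'CHECKSUM TABLE mydb.mytable;'

# 4. Only then restart replication.
mysql -h slave.example.com -e 'START SLAVE;'
```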
indirect scp / bypass remote firewall rules
Suppose I’m on machine DESKTOP and I want to copy files from server APPLE to server BANANA. DESKTOP has access to both, but firewalls and/or missing ssh keys prevent direct access between APPLE and BANANA. Regular scp(1) will fail here: it attempts a direct copy between the two servers and then gives up. This is where this indirect scp wrapper (view) comes in: first, it tries the direct copy.
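The fallback idea can be sketched in a few lines, with the host names and paths above standing in for real ones; newer OpenSSH clients also have scp -3, which routes the data through the local machine in one command:

```shell
# Plan A: direct copy (fails when APPLE cannot reach BANANA).
scp APPLE:/path/file BANANA:/path/ ||
{
    # Plan B: bounce the file through DESKTOP.
    tmp=$(mktemp -d) &&
    scp APPLE:/path/file "$tmp/" &&
    scp "$tmp/file" BANANA:/path/
    rm -rf "$tmp"
}

# Or, with a recent OpenSSH client, in one go:
scp -3 APPLE:/path/file BANANA:/path/
```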
mysql replication / relay log pos
So, hardware trouble caused a VPS to go down. This VPS was running a MySQL server in a slave setup. Not surprisingly, the unclean shutdown broke replication. There are several possible causes for slave setup breakage; this time it was the local relay log file (mysqld-relay-bin.xxxx) that was out of sync. SHOW SLAVE STATUS\G looked like this:

...
    Master_Log_File: mysql-bin.001814  <-- remote/master file (IO thread)
Read_Master_Log_Pos: 33453535          <-- remote/master pos (IO thread)
     Relay_Log_File: mysqld-relay-bin.
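The usual repair for a corrupt relay log is to throw it away and re-fetch it from the master, repointing the slave at the last statement it actually executed. A sketch, with placeholder values; CHANGE MASTER TO discards the old relay logs as a side effect:

```shell
mysql <<'EOF'
STOP SLAVE;
-- Use the SQL-thread position from SHOW SLAVE STATUS, i.e.
-- Relay_Master_Log_File / Exec_Master_Log_Pos -- NOT the IO-thread
-- position shown above. The values below are only examples.
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.001814',  -- Relay_Master_Log_File
    MASTER_LOG_POS  = 33443000;            -- Exec_Master_Log_Pos
START SLAVE;
EOF
```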
mysql slow / queries / sample
Sometimes you’re in a situation where you know that a database is more heavily loaded than it should be. Time to figure out which queries are stressing it the most. The standard thing to do with a MySQL database would be to enable query logging with general_log_file, or, to get only the slow queries and those not using indexes, log_slow_queries. But if this is a mission-critical and heavily loaded database, adding expensive logging may be just enough to give it that final push into overload.
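A cheap alternative is to sample the running queries instead of logging them all: polling the processlist once a second costs next to nothing. A rough sketch (connection options left out; the pipeline only counts exact-duplicate query texts):

```shell
# Take one processlist snapshot per second for a minute; the queries
# that turn up most often are likely the ones eating the most time.
for i in $(seq 60); do
    mysql -e 'SHOW FULL PROCESSLIST\G'
    sleep 1
done |
sed -n 's/^ *Info: //p' |   # keep only the query text
grep -v '^NULL$' |          # skip idle connections
sort | uniq -c | sort -rn | head -20
```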
postgres / alter column / look closer
Just now, I tried to convert an integer column in a PostgreSQL database to one of type VARCHAR. I knew you had to do an explicit cast, so I was a bit stumped that I still wasn’t allowed to perform the ALTER TABLE.

mydb=> ALTER TABLE mytable ALTER COLUMN mycolumn TYPE VARCHAR(31) USING mycolumn::text;
ERROR:  operator does not exist: character varying >= integer
HINT:  No operator matches the given name and argument type(s).
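One way to hit exactly this error is a CHECK constraint that still compares the column against an integer: the USING clause casts the column itself, but constraints referencing it are re-evaluated against the new type. A hypothetical reproduction (table and constraint names made up):

```shell
psql mydb <<'EOF'
-- Suppose the column came with an integer comparison attached:
--   CHECK (mycolumn >= 0)
-- After the type change that check would read varchar >= integer,
-- hence the error. Drop it first, then convert (and re-add an
-- equivalent check afterwards if one is still wanted):
ALTER TABLE mytable DROP CONSTRAINT mycolumn_positive;
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE VARCHAR(31) USING mycolumn::text;
EOF
```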
fixing symptoms / not problems
Some people seem to think that fixing the symptom is fixing the problem.

import random

def return_one_of(list):
    return list[random.randint(0, len(list))]

def say_something():
    try:
        print return_one_of(["Hello World!", "Hi!", "How you doin'?"])
    except:
        return say_something()

say_something()

Gah! This is obviously an example, but there are people who do this and claim to have “fixed the problem”. Let me reiterate: the fact that your code does not raise any exceptions does NOT mean that it is not broken code!
django / mongodb / manage dbshell
The current django-mongodb-engine doesn’t seem to ship with a working manage dbshell command yet. Right now it returns this:

$ ./manage.py dbshell
...
  File "/home/walter/.virtualenvs/myproject/lib/python2.6/site-packages/django/core/management/commands/dbshell.py", line 21, in handle
    connection.client.runshell()
  File "/home/walter/.virtualenvs/myproject/lib/python2.6/site-packages/django_mongodb_engine/base.py", line 108, in __getattr__
    raise AttributeError(attr)
AttributeError: client

The fix is simple, patch your django_mongodb_engine with this:

--- django_mongodb_engine/base.py.orig	2011-11-15 11:53:47.000000000 +0100
+++ django_mongodb_engine/base.py	2011-11-15 11:54:07.000000000 +0100
@@ -7,6 +7,7 @@
 from pymongo.connection import Connection
 from pymongo.collection import Collection
+from .
certificate verify fail / crt / bundle
So. SSL certificates are still black magic to me. Especially when they cause trouble. Like when one of the sysadmins has forgotten to add the certificate bundle to the apache2 config. Then you get stuff like this:

$ hg pull -u
abort: error: _ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

Most web browsers do not notice this, as they already have the intermediate CA files, but /etc/ssl/certs/ca-certificates.crt seemingly doesn’t. The problem in this case was not that I was missing any certificates locally.
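To check what the server actually sends, you can ask openssl for the full chain; the host name below is made up:

```shell
# Lists every certificate the server presents. If only the leaf
# certificate appears, the intermediate CA bundle is missing from
# the server configuration.
openssl s_client -connect hg.example.com:443 -showcerts </dev/null

# The apache2 side then needs something along these lines (the
# path is an example):
#   SSLCertificateChainFile /etc/apache2/ssl/intermediate-bundle.crt
```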
backtrace / without debugger
You may not always have gdb(1) at hand. Here are a couple of other options at your disposal.

#1 Use addr2line to get the crash location

$ cat badmem.c
void function_c() { int *i = (int*)0xdeadbeef; *i = 123; } // <-- line 1
void function_b() { function_c(); }
void function_a() { function_b(); }
int main() { function_a(); return 0; }
$ gcc -g badmem.c -o badmem
$ ./badmem
Segmentation fault
No core dump?
gdb / backtrace / running process
Sometimes you want a backtrace or a core dump from a process that you do not want to stall. This could concern a multithreaded application of which some threads are still doing important work (like handling customer calls). Firing up gdb would halt the process for as long as you’re getting info, and raising a SIGABRT to get a core dump has the negative side-effect of killing the process. Neither is acceptable in a production environment.
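Two ways to keep the stall minimal, sketched under the assumption that the process ID is known; both use the stock gdb toolchain, and the process is only paused while the command runs:

```shell
PID=12345   # hypothetical PID of the production process

# Attach, dump all thread backtraces, detach immediately.
gdb -p "$PID" -batch -ex 'thread apply all bt'

# Or write a core dump without killing the process; gcore ships
# with gdb and detaches when it is done.
gcore -o /tmp/myprocess "$PID"
```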