#!/usr/bin/python
# -*- coding: utf-8 -*-
"""Active gossip for instant updates.

A very simple eventually-consistent distributed database providing an
append-only log of sets of name-value pairs.

CLI invocation
--------------

This daemon runs in four modes: client, server, ask (query), and say
(update).  It also has a very simple built-in chat app.

Say mode is invoked with ``-s``, the name of the log file, and
name-value pairs on the command line.  It appends an add to the log
file.  For example:

    $ ./gossip.py -s foo somename=somevalue
    $

Chat mode is invoked with ``-c``, the name of the log file, and
optionally a nickname.  Then you can type chat lines at it and quit by
typing ^D.  You may want to run inside Emacs shell-mode to get
command-line editing, search, and scrollback.  For example:

    user@debian:~/devel/aspmisc$ ./gossip.py -c foo
    * user joined
    hi
    hi

(This will only work if ``foo`` already exists.  You can create it
with ``-s`` as above.)

Client mode (when implemented) might be the default.  It takes as
arguments the log filename and a command line to use to invoke a
server, such as ``ssh myhost gossip.py --server``, which it will then
proceed to exchange data with, efficiently bringing the two logs into
sync.  It prints the message “synchronized” (XXX) (or optionally, with
the ``-q`` flag, it exits at that point), then watches for new data in
the file, which it transmits to the server, and likewise lets the
server watch for new data on its end, which the client then adds to
the file.

Server mode is invoked with ``--server`` and the filename of the log.

Client mode is sort of unimplemented, because right now what works is
running two servers connected by two pipes:

    mkfifo p
    < p ./gossip.py --server fn | ssh user@host ./gossip.py --server fn > p

Add a ``-q`` to one or both of the command lines to get one-time
synchronization instead of continuous synchronization.

Ask mode evaluates a simple query or “ask” over the contents of the
file and sends its results to stdout, one add per line.  It is invoked
with ``-a``, the filename of the log, and a mix of bare names and
name-value pairs, with an ``=`` separating the name from the value.
If given the ``-q`` flag, it will exit when it reaches the end of the
log; otherwise, it will continue to run until killed, outputting any
new adds it sees if they match.  The default, empty ask matches all
adds; each name in the ask must be present in an add for that add to
match, and furthermore, if values are given for a name in the ask, at
least one of the add’s values for that name must match at least one of
the ask’s values for that name.
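
For example (log name and values illustrative), the following ask
matches every add that contains some name ``n``, with any value, and
at least one ``m`` whose value is ``hi``:

    $ ./gossip.py -a -q foo n m=hi

Each matching add is printed to stdout as a JSON list of its
name-value pairs.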

Log logical structure and guarantees
------------------------------------

This program maintains a log of “adds” at a user-specified filename,
which will be created if it is not present.  The first add in the file
is the identity of this node; it is randomly generated the first time
you write to the log.  Other adds have a “by” and an “at”, which allow
them to be eventually synchronized to all other nodes in the system.

Two names are reserved: ``by`` and ``at``.  ``by`` is the identity of
the node that originated that add; ``at`` is a serial number unique to
that node, incremented by 1 every time a new add is added.

Modifications to the log file are undertaken with short-lived advisory
file locks in order to reduce the risk of data loss, but are not
fsync()ed afterwards, so data written to the file can still be lost in
a power outage.

The log guarantees that adding an add is atomic, and that the data in
the add will not be corrupted.  Adds may be lost due to hardware
errors, but lost adds will be automatically restored from other nodes
when syncing.  No guarantees are provided about add ordering.  Each
add must fit into your RAM.  The log is append-only; existing adds
cannot be modified or deleted.

Log physical format
-------------------

The log file is append-only, as the name suggests.  The log format is
self-delimiting and checksummed, so that if a write is partial, or two
writes are interleaved, or data in the middle of the log is corrupted,
no corrupted data will be treated as valid, no other data will be
lost, and you can continue to append adds to the log.  Also, it can be
efficiently read backwards or forwards.

This is achieved by encoding each add as a header, a body, and a
trailer.  The header contains the byte "<", the trailer contains the
bytes ">" and "\n", and the body is encoded so that these bytes cannot
occur in it, enabling resynchronization after data corruption.  The
header and the trailer each contain the length of the body, enabling
you to skip over it; the header additionally contains a secure hash,
enabling any accidental data corruption to be detected.

The header consists of '$hash $length<', where $length is the length
of the body in bytes as a decimal number in ASCII digits, and $hash is
the base64-encoded SHA-256 of the body truncated to 66 bits (11
characters).  66 bits is adequate here because we’re not in a position
to authenticate anything, and birthday collisions are not a concern;
we’re just checking for *accidental* corruption.

The body consists of a "&"-separated sequence of '$name=$value' pairs.
The body does not begin or end with a "&".  $name and $value are
encoded with each forbidden *byte* (not Unicode character) represented
as "%xy", where x and y are two hexadecimal digits (the decoder
accepts either case).  Forbidden bytes include at least "<", ">", "+",
"&", "\n", '%', and everything that isn't a printable ASCII byte.

The trailer consists of '>$length\n', where $length is as before and
'\n' is LF.

I tried to make individual lines of the encoding JSON-compatible when
not corrupted, but it was fucking bullshit, both because of Unicode
(JSON wants Unicode text, not arbitrary binary data) and because of
heavy overhead.  Earlier, I was thinking of making it XML-compatible,
but fucking XML can’t represent so much as a control-A, let alone
random BIG5 data that you don’t know is BIG5.  So in the interests of
making the file format at least *somewhat* guessable and greppable,
I’m using HTML form urlencoding.  Phagh.
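
As a small worked example (the hash shown is a placeholder, not a real
digest): an add consisting of the pairs ('by', 'snz0'), ('at', '3'),
and ('m', 'hi there') has the 23-byte body 'by=snz0&at=3&m=hi there',
and is stored as the single line:

    $hash 23<by=snz0&at=3&m=hi there>23

where $hash is the first 11 base64 characters of the SHA-256 of that
body.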

Performance
-----------

On my 1.6GHz Atom netbook under CPython 2.7.3, gossip.py has
performance crudely approximated as follows:

    250ms      to start up (because it imports every module known to humankind)
    6.5MB      minimal resident set size
    <10ms      to open a log
    0.1–2.5ms  to parse each add in a log
    50ms       to parse a megabyte of mostly-text add data in a log

So, for adds of under about 3KiB, we should expect the per-add parsing
time (400–10000 adds/second) to dominate, while for larger adds, we
should expect the per-byte parsing time (20 MB/second) to dominate.
It’s probably possible to improve both of these substantially, and it
would probably be a good idea before using it for real.  Also, it’s
pretty much guaranteed that performance will be far worse on raw
binary data.

You can cut startup time to 170ms by importing it and invoking
main(), thus avoiding reparsing the Python.

Some cProfile results from a particularly slow data file follow:

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     76997    1.121    0.000    2.381    0.000  gossip.py:745(parse_qsl)
         1    0.766    0.766    5.629    5.629  gossip.py:539(individuate)
     11009    0.423    0.000    1.304    0.000  gossip.py:725(decode_add)
     27223    0.411    0.000    0.411    0.000  {method 'findall' of '_sre.SRE_Pattern' objects}
     96967    0.286    0.000    0.286    0.000  {method 'get' of 'dict' objects}
     11009    0.264    0.000    0.520    0.000  gossip.py:655(add)
     65987    0.257    0.000    0.257    0.000  {method 'index' of 'str' objects}
     11010    0.240    0.000    0.438    0.000  base64.py:310(encodestring)
     11010    0.170    0.000    0.170    0.000  gossip.py:514(message_id)
     11010    0.168    0.000    0.858    0.000  gossip.py:558(update_seen_indices)
     11011    0.154    0.000    0.154    0.000  {_hashlib.openssl_sha256}
     11010    0.115    0.000    1.491    0.000  gossip.py:575(__iter__)
     11010    0.114    0.000    0.455    0.000  re.py:169(findall)

BUGS
----

At the moment recovery from corrupted logs is very primitive, and the
next message written after the corruption may be lost as a result.

Since all name-value pairs are passed as command-line arguments, on
Unix there is no way to include NULs in the name or value data from
the CLI.  You can totally do it in the chat UI, though.

Probably a lot of bytes are getting unnecessarily %-encoded in the
logfile.

The client-server stuff won’t work until I write the client.  This
should be pretty easy, but it also involves spawning subprocesses.

The network protocol shouldn’t involve reserializing in-memory data
structures; instead, it should pass along the literal message from the
on-disk format, so that any in-memory message corruption will be
caught by the hash.

The network protocol should probably wait for an explicit request to
send an add across, and that request should probably be delayed by a
random period of time somewhat greater than the estimated network
latency, in order to avoid sending every add across every connection
once.

The sync protocol should not send you an add you already have.  Nor
should it write to disk adds you already have.  Right now there is
some bug related to this that causes each message to ping-pong across
the sync connection twice.

My original objective with this thing was Etherpad-like version
control for documents that supports disconnected operation, so I don’t
have to remember to push, pull, and merge in order to not leave
documents stranded on a powered-off netbook.  Or on my phone.  So far
I haven’t implemented anything approaching that.

The basename equality checking isn’t there yet.

It needs a clone command, which would create a new logfile with the
same basename as an existing logfile and then sync them.

individuate() needs to hold the lock to ensure only one identity gets
created in case there are multiple processes.  I’m not totally sure it
won’t write the same message to the log twice.

The ``-c`` flag fails to create new logs or report the error in a
sensible way.  Instead you get a long Python traceback.

``-a`` attempts to use JSON to output the data.  Unfortunately, when
the data is not UTF-8, json.py craps out.  This also breaks
synchronization because of the fucking JSON encoding/decoding involved
there.
""" import base64 import bisect import ctypes import errno import getopt import fcntl import hashlib import json import os import random import re import select import sys import time import urllib def main(argv): opts, args = getopt.getopt(argv[1:], 'asdqvc', ['server']) optnames = [opt for opt, v in opts] if '--server' in optnames and len(args) == 1: Log(args[0]).sync(sys.stdin, sys.stdout, quit=('-q' in optnames), verbose=('-v' in optnames)) elif '-a' in optnames and len(args) >= 1: Log(args[0]).ask([arg.split('=', 1) for arg in args[1:]], ('-q' in optnames)) elif '-s' in optnames and len(args) >= 1 and '-q' not in optnames: Log(args[0]).say([arg.split('=', 1) for arg in args[1:]]) elif '-d' in optnames: debug([arg.split('=', 1) for arg in args]) elif '-c' in optnames: chat(Log(args[0]), os.environ.get('LOGNAME', 'pobrecito hablador') if len(args) == 1 else args[1], sys.stdin, sys.stdout) else: return usage(argv[0]) def usage(argv0): sys.stderr.write( """Usage: %(me)s -a logfilename [names or name-value pairs] Query the gossip log. With -q, exits when it finishes reading the existing file; otherwise, waits for more data to be added. %(me)s -s logfilename name-value pairs Add a line to the gossip log. %(me)s logfilename ssh myserver ./gossip.py --server logfilename Connect this gossip log to another gossip log. Basenames must match to prevent accidental contagion between unrelated logs; symlink to circumvent this. With -q, exits when the two ends are synchronized. (basename check unimplemented; client code unimplemented) %(me)s -c logfilename Provide a simple example chat UI on top of the gossip log. """ % {'me': argv0}) return -1 def chat(log, nick, infh, outfh): "Really simple dumb chat program built on gossip log." set_nonblocking(infh) select_args, get_more, close = log.watch(future_only=False) select_args = merge_select_tuple(([infh.fileno()], [], []), select_args) pending_messages = list(get_more())[-16:] red = '\033[32m' plain = '\033[0m' log.say(dict(n=nick, join='1').items()) while True: for message in pending_messages: d = dict(message) if 'n' in d and 'm' in d: outfh.write("<%s%s%s> %s\n" % (red, d['n'], plain, d['m'])) elif 'n' in d and 'quit' in d: outfh.write("* %s quit\n" % d['n']) elif 'n' in d and 'join' in d: outfh.write("* %s joined\n" % d['n']) outfh.flush() pending_messages = [] while True: try: line = infh.readline() except IOError, e: if e.errno != errno.EAGAIN: raise readables, _, _ = select.select(*select_args) if infh.fileno() in readables: continue pending_messages = get_more() break else: if line == '': log.say(dict(n=nick, quit='1').items()) outfh.write('quitting\n') return log.say(dict(n=nick, m=line.rstrip()).items()) def debug(nvpairs): s = encode_add(nvpairs) print s print decode_add(s) class Log: def __init__(self, filename): "filename need not exist yet" self.filename = filename self.me = None # Not yet known self.seen_indices = {} # Okay, so my theory is currently # as per # to set the infh as nonblocking, and then always make sure # readline() on it throws an EAGAIN error before proceeding # to the select(). def sync(self, infh, outfh, quit=True, verbose=False): if self.me is None: self.individuate() set_nonblocking(infh) outfh.write(json.dumps(self.hi()) + "\n") outfh.flush() their_identity = 'unknown' we_synced = False # We don’t yet have the other end’s data. they_synced = False # Nor do they have ours. 
    def sync(self, infh, outfh, quit=True, verbose=False):
        if self.me is None:
            self.individuate()
        set_nonblocking(infh)
        outfh.write(json.dumps(self.hi()) + "\n")
        outfh.flush()

        their_identity = 'unknown'
        we_synced = False            # We don’t yet have the other end’s data.
        they_synced = False          # Nor do they have ours.
        they_have = None             # We don’t know what data they have, and
        watch_select = ([], [], [])  # initially we don’t know what they need
        get_more = None              # so we don’t start watching.

        while True:
            try:
                line = infh.readline()
            except IOError, e:
                if e.errno != errno.EAGAIN:
                    raise
                # Okay, we have no requests incoming; wait either for
                # the next request to come in, or to have more data in
                # the log.
                while True:
                    select_tuple = merge_select_tuple(
                        ([infh.fileno()], [], []), watch_select)
                    readables, _, _ = select.select(*select_tuple)

                    # If we may have a request, stop and answer it.
                    if infh.fileno() in readables:
                        break

                    if they_have is None or get_more is None:
                        sys.stderr.write("can't happen, %r\n" %
                                         ((they_have, watch_select, get_more),))
                    else:
                        self.send_missing_adds(they_have, outfh, get_more())
            else:
                if not line:
                    sys.stderr.write('hangup, exiting\n')
                    return

                if verbose:
                    sys.stderr.write('%s got %r\n' % (self.me, line))

                req = json.loads(line)
                if 'w' in req:
                    add = req['w']
                    d = dict(add)
                    mid = self.message_id(add)
                    if mid:
                        by, at = mid
                        if (by not in self.seen_indices
                                or at not in self.seen_indices[by]):
                            self.write(add)
                            self.update_seen_indices(d)
                    else:
                        sys.stderr.write('%s got weird message %r\n'
                                         % (self.me, add))
                elif 'has' in req:
                    if 'me' in req:
                        their_identity = req['me']

                    if they_have is None:
                        they_have = self.parse_hi(req)
                        # Start watching the log so we can push new local
                        # adds to the other end as they appear.
                        watch_select, get_more, close = self.watch(
                            future_only=False)
                        initial_adds = list(get_more())
                        self.send_missing_adds(they_have, outfh, initial_adds)
                        outfh.write(json.dumps({'synchronized': 'yes'}) + "\n")
                        outfh.flush()
                        sys.stderr.write('%s is up-to-date\n' % their_identity)
                        we_synced = True
                        if they_synced and quit:
                            break
                    else:
                        they_have = self.parse_hi(req)
                elif 'synchronized' in req:
                    sys.stderr.write('%s am up-to-date\n' % self.me)
                    they_synced = True
                    if we_synced and quit:
                        break
                else:
                    sys.stderr.write("surprising request %r\n" % line)

    def watch(self, future_only):
        """Call to subscribe to adds in the log.

        This returns a 3-tuple: a select() argument tuple that becomes
        ready (or times out) when there is, or may be, more data; a
        function to call to get an iterator over the new adds; and a
        function to call to close the relevant file descriptors.  That
        last one may be a little bit silly, since it's unnecessary in
        CPython, and this code won't work outside CPython.

        For information on select() tuples, see merge_select_tuple.

        If future_only is False, this will return all past adds as
        well as future ones.
        """
        try:
            inotify_fh = self.start_inotify_watch(self.filename)
        except OSError:
            inotify_fh = None

        data_fh = open(self.filename)
        if future_only:
            data_fh.seek(0, 2)

        def get_more():
            # Empty the inotify buffer, in case anything is present.
            try:
                if inotify_fh is not None:
                    os.read(inotify_fh.fileno(), 32768)
            except OSError, e:
                if e.errno != errno.EAGAIN:
                    raise

            while True:
                pos = data_fh.tell()
                line = data_fh.readline()
                if not line:
                    break

                if line.endswith('\n'):
                    yield self.parse(line)
                else:
                    # If we get here it’s probably because we were
                    # trying to read an incomplete add (which is
                    # probably really big).  We should probably calm
                    # down and wait a bit before trying again.
                    # This case is super untested.
                    data_fh.seek(pos)
                    time.sleep(.005)
                    break

        def close():
            if inotify_fh is not None:
                inotify_fh.close()
            data_fh.close()

        select_tuple = (([inotify_fh], [], [], None) if inotify_fh
                        else ([], [], [], 0.1))
        return select_tuple, get_more, close
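
    # A minimal sketch of how watch() is used (this mirrors Log.ask()
    # and the chat() function above):
    #
    #   select_args, get_more, close = log.watch(future_only=True)
    #   while True:
    #       for add in get_more():
    #           ...  # each add is a list of (name, value) pairs, or a
    #                # CorruptedData instance if it couldn't be parsed
    #       select.select(*select_args)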

    def start_inotify_watch(self, filename):
        # This will crash on anything other than Linux:
        libc = ctypes.CDLL('libc.so.6', use_errno=True)
        IN_NONBLOCK = 00004000  # at least on i386, sigh
        # Maybe use FSEvents on MacOS?
        # https://github.com/emcrisostomo/fswatch has code
        # Watchdog in the Cheese Shop does too: https://pypi.python.org/pypi/watchdog
        # FSEvents docs are at
        # https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/FSEvents_ProgGuide/Introduction/Introduction.html
        # also kqueue at https://github.com/sschober/kqwait/blob/master/kqwait.c
        inotify_fh = os.fdopen(libc.inotify_init1(IN_NONBLOCK))
        IN_MODIFY = 2
        watch_id = libc.inotify_add_watch(inotify_fh.fileno(), filename,
                                          IN_MODIFY)
        if watch_id < 0:
            raise OSError(ctypes.get_errno())
        return inotify_fh

    def parse_hi(self, req):
        return {k: Intervals(v) for k, v in req['has']}

    def send_missing_adds(self, they_have, outfh, lines):
        for line in lines:
            if isinstance(line, CorruptedData):
                continue
            d = dict(line)
            mid = self.message_id(d)
            if mid:
                by, at = mid
                if by not in they_have:
                    they_have[by] = Intervals()
                if at not in they_have[by]:
                    outfh.write(json.dumps({'w': line}) + "\n")
                    they_have[by].add(at)
                    self.update_seen_indices(d)
        outfh.flush()

    def message_id(self, nvpairs):
        d = dict(nvpairs)
        try:
            return d['by'], int(d['at'])
        except (KeyError, ValueError):  # maybe warn?
            return None

    def hi(self):
        assert self.me is not None
        return {'me': self.me,
                'has': [(k, v.intervals())
                        for k, v in self.seen_indices.items()]}

    def say(self, nvpairs):
        if self.me is None:
            self.individuate()
        filtered_nvpairs = [(n, v) for n, v in nvpairs
                            if n != 'by' and n != 'at']
        self.write([('by', self.me), ('at', str(self.at()))]
                   + filtered_nvpairs)

    def write(self, nvpairs):
        with FileLocked(open(self.filename, 'a')) as log:
            log.write(encode_add(nvpairs))
        self.update_seen_indices(dict(nvpairs))

    def individuate(self):
        assert self.me is None
        self.counter = 0

        if not os.path.exists(self.filename):
            self.me = new_identity()
            self.write([('I', self.me)])

        for line in self:
            if isinstance(line, CorruptedData):
                continue
            d = dict(line)
            if self.me is None:
                self.me = d['I']
            if self.me == d.get('by') and 'at' in d:
                self.counter = max(self.counter, int(d['at']) + 1)
            self.update_seen_indices(d)

    def update_seen_indices(self, d):
        mid = self.message_id(d)
        if mid:
            by, at = mid
            if by not in self.seen_indices:
                self.seen_indices[by] = Intervals()
            self.seen_indices[by].add(at)

    def at(self):
        try:
            return self.counter
        finally:
            # XXX there are race conditions here with multiple writers
            # because we are not reading the previous one with a lock
            # held
            self.counter += 1

    def __iter__(self):
        for line in open(self.filename):
            yield self.parse(line)

    def parse(self, line):
        # XXX we should be returning the raw lines too so we can
        # send them over the network so we can detect memory
        # corruption
        return decode_add(line)

    def ask(self, nvpairs, quit=True):
        matches = query(nvpairs)
        select_args, get_more, close = self.watch(future_only=False)
        try:
            while True:
                for line in get_more():
                    if isinstance(line, CorruptedData):
                        print "couldn't parse %r" % line.data
                        continue
                    if matches(line):
                        print json.dumps(line)
                if quit:
                    break
                select.select(*select_args)
        finally:
            close()


def merge_select_tuple(a, b):
    """Take the union of two sets of events you might select() on.

    a and b are expected to be tuples of arguments that would be valid
    for select.select(), as is the return value.
    """
    def set_union(lista, listb):
        aset = set(lista)
        return lista + [bi for bi in listb if bi not in aset]

    if len(a) == 3:
        a = a + (None,)
    if len(b) == 3:
        b = b + (None,)

    if a[3] is None:
        timeout = b[3]
    elif b[3] is None:
        # Don't block forever just because one side has no timeout.
        timeout = a[3]
    else:
        timeout = min(a[3], b[3])

    return (set_union(a[0], b[0]),
            set_union(a[1], b[1]),
            set_union(a[2], b[2]),
            timeout)
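
# For example (the file descriptor numbers here are illustrative):
#
#   merge_select_tuple(([3], [], []), ([4, 3], [], [], 0.1))
#   => ([3, 4], [], [], 0.1)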


def new_identity():
    """Generate a random string.

    This string is not really security-sensitive, but if you generate
    the same one in two places, the gossip system will not work.  I
    don't expect it to be able to scale above a few dozen nodes, so 24
    bits here is probably unnecessary.
    """
    n = random.SystemRandom().getrandbits(24)
    ns = chr(n % 256) + chr((n >> 8) % 256) + chr((n >> 16) % 256)
    return base64.encodestring(ns).strip()


def set_nonblocking(fh):
    "Make a Python file object's fd nonblocking.  Fuck Win32."
    # cf. http://stackoverflow.com/a/1810703
    fd = fh.fileno()
    fl = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)


class FileLocked:
    def __init__(self, fh):
        self.fh = fh

    def __enter__(self):
        fcntl.flock(self.fh.fileno(), fcntl.LOCK_EX)
        return self.fh

    def __exit__(self, a, b, c):
        self.fh.flush()
        fcntl.flock(self.fh.fileno(), fcntl.LOCK_UN)
        self.fh.close()


class Intervals:
    """Efficiently represent a set of mostly-contiguous integers.

    This works by maintaining a sorted list of (start, end) ranges,
    like a .newsrc file, so it will become inefficient if the number of
    discontinuities becomes large.  (Actually, it maintains one sorted
    list of each so I can use bisect more easily.)
    """
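
    # A small usage sketch (values illustrative):
    #
    #   iv = Intervals()
    #   for n in (3, 1, 2, 7):
    #       iv.add(n)
    #   2 in iv            # => True
    #   5 in iv            # => False
    #   iv.intervals()     # => [(1, 4), (7, 8)]  (ends are exclusive)
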
    def __init__(self, intervals=[]):
        self.starts = [start for start, end in intervals]
        self.ends = [end for start, end in intervals]

    def __iter__(self):
        for start, end in zip(self.starts, self.ends):
            for i in xrange(start, end):
                yield i

    def __repr__(self):
        return '{%s}' % " ∪ ".join("[%d, %d)" % t
                                   for t in zip(self.starts, self.ends))

    def add(self, n):
        # Fast path for the common case where we’re adding most ints
        # to the end:
        if self.ends and n == self.ends[-1]:
            self.ends[-1] += 1
            return

        if n in self:
            return

        i = bisect.bisect_left(self.starts, n+1)
        if i < len(self.starts) and self.starts[i] == n + 1:
            self.starts[i] = n
            if i > 0 and self.ends[i-1] == n:
                # Merge intervals
                self.ends.pop(i-1)
                self.starts.pop(i)
        else:
            i = bisect.bisect_left(self.ends, n)
            if i < len(self.ends) and self.ends[i] == n:
                self.ends[i] += 1
                # No possibility of merge here because we're in the else.
            else:
                self.starts.insert(i, n)
                self.ends.insert(i, n+1)

    def __contains__(self, n):
        i = bisect.bisect_right(self.starts, n)
        return i != 0 and self.starts[i-1] <= n < self.ends[i-1]

    def intervals(self):
        return zip(self.starts, self.ends)


def query(nvpairs):
    ask = index(nvpairs)

    def matches(add):
        addi = index(add)
        for n, vs in ask.items():
            if vs is None:
                if n not in addi:
                    return False
            else:
                if not any(v in addi.get(n, []) for v in vs):
                    return False
        return True

    return matches


def index(nvpairs):
    rv = {}
    for term in nvpairs:
        if len(term) == 1:
            n, = term
            if n not in rv:
                rv[n] = None
            continue

        n, v = term
        if rv.get(n) is None:
            rv[n] = set()
        rv[n].add(v)
    return rv


def encode_add(nvpairs):
    body = encode_body(nvpairs)
    sha = hashlib.sha256(body).digest()
    h = base64.encodestring(sha)[:11]
    return '%s %d<%s>%d\n' % (h, len(body), body, len(body))


def encode_body(nvpairs):
    return '&'.join(encode_pair(name, value) for name, value in nvpairs)


def encode_pair(name, value):
    return '%s=%s' % (urllib.quote(name, "/?#@,[]()!:;' "),
                      urllib.quote(value, "/?#@,[]()!:;' ="))


decode_pattern = re.compile(r"(.{11}) (\d+)<([^>]*)>(\d+)\n")


def decode_add(s):
    mo = decode_pattern.match(s)
    if not mo:
        return IncompleteData(s)

    hash, size1, body, size2 = mo.groups()
    if size1 != size2 or len(body) != int(size1):
        return CorruptedData(s)
    if hash != base64.encodestring(hashlib.sha256(body).digest())[:11]:
        return HashCorruptedData(s)

    # We used cgi.parse_qsl here, but it turned out to be pretty slow,
    # so I wrote my own below.
    return list(parse_qsl(body))


class CorruptedData:
    def __init__(self, data):
        self.data = data


class IncompleteData(CorruptedData):
    pass


class HashCorruptedData(CorruptedData):
    pass


def parse_qsl(qs):
    """Faster, generator version of cgi.parse_qsl with keep_blank_values=1.

    Using this instead of cgi.parse_qsl cut the runtime on one data
    file from 4.6 seconds down to 3.7 seconds.  Re-encoding with less
    quoting got it down to 2.7 seconds.  Further optimization got it
    down to 2.5.
    """
    url_unquoting_pattern = _url_unquoting_pattern
    url_unquote = _url_unquote.get

    for name_value in re.findall('[^&]+', qs):
        i = name_value.index('=')
        name = name_value[:i]
        value = name_value[i+1:]
        yield (''.join([url_unquote(piece, piece)
                        for piece in url_unquoting_pattern.findall(
                            name.replace('+', ' '))])
               if '%' in name else name,
               ''.join([url_unquote(piece, piece)
                        for piece in url_unquoting_pattern.findall(
                            value.replace('+', ' '))])
               if '%' in value else value)


_url_unquote = {}


def _init_url_unquote():
    # Map every "%xy" escape, in any case combination, to its byte.
    for b in range(256):
        _url_unquote['%%%x%x' % (b >> 4, b & 15)] = chr(b)
        _url_unquote['%%%X%x' % (b >> 4, b & 15)] = chr(b)
        _url_unquote['%%%x%X' % (b >> 4, b & 15)] = chr(b)
        _url_unquote['%%%X%X' % (b >> 4, b & 15)] = chr(b)

_init_url_unquote()

_url_unquoting_pattern = re.compile(r'%[0-9a-fA-F][0-9a-fA-F]|[^%]+')


def ok(a, b):
    assert a == b, (a, b)

ok(list(parse_qsl('by=snz0&at=357&date=Mon%2c+04+May+2015+22%3A47%3A24+'
                  '-0500&subject=%5BTeslaTurbine%5D+Black+Young+Guy+Fuc'
                  'k+8+Girls+Awesome&from=%22Tanya+Tate+teslaturbine%40'
                  'teslaturbine.aymrt.com+%5BTeslaTurbine%5D%22+%3CTesl'
                  'aTurbine%40yahoogroups.com%3E&offset=4374299')),
   [('by', 'snz0'),
    ('at', '357'),
    ('date', 'Mon, 04 May 2015 22:47:24 -0500'),
    ('subject', '[TeslaTurbine] Black Young Guy Fuck 8 Girls Awesome'),
    ('from', '"Tanya Tate teslaturbine@teslaturbine.aymrt.com [TeslaTurbine]" '
     '<TeslaTurbine@yahoogroups.com>'),
    ('offset', '4374299'),
    ])


if __name__ == '__main__':
    import cgitb
    cgitb.enable(format='text')
    sys.exit(main(sys.argv))