Eligibility: only shows that have aired more than one season can qualify, and at least one season must have aired in the 2010s.
If there appear to be any obvious omissions, then I probably haven't found time to watch them yet... or I just thought they were overrated.
Pelican provides support for generating content from two markup languages: reStructuredText (the default) and Markdown (if installed). For the most part both markup languages generate similar output, except when using Pygments to generate code blocks with syntax highlighting.
Code blocks generated from reStructuredText will use the CSS class highlight to handle the syntax highlighting, while Markdown will use the codehilite class by default. This can cause problems when developing themes for Pelican users who may be using either reStructuredText or Markdown, or users who choose to generate content using both markup languages.
Fortunately you can customise how the Markdown processor generates its output using the MD_EXTENSIONS setting in the pelicanconf.py file. You can configure the Markdown processor to generate code blocks using the highlight CSS class by inserting the following entry in your pelicanconf.py file:
MD_EXTENSIONS = ['codehilite(css_class=highlight)']
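For context, a minimal pelicanconf.py containing this setting might look something like the following; the surrounding settings are illustrative placeholders rather than anything specific to this site:
# file: pelicanconf.py
AUTHOR = 'Your Name'        # placeholder
SITENAME = 'Example Blog'   # placeholder
PATH = 'content'
TIMEZONE = 'UTC'
# make Markdown code blocks use the same CSS class as reStructuredText
MD_EXTENSIONS = ['codehilite(css_class=highlight)']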
More information about pelicanconf.py settings can be found in the Pelican documentation.
On a number of occasions I have needed to make a site available via both HTTP and HTTPS, which can result in creating two almost identical VirtualHost stanzas. The HTTPS stanza usually ends up being a copy & paste of the HTTP stanza with the SSL certificate stuff tacked on to the end. This means you generally end up with a file that is something like the following:
# file: /etc/apache2/sites-available/site.example.com.conf
<VirtualHost *:80>
    ServerName site.example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /var/www/site
    ErrorLog /var/log/apache2/site-error_log
    CustomLog /var/log/apache2/site-access_log vhost_combined
    # ... some rewrite rules, ACLs, etc ...
</VirtualHost>

<VirtualHost *:443>
    ServerName site.example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /var/www/site
    ErrorLog /var/log/apache2/site-error_log
    CustomLog /var/log/apache2/site-access_log vhost_combined
    # ... duplicate rewrite rules, ACLs, etc ...
    SSLEngine On
    SSLCertificateFile ssl/crt/wc.example.com.crt
    SSLCertificateKeyFile ssl/key/wc.example.com.key
</VirtualHost>
This method tends to break the "don't repeat yourself" (DRY) principle and can lead to inconsistencies if you make a typo, or forget to make changes to both stanzas. One method I have found to overcome this is to make use of the Include directive.
The first step is to take all of the configuration settings that are common to both the HTTP and HTTPS stanzas and place them in a new file:
# file: /etc/apache2/sites-include/site.example.com.conf
ServerName site.example.com
ServerAdmin webmaster@example.com
DocumentRoot /var/www/site
ErrorLog /var/log/apache2/site-error_log
CustomLog /var/log/apache2/site-access_log vhost_combined
# ... some rewrite rules, ACLs, etc ...
Note: I generally use Debian systems, which have the convention of storing VirtualHost configuration files in /etc/apache2/sites-available, so I like to keep these common setting files in /etc/apache2/sites-include.
You can then Include this common settings file in both of your VirtualHost stanzas:
# file: /etc/apache2/sites-available/site.example.com.conf
<VirtualHost *:80>
    Include sites-include/site.example.com.conf
</VirtualHost>

<VirtualHost *:443>
    Include sites-include/site.example.com.conf
    SSLEngine On
    SSLCertificateFile ssl/crt/wc.example.com.crt
    SSLCertificateKeyFile ssl/key/wc.example.com.key
</VirtualHost>
Using this method you only need to make changes in one location (sites-include/site.example.com.conf) and they will be applied to both HTTP and HTTPS.
You can also do something similar if you use the same wildcard SSL certificate in a number of different VirtualHost files. First move the common SSL settings into a new file:
# file: /etc/apache2/ssl/wc.example.com.conf
SSLEngine On
SSLCertificateFile ssl/crt/wc.example.com.crt
SSLCertificateKeyFile ssl/key/wc.example.com.key
Then Include the SSL settings file in your HTTPS VirtualHost stanza:
# file: /etc/apache2/sites-available/site.example.com.conf
<VirtualHost *:80>
    Include sites-include/site.example.com.conf
</VirtualHost>

<VirtualHost *:443>
    Include sites-include/site.example.com.conf
    Include ssl/wc.example.com.conf
</VirtualHost>
This can be particularly useful if you have a number of extra SSL settings that need to be configured.
Following on from my last post, I have now split the LiveServerTestCase out into its own Python package to make it easier to reuse in other projects. I have called it wsgi-liveserver and it is the first Python package that I have released. The package can be downloaded from PyPI, the code can be found on GitHub, and I welcome any feedback.
Selenium is a really nice framework for testing web application front-ends by automating actions through a web browser, but it also requires a web server to be running so that the browser can interact with the web application. Most other tests usually interact with the code directly, so this requirement can also lead to a slight problem... how should the web server be started when running tests?
The simplest way to run a Selenium test is to manually start up a web server for your application and then run the tests against it, but this can get a bit tedious after a while (especially if you keep forgetting to start the server).
Django provides a LiveServerTestCase which automatically starts up a web server to serve your Django application before your Selenium tests run, and stops it again afterwards. This is a really nice approach, but I wanted to be able to do something similar when I am not using Django.
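A rough sketch of the Django pattern, for reference; the URL path and the page title checked here are illustrative placeholders rather than anything from a real project:
from django.test import LiveServerTestCase
from selenium import webdriver


class FrontEndTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super(FrontEndTest, cls).setUpClass()
        cls.browser = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super(FrontEndTest, cls).tearDownClass()

    def test_home_page(self):
        # live_server_url points at the temporary server started by Django
        self.browser.get(self.live_server_url + '/')
        self.assertIn('Example', self.browser.title)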
Last week I came across the flask-testing framework which provides similar functionality for Flask applications. The flask-testing LiveServerTestCase is inspired by the Django version, but is much simpler. Unfortunately it is also a bit specific to Flask applications.
What I really wanted was something that could be used for any WSGI compliant web application, so I wrote my own which is loosely based on the flask-testing version. You simply inherit from the LiveServerTestCase class instead of from unittest.TestCase when creating your test class, override the create_app() method to return your WSGI application, and write your tests as normal. When you run your tests it will handle starting and stopping the web server in the background as required. I have written a very basic example Bottle application called bottle-selenium to show it in action.
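As a rough illustration of the pattern described above, a test might look something like the following sketch; the import path and the hard-coded server URL are assumptions for illustration, so check the actual package for the details:
import unittest

from selenium import webdriver

# assumed import path; adjust to wherever your LiveServerTestCase lives
from wsgi_liveserver import LiveServerTestCase


def simple_app(environ, start_response):
    # minimal WSGI application to serve during the tests
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'<html><body><h1>Hello</h1></body></html>']


class FrontEndTest(LiveServerTestCase):
    def create_app(self):
        # return the WSGI application that the live server should run
        return simple_app

    def test_heading(self):
        browser = webdriver.Firefox()
        try:
            # placeholder URL; use whatever address/port the test case
            # exposes for the server it starts in the background
            browser.get('http://localhost:8943/')
            self.assertEqual(
                browser.find_element_by_tag_name('h1').text, 'Hello')
        finally:
            browser.quit()


if __name__ == '__main__':
    unittest.main()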
I originally wrote this to use with Bottle applications, mainly because they are very simple to work with. My eventual goal is to use this for testing the development of Roundup instances, so it should work with any WSGI compliant web application.
Update (22/03/2012): The LiveServerTestCase is now available in its own package called wsgi-liveserver.
The Python unittest module provides support for testing that an exception is raised using the assertRaises() method, but sometimes we also need to test that the exception message is what is expected. Python v2.7 introduced the assertRaisesRegexp() method which can be used to test exception messages using regular expressions, but if you are stuck with v2.6 or earlier you will need to do something like:
import unittest


def raise_exception(yup=True):
    if yup:
        raise ValueError('Yup, exception raised.')


class BasicExceptionTest(unittest.TestCase):
    def test_message(self):
        try:
            raise_exception(True)
            self.fail()
        except ValueError as e:
            self.assertEqual(str(e), 'Yup, exception raised.')


if __name__ == '__main__':
    unittest.main(verbosity=2)
Looking at test_message() we first wrap the function we are testing (raise_exception()) in a try ... except statement to catch any exception that may be raised. If no exception is raised then we call fail() to signal that the test has failed. If the correct exception has been raised (in this case ValueError) we use assertEqual() to test that the exception message is correct. If an exception that we were not expecting is raised, then it will be handled by the TestCase class and the test will be marked as having an error. With this simple test pattern every possible outcome should be handled correctly.
If you plan to be writing a lot of these sorts of tests, then it may be worth creating your own TestCase class that provides an assert method for testing exception messages:
import unittest


def raise_exception(yup=True):
    if yup:
        raise ValueError('Yup, exception raised.')


class ExceptionMessageTestCase(unittest.TestCase):
    def assertRaisesMessage(self, exception, msg, func, *args, **kwargs):
        try:
            func(*args, **kwargs)
            self.fail()
        except exception as e:
            self.assertEqual(str(e), msg)


class MessageExceptionTest(ExceptionMessageTestCase):
    def test_message(self):
        self.assertRaisesMessage(ValueError, 'Yup, exception raised.',
                                 raise_exception, True)


if __name__ == '__main__':
    unittest.main(verbosity=2)
The assertRaisesMessage() method is very similar to the assertRaises() method except that it also takes a msg argument that will be used to compare against the exception message.
Both of these test patterns could also be extended to include the ability to use regular expressions to test messages (similar to assertRaisesRegexp()), but I generally find that simple string comparisons are enough for my needs.
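For comparison, on Python 2.7 or later the built-in assertRaisesRegexp() already covers the regular expression case; a short sketch using the same raise_exception() helper as above:
import unittest


def raise_exception(yup=True):
    if yup:
        raise ValueError('Yup, exception raised.')


class RegexpExceptionTest(unittest.TestCase):
    def test_message(self):
        # assertRaisesRegexp() searches the exception message for the given
        # regular expression (it was renamed assertRaisesRegex in Python 3.2)
        self.assertRaisesRegexp(ValueError, r'exception raised\.$',
                                raise_exception, True)


if __name__ == '__main__':
    unittest.main(verbosity=2)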
I recently wrote a short function called dict_diff() that would take two dicts, compare them, and return another two dicts that contain only the differences between the original dicts (the code is available as a gist). It works something like:
dict_diff(
    {'a': {'ab': 12}, 'b': {'ba': 21, 'bb': 22}, 'c': {'cc': 33}},
    {'a': {}, 'b': {'ba': 21, 'bc': 23}, 'c': {'cc': 33}},
)
# outputs: (
#     {'a': {'ab': 12}, 'b': {'bb': 22}},
#     {'b': {'bc': 23}}
# )
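The full code is in the gist linked above, but as a rough idea of the shape of the function, a minimal sketch that produces the output shown might look something like this (the gist implementation may well differ in its details):
def dict_diff(first, second):
    """Return two dicts containing only the differences between the inputs."""
    diff1, diff2 = {}, {}
    for key in set(first) | set(second):
        if key not in second:
            diff1[key] = first[key]
        elif key not in first:
            diff2[key] = second[key]
        elif isinstance(first[key], dict) and isinstance(second[key], dict):
            # recurse into nested dicts and only keep non-empty differences
            sub1, sub2 = dict_diff(first[key], second[key])
            if sub1:
                diff1[key] = sub1
            if sub2:
                diff2[key] = sub2
        elif first[key] != second[key]:
            diff1[key] = first[key]
            diff2[key] = second[key]
    return diff1, diff2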
I wrote it to make the output of assertEqual() a lot easier to read when dealing with large dicts that are not equal. It is a recursive function, but other than that it is fairly simple and nothing very special. What is different is that I wrote the function using test-driven development (TDD).
Generally when writing recursive functions I tend to get a bit caught up trying to ensure that the recursive part of the function works correctly from the beginning and lose sight of what the function is actually supposed to be doing. By knowing what the expected output would be ahead of time I was able to take a test-driven development approach and write the test cases beforehand, then just work my way through making all of the tests pass. By starting with the simple tests first and working my way through to the more complex ones it meant everything just fell into place and I didn't have to worry if I broke anything when I introduced the recursive stuff.
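For example, the first few test cases might start out as simple as the following and build up towards the nested cases; these particular tests are illustrative rather than the exact ones I wrote:
import unittest

# assumes dict_diff() is defined as in the sketch above (or imported from
# wherever it actually lives)


class DictDiffTest(unittest.TestCase):
    def test_identical_dicts(self):
        self.assertEqual(dict_diff({'a': 1}, {'a': 1}), ({}, {}))

    def test_key_missing_from_second(self):
        self.assertEqual(dict_diff({'a': 1}, {}), ({'a': 1}, {}))

    def test_nested_difference(self):
        self.assertEqual(dict_diff({'a': {'b': 1}}, {'a': {}}),
                         ({'a': {'b': 1}}, {}))


if __name__ == '__main__':
    unittest.main(verbosity=2)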
In the past I have tended to just write the tests in tandem with the code (sometimes before, sometimes after) and not really put a lot of thought into planning it all out with test cases. Being a simple function I knew what most of the results should be ahead of time without having to put much thought into it, but it was valuable to see how well this approach worked. I think I'll try to spend more time planning out my test cases to drive my development in the future.
I have finally set up my new blog after many months of thinking about doing it.
Ever since I first heard about using static site generators for blogs the idea appealed to me. By their nature the content of blogs does not need to be generated dynamically, so using static HTML pages means a lower resource overhead than using something with a database backend. Another bonus is that I no longer have to worry about keeping on top of the upgrade/patch cycle to fix security issues that comes with using something like WordPress.
I was mainly looking for a static site generator written in Python so that if I ever wanted to make any modifications, I would be working with a programming language I enjoy. The two main ones I came across that were actively being developed (with releases) were Nikola and Pelican. I eventually chose Pelican over Nikola because all of its dependencies were pure Python libraries, making it easier to set up in a virtualenv.
I have also been meaning to have a play with Bootstrap for a very long time, so I took the opportunity to play around with it over the last few days and have come up with a simple Pelican theme called bootstrap-jerrykan. I am pretty happy with the result.
So, here is my new blog.
When creating init scripts or trying to debug services on Linux it can be handy to know what the environment variables are for a running process. It turns out that you can retrieve these variables from /proc (along with lots of other rather useful information). The environment variables are located in /proc/$PID/environ, where $PID is the ID of the process we are interested in.
cat can be used to print out the environment variables, but the entries are separated by null characters, which makes them a bit difficult to read. To view the entries in a slightly more legible form we can pipe the output through tr to replace the null characters with newline characters:
cat /proc/$PID/environ | tr '\000' '\n'
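If you would rather get at the same information from Python (handy inside a debugging script), a small sketch along the same lines, using the current process as an example:
import os


def read_environ(pid):
    # /proc/<pid>/environ contains NUL-separated KEY=value entries
    with open('/proc/%d/environ' % pid) as f:
        data = f.read()
    entries = (entry for entry in data.split('\0') if '=' in entry)
    return dict(entry.split('=', 1) for entry in entries)


# e.g. print the environment of the current process
for key, value in sorted(read_environ(os.getpid()).items()):
    print('%s=%s' % (key, value))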
References:
- man proc
- Server Fault: Environment variables of a running process on Unix?