Chapter 17. Test Fixtures, Logging, and Server-Side Debugging
Now that we have a functional authentication system, we want to use it to identify users, and be able to show them all the lists they have created.
To do that, we’re going to have to write FTs that have a logged-in user. Rather than making each test go through the (time-consuming) Persona dialog, we want to be able to skip that part.
This is about separation of concerns. Functional tests aren’t like unit tests, in that they don’t usually have a single assertion. But, conceptually, they should be testing a single thing. There’s no need for every single FT to test the login/logout mechanisms. If we can figure out a way to “cheat” and skip that part, we’ll spend less time waiting for duplicated test paths.
Tip
Don’t overdo de-duplication in FTs. One of the benefits of an FT is that it can catch strange and unpredictable interactions between different parts of your application.
Skipping the Login Process by Pre-creating a Session
It’s quite common for a user to return to a site and still have a cookie, which means they are “pre-authenticated”, so this isn’t an unrealistic cheat at all. Here’s how you can set it up:
functional_tests/test_my_lists.py.
from django.conf import settings
from django.contrib.auth import BACKEND_SESSION_KEY, SESSION_KEY, get_user_model
User = get_user_model()
from django.contrib.sessions.backends.db import SessionStore

from .base import FunctionalTest


class MyListsTest(FunctionalTest):

    def create_pre_authenticated_session(self, email):
        user = User.objects.create(email=email)
        session = SessionStore()
        session[SESSION_KEY] = user.pk
        session[BACKEND_SESSION_KEY] = settings.AUTHENTICATION_BACKENDS[0]
        session.save()
        ## to set a cookie we need to first visit the domain.
        ## 404 pages load the quickest!
        self.browser.get(self.server_url + "/404_no_such_url/")
        self.browser.add_cookie(dict(
            name=settings.SESSION_COOKIE_NAME,
            value=session.session_key,
            path='/',
        ))
Note that, as it is, this will only work because we’re using
LiveServerTestCase
, so the User
and Session
objects we create will end up in
the same database as the test server. Later we’ll need to modify it so that it
works against the database on the staging server too.
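The mechanics of the trick can be sketched without Django at all. Here's a toy, framework-free model (the names are illustrative, not Django's API): the server keeps a session store mapping opaque keys to user IDs, and any browser that presents a valid key in its cookie counts as logged in.

```python
import uuid

# Server-side session store: opaque session key -> user id.
session_store = {}

def create_pre_authenticated_session(user_id):
    """Create a session directly in the store, skipping the login flow."""
    session_key = uuid.uuid4().hex
    session_store[session_key] = user_id
    return session_key

def user_for_cookie(cookie_value):
    """What the server does on each request: look the cookie up."""
    return session_store.get(cookie_value)

key = create_pre_authenticated_session(user_id=42)
assert user_for_cookie(key) == 42        # the cookie holder is "logged in"
assert user_for_cookie("bogus") is None  # unknown keys stay anonymous
```

Django's `SessionStore` plays the role of the dict here; the point is that nothing about the login UI is involved once a valid session key exists.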
Checking It Works
To check it works, it would be good to use the wait_to_be_logged_in
function
we defined in our last test. To access it from a different test, we’ll need
to pull it up into FunctionalTest
, as well as a couple of other methods.
We’ll also tweak them slightly so that they can take an arbitrary email address
as a parameter:
functional_tests/base.py (ch17l002-2).
from selenium.webdriver.support.ui import WebDriverWait
[...]

class FunctionalTest(StaticLiveServerTestCase):
    [...]

    def wait_for_element_with_id(self, element_id):
        [...]

    def wait_to_be_logged_in(self, email):
        self.wait_for_element_with_id('id_logout')
        navbar = self.browser.find_element_by_css_selector('.navbar')
        self.assertIn(email, navbar.text)

    def wait_to_be_logged_out(self, email):
        self.wait_for_element_with_id('id_login')
        navbar = self.browser.find_element_by_css_selector('.navbar')
        self.assertNotIn(email, navbar.text)
That means a small tweak in test_login.py:
functional_tests/test_login.py (ch17l003).
TEST_EMAIL = 'edith@mockmyid.com'
[...]

    def test_login_with_persona(self):
        [...]
        self.browser.find_element_by_id(
            'authentication_email'
        ).send_keys(TEST_EMAIL)
        self.browser.find_element_by_tag_name('button').click()
        [...]

        # She can see that she is logged in
        self.wait_to_be_logged_in(email=TEST_EMAIL)
        [...]
        self.wait_to_be_logged_in(email=TEST_EMAIL)
        [...]
        self.wait_to_be_logged_out(email=TEST_EMAIL)
        [...]
        self.wait_to_be_logged_out(email=TEST_EMAIL)
Just to check we haven’t broken anything, we rerun the login test:
$ python3 manage.py test functional_tests.test_login
[...]
OK
And now we can write a placeholder for the “My Lists” test, to see if our pre-authenticated session creator really does work:
functional_tests/test_my_lists.py (ch17l004).
    def test_logged_in_users_lists_are_saved_as_my_lists(self):
        email = 'edith@example.com'
        self.browser.get(self.server_url)
        self.wait_to_be_logged_out(email)

        # Edith is a logged-in user
        self.create_pre_authenticated_session(email)
        self.browser.get(self.server_url)
        self.wait_to_be_logged_in(email)
That gets us:
$ python3 manage.py test functional_tests.test_my_lists
[...]
OK
That’s a good place for a commit:
$ git add functional_tests
$ git commit -m "placeholder test_my_lists and move login checkers into base"
The Proof Is in the Pudding: Using Staging to Catch Final Bugs
That’s all very well for running the FTs locally, but how would it work against the staging server? Let’s try and deploy our site. Along the way we’ll catch an unexpected bug (that’s what staging is for!), and then we’ll have to figure out a way of managing the database on the test server.
$ cd deploy_tools
$ fab deploy --host=elspeth@superlists-staging.ottg.eu
[...]
And restart Gunicorn…
elspeth@server: sudo restart gunicorn-superlists-staging.ottg.eu
Here’s what happens when we run the functional tests:
$ python3 manage.py test functional_tests \
--liveserver=superlists-staging.ottg.eu
======================================================================
ERROR: test_login_with_persona (functional_tests.test_login.LoginTest)
---------------------------------------------------------------------
Traceback (most recent call last):
File "/worskpace/functional_tests/test_login.py", line 50, in
test_login_with_persona
[...]
self.wait_for_element_with_id('id_logout')
[...]
selenium.common.exceptions.TimeoutException: Message: Could not find element
with id id_logout. Page text was:
Superlists
Sign in
Start a new To-Do list
======================================================================
ERROR: test_logged_in_users_lists_are_saved_as_my_lists
(functional_tests.test_my_lists.MyListsTest)
---------------------------------------------------------------------
Traceback (most recent call last):
File "/worskpace/functional_tests/test_my_lists.py", line 34, in
test_logged_in_users_lists_are_saved_as_my_lists
self.wait_to_be_logged_in(email)
[...]
selenium.common.exceptions.TimeoutException: Message: Could not find element
with id id_logout. Page text was:
Superlists
Sign in
Start a new To-Do list
We can’t log in—either with the real Persona or with our pre-authenticated session. There’s some kind of bug.
I had considered just going back and fixing this in the previous chapter, and pretending it never happened, but I think leaving it illustrates the point of running tests against a staging environment. It would have been pretty embarrassing if we’d deployed this bug straight to our live site.
Aside from that, we’ll get to practice a bit of server-side debugging.
Setting Up Logging
In order to track this bug down, we have to set up Gunicorn to do some logging. Adjust the Gunicorn config on the server, using vi or nano:
server: /etc/init/gunicorn-superlists-staging.ottg.eu.conf.
[...]
exec ../virtualenv/bin/gunicorn \
    --bind unix:/tmp/superlists-staging.ottg.eu.socket \
    --access-logfile ../access.log \
    --error-logfile ../error.log \
    superlists.wsgi:application
That will put an access log and error log into the ~/sites/$SITENAME folder.
Then we add some debug calls in our authenticate
function (again, we can do
this directly on the server):
accounts/authentication.py.
import logging
[...]

def authenticate(self, assertion):
    logging.warning('entering authenticate function')
    response = requests.post(
        PERSONA_VERIFY_URL,
        data={'assertion': assertion, 'audience': settings.DOMAIN}
    )
    logging.warning('got response from persona')
    logging.warning(response.content.decode())
    [...]
Tip
Using the “root” logger like this (logging.warning
) isn’t generally
a good idea. We’ll set up a more robust logging configuration at the end of
the chapter.
You should also make sure your settings.py still contains the LOGGING
settings which will actually send stuff to the console:
superlists/settings.py.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
        },
    },
    'root': {'level': 'INFO'},
}
We restart Gunicorn again, and then either rerun the FT, or just try to log in manually. While that happens, we can watch the logs on the server with a:
elspeth@server: $ tail -f error.log # assumes we are in ~/sites/$SITENAME folder
[...]
WARNING:root:{"status":"failure","reason":"audience mismatch: domain mismatch"}
You may even find the page gets stuck in a “redirect loop”, as Persona tries to resubmit the assertion again and again.
It turns out it’s because I overlooked an important part of the Persona system, which is that authentications are only valid for particular domains. We’ve left the domain hardcoded as “localhost” in accounts/authentication.py:
accounts/authentication.py.
PERSONA_VERIFY_URL = 'https://verifier.login.persona.org/verify'
DOMAIN = 'localhost'
User = get_user_model()
We can try and hack in a fix on the server:
accounts/authentication.py.
DOMAIN = 'superlists-staging.ottg.eu'
And check whether it works by doing a manual login. It does.
Fixing the Persona Bug
Here’s how we go about baking in a fix, switching back to coding on our local
PC. We start by moving the definition for the DOMAIN
variable into
settings.py, where we can later use the deploy script to override it:
superlists/settings.py (ch17l011).
# This setting is changed by the deploy script
DOMAIN = "localhost"
ALLOWED_HOSTS = [DOMAIN]
We feed that change back through the tests:
accounts/tests/test_authentication.py.
@@ -1,12 +1,14 @@
 from unittest.mock import patch
+from django.conf import settings
 from django.contrib.auth import get_user_model
 from django.test import TestCase
 User = get_user_model()

 from accounts.authentication import (
-    PERSONA_VERIFY_URL, DOMAIN, PersonaAuthenticationBackend
+    PERSONA_VERIFY_URL, PersonaAuthenticationBackend
 )
+

 @patch('accounts.authentication.requests.post')
 class AuthenticateTest(TestCase):
@@ -21,7 +23,7 @@ class AuthenticateTest(TestCase):
         self.backend.authenticate('an assertion')
         mock_post.assert_called_once_with(
             PERSONA_VERIFY_URL,
-            data={'assertion': 'an assertion', 'audience': DOMAIN}
+            data={'assertion': 'an assertion', 'audience': settings.DOMAIN}
         )
And then we change the implementation:
accounts/authentication.py.
@@ -1,8 +1,8 @@
 import requests
+from django.conf import settings
 from django.contrib.auth import get_user_model

 User = get_user_model()

 PERSONA_VERIFY_URL = 'https://verifier.login.persona.org/verify'
-DOMAIN = 'localhost'
@@ -11,7 +11,7 @@ class PersonaAuthenticationBackend(object):
     def authenticate(self, assertion):
         response = requests.post(
             PERSONA_VERIFY_URL,
-            data={'assertion': assertion, 'audience': DOMAIN}
+            data={'assertion': assertion, 'audience': settings.DOMAIN}
         )
         if response.ok and response.json()['status'] == 'okay':
             email = response.json()['email']
Rerunning the tests just to be sure:
$ python3 manage.py test accounts
[...]
Ran 14 tests in 0.053s
OK
Next we update our fabfile to make it adjust the domain in settings.py,
removing the cumbersome two-line sed
on ALLOWED_HOSTS
:
deploy_tools/fabfile.py.
def _update_settings(source_folder, site_name):
    settings_path = source_folder + '/superlists/settings.py'
    sed(settings_path, "DEBUG = True", "DEBUG = False")
    sed(settings_path,
        'DOMAIN = "localhost"',
        'DOMAIN = "%s"' % (site_name,)
    )
    secret_key_file = source_folder + '/superlists/secret_key.py'
    if not exists(secret_key_file):
        [...]
We redeploy, and spot the sed
in the output:
$ fab deploy --host=superlists-staging.ottg.eu
[...]
[superlists-staging.ottg.eu] run: sed -i.bak -r -e s/DOMAIN = "localhost"/DOMAIN = "superlists-staging.ottg.eu"/g "$(echo /home/harry/sites/superlists-staging.ottg.eu/source/superlists/settings.py)"
[...]
Managing the Test Database on Staging
Now we can rerun our FTs, and get to the next failure: our attempt to create pre-authenticated sessions doesn’t work, so the “My Lists” test fails:
$ python3 manage.py test functional_tests \
--liveserver=superlists-staging.ottg.eu
ERROR: test_logged_in_users_lists_are_saved_as_my_lists
(functional_tests.test_my_lists.MyListsTest)
[...]
selenium.common.exceptions.TimeoutException: Message: Could not find element
with id id_logout. Page text was:
Superlists
Sign in
Start a new To-Do list
Ran 7 tests in 72.742s
FAILED (errors=1)
It’s because our test utility function create_pre_authenticated_session
only
acts on the local database. Let’s find out how our tests can manage the
database on the server.
A Django Management Command to Create Sessions
To do things on the server, we’ll need to build a self-contained script that can be run from the command line on the server, most probably via Fabric.
When trying to build standalone scripts that work with the Django environment,
can talk to the database and so on, there are some fiddly issues you need to
get right, like setting the DJANGO_SETTINGS_MODULE
environment variable
correctly, and getting the sys.path
right. Instead of messing about with all
that, Django lets you create your own “management commands” (commands you can
run with python manage.py
), which will do all that path mangling for you.
They live in a folder called management/commands inside your apps:
$ mkdir -p functional_tests/management/commands
$ touch functional_tests/management/__init__.py
$ touch functional_tests/management/commands/__init__.py
The boilerplate in a management command is a class that inherits from
django.core.management.BaseCommand
, and that defines a method called
handle
:
functional_tests/management/commands/create_session.py.
from django.conf import settings
from django.contrib.auth import BACKEND_SESSION_KEY, SESSION_KEY, get_user_model
User = get_user_model()
from django.contrib.sessions.backends.db import SessionStore
from django.core.management.base import BaseCommand


class Command(BaseCommand):

    def add_arguments(self, parser):
        parser.add_argument('email')

    def handle(self, *args, **options):
        session_key = create_pre_authenticated_session(options['email'])
        self.stdout.write(session_key)


def create_pre_authenticated_session(email):
    user = User.objects.create(email=email)
    session = SessionStore()
    session[SESSION_KEY] = user.pk
    session[BACKEND_SESSION_KEY] = settings.AUTHENTICATION_BACKENDS[0]
    session.save()
    return session.session_key
We’ve taken the create_pre_authenticated_session code from test_my_lists.py. handle picks up an email address from the parser and passes it to that function, which returns the session key that we’ll want to add to our browser cookies; the management command then prints it out at the command line. Try it out:
$ python3 manage.py create_session a@b.com
Unknown command: 'create_session'
One more step: we need to add functional_tests
to our settings.py
for it to recognise it as a real app that might have management commands as
well as tests:
superlists/settings.py.
+++ b/superlists/settings.py
@@ -42,6 +42,7 @@

 INSTALLED_APPS = (
     'lists',
     'accounts',
+    'functional_tests',
 )
Now it works:
$ python3 manage.py create_session a@b.com
qnslckvp2aga7tm6xuivyb0ob1akzzwl
Note
If you see an error saying the auth_user
table is missing, you may need to
run manage.py migrate
. In case that doesn’t work, delete the db.sqlite3
file and
run migrate again, to get a clean slate.
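The add_arguments/handle split is essentially argparse under the hood. Here's a rough, framework-free sketch of what BaseCommand wires up for us (an analogy, not Django's actual internals):

```python
import argparse

# BaseCommand builds a parser, lets add_arguments() declare positional
# arguments, then passes the parsed values to handle() as **options.
parser = argparse.ArgumentParser(prog='create_session')
parser.add_argument('email')          # what our add_arguments() declared

options = vars(parser.parse_args(['a@b.com']))
assert options == {'email': 'a@b.com'}
```

The real win from using a management command rather than a raw argparse script is everything else Django does first: settings, sys.path, and database access are all configured before handle() runs.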
Getting the FT to Run the Management Command on the Server
Next we need to adjust test_my_lists
so that it runs the local function
when we’re on the local server, and make it run the management command
on the staging server if we’re on that:
functional_tests/test_my_lists.py (ch17l016).
from django.conf import settings
from .base import FunctionalTest
from .server_tools import create_session_on_server
from .management.commands.create_session import create_pre_authenticated_session


class MyListsTest(FunctionalTest):

    def create_pre_authenticated_session(self, email):
        if self.against_staging:
            session_key = create_session_on_server(self.server_host, email)
        else:
            session_key = create_pre_authenticated_session(email)
        ## to set a cookie we need to first visit the domain.
        ## 404 pages load the quickest!
        self.browser.get(self.server_url + "/404_no_such_url/")
        self.browser.add_cookie(dict(
            name=settings.SESSION_COOKIE_NAME,
            value=session_key,
            path='/',
        ))

    [...]
Let’s see how we know whether or not we’re working against the
staging server. self.against_staging
gets populated in base.py:
functional_tests/base.py (ch17l017).
from .server_tools import reset_database


class FunctionalTest(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        for arg in sys.argv:
            if 'liveserver' in arg:
                cls.server_host = arg.split('=')[1]
                cls.server_url = 'http://' + cls.server_host
                cls.against_staging = True
                return
        super().setUpClass()
        cls.against_staging = False
        cls.server_url = cls.live_server_url

    @classmethod
    def tearDownClass(cls):
        if not cls.against_staging:
            super().tearDownClass()

    def setUp(self):
        if self.against_staging:
            reset_database(self.server_host)
        self.browser = webdriver.Firefox()
        self.browser.implicitly_wait(3)
Instead of just storing cls.server_url, we also store the server_host and against_staging attributes if we detect the liveserver command-line argument.

We also need a way of resetting the server database in between each test. I'll explain the logic of the session-creation code first, which should also explain how this works.
An Additional Hop via subprocess
Because our tests are Python 3, we can’t directly call our Fabric functions,
which are Python 2. Instead, we have to do an extra hop and call the fab
command as a new process, like we do from the command line when we do server
deploys. Here’s how that looks, in a module called server_tools:
functional_tests/server_tools.py.
from os import path
import subprocess
THIS_FOLDER = path.dirname(path.abspath(__file__))


def create_session_on_server(host, email):
    return subprocess.check_output(
        [
            'fab',
            'create_session_on_server:email={}'.format(email),
            '--host={}'.format(host),
            '--hide=everything,status',
        ],
        cwd=THIS_FOLDER
    ).decode().strip()


def reset_database(host):
    subprocess.check_call(
        ['fab', 'reset_database', '--host={}'.format(host)],
        cwd=THIS_FOLDER
    )
Here we use the subprocess
module to call some Fabric functions using the
fab
command.
As you can see, the command-line syntax for arguments to fab functions is quite simple: a colon, and then variable=argument syntax.

Incidentally, this is also the first time I've shown the "new-style" string formatting syntax. As you can see, it uses curly brackets {} instead of %s. I slightly prefer it to the old style, but you're bound to come across both if you spend any time with Python. Take a look at some of the examples in the Python docs to learn more.

Because of all the hopping around via Fabric and subprocesses, we have to be quite careful about extracting the session key as a clean string from the output of the command as it gets run on the server.
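That care about output extraction boils down to a decode-and-strip pattern. Here's a minimal sketch, using echo as a stand-in for the real fab invocation:

```python
import subprocess

# subprocess.check_output returns bytes, including any trailing newline,
# so we decode to str and strip before using the value as a session key.
# 'echo' stands in here for the real fab call.
raw = subprocess.check_output(['echo', 'qnslckvp2aga7tm6xuivyb0ob1akzzwl'])
session_key = raw.decode().strip()
assert session_key == 'qnslckvp2aga7tm6xuivyb0ob1akzzwl'
```

Forgetting the .strip() would leave a trailing newline in the key, and the cookie we set from it would silently fail to match any server-side session.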
You may need to adapt the call to subprocess
if you are using a custom
username or password: make it match the fab
arguments you use when you run
the automated deployment script.
Note
By the time you read this book, Fabric may well have been fully ported to Python 3. If so, investigate using the Fabric context managers to call Fabric functions directly inline with your code.
Finally, let’s look at the fabfile that defines those two commands we want to run server side, to reset the database or set up the session:
functional_tests/fabfile.py.
from fabric.api import env, run


def _get_base_folder(host):
    return '~/sites/' + host


def _get_manage_dot_py(host):
    return '{path}/virtualenv/bin/python {path}/source/manage.py'.format(
        path=_get_base_folder(host)
    )


def reset_database():
    run('{manage_py} flush --noinput'.format(
        manage_py=_get_manage_dot_py(env.host)
    ))


def create_session_on_server(email):
    session_key = run('{manage_py} create_session {email}'.format(
        manage_py=_get_manage_dot_py(env.host),
        email=email,
    ))
    print(session_key)
Does that make a reasonable amount of sense? We’ve got a function that
can create a session in the database. If we detect we’re running locally,
we call it directly. If we’re against the server, there’s a couple of hops:
we use subprocess
to get to Fabric via fab
, which lets us run a management
command that calls that same function, on the server.
How about an ASCII-art illustration?
Locally:
MyListsTest
    .create_pre_authenticated_session --> .management.commands.create_session
                                              .create_pre_authenticated_session
Against staging:
MyListsTest                               .management.commands.create_session
    .create_pre_authenticated_session         .create_pre_authenticated_session
          |                                               /|\
         \|/                                               |
server_tools                                  run manage.py create_session
    .create_session_on_server                             /|\
          |                                                |
         \|/                                               |
subprocess.check_output --> fab --> fabfile.create_session_on_server
Anyway, let’s see if it works. First, locally, to check we didn’t break anything:
$ python3 manage.py test functional_tests.test_my_lists
[...]
OK
Next, against the server. We push our code up first:
$ git push  # you'll need to commit changes first
$ cd deploy_tools
$ fab deploy --host=superlists-staging.ottg.eu
And now we run the test—notice we can include elspeth@
in the
specification of the liveserver
argument now:
$ python3 manage.py test functional_tests.test_my_lists \
--liveserver=elspeth@superlists-staging.ottg.eu
Creating test database for alias 'default'...
[superlists-staging.ottg.eu] Executing task 'reset_database'
~/sites/superlists-staging.ottg.eu/source/manage.py flush --noinput
[superlists-staging.ottg.eu] out: Syncing...
[superlists-staging.ottg.eu] out: Creating tables ...
[...]
.
---------------------------------------------------------------------
Ran 1 test in 25.701s
OK
Looking good! We can rerun all the tests to make sure…
$ python3 manage.py test functional_tests \
--liveserver=elspeth@superlists-staging.ottg.eu
Creating test database for alias 'default'...
[superlists-staging.ottg.eu] Executing task 'reset_database'
[...]
Ran 7 tests in 89.494s
OK
Destroying test database for alias 'default'...
Hooray!
Note
I’ve shown one way of managing the test database, but you could experiment
with others—for example, if you were using MySQL or Postgres, you could open
up an SSH tunnel to the server, and use port forwarding to talk to the database
directly. You could then amend settings.DATABASES
during FTs to talk to the
tunnelled port.
Baking In Our Logging Code
Before we finish, let’s “bake in” our logging code. It would be useful to keep our new logging code in there, under source control, so that we can debug any future login problems. They may indicate someone is up to no good, after all.
Let’s start by saving the Gunicorn config to our template file in deploy_tools:
deploy_tools/gunicorn-upstart.template.conf.
[...]
chdir /home/elspeth/sites/SITENAME/source
exec ../virtualenv/bin/gunicorn \
    --bind unix:/tmp/SITENAME.socket \
    --access-logfile ../access.log \
    --error-logfile ../error.log \
    superlists.wsgi:application
Using Hierarchical Logging Config
When we hacked in the logging.warning
earlier, we were using the root logger.
That’s not normally a good idea, since any third-party package can mess with the
root logger. The normal pattern is to use a logger named after the file you’re
in, by using:
logger = logging.getLogger(__name__)
Logging configuration is hierarchical, so you can define “parent” loggers for top-level modules, and all the Python modules inside them will inherit that config.
Here’s how we add a logger for both our apps into settings.py:
superlists/settings.py.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
        },
        'accounts': {
            'handlers': ['console'],
        },
        'lists': {
            'handlers': ['console'],
        },
    },
    'root': {'level': 'INFO'},
}
Now accounts.models, accounts.views, accounts.authentication, and all
the others will inherit the logging.StreamHandler
from the parent accounts
logger.
Unfortunately, because of Django’s project structure, there’s no way of defining a top-level logger for your whole project (aside from using the root logger), so you have to define one logger per app.
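You can see the hierarchy at work in a few lines of plain Python: a handler attached to the parent 'accounts' logger also receives records from 'accounts.authentication', because records propagate up the dotted-name hierarchy.

```python
import logging

parent = logging.getLogger('accounts')
parent.addHandler(logging.StreamHandler())  # like the 'console' handler

child = logging.getLogger('accounts.authentication')
# The child has no handlers of its own...
assert child.handlers == []
# ...but its records propagate up to the 'accounts' logger's handlers,
# because 'accounts' is its parent in the dotted-name hierarchy.
assert child.parent is parent
assert child.propagate is True
```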
Here’s how to write a test for logging behaviour:
accounts/tests/test_authentication.py (ch17l023).
import logging
[...]

@patch('accounts.authentication.requests.post')
class AuthenticateTest(TestCase):
    [...]

    def test_logs_non_okay_responses_from_persona(self, mock_post):
        response_json = {
            'status': 'not okay', 'reason': 'eg, audience mismatch'
        }
        mock_post.return_value.ok = True
        mock_post.return_value.json.return_value = response_json

        logger = logging.getLogger('accounts.authentication')
        with patch.object(logger, 'warning') as mock_log_warning:
            self.backend.authenticate('an assertion')

        mock_log_warning.assert_called_once_with(
            'Persona says no. Json was: {}'.format(response_json)
        )
We set up our test with some data that should cause some logging: a response whose status is not "okay". We retrieve the actual logger for the module we're testing, and use patch.object to temporarily mock out its warning function, using with to make it a context manager around the function we're testing. The mock is then available for us to make assertions against.
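The patch.object pattern generalizes to any module that owns a logger. Here's a self-contained sketch (the module name, function, and message are made up for illustration):

```python
import logging
from unittest.mock import patch

logger = logging.getLogger('demo.module')

def notify_failure(payload):
    # The code under test logs a warning about a bad response.
    logger.warning('Persona says no. Json was: {}'.format(payload))

# patch.object swaps out logger.warning only inside the with block,
# capturing calls without writing anything to the real log.
with patch.object(logger, 'warning') as mock_warning:
    notify_failure({'status': 'not okay'})

mock_warning.assert_called_once_with(
    "Persona says no. Json was: {'status': 'not okay'}"
)
```

Because we patch the attribute on the exact logger object the module uses, the test fails loudly if the code switches to a different logger or log level.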
That gives us:
AssertionError: Expected 'warning' to be called once. Called 0 times.
Let’s just try it out, to make sure we really are testing what we think we are:
accounts/authentication.py (ch17l024).
import logging
logger = logging.getLogger(__name__)
[...]

    if response.ok and response.json()['status'] == 'okay':
        [...]
    else:
        logger.warning('foo')
We get the expected failure:
AssertionError: Expected call: warning("Persona says no. Json was: {'status':
'not okay', 'reason': 'eg, audience mismatch'}")
Actual call: warning('foo')
And so we settle in with our real implementation:
accounts/authentication.py (ch17l025).
    else:
        logger.warning(
            'Persona says no. Json was: {}'.format(response.json())
        )
$ python3 manage.py test accounts
[...]
Ran 15 tests in 0.033s
OK
You can easily imagine how you could test more combinations at this point,
if you wanted different error messages for response.ok != True
, and so on.
Wrap-Up
We now have test fixtures that work both locally and on the server, and we’ve got some more robust logging configuration.
But before we can deploy our actual live site, we’d better actually give the users what they wanted—the next chapter describes how to give them the ability to save their lists on a “My Lists” page.