How I debug failing Python tests.
This blog post is the second in a series I am writing, covering methods of simple parallelism. The following posts cover more convenient methods, as well as some things that should be considered.
Parallelism methods:

Basics and introduction
Function objects

If I’ve skipped your favourite method of parallelism, feel free to tweet me or add a comment on the tracking issue.
This method was brought to my attention by Tom Marsh, and is a nice alternative to using functools.
I often get asked “how can I parallelise my Python code?”. I’ve come up with this simple cheat sheet to explain it. I will only cover the most common class of parallel problem here: embarrassingly parallel problems.
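As a minimal sketch of what “embarrassingly parallel” means in practice, here is one common approach (not necessarily the one this series uses) with the standard library’s multiprocessing.Pool; the square function is a made-up example task:

```python
from multiprocessing import Pool

def square(x):
    # An independent task: no shared state between calls,
    # so the work parallelises trivially.
    return x * x

if __name__ == '__main__':
    # Map the task over the inputs across 4 worker processes;
    # results come back in input order.
    with Pool(4) as pool:
        results = pool.map(square, range(10))
    print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Each call to square depends only on its argument, which is exactly what makes the problem embarrassingly parallel.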
This blog post is the first in a series I am writing, covering methods of simple parallelism. The following posts cover more convenient methods, as well as some things that should be considered.
Numpy has the ability to mask arrays and ignore their values for certain computations, called “masked arrays”. They contain a .mask attribute which is a boolean array, True where the value should be masked and False otherwise.
Numpy also comes with a suite of functions which can handle this masking naturally. Typically for a function in the np. namespace, there is a masked-array-aware version under the np.ma. namespace:
np.median => np.ma.median
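A small sketch of the mapping, with made-up data containing one outlier we want to ignore:

```python
import numpy as np

# A masked array: the .mask attribute is True where values are ignored
data = np.ma.masked_array([1.0, 2.0, 3.0, 100.0],
                          mask=[False, False, False, True])

# The masked-array-aware version skips the masked 100.0
print(np.ma.median(data))  # -> 2.0

# The plain version would include it
print(np.median([1.0, 2.0, 3.0, 100.0]))  # -> 2.5
```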
pymysql defaults to autocommit=False:

    connection = pymysql.connect(user='user', db='test')
    cursor = connection.cursor()
    cursor.execute('insert into test (value) values (10)')
    connection.close()

    connection = pymysql.connect(user='user', db='test')
    cursor = connection.cursor()
    cursor.execute('select value from test')
    # =>

To commit changes to the database, commit() must be called:

    connection = pymysql.connect(user='user', db='test')
    cursor = connection.cursor()
    cursor.execute('insert into test (value) values (10)')
    # Commit the pending insert before closing
    connection.commit()
    connection.close()

    connection = pymysql.connect(user='user', db='test')
    cursor = connection.cursor()
I used to have two simple shell aliases for IPython:
alias ipy=ipython
alias pylab='ipython --pylab'

These were separated for a couple of reasons:
The pylab mode of IPython was deprecated, for good reason. It “infects” the global namespace with all matplotlib and numpy functions. It breaks two entries in the famous “Zen of Python”:
Explicit is better than implicit. Namespaces are one honking great idea – let’s do more of those!
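The explicit alternative to pylab mode is a couple of conventional imports, which keep the namespace clean; a minimal sketch:

```python
# Explicit imports: every name is qualified, unlike --pylab's
# wildcard dump of numpy and matplotlib into the global namespace.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))  # np.sin, not a bare sin from nowhere
```

Two extra lines of typing buys you an unambiguous answer to "where did this function come from?".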
For interpolation in Python, SciPy includes the scipy.interpolate package, containing (amongst other things) interp1d for simple interpolation.
The function does not, however, perform extrapolation: if the interpolator is asked for a value outside the original range, it raises an exception. To get around this, the interpolator exposes a .x attribute containing the original x values used to construct it. A boolean index can then be used to reject input points which fall outside of this range:
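A minimal sketch of that rejection step, with made-up sample data:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2
f = interp1d(x, y)  # linear interpolation by default

# Query points, two of which fall outside [0, 3]
xq = np.array([-1.0, 0.5, 2.5, 4.0])

# f.x holds the original x values used to build the interpolator;
# keep only the query points inside that range
inside = (xq >= f.x[0]) & (xq <= f.x[-1])
yq = f(xq[inside])  # evaluates only where interpolation is valid
print(yq)  # -> [0.5 6.5]
```

Calling f(xq) directly would raise a ValueError because of the out-of-range -1.0 and 4.0.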
So IPython has updated to version 2.0. The full changelog can be found here and to summarise the key points:
interactive widgets for the notebook
directory navigation in the notebook dashboard
persistent URLs for notebooks
a new modal user interface in the notebook
a security model for notebooks

I want to discuss a few of these, and my thoughts on how they finally make the IPython notebook an incredibly powerful tool for research, more so than before, and why I may finally be switching from simple scripts to using the notebook full time (with a major caveat!).
I was trying to play with PyMC3, and, as per usual with code under heavy development, the tutorials were out of date and the code wouldn’t run. When I say “out of date”: in fact the code ran, but no valid numbers were produced. The API seemed to be consistent, though.
I managed to get the tutorial to run by installing the following:
Theano==0.6.0
pymc==3.0
scipy==0.13.3

PyMC3 was installed from git, from the pymc3 branch, as follows:
So I’ve been watching a lot of OO refactoring screencasts and reading posts, and I can say I’ve implemented some of the advice I’ve heard. Life’s all about learning, eh?
So the main example I want to talk about here is Null objects.
Null objects

In dynamic languages, and Ruby in particular, the concept of the lack of something needs to be encapsulated. For example: you’re wrapping the database and no entry exists; what do you return?
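The same pattern translates directly to Python; here is a minimal sketch with made-up User/NullUser classes and a dict standing in for the database:

```python
class User:
    def __init__(self, name):
        self.name = name

    def greeting(self):
        return f"Hello, {self.name}"

class NullUser:
    """Stands in for a missing database row; responds to the
    same methods as a real User, so callers never check for None."""
    name = "guest"

    def greeting(self):
        return "Hello, guest"

def find_user(db, user_id):
    # Return a NullUser instead of None when no entry exists
    return db.get(user_id) or NullUser()

db = {1: User("alice")}
print(find_user(db, 1).greeting())  # -> Hello, alice
print(find_user(db, 2).greeting())  # -> Hello, guest
```

The caller can use the result unconditionally; the "nothing there" case is encapsulated in the NullUser object rather than scattered through the calling code as None checks.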
SciPy contains functions for fitting equations with Python, in its scipy.optimize module. The two main ones I’ve used in the past are leastsq and curve_fit, the latter being a convenience wrapper around leastsq.
curve_fit

For this operation you require three (four) things:

a function to fit, of the form f(x, *params)
x data
y data
optionally, error data

You can also supply an initial guess with the p0 argument.
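Putting those ingredients together, a minimal sketch with a made-up linear model and noiseless synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

# The function to fit, of the required form f(x, *params)
def model(x, a, b):
    return a * x + b

# x data and y data (generated here from a=2, b=1 for illustration)
xdata = np.array([0.0, 1.0, 2.0, 3.0])
ydata = model(xdata, 2.0, 1.0)

# Initial guess supplied via p0; curve_fit returns the best-fit
# parameters and their covariance matrix
params, cov = curve_fit(model, xdata, ydata, p0=[1.0, 0.0])
print(params)  # recovers a ~= 2.0, b ~= 1.0
```

With real, noisy data you would also pass the error data via the sigma argument so the fit weights each point appropriately.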
Often when developing complex client-side apps, a simple

    python -m SimpleHTTPServer

can host the HTML. For a Node backend, though, a second server has to be run to host the REST API, which must be on a different domain. For example, the Python server is on port 8000, whereas the REST server is run on port
Combining Python with LaTeX is powerful. It allows arbitrary code to be executed, which can either give the results of expressions or be used to programmatically embed certain things, e.g. the paths of files or images.
By downloading python.sty and including it with a \usepackage declaration, Python code can be run.
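A minimal sketch of what that looks like, assuming the python environment that python.sty provides (the exact environment name may differ between versions of the package):

```latex
\documentclass{article}
\usepackage{python}
\begin{document}
% Code inside the environment is executed and its stdout is typeset
\begin{python}
print(2 + 2)
\end{python}
\end{document}
```

Note that the document must be compiled with shell escape enabled so that LaTeX is allowed to run the Python interpreter.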
I was having trouble getting this to work; it seems I was using an incompatible version of Python.
This is a problem that has been challenging me for a while: my C++ code uses vectors everywhere, so how can I wrap these classes and functions into Python easily? I’ve tried many, many times with e.g. SWIG or boost::python, to no avail.
That is until today…
Quite often at work I have to generate colour maps of certain quantities, which are generally not sampled evenly in coordinate space. I am also a huge fan of Python, so I thought to myself: can I combine these things? Until now I didn’t think you could: Matplotlib’s contour plots require evenly spaced x and y points, with z points to match. That is, until I found a way.
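One common approach to this problem (not necessarily the exact method this post goes on to describe) is to resample the scattered points onto a regular grid with scipy.interpolate.griddata before contouring; a sketch with made-up scattered data:

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered, unevenly sampled points (illustrative data)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = rng.uniform(0, 1, 200)
z = np.sin(x * np.pi) * np.cos(y * np.pi)

# The regular grid that matplotlib's contour functions expect
xi = np.linspace(0, 1, 50)
yi = np.linspace(0, 1, 50)
Xi, Yi = np.meshgrid(xi, yi)

# Interpolate the scattered z values onto the grid
Zi = griddata((x, y), z, (Xi, Yi), method='linear')
# Xi, Yi, Zi can now be passed to plt.contourf(Xi, Yi, Zi)
```

Grid cells outside the convex hull of the scattered points come back as NaN, which the contour functions simply leave blank.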
I’m constantly opening an IPython interpreter and having to import my common modules (for me, mostly pyfits).
The easiest way to import modules on IPython startup is to look in your ~/.ipython/ipy_user_conf.py file, which is a nice, easy way to add Python code to your IPython startup (as opposed to the ipythonrc file, which is about colours etc.).
In the main function, just add a line such as

    ip.ex("import pyfits")

This will allow custom code to be run at startup.