The grant writing process is a bit of a mystery to me, and I suspect that the same is true for many other early career academics. I hope that a better understanding of the review process for grant proposals will make it easier when I do eventually apply for a grant, so I thought it wise to share what I learned from the workshop. Obviously this is specific to mathematics, but the procedure in other subjects is presumably similar.

Disclaimer: *I do not claim to be an expert on grant proposal writing, and I do not represent the EPSRC in any way. I hope that this article is useful to people and that the information contained within is accurate. Please let me know if there are any errors so that they may be corrected.*

Once a grant proposal has been submitted, it is first passed to one of a number of portfolio managers, each covering one or more subject areas within mathematics. The portfolio manager will find three reviewers, who will read and comment on the proposal. Two of these reviewers will be drawn from the EPSRC college of reviewers; the third will be one of three reviewers suggested by the applicant (assuming that these suggestions are appropriate and able to provide a review).

The reviewers will score the proposal and make comments to be passed back to the applicant. If the reviews are supportive of the proposal, it will be passed to the next stage; an unsupported proposal will be rejected. If the proposal progresses, the reviews will be passed back to the applicant, who will have the opportunity to respond to some of the points raised by the reviewers. It is important to note that this response will not be seen by the reviewers!

The response by the applicant should address any criticisms and concerns, and should be reasonably self-contained. The reviews and the applicant's response will be the main points considered in the next stage of the application procedure: the panel.

The grant review panel meets several times per year to consider proposals and decide which have sufficient merit to be funded. The panel consists of a number of academics, around 15 I believe, who sit at a table and discuss each proposal, its reviews, and the applicant’s response to the criticism. The proposals are ranked, and a number of proposals from the top of this list are chosen to be funded. The number will depend on the value of each proposal and the funds available.

At present, the EPSRC offer a number of grant and fellowship schemes. For most applicants, especially those in established positions who have received grants in the past, the standard grant scheme will be most appropriate. This scheme covers any proposal that falls within the EPSRC remit, with no restrictions placed on the length or value of the proposal.

For early career academics who have not yet submitted a grant proposal, there is also the “new investigator award”, which is only available to those who have not yet been the recipient of a significant grant (over the value of £100k). There are no restrictions on when a proposal can be submitted to this scheme – this is a recent change to allow more flexibility for those who have taken career breaks. Note that the applicant is expected to hold a permanent academic post in order to apply for this scheme.

In addition to these grant schemes are three fellowship schemes, which aim to pay for the applicant's time to further their research and typically last three to five years. Unlike other awards, a fellowship is a personal award, which means that the funding is tied to the investigator and not a specific institution. There are limitations placed on the subject of a fellowship: it must align with one of the EPSRC priority areas. At present, the areas “intradisciplinary research” and “new connections from mathematical sciences” cover a large number of possibilities.

There are three levels of fellowships available: postdoctoral; early career; established career. These levels represent the different career stages to which they are available, although it is left to the applicant to apply for the scheme that they believe is most appropriate. The EPSRC provides person specifications that describe typical applicants at each stage. Postdoctoral fellowships have a shorter duration but offer the greatest flexibility in subject area, whilst established career fellowships have the longest duration but are more restrictive on subject area.

It seems that there is a large amount of flexibility in the fellowship schemes. However, I feel that fellowships are still extremely competitive and will likely require a large investment of time, with the knowledge that the proposal might not be funded. The standard grant schemes may be less competitive but also seem to be intended for people who already hold (permanent) lectureship positions. (Even if this is not strictly the case, it would be troublesome to apply and receive a grant on a temporary contract.)

It seems that the EPSRC have made a number of positive steps recently to provide what support they can to academics on “non-standard” career paths; in particular, those who have taken career breaks or breaks from research. This is important for those who intend to start families early in their careers. It also seems that they value input from early career researchers through their early career forum, of which I was not aware before the workshop. They also seem to encourage the participation of early career researchers in the review and panel processes.

Take a tennis ball and drill a cylindrical hole exactly through the centre of the ball so as to leave a ring around the circumference whose height is 2cm. (This ring would resemble a napkin ring.) Now take a much larger ball, say the Earth (if this were a perfect sphere), and perform the same task, leaving a ring around the equator of height 2cm. Then these two rings will have precisely the same volume.

On first hearing, this seems impossible. The Earth is much larger than a tennis ball: surely even a thin ring around its equator must be larger than the whole tennis ball. Then we draw a few pictures – not to scale, of course – and see that as the sphere becomes larger, the ring around the circumference gets much thinner.

If we fix a small length, say 1cm, then Pythagoras's theorem tells us that the distance from the centre of the sphere to the inner edge of the ring is given by the square root of the radius squared minus this length squared. (Assuming the length is smaller than the radius, this can be done.) When the length is relatively large compared to the radius, as in the case of the tennis ball, the thickness of the ring is also quite large. However, when the length is very small compared to the radius, the ring must be very thin, as in the case of the Earth.

In case this is not clear, let's put some numbers in to see how the thickness changes. Let's suppose that the radius of the tennis ball is 3cm and the radius of the Earth is 1000m – that is 100000cm, which is a gross underestimate of the radius of the Earth. We take our small length to be 1cm. As in our argument above, the thickness of the ring for the tennis ball is 3cm minus the square root of 8, which gives approximately 0.172cm. Comparatively, performing the same calculation for the Earth gives the thickness of the ring as 0.000005cm. (For the actual radius of the Earth, this quantity is tiny!)
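These figures are easy to verify; here is a quick sketch in Python (the function name is my own):

```python
from math import sqrt

def ring_thickness(radius, half_height):
    """Radial thickness of the ring: R minus sqrt(R^2 - h^2)."""
    return radius - sqrt(radius**2 - half_height**2)

print(ring_thickness(3, 1))       # tennis ball: approximately 0.172
print(ring_thickness(100000, 1))  # "Earth": approximately 0.000005
```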

The argument above does not show that the volume is the same in these two cases. For this, we need to use a little calculus. What we actually need is to calculate the area of the segment of the circle that is “cut off” to make the ring; the volume is then obtained by multiplying this area by 2π. I should warn you: the argument gets a little technical at this point.

To get the area of this segment, we use the following double integral:

$$A = \int_{-h}^{h} \int_{\sqrt{R^2 - h^2}}^{\sqrt{R^2 - z^2}} r \, \mathrm{d}r \, \mathrm{d}z.$$

Here $R$ is the radius of the sphere, $h$ is the height of the ring measured from the diameter (we are assuming, for simplicity, that the sphere is centred at the origin), and $z$ is the vertical distance from the horizontal diameter. The quantity $r$ denotes the distance from the centre of the cylindrical drilled-out section to a point in the sphere. (Technically, we are using *cylindrical polar coordinates* to evaluate this integral.) The limits of integration are obtained using some careful applications of Pythagoras's theorem. Evaluating the integral, we see that the area is two-thirds of the length cubed, $A = \tfrac{2}{3}h^3$; in particular, this is independent of the radius $R$. Thus the volume of the ring is given by

$$V = 2\pi A = \frac{4}{3}\pi h^3.$$
This is a lovely problem, where the mathematics gives a relatively surprising fact. The solution using multiple integrals does not necessarily provide any intuition as to why this is true, but it is a nice application of vector calculus to a fairly real-world problem.
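As a sanity check, the claim is also easy to verify numerically. Here is a rough sketch in Python (function and parameter names are my own), stacking thin annular slices:

```python
from math import pi

def ring_volume(radius, half_height, steps=100000):
    """Approximate the napkin-ring volume by stacking thin annular slices."""
    total = 0.0
    dz = 2 * half_height / steps
    for i in range(steps):
        z = -half_height + (i + 0.5) * dz
        # slice at height z: outer radius sqrt(R^2 - z^2), inner radius
        # sqrt(R^2 - h^2), so the annulus has area pi*((R^2-z^2)-(R^2-h^2))
        area = pi * ((radius**2 - z**2) - (radius**2 - half_height**2))
        total += area * dz
    return total

print(round(ring_volume(3, 1), 6))     # tennis ball: approximately 4.18879
print(round(ring_volume(1000, 1), 6))  # much larger sphere: same value
print(round(4 * pi / 3, 6))            # the exact answer with h = 1
```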

It took quite some time before I had a “big picture” of what a surface is, and how this fits into the grander theory. Undoubtedly I am missing some of the major pieces to this puzzle, but it does show some of the elegance of the theory (at least for me).

Let us return to the relatively basic theory of differential calculus. Our first introduction to differentiation usually comes during A-level maths, where we are told various standard derivatives without much explanation, and then that these derivatives represent the *gradient* of a curve at a given point. (For those who may need to look at derivatives once again, I suggest the Wikipedia page on differentiation, which has some lovely illustrations that will help with this discussion.)

Geometrically, what is happening is that we are constructing a line that meets the curve only once at our selected point. (Here we only consider points that are relatively “close by”; a tangent line might cross the curve elsewhere, but this will be sufficiently “far away”.) The gradient of this tangent line is equal to the value of the derivative at the chosen point.

Lines are very simple geometric figures that we can easily describe and manipulate. If we start at our point and move along the tangent line a small amount, then the point on the tangent line will be very close to the point on the curve a similar distance from our selected point. We might say then that a derivative gives us the means of approximating the value of a complicated curve using straight lines nearby to a given point. (In fact, this is the fundamental idea that underpins a large number of numerical methods for solving differential equations.)
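To make this concrete, consider the curve f(x) = x², whose derivative is f'(x) = 2x; a small sketch in Python (my own example):

```python
def f(x):
    return x**2

def tangent_at_one(x):
    # tangent line to f at x = 1: value f(1) = 1, gradient f'(1) = 2
    return 1 + 2 * (x - 1)

print(f(1.1))               # approximately 1.21
print(tangent_at_one(1.1))  # approximately 1.2, close to the true value
```

Near x = 1 the tangent line tracks the curve closely; the error grows as we move further away.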

In three dimensions, we have new possibilities to consider. We are no longer restricted to curves, and we can instead investigate the properties of *surfaces* such as the unit sphere (those points that have distance 1 to the origin). Here we can no longer approximate using a single line, since the sphere expands away from any given point in many directions. This is also reflected in the equation that determines the points in the sphere, which has two independent variables. This is typical of surfaces in three-dimensional space. (Think of a sheet that has peaks and valleys: you can move about on the sheet as if you were in the plane, although your movements also move you through the third, unseen dimension.)

Many surfaces can be realised as the zero set of some function of several variables. The *partial derivatives* of such a function give us information about how the function evolves in the direction of each of its variables, which are usually the x, y, z (and so on) directions. From these partial derivatives, we can find a *directional derivative* of our function in any direction by taking a weighted combination of the partial derivatives (a linear combination).

The evolution of a surface, as we move away from a given point, can be approximated by the set of all possible weighted combinations of the partial derivatives at that point. This is the *tangent space* of the surface at the point. In the case of the unit sphere, the surface is described using two independent variables, so the partial derivatives determine a *plane* at each point.
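To see this concretely for the unit sphere, we can use the zero-set description from above; a short sketch in my own notation:

```latex
% The unit sphere is the zero set of f(x, y, z) = x^2 + y^2 + z^2 - 1,
% whose partial derivatives form the gradient
\[
  \nabla f = (2x, \; 2y, \; 2z).
\]
% At a point (a, b, c) on the sphere, the tangent plane consists of the
% points (x, y, z) satisfying a(x - a) + b(y - b) + c(z - c) = 0, that is,
\[
  ax + by + cz = 1,
\]
% the plane through the point that is perpendicular to the radius vector.
```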

Tangent spaces provide an essential tool in *differential geometry* – the study of smooth surfaces – because it is much easier to understand a plane than a complex surface.

At present, there are around 1.4 million articles hosted on the ArXiv, and more are added every day. (Should you wish to see a visual representation of the articles on the ArXiv, which I assume you do, you should visit paperscape.) In the sub-topics that I watch, there are approximately 4 to 10 new papers added per day, and these topics are not amongst the most active on the ArXiv. The problem, then, is to filter the daily uploads to find the papers that are likely to be of interest to me. Luckily, around the same time that I started to think of an automated solution to this problem, I discovered that the ArXiv has some tools that can help.

My go-to language for automating things is Python, which has many tools for retrieving and processing web data. Building on an example provided in the ArXiv's API (Application Programming Interface) documentation, I decided to use the `feedparser` package to gather and process the ArXiv's daily RSS feed. The basic code is as follows.

```python
import feedparser

url = 'http://arxiv.org/rss/math'
feed = feedparser.parse(url)
```

Once the feed has been retrieved, we must extract the data that we need. My original approach was to use Python's `namedtuple` to store the data. These give a nice class-like object with named attributes (in this case authors, title, id, and abstract), but are still relatively lightweight data structures. The `namedtuple` factory comes from the `collections` module in the standard library, where many useful data structures can be found.

```python
from collections import namedtuple

ArxivEntry = namedtuple('ArxivEntry', ('authors', 'title', 'id', 'abstract'))
entries = [ArxivEntry(entry.authors, entry.title, entry.id, entry.summary)
           for entry in feed.entries]
```

Now comes the tricky part: filtering out those entries that may be of interest. The method that I chose for this task was a simple keyword filter on the abstract of each entry. I set up a list of keywords from articles that I have read in the past, stored in a list called `keywords`, and filtered `entries` by whether any of the keywords appeared in the abstract.

```python
keywords = [ ... ]  # too many to list here.
accepted = [entry for entry in entries
            if any(kw in entry.abstract for kw in keywords)]
```

Now that I had the vital parts of the problem solved, I added some code to write all the accepted articles into a file for each day, and set the script to run on my Raspberry Pi at 6 am each day, using a Cron job.
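For reference, the scheduling amounts to a single crontab entry along the following lines (the script path here is hypothetical):

```
# minute hour day month weekday  command -- run the filter every day at 6 am
0 6 * * * python3 /home/pi/arxiv_filter.py
```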

So far it has selected some 30 articles, though not all have been exactly to my taste. The script too is rather simplistic, and does not allow for easy modification to the filtering method. I have started working on a new and improved set of tools for filtering ArXiv entries, which will eventually allow me to customise and experiment with the filtering method without major rewrites to my code.

The spread of the zombie infection is an interesting problem to model mathematically. There are many factors to consider: the chance of a non-zombie becoming infected in an interaction with a zombie; the rate at which interactions between non-zombies and zombies occur; the spread of the zombie horde from place to place; and the “critical mass” of the zombie horde, at which point there is insufficient food for all zombies. We can model parts of this problem in isolation, with some simplifying assumptions.

For example, we might consider the following scenario. Suppose that the world is divided into nine “square” regions, numbered one to nine in the usual “keypad” arrangement, and that the zombie infection first arises in region five (in the centre of the arrangement). Then the zombie population across all nine regions can be modelled using a diffusion model.
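A crude version of such a diffusion model is easy to simulate. The sketch below (the names and parameters are my own inventions) tracks the zombie count in each of the nine regions, with a fixed fraction of each region's horde leaking to its neighbours at every time step:

```python
# A toy discrete diffusion model on the 3x3 "keypad" grid of regions.
# Regions are numbered 1-9; the outbreak starts in region 5.

NEIGHBOURS = {
    1: [2, 4], 2: [1, 3, 5], 3: [2, 6],
    4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
    7: [4, 8], 8: [5, 7, 9], 9: [6, 8],
}

def diffuse(population, rate=0.1):
    """One time step: each region sends `rate` of its horde to each neighbour."""
    new = dict(population)
    for region, count in population.items():
        for neighbour in NEIGHBOURS[region]:
            flow = rate * count
            new[region] -= flow
            new[neighbour] += flow
    return new

population = {region: 0.0 for region in range(1, 10)}
population[5] = 1000.0  # outbreak in the centre

for _ in range(50):
    population = diffuse(population)

print(round(sum(population.values())))  # total horde is conserved: 1000
```

After 50 steps the horde has spread to every region, with the totals symmetric about the centre, as one would expect.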

We might also examine the spread of the zombie infection from a probabilistic point of view, where we instead model the spread by assigning a probability that a non-zombie will become infected during an encounter with a zombie. At each point in time there will be interactions between existing zombies and non-infected people, and at each of these interactions, there is a chance that the non-infected person will become infected. This can be pictured as a tree, where each branch represents a different zombie, and each branch splits in two if the zombie infects a person at a given time. Mathematically, this is a branching process.
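A minimal simulation of this branching process might look as follows. This is a sketch under my own simplifying assumptions: each zombie makes one encounter per time step, infects with a fixed probability, and zombies never die.

```python
import random

def simulate_horde(p_infect=0.3, steps=10, initial=1, seed=42):
    """Simulate a simple branching process: each zombie has one encounter
    per step and creates a new zombie with probability p_infect."""
    random.seed(seed)
    sizes = [initial]
    horde = initial
    for _ in range(steps):
        new_infections = sum(1 for _ in range(horde)
                             if random.random() < p_infect)
        horde += new_infections
        sizes.append(horde)
    return sizes

sizes = simulate_horde()
print(sizes)  # horde size after each step; non-decreasing over time
```

Running this repeatedly with different seeds shows the characteristic behaviour of a branching process: the horde either dies out early (here, fails to grow) or grows roughly geometrically.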

A zombie apocalypse is just one of many scenarios that can be modelled using a branching process, even if it is (most likely) fictitious. Indeed, epidemic modelling using branching processes, and more general stochastic processes, is an active field of research and provides a useful tool for predicting the evolution of an infection.

I have not studied probability formally since I was an undergraduate, and since then I have acquired a much more powerful arsenal of mathematical tools, and many concepts that once seemed impenetrable are now much clearer. I decided that it was time for me to refresh my knowledge of probability theory with my new-found knowledge and endeavour to understand some of the nuances of the theory.

This refresher is motivated by the surprising appearance of probabilistic aspects in operator theory. This gives me an excellent excuse for spending time searching for mathematical papers on the zombie apocalypse, and “researching” their role in popular culture, although I feel the latter would have happened either way.

Mathematics and computers share the same basic language, the language of logic, but they also share a philosophy. In object orientated programming, objects are created according to a template called a class. A class outlines the properties and operations that can be performed on the corresponding objects, and can inherit properties from parent classes. This allows the programmer to ensure objects that are similar in some way to share a common collection of properties and operations. This process of abstraction is very powerful and flexible, and is precisely the same as the process of abstraction that has been employed by mathematicians.
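The notion of inheritance can be illustrated with a small, generic example (the classes here are my own invention): a child class automatically acquires the methods of its parent.

```python
class Shape:
    "A parent class for geometric shapes."
    def __init__(self, name):
        self.name = name

    def describe(self):
        # relies on a child class providing an area() method
        return f"{self.name} with area {self.area()}"

class Square(Shape):
    "A child class: inherits describe() from Shape."
    def __init__(self, side):
        super().__init__('square')
        self.side = side

    def area(self):
        return self.side ** 2

print(Square(3).describe())  # prints "square with area 9"
```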

There are, of course, differences between mathematical abstraction and the creation of classes in object orientated programming. There are many reasons to create an object in a computer program but, in my experience, most fall into two broad categories: objects created as convenient “containers” for information; and objects created to “interface” with another resource or program. In mathematics, we seek to understand the “big picture” that abstraction offers, and often an extra level of abstraction can help to solve problems that would otherwise be difficult.

Recently I have explored how we might use the similarities between mathematical construction and object orientated programming to better understand some of the basic objects in mathematics, specifically permutations and functions on finite sets. Of course, there are software packages that can perform symbolic algebra but I hope to describe how one might implement a simple permutation class in Python as an instructional device.

A permutation is a bijective function from a set to itself; more simply, it is a rearrangement of the elements in a set. The most basic operation one can perform with a permutation is to evaluate it at an element of the set. For this purpose, we must store information about how elements are “moved” by the permutation. Python provides us with a convenient container object to store such information called a dictionary. A simple implementation of a permutation in Python might be as follows.

```python
class Permutation:
    "A simple permutation class"

    def __init__(self, spec):
        self.spec = spec

    def evaluate(self, element):
        return self.spec[element]
```

The keyword `class` tells Python that we are creating a new object class, which in this case is called `Permutation`. The two lines starting with `def` are declarations of methods for the `Permutation` class. The first tells Python how to assign the specification (stored in `self.spec`) for the permutation to the object, and the second tells Python how to evaluate the permutation at a given element. (We are making some assumptions: that `spec` is a dictionary and that `element` is a valid key for `spec`.) We can create permutation objects using this class as follows.

```python
spec = {1: 2, 2: 3, 3: 1}
p = Permutation(spec)
p.evaluate(1)  # returns 2
p.evaluate(2)  # returns 3
```

Note that we do not need to provide the `self` variable to the `__init__` method; Python supplies it automatically when creating an object. The dot is used in Python – and many other object orientated languages – to access the properties and methods of an object.

At the moment, we haven't checked that the function we have implemented here is actually a permutation. For this, we must check that it is a bijective function from a set to itself: the only values that can be returned from the `evaluate` method must belong to the same set as the valid inputs, and every input must correspond to a unique output. This can be achieved by checking whether the set of keys of the dictionary `spec` (the valid inputs) is the same as the set of its values (the possible outputs). Using Python's built-in `set` type removes repeated elements, so checking for equality checks both conditions at once. At the same time, we can add some checks to make sure the inputs to our methods are valid. Let us modify the implementation above.

```python
class Permutation:
    "A simple permutation class"

    def __init__(self, spec):
        if not isinstance(spec, dict):
            raise TypeError('spec must be of type dict')
        if not set(spec.keys()) == set(spec.values()):
            raise ValueError('Set of keys and values differ')
        self.spec = spec

    def evaluate(self, element):
        try:
            return self.spec[element]
        except KeyError:
            raise ValueError('Invalid input')
```

Here we are using the `raise` keyword to produce an error when `spec` does not satisfy the conditions of being a dictionary (`dict`) or does not define a valid permutation. The `TypeError` and `ValueError` objects are standard error types in Python. In the `evaluate` method, we try to return the output of the permutation, from the specification, when `element` is input. In the case where `element` is not a valid input, it will not be a key of the dictionary `spec`, and Python will raise a `KeyError`. We catch that error with the `except` statement, and raise our own error.

The class definition above covers creation and evaluation of a permutation object in Python, but of course permutations have many additional properties. These properties can be added as object properties or object methods, but I will let you try this for yourself.
