Why programming?

Programming is a skill that all psychology students should learn. I can think of many reasons why, from automating boring tasks to practising problem solving through code. In this post I will focus on two more immediate ways programming is relevant for a Psychology student: data collection and data analysis. For a more elaborate discussion of the topic, read the post on my personal blog: Every Psychologist Should Learn Programming.

Here is what we will do in this post:

Basic Python by example (i.e., a t-test for paired samples)

Program a Flanker task using the Python library Expyriment

Visualise and analyse data

Before going into how to use Python programming in Psychology I will briefly discuss why programming may be good for data collection and analysis.

Data collection

The data collection phase of Psychological research has largely been computerised. Thus, many of the methods and tasks used to collect data are created using software. Many of these tools offer graphical user interfaces (GUIs) that may in many cases cover your needs. For instance, E-prime offers a GUI which enables you to, basically, drag and drop “objects” onto a timeline to create your experiment. However, for many tasks you may need to write some customised code on top of your built experiment. For instance, quasi-randomisation may be hard to implement in the GUI without some coding (e.g., by creating CSV files with trial order and such). At some point in your study of the human mind you will probably need to write code before collecting data.

Data Analysis

Most programming languages can of course offer both graphical and statistical analysis of data. For instance, the R statistical programming environment has recently gained more and more popularity, in Psychology as well as in other disciplines. Python is likewise gaining popularity when it comes to analysing and visualising data. MATLAB has for many years been used for quantitative methods in Psychology and cognitive science (e.g., for psychophysical analysis, cognitive modelling, and general statistics). Python also offers extensive support for web scraping and the analysis of scraped data.

What language should one learn?

“Okay. Okay. Programming may be useful for Psychologists! But there are so many languages! Where should I start?!” One very good start would be to learn Python. Python is a general-purpose and high-level language that was created by Guido van Rossum. Nowadays it is administered by the non-profit organisation Python Software Foundation. Python is open source, which among many things means that Python is free, even for commercial use. Python is usually used and referred to as a scripting language. Thanks to its flexibility, Python is one of the most popular programming languages (e.g., 4th on the TIOBE Index for June 2016).

One of the most important aspects, however, is that there is a variety of both general-purpose packages (unlike R, which focuses on statistical analysis) and specialised Python packages. Good news for those of us interested in Psychology! This means there are specialised libraries for creating experiments (e.g., Expyriment, PsychoPy, and OpenSesame), fitting psychometric functions (e.g., pypsignifit 3.0), and analysing data (e.g., Pandas and Statsmodels). In fact, there are packages devoted solely to the analysis of EEG/ERP data (see my resources list for more examples). Python can also be run interactively using the Python interpreter (hold on, I will show an example later). Note that Python comes in two major versions, 2.7 (legacy) and 3.5. Discussing them is out of the scope of this post, but you can read more here.

Python from data collection to analysis

In this part of the post, you will learn how Python can be used from creating an experiment to visualising and analysing the data collected during that experiment. I have chosen a task that fits one of my research interests: attention and cognitive function. From doing research on distractors in the auditory and tactile modalities and how they impact visual tasks, I am, in general, interested in how some types of information cannot be blocked out. How is it that we are unable to suppress certain responses (i.e., response inhibition)? A well-used task to measure inhibition is the Flanker task (e.g., Colcombe, Kramer, Erickson, & Scalf, 2005; Eriksen & Eriksen, 1974). In the task we are going to create we will have two types of stimuli: congruent and incongruent. The task is to respond as quickly and accurately as possible to the direction the arrow is pointing. In congruent trials, the target arrow is surrounded by arrows pointing in the same direction (e.g., “<<<<<“) whereas on incongruent trials the surrounding arrows point in another direction (e.g., “<<><<“). Note, the target arrow is the one in the middle (i.e., the third).

For simplicity, we will examine whether the response time (RT) in congruent trials is different from RT in incongruent trials. Since we only will have two means to compare (incongruent vs congruent) we can use the paired sample t-test.

The following part is structured such that you get information on how to install Python and the libraries used. After this is done, you will get some basic information on how to write a Python script and then how to write the t-test function. After that, you get guided through how to write the Flanker task using Expyriment and, finally, you get to learn how to handle, visualise, and analyse the data from the Flanker task.

Installation of needed libraries

Before using Python you may need to install Python and the libraries that are used in the following examples. Python 2.7 can be downloaded here.

If you are running a Windows machine and have installed Python 2.7.11, your next step is to download and install Pygame. The second library needed is SciPy, which is a set of external libraries for scientific computing in Python. Installing SciPy on Windows machines is a bit complicated: first, download NumPy and SciPy, open up the Windows command prompt (here is how), and use Pip to install NumPy and SciPy:

```
pip install numpy-1.10.4+mkl-cp27-cp27m-win32.whl scipy-0.17.1-cp27-cp27m-win32.whl
```

Expyriment, seaborn, and pandas can be downloaded and installed using Pip:

```
pip install expyriment pandas seaborn
```

Linux users can install the packages using Pip alone, and Mac users can see here how to install the SciPy stack. If you find the installation procedure cumbersome, I suggest that you install a scientific Python distribution (e.g., Anaconda) that will get you both Python and the libraries needed (except for Expyriment).

How to write Python scripts

Python scripts are typically written in a text editor. Windows computers come with one called Notepad:

OS-X users can use TextEdit. Whichever text editor you end up using is not crucial, but you need to save your files with the file ending .py.

Writing a t-test function

Often a Python script uses modules/libraries, and these are imported at the beginning of the document. As previously mentioned, the t-test script is going to use SciPy, but we also need some math functions (i.e., the square root). These modules are imported first in our script, as will become clear later on.

Before we start defining our function, I am briefly going to touch on what a function is and describe one of the datatypes we are going to use. In Python, a function is a named block of organised code that can be reused later. The function we will create is going to be called paired_ttest and takes the arguments x and y. This means that we can send the scores from two different conditions (x and y) to the function. Our function requires the x and y variables to be of the datatype list. A list can store other values (e.g., in our case the RTs in the incongruent and congruent trials). Each value stored in a list gets an index (note, in Python the index starts at 0). For instance, if we have a list containing 5 difference scores, we can get each of them individually by using the index at which it is stored. If we start the Python interpreter we can type the following code (see here if you are unsure how to start the Python interpreter):

```
>>> pythonlist = [1, 2, 3, 4, 5]
>>> pythonlist[0]
1
```

Returning to the function we are going to write, I follow this formula for the paired sample t-test:

t = d̄ / (S_d / √(n − 1))

Basically, d̄ (“d-bar”; the d with the line above) is the mean difference between the two scores, S_d is the standard deviation of the differences, and n is the sample size.

Creating our function

Now we go on with defining the function in our Python script (i.e., def is what tells Python that the code on the following lines is part of the function). Our function needs to calculate the difference score for each subject. Here we first create an empty list (i.e., di on line 5). We also need to know the sample size, which we obtain by getting the length of the list x (using the function len()). Note, here another datatype, int, is used. Int is short for integer and stores whole numbers. Also worth noting is that di and n are indented: in Python, indentation is used to mark where certain code blocks start and stop.

```
1   import numpy as np
2   import pandas as pd
3   from scipy.stats import t
4   def paired_ttest(x, y):
5       di = []
6       n = len(x)
```

Next we use a Python loop (line 7 below). A loop is typically used when we want to repeat something n number of times. To calculate the difference score we take each subject’s score in the x condition and subtract the score in the y condition from it (line 8). Here we use the list indices (e.g., x[i]). That is, i is an integer going from 0 to n − 1, so the first repetition of the loop will get the first (i.e., index 0) subject’s scores. The average difference score is now easy to calculate: it is just the sum of all difference scores divided by the sample size (see line 10).

```
7       for i in range(n):
8           di.append(x[i] - y[i])
9
10      dbar = float(sum(di))/n
```

Note, here we use another datatype, float. The float type represents real numbers and is stored with decimal point. In Python 2.7, we need to do this because dividing integers will lead to rounded results.
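A quick illustration of why the conversion matters (the scores are made up): in Python 2, dividing two integers floors the result, while converting one operand to float gives the true mean in both Python 2 and 3.

```python
scores = [86, 91, 95]

# In Python 2, sum(scores) / len(scores) would floor 272 / 3 to 90;
# converting the sum to float first gives the true mean in both versions
mean = float(sum(scores)) / len(scores)
print(mean)
```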

In the next part of our t-test function we are going to calculate the standard deviation. First, a float datatype is created (std_di) by using a dot after the digit (i.e., 0.). The script continues by looping through the difference scores and adding each score’s squared departure from the average (i.e., d − dbar) to the std_di variable. In Python, squaring is done by typing “**” (see line 14). Finally, the standard deviation is obtained by taking the square root (using sqrt() from NumPy) of the value obtained in the loop.

```
11      std_di = 0.
12
13      for d in di:
14          std_di += (d - dbar)**2
15      std_di = np.sqrt(std_di/n)
```

The next statistic to be calculated is the standard error of the mean (line 16). Then, on lines 17 and 18, we calculate the t-value and the p-value (note, the degrees of freedom for the paired t-test is n − 1). On line 20 we put all the information in a dictionary, a datatype that can store other objects. A dictionary stores objects linked to keys (e.g., “T-value” in our example below).

```
16      se_dbar = std_di/np.sqrt(n - 1)
17      t_val = dbar/se_dbar
18      pval = t.sf(np.abs(t_val), n - 1) * 2
19
20      statistics = {'T-value': t_val, 'Degree of Freedom': n - 1, 'P-value': pval}
21
22      return statistics
```

The complete script, with an example how to use it, can be found here.
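As a sanity check of the formula, here is the same computation on made-up data using only the standard library (the RT values below are invented for illustration):

```python
import math

# Hypothetical RTs (ms) for five subjects in two conditions
x = [650, 700, 610, 680, 660]  # incongruent
y = [560, 590, 540, 600, 580]  # congruent

di = [a - b for a, b in zip(x, y)]                  # difference scores
n = len(di)
dbar = float(sum(di)) / n                           # mean difference (d-bar)
sd = math.sqrt(sum((d - dbar)**2 for d in di) / n)  # SD of the differences
t_val = dbar / (sd / math.sqrt(n - 1))              # the t statistic
print(round(t_val, 2))  # -> 12.68
```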

Flanker task in Expyriment

In this part of the post we are going to create the Flanker task using a Python library called Expyriment (Krause & Lindemann, 2014).

First, we import expyriment.

```
import expyriment
```

We continue with creating variables that contain the basic settings of our Flanker task. As can be seen in the code below, we are going to have 4 trials per block, 6 blocks, and durations of 2000 ms. Our flanker stimuli are stored in a list, and we have some task instructions (note, “\n” is the newline character and “\” just tells the Python interpreter that the string continues on the next line).

```
n_trials_block = 4
n_blocks = 6
durations = 2000
flanker_stimuli = ["<<<<<", ">>>>>", "<<><<", ">><>>"]
instructions = "Press the arrow key that matches the arrow in the CENTER -- \
try to ignore all other arrows.\n\
Press on x if the arrow points to the left.\n\
Press on m if the arrow points to the right.\n\
Press the SPACEBAR to start the test."
```
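The two escape characters used above can be checked in isolation (a minimal example, not part of the task itself):

```python
# "\" at the end of a line continues the string literal on the next line;
# "\n" inserts an actual line break into the string
text = "line one\n\
line two"
print(text)
```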

It may be worth pointing out that most Python libraries and modules have a set of classes, and the classes contain a set of methods. So what is a “class” and what is a “method”? Essentially, a class is a template for creating an object, and an object can be said to be a “storage” of both variables and functions. Returning to our example, we now create an object of the Experiment class. This object will, for now, contain the task name (“Flanker Task”). The last line of the code block uses a method to initialise our object (i.e., our experiment).
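To make the class/method distinction concrete, here is a minimal, made-up class (the name Task and its methods are hypothetical, not part of Expyriment):

```python
class Task(object):
    """A template (class) for task objects."""

    def __init__(self, name):
        self.name = name          # a variable stored on the object

    def describe(self):           # a method: a function bound to the object
        return "Task: " + self.name

# Instantiate the class, i.e., create an object from the template
flanker = Task("Flanker Task")
print(flanker.describe())
```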

```
experiment = expyriment.design.Experiment(name="Flanker Task")
expyriment.control.initialize(experiment)
```

We now carry on with the design of our experiment. First, we start with a for loop. In the loop we go from the first block to the last. Each block is created and temporarily stored in the variable temp_block.

```
for block in range(n_blocks):
    temp_block = expyriment.design.Block(name=str(block + 1))
```

Next we are going to create our trials for each block. First, in the loop we create a stimulus. Here we use the list created previously (i.e., flanker_stimuli). We can obtain one object (e.g., “<<<<<“) from the list by using the trial number (4 stimuli in the list and 4 trials/block) as the index. Remember, in our loop each trial will be a number from 0 to n − 1 (where n is the number of trials). After a stimulus is created we create a trial and add the stimulus to the trial.

```
    for trial in range(n_trials_block):
        curr_stim = flanker_stimuli[trial]
        temp_stim = expyriment.stimuli.TextLine(text=curr_stim)
        temp_trial = expyriment.design.Trial()
        temp_trial.add_stimulus(temp_stim)
```

Since the flanker task can have both congruent (e.g., “<<<<<“) and incongruent trials (“<<><<“), we want to store this. The conditional statement (“if”) just checks whether there are as many occurrences of the first character in the string (e.g., “<“) as the length of the string. Note, count is a method of string (and list) objects and counts the occurrences of something in them. If the length and the number of arrows are the same, the trial type is congruent:

```
        if curr_stim.count(curr_stim[0]) == len(curr_stim):
            trialtype = 'congruent'
        else:
            trialtype = 'incongruent'
```
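The count logic can be tested on its own with the two stimulus types (a small standalone sketch; is_congruent is a hypothetical helper name, not part of the task script):

```python
def is_congruent(stim):
    # Congruent when every character matches the first arrow
    return stim.count(stim[0]) == len(stim)

print(is_congruent("<<<<<"))  # True
print(is_congruent("<<><<"))  # False
```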

Next we need to create the response mapping. In the tutorial example we are going to use the keys x and m as response keys. In Expyriment all character keys are represented as numbers. At the end of the code block we add the congruent/incongruent and response mapping information to our trial, which, finally, is added to our block.

```
        if curr_stim[2] == '<':
            correctresponse = 120
        elif curr_stim[2] == '>':
            correctresponse = 109

        temp_trial.set_factor("trialtype", trialtype)
        temp_trial.set_factor("correctresponse", correctresponse)
        temp_block.add_trial(temp_trial)
```

At the end of the block loop we use the method shuffle_trials to randomise our trials and the block is, finally, added to our experiment.

```
    temp_block.shuffle_trials()
    experiment.add_block(temp_block)
```

Our design is now finalised. Expyriment will also save our data (lucky us, right?!), so we need to specify the column names for the data files. Expyriment also has a class (FixCross) for creating a fixation cross, and we want one!

```
experiment.data_variable_names = ["block", "correctresp", "response",
                                  "trial", "RT", "accuracy", "trialtype"]
fixation_cross = expyriment.stimuli.FixCross()
fixation_cross.preload()
```

We are now ready to start our experiment and present the task instructions on the screen. The last line makes the task wait for the spacebar to be pressed:

```
expyriment.control.start(skip_ready_screen=True)
expyriment.stimuli.TextScreen("Flanker task", instructions).present()
experiment.keyboard.wait(expyriment.misc.constants.K_SPACE)
```

The subjects will be prompted with this text:

After the spacebar is pressed the task starts. It starts with the trials in the first block, of course. In each trial the stimulus is preloaded, a fixation cross is presented for 2000ms (experiment.clock.wait(durations)), and then the flanker stimuli are presented.

```
for block in experiment.blocks:
    for trial in block.trials:
        trial.stimuli[0].preload()
        fixation_cross.present()
        experiment.clock.wait(durations)
        trial.stimuli[0].present()
```

The next line to be executed is line 52, which resets a timer so that we can use it later. On line 54 we get the response (key) and RT using the keyboard class and its wait method, with the arguments keys (K_x and K_m are our keys, remember) and duration (2000 ms). On line 57 we use the stopwatch and subtract the elapsed time (since we reset the clock) from durations. This has to be done because the program continues as soon as the subject presses a key (i.e., “x” or “m”), and without this wait the next trial would start immediately.

```
52          experiment.clock.reset_stopwatch()
53          # Collect response and response time
54          key, rt = experiment.keyboard.wait(
55              keys=[expyriment.misc.constants.K_x, expyriment.misc.constants.K_m],
56              duration=durations)
57          experiment.clock.wait(durations - experiment.clock.stopwatch_time)
```

Accuracy is determined using the if and else statements. That is, the actual response is compared to the correct response. After the accuracy has been determined, we add the variables in the order we previously created them (i.e., “block”, “correctresp”, “response”, “trial”, “RT”, “accuracy”, “trialtype”).

Finally, when the 4 trials of a block have been run, we implement a short break (i.e., 3000 ms) and present some text notifying the participant.

```
        # Check response
        if key == trial.get_factor('correctresponse'):
            acc = 1
        else:
            acc = 0

        experiment.data.add([block.name, trial.get_factor('correctresponse'),
                             key, trial.id, rt, acc,
                             trial.get_factor("trialtype")])

    expyriment.stimuli.TextScreen("Short break", "That was block: " +
                                  block.name +
                                  ".\nNext block will soon start").present()
    experiment.clock.wait(3000)
```

The experiment ends with thanking the participants for their contribution:

```
expyriment.control.end(goodbye_text="Thank you for your contribution!",
                       goodbye_delay=2000)
```

A recording of the task can be seen in this video:

That was how to create a Flanker task using Expyriment. For a better overview of the script as a whole see this GitHub gist. Documentation of Expyriment can be found here: Expyriment docs. To run a Python script you can open up the command prompt and change to the directory where the script is (using the command cd):

Data processing and analysis

Assume that we have collected data using the Flanker task and now we want to analyse our data. Expyriment saves the data of each subject in files with the file ending “.xpd”. Conveniently, the library also comes packed with methods that enable us to preprocess our data.

We are going to create a comma-separated values file (.csv) that we will later use to visualise and analyse our data. Let’s create a script called “data_processing.py”. First, we import a module called os, which lets us find the current directory (os.getcwd()); by using os.sep we make our script compatible with Windows, Linux, and OS-X. The variable datafolder stores the path to the data. In the last line, we use data_preprocessing to write a .csv file (“flanker_data.csv”) from the files starting with the name “flanker” in our data folder. Note, the Python script needs to be run in the same directory as the folder ‘data’. Another option is to change the datafolder variable (e.g., datafolder = 'path_to_where_the_data_is').

```
import os
from expyriment.misc import data_preprocessing

datafolder = os.getcwd() + os.sep + 'data'
data_preprocessing.write_concatenated_data(datafolder, 'flanker',
                                           output_file='flanker_data.csv',
                                           delimiter=', ')
```
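As a quick check of the path building: the os.sep concatenation gives the same result as the more idiomatic os.path.join, which handles the separator for us:

```python
import os

# Build the path the way the script does, then the idiomatic way
datafolder = os.getcwd() + os.sep + 'data'
print(datafolder == os.path.join(os.getcwd(), 'data'))  # True
```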

Descriptive statistics and visualising

Each subject’s data file has now been put together in “flanker_data.csv” and we can start our analyses. Here we are going to use the libraries Pandas and Seaborn. Pandas is very handy for creating data structures; that is, it makes working with our data much easier. In the code block below, we import Pandas as pd and Seaborn as sns, which makes using them a bit easier. The third line makes our plots white and without a grid.

```
import pandas as pd
import seaborn as sns

sns.set_style("white")
```

Now we can read our csv file (‘flanker_data.csv’). When reading in our data we need to skip the first row (the “# --- coding: UTF-8 ---” comment is of no use to us!):

Reading in data from the data file and skipping the first row:

```
dataframe = pd.read_csv('flanker_data.csv', skiprows=1)
```

Pandas makes descriptive statistics quite easy as well. Since we are interested in the two types of trials, we group them. For this example, we are only going to look at the RTs:

```
grouped_df = dataframe.groupby(['trialtype'])
print grouped_df['RT'].describe().unstack()
```

```
             count        mean        std    min     25%    50%    75%    max
trialtype
congruent      360  560.525000  36.765310  451.0  534.75  561.0  584.0  658.0
incongruent    360  642.088889  55.847114  488.0  606.75  639.5  680.0  820.0
```

One way to obtain quite a lot of information on our two trial types and their RTs is to make a violin plot:

```
viol_ax = sns.violinplot(x="trialtype", y="RT", palette='colorblind',
                         data=dataframe)
save_the_fig = viol_ax.get_figure()
sns.plt.show()
```

Testing our hypothesis

Just a brief reminder: we are interested in whether people can suppress the irrelevant information (i.e., the flankers pointing in another direction than the target). We use the paired sample t-test to see if the difference in RT between incongruent and congruent trials is different from zero.

First, we need to aggregate the data, and we start by grouping our data by trial type and subject number. We can then get the mean RT for the two trial types:

```
grouped_sub = dataframe.groupby(['trialtype', 'subject_id'])
means = grouped_sub['RT'].mean()
```

Next, we are going to take the RTs (values in the script) and assign them to x and y. Remember, the t-test function we started off with takes two sequences containing data. The last line in the code block below calls the function, which returns the statistics needed (i.e., t-value, p-value, and degrees of freedom).

```
x, y = means['incongruent'].values, means['congruent'].values
t_value = paired_ttest(x, y)
```
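On a toy DataFrame (the numbers are invented), the grouping and the indexing by trial type look like this:

```python
import pandas as pd

# Two subjects, two trial types, made-up RTs
toy = pd.DataFrame({'trialtype': ['congruent', 'congruent',
                                  'incongruent', 'incongruent'],
                    'subject_id': [1, 2, 1, 2],
                    'RT': [550, 570, 640, 660]})

# Per-subject mean RT for each trial type (a MultiIndex Series)
means = toy.groupby(['trialtype', 'subject_id'])['RT'].mean()

# Selecting one trial type gives the per-subject means as an array
x = means['incongruent'].values
print(list(x))  # [640.0, 660.0]
```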

Finally, before printing the results we may want to round the values. We use a for loop over each key and value in our dictionary (i.e., t_value) and round each number to three decimals.

```
for key, value in t_value.iteritems():
    t_value[key] = round(value, 3)

print t_value
```

Printing the variable t_value renders the following output:

```
{'P-value': 0.0, 'Degree of Freedom': 29.0, 'T-value': 27.358}
```

We can conclude that there was a significant difference in RT between incongruent (M = 642.09, SD = 55.85) and congruent (M = 560.53, SD = 36.77) trials; t(29) = 27.358, p < .001.

That was how to use Python from data collection to data analysis. If you want to play around with the scripts, data files for 30 simulated subjects can be downloaded here: data_flanker_expy.zip. All the scripts described above, as well as the script used to simulate the subjects (i.e., to run the task automatically), can be found in this GitHub Gist. Feel free to use the Flanker task above. If you do, I would suggest that you add a couple of practice trials.

Resources

As previously mentioned, the Python community is large and helpful, so there are many resources to turn to, both for learning Python and for finding help. It can thus be hard to know where to start. Therefore, this post ends with a few of the Python resources I have found useful or interesting. All resources listed below are free.

Learning Python

Python in Psychology:

GESTALTREVISIONWIKI – Python in vision science resources. Tutorials on general Python, data analysis, PsychoPy, and more

Programming for Psychology in Python – a “set of lessons on the fundamentals of programming for psychology.”

Python distributions

If you think that installing Python packages seems complicated and time consuming, there are a number of distributions that aim to simplify package management. That is, when you install one of them you get many of the packages that you would otherwise have to install one by one. There are many distributions (see here), but I have personally used Anaconda and Python(x, y).

Data Collection

PsychoPy (Peirce, 2007) – offers both a GUI and an API for programming your experiments. You will find some learning/teaching resources on the homepage

Vision Science – PsychoPy/Python course for Vision Science



Expyriment – the library used in the tutorial above

Two example scripts: SNARC-effect and Simon Task



OpenSesame (Mathôt, Schreij, & Theeuwes, 2012) – offers both Python scripting (mainly inline scripts) and a GUI for building your experiments. You will find examples and tutorials on OpenSesame’s homepage.

PyGaze (Dalmaijer, Mathôt, & Van der Stigchel, 2014) – a toolbox for eye-tracking data and experiments.

Statistics

Pandas – Python data analysis (descriptive, mainly) toolkit

Statsmodels – Python library enabling many common statistical methods

pypsignifit – Python toolbox for fitting psychometric functions (Psychophysics)

MNE – For processing and analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data

Getting help

Stackoverflow – On Stackoverflow you can answer questions concerning every programming language. Questions are tagged with the programming language. Also, some of the developers of PsychoPy are active and you can tag your questions with PsychoPy.

User groups for PsychoPy and Expyriment can be found on Google Groups.

OpenSesame Forum e.g., the subforums for PyGaze and, most important, Expyriment.

That was it; I hope you have found my post valuable. If you have any questions you can either leave a comment here, on my homepage or email me.

References

Colcombe, S. J., Kramer, A. F., Erickson, K. I., & Scalf, P. (2005). The implications of cortical recruitment and brain morphology for individual differences in inhibitory function in aging humans. Psychology and Aging, 20(3), 363–375. http://doi.org/10.1037/0882-7974.20.3.363

Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods, 46(4), 913–921. doi:10.3758/s13428-013-0422-2

Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16(1), 143–149. doi:10.3758/BF03203267

Krause, F., & Lindemann, O. (2014). Expyriment: A Python library for cognitive and neuroscientific experiments. Behavior Research Methods, 46(2), 416-428. http://doi.org/10.3758/s13428-013-0390-6

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324. http://doi.org/10.3758/s13428-011-0168-7

Peirce, J. W. (2007). PsychoPy-Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8–13. http://doi.org/10.1016/j.jneumeth.2006.11.017

Erik Marsja is a Ph.D. student at the Department of Psychology, Umeå University, Sweden. In his dissertation work, he examines attention and distraction from a cross-modal and multisensory perspective (i.e., using auditory, visual, and tactile stimuli). Erik teaches both qualitative and quantitative research methods, applied cognitive psychology, cognitive psychology, and perception. In the lab group he has been part of since his Bachelor's thesis, he has been responsible for programming his own experiments and some of those of other members and collaborators. Programming skills have been, and will be, valuable for his research and his career. Some of the code that has been used can be found on his GitHub page.