import pandas as pd
import numpy as np
One of the great advantages of using Pandas to handle tabular data is how simple it is to extract valuable information from it. Here we are going to look at the various types of operations available for this.
The strength of Numpy is its natural way of handling matrix operations, and Pandas reuses many of these features. For example, one can use simple mathematical operations that act at the cell level:
compo_pd = pd.read_excel('Data/composers.xlsx')
compo_pd
compo_pd['birth']*2
np.log(compo_pd['birth'])
Here we applied functions only to Series. Indeed, since our Dataframe also contains strings, such an operation cannot be applied to it as a whole:
# compo_pd + 1  # raises a TypeError because the Dataframe contains strings
If however we have a homogeneous Dataframe, this is possible:
compo_pd[['birth','death']]
compo_pd[['birth','death']]*2
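As a side note, the homogeneous numeric part of a mixed Dataframe can also be extracted automatically with select_dtypes instead of listing the columns by hand. Here is a small sketch, using a hypothetical stand-in table since the Excel file is not reproduced here:

```python
import pandas as pd

# Hypothetical stand-in for the composers table loaded above
compo_pd = pd.DataFrame({
    'composer': ['Mahler', 'Beethoven'],
    'birth': [1860, 1770],
    'death': [1911, 1827],
})

# select_dtypes picks out the numeric columns, so the
# arithmetic is safe even without naming columns explicitly
numeric = compo_pd.select_dtypes('number')
doubled = numeric * 2
print(doubled['birth'].tolist())  # [3720, 3540]
```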
There are other types of functions whose purpose is to summarize the data, for example the mean or the standard deviation. Pandas applies such functions column-wise and returns a Series containing e.g. the mean of each column. Note that columns for which a mean does not make sense, like the city, must be excluded; recent versions of Pandas require this explicitly via the numeric_only argument:
compo_pd.mean(numeric_only=True)
A series of common summary functions, like the mean or the standard deviation, are directly implemented as Dataframe methods:
compo_pd.describe()
compo_pd.std()
If you need the mean of only a single column you can of course chain operations:
compo_pd.birth.mean()
We can also do computations with multiple series as we would do with Numpy arrays:
compo_pd['death']-compo_pd['birth']
We can even use the result of this computation to create a new column in our Dataframe:
compo_pd
compo_pd['age'] = compo_pd['death']-compo_pd['birth']
compo_pd
Sometimes one needs to apply to a column a very specific function that is not provided by default. In that case we can use one of the different apply
methods of Pandas.
The simplest case is to apply a function to a column, or Series, of a DataFrame. Let's say for example that we want to define an age > 60 as 'old' and an age of 60 or less as 'young'. We can define the following general function:
def define_age(x):
    if x > 60:
        return 'old'
    else:
        return 'young'
define_age(30)
define_age(70)
We can now apply this function on an entire Series:
compo_pd.age.apply(define_age)
compo_pd.age.apply(lambda x: x**2)
And again, if we want, we can directly use this output to create a new column:
compo_pd['age_def'] = compo_pd.age.apply(define_age)
compo_pd
We can also apply a function to an entire DataFrame. For example we can ask how many composers have birth and death dates within the XIXth century:
def nineteen_century_count(x):
    return np.sum((x >= 1800) & (x < 1900))
compo_pd[['birth','death']].apply(nineteen_century_count)
The function is applied column-wise and returns a single number for each column, collected in a Series.
def nineteen_century_true(x):
    return (x >= 1800) & (x < 1900)
compo_pd[['birth','death']].apply(nineteen_century_true)
Here the operation is again applied column-wise, but since the function returns a Series for each column, the output is a boolean DataFrame.
There are more combinations of what the in- and output of the apply function can be, and along which axis (column- or row-wise) it is applied, than can be covered here.
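One variant worth sketching is the row-wise apply, obtained by passing axis=1. The table below is a hypothetical stand-in for the composers data, since the Excel file is not reproduced here:

```python
import pandas as pd

# Hypothetical stand-in for the composers table loaded above
compo_pd = pd.DataFrame({
    'composer': ['Mahler', 'Beethoven', 'Puccini', 'Shostakovich'],
    'birth': [1860, 1770, 1858, 1906],
    'death': [1911, 1827, 1924, 1975],
})

# With axis=1 the function receives each *row* as a Series,
# so several columns can be combined in a single call
lifespan = compo_pd.apply(lambda row: row['death'] - row['birth'], axis=1)
print(lifespan.tolist())  # [51, 57, 66, 69]
```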
Just like with Numpy, it is possible to subselect parts of a Dataframe using logical indexing. Let's have a look again at an example:
compo_pd
If we use a logical comparison on a Series, this yields a boolean (logical) Series:
compo_pd['birth']
compo_pd['birth'] > 1859
Just like in Numpy we can use this logical Series as an index to select elements in the Dataframe:
log_indexer = compo_pd['birth'] > 1859
log_indexer
compo_pd
~log_indexer
compo_pd[~log_indexer]
We can also build more complex logical indexing by combining several conditions:
(compo_pd['birth'] > 1859)&(compo_pd['age']>60)
compo_pd[(compo_pd['birth'] > 1859)&(compo_pd['age']>60)]
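Conditions can likewise be combined with | (element-wise OR); note that each comparison must be wrapped in parentheses. A minimal sketch, again with a hypothetical stand-in table since the Excel content is not reproduced here:

```python
import pandas as pd

# Hypothetical stand-in for the composers table used above
compo_pd = pd.DataFrame({
    'composer': ['Mahler', 'Beethoven', 'Puccini'],
    'birth': [1860, 1770, 1858],
    'death': [1911, 1827, 1924],
})
compo_pd['age'] = compo_pd['death'] - compo_pd['birth']

# | keeps rows satisfying at least one of the two conditions;
# the parentheses around each comparison are required
mask = (compo_pd['birth'] > 1859) | (compo_pd['age'] > 60)
sel = compo_pd[mask]
print(sel['composer'].tolist())  # ['Mahler', 'Puccini']
```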
And we can create new Dataframes containing only these subselections:
compos_sub = compo_pd[compo_pd['birth'] > 1859]
compos_sub
We can then modify this new Dataframe:
compos_sub.loc[0,'birth'] = 3000
Note that we get a SettingWithCopyWarning. This is a very common issue and has to do with how new objects are created when making subselections. Simply stated: did we create an entirely new Dataframe or only a "view" of the old one? The answer is very case-dependent, so if we want to be sure to create a new Dataframe we can enforce it using the copy()
method (for more information on the topic see for example the Pandas documentation on returning a view versus a copy):
compos_sub2 = compo_pd[compo_pd['birth'] > 1859].copy()
compos_sub2.loc[0,'birth'] = 3000
compos_sub2