1. Introduction

This course assumes some familiarity with Python, Jupyter notebooks and Python scientific packages such as Numpy. There are many great resources to learn Python, including within Jupyter environments. For example this is a great introduction that you can follow to refresh your memory if needed.

The course will mostly focus on image processing using the package scikit-image, which is 1) easy to install, 2) offers a huge choice of image processing functions and 3) has a simple syntax. Other tools that you may want to explore are OpenCV (focused on computer vision) and ITK (focused on medical image processing). Finally, it has recently become possible to "import" Fiji (ImageJ) into Jupyter, which may be of interest if you rely on specific plugins that are not implemented in Python (this is however still very much in beta).

1.1 Installation

1.1.1 Running the course material remotely

To avoid losing time at the beginning of the course with faulty installations, we provide every attendee access to a JupyterHub that allows running the notebooks remotely (links will be provided in time). This possibility is only offered for the duration of the course. The notebooks can however be permanently accessed and executed through the mybinder service, which you can activate by clicking on the badge below that is also present on the repository. If you want the "full experience" you can also install all the necessary packages on your own computer (see below).

Binder

1.1.2 Local installation

Python and Jupyter can be installed on any operating system. Instead of manually installing all needed components, we highly recommend using the environment manager conda by installing either Anaconda or Miniconda (follow instructions on the website). This will install Python, Python tools (e.g. pip), several important libraries (including e.g. Numpy) and finally the conda tool itself. For Mac/Linux users: Anaconda is quite big so we recommend installing Miniconda, and then installing additional packages that you need from the Terminal. For Windows users: Anaconda might be better for you as it installs a command prompt (Anaconda prompt) from which you can easily issue conda commands.

The point of using conda is that it lets you install various packages and even versions of Python within closed environments that don't interfere with each other. That way, once you have an environment that functions as intended, you don't have to fear messing it up when you need to install other tools for your next project.

Once conda is installed, you should create a conda environment for the course. We have automated this process and you can simply follow the instructions below:

  • Clone or download and unzip this repository.
  • Open a terminal and cd to it.
  • Create the conda environment by typing:
    conda env create -f binder/environment.yml
    
  • Activate the environment:
      conda activate improc_env
  • Several imaging datasets are used during the course. The download of these data is automated through the following command (the total size is 6 GB so make sure you have a good internet connection and enough disk space):

    python installation/download_data.py
    

Note that if you need an additional package for that environment, you can still install it using conda or pip. To make it accessible within the course environment don't forget to type:

conda activate improc_env

before you conda or pip install anything. Alternatively you can type your instructions directly from a notebook e.g.:

! pip install mypackage

Whenever you close the terminal where the notebooks are running, don't forget to activate the environment again the next time you want to run the notebooks:

conda activate improc_env

1.2 A short Python refresher

I give here a very short summary of basic Python, focusing on structures and operations that we will use during this lecture, so this is not an exhaustive Python introduction. There are many more operations that one can do on basic Python structures, but as we are mostly going to use Numpy arrays, those operations are not described here.

1.2.1 Variables and structures

There are multiple types of Python variables:

In [56]:
myint = 4
myfloat = 4.0
mystring ='Hello'
print(myint)
print(myfloat)
print(mystring)
4
4.0
Hello

The type of your variable can be found using type():

In [57]:
type(myint)
Out[57]:
int
In [58]:
type(myfloat)
Out[58]:
float

These variables can be assembled into various Python structures:

In [59]:
mylist = [7,5,9]
mydictionary = {'element1': 1, 'element2': 2}
print(mylist)
print(mydictionary)
[7, 5, 9]
{'element2': 2, 'element1': 1}

Elements of those structures can be accessed through zero-based indexing (for lists) or through keys (for dictionaries):

In [60]:
mylist[1]
Out[60]:
5
In [61]:
mydictionary['element2']
Out[61]:
2

One can append elements to a list:

In [62]:
mylist.append(1)
print(mylist)
[7, 5, 9, 1]

Measure its length:

In [63]:
len(mylist)
Out[63]:
4

Ask if some value exists in a list:

In [64]:
5 in mylist
Out[64]:
True
In [65]:
4 in mylist
Out[65]:
False
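
Dictionaries support the same kind of operations. As a small illustrative sketch (the key name is just an example), membership tests apply to the keys, and new entries are added by assignment:

mydictionary['element3'] = 3     # add a new key/value pair
'element1' in mydictionary       # membership is tested on the keys -> True
list(mydictionary.keys())        # ['element1', 'element2', 'element3'] (order may vary)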

1.2.2 Basic operations

A lot of operations are included by default in Python. You can do arithmetic:

In [66]:
a = 2
b = 3
#addition
print(a+b)
#multiplication
print(a*b)
#powers
print(a**2)
5
6
4

Logical operations returning booleans (True/False)

In [67]:
a>b
Out[67]:
False
In [68]:
a<b
Out[68]:
True
In [69]:
a<b and 2*a<b
Out[69]:
False
In [70]:
a<b and 1.4*a<b
Out[70]:
True
In [71]:
a<b or 2*a<b
Out[71]:
True

Operations on strings:

In [72]:
mystring = 'This is my string'
mystring
Out[72]:
'This is my string'
In [73]:
mystring+ ' and an additional string'
Out[73]:
'This is my string and an additional string'
In [74]:
mystring.split()
Out[74]:
['This', 'is', 'my', 'string']
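
As a side note, the inverse of split() also exists: the join() method of a separator string reassembles a list of strings into a single one. A minimal sketch:

' '.join(['This', 'is', 'my', 'string'])
# 'This is my string'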

1.2.3 Functions and methods

In Python one can get information or modify any object using either functions or methods. We have already seen a few examples above. For example when we asked for the length of a list we used the len() function:

In [75]:
len(mylist)
Out[75]:
4

Python variables also have so-called methods, which are functions associated with particular object types. Those methods are written as variable.method(). For example we have seen above how to append an element to a list:

In [76]:
mylist.append(20)
print(mylist)
[7, 5, 9, 1, 20]

The two examples above involve only one argument, but any number can be used. All Python objects, including those created by other packages like Numpy, work according to the same scheme.
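
As an illustrative sketch of a method taking more than one argument, the list method insert() takes a position and a value:

mylist.insert(0, 99)   # insert the value 99 at position 0
print(mylist)
# [99, 7, 5, 9, 1, 20]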

There are two ways to ask for help on functions and methods. First, if you want to know how a specific function is supposed to work you can simply type:

In [77]:
help(len)
Help on built-in function len in module builtins:

len(obj, /)
    Return the number of items in a container.

This shows you that you can pass any container to the function len() (list, dictionary etc.) and it tells you what comes out. We will see later some more advanced examples of help information.

Second, if you want to know what methods are associated with a particular object you can just type:

In [111]:
# dir(mylist)

This returns a list of all possible methods. At the moment, only consider those not starting with an underscore. If you need help on one of those methods, you can type

In [79]:
help(mylist.append)
Help on built-in function append:

append(...) method of builtins.list instance
    L.append(object) -> None -- append object to end

Finally, whenever writing a function call you can place the cursor in the empty function parentheses and hit Shift+Tab, which will open a window with the help information.

1.2.4 For, if

Loops and conditions are classical programming features. In Python, one can write them in a very natural way. A for loop:

In [80]:
for i in [1,2,3,4]:
    print(i)
1
2
3
4

An if condition:

In [81]:
a=5
if a>6:
    print('large')
else:
    print('small')
small

A mix of those:

In [82]:
for i in [1,2,3,4]:
    if i>3:
        print(i)
4

Note that indentation of blocks is crucial in Python.

1.2.5 Mixing lists, for's and if's

A very useful feature of Python is the very simple way it allows one to create lists. For example, to create a list containing the squares of certain values, in a classical programming language one would do something like:

In [83]:
my_initial_list = [1,2,3,4]

my_list_to_create = []#initialize list

for i in my_initial_list:
    my_list_to_create.append(i*i)
print(my_list_to_create)
    
[1, 4, 9, 16]

Python allows one to do that in one line through a list comprehension, which is basically a compressed for loop:

In [84]:
[i*i for i in my_initial_list]
Out[84]:
[1, 4, 9, 16]

In a lot of cases, the list that the for loop goes through is not an explicit list but another function, typically range(), which generates either numbers from 0 to N-1 (range(N)) or from M to N-1 in steps of P (range(M,N,P)):

In [85]:
[i for i in range(10)]
Out[85]:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [86]:
[i for i in range(0,10,2)]
Out[86]:
[0, 2, 4, 6, 8]
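
Note that in these particular cases, where the values are not transformed, simply wrapping range() in list() gives the same result without a comprehension:

list(range(0,10,2))
# [0, 2, 4, 6, 8]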

If statements can be introduced in list comprehensions:

In [87]:
[i for i in range(0,10,2) if i>3]
Out[87]:
[4, 6, 8]
In [88]:
[i if i>3 else 100 for i in range(0,10,2)]
Out[88]:
[100, 100, 4, 6, 8]

One last very useful trick offered by Python is the enumerate() function. Often when traversing a list, one needs both the actual value and its index:

In [89]:
for ind, val in enumerate([8,4,9]):
    print('index: '+str(ind))
    print('value: ' + str(val))
index: 0
value: 8
index: 1
value: 4
index: 2
value: 9
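
enumerate() also combines naturally with list comprehensions. A small sketch producing "index:value" strings:

[str(ind) + ':' + str(val) for ind, val in enumerate([8,4,9])]
# ['0:8', '1:4', '2:9']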

1.2.6 Using packages

Python comes with a default set of data structures and operations. For particular applications like matrix calculations (image processing) or visualization, we are going to need additional resources. Those exist in the form of Python packages, ensembles of functions and data structures whose definitions can simply be imported into any Python program.

For example to do matrix operations, we are going to use Numpy, so we run:

In [90]:
import numpy

All functions of a package can be called by using the package name followed by a dot and the function name: numpy.xxx(). Most functions are used with an argument and either "act" on the argument, e.g. to find the maximum in a list:

In [91]:
numpy.max([1,2])
Out[91]:
2

or use the arguments to create a new object e.g. a 4x3 matrix of zeros:

In [92]:
mymat = numpy.zeros((4,3))
In [93]:
mymat
Out[93]:
array([[0., 0., 0.],
       [0., 0., 0.],
       [0., 0., 0.],
       [0., 0., 0.]])

To avoid lengthy typing, package names are usually abbreviated by giving them another name when loading them:

In [94]:
import numpy as np

Within packages, some additional tools are grouped as submodules and are typically called e.g for numpy as numpy.submodule_name.xxx(). For example, generating random numbers can be done using the numpy.random submodule. An array of ten uniform random numbers can be for example generated using:

In [95]:
np.random.rand(10)
Out[95]:
array([0.00738174, 0.82510957, 0.59643586, 0.92919436, 0.46570716,
       0.92526076, 0.17081481, 0.03715798, 0.12744829, 0.35009797])

To avoid lengthy typing, specific functions can be directly imported, which allows one to call them without specifying their source module:

In [96]:
from numpy.random import rand

rand(10)
Out[96]:
array([0.81812159, 0.97452756, 0.4383594 , 0.91854004, 0.37517642,
       0.11077294, 0.66271078, 0.8482131 , 0.70100188, 0.44337187])

This should be used very cautiously, as it makes it more difficult to debug code once it is not clear anymore which module a given function comes from.

1.3 Matplotlib

To quickly look at images, we are mostly going to use the package Matplotlib. We review here the bare minimum function calls needed to do a simple plot. First let's import the pyplot submodule:

In [97]:
import matplotlib.pyplot as plt

1.3.1 Plotting images

Using numpy we create a random 2D image of integers of 30x100 pixels (we will learn more about Numpy in the next chapters):

In [98]:
image = numpy.random.randint(0,255,(30,100))

The variable image is a Numpy array, and we'll see in the next chapter what that exactly is. For the moment just consider it as a 2D image.

To show this image we are using the plt.imshow() command, which takes a Numpy array as argument:

In [99]:
plt.imshow(image)
Out[99]:
<matplotlib.image.AxesImage at 0x7f7496c27160>

In order to suppress the matplotlib figure reference, you can end the line with ;:

In [100]:
plt.imshow(image);

When plotting outside of an interactive environment like a notebook you will also have to use the show() command. If you use it in a notebook, you don't need the ;:

In [101]:
plt.imshow(image)
plt.show()

The row and column indices are indicated on the left and at the bottom and correspond to pixel indices. The image is just a gray-scale image, and Matplotlib used its default lookup table (or color map; LUT in Fiji) to color it. We can change that by specifying another LUT via the argument cmap (color map); you can find the list of available color maps here:

In [102]:
plt.imshow(image, cmap = 'gray');

Note that you can change the default color map used by matplotlib using a command of the type plt.<colormap>(), e.g. for gray scale:

In [103]:
plt.gray()
<Figure size 432x288 with 0 Axes>

Sometimes we want to see a slightly larger image. To do that we have to add another line that specifies options for the figure.

In [104]:
plt.figure(figsize=(10,10))
plt.imshow(image);

Sometimes we want to show an array of figures, to compare for example an original image and its segmentation. We use the subplot() function and pass three arguments: the number of rows, the number of columns and the index of the plot. We use it for each element and increment the plot index. There are multiple ways of creating complex figures and you can refer to the Matplotlib documentation for further information:

In [105]:
plt.subplot(1,2,1)
plt.imshow(image, cmap = 'gray')
plt.subplot(1,2,2)
plt.imshow(image, cmap = 'Reds');

The imshow() function takes basically two types of data. Either single planes as above, or images with three planes. In the latter case, imshow() assumes that the image is in RGB format (Red, Green, Blue) and uses those colors.
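
To illustrate this, here is a minimal sketch building a small synthetic three-plane image (the values are arbitrary): filling only the first plane should give a red image, since imshow() interprets float RGB values in the 0-1 range:

rgb = np.zeros((30, 100, 3))
rgb[:, :, 0] = 1.0   # fill only the red plane
plt.imshow(rgb);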

Finally, one can superpose various plot elements on top of each other. One very useful option in the frame of this course is the possibility to overlay an image in transparency on top of another using the alpha argument. We create a gradient image and then superpose it:

In [106]:
image_grad = np.ones((30,100))*np.linspace(0, 1, 100)[None, :]

plt.subplot(1,2,1)
plt.imshow(image, cmap = 'gray')
plt.subplot(1,2,2)
plt.imshow(image_grad, cmap = 'Reds');
In [107]:
plt.imshow(image, cmap = 'gray')
plt.imshow(image_grad, cmap = 'Reds', alpha = 0.2);

1.3.2 Plotting histograms

One thing that we are going to do very often is look at histograms, typically of pixel values, for example to determine a threshold separating background from signal. For that we can use the plt.hist() command.

If we have a list of numbers we can simply call the plt.hist() function on it (we will see more options later). We create again a list of random numbers:

In [108]:
list_number = np.random.randint(0,100,100000)
In [109]:
plt.hist(list_number);

Once we have an idea of the distribution of values, we can refine the binning:

In [110]:
plt.hist(list_number, bins = np.arange(0,255,2));

2. Numpy with images

All images are essentially matrices with a variable number of dimensions where each element represents the value of one pixel. The different dimensions and the pixel values can have very different meanings depending on the type of image considered, but the structure is the same.

Python does not by default allow one to gracefully handle multi-dimensional data. In particular it is not designed to handle matrix operations. Numpy was developed to fill this gap and offers a framework very similar to the one offered by Matlab. It underlies a large number of packages and has become absolutely essential to Python scientific programming. In particular it underlies the functions of scikit-image, which in turn forms the basis of other software like CellProfiler. It is thus essential to have a good understanding of Numpy to proceed.

Instead of introducing Numpy in an abstract way, we are going to present it here through the lens of image processing, in order to focus on the most useful features in the context of this course.

2.1 Exploring an image

Some test images are provided directly in skimage, so let us look at one (we'll deal with the details of image import later). First let us import the necessary packages.

In [1]:
import numpy as np
import skimage
import matplotlib.pyplot as plt
plt.gray();  # MZ: ensure it will use gray scale for the plotting
In [2]:
image = skimage.data.coins()

# submodule skimage.data => provide images 
In [3]:
# MZ: added to have all outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
a=5
a
b=2
b
# => will print 5 and 2 and not only 2
Out[3]:
2

2.1.1 Image size

The first thing we can do with the image is simply look at the output:

In [4]:
image # MZ: it is a numpy array
Out[4]:
array([[ 47, 123, 133, ...,  14,   3,  12],
       [ 93, 144, 145, ...,  12,   7,   7],
       [126, 147, 143, ...,   2,  13,   3],
       ...,
       [ 81,  79,  74, ...,   6,   4,   7],
       [ 88,  82,  74, ...,   5,   7,   8],
       [ 91,  79,  68, ...,   4,  10,   7]], dtype=uint8)

We see that Numpy tells us we have an array, and that we don't have a simple list of pixels but a list of lists, reflecting the fact that we are dealing with a two-dimensional object. Each inner list represents one row of pixels. Numpy smartly only shows us the first/last rows/columns. We can use the .shape attribute to check the size of the array:

In [5]:
image.shape  # MZ: give the dimension
Out[5]:
(303, 384)

This means that we have an image of 303 rows and 384 columns. We can also visualize the image using matplotlib:

In [6]:
plt.imshow(image);
In [7]:
%matplotlib inline 
# %matplotlib notebook 
# with notebook -> you can zoom, convenient for notebook
# MZ: magic lines for jupyter with % 

2.1.2 Image type

In [8]:
image
Out[8]:
array([[ 47, 123, 133, ...,  14,   3,  12],
       [ 93, 144, 145, ...,  12,   7,   7],
       [126, 147, 143, ...,   2,  13,   3],
       ...,
       [ 81,  79,  74, ...,   6,   4,   7],
       [ 88,  82,  74, ...,   5,   7,   8],
       [ 91,  79,  68, ...,   4,  10,   7]], dtype=uint8)

In the output above we see that we have one additional piece of information: the array has dtype = uint8 , which means that the image is of type unsigned integer 8 bit. We can also get the type of an array by using:

In [9]:
image.dtype  # MZ: dtype is an attribute of "image" (// shape)
Out[9]:
dtype('uint8')

Standard formats we are going to see are 8bit (uint8), 16bit (uint16) and non-integers (usually float64). The type of the image pixels sets what values they can take. For example 8bit means values from $0$ to $2^8 -1= 256-1 = 255$. Just like for example in Fiji, one can change the type of the image. If we know we are going to do operations requiring non-integers we can turn the pixels into floats through the .astype() method.

In [10]:
# MZ:
# a bit more careful with types of images !
# if integer or not it really matters !
# numpy different from Python philosophy and dynamic typing
# be careful, e.g. if values > 255 -> can behave weird
In [11]:
image_float = image.astype(float)

Notice the '.' after each value, indicating that the pixels are now floats:

In [12]:
image_float
Out[12]:
array([[ 47., 123., 133., ...,  14.,   3.,  12.],
       [ 93., 144., 145., ...,  12.,   7.,   7.],
       [126., 147., 143., ...,   2.,  13.,   3.],
       ...,
       [ 81.,  79.,  74., ...,   6.,   4.,   7.],
       [ 88.,  82.,  74., ...,   5.,   7.,   8.],
       [ 91.,  79.,  68., ...,   4.,  10.,   7.]])
In [13]:
image_float.dtype
Out[13]:
dtype('float64')

The importance of the image type goes slightly against Python's philosophy of dynamic typing (no need to specify a type when creating a variable), but it is a necessity when handling images. We are now going to see what types of operations we can do with arrays, and the importance of types is going to become more obvious.

2.2 Operations on arrays

2.2.1 Arithmetics on arrays

Numpy is written in a smart way such that it is able to handle operations between arrays of different sizes. In the simplest case, one can combine a scalar and an array, for example through an addition:

In [14]:
image
Out[14]:
array([[ 47, 123, 133, ...,  14,   3,  12],
       [ 93, 144, 145, ...,  12,   7,   7],
       [126, 147, 143, ...,   2,  13,   3],
       ...,
       [ 81,  79,  74, ...,   6,   4,   7],
       [ 88,  82,  74, ...,   5,   7,   8],
       [ 91,  79,  68, ...,   4,  10,   7]], dtype=uint8)
In [15]:
image+10 # add 10 to each element of the array
# MZ: advantage of using numpy ! will not work with a list ! here it works pixel-wise
Out[15]:
array([[ 57, 133, 143, ...,  24,  13,  22],
       [103, 154, 155, ...,  22,  17,  17],
       [136, 157, 153, ...,  12,  23,  13],
       ...,
       [ 91,  89,  84, ...,  16,  14,  17],
       [ 98,  92,  84, ...,  15,  17,  18],
       [101,  89,  78, ...,  14,  20,  17]], dtype=uint8)

Here Numpy automatically added the scalar 10 to each element of the array. Beyond the scalar case, operations between arrays of different sizes are also possible through a mechanism called broadcasting. This is an advanced (and sometimes confusing) feature that we won't use much in this course but about which you can read for example here.
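
Just to give a flavor of it, here is a minimal sketch where a 1D array is "broadcast" along the rows of a 2D array:

np.ones((3, 4)) + np.array([0, 10, 20, 30])
# every row becomes [ 1., 11., 21., 31.]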

The only case we are going to consider here is operations between arrays of the same size. For example we can multiply the image by itself. We use first the float version of the image:

In [16]:
image_sq = image_float*image_float 
# MZ:
# does not perform matrix multiplication !, but multiply each pixel with each pixel at the same position
# (will not perform like in linear algebra) (will have to use other numpy functions)
In [17]:
image_sq
Out[17]:
array([[2.2090e+03, 1.5129e+04, 1.7689e+04, ..., 1.9600e+02, 9.0000e+00,
        1.4400e+02],
       [8.6490e+03, 2.0736e+04, 2.1025e+04, ..., 1.4400e+02, 4.9000e+01,
        4.9000e+01],
       [1.5876e+04, 2.1609e+04, 2.0449e+04, ..., 4.0000e+00, 1.6900e+02,
        9.0000e+00],
       ...,
       [6.5610e+03, 6.2410e+03, 5.4760e+03, ..., 3.6000e+01, 1.6000e+01,
        4.9000e+01],
       [7.7440e+03, 6.7240e+03, 5.4760e+03, ..., 2.5000e+01, 4.9000e+01,
        6.4000e+01],
       [8.2810e+03, 6.2410e+03, 4.6240e+03, ..., 1.6000e+01, 1.0000e+02,
        4.9000e+01]])
In [18]:
image_float
Out[18]:
array([[ 47., 123., 133., ...,  14.,   3.,  12.],
       [ 93., 144., 145., ...,  12.,   7.,   7.],
       [126., 147., 143., ...,   2.,  13.,   3.],
       ...,
       [ 81.,  79.,  74., ...,   6.,   4.,   7.],
       [ 88.,  82.,  74., ...,   5.,   7.,   8.],
       [ 91.,  79.,  68., ...,   4.,  10.,   7.]])

Looking at the first row we see $47^2 = 2209$ and $123^2=15129$ etc., which means that the multiplication has happened pixel-wise. Note that this is NOT a classical matrix multiplication. We can also see that the output has the same size as the original arrays:

In [19]:
image_sq.shape
Out[19]:
(303, 384)
In [20]:
image_float.shape
Out[20]:
(303, 384)
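
If a true linear-algebra matrix product is what you need, Numpy offers it separately, e.g. through the @ operator. A minimal sketch on a small array to contrast the two:

small = np.array([[1., 2.], [3., 4.]])
small * small   # pixel-wise: [[ 1.,  4.], [ 9., 16.]]
small @ small   # matrix product: [[ 7., 10.], [15., 22.]]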

Let's see now what happens when we square the original 8bit image:

In [21]:
image*image
Out[21]:
array([[161,  25,  25, ..., 196,   9, 144],
       [201,   0,  33, ..., 144,  49,  49],
       [  4, 105, 225, ...,   4, 169,   9],
       ...,
       [161,  97, 100, ...,  36,  16,  49],
       [ 64,  68, 100, ...,  25,  49,  64],
       [ 89,  97,  16, ...,  16, 100,  49]], dtype=uint8)

We see that we don't get the expected result at all. Since we multiplied two 8bit images, Numpy assumes we want an 8bit output, and therefore the values are bounded to 0-255. For example the first value is just the remainder modulo 256:

In [22]:
# MZ:
# what is above 255 get reassigned to a 0-255 value
# as numpy assumed that we have 8bit int !!!

# if you want > 255 values -> first make the matrix as float
In [23]:
2209%256
Out[23]:
161

The same thing happens e.g. if we add an integer scalar to the matrix:

In [24]:
print(image+230)
[[ 21  97 107 ... 244 233 242]
 [ 67 118 119 ... 242 237 237]
 [100 121 117 ... 232 243 233]
 ...
 [ 55  53  48 ... 236 234 237]
 [ 62  56  48 ... 235 237 238]
 [ 65  53  42 ... 234 240 237]]

Clearly something went wrong as we get values that are smaller than 230. Again, any value "overflowing" above 255 wraps around to 0.

This problem can be alleviated in different ways. For example we can combine an integer array with a float scalar, and Numpy will automatically give a result using the "most complex" type:

In [25]:
image_plus_float = image+230.0
In [26]:
print(image_plus_float)  # MZ: compare with the uint8 case above, where 256 was removed: 277-256 = 21
[[277. 353. 363. ... 244. 233. 242.]
 [323. 374. 375. ... 242. 237. 237.]
 [356. 377. 373. ... 232. 243. 233.]
 ...
 [311. 309. 304. ... 236. 234. 237.]
 [318. 312. 304. ... 235. 237. 238.]
 [321. 309. 298. ... 234. 240. 237.]]

To be on the safe side we can also explicitly change the type when we know we might run into this kind of trouble. This can be done via the .astype() method:

In [27]:
# MZ:
# combine integer with float -> Python logic, use the most complex type
# will convert int to float and the output will be float
In [28]:
image_float = image.astype(float)
In [29]:
image_float.dtype
Out[29]:
dtype('float64')

Again, if we combine floats and integers the output is going to be a float:

In [30]:
image_float+230
Out[30]:
array([[277., 353., 363., ..., 244., 233., 242.],
       [323., 374., 375., ..., 242., 237., 237.],
       [356., 377., 373., ..., 232., 243., 233.],
       ...,
       [311., 309., 304., ..., 236., 234., 237.],
       [318., 312., 304., ..., 235., 237., 238.],
       [321., 309., 298., ..., 234., 240., 237.]])

2.2.2 Logical operations

A set of important operations when processing images are logical (or boolean) operations, which allow one to create masks for features to segment. Those have a very simple syntax in Numpy. For example, let's compare pixel intensities to some threshold value:

In [31]:
threshold = 100
In [32]:
image > threshold
Out[32]:
array([[False,  True,  True, ..., False, False, False],
       [False,  True,  True, ..., False, False, False],
       [ True,  True,  True, ..., False, False, False],
       ...,
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False]])

We see that the result is again a pixel-wise comparison with the threshold, generating in the end a boolean or logical matrix. We can directly assign this logical matrix to a variable, verify its shape and type, and plot it:

In [33]:
image_threshold = image > threshold
In [34]:
image_threshold.shape
Out[34]:
(303, 384)
In [35]:
image_threshold.dtype
Out[35]:
dtype('bool')
In [36]:
image_threshold
Out[36]:
array([[False,  True,  True, ..., False, False, False],
       [False,  True,  True, ..., False, False, False],
       [ True,  True,  True, ..., False, False, False],
       ...,
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False]])
In [37]:
plt.imshow(image_threshold);

Of course other logical operators can be used (<, >, ==, !=) and the resulting boolean matrices combined:

In [38]:
threshold1 = 70
threshold2 = 100
image_threshold1 = image > threshold1
image_threshold2 = image < threshold2
In [39]:
# MZ
# logical: often use of masks
# e.g. you have a mask for dog and a mask for houses -> apply the masks to the images using logicals
In [40]:
# MZ: here we deal with logical matrices
image_AND = image_threshold1 & image_threshold2  # MZ: True in the 2 matrices
image_XOR = image_threshold1 ^ image_threshold2  # MZ: what is True in 1 matrix but not in the other one
In [41]:
# MZ: multiple panels on matplot
plt.figure(figsize=(15,15)) # set the figure sizes
plt.subplot(1,4,1) # how many 1 row, 4 columns, and what is the 1st element
plt.imshow(image_threshold1)
plt.subplot(1,4,2) # in the subplot where 1 row and 4 columns, what should be the 2nd element
plt.imshow(image_threshold2)
plt.subplot(1,4,3)
plt.imshow(image_AND)
plt.subplot(1,4,4)
plt.imshow(image_XOR);

2.3 Numpy functions

To broadly summarize, one can say that Numpy offers four types of operations: 1. Creation of various types of arrays, 2. Pixel-wise modifications of arrays, 3. Operations changing array dimensions, 4. Combinations of arrays.

2.3.1 Array creation

Often we are going to create new arrays and later transform them. Functions creating arrays usually take arguments specifying both the content of the array and its dimensions.

Some of the most useful functions create 1D arrays of ordered values. For example to create a sequence of numbers separated by a given step size:

In [42]:
np.arange(0,20,2)  # MZ: from where to where in step of what
Out[42]:
array([ 0,  2,  4,  6,  8, 10, 12, 14, 16, 18])

Or to create an array with a given number of equidistant values:

In [43]:
np.linspace(0,20,5)
Out[43]:
array([ 0.,  5., 10., 15., 20.])

In higher dimensions, the simplest example is the creation of arrays full of ones or zeros. In that case one only has to specify the dimensions. For example to create a 3x5 array of zeros:

In [44]:
np.zeros((3,5))
Out[44]:
array([[0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.]])

Same for an array filled with ones:

In [45]:
np.ones((3,5))
Out[45]:
array([[1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.]])

Until now we have only created one- and two-dimensional arrays. However Numpy is designed to work with arrays of arbitrary dimensions. For example we can easily create a three-dimensional "ones-array" of dimension 2x6x5:

In [46]:
array3D = np.ones((2,6,5))
In [47]:
array3D
Out[47]:
array([[[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]],

       [[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]]])
In [48]:
array3D.shape
# MZ: you should decide which dimension is the channel/volume (usually the 1st or the last)

# MZ: numpy functions can easily deal with any dimension 
# (e.g. it is easy to convert code written for 2D to code for 3D objects)
Out[48]:
(2, 6, 5)

And all operations that we have seen until now and the following ones apply to such high-dimensional arrays exactly in the same way as before:

In [49]:
array3D*5
Out[49]:
array([[[5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.]],

       [[5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.],
        [5., 5., 5., 5., 5.]]])

We can also create more complex arrays. For example an array filled with numbers drawn from a normal distribution:

In [50]:
np.random.standard_normal((3,5))
Out[50]:
array([[ 0.51920188, -1.74490051,  0.19059971, -1.22782172, -0.08362917],
       [-1.91288875, -1.46339209, -0.29266003,  1.58959264,  1.39652976],
       [-2.23327794,  0.4977774 , -0.04227832,  0.97826304, -0.99332756]])

As mentioned before, some array-creating functions take additional arguments. For example we can draw samples from a Gaussian distribution whose mean and standard deviation we can specify.

In [51]:
np.random.normal(10, 2, (5,2))
# MZ: NB "Tab" for auto-completion; "Shift+Tab" to get the help for the function
Out[51]:
array([[11.59504334, 10.84820206],
       [11.21592976,  9.46107067],
       [ 8.06999708, 10.02220069],
       [10.15008664, 11.81826128],
       [ 7.92993365, 11.43523018]])

2.3.2 Pixel-wise operations

Numpy has a large trove of functions to do all common mathematical operations element-wise on arrays. For example you can take the cosine of every element of an array:

In [52]:
angles = np.random.random_sample(5)
angles
Out[52]:
array([0.94436116, 0.77710703, 0.8668537 , 0.68759525, 0.25572394])
In [53]:
np.cos(angles)
Out[53]:
array([0.58626054, 0.71294513, 0.64722816, 0.7727745 , 0.96748043])

Or to calculate exponential values:

In [54]:
np.exp(angles)
Out[54]:
array([2.57117028, 2.17517045, 2.37941273, 1.98892691, 1.29139618])

And many many more.

2.3.3 Operations changing dimensions

Some functions are accessible in the form of methods, i.e. they are called using the dot notation. For example to find the maximum in an array:

In [55]:
angles.max() # MZ: return the max value inside the array
Out[55]:
0.9443611558667749

Alternatively there's also a maximum function:

In [56]:
np.max(angles)  # MZ: same as above but calling directly as a function
Out[56]:
0.9443611558667749

The max function, like many others (min, mean, median etc.), can also be applied along a given axis. Let's imagine we have a 3D image (multiple planes) of 10x10x4 pixels:

In [ ]:
volume = np.random.random((10,10,4))
#volume

If we want to do a maximum projection along the third axis, we can specify:

In [58]:
projection = np.max(volume, axis = 2)
# MZ: specify an axis
# 0 1 2 
# maximum along the 3 -> axis = 2
# creates a projection
In [59]:
projection.shape
Out[59]:
(10, 10)
In [60]:
projection2 = np.max(volume, axis = 0)
projection2.shape
Out[60]:
(10, 4)
In [61]:
projection3 = np.max(volume, axis = 1)
projection3.shape
Out[61]:
(10, 4)

We see that we have indeed a new array with one dimension less because of the projection.

2.3.4 Combination of arrays

Finally, arrays can be combined in multiple ways. For example, if we want to assemble two images of the same size into a stack, we can use the stack function:

In [62]:
image1 = np.ones((4,4))
image2 = np.zeros((4,4))

stack = np.stack([image1, image2],axis = 2)
In [63]:
stack.shape
Out[63]:
(4, 4, 2)

2.4 Slicing and indexing

Just like broadcasting, the selection of parts of arrays by slicing or indexing can become very sophisticated. We present here only the very basics to avoid confusion. There are often multiple ways to do slicing/indexing, and we favor here solutions that are easier to understand but sometimes less efficient.

To simplify the visualisation, we use here a natural image included in the skimage package.

In [64]:
image = skimage.data.chelsea()
In [65]:
image.shape # MZ: 300x451 pixels and 3 planes: RGB
Out[65]:
(300, 451, 3)

We see that the image has three dimensions; it is probably a stack of three images of size 300x451. Let us try to have a look at this image, hoping that the dimensions are handled gracefully:

In [66]:
plt.imshow(image); # MZ: if pass an image with 3 planes as last dim -> implicitly assumes it is an RGB image

So we have an image of a cat with dimensions 300x451. The image being in natural colors, the three planes probably correspond to an RGB (red, green, blue) format, and the plotting function just knows what to do in that case.

2.4.1 Array slicing

Let us now just look at one of the three planes composing the image. To do that, we are going to select a portion of the image array by slicing it. One can give:

  • a single index e.g. 0 for the first element
  • a range e.g. 0:10 for the first 10 elements
  • take all elements using a colon :

What portion is selected has to be specified for each dimension of an array. In our particular case, we want to select all rows, all columns and a single plane of the image:

In [67]:
image.shape
Out[67]:
(300, 451, 3)
In [68]:
image[:,:,1].shape # MZ: select only the 2nd plane
Out[68]:
(300, 451)
In [69]:
plt.imshow(image[:,:,0],cmap='gray')  
# MZ: cmap argument -> here redundant with plt.gray(); 
# different colormaps provided by matplotlib (map pixel-values to colors)
plt.title('First plane: Red');

We see now the red layer of the image. We can do the same for the others by specifying planes 0, 1, and 2:

In [70]:
plt.figure(figsize=(10,10))
plt.subplot(1,3,1)
plt.imshow(image[:,:,0],cmap='gray')
plt.title('First plane: Red')
plt.subplot(1,3,2)
plt.imshow(image[:,:,1],cmap='gray')
plt.title('Second plane: Green')
plt.subplot(1,3,3)
plt.imshow(image[:,:,2],cmap='gray')
plt.title('Third plane: Blue');


# MZ:
# no physical meaning to the colormaps, you can put what ever you want as colors
# is only the rendering of the pixel values

Logically, intensities are high for the red channel and low for the blue channel, as the image has red/brown patterns. We can confirm that by measuring the mean of each plane. To do that we use the same function as above but apply it to a single sliced plane:

In [71]:
image0 = image[:,:,0] # MZ: retain only the 1st plane
In [72]:
np.mean(image0) # MZ: mean of all pixels
Out[72]:
147.67308943089432

and for all planes using a list comprehension:

In [73]:
[np.mean(image[:,:,i]) for i in range(3)]  # MZ: calculate the mean of every plane
Out[73]:
[147.67308943089432, 111.44447893569844, 86.79785661492978]

To look at some more details, let us focus on a smaller portion of the image, e.g. one of the cat's eyes. For that we are going to take a slice of the red plane, store it in a new variable and display the selection. We consider pixel rows from 80 to 150 and columns from 130 to 210 of the first plane (0).

In [74]:
image_red = image[80:150,130:210,0]
plt.imshow(image_red,cmap='gray');

There are different ways to select parts of an array. For example one can select every n-th element by giving a step size. In the case of an image, this subsamples the data:

In [75]:
image_subsample = image[80:150:3,130:210:3,0]
plt.imshow(image_subsample,cmap='gray');

2.4.2 Array indexing

In addition to slicing an array, we can also select specific values out of it. There are many different ways to achieve that, but we focus here on two main ones.

First, one might have a list of pixel positions and wish to get the values of those pixels. By passing two lists of the same size containing the row and column positions of those pixels, one can recover them:

In [76]:
row_position = [0,1,2,3]
col_position = [0,1,0,1]

print(image_red[0:5,0:5]) 
# MZ: pass the 2 lists -> assumes that you mean the pixels you want

image_red[row_position,col_position]
# MZ: output is just a list of pixels, not 2D anymore ! output is 1D

# MZ => you can extract values either with slicing notation or by passing lists of positions
[[166 162 169 174 185]
 [183 192 185 183 173]
 [179 178 168 175 176]
 [187 184 187 189 185]
 [195 192 187 181 169]]
Out[76]:
array([166, 192, 179, 184], dtype=uint8)

Alternatively, one can pass a logical array of the same dimensions as the original array, and only the True pixels are selected. For example, let us create a logical array by picking values above a threshold:

In [77]:
threshold_image = image_red>120

Let's visualize it. Matplotlib handles logical arrays simply as a binary image:

In [78]:
plt.imshow(threshold_image)
plt.title('Thresholded logical image');

We can recover the value of all the "white" (True) pixels in the original image by indexing one array with the other:

In [79]:
selected_pixels = image_red[threshold_image] 
# MZ:
# create a mask with logical array
# pass another image, of the same size, should be a boolean array and 
# instead of passing explicit lists of rows/columns -> direct pass an array
# output is again a list 
# useful e.g. for segmentation (create a mask where you have the cells only to extract 
# from other planes where you have light emission and average the light emission)
print(selected_pixels)
[166 162 169 ... 148 137 132]

And now ask how many pixels are above threshold and what their average value is.

In [80]:
len(selected_pixels)
Out[80]:
2585
In [81]:
np.mean(selected_pixels)
Out[81]:
153.59381044487426
In [82]:
threshold_image # MZ: mask is a boolean array 2D
Out[82]:
array([[ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       [ True,  True,  True, ...,  True,  True,  True],
       ...,
       [ True, False, False, ..., False, False, False],
       [ True,  True,  True, ..., False, False, False],
       [ True,  True,  True, ..., False, False, False]])
In [83]:
np.argwhere(threshold_image)
# MZ: 2 dim arrays -> gives where are the True values in x,y coordinates
Out[83]:
array([[ 0,  0],
       [ 0,  1],
       [ 0,  2],
       ...,
       [69, 65],
       [69, 66],
       [69, 67]])
In [ ]:
# MZ: to have all attributes and functions associated with an object
#dir(threshold_image)
In [ ]:
# MZ: same works for packages
#dir(np)

We now know that there are 2585 pixels above the threshold and that their mean is 153.6
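
Since True counts as 1 and False as 0, an equivalent way to count the pixels above the threshold is to sum the boolean mask directly:

np.sum(threshold_image)
# 2585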

In [86]:
# to plot with transparency: (e.g. to plot 1 fig on top of another)
# imshow(alpha=0.5)

3. Image import/export

For the moment, we have only used images that were provided internally by skimage. We are however normally going to use data located in the file system. The module skimage.io deals with all in/out operations and supports a variety of different import mechanisms.

In [4]:
import numpy as np
import matplotlib.pyplot as plt
import skimage.io as io

3.1 Simple case

Most of the time the simplest command, imread(), will do the job. One just has to specify the path of the file or a url. In general your path is going to look something like:

image = io.imread('/This/is/a/path/MyData/Klee.jpg')
In [5]:
file_path = 'Data/Klee.jpg'
print(file_path)
Data/Klee.jpg

Here we only use a relative path, knowing that the Data folder is in the same folder as the notebook. However you can also give a complete path. We can also check what's the complete path of the current file:

In [6]:
import os
print(os.path.realpath(file_path))
/home/marie/Documents/CAS_data_science/CAS_21.01.2020_Python_Image_Processing/PyImageCourse-master/Data/Klee.jpg

Now we can import the image:

In [7]:
image = io.imread(file_path)
In [8]:
image.shape
Out[8]:
(643, 471, 3)
In [9]:
plt.imshow(image);

Now with a url:

In [18]:
image = io.imread('https://upload.wikimedia.org/wikipedia/commons/0/09/FluorescentCells.jpg')
In [19]:
plt.imshow(image)
plt.show()

3.2 Series of images (.tif)

Popular compressed formats such as jpg are usually used for natural images, e.g. in facial recognition. The reason is that for those applications one usually does not care about quantitative information, and the information loss occurring in jpg compression is irrelevant. Also, those kinds of data are rarely multi-dimensional (except for RGB).

In most other cases, the actual pixel intensity carries important information and one needs a format that preserves it. Usually this is the .tif format or one of its many derivatives. One advantage is that the .tif format allows saving multiple images within a single file, a very useful feature for multi-dimensional acquisitions.

You might encounter different situations.

3.2.1 Series of separate images

In the first case, you would have multiple single .tif files within one folder. In that case, the file name usually contains indications about the content of the image, e.g. a time point or a channel. The general way of dealing with this kind of situation is to use regular expressions, a powerful tool to parse information in text. This can be done in Python using the re module.

Here we will use an approach that identifies much simpler patterns.
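
For completeness, here is a minimal sketch of what the re approach could look like, extracting the position and the channel from a file name such as 'A9 p10f.tif' (the pattern is just an assumption about this particular naming scheme):

import re

match = re.search(r'p(\d+)([df])\.tif', 'A9 p10f.tif')
print(match.group(1))  # position: '10'
print(match.group(2))  # channel: 'f'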

Let's first see what files are contained within a folder of a microscopy experiment containing images acquired at two wavelengths using the os module:

In [10]:
import glob
import os
In [11]:
folder = 'Data/BBBC007_v1_images/A9'

Let's list all the files contained in the folder

In [13]:
files = os.listdir(folder) # MZ: list all files that are within the directory
print(files)
['A9 p10f.tif', 'A9 p5f.tif', 'A9 p9d.tif', 'A9 p7f.tif', 'A9 p7d.tif', 'A9 p10d.tif', 'A9 p9f.tif', 'A9 p5d.tif']

The two channels are distinguished by the last character before .tif. Using the wild-card sign * we can define a pattern to select only the 'd' channel: *d.tif. We complete that pattern with the correct path. Now we use the native Python module glob to parse the folder content using this pattern:

In [14]:
d_channel = glob.glob(folder+'/*d.tif')
d_channel
Out[14]:
['Data/BBBC007_v1_images/A9/A9 p9d.tif',
 'Data/BBBC007_v1_images/A9/A9 p7d.tif',
 'Data/BBBC007_v1_images/A9/A9 p10d.tif',
 'Data/BBBC007_v1_images/A9/A9 p5d.tif']

Then we use again the imread() function to import a specific file:

In [15]:
image1 = io.imread(d_channel[0])
In [16]:
image1.shape
Out[16]:
(450, 450)
In [17]:
plt.imshow(image1);

These two steps can in principle be done in one step using the imread_collection() function of skimage.
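
A minimal sketch of that one-step approach, using the same pattern as above:

collection = io.imread_collection(folder+'/*d.tif')
len(collection)       # number of matched files
collection[0].shape   # images are loaded on access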

We can also import all images and put them in a list if we have sufficient memory:

In [18]:
channel1_list = []
for x in d_channel:
    temp_im = io.imread(x)
    channel1_list.append(temp_im)

Let's see what we have in that list of images by plotting them:

In [94]:
channel1_list[0].shape
Out[94]:
(450, 450)
In [106]:
plt.imshow(channel1_list[0]);
In [105]:
num_plots = len(channel1_list)
plt.figure(figsize=(20,30))
for i in range(num_plots):
    plt.subplot(1,num_plots,i+1)
    plt.imshow(channel1_list[i],cmap = 'gray')

3.2.2 Multi-dimensional stacks

We now look at a more complex multi-dimensional case taken from a public dataset (J Cell Biol. 2010 Jan 11;188(1):49-68) that can be found here.

We already provide it in the data folder:

In [19]:
file = 'Data/30567/30567.tif'
# MZ: tif can contain many data and also can contain metadata -> very useful
In [21]:
image = io.imread(file)

The dataset is a time-lapse 3D confocal microscopy acquired in two channels, one showing the location of tubulin, the other of lamin (cell nuclei).

All .tif variants have the same basic structure: single image planes are stored in individual "sub-directories" within the file. Some meta information is stored with each plane, some is stored for the entire file. However, how the different dimensions are ordered within the file (e.g. all time-points of a given channel first, or alternatively all channels of a given time-point) can vary wildly. The simplest solution is therefore usually to just import the file, look at the size of various dimensions and plot a few images to figure out how the data are organized.

In [22]:
image.shape
Out[22]:
(72, 2, 5, 512, 672)

We know we have two channels (the dimension of size 2) and five planes (the dimension of size 5). Usually the large numbers are the image dimensions, and therefore 72 is probably the number of time-points. Using slicing, we look at the first time point, both channels, and the first plane, and we indeed get an appropriate result:

In [23]:
plt.figure(figsize=(20,10))
plt.subplot(1,2,1)
plt.imshow(image[0,0,0,:,:],cmap = 'gray')
plt.subplot(1,2,2)
plt.imshow(image[0,1,0,:,:],cmap = 'gray');

We can check that our indexing works by checking the dimensions of the sliced image:

In [24]:
# where are the metadata and how to access them -> data-specific
image[0,0,0,:,:].shape
Out[24]:
(512, 672)

As we have seen in the Numpy chapter, we can do various operations on arrays. In particular we saw that we can do projections. Let's extract all planes of a given time point and channel:

In [109]:
stack = image[0,0,:,:,:]
stack.shape
Out[109]:
(5, 512, 672)

Here, to do a max projection, we now have to project all the planes along the first dimension, hence:

In [25]:
maxproj = np.max(image[0,0,:,:,:],axis = 0) 
#MZ:  1st time point, 1st channel, but all the planes; take all max along 1st dimension -> projection
# project on the 1st dimension (axis=0)
maxproj.shape
Out[25]:
(512, 672)
In [26]:
plt.imshow(maxproj,cmap = 'gray')
plt.show()

skimage allows one to use specific import plug-ins for various applications (e.g. gdal for geographic data, FITS for astronomy etc.).

In particular it offers lower-level access to tif files through the tifffile module. This allows one for example to import only a subset of planes from the dataset if the latter is large.

In [27]:
# load only what you want (e.g. the 1st time point)
# so you don't need to load all the timepoints in memory

# tif -> most often used format for this kind of data
In [28]:
from skimage.external.tifffile import TiffFile

data = TiffFile(file)

Now the file is open but not imported, and one can query information about it. For example some metadata:

In [29]:
data.info()
Out[29]:
'TIFF file: 30567.tif, 473 MiB, big endian, ome, 720 pages\n\nSeries 0: 72x2x5x512x672, uint16, TCZYX, 720 pages, not mem-mappable\n\nPage 0: 512x672, uint16, 16 bit, minisblack, raw, ome|contiguous\n* 256 image_width (1H) 672\n* 257 image_length (1H) 512\n* 258 bits_per_sample (1H) 16\n* 259 compression (1H) 1\n* 262 photometric (1H) 1\n* 270 image_description (3320s) b\'<?xml version="1.0" encoding="UTF-8"?><!-- Wa\n* 273 strip_offsets (86I) (182, 8246, 16310, 24374, 32438, 40502, 48566, 56630,\n* 277 samples_per_pixel (1H) 1\n* 278 rows_per_strip (1H) 6\n* 279 strip_byte_counts (86I) (8064, 8064, 8064, 8064, 8064, 8064, 8064, 8064, \n* 282 x_resolution (2I) (1, 1)\n* 283 y_resolution (2I) (1, 1)\n* 296 resolution_unit (1H) 1\n* 305 software (17s) b\'LOCI Bio-Formats\''

Some specific planes:

In [30]:
plt.imshow(data.pages[6].asarray())
plt.show()
In [31]:
image = [data.pages[x].asarray() for x in range(3)]
In [32]:
plt.figure(figsize=(20,10))
for i in range(3):
    plt.subplot(1,3,i+1)
    plt.imshow(image[i])
plt.show()

3.2.3 Alternative formats

While a large majority of image formats is somehow based on tif, instrument providers often make their own tif version by creating a proprietary format. This is for example the case of the Zeiss microscopes which create the .czi format.

In almost all cases, you can find a dedicated library that allows you to open your particular file. For example for czi there is a specific package.

More generally your research field might use some particular format. For example geospatial data often come in formats handled by the GDAL library, for which there is of course a dedicated package.

Note that a lot of biology formats are well handled by the tifffile package. io.imread() tries to use the best plugin to open a format, but sometimes it fails. If you get an error using the default io.imread() you can try to specify which plugin should open the image, e.g.:

image = io.imread(file, plugin='tifffile')

3.3 Exporting images

There are two ways to save images: either as plain matrices, which can be written and re-loaded very fast, or as actual images.
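
For the "plain matrices" route, Numpy itself offers fast binary saving/loading. A minimal sketch (the file name is arbitrary):

np.save('Data/myarray.npy', image[0])   # .npy format, exact dtype preserved
reloaded = np.load('Data/myarray.npy')
reloaded.shape
# (512, 672)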

Just like for loading, saving single planes is easy. Let us save a small region of one of the images above:

In [119]:
image[0].shape
Out[119]:
(512, 672)
In [33]:
io.imsave('Data/region.tif',image[0][200:300,200:300])  # MZ: specify what you want to save
io.imsave('Data/region.jpg',image[0][200:300,200:300])
/usr/local/lib/python3.5/dist-packages/skimage/util/dtype.py:141: UserWarning: Possible precision loss when converting from uint16 to uint8
  .format(dtypeobj_in, dtypeobj_out))
In [34]:
reload_im = io.imread('Data/region.jpg') # MZ: jpg not good for scientific purposes
In [35]:
plt.imshow(reload_im,cmap='gray')
plt.show()

Saving multi-dimensional .tif files is a bit more complicated, as one has of course to be careful with the dimension order. Here again the tifffile module allows one to achieve that task. We won't go through the details, but here's an example of how to save a dataset with two time points, 5 stacks, 3 channels into a file that can then be opened as a hyper-stack in Fiji:

In [36]:
from skimage.external.tifffile import TiffWriter

data = np.random.rand(2, 5, 3, 301, 219)#generate random images
data = (data*100).astype(np.uint8)#transform data in a reasonable 8bit range

with TiffWriter('Data/multiD_set.tif', bigtiff=False, imagej=True) as tif:
    for i in range(data.shape[0]):
        tif.save(data[i])

3.4 Interactive plotting

Jupyter offers a solution to interact with various types of plots: ipywidget

In [125]:
from ipywidgets import interact, IntSlider

The interact() function takes as input a function and a value for that function. That function should plot or print some information. interact() then creates a widget, typically a slider, executes the plotting function and adjusts the output when the slider is moved. For example:

In [126]:
def square(num=1):
    print(str(num)+' squared is: '+str(num*num))
In [127]:
square(3)
3 squared is: 9
In [128]:
interact(square, num=(0,20,1));

Depending on the values passed as arguments, interact() will create different widgets, e.g. with text:

In [129]:
def f(x): 
    return x
interact(f, x='Hi there!');

An important note for our imaging topic: when moving the slider, the function is continuously updated. If the function does some computationally intensive work, this might just overload the system. To avoid that, one can explicitly specify the slider type and its properties:

In [130]:
def square(num=1):
    print(str(num)+' squared is: '+str(num*num))
interact(square, num = IntSlider(min=-10,max=30,step=1,value=10,continuous_update = False));

If we want to scroll through our image stack we can do just that. Let's first define a function that plots the first plane of channel 1 at a given time point:

In [131]:
image = io.imread(file)
In [132]:
def plot_plane(t):
    plt.imshow(image[t,0,0,:,:])
    plt.show()
In [133]:
interact(plot_plane, t = IntSlider(min=0,max=71,step=1,value=0,continuous_update = False));

Of course we can do that for multiple dimensions:

In [134]:
def plot_plane(t,c,z):
    plt.imshow(image[t,c,z,:,:])
    plt.show()

interact(plot_plane, t = IntSlider(min=0,max=71,step=1,value=0,continuous_update = True),
         c = IntSlider(min=0,max=1,step=1,value=0,continuous_update = True),
         z = IntSlider(min=0,max=4,step=1,value=0,continuous_update = True));

And we can make it as fancy as we want:

In [135]:
def plot_plane(t,c,z):
    if c == 0:
        plt.imshow(image[t,c,z,:,:], cmap = 'Reds')
    else:
        plt.imshow(image[t,c,z,:,:], cmap = 'Blues')
    plt.show()

interact(plot_plane, t = IntSlider(min=0,max=71,step=1,value=0,continuous_update = True),
         c = IntSlider(min=0,max=1,step=1,value=0,continuous_update = True),
         z = IntSlider(min=0,max=4,step=1,value=0,continuous_update = True));

4. Basic Image processing: Filtering, scaling, thresholding

Almost all image processing pipelines start with some basic procedures like thresholding, scaling, or projecting a multi-dimensional image.

Let us import again all necessary packages:

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import skimage.io as io
from skimage.external.tifffile import TiffFile

Most filtering functions will come out from the filters module of scikit-image:

In [2]:
import skimage.filters as skf

A specific region size/shape often has to be specified for filters. Those are defined in the morphology module:

In [3]:
import skimage.morphology as skm

Additionally, this module offers a set of binary operators essential to operate on the masks resulting from segmentation.

We will start working on a single plane of the dataset seen in chapter 3

In [4]:
#load image
data = TiffFile('Data/30567/30567.tif')
image = data.pages[3].asarray()
#plot image
plt.figure(figsize=(10,10))
plt.imshow(image,cmap = 'gray');

4.1 Filtering

A large set of filters is offered in scikit-image. Filtering is a local operation, where a value is calculated for each pixel and its surrounding region according to some function. For example a median filter of size 3 calculates for each pixel the median value of the 3x3 region around it.

Most filters take as input a specified region to consider for the calculation (e.g. 3x3 region). Those can be defined using the morphology module e.g.

In [5]:
disk = skm.disk(10)
diamond  = skm.diamond(5)
plt.subplot(1,2,1)
plt.imshow(disk,cmap = 'gray')
plt.subplot(1,2,2)
plt.imshow(diamond,cmap = 'gray');
In [6]:
image_median = skf.median(image,selem=skm.disk(3))  # MZ: selem = structuring element
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)
In [7]:
plt.figure(figsize=(10,10))
plt.imshow(image_median,cmap = 'gray');

Similar filters can be defined for a large range of operations: sum, min, max, mean etc.
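
For example, local minimum, maximum and mean filters are available in the rank filter module. A small sketch, assuming the same scikit-image version as above (where the region is passed as selem):

import skimage.filters.rank as skr

image_min = skr.minimum(image, selem=skm.disk(3))   # local minimum over a disk of radius 3
image_max = skr.maximum(image, selem=skm.disk(3))   # local maximum
image_mean = skr.mean(image, selem=skm.disk(3))     # local mean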

More specific filters are also provided in skimage. For example finding the gradient of intensity in an image can be done with a Sobel filter. Here for horizontal, vertical and their combination:

In [8]:
image_gradienth = skf.sobel_h(image) # MZ: sobel filter, applied horizontally
image_gradientv = skf.sobel_v(image) # MZ: same filter, applied vertically
image_gradient = np.sqrt(image_gradientv**2+image_gradienth**2) # combine both
In [9]:
plt.figure(figsize=(20,10))
plt.subplot(1,3,1)
plt.imshow(image_gradienth,cmap = 'gray')
plt.subplot(1,3,2)
plt.imshow(image_gradientv,cmap = 'gray')
plt.subplot(1,3,3)
plt.imshow(image_gradient,cmap = 'gray')

# MZ: highlight the edges horizontally (1) and vertically (2)
# (3) combined, the edges are highlighted
Out[9]:
<matplotlib.image.AxesImage at 0x7f7f96116a90>

Finally, some filter functions take additional parameters. For example, to filter with a Gaussian of large standard deviation $\sigma = 10$:

In [10]:
image_gauss = skf.gaussian(image, sigma=10)#, preserve_range=True)
# MZ: Gaussian with really large sigma (e.g. highlight the nuclei)
# MZ: to just filter noise: use much smaller sigma
plt.imshow(image_gauss,cmap = 'gray');
# the Gaussian filter automatically re-scales the image between 0 and 1

A warning regarding filters: some filters can change the type and even the intensity range of the image. Typically, the Gaussian filter used above rescales the image between 0 and 1:

In [11]:
print(image.dtype)
print(image.max())
print(image.min())
uint16
20303
2827
In [12]:
print(image_gauss.dtype)
print(image_gauss.max())
print(image_gauss.min())
float64
0.12531917375072713
0.054386287321711344

In many cases, one can specify whether the original range should be preserved:

In [13]:
image_gauss_preserve = skf.gaussian(image, sigma=10, preserve_range=True)  
# MZ: use preserve_range, so that values are not re-scaled
plt.imshow(image_gauss_preserve,cmap = 'gray');
print(image_gauss_preserve.dtype)
print(image_gauss_preserve.max())
print(image_gauss_preserve.min())
float64
8212.792051753902
3564.2053396283527

4.2 Intensity re-scaling

A very common operation in an image processing pipeline is to rescale the intensity of images. The reasons can be diverse: for example, one might want to remove an offset added to each pixel by the camera, or one might want to homogenize multiple images with slightly varying exposures.

The simplest thing to do is to rescale from min to max into the range 0-1. To create a histogram of the pixel values of an image, we first have to "flatten" the array, i.e. remove the dimensions, so that the plotting function doesn't interpret it as a series of separate measurements.

In [14]:
np.ravel(image).shape 
# MZ convert 2D to 1D array -> flatten to have 1 big list of pixels
# (needed to draw one single histogram for all values)
Out[14]:
(344064,)
In [15]:
plt.hist(np.ravel(image), bins = np.arange(0,15000,500))
plt.show()
print("min val: "+ str(np.min(image)))
print("max val: "+ str(np.max(image)))
min val: 2827
max val: 20303
In [16]:
image_minmax = (image-image.min())/(image.max()-image.min())
image_minmax[image_minmax>1] = 1

One problem that might emerge is that a few pixels might be affected by rare noise events giving them abnormal values. One way to remedy that is to use a small median filter to suppress those aberrant values:

In [17]:
image_median = skf.median(image,selem=np.ones((2,2)))
print("min val: "+ str(np.min(image_median)))
print("max val: "+ str(np.max(image_median)))

image_median_rescale = (image_median-image_median.min())/(image_median.max()-image_median.min())
image_median_rescale[image_median_rescale>1] = 1
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)
min val: 3084
max val: 20046
In [18]:
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.imshow(image_median_rescale,cmap = 'gray')
plt.subplot(1,2,2)
plt.hist(np.ravel(image_median_rescale))#, bins = np.arange(0,15000,500))
plt.show()

Note that the skimage.exposure module offers several functions to adjust the image intensity distribution.
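For example, a minimal sketch on a synthetic array (the percentile window is an arbitrary choice):

    import numpy as np
    import skimage.exposure as ske

    # hypothetical uint16 image occupying only part of the dtype range
    img = np.random.randint(2000, 8000, size=(64, 64)).astype(np.uint16)

    # stretch the 2nd-98th percentile window into the range 0-1
    p2, p98 = np.percentile(img, (2, 98))
    img_stretched = ske.rescale_intensity(img, in_range=(p2, p98), out_range=(0, 1))

    # histogram equalization: flattens the intensity distribution
    img_eq = ske.equalize_hist(img)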

4.3 Thresholding

Another common operation is to isolate regions of an image based on their intensity by using an intensity threshold: one can create a mask where all values larger than the threshold are 1 and all others 0. It is usually better to use a smoothed version of the image (e.g. median or Gaussian filtered) to avoid including noisy pixels in the mask.

Let us imagine that we want to isolate the nuclei in our current image. To do that, we can try to use their bright contours. Based on the intensity histogram, let's pick a threshold manually:

In [19]:
# MZ: thresholded image to only keep values above a given threshold

threshold_manual = 8000

#create a mask using a logical operation
image_threshold = image_median>threshold_manual  # MZ: create a boolean array

plt.imshow(image_threshold, cmap ='gray')
plt.show()

Instead of picking the threshold manually, one can use one of the many automatic methods available in skimage:

In [20]:
image_otsu_threshold = skf.threshold_otsu(image_median)
In [21]:
image_otsu_threshold
Out[21]:
7196
In [22]:
image_otsu_threshold = skf.threshold_otsu(image_median)
print(image_otsu_threshold)
image_li_threshold = skf.threshold_li(image_median)
print(image_li_threshold)
7196
6416.599708799512

Knowing that threshold value, we can create a binary image by setting all pixels higher than the threshold to 1.

In [23]:
image_otsu = image_median > image_otsu_threshold
plt.figure(figsize=(10,10))
plt.imshow(image_otsu, cmap = 'gray')
plt.show()

Since the illumination is uneven across the image, all standard thresholding methods are going to fail in some regions. What we can try instead is local thresholding, which repeats a standard thresholding method in sub-regions of the image:

In [24]:
image_local_threshold = skf.threshold_local(image_median,block_size=51)
In [25]:
image_local_threshold.shape
Out[25]:
(512, 672)
In [26]:
image_local_threshold = skf.threshold_local(image_median,block_size=51)
image_local = image_median > image_local_threshold
In [27]:
plt.figure(figsize=(10,10))
plt.imshow(image_local, cmap = 'gray')
plt.show()

We see that each nucleus contour is now recovered much better; however, there is a lot of spurious background signal.

4.4 Note on higher-dimensional cases

Some functions of scikit-image are only designed for 2D images and will generate an error when used with 3D images. An alternative package to use in those cases is scipy, specifically the filters in scipy.ndimage.
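For example (a minimal sketch on a random 3D stack; scipy.ndimage filters accept arrays of any dimensionality):

    import numpy as np
    import scipy.ndimage as ndi

    stack = np.random.rand(10, 64, 64)     # hypothetical 3D stack (planes, rows, cols)

    smoothed = ndi.gaussian_filter(stack, sigma=(1, 2, 2))   # anisotropic 3D smoothing
    denoised = ndi.median_filter(stack, size=(1, 3, 3))      # per-plane 3x3 median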

05-Binary_operations

5. Binary operations, regions

Binary operations are an important class of functions used to modify mask images (composed of 0's and 1's) and are crucial when segmenting images.

Let us first import the necessary modules:

In [1]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray();
import skimage.io as io
from skimage.external.tifffile import TiffFile

import skimage.morphology as skm
import skimage.filters as skf

And we reload the image from the last chapter and apply some thresholding to it:

In [2]:
#load image
data = TiffFile('Data/30567/30567.tif')
image = data.pages[3].asarray()
image = skf.rank.median(image,selem=np.ones((2,2)))
image_otsu_threshold = skf.threshold_otsu(image)
image_otsu = image > image_otsu_threshold
plt.imshow(image_otsu);
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)

5.1 Binary operations

Binary operations assign to each pixel a value depending on its neighborhood. For example, we can erode or dilate the image using a disk-shaped structuring element. Erosion: a white pixel that has a black neighbour within its region becomes black. Dilation: a black pixel that has a white neighbour becomes white:
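To see those two definitions at work, here is a minimal sketch on a tiny made-up mask (skm.disk(1) is a small cross-shaped structuring element):

    import numpy as np
    import skimage.morphology as skm

    mask = np.zeros((5, 5), dtype=bool)
    mask[1:4, 1:4] = True                  # a 3x3 white square

    # erosion: only the centre survives, all other white pixels have a black 4-neighbour
    print(skm.binary_erosion(mask, selem=skm.disk(1)).astype(int))
    # dilation: every black pixel with a white 4-neighbour turns white
    print(skm.binary_dilation(mask, selem=skm.disk(1)).astype(int))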

In [3]:
image_erode = skm.binary_erosion(image_otsu, selem = skm.disk(1))
image_dilate = skm.binary_dilation(image_otsu, selem = skm.disk(10))
plt.figure(figsize=(15,10))
plt.subplot(1,3,1)
plt.imshow(image_otsu,cmap = 'gray')
plt.title('Original')
plt.subplot(1,3,2)
plt.imshow(image_erode,cmap = 'gray')
plt.title('Erode')
plt.subplot(1,3,3)
plt.imshow(image_dilate,cmap = 'gray')
plt.title('Dilate');
In [4]:
image_erode1 = skm.binary_erosion(image_otsu, selem = skm.disk(1))
image_erode1b = skm.binary_erosion(image_otsu, selem = skm.disk(5))
image_erode2 = skm.binary_erosion(image_otsu, selem = skm.disk(10))
plt.subplot(1,3,1)
plt.imshow(image_erode1,cmap = 'gray')
plt.subplot(1,3,2)
plt.imshow(image_erode1b,cmap = 'gray')
plt.subplot(1,3,3)
plt.imshow(image_erode2,cmap = 'gray')
Out[4]:
<matplotlib.image.AxesImage at 0x7f06ca6d8be0>

If one is only interested in the path of those shapes, one can also thin them down to a skeleton:

In [30]:
plt.figure(figsize=(10,10))
plt.imshow(skm.skeletonize(image_otsu));

Those operations can also be combined to "clean up" an image. For example, one can first erode the image to suppress isolated pixels and then dilate it again to restore larger structures to their original size (an opening). After that, the thinning operation gives a better result:

In [6]:
image_open = skm.binary_opening(image_otsu, selem = skm.disk(2))
image_thin = skm.skeletonize(image_open)
In [7]:
plt.figure(figsize=(15,15))
plt.subplot(2,1,1)
plt.imshow(image_open)
plt.subplot(2,1,2)
plt.imshow(image_thin);

The result of the segmentation is OK, but we still have nuclei that are broken or not clean. Let's see if we can achieve a better result using another tool: region properties.

5.2 Region properties

In [8]:
from skimage.measure import label, regionprops
# MZ: labeling and region properties
# you have something to segment (a mask); you want to measure the objects individually -> needs labeling
# one object per group of connected pixels, each given its own label (connected components)

When using binary masks, one can make use of functions that detect all objects (connected regions) in the image and calculate a list of properties for them. Using those properties, one can filter out unwanted objects more easily.

Thanks to this additional tool, we can now use the local thresholding method, which preserved the nuclei better but generated a lot of noise:

In [9]:
image_local_threshold = skf.threshold_local(image,block_size=51)
image_local = image > image_local_threshold

plt.figure(figsize=(10,10))
plt.imshow(image_local);

As the image is very noisy, there are a large number of small white regions, and applying the region functions to it would be very slow. So we first do some filtering and remove the smallest objects:

In [10]:
# MZ: still a lot of noise!
# remove really small specks with erosion
# too harsh an erosion would also remove the patterns of interest, so only a soft erosion

image_local_eroded = skm.binary_erosion(image_local, selem= skm.disk(1))

plt.figure(figsize=(10,10))
plt.imshow(image_local_eroded);

To measure the properties of each region, we need a labelled image, i.e. an image in which each individual object is attributed a number. This is achieved using the skimage.measure.label() function.
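As a minimal sketch of what label() does on a tiny made-up mask:

    import numpy as np
    from skimage.measure import label

    mask = np.array([[1, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 1, 0, 1]])

    print(label(mask))
    # three connected components, each getting its own number, e.g.:
    # [[1 1 0 0]
    #  [0 0 0 2]
    #  [0 3 0 2]]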

In [11]:
image_labeled = label(image_local_eroded)
# MZ: check all neighbors

#code snippet to make a random color scale
vals = np.linspace(0,1,256)
np.random.shuffle(vals)
cmap = plt.cm.colors.ListedColormap(plt.cm.jet(vals))

plt.figure(figsize=(10,10)) # MZ: to have bigger figure
plt.imshow(image_labeled,cmap = cmap);
In [12]:
image_labeled
# MZ: it is again an array
Out[12]:
array([[   0,    0,    0, ...,    0,    0,    0],
       [   0,    0,    0, ...,    0,    0,    0],
       [   0,    0,    0, ...,    0,    0,    0],
       ...,
       [   0,    0,    0, ...,    0,    0,    0],
       [   0,    0,    0, ...,    0,    0, 2894],
       [   0,    0,    0, ...,    0,    0,    0]])
In [13]:
image_labeled.max()
Out[13]:
2902

And now we can measure all the objects' properties:

In [14]:
# MZ: now that we have regions -> we can use regionprops
# measure differences within each regions
# (we will get properties for each of the colored regions here above)
our_regions = regionprops(image_labeled)
len(our_regions)
Out[14]:
2902

We see that we have a list of 2902 regions. We can look at one of them in more detail and check what attributes exist:

In [15]:
# MZ: output is a list of structures, look at 1 element
our_regions[10]
Out[15]:
<skimage.measure._regionprops._RegionProperties at 0x7f06ca5b7b38>
In [29]:
# MZ: each region has a set of measurements associated with it
#dir(our_regions[10])

There are four types of information:

  • geometric information on each shape (area, extent, perimeter, bounding box, etc.)
  • vector information (pixel coordinates, centroid)
  • region image information (average intensity, minimal intensity etc.)
  • image-type information: the image enclosed in the bounding-box

Let us look at one region:

In [17]:
# MZ: a lot of other measurements (e.g. eccentricity, etc.)
our_regions[706].area
Out[17]:
526
In [18]:
# MZ: extract the image region that corresponds to the label
our_regions[706].image
Out[18]:
array([[False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False,  True, False, ..., False, False, False],
       ...,
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False]])
In [19]:
print(our_regions[706].area)
print(our_regions[706].coords)

plt.imshow(our_regions[706].image);
526
[[ 69 342]
 [ 69 343]
 [ 70 335]
 ...
 [ 99 350]
 [100 346]
 [100 347]]

Using the coordinate information we can then, for example, recreate an image that contains only that region:

In [20]:
our_regions[706].coords
Out[20]:
array([[ 69, 342],
       [ 69, 343],
       [ 70, 335],
       ...,
       [ 99, 350],
       [100, 346],
       [100, 347]])
In [21]:
#create a zero image
newimage = np.zeros(image.shape)
#fill in using region coordinates
newimage[our_regions[706].coords[:,0],our_regions[706].coords[:,1]] = 1
#plot the result
plt.imshow(newimage);

In general, one has an idea about the properties of the objects of interest. For example, here we know that objects contain at least several tens of pixels. Let us recover all the areas and look at their distribution:

In [22]:
areas = [x.area for x in our_regions]
plt.hist(areas)
plt.show()

We see that we have a large majority of regions that are very small and that we can discard. Let's create a new image where we do that:

In [23]:
#create a zero image
newimage = np.zeros(image.shape)  # MZ: create a 0-array, and then put 1 only where area > 200 (clean out smaller stuff)
#fill in using region coordinates
for x in our_regions:
    if x.area>200:
        newimage[x.coords[:,0],x.coords[:,1]] = 1
#plot the result
plt.imshow(newimage)
# MZ: create a new image containing only the regions that have area > 200
Out[23]:
<matplotlib.image.AxesImage at 0x7f06ca59fd30>

We see that we still have some spurious signal. We can measure the properties of the remaining regions again and try to find another parameter for selection:

In [24]:
newimage_lab = label(newimage)
our_regions2 = regionprops(newimage_lab)

Most of our regions are circular, a property measured by the eccentricity. We can verify whether we have outliers for that parameter:

In [25]:
plt.hist([x.eccentricity for x in our_regions2]);
/usr/local/lib/python3.5/dist-packages/skimage/measure/_regionprops.py:250: UserWarning: regionprops and image moments (including moments, normalized moments, central moments, and inertia tensor) of 2D images will change from xy coordinates to rc coordinates in version 0.16.
See http://scikit-image.org/docs/0.14.x/release_notes_and_installation.html#deprecations for details on how to avoid this message.
  warn(XY_TO_RC_DEPRECATION_MESSAGE)
/usr/local/lib/python3.5/dist-packages/skimage/measure/_regionprops.py:260: UserWarning: regionprops and image moments (including moments, normalized moments, central moments, and inertia tensor) of 2D images will change from xy coordinates to rc coordinates in version 0.16.
See http://scikit-image.org/docs/0.14.x/release_notes_and_installation.html#deprecations for details on how to avoid this message.
  warn(XY_TO_RC_DEPRECATION_MESSAGE)

Let's discard regions that are too oblong (>0.8):

In [26]:
# MZ: now create a new image to clean up using eccentricity

#create a zero image
newimage = np.zeros(image.shape)

#fill in using region coordinates
for x in our_regions2:
    if x.eccentricity<0.8:
        newimage[x.coords[:,0],x.coords[:,1]] = 1

#plot the result
plt.imshow(newimage);

This is a success! We can verify how good the segmentation is by superposing it on the image. A trick to superpose a mask on top of an image without obscuring it is to set all 0 elements of the mask to np.nan.

In [27]:
newimage[newimage == 0] = np.nan
In [28]:
plt.figure(figsize=(10,10))
plt.imshow(image,cmap = 'gray')
plt.imshow(newimage,alpha = 0.4,cmap = 'Reds', vmin = 0, vmax = 2);
06-Applicatio_satellite_image

6. Applications: Satellite image

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import skimage.io as io

Looking at non-biology data

Most of this course focuses on biological data. To show the generality of the presented approaches, we show here a short example based on satellite imagery.

Satellite imaging programs such as NASA's Landsat continuously image the Earth, and one can retrieve the data for free on several portals. We will deal here with images of a single region and use our basic image processing knowledge to do some vegetation analysis and image correction.

Let's first look at what a Landsat region data contains:

In [2]:
landsatfolder = 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/'
In [3]:
import glob
In [4]:
glob.glob(landsatfolder+'*tif')
Out[4]:
['Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band3_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band5_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band1_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band4_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_ipflag_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_cloud_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band6_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_cfmask_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band2_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_cfmask_conf_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band7_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_bqa_crop.tif']

The Landsat satellites acquire images in a series of wavelengths or "bands". Let us keep only those band files and sort them:

In [5]:
band_files = sorted(glob.glob(landsatfolder+'*band*tif'))
band_files
Out[5]:
['Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band1_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band2_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band3_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band4_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band5_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band6_crop.tif',
 'Data/geography/landsat/LC80340322016205-SC20170127160728/crop/LC80340322016205LGN00_sr_band7_crop.tif']

Now we can import all images and stack them into a Numpy array:

In [6]:
list_images = [io.imread(x) for x in band_files]
image_stack = np.stack(list_images)
image_stack.shape
Out[6]:
(7, 177, 246)

We see that we created a 3D array with the 7 different wavelength bands. Let's look at those:

In [7]:
fig, axarr = plt.subplots(3,3, figsize = (10,8))
for i in range(9):
    if i<7:
        axarr[int(i/3),np.mod(i,3)].imshow(image_stack[i,:,:],cmap = 'gray', vmin=0, vmax = 500)
        axarr[int(i/3),np.mod(i,3)].set_title('Band'+str(i+1))
    axarr[int(i/3),np.mod(i,3)].axis('off')
    
fig.tight_layout(h_pad = 0, w_pad = 0)

From the Landsat information we know that bands 4, 3 and 2 correspond to RGB. So let's select those to create a natural-color image and try plotting it as an RGB image:

In [8]:
image_stack.shape
Out[8]:
(7, 177, 246)
In [9]:
image_RGB = image_stack[[3,2,1],:,:]
In [10]:
image_RGB.shape
Out[10]:
(3, 177, 246)
In [13]:
# plt.imshow(image_RGB)
# plt.show()
# # TypeError: Invalid dimensions for image data

Oops, the dimensions are not correct:

In [14]:
image_RGB.shape
Out[14]:
(3, 177, 246)

We created a stack where the leading dimension is the bands. However, in the RGB format the colors are the last dimension! So we have to move the first axis to the end to be able to plot it:

In [15]:
np.moveaxis(image_RGB,0,2).shape
Out[15]:
(177, 246, 3)
In [41]:
# plt.imshow(np.moveaxis(image_RGB,0,2))
# plt.show()
# Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
In [43]:
image_RGB
Out[43]:
array([[[535, 597, 576, ..., 242, 279, 281],
        [483, 547, 549, ..., 283, 321, 364],
        [436, 424, 432, ..., 324, 399, 481],
        ...,
        [667, 832, 854, ..., 425, 413, 433],
        [985, 745, 764, ..., 372, 385, 397],
        [455, 415, 352, ..., 388, 380, 384]],

       [[514, 537, 525, ..., 311, 338, 364],
        [488, 516, 510, ..., 327, 354, 407],
        [484, 490, 463, ..., 364, 411, 477],
        ...,
        [594, 727, 701, ..., 403, 403, 409],
        [738, 662, 710, ..., 364, 401, 425],
        [429, 354, 277, ..., 353, 375, 413]],

       [[263, 300, 292, ..., 141, 158, 156],
        [238, 268, 275, ..., 148, 172, 176],
        [203, 208, 209, ..., 163, 172, 188],
        ...,
        [303, 429, 392, ..., 183, 172, 189],
        [443, 314, 410, ..., 169, 170, 179],
        [230, 188, 118, ..., 162, 164, 166]]], dtype=int16)

Next problem: the pixel values are not between 0-1 (floats) or 0-255 (ints), so we have to correct for that. We could do it manually, but skimage has a function to help us, where we can specify the desired output range:

In [18]:
from skimage.exposure import rescale_intensity
In [19]:
plt.imshow(rescale_intensity(np.moveaxis(image_RGB,0,2), out_range = (0,1)));

Now it starts looking like something reasonable. However, the exposure is still not optimal. Let's clip values around the dimmest and brightest pixels and pass that as an argument to the rescaling function:

In [20]:
v_min, v_max = np.percentile(image_RGB, (0.2, 99.8))
plt.imshow(rescale_intensity(np.moveaxis(image_RGB,0,2),in_range=(v_min, v_max), out_range=(0,1)));

That's much better. Note that we don't modify the image data; we just use the correcting functions within the plotting call. Indeed, we only want to improve the visual impression, not change the underlying data.

Let us look at the images from the other day provided in the data, for which we have the same bands:

In [21]:
landsatfolder = 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/'
band_files = sorted(glob.glob(landsatfolder+'*band*tif'))
band_files
Out[21]:
['Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band1_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band2_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band3_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band4_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band5_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band6_crop.tif',
 'Data/geography/landsat/LC80340322016189-SC20170128091153/crop/LC80340322016189LGN00_sr_band7_crop.tif']
In [22]:
list_images = [io.imread(x) for x in band_files]
image_stack2 = np.stack(list_images)

fig, axarr = plt.subplots(3,3, figsize = (10,8))
for i in range(9):
    if i<7:
        axarr[int(i/3),np.mod(i,3)].imshow(image_stack2[i,:,:],cmap = 'gray', vmin=0, vmax = 500)
        axarr[int(i/3),np.mod(i,3)].set_title('Band'+str(i+1))
    axarr[int(i/3),np.mod(i,3)].axis('off')
    
fig.tight_layout(h_pad = 0, w_pad = 0)

We see that there is a cloud in the image. In addition, the cloud is casting a shadow. If our goal were to compare the evolution of the vegetation between these two days, we would somehow have to remove those areas from our dataset. Let's first try to plot our image in real colors:

In [23]:
image_RGB = image_stack2[[3,2,1],:,:]
v_min, v_max = np.percentile(image_RGB, (0.2, 99.8))
plt.imshow(rescale_intensity(np.moveaxis(image_RGB,0,2).astype(float),in_range=(v_min, v_max),out_range=(0, 1)))
plt.show()

Because the cloud is so bright, the exposure in the rest of the image is really dim. We can manually clip the maximal values to be able to visualize our data:

In [24]:
plt.imshow(rescale_intensity(np.moveaxis(image_RGB,0,2).astype(float),in_range=(v_min, 0.2*v_max),out_range=(0, 1)))
plt.show()

Now let us try to remove the cloud and its shadow. Fortunately, in band 1 the clouds clearly appear much brighter than the rest of the image. The histogram shows that most pixels are below ~1000. To avoid picking a value manually, we can use the Otsu threshold and verify our mask:

In [25]:
from skimage.filters import threshold_otsu
In [26]:
plt.hist(np.ravel(image_stack2[0,:,:]))#, bins = np.arange(0,20000,100))
plt.show()
In [27]:
otsu_th = threshold_otsu(image_stack2[0,:,:])
plt.imshow(image_stack2[0,:,:]>otsu_th,cmap = 'gray')
plt.show()

The shadow, on the other hand, appears as a clear dark region in band 7. The histogram clearly shows a set of pixels that have been clipped in the lower range. If we create a mask just above that value, we get:

In [28]:
plt.hist(np.ravel(image_stack2[6,:,:]), bins = np.arange(0,5000,100))
plt.show()
In [29]:
plt.imshow(image_stack2[6,:,:]<100,cmap = 'gray')
plt.show()

Now we have two masks that we can combine into one logical mask using Numpy logical operations:

In [30]:
global_mask = (image_stack2[0,:,:]>otsu_th) | (image_stack2[6,:,:]<100)
In [31]:
plt.imshow(global_mask)
plt.show()

We additionally do one round of binary closing/opening to close holes in our mask and remove small specks:

In [32]:
from skimage.morphology import binary_closing, disk, binary_opening
In [33]:
global_mask = binary_opening(binary_closing(global_mask, selem=disk(5)),selem= disk(1))
In [34]:
plt.imshow(global_mask, cmap = 'gray')
plt.show()

We can now apply the mask to our entire image stack, using the fact that the 2D mask is broadcast along the leading dimension of the stack:

In [35]:
image_stack2_masked = image_stack2*~global_mask
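
As a minimal sketch of that broadcasting behaviour on made-up arrays:

    import numpy as np

    stack = np.ones((7, 3, 4), dtype=int)   # hypothetical (bands, rows, cols)
    mask = np.zeros((3, 4), dtype=bool)
    mask[0, :] = True                        # flag the first row as "bad" pixels

    masked = stack * ~mask                   # the (3, 4) mask is applied to all 7 bands
    print(masked[0])                         # first row zeroed
    print(masked[-1])                        # same pixels zeroed in the last band too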

Normally now we should be able to plot our RGB image without having to correct for the very bright cloud pixels:

In [36]:
image_RGB = image_stack2_masked[[3,2,1],:,:]
v_min, v_max = np.percentile(image_RGB, (0.2, 99.8))
plt.imshow(rescale_intensity(np.moveaxis(image_RGB,0,2).astype(float),in_range=(v_min, v_max),out_range=(0, 1)));

Calculating the effect of fire

By comparing two bands that reflect vegetation and burned/bare earth, we can estimate where fire caused damage. One typical value that is measured is the normalized burn ratio (Band5 - Band7)/(Band5 + Band7):

In [37]:
burn_day1 = (image_stack2_masked[4]-image_stack2_masked[6])/(image_stack2_masked[4]+image_stack2_masked[6])
burn_day2 = (image_stack[4]-image_stack[6])/(image_stack[4]+image_stack[6])
difference = burn_day1-burn_day2

# MZ: to compare the 2 images to see where it has burnt
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in true_divide
  """Entry point for launching an IPython kernel.
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in true_divide
  
In [38]:
f, axarr = plt.subplots(1,3, figsize= (15,10))
axarr[0].imshow(burn_day1,cmap = 'hot')
axarr[0].axis('off')
axarr[0].set_title('Day1')
axarr[1].imshow(burn_day2,cmap = 'hot')
axarr[1].axis('off')
axarr[1].set_title('Day2')
axarr[2].imshow(difference,cmap = 'hot')
axarr[2].axis('off')
axarr[2].set_title('Difference')
Out[38]:
Text(0.5, 1.0, 'Difference')
In [39]:
plt.hist(np.ravel(difference));
/usr/local/lib/python3.5/dist-packages/numpy/lib/histograms.py:754: RuntimeWarning: invalid value encountered in greater_equal
  keep = (tmp_a >= first_edge)
/usr/local/lib/python3.5/dist-packages/numpy/lib/histograms.py:755: RuntimeWarning: invalid value encountered in less_equal
  keep &= (tmp_a <= last_edge)
In [40]:
plt.imshow(difference>0.5)
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in greater
  """Entry point for launching an IPython kernel.
Out[40]:
<matplotlib.image.AxesImage at 0x7f65240aaef0>
07-Functions

7. Functions

In the previous chapter we developed a small procedure to segment our image of nuclei. If you develop such a routine, you are going to re-use it multiple times, so it makes sense to package it into a re-usable unit.

In this brief chapter we summarize how to achieve that.

In [1]:
#importing packages
import numpy as np
import matplotlib.pyplot as plt
plt.gray();

from skimage.external.tifffile import TiffFile

import skimage.morphology as skm
import skimage.filters as skf
In [2]:
#load the image to process
data = TiffFile('Data/30567/30567.tif')
image = data.pages[3].asarray()
In [3]:
plt.imshow(image);

Let us summarize all the necessary steps within one code block:

In [4]:
from skimage.measure import label, regionprops

#median filter
image_med = skf.rank.median(image,selem=np.ones((2,2)))
#local thresholding
image_local_threshold = skf.threshold_local(image_med,block_size=51)
image_local = image_med > image_local_threshold
#remove tiny features
image_local_eroded = skm.binary_erosion(image_local, selem= skm.disk(1))
#label image
image_labeled = label(image_local_eroded)
#analyze regions
our_regions = regionprops(image_labeled)
#create a new mask with constraints on the regions to keep
newimage = np.zeros(image.shape)
#fill in using region coordinates
for x in our_regions:
    if (x.area>200):# and (x.eccentricity<0.8):
        newimage[x.coords[:,0],x.coords[:,1]] = 1
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)
In [5]:
plt.figure(figsize=(10,10))
plt.imshow(newimage)
Out[5]:
<matplotlib.image.AxesImage at 0x7fcb880eeb38>

We can now make a function out of it. You can choose the "level" of your function depending on your needs. For example, you could pass a filename and a plane index to the function and make it import your data, or you can pass an image directly.

In addition to the image, you could pass other arguments if you want to make your function more general. For example, you might not always want to filter objects of the same size or shape, so you can set those as parameters:

In [6]:
from skimage.measure import label, regionprops

def detect_nuclei(image, size = 200, shape = 0.8):
    #median filter
    image_med = skf.rank.median(image,selem=np.ones((2,2)))
    #local thresholding
    image_local_threshold = skf.threshold_local(image_med,block_size=51)
    image_local = image_med > image_local_threshold
    #remove tiny features
    image_local_eroded = skm.binary_erosion(image_local, selem= skm.disk(1))
    #label image
    image_labeled = label(image_local_eroded)
    #analyze regions
    our_regions = regionprops(image_labeled)
    #create a new mask with constraints on the regions to keep
    newimage = np.zeros(image.shape)
    #fill in using region coordinates
    for x in our_regions:
        if (x.area>size) and (x.eccentricity<shape):
            newimage[x.coords[:,0],x.coords[:,1]] = 1
            
    return newimage

And now we can test the function (which now also appears in autocompletion):

In [7]:
nuclei = detect_nuclei(image, size = 400)
plt.imshow(nuclei);
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)
/usr/local/lib/python3.5/dist-packages/skimage/measure/_regionprops.py:250: UserWarning: regionprops and image moments (including moments, normalized moments, central moments, and inertia tensor) of 2D images will change from xy coordinates to rc coordinates in version 0.16.
See http://scikit-image.org/docs/0.14.x/release_notes_and_installation.html#deprecations for details on how to avoid this message.
  warn(XY_TO_RC_DEPRECATION_MESSAGE)
/usr/local/lib/python3.5/dist-packages/skimage/measure/_regionprops.py:260: UserWarning: regionprops and image moments (including moments, normalized moments, central moments, and inertia tensor) of 2D images will change from xy coordinates to rc coordinates in version 0.16.
See http://scikit-image.org/docs/0.14.x/release_notes_and_installation.html#deprecations for details on how to avoid this message.
  warn(XY_TO_RC_DEPRECATION_MESSAGE)

In order to avoid cluttering your notebooks with function definitions and to be able to reuse your functions across multiple notebooks, I also strongly advise you to create your own module files. Those are .py files that group multiple functions and that can be called from any notebook.

Let's create one, call it my_module.py and copy our function into it. Now we can use the function like this:

In [8]:
import my_module
#or alternatively: from my_module import detect_nuclei
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-8-a9447689b240> in <module>()
----> 1 import my_module
      2 #or alternatively: from my_module import detect_nuclei

ImportError: No module named 'my_module'
In [ ]:
nuclei2 = my_module.detect_nuclei(image)

We get an error because, in that module, we use skimage functions that were not imported in the module itself. We have them in the notebook, but they are not accessible from there. The module file thus needs its own import statements; a minimal sketch of a working my_module.py (reusing the function defined above) could look like this:
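    # my_module.py -- a sketch: the module file needs its own imports,
    # the notebook's imports are not visible from here
    import numpy as np
    import skimage.filters as skf
    import skimage.morphology as skm
    from skimage.measure import label, regionprops

    def detect_nuclei(image, size=200, shape=0.8):
        image_med = skf.rank.median(image, selem=np.ones((2, 2)))
        image_local_threshold = skf.threshold_local(image_med, block_size=51)
        image_local = image_med > image_local_threshold
        image_local_eroded = skm.binary_erosion(image_local, selem=skm.disk(1))
        image_labeled = label(image_local_eroded)
        our_regions = regionprops(image_labeled)
        newimage = np.zeros(image.shape)
        for x in our_regions:
            if (x.area > size) and (x.eccentricity < shape):
                newimage[x.coords[:, 0], x.coords[:, 1]] = 1
        return newimage

Since simply re-importing a module doesn't pick up such changes, we restart the kernel: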

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray();
from skimage.external.tifffile import TiffFile

data = TiffFile('Data/30567/30567.tif')
image = data.pages[3].asarray()

import my_module
nuclei2 = my_module.detect_nuclei(image)
In [ ]:
plt.imshow(nuclei2);

Your own modules are accessible if they are in the same folder as your notebook or on some path recognized by Python (on the PYTHONPATH). For more details see here.

08-Pattern_matching

8. Pattern matching, local maxima

Sometimes thresholding and binary operations are not appropriate tools to segment image features. This is particularly true when the object to be detected has a specific shape but a very variable intensity, or when the image has low contrast. In that case it is useful to build a "model" of the object and look for similar shapes in the image. This is very similar in essence to convolution; however, the operation is normalized so that after filtering every pixel is assigned a value between -1 (anti-correlation) and +1 (perfect correlation). One can then look for local matching maxima to identify objects.
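As a sketch of that normalization, the value computed at each position is essentially the normalized cross-correlation between the template and the image patch under it (a simplified illustration; match_template implements an efficient version of this):

    import numpy as np

    def ncc(patch, template):
        """Normalized cross-correlation between an image patch and a template:
        a value between -1 and +1, independent of absolute intensity and contrast."""
        p = patch - patch.mean()
        t = template - template.mean()
        return np.sum(p * t) / np.sqrt(np.sum(p ** 2) * np.sum(t ** 2))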

In [1]:
from skimage.feature import match_template, peak_local_max
import skimage.io as io
In [2]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray()
from skimage.external.tifffile import TiffFile

8.1 Virus on electron microscopy

Electron microscopy is a typical case where pixel intensity cannot be directly used for segmentation. For example, in the following picture of a virus, even though we see the virus particles as white disks, many other regions are just as bright.

In [3]:
#load the image to process
image = io.imread('http://res.publicdomainfiles.com/pdf_view/29/13512183019720.jpg')
#image = io.imread('http://res.publicdomainfiles.com.s3.amazonaws.com/pdf_alternate/29/13512183019720.tif?AWSAccessKeyId=AKIAJBE24BKMOLMJBBXA&Expires=1579466193&Signature=uMi8UqvJbUX2mGkgZuEGAx6J6r4%3D')
In [4]:
plt.imshow(image);

What is unique to the virus is the shape of the particles. So let's try to make a model of them to do template matching. Essentially, a virus particle appears as a white disk surrounded by a thin dark line:

In [5]:
radius = 90

template = np.zeros((220,220))
center = [(template.shape[0]-1)/2,(template.shape[1]-1)/2]
Y, X = np.mgrid[0:template.shape[0],0:template.shape[1]]
dist_from_center = np.sqrt((X - center[0])**2 + (Y-center[1])**2)
template[dist_from_center<=radius] = 1
template[dist_from_center>radius+3] = 1

# MZ: identify all areas in the image that match the pattern of your interest
In [6]:
plt.imshow(template)
Out[6]:
<matplotlib.image.AxesImage at 0x7fc3bea62d68>

Now we do the template matching. Note that we specify the option pad_input to make sure the coordinates of the local maxima are not affected by border effects (try setting it to False to see the effect):

In [7]:
matched = match_template(image=image, template=template, pad_input=True)

And this is what the matched image looks like: wherever there is a particle, a local maximum appears.

In [8]:
plt.imshow(matched)
Out[8]:
<matplotlib.image.AxesImage at 0x7fc3bd1ae048>

We can try to detect the local maxima to get the position of each particle. For that we use the skimage peak_local_max function. We specify that two maxima cannot be closer than 60 pixels (min_distance), and we also set a threshold on the quality of the matching (threshold_abs). Finally, we want to recover a list of indices rather than a binary mask of local maxima.

In [9]:
local_max_indices = peak_local_max(matched, min_distance=60,indices=True, threshold_abs=0.1)

Finally we can plot the result:

In [10]:
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.plot(local_max_indices[:,1],local_max_indices[:,0],'ro')
plt.show()

8.2 Fluorescence microscopy

In the following example we are looking at nuclei imaged by fluorescence microscopy. Here, intensity can clearly be used for segmentation, but it is going to lead to merged objects when nuclei are too close. To identify each nucleus in a first step before the actual segmentation, we can again use template matching.

In [11]:
import skimage.io as io
In [12]:
image = io.imread('Data/BBBC007_v1_images/A9/A9 p9d.tif')
In [13]:
plt.figure(figsize=(10,10))
plt.imshow(image);

In this image, nuclei have a radius of around 10 pixels. We can again generate a template:

In [14]:
radius = 10

template = np.zeros((25,25))
center = [(template.shape[0]-1)/2,(template.shape[1]-1)/2]
Y, X = np.mgrid[0:template.shape[0],0:template.shape[1]]
dist_from_center = np.sqrt((X - center[0])**2 + (Y-center[1])**2)
template[dist_from_center<=radius] = 1
In [15]:
plt.imshow(template, cmap = 'gray')
plt.show()
In [16]:
matched = match_template(image=image, template=template, pad_input=True)
In [17]:
plt.figure(figsize=(10,10))
plt.imshow(matched, cmap = 'gray', vmin = -1, vmax = 1)
Out[17]:
<matplotlib.image.AxesImage at 0x7fc3bd16c9b0>
In [18]:
local_max = peak_local_max(matched, min_distance=10,indices=False)
local_max_indices = peak_local_max(matched, min_distance=10,indices=True)
In [19]:
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.plot(local_max_indices[:,1],local_max_indices[:,0],'ro');

We didn't set any threshold on the intensity of the local maxima; therefore a few detected cells are clearly in the background. We can mask those using a rough threshold.

In [20]:
import skimage.filters
import skimage.morphology
In [21]:
otsu = skimage.filters.threshold_otsu(image)

otsu_mask = image>otsu

plt.imshow(otsu_mask);

We can dilate all the regions a bit to make sure we fill the holes and do not cut off dim cells:

In [22]:
otsu_mask = skimage.morphology.binary_dilation(otsu_mask, np.ones((5,5)))
plt.imshow(otsu_mask);

Now we can mask the image returned by the peak finder:

In [23]:
masked_peaks = local_max & otsu_mask

And recover the coordinates of the detected peaks:

In [24]:
peak_coords = np.argwhere(masked_peaks)
In [25]:
plt.figure(figsize=(10,10))
plt.imshow(image, cmap = 'gray',vmax = 100)
plt.plot(peak_coords[:,1],peak_coords[:,0],'ro');
In [26]:
# intensity is high, they touch each other -> would be complicated to do without pattern matching
09-Watershed

9. Watershed algorithm

In a number of cases, one is able to detect the positions of multiple objects in an image, but it might be difficult to segment them because they are close together or very irregular. This is where the watershed algorithm is very practical. It takes as input an image and a series of seeds, and expands a region around each seed as if it were filling a topographic map.
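As a minimal sketch of that flooding analogy, consider a made-up one-row "topography" with two basins separated by a ridge (the import location matches the scikit-image version used in this course):

    import numpy as np
    from skimage.morphology import watershed

    elevation = np.array([[3, 1, 2, 3, 4, 5, 4, 3, 2, 1]])
    seeds = np.zeros_like(elevation)
    seeds[0, 1] = 1    # seed in the left basin
    seeds[0, 8] = 2    # seed in the right basin

    print(watershed(elevation, markers=seeds))
    # each basin is flooded from its own seed; the two regions meet at the ridge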

In [1]:
from skimage.morphology import watershed
from skimage.measure import regionprops
In [2]:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
plt.gray()
from skimage.external.tifffile import TiffFile
import skimage.io as io
from skimage.morphology import label

import course_functions
In [3]:
#load the image to process
image = io.imread('Data/BBBC007_v1_images/A9/A9 p9d.tif')

9.1 Create seeds

We can use the code of the last chapter to produce the seeds. We added the necessary code to our course module called course_functions:

In [4]:
#generate template
template = course_functions.create_disk_template(10)
#generate seed map
seed_map, global_mask = course_functions.detect_nuclei_template(image, template)
In [5]:
plt.imshow(global_mask)
plt.show()

We need to create a labeled image, so that the watershed algorithm creates regions with different labels:

In [6]:
plt.figure(figsize=(10,10))
plt.imshow(image);
In [7]:
plt.figure(figsize=(10,10))
plt.imshow(seed_map);
In [8]:
seed_label = label(seed_map)
In [9]:
plt.figure(figsize=(10,10))
plt.imshow(seed_label)
Out[9]:
<matplotlib.image.AxesImage at 0x7fb84e9f8ac8>

Now we can use the image and the labeled seed map to run the watershed algorithm. However, remember the analogy of filling a topographic map: our nuclei should be "deep" regions, so we need to invert the image. Finally, we also require that a thin line separates the regions (the watershed_line option).

In [10]:
watershed_labels = watershed(image = -image, markers = seed_label, watershed_line=True)
In [11]:
watershed_labels.max()
Out[11]:
136
In [12]:
#create a random map 
plt.figure(figsize = (10,10))

cmap = matplotlib.colors.ListedColormap ( np.random.rand ( 256,3))

plt.imshow(image)
plt.imshow(watershed_labels, cmap = cmap, alpha = 0.3);

The algorithm worked well and created regions around each nucleus. However, we are only interested in the actual nuclei. So let's use our global mask to limit ourselves to those regions:

In [13]:
watershed_labels = watershed(image = -image, markers = seed_label, mask = global_mask, watershed_line=True)
In [14]:
plt.figure(figsize = (10,10))
plt.imshow(image)
plt.imshow(watershed_labels, cmap = cmap, alpha = 0.3)
plt.plot(np.argwhere(seed_map)[:,1],np.argwhere(seed_map)[:,0],'o');

Finally, now that you have all the nuclei segmented, you can proceed to do actual measurements, e.g. by using the previously seen regionprops function.

In [15]:
myregions = regionprops(watershed_labels)
In [18]:
areas = [x.area for x in myregions]
In [17]:
plt.hist(areas);
10-3D_case

10. 3D case

Until now we have exclusively processed 2D images, even though they sometimes came from 3D acquisitions. We are now going to look at an example of 3D processing, where we use the same tools as in 2D but in their 3D version.

Extending an image processing pipeline from 2D to 3D can be challenging for two reasons: first, computations can become very slow because the amount of data usually grows by roughly an order of magnitude, and second, visualization of both the original and the processed data is more complicated.

In [1]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray()
from ipywidgets import interact, IntSlider, fixed

import skimage.io as io
from skimage.transform import rescale, resize
from skimage.morphology import white_tophat
from skimage.feature import peak_local_max
from skimage.measure import regionprops, label
from skimage.filters import threshold_otsu, gaussian
import scipy.ndimage as ndi


#convenience functions
#create a segmentation image where background is NaN to use as overlay
def nan_image(image):
    image_nan = np.zeros(image.shape)
    image_nan[:] = np.nan
    for i in range(1, int(image.max()) + 1):
        image_nan[image == i] = i
    return image_nan

#image plotting function used in concert with ipywidget interact. Plots a single image.
def plot_plane(t,im, cmap):
    
    plt.figure(figsize=(10,10))
    plt.imshow(im[t,:,:],cmap = cmap)
    plt.show()

#image plotting function used in concert with ipywidget interact. Plots two superposed images.
def plot_superpose(t, im1, im2, cmap):
    plt.figure(figsize=(10,10))
    plt.imshow(im1[t,:,:],cmap = 'gray')
    plt.imshow(im2[t,:,:],cmap = cmap, alpha = 0.3, vmin = 0, vmax = im2.max())
    plt.show()

#Wrapping function to create an interactive view of an image stack for one or a pair of stacks
def image_browser(image, image2 = None , color = True):
    if color == True:
        vals = np.linspace(0,1,int(image.max()))
        np.random.shuffle(vals)
        cmap = plt.cm.colors.ListedColormap(plt.cm.jet(vals))
    else:
        cmap = 'gray'
    
    if image2 is None:
        interact(plot_plane, t = IntSlider(min=0,max=image.shape[0]-1,step=1,value=0,
                                           continuous_update = False),im = fixed(image), cmap = fixed(cmap));
    else:
        interact(plot_superpose, t = IntSlider(min=0,max=image.shape[0]-1,step=1,value=0,
                                               continuous_update = False),im1 = fixed(image), im2 = fixed(image2),cmap = fixed(cmap));
        
In [2]:
from skimage.morphology import binary_closing, white_tophat, label, watershed
from skimage.measure import regionprops, label
from skimage.feature import match_template, peak_local_max

We are going to look at a dataset of an embryo imaged in 3D in multiple wavelengths. We will first focus on one channel where the nuclei are marked. Then we will use that information to extract spot-like structures from another channel.

The goal here is to illustrate that most functions used before in 2D can be used in the same way in 3D, but with some new issues, especially around visualizations and computing time.

Let's load the first image and look at it along two projections:

In [3]:
image = io.imread('Data/BBBC032_v1_dataset/BMP4blastocystC3.tif')
In [4]:
image.shape
Out[4]:
(172, 1344, 1024)
In [5]:
np.size(image)/10**6
Out[5]:
236.716032
In [6]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(image,axis = 0))
ax[1].imshow(np.max(image,axis = 1));

The image is really large, so any operation we do on it will be very slow (e.g. a filter will have to visit every single one of the roughly 237 million pixels). As we just want to identify the nuclei, we don't care about the fine details in the image, so a practical thing to do is to resample it. As the pixel spacing in z is larger than in xy (the image on the right looks squished), we use the opportunity to "stretch" the image along z during resampling:

In [7]:
image_resampled = rescale(image,(0.5,0.15,0.15), multichannel=False,preserve_range=True, anti_aliasing=True)
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
In [8]:
image_resampled.shape
Out[8]:
(86, 202, 154)

Let's look at the result:

In [9]:
#image_resampled = gaussian(image_resampled,sigma=(2,2,2))
In [10]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(image_resampled,axis = 0))
ax[1].imshow(np.max(image_resampled,axis = 1));

To remove some of the glare in the image we can use a top-hat filter, which keeps objects that are smaller than the structuring element and brighter than their surroundings. "Flat" low-intensity regions are therefore removed:
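As a reminder of what the filter computes: the white top-hat is simply the image minus its morphological opening, as this small sketch on a random array illustrates:

    import numpy as np
    from skimage.morphology import white_tophat, opening

    img = np.random.randint(0, 100, (32, 32)).astype(np.uint8)
    selem = np.ones((5, 5))

    # white top-hat = image - opening: only peaks smaller than selem survive
    print(np.array_equal(white_tophat(img, selem=selem),
                         img - opening(img, selem=selem)))   # True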

In [11]:
from skimage.morphology import binary_closing, white_tophat, label, watershed, black_tophat
In [12]:
im_tophat = white_tophat(image_resampled,selem=np.ones((20,20,20)))
In [13]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(im_tophat,axis = 0))
ax[1].imshow(np.max(im_tophat,axis = 1));

We can have a look at what happens if we do a classical thresholding of the image, which works just like in 2D.

In [14]:
image_browser(im_tophat>threshold_otsu(im_tophat), color=False)

The result is poor because the nucleus signal is not homogeneous, i.e. each nucleus is made of sparse bright spots. To identify larger-scale structures, we thus have to filter the image with a structuring element that has approximately the shape of the nuclei. A typical filter used to detect "blobs" is the LoG filter (Laplacian of Gaussian).

The filter doesn't exist per se in scikit-image, so we are going to use the one from scipy.

In [15]:
im_log = -ndi.filters.gaussian_laplace(im_tophat,(4,4,4))
In [16]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(im_log,axis = 0), cmap = 'gray')
ax[1].imshow(np.max(im_log,axis = 1), cmap = 'gray')
plt.show()

Now that we have more homogeneous regions, we can try again to use a classical thresholding, which should give a much better result.

In [17]:
image_browser(im_log>threshold_otsu(im_log), color = False)

We can now go back to some of the methods we have seen previously: we can find local maxima corresponding to single nuclei, define a global mask, and use the watershed algorithm for segmentation.

In [18]:
peak_image = peak_local_max(im_log, footprint=np.ones((10,10,10)), indices=False, threshold_abs= 1)
In [ ]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(im_log,axis = 0), cmap = 'gray')
ax[0].imshow(np.max(peak_image,axis = 0), cmap = 'Reds',alpha = 0.3)

ax[1].imshow(np.max(im_log,axis = 1), cmap = 'gray')
ax[1].imshow(np.max(peak_image,axis = 1), cmap = 'Reds',alpha = 0.3)

plt.show()
In [ ]:
mask = im_log>threshold_otsu(im_log)
im_label = label(peak_image)
im_water = watershed(image=-im_log,markers=im_label,mask = mask, compactness=0.01)
In [ ]:
image_browser(im_log, im_water, color = False)
In [ ]:
image_browser(image_resampled, im_water, color = False)

The result is rather crude but a good start for potential further processing. Note that we didn't segment the nuclei per se but their convolution with a LoG filter. We can also visualize the result in 3D. For that we use the ipyvolume package, which allows one to represent 3D data in various ways, for example as an isosurface (on a binary image, this just gives the surface of the objects):

In [ ]:
import ipyvolume.pylab as ipv
In [ ]:
ipv.figure()
ipv.plot_isosurface(im_water>0)
ipv.show()

But we can of course also show the volume data of our resampled image:

In [ ]:
ipv.figure()
ipv.volshow(im_tophat.astype(int).T)
ipv.style.background_color('black')
ipv.show()
/usr/local/lib/python3.5/dist-packages/ipyvolume/serialize.py:81: RuntimeWarning: invalid value encountered in true_divide
  gradient = gradient / np.sqrt(gradient[0]**2 + gradient[1]**2 + gradient[2]**2)

Detecting features within features

In [ ]:
image2 = io.imread('Data/BBBC032_v1_dataset/BMP4blastocystC1.tif')

In another wavelength, the collected signal appears as puncta in the image. We could, for example, wish to know how many of those puncta appear in the nuclei. Here we cannot downscale the image, as those small structures would otherwise disappear, so we use the fact that we know where the nuclei are to analyse just those regions.

Let us first resize our segmentation map back to the original image size. Note that we use order = 0 (nearest neighbors) to preserve our labeling.

In [ ]:
im_nuclei_segm = resize(im_water, image.shape, order = 0, preserve_range=True)
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
In [ ]:
np.unique(im_nuclei_segm)

Let's recover all the single nuclei regions using regionprops:

In [ ]:
regions = regionprops(im_nuclei_segm.astype(int), image2)
In [ ]:
im_crop = image2#regions[10].intensity_image
In [ ]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(im_crop,axis = 0))
ax[1].imshow(np.max(im_crop,axis = 1))

The spots in those images have an approximately Gaussian shape. So we can try to filter our image with an appropriately sized 3D Gaussian to detect them:

In [ ]:
im_gauss = gaussian(im_crop, sigma = [1,1.5,1.5], preserve_range=True)
In [ ]:
fig, ax = plt.subplots(1,2,figsize = (10,10))
ax[0].imshow(np.max(im_gauss,axis = 0))
ax[1].imshow(np.max(im_gauss,axis = 1));
In [ ]:
peaks = peak_local_max(im_gauss,min_distance=4)
In [ ]:
fig, ax = plt.subplots(1,2,figsize = (20,10))
ax[0].imshow(np.max(im_gauss,axis = 0))
ax[0].plot(peaks[:,2], peaks[:,1],'ro',markersize = 0.1)
ax[1].imshow(np.max(im_gauss,axis = 1))
ax[1].plot(peaks[:,2], peaks[:,0],'ro',markersize = 0.1);
In [ ]:
plt.hist(im_gauss[peaks[:,0],peaks[:,1],peaks[:,2]],bins = np.arange(200,1600,1));
In [ ]:
plt.hist(im_gauss[peaks[:,0],peaks[:,1],peaks[:,2]],bins = np.arange(200,800,10));
In [ ]:
peak_val = im_gauss[peaks[:,0],peaks[:,1],peaks[:,2]]
In [ ]:
peaks_selected = peaks[peak_val>600,:]
In [ ]:
fig, ax = plt.subplots(1,2,figsize = (20,10))
ax[0].imshow(np.max(im_gauss,axis = 0))
ax[0].plot(peaks_selected[:,2], peaks_selected[:,1],'ro',markersize = 0.1)
ax[1].imshow(np.max(im_gauss,axis = 1))
ax[1].plot(peaks_selected[:,2], peaks_selected[:,0],'ro',markersize = 0.1);
In [ ]:
peak_crop = peaks_selected[peaks_selected[:,2]>400,:]
peak_crop = peak_crop[peak_crop[:,2]<600,:]
peak_crop = peak_crop[peak_crop[:,1]<800,:]
peak_crop = peak_crop[peak_crop[:,1]>600,:]


plt.figure(figsize = (20,10))
plt.imshow(np.max(im_gauss,axis = 0)[600:800,400:600])
plt.plot(peak_crop[:,2]-400, peak_crop[:,1]-600,'ro',markersize = 1);
11-Complete_analysis

11. Create a short complete analysis

Until now we have only seen pieces of code to do some specific segmentation of images. Typically, however, one is going to run a complete analysis, including image processing and some further data analysis.

Here we are going to come back to an earlier dataset where nuclei appeared as circles. That dataset was a time-lapse, and we might be interested in knowing how those nuclei move over time. So we will have to analyze the images at every time point, find the positions of the nuclei, track them, and measure the distances traveled.
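As a preview of the tracking step, here is a minimal sketch (an assumption on our part, not necessarily the approach developed below) of how centroids from consecutive time points could be linked by nearest-neighbour matching, assuming displacements between frames are small:

    import numpy as np
    from scipy.spatial import cKDTree

    def link_nearest(centroids_t0, centroids_t1, max_dist=20):
        """Match each centroid at time t to its nearest neighbour at time t+1
        and return the displacement vectors of the matched nuclei.
        centroids_t0 and centroids_t1 are (N, 2) arrays."""
        tree = cKDTree(centroids_t1)
        dist, idx = tree.query(centroids_t0, distance_upper_bound=max_dist)
        matched = np.isfinite(dist)            # unmatched points get dist = inf
        return centroids_t1[idx[matched]] - centroids_t0[matched]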

In [1]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray()
from skimage.external.tifffile import TiffFile
from skimage.measure import label, regionprops

#import your function
from course_functions import detect_nuclei

11.1 Remembering previous work

Let's remember what we did in previous chapters. We opened the tif dataset, selected a specific plane to look at and segmented the nuclei:

In [2]:
#load the image to process
data = TiffFile('Data/30567/30567.tif')
image = data.pages[3].asarray()
#create your mask
nuclei = detect_nuclei(image)
#create a nan-mask for overlay
nuclei_nan = nuclei.copy().astype(float)
nuclei_nan[nuclei == 0] = np.nan

#plot
plt.figure(figsize=(10,10))
plt.imshow(image, cmap = 'gray')
plt.imshow(nuclei_nan, cmap = 'Reds',vmin = 0,vmax = 1,alpha = 0.6)
plt.show()
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)

Let's also recall the format of that file (usually one would already know it, or would verify it e.g. in Fiji):

In [3]:
data.info()
Out[3]:
'TIFF file: 30567.tif, 473 MiB, big endian, ome, 720 pages\n\nSeries 0: 72x2x5x512x672, uint16, TCZYX, 720 pages, not mem-mappable\n\nPage 0: 512x672, uint16, 16 bit, minisblack, raw, ome|contiguous\n* 256 image_width (1H) 672\n* 257 image_length (1H) 512\n* 258 bits_per_sample (1H) 16\n* 259 compression (1H) 1\n* 262 photometric (1H) 1\n* 270 image_description (3320s) b\'<?xml version="1.0" encoding="UTF-8"?><!-- Wa\n* 273 strip_offsets (86I) (182, 8246, 16310, 24374, 32438, 40502, 48566, 56630,\n* 277 samples_per_pixel (1H) 1\n* 278 rows_per_strip (1H) 6\n* 279 strip_byte_counts (86I) (8064, 8064, 8064, 8064, 8064, 8064, 8064, 8064, \n* 282 x_resolution (2I) (1, 1)\n* 283 y_resolution (2I) (1, 1)\n* 296 resolution_unit (1H) 1\n* 305 software (17s) b\'LOCI Bio-Formats\''

On the first line we see that we have 72 time points, 2 colors and 5 planes per color, i.e. 72 × 2 × 5 = 720 pages.

The nuclei are going to move a bit in Z (perpendicular to the image plane) over time, so it will be more accurate to segment a projection of the entire stack. But how do we get a complete stack at a given time point? Let's plot the first few images to understand how they are stored.

In [4]:
for i in range(15):
    plt.imshow(data.pages[i].asarray())
    plt.show()

11.2 Processing a time-lapse

So it looks like we have all planes of color 1 at time = 0, then all planes of color 2 at time = 0, then all planes of color 1 at time = 1, and so on. Therefore, to get a full stack at a given time point we have to use:

In [5]:
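# each time point occupies 10 consecutive pages: 5 z-planes of color 0, then 5 of color 1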
images_per_time = 10
time = 10
color = 1

image_stack = np.stack([x.asarray() 
                        for x in data.pages[time*images_per_time+0+color*5:time*images_per_time+5+color*5]])

plt.imshow(np.max(image_stack, axis = 0));

Let's make a little function out of that:

In [6]:
def get_stack(data, time, color, images_per_time):
    # each time point holds images_per_time pages; the 5 z-planes
    # of a given color start at page time*images_per_time + color*5
    start = time * images_per_time + color * 5
    image_stack = np.stack([x.asarray() for x in data.pages[start:start + 5]])
    return image_stack
In [7]:
plt.imshow(np.max(get_stack(data, 0, 1, 10), axis = 0));

Now we can choose any time point and segment it using our two functions. In addition, we can use the region properties to get the average position (centroid) of each detected nucleus:

In [8]:
#choose a time
time = 10

#load the stack and segment it
image_stack = get_stack(data, time,0,10)
image = np.max(image_stack, axis = 0)
nuclei = detect_nuclei(image)

#find position of nuclei
nuclei_label = label(nuclei)
regions = regionprops(nuclei_label)
centroids = np.array([x.centroid for x in regions])

#create a nan-mask for overlay
nuclei_nan = nuclei.copy().astype(float)
nuclei_nan[nuclei == 0] = np.nan

#plot the result
plt.figure(figsize=(10,10))
plt.imshow(image, cmap = 'gray')
plt.imshow(nuclei_nan, cmap = 'Reds',vmin = 0,vmax = 1,alpha = 0.6)
plt.plot(centroids[:,1], centroids[:,0],'o');
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)

Now we can repeat the same operation for multiple time points and append each array of coordinates to a list to keep them safe:

In [9]:
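# segment the first 10 time points and collect the centroids of all nuclei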
centroids_time = []
for time in range(10):

    #load the stack and segment it
    image_stack = get_stack(data, time,0,10)
    image = np.max(image_stack, axis = 0)
    nuclei = detect_nuclei(image)

    #find position of nuclei
    nuclei_label = label(nuclei)
    regions = regionprops(nuclei_label)
    centroids = np.array([x.centroid for x in regions])
    
    centroids_time.append(centroids)
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)

Let's plot all those centroids for all time points:

In [10]:
for x in centroids_time:
    plt.plot(x[:,1],x[:,0],'o')

We definitely see tracks corresponding to single nuclei here. How are we going to track them?

11.3 Tracking trajectories

The wonderful thing about Python is that there are a lot of resources one can simply reuse. For example, if we Google "python tracking", one of the first hits is the package trackpy, which was originally designed to track diffusing particles but can be repurposed for many other things.

Browsing through the documentation, we see that we need the function link_df. Here "df" stands for dataframe, a tabular data format offered by the package Pandas that is very close to the R dataframe. (In recent trackpy versions, link_df is simply an alias of link, as the help below shows.) Let's load those two modules:

In [11]:
import trackpy
import pandas as pd

And look for some help:

In [12]:
help(trackpy.link_df)
Help on function link in module trackpy.linking.linking:

link(f, search_range, pos_columns=None, t_column='frame', **kwargs)
    link(f, search_range, pos_columns=None, t_column='frame', memory=0,
        predictor=None, adaptive_stop=None, adaptive_step=0.95,
        neighbor_strategy=None, link_strategy=None, dist_func=None,
        to_eucl=None)
    
    Link a DataFrame of coordinates into trajectories.
    
    Parameters
    ----------
    f : DataFrame
        The DataFrame must include any number of column(s) for position and a
        column of frame numbers. By default, 'x' and 'y' are expected for
        position, and 'frame' is expected for frame number. See below for
        options to use custom column names.
    search_range : float or tuple
        the maximum distance features can move between frames,
        optionally per dimension
    pos_columns : list of str, optional
        Default is ['y', 'x'], or ['z', 'y', 'x'] when 'z' is present in f
    t_column : str, optional
        Default is 'frame'
    memory : integer, optional
        the maximum number of frames during which a feature can vanish,
        then reappear nearby, and be considered the same particle. 0 by default.
    predictor : function, optional
        Improve performance by guessing where a particle will be in
        the next frame.
        For examples of how this works, see the "predict" module.
    adaptive_stop : float, optional
        If not None, when encountering an oversize subnet, retry by progressively
        reducing search_range until the subnet is solvable. If search_range
        becomes <= adaptive_stop, give up and raise a SubnetOversizeException.
    adaptive_step : float, optional
        Reduce search_range by multiplying it by this factor.
    neighbor_strategy : {'KDTree', 'BTree'}
        algorithm used to identify nearby features. Default 'KDTree'.
    link_strategy : {'recursive', 'nonrecursive', 'numba', 'hybrid', 'drop', 'auto'}
        algorithm used to resolve subnetworks of nearby particles
        'auto' uses hybrid (numba+recursive) if available
        'drop' causes particles in subnetworks to go unlinked
    dist_func : function, optional
        a custom distance function that takes two 1D arrays of coordinates and
        returns a float. Must be used with the 'BTree' neighbor_strategy.
    to_eucl : function, optional
        function that transforms a N x ndim array of positions into coordinates
        in Euclidean space. Useful for instance to link by Euclidean distance
        starting from radial coordinates. If search_range is anisotropic, this
        parameter cannot be used.
    
    Returns
    -------
    DataFrame with added column 'particle' containing trajectory labels.
    The t_column (by default: 'frame') will be coerced to integer.
    
    See also
    --------
    link_iter
    
    Notes
    -----
    This is an implementation of the Crocker-Grier linking algorithm.
    [1]_
    
    References
    ----------
    .. [1] Crocker, J.C., Grier, D.G. http://dx.doi.org/10.1006/jcis.1996.0217

So we have a lot of options, but the most important point is to get our data into a dataframe with three columns: x, y and frame. How are we going to create such a dataframe?

11.3.1 Pandas dataframe

In [13]:
help(pd.DataFrame)
Help on class DataFrame in module pandas.core.frame:

class DataFrame(pandas.core.generic.NDFrame)
 |  Two-dimensional size-mutable, potentially heterogeneous tabular data
 |  structure with labeled axes (rows and columns). Arithmetic operations
 |  align on both row and column labels. Can be thought of as a dict-like
 |  container for Series objects. The primary pandas data structure.
 |  
 |  Parameters
 |  ----------
 |  data : ndarray (structured or homogeneous), Iterable, dict, or DataFrame
 |      Dict can contain Series, arrays, constants, or list-like objects
 |  
 |      .. versionchanged :: 0.23.0
 |         If data is a dict, argument order is maintained for Python 3.6
 |         and later.
 |  
 |  index : Index or array-like
 |      Index to use for resulting frame. Will default to RangeIndex if
 |      no indexing information part of input data and no index provided
 |  columns : Index or array-like
 |      Column labels to use for resulting frame. Will default to
 |      RangeIndex (0, 1, 2, ..., n) if no column labels are provided
 |  dtype : dtype, default None
 |      Data type to force. Only a single dtype is allowed. If None, infer
 |  copy : boolean, default False
 |      Copy data from inputs. Only affects DataFrame / 2d ndarray input
 |  
 |  See Also
 |  --------
 |  DataFrame.from_records : Constructor from tuples, also record arrays.
 |  DataFrame.from_dict : From dicts of Series, arrays, or dicts.
 |  DataFrame.from_items : From sequence of (key, value) pairs
 |      pandas.read_csv, pandas.read_table, pandas.read_clipboard.
 |  
 |  Examples
 |  --------
 |  Constructing DataFrame from a dictionary.
 |  
 |  >>> d = {'col1': [1, 2], 'col2': [3, 4]}
 |  >>> df = pd.DataFrame(data=d)
 |  >>> df
 |     col1  col2
 |  0     1     3
 |  1     2     4
 |  
 |  Notice that the inferred dtype is int64.
 |  
 |  >>> df.dtypes
 |  col1    int64
 |  col2    int64
 |  dtype: object
 |  
 |  To enforce a single dtype:
 |  
 |  >>> df = pd.DataFrame(data=d, dtype=np.int8)
 |  >>> df.dtypes
 |  col1    int8
 |  col2    int8
 |  dtype: object
 |  
 |  Constructing DataFrame from numpy ndarray:
 |  
 |  >>> df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
 |  ...                    columns=['a', 'b', 'c'])
 |  >>> df2
 |     a  b  c
 |  0  1  2  3
 |  1  4  5  6
 |  2  7  8  9
 |  
 |  Method resolution order:
 |      DataFrame
 |      pandas.core.generic.NDFrame
 |      pandas.core.base.PandasObject
 |      pandas.core.base.StringMixin
 |      pandas.core.accessor.DirNamesMixin
 |      pandas.core.base.SelectionMixin
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __add__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __add__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __and__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __and__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __div__ = __truediv__(self, other, axis=None, level=None, fill_value=None)
 |  
 |  __eq__(self, other)
 |      Wrapper for comparison method __eq__
 |  
 |  __floordiv__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __floordiv__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __ge__(self, other)
 |      Wrapper for comparison method __ge__
 |  
 |  __getitem__(self, key)
 |  
 |  __gt__(self, other)
 |      Wrapper for comparison method __gt__
 |  
 |  __iadd__(self, other)
 |  
 |  __iand__(self, other)
 |  
 |  __ifloordiv__(self, other)
 |  
 |  __imod__(self, other)
 |  
 |  __imul__(self, other)
 |  
 |  __init__(self, data=None, index=None, columns=None, dtype=None, copy=False)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  __ior__(self, other)
 |  
 |  __ipow__(self, other)
 |  
 |  __isub__(self, other)
 |  
 |  __itruediv__(self, other)
 |  
 |  __ixor__(self, other)
 |  
 |  __le__(self, other)
 |      Wrapper for comparison method __le__
 |  
 |  __len__(self)
 |      Returns length of info axis, but here we use the index.
 |  
 |  __lt__(self, other)
 |      Wrapper for comparison method __lt__
 |  
 |  __matmul__(self, other)
 |      Matrix multiplication using binary `@` operator in Python>=3.5.
 |  
 |  __mod__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __mod__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __mul__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __mul__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __ne__(self, other)
 |      Wrapper for comparison method __ne__
 |  
 |  __or__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __or__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __pow__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __pow__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __radd__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __radd__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rand__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __rand__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rdiv__ = __rtruediv__(self, other, axis=None, level=None, fill_value=None)
 |  
 |  __rfloordiv__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rfloordiv__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rmatmul__(self, other)
 |      Matrix multiplication using binary `@` operator in Python>=3.5.
 |  
 |  __rmod__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rmod__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rmul__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rmul__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __ror__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __ror__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rpow__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rpow__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rsub__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rsub__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rtruediv__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __rtruediv__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __rxor__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __rxor__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __setitem__(self, key, value)
 |  
 |  __sub__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __sub__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __truediv__(self, other, axis=None, level=None, fill_value=None)
 |      Binary operator __truediv__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  __unicode__(self)
 |      Return a string representation for a particular DataFrame.
 |      
 |      Invoked by unicode(df) in py2 only. Yields a Unicode String in both
 |      py2/py3.
 |  
 |  __xor__(self, other, axis='columns', level=None, fill_value=None)
 |      Binary operator __xor__ with support to substitute a fill_value for missing data in
 |      one of the inputs
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame, or constant
 |      axis : {0, 1, 'index', 'columns'}
 |          For Series input, axis to match Series index on
 |      fill_value : None or float value, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together
 |  
 |  add(self, other, axis='columns', level=None, fill_value=None)
 |      Addition of dataframe and other, element-wise (binary operator `add`).
 |      
 |      Equivalent to ``dataframe + other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `radd`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with operator version which return the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by constant with reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and Series by axis with operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply a DataFrame of different shape with operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
 |  
 |  agg = aggregate(self, func, axis=0, *args, **kwargs)
 |  
 |  aggregate(self, func, axis=0, *args, **kwargs)
 |      Aggregate using one or more operations over the specified axis.
 |      
 |      .. versionadded:: 0.20.0
 |      
 |      Parameters
 |      ----------
 |      func : function, str, list or dict
 |          Function to use for aggregating the data. If a function, must either
 |          work when passed a DataFrame or when passed to DataFrame.apply.
 |      
 |          Accepted combinations are:
 |      
 |          - function
 |          - string function name
 |          - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
 |          - dict of axis labels -> functions, function names or list of such.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |              If 0 or 'index': apply function to each column.
 |              If 1 or 'columns': apply function to each row.
 |      *args
 |          Positional arguments to pass to `func`.
 |      **kwargs
 |          Keyword arguments to pass to `func`.
 |      
 |      Returns
 |      -------
 |      DataFrame, Series or scalar
 |          if DataFrame.agg is called with a single function, returns a Series
 |          if DataFrame.agg is called with several functions, returns a DataFrame
 |          if Series.agg is called with single function, returns a scalar
 |          if Series.agg is called with several functions, returns a Series
 |      
 |      
 |      The aggregation operations are always performed over an axis, either the
 |      index (default) or the column axis. This behavior is different from
 |      `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,
 |      `var`), where the default is to compute the aggregation of the flattened
 |      array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d,
 |      axis=0)``.
 |      
 |      `agg` is an alias for `aggregate`. Use the alias.
 |      
 |      See Also
 |      --------
 |      DataFrame.apply : Perform any type of operations.
 |      DataFrame.transform : Perform transformation type operations.
 |      pandas.core.groupby.GroupBy : Perform operations over groups.
 |      pandas.core.resample.Resampler : Perform operations over resampled bins.
 |      pandas.core.window.Rolling : Perform operations over rolling window.
 |      pandas.core.window.Expanding : Perform operations over expanding window.
 |      pandas.core.window.EWM : Perform operation over exponential weighted
 |          window.
 |      
 |      
 |      Notes
 |      -----
 |      `agg` is an alias for `aggregate`. Use the alias.
 |      
 |      A passed user-defined-function will be passed a Series for evaluation.
 |      
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([[1, 2, 3],
 |      ...                    [4, 5, 6],
 |      ...                    [7, 8, 9],
 |      ...                    [np.nan, np.nan, np.nan]],
 |      ...                   columns=['A', 'B', 'C'])
 |      
 |      Aggregate these functions over the rows.
 |      
 |      >>> df.agg(['sum', 'min'])
 |              A     B     C
 |      sum  12.0  15.0  18.0
 |      min   1.0   2.0   3.0
 |      
 |      Different aggregations per column.
 |      
 |      >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
 |              A    B
 |      max   NaN  8.0
 |      min   1.0  2.0
 |      sum  12.0  NaN
 |      
 |      Aggregate over the columns.
 |      
 |      >>> df.agg("mean", axis="columns")
 |      0    2.0
 |      1    5.0
 |      2    8.0
 |      3    NaN
 |      dtype: float64
 |  
 |  align(self, other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)
 |      Align two objects on their axes with the
 |      specified join method for each axis Index.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame or Series
 |      join : {'outer', 'inner', 'left', 'right'}, default 'outer'
 |      axis : allowed axis of the other object, default None
 |          Align on index (0), columns (1), or both (None)
 |      level : int or level name, default None
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level
 |      copy : boolean, default True
 |          Always returns new objects. If copy=False and no reindexing is
 |          required then original objects are returned.
 |      fill_value : scalar, default np.NaN
 |          Value to use for missing values. Defaults to NaN, but can be any
 |          "compatible" value
 |      method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
 |          Method to use for filling holes in reindexed Series
 |          pad / ffill: propagate last valid observation forward to next valid
 |          backfill / bfill: use NEXT valid observation to fill gap
 |      limit : int, default None
 |          If method is specified, this is the maximum number of consecutive
 |          NaN values to forward/backward fill. In other words, if there is
 |          a gap with more than this number of consecutive NaNs, it will only
 |          be partially filled. If method is not specified, this is the
 |          maximum number of entries along the entire axis where NaNs will be
 |          filled. Must be greater than 0 if not None.
 |      fill_axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Filling axis, method and limit
 |      broadcast_axis : {0 or 'index', 1 or 'columns'}, default None
 |          Broadcast values along this axis, if aligning two objects of
 |          different dimensions
 |      
 |      Returns
 |      -------
 |      (left, right) : (DataFrame, type of other)
 |          Aligned objects
 |  
 |  all(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs)
 |      Return whether all elements are True, potentially over an axis.
 |      
 |      Returns True unless there at least one element within a series or
 |      along a Dataframe axis that is False or equivalent (e.g. zero or
 |      empty).
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns', None}, default 0
 |          Indicate which axis or axes should be reduced.
 |      
 |          * 0 / 'index' : reduce the index, return a Series whose index is the
 |            original column labels.
 |          * 1 / 'columns' : reduce the columns, return a Series whose index is the
 |            original index.
 |          * None : reduce all axes, return a scalar.
 |      
 |      bool_only : bool, default None
 |          Include only boolean columns. If None, will attempt to use everything,
 |          then use only boolean data. Not implemented for Series.
 |      skipna : bool, default True
 |          Exclude NA/null values. If the entire row/column is NA and skipna is
 |          True, then the result will be True, as for an empty row/column.
 |          If skipna is False, then NA are treated as True, because these are not
 |          equal to zero.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      **kwargs : any, default None
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          If level is specified, then, DataFrame is returned; otherwise, Series
 |          is returned.
 |      
 |      See Also
 |      --------
 |      Series.all : Return True if all elements are True.
 |      DataFrame.any : Return True if one (or more) elements are True.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> pd.Series([True, True]).all()
 |      True
 |      >>> pd.Series([True, False]).all()
 |      False
 |      >>> pd.Series([]).all()
 |      True
 |      >>> pd.Series([np.nan]).all()
 |      True
 |      >>> pd.Series([np.nan]).all(skipna=False)
 |      True
 |      
 |      **DataFrames**
 |      
 |      Create a dataframe from a dictionary.
 |      
 |      >>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})
 |      >>> df
 |         col1   col2
 |      0  True   True
 |      1  True  False
 |      
 |      Default behaviour checks if column-wise values all return True.
 |      
 |      >>> df.all()
 |      col1     True
 |      col2    False
 |      dtype: bool
 |      
 |      Specify ``axis='columns'`` to check if row-wise values all return True.
 |      
 |      >>> df.all(axis='columns')
 |      0     True
 |      1    False
 |      dtype: bool
 |      
 |      Or ``axis=None`` for whether every value is True.
 |      
 |      >>> df.all(axis=None)
 |      False
 |  
 |  any(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs)
 |      Return whether any element is True, potentially over an axis.
 |      
 |      Returns False unless there at least one element within a series or
 |      along a Dataframe axis that is True or equivalent (e.g. non-zero or
 |      non-empty).
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns', None}, default 0
 |          Indicate which axis or axes should be reduced.
 |      
 |          * 0 / 'index' : reduce the index, return a Series whose index is the
 |            original column labels.
 |          * 1 / 'columns' : reduce the columns, return a Series whose index is the
 |            original index.
 |          * None : reduce all axes, return a scalar.
 |      
 |      bool_only : bool, default None
 |          Include only boolean columns. If None, will attempt to use everything,
 |          then use only boolean data. Not implemented for Series.
 |      skipna : bool, default True
 |          Exclude NA/null values. If the entire row/column is NA and skipna is
 |          True, then the result will be False, as for an empty row/column.
 |          If skipna is False, then NA are treated as True, because these are not
 |          equal to zero.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      **kwargs : any, default None
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          If level is specified, then, DataFrame is returned; otherwise, Series
 |          is returned.
 |      
 |      See Also
 |      --------
 |      numpy.any : Numpy version of this method.
 |      Series.any : Return whether any element is True.
 |      Series.all : Return whether all elements are True.
 |      DataFrame.any : Return whether any element is True over requested axis.
 |      DataFrame.all : Return whether all elements are True over requested axis.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      For Series input, the output is a scalar indicating whether any element
 |      is True.
 |      
 |      >>> pd.Series([False, False]).any()
 |      False
 |      >>> pd.Series([True, False]).any()
 |      True
 |      >>> pd.Series([]).any()
 |      False
 |      >>> pd.Series([np.nan]).any()
 |      False
 |      >>> pd.Series([np.nan]).any(skipna=False)
 |      True
 |      
 |      **DataFrame**
 |      
 |      Whether each column contains at least one True element (the default).
 |      
 |      >>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
 |      >>> df
 |         A  B  C
 |      0  1  0  0
 |      1  2  2  0
 |      
 |      >>> df.any()
 |      A     True
 |      B     True
 |      C    False
 |      dtype: bool
 |      
 |      Aggregating over the columns.
 |      
 |      >>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
 |      >>> df
 |             A  B
 |      0   True  1
 |      1  False  2
 |      
 |      >>> df.any(axis='columns')
 |      0    True
 |      1    True
 |      dtype: bool
 |      
 |      >>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
 |      >>> df
 |             A  B
 |      0   True  1
 |      1  False  0
 |      
 |      >>> df.any(axis='columns')
 |      0    True
 |      1    False
 |      dtype: bool
 |      
 |      Aggregating over the entire DataFrame with ``axis=None``.
 |      
 |      >>> df.any(axis=None)
 |      True
 |      
 |      `any` for an empty DataFrame is an empty Series.
 |      
 |      >>> pd.DataFrame([]).any()
 |      Series([], dtype: bool)
 |  
 |  append(self, other, ignore_index=False, verify_integrity=False, sort=None)
 |      Append rows of `other` to the end of caller, returning a new object.
 |      
 |      Columns in `other` that are not in the caller are added as new columns.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame or Series/dict-like object, or list of these
 |          The data to append.
 |      ignore_index : boolean, default False
 |          If True, do not use the index labels.
 |      verify_integrity : boolean, default False
 |          If True, raise ValueError on creating index with duplicates.
 |      sort : boolean, default None
 |          Sort columns if the columns of `self` and `other` are not aligned.
 |          The default sorting is deprecated and will change to not-sorting
 |          in a future version of pandas. Explicitly pass ``sort=True`` to
 |          silence the warning and sort. Explicitly pass ``sort=False`` to
 |          silence the warning and not sort.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      Returns
 |      -------
 |      appended : DataFrame
 |      
 |      See Also
 |      --------
 |      pandas.concat : General function to concatenate DataFrame, Series
 |          or Panel objects.
 |      
 |      Notes
 |      -----
 |      If a list of dict/series is passed and the keys are all contained in
 |      the DataFrame's index, the order of the columns in the resulting
 |      DataFrame will be unchanged.
 |      
 |      Iteratively appending rows to a DataFrame can be more computationally
 |      intensive than a single concatenate. A better solution is to append
 |      those rows to a list and then concatenate the list with the original
 |      DataFrame all at once.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
 |      >>> df
 |         A  B
 |      0  1  2
 |      1  3  4
 |      >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
 |      >>> df.append(df2)
 |         A  B
 |      0  1  2
 |      1  3  4
 |      0  5  6
 |      1  7  8
 |      
 |      With `ignore_index` set to True:
 |      
 |      >>> df.append(df2, ignore_index=True)
 |         A  B
 |      0  1  2
 |      1  3  4
 |      2  5  6
 |      3  7  8
 |      
 |      The following, while not recommended methods for generating DataFrames,
 |      show two ways to generate a DataFrame from multiple data sources.
 |      
 |      Less efficient:
 |      
 |      >>> df = pd.DataFrame(columns=['A'])
 |      >>> for i in range(5):
 |      ...     df = df.append({'A': i}, ignore_index=True)
 |      >>> df
 |         A
 |      0  0
 |      1  1
 |      2  2
 |      3  3
 |      4  4
 |      
 |      More efficient:
 |      
 |      >>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
 |      ...           ignore_index=True)
 |         A
 |      0  0
 |      1  1
 |      2  2
 |      3  3
 |      4  4
 |  
 |  apply(self, func, axis=0, broadcast=None, raw=False, reduce=None, result_type=None, args=(), **kwds)
 |      Apply a function along an axis of the DataFrame.
 |      
 |      Objects passed to the function are Series objects whose index is
 |      either the DataFrame's index (``axis=0``) or the DataFrame's columns
 |      (``axis=1``). By default (``result_type=None``), the final return type
 |      is inferred from the return type of the applied function. Otherwise,
 |      it depends on the `result_type` argument.
 |      
 |      Parameters
 |      ----------
 |      func : function
 |          Function to apply to each column or row.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Axis along which the function is applied:
 |      
 |          * 0 or 'index': apply function to each column.
 |          * 1 or 'columns': apply function to each row.
 |      broadcast : bool, optional
 |          Only relevant for aggregation functions:
 |      
 |          * ``False`` or ``None`` : returns a Series whose length is the
 |            length of the index or the number of columns (based on the
 |            `axis` parameter)
 |          * ``True`` : results will be broadcast to the original shape
 |            of the frame, the original index and columns will be retained.
 |      
 |          .. deprecated:: 0.23.0
 |             This argument will be removed in a future version, replaced
 |             by result_type='broadcast'.
 |      
 |      raw : bool, default False
 |          * ``False`` : passes each row or column as a Series to the
 |            function.
 |          * ``True`` : the passed function will receive ndarray objects
 |            instead.
 |            If you are just applying a NumPy reduction function this will
 |            achieve much better performance.
 |      reduce : bool or None, default None
 |          Try to apply reduction procedures. If the DataFrame is empty,
 |          `apply` will use `reduce` to determine whether the result
 |          should be a Series or a DataFrame. If ``reduce=None`` (the
 |          default), `apply`'s return value will be guessed by calling
 |          `func` on an empty Series
 |          (note: while guessing, exceptions raised by `func` will be
 |          ignored).
 |          If ``reduce=True`` a Series will always be returned, and if
 |          ``reduce=False`` a DataFrame will always be returned.
 |      
 |          .. deprecated:: 0.23.0
 |             This argument will be removed in a future version, replaced
 |             by ``result_type='reduce'``.
 |      
 |      result_type : {'expand', 'reduce', 'broadcast', None}, default None
 |          These only act when ``axis=1`` (columns):
 |      
 |          * 'expand' : list-like results will be turned into columns.
 |          * 'reduce' : returns a Series if possible rather than expanding
 |            list-like results. This is the opposite of 'expand'.
 |          * 'broadcast' : results will be broadcast to the original shape
 |            of the DataFrame, the original index and columns will be
 |            retained.
 |      
 |          The default behaviour (None) depends on the return value of the
 |          applied function: list-like results will be returned as a Series
 |          of those. However, if the applied function returns a Series,
 |          these are expanded to columns.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      args : tuple
 |          Positional arguments to pass to `func` in addition to the
 |          array/series.
 |      **kwds
 |          Additional keyword arguments to pass to `func`.
 |      
 |      Returns
 |      -------
 |      applied : Series or DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.applymap: For elementwise operations.
 |      DataFrame.aggregate: Only perform aggregating type operations.
 |      DataFrame.transform: Only perform transforming type operations.
 |      
 |      Notes
 |      -----
 |      In the current implementation apply calls `func` twice on the
 |      first column/row to decide whether it can take a fast or slow
 |      code path. This can lead to unexpected behavior if `func` has
 |      side-effects, as they will take effect twice for the first
 |      column/row.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame([[4, 9],] * 3, columns=['A', 'B'])
 |      >>> df
 |         A  B
 |      0  4  9
 |      1  4  9
 |      2  4  9
 |      
 |      Using a numpy universal function (in this case the same as
 |      ``np.sqrt(df)``):
 |      
 |      >>> df.apply(np.sqrt)
 |           A    B
 |      0  2.0  3.0
 |      1  2.0  3.0
 |      2  2.0  3.0
 |      
 |      Using a reducing function on either axis
 |      
 |      >>> df.apply(np.sum, axis=0)
 |      A    12
 |      B    27
 |      dtype: int64
 |      
 |      >>> df.apply(np.sum, axis=1)
 |      0    13
 |      1    13
 |      2    13
 |      dtype: int64
 |      
 |      Returning a list-like will result in a Series
 |      
 |      >>> df.apply(lambda x: [1, 2], axis=1)
 |      0    [1, 2]
 |      1    [1, 2]
 |      2    [1, 2]
 |      dtype: object
 |      
 |      Passing ``result_type='expand'`` will expand list-like results
 |      to columns of a DataFrame
 |      
 |      >>> df.apply(lambda x: [1, 2], axis=1, result_type='expand')
 |         0  1
 |      0  1  2
 |      1  1  2
 |      2  1  2
 |      
 |      Returning a Series inside the function is similar to passing
 |      ``result_type='expand'``. The resulting column names
 |      will be the Series index.
 |      
 |      >>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
 |         foo  bar
 |      0    1    2
 |      1    1    2
 |      2    1    2
 |      
 |      Passing ``result_type='broadcast'`` will ensure the same shape
 |      result, whether list-like or scalar is returned by the function,
 |      and broadcast it along the axis. The resulting column names will
 |      be the originals.
 |      
 |      >>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast')
 |         A  B
 |      0  1  2
 |      1  1  2
 |      2  1  2
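 |      
 |      As a small illustration of ``raw=True`` (described under
 |      Parameters), the same reduction receives plain ndarrays instead of
 |      Series; the output should match the ``axis=1`` sum above:
 |      
 |      >>> df.apply(np.sum, axis=1, raw=True)
 |      0    13
 |      1    13
 |      2    13
 |      dtype: int64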
 |  
 |  applymap(self, func)
 |      Apply a function to a DataFrame elementwise.
 |      
 |      This method applies a function that accepts and returns a scalar
 |      to every element of a DataFrame.
 |      
 |      Parameters
 |      ----------
 |      func : callable
 |          Python function, returns a single value from a single value.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Transformed DataFrame.
 |      
 |      See Also
 |      --------
 |      DataFrame.apply : Apply a function along input axis of DataFrame.
 |      
 |      Notes
 |      -----
 |      In the current implementation applymap calls `func` twice on the
 |      first column/row to decide whether it can take a fast or slow
 |      code path. This can lead to unexpected behavior if `func` has
 |      side-effects, as they will take effect twice for the first
 |      column/row.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]])
 |      >>> df
 |             0      1
 |      0  1.000  2.120
 |      1  3.356  4.567
 |      
 |      >>> df.applymap(lambda x: len(str(x)))
 |         0  1
 |      0  3  4
 |      1  5  5
 |      
 |      Note that a vectorized version of `func` often exists, which will
 |      be much faster. You could square each number elementwise.
 |      
 |      >>> df.applymap(lambda x: x**2)
 |                 0          1
 |      0   1.000000   4.494400
 |      1  11.262736  20.857489
 |      
 |      But it's better to avoid applymap in that case.
 |      
 |      >>> df ** 2
 |                 0          1
 |      0   1.000000   4.494400
 |      1  11.262736  20.857489
 |  
 |  assign(self, **kwargs)
 |      Assign new columns to a DataFrame.
 |      
 |      Returns a new object with all original columns in addition to new ones.
 |      Existing columns that are re-assigned will be overwritten.
 |      
 |      Parameters
 |      ----------
 |      **kwargs : dict of {str: callable or Series}
 |          The column names are keywords. If the values are
 |          callable, they are computed on the DataFrame and
 |          assigned to the new columns. The callable must not
 |          change input DataFrame (though pandas doesn't check it).
 |          If the values are not callable, (e.g. a Series, scalar, or array),
 |          they are simply assigned.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          A new DataFrame with the new columns in addition to
 |          all the existing columns.
 |      
 |      Notes
 |      -----
 |      Assigning multiple columns within the same ``assign`` is possible.
 |      For Python 3.6 and above, later items in ``**kwargs`` may refer to
 |      newly created or modified columns in `df`; items are computed and
 |      assigned into `df` in order. For Python 3.5 and below, the order of
 |      keyword arguments is not specified, so you cannot refer to newly
 |      created or modified columns. All items are computed first, and then
 |      assigned in alphabetical order.
 |      
 |      .. versionchanged:: 0.23.0
 |      
 |         Keyword argument order is maintained for Python 3.6 and later.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'temp_c': [17.0, 25.0]},
 |      ...                   index=['Portland', 'Berkeley'])
 |      >>> df
 |                temp_c
 |      Portland    17.0
 |      Berkeley    25.0
 |      
 |      Where the value is a callable, evaluated on `df`:
 |      
 |      >>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
 |                temp_c  temp_f
 |      Portland    17.0    62.6
 |      Berkeley    25.0    77.0
 |      
 |      Alternatively, the same behavior can be achieved by directly
 |      referencing an existing Series or sequence:
 |      
 |      >>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32)
 |                temp_c  temp_f
 |      Portland    17.0    62.6
 |      Berkeley    25.0    77.0
 |      
 |      In Python 3.6+, you can create multiple columns within the same assign
 |      where one of the columns depends on another one defined within the same
 |      assign:
 |      
 |      >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32,
 |      ...           temp_k=lambda x: (x['temp_f'] +  459.67) * 5 / 9)
 |                temp_c  temp_f  temp_k
 |      Portland    17.0    62.6  290.15
 |      Berkeley    25.0    77.0  298.15
 |  
 |  boxplot = boxplot_frame(self, column=None, by=None, ax=None, fontsize=None, rot=0, grid=True, figsize=None, layout=None, return_type=None, **kwds)
 |      Make a box plot from DataFrame columns.
 |      
 |      Make a box-and-whisker plot from DataFrame columns, optionally grouped
 |      by some other columns. A box plot is a method for graphically depicting
 |      groups of numerical data through their quartiles.
 |      The box extends from the Q1 to Q3 quartile values of the data,
 |      with a line at the median (Q2). The whiskers extend from the edges
 |      of box to show the range of the data. The position of the whiskers
 |      is set by default to `1.5 * IQR (IQR = Q3 - Q1)` from the edges of the box.
 |      Outlier points are those past the end of the whiskers.
 |      
 |      For further details see
 |      Wikipedia's entry for `boxplot <https://en.wikipedia.org/wiki/Box_plot>`_.
 |      
 |      Parameters
 |      ----------
 |      column : str or list of str, optional
 |          Column name or list of names, or vector.
 |          Can be any valid input to :meth:`pandas.DataFrame.groupby`.
 |      by : str or array-like, optional
 |          Column in the DataFrame to :meth:`pandas.DataFrame.groupby`.
 |          One box plot will be drawn per value of the column(s) in `by`.
 |      ax : object of class matplotlib.axes.Axes, optional
 |          The matplotlib axes to be used by boxplot.
 |      fontsize : float or str
 |          Tick label font size in points or as a string (e.g., `large`).
 |      rot : int or float, default 0
 |          The rotation angle of labels (in degrees)
 |          with respect to the screen coordinate system.
 |      grid : boolean, default True
 |          Setting this to True will show the grid.
 |      figsize : A tuple (width, height) in inches
 |          The size of the figure to create in matplotlib.
 |      layout : tuple (rows, columns), optional
 |          For example, (3, 5) will display the subplots
 |          using 3 rows and 5 columns, starting from the top-left.
 |      return_type : {'axes', 'dict', 'both'} or None, default 'axes'
 |          The kind of object to return. The default is ``axes``.
 |      
 |          * 'axes' returns the matplotlib axes the boxplot is drawn on.
 |          * 'dict' returns a dictionary whose values are the matplotlib
 |            Lines of the boxplot.
 |          * 'both' returns a namedtuple with the axes and dict.
 |          * when grouping with ``by``, a Series mapping columns to
 |            ``return_type`` is returned.
 |      
 |            If ``return_type`` is `None`, a NumPy array
 |            of axes with the same shape as ``layout`` is returned.
 |      **kwds
 |          All other plotting keyword arguments to be passed to
 |          :func:`matplotlib.pyplot.boxplot`.
 |      
 |      Returns
 |      -------
 |      result :
 |      
 |          The return type depends on the `return_type` parameter:
 |      
 |          * 'axes' : object of class matplotlib.axes.Axes
 |          * 'dict' : dict of matplotlib.lines.Line2D objects
 |          * 'both' : a namedtuple with structure (ax, lines)
 |      
 |          For data grouped with ``by``:
 |      
 |          * :class:`~pandas.Series`
 |          * :class:`~numpy.array` (for ``return_type = None``)
 |      
 |      See Also
 |      --------
 |      Series.plot.hist: Make a histogram.
 |      matplotlib.pyplot.boxplot : Matplotlib equivalent plot.
 |      
 |      Notes
 |      -----
 |      Use ``return_type='dict'`` when you want to tweak the appearance
 |      of the lines after plotting. In this case a dict containing the Lines
 |      making up the boxes, caps, fliers, medians, and whiskers is returned.
 |      
 |      Examples
 |      --------
 |      
 |      Boxplots can be created for every column in the dataframe
 |      by ``df.boxplot()`` or indicating the columns to be used:
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          >>> np.random.seed(1234)
 |          >>> df = pd.DataFrame(np.random.randn(10,4),
 |          ...                   columns=['Col1', 'Col2', 'Col3', 'Col4'])
 |          >>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
 |      
 |      Boxplots of variable distributions grouped by the values of a third
 |      variable can be created using the option ``by``. For instance:
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          >>> df = pd.DataFrame(np.random.randn(10, 2),
 |          ...                   columns=['Col1', 'Col2'])
 |          >>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
 |          ...                      'B', 'B', 'B', 'B', 'B'])
 |          >>> boxplot = df.boxplot(by='X')
 |      
 |      A list of strings (i.e. ``['X', 'Y']``) can be passed to boxplot
 |      in order to group the data by combination of the variables in the x-axis:
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          >>> df = pd.DataFrame(np.random.randn(10,3),
 |          ...                   columns=['Col1', 'Col2', 'Col3'])
 |          >>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
 |          ...                      'B', 'B', 'B', 'B', 'B'])
 |          >>> df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A',
 |          ...                      'B', 'A', 'B', 'A', 'B'])
 |          >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])
 |      
 |      The layout of the boxplot can be adjusted by giving a tuple to ``layout``:
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
 |          ...                      layout=(2, 1))
 |      
 |      Additional formatting can be done to the boxplot, like suppressing the grid
 |      (``grid=False``), rotating the labels in the x-axis (i.e. ``rot=45``)
 |      or changing the fontsize (i.e. ``fontsize=15``):
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          >>> boxplot = df.boxplot(grid=False, rot=45, fontsize=15)
 |      
 |      The parameter ``return_type`` can be used to select the type of element
 |      returned by `boxplot`.  When ``return_type='axes'`` is selected,
 |      the matplotlib axes on which the boxplot is drawn are returned:
 |      
 |          >>> boxplot = df.boxplot(column=['Col1','Col2'], return_type='axes')
 |          >>> type(boxplot)
 |          <class 'matplotlib.axes._subplots.AxesSubplot'>
 |      
 |      When grouping with ``by``, a Series mapping columns to ``return_type``
 |      is returned:
 |      
 |          >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
 |          ...                      return_type='axes')
 |          >>> type(boxplot)
 |          <class 'pandas.core.series.Series'>
 |      
 |      If ``return_type`` is `None`, a NumPy array of axes with the same shape
 |      as ``layout`` is returned:
 |      
 |          >>> boxplot =  df.boxplot(column=['Col1', 'Col2'], by='X',
 |          ...                       return_type=None)
 |          >>> type(boxplot)
 |          <class 'numpy.ndarray'>
 |  
 |  combine(self, other, func, fill_value=None, overwrite=True)
 |      Perform column-wise combine with another DataFrame based on a
 |      passed function.
 |      
 |      Combines a DataFrame with `other` DataFrame using `func`
 |      to element-wise combine columns. The row and column indexes of the
 |      resulting DataFrame will be the union of the two.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame
 |          The DataFrame to merge column-wise.
 |      func : function
 |          Function that takes two series as inputs and returns a Series
 |          or a scalar. Used to merge the two dataframes column by column.
 |      fill_value : scalar value, default None
 |          The value to fill NaNs with prior to passing any column to the
 |          merge func.
 |      overwrite : boolean, default True
 |          If True, columns in `self` that do not exist in `other` will be
 |          overwritten with NaNs.
 |      
 |      Returns
 |      -------
 |      result : DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.combine_first : Combine two DataFrame objects and default to
 |          non-null values in frame calling the method.
 |      
 |      Examples
 |      --------
 |      Combine using a simple function that chooses the smaller column.
 |      
 |      >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
 |      >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
 |      >>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
 |      >>> df1.combine(df2, take_smaller)
 |         A  B
 |      0  0  3
 |      1  0  3
 |      
 |      Example using a true element-wise combine function.
 |      
 |      >>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]})
 |      >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
 |      >>> df1.combine(df2, np.minimum)
 |         A  B
 |      0  1  2
 |      1  0  3
 |      
 |      Using `fill_value` fills Nones prior to passing the column to the
 |      merge function.
 |      
 |      >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
 |      >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
 |      >>> df1.combine(df2, take_smaller, fill_value=-5)
 |         A    B
 |      0  0 -5.0
 |      1  0  4.0
 |      
 |      However, if the same element in both dataframes is None, that None
 |      is preserved
 |      
 |      >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
 |      >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]})
 |      >>> df1.combine(df2, take_smaller, fill_value=-5)
 |         A    B
 |      0  0  NaN
 |      1  0  3.0
 |      
 |      Example that demonstrates the use of `overwrite` and behavior when
 |      the axes differ between the dataframes.
 |      
 |      >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
 |      >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1],}, index=[1, 2])
 |      >>> df1.combine(df2, take_smaller)
 |           A    B     C
 |      0  NaN  NaN   NaN
 |      1  NaN  3.0 -10.0
 |      2  NaN  3.0   1.0
 |      
 |      >>> df1.combine(df2, take_smaller, overwrite=False)
 |           A    B     C
 |      0  0.0  NaN   NaN
 |      1  0.0  3.0 -10.0
 |      2  NaN  3.0   1.0
 |      
 |      Demonstrating the preference of the passed-in DataFrame.
 |      
 |      >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1],}, index=[1, 2])
 |      >>> df2.combine(df1, take_smaller)
 |         A    B   C
 |      0  0.0  NaN NaN
 |      1  0.0  3.0 NaN
 |      2  NaN  3.0 NaN
 |      
 |      >>> df2.combine(df1, take_smaller, overwrite=False)
 |           A    B   C
 |      0  0.0  NaN NaN
 |      1  0.0  3.0 1.0
 |      2  NaN  3.0 1.0
 |  
 |  combine_first(self, other)
 |      Update null elements with value in the same location in `other`.
 |      
 |      Combine two DataFrame objects by filling null values in one DataFrame
 |      with non-null values from other DataFrame. The row and column indexes
 |      of the resulting DataFrame will be the union of the two.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame
 |          Provided DataFrame to use to fill null values.
 |      
 |      Returns
 |      -------
 |      combined : DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.combine : Perform series-wise operation on two DataFrames
 |          using a given function.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]})
 |      >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
 |      >>> df1.combine_first(df2)
 |           A    B
 |      0  1.0  3.0
 |      1  0.0  4.0
 |      
 |      Null values still persist if the location of that null value
 |      does not exist in `other`
 |      
 |      >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]})
 |      >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2])
 |      >>> df1.combine_first(df2)
 |           A    B    C
 |      0  NaN  4.0  NaN
 |      1  0.0  3.0  1.0
 |      2  NaN  3.0  1.0
 |  
 |  compound(self, axis=None, skipna=None, level=None)
 |      Return the compound percentage of the values for the requested axis.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      compounded : Series or DataFrame (if level specified)
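 |      
 |      Examples
 |      --------
 |      A minimal sketch, assuming the usual definition of compounding,
 |      ``(1 + r).prod() - 1`` per column (values are arbitrary returns):
 |      
 |      >>> df = pd.DataFrame({'r1': [0.5, 0.5], 'r2': [1.0, -0.5]})
 |      >>> df.compound()
 |      r1    1.25
 |      r2    0.00
 |      dtype: float64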
 |  
 |  corr(self, method='pearson', min_periods=1)
 |      Compute pairwise correlation of columns, excluding NA/null values.
 |      
 |      Parameters
 |      ----------
 |      method : {'pearson', 'kendall', 'spearman'} or callable
 |          * pearson : standard correlation coefficient
 |          * kendall : Kendall Tau correlation coefficient
 |          * spearman : Spearman rank correlation
 |          * callable: callable with input two 1d ndarrays
 |              and returning a float
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      min_periods : int, optional
 |          Minimum number of observations required per pair of columns
 |          to have a valid result. Currently only available for pearson
 |          and spearman correlation.
 |      
 |      Returns
 |      -------
 |      y : DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.corrwith
 |      Series.corr
 |      
 |      Examples
 |      --------
 |      >>> histogram_intersection = lambda a, b: np.minimum(a, b
 |      ... ).sum().round(decimals=1)
 |      >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
 |      ...                   columns=['dogs', 'cats'])
 |      >>> df.corr(method=histogram_intersection)
 |            dogs  cats
 |      dogs   1.0   0.3
 |      cats   0.3   1.0
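 |      
 |      For comparison, the default Pearson coefficient on the same frame:
 |      
 |      >>> df.corr()
 |                dogs      cats
 |      dogs  1.000000 -0.851064
 |      cats -0.851064  1.000000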
 |  
 |  corrwith(self, other, axis=0, drop=False, method='pearson')
 |      Compute pairwise correlation between rows or columns of DataFrame
 |      with rows or columns of Series or DataFrame.  DataFrames are first
 |      aligned along both axes before computing the correlations.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame, Series
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          0 or 'index' to compute column-wise, 1 or 'columns' for row-wise
 |      drop : boolean, default False
 |          Drop missing indices from result
 |      method : {'pearson', 'kendall', 'spearman'} or callable
 |          * pearson : standard correlation coefficient
 |          * kendall : Kendall Tau correlation coefficient
 |          * spearman : Spearman rank correlation
 |          * callable: callable with input two 1d ndarrays
 |              and returning a float
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      correls : Series
 |      
 |      See Also
 |      --------
 |      DataFrame.corr
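 |      
 |      Examples
 |      --------
 |      A minimal sketch with arbitrary values; each column of the frame is
 |      correlated with the passed Series:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 2, 1]})
 |      >>> df.corrwith(pd.Series([1, 2, 3, 4]))
 |      a    1.0
 |      b   -1.0
 |      dtype: float64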
 |  
 |  count(self, axis=0, level=None, numeric_only=False)
 |      Count non-NA cells for each column or row.
 |      
 |      The values `None`, `NaN`, `NaT`, and optionally `numpy.inf` (depending
 |      on `pandas.options.mode.use_inf_as_na`) are considered NA.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          If 0 or 'index' counts are generated for each column.
 |          If 1 or 'columns' counts are generated for each **row**.
 |      level : int or str, optional
 |          If the axis is a `MultiIndex` (hierarchical), count along a
 |          particular `level`, collapsing into a `DataFrame`.
 |          A `str` specifies the level name.
 |      numeric_only : boolean, default False
 |          Include only `float`, `int` or `boolean` data.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          For each column/row the number of non-NA/null entries.
 |          If `level` is specified returns a `DataFrame`.
 |      
 |      See Also
 |      --------
 |      Series.count: Number of non-NA elements in a Series.
 |      DataFrame.shape: Number of DataFrame rows and columns (including NA
 |          elements).
 |      DataFrame.isna: Boolean same-sized DataFrame showing places of NA
 |          elements.
 |      
 |      Examples
 |      --------
 |      Constructing DataFrame from a dictionary:
 |      
 |      >>> df = pd.DataFrame({"Person":
 |      ...                    ["John", "Myla", "Lewis", "John", "Myla"],
 |      ...                    "Age": [24., np.nan, 21., 33, 26],
 |      ...                    "Single": [False, True, True, True, False]})
 |      >>> df
 |         Person   Age  Single
 |      0    John  24.0   False
 |      1    Myla   NaN    True
 |      2   Lewis  21.0    True
 |      3    John  33.0    True
 |      4    Myla  26.0   False
 |      
 |      Notice the uncounted NA values:
 |      
 |      >>> df.count()
 |      Person    5
 |      Age       4
 |      Single    5
 |      dtype: int64
 |      
 |      Counts for each **row**:
 |      
 |      >>> df.count(axis='columns')
 |      0    3
 |      1    2
 |      2    3
 |      3    3
 |      4    3
 |      dtype: int64
 |      
 |      Counts for one level of a `MultiIndex`:
 |      
 |      >>> df.set_index(["Person", "Single"]).count(level="Person")
 |              Age
 |      Person
 |      John      2
 |      Lewis     1
 |      Myla      1
 |  
 |  cov(self, min_periods=None)
 |      Compute pairwise covariance of columns, excluding NA/null values.
 |      
 |      Compute the pairwise covariance among the series of a DataFrame.
 |      The returned data frame is the `covariance matrix
 |      <https://en.wikipedia.org/wiki/Covariance_matrix>`__ of the columns
 |      of the DataFrame.
 |      
 |      Both NA and null values are automatically excluded from the
 |      calculation. (See the note below about bias from missing values.)
 |      A threshold can be set for the minimum number of
 |      observations for each value created. Comparisons with observations
 |      below this threshold will be returned as ``NaN``.
 |      
 |      This method is generally used for the analysis of time series data to
 |      understand the relationship between different measures
 |      across time.
 |      
 |      Parameters
 |      ----------
 |      min_periods : int, optional
 |          Minimum number of observations required per pair of columns
 |          to have a valid result.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          The covariance matrix of the series of the DataFrame.
 |      
 |      See Also
 |      --------
 |      pandas.Series.cov : Compute covariance with another Series.
 |      pandas.core.window.EWM.cov: Exponential weighted sample covariance.
 |      pandas.core.window.Expanding.cov : Expanding sample covariance.
 |      pandas.core.window.Rolling.cov : Rolling sample covariance.
 |      
 |      Notes
 |      -----
 |      Returns the covariance matrix of the DataFrame's time series.
 |      The covariance is normalized by N-1.
 |      
 |      For DataFrames that have Series that are missing data (assuming that
 |      data is `missing at random
 |      <https://en.wikipedia.org/wiki/Missing_data#Missing_at_random>`__)
 |      the returned covariance matrix will be an unbiased estimate
 |      of the variance and covariance between the member Series.
 |      
 |      However, for many applications this estimate may not be acceptable
 |      because the estimated covariance matrix is not guaranteed to be
 |      positive semi-definite. This could lead to estimated correlations
 |      having absolute values which are greater than one, and/or a
 |      non-invertible covariance matrix. See `Estimation of covariance
 |      matrices
 |      <http://en.wikipedia.org/w/index.php?title=Estimation_of_covariance_matrices>`__
 |      for more details.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
 |      ...                   columns=['dogs', 'cats'])
 |      >>> df.cov()
 |                dogs      cats
 |      dogs  0.666667 -1.000000
 |      cats -1.000000  1.666667
 |      
 |      >>> np.random.seed(42)
 |      >>> df = pd.DataFrame(np.random.randn(1000, 5),
 |      ...                   columns=['a', 'b', 'c', 'd', 'e'])
 |      >>> df.cov()
 |                a         b         c         d         e
 |      a  0.998438 -0.020161  0.059277 -0.008943  0.014144
 |      b -0.020161  1.059352 -0.008543 -0.024738  0.009826
 |      c  0.059277 -0.008543  1.010670 -0.001486 -0.000271
 |      d -0.008943 -0.024738 -0.001486  0.921297 -0.013692
 |      e  0.014144  0.009826 -0.000271 -0.013692  0.977795
 |      
 |      **Minimum number of periods**
 |      
 |      This method also supports an optional ``min_periods`` keyword
 |      that specifies the required minimum number of non-NA observations for
 |      each column pair in order to have a valid result:
 |      
 |      >>> np.random.seed(42)
 |      >>> df = pd.DataFrame(np.random.randn(20, 3),
 |      ...                   columns=['a', 'b', 'c'])
 |      >>> df.loc[df.index[:5], 'a'] = np.nan
 |      >>> df.loc[df.index[5:10], 'b'] = np.nan
 |      >>> df.cov(min_periods=12)
 |                a         b         c
 |      a  0.316741       NaN -0.150812
 |      b       NaN  1.248003  0.191417
 |      c -0.150812  0.191417  0.895202
 |  
 |  cummax(self, axis=None, skipna=True, *args, **kwargs)
 |      Return cumulative maximum over a DataFrame or Series axis.
 |      
 |      Returns a DataFrame or Series of the same size containing the cumulative
 |      maximum.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The index or the name of the axis. 0 is equivalent to None or 'index'.
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      *args, **kwargs :
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      cummax : Series or DataFrame
 |      
 |      See Also
 |      --------
 |      core.window.Expanding.max : Similar functionality
 |          but ignores ``NaN`` values.
 |      DataFrame.max : Return the maximum over
 |          DataFrame axis.
 |      DataFrame.cummax : Return cumulative maximum over DataFrame axis.
 |      DataFrame.cummin : Return cumulative minimum over DataFrame axis.
 |      DataFrame.cumsum : Return cumulative sum over DataFrame axis.
 |      DataFrame.cumprod : Return cumulative product over DataFrame axis.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([2, np.nan, 5, -1, 0])
 |      >>> s
 |      0    2.0
 |      1    NaN
 |      2    5.0
 |      3   -1.0
 |      4    0.0
 |      dtype: float64
 |      
 |      By default, NA values are ignored.
 |      
 |      >>> s.cummax()
 |      0    2.0
 |      1    NaN
 |      2    5.0
 |      3    5.0
 |      4    5.0
 |      dtype: float64
 |      
 |      To include NA values in the operation, use ``skipna=False``
 |      
 |      >>> s.cummax(skipna=False)
 |      0    2.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame([[2.0, 1.0],
 |      ...                    [3.0, np.nan],
 |      ...                    [1.0, 0.0]],
 |      ...                    columns=list('AB'))
 |      >>> df
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |      
 |      By default, iterates over rows and finds the maximum
 |      in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
 |      
 |      >>> df.cummax()
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  3.0  1.0
 |      
 |      To iterate over columns and find the maximum in each row,
 |      use ``axis=1``
 |      
 |      >>> df.cummax(axis=1)
 |           A    B
 |      0  2.0  2.0
 |      1  3.0  NaN
 |      2  1.0  1.0
 |  
 |  cummin(self, axis=None, skipna=True, *args, **kwargs)
 |      Return cumulative minimum over a DataFrame or Series axis.
 |      
 |      Returns a DataFrame or Series of the same size containing the cumulative
 |      minimum.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The index or the name of the axis. 0 is equivalent to None or 'index'.
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      *args, **kwargs :
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      cummin : Series or DataFrame
 |      
 |      See Also
 |      --------
 |      core.window.Expanding.min : Similar functionality
 |          but ignores ``NaN`` values.
 |      DataFrame.min : Return the minimum over
 |          DataFrame axis.
 |      DataFrame.cummax : Return cumulative maximum over DataFrame axis.
 |      DataFrame.cummin : Return cumulative minimum over DataFrame axis.
 |      DataFrame.cumsum : Return cumulative sum over DataFrame axis.
 |      DataFrame.cumprod : Return cumulative product over DataFrame axis.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([2, np.nan, 5, -1, 0])
 |      >>> s
 |      0    2.0
 |      1    NaN
 |      2    5.0
 |      3   -1.0
 |      4    0.0
 |      dtype: float64
 |      
 |      By default, NA values are ignored.
 |      
 |      >>> s.cummin()
 |      0    2.0
 |      1    NaN
 |      2    2.0
 |      3   -1.0
 |      4   -1.0
 |      dtype: float64
 |      
 |      To include NA values in the operation, use ``skipna=False``
 |      
 |      >>> s.cummin(skipna=False)
 |      0    2.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame([[2.0, 1.0],
 |      ...                    [3.0, np.nan],
 |      ...                    [1.0, 0.0]],
 |      ...                    columns=list('AB'))
 |      >>> df
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |      
 |      By default, iterates over rows and finds the minimum
 |      in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
 |      
 |      >>> df.cummin()
 |           A    B
 |      0  2.0  1.0
 |      1  2.0  NaN
 |      2  1.0  0.0
 |      
 |      To iterate over columns and find the minimum in each row,
 |      use ``axis=1``
 |      
 |      >>> df.cummin(axis=1)
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |  
 |  cumprod(self, axis=None, skipna=True, *args, **kwargs)
 |      Return cumulative product over a DataFrame or Series axis.
 |      
 |      Returns a DataFrame or Series of the same size containing the cumulative
 |      product.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The index or the name of the axis. 0 is equivalent to None or 'index'.
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      *args, **kwargs :
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      cumprod : Series or DataFrame
 |      
 |      See Also
 |      --------
 |      core.window.Expanding.prod : Similar functionality
 |          but ignores ``NaN`` values.
 |      DataFrame.prod : Return the product over
 |          DataFrame axis.
 |      DataFrame.cummax : Return cumulative maximum over DataFrame axis.
 |      DataFrame.cummin : Return cumulative minimum over DataFrame axis.
 |      DataFrame.cumsum : Return cumulative sum over DataFrame axis.
 |      DataFrame.cumprod : Return cumulative product over DataFrame axis.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([2, np.nan, 5, -1, 0])
 |      >>> s
 |      0    2.0
 |      1    NaN
 |      2    5.0
 |      3   -1.0
 |      4    0.0
 |      dtype: float64
 |      
 |      By default, NA values are ignored.
 |      
 |      >>> s.cumprod()
 |      0     2.0
 |      1     NaN
 |      2    10.0
 |      3   -10.0
 |      4    -0.0
 |      dtype: float64
 |      
 |      To include NA values in the operation, use ``skipna=False``
 |      
 |      >>> s.cumprod(skipna=False)
 |      0    2.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame([[2.0, 1.0],
 |      ...                    [3.0, np.nan],
 |      ...                    [1.0, 0.0]],
 |      ...                    columns=list('AB'))
 |      >>> df
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |      
 |      By default, iterates over rows and finds the product
 |      in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
 |      
 |      >>> df.cumprod()
 |           A    B
 |      0  2.0  1.0
 |      1  6.0  NaN
 |      2  6.0  0.0
 |      
 |      To iterate over columns and find the product in each row,
 |      use ``axis=1``
 |      
 |      >>> df.cumprod(axis=1)
 |           A    B
 |      0  2.0  2.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |  
 |  cumsum(self, axis=None, skipna=True, *args, **kwargs)
 |      Return cumulative sum over a DataFrame or Series axis.
 |      
 |      Returns a DataFrame or Series of the same size containing the cumulative
 |      sum.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The index or the name of the axis. 0 is equivalent to None or 'index'.
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      *args, **kwargs :
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with NumPy.
 |      
 |      Returns
 |      -------
 |      cumsum : Series or DataFrame
 |      
 |      See Also
 |      --------
 |      core.window.Expanding.sum : Similar functionality
 |          but ignores ``NaN`` values.
 |      DataFrame.sum : Return the sum over
 |          DataFrame axis.
 |      DataFrame.cummax : Return cumulative maximum over DataFrame axis.
 |      DataFrame.cummin : Return cumulative minimum over DataFrame axis.
 |      DataFrame.cumsum : Return cumulative sum over DataFrame axis.
 |      DataFrame.cumprod : Return cumulative product over DataFrame axis.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([2, np.nan, 5, -1, 0])
 |      >>> s
 |      0    2.0
 |      1    NaN
 |      2    5.0
 |      3   -1.0
 |      4    0.0
 |      dtype: float64
 |      
 |      By default, NA values are ignored.
 |      
 |      >>> s.cumsum()
 |      0    2.0
 |      1    NaN
 |      2    7.0
 |      3    6.0
 |      4    6.0
 |      dtype: float64
 |      
 |      To include NA values in the operation, use ``skipna=False``
 |      
 |      >>> s.cumsum(skipna=False)
 |      0    2.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame([[2.0, 1.0],
 |      ...                    [3.0, np.nan],
 |      ...                    [1.0, 0.0]],
 |      ...                    columns=list('AB'))
 |      >>> df
 |           A    B
 |      0  2.0  1.0
 |      1  3.0  NaN
 |      2  1.0  0.0
 |      
 |      By default, iterates over rows and finds the sum
 |      in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
 |      
 |      >>> df.cumsum()
 |           A    B
 |      0  2.0  1.0
 |      1  5.0  NaN
 |      2  6.0  1.0
 |      
 |      To iterate over columns and find the sum in each row,
 |      use ``axis=1``
 |      
 |      >>> df.cumsum(axis=1)
 |           A    B
 |      0  2.0  3.0
 |      1  3.0  NaN
 |      2  1.0  1.0
 |  
 |  diff(self, periods=1, axis=0)
 |      First discrete difference of element.
 |      
 |      Calculates the difference of a DataFrame element compared with another
 |      element in the DataFrame (default is the element in the same column
 |      of the previous row).
 |      
 |      Parameters
 |      ----------
 |      periods : int, default 1
 |          Periods to shift for calculating difference, accepts negative
 |          values.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Take difference over rows (0) or columns (1).
 |      
 |          .. versionadded:: 0.16.1
 |      
 |      Returns
 |      -------
 |      diffed : DataFrame
 |      
 |      See Also
 |      --------
 |      Series.diff: First discrete difference for a Series.
 |      DataFrame.pct_change: Percent change over given number of periods.
 |      DataFrame.shift: Shift index by desired number of periods with an
 |          optional time freq.
 |      
 |      Examples
 |      --------
 |      Difference with previous row
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
 |      ...                    'b': [1, 1, 2, 3, 5, 8],
 |      ...                    'c': [1, 4, 9, 16, 25, 36]})
 |      >>> df
 |         a  b   c
 |      0  1  1   1
 |      1  2  1   4
 |      2  3  2   9
 |      3  4  3  16
 |      4  5  5  25
 |      5  6  8  36
 |      
 |      >>> df.diff()
 |           a    b     c
 |      0  NaN  NaN   NaN
 |      1  1.0  0.0   3.0
 |      2  1.0  1.0   5.0
 |      3  1.0  1.0   7.0
 |      4  1.0  2.0   9.0
 |      5  1.0  3.0  11.0
 |      
 |      Difference with previous column
 |      
 |      >>> df.diff(axis=1)
 |          a    b     c
 |      0 NaN  0.0   0.0
 |      1 NaN -1.0   3.0
 |      2 NaN -1.0   7.0
 |      3 NaN -1.0  13.0
 |      4 NaN  0.0  20.0
 |      5 NaN  2.0  28.0
 |      
 |      Difference with 3rd previous row
 |      
 |      >>> df.diff(periods=3)
 |           a    b     c
 |      0  NaN  NaN   NaN
 |      1  NaN  NaN   NaN
 |      2  NaN  NaN   NaN
 |      3  3.0  2.0  15.0
 |      4  3.0  4.0  21.0
 |      5  3.0  6.0  27.0
 |      
 |      Difference with following row
 |      
 |      >>> df.diff(periods=-1)
 |           a    b     c
 |      0 -1.0  0.0  -3.0
 |      1 -1.0 -1.0  -5.0
 |      2 -1.0 -1.0  -7.0
 |      3 -1.0 -2.0  -9.0
 |      4 -1.0 -3.0 -11.0
 |      5  NaN  NaN   NaN
 |  
 |  div = truediv(self, other, axis='columns', level=None, fill_value=None)
 |  
 |  divide = truediv(self, other, axis='columns', level=None, fill_value=None)
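 |      Both are aliases for ``truediv``, i.e. element-wise (float)
 |      division. A quick sketch with arbitrary values, equivalent to
 |      ``df / 2``:
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2], 'B': [4, 8]})
 |      >>> df.div(2)
 |           A    B
 |      0  0.5  2.0
 |      1  1.0  4.0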
 |  
 |  dot(self, other)
 |      Compute the matrix multiplication between the DataFrame and other.
 |      
 |      This method computes the matrix product between the DataFrame and
 |      the values of another Series, DataFrame or a numpy array.
 |      
 |      It can also be called using ``self @ other`` in Python >= 3.5.
 |      
 |      Parameters
 |      ----------
 |      other : Series, DataFrame or array-like
 |          The other object to compute the matrix product with.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          If other is a Series, return the matrix product between self and
 |          other as a Series. If other is a DataFrame or a numpy.array,
 |          return the matrix product of self and other as a DataFrame.
 |      
 |      See Also
 |      --------
 |      Series.dot: Similar method for Series.
 |      
 |      Notes
 |      -----
 |      The dimensions of DataFrame and other must be compatible in order to
 |      compute the matrix multiplication.
 |      
 |      The dot method for Series computes the inner product, instead of the
 |      matrix product here.
 |      
 |      Examples
 |      --------
 |      Here we multiply a DataFrame with a Series.
 |      
 |      >>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]])
 |      >>> s = pd.Series([1, 1, 2, 1])
 |      >>> df.dot(s)
 |      0    -4
 |      1     5
 |      dtype: int64
 |      
 |      Here we multiply a DataFrame with another DataFrame.
 |      
 |      >>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]])
 |      >>> df.dot(other)
 |          0   1
 |      0   1   4
 |      1   2   2
 |      
 |      Note that the dot method gives the same result as ``@``
 |      
 |      >>> df @ other
 |          0   1
 |      0   1   4
 |      1   2   2
 |      
 |      The dot method also works if other is an np.array.
 |      
 |      >>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]])
 |      >>> df.dot(arr)
 |          0   1
 |      0   1   4
 |      1   2   2
 |  
 |  drop(self, labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')
 |      Drop specified labels from rows or columns.
 |      
 |      Remove rows or columns by specifying label names and corresponding
 |      axis, or by specifying directly index or column names. When using a
 |      multi-index, labels on different levels can be removed by specifying
 |      the level.
 |      
 |      Parameters
 |      ----------
 |      labels : single label or list-like
 |          Index or column labels to drop.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Whether to drop labels from the index (0 or 'index') or
 |          columns (1 or 'columns').
 |      index, columns : single label or list-like
 |          Alternative to specifying axis (``labels, axis=1``
 |          is equivalent to ``columns=labels``).
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      level : int or level name, optional
 |          For MultiIndex, level from which the labels will be removed.
 |      inplace : bool, default False
 |          If True, do operation inplace and return None.
 |      errors : {'ignore', 'raise'}, default 'raise'
 |          If 'ignore', suppress error and only existing labels are
 |          dropped.
 |      
 |      Returns
 |      -------
 |      dropped : pandas.DataFrame
 |      
 |      Raises
 |      ------
 |      KeyError
 |          If none of the labels are found in the selected axis
 |      
 |      See Also
 |      --------
 |      DataFrame.loc : Label-location based indexer for selection by label.
 |      DataFrame.dropna : Return DataFrame with labels on given axis omitted
 |          where (all or any) data are missing.
 |      DataFrame.drop_duplicates : Return DataFrame with duplicate rows
 |          removed, optionally only considering certain columns.
 |      Series.drop : Return Series with specified index labels removed.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(np.arange(12).reshape(3,4),
 |      ...                   columns=['A', 'B', 'C', 'D'])
 |      >>> df
 |         A  B   C   D
 |      0  0  1   2   3
 |      1  4  5   6   7
 |      2  8  9  10  11
 |      
 |      Drop columns
 |      
 |      >>> df.drop(['B', 'C'], axis=1)
 |         A   D
 |      0  0   3
 |      1  4   7
 |      2  8  11
 |      
 |      >>> df.drop(columns=['B', 'C'])
 |         A   D
 |      0  0   3
 |      1  4   7
 |      2  8  11
 |      
 |      Drop a row by index
 |      
 |      >>> df.drop([0, 1])
 |         A  B   C   D
 |      2  8  9  10  11
 |      
 |      Drop columns and/or rows of MultiIndex DataFrame
 |      
 |      >>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
 |      ...                              ['speed', 'weight', 'length']],
 |      ...                      codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
 |      ...                             [0, 1, 2, 0, 1, 2, 0, 1, 2]])
 |      >>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
 |      ...                   data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
 |      ...                         [250, 150], [1.5, 0.8], [320, 250],
 |      ...                         [1, 0.8], [0.3,0.2]])
 |      >>> df
 |                      big     small
 |      lama    speed   45.0    30.0
 |              weight  200.0   100.0
 |              length  1.5     1.0
 |      cow     speed   30.0    20.0
 |              weight  250.0   150.0
 |              length  1.5     0.8
 |      falcon  speed   320.0   250.0
 |              weight  1.0     0.8
 |              length  0.3     0.2
 |      
 |      >>> df.drop(index='cow', columns='small')
 |                      big
 |      lama    speed   45.0
 |              weight  200.0
 |              length  1.5
 |      falcon  speed   320.0
 |              weight  1.0
 |              length  0.3
 |      
 |      >>> df.drop(index='length', level=1)
 |                      big     small
 |      lama    speed   45.0    30.0
 |              weight  200.0   100.0
 |      cow     speed   30.0    20.0
 |              weight  250.0   150.0
 |      falcon  speed   320.0   250.0
 |              weight  1.0     0.8
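 |      
 |      With ``errors='ignore'``, labels missing from the axis are skipped
 |      rather than raising (a small sketch on a fresh frame):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
 |      >>> df.drop(columns=['B', 'Z'], errors='ignore')
 |         A
 |      0  1
 |      1  2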
 |  
 |  drop_duplicates(self, subset=None, keep='first', inplace=False)
 |      Return DataFrame with duplicate rows removed, optionally only
 |      considering certain columns.
 |      
 |      Parameters
 |      ----------
 |      subset : column label or sequence of labels, optional
 |          Only consider certain columns for identifying duplicates, by
 |          default use all of the columns
 |      keep : {'first', 'last', False}, default 'first'
 |          - ``first`` : Drop duplicates except for the first occurrence.
 |          - ``last`` : Drop duplicates except for the last occurrence.
 |          - False : Drop all duplicates.
 |      inplace : boolean, default False
 |          Whether to drop duplicates in place or to return a copy
 |      
 |      Returns
 |      -------
 |      deduplicated : DataFrame
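 |      
 |      Examples
 |      --------
 |      A minimal sketch with arbitrary values; with the default
 |      ``keep='first'`` the first occurrence of each duplicated row is
 |      retained:
 |      
 |      >>> df = pd.DataFrame({'brand': ['Yum', 'Yum', 'Indomie'],
 |      ...                    'style': ['cup', 'cup', 'pack']})
 |      >>> df.drop_duplicates()
 |           brand  style
 |      0      Yum    cup
 |      2  Indomie   pack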
 |  
 |  dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False)
 |      Remove missing values.
 |      
 |      See the :ref:`User Guide <missing_data>` for more on which values are
 |      considered missing, and how to work with missing data.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Determine if rows or columns which contain missing values are
 |          removed.
 |      
 |          * 0, or 'index' : Drop rows which contain missing values.
 |          * 1, or 'columns' : Drop columns which contain missing value.
 |      
 |          .. deprecated:: 0.23.0
 |      
 |             Pass tuple or list to drop on multiple axes.
 |             Only a single axis is allowed.
 |      
 |      how : {'any', 'all'}, default 'any'
 |          Determine if row or column is removed from DataFrame, when we have
 |          at least one NA or all NA.
 |      
 |          * 'any' : If any NA values are present, drop that row or column.
 |          * 'all' : If all values are NA, drop that row or column.
 |      
 |      thresh : int, optional
 |          Require that many non-NA values.
 |      subset : array-like, optional
 |          Labels along other axis to consider, e.g. if you are dropping rows
 |          these would be a list of columns to include.
 |      inplace : bool, default False
 |          If True, do operation inplace and return None.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          DataFrame with NA entries dropped from it.
 |      
 |      See Also
 |      --------
 |      DataFrame.isna: Indicate missing values.
 |      DataFrame.notna : Indicate existing (non-missing) values.
 |      DataFrame.fillna : Replace missing values.
 |      Series.dropna : Drop missing values.
 |      Index.dropna : Drop missing indices.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
 |      ...                    "toy": [np.nan, 'Batmobile', 'Bullwhip'],
 |      ...                    "born": [pd.NaT, pd.Timestamp("1940-04-25"),
 |      ...                             pd.NaT]})
 |      >>> df
 |             name        toy       born
 |      0    Alfred        NaN        NaT
 |      1    Batman  Batmobile 1940-04-25
 |      2  Catwoman   Bullwhip        NaT
 |      
 |      Drop the rows where at least one element is missing.
 |      
 |      >>> df.dropna()
 |           name        toy       born
 |      1  Batman  Batmobile 1940-04-25
 |      
 |      Drop the columns where at least one element is missing.
 |      
 |      >>> df.dropna(axis='columns')
 |             name
 |      0    Alfred
 |      1    Batman
 |      2  Catwoman
 |      
 |      Drop the rows where all elements are missing.
 |      
 |      >>> df.dropna(how='all')
 |             name        toy       born
 |      0    Alfred        NaN        NaT
 |      1    Batman  Batmobile 1940-04-25
 |      2  Catwoman   Bullwhip        NaT
 |      
 |      Keep only the rows with at least 2 non-NA values.
 |      
 |      >>> df.dropna(thresh=2)
 |             name        toy       born
 |      1    Batman  Batmobile 1940-04-25
 |      2  Catwoman   Bullwhip        NaT
 |      
 |      Define in which columns to look for missing values.
 |      
 |      >>> df.dropna(subset=['name', 'born'])
 |             name        toy       born
 |      1    Batman  Batmobile 1940-04-25
 |      
 |      Keep the DataFrame with valid entries in the same variable.
 |      
 |      >>> df.dropna(inplace=True)
 |      >>> df
 |           name        toy       born
 |      1  Batman  Batmobile 1940-04-25
 |  
 |  duplicated(self, subset=None, keep='first')
 |      Return boolean Series denoting duplicate rows, optionally only
 |      considering certain columns.
 |      
 |      Parameters
 |      ----------
 |      subset : column label or sequence of labels, optional
 |          Only consider certain columns for identifying duplicates, by
 |          default use all of the columns
 |      keep : {'first', 'last', False}, default 'first'
 |          - ``first`` : Mark duplicates as ``True`` except for the
 |            first occurrence.
 |          - ``last`` : Mark duplicates as ``True`` except for the
 |            last occurrence.
 |          - False : Mark all duplicates as ``True``.
 |      
 |      Returns
 |      -------
 |      duplicated : Series
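 |      
 |      Examples
 |      --------
 |      A minimal sketch with arbitrary values; ``keep=False`` marks every
 |      member of a duplicated group:
 |      
 |      >>> df = pd.DataFrame({'brand': ['Yum', 'Yum', 'Indomie'],
 |      ...                    'style': ['cup', 'cup', 'pack']})
 |      >>> df.duplicated()
 |      0    False
 |      1     True
 |      2    False
 |      dtype: bool
 |      
 |      >>> df.duplicated(keep=False)
 |      0     True
 |      1     True
 |      2    False
 |      dtype: bool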
 |  
 |  eq(self, other, axis='columns', level=None)
 |      Get 'equal to' of DataFrame and other, element-wise (binary operator `eq`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
 |  
 |  eval(self, expr, inplace=False, **kwargs)
 |      Evaluate a string describing operations on DataFrame columns.
 |      
 |      Operates on columns only, not specific rows or elements.  This allows
 |      `eval` to run arbitrary code, which can make you vulnerable to code
 |      injection if you pass user input to this function.
 |      
 |      Parameters
 |      ----------
 |      expr : str
 |          The expression string to evaluate.
 |      inplace : bool, default False
 |          If the expression contains an assignment, whether to perform the
 |          operation inplace and mutate the existing DataFrame. Otherwise,
 |          a new DataFrame is returned.
 |      
 |          .. versionadded:: 0.18.0
 |      kwargs : dict
 |          See the documentation for :func:`~pandas.eval` for complete details
 |          on the keyword arguments accepted by
 |          :meth:`~pandas.DataFrame.query`.
 |      
 |      Returns
 |      -------
 |      ndarray, scalar, or pandas object
 |          The result of the evaluation.
 |      
 |      See Also
 |      --------
 |      DataFrame.query : Evaluates a boolean expression to query the columns
 |          of a frame.
 |      DataFrame.assign : Can evaluate an expression or function to create new
 |          values for a column.
 |      pandas.eval : Evaluate a Python expression as a string using various
 |          backends.
 |      
 |      Notes
 |      -----
 |      For more details see the API documentation for :func:`~pandas.eval`.
 |      For detailed examples see :ref:`enhancing performance with eval
 |      <enhancingperf.eval>`.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
 |      >>> df
 |         A   B
 |      0  1  10
 |      1  2   8
 |      2  3   6
 |      3  4   4
 |      4  5   2
 |      >>> df.eval('A + B')
 |      0    11
 |      1    10
 |      2     9
 |      3     8
 |      4     7
 |      dtype: int64
 |      
 |      Assignment is allowed, though by default the original DataFrame is
 |      not modified.
 |      
 |      >>> df.eval('C = A + B')
 |         A   B   C
 |      0  1  10  11
 |      1  2   8  10
 |      2  3   6   9
 |      3  4   4   8
 |      4  5   2   7
 |      >>> df
 |         A   B
 |      0  1  10
 |      1  2   8
 |      2  3   6
 |      3  4   4
 |      4  5   2
 |      
 |      Use ``inplace=True`` to modify the original DataFrame.
 |      
 |      >>> df.eval('C = A + B', inplace=True)
 |      >>> df
 |         A   B   C
 |      0  1  10  11
 |      1  2   8  10
 |      2  3   6   9
 |      3  4   4   8
 |      4  5   2   7
 |  
 |  ewm(self, com=None, span=None, halflife=None, alpha=None, min_periods=0, adjust=True, ignore_na=False, axis=0)
 |      Provides exponential weighted functions.
 |      
 |      .. versionadded:: 0.18.0
 |      
 |      Parameters
 |      ----------
 |      com : float, optional
 |          Specify decay in terms of center of mass,
 |          :math:`\alpha = 1 / (1 + com),\text{ for } com \geq 0`
 |      span : float, optional
 |          Specify decay in terms of span,
 |          :math:`\alpha = 2 / (span + 1),\text{ for } span \geq 1`
 |      halflife : float, optional
 |          Specify decay in terms of half-life,
 |          :math:`\alpha = 1 - exp(log(0.5) / halflife),\text{ for } halflife > 0`
 |      alpha : float, optional
 |          Specify smoothing factor :math:`\alpha` directly,
 |          :math:`0 < \alpha \leq 1`
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      min_periods : int, default 0
 |          Minimum number of observations in window required to have a value
 |          (otherwise result is NA).
 |      adjust : bool, default True
 |          Divide by decaying adjustment factor in beginning periods to account
 |          for imbalance in relative weightings (viewing EWMA as a moving average)
 |      ignore_na : bool, default False
 |          Ignore missing values when calculating weights;
 |          specify True to reproduce pre-0.15.0 behavior
 |      
 |      Returns
 |      -------
 |      a Window sub-classed for the particular operation
 |      
 |      See Also
 |      --------
 |      rolling : Provides rolling window calculations.
 |      expanding : Provides expanding transformations.
 |      
 |      Notes
 |      -----
 |      Exactly one of center of mass, span, half-life, and alpha must be provided.
 |      Allowed values and relationship between the parameters are specified in the
 |      parameter descriptions above; see the link at the end of this section for
 |      a detailed explanation.
 |      
 |      When adjust is True (default), weighted averages are calculated using
 |      weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
 |      
 |      When adjust is False, weighted averages are calculated recursively as:
 |         weighted_average[0] = arg[0];
 |         weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
 |      
 |      When ignore_na is False (default), weights are based on absolute positions.
 |      For example, the weights of x and y used in calculating the final weighted
 |      average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and
 |      (1-alpha)**2 and alpha (if adjust is False).
 |      
 |      When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based
 |      on relative positions. For example, the weights of x and y used in
 |      calculating the final weighted average of [x, None, y] are 1-alpha and 1
 |      (if adjust is True), and 1-alpha and alpha (if adjust is False).
 |      
 |      More details can be found at
 |      http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
 |      >>> df
 |           B
 |      0  0.0
 |      1  1.0
 |      2  2.0
 |      3  NaN
 |      4  4.0
 |      
 |      >>> df.ewm(com=0.5).mean()
 |                B
 |      0  0.000000
 |      1  0.750000
 |      2  1.615385
 |      3  1.615385
 |      4  3.670213
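 |      
 |      The second value above (0.750000) can be reproduced by hand from the
 |      adjusted weights described in the Notes (a small sanity check;
 |      rounding guards against floating-point noise):
 |      
 |      >>> alpha = 1 / (1 + 0.5)      # com=0.5 implies alpha = 2/3
 |      >>> w0, w1 = (1 - alpha), 1.0  # adjusted weights for x0=0, x1=1
 |      >>> round((w0 * 0 + w1 * 1) / (w0 + w1), 6)
 |      0.75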
 |  
 |  expanding(self, min_periods=1, center=False, axis=0)
 |      Provides expanding transformations.
 |      
 |      .. versionadded:: 0.18.0
 |      
 |      Parameters
 |      ----------
 |      min_periods : int, default 1
 |          Minimum number of observations in window required to have a value
 |          (otherwise result is NA).
 |      center : bool, default False
 |          Set the labels at the center of the window.
 |      axis : int or str, default 0
 |      
 |      Returns
 |      -------
 |      a Window sub-classed for the particular operation
 |      
 |      See Also
 |      --------
 |      rolling : Provides rolling window calculations.
 |      ewm : Provides exponential weighted functions.
 |      
 |      Notes
 |      -----
 |      By default, the result is set to the right edge of the window. This can be
 |      changed to the center of the window by setting ``center=True``.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
 |      >>> df
 |           B
 |      0  0.0
 |      1  1.0
 |      2  2.0
 |      3  NaN
 |      4  4.0
 |      
 |      >>> df.expanding(2).sum()
 |           B
 |      0  NaN
 |      1  1.0
 |      2  3.0
 |      3  3.0
 |      4  7.0
 |  
 |  fillna(self, value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs)
 |      Fill NA/NaN values using the specified method.
 |      
 |      Parameters
 |      ----------
 |      value : scalar, dict, Series, or DataFrame
 |          Value to use to fill holes (e.g. 0), alternately a
 |          dict/Series/DataFrame of values specifying which value to use for
 |          each index (for a Series) or column (for a DataFrame). (values not
 |          in the dict/Series/DataFrame will not be filled). This value cannot
 |          be a list.
 |      method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
 |          Method to use for filling holes in reindexed Series
 |          pad / ffill: propagate last valid observation forward to next valid
 |          backfill / bfill: use NEXT valid observation to fill gap
 |      axis : {0 or 'index', 1 or 'columns'}
 |      inplace : boolean, default False
 |          If True, fill in place. Note: this will modify any
 |          other views on this object, (e.g. a no-copy slice for a column in a
 |          DataFrame).
 |      limit : int, default None
 |          If method is specified, this is the maximum number of consecutive
 |          NaN values to forward/backward fill. In other words, if there is
 |          a gap with more than this number of consecutive NaNs, it will only
 |          be partially filled. If method is not specified, this is the
 |          maximum number of entries along the entire axis where NaNs will be
 |          filled. Must be greater than 0 if not None.
 |      downcast : dict, default is None
 |          a dict of item->dtype of what to downcast if possible,
 |          or the string 'infer' which will try to downcast to an appropriate
 |          equal type (e.g. float64 to int64 if possible)
 |      
 |      Returns
 |      -------
 |      filled : DataFrame
 |      
 |      See Also
 |      --------
 |      interpolate : Fill NaN values using interpolation.
 |      reindex, asfreq
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
 |      ...                    [3, 4, np.nan, 1],
 |      ...                    [np.nan, np.nan, np.nan, 5],
 |      ...                    [np.nan, 3, np.nan, 4]],
 |      ...                    columns=list('ABCD'))
 |      >>> df
 |           A    B   C  D
 |      0  NaN  2.0 NaN  0
 |      1  3.0  4.0 NaN  1
 |      2  NaN  NaN NaN  5
 |      3  NaN  3.0 NaN  4
 |      
 |      Replace all NaN elements with 0s.
 |      
 |      >>> df.fillna(0)
 |          A   B   C   D
 |      0   0.0 2.0 0.0 0
 |      1   3.0 4.0 0.0 1
 |      2   0.0 0.0 0.0 5
 |      3   0.0 3.0 0.0 4
 |      
 |      We can also propagate non-null values forward or backward.
 |      
 |      >>> df.fillna(method='ffill')
 |          A   B   C   D
 |      0   NaN 2.0 NaN 0
 |      1   3.0 4.0 NaN 1
 |      2   3.0 4.0 NaN 5
 |      3   3.0 3.0 NaN 4
 |      
 |      Replace all NaN elements in column 'A', 'B', 'C', and 'D', with 0, 1,
 |      2, and 3 respectively.
 |      
 |      >>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
 |      >>> df.fillna(value=values)
 |          A   B   C   D
 |      0   0.0 2.0 2.0 0
 |      1   3.0 4.0 2.0 1
 |      2   0.0 1.0 2.0 5
 |      3   0.0 3.0 2.0 4
 |      
 |      Only replace the first NaN element.
 |      
 |      >>> df.fillna(value=values, limit=1)
 |          A   B   C   D
 |      0   0.0 2.0 2.0 0
 |      1   3.0 4.0 NaN 1
 |      2   NaN 1.0 NaN 5
 |      3   NaN 3.0 NaN 4
 |  
 |  floordiv(self, other, axis='columns', level=None, fill_value=None)
 |      Integer division of dataframe and other, element-wise (binary operator `floordiv`).
 |      
 |      Equivalent to ``dataframe // other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rfloordiv`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar using the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant, with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis, with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of different shape, with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
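 |      
 |      None of the shared examples above uses `floordiv` itself; a minimal
 |      sketch with the same frame:
 |      
 |      >>> df.floordiv(2)
 |                 angles  degrees
 |      circle          0      180
 |      triangle        1       90
 |      rectangle       2      180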
 |  
 |  ge(self, other, axis='columns', level=None)
 |      Greater than or equal to of dataframe and other, element-wise (binary operator `ge`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
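 |      
 |      None of the shared examples above calls `ge` directly; a minimal
 |      sketch with the same frame and a scalar threshold:
 |      
 |      >>> df.ge(150)
 |          cost  revenue
 |      A   True    False
 |      B   True     True
 |      C  False     True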
 |  
 |  get_value(self, index, col, takeable=False)
 |      Quickly retrieve single value at passed column and index.
 |      
 |      .. deprecated:: 0.21.0
 |          Use .at[] or .iat[] accessors instead.
 |      
 |      Parameters
 |      ----------
 |      index : row label
 |      col : column label
 |      takeable : interpret the index/col as indexers, default False
 |      
 |      Returns
 |      -------
 |      value : scalar value
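 |      
 |      Examples
 |      --------
 |      As the deprecation note suggests, prefer the `.at` accessor; a
 |      minimal sketch (frame and labels are illustrative only):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
 |      >>> df.at['y', 'A']
 |      2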
 |  
 |  gt(self, other, axis='columns', level=None)
 |      Greater than of dataframe and other, element-wise (binary operator `gt`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
 |  
 |  hist = hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None, ax=None, sharex=False, sharey=False, figsize=None, layout=None, bins=10, **kwds)
 |      Make a histogram of the DataFrame's columns.
 |      
 |      A `histogram`_ is a representation of the distribution of data.
 |      This function calls :meth:`matplotlib.pyplot.hist`, on each series in
 |      the DataFrame, resulting in one histogram per column.
 |      
 |      .. _histogram: https://en.wikipedia.org/wiki/Histogram
 |      
 |      Parameters
 |      ----------
 |      data : DataFrame
 |          The pandas object holding the data.
 |      column : string or sequence
 |          If passed, will be used to limit data to a subset of columns.
 |      by : object, optional
 |          If passed, then used to form histograms for separate groups.
 |      grid : boolean, default True
 |          Whether to show axis grid lines.
 |      xlabelsize : int, default None
 |          If specified changes the x-axis label size.
 |      xrot : float, default None
 |          Rotation of x axis labels. For example, a value of 90 displays the
 |          x labels rotated 90 degrees clockwise.
 |      ylabelsize : int, default None
 |          If specified changes the y-axis label size.
 |      yrot : float, default None
 |          Rotation of y axis labels. For example, a value of 90 displays the
 |          y labels rotated 90 degrees clockwise.
 |      ax : Matplotlib axes object, default None
 |          The axes to plot the histogram on.
 |      sharex : boolean, default True if ax is None else False
 |          In case subplots=True, share x axis and set some x axis labels to
 |          invisible; defaults to True if ax is None otherwise False if an ax
 |          is passed in.
 |          Note that passing in both an ax and sharex=True will alter all x axis
 |          labels for all subplots in a figure.
 |      sharey : boolean, default False
 |          In case subplots=True, share y axis and set some y axis labels to
 |          invisible.
 |      figsize : tuple
 |          The size in inches of the figure to create. Uses the value in
 |          `matplotlib.rcParams` by default.
 |      layout : tuple, optional
 |          Tuple of (rows, columns) for the layout of the histograms.
 |      bins : integer or sequence, default 10
 |          Number of histogram bins to be used. If an integer is given, bins + 1
 |          bin edges are calculated and returned. If bins is a sequence, gives
 |          bin edges, including left edge of first bin and right edge of last
 |          bin. In this case, bins is returned unmodified.
 |      **kwds
 |          All other plotting keyword arguments to be passed to
 |          :meth:`matplotlib.pyplot.hist`.
 |      
 |      Returns
 |      -------
 |      axes : matplotlib.AxesSubplot or numpy.ndarray of them
 |      
 |      See Also
 |      --------
 |      matplotlib.pyplot.hist : Plot a histogram using matplotlib.
 |      
 |      Examples
 |      --------
 |      
 |      .. plot::
 |          :context: close-figs
 |      
 |          This example draws a histogram based on the length and width of
 |          some animals, displayed in three bins.
 |      
 |          >>> df = pd.DataFrame({
 |          ...     'length': [1.5, 0.5, 1.2, 0.9, 3],
 |          ...     'width': [0.7, 0.2, 0.15, 0.2, 1.1]
 |          ...     }, index= ['pig', 'rabbit', 'duck', 'chicken', 'horse'])
 |          >>> hist = df.hist(bins=3)
 |  
 |  idxmax(self, axis=0, skipna=True)
 |      Return index of first occurrence of maximum over requested axis.
 |      NA/null values are excluded.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          0 or 'index' for row-wise, 1 or 'columns' for column-wise
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      
 |      Returns
 |      -------
 |      idxmax : Series
 |      
 |      Raises
 |      ------
 |      ValueError
 |          * If the row/column is empty
 |      
 |      See Also
 |      --------
 |      Series.idxmax
 |      
 |      Notes
 |      -----
 |      This method is the DataFrame version of ``ndarray.argmax``.
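 |      
 |      Examples
 |      --------
 |      A minimal sketch (column-wise by default; the frame is illustrative
 |      only):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 3, 2], 'b': [9, 7, 8]})
 |      >>> df.idxmax()
 |      a    1
 |      b    0
 |      dtype: int64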
 |  
 |  idxmin(self, axis=0, skipna=True)
 |      Return index of first occurrence of minimum over requested axis.
 |      NA/null values are excluded.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          0 or 'index' for row-wise, 1 or 'columns' for column-wise
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      
 |      Returns
 |      -------
 |      idxmin : Series
 |      
 |      Raises
 |      ------
 |      ValueError
 |          * If the row/column is empty
 |      
 |      See Also
 |      --------
 |      Series.idxmin
 |      
 |      Notes
 |      -----
 |      This method is the DataFrame version of ``ndarray.argmin``.
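 |      
 |      Examples
 |      --------
 |      A minimal sketch, mirroring the `idxmax` example above:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 3, 2], 'b': [9, 7, 8]})
 |      >>> df.idxmin()
 |      a    0
 |      b    1
 |      dtype: int64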
 |  
 |  info(self, verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None)
 |      Print a concise summary of a DataFrame.
 |      
 |      This method prints information about a DataFrame including
 |      the index dtype and column dtypes, non-null values and memory usage.
 |      
 |      Parameters
 |      ----------
 |      verbose : bool, optional
 |          Whether to print the full summary. By default, the setting in
 |          ``pandas.options.display.max_info_columns`` is followed.
 |      buf : writable buffer, defaults to sys.stdout
 |          Where to send the output. By default, the output is printed to
 |          sys.stdout. Pass a writable buffer if you need to further process
 |          the output.
 |      max_cols : int, optional
 |          When to switch from the verbose to the truncated output. If the
 |          DataFrame has more than `max_cols` columns, the truncated output
 |          is used. By default, the setting in
 |          ``pandas.options.display.max_info_columns`` is used.
 |      memory_usage : bool, str, optional
 |          Specifies whether total memory usage of the DataFrame
 |          elements (including the index) should be displayed. By default,
 |          this follows the ``pandas.options.display.memory_usage`` setting.
 |      
 |          True always shows memory usage. False never shows memory usage.
 |          A value of 'deep' is equivalent to "True with deep introspection".
 |          Memory usage is shown in human-readable units (base-2
 |          representation). Without deep introspection a memory estimation is
 |          made based on column dtype and number of rows assuming values
 |          consume the same memory amount for corresponding dtypes. With deep
 |          memory introspection, a real memory usage calculation is performed
 |          at the cost of computational resources.
 |      null_counts : bool, optional
 |          Whether to show the non-null counts. By default, this is shown
 |          only if the frame is smaller than
 |          ``pandas.options.display.max_info_rows`` and
 |          ``pandas.options.display.max_info_columns``. A value of True always
 |          shows the counts, and False never shows the counts.
 |      
 |      Returns
 |      -------
 |      None
 |          This method prints a summary of a DataFrame and returns None.
 |      
 |      See Also
 |      --------
 |      DataFrame.describe: Generate descriptive statistics of DataFrame
 |          columns.
 |      DataFrame.memory_usage: Memory usage of DataFrame columns.
 |      
 |      Examples
 |      --------
 |      >>> int_values = [1, 2, 3, 4, 5]
 |      >>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
 |      >>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
 |      >>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
 |      ...                   "float_col": float_values})
 |      >>> df
 |         int_col text_col  float_col
 |      0        1    alpha       0.00
 |      1        2     beta       0.25
 |      2        3    gamma       0.50
 |      3        4    delta       0.75
 |      4        5  epsilon       1.00
 |      
 |      Prints information about all columns:
 |      
 |      >>> df.info(verbose=True)
 |      <class 'pandas.core.frame.DataFrame'>
 |      RangeIndex: 5 entries, 0 to 4
 |      Data columns (total 3 columns):
 |      int_col      5 non-null int64
 |      text_col     5 non-null object
 |      float_col    5 non-null float64
 |      dtypes: float64(1), int64(1), object(1)
 |      memory usage: 200.0+ bytes
 |      
 |      Prints a summary of the column count and dtypes but no per-column
 |      information:
 |      
 |      >>> df.info(verbose=False)
 |      <class 'pandas.core.frame.DataFrame'>
 |      RangeIndex: 5 entries, 0 to 4
 |      Columns: 3 entries, int_col to float_col
 |      dtypes: float64(1), int64(1), object(1)
 |      memory usage: 200.0+ bytes
 |      
 |      Pipe the output of DataFrame.info to a buffer instead of sys.stdout,
 |      get the buffer content and write it to a text file:
 |      
 |      >>> import io
 |      >>> buffer = io.StringIO()
 |      >>> df.info(buf=buffer)
 |      >>> s = buffer.getvalue()
 |      >>> with open("df_info.txt", "w",
 |      ...           encoding="utf-8") as f:  # doctest: +SKIP
 |      ...     f.write(s)
 |      260
 |      
 |      The `memory_usage` parameter allows deep introspection mode, especially
 |      useful for big DataFrames and fine-tuning memory optimization:
 |      
 |      >>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
 |      >>> df = pd.DataFrame({
 |      ...     'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
 |      ...     'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
 |      ...     'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
 |      ... })
 |      >>> df.info()
 |      <class 'pandas.core.frame.DataFrame'>
 |      RangeIndex: 1000000 entries, 0 to 999999
 |      Data columns (total 3 columns):
 |      column_1    1000000 non-null object
 |      column_2    1000000 non-null object
 |      column_3    1000000 non-null object
 |      dtypes: object(3)
 |      memory usage: 22.9+ MB
 |      
 |      >>> df.info(memory_usage='deep')
 |      <class 'pandas.core.frame.DataFrame'>
 |      RangeIndex: 1000000 entries, 0 to 999999
 |      Data columns (total 3 columns):
 |      column_1    1000000 non-null object
 |      column_2    1000000 non-null object
 |      column_3    1000000 non-null object
 |      dtypes: object(3)
 |      memory usage: 188.8 MB
 |  
 |  insert(self, loc, column, value, allow_duplicates=False)
 |      Insert column into DataFrame at specified location.
 |      
 |      Raises a ValueError if `column` is already contained in the DataFrame,
 |      unless `allow_duplicates` is set to True.
 |      
 |      Parameters
 |      ----------
 |      loc : int
 |          Insertion index. Must satisfy 0 <= loc <= len(columns).
 |      column : string, number, or hashable object
 |          label of the inserted column
 |      value : int, Series, or array-like
 |      allow_duplicates : bool, optional
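 |      
 |      Examples
 |      --------
 |      A small sketch inserting a column at position 1 (frame and labels
 |      are illustrative only):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2], 'C': [5, 6]})
 |      >>> df.insert(1, 'B', [3, 4])
 |      >>> df
 |         A  B  C
 |      0  1  3  5
 |      1  2  4  6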
 |  
 |  isin(self, values)
 |      Whether each element in the DataFrame is contained in values.
 |      
 |      Parameters
 |      ----------
 |      values : iterable, Series, DataFrame or dict
 |          The result will only be true at a location if all the
 |          labels match. If `values` is a Series, that's the index. If
 |          `values` is a dict, the keys must be the column names,
 |          which must match. If `values` is a DataFrame,
 |          then both the index and column labels must match.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          DataFrame of booleans showing whether each element in the DataFrame
 |          is contained in values.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq: Equality test for DataFrame.
 |      Series.isin: Equivalent method on Series.
 |      Series.str.contains: Test if pattern or regex is contained within a
 |          string of a Series or Index.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]},
 |      ...                   index=['falcon', 'dog'])
 |      >>> df
 |              num_legs  num_wings
 |      falcon         2          2
 |      dog            4          0
 |      
 |      When ``values`` is a list check whether every value in the DataFrame
 |      is present in the list (which animals have 0 or 2 legs or wings)
 |      
 |      >>> df.isin([0, 2])
 |              num_legs  num_wings
 |      falcon      True       True
 |      dog        False       True
 |      
 |      When ``values`` is a dict, we can pass values to check for each
 |      column separately:
 |      
 |      >>> df.isin({'num_wings': [0, 3]})
 |              num_legs  num_wings
 |      falcon     False      False
 |      dog        False       True
 |      
 |      When ``values`` is a Series or DataFrame the index and column must
 |      match. Note that 'falcon' does not match based on the number of legs
 |      in df2.
 |      
 |      >>> other = pd.DataFrame({'num_legs': [8, 2],'num_wings': [0, 2]},
 |      ...                      index=['spider', 'falcon'])
 |      >>> df.isin(other)
 |              num_legs  num_wings
 |      falcon      True       True
 |      dog        False      False
 |  
 |  isna(self)
 |      Detect missing values.
 |      
 |      Return a boolean same-sized object indicating if the values are NA.
 |      NA values, such as None or :attr:`numpy.NaN`, get mapped to True
 |      values.
 |      Everything else gets mapped to False values. Characters such as empty
 |      strings ``''`` or :attr:`numpy.inf` are not considered NA values
 |      (unless you set ``pandas.options.mode.use_inf_as_na = True``).
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Mask of bool values for each element in DataFrame that
 |          indicates whether an element is an NA value.
 |      
 |      See Also
 |      --------
 |      DataFrame.isnull : Alias of isna.
 |      DataFrame.notna : Boolean inverse of isna.
 |      DataFrame.dropna : Omit axes labels with missing values.
 |      isna : Top-level isna.
 |      
 |      Examples
 |      --------
 |      Show which entries in a DataFrame are NA.
 |      
 |      >>> df = pd.DataFrame({'age': [5, 6, np.NaN],
 |      ...                    'born': [pd.NaT, pd.Timestamp('1939-05-27'),
 |      ...                             pd.Timestamp('1940-04-25')],
 |      ...                    'name': ['Alfred', 'Batman', ''],
 |      ...                    'toy': [None, 'Batmobile', 'Joker']})
 |      >>> df
 |         age       born    name        toy
 |      0  5.0        NaT  Alfred       None
 |      1  6.0 1939-05-27  Batman  Batmobile
 |      2  NaN 1940-04-25              Joker
 |      
 |      >>> df.isna()
 |           age   born   name    toy
 |      0  False   True  False   True
 |      1  False  False  False  False
 |      2   True  False  False  False
 |      
 |      Show which entries in a Series are NA.
 |      
 |      >>> ser = pd.Series([5, 6, np.NaN])
 |      >>> ser
 |      0    5.0
 |      1    6.0
 |      2    NaN
 |      dtype: float64
 |      
 |      >>> ser.isna()
 |      0    False
 |      1    False
 |      2     True
 |      dtype: bool
 |  
 |  isnull(self)
 |      Detect missing values.
 |      
 |      Return a boolean same-sized object indicating if the values are NA.
 |      NA values, such as None or :attr:`numpy.NaN`, get mapped to True
 |      values.
 |      Everything else gets mapped to False values. Characters such as empty
 |      strings ``''`` or :attr:`numpy.inf` are not considered NA values
 |      (unless you set ``pandas.options.mode.use_inf_as_na = True``).
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Mask of bool values for each element in DataFrame that
 |          indicates whether an element is an NA value.
 |      
 |      See Also
 |      --------
 |      DataFrame.isnull : Alias of isna.
 |      DataFrame.notna : Boolean inverse of isna.
 |      DataFrame.dropna : Omit axes labels with missing values.
 |      isna : Top-level isna.
 |      
 |      Examples
 |      --------
 |      Show which entries in a DataFrame are NA.
 |      
 |      >>> df = pd.DataFrame({'age': [5, 6, np.NaN],
 |      ...                    'born': [pd.NaT, pd.Timestamp('1939-05-27'),
 |      ...                             pd.Timestamp('1940-04-25')],
 |      ...                    'name': ['Alfred', 'Batman', ''],
 |      ...                    'toy': [None, 'Batmobile', 'Joker']})
 |      >>> df
 |         age       born    name        toy
 |      0  5.0        NaT  Alfred       None
 |      1  6.0 1939-05-27  Batman  Batmobile
 |      2  NaN 1940-04-25              Joker
 |      
 |      >>> df.isna()
 |           age   born   name    toy
 |      0  False   True  False   True
 |      1  False  False  False  False
 |      2   True  False  False  False
 |      
 |      Show which entries in a Series are NA.
 |      
 |      >>> ser = pd.Series([5, 6, np.NaN])
 |      >>> ser
 |      0    5.0
 |      1    6.0
 |      2    NaN
 |      dtype: float64
 |      
 |      >>> ser.isna()
 |      0    False
 |      1    False
 |      2     True
 |      dtype: bool
 |  
 |  items = iteritems(self)
 |  
 |  iteritems(self)
 |      Iterator over (column name, Series) pairs.
 |      
 |      Iterates over the DataFrame columns, returning a tuple with
 |      the column name and the content as a Series.
 |      
 |      Yields
 |      ------
 |      label : object
 |          The column names for the DataFrame being iterated over.
 |      content : Series
 |          The column entries belonging to each label, as a Series.
 |      
 |      See Also
 |      --------
 |      DataFrame.iterrows : Iterate over DataFrame rows as
 |          (index, Series) pairs.
 |      DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
 |          of the values.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
 |      ...                   'population': [1864, 22000, 80000]},
 |      ...                   index=['panda', 'polar', 'koala'])
 |      >>> df
 |              species   population
 |      panda   bear      1864
 |      polar   bear      22000
 |      koala   marsupial 80000
 |      >>> for label, content in df.iteritems():
 |      ...     print('label:', label)
 |      ...     print('content:', content, sep='\n')
 |      ...
 |      label: species
 |      content:
 |      panda         bear
 |      polar         bear
 |      koala    marsupial
 |      Name: species, dtype: object
 |      label: population
 |      content:
 |      panda     1864
 |      polar    22000
 |      koala    80000
 |      Name: population, dtype: int64
 |  
 |  iterrows(self)
 |      Iterate over DataFrame rows as (index, Series) pairs.
 |      
 |      Yields
 |      ------
 |      index : label or tuple of label
 |          The index of the row. A tuple for a `MultiIndex`.
 |      data : Series
 |          The data of the row as a Series.
 |      
 |      Returns
 |      -------
 |      it : generator
 |          A generator that iterates over the rows of the frame.
 |      
 |      See Also
 |      --------
 |      itertuples : Iterate over DataFrame rows as namedtuples of the values.
 |      iteritems : Iterate over (column name, Series) pairs.
 |      
 |      Notes
 |      -----
 |      
 |      1. Because ``iterrows`` returns a Series for each row,
 |         it does **not** preserve dtypes across the rows (dtypes are
 |         preserved across columns for DataFrames). For example,
 |      
 |         >>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
 |         >>> row = next(df.iterrows())[1]
 |         >>> row
 |         int      1.0
 |         float    1.5
 |         Name: 0, dtype: float64
 |         >>> print(row['int'].dtype)
 |         float64
 |         >>> print(df['int'].dtype)
 |         int64
 |      
 |         To preserve dtypes while iterating over the rows, it is better
 |         to use :meth:`itertuples` which returns namedtuples of the values
 |         and which is generally faster than ``iterrows``.
 |      
 |      2. You should **never modify** something you are iterating over.
 |         This is not guaranteed to work in all cases. Depending on the
 |         data types, the iterator returns a copy and not a view, and writing
 |         to it will have no effect.
 |  
 |  itertuples(self, index=True, name='Pandas')
 |      Iterate over DataFrame rows as namedtuples.
 |      
 |      Parameters
 |      ----------
 |      index : bool, default True
 |          If True, return the index as the first element of the tuple.
 |      name : str or None, default "Pandas"
 |          The name of the returned namedtuples or None to return regular
 |          tuples.
 |      
 |      Yields
 |      ------
 |      collections.namedtuple
 |          Yields a namedtuple for each row in the DataFrame with the first
 |          field possibly being the index and following fields being the
 |          column values.
 |      
 |      See Also
 |      --------
 |      DataFrame.iterrows : Iterate over DataFrame rows as (index, Series)
 |          pairs.
 |      DataFrame.iteritems : Iterate over (column name, Series) pairs.
 |      
 |      Notes
 |      -----
 |      The column names will be renamed to positional names if they are
 |      invalid Python identifiers, repeated, or start with an underscore.
 |      With a large number of columns (>255), regular tuples are returned.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
 |      ...                   index=['dog', 'hawk'])
 |      >>> df
 |            num_legs  num_wings
 |      dog          4          0
 |      hawk         2          2
 |      >>> for row in df.itertuples():
 |      ...     print(row)
 |      ...
 |      Pandas(Index='dog', num_legs=4, num_wings=0)
 |      Pandas(Index='hawk', num_legs=2, num_wings=2)
 |      
 |      By setting the `index` parameter to False we can remove the index
 |      as the first element of the tuple:
 |      
 |      >>> for row in df.itertuples(index=False):
 |      ...     print(row)
 |      ...
 |      Pandas(num_legs=4, num_wings=0)
 |      Pandas(num_legs=2, num_wings=2)
 |      
 |      With the `name` parameter set we set a custom name for the yielded
 |      namedtuples:
 |      
 |      >>> for row in df.itertuples(name='Animal'):
 |      ...     print(row)
 |      ...
 |      Animal(Index='dog', num_legs=4, num_wings=0)
 |      Animal(Index='hawk', num_legs=2, num_wings=2)
 |  
 |  join(self, other, on=None, how='left', lsuffix='', rsuffix='', sort=False)
 |      Join columns of another DataFrame.
 |      
 |      Join columns with `other` DataFrame either on index or on a key
 |      column. Efficiently join multiple DataFrame objects by index at once by
 |      passing a list.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame, Series, or list of DataFrame
 |          Index should be similar to one of the columns in this one. If a
 |          Series is passed, its name attribute must be set, and that will be
 |          used as the column name in the resulting joined DataFrame.
 |      on : str, list of str, or array-like, optional
 |          Column or index level name(s) in the caller to join on the index
 |          in `other`, otherwise joins index-on-index. If multiple
 |          values given, the `other` DataFrame must have a MultiIndex. Can
 |          pass an array as the join key if it is not already contained in
 |          the calling DataFrame. Like an Excel VLOOKUP operation.
 |      how : {'left', 'right', 'outer', 'inner'}, default 'left'
 |          How to handle the operation of the two objects.
 |      
 |          * left: use calling frame's index (or column if on is specified)
 |          * right: use `other`'s index.
 |          * outer: form union of calling frame's index (or column if on is
 |            specified) with `other`'s index, and sort it
 |            lexicographically.
 |          * inner: form intersection of calling frame's index (or column if
 |            on is specified) with `other`'s index, preserving the order
 |            of the calling's one.
 |      lsuffix : str, default ''
 |          Suffix to use from left frame's overlapping columns.
 |      rsuffix : str, default ''
 |          Suffix to use from right frame's overlapping columns.
 |      sort : bool, default False
 |          Order result DataFrame lexicographically by the join key. If False,
 |          the order of the join key depends on the join type (how keyword).
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          A dataframe containing columns from both the caller and `other`.
 |      
 |      See Also
 |      --------
 |      DataFrame.merge : For column(s)-on-columns(s) operations.
 |      
 |      Notes
 |      -----
 |      Parameters `on`, `lsuffix`, and `rsuffix` are not supported when
 |      passing a list of `DataFrame` objects.
 |      
 |      Support for specifying index levels as the `on` parameter was added
 |      in version 0.23.0.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
 |      ...                    'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
 |      
 |      >>> df
 |        key   A
 |      0  K0  A0
 |      1  K1  A1
 |      2  K2  A2
 |      3  K3  A3
 |      4  K4  A4
 |      5  K5  A5
 |      
 |      >>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
 |      ...                       'B': ['B0', 'B1', 'B2']})
 |      
 |      >>> other
 |        key   B
 |      0  K0  B0
 |      1  K1  B1
 |      2  K2  B2
 |      
 |      Join DataFrames using their indexes.
 |      
 |      >>> df.join(other, lsuffix='_caller', rsuffix='_other')
 |        key_caller   A key_other    B
 |      0         K0  A0        K0   B0
 |      1         K1  A1        K1   B1
 |      2         K2  A2        K2   B2
 |      3         K3  A3       NaN  NaN
 |      4         K4  A4       NaN  NaN
 |      5         K5  A5       NaN  NaN
 |      
 |      If we want to join using the key columns, we need to set key to be
 |      the index in both `df` and `other`. The joined DataFrame will have
 |      key as its index.
 |      
 |      >>> df.set_index('key').join(other.set_index('key'))
 |            A    B
 |      key
 |      K0   A0   B0
 |      K1   A1   B1
 |      K2   A2   B2
 |      K3   A3  NaN
 |      K4   A4  NaN
 |      K5   A5  NaN
 |      
 |      Another option to join using the key columns is to use the `on`
 |      parameter. DataFrame.join always uses `other`'s index but we can use
 |      any column in `df`. This method preserves the original DataFrame's
 |      index in the result.
 |      
 |      >>> df.join(other.set_index('key'), on='key')
 |        key   A    B
 |      0  K0  A0   B0
 |      1  K1  A1   B1
 |      2  K2  A2   B2
 |      3  K3  A3  NaN
 |      4  K4  A4  NaN
 |      5  K5  A5  NaN
 |  
 |  kurt(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return unbiased kurtosis over requested axis using Fisher's definition of
 |      kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      kurt : Series or DataFrame (if level specified)
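 |      
 |      Examples
 |      --------
 |      A minimal illustrative sketch (an added example, not from the
 |      official docstring; assumes ``import pandas as pd``):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3, 4],
 |      ...                    'b': [1, 2, 3, 4]})  # illustrative data
 |      >>> df.kurt()  # sample excess kurtosis of each column
 |      a   -1.2
 |      b   -1.2
 |      dtype: float64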
 |  
 |  kurtosis = kurt(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |  
 |  le(self, other, axis='columns', level=None)
 |      Less than or equal to of dataframe and other, element-wise (binary operator `le`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
 |  
 |  lookup(self, row_labels, col_labels)
 |      Label-based "fancy indexing" function for DataFrame.
 |      
 |      Given equal-length arrays of row and column labels, return an
 |      array of the values corresponding to each (row, col) pair.
 |      
 |      Parameters
 |      ----------
 |      row_labels : sequence
 |          The row labels to use for lookup.
 |      col_labels : sequence
 |          The column labels to use for lookup.
 |      
 |      Notes
 |      -----
 |      Akin to::
 |      
 |          result = [df.get_value(row, col)
 |                    for row, col in zip(row_labels, col_labels)]
 |      
 |      Returns
 |      -------
 |      values : ndarray
 |          The found values.
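 |      
 |      Examples
 |      --------
 |      A minimal illustrative sketch (an added example, not from the
 |      official docstring; assumes ``import pandas as pd``):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3],
 |      ...                    'B': [10, 20, 30]},
 |      ...                   index=['x', 'y', 'z'])  # illustrative data
 |      >>> df.lookup(['x', 'z'], ['A', 'B'])  # value at each (row, col) pair
 |      array([ 1, 30])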
 |  
 |  lt(self, other, axis='columns', level=None)
 |      Less than of dataframe and other, element-wise (binary operator `lt`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
 |  
 |  mad(self, axis=None, skipna=None, level=None)
 |      Return the mean absolute deviation of the values for the requested axis.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      
 |      Returns
 |      -------
 |      mad : Series or DataFrame (if level specified)
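 |      
 |      Examples
 |      --------
 |      A minimal illustrative sketch (an added example, not from the
 |      official docstring; assumes ``import pandas as pd``):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3],
 |      ...                    'b': [2, 4, 6]})  # illustrative data
 |      >>> df.mad()  # mean of absolute deviations from each column's mean
 |      a    0.666667
 |      b    1.333333
 |      dtype: float64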
 |  
 |  max(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return the maximum of the values for the requested axis.
 |      
 |      If you want the *index* of the maximum, use ``idxmax``. This is
 |      the equivalent of the ``numpy.ndarray`` method ``argmax``.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      max : Series or DataFrame (if level specified)
 |      
 |      See Also
 |      --------
 |      Series.sum : Return the sum.
 |      Series.min : Return the minimum.
 |      Series.max : Return the maximum.
 |      Series.idxmin : Return the index of the minimum.
 |      Series.idxmax : Return the index of the maximum.
 |      DataFrame.sum : Return the sum over the requested axis.
 |      DataFrame.min : Return the minimum over the requested axis.
 |      DataFrame.max : Return the maximum over the requested axis.
 |      DataFrame.idxmin : Return the index of the minimum over the requested axis.
 |      DataFrame.idxmax : Return the index of the maximum over the requested axis.
 |      
 |      Examples
 |      --------
 |      
 |      >>> idx = pd.MultiIndex.from_arrays([
 |      ...     ['warm', 'warm', 'cold', 'cold'],
 |      ...     ['dog', 'falcon', 'fish', 'spider']],
 |      ...     names=['blooded', 'animal'])
 |      >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
 |      >>> s
 |      blooded  animal
 |      warm     dog       4
 |               falcon    2
 |      cold     fish      0
 |               spider    8
 |      Name: legs, dtype: int64
 |      
 |      >>> s.max()
 |      8
 |      
 |      Max using level names, as well as indices.
 |      
 |      >>> s.max(level='blooded')
 |      blooded
 |      warm    4
 |      cold    8
 |      Name: legs, dtype: int64
 |      
 |      >>> s.max(level=0)
 |      blooded
 |      warm    4
 |      cold    8
 |      Name: legs, dtype: int64
 |  
 |  mean(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return the mean of the values for the requested axis.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      mean : Series or DataFrame (if level specified)
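 |      
 |      Examples
 |      --------
 |      A minimal illustrative sketch (an added example, not from the
 |      official docstring; assumes ``import pandas as pd``):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3],
 |      ...                    'b': [4, 5, 6]})  # illustrative data
 |      >>> df.mean()        # column means (axis=0, the default)
 |      a    2.0
 |      b    5.0
 |      dtype: float64
 |      >>> df.mean(axis=1)  # row means
 |      0    2.5
 |      1    3.5
 |      2    4.5
 |      dtype: float64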
 |  
 |  median(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return the median of the values for the requested axis.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      median : Series or DataFrame (if level specified)
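 |      
 |      Examples
 |      --------
 |      A minimal illustrative sketch (an added example, not from the
 |      official docstring; assumes ``import pandas as pd``):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 10],
 |      ...                    'b': [4, 5, 6]})  # illustrative data
 |      >>> df.median()  # middle value per column, robust to the outlier in 'a'
 |      a    2.0
 |      b    5.0
 |      dtype: float64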
 |  
 |  melt(self, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None)
 |      Unpivot a DataFrame from wide format to long format, optionally
 |      leaving identifier variables set.
 |      
 |      This function is useful to massage a DataFrame into a format where one
 |      or more columns are identifier variables (`id_vars`), while all other
 |      columns, considered measured variables (`value_vars`), are "unpivoted" to
 |      the row axis, leaving just two non-identifier columns, 'variable' and
 |      'value'.
 |      
 |      .. versionadded:: 0.20.0
 |      
 |      Parameters
 |      ----------
 |      id_vars : tuple, list, or ndarray, optional
 |          Column(s) to use as identifier variables.
 |      value_vars : tuple, list, or ndarray, optional
 |          Column(s) to unpivot. If not specified, uses all columns that
 |          are not set as `id_vars`.
 |      var_name : scalar
 |          Name to use for the 'variable' column. If None it uses
 |          ``frame.columns.name`` or 'variable'.
 |      value_name : scalar, default 'value'
 |          Name to use for the 'value' column.
 |      col_level : int or string, optional
 |          If columns are a MultiIndex then use this level to melt.
 |      
 |      See Also
 |      --------
 |      melt
 |      pivot_table
 |      DataFrame.pivot
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
 |      ...                    'B': {0: 1, 1: 3, 2: 5},
 |      ...                    'C': {0: 2, 1: 4, 2: 6}})
 |      >>> df
 |         A  B  C
 |      0  a  1  2
 |      1  b  3  4
 |      2  c  5  6
 |      
 |      >>> df.melt(id_vars=['A'], value_vars=['B'])
 |         A variable  value
 |      0  a        B      1
 |      1  b        B      3
 |      2  c        B      5
 |      
 |      >>> df.melt(id_vars=['A'], value_vars=['B', 'C'])
 |         A variable  value
 |      0  a        B      1
 |      1  b        B      3
 |      2  c        B      5
 |      3  a        C      2
 |      4  b        C      4
 |      5  c        C      6
 |      
 |      The names of 'variable' and 'value' columns can be customized:
 |      
 |      >>> df.melt(id_vars=['A'], value_vars=['B'],
 |      ...         var_name='myVarname', value_name='myValname')
 |         A myVarname  myValname
 |      0  a         B          1
 |      1  b         B          3
 |      2  c         B          5
 |      
 |      If you have multi-index columns:
 |      
 |      >>> df.columns = [list('ABC'), list('DEF')]
 |      >>> df
 |         A  B  C
 |         D  E  F
 |      0  a  1  2
 |      1  b  3  4
 |      2  c  5  6
 |      
 |      >>> df.melt(col_level=0, id_vars=['A'], value_vars=['B'])
 |         A variable  value
 |      0  a        B      1
 |      1  b        B      3
 |      2  c        B      5
 |      
 |      >>> df.melt(id_vars=[('A', 'D')], value_vars=[('B', 'E')])
 |        (A, D) variable_0 variable_1  value
 |      0      a          B          E      1
 |      1      b          B          E      3
 |      2      c          B          E      5
 |  
 |  memory_usage(self, index=True, deep=False)
 |      Return the memory usage of each column in bytes.
 |      
 |      The memory usage can optionally include the contribution of
 |      the index and elements of `object` dtype.
 |      
 |      This value is displayed in `DataFrame.info` by default. This can be
 |      suppressed by setting ``pandas.options.display.memory_usage`` to False.
 |      
 |      Parameters
 |      ----------
 |      index : bool, default True
 |          Specifies whether to include the memory usage of the DataFrame's
 |          index in the returned Series. If ``index=True``, the memory usage
 |          of the index is the first item in the output.
 |      deep : bool, default False
 |          If True, introspect the data deeply by interrogating
 |          `object` dtypes for system-level memory consumption, and include
 |          it in the returned values.
 |      
 |      Returns
 |      -------
 |      sizes : Series
 |          A Series whose index is the original column names and whose values
 |          are the memory usage of each column in bytes.
 |      
 |      See Also
 |      --------
 |      numpy.ndarray.nbytes : Total bytes consumed by the elements of an
 |          ndarray.
 |      Series.memory_usage : Bytes consumed by a Series.
 |      pandas.Categorical : Memory-efficient array for string values with
 |          many repeated values.
 |      DataFrame.info : Concise summary of a DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
 |      >>> data = dict([(t, np.ones(shape=5000).astype(t))
 |      ...              for t in dtypes])
 |      >>> df = pd.DataFrame(data)
 |      >>> df.head()
 |         int64  float64  complex128 object  bool
 |      0      1      1.0      (1+0j)      1  True
 |      1      1      1.0      (1+0j)      1  True
 |      2      1      1.0      (1+0j)      1  True
 |      3      1      1.0      (1+0j)      1  True
 |      4      1      1.0      (1+0j)      1  True
 |      
 |      >>> df.memory_usage()
 |      Index            80
 |      int64         40000
 |      float64       40000
 |      complex128    80000
 |      object        40000
 |      bool           5000
 |      dtype: int64
 |      
 |      >>> df.memory_usage(index=False)
 |      int64         40000
 |      float64       40000
 |      complex128    80000
 |      object        40000
 |      bool           5000
 |      dtype: int64
 |      
 |      The memory footprint of `object` dtype columns is ignored by default;
 |      pass ``deep=True`` to include it:
 |      
 |      >>> df.memory_usage(deep=True)
 |      Index             80
 |      int64          40000
 |      float64        40000
 |      complex128     80000
 |      object        160000
 |      bool            5000
 |      dtype: int64
 |      
 |      Use a Categorical for efficient storage of an object-dtype column with
 |      many repeated values.
 |      
 |      >>> df['object'].astype('category').memory_usage(deep=True)
 |      5168
 |  
 |  merge(self, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
 |      Merge DataFrame or named Series objects with a database-style join.
 |      
 |      The join is done on columns or indexes. If joining columns on
 |      columns, the DataFrame indexes *will be ignored*. Otherwise if joining indexes
 |      on indexes or indexes on a column or columns, the index will be passed on.
 |      
 |      Parameters
 |      ----------
 |      right : DataFrame or named Series
 |          Object to merge with.
 |      how : {'left', 'right', 'outer', 'inner'}, default 'inner'
 |          Type of merge to be performed.
 |      
 |          * left: use only keys from left frame, similar to a SQL left outer join;
 |            preserve key order.
 |          * right: use only keys from right frame, similar to a SQL right outer join;
 |            preserve key order.
 |          * outer: use union of keys from both frames, similar to a SQL full outer
 |            join; sort keys lexicographically.
 |          * inner: use intersection of keys from both frames, similar to a SQL inner
 |            join; preserve the order of the left keys.
 |      on : label or list
 |          Column or index level names to join on. These must be found in both
 |          DataFrames. If `on` is None and not merging on indexes then this defaults
 |          to the intersection of the columns in both DataFrames.
 |      left_on : label or list, or array-like
 |          Column or index level names to join on in the left DataFrame. Can also
 |          be an array or list of arrays of the length of the left DataFrame.
 |          These arrays are treated as if they are columns.
 |      right_on : label or list, or array-like
 |          Column or index level names to join on in the right DataFrame. Can also
 |          be an array or list of arrays of the length of the right DataFrame.
 |          These arrays are treated as if they are columns.
 |      left_index : bool, default False
 |          Use the index from the left DataFrame as the join key(s). If it is a
 |          MultiIndex, the number of keys in the other DataFrame (either the index
 |          or a number of columns) must match the number of levels.
 |      right_index : bool, default False
 |          Use the index from the right DataFrame as the join key. Same caveats as
 |          left_index.
 |      sort : bool, default False
 |          Sort the join keys lexicographically in the result DataFrame. If False,
 |          the order of the join keys depends on the join type (how keyword).
 |      suffixes : tuple of (str, str), default ('_x', '_y')
 |          Suffix to apply to overlapping column names in the left and right
 |          side, respectively. To raise an exception on overlapping columns use
 |          (False, False).
 |      copy : bool, default True
 |          If False, avoid copy if possible.
 |      indicator : bool or str, default False
 |          If True, adds a column to output DataFrame called "_merge" with
 |          information on the source of each row.
 |          If string, column with information on source of each row will be added to
 |          output DataFrame, and column will be named value of string.
 |          Information column is Categorical-type and takes on a value of "left_only"
 |          for observations whose merge key only appears in 'left' DataFrame,
 |          "right_only" for observations whose merge key only appears in 'right'
 |          DataFrame, and "both" if the observation's merge key is found in both.
 |      
 |      validate : str, optional
 |          If specified, checks if merge is of specified type.
 |      
 |          * "one_to_one" or "1:1": check if merge keys are unique in both
 |            left and right datasets.
 |          * "one_to_many" or "1:m": check if merge keys are unique in left
 |            dataset.
 |          * "many_to_one" or "m:1": check if merge keys are unique in right
 |            dataset.
 |          * "many_to_many" or "m:m": allowed, but does not result in checks.
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          A DataFrame of the two merged objects.
 |      
 |      See Also
 |      --------
 |      merge_ordered : Merge with optional filling/interpolation.
 |      merge_asof : Merge on nearest keys.
 |      DataFrame.join : Similar method using indices.
 |      
 |      Notes
 |      -----
 |      Support for specifying index levels as the `on`, `left_on`, and
 |      `right_on` parameters was added in version 0.23.0.
 |      Support for merging named Series objects was added in version 0.24.0.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
 |      ...                     'value': [1, 2, 3, 5]})
 |      >>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
 |      ...                     'value': [5, 6, 7, 8]})
 |      >>> df1
 |          lkey value
 |      0   foo      1
 |      1   bar      2
 |      2   baz      3
 |      3   foo      5
 |      >>> df2
 |          rkey value
 |      0   foo      5
 |      1   bar      6
 |      2   baz      7
 |      3   foo      8
 |      
 |      Merge df1 and df2 on the lkey and rkey columns. The value columns have
 |      the default suffixes, _x and _y, appended.
 |      
 |      >>> df1.merge(df2, left_on='lkey', right_on='rkey')
 |        lkey  value_x rkey  value_y
 |      0  foo        1  foo        5
 |      1  foo        1  foo        8
 |      2  foo        5  foo        5
 |      3  foo        5  foo        8
 |      4  bar        2  bar        6
 |      5  baz        3  baz        7
 |      
 |      Merge DataFrames df1 and df2 with specified left and right suffixes
 |      appended to any overlapping columns.
 |      
 |      >>> df1.merge(df2, left_on='lkey', right_on='rkey',
 |      ...           suffixes=('_left', '_right'))
 |        lkey  value_left rkey  value_right
 |      0  foo           1  foo            5
 |      1  foo           1  foo            8
 |      2  foo           5  foo            5
 |      3  foo           5  foo            8
 |      4  bar           2  bar            6
 |      5  baz           3  baz            7
 |      
 |      Merge DataFrames df1 and df2, but raise an exception if the DataFrames have
 |      any overlapping columns.
 |      
 |      >>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
 |      Traceback (most recent call last):
 |      ...
 |      ValueError: columns overlap but no suffix specified:
 |          Index(['value'], dtype='object')
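 |      
 |      A minimal illustrative sketch of the `indicator` parameter (an added
 |      example, not from the official docstring; assumes
 |      ``import pandas as pd``):
 |      
 |      >>> left = pd.DataFrame({'key': ['a', 'b'], 'x': [1, 2]})   # illustrative data
 |      >>> right = pd.DataFrame({'key': ['b', 'c'], 'y': [3, 4]})  # illustrative data
 |      >>> left.merge(right, on='key', how='outer', indicator=True)
 |        key    x    y      _merge
 |      0   a  1.0  NaN   left_only
 |      1   b  2.0  3.0        both
 |      2   c  NaN  4.0  right_only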
 |  
 |  min(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return the minimum of the values for the requested axis.
 |      
 |      If you want the *index* of the minimum, use ``idxmin``. This is
 |      the equivalent of the ``numpy.ndarray`` method ``argmin``.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      min : Series or DataFrame (if level specified)
 |      
 |      See Also
 |      --------
 |      Series.sum : Return the sum.
 |      Series.min : Return the minimum.
 |      Series.max : Return the maximum.
 |      Series.idxmin : Return the index of the minimum.
 |      Series.idxmax : Return the index of the maximum.
 |      DataFrame.sum : Return the sum over the requested axis.
 |      DataFrame.min : Return the minimum over the requested axis.
 |      DataFrame.max : Return the maximum over the requested axis.
 |      DataFrame.idxmin : Return the index of the minimum over the requested axis.
 |      DataFrame.idxmax : Return the index of the maximum over the requested axis.
 |      
 |      Examples
 |      --------
 |      
 |      >>> idx = pd.MultiIndex.from_arrays([
 |      ...     ['warm', 'warm', 'cold', 'cold'],
 |      ...     ['dog', 'falcon', 'fish', 'spider']],
 |      ...     names=['blooded', 'animal'])
 |      >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
 |      >>> s
 |      blooded  animal
 |      warm     dog       4
 |               falcon    2
 |      cold     fish      0
 |               spider    8
 |      Name: legs, dtype: int64
 |      
 |      >>> s.min()
 |      0
 |      
 |      Min using level names, as well as indices.
 |      
 |      >>> s.min(level='blooded')
 |      blooded
 |      warm    2
 |      cold    0
 |      Name: legs, dtype: int64
 |      
 |      >>> s.min(level=0)
 |      blooded
 |      warm    2
 |      cold    0
 |      Name: legs, dtype: int64
 |  
 |  mod(self, other, axis='columns', level=None, fill_value=None)
 |      Modulo of dataframe and other, element-wise (binary operator `mod`).
 |      
 |      Equivalent to ``dataframe % other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rmod`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant, with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis, with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape, with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a DataFrame with a MultiIndex, by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
 |  
 |  mode(self, axis=0, numeric_only=False, dropna=True)
 |      Get the mode(s) of each element along the selected axis.
 |      
 |      The mode of a set of values is the value that appears most often.
 |      It can be multiple values.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to iterate over while searching for the mode:
 |      
 |          * 0 or 'index' : get mode of each column
 |          * 1 or 'columns' : get mode of each row
 |      numeric_only : bool, default False
 |          If True, only apply to numeric columns.
 |      dropna : bool, default True
 |          Don't consider counts of NaN/NaT.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          The modes of each column or row.
 |      
 |      See Also
 |      --------
 |      Series.mode : Return the highest frequency value in a Series.
 |      Series.value_counts : Return the counts of values in a Series.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([('bird', 2, 2),
 |      ...                    ('mammal', 4, np.nan),
 |      ...                    ('arthropod', 8, 0),
 |      ...                    ('bird', 2, np.nan)],
 |      ...                   index=('falcon', 'horse', 'spider', 'ostrich'),
 |      ...                   columns=('species', 'legs', 'wings'))
 |      >>> df
 |                 species  legs  wings
 |      falcon        bird     2    2.0
 |      horse       mammal     4    NaN
 |      spider   arthropod     8    0.0
 |      ostrich       bird     2    NaN
 |      
 |      By default, missing values are not considered, and the modes of wings
 |      are both 0.0 and 2.0. The second row of species and legs contains
 |      ``NaN``, because those columns have only one mode, but the DataFrame
 |      has two rows.
 |      
 |      >>> df.mode()
 |        species  legs  wings
 |      0    bird   2.0    0.0
 |      1     NaN   NaN    2.0
 |      
 |      With ``dropna=False``, ``NaN`` values are considered, and they can be
 |      the mode (as for wings).
 |      
 |      >>> df.mode(dropna=False)
 |        species  legs  wings
 |      0    bird     2    NaN
 |      
 |      With ``numeric_only=True``, only the mode of numeric columns is
 |      computed, and columns of other types are ignored.
 |      
 |      >>> df.mode(numeric_only=True)
 |         legs  wings
 |      0   2.0    0.0
 |      1   NaN    2.0
 |      
 |      To compute the mode over columns and not rows, use the axis parameter:
 |      
 |      >>> df.mode(axis='columns', numeric_only=True)
 |                 0    1
 |      falcon   2.0  NaN
 |      horse    4.0  NaN
 |      spider   0.0  8.0
 |      ostrich  2.0  NaN
 |  
 |  mul(self, other, axis='columns', level=None, fill_value=None)
 |      Multiplication of dataframe and other, element-wise (binary operator `mul`).
 |      
 |      Equivalent to ``dataframe * other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rmul`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant, with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis, with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape, with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a DataFrame with a MultiIndex, by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
 |  
 |  multiply = mul(self, other, axis='columns', level=None, fill_value=None)
 |  
 |  ne(self, other, axis='columns', level=None)
 |      Not equal to of dataframe and other, element-wise (binary operator `ne`).
 |      
 |      Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
 |      operators.
 |      
 |      Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
 |      (rows or columns) and level for comparison.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}, default 'columns'
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns').
 |      level : int or label
 |          Broadcast across a level, matching Index values on the passed
 |          MultiIndex level.
 |      
 |      Returns
 |      -------
 |      DataFrame of bool
 |          Result of the comparison.
 |      
 |      See Also
 |      --------
 |      DataFrame.eq : Compare DataFrames for equality elementwise.
 |      DataFrame.ne : Compare DataFrames for inequality elementwise.
 |      DataFrame.le : Compare DataFrames for less than inequality
 |          or equality elementwise.
 |      DataFrame.lt : Compare DataFrames for strictly less than
 |          inequality elementwise.
 |      DataFrame.ge : Compare DataFrames for greater than inequality
 |          or equality elementwise.
 |      DataFrame.gt : Compare DataFrames for strictly greater than
 |          inequality elementwise.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      `NaN` values are considered different (i.e. `NaN` != `NaN`).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'cost': [250, 150, 100],
 |      ...                    'revenue': [100, 250, 300]},
 |      ...                   index=['A', 'B', 'C'])
 |      >>> df
 |         cost  revenue
 |      A   250      100
 |      B   150      250
 |      C   100      300
 |      
 |      Comparison with a scalar, using either the operator or method:
 |      
 |      >>> df == 100
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      >>> df.eq(100)
 |          cost  revenue
 |      A  False     True
 |      B  False    False
 |      C   True    False
 |      
 |      When `other` is a :class:`Series`, the columns of a DataFrame are aligned
 |      with the index of `other` and broadcast:
 |      
 |      >>> df != pd.Series([100, 250], index=["cost", "revenue"])
 |          cost  revenue
 |      A   True     True
 |      B   True    False
 |      C  False     True
 |      
 |      Use the method to control the broadcast axis:
 |      
 |      >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
 |         cost  revenue
 |      A  True    False
 |      B  True     True
 |      C  True     True
 |      D  True     True
 |      
 |      When comparing to an arbitrary sequence, the number of columns must
 |      match the number of elements in `other`:
 |      
 |      >>> df == [250, 100]
 |          cost  revenue
 |      A   True     True
 |      B  False    False
 |      C  False    False
 |      
 |      Use the method to control the axis:
 |      
 |      >>> df.eq([250, 250, 100], axis='index')
 |          cost  revenue
 |      A   True    False
 |      B  False     True
 |      C   True    False
 |      
 |      Compare to a DataFrame of different shape.
 |      
 |      >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
 |      ...                      index=['A', 'B', 'C', 'D'])
 |      >>> other
 |         revenue
 |      A      300
 |      B      250
 |      C      100
 |      D      150
 |      
 |      >>> df.gt(other)
 |          cost  revenue
 |      A  False    False
 |      B  False    False
 |      C  False     True
 |      D  False    False
 |      
 |      Compare to a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
 |      ...                              'revenue': [100, 250, 300, 200, 175, 225]},
 |      ...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
 |      ...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
 |      >>> df_multindex
 |            cost  revenue
 |      Q1 A   250      100
 |         B   150      250
 |         C   100      300
 |      Q2 A   150      200
 |         B   300      175
 |         C   220      225
 |      
 |      >>> df.le(df_multindex, level=1)
 |             cost  revenue
 |      Q1 A   True     True
 |         B   True     True
 |         C   True     True
 |      Q2 A  False     True
 |         B   True    False
 |         C   True    False
 |  
 |  nlargest(self, n, columns, keep='first')
 |      Return the first `n` rows ordered by `columns` in descending order.
 |      
 |      Return the first `n` rows with the largest values in `columns`, in
 |      descending order. The columns that are not specified are returned as
 |      well, but not used for ordering.
 |      
 |      This method is equivalent to
 |      ``df.sort_values(columns, ascending=False).head(n)``, but more
 |      performant.
 |      
 |      Parameters
 |      ----------
 |      n : int
 |          Number of rows to return.
 |      columns : label or list of labels
 |          Column label(s) to order by.
 |      keep : {'first', 'last', 'all'}, default 'first'
 |          Where there are duplicate values:
 |      
 |          - `first` : prioritize the first occurrence(s)
 |          - `last` : prioritize the last occurrence(s)
 |          - ``all`` : do not drop any duplicates, even if it means
 |                      selecting more than `n` items.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          The first `n` rows ordered by the given columns in descending
 |          order.
 |      
 |      See Also
 |      --------
 |      DataFrame.nsmallest : Return the first `n` rows ordered by `columns` in
 |          ascending order.
 |      DataFrame.sort_values : Sort DataFrame by the values.
 |      DataFrame.head : Return the first `n` rows without re-ordering.
 |      
 |      Notes
 |      -----
 |      This function cannot be used with all column types. For example, when
 |      specifying columns with `object` or `category` dtypes, ``TypeError`` is
 |      raised.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
 |      ...                                   434000, 434000, 337000, 11300,
 |      ...                                   11300, 11300],
 |      ...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
 |      ...                            17036, 182, 38, 311],
 |      ...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
 |      ...                                "IS", "NR", "TV", "AI"]},
 |      ...                   index=["Italy", "France", "Malta",
 |      ...                          "Maldives", "Brunei", "Iceland",
 |      ...                          "Nauru", "Tuvalu", "Anguilla"])
 |      >>> df
 |                population      GDP alpha-2
 |      Italy       59000000  1937894      IT
 |      France      65000000  2583560      FR
 |      Malta         434000    12011      MT
 |      Maldives      434000     4520      MV
 |      Brunei        434000    12128      BN
 |      Iceland       337000    17036      IS
 |      Nauru          11300      182      NR
 |      Tuvalu         11300       38      TV
 |      Anguilla       11300      311      AI
 |      
 |      In the following example, we will use ``nlargest`` to select the three
 |      rows having the largest values in column "population".
 |      
 |      >>> df.nlargest(3, 'population')
 |              population      GDP alpha-2
 |      France    65000000  2583560      FR
 |      Italy     59000000  1937894      IT
 |      Malta       434000    12011      MT
 |      
 |      When using ``keep='last'``, ties are resolved in reverse order:
 |      
 |      >>> df.nlargest(3, 'population', keep='last')
 |              population      GDP alpha-2
 |      France    65000000  2583560      FR
 |      Italy     59000000  1937894      IT
 |      Brunei      434000    12128      BN
 |      
 |      When using ``keep='all'``, all duplicate items are maintained:
 |      
 |      >>> df.nlargest(3, 'population', keep='all')
 |                population      GDP alpha-2
 |      France      65000000  2583560      FR
 |      Italy       59000000  1937894      IT
 |      Malta         434000    12011      MT
 |      Maldives      434000     4520      MV
 |      Brunei        434000    12128      BN
 |      
 |      To order by the largest values in column "population" and then "GDP",
 |      we can specify multiple columns like in the next example.
 |      
 |      >>> df.nlargest(3, ['population', 'GDP'])
 |              population      GDP alpha-2
 |      France    65000000  2583560      FR
 |      Italy     59000000  1937894      IT
 |      Brunei      434000    12128      BN
 |  
 |  notna(self)
 |      Detect existing (non-missing) values.
 |      
 |      Return a boolean same-sized object indicating if the values are not NA.
 |      Non-missing values get mapped to True. Characters such as empty
 |      strings ``''`` or :attr:`numpy.inf` are not considered NA values
 |      (unless you set ``pandas.options.mode.use_inf_as_na = True``).
 |      NA values, such as None or :attr:`numpy.NaN`, get mapped to False
 |      values.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Mask of bool values for each element in DataFrame that
 |          indicates whether an element is not an NA value.
 |      
 |      See Also
 |      --------
 |      DataFrame.notnull : Alias of notna.
 |      DataFrame.isna : Boolean inverse of notna.
 |      DataFrame.dropna : Omit axes labels with missing values.
 |      notna : Top-level notna.
 |      
 |      Examples
 |      --------
 |      Show which entries in a DataFrame are not NA.
 |      
 |      >>> df = pd.DataFrame({'age': [5, 6, np.NaN],
 |      ...                    'born': [pd.NaT, pd.Timestamp('1939-05-27'),
 |      ...                             pd.Timestamp('1940-04-25')],
 |      ...                    'name': ['Alfred', 'Batman', ''],
 |      ...                    'toy': [None, 'Batmobile', 'Joker']})
 |      >>> df
 |         age       born    name        toy
 |      0  5.0        NaT  Alfred       None
 |      1  6.0 1939-05-27  Batman  Batmobile
 |      2  NaN 1940-04-25              Joker
 |      
 |      >>> df.notna()
 |           age   born  name    toy
 |      0   True  False  True  False
 |      1   True   True  True   True
 |      2  False   True  True   True
 |      
 |      Show which entries in a Series are not NA.
 |      
 |      >>> ser = pd.Series([5, 6, np.NaN])
 |      >>> ser
 |      0    5.0
 |      1    6.0
 |      2    NaN
 |      dtype: float64
 |      
 |      >>> ser.notna()
 |      0     True
 |      1     True
 |      2    False
 |      dtype: bool
 |  
 |  notnull(self)
 |      Detect existing (non-missing) values.
 |      
 |      Return a boolean same-sized object indicating if the values are not NA.
 |      Non-missing values get mapped to True. Values such as empty
 |      strings ``''`` or :attr:`numpy.inf` are not considered NA values
 |      (unless you set ``pandas.options.mode.use_inf_as_na = True``).
 |      NA values, such as None or :attr:`numpy.NaN`, get mapped to False
 |      values.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Mask of bool values for each element in DataFrame that
 |          indicates whether an element is not an NA value.
 |      
 |      See Also
 |      --------
 |      DataFrame.notnull : Alias of notna.
 |      DataFrame.isna : Boolean inverse of notna.
 |      DataFrame.dropna : Omit axes labels with missing values.
 |      notna : Top-level notna.
 |      
 |      Examples
 |      --------
 |      Show which entries in a DataFrame are not NA.
 |      
 |      >>> df = pd.DataFrame({'age': [5, 6, np.NaN],
 |      ...                    'born': [pd.NaT, pd.Timestamp('1939-05-27'),
 |      ...                             pd.Timestamp('1940-04-25')],
 |      ...                    'name': ['Alfred', 'Batman', ''],
 |      ...                    'toy': [None, 'Batmobile', 'Joker']})
 |      >>> df
 |         age       born    name        toy
 |      0  5.0        NaT  Alfred       None
 |      1  6.0 1939-05-27  Batman  Batmobile
 |      2  NaN 1940-04-25              Joker
 |      
 |      >>> df.notna()
 |           age   born  name    toy
 |      0   True  False  True  False
 |      1   True   True  True   True
 |      2  False   True  True   True
 |      
 |      Show which entries in a Series are not NA.
 |      
 |      >>> ser = pd.Series([5, 6, np.NaN])
 |      >>> ser
 |      0    5.0
 |      1    6.0
 |      2    NaN
 |      dtype: float64
 |      
 |      >>> ser.notna()
 |      0     True
 |      1     True
 |      2    False
 |      dtype: bool
 |  
 |  nsmallest(self, n, columns, keep='first')
 |      Return the first `n` rows ordered by `columns` in ascending order.
 |      
 |      Return the first `n` rows with the smallest values in `columns`, in
 |      ascending order. The columns that are not specified are returned as
 |      well, but not used for ordering.
 |      
 |      This method is equivalent to
 |      ``df.sort_values(columns, ascending=True).head(n)``, but more
 |      performant.
 |      
 |      Parameters
 |      ----------
 |      n : int
 |          Number of items to retrieve.
 |      columns : list or str
 |          Column name or names to order by.
 |      keep : {'first', 'last', 'all'}, default 'first'
 |          Where there are duplicate values:
 |      
 |          - ``first`` : take the first occurrence.
 |          - ``last`` : take the last occurrence.
 |          - ``all`` : do not drop any duplicates, even if it means
 |            selecting more than `n` items.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.nlargest : Return the first `n` rows ordered by `columns` in
 |          descending order.
 |      DataFrame.sort_values : Sort DataFrame by the values.
 |      DataFrame.head : Return the first `n` rows without re-ordering.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
 |      ...                                   434000, 434000, 337000, 11300,
 |      ...                                   11300, 11300],
 |      ...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
 |      ...                            17036, 182, 38, 311],
 |      ...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
 |      ...                                "IS", "NR", "TV", "AI"]},
 |      ...                   index=["Italy", "France", "Malta",
 |      ...                          "Maldives", "Brunei", "Iceland",
 |      ...                          "Nauru", "Tuvalu", "Anguilla"])
 |      >>> df
 |                population      GDP alpha-2
 |      Italy       59000000  1937894      IT
 |      France      65000000  2583560      FR
 |      Malta         434000    12011      MT
 |      Maldives      434000     4520      MV
 |      Brunei        434000    12128      BN
 |      Iceland       337000    17036      IS
 |      Nauru          11300      182      NR
 |      Tuvalu         11300       38      TV
 |      Anguilla       11300      311      AI
 |      
 |      In the following example, we will use ``nsmallest`` to select the
 |      three rows having the smallest values in column "population".
 |      
 |      >>> df.nsmallest(3, 'population')
 |                population  GDP alpha-2
 |      Nauru          11300  182      NR
 |      Tuvalu         11300   38      TV
 |      Anguilla       11300  311      AI
 |      
 |      When using ``keep='last'``, ties are resolved in reverse order:
 |      
 |      >>> df.nsmallest(3, 'population', keep='last')
 |                population  GDP alpha-2
 |      Anguilla       11300  311      AI
 |      Tuvalu         11300   38      TV
 |      Nauru          11300  182      NR
 |      
 |      When using ``keep='all'``, all duplicate items are maintained:
 |      
 |      >>> df.nsmallest(3, 'population', keep='all')
 |                population  GDP alpha-2
 |      Nauru          11300  182      NR
 |      Tuvalu         11300   38      TV
 |      Anguilla       11300  311      AI
 |      
 |      To order by the smallest values in column "population" and then
 |      "GDP", we can specify multiple columns like in the next example.
 |      
 |      >>> df.nsmallest(3, ['population', 'GDP'])
 |                population  GDP alpha-2
 |      Tuvalu         11300   38      TV
 |      Nauru          11300  182      NR
 |      Anguilla       11300  311      AI
 |  
 |  nunique(self, axis=0, dropna=True)
 |      Count distinct observations over requested axis.
 |      
 |      Return Series with number of distinct observations. Can ignore NaN
 |      values.
 |      
 |      .. versionadded:: 0.20.0
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for
 |          column-wise.
 |      dropna : bool, default True
 |          Don't include NaN in the counts.
 |      
 |      Returns
 |      -------
 |      nunique : Series
 |      
 |      See Also
 |      --------
 |      Series.nunique: Method nunique for Series.
 |      DataFrame.count: Count non-NA cells for each column or row.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 1, 1]})
 |      >>> df.nunique()
 |      A    3
 |      B    1
 |      dtype: int64
 |      
 |      >>> df.nunique(axis=1)
 |      0    1
 |      1    2
 |      2    2
 |      dtype: int64
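 |      
 |      The examples above don't exercise ``dropna``; as an illustrative
 |      sketch (hypothetical data), NaN can be counted as its own value:
 |      
 |      >>> df = pd.DataFrame({'A': [1, 1, np.nan]})  # hypothetical data
 |      >>> df.nunique()
 |      A    1
 |      dtype: int64
 |      >>> df.nunique(dropna=False)
 |      A    2
 |      dtype: int64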
 |  
 |  pivot(self, index=None, columns=None, values=None)
 |      Return reshaped DataFrame organized by given index / column values.
 |      
 |      Reshape data (produce a "pivot" table) based on column values. Uses
 |      unique values from specified `index` / `columns` to form axes of the
 |      resulting DataFrame. This function does not support data
 |      aggregation; multiple values will result in a MultiIndex in the
 |      columns. See the :ref:`User Guide <reshaping>` for more on reshaping.
 |      
 |      Parameters
 |      ----------
 |      index : string or object, optional
 |          Column to use to make new frame's index. If None, uses
 |          existing index.
 |      columns : string or object
 |          Column to use to make new frame's columns.
 |      values : string, object or a list of the previous, optional
 |          Column(s) to use for populating new frame's values. If not
 |          specified, all remaining columns will be used and the result will
 |          have hierarchically indexed columns.
 |      
 |          .. versionchanged :: 0.23.0
 |             Also accept list of column names.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Returns reshaped DataFrame.
 |      
 |      Raises
 |      ------
 |      ValueError:
 |          When there are any `index`, `columns` combinations with multiple
 |          values. Use `DataFrame.pivot_table` when you need to aggregate.
 |      
 |      See Also
 |      --------
 |      DataFrame.pivot_table : Generalization of pivot that can handle
 |          duplicate values for one index/column pair.
 |      DataFrame.unstack : Pivot based on the index values instead of a
 |          column.
 |      
 |      Notes
 |      -----
 |      For finer-tuned control, see hierarchical indexing documentation along
 |      with the related stack/unstack methods.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
 |      ...                            'two'],
 |      ...                    'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
 |      ...                    'baz': [1, 2, 3, 4, 5, 6],
 |      ...                    'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
 |      >>> df
 |          foo   bar  baz  zoo
 |      0   one   A    1    x
 |      1   one   B    2    y
 |      2   one   C    3    z
 |      3   two   A    4    q
 |      4   two   B    5    w
 |      5   two   C    6    t
 |      
 |      >>> df.pivot(index='foo', columns='bar', values='baz')
 |      bar  A   B   C
 |      foo
 |      one  1   2   3
 |      two  4   5   6
 |      
 |      >>> df.pivot(index='foo', columns='bar')['baz']
 |      bar  A   B   C
 |      foo
 |      one  1   2   3
 |      two  4   5   6
 |      
 |      >>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo'])
 |            baz       zoo
 |      bar   A  B  C   A  B  C
 |      foo
 |      one   1  2  3   x  y  z
 |      two   4  5  6   q  w  t
 |      
 |      A ValueError is raised if there are any duplicates.
 |      
 |      >>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'],
 |      ...                    "bar": ['A', 'A', 'B', 'C'],
 |      ...                    "baz": [1, 2, 3, 4]})
 |      >>> df
 |         foo bar  baz
 |      0  one   A    1
 |      1  one   A    2
 |      2  two   B    3
 |      3  two   C    4
 |      
 |      Notice that the first two rows are the same for our `index`
 |      and `columns` arguments.
 |      
 |      >>> df.pivot(index='foo', columns='bar', values='baz')
 |      Traceback (most recent call last):
 |         ...
 |      ValueError: Index contains duplicate entries, cannot reshape
 |  
 |  pivot_table(self, values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All')
 |      Create a spreadsheet-style pivot table as a DataFrame. The levels in
 |      the pivot table will be stored in MultiIndex objects (hierarchical
 |      indexes) on the index and columns of the result DataFrame.
 |      
 |      Parameters
 |      ----------
 |      values : column to aggregate, optional
 |      index : column, Grouper, array, or list of the previous
 |          Keys to group by on the pivot table index. If an array is
 |          passed, it must be the same length as the data and is used in
 |          the same manner as column values. The list can contain any of
 |          the other types (except list).
 |      columns : column, Grouper, array, or list of the previous
 |          Keys to group by on the pivot table column. If an array is
 |          passed, it must be the same length as the data and is used in
 |          the same manner as column values. The list can contain any of
 |          the other types (except list).
 |      aggfunc : function, list of functions, dict, default numpy.mean
 |          If list of functions passed, the resulting pivot table will have
 |          hierarchical columns whose top level are the function names
 |          (inferred from the function objects themselves)
 |          If dict is passed, the key is column to aggregate and value
 |          is function or list of functions
 |      fill_value : scalar, default None
 |          Value to replace missing values with
 |      margins : boolean, default False
 |          Add row / column margins (e.g. subtotals / grand totals).
 |      dropna : boolean, default True
 |          Do not include columns whose entries are all NaN
 |      margins_name : string, default 'All'
 |          Name of the row / column that will contain the totals
 |          when margins is True.
 |      
 |      Returns
 |      -------
 |      table : DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.pivot : Pivot without aggregation that can handle
 |          non-numeric data.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
 |      ...                          "bar", "bar", "bar", "bar"],
 |      ...                    "B": ["one", "one", "one", "two", "two",
 |      ...                          "one", "one", "two", "two"],
 |      ...                    "C": ["small", "large", "large", "small",
 |      ...                          "small", "large", "small", "small",
 |      ...                          "large"],
 |      ...                    "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
 |      ...                    "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
 |      >>> df
 |           A    B      C  D  E
 |      0  foo  one  small  1  2
 |      1  foo  one  large  2  4
 |      2  foo  one  large  2  5
 |      3  foo  two  small  3  5
 |      4  foo  two  small  3  6
 |      5  bar  one  large  4  6
 |      6  bar  one  small  5  8
 |      7  bar  two  small  6  9
 |      8  bar  two  large  7  9
 |      
 |      This first example aggregates values by taking the sum.
 |      
 |      >>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
 |      ...                        columns=['C'], aggfunc=np.sum)
 |      >>> table
 |      C        large  small
 |      A   B
 |      bar one      4      5
 |          two      7      6
 |      foo one      4      1
 |          two    NaN      6
 |      
 |      We can also fill missing values using the `fill_value` parameter.
 |      
 |      >>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
 |      ...                        columns=['C'], aggfunc=np.sum, fill_value=0)
 |      >>> table
 |      C        large  small
 |      A   B
 |      bar one      4      5
 |          two      7      6
 |      foo one      4      1
 |          two      0      6
 |      
 |      The next example aggregates by taking the mean across multiple columns.
 |      
 |      >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
 |      ...                        aggfunc={'D': np.mean,
 |      ...                                 'E': np.mean})
 |      >>> table
 |                        D         E
 |                     mean      mean
 |      A   C
 |      bar large  5.500000  7.500000
 |          small  5.500000  8.500000
 |      foo large  2.000000  4.500000
 |          small  2.333333  4.333333
 |      
 |      We can also calculate multiple types of aggregations for any given
 |      value column.
 |      
 |      >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
 |      ...                        aggfunc={'D': np.mean,
 |      ...                                 'E': [min, max, np.mean]})
 |      >>> table
 |                        D   E
 |                     mean max      mean min
 |      A   C
 |      bar large  5.500000  9   7.500000   6
 |          small  5.500000  9   8.500000   8
 |      foo large  2.000000  5   4.500000   4
 |          small  2.333333  6   4.333333   2
 |  
 |  pow(self, other, axis='columns', level=None, fill_value=None)
 |      Exponential power of dataframe and other, element-wise (binary operator `pow`).
 |      
 |      Equivalent to ``dataframe ** other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rpow`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and Series by axis with operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a DataFrame with a MultiIndex, matching on a level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
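 |      
 |      The shared examples above never call ``pow`` itself; as a minimal
 |      sketch reusing the same ``df``, squaring every element:
 |      
 |      >>> df.pow(2)  # element-wise df ** 2
 |                 angles  degrees
 |      circle          0   129600
 |      triangle        9    32400
 |      rectangle      16   129600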
 |  
 |  prod(self, axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)
 |      Return the product of the values for the requested axis.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      min_count : int, default 0
 |          The required number of valid values to perform the operation. If fewer than
 |          ``min_count`` non-NA values are present the result will be NA.
 |      
 |          .. versionadded :: 0.22.0
 |      
 |             Added with the default being 0. This means the sum of an all-NA
 |             or empty Series is 0, and the product of an all-NA or empty
 |             Series is 1.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      prod : Series or DataFrame (if level specified)
 |      
 |      Examples
 |      --------
 |      By default, the product of an empty or all-NA Series is ``1``
 |      
 |      >>> pd.Series([]).prod()
 |      1.0
 |      
 |      This can be controlled with the ``min_count`` parameter
 |      
 |      >>> pd.Series([]).prod(min_count=1)
 |      nan
 |      
 |      Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
 |      empty series identically.
 |      
 |      >>> pd.Series([np.nan]).prod()
 |      1.0
 |      
 |      >>> pd.Series([np.nan]).prod(min_count=1)
 |      nan
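 |      
 |      On a DataFrame, ``prod`` reduces column-wise by default; a minimal
 |      sketch (hypothetical data), with the NaN skipped:
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, np.nan]})  # hypothetical data
 |      >>> df.prod()
 |      A     6.0
 |      B    20.0
 |      dtype: float64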
 |  
 |  product = prod(self, axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)
 |  
 |  quantile(self, q=0.5, axis=0, numeric_only=True, interpolation='linear')
 |      Return values at the given quantile over requested axis.
 |      
 |      Parameters
 |      ----------
 |      q : float or array-like, default 0.5 (50% quantile)
 |          Value(s) between 0 and 1 giving the quantile(s) to compute.
 |      axis : {0, 1, 'index', 'columns'} (default 0)
 |          Equals 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
 |      numeric_only : bool, default True
 |          If False, the quantile of datetime and timedelta data will be
 |          computed as well.
 |      interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
 |          This optional parameter specifies the interpolation method to use,
 |          when the desired quantile lies between two data points `i` and `j`:
 |      
 |          * linear: `i + (j - i) * fraction`, where `fraction` is the
 |            fractional part of the index surrounded by `i` and `j`.
 |          * lower: `i`.
 |          * higher: `j`.
 |          * nearest: `i` or `j` whichever is nearest.
 |          * midpoint: (`i` + `j`) / 2.
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      Returns
 |      -------
 |      quantiles : Series or DataFrame
 |      
 |          - If ``q`` is an array, a DataFrame will be returned where the
 |            index is ``q``, the columns are the columns of self, and the
 |            values are the quantiles.
 |          - If ``q`` is a float, a Series will be returned where the
 |            index is the columns of self and the values are the quantiles.
 |      
 |      See Also
 |      --------
 |      core.window.Rolling.quantile: Rolling quantile.
 |      numpy.percentile: Numpy function to compute the percentile.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]),
 |      ...                   columns=['a', 'b'])
 |      >>> df.quantile(.1)
 |      a    1.3
 |      b    3.7
 |      Name: 0.1, dtype: float64
 |      >>> df.quantile([.1, .5])
 |             a     b
 |      0.1  1.3   3.7
 |      0.5  2.5  55.0
 |      
 |      Specifying `numeric_only=False` will also compute the quantile of
 |      datetime and timedelta data.
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2],
 |      ...                    'B': [pd.Timestamp('2010'),
 |      ...                          pd.Timestamp('2011')],
 |      ...                    'C': [pd.Timedelta('1 days'),
 |      ...                          pd.Timedelta('2 days')]})
 |      >>> df.quantile(0.5, numeric_only=False)
 |      A                    1.5
 |      B    2010-07-02 12:00:00
 |      C        1 days 12:00:00
 |      Name: 0.5, dtype: object
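 |      
 |      A quick sketch of the ``interpolation`` option, reusing the first
 |      example frame:
 |      
 |      >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]),
 |      ...                   columns=['a', 'b'])
 |      >>> df.quantile(.1, interpolation='midpoint')  # midpoint of i and j
 |      a    1.5
 |      b    5.5
 |      Name: 0.1, dtype: float64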
 |  
 |  query(self, expr, inplace=False, **kwargs)
 |      Query the columns of a DataFrame with a boolean expression.
 |      
 |      Parameters
 |      ----------
 |      expr : string
 |          The query string to evaluate.  You can refer to variables
 |          in the environment by prefixing them with an '@' character like
 |          ``@a + b``.
 |      inplace : bool
 |          Whether the query should modify the data in place or return
 |          a modified copy.
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      kwargs : dict
 |          See the documentation for :func:`pandas.eval` for complete details
 |          on the keyword arguments accepted by :meth:`DataFrame.query`.
 |      
 |      Returns
 |      -------
 |      q : DataFrame
 |      
 |      See Also
 |      --------
 |      pandas.eval
 |      DataFrame.eval
 |      
 |      Notes
 |      -----
 |      The result of the evaluation of this expression is first passed to
 |      :attr:`DataFrame.loc` and if that fails because of a
 |      multidimensional key (e.g., a DataFrame) then the result will be passed
 |      to :meth:`DataFrame.__getitem__`.
 |      
 |      This method uses the top-level :func:`pandas.eval` function to
 |      evaluate the passed query.
 |      
 |      The :meth:`~pandas.DataFrame.query` method uses a slightly
 |      modified Python syntax by default. For example, the ``&`` and ``|``
 |      (bitwise) operators have the precedence of their boolean cousins,
 |      :keyword:`and` and :keyword:`or`. This *is* syntactically valid Python,
 |      however the semantics are different.
 |      
 |      You can change the semantics of the expression by passing the keyword
 |      argument ``parser='python'``. This enforces the same semantics as
 |      evaluation in Python space. Likewise, you can pass ``engine='python'``
 |      to evaluate an expression using Python itself as a backend. This is not
 |      recommended as it is inefficient compared to using ``numexpr`` as the
 |      engine.
 |      
 |      The :attr:`DataFrame.index` and
 |      :attr:`DataFrame.columns` attributes of the
 |      :class:`~pandas.DataFrame` instance are placed in the query namespace
 |      by default, which allows you to treat both the index and columns of the
 |      frame as a column in the frame.
 |      The identifier ``index`` is used for the frame index; you can also
 |      use the name of the index to identify it in a query. Please note that
 |      Python keywords may not be used as identifiers.
 |      
 |      For further details and examples see the ``query`` documentation in
 |      :ref:`indexing <indexing.query>`.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(np.random.randn(10, 2), columns=list('ab'))
 |      >>> df.query('a > b')
 |      >>> df[df.a > df.b]  # same result as the previous expression
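 |      
 |      A self-contained sketch (hypothetical data) that also shows the
 |      ``@`` prefix for referring to environment variables:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 2, 1]})  # hypothetical data
 |      >>> threshold = 1
 |      >>> df.query('a > b and a > @threshold')
 |         a  b
 |      2  3  1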
 |  
 |  radd(self, other, axis='columns', level=None, fill_value=None)
 |      Addition of dataframe and other, element-wise (binary operator `radd`).
 |      
 |      Equivalent to ``other + dataframe``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `add`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and Series by axis with operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a DataFrame with a MultiIndex, matching on a level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
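 |      
 |      The shared examples above never call ``radd`` directly; as a minimal
 |      sketch reusing the same ``df``, ``df.radd(1)`` computes ``1 + df``:
 |      
 |      >>> df.radd(1)  # same as 1 + df
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361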
 |  
 |  rdiv = rtruediv(self, other, axis='columns', level=None, fill_value=None)
 |  
 |  reindex(self, labels=None, index=None, columns=None, axis=None, method=None, copy=True, level=None, fill_value=nan, limit=None, tolerance=None)
 |      Conform DataFrame to new index with optional filling logic, placing
 |      NA/NaN in locations having no value in the previous index. A new object
 |      is produced unless the new index is equivalent to the current one and
 |      ``copy=False``.
 |      
 |      Parameters
 |      ----------
 |      labels : array-like, optional
 |          New labels / index to conform the axis specified by 'axis' to.
 |      index, columns : array-like, optional
 |          New labels / index to conform to, should be specified using
 |          keywords. Preferably an Index object to avoid duplicating data.
 |      axis : int or str, optional
 |          Axis to target. Can be either the axis name ('index', 'columns')
 |          or number (0, 1).
 |      method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}
 |          Method to use for filling holes in reindexed DataFrame.
 |          Please note: this is only applicable to DataFrames/Series with a
 |          monotonically increasing/decreasing index.
 |      
 |          * None (default): don't fill gaps
 |          * pad / ffill: propagate last valid observation forward to next
 |            valid
 |          * backfill / bfill: use next valid observation to fill gap
 |          * nearest: use nearest valid observations to fill gap
 |      
 |      copy : bool, default True
 |          Return a new object, even if the passed indexes are the same.
 |      level : int or name
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : scalar, default np.NaN
 |          Value to use for missing values. Defaults to NaN, but can be any
 |          "compatible" value.
 |      limit : int, default None
 |          Maximum number of consecutive elements to forward or backward fill.
 |      tolerance : optional
 |          Maximum distance between original and new labels for inexact
 |          matches. The values of the index at the matching locations must
 |          satisfy the equation ``abs(index[indexer] - target) <= tolerance``.
 |      
 |          Tolerance may be a scalar value, which applies the same tolerance
 |          to all values, or list-like, which applies variable tolerance per
 |          element. List-like includes list, tuple, array, Series, and must be
 |          the same size as the index and its dtype must exactly match the
 |          index's type.
 |      
 |          .. versionadded:: 0.21.0 (list-like tolerance)
 |      
 |      Returns
 |      -------
 |      DataFrame with changed index.
 |      
 |      See Also
 |      --------
 |      DataFrame.set_index : Set row labels.
 |      DataFrame.reset_index : Remove row labels or move them to new columns.
 |      DataFrame.reindex_like : Change to same indices as other DataFrame.
 |      
 |      Examples
 |      --------
 |      
 |      ``DataFrame.reindex`` supports two calling conventions
 |      
 |      * ``(index=index_labels, columns=column_labels, ...)``
 |      * ``(labels, axis={'index', 'columns'}, ...)``
 |      
 |      We *highly* recommend using keyword arguments to clarify your
 |      intent.
 |      
 |      Create a dataframe with some fictional data.
 |      
 |      >>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
 |      >>> df = pd.DataFrame({
 |      ...      'http_status': [200,200,404,404,301],
 |      ...      'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
 |      ...       index=index)
 |      >>> df
 |                 http_status  response_time
 |      Firefox            200           0.04
 |      Chrome             200           0.02
 |      Safari             404           0.07
 |      IE10               404           0.08
 |      Konqueror          301           1.00
 |      
 |      Create a new index and reindex the dataframe. By default
 |      values in the new index that do not have corresponding
 |      records in the dataframe are assigned ``NaN``.
 |      
 |      >>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
 |      ...              'Chrome']
 |      >>> df.reindex(new_index)
 |                     http_status  response_time
 |      Safari               404.0           0.07
 |      Iceweasel              NaN            NaN
 |      Comodo Dragon          NaN            NaN
 |      IE10                 404.0           0.08
 |      Chrome               200.0           0.02
 |      
 |      We can fill in the missing values by passing a value to
 |      the keyword ``fill_value``. Because the index is not monotonically
 |      increasing or decreasing, we cannot use arguments to the keyword
 |      ``method`` to fill the ``NaN`` values.
 |      
 |      >>> df.reindex(new_index, fill_value=0)
 |                     http_status  response_time
 |      Safari                 404           0.07
 |      Iceweasel                0           0.00
 |      Comodo Dragon            0           0.00
 |      IE10                   404           0.08
 |      Chrome                 200           0.02
 |      
 |      >>> df.reindex(new_index, fill_value='missing')
 |                    http_status response_time
 |      Safari                404          0.07
 |      Iceweasel         missing       missing
 |      Comodo Dragon     missing       missing
 |      IE10                  404          0.08
 |      Chrome                200          0.02
 |      
 |      We can also reindex the columns.
 |      
 |      >>> df.reindex(columns=['http_status', 'user_agent'])
 |                 http_status  user_agent
 |      Firefox            200         NaN
 |      Chrome             200         NaN
 |      Safari             404         NaN
 |      IE10               404         NaN
 |      Konqueror          301         NaN
 |      
 |      Or we can use "axis-style" keyword arguments
 |      
 |      >>> df.reindex(['http_status', 'user_agent'], axis="columns")
 |                 http_status  user_agent
 |      Firefox            200         NaN
 |      Chrome             200         NaN
 |      Safari             404         NaN
 |      IE10               404         NaN
 |      Konqueror          301         NaN
 |      
 |      To further illustrate the filling functionality in
 |      ``reindex``, we will create a dataframe with a
 |      monotonically increasing index (for example, a sequence
 |      of dates).
 |      
 |      >>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
 |      >>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
 |      ...                    index=date_index)
 |      >>> df2
 |                  prices
 |      2010-01-01   100.0
 |      2010-01-02   101.0
 |      2010-01-03     NaN
 |      2010-01-04   100.0
 |      2010-01-05    89.0
 |      2010-01-06    88.0
 |      
 |      Suppose we decide to expand the dataframe to cover a wider
 |      date range.
 |      
 |      >>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
 |      >>> df2.reindex(date_index2)
 |                  prices
 |      2009-12-29     NaN
 |      2009-12-30     NaN
 |      2009-12-31     NaN
 |      2010-01-01   100.0
 |      2010-01-02   101.0
 |      2010-01-03     NaN
 |      2010-01-04   100.0
 |      2010-01-05    89.0
 |      2010-01-06    88.0
 |      2010-01-07     NaN
 |      
 |      The index entries that did not have a value in the original data frame
 |      (for example, '2009-12-29') are by default filled with ``NaN``.
 |      If desired, we can fill in the missing values using one of several
 |      options.
 |      
 |      For example, to fill the ``NaN`` values backward using the next
 |      valid observation, pass ``bfill`` as an argument to the ``method``
 |      keyword.
 |      
 |      >>> df2.reindex(date_index2, method='bfill')
 |                  prices
 |      2009-12-29   100.0
 |      2009-12-30   100.0
 |      2009-12-31   100.0
 |      2010-01-01   100.0
 |      2010-01-02   101.0
 |      2010-01-03     NaN
 |      2010-01-04   100.0
 |      2010-01-05    89.0
 |      2010-01-06    88.0
 |      2010-01-07     NaN
 |      
 |      Please note that the ``NaN`` value present in the original dataframe
 |      (at index value 2010-01-03) will not be filled by any of the
 |      value propagation schemes. This is because filling while reindexing
 |      does not look at dataframe values, but only compares the original and
 |      desired indexes. If you do want to fill in the ``NaN`` values present
 |      in the original dataframe, use the ``fillna()`` method.
 |      
 |      See the :ref:`user guide <basics.reindexing>` for more.
 |  
 |  reindex_axis(self, labels, axis=0, method=None, level=None, copy=True, limit=None, fill_value=nan)
 |      Conform input object to new index.
 |      
 |      .. deprecated:: 0.21.0
 |          Use `reindex` instead.
 |      
 |      By default, places NaN in locations having no value in the
 |      previous index. A new object is produced unless the new index
 |      is equivalent to the current one and copy=False.
 |      
 |      Parameters
 |      ----------
 |      labels : array-like
 |          New labels / index to conform to. Preferably an Index object to
 |          avoid duplicating data.
 |      axis : {0 or 'index', 1 or 'columns'}
 |          Indicate whether to use rows or columns.
 |      method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional
 |          Method to use for filling holes in reindexed DataFrame:
 |      
 |          * default: don't fill gaps.
 |          * pad / ffill: propagate last valid observation forward to next
 |            valid.
 |          * backfill / bfill: use next valid observation to fill gap.
 |          * nearest: use nearest valid observations to fill gap.
 |      
 |      level : int or str
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      copy : bool, default True
 |          Return a new object, even if the passed indexes are the same.
 |      limit : int, optional
 |          Maximum number of consecutive elements to forward or backward fill.
 |      fill_value : float, default NaN
 |          Value used to fill in locations having no value in the previous
 |          index.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Returns a new DataFrame object with new indices, unless the new
 |          index is equivalent to the current one and copy=False.
 |      
 |      See Also
 |      --------
 |      DataFrame.set_index : Set row labels.
 |      DataFrame.reset_index : Remove row labels or move them to new columns.
 |      DataFrame.reindex : Change to new indices or expand indices.
 |      DataFrame.reindex_like : Change to same indices as other DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
 |      ...                   index=['dog', 'hawk'])
 |      >>> df
 |            num_legs  num_wings
 |      dog          4          0
 |      hawk         2          2
 |      >>> df.reindex(['num_wings', 'num_legs', 'num_heads'],
 |      ...            axis='columns')
 |            num_wings  num_legs  num_heads
 |      dog           0         4        NaN
 |      hawk          2         2        NaN
 |  
 |  rename(self, mapper=None, index=None, columns=None, axis=None, copy=True, inplace=False, level=None)
 |      Alter axes labels.
 |      
 |      Function / dict values must be unique (1-to-1). Labels not contained in
 |      a dict / Series will be left as-is. Extra labels listed don't throw an
 |      error.
 |      
 |      See the :ref:`user guide <basics.rename>` for more.
 |      
 |      Parameters
 |      ----------
 |      mapper, index, columns : dict-like or function, optional
 |          dict-like or functions transformations to apply to
 |          that axis' values. Use either ``mapper`` and ``axis`` to
 |          specify the axis to target with ``mapper``, or ``index`` and
 |          ``columns``.
 |      axis : int or str, optional
 |          Axis to target with ``mapper``. Can be either the axis name
 |          ('index', 'columns') or number (0, 1). The default is 'index'.
 |      copy : boolean, default True
 |          Also copy underlying data
 |      inplace : boolean, default False
 |          Whether to return a new DataFrame. If True then value of copy is
 |          ignored.
 |      level : int or level name, default None
 |          In case of a MultiIndex, only rename labels in the specified
 |          level.
 |      
 |      Returns
 |      -------
 |      renamed : DataFrame
 |      
 |      See Also
 |      --------
 |      pandas.DataFrame.rename_axis
 |      
 |      Examples
 |      --------
 |      
 |      ``DataFrame.rename`` supports two calling conventions
 |      
 |      * ``(index=index_mapper, columns=columns_mapper, ...)``
 |      * ``(mapper, axis={'index', 'columns'}, ...)``
 |      
 |      We *highly* recommend using keyword arguments to clarify your
 |      intent.
 |      
 |      >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
 |      >>> df.rename(index=str, columns={"A": "a", "B": "c"})
 |         a  c
 |      0  1  4
 |      1  2  5
 |      2  3  6
 |      
 |      >>> df.rename(index=str, columns={"A": "a", "C": "c"})
 |         a  B
 |      0  1  4
 |      1  2  5
 |      2  3  6
 |      
 |      Using axis-style parameters
 |      
 |      >>> df.rename(str.lower, axis='columns')
 |         a  b
 |      0  1  4
 |      1  2  5
 |      2  3  6
 |      
 |      >>> df.rename({1: 2, 2: 4}, axis='index')
 |         A  B
 |      0  1  4
 |      2  2  5
 |      4  3  6
 |  
 |  reorder_levels(self, order, axis=0)
 |      Rearrange index levels using input order. May not drop or
 |      duplicate levels.
 |      
 |      Parameters
 |      ----------
 |      order : list of int or list of str
 |          List representing new level order. Reference level by number
 |          (position) or by key (label).
 |      axis : int
 |          The axis along which to reorder the levels (0 for the index,
 |          1 for the columns).
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Same type as caller, with the index levels rearranged.
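 |      
 |      The docstring lists no example; a minimal sketch (hypothetical
 |      MultiIndex data) swapping the two index levels:
 |      
 |      >>> idx = pd.MultiIndex.from_tuples(
 |      ...     [('A', 'x'), ('A', 'y'), ('B', 'x'), ('B', 'y')],
 |      ...     names=['outer', 'inner'])  # hypothetical index
 |      >>> df = pd.DataFrame({'value': [1, 2, 3, 4]}, index=idx)
 |      >>> df.reorder_levels(['inner', 'outer'])
 |                   value
 |      inner outer
 |      x     A          1
 |      y     A          2
 |      x     B          3
 |      y     B          4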
 |  
 |  replace(self, to_replace=None, value=None, inplace=False, limit=None, regex=False, method='pad')
 |      Replace values given in `to_replace` with `value`.
 |      
 |      Values of the DataFrame are replaced with other values dynamically.
 |      This differs from updating with ``.loc`` or ``.iloc``, which require
 |      you to specify a location to update with some value.
 |      
 |      Parameters
 |      ----------
 |      to_replace : str, regex, list, dict, Series, int, float, or None
 |          How to find the values that will be replaced.
 |      
 |          * numeric, str or regex:
 |      
 |              - numeric: numeric values equal to `to_replace` will be
 |                replaced with `value`
 |              - str: string exactly matching `to_replace` will be replaced
 |                with `value`
 |              - regex: regexs matching `to_replace` will be replaced with
 |                `value`
 |      
 |          * list of str, regex, or numeric:
 |      
 |              - First, if `to_replace` and `value` are both lists, they
 |                **must** be the same length.
 |              - Second, if ``regex=True`` then all of the strings in **both**
 |                lists will be interpreted as regexs otherwise they will match
 |                directly. This doesn't matter much for `value` since there
 |                are only a few possible substitution regexes you can use.
 |              - str, regex and numeric rules apply as above.
 |      
 |          * dict:
 |      
 |              - Dicts can be used to specify different replacement values
 |                for different existing values. For example,
 |                ``{'a': 'b', 'y': 'z'}`` replaces the value 'a' with 'b' and
 |                'y' with 'z'. To use a dict in this way the `value`
 |                parameter should be `None`.
 |              - For a DataFrame a dict can specify that different values
 |                should be replaced in different columns. For example,
 |                ``{'a': 1, 'b': 'z'}`` looks for the value 1 in column 'a'
 |                and the value 'z' in column 'b' and replaces these values
 |                with whatever is specified in `value`. The `value` parameter
 |                should not be ``None`` in this case. You can treat this as a
 |                special case of passing two lists except that you are
 |                specifying the column to search in.
 |              - For a DataFrame nested dictionaries, e.g.,
 |                ``{'a': {'b': np.nan}}``, are read as follows: look in column
 |                'a' for the value 'b' and replace it with NaN. The `value`
 |                parameter should be ``None`` to use a nested dict in this
 |                way. You can nest regular expressions as well. Note that
 |                column names (the top-level dictionary keys in a nested
 |                dictionary) **cannot** be regular expressions.
 |      
 |          * None:
 |      
 |              - This means that the `regex` argument must be a string,
 |                compiled regular expression, or list, dict, ndarray or
 |                Series of such elements. If `value` is also ``None`` then
 |                this **must** be a nested dictionary or Series.
 |      
 |          See the examples section for examples of each of these.
 |      value : scalar, dict, list, str, regex, default None
 |          Value to replace any values matching `to_replace` with.
 |          For a DataFrame a dict of values can be used to specify which
 |          value to use for each column (columns not in the dict will not be
 |          filled). Regular expressions, strings and lists or dicts of such
 |          objects are also allowed.
 |      inplace : bool, default False
 |          If True, in place. Note: this will modify any
 |          other views on this object (e.g. a column from a DataFrame).
 |          Returns the caller if this is True.
 |      limit : int, default None
 |          Maximum size gap to forward or backward fill.
 |      regex : bool or same types as `to_replace`, default False
 |          Whether to interpret `to_replace` and/or `value` as regular
 |          expressions. If this is ``True`` then `to_replace` *must* be a
 |          string. Alternatively, this could be a regular expression or a
 |          list, dict, or array of regular expressions in which case
 |          `to_replace` must be ``None``.
 |      method : {'pad', 'ffill', 'bfill', `None`}
 |          The method to use for replacement when `to_replace` is a
 |          scalar, list or tuple and `value` is ``None``.
 |      
 |          .. versionchanged:: 0.23.0
 |              Added to DataFrame.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Object after replacement.
 |      
 |      Raises
 |      ------
 |      AssertionError
 |          * If `regex` is not a ``bool`` and `to_replace` is not
 |            ``None``.
 |      TypeError
 |          * If `to_replace` is a ``dict`` and `value` is not a ``list``,
 |            ``dict``, ``ndarray``, or ``Series``
 |          * If `to_replace` is ``None`` and `regex` is not compilable
 |            into a regular expression or is a list, dict, ndarray, or
 |            Series.
 |          * When replacing multiple ``bool`` or ``datetime64`` objects and
 |            the arguments to `to_replace` does not match the type of the
 |            value being replaced
 |      ValueError
 |          * If a ``list`` or an ``ndarray`` is passed to `to_replace` and
 |            `value` but they are not the same length.
 |      
 |      See Also
 |      --------
 |      DataFrame.fillna : Fill NA values.
 |      DataFrame.where : Replace values based on boolean condition.
 |      Series.str.replace : Simple string replacement.
 |      
 |      Notes
 |      -----
 |      * Regex substitution is performed under the hood with ``re.sub``. The
 |        rules for substitution for ``re.sub`` are the same.
 |      * Regular expressions will only substitute on strings, meaning you
 |        cannot provide, for example, a regular expression matching floating
 |        point numbers and expect the columns in your frame that have a
 |        numeric dtype to be matched. However, if those floating point
 |        numbers *are* strings, then you can do this.
 |      * This method has *a lot* of options. You are encouraged to experiment
 |        and play with this method to gain intuition about how it works.
 |      * When a dict is used as the `to_replace` value, the dict's keys act
 |        as the to_replace part and the dict's values act as the value
 |        parameter.
 |      
 |      Examples
 |      --------
 |      
 |      **Scalar `to_replace` and `value`**
 |      
 |      >>> s = pd.Series([0, 1, 2, 3, 4])
 |      >>> s.replace(0, 5)
 |      0    5
 |      1    1
 |      2    2
 |      3    3
 |      4    4
 |      dtype: int64
 |      
 |      >>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
 |      ...                    'B': [5, 6, 7, 8, 9],
 |      ...                    'C': ['a', 'b', 'c', 'd', 'e']})
 |      >>> df.replace(0, 5)
 |         A  B  C
 |      0  5  5  a
 |      1  1  6  b
 |      2  2  7  c
 |      3  3  8  d
 |      4  4  9  e
 |      
 |      **List-like `to_replace`**
 |      
 |      >>> df.replace([0, 1, 2, 3], 4)
 |         A  B  C
 |      0  4  5  a
 |      1  4  6  b
 |      2  4  7  c
 |      3  4  8  d
 |      4  4  9  e
 |      
 |      >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
 |         A  B  C
 |      0  4  5  a
 |      1  3  6  b
 |      2  2  7  c
 |      3  1  8  d
 |      4  4  9  e
 |      
 |      >>> s.replace([1, 2], method='bfill')
 |      0    0
 |      1    3
 |      2    3
 |      3    3
 |      4    4
 |      dtype: int64
 |      
 |      **dict-like `to_replace`**
 |      
 |      >>> df.replace({0: 10, 1: 100})
 |           A  B  C
 |      0   10  5  a
 |      1  100  6  b
 |      2    2  7  c
 |      3    3  8  d
 |      4    4  9  e
 |      
 |      >>> df.replace({'A': 0, 'B': 5}, 100)
 |           A    B  C
 |      0  100  100  a
 |      1    1    6  b
 |      2    2    7  c
 |      3    3    8  d
 |      4    4    9  e
 |      
 |      >>> df.replace({'A': {0: 100, 4: 400}})
 |           A  B  C
 |      0  100  5  a
 |      1    1  6  b
 |      2    2  7  c
 |      3    3  8  d
 |      4  400  9  e
 |      
 |      **Regular expression `to_replace`**
 |      
 |      >>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
 |      ...                    'B': ['abc', 'bar', 'xyz']})
 |      >>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
 |            A    B
 |      0   new  abc
 |      1   foo  new
 |      2  bait  xyz
 |      
 |      >>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
 |            A    B
 |      0   new  abc
 |      1   foo  bar
 |      2  bait  xyz
 |      
 |      >>> df.replace(regex=r'^ba.$', value='new')
 |            A    B
 |      0   new  abc
 |      1   foo  new
 |      2  bait  xyz
 |      
 |      >>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
 |            A    B
 |      0   new  abc
 |      1   xyz  new
 |      2  bait  xyz
 |      
 |      >>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
 |            A    B
 |      0   new  abc
 |      1   new  new
 |      2  bait  xyz
 |      
 |      Note that when replacing multiple ``bool`` or ``datetime64`` objects,
 |      the data types in the `to_replace` parameter must match the data
 |      type of the value being replaced:
 |      
 |      >>> df = pd.DataFrame({'A': [True, False, True],
 |      ...                    'B': [False, True, False]})
 |      >>> df.replace({'a string': 'new value', True: False})  # raises
 |      Traceback (most recent call last):
 |          ...
 |      TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
 |      
 |      This raises a ``TypeError`` because one of the ``dict`` keys is not of
 |      the correct type for replacement.
 |      
 |      Compare the behavior of ``s.replace({'a': None})`` and
 |      ``s.replace('a', None)`` to understand the peculiarities
 |      of the `to_replace` parameter:
 |      
 |      >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
 |      
 |      When one uses a dict as the `to_replace` value, the dict's values
 |      are treated as the `value` parameter.
 |      ``s.replace({'a': None})`` is equivalent to
 |      ``s.replace(to_replace={'a': None}, value=None, method=None)``:
 |      
 |      >>> s.replace({'a': None})
 |      0      10
 |      1    None
 |      2    None
 |      3       b
 |      4    None
 |      dtype: object
 |      
 |      When ``value=None`` and `to_replace` is a scalar, list or
 |      tuple, `replace` uses the method parameter (default 'pad') to do the
 |      replacement. This is why the 'a' values are replaced by 10
 |      in rows 1 and 2 and by 'b' in row 4 in this case.
 |      The command ``s.replace('a', None)`` is actually equivalent to
 |      ``s.replace(to_replace='a', value=None, method='pad')``:
 |      
 |      >>> s.replace('a', None)
 |      0    10
 |      1    10
 |      2    10
 |      3     b
 |      4     b
 |      dtype: object
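 |      
 |      The nested-dictionary form of `to_replace` can also carry regular
 |      expressions, as noted in the parameter description above. A minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> df = pd.DataFrame({'A': ['bat', 'foo'], 'B': ['bat', 'xyz']})
 |      >>> df.replace({'A': {r'^ba.$': 'new'}}, regex=True)
 |           A    B
 |      0  new  bat
 |      1  foo  xyz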
 |  
 |  reset_index(self, level=None, drop=False, inplace=False, col_level=0, col_fill='')
 |      Reset the index, or a level of it.
 |      
 |      Reset the index of the DataFrame, and use the default one instead.
 |      If the DataFrame has a MultiIndex, this method can remove one or more
 |      levels.
 |      
 |      Parameters
 |      ----------
 |      level : int, str, tuple, or list, default None
 |          Only remove the given levels from the index. Removes all levels by
 |          default.
 |      drop : bool, default False
 |          Do not try to insert index into dataframe columns. This resets
 |          the index to the default integer index.
 |      inplace : bool, default False
 |          Modify the DataFrame in place (do not create a new object).
 |      col_level : int or str, default 0
 |          If the columns have multiple levels, determines which level the
 |          labels are inserted into. By default it is inserted into the first
 |          level.
 |      col_fill : object, default ''
 |          If the columns have multiple levels, determines how the other
 |          levels are named. If None then the index name is repeated.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          DataFrame with the new index.
 |      
 |      See Also
 |      --------
 |      DataFrame.set_index : Opposite of reset_index.
 |      DataFrame.reindex : Change to new indices or expand indices.
 |      DataFrame.reindex_like : Change to same indices as other DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([('bird', 389.0),
 |      ...                    ('bird', 24.0),
 |      ...                    ('mammal', 80.5),
 |      ...                    ('mammal', np.nan)],
 |      ...                   index=['falcon', 'parrot', 'lion', 'monkey'],
 |      ...                   columns=('class', 'max_speed'))
 |      >>> df
 |               class  max_speed
 |      falcon    bird      389.0
 |      parrot    bird       24.0
 |      lion    mammal       80.5
 |      monkey  mammal        NaN
 |      
 |      When we reset the index, the old index is added as a column, and a
 |      new sequential index is used:
 |      
 |      >>> df.reset_index()
 |          index   class  max_speed
 |      0  falcon    bird      389.0
 |      1  parrot    bird       24.0
 |      2    lion  mammal       80.5
 |      3  monkey  mammal        NaN
 |      
 |      We can use the `drop` parameter to avoid the old index being added as
 |      a column:
 |      
 |      >>> df.reset_index(drop=True)
 |          class  max_speed
 |      0    bird      389.0
 |      1    bird       24.0
 |      2  mammal       80.5
 |      3  mammal        NaN
 |      
 |      You can also use `reset_index` with `MultiIndex`.
 |      
 |      >>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'),
 |      ...                                    ('bird', 'parrot'),
 |      ...                                    ('mammal', 'lion'),
 |      ...                                    ('mammal', 'monkey')],
 |      ...                                   names=['class', 'name'])
 |      >>> columns = pd.MultiIndex.from_tuples([('speed', 'max'),
 |      ...                                      ('species', 'type')])
 |      >>> df = pd.DataFrame([(389.0, 'fly'),
 |      ...                    ( 24.0, 'fly'),
 |      ...                    ( 80.5, 'run'),
 |      ...                    (np.nan, 'jump')],
 |      ...                   index=index,
 |      ...                   columns=columns)
 |      >>> df
 |                     speed species
 |                       max    type
 |      class  name
 |      bird   falcon  389.0     fly
 |             parrot   24.0     fly
 |      mammal lion     80.5     run
 |             monkey    NaN    jump
 |      
 |      If the index has multiple levels, we can reset a subset of them:
 |      
 |      >>> df.reset_index(level='class')
 |               class  speed species
 |                        max    type
 |      name
 |      falcon    bird  389.0     fly
 |      parrot    bird   24.0     fly
 |      lion    mammal   80.5     run
 |      monkey  mammal    NaN    jump
 |      
 |      If we are not dropping the index, by default, it is placed in the top
 |      level. We can place it in another level:
 |      
 |      >>> df.reset_index(level='class', col_level=1)
 |                      speed species
 |               class    max    type
 |      name
 |      falcon    bird  389.0     fly
 |      parrot    bird   24.0     fly
 |      lion    mammal   80.5     run
 |      monkey  mammal    NaN    jump
 |      
 |      When the index is inserted under another level, we can specify under
 |      which one with the parameter `col_fill`:
 |      
 |      >>> df.reset_index(level='class', col_level=1, col_fill='species')
 |                    species  speed species
 |                      class    max    type
 |      name
 |      falcon           bird  389.0     fly
 |      parrot           bird   24.0     fly
 |      lion           mammal   80.5     run
 |      monkey         mammal    NaN    jump
 |      
 |      If we specify a nonexistent level for `col_fill`, it is created:
 |      
 |      >>> df.reset_index(level='class', col_level=1, col_fill='genus')
 |                      genus  speed species
 |                      class    max    type
 |      name
 |      falcon           bird  389.0     fly
 |      parrot           bird   24.0     fly
 |      lion           mammal   80.5     run
 |      monkey         mammal    NaN    jump
 |  
 |  rfloordiv(self, other, axis='columns', level=None, fill_value=None)
 |      Integer division of dataframe and other, element-wise (binary operator `rfloordiv`).
 |      
 |      Equivalent to ``other // dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `floordiv`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
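 |      
 |      The shared examples above never call ``rfloordiv`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [2, 3]}).rfloordiv(12)
 |         a
 |      0  6
 |      1  4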
 |  
 |  rmod(self, other, axis='columns', level=None, fill_value=None)
 |      Modulo of dataframe and other, element-wise (binary operator `rmod`).
 |      
 |      Equivalent to ``other % dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `mod`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
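 |      
 |      The shared examples above never call ``rmod`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [3, 5]}).rmod(7)
 |         a
 |      0  1
 |      1  2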
 |  
 |  rmul(self, other, axis='columns', level=None, fill_value=None)
 |      Multiplication of dataframe and other, element-wise (binary operator `rmul`).
 |      
 |      Equivalent to ``other * dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `mul`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
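 |      
 |      The shared examples above never call ``rmul`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [2, 3]}).rmul(4)
 |          a
 |      0   8
 |      1  12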
 |  
 |  rolling(self, window, min_periods=None, center=False, win_type=None, on=None, axis=0, closed=None)
 |      Provides rolling window calculations.
 |      
 |      .. versionadded:: 0.18.0
 |      
 |      Parameters
 |      ----------
 |      window : int, or offset
 |          Size of the moving window. This is the number of observations used for
 |          calculating the statistic. Each window will be a fixed size.
 |      
 |          If it is an offset, then this will be the time period of each
 |          window. Each window will be of variable size, based on the
 |          observations included in the time period. This is only valid for
 |          datetime-like indexes. This is new in 0.19.0.
 |      min_periods : int, default None
 |          Minimum number of observations in window required to have a value
 |          (otherwise result is NA). For a window that is specified by an offset,
 |          `min_periods` will default to 1. Otherwise, `min_periods` will default
 |          to the size of the window.
 |      center : bool, default False
 |          Set the labels at the center of the window.
 |      win_type : str, default None
 |          Provide a window type. If ``None``, all points are evenly weighted.
 |          See the notes below for further information.
 |      on : str, optional
 |          For a DataFrame, the column on which to calculate
 |          the rolling window, rather than the index.
 |      axis : int or str, default 0
 |      closed : str, default None
 |          Make the interval closed on the 'right', 'left', 'both' or
 |          'neither' endpoints.
 |          For offset-based windows, it defaults to 'right'.
 |          For fixed windows, defaults to 'both'. Remaining cases not implemented
 |          for fixed windows.
 |      
 |          .. versionadded:: 0.20.0
 |      
 |      Returns
 |      -------
 |      A ``Window`` or ``Rolling`` object, sub-classed for the particular
 |      operation.
 |      
 |      See Also
 |      --------
 |      expanding : Provides expanding transformations.
 |      ewm : Provides exponential weighted functions.
 |      
 |      Notes
 |      -----
 |      By default, the result is set to the right edge of the window. This can be
 |      changed to the center of the window by setting ``center=True``.
 |      
 |      To learn more about the offsets & frequency strings, please see `this link
 |      <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
 |      
 |      The recognized win_types are:
 |      
 |      * ``boxcar``
 |      * ``triang``
 |      * ``blackman``
 |      * ``hamming``
 |      * ``bartlett``
 |      * ``parzen``
 |      * ``bohman``
 |      * ``blackmanharris``
 |      * ``nuttall``
 |      * ``barthann``
 |      * ``kaiser`` (needs beta)
 |      * ``gaussian`` (needs std)
 |      * ``general_gaussian`` (needs power, width)
 |      * ``slepian`` (needs width).
 |      
 |      If ``win_type=None`` all points are evenly weighted. To learn more about
 |      different window types see `scipy.signal window functions
 |      <https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions>`__.
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
 |      >>> df
 |           B
 |      0  0.0
 |      1  1.0
 |      2  2.0
 |      3  NaN
 |      4  4.0
 |      
 |      Rolling sum with a window length of 2, using the 'triang'
 |      window type.
 |      
 |      >>> df.rolling(2, win_type='triang').sum()
 |           B
 |      0  NaN
 |      1  1.0
 |      2  2.5
 |      3  NaN
 |      4  NaN
 |      
 |      Rolling sum with a window length of 2; `min_periods` defaults
 |      to the window length.
 |      
 |      >>> df.rolling(2).sum()
 |           B
 |      0  NaN
 |      1  1.0
 |      2  3.0
 |      3  NaN
 |      4  NaN
 |      
 |      Same as above, but with `min_periods` set explicitly.
 |      
 |      >>> df.rolling(2, min_periods=1).sum()
 |           B
 |      0  0.0
 |      1  1.0
 |      2  3.0
 |      3  2.0
 |      4  4.0
 |      
 |      A ragged (meaning irregular-frequency), time-indexed DataFrame.
 |      
 |      >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
 |      ...                   index = [pd.Timestamp('20130101 09:00:00'),
 |      ...                            pd.Timestamp('20130101 09:00:02'),
 |      ...                            pd.Timestamp('20130101 09:00:03'),
 |      ...                            pd.Timestamp('20130101 09:00:05'),
 |      ...                            pd.Timestamp('20130101 09:00:06')])
 |      
 |      >>> df
 |                             B
 |      2013-01-01 09:00:00  0.0
 |      2013-01-01 09:00:02  1.0
 |      2013-01-01 09:00:03  2.0
 |      2013-01-01 09:00:05  NaN
 |      2013-01-01 09:00:06  4.0
 |      
 |      In contrast to an integer rolling window, this will roll a
 |      variable-length window corresponding to the time period.
 |      The default for `min_periods` is 1.
 |      
 |      >>> df.rolling('2s').sum()
 |                             B
 |      2013-01-01 09:00:00  0.0
 |      2013-01-01 09:00:02  1.0
 |      2013-01-01 09:00:03  3.0
 |      2013-01-01 09:00:05  NaN
 |      2013-01-01 09:00:06  4.0
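 |      
 |      A minimal sketch of ``center=True`` (not part of the original
 |      docstring); the result is labelled at the center of each window:
 |      
 |      >>> df = pd.DataFrame({'B': [0, 1, 2, 3, 4]})
 |      >>> df.rolling(3, center=True).sum()
 |           B
 |      0  NaN
 |      1  3.0
 |      2  6.0
 |      3  9.0
 |      4  NaN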
 |  
 |  round(self, decimals=0, *args, **kwargs)
 |      Round a DataFrame to a variable number of decimal places.
 |      
 |      Parameters
 |      ----------
 |      decimals : int, dict, Series
 |          Number of decimal places to round each column to. If an int is
 |          given, round each column to the same number of places.
 |          Otherwise dict and Series round to variable numbers of places.
 |          Column names should be in the keys if `decimals` is a
 |          dict-like, or in the index if `decimals` is a Series. Any
 |          columns not included in `decimals` will be left as is. Elements
 |          of `decimals` which are not columns of the input will be
 |          ignored.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |      
 |      See Also
 |      --------
 |      numpy.around
 |      Series.round
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(np.random.random([3, 3]),
 |      ...     columns=['A', 'B', 'C'], index=['first', 'second', 'third'])
 |      >>> df
 |                     A         B         C
 |      first   0.028208  0.992815  0.173891
 |      second  0.038683  0.645646  0.577595
 |      third   0.877076  0.149370  0.491027
 |      >>> df.round(2)
 |                 A     B     C
 |      first   0.03  0.99  0.17
 |      second  0.04  0.65  0.58
 |      third   0.88  0.15  0.49
 |      >>> df.round({'A': 1, 'C': 2})
 |                A         B     C
 |      first   0.0  0.992815  0.17
 |      second  0.0  0.645646  0.58
 |      third   0.9  0.149370  0.49
 |      >>> decimals = pd.Series([1, 0, 2], index=['A', 'B', 'C'])
 |      >>> df.round(decimals)
 |                A  B     C
 |      first   0.0  1  0.17
 |      second  0.0  1  0.58
 |      third   0.9  0  0.49
 |  
 |  rpow(self, other, axis='columns', level=None, fill_value=None)
 |      Exponential power of dataframe and other, element-wise (binary operator `rpow`).
 |      
 |      Equivalent to ``other ** dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `pow`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
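 |      
 |      The shared examples above never call ``rpow`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [2, 3]}).rpow(10)
 |            a
 |      0   100
 |      1  1000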
 |  
 |  rsub(self, other, axis='columns', level=None, fill_value=None)
 |      Subtraction of dataframe and other, element-wise (binary operator `rsub`).
 |      
 |      Equivalent to ``other - dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `sub`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
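 |      
 |      The shared examples above never call ``rsub`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [1, 2]}).rsub(10)
 |         a
 |      0  9
 |      1  8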
 |  
 |  rtruediv(self, other, axis='columns', level=None, fill_value=None)
 |      Floating division of dataframe and other, element-wise (binary operator `rtruediv`).
 |      
 |      Equivalent to ``other / dataframe``, but with support for
 |      substituting a `fill_value` for missing data in one of the inputs.
 |      The non-reversed version is `truediv`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply by a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex DataFrame by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
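 |      
 |      The shared examples above never call ``rtruediv`` itself; a minimal
 |      sketch for illustration (not part of the original docstring):
 |      
 |      >>> pd.DataFrame({'a': [2, 4]}).rtruediv(10)
 |           a
 |      0  5.0
 |      1  2.5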
 |  
 |  select_dtypes(self, include=None, exclude=None)
 |      Return a subset of the DataFrame's columns based on the column dtypes.
 |      
 |      Parameters
 |      ----------
 |      include, exclude : scalar or list-like
 |          A selection of dtypes or strings to be included/excluded. At least
 |          one of these parameters must be supplied.
 |      
 |      Returns
 |      -------
 |      subset : DataFrame
 |          The subset of the frame including the dtypes in ``include`` and
 |          excluding the dtypes in ``exclude``.
 |      
 |      Raises
 |      ------
 |      ValueError
 |          * If both of ``include`` and ``exclude`` are empty
 |          * If ``include`` and ``exclude`` have overlapping elements
 |          * If any kind of string dtype is passed in.
 |      
 |      Notes
 |      -----
 |      * To select all *numeric* types, use ``np.number`` or ``'number'``
 |      * To select strings you must use the ``object`` dtype, but note that
 |        this will return *all* object dtype columns
 |      * See the `numpy dtype hierarchy
 |        <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`__
 |      * To select datetimes, use ``np.datetime64``, ``'datetime'`` or
 |        ``'datetime64'``
 |      * To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or
 |        ``'timedelta64'``
 |      * To select Pandas categorical dtypes, use ``'category'``
 |      * To select Pandas datetimetz dtypes, use ``'datetimetz'`` (new in
 |        0.20.0) or ``'datetime64[ns, tz]'``
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'a': [1, 2] * 3,
 |      ...                    'b': [True, False] * 3,
 |      ...                    'c': [1.0, 2.0] * 3})
 |      >>> df
 |              a      b  c
 |      0       1   True  1.0
 |      1       2  False  2.0
 |      2       1   True  1.0
 |      3       2  False  2.0
 |      4       1   True  1.0
 |      5       2  False  2.0
 |      
 |      >>> df.select_dtypes(include='bool')
 |         b
 |      0  True
 |      1  False
 |      2  True
 |      3  False
 |      4  True
 |      5  False
 |      
 |      >>> df.select_dtypes(include=['float64'])
 |         c
 |      0  1.0
 |      1  2.0
 |      2  1.0
 |      3  2.0
 |      4  1.0
 |      5  2.0
 |      
 |      >>> df.select_dtypes(exclude=['int'])
 |             b    c
 |      0   True  1.0
 |      1  False  2.0
 |      2   True  1.0
 |      3  False  2.0
 |      4   True  1.0
 |      5  False  2.0
 |  
 |  sem(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)
 |      Return unbiased standard error of the mean over requested axis.
 |      
 |      Normalized by N-1 by default. This can be changed using the `ddof`
 |      argument.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series
 |      ddof : int, default 1
 |          Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
 |          where N represents the number of elements.
 |      numeric_only : boolean, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      
 |      Returns
 |      -------
 |      sem : Series or DataFrame (if level specified)
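 |      
 |      The docstring above provides no example; a minimal sketch for
 |      illustration (not part of the original docstring):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3, 4]})
 |      >>> df.sem()
 |      a    0.645497
 |      dtype: float64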
 |  
 |  set_index(self, keys, drop=True, append=False, inplace=False, verify_integrity=False)
 |      Set the DataFrame index using existing columns.
 |      
 |      Set the DataFrame index (row labels) using one or more existing
 |      columns or arrays (of the correct length). The index can replace the
 |      existing index or expand on it.
 |      
 |      Parameters
 |      ----------
 |      keys : label or array-like or list of labels/arrays
 |          This parameter can be either a single column key, a single array of
 |          the same length as the calling DataFrame, or a list containing an
 |          arbitrary combination of column keys and arrays. Here, "array"
 |          encompasses :class:`Series`, :class:`Index` and ``np.ndarray``.
 |      drop : bool, default True
 |          Delete columns to be used as the new index.
 |      append : bool, default False
 |          Whether to append columns to existing index.
 |      inplace : bool, default False
 |          Modify the DataFrame in place (do not create a new object).
 |      verify_integrity : bool, default False
 |          Check the new index for duplicates. Otherwise defer the check until
 |          necessary. Setting to False will improve the performance of this
 |          method.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Changed row labels.
 |      
 |      See Also
 |      --------
 |      DataFrame.reset_index : Opposite of set_index.
 |      DataFrame.reindex : Change to new indices or expand indices.
 |      DataFrame.reindex_like : Change to same indices as other DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'month': [1, 4, 7, 10],
 |      ...                    'year': [2012, 2014, 2013, 2014],
 |      ...                    'sale': [55, 40, 84, 31]})
 |      >>> df
 |         month  year  sale
 |      0      1  2012    55
 |      1      4  2014    40
 |      2      7  2013    84
 |      3     10  2014    31
 |      
 |      Set the index to become the 'month' column:
 |      
 |      >>> df.set_index('month')
 |             year  sale
 |      month
 |      1      2012    55
 |      4      2014    40
 |      7      2013    84
 |      10     2014    31
 |      
 |      Create a MultiIndex using columns 'year' and 'month':
 |      
 |      >>> df.set_index(['year', 'month'])
 |                  sale
 |      year  month
 |      2012  1     55
 |      2014  4     40
 |      2013  7     84
 |      2014  10    31
 |      
 |      Create a MultiIndex using an Index and a column:
 |      
 |      >>> df.set_index([pd.Index([1, 2, 3, 4]), 'year'])
 |               month  sale
 |         year
 |      1  2012  1      55
 |      2  2014  4      40
 |      3  2013  7      84
 |      4  2014  10     31
 |      
 |      Create a MultiIndex using two Series:
 |      
 |      >>> s = pd.Series([1, 2, 3, 4])
 |      >>> df.set_index([s, s**2])
 |            month  year  sale
 |      1 1       1  2012    55
 |      2 4       4  2014    40
 |      3 9       7  2013    84
 |      4 16     10  2014    31
 |  
 |  set_value(self, index, col, value, takeable=False)
 |      Put single value at passed column and index.
 |      
 |      .. deprecated:: 0.21.0
 |          Use .at[] or .iat[] accessors instead.
 |      
 |      Parameters
 |      ----------
 |      index : row label
 |      col : column label
 |      value : scalar value
 |      takeable : interpret the index/col as indexers, default False
 |      
 |      Returns
 |      -------
 |      frame : DataFrame
 |          If the label pair is contained, a reference to the calling
 |          DataFrame; otherwise a new object.
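 |      
 |      Examples
 |      --------
 |      Because this method is deprecated, a minimal sketch of the
 |      recommended ``.at`` accessor performing the same single-cell
 |      assignment (the frame below is illustrative):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
 |      >>> df.at['x', 'A'] = 10
 |      >>> df.at['x', 'A']
 |      10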
 |  
 |  shift(self, periods=1, freq=None, axis=0, fill_value=None)
 |      Shift index by desired number of periods with an optional time `freq`.
 |      
 |      When `freq` is not passed, shift the index without realigning the data.
 |      If `freq` is passed (in this case, the index must be date or datetime,
 |      or it will raise a `NotImplementedError`), the index will be
 |      increased using the periods and the `freq`.
 |      
 |      Parameters
 |      ----------
 |      periods : int
 |          Number of periods to shift. Can be positive or negative.
 |      freq : DateOffset, tseries.offsets, timedelta, or str, optional
 |          Offset to use from the tseries module or time rule (e.g. 'EOM').
 |          If `freq` is specified then the index values are shifted but the
 |          data is not realigned. That is, use `freq` if you would like to
 |          extend the index when shifting and preserve the original data.
 |      axis : {0 or 'index', 1 or 'columns', None}, default None
 |          Shift direction.
 |      fill_value : object, optional
 |          The scalar value to use for newly introduced missing values.
 |          The default depends on the dtype of `self`.
 |          For numeric data, ``np.nan`` is used.
 |          For datetime, timedelta, or period data, etc. :attr:`NaT` is used.
 |          For extension dtypes, ``self.dtype.na_value`` is used.
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Copy of input object, shifted.
 |      
 |      See Also
 |      --------
 |      Index.shift : Shift values of Index.
 |      DatetimeIndex.shift : Shift values of DatetimeIndex.
 |      PeriodIndex.shift : Shift values of PeriodIndex.
 |      tshift : Shift the time index, using the index's frequency if
 |          available.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'Col1': [10, 20, 15, 30, 45],
 |      ...                    'Col2': [13, 23, 18, 33, 48],
 |      ...                    'Col3': [17, 27, 22, 37, 52]})
 |      
 |      >>> df.shift(periods=3)
 |         Col1  Col2  Col3
 |      0   NaN   NaN   NaN
 |      1   NaN   NaN   NaN
 |      2   NaN   NaN   NaN
 |      3  10.0  13.0  17.0
 |      4  20.0  23.0  27.0
 |      
 |      >>> df.shift(periods=1, axis='columns')
 |         Col1  Col2  Col3
 |      0   NaN  10.0  13.0
 |      1   NaN  20.0  23.0
 |      2   NaN  15.0  18.0
 |      3   NaN  30.0  33.0
 |      4   NaN  45.0  48.0
 |      
 |      >>> df.shift(periods=3, fill_value=0)
 |         Col1  Col2  Col3
 |      0     0     0     0
 |      1     0     0     0
 |      2     0     0     0
 |      3    10    13    17
 |      4    20    23    27
 |  
 |  skew(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
 |      Return unbiased skew over the requested axis, normalized by N-1.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      skew : Series or DataFrame (if level specified)
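 |      
 |      Examples
 |      --------
 |      A minimal sketch, assuming two illustrative columns: a symmetric
 |      one, whose skew is zero, and a right-tailed one, whose skew is
 |      positive:
 |      
 |      >>> df = pd.DataFrame({'sym': [1, 2, 3], 'tail': [1, 1, 10]})
 |      >>> df.skew()
 |      sym     0.000000
 |      tail    1.732051
 |      dtype: float64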
 |  
 |  sort_index(self, axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, by=None)
 |      Sort object by labels (along an axis).
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis along which to sort.
 |      level : int or level name or list of ints or list of level names
 |          If not None, sort on values in specified index level(s).
 |      ascending : boolean, default True
 |          Sort ascending vs. descending.
 |      inplace : bool, default False
 |          If True, perform operation in-place.
 |      kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
 |          Choice of sorting algorithm. See also ndarray.np.sort for more
 |          information.  `mergesort` is the only stable algorithm. For
 |          DataFrames, this option is only applied when sorting on a single
 |          column or label.
 |      na_position : {'first', 'last'}, default 'last'
 |          `first` puts NaNs at the beginning, `last` puts NaNs at the end.
 |          Not implemented for MultiIndex.
 |      sort_remaining : bool, default True
 |          If True and sorting by level and index is multilevel, sort by
 |          other levels too (in order) after sorting by the specified level.
 |      
 |      Returns
 |      -------
 |      sorted_obj : DataFrame
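 |      
 |      Examples
 |      --------
 |      A minimal sketch, sorting rows by an unordered string index (the
 |      labels and values are illustrative):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3]}, index=['c', 'a', 'b'])
 |      >>> df.sort_index()
 |         a
 |      a  2
 |      b  3
 |      c  1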
 |  
 |  sort_values(self, by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')
 |      Sort by the values along either axis.
 |      
 |      Parameters
 |      ----------
 |      by : str or list of str
 |          Name or list of names to sort by.
 |      
 |          - if `axis` is 0 or `'index'` then `by` may contain index
 |            levels and/or column labels
 |          - if `axis` is 1 or `'columns'` then `by` may contain column
 |            levels and/or index labels
 |      
 |          .. versionchanged:: 0.23.0
 |             Allow specifying index or column level names.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Axis to be sorted.
 |      ascending : bool or list of bool, default True
 |          Sort ascending vs. descending. Specify list for multiple sort
 |          orders.  If this is a list of bools, it must match the length
 |          of `by`.
 |      inplace : bool, default False
 |          If True, perform operation in-place.
 |      kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
 |          Choice of sorting algorithm. See also ndarray.np.sort for more
 |          information.  `mergesort` is the only stable algorithm. For
 |          DataFrames, this option is only applied when sorting on a single
 |          column or label.
 |      na_position : {'first', 'last'}, default 'last'
 |          `first` puts NaNs at the beginning, `last` puts NaNs at the end.
 |      
 |      Returns
 |      -------
 |      sorted_obj : DataFrame
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({
 |      ...     'col1' : ['A', 'A', 'B', np.nan, 'D', 'C'],
 |      ...     'col2' : [2, 1, 9, 8, 7, 4],
 |      ...     'col3': [0, 1, 9, 4, 2, 3],
 |      ... })
 |      >>> df
 |          col1 col2 col3
 |      0   A    2    0
 |      1   A    1    1
 |      2   B    9    9
 |      3   NaN  8    4
 |      4   D    7    2
 |      5   C    4    3
 |      
 |      Sort by col1
 |      
 |      >>> df.sort_values(by=['col1'])
 |          col1 col2 col3
 |      0   A    2    0
 |      1   A    1    1
 |      2   B    9    9
 |      5   C    4    3
 |      4   D    7    2
 |      3   NaN  8    4
 |      
 |      Sort by multiple columns
 |      
 |      >>> df.sort_values(by=['col1', 'col2'])
 |          col1 col2 col3
 |      1   A    1    1
 |      0   A    2    0
 |      2   B    9    9
 |      5   C    4    3
 |      4   D    7    2
 |      3   NaN  8    4
 |      
 |      Sort Descending
 |      
 |      >>> df.sort_values(by='col1', ascending=False)
 |          col1 col2 col3
 |      4   D    7    2
 |      5   C    4    3
 |      2   B    9    9
 |      0   A    2    0
 |      1   A    1    1
 |      3   NaN  8    4
 |      
 |      Putting NAs first
 |      
 |      >>> df.sort_values(by='col1', ascending=False, na_position='first')
 |          col1 col2 col3
 |      3   NaN  8    4
 |      4   D    7    2
 |      5   C    4    3
 |      2   B    9    9
 |      0   A    2    0
 |      1   A    1    1
 |  
 |  stack(self, level=-1, dropna=True)
 |      Stack the prescribed level(s) from columns to index.
 |      
 |      Return a reshaped DataFrame or Series having a multi-level
 |      index with one or more new inner-most levels compared to the current
 |      DataFrame. The new inner-most levels are created by pivoting the
 |      columns of the current dataframe:
 |      
 |        - if the columns have a single level, the output is a Series;
 |        - if the columns have multiple levels, the new index
 |          level(s) is (are) taken from the prescribed level(s) and
 |          the output is a DataFrame.
 |      
 |      The new index levels are sorted.
 |      
 |      Parameters
 |      ----------
 |      level : int, str, list, default -1
 |          Level(s) to stack from the column axis onto the index
 |          axis, defined as one index or label, or a list of indices
 |          or labels.
 |      dropna : bool, default True
 |          Whether to drop rows in the resulting Frame/Series with
 |          missing values. Stacking a column level onto the index
 |          axis can create combinations of index and column values
 |          that are missing from the original dataframe. See Examples
 |          section.
 |      
 |      Returns
 |      -------
 |      DataFrame or Series
 |          Stacked dataframe or series.
 |      
 |      See Also
 |      --------
 |      DataFrame.unstack : Unstack prescribed level(s) from index axis
 |           onto column axis.
 |      DataFrame.pivot : Reshape dataframe from long format to wide
 |           format.
 |      DataFrame.pivot_table : Create a spreadsheet-style pivot table
 |           as a DataFrame.
 |      
 |      Notes
 |      -----
 |      The function is named by analogy with a collection of books
 |      being re-organised from being side by side on a horizontal
 |      position (the columns of the dataframe) to being stacked
 |      vertically on top of each other (in the index of the
 |      dataframe).
 |      
 |      Examples
 |      --------
 |      **Single level columns**
 |      
 |      >>> df_single_level_cols = pd.DataFrame([[0, 1], [2, 3]],
 |      ...                                     index=['cat', 'dog'],
 |      ...                                     columns=['weight', 'height'])
 |      
 |      Stacking a dataframe with a single level column axis returns a Series:
 |      
 |      >>> df_single_level_cols
 |           weight height
 |      cat       0      1
 |      dog       2      3
 |      >>> df_single_level_cols.stack()
 |      cat  weight    0
 |           height    1
 |      dog  weight    2
 |           height    3
 |      dtype: int64
 |      
 |      **Multi level columns: simple case**
 |      
 |      >>> multicol1 = pd.MultiIndex.from_tuples([('weight', 'kg'),
 |      ...                                        ('weight', 'pounds')])
 |      >>> df_multi_level_cols1 = pd.DataFrame([[1, 2], [2, 4]],
 |      ...                                     index=['cat', 'dog'],
 |      ...                                     columns=multicol1)
 |      
 |      Stacking a dataframe with a multi-level column axis:
 |      
 |      >>> df_multi_level_cols1
 |           weight
 |               kg    pounds
 |      cat       1        2
 |      dog       2        4
 |      >>> df_multi_level_cols1.stack()
 |                  weight
 |      cat kg           1
 |          pounds       2
 |      dog kg           2
 |          pounds       4
 |      
 |      **Missing values**
 |      
 |      >>> multicol2 = pd.MultiIndex.from_tuples([('weight', 'kg'),
 |      ...                                        ('height', 'm')])
 |      >>> df_multi_level_cols2 = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
 |      ...                                     index=['cat', 'dog'],
 |      ...                                     columns=multicol2)
 |      
 |      It is common to have missing values when stacking a dataframe
 |      with multi-level columns, as the stacked dataframe typically
 |      has more values than the original dataframe. Missing values
 |      are filled with NaNs:
 |      
 |      >>> df_multi_level_cols2
 |          weight height
 |              kg      m
 |      cat    1.0    2.0
 |      dog    3.0    4.0
 |      >>> df_multi_level_cols2.stack()
 |              height  weight
 |      cat kg     NaN     1.0
 |          m      2.0     NaN
 |      dog kg     NaN     3.0
 |          m      4.0     NaN
 |      
 |      **Prescribing the level(s) to be stacked**
 |      
 |      The first parameter controls which level or levels are stacked:
 |      
 |      >>> df_multi_level_cols2.stack(0)
 |                   kg    m
 |      cat height  NaN  2.0
 |          weight  1.0  NaN
 |      dog height  NaN  4.0
 |          weight  3.0  NaN
 |      >>> df_multi_level_cols2.stack([0, 1])
 |      cat  height  m     2.0
 |           weight  kg    1.0
 |      dog  height  m     4.0
 |           weight  kg    3.0
 |      dtype: float64
 |      
 |      **Dropping missing values**
 |      
 |      >>> df_multi_level_cols3 = pd.DataFrame([[None, 1.0], [2.0, 3.0]],
 |      ...                                     index=['cat', 'dog'],
 |      ...                                     columns=multicol2)
 |      
 |      Note that rows where all values are missing are dropped by
 |      default but this behaviour can be controlled via the dropna
 |      keyword parameter:
 |      
 |      >>> df_multi_level_cols3
 |          weight height
 |              kg      m
 |      cat    NaN    1.0
 |      dog    2.0    3.0
 |      >>> df_multi_level_cols3.stack(dropna=False)
 |              height  weight
 |      cat kg     NaN     NaN
 |          m      1.0     NaN
 |      dog kg     NaN     2.0
 |          m      3.0     NaN
 |      >>> df_multi_level_cols3.stack(dropna=True)
 |              height  weight
 |      cat m      1.0     NaN
 |      dog kg     NaN     2.0
 |          m      3.0     NaN
 |  
 |  std(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)
 |      Return sample standard deviation over requested axis.
 |      
 |      Normalized by N-1 by default. This can be changed using the ddof argument.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      ddof : int, default 1
 |          Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
 |          where N represents the number of elements.
 |      numeric_only : boolean, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      
 |      Returns
 |      -------
 |      std : Series or DataFrame (if level specified)
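 |      
 |      Examples
 |      --------
 |      A minimal sketch, assuming one illustrative numeric column: with the
 |      default ``ddof=1`` the divisor is N - 1, with ``ddof=0`` it is N:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3, 4]})
 |      >>> df.std()
 |      a    1.290994
 |      dtype: float64
 |      >>> df.std(ddof=0)
 |      a    1.118034
 |      dtype: float64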
 |  
 |  sub(self, other, axis='columns', level=None, fill_value=None)
 |      Subtraction of dataframe and other, element-wise (binary operator `sub`).
 |      
 |      Equivalent to ``dataframe - other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rsub`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis :  {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      result.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by constant with reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and Series by axis with operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply a DataFrame of different shape with operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a MultiIndex by level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
 |  
 |  subtract = sub(self, other, axis='columns', level=None, fill_value=None)
 |  
 |  sum(self, axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)
 |      Return the sum of the values for the requested axis.
 |      
 |      This is equivalent to the method ``numpy.sum``.
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |          Axis for the function to be applied on.
 |      skipna : bool, default True
 |          Exclude NA/null values when computing the result.
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series.
 |      numeric_only : bool, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      min_count : int, default 0
 |          The required number of valid values to perform the operation. If fewer than
 |          ``min_count`` non-NA values are present the result will be NA.
 |      
 |          .. versionadded:: 0.22.0
 |      
 |             Added with the default being 0. This means the sum of an all-NA
 |             or empty Series is 0, and the product of an all-NA or empty
 |             Series is 1.
 |      **kwargs
 |          Additional keyword arguments to be passed to the function.
 |      
 |      Returns
 |      -------
 |      sum : Series or DataFrame (if level specified)
 |      
 |      See Also
 |      --------
 |      Series.sum : Return the sum.
 |      Series.min : Return the minimum.
 |      Series.max : Return the maximum.
 |      Series.idxmin : Return the index of the minimum.
 |      Series.idxmax : Return the index of the maximum.
 |      DataFrame.sum : Return the sum over the requested axis.
 |      DataFrame.min : Return the minimum over the requested axis.
 |      DataFrame.max : Return the maximum over the requested axis.
 |      DataFrame.idxmin : Return the index of the minimum over the requested axis.
 |      DataFrame.idxmax : Return the index of the maximum over the requested axis.
 |      
 |      Examples
 |      --------
 |      
 |      >>> idx = pd.MultiIndex.from_arrays([
 |      ...     ['warm', 'warm', 'cold', 'cold'],
 |      ...     ['dog', 'falcon', 'fish', 'spider']],
 |      ...     names=['blooded', 'animal'])
 |      >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
 |      >>> s
 |      blooded  animal
 |      warm     dog       4
 |               falcon    2
 |      cold     fish      0
 |               spider    8
 |      Name: legs, dtype: int64
 |      
 |      >>> s.sum()
 |      14
 |      
 |      Sum using level names, as well as indices.
 |      
 |      >>> s.sum(level='blooded')
 |      blooded
 |      warm    6
 |      cold    8
 |      Name: legs, dtype: int64
 |      
 |      >>> s.sum(level=0)
 |      blooded
 |      warm    6
 |      cold    8
 |      Name: legs, dtype: int64
 |      
 |      By default, the sum of an empty or all-NA Series is ``0``.
 |      
 |      >>> pd.Series([]).sum()  # min_count=0 is the default
 |      0.0
 |      
 |      This can be controlled with the ``min_count`` parameter. For example, if
 |      you'd like the sum of an empty series to be NaN, pass ``min_count=1``.
 |      
 |      >>> pd.Series([]).sum(min_count=1)
 |      nan
 |      
 |      Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
 |      empty series identically.
 |      
 |      >>> pd.Series([np.nan]).sum()
 |      0.0
 |      
 |      >>> pd.Series([np.nan]).sum(min_count=1)
 |      nan
 |  
 |  swaplevel(self, i=-2, j=-1, axis=0)
 |      Swap levels i and j in a MultiIndex on a particular axis.
 |      
 |      Parameters
 |      ----------
 |      i, j : int, string (can be mixed)
 |          Level of index to be swapped. Can pass level name as string.
 |      
 |      Returns
 |      -------
 |      swapped : same type as caller (new object)
 |      
 |      .. versionchanged:: 0.18.1
 |      
 |         The indexes ``i`` and ``j`` are now optional, and default to
 |         the two innermost levels of the index.
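 |      
 |      Examples
 |      --------
 |      A minimal sketch with an illustrative two-level row index; swapping
 |      levels 0 and 1 reverses the order of the index tuples' components:
 |      
 |      >>> idx = pd.MultiIndex.from_tuples([('a', 1), ('b', 2)])
 |      >>> df = pd.DataFrame({'x': [10, 20]}, index=idx)
 |      >>> df.swaplevel(0, 1).index.tolist()
 |      [(1, 'a'), (2, 'b')]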
 |  
 |  to_dict(self, orient='dict', into=<class 'dict'>)
 |      Convert the DataFrame to a dictionary.
 |      
 |      The type of the key-value pairs can be customized with the parameters
 |      (see below).
 |      
 |      Parameters
 |      ----------
 |      orient : str {'dict', 'list', 'series', 'split', 'records', 'index'}
 |          Determines the type of the values of the dictionary.
 |      
 |          - 'dict' (default) : dict like {column -> {index -> value}}
 |          - 'list' : dict like {column -> [values]}
 |          - 'series' : dict like {column -> Series(values)}
 |          - 'split' : dict like
 |            {'index' -> [index], 'columns' -> [columns], 'data' -> [values]}
 |          - 'records' : list like
 |            [{column -> value}, ... , {column -> value}]
 |          - 'index' : dict like {index -> {column -> value}}
 |      
 |          Abbreviations are allowed. `s` indicates `series` and `sp`
 |          indicates `split`.
 |      
 |      into : class, default dict
 |          The collections.Mapping subclass used for all Mappings
 |          in the return value.  Can be the actual class or an empty
 |          instance of the mapping type you want.  If you want a
 |          collections.defaultdict, you must pass it initialized.
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      Returns
 |      -------
 |      dict, list or collections.Mapping
 |          Return a collections.Mapping object representing the DataFrame.
 |          The resulting transformation depends on the `orient` parameter.
 |      
 |      See Also
 |      --------
 |      DataFrame.from_dict: Create a DataFrame from a dictionary.
 |      DataFrame.to_json: Convert a DataFrame to JSON format.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'col1': [1, 2],
 |      ...                    'col2': [0.5, 0.75]},
 |      ...                   index=['row1', 'row2'])
 |      >>> df
 |            col1  col2
 |      row1     1  0.50
 |      row2     2  0.75
 |      >>> df.to_dict()
 |      {'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}}
 |      
 |      You can specify the return orientation.
 |      
 |      >>> df.to_dict('series')
 |      {'col1': row1    1
 |               row2    2
 |      Name: col1, dtype: int64,
 |      'col2': row1    0.50
 |              row2    0.75
 |      Name: col2, dtype: float64}
 |      
 |      >>> df.to_dict('split')
 |      {'index': ['row1', 'row2'], 'columns': ['col1', 'col2'],
 |       'data': [[1, 0.5], [2, 0.75]]}
 |      
 |      >>> df.to_dict('records')
 |      [{'col1': 1, 'col2': 0.5}, {'col1': 2, 'col2': 0.75}]
 |      
 |      >>> df.to_dict('index')
 |      {'row1': {'col1': 1, 'col2': 0.5}, 'row2': {'col1': 2, 'col2': 0.75}}
 |      
 |      You can also specify the mapping type.
 |      
 |      >>> from collections import OrderedDict, defaultdict
 |      >>> df.to_dict(into=OrderedDict)
 |      OrderedDict([('col1', OrderedDict([('row1', 1), ('row2', 2)])),
 |                   ('col2', OrderedDict([('row1', 0.5), ('row2', 0.75)]))])
 |      
 |      If you want a `defaultdict`, you need to initialize it:
 |      
 |      >>> dd = defaultdict(list)
 |      >>> df.to_dict('records', into=dd)
 |      [defaultdict(<class 'list'>, {'col1': 1, 'col2': 0.5}),
 |       defaultdict(<class 'list'>, {'col1': 2, 'col2': 0.75})]
 |  
 |  to_feather(self, fname)
 |      Write out the binary feather-format for DataFrames.
 |      
 |      .. versionadded:: 0.20.0
 |      
 |      Parameters
 |      ----------
 |      fname : str
 |          string file path
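 |      
 |      Examples
 |      --------
 |      A minimal sketch (requires the ``pyarrow`` library; the file name is
 |      illustrative):
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2]})
 |      >>> df.to_feather('df.feather')  # doctest: +SKIP
 |      >>> pd.read_feather('df.feather')  # doctest: +SKIP
 |         col1
 |      0     1
 |      1     2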
 |  
 |  to_gbq(self, destination_table, project_id=None, chunksize=None, reauth=False, if_exists='fail', auth_local_webserver=False, table_schema=None, location=None, progress_bar=True, credentials=None, verbose=None, private_key=None)
 |      Write a DataFrame to a Google BigQuery table.
 |      
 |      This function requires the `pandas-gbq package
 |      <https://pandas-gbq.readthedocs.io>`__.
 |      
 |      See the `How to authenticate with Google BigQuery
 |      <https://pandas-gbq.readthedocs.io/en/latest/howto/authentication.html>`__
 |      guide for authentication instructions.
 |      
 |      Parameters
 |      ----------
 |      destination_table : str
 |          Name of table to be written, in the form ``dataset.tablename``.
 |      project_id : str, optional
 |          Google BigQuery Account project ID. Optional when available from
 |          the environment.
 |      chunksize : int, optional
 |          Number of rows to be inserted in each chunk from the dataframe.
 |          Set to ``None`` to load the whole dataframe at once.
 |      reauth : bool, default False
 |          Force Google BigQuery to re-authenticate the user. This is useful
 |          if multiple accounts are used.
 |      if_exists : str, default 'fail'
 |          Behavior when the destination table exists. Value can be one of:
 |      
 |          ``'fail'``
 |              If table exists, do nothing.
 |          ``'replace'``
 |              If table exists, drop it, recreate it, and insert data.
 |          ``'append'``
 |              If table exists, insert data. Create if does not exist.
 |      auth_local_webserver : bool, default False
 |          Use the `local webserver flow`_ instead of the `console flow`_
 |          when getting user credentials.
 |      
 |          .. _local webserver flow:
 |              http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_local_server
 |          .. _console flow:
 |              http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_console
 |      
 |          *New in version 0.2.0 of pandas-gbq*.
 |      table_schema : list of dicts, optional
 |          List of BigQuery table fields to which the DataFrame columns
 |          conform, e.g. ``[{'name': 'col1', 'type':
 |          'STRING'},...]``. If schema is not provided, it will be
 |          generated according to dtypes of DataFrame columns. See
 |          BigQuery API documentation on available names of a field.
 |      
 |          *New in version 0.3.1 of pandas-gbq*.
 |      location : str, optional
 |          Location where the load job should run. See the `BigQuery locations
 |          documentation
 |          <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a
 |          list of available locations. The location must match that of the
 |          target dataset.
 |      
 |          *New in version 0.5.0 of pandas-gbq*.
 |      progress_bar : bool, default True
 |          Use the library `tqdm` to show the progress bar for the upload,
 |          chunk by chunk.
 |      
 |          *New in version 0.5.0 of pandas-gbq*.
 |      credentials : google.auth.credentials.Credentials, optional
 |          Credentials for accessing Google APIs. Use this parameter to
 |          override default credentials, such as to use Compute Engine
 |          :class:`google.auth.compute_engine.Credentials` or Service
 |          Account :class:`google.oauth2.service_account.Credentials`
 |          directly.
 |      
 |          *New in version 0.8.0 of pandas-gbq*.
 |      
 |          .. versionadded:: 0.24.0
 |      verbose : bool, deprecated
 |          Deprecated in pandas-gbq version 0.4.0. Use the `logging module
 |          to adjust verbosity instead
 |          <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__.
 |      private_key : str, deprecated
 |          Deprecated in pandas-gbq version 0.8.0. Use the ``credentials``
 |          parameter and
 |          :func:`google.oauth2.service_account.Credentials.from_service_account_info`
 |          or
 |          :func:`google.oauth2.service_account.Credentials.from_service_account_file`
 |          instead.
 |      
 |          Service account private key in JSON format. Can be file path
 |          or string contents. This is useful for remote server
 |          authentication (e.g. Jupyter/IPython notebook on remote host).
 |      
 |      See Also
 |      --------
 |      pandas_gbq.to_gbq : This function in the pandas-gbq library.
 |      pandas.read_gbq : Read a DataFrame from Google BigQuery.
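 |      
 |      Examples
 |      --------
 |      A minimal sketch; the dataset, table and project names below are
 |      placeholders, and the call needs valid Google Cloud credentials, so
 |      it is skipped here:
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2]})
 |      >>> df.to_gbq('my_dataset.my_table',
 |      ...           project_id='my-project',
 |      ...           if_exists='append')  # doctest: +SKIP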
 |  
 |  to_html(self, buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', bold_rows=True, classes=None, escape=True, notebook=False, border=None, table_id=None, render_links=False)
 |      Render a DataFrame as an HTML table.
 |      
 |      Parameters
 |      ----------
 |      buf : StringIO-like, optional
 |          Buffer to write to.
 |      columns : sequence, optional, default None
 |          The subset of columns to write. Writes all columns by default.
 |      col_space : int, optional
 |          The minimum width of each column.
 |      header : bool, optional
 |          Whether to print column labels, default True.
 |      index : bool, optional, default True
 |          Whether to print index (row) labels.
 |      na_rep : str, optional, default 'NaN'
 |          String representation of NAN to use.
 |      formatters : list or dict of one-param. functions, optional
 |          Formatter functions to apply to columns' elements by position or
 |          name.
 |          The result of each function must be a unicode string.
 |          List must be of length equal to the number of columns.
 |      float_format : one-parameter function, optional, default None
 |          Formatter function to apply to columns' elements if they are
 |          floats. The result of this function must be a unicode string.
 |      sparsify : bool, optional, default True
 |          Set to False for a DataFrame with a hierarchical index to print
 |          every multiindex key at each row.
 |      index_names : bool, optional, default True
 |          Prints the names of the indexes.
 |      justify : str, default None
 |          How to justify the column labels. If None uses the option from
 |          the print configuration (controlled by set_option), 'right' out
 |          of the box. Valid values are
 |      
 |          * left
 |          * right
 |          * center
 |          * justify
 |          * justify-all
 |          * start
 |          * end
 |          * inherit
 |          * match-parent
 |          * initial
 |          * unset.
 |      max_rows : int, optional
 |          Maximum number of rows to display in the console.
 |      max_cols : int, optional
 |          Maximum number of columns to display in the console.
 |      show_dimensions : bool, default False
 |          Display DataFrame dimensions (number of rows by number of columns).
 |      decimal : str, default '.'
 |          Character recognized as decimal separator, e.g. ',' in Europe.
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      bold_rows : bool, default True
 |          Make the row labels bold in the output.
 |      classes : str or list or tuple, default None
 |          CSS class(es) to apply to the resulting html table.
 |      escape : bool, default True
 |          Convert the characters <, >, and & to HTML-safe sequences.
 |      notebook : {True, False}, default False
 |          Whether the generated HTML is for IPython Notebook.
 |      border : int
 |          A ``border=border`` attribute is included in the opening
 |          `<table>` tag. Default ``pd.options.html.border``.
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      table_id : str, optional
 |          A css id is included in the opening `<table>` tag if specified.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      render_links : bool, default False
 |          Convert URLs to HTML links.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      
 |      Returns
 |      -------
 |      str (or unicode, depending on data and options)
 |          String representation of the dataframe.
 |      
 |      See Also
 |      --------
 |      to_string : Convert DataFrame to a string.
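 |      
 |      Examples
 |      --------
 |      A minimal sketch; the full HTML string is long, so only its opening
 |      tag is checked here:
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2]})
 |      >>> html = df.to_html()
 |      >>> html.startswith('<table')
 |      True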
 |  
 |  to_numpy(self, dtype=None, copy=False)
 |      Convert the DataFrame to a NumPy array.
 |      
 |      .. versionadded:: 0.24.0
 |      
 |      By default, the dtype of the returned array will be the common NumPy
 |      dtype of all types in the DataFrame. For example, if the dtypes are
 |      ``float16`` and ``float32``, the results dtype will be ``float32``.
 |      This may require copying data and coercing values, which may be
 |      expensive.
 |      
 |      Parameters
 |      ----------
 |      dtype : str or numpy.dtype, optional
 |          The dtype to pass to :meth:`numpy.asarray`
 |      copy : bool, default False
 |          Whether to ensure that the returned value is not a view on
 |          another array. Note that ``copy=False`` does not *ensure* that
 |          ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensures that
 |          a copy is made, even if not strictly necessary.
 |      
 |      Returns
 |      -------
 |      array : numpy.ndarray
 |      
 |      See Also
 |      --------
 |      Series.to_numpy : Similar method for Series.
 |      
 |      Examples
 |      --------
 |      >>> pd.DataFrame({"A": [1, 2], "B": [3, 4]}).to_numpy()
 |      array([[1, 3],
 |             [2, 4]])
 |      
 |      With heterogeneous data, the lowest common type will have to
 |      be used.
 |      
 |      >>> df = pd.DataFrame({"A": [1, 2], "B": [3.0, 4.5]})
 |      >>> df.to_numpy()
 |      array([[1. , 3. ],
 |             [2. , 4.5]])
 |      
 |      For a mix of numeric and non-numeric types, the output array will
 |      have object dtype.
 |      
 |      >>> df['C'] = pd.date_range('2000', periods=2)
 |      >>> df.to_numpy()
 |      array([[1, 3.0, Timestamp('2000-01-01 00:00:00')],
 |             [2, 4.5, Timestamp('2000-01-02 00:00:00')]], dtype=object)
 |  
 |  to_panel(self)
 |      Transform long (stacked) format (DataFrame) into wide (3D, Panel)
 |      format.
 |      
 |      .. deprecated:: 0.20.0
 |      
 |      Currently the index of the DataFrame must be a 2-level MultiIndex. This
 |      may be generalized later.
 |      
 |      Returns
 |      -------
 |      panel : Panel
 |  
 |  to_parquet(self, fname, engine='auto', compression='snappy', index=None, partition_cols=None, **kwargs)
 |      Write a DataFrame to the binary parquet format.
 |      
 |      .. versionadded:: 0.21.0
 |      
 |      This function writes the dataframe as a `parquet file
 |      <https://parquet.apache.org/>`_. You can choose different parquet
 |      backends, and have the option of compression. See
 |      :ref:`the user guide <io.parquet>` for more details.
 |      
 |      Parameters
 |      ----------
 |      fname : str
 |          File path or Root Directory path. Will be used as Root Directory
 |          path while writing a partitioned dataset.
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |      engine : {'auto', 'pyarrow', 'fastparquet'}, default 'auto'
 |          Parquet library to use. If 'auto', then the option
 |          ``io.parquet.engine`` is used. The default ``io.parquet.engine``
 |          behavior is to try 'pyarrow', falling back to 'fastparquet' if
 |          'pyarrow' is unavailable.
 |      compression : {'snappy', 'gzip', 'brotli', None}, default 'snappy'
 |          Name of the compression to use. Use ``None`` for no compression.
 |      index : bool, default None
 |          If ``True``, include the dataframe's index(es) in the file output.
 |          If ``False``, they will not be written to the file. If ``None``,
 |          the behavior depends on the chosen engine.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      partition_cols : list, optional, default None
 |          Column names by which to partition the dataset
 |          Columns are partitioned in the order they are given
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      **kwargs
 |          Additional arguments passed to the parquet library. See
 |          :ref:`pandas io <io.parquet>` for more details.
 |      
 |      See Also
 |      --------
 |      read_parquet : Read a parquet file.
 |      DataFrame.to_csv : Write a csv file.
 |      DataFrame.to_sql : Write to a sql table.
 |      DataFrame.to_hdf : Write to hdf.
 |      
 |      Notes
 |      -----
 |      This function requires either the `fastparquet
 |      <https://pypi.org/project/fastparquet>`_ or `pyarrow
 |      <https://arrow.apache.org/docs/python/>`_ library.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.to_parquet('df.parquet.gzip',
 |      ...               compression='gzip')  # doctest: +SKIP
 |      >>> pd.read_parquet('df.parquet.gzip')  # doctest: +SKIP
 |         col1  col2
 |      0     1     3
 |      1     2     4
 |  
 |  to_period(self, freq=None, axis=0, copy=True)
 |      Convert DataFrame from DatetimeIndex to PeriodIndex with desired
 |      frequency (inferred from index if not passed).
 |      
 |      Parameters
 |      ----------
 |      freq : string, optional
 |          Desired frequency; inferred from the index if not passed.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to convert (the index by default)
 |      copy : boolean, default True
 |          If False then underlying input data is not copied.
 |      
 |      Returns
 |      -------
 |      ts : TimeSeries with PeriodIndex
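 |      
 |      Examples
 |      --------
 |      A minimal sketch, converting illustrative month-end timestamps to
 |      monthly periods:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2]},
 |      ...                   index=pd.date_range('2000-01-31', periods=2,
 |      ...                                       freq='M'))
 |      >>> df.to_period('M').index
 |      PeriodIndex(['2000-01', '2000-02'], dtype='period[M]', freq='M')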
 |  
 |  to_records(self, index=True, convert_datetime64=None, column_dtypes=None, index_dtypes=None)
 |      Convert DataFrame to a NumPy record array.
 |      
 |      Index will be included as the first field of the record array if
 |      requested.
 |      
 |      Parameters
 |      ----------
 |      index : bool, default True
 |          Include index in resulting record array, stored in 'index'
 |          field or using the index label, if set.
 |      convert_datetime64 : bool, default None
 |          .. deprecated:: 0.23.0
 |      
 |          Whether to convert the index to datetime.datetime if it is a
 |          DatetimeIndex.
 |      column_dtypes : str, type, dict, default None
 |          .. versionadded:: 0.24.0
 |      
 |          If a string or type, the data type to store all columns. If
 |          a dictionary, a mapping of column names and indices (zero-indexed)
 |          to specific data types.
 |      index_dtypes : str, type, dict, default None
 |          .. versionadded:: 0.24.0
 |      
 |          If a string or type, the data type to store all index levels. If
 |          a dictionary, a mapping of index level names and indices
 |          (zero-indexed) to specific data types.
 |      
 |          This mapping is applied only if `index=True`.
 |      
 |      Returns
 |      -------
 |      numpy.recarray
 |          NumPy ndarray with the DataFrame labels as fields and each row
 |          of the DataFrame as entries.
 |      
 |      See Also
 |      --------
 |      DataFrame.from_records: Convert structured or record ndarray
 |          to DataFrame.
 |      numpy.recarray: An ndarray that allows field access using
 |          attributes, analogous to typed columns in a
 |          spreadsheet.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 0.75]},
 |      ...                   index=['a', 'b'])
 |      >>> df
 |         A     B
 |      a  1  0.50
 |      b  2  0.75
 |      >>> df.to_records()
 |      rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
 |                dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')])
 |      
 |      If the DataFrame index has no label then the recarray field name
 |      is set to 'index'. If the index has a label then this is used as the
 |      field name:
 |      
 |      >>> df.index = df.index.rename("I")
 |      >>> df.to_records()
 |      rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
 |                dtype=[('I', 'O'), ('A', '<i8'), ('B', '<f8')])
 |      
 |      The index can be excluded from the record array:
 |      
 |      >>> df.to_records(index=False)
 |      rec.array([(1, 0.5 ), (2, 0.75)],
 |                dtype=[('A', '<i8'), ('B', '<f8')])
 |      
 |      Data types can be specified for the columns:
 |      
 |      >>> df.to_records(column_dtypes={"A": "int32"})
 |      rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
 |                dtype=[('I', 'O'), ('A', '<i4'), ('B', '<f8')])
 |      
 |      As well as for the index:
 |      
 |      >>> df.to_records(index_dtypes="<S2")
 |      rec.array([(b'a', 1, 0.5 ), (b'b', 2, 0.75)],
 |                dtype=[('I', 'S2'), ('A', '<i8'), ('B', '<f8')])
 |      
 |      >>> index_dtypes = "<S{}".format(df.index.str.len().max())
 |      >>> df.to_records(index_dtypes=index_dtypes)
 |      rec.array([(b'a', 1, 0.5 ), (b'b', 2, 0.75)],
 |                dtype=[('I', 'S1'), ('A', '<i8'), ('B', '<f8')])
 |  
 |  to_sparse(self, fill_value=None, kind='block')
 |      Convert to SparseDataFrame.
 |      
 |      Implement the sparse version of the DataFrame, meaning that any data
 |      matching a specific value is omitted in the representation.
 |      The sparse DataFrame allows for more efficient storage.
 |      
 |      Parameters
 |      ----------
 |      fill_value : float, default None
 |          The specific value that should be omitted in the representation.
 |      kind : {'block', 'integer'}, default 'block'
 |          The kind of the SparseIndex tracking where data is not equal to
 |          the fill value:
 |      
 |          - 'block' tracks only the locations and sizes of blocks of data.
 |          - 'integer' keeps an array with all the locations of the data.
 |      
 |          In most cases 'block' is recommended, since it's more memory
 |          efficient.
 |      
 |      Returns
 |      -------
 |      SparseDataFrame
 |          The sparse representation of the DataFrame.
 |      
 |      See Also
 |      --------
 |      DataFrame.to_dense :
 |          Converts the DataFrame back to its dense form.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([(np.nan, np.nan),
 |      ...                    (1., np.nan),
 |      ...                    (np.nan, 1.)])
 |      >>> df
 |           0    1
 |      0  NaN  NaN
 |      1  1.0  NaN
 |      2  NaN  1.0
 |      >>> type(df)
 |      <class 'pandas.core.frame.DataFrame'>
 |      
 |      >>> sdf = df.to_sparse()
 |      >>> sdf
 |           0    1
 |      0  NaN  NaN
 |      1  1.0  NaN
 |      2  NaN  1.0
 |      >>> type(sdf)
 |      <class 'pandas.core.sparse.frame.SparseDataFrame'>
 |  
 |  to_stata(self, fname, convert_dates=None, write_index=True, encoding='latin-1', byteorder=None, time_stamp=None, data_label=None, variable_labels=None, version=114, convert_strl=None)
 |      Export DataFrame object to Stata dta format.
 |      
 |      Writes the DataFrame to a Stata dataset file.
 |      "dta" files contain a Stata dataset.
 |      
 |      Parameters
 |      ----------
 |      fname : str, buffer or path object
 |          String, path object (pathlib.Path or py._path.local.LocalPath) or
 |          object implementing a binary write() function. If using a buffer
 |          then the buffer will not be automatically closed after the file
 |          data has been written.
 |      convert_dates : dict
 |          Dictionary mapping columns containing datetime types to stata
 |          internal format to use when writing the dates. Options are 'tc',
 |          'td', 'tm', 'tw', 'th', 'tq', 'ty'. Column can be either an integer
 |          or a name. Datetime columns that do not have a conversion type
 |          specified will be converted to 'tc'. Raises NotImplementedError if
 |          a datetime column has timezone information.
 |      write_index : bool
 |          Write the index to Stata dataset.
 |      encoding : str
 |          Default is latin-1. Unicode is not supported.
 |      byteorder : str
 |          Can be ">", "<", "little", or "big". default is `sys.byteorder`.
 |      time_stamp : datetime
 |          A datetime to use as file creation date.  Default is the current
 |          time.
 |      data_label : str, optional
 |          A label for the data set.  Must be 80 characters or smaller.
 |      variable_labels : dict
 |          Dictionary containing columns as keys and variable labels as
 |          values. Each label must be 80 characters or smaller.
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      version : {114, 117}, default 114
 |          Version to use in the output dta file.  Version 114 can be
 |          read by Stata 10 and later.  Version 117 can be read by Stata 13
 |          or later. Version 114 limits string variables to 244 characters or
 |          fewer while 117 allows strings with lengths up to 2,000,000
 |          characters.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      convert_strl : list, optional
 |          List of column names to convert to string columns to Stata StrL
 |          format. Only available if version is 117.  Storing strings in the
 |          StrL format can produce smaller dta files if strings have more than
 |          8 characters and values are repeated.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      Raises
 |      ------
 |      NotImplementedError
 |          * If datetimes contain timezone information
 |          * Column dtype is not representable in Stata
 |      ValueError
 |          * Columns listed in convert_dates are neither datetime64[ns]
 |            or datetime.datetime
 |          * Column listed in convert_dates is not in DataFrame
 |          * Categorical label contains more than 32,000 characters
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      See Also
 |      --------
 |      read_stata : Import Stata data files.
 |      io.stata.StataWriter : Low-level writer for Stata data files.
 |      io.stata.StataWriter117 : Low-level writer for version 117 files.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
 |      ...                               'parrot'],
 |      ...                    'speed': [350, 18, 361, 15]})
 |      >>> df.to_stata('animals.dta')  # doctest: +SKIP
 |  
 |  to_string(self, buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', line_width=None)
 |      Render a DataFrame to a console-friendly tabular output.
 |      
 |      Parameters
 |      ----------
 |      buf : StringIO-like, optional
 |          Buffer to write to.
 |      columns : sequence, optional, default None
 |          The subset of columns to write. Writes all columns by default.
 |      col_space : int, optional
 |          The minimum width of each column.
 |      header : bool, optional
 |          Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names.
 |      index : bool, optional, default True
 |          Whether to print index (row) labels.
 |      na_rep : str, optional, default 'NaN'
 |          String representation of NAN to use.
 |      formatters : list or dict of one-param. functions, optional
 |          Formatter functions to apply to columns' elements by position or
 |          name.
 |          The result of each function must be a unicode string.
 |          List must be of length equal to the number of columns.
 |      float_format : one-parameter function, optional, default None
 |          Formatter function to apply to columns' elements if they are
 |          floats. The result of this function must be a unicode string.
 |      sparsify : bool, optional, default True
 |          Set to False for a DataFrame with a hierarchical index to print
 |          every multiindex key at each row.
 |      index_names : bool, optional, default True
 |          Prints the names of the indexes.
 |      justify : str, default None
 |          How to justify the column labels. If None uses the option from
 |          the print configuration (controlled by set_option), 'right' out
 |          of the box. Valid values are
 |      
 |          * left
 |          * right
 |          * center
 |          * justify
 |          * justify-all
 |          * start
 |          * end
 |          * inherit
 |          * match-parent
 |          * initial
 |          * unset.
 |      max_rows : int, optional
 |          Maximum number of rows to display in the console.
 |      max_cols : int, optional
 |          Maximum number of columns to display in the console.
 |      show_dimensions : bool, default False
 |          Display DataFrame dimensions (number of rows by number of columns).
 |      decimal : str, default '.'
 |          Character recognized as decimal separator, e.g. ',' in Europe.
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      line_width : int, optional
 |          Width to wrap a line in characters.
 |      
 |      Returns
 |      -------
 |      str (or unicode, depending on data and options)
 |          String representation of the dataframe.
 |      
 |      See Also
 |      --------
 |      to_html : Convert DataFrame to HTML.
 |      
 |      Examples
 |      --------
 |      >>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}
 |      >>> df = pd.DataFrame(d)
 |      >>> print(df.to_string())
 |         col1  col2
 |      0     1     4
 |      1     2     5
 |      2     3     6
 |  
 |  to_timestamp(self, freq=None, how='start', axis=0, copy=True)
 |      Cast to DatetimeIndex of timestamps, at *beginning* of period.
 |      
 |      Parameters
 |      ----------
 |      freq : string, default frequency of PeriodIndex
 |          Desired frequency
 |      how : {'s', 'e', 'start', 'end'}
 |          Convention for converting period to timestamp; start of period
 |          vs. end
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to convert (the index by default)
 |      copy : boolean, default True
 |          If false then underlying input data is not copied
 |      
 |      Returns
 |      -------
 |      df : DataFrame with DatetimeIndex
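 |      
 |      Examples
 |      --------
 |      A minimal sketch, assuming a DataFrame indexed by a quarterly
 |      ``PeriodIndex`` (data values chosen for illustration):
 |      
 |      >>> idx = pd.period_range('2018Q1', periods=2, freq='Q')
 |      >>> df = pd.DataFrame({'sales': [10, 20]}, index=idx)
 |      >>> df.to_timestamp(how='start')
 |                  sales
 |      2018-01-01     10
 |      2018-04-01     20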
 |  
 |  transform(self, func, axis=0, *args, **kwargs)
 |      Call ``func`` on self producing a DataFrame with transformed values
 |      and that has the same axis length as self.
 |      
 |      .. versionadded:: 0.20.0
 |      
 |      Parameters
 |      ----------
 |      func : function, str, list or dict
 |          Function to use for transforming the data. If a function, must either
 |          work when passed a DataFrame or when passed to DataFrame.apply.
 |      
 |          Accepted combinations are:
 |      
 |          - function
 |          - string function name
 |          - list of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
 |          - dict of axis labels -> functions, function names or list of such.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          If 0 or 'index': apply function to each column.
 |          If 1 or 'columns': apply function to each row.
 |      *args
 |          Positional arguments to pass to `func`.
 |      **kwargs
 |          Keyword arguments to pass to `func`.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          A DataFrame that must have the same length as self.
 |      
 |      Raises
 |      ------
 |      ValueError : If the returned DataFrame has a different length than self.
 |      
 |      See Also
 |      --------
 |      DataFrame.agg : Only perform aggregating type operations.
 |      DataFrame.apply : Invoke function on a DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
 |      >>> df
 |         A  B
 |      0  0  1
 |      1  1  2
 |      2  2  3
 |      >>> df.transform(lambda x: x + 1)
 |         A  B
 |      0  1  2
 |      1  2  3
 |      2  3  4
 |      
 |      Even though the resulting DataFrame must have the same length as the
 |      input DataFrame, it is possible to provide several input functions:
 |      
 |      >>> s = pd.Series(range(3))
 |      >>> s
 |      0    0
 |      1    1
 |      2    2
 |      dtype: int64
 |      >>> s.transform([np.sqrt, np.exp])
 |             sqrt        exp
 |      0  0.000000   1.000000
 |      1  1.000000   2.718282
 |      2  1.414214   7.389056
 |  
 |  transpose(self, *args, **kwargs)
 |      Transpose index and columns.
 |      
 |      Reflect the DataFrame over its main diagonal by writing rows as columns
 |      and vice-versa. The property :attr:`.T` is an accessor to the method
 |      :meth:`transpose`.
 |      
 |      Parameters
 |      ----------
 |      copy : bool, default False
 |          If True, the underlying data is copied. Otherwise (default), no
 |          copy is made if possible.
 |      *args, **kwargs
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with numpy.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          The transposed DataFrame.
 |      
 |      See Also
 |      --------
 |      numpy.transpose : Permute the dimensions of a given array.
 |      
 |      Notes
 |      -----
 |      Transposing a DataFrame with mixed dtypes will result in a homogeneous
 |      DataFrame with the `object` dtype. In such a case, a copy of the data
 |      is always made.
 |      
 |      Examples
 |      --------
 |      **Square DataFrame with homogeneous dtype**
 |      
 |      >>> d1 = {'col1': [1, 2], 'col2': [3, 4]}
 |      >>> df1 = pd.DataFrame(data=d1)
 |      >>> df1
 |         col1  col2
 |      0     1     3
 |      1     2     4
 |      
 |      >>> df1_transposed = df1.T # or df1.transpose()
 |      >>> df1_transposed
 |            0  1
 |      col1  1  2
 |      col2  3  4
 |      
 |      When the dtype is homogeneous in the original DataFrame, we get a
 |      transposed DataFrame with the same dtype:
 |      
 |      >>> df1.dtypes
 |      col1    int64
 |      col2    int64
 |      dtype: object
 |      >>> df1_transposed.dtypes
 |      0    int64
 |      1    int64
 |      dtype: object
 |      
 |      **Non-square DataFrame with mixed dtypes**
 |      
 |      >>> d2 = {'name': ['Alice', 'Bob'],
 |      ...       'score': [9.5, 8],
 |      ...       'employed': [False, True],
 |      ...       'kids': [0, 0]}
 |      >>> df2 = pd.DataFrame(data=d2)
 |      >>> df2
 |          name  score  employed  kids
 |      0  Alice    9.5     False     0
 |      1    Bob    8.0      True     0
 |      
 |      >>> df2_transposed = df2.T # or df2.transpose()
 |      >>> df2_transposed
 |                    0     1
 |      name      Alice   Bob
 |      score       9.5     8
 |      employed  False  True
 |      kids          0     0
 |      
 |      When the DataFrame has mixed dtypes, we get a transposed DataFrame with
 |      the `object` dtype:
 |      
 |      >>> df2.dtypes
 |      name         object
 |      score       float64
 |      employed       bool
 |      kids          int64
 |      dtype: object
 |      >>> df2_transposed.dtypes
 |      0    object
 |      1    object
 |      dtype: object
 |  
 |  truediv(self, other, axis='columns', level=None, fill_value=None)
 |      Floating division of dataframe and other, element-wise (binary operator `truediv`).
 |      
 |      Equivalent to ``dataframe / other``, but with support to substitute a fill_value
 |      for missing data in one of the inputs. With reverse version, `rtruediv`.
 |      
 |      Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
 |      arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
 |      
 |      Parameters
 |      ----------
 |      other : scalar, sequence, Series, or DataFrame
 |          Any single or multiple element data structure, or list-like object.
 |      axis : {0 or 'index', 1 or 'columns'}
 |          Whether to compare by the index (0 or 'index') or columns
 |          (1 or 'columns'). For Series input, axis to match Series index on.
 |      level : int or label
 |          Broadcast across a level, matching Index values on the
 |          passed MultiIndex level.
 |      fill_value : float or None, default None
 |          Fill existing missing (NaN) values, and any new element needed for
 |          successful DataFrame alignment, with this value before computation.
 |          If data in both corresponding DataFrame locations is missing
 |          the result will be missing.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          Result of the arithmetic operation.
 |      
 |      See Also
 |      --------
 |      DataFrame.add : Add DataFrames.
 |      DataFrame.sub : Subtract DataFrames.
 |      DataFrame.mul : Multiply DataFrames.
 |      DataFrame.div : Divide DataFrames (float division).
 |      DataFrame.truediv : Divide DataFrames (float division).
 |      DataFrame.floordiv : Divide DataFrames (integer division).
 |      DataFrame.mod : Calculate modulo (remainder after division).
 |      DataFrame.pow : Calculate exponential power.
 |      
 |      Notes
 |      -----
 |      Mismatched indices will be unioned together.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'angles': [0, 3, 4],
 |      ...                    'degrees': [360, 180, 360]},
 |      ...                   index=['circle', 'triangle', 'rectangle'])
 |      >>> df
 |                 angles  degrees
 |      circle          0      360
 |      triangle        3      180
 |      rectangle       4      360
 |      
 |      Add a scalar with the operator version, which returns the same
 |      results.
 |      
 |      >>> df + 1
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      >>> df.add(1)
 |                 angles  degrees
 |      circle          1      361
 |      triangle        4      181
 |      rectangle       5      361
 |      
 |      Divide by a constant, with the reverse version.
 |      
 |      >>> df.div(10)
 |                 angles  degrees
 |      circle        0.0     36.0
 |      triangle      0.3     18.0
 |      rectangle     0.4     36.0
 |      
 |      >>> df.rdiv(10)
 |                   angles   degrees
 |      circle          inf  0.027778
 |      triangle   3.333333  0.055556
 |      rectangle  2.500000  0.027778
 |      
 |      Subtract a list and a Series by axis with the operator version.
 |      
 |      >>> df - [1, 2]
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub([1, 2], axis='columns')
 |                 angles  degrees
 |      circle         -1      358
 |      triangle        2      178
 |      rectangle       3      358
 |      
 |      >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
 |      ...        axis='index')
 |                 angles  degrees
 |      circle         -1      359
 |      triangle        2      179
 |      rectangle       3      359
 |      
 |      Multiply a DataFrame of a different shape with the operator version.
 |      
 |      >>> other = pd.DataFrame({'angles': [0, 3, 4]},
 |      ...                      index=['circle', 'triangle', 'rectangle'])
 |      >>> other
 |                 angles
 |      circle          0
 |      triangle        3
 |      rectangle       4
 |      
 |      >>> df * other
 |                 angles  degrees
 |      circle          0      NaN
 |      triangle        9      NaN
 |      rectangle      16      NaN
 |      
 |      >>> df.mul(other, fill_value=0)
 |                 angles  degrees
 |      circle          0      0.0
 |      triangle        9      0.0
 |      rectangle      16      0.0
 |      
 |      Divide by a DataFrame with a MultiIndex, matching on a level.
 |      
 |      >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
 |      ...                              'degrees': [360, 180, 360, 360, 540, 720]},
 |      ...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
 |      ...                                    ['circle', 'triangle', 'rectangle',
 |      ...                                     'square', 'pentagon', 'hexagon']])
 |      >>> df_multindex
 |                   angles  degrees
 |      A circle          0      360
 |        triangle        3      180
 |        rectangle       4      360
 |      B square          4      360
 |        pentagon        5      540
 |        hexagon         6      720
 |      
 |      >>> df.div(df_multindex, level=1, fill_value=0)
 |                   angles  degrees
 |      A circle        NaN      1.0
 |        triangle      1.0      1.0
 |        rectangle     1.0      1.0
 |      B square        0.0      0.0
 |        pentagon      0.0      0.0
 |        hexagon       0.0      0.0
 |  
 |  unstack(self, level=-1, fill_value=None)
 |      Pivot a level of the (necessarily hierarchical) index labels, returning
 |      a DataFrame having a new level of column labels whose inner-most level
 |      consists of the pivoted index labels.
 |      
 |      If the index is not a MultiIndex, the output will be a Series
 |      (the analogue of stack when the columns are not a MultiIndex).
 |      
 |      The level involved will automatically get sorted.
 |      
 |      Parameters
 |      ----------
 |      level : int, string, or list of these, default -1 (last level)
 |          Level(s) of index to unstack, can pass level name
 |      fill_value : scalar, optional
 |          Replace NaN with this value if the unstack produces missing
 |          values.
 |      
 |          .. versionadded:: 0.18.0
 |      
 |      Returns
 |      -------
 |      unstacked : DataFrame or Series
 |      
 |      See Also
 |      --------
 |      DataFrame.pivot : Pivot a table based on column values.
 |      DataFrame.stack : Pivot a level of the column labels (inverse operation
 |          from `unstack`).
 |      
 |      Examples
 |      --------
 |      >>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'),
 |      ...                                    ('two', 'a'), ('two', 'b')])
 |      >>> s = pd.Series(np.arange(1.0, 5.0), index=index)
 |      >>> s
 |      one  a   1.0
 |           b   2.0
 |      two  a   3.0
 |           b   4.0
 |      dtype: float64
 |      
 |      >>> s.unstack(level=-1)
 |           a   b
 |      one  1.0  2.0
 |      two  3.0  4.0
 |      
 |      >>> s.unstack(level=0)
 |         one  two
 |      a  1.0   3.0
 |      b  2.0   4.0
 |      
 |      >>> df = s.unstack(level=0)
 |      >>> df.unstack()
 |      one  a  1.0
 |           b  2.0
 |      two  a  3.0
 |           b  4.0
 |      dtype: float64
 |  
 |  update(self, other, join='left', overwrite=True, filter_func=None, errors='ignore')
 |      Modify in place using non-NA values from another DataFrame.
 |      
 |      Aligns on indices. There is no return value.
 |      
 |      Parameters
 |      ----------
 |      other : DataFrame, or object coercible into a DataFrame
 |          Should have at least one matching index/column label
 |          with the original DataFrame. If a Series is passed,
 |          its name attribute must be set, and that will be
 |          used as the column name to align with the original DataFrame.
 |      join : {'left'}, default 'left'
 |          Only left join is implemented, keeping the index and columns of the
 |          original object.
 |      overwrite : bool, default True
 |          How to handle non-NA values for overlapping keys:
 |      
 |          * True: overwrite original DataFrame's values
 |            with values from `other`.
 |          * False: only update values that are NA in
 |            the original DataFrame.
 |      
 |      filter_func : callable(1d-array) -> bool 1d-array, optional
 |          Can choose to replace values other than NA. Return True for values
 |          that should be updated.
 |      errors : {'raise', 'ignore'}, default 'ignore'
 |          If 'raise', will raise a ValueError if the DataFrame and `other`
 |          both contain non-NA data in the same place.
 |      
 |          .. versionchanged:: 0.24.0
 |             Changed from `raise_conflict=False|True`
 |             to `errors='ignore'|'raise'`.
 |      
 |      Returns
 |      -------
 |      None : method directly changes calling object
 |      
 |      Raises
 |      ------
 |      ValueError
 |          * When `errors='raise'` and there's overlapping non-NA data.
 |          * When `errors` is not either `'ignore'` or `'raise'`
 |      NotImplementedError
 |          * If `join != 'left'`
 |      
 |      See Also
 |      --------
 |      dict.update : Similar method for dictionaries.
 |      DataFrame.merge : For column(s)-on-columns(s) operations.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': [1, 2, 3],
 |      ...                    'B': [400, 500, 600]})
 |      >>> new_df = pd.DataFrame({'B': [4, 5, 6],
 |      ...                        'C': [7, 8, 9]})
 |      >>> df.update(new_df)
 |      >>> df
 |         A  B
 |      0  1  4
 |      1  2  5
 |      2  3  6
 |      
 |      The DataFrame's length does not increase as a result of the update,
 |      only values at matching index/column labels are updated.
 |      
 |      >>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
 |      ...                    'B': ['x', 'y', 'z']})
 |      >>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']})
 |      >>> df.update(new_df)
 |      >>> df
 |         A  B
 |      0  a  d
 |      1  b  e
 |      2  c  f
 |      
 |      For Series, its name attribute must be set.
 |      
 |      >>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
 |      ...                    'B': ['x', 'y', 'z']})
 |      >>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2])
 |      >>> df.update(new_column)
 |      >>> df
 |         A  B
 |      0  a  d
 |      1  b  y
 |      2  c  e
 |      >>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
 |      ...                    'B': ['x', 'y', 'z']})
 |      >>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2])
 |      >>> df.update(new_df)
 |      >>> df
 |         A  B
 |      0  a  x
 |      1  b  d
 |      2  c  e
 |      
 |      If `other` contains NaNs the corresponding values are not updated
 |      in the original dataframe.
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3],
 |      ...                    'B': [400, 500, 600]})
 |      >>> new_df = pd.DataFrame({'B': [4, np.nan, 6]})
 |      >>> df.update(new_df)
 |      >>> df
 |         A      B
 |      0  1    4.0
 |      1  2  500.0
 |      2  3    6.0
 |  
 |  var(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)
 |      Return unbiased variance over requested axis.
 |      
 |      Normalized by N-1 by default. This can be changed using the ddof argument
 |      
 |      Parameters
 |      ----------
 |      axis : {index (0), columns (1)}
 |      skipna : boolean, default True
 |          Exclude NA/null values. If an entire row/column is NA, the result
 |          will be NA
 |      level : int or level name, default None
 |          If the axis is a MultiIndex (hierarchical), count along a
 |          particular level, collapsing into a Series
 |      ddof : int, default 1
 |          Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
 |          where N represents the number of elements.
 |      numeric_only : boolean, default None
 |          Include only float, int, boolean columns. If None, will attempt to use
 |          everything, then use only numeric data. Not implemented for Series.
 |      
 |      Returns
 |      -------
 |      var : Series or DataFrame (if level specified)
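 |      
 |      Examples
 |      --------
 |      A short sketch (column names and values are illustrative);
 |      the default ``ddof=1`` gives the sample variance:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 4, 6]})
 |      >>> df.var()
 |      a    1.0
 |      b    4.0
 |      dtype: float64
 |      
 |      Population variance via ``ddof=0``:
 |      
 |      >>> df.var(ddof=0)
 |      a    0.666667
 |      b    2.666667
 |      dtype: float64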
 |  
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |  
 |  from_csv(path, header=0, sep=',', index_col=0, parse_dates=True, encoding=None, tupleize_cols=None, infer_datetime_format=False) from builtins.type
 |      Read CSV file.
 |      
 |      .. deprecated:: 0.21.0
 |          Use :func:`pandas.read_csv` instead.
 |      
 |      It is preferable to use the more powerful :func:`pandas.read_csv`
 |      for most general purposes, but ``from_csv`` makes for an easy
 |      roundtrip to and from a file (the exact counterpart of
 |      ``to_csv``), especially with a DataFrame of time series data.
 |      
 |      This method only differs from the preferred :func:`pandas.read_csv`
 |      in some defaults:
 |      
 |      - `index_col` is ``0`` instead of ``None`` (take first column as index
 |        by default)
 |      - `parse_dates` is ``True`` instead of ``False`` (try parsing the index
 |        as datetime by default)
 |      
 |      So a ``pd.DataFrame.from_csv(path)`` can be replaced by
 |      ``pd.read_csv(path, index_col=0, parse_dates=True)``.
 |      
 |      Parameters
 |      ----------
 |      path : string file path or file handle / StringIO
 |      header : int, default 0
 |          Row to use as header (skip prior rows)
 |      sep : string, default ','
 |          Field delimiter
 |      index_col : int or sequence, default 0
 |          Column to use for index. If a sequence is given, a MultiIndex
 |          is used. Different default from read_table
 |      parse_dates : boolean, default True
 |          Parse dates. Different default from read_table
 |      tupleize_cols : boolean, default False
 |          Write MultiIndex columns as a list of tuples (if True)
 |          or in the new, expanded format (if False).
 |      infer_datetime_format : boolean, default False
 |          If True and `parse_dates` is True for a column, try to infer the
 |          datetime format based on the first datetime string. If the format
 |          can be inferred, there often will be a large parsing speed-up.
 |      
 |      Returns
 |      -------
 |      y : DataFrame
 |      
 |      See Also
 |      --------
 |      pandas.read_csv
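 |      
 |      Examples
 |      --------
 |      A roundtrip sketch using the recommended replacement (the file
 |      name is illustrative):
 |      
 |      >>> df.to_csv('data.csv')  # doctest: +SKIP
 |      >>> df2 = pd.read_csv('data.csv', index_col=0,
 |      ...                   parse_dates=True)  # doctest: +SKIP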
 |  
 |  from_dict(data, orient='columns', dtype=None, columns=None) from builtins.type
 |      Construct DataFrame from dict of array-like or dicts.
 |      
 |      Creates DataFrame object from dictionary by columns or by index
 |      allowing dtype specification.
 |      
 |      Parameters
 |      ----------
 |      data : dict
 |          Of the form {field : array-like} or {field : dict}.
 |      orient : {'columns', 'index'}, default 'columns'
 |          The "orientation" of the data. If the keys of the passed dict
 |          should be the columns of the resulting DataFrame, pass 'columns'
 |          (default). Otherwise if the keys should be rows, pass 'index'.
 |      dtype : dtype, default None
 |          Data type to force, otherwise infer.
 |      columns : list, default None
 |          Column labels to use when ``orient='index'``. Raises a ValueError
 |          if used with ``orient='columns'``.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      Returns
 |      -------
 |      pandas.DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.from_records : DataFrame from ndarray (structured
 |          dtype), list of tuples, dict, or DataFrame.
 |      DataFrame : DataFrame object creation using constructor.
 |      
 |      Examples
 |      --------
 |      By default the keys of the dict become the DataFrame columns:
 |      
 |      >>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
 |      >>> pd.DataFrame.from_dict(data)
 |         col_1 col_2
 |      0      3     a
 |      1      2     b
 |      2      1     c
 |      3      0     d
 |      
 |      Specify ``orient='index'`` to create the DataFrame using dictionary
 |      keys as rows:
 |      
 |      >>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']}
 |      >>> pd.DataFrame.from_dict(data, orient='index')
 |             0  1  2  3
 |      row_1  3  2  1  0
 |      row_2  a  b  c  d
 |      
 |      When using the 'index' orientation, the column names can be
 |      specified manually:
 |      
 |      >>> pd.DataFrame.from_dict(data, orient='index',
 |      ...                        columns=['A', 'B', 'C', 'D'])
 |             A  B  C  D
 |      row_1  3  2  1  0
 |      row_2  a  b  c  d
 |  
 |  from_items(items, columns=None, orient='columns') from builtins.type
 |      Construct a DataFrame from a list of tuples.
 |      
 |      .. deprecated:: 0.23.0
 |        `from_items` is deprecated and will be removed in a future version.
 |        Use :meth:`DataFrame.from_dict(dict(items)) <DataFrame.from_dict>`
 |        instead.
 |        :meth:`DataFrame.from_dict(OrderedDict(items)) <DataFrame.from_dict>`
 |        may be used to preserve the key order.
 |      
 |      Convert (key, value) pairs to DataFrame. The keys will be the axis
 |      index (usually the columns, but depends on the specified
 |      orientation). The values should be arrays or Series.
 |      
 |      Parameters
 |      ----------
 |      items : sequence of (key, value) pairs
 |          Values should be arrays or Series.
 |      columns : sequence of column labels, optional
 |          Must be passed if orient='index'.
 |      orient : {'columns', 'index'}, default 'columns'
 |          The "orientation" of the data. If the keys of the
 |          input correspond to column labels, pass 'columns'
 |          (default). Otherwise if the keys correspond to the index,
 |          pass 'index'.
 |      
 |      Returns
 |      -------
 |      frame : DataFrame
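 |      
 |      Examples
 |      --------
 |      A sketch of the recommended replacement, preserving key order:
 |      
 |      >>> from collections import OrderedDict
 |      >>> items = [('A', [1, 2]), ('B', [3, 4])]
 |      >>> pd.DataFrame.from_dict(OrderedDict(items))
 |         A  B
 |      0  1  3
 |      1  2  4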
 |  
 |  from_records(data, index=None, exclude=None, columns=None, coerce_float=False, nrows=None) from builtins.type
 |      Convert structured or record ndarray to DataFrame.
 |      
 |      Parameters
 |      ----------
 |      data : ndarray (structured dtype), list of tuples, dict, or DataFrame
 |      index : string, list of fields, array-like
 |          Field of array to use as the index, alternately a specific set of
 |          input labels to use
 |      exclude : sequence, default None
 |          Columns or fields to exclude
 |      columns : sequence, default None
 |          Column names to use. If the passed data do not have names
 |          associated with them, this argument provides names for the
 |          columns. Otherwise this argument indicates the order of the columns
 |          in the result (any names not found in the data will become all-NA
 |          columns)
 |      coerce_float : boolean, default False
 |          Attempt to convert values of non-string, non-numeric objects (like
 |          decimal.Decimal) to floating point, useful for SQL result sets
 |      nrows : int, default None
 |          Number of rows to read if data is an iterator
 |      
 |      Returns
 |      -------
 |      df : DataFrame
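 |      
 |      Examples
 |      --------
 |      A minimal sketch with a list of tuples (names are illustrative):
 |      
 |      >>> records = [(1, 'a'), (2, 'b')]
 |      >>> pd.DataFrame.from_records(records, columns=['num', 'let'])
 |         num let
 |      0    1   a
 |      1    2   b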
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  T
 |      Transpose index and columns.
 |      
 |      Reflect the DataFrame over its main diagonal by writing rows as columns
 |      and vice-versa. The property :attr:`.T` is an accessor to the method
 |      :meth:`transpose`.
 |      
 |      Parameters
 |      ----------
 |      copy : bool, default False
 |          If True, the underlying data is copied. Otherwise (default), no
 |          copy is made if possible.
 |      *args, **kwargs
 |          Additional keywords have no effect but might be accepted for
 |          compatibility with numpy.
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          The transposed DataFrame.
 |      
 |      See Also
 |      --------
 |      numpy.transpose : Permute the dimensions of a given array.
 |      
 |      Notes
 |      -----
 |      Transposing a DataFrame with mixed dtypes will result in a homogeneous
 |      DataFrame with the `object` dtype. In such a case, a copy of the data
 |      is always made.
 |      
 |      Examples
 |      --------
 |      **Square DataFrame with homogeneous dtype**
 |      
 |      >>> d1 = {'col1': [1, 2], 'col2': [3, 4]}
 |      >>> df1 = pd.DataFrame(data=d1)
 |      >>> df1
 |         col1  col2
 |      0     1     3
 |      1     2     4
 |      
 |      >>> df1_transposed = df1.T # or df1.transpose()
 |      >>> df1_transposed
 |            0  1
 |      col1  1  2
 |      col2  3  4
 |      
 |      When the dtype is homogeneous in the original DataFrame, we get a
 |      transposed DataFrame with the same dtype:
 |      
 |      >>> df1.dtypes
 |      col1    int64
 |      col2    int64
 |      dtype: object
 |      >>> df1_transposed.dtypes
 |      0    int64
 |      1    int64
 |      dtype: object
 |      
 |      **Non-square DataFrame with mixed dtypes**
 |      
 |      >>> d2 = {'name': ['Alice', 'Bob'],
 |      ...       'score': [9.5, 8],
 |      ...       'employed': [False, True],
 |      ...       'kids': [0, 0]}
 |      >>> df2 = pd.DataFrame(data=d2)
 |      >>> df2
 |          name  score  employed  kids
 |      0  Alice    9.5     False     0
 |      1    Bob    8.0      True     0
 |      
 |      >>> df2_transposed = df2.T # or df2.transpose()
 |      >>> df2_transposed
 |                    0     1
 |      name      Alice   Bob
 |      score       9.5     8
 |      employed  False  True
 |      kids          0     0
 |      
 |      When the DataFrame has mixed dtypes, we get a transposed DataFrame with
 |      the `object` dtype:
 |      
 |      >>> df2.dtypes
 |      name         object
 |      score       float64
 |      employed       bool
 |      kids          int64
 |      dtype: object
 |      >>> df2_transposed.dtypes
 |      0    object
 |      1    object
 |      dtype: object
 |  
 |  axes
 |      Return a list representing the axes of the DataFrame.
 |      
 |      It has the row axis labels and column axis labels as the only members.
 |      They are returned in that order.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.axes
 |      [RangeIndex(start=0, stop=2, step=1), Index(['col1', 'col2'],
 |      dtype='object')]
 |  
 |  columns
 |      The column labels of the DataFrame.
 |  
 |  index
 |      The index (row labels) of the DataFrame.
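 |      
 |      A quick sketch showing both descriptors (data is illustrative):
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.index
 |      RangeIndex(start=0, stop=2, step=1)
 |      >>> df.columns
 |      Index(['col1', 'col2'], dtype='object')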
 |  
 |  shape
 |      Return a tuple representing the dimensionality of the DataFrame.
 |      
 |      See Also
 |      --------
 |      ndarray.shape
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.shape
 |      (2, 2)
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4],
 |      ...                    'col3': [5, 6]})
 |      >>> df.shape
 |      (2, 3)
 |  
 |  style
 |      Property returning a Styler object containing methods for
 |      building a styled HTML representation of the DataFrame.
 |      
 |      See Also
 |      --------
 |      pandas.io.formats.style.Styler
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  plot = <class 'pandas.plotting._core.FramePlotMethods'>
 |      DataFrame plotting accessor and method
 |      
 |      Examples
 |      --------
 |      >>> df.plot.line()
 |      >>> df.plot.scatter('x', 'y')
 |      >>> df.plot.hexbin()
 |      
 |      These plotting methods can also be accessed by calling the accessor as a
 |      method with the ``kind`` argument:
 |      ``df.plot(kind='line')`` is equivalent to ``df.plot.line()``
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from pandas.core.generic.NDFrame:
 |  
 |  __abs__(self)
 |  
 |  __array__(self, dtype=None)
 |  
 |  __array_wrap__(self, result, context=None)
 |  
 |  __bool__ = __nonzero__(self)
 |  
 |  __contains__(self, key)
 |      True if the key is in the info axis
 |  
 |  __copy__(self, deep=True)
 |  
 |  __deepcopy__(self, memo=None)
 |      Parameters
 |      ----------
 |      memo, default None
 |          Standard signature. Unused
 |  
 |  __delitem__(self, key)
 |      Delete item
 |  
 |  __finalize__(self, other, method=None, **kwargs)
 |      Propagate metadata from other to self.
 |      
 |      Parameters
 |      ----------
 |      other : the object from which to get the attributes that we are going
 |          to propagate
 |      method : optional, a passed method name; possibly to take different
 |          types of propagation actions based on this
 |  
 |  __getattr__(self, name)
 |      After regular attribute access, try looking up the name
 |      This allows simpler access to columns for interactive use.
 |  
 |  __getstate__(self)
 |  
 |  __hash__(self)
 |      Return hash(self).
 |  
 |  __invert__(self)
 |  
 |  __iter__(self)
 |      Iterate over info axis
 |  
 |  __neg__(self)
 |  
 |  __nonzero__(self)
 |  
 |  __pos__(self)
 |  
 |  __round__(self, decimals=0)
 |  
 |  __setattr__(self, name, value)
 |      After regular attribute access, try setting the name
 |      This allows simpler access to columns for interactive use.
 |  
 |  __setstate__(self, state)
 |  
 |  abs(self)
 |      Return a Series/DataFrame with absolute numeric value of each element.
 |      
 |      This function only applies to elements that are all numeric.
 |      
 |      Returns
 |      -------
 |      abs
 |          Series/DataFrame containing the absolute value of each element.
 |      
 |      See Also
 |      --------
 |      numpy.absolute : Calculate the absolute value element-wise.
 |      
 |      Notes
 |      -----
 |      For ``complex`` inputs, ``1.2 + 1j``, the absolute value is
 |      :math:`\sqrt{ a^2 + b^2 }`.
 |      
 |      Examples
 |      --------
 |      Absolute numeric values in a Series.
 |      
 |      >>> s = pd.Series([-1.10, 2, -3.33, 4])
 |      >>> s.abs()
 |      0    1.10
 |      1    2.00
 |      2    3.33
 |      3    4.00
 |      dtype: float64
 |      
 |      Absolute numeric values in a Series with complex numbers.
 |      
 |      >>> s = pd.Series([1.2 + 1j])
 |      >>> s.abs()
 |      0    1.56205
 |      dtype: float64
 |      
 |      Absolute numeric values in a Series with a Timedelta element.
 |      
 |      >>> s = pd.Series([pd.Timedelta('1 days')])
 |      >>> s.abs()
 |      0   1 days
 |      dtype: timedelta64[ns]
 |      
 |      Select rows with data closest to certain value using argsort (from
 |      `StackOverflow <https://stackoverflow.com/a/17758115>`__).
 |      
 |      >>> df = pd.DataFrame({
 |      ...     'a': [4, 5, 6, 7],
 |      ...     'b': [10, 20, 30, 40],
 |      ...     'c': [100, 50, -30, -50]
 |      ... })
 |      >>> df
 |           a    b    c
 |      0    4   10  100
 |      1    5   20   50
 |      2    6   30  -30
 |      3    7   40  -50
 |      >>> df.loc[(df.c - 43).abs().argsort()]
 |           a    b    c
 |      1    5   20   50
 |      0    4   10  100
 |      2    6   30  -30
 |      3    7   40  -50
 |  
 |  add_prefix(self, prefix)
 |      Prefix labels with string `prefix`.
 |      
 |      For Series, the row labels are prefixed.
 |      For DataFrame, the column labels are prefixed.
 |      
 |      Parameters
 |      ----------
 |      prefix : str
 |          The string to add before each label.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          New Series or DataFrame with updated labels.
 |      
 |      See Also
 |      --------
 |      Series.add_suffix: Suffix row labels with string `suffix`.
 |      DataFrame.add_suffix: Suffix column labels with string `suffix`.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series([1, 2, 3, 4])
 |      >>> s
 |      0    1
 |      1    2
 |      2    3
 |      3    4
 |      dtype: int64
 |      
 |      >>> s.add_prefix('item_')
 |      item_0    1
 |      item_1    2
 |      item_2    3
 |      item_3    4
 |      dtype: int64
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3, 4],  'B': [3, 4, 5, 6]})
 |      >>> df
 |         A  B
 |      0  1  3
 |      1  2  4
 |      2  3  5
 |      3  4  6
 |      
 |      >>> df.add_prefix('col_')
 |           col_A  col_B
 |      0       1       3
 |      1       2       4
 |      2       3       5
 |      3       4       6
 |  
 |  add_suffix(self, suffix)
 |      Suffix labels with string `suffix`.
 |      
 |      For Series, the row labels are suffixed.
 |      For DataFrame, the column labels are suffixed.
 |      
 |      Parameters
 |      ----------
 |      suffix : str
 |          The string to add after each label.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          New Series or DataFrame with updated labels.
 |      
 |      See Also
 |      --------
 |      Series.add_prefix: Prefix row labels with string `prefix`.
 |      DataFrame.add_prefix: Prefix column labels with string `prefix`.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series([1, 2, 3, 4])
 |      >>> s
 |      0    1
 |      1    2
 |      2    3
 |      3    4
 |      dtype: int64
 |      
 |      >>> s.add_suffix('_item')
 |      0_item    1
 |      1_item    2
 |      2_item    3
 |      3_item    4
 |      dtype: int64
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3, 4],  'B': [3, 4, 5, 6]})
 |      >>> df
 |         A  B
 |      0  1  3
 |      1  2  4
 |      2  3  5
 |      3  4  6
 |      
 |      >>> df.add_suffix('_col')
 |           A_col  B_col
 |      0       1       3
 |      1       2       4
 |      2       3       5
 |      3       4       6
 |  
 |  as_blocks(self, copy=True)
 |      Convert the frame to a dict of dtype -> Constructor Types that each has
 |      a homogeneous dtype.
 |      
 |      .. deprecated:: 0.21.0
 |      
 |      NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in
 |            as_matrix)
 |      
 |      Parameters
 |      ----------
 |      copy : boolean, default True
 |      
 |      Returns
 |      -------
 |      values : a dict of dtype -> Constructor Types
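 |      
 |      A minimal sketch of the deprecated call; it returns a dict keyed
 |      by dtype name (output omitted):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
 |      >>> df.as_blocks()  # doctest: +SKIP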
 |  
 |  as_matrix(self, columns=None)
 |      Convert the frame to its Numpy-array representation.
 |      
 |      .. deprecated:: 0.23.0
 |          Use :meth:`DataFrame.values` instead.
 |      
 |      Parameters
 |      ----------
 |      columns : list, optional, default None
 |          If None, return all columns, otherwise, returns specified columns.
 |      
 |      Returns
 |      -------
 |      values : ndarray
 |          If the caller is heterogeneous and contains booleans or objects,
 |          the result will be of dtype=object. See Notes.
 |      
 |      See Also
 |      --------
 |      DataFrame.values
 |      
 |      Notes
 |      -----
 |      Return is NOT a Numpy-matrix, rather, a Numpy-array.
 |      
 |      The dtype will be a lower-common-denominator dtype (implicit
 |      upcasting); that is to say if the dtypes (even of numeric types)
 |      are mixed, the one that accommodates all will be chosen. Use this
 |      with care if you are not dealing with the blocks.
 |      
 |      e.g. If the dtypes are float16 and float32, dtype will be upcast to
 |      float32. If dtypes are int32 and uint8, dtype will be upcast to
 |      int32. By numpy.find_common_type convention, mixing int64 and uint64
 |      will result in a float64 dtype.
 |      
 |      This method is provided for backwards compatibility. Generally,
 |      it is recommended to use '.values'.
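 |      
 |      Examples
 |      --------
 |      A sketch of the recommended replacement; the mixed numeric
 |      dtypes below are upcast to float64:
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.5]})
 |      >>> df.values
 |      array([[1. , 3. ],
 |             [2. , 4.5]])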
 |  
 |  asfreq(self, freq, method=None, how=None, normalize=False, fill_value=None)
 |      Convert TimeSeries to specified frequency.
 |      
 |      Optionally provide filling method to pad/backfill missing values.
 |      
 |      Returns the original data conformed to a new index with the specified
 |      frequency. ``resample`` is more appropriate if an operation, such as
 |      summarization, is necessary to represent the data at the new frequency.
 |      
 |      Parameters
 |      ----------
 |      freq : DateOffset object, or string
 |      method : {'backfill'/'bfill', 'pad'/'ffill'}, default None
 |          Method to use for filling holes in reindexed Series (note this
 |          does not fill NaNs that already were present):
 |      
 |          * 'pad' / 'ffill': propagate last valid observation forward to next
 |            valid
 |          * 'backfill' / 'bfill': use NEXT valid observation to fill
 |      how : {'start', 'end'}, default end
 |          For PeriodIndex only, see PeriodIndex.asfreq
 |      normalize : bool, default False
 |          Whether to reset output index to midnight
 |      fill_value : scalar, optional
 |          Value to use for missing values, applied during upsampling (note
 |          this does not fill NaNs that already were present).
 |      
 |          .. versionadded:: 0.20.0
 |      
 |      Returns
 |      -------
 |      converted : same type as caller
 |      
 |      See Also
 |      --------
 |      reindex
 |      
 |      Notes
 |      -----
 |      To learn more about the frequency strings, please see `this link
 |      <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
 |      
 |      Examples
 |      --------
 |      
 |      Start by creating a series with 4 one minute timestamps.
 |      
 |      >>> index = pd.date_range('1/1/2000', periods=4, freq='T')
 |      >>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
 |      >>> df = pd.DataFrame({'s':series})
 |      >>> df
 |                             s
 |      2000-01-01 00:00:00    0.0
 |      2000-01-01 00:01:00    NaN
 |      2000-01-01 00:02:00    2.0
 |      2000-01-01 00:03:00    3.0
 |      
 |      Upsample the series into 30 second bins.
 |      
 |      >>> df.asfreq(freq='30S')
 |                             s
 |      2000-01-01 00:00:00    0.0
 |      2000-01-01 00:00:30    NaN
 |      2000-01-01 00:01:00    NaN
 |      2000-01-01 00:01:30    NaN
 |      2000-01-01 00:02:00    2.0
 |      2000-01-01 00:02:30    NaN
 |      2000-01-01 00:03:00    3.0
 |      
 |      Upsample again, providing a ``fill_value``.
 |      
 |      >>> df.asfreq(freq='30S', fill_value=9.0)
 |                             s
 |      2000-01-01 00:00:00    0.0
 |      2000-01-01 00:00:30    9.0
 |      2000-01-01 00:01:00    NaN
 |      2000-01-01 00:01:30    9.0
 |      2000-01-01 00:02:00    2.0
 |      2000-01-01 00:02:30    9.0
 |      2000-01-01 00:03:00    3.0
 |      
 |      Upsample again, providing a ``method``.
 |      
 |      >>> df.asfreq(freq='30S', method='bfill')
 |                             s
 |      2000-01-01 00:00:00    0.0
 |      2000-01-01 00:00:30    NaN
 |      2000-01-01 00:01:00    NaN
 |      2000-01-01 00:01:30    2.0
 |      2000-01-01 00:02:00    2.0
 |      2000-01-01 00:02:30    3.0
 |      2000-01-01 00:03:00    3.0
 |  
 |  asof(self, where, subset=None)
 |      Return the last row(s) without any NaNs before `where`.
 |      
 |      The last row (for each element in `where`, if list) without any
 |      NaN is taken.
 |      In case of a :class:`~pandas.DataFrame`, the last row without NaN
 |      considering only the subset of columns (if not `None`)
 |      
 |      .. versionadded:: 0.19.0 For DataFrame
 |      
 |      If there is no good value, NaN is returned for a Series or
 |      a Series of NaN values for a DataFrame
 |      
 |      Parameters
 |      ----------
 |      where : date or array-like of dates
 |          Date(s) before which the last row(s) are returned.
 |      subset : str or array-like of str, default `None`
 |          For DataFrame, if not `None`, only use these columns to
 |          check for NaNs.
 |      
 |      Returns
 |      -------
 |      scalar, Series, or DataFrame
 |      
 |         * scalar : when `self` is a Series and `where` is a scalar
 |         * Series: when `self` is a Series and `where` is an array-like,
 |           or when `self` is a DataFrame and `where` is a scalar
 |         * DataFrame : when `self` is a DataFrame and `where` is an
 |           array-like
 |      
 |      See Also
 |      --------
 |      merge_asof : Perform an asof merge. Similar to left join.
 |      
 |      Notes
 |      -----
 |      Dates are assumed to be sorted. Raises if this is not the case.
 |      
 |      Examples
 |      --------
 |      A Series and a scalar `where`.
 |      
 |      >>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
 |      >>> s
 |      10    1.0
 |      20    2.0
 |      30    NaN
 |      40    4.0
 |      dtype: float64
 |      
 |      >>> s.asof(20)
 |      2.0
 |      
 |      For a sequence `where`, a Series is returned. The first value is
 |      NaN, because the first element of `where` is before the first
 |      index value.
 |      
 |      >>> s.asof([5, 20])
 |      5     NaN
 |      20    2.0
 |      dtype: float64
 |      
 |      Missing values are not considered. The following is ``2.0``, not
 |      NaN, even though NaN is at the index location for ``30``.
 |      
 |      >>> s.asof(30)
 |      2.0
 |      
 |      Take all columns into consideration
 |      
 |      >>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
 |      ...                    'b': [None, None, None, None, 500]},
 |      ...                   index=pd.DatetimeIndex(['2018-02-27 09:01:00',
 |      ...                                           '2018-02-27 09:02:00',
 |      ...                                           '2018-02-27 09:03:00',
 |      ...                                           '2018-02-27 09:04:00',
 |      ...                                           '2018-02-27 09:05:00']))
 |      >>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
 |      ...                           '2018-02-27 09:04:30']))
 |                            a   b
 |      2018-02-27 09:03:30 NaN NaN
 |      2018-02-27 09:04:30 NaN NaN
 |      
 |      Take a single column into consideration
 |      
 |      >>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
 |      ...                           '2018-02-27 09:04:30']),
 |      ...         subset=['a'])
 |                               a   b
 |      2018-02-27 09:03:30   30.0 NaN
 |      2018-02-27 09:04:30   40.0 NaN
 |  
 |  astype(self, dtype, copy=True, errors='raise', **kwargs)
 |      Cast a pandas object to a specified dtype ``dtype``.
 |      
 |      Parameters
 |      ----------
 |      dtype : data type, or dict of column name -> data type
 |          Use a numpy.dtype or Python type to cast entire pandas object to
 |          the same type. Alternatively, use {col: dtype, ...}, where col is a
 |          column label and dtype is a numpy.dtype or Python type to cast one
 |          or more of the DataFrame's columns to column-specific types.
 |      copy : bool, default True
 |          Return a copy when ``copy=True`` (be very careful setting
 |          ``copy=False`` as changes to values then may propagate to other
 |          pandas objects).
 |      errors : {'raise', 'ignore'}, default 'raise'
 |          Control raising of exceptions on invalid data for provided dtype.
 |      
 |          - ``raise`` : allow exceptions to be raised
 |          - ``ignore`` : suppress exceptions. On error return original object
 |      
 |          .. versionadded:: 0.20.0
 |      
 |      kwargs : keyword arguments to pass on to the constructor
 |      
 |      Returns
 |      -------
 |      casted : same type as caller
 |      
 |      See Also
 |      --------
 |      to_datetime : Convert argument to datetime.
 |      to_timedelta : Convert argument to timedelta.
 |      to_numeric : Convert argument to a numeric type.
 |      numpy.ndarray.astype : Cast a numpy array to a specified type.
 |      
 |      Examples
 |      --------
 |      >>> ser = pd.Series([1, 2], dtype='int32')
 |      >>> ser
 |      0    1
 |      1    2
 |      dtype: int32
 |      >>> ser.astype('int64')
 |      0    1
 |      1    2
 |      dtype: int64
 |      
 |      Convert to categorical type:
 |      
 |      >>> ser.astype('category')
 |      0    1
 |      1    2
 |      dtype: category
 |      Categories (2, int64): [1, 2]
 |      
 |      Convert to ordered categorical type with custom ordering:
 |      
 |      >>> cat_dtype = pd.api.types.CategoricalDtype(
 |      ...                     categories=[2, 1], ordered=True)
 |      >>> ser.astype(cat_dtype)
 |      0    1
 |      1    2
 |      dtype: category
 |      Categories (2, int64): [2 < 1]
 |      
 |      Note that using ``copy=False`` and changing data on a new
 |      pandas object may propagate changes:
 |      
 |      >>> s1 = pd.Series([1,2])
 |      >>> s2 = s1.astype('int64', copy=False)
 |      >>> s2[0] = 10
 |      >>> s1  # note that s1[0] has changed too
 |      0    10
 |      1     2
 |      dtype: int64
 |  
 |  at_time(self, time, asof=False, axis=None)
 |      Select values at particular time of day (e.g. 9:30AM).
 |      
 |      Parameters
 |      ----------
 |      time : datetime.time or string
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      values_at_time : same type as caller
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the index is not a :class:`DatetimeIndex`
 |      
 |      See Also
 |      --------
 |      between_time : Select values between particular times of the day.
 |      first : Select initial periods of time series based on a date offset.
 |      last : Select final periods of time series based on a date offset.
 |      DatetimeIndex.indexer_at_time : Get just the index locations for
 |          values at particular time of the day.
 |      
 |      Examples
 |      --------
 |      >>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
 |      >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i)
 |      >>> ts
 |                           A
 |      2018-04-09 00:00:00  1
 |      2018-04-09 12:00:00  2
 |      2018-04-10 00:00:00  3
 |      2018-04-10 12:00:00  4
 |      
 |      >>> ts.at_time('12:00')
 |                           A
 |      2018-04-09 12:00:00  2
 |      2018-04-10 12:00:00  4
 |  
 |  between_time(self, start_time, end_time, include_start=True, include_end=True, axis=None)
 |      Select values between particular times of the day (e.g., 9:00-9:30 AM).
 |      
 |      By setting ``start_time`` to be later than ``end_time``,
 |      you can get the times that are *not* between the two times.
 |      
 |      Parameters
 |      ----------
 |      start_time : datetime.time or string
 |      end_time : datetime.time or string
 |      include_start : boolean, default True
 |      include_end : boolean, default True
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      values_between_time : same type as caller
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the index is not a :class:`DatetimeIndex`
 |      
 |      See Also
 |      --------
 |      at_time : Select values at a particular time of the day.
 |      first : Select initial periods of time series based on a date offset.
 |      last : Select final periods of time series based on a date offset.
 |      DatetimeIndex.indexer_between_time : Get just the index locations for
 |          values between particular times of the day.
 |      
 |      Examples
 |      --------
 |      >>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
 |      >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i)
 |      >>> ts
 |                           A
 |      2018-04-09 00:00:00  1
 |      2018-04-10 00:20:00  2
 |      2018-04-11 00:40:00  3
 |      2018-04-12 01:00:00  4
 |      
 |      >>> ts.between_time('0:15', '0:45')
 |                           A
 |      2018-04-10 00:20:00  2
 |      2018-04-11 00:40:00  3
 |      
 |      You get the times that are *not* between two times by setting
 |      ``start_time`` later than ``end_time``:
 |      
 |      >>> ts.between_time('0:45', '0:15')
 |                           A
 |      2018-04-09 00:00:00  1
 |      2018-04-12 01:00:00  4
 |  
 |  bfill(self, axis=None, inplace=False, limit=None, downcast=None)
 |      Synonym for :meth:`DataFrame.fillna` with ``method='bfill'``.
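 |      
 |      A minimal sketch (values chosen for illustration):
 |      
 |      >>> df = pd.DataFrame({'A': [1, None, 3]})
 |      >>> df.bfill()
 |           A
 |      0  1.0
 |      1  3.0
 |      2  3.0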
 |  
 |  bool(self)
 |      Return the bool of a single element PandasObject.
 |      
 |      This must be a boolean scalar value, either True or False. Raises a
 |      ValueError if the PandasObject does not have exactly 1 element, or if
 |      that element is not boolean.
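 |      
 |      A minimal sketch:
 |      
 |      >>> pd.DataFrame({'flag': [True]}).bool()
 |      True
 |      >>> pd.Series([False]).bool()
 |      False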
 |  
 |  clip(self, lower=None, upper=None, axis=None, inplace=False, *args, **kwargs)
 |      Trim values at input threshold(s).
 |      
 |      Assigns values outside boundary to boundary values. Thresholds
 |      can be singular values or array-like, and in the latter case
 |      the clipping is performed element-wise in the specified axis.
 |      
 |      Parameters
 |      ----------
 |      lower : float or array_like, default None
 |          Minimum threshold value. All values below this
 |          threshold will be set to it.
 |      upper : float or array_like, default None
 |          Maximum threshold value. All values above this
 |          threshold will be set to it.
 |      axis : int or string axis name, optional
 |          Align object with lower and upper along the given axis.
 |      inplace : boolean, default False
 |          Whether to perform the operation in place on the data.
 |      
 |          .. versionadded:: 0.21.0
 |      *args, **kwargs
 |          Additional keywords have no effect but might be accepted
 |          for compatibility with numpy.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Same type as calling object with the values outside the
 |          clip boundaries replaced
 |      
 |      Examples
 |      --------
 |      >>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
 |      >>> df = pd.DataFrame(data)
 |      >>> df
 |         col_0  col_1
 |      0      9     -2
 |      1     -3     -7
 |      2      0      6
 |      3     -1      8
 |      4      5     -5
 |      
 |      Clips per column using lower and upper thresholds:
 |      
 |      >>> df.clip(-4, 6)
 |         col_0  col_1
 |      0      6     -2
 |      1     -3     -4
 |      2      0      6
 |      3     -1      6
 |      4      5     -4
 |      
 |      Clips using specific lower and upper thresholds per column element:
 |      
 |      >>> t = pd.Series([2, -4, -1, 6, 3])
 |      >>> t
 |      0    2
 |      1   -4
 |      2   -1
 |      3    6
 |      4    3
 |      dtype: int64
 |      
 |      >>> df.clip(t, t + 4, axis=0)
 |         col_0  col_1
 |      0      6      2
 |      1     -3     -4
 |      2      0      3
 |      3      6      8
 |      4      5      3
 |  
 |  clip_lower(self, threshold, axis=None, inplace=False)
 |      Trim values below a given threshold.
 |      
 |      .. deprecated:: 0.24.0
 |          Use clip(lower=threshold) instead.
 |      
 |      Elements below the `threshold` will be changed to match the
 |      `threshold` value(s). Threshold can be a single value or an array,
 |      in the latter case it performs the truncation element-wise.
 |      
 |      Parameters
 |      ----------
 |      threshold : numeric or array-like
 |          Minimum value allowed. All values below threshold will be set to
 |          this value.
 |      
 |          * float : every value is compared to `threshold`.
 |          * array-like : The shape of `threshold` should match the object
 |            it's compared to. When `self` is a Series, `threshold` should be
 |            the same length. When `self` is a DataFrame, `threshold` should be
 |            2-D and the same shape as `self` for ``axis=None``, or 1-D and the
 |            same length as the axis being compared.
 |      
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Align `self` with `threshold` along the given axis.
 |      
 |      inplace : boolean, default False
 |          Whether to perform the operation in place on the data.
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Original data with values trimmed.
 |      
 |      See Also
 |      --------
 |      Series.clip : General purpose method to trim Series values to given
 |          threshold(s).
 |      DataFrame.clip : General purpose method to trim DataFrame values to
 |          given threshold(s).
 |      
 |      Examples
 |      --------
 |      
 |      Series single threshold clipping:
 |      
 |      >>> s = pd.Series([5, 6, 7, 8, 9])
 |      >>> s.clip(lower=8)
 |      0    8
 |      1    8
 |      2    8
 |      3    8
 |      4    9
 |      dtype: int64
 |      
 |      Series clipping element-wise using an array of thresholds. `threshold`
 |      should be the same length as the Series.
 |      
 |      >>> elemwise_thresholds = [4, 8, 7, 2, 5]
 |      >>> s.clip(lower=elemwise_thresholds)
 |      0    5
 |      1    8
 |      2    7
 |      3    8
 |      4    9
 |      dtype: int64
 |      
 |      DataFrames can be compared to a scalar.
 |      
 |      >>> df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 4, 6]})
 |      >>> df
 |         A  B
 |      0  1  2
 |      1  3  4
 |      2  5  6
 |      
 |      >>> df.clip(lower=3)
 |         A  B
 |      0  3  3
 |      1  3  4
 |      2  5  6
 |      
 |      Or to an array of values. By default, `threshold` should be the same
 |      shape as the DataFrame.
 |      
 |      >>> df.clip(lower=np.array([[3, 4], [2, 2], [6, 2]]))
 |         A  B
 |      0  3  4
 |      1  3  4
 |      2  6  6
 |      
 |      Control how `threshold` is broadcast with `axis`. In this case
 |      `threshold` should be the same length as the axis specified by
 |      `axis`.
 |      
 |      >>> df.clip(lower=[3, 3, 5], axis='index')
 |         A  B
 |      0  3  3
 |      1  3  4
 |      2  5  6
 |      
 |      >>> df.clip(lower=[4, 5], axis='columns')
 |         A  B
 |      0  4  5
 |      1  4  5
 |      2  5  6
 |  
 |  clip_upper(self, threshold, axis=None, inplace=False)
 |      Trim values above a given threshold.
 |      
 |      .. deprecated:: 0.24.0
 |          Use clip(upper=threshold) instead.
 |      
 |      Elements above the `threshold` will be changed to match the
 |      `threshold` value(s). Threshold can be a single value or an array,
 |      in the latter case it performs the truncation element-wise.
 |      
 |      Parameters
 |      ----------
 |      threshold : numeric or array-like
 |          Maximum value allowed. All values above threshold will be set to
 |          this value.
 |      
 |          * float : every value is compared to `threshold`.
 |          * array-like : The shape of `threshold` should match the object
 |            it's compared to. When `self` is a Series, `threshold` should be
 |            the same length. When `self` is a DataFrame, `threshold` should be
 |            2-D and the same shape as `self` for ``axis=None``, or 1-D and the
 |            same length as the axis being compared.
 |      
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Align object with `threshold` along the given axis.
 |      inplace : boolean, default False
 |          Whether to perform the operation in place on the data.
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Original data with values trimmed.
 |      
 |      See Also
 |      --------
 |      Series.clip : General purpose method to trim Series values to given
 |          threshold(s).
 |      DataFrame.clip : General purpose method to trim DataFrame values to
 |          given threshold(s).
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series([1, 2, 3, 4, 5])
 |      >>> s
 |      0    1
 |      1    2
 |      2    3
 |      3    4
 |      4    5
 |      dtype: int64
 |      
 |      >>> s.clip(upper=3)
 |      0    1
 |      1    2
 |      2    3
 |      3    3
 |      4    3
 |      dtype: int64
 |      
 |      >>> elemwise_thresholds = [5, 4, 3, 2, 1]
 |      >>> elemwise_thresholds
 |      [5, 4, 3, 2, 1]
 |      
 |      >>> s.clip(upper=elemwise_thresholds)
 |      0    1
 |      1    2
 |      2    3
 |      3    2
 |      4    1
 |      dtype: int64
 |  
 |  convert_objects(self, convert_dates=True, convert_numeric=False, convert_timedeltas=True, copy=True)
 |      Attempt to infer better dtype for object columns.
 |      
 |      .. deprecated:: 0.21.0
 |      
 |      Parameters
 |      ----------
 |      convert_dates : boolean, default True
 |          If True, convert to date where possible. If 'coerce', force
 |          conversion, with unconvertible values becoming NaT.
 |      convert_numeric : boolean, default False
 |          If True, attempt to coerce to numbers (including strings), with
 |          unconvertible values becoming NaN.
 |      convert_timedeltas : boolean, default True
 |          If True, convert to timedelta where possible. If 'coerce', force
 |          conversion, with unconvertible values becoming NaT.
 |      copy : boolean, default True
 |          If True, return a copy even if no copy is necessary (e.g. no
 |          conversion was done). Note: This is meant for internal use, and
 |          should not be confused with inplace.
 |      
 |      Returns
 |      -------
 |      converted : same as input object
 |      
 |      See Also
 |      --------
 |      to_datetime : Convert argument to datetime.
 |      to_timedelta : Convert argument to timedelta.
 |      to_numeric : Convert argument to numeric type.
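 |      
 |      Examples
 |      --------
 |      A minimal sketch, added here for illustration (not part of the
 |      original help text); since ``convert_objects`` is deprecated, it
 |      shows the recommended replacement ``pd.to_numeric``:
 |      
 |      >>> s = pd.Series(['1', '2', 'x'])
 |      >>> pd.to_numeric(s, errors='coerce')
 |      0    1.0
 |      1    2.0
 |      2    NaN
 |      dtype: float64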
 |  
 |  copy(self, deep=True)
 |      Make a copy of this object's indices and data.
 |      
 |      When ``deep=True`` (default), a new object will be created with a
 |      copy of the calling object's data and indices. Modifications to
 |      the data or indices of the copy will not be reflected in the
 |      original object (see notes below).
 |      
 |      When ``deep=False``, a new object will be created without copying
 |      the calling object's data or index (only references to the data
 |      and index are copied). Any changes to the data of the original
 |      will be reflected in the shallow copy (and vice versa).
 |      
 |      Parameters
 |      ----------
 |      deep : bool, default True
 |          Make a deep copy, including a copy of the data and the indices.
 |          With ``deep=False`` neither the indices nor the data are copied.
 |      
 |      Returns
 |      -------
 |      copy : Series, DataFrame or Panel
 |          Object type matches caller.
 |      
 |      Notes
 |      -----
 |      When ``deep=True``, data is copied but actual Python objects
 |      will not be copied recursively, only the reference to the object.
 |      This is in contrast to `copy.deepcopy` in the Standard Library,
 |      which recursively copies object data (see examples below).
 |      
 |      While ``Index`` objects are copied when ``deep=True``, the underlying
 |      numpy array is not copied for performance reasons. Since ``Index`` is
 |      immutable, the underlying data can be safely shared and a copy
 |      is not needed.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series([1, 2], index=["a", "b"])
 |      >>> s
 |      a    1
 |      b    2
 |      dtype: int64
 |      
 |      >>> s_copy = s.copy()
 |      >>> s_copy
 |      a    1
 |      b    2
 |      dtype: int64
 |      
 |      **Shallow copy versus default (deep) copy:**
 |      
 |      >>> s = pd.Series([1, 2], index=["a", "b"])
 |      >>> deep = s.copy()
 |      >>> shallow = s.copy(deep=False)
 |      
 |      Shallow copy shares data and index with original.
 |      
 |      >>> s is shallow
 |      False
 |      >>> s.values is shallow.values and s.index is shallow.index
 |      True
 |      
 |      Deep copy has own copy of data and index.
 |      
 |      >>> s is deep
 |      False
 |      >>> s.values is deep.values or s.index is deep.index
 |      False
 |      
 |      Updates to the data shared by the shallow copy and the original are
 |      reflected in both; the deep copy remains unchanged.
 |      
 |      >>> s[0] = 3
 |      >>> shallow[1] = 4
 |      >>> s
 |      a    3
 |      b    4
 |      dtype: int64
 |      >>> shallow
 |      a    3
 |      b    4
 |      dtype: int64
 |      >>> deep
 |      a    1
 |      b    2
 |      dtype: int64
 |      
 |      Note that when copying an object containing Python objects, a deep copy
 |      will copy the data, but will not do so recursively. Updating a nested
 |      data object will be reflected in the deep copy.
 |      
 |      >>> s = pd.Series([[1, 2], [3, 4]])
 |      >>> deep = s.copy()
 |      >>> s[0][0] = 10
 |      >>> s
 |      0    [10, 2]
 |      1     [3, 4]
 |      dtype: object
 |      >>> deep
 |      0    [10, 2]
 |      1     [3, 4]
 |      dtype: object
 |  
 |  describe(self, percentiles=None, include=None, exclude=None)
 |      Generate descriptive statistics that summarize the central tendency,
 |      dispersion and shape of a dataset's distribution, excluding
 |      ``NaN`` values.
 |      
 |      Analyzes both numeric and object series, as well
 |      as ``DataFrame`` column sets of mixed data types. The output
 |      will vary depending on what is provided. Refer to the notes
 |      below for more detail.
 |      
 |      Parameters
 |      ----------
 |      percentiles : list-like of numbers, optional
 |          The percentiles to include in the output. All should
 |          fall between 0 and 1. The default is
 |          ``[.25, .5, .75]``, which returns the 25th, 50th, and
 |          75th percentiles.
 |      include : 'all', list-like of dtypes or None (default), optional
 |          A white list of data types to include in the result. Ignored
 |          for ``Series``. Here are the options:
 |      
 |          - 'all' : All columns of the input will be included in the output.
 |          - A list-like of dtypes : Limits the results to the
 |            provided data types.
 |            To limit the result to numeric types submit
 |            ``numpy.number``. To limit it instead to object columns submit
 |            the ``numpy.object`` data type. Strings
 |            can also be used in the style of
 |            ``select_dtypes`` (e.g. ``df.describe(include=['O'])``). To
 |            select pandas categorical columns, use ``'category'``
 |          - None (default) : The result will include all numeric columns.
 |      exclude : list-like of dtypes or None (default), optional
 |          A black list of data types to omit from the result. Ignored
 |          for ``Series``. Here are the options:
 |      
 |          - A list-like of dtypes : Excludes the provided data types
 |            from the result. To exclude numeric types submit
 |            ``numpy.number``. To exclude object columns submit the data
 |            type ``numpy.object``. Strings can also be used in the style of
 |            ``select_dtypes`` (e.g. ``df.describe(exclude=['O'])``). To
 |            exclude pandas categorical columns, use ``'category'``
 |          - None (default) : The result will exclude nothing.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Summary statistics of the Series or DataFrame provided.
 |      
 |      See Also
 |      --------
 |      DataFrame.count: Count number of non-NA/null observations.
 |      DataFrame.max: Maximum of the values in the object.
 |      DataFrame.min: Minimum of the values in the object.
 |      DataFrame.mean: Mean of the values.
 |      DataFrame.std: Standard deviation of the observations.
 |      DataFrame.select_dtypes: Subset of a DataFrame including/excluding
 |          columns based on their dtype.
 |      
 |      Notes
 |      -----
 |      For numeric data, the result's index will include ``count``,
 |      ``mean``, ``std``, ``min``, ``max`` as well as lower, ``50`` and
 |      upper percentiles. By default the lower percentile is ``25`` and the
 |      upper percentile is ``75``. The ``50`` percentile is the
 |      same as the median.
 |      
 |      For object data (e.g. strings or timestamps), the result's index
 |      will include ``count``, ``unique``, ``top``, and ``freq``. The ``top``
 |      is the most common value. The ``freq`` is the most common value's
 |      frequency. Timestamps also include the ``first`` and ``last`` items.
 |      
 |      If multiple object values have the highest count, then the
 |      ``count`` and ``top`` results will be arbitrarily chosen from
 |      among those with the highest count.
 |      
 |      For mixed data types provided via a ``DataFrame``, the default is to
 |      return only an analysis of numeric columns. If the dataframe consists
 |      only of object and categorical data without any numeric columns, the
 |      default is to return an analysis of both the object and categorical
 |      columns. If ``include='all'`` is provided as an option, the result
 |      will include a union of attributes of each type.
 |      
 |      The `include` and `exclude` parameters can be used to limit
 |      which columns in a ``DataFrame`` are analyzed for the output.
 |      The parameters are ignored when analyzing a ``Series``.
 |      
 |      Examples
 |      --------
 |      Describing a numeric ``Series``.
 |      
 |      >>> s = pd.Series([1, 2, 3])
 |      >>> s.describe()
 |      count    3.0
 |      mean     2.0
 |      std      1.0
 |      min      1.0
 |      25%      1.5
 |      50%      2.0
 |      75%      2.5
 |      max      3.0
 |      dtype: float64
 |      
 |      Describing a categorical ``Series``.
 |      
 |      >>> s = pd.Series(['a', 'a', 'b', 'c'])
 |      >>> s.describe()
 |      count     4
 |      unique    3
 |      top       a
 |      freq      2
 |      dtype: object
 |      
 |      Describing a timestamp ``Series``.
 |      
 |      >>> s = pd.Series([
 |      ...   np.datetime64("2000-01-01"),
 |      ...   np.datetime64("2010-01-01"),
 |      ...   np.datetime64("2010-01-01")
 |      ... ])
 |      >>> s.describe()
 |      count                       3
 |      unique                      2
 |      top       2010-01-01 00:00:00
 |      freq                        2
 |      first     2000-01-01 00:00:00
 |      last      2010-01-01 00:00:00
 |      dtype: object
 |      
 |      Describing a ``DataFrame``. By default only numeric fields
 |      are returned.
 |      
 |      >>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']),
 |      ...                    'numeric': [1, 2, 3],
 |      ...                    'object': ['a', 'b', 'c']
 |      ...                   })
 |      >>> df.describe()
 |             numeric
 |      count      3.0
 |      mean       2.0
 |      std        1.0
 |      min        1.0
 |      25%        1.5
 |      50%        2.0
 |      75%        2.5
 |      max        3.0
 |      
 |      Describing all columns of a ``DataFrame`` regardless of data type.
 |      
 |      >>> df.describe(include='all')
 |              categorical  numeric object
 |      count            3      3.0      3
 |      unique           3      NaN      3
 |      top              f      NaN      c
 |      freq             1      NaN      1
 |      mean           NaN      2.0    NaN
 |      std            NaN      1.0    NaN
 |      min            NaN      1.0    NaN
 |      25%            NaN      1.5    NaN
 |      50%            NaN      2.0    NaN
 |      75%            NaN      2.5    NaN
 |      max            NaN      3.0    NaN
 |      
 |      Describing a column from a ``DataFrame`` by accessing it as
 |      an attribute.
 |      
 |      >>> df.numeric.describe()
 |      count    3.0
 |      mean     2.0
 |      std      1.0
 |      min      1.0
 |      25%      1.5
 |      50%      2.0
 |      75%      2.5
 |      max      3.0
 |      Name: numeric, dtype: float64
 |      
 |      Including only numeric columns in a ``DataFrame`` description.
 |      
 |      >>> df.describe(include=[np.number])
 |             numeric
 |      count      3.0
 |      mean       2.0
 |      std        1.0
 |      min        1.0
 |      25%        1.5
 |      50%        2.0
 |      75%        2.5
 |      max        3.0
 |      
 |      Including only string columns in a ``DataFrame`` description.
 |      
 |      >>> df.describe(include=[np.object])
 |             object
 |      count       3
 |      unique      3
 |      top         c
 |      freq        1
 |      
 |      Including only categorical columns from a ``DataFrame`` description.
 |      
 |      >>> df.describe(include=['category'])
 |             categorical
 |      count            3
 |      unique           3
 |      top              f
 |      freq             1
 |      
 |      Excluding numeric columns from a ``DataFrame`` description.
 |      
 |      >>> df.describe(exclude=[np.number])
 |             categorical object
 |      count            3      3
 |      unique           3      3
 |      top              f      c
 |      freq             1      1
 |      
 |      Excluding object columns from a ``DataFrame`` description.
 |      
 |      >>> df.describe(exclude=[np.object])
 |             categorical  numeric
 |      count            3      3.0
 |      unique           3      NaN
 |      top              f      NaN
 |      freq             1      NaN
 |      mean           NaN      2.0
 |      std            NaN      1.0
 |      min            NaN      1.0
 |      25%            NaN      1.5
 |      50%            NaN      2.0
 |      75%            NaN      2.5
 |      max            NaN      3.0
 |  
 |  droplevel(self, level, axis=0)
 |      Return DataFrame with requested index / column level(s) removed.
 |      
 |      .. versionadded:: 0.24.0
 |      
 |      Parameters
 |      ----------
 |      level : int, str, or list-like
 |          If a string is given, it must be the name of a level.
 |          If list-like, elements must be names or positional indexes
 |          of levels.
 |      
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |      
 |      Returns
 |      -------
 |      DataFrame
 |          DataFrame with requested index / column level(s) removed.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([
 |      ...     [1, 2, 3, 4],
 |      ...     [5, 6, 7, 8],
 |      ...     [9, 10, 11, 12]
 |      ... ]).set_index([0, 1]).rename_axis(['a', 'b'])
 |      
 |      >>> df.columns = pd.MultiIndex.from_tuples([
 |      ...    ('c', 'e'), ('d', 'f')
 |      ... ], names=['level_1', 'level_2'])
 |      
 |      >>> df
 |      level_1   c   d
 |      level_2   e   f
 |      a b
 |      1 2      3   4
 |      5 6      7   8
 |      9 10    11  12
 |      
 |      >>> df.droplevel('a')
 |      level_1   c   d
 |      level_2   e   f
 |      b
 |      2        3   4
 |      6        7   8
 |      10      11  12
 |      
 |      >>> df.droplevel('level_2', axis=1)
 |      level_1   c   d
 |      a b
 |      1 2      3   4
 |      5 6      7   8
 |      9 10    11  12
 |  
 |  equals(self, other)
 |      Test whether two objects contain the same elements.
 |      
 |      This function allows two Series or DataFrames to be compared against
 |      each other to see if they have the same shape and elements. NaNs in
 |      the same location are considered equal. The column headers do not
 |      need to have the same type, but the elements within the columns must
 |      be the same dtype.
 |      
 |      Parameters
 |      ----------
 |      other : Series or DataFrame
 |          The other Series or DataFrame to be compared with the first.
 |      
 |      Returns
 |      -------
 |      bool
 |          True if all elements are the same in both objects, False
 |          otherwise.
 |      
 |      See Also
 |      --------
 |      Series.eq : Compare two Series objects of the same length
 |          and return a Series where each element is True if the element
 |          in each Series is equal, False otherwise.
 |      DataFrame.eq : Compare two DataFrame objects of the same shape and
 |          return a DataFrame where each element is True if the respective
 |          element in each DataFrame is equal, False otherwise.
 |      assert_series_equal : Raises an AssertionError if left and right
 |          Series are not equal.
 |      assert_frame_equal : Raises an AssertionError if left and right
 |          DataFrames are not equal.
 |      numpy.array_equal : Return True if two arrays have the same shape
 |          and elements, False otherwise.
 |      
 |      Notes
 |      -----
 |      This function requires that the elements have the same dtype as their
 |      respective elements in the other Series or DataFrame. However, the
 |      column labels do not need to have the same type, as long as they are
 |      still considered equal.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({1: [10], 2: [20]})
 |      >>> df
 |          1   2
 |      0  10  20
 |      
 |      DataFrames df and exactly_equal have the same types and values for
 |      their elements and column labels, which will return True.
 |      
 |      >>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
 |      >>> exactly_equal
 |          1   2
 |      0  10  20
 |      >>> df.equals(exactly_equal)
 |      True
 |      
 |      DataFrames df and different_column_type have the same element
 |      types and values, but have different types for the column labels,
 |      which will still return True.
 |      
 |      >>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
 |      >>> different_column_type
 |         1.0  2.0
 |      0   10   20
 |      >>> df.equals(different_column_type)
 |      True
 |      
 |      DataFrames df and different_data_type have different types for the
 |      same values for their elements, and will return False even though
 |      their column labels are the same values and types.
 |      
 |      >>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
 |      >>> different_data_type
 |            1     2
 |      0  10.0  20.0
 |      >>> df.equals(different_data_type)
 |      False
 |  
 |  ffill(self, axis=None, inplace=False, limit=None, downcast=None)
 |      Synonym for :meth:`DataFrame.fillna` with ``method='ffill'``.
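 |      
 |      A short illustrative sketch of the synonym behaviour (added; not
 |      part of the original help text):
 |      
 |      >>> s = pd.Series([1.0, np.nan, 3.0])
 |      >>> s.ffill()
 |      0    1.0
 |      1    1.0
 |      2    3.0
 |      dtype: float64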
 |  
 |  filter(self, items=None, like=None, regex=None, axis=None)
 |      Subset rows or columns of dataframe according to labels in
 |      the specified index.
 |      
 |      Note that this routine does not filter a dataframe on its
 |      contents. The filter is applied to the labels of the index.
 |      
 |      Parameters
 |      ----------
 |      items : list-like
 |          Keep labels from axis which are in `items`.
 |      like : string
 |          Keep labels from axis for which ``like in label == True``.
 |      regex : string (regular expression)
 |          Keep labels from axis for which ``re.search(regex, label) == True``.
 |      axis : int or string axis name
 |          The axis to filter on.  By default this is the info axis,
 |          'index' for Series, 'columns' for DataFrame.
 |      
 |      Returns
 |      -------
 |      same type as input object
 |      
 |      See Also
 |      --------
 |      DataFrame.loc
 |      
 |      Notes
 |      -----
 |      The ``items``, ``like``, and ``regex`` parameters are
 |      enforced to be mutually exclusive.
 |      
 |      ``axis`` defaults to the info axis that is used when indexing
 |      with ``[]``.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame(np.array(([1,2,3], [4,5,6])),
 |      ...                   index=['mouse', 'rabbit'],
 |      ...                   columns=['one', 'two', 'three'])
 |      
 |      >>> # select columns by name
 |      >>> df.filter(items=['one', 'three'])
 |               one  three
 |      mouse     1      3
 |      rabbit    4      6
 |      
 |      >>> # select columns by regular expression
 |      >>> df.filter(regex='e$', axis=1)
 |               one  three
 |      mouse     1      3
 |      rabbit    4      6
 |      
 |      >>> # select rows containing 'bbi'
 |      >>> df.filter(like='bbi', axis=0)
 |               one  two  three
 |      rabbit    4    5      6
 |  
 |  first(self, offset)
 |      Convenience method for subsetting initial periods of time series data
 |      based on a date offset.
 |      
 |      Parameters
 |      ----------
 |      offset : string, DateOffset, dateutil.relativedelta
 |      
 |      Returns
 |      -------
 |      subset : same type as caller
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the index is not a :class:`DatetimeIndex`
 |      
 |      See Also
 |      --------
 |      last : Select final periods of time series based on a date offset.
 |      at_time : Select values at a particular time of the day.
 |      between_time : Select values between particular times of the day.
 |      
 |      Examples
 |      --------
 |      >>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
 |      >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i)
 |      >>> ts
 |                  A
 |      2018-04-09  1
 |      2018-04-11  2
 |      2018-04-13  3
 |      2018-04-15  4
 |      
 |      Get the rows for the first 3 days:
 |      
 |      >>> ts.first('3D')
 |                  A
 |      2018-04-09  1
 |      2018-04-11  2
 |      
 |      Notice the data for the first 3 calendar days was returned, not the
 |      first 3 days observed in the dataset, and therefore data for
 |      2018-04-13 was not returned.
 |  
 |  first_valid_index(self)
 |      Return index for first non-NA/null value.
 |      
 |      Returns
 |      -------
 |      scalar : type of index
 |      
 |      Notes
 |      -----
 |      If all elements are NA/null, returns None.
 |      Also returns None for empty NDFrame.
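 |      
 |      Examples
 |      --------
 |      A small illustrative sketch (added; not part of the original
 |      help text):
 |      
 |      >>> s = pd.Series([np.nan, 2.0, np.nan])
 |      >>> s.first_valid_index()
 |      1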
 |  
 |  get(self, key, default=None)
 |      Get item from object for given key (DataFrame column, Panel slice,
 |      etc.). Returns default value if not found.
 |      
 |      Parameters
 |      ----------
 |      key : object
 |      
 |      Returns
 |      -------
 |      value : same type as items contained in object
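 |      
 |      Examples
 |      --------
 |      A minimal sketch of a column lookup (added; not part of the
 |      original help text):
 |      
 |      >>> df = pd.DataFrame({'a': [1, 2]})
 |      >>> df.get('a')
 |      0    1
 |      1    2
 |      Name: a, dtype: int64
 |      >>> df.get('missing', default='not found')
 |      'not found'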
 |  
 |  get_dtype_counts(self)
 |      Return counts of unique dtypes in this object.
 |      
 |      Returns
 |      -------
 |      dtype : Series
 |          Series with the count of columns with each dtype.
 |      
 |      See Also
 |      --------
 |      dtypes : Return the dtypes in this object.
 |      
 |      Examples
 |      --------
 |      >>> a = [['a', 1, 1.0], ['b', 2, 2.0], ['c', 3, 3.0]]
 |      >>> df = pd.DataFrame(a, columns=['str', 'int', 'float'])
 |      >>> df
 |        str  int  float
 |      0   a    1    1.0
 |      1   b    2    2.0
 |      2   c    3    3.0
 |      
 |      >>> df.get_dtype_counts()
 |      float64    1
 |      int64      1
 |      object     1
 |      dtype: int64
 |  
 |  get_ftype_counts(self)
 |      Return counts of unique ftypes in this object.
 |      
 |      .. deprecated:: 0.23.0
 |      
 |      This is useful for SparseDataFrame or for DataFrames containing
 |      sparse arrays.
 |      
 |      Returns
 |      -------
 |      dtype : Series
 |          Series with the count of columns with each type and
 |          sparsity (dense/sparse)
 |      
 |      See Also
 |      --------
 |      ftypes : Return ftypes (indication of sparse/dense and dtype) in
 |          this object.
 |      
 |      Examples
 |      --------
 |      >>> a = [['a', 1, 1.0], ['b', 2, 2.0], ['c', 3, 3.0]]
 |      >>> df = pd.DataFrame(a, columns=['str', 'int', 'float'])
 |      >>> df
 |        str  int  float
 |      0   a    1    1.0
 |      1   b    2    2.0
 |      2   c    3    3.0
 |      
 |      >>> df.get_ftype_counts()  # doctest: +SKIP
 |      float64:dense    1
 |      int64:dense      1
 |      object:dense     1
 |      dtype: int64
 |  
 |  get_values(self)
 |      Return an ndarray after converting sparse values to dense.
 |      
 |      This is the same as ``.values`` for non-sparse data. For sparse
 |      data contained in a `pandas.SparseArray`, the data are first
 |      converted to a dense representation.
 |      
 |      Returns
 |      -------
 |      numpy.ndarray
 |          Numpy representation of DataFrame
 |      
 |      See Also
 |      --------
 |      values : Numpy representation of DataFrame.
 |      pandas.SparseArray : Container for sparse data.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'a': [1, 2], 'b': [True, False],
 |      ...                    'c': [1.0, 2.0]})
 |      >>> df
 |         a      b    c
 |      0  1   True  1.0
 |      1  2  False  2.0
 |      
 |      >>> df.get_values()
 |      array([[1, True, 1.0], [2, False, 2.0]], dtype=object)
 |      
 |      >>> df = pd.DataFrame({"a": pd.SparseArray([1, None, None]),
 |      ...                    "c": [1.0, 2.0, 3.0]})
 |      >>> df
 |           a    c
 |      0  1.0  1.0
 |      1  NaN  2.0
 |      2  NaN  3.0
 |      
 |      >>> df.get_values()
 |      array([[ 1.,  1.],
 |             [nan,  2.],
 |             [nan,  3.]])
 |  
 |  groupby(self, by=None, axis=0, level=None, as_index=True, sort=True, group_keys=True, squeeze=False, observed=False, **kwargs)
 |      Group DataFrame or Series using a mapper or by a Series of columns.
 |      
 |      A groupby operation involves some combination of splitting the
 |      object, applying a function, and combining the results. This can be
 |      used to group large amounts of data and compute operations on these
 |      groups.
 |      
 |      Parameters
 |      ----------
 |      by : mapping, function, label, or list of labels
 |          Used to determine the groups for the groupby.
 |          If ``by`` is a function, it's called on each value of the object's
 |          index. If a dict or Series is passed, the Series or dict VALUES
 |          will be used to determine the groups (the Series' values are first
 |          aligned; see ``.align()`` method). If an ndarray is passed, the
 |          values are used as-is to determine the groups. A label or list of
 |          labels may be passed to group by the columns in ``self``. Notice
 |          that a tuple is interpreted as a (single) key.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Split along rows (0) or columns (1).
 |      level : int, level name, or sequence of such, default None
 |          If the axis is a MultiIndex (hierarchical), group by a particular
 |          level or levels.
 |      as_index : bool, default True
 |          For aggregated output, return object with group labels as the
 |          index. Only relevant for DataFrame input. as_index=False is
 |          effectively "SQL-style" grouped output.
 |      sort : bool, default True
 |          Sort group keys. Get better performance by turning this off.
 |          Note this does not influence the order of observations within each
 |          group. Groupby preserves the order of rows within each group.
 |      group_keys : bool, default True
 |          When calling apply, add group keys to index to identify pieces.
 |      squeeze : bool, default False
 |          Reduce the dimensionality of the return type if possible,
 |          otherwise return a consistent type.
 |      observed : bool, default False
 |          This only applies if any of the groupers are Categoricals.
 |          If True: only show observed values for categorical groupers.
 |          If False: show all values for categorical groupers.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      **kwargs
 |          Optional, only accepts keyword argument 'mutated' and is passed
 |          to groupby.
 |      
 |      Returns
 |      -------
 |      DataFrameGroupBy or SeriesGroupBy
 |          Depends on the calling object and returns groupby object that
 |          contains information about the groups.
 |      
 |      See Also
 |      --------
 |      resample : Convenience method for frequency conversion and resampling
 |          of time series.
 |      
 |      Notes
 |      -----
 |      See the `user guide
 |      <http://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'Animal' : ['Falcon', 'Falcon',
 |      ...                                'Parrot', 'Parrot'],
 |      ...                    'Max Speed' : [380., 370., 24., 26.]})
 |      >>> df
 |         Animal  Max Speed
 |      0  Falcon      380.0
 |      1  Falcon      370.0
 |      2  Parrot       24.0
 |      3  Parrot       26.0
 |      >>> df.groupby(['Animal']).mean()
 |              Max Speed
 |      Animal
 |      Falcon      375.0
 |      Parrot       25.0
 |      
 |      **Hierarchical Indexes**
 |      
 |      We can groupby different levels of a hierarchical index
 |      using the `level` parameter:
 |      
 |      >>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
 |      ...           ['Captive', 'Wild', 'Captive', 'Wild']]
 |      >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
 |      >>> df = pd.DataFrame({'Max Speed' : [390., 350., 30., 20.]},
 |      ...                    index=index)
 |      >>> df
 |                      Max Speed
 |      Animal Type
 |      Falcon Captive      390.0
 |             Wild         350.0
 |      Parrot Captive       30.0
 |             Wild          20.0
 |      >>> df.groupby(level=0).mean()
 |              Max Speed
 |      Animal
 |      Falcon      370.0
 |      Parrot       25.0
 |      >>> df.groupby(level=1).mean()
 |               Max Speed
 |      Type
 |      Captive      210.0
 |      Wild         185.0
 |  
 |  head(self, n=5)
 |      Return the first `n` rows.
 |      
 |      This function returns the first `n` rows for the object based
 |      on position. It is useful for quickly testing if your object
 |      has the right type of data in it.
 |      
 |      Parameters
 |      ----------
 |      n : int, default 5
 |          Number of rows to select.
 |      
 |      Returns
 |      -------
 |      obj_head : same type as caller
 |          The first `n` rows of the caller object.
 |      
 |      See Also
 |      --------
 |      DataFrame.tail: Returns the last `n` rows.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'animal':['alligator', 'bee', 'falcon', 'lion',
 |      ...                    'monkey', 'parrot', 'shark', 'whale', 'zebra']})
 |      >>> df
 |            animal
 |      0  alligator
 |      1        bee
 |      2     falcon
 |      3       lion
 |      4     monkey
 |      5     parrot
 |      6      shark
 |      7      whale
 |      8      zebra
 |      
 |      Viewing the first 5 lines
 |      
 |      >>> df.head()
 |            animal
 |      0  alligator
 |      1        bee
 |      2     falcon
 |      3       lion
 |      4     monkey
 |      
 |      Viewing the first `n` lines (three in this case)
 |      
 |      >>> df.head(3)
 |            animal
 |      0  alligator
 |      1        bee
 |      2     falcon
 |  
 |  infer_objects(self)
 |      Attempt to infer better dtypes for object columns.
 |      
 |      Attempts soft conversion of object-dtyped
 |      columns, leaving non-object and unconvertible
 |      columns unchanged. The inference rules are the
 |      same as during normal Series/DataFrame construction.
 |      
 |      .. versionadded:: 0.21.0
 |      
 |      Returns
 |      -------
 |      converted : same type as input object
 |      
 |      See Also
 |      --------
 |      to_datetime : Convert argument to datetime.
 |      to_timedelta : Convert argument to timedelta.
 |      to_numeric : Convert argument to numeric type.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
 |      >>> df = df.iloc[1:]
 |      >>> df
 |         A
 |      1  1
 |      2  2
 |      3  3
 |      
 |      >>> df.dtypes
 |      A    object
 |      dtype: object
 |      
 |      >>> df.infer_objects().dtypes
 |      A    int64
 |      dtype: object
 |  
 |  interpolate(self, method='linear', axis=0, limit=None, inplace=False, limit_direction='forward', limit_area=None, downcast=None, **kwargs)
 |      Interpolate values according to different methods.
 |      
 |      Please note that only ``method='linear'`` is supported for
 |      DataFrame/Series with a MultiIndex.
 |      
 |      Parameters
 |      ----------
 |      method : str, default 'linear'
 |          Interpolation technique to use. One of:
 |      
 |          * 'linear': Ignore the index and treat the values as equally
 |            spaced. This is the only method supported on MultiIndexes.
 |          * 'time': Works on daily and higher resolution data to interpolate
 |            given length of interval.
 |          * 'index', 'values': use the actual numerical values of the index.
 |          * 'pad': Fill in NaNs using existing values.
 |          * 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'spline',
 |            'barycentric', 'polynomial': Passed to
 |            `scipy.interpolate.interp1d`. Both 'polynomial' and 'spline'
 |            require that you also specify an `order` (int),
 |            e.g. ``df.interpolate(method='polynomial', order=4)``.
 |            These use the numerical values of the index.
 |          * 'krogh', 'piecewise_polynomial', 'spline', 'pchip', 'akima':
 |            Wrappers around the SciPy interpolation methods of similar
 |            names. See `Notes`.
 |          * 'from_derivatives': Refers to
 |            `scipy.interpolate.BPoly.from_derivatives` which
 |            replaces 'piecewise_polynomial' interpolation method in
 |            scipy 0.18.
 |      
 |          .. versionadded:: 0.18.1
 |      
 |             Added support for the 'akima' method.
 |             Added interpolate method 'from_derivatives' which replaces
 |             'piecewise_polynomial' in SciPy 0.18; backwards-compatible with
 |             SciPy < 0.18
 |      
 |      axis : {0 or 'index', 1 or 'columns', None}, default None
 |          Axis to interpolate along.
 |      limit : int, optional
 |          Maximum number of consecutive NaNs to fill. Must be greater than
 |          0.
 |      inplace : bool, default False
 |          Update the data in place if possible.
 |      limit_direction : {'forward', 'backward', 'both'}, default 'forward'
 |          If limit is specified, consecutive NaNs will be filled in this
 |          direction.
 |      limit_area : {`None`, 'inside', 'outside'}, default None
 |          If limit is specified, consecutive NaNs will be filled with this
 |          restriction.
 |      
 |          * ``None``: No fill restriction.
 |          * 'inside': Only fill NaNs surrounded by valid values
 |            (interpolate).
 |          * 'outside': Only fill NaNs outside valid values (extrapolate).
 |      
 |          .. versionadded:: 0.21.0
 |      
 |      downcast : optional, 'infer' or None, defaults to None
 |          Downcast dtypes if possible.
 |      **kwargs
 |          Keyword arguments to pass on to the interpolating function.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Returns the same object type as the caller, interpolated at
 |          some or all ``NaN`` values
 |      
 |      See Also
 |      --------
 |      fillna : Fill missing values using different methods.
 |      scipy.interpolate.Akima1DInterpolator : Piecewise cubic polynomials
 |          (Akima interpolator).
 |      scipy.interpolate.BPoly.from_derivatives : Piecewise polynomial in the
 |          Bernstein basis.
 |      scipy.interpolate.interp1d : Interpolate a 1-D function.
 |      scipy.interpolate.KroghInterpolator : Interpolate polynomial (Krogh
 |          interpolator).
 |      scipy.interpolate.PchipInterpolator : PCHIP 1-d monotonic cubic
 |          interpolation.
 |      scipy.interpolate.CubicSpline : Cubic spline data interpolator.
 |      
 |      Notes
 |      -----
 |      The 'krogh', 'piecewise_polynomial', 'spline', 'pchip' and 'akima'
 |      methods are wrappers around the respective SciPy implementations of
 |      similar names. These use the actual numerical values of the index.
 |      For more information on their behavior, see the
 |      `SciPy documentation
 |      <http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation>`__
 |      and `SciPy tutorial
 |      <http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html>`__.
 |      
 |      Examples
 |      --------
 |      Filling in ``NaN`` in a :class:`~pandas.Series` via linear
 |      interpolation.
 |      
 |      >>> s = pd.Series([0, 1, np.nan, 3])
 |      >>> s
 |      0    0.0
 |      1    1.0
 |      2    NaN
 |      3    3.0
 |      dtype: float64
 |      >>> s.interpolate()
 |      0    0.0
 |      1    1.0
 |      2    2.0
 |      3    3.0
 |      dtype: float64
 |      
 |      Filling in ``NaN`` in a Series by padding, but filling at most two
 |      consecutive ``NaN`` at a time.
 |      
 |      >>> s = pd.Series([np.nan, "single_one", np.nan,
 |      ...                "fill_two_more", np.nan, np.nan, np.nan,
 |      ...                4.71, np.nan])
 |      >>> s
 |      0              NaN
 |      1       single_one
 |      2              NaN
 |      3    fill_two_more
 |      4              NaN
 |      5              NaN
 |      6              NaN
 |      7             4.71
 |      8              NaN
 |      dtype: object
 |      >>> s.interpolate(method='pad', limit=2)
 |      0              NaN
 |      1       single_one
 |      2       single_one
 |      3    fill_two_more
 |      4    fill_two_more
 |      5    fill_two_more
 |      6              NaN
 |      7             4.71
 |      8             4.71
 |      dtype: object
 |      
 |      Filling in ``NaN`` in a Series via polynomial interpolation or splines:
 |      Both 'polynomial' and 'spline' methods require that you also specify
 |      an ``order`` (int).
 |      
 |      >>> s = pd.Series([0, 2, np.nan, 8])
 |      >>> s.interpolate(method='polynomial', order=2)
 |      0    0.000000
 |      1    2.000000
 |      2    4.666667
 |      3    8.000000
 |      dtype: float64
 |      
 |      Fill the DataFrame forward (that is, going down) along each column
 |      using linear interpolation.
 |      
 |      Note how the last entry in column 'a' is interpolated differently,
 |      because there is no entry after it to use for interpolation.
 |      Note how the first entry in column 'b' remains ``NaN``, because there
 |      is no entry before it to use for interpolation.
 |      
 |      >>> df = pd.DataFrame([(0.0,  np.nan, -1.0, 1.0),
 |      ...                    (np.nan, 2.0, np.nan, np.nan),
 |      ...                    (2.0, 3.0, np.nan, 9.0),
 |      ...                    (np.nan, 4.0, -4.0, 16.0)],
 |      ...                   columns=list('abcd'))
 |      >>> df
 |           a    b    c     d
 |      0  0.0  NaN -1.0   1.0
 |      1  NaN  2.0  NaN   NaN
 |      2  2.0  3.0  NaN   9.0
 |      3  NaN  4.0 -4.0  16.0
 |      >>> df.interpolate(method='linear', limit_direction='forward', axis=0)
 |           a    b    c     d
 |      0  0.0  NaN -1.0   1.0
 |      1  1.0  2.0 -2.0   5.0
 |      2  2.0  3.0 -3.0   9.0
 |      3  2.0  4.0 -4.0  16.0
 |      
 |      Using polynomial interpolation.
 |      
 |      >>> df['d'].interpolate(method='polynomial', order=2)
 |      0     1.0
 |      1     4.0
 |      2     9.0
 |      3    16.0
 |      Name: d, dtype: float64
 |  
 |  keys(self)
 |      Get the 'info axis' (see Indexing for more)
 |      
 |      This is index for Series, columns for DataFrame and major_axis for
 |      Panel.
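 |      
 |      A short illustrative sketch (added; not part of the original
 |      help text):
 |      
 |      >>> df = pd.DataFrame({'a': [1], 'b': [2]})
 |      >>> df.keys()
 |      Index(['a', 'b'], dtype='object')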
 |  
 |  last(self, offset)
 |      Convenience method for subsetting final periods of time series data
 |      based on a date offset.
 |      
 |      Parameters
 |      ----------
 |      offset : string, DateOffset, dateutil.relativedelta
 |      
 |      Returns
 |      -------
 |      subset : same type as caller
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the index is not a :class:`DatetimeIndex`
 |      
 |      See Also
 |      --------
 |      first : Select initial periods of time series based on a date offset.
 |      at_time : Select values at a particular time of the day.
 |      between_time : Select values between particular times of the day.
 |      
 |      Examples
 |      --------
 |      >>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
 |      >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i)
 |      >>> ts
 |                  A
 |      2018-04-09  1
 |      2018-04-11  2
 |      2018-04-13  3
 |      2018-04-15  4
 |      
 |      Get the rows for the last 3 days:
 |      
 |      >>> ts.last('3D')
 |                  A
 |      2018-04-13  3
 |      2018-04-15  4
 |      
 |      Notice the data for the last 3 calendar days was returned, not the
 |      last 3 observed days in the dataset, and therefore data for
 |      2018-04-11 was not returned.
 |  
 |  last_valid_index(self)
 |      Return index for last non-NA/null value.
 |      
 |      Returns
 |      -------
 |      scalar : type of index
 |      
 |      Notes
 |      -----
 |      If all elements are NA/null, returns None.
 |      Also returns None for empty NDFrame.
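 |      
 |      Examples
 |      --------
 |      A small illustrative sketch (added; not part of the original
 |      help text):
 |      
 |      >>> s = pd.Series([np.nan, 2.0, np.nan])
 |      >>> s.last_valid_index()
 |      1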
 |  
 |  mask(self, cond, other=nan, inplace=False, axis=None, level=None, errors='raise', try_cast=False, raise_on_error=None)
 |      Replace values where the condition is True.
 |      
 |      Parameters
 |      ----------
 |      cond : boolean NDFrame, array-like, or callable
 |          Where `cond` is False, keep the original value. Where
 |          True, replace with corresponding value from `other`.
 |          If `cond` is callable, it is computed on the NDFrame and
 |          should return boolean NDFrame or array. The callable must
 |          not change input NDFrame (though pandas doesn't check it).
 |      
 |          .. versionadded:: 0.18.1
 |              A callable can be used as cond.
 |      
 |      other : scalar, NDFrame, or callable
 |          Entries where `cond` is True are replaced with
 |          corresponding value from `other`.
 |          If other is callable, it is computed on the NDFrame and
 |          should return scalar or NDFrame. The callable must not
 |          change input NDFrame (though pandas doesn't check it).
 |      
 |          .. versionadded:: 0.18.1
 |              A callable can be used as other.
 |      
 |      inplace : boolean, default False
 |          Whether to perform the operation in place on the data.
 |      axis : int, default None
 |          Alignment axis if needed.
 |      level : int, default None
 |          Alignment level if needed.
 |      errors : str, {'raise', 'ignore'}, default `raise`
 |          Note that currently this parameter won't affect
 |          the results and will always coerce to a suitable dtype.
 |      
 |          - `raise` : allow exceptions to be raised.
 |          - `ignore` : suppress exceptions. On error return original object.
 |      
 |      try_cast : boolean, default False
 |          Try to cast the result back to the input type (if possible).
 |      raise_on_error : boolean, default True
 |          Whether to raise on invalid data types (e.g. trying to where on
 |          strings).
 |      
 |          .. deprecated:: 0.21.0
 |      
 |             Use `errors`.
 |      
 |      Returns
 |      -------
 |      wh : same type as caller
 |      
 |      See Also
 |      --------
 |      :func:`DataFrame.where` : Return an object of same shape as
 |          self.
 |      
 |      Notes
 |      -----
 |      The mask method is an application of the if-then idiom. For each
 |      element in the calling DataFrame, if ``cond`` is ``False`` the
 |      element is used; otherwise the corresponding element from the DataFrame
 |      ``other`` is used.
 |      
 |      The signature for :func:`DataFrame.where` differs from
 |      :func:`numpy.where`. Roughly ``df1.where(m, df2)`` is equivalent to
 |      ``np.where(m, df1, df2)``.
 |      
 |      For further details and examples see the ``mask`` documentation in
 |      :ref:`indexing <indexing.where_mask>`.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series(range(5))
 |      >>> s.where(s > 0)
 |      0    NaN
 |      1    1.0
 |      2    2.0
 |      3    3.0
 |      4    4.0
 |      dtype: float64
 |      
 |      >>> s.mask(s > 0)
 |      0    0.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      >>> s.where(s > 1, 10)
 |      0    10
 |      1    10
 |      2    2
 |      3    3
 |      4    4
 |      dtype: int64
 |      
 |      >>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
 |      >>> m = df % 3 == 0
 |      >>> df.where(m, -df)
 |         A  B
 |      0  0 -1
 |      1 -2  3
 |      2 -4 -5
 |      3  6 -7
 |      4 -8  9
 |      >>> df.where(m, -df) == np.where(m, df, -df)
 |            A     B
 |      0  True  True
 |      1  True  True
 |      2  True  True
 |      3  True  True
 |      4  True  True
 |      >>> df.where(m, -df) == df.mask(~m, -df)
 |            A     B
 |      0  True  True
 |      1  True  True
 |      2  True  True
 |      3  True  True
 |      4  True  True
 |  
 |  pct_change(self, periods=1, fill_method='pad', limit=None, freq=None, **kwargs)
 |      Percentage change between the current and a prior element.
 |      
 |      Computes the percentage change from the immediately previous row by
 |      default. This is useful in comparing the percentage of change in a time
 |      series of elements.
 |      
 |      Parameters
 |      ----------
 |      periods : int, default 1
 |          Periods to shift for forming percent change.
 |      fill_method : str, default 'pad'
 |          How to handle NAs before computing percent changes.
 |      limit : int, default None
 |          The number of consecutive NAs to fill before stopping.
 |      freq : DateOffset, timedelta, or offset alias string, optional
 |          Increment to use from time series API (e.g. 'M' or BDay()).
 |      **kwargs
 |          Additional keyword arguments are passed into
 |          `DataFrame.shift` or `Series.shift`.
 |      
 |      Returns
 |      -------
 |      chg : Series or DataFrame
 |          The same type as the calling object.
 |      
 |      See Also
 |      --------
 |      Series.diff : Compute the difference of two elements in a Series.
 |      DataFrame.diff : Compute the difference of two elements in a DataFrame.
 |      Series.shift : Shift the index by some number of periods.
 |      DataFrame.shift : Shift the index by some number of periods.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([90, 91, 85])
 |      >>> s
 |      0    90
 |      1    91
 |      2    85
 |      dtype: int64
 |      
 |      >>> s.pct_change()
 |      0         NaN
 |      1    0.011111
 |      2   -0.065934
 |      dtype: float64
 |      
 |      >>> s.pct_change(periods=2)
 |      0         NaN
 |      1         NaN
 |      2   -0.055556
 |      dtype: float64
 |      
 |      Percentage change in a Series where NAs are filled by carrying the
 |      last valid observation forward to the next valid one.
 |      
 |      >>> s = pd.Series([90, 91, None, 85])
 |      >>> s
 |      0    90.0
 |      1    91.0
 |      2     NaN
 |      3    85.0
 |      dtype: float64
 |      
 |      >>> s.pct_change(fill_method='ffill')
 |      0         NaN
 |      1    0.011111
 |      2    0.000000
 |      3   -0.065934
 |      dtype: float64
 |      
 |      **DataFrame**
 |      
 |      Percentage change in French franc, Deutsche Mark, and Italian lira from
 |      1980-01-01 to 1980-03-01.
 |      
 |      >>> df = pd.DataFrame({
 |      ...     'FR': [4.0405, 4.0963, 4.3149],
 |      ...     'GR': [1.7246, 1.7482, 1.8519],
 |      ...     'IT': [804.74, 810.01, 860.13]},
 |      ...     index=['1980-01-01', '1980-02-01', '1980-03-01'])
 |      >>> df
 |                      FR      GR      IT
 |      1980-01-01  4.0405  1.7246  804.74
 |      1980-02-01  4.0963  1.7482  810.01
 |      1980-03-01  4.3149  1.8519  860.13
 |      
 |      >>> df.pct_change()
 |                        FR        GR        IT
 |      1980-01-01       NaN       NaN       NaN
 |      1980-02-01  0.013810  0.013684  0.006549
 |      1980-03-01  0.053365  0.059318  0.061876
 |      
 |      Percentage change in GOOG and APPL stock volume. This shows how to
 |      compute the percentage change between columns.
 |      
 |      >>> df = pd.DataFrame({
 |      ...     '2016': [1769950, 30586265],
 |      ...     '2015': [1500923, 40912316],
 |      ...     '2014': [1371819, 41403351]},
 |      ...     index=['GOOG', 'APPL'])
 |      >>> df
 |                2016      2015      2014
 |      GOOG   1769950   1500923   1371819
 |      APPL  30586265  40912316  41403351
 |      
 |      >>> df.pct_change(axis='columns')
 |            2016      2015      2014
 |      GOOG   NaN -0.151997 -0.086016
 |      APPL   NaN  0.337604  0.012002
 |  
 |  pipe(self, func, *args, **kwargs)
 |      Apply ``func(self, *args, **kwargs)``.
 |      
 |      Parameters
 |      ----------
 |      func : function
 |          function to apply to the NDFrame.
 |          ``args``, and ``kwargs`` are passed into ``func``.
 |          Alternatively a ``(callable, data_keyword)`` tuple where
 |          ``data_keyword`` is a string indicating the keyword of
 |          ``callable`` that expects the NDFrame.
 |      args : iterable, optional
 |          positional arguments passed into ``func``.
 |      kwargs : mapping, optional
 |          a dictionary of keyword arguments passed into ``func``.
 |      
 |      Returns
 |      -------
 |      object : the return type of ``func``.
 |      
 |      See Also
 |      --------
 |      DataFrame.apply
 |      DataFrame.applymap
 |      Series.map
 |      
 |      Notes
 |      -----
 |      
 |      Use ``.pipe`` when chaining together functions that expect
 |      Series, DataFrames or GroupBy objects. Instead of writing
 |      
 |      >>> f(g(h(df), arg1=a), arg2=b, arg3=c)
 |      
 |      You can write
 |      
 |      >>> (df.pipe(h)
 |      ...    .pipe(g, arg1=a)
 |      ...    .pipe(f, arg2=b, arg3=c)
 |      ... )
 |      
 |      If you have a function that takes the data as (say) the second
 |      argument, pass a tuple indicating which keyword expects the
 |      data. For example, suppose ``f`` takes its data as ``arg2``:
 |      
 |      >>> (df.pipe(h)
 |      ...    .pipe(g, arg1=a)
 |      ...    .pipe((f, 'arg2'), arg1=a, arg3=c)
 |      ...  )
 |  
 |  pop(self, item)
 |      Return item and drop from frame. Raise KeyError if not found.
 |      
 |      Parameters
 |      ----------
 |      item : str
 |          Column label to be popped
 |      
 |      Returns
 |      -------
 |      popped : Series
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([('falcon', 'bird',    389.0),
 |      ...                    ('parrot', 'bird',     24.0),
 |      ...                    ('lion',   'mammal',   80.5),
 |      ...                    ('monkey', 'mammal', np.nan)],
 |      ...                   columns=('name', 'class', 'max_speed'))
 |      >>> df
 |           name   class  max_speed
 |      0  falcon    bird      389.0
 |      1  parrot    bird       24.0
 |      2    lion  mammal       80.5
 |      3  monkey  mammal        NaN
 |      
 |      >>> df.pop('class')
 |      0      bird
 |      1      bird
 |      2    mammal
 |      3    mammal
 |      Name: class, dtype: object
 |      
 |      >>> df
 |           name  max_speed
 |      0  falcon      389.0
 |      1  parrot       24.0
 |      2    lion       80.5
 |      3  monkey        NaN
 |  
 |  rank(self, axis=0, method='average', numeric_only=None, na_option='keep', ascending=True, pct=False)
 |      Compute numerical data ranks (1 through n) along axis. Equal values are
 |      assigned a rank that is the average of the ranks of those values.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          index to direct ranking
 |      method : {'average', 'min', 'max', 'first', 'dense'}
 |          * average: average rank of group
 |          * min: lowest rank in group
 |          * max: highest rank in group
 |          * first: ranks assigned in order they appear in the array
 |          * dense: like 'min', but rank always increases by 1 between groups
 |      numeric_only : boolean, default None
 |          Include only float, int, boolean data. Valid only for DataFrame or
 |          Panel objects
 |      na_option : {'keep', 'top', 'bottom'}
 |          * keep: leave NA values where they are
 |          * top: smallest rank if ascending
 |          * bottom: smallest rank if descending
 |      ascending : boolean, default True
 |          False for ranks by high (1) to low (N)
 |      pct : boolean, default False
 |          Computes percentage rank of data
 |      
 |      Returns
 |      -------
 |      ranks : same type as caller
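 |      
 |      Examples
 |      --------
 |      An illustrative sketch (added; not part of the original help
 |      text), showing average and dense ranking of tied values:
 |      
 |      >>> s = pd.Series([3, 1, 4, 1])
 |      >>> s.rank()
 |      0    3.0
 |      1    1.5
 |      2    4.0
 |      3    1.5
 |      dtype: float64
 |      >>> s.rank(method='dense')
 |      0    2.0
 |      1    1.0
 |      2    3.0
 |      3    1.0
 |      dtype: float64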
 |  
 |  reindex_like(self, other, method=None, copy=True, limit=None, tolerance=None)
 |      Return an object with matching indices as other object.
 |      
 |      Conform the object to the same index on all axes. Optional
 |      filling logic, placing NaN in locations having no value
 |      in the previous index. A new object is produced unless the
 |      new index is equivalent to the current one and copy=False.
 |      
 |      Parameters
 |      ----------
 |      other : Object of the same data type
 |          Its row and column indices are used to define the new indices
 |          of this object.
 |      method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}
 |          Method to use for filling holes in reindexed DataFrame.
 |          Please note: this is only applicable to DataFrames/Series with a
 |          monotonically increasing/decreasing index.
 |      
 |          * None (default): don't fill gaps
 |          * pad / ffill: propagate last valid observation forward to next
 |            valid
 |          * backfill / bfill: use next valid observation to fill gap
 |          * nearest: use nearest valid observations to fill gap
 |      
 |      copy : bool, default True
 |          Return a new object, even if the passed indexes are the same.
 |      limit : int, default None
 |          Maximum number of consecutive labels to fill for inexact matches.
 |      tolerance : optional
 |          Maximum distance between original and new labels for inexact
 |          matches. The values of the index at the matching locations must
 |          satisfy the equation ``abs(index[indexer] - target) <= tolerance``.
 |      
 |          Tolerance may be a scalar value, which applies the same tolerance
 |          to all values, or list-like, which applies variable tolerance per
 |          element. List-like includes list, tuple, array, Series, and must be
 |          the same size as the index and its dtype must exactly match the
 |          index's type.
 |      
 |          .. versionadded:: 0.21.0 (list-like tolerance)
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Same type as caller, but with changed indices on each axis.
 |      
 |      See Also
 |      --------
 |      DataFrame.set_index : Set row labels.
 |      DataFrame.reset_index : Remove row labels or move them to new columns.
 |      DataFrame.reindex : Change to new indices or expand indices.
 |      
 |      Notes
 |      -----
 |      Same as calling
 |      ``.reindex(index=other.index, columns=other.columns,...)``.
 |      
 |      Examples
 |      --------
 |      >>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
 |      ...                     [31, 87.8, 'high'],
 |      ...                     [22, 71.6, 'medium'],
 |      ...                     [35, 95, 'medium']],
 |      ...     columns=['temp_celsius', 'temp_fahrenheit', 'windspeed'],
 |      ...     index=pd.date_range(start='2014-02-12',
 |      ...                         end='2014-02-15', freq='D'))
 |      
 |      >>> df1
 |                  temp_celsius  temp_fahrenheit windspeed
 |      2014-02-12          24.3             75.7      high
 |      2014-02-13          31.0             87.8      high
 |      2014-02-14          22.0             71.6    medium
 |      2014-02-15          35.0             95.0    medium
 |      
 |      >>> df2 = pd.DataFrame([[28, 'low'],
 |      ...                     [30, 'low'],
 |      ...                     [35.1, 'medium']],
 |      ...     columns=['temp_celsius', 'windspeed'],
 |      ...     index=pd.DatetimeIndex(['2014-02-12', '2014-02-13',
 |      ...                             '2014-02-15']))
 |      
 |      >>> df2
 |                  temp_celsius windspeed
 |      2014-02-12          28.0       low
 |      2014-02-13          30.0       low
 |      2014-02-15          35.1    medium
 |      
 |      >>> df2.reindex_like(df1)
 |                  temp_celsius  temp_fahrenheit windspeed
 |      2014-02-12          28.0              NaN       low
 |      2014-02-13          30.0              NaN       low
 |      2014-02-14           NaN              NaN       NaN
 |      2014-02-15          35.1              NaN    medium
 |  
 |  rename_axis(self, mapper=None, index=None, columns=None, axis=None, copy=True, inplace=False)
 |      Set the name of the axis for the index or columns.
 |      
 |      Parameters
 |      ----------
 |      mapper : scalar, list-like, optional
 |          Value to set the axis name attribute.
 |      index, columns : scalar, list-like, dict-like or function, optional
 |          A scalar, list-like, dict-like or function to apply to
 |          that axis' values.
 |      
 |          Use either ``mapper`` and ``axis`` to
 |          specify the axis to target with ``mapper``, or ``index``
 |          and/or ``columns``.
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to rename.
 |      copy : bool, default True
 |          Also copy underlying data.
 |      inplace : bool, default False
 |          Modifies the object directly, instead of creating a new Series
 |          or DataFrame.
 |      
 |      Returns
 |      -------
 |      Series, DataFrame, or None
 |          The same type as the caller or None if `inplace` is True.
 |      
 |      See Also
 |      --------
 |      Series.rename : Alter Series index labels or name.
 |      DataFrame.rename : Alter DataFrame index labels or name.
 |      Index.rename : Set new names on index.
 |      
 |      Notes
 |      -----
 |      Prior to version 0.21.0, ``rename_axis`` could also be used to change
 |      the axis *labels* by passing a mapping or scalar. This behavior is
 |      deprecated and will be removed in a future version. Use ``rename``
 |      instead.
 |      
 |      ``DataFrame.rename_axis`` supports two calling conventions
 |      
 |      * ``(index=index_mapper, columns=columns_mapper, ...)``
 |      * ``(mapper, axis={'index', 'columns'}, ...)``
 |      
 |      The first calling convention will only modify the names of
 |      the index and/or the names of the Index object that is the columns.
 |      In this case, the parameter ``copy`` is ignored.
 |      
 |      The second calling convention will modify the names of the
 |      corresponding index if mapper is a list or a scalar.
 |      However, if mapper is dict-like or a function, it will use the
 |      deprecated behavior of modifying the axis *labels*.
 |      
 |      We *highly* recommend using keyword arguments to clarify your
 |      intent.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series(["dog", "cat", "monkey"])
 |      >>> s
 |      0       dog
 |      1       cat
 |      2    monkey
 |      dtype: object
 |      >>> s.rename_axis("animal")
 |      animal
 |      0       dog
 |      1       cat
 |      2    monkey
 |      dtype: object
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame({"num_legs": [4, 4, 2],
 |      ...                    "num_arms": [0, 0, 2]},
 |      ...                   ["dog", "cat", "monkey"])
 |      >>> df
 |              num_legs  num_arms
 |      dog            4         0
 |      cat            4         0
 |      monkey         2         2
 |      >>> df = df.rename_axis("animal")
 |      >>> df
 |              num_legs  num_arms
 |      animal
 |      dog            4         0
 |      cat            4         0
 |      monkey         2         2
 |      >>> df = df.rename_axis("limbs", axis="columns")
 |      >>> df
 |      limbs   num_legs  num_arms
 |      animal
 |      dog            4         0
 |      cat            4         0
 |      monkey         2         2
 |      
 |      **MultiIndex**
 |      
 |      >>> df.index = pd.MultiIndex.from_product([['mammal'],
 |      ...                                        ['dog', 'cat', 'monkey']],
 |      ...                                       names=['type', 'name'])
 |      >>> df
 |      limbs          num_legs  num_arms
 |      type   name
 |      mammal dog            4         0
 |             cat            4         0
 |             monkey         2         2
 |      
 |      >>> df.rename_axis(index={'type': 'class'})
 |      limbs          num_legs  num_arms
 |      class  name
 |      mammal dog            4         0
 |             cat            4         0
 |             monkey         2         2
 |      
 |      >>> df.rename_axis(columns=str.upper)
 |      LIMBS          num_legs  num_arms
 |      type   name
 |      mammal dog            4         0
 |             cat            4         0
 |             monkey         2         2
 |  
 |  resample(self, rule, how=None, axis=0, fill_method=None, closed=None, label=None, convention='start', kind=None, loffset=None, limit=None, base=0, on=None, level=None)
 |      Resample time-series data.
 |      
 |      Convenience method for frequency conversion and resampling of time
 |      series. Object must have a datetime-like index (`DatetimeIndex`,
 |      `PeriodIndex`, or `TimedeltaIndex`), or pass datetime-like values
 |      to the `on` or `level` keyword.
 |      
 |      Parameters
 |      ----------
 |      rule : str
 |          The offset string or object representing target conversion.
 |      how : str
 |          Method for down/re-sampling, defaults to 'mean' for downsampling.
 |      
 |          .. deprecated:: 0.18.0
 |             The new syntax is ``.resample(...).mean()``, or
 |             ``.resample(...).apply(<func>)``
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Which axis to use for up- or down-sampling. For `Series` this
 |          will default to 0, i.e. along the rows. Must be
 |          `DatetimeIndex`, `TimedeltaIndex` or `PeriodIndex`.
 |      fill_method : str, default None
 |          Filling method for upsampling.
 |      
 |          .. deprecated:: 0.18.0
 |             The new syntax is ``.resample(...).<func>()``,
 |             e.g. ``.resample(...).pad()``
 |      closed : {'right', 'left'}, default None
 |          Which side of bin interval is closed. The default is 'left'
 |          for all frequency offsets except for 'M', 'A', 'Q', 'BM',
 |          'BA', 'BQ', and 'W' which all have a default of 'right'.
 |      label : {'right', 'left'}, default None
 |          Which bin edge label to label bucket with. The default is 'left'
 |          for all frequency offsets except for 'M', 'A', 'Q', 'BM',
 |          'BA', 'BQ', and 'W' which all have a default of 'right'.
 |      convention : {'start', 'end', 's', 'e'}, default 'start'
 |          For `PeriodIndex` only, controls whether to use the start or
 |          end of `rule`.
 |      kind : {'timestamp', 'period'}, optional, default None
 |          Pass 'timestamp' to convert the resulting index to a
 |          `DateTimeIndex` or 'period' to convert it to a `PeriodIndex`.
 |          By default the input representation is retained.
 |      loffset : timedelta, default None
 |          Adjust the resampled time labels.
 |      limit : int, default None
 |          Maximum size gap when reindexing with `fill_method`.
 |      
 |          .. deprecated:: 0.18.0
 |      base : int, default 0
 |          For frequencies that evenly subdivide 1 day, the "origin" of the
 |          aggregated intervals. For example, for '5min' frequency, base could
 |          range from 0 through 4. Defaults to 0.
 |      on : str, optional
 |          For a DataFrame, column to use instead of index for resampling.
 |          Column must be datetime-like.
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      level : str or int, optional
 |          For a MultiIndex, level (name or number) to use for
 |          resampling. `level` must be datetime-like.
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      Returns
 |      -------
 |      Resampler object
 |      
 |      See Also
 |      --------
 |      groupby : Group by mapping, function, label, or list of labels.
 |      Series.resample : Resample a Series.
 |      DataFrame.resample: Resample a DataFrame.
 |      
 |      Notes
 |      -----
 |      See the `user guide
 |      <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling>`_
 |      for more.
 |      
 |      To learn more about the offset strings, please see `this link
 |      <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
 |      
 |      Examples
 |      --------
 |      
 |      Start by creating a series with 9 one minute timestamps.
 |      
 |      >>> index = pd.date_range('1/1/2000', periods=9, freq='T')
 |      >>> series = pd.Series(range(9), index=index)
 |      >>> series
 |      2000-01-01 00:00:00    0
 |      2000-01-01 00:01:00    1
 |      2000-01-01 00:02:00    2
 |      2000-01-01 00:03:00    3
 |      2000-01-01 00:04:00    4
 |      2000-01-01 00:05:00    5
 |      2000-01-01 00:06:00    6
 |      2000-01-01 00:07:00    7
 |      2000-01-01 00:08:00    8
 |      Freq: T, dtype: int64
 |      
 |      Downsample the series into 3 minute bins and sum the values
 |      of the timestamps falling into a bin.
 |      
 |      >>> series.resample('3T').sum()
 |      2000-01-01 00:00:00     3
 |      2000-01-01 00:03:00    12
 |      2000-01-01 00:06:00    21
 |      Freq: 3T, dtype: int64
 |      
 |      Downsample the series into 3 minute bins as above, but label each
 |      bin using the right edge instead of the left. Please note that the
 |      value in the bucket used as the label is not included in the bucket,
 |      which it labels. For example, in the original series the
 |      bucket ``2000-01-01 00:03:00`` contains the value 3, but the summed
 |      value in the resampled bucket with the label ``2000-01-01 00:03:00``
 |      does not include 3 (if it did, the summed value would be 6, not 3).
 |      To include this value close the right side of the bin interval as
 |      illustrated in the example below this one.
 |      
 |      >>> series.resample('3T', label='right').sum()
 |      2000-01-01 00:03:00     3
 |      2000-01-01 00:06:00    12
 |      2000-01-01 00:09:00    21
 |      Freq: 3T, dtype: int64
 |      
 |      Downsample the series into 3 minute bins as above, but close the right
 |      side of the bin interval.
 |      
 |      >>> series.resample('3T', label='right', closed='right').sum()
 |      2000-01-01 00:00:00     0
 |      2000-01-01 00:03:00     6
 |      2000-01-01 00:06:00    15
 |      2000-01-01 00:09:00    15
 |      Freq: 3T, dtype: int64
 |      
 |      Upsample the series into 30 second bins.
 |      
 |      >>> series.resample('30S').asfreq()[0:5]   # Select first 5 rows
 |      2000-01-01 00:00:00   0.0
 |      2000-01-01 00:00:30   NaN
 |      2000-01-01 00:01:00   1.0
 |      2000-01-01 00:01:30   NaN
 |      2000-01-01 00:02:00   2.0
 |      Freq: 30S, dtype: float64
 |      
 |      Upsample the series into 30 second bins and fill the ``NaN``
 |      values using the ``pad`` method.
 |      
 |      >>> series.resample('30S').pad()[0:5]
 |      2000-01-01 00:00:00    0
 |      2000-01-01 00:00:30    0
 |      2000-01-01 00:01:00    1
 |      2000-01-01 00:01:30    1
 |      2000-01-01 00:02:00    2
 |      Freq: 30S, dtype: int64
 |      
 |      Upsample the series into 30 second bins and fill the
 |      ``NaN`` values using the ``bfill`` method.
 |      
 |      >>> series.resample('30S').bfill()[0:5]
 |      2000-01-01 00:00:00    0
 |      2000-01-01 00:00:30    1
 |      2000-01-01 00:01:00    1
 |      2000-01-01 00:01:30    2
 |      2000-01-01 00:02:00    2
 |      Freq: 30S, dtype: int64
 |      
 |      Pass a custom function via ``apply``
 |      
 |      >>> def custom_resampler(array_like):
 |      ...     return np.sum(array_like) + 5
 |      ...
 |      >>> series.resample('3T').apply(custom_resampler)
 |      2000-01-01 00:00:00     8
 |      2000-01-01 00:03:00    17
 |      2000-01-01 00:06:00    26
 |      Freq: 3T, dtype: int64
 |      
 |      For a Series with a PeriodIndex, the keyword `convention` can be
 |      used to control whether to use the start or end of `rule`.
 |      
 |      Resample a year by quarter using 'start' `convention`. Values are
 |      assigned to the first quarter of the period.
 |      
 |      >>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
 |      ...                                             freq='A',
 |      ...                                             periods=2))
 |      >>> s
 |      2012    1
 |      2013    2
 |      Freq: A-DEC, dtype: int64
 |      >>> s.resample('Q', convention='start').asfreq()
 |      2012Q1    1.0
 |      2012Q2    NaN
 |      2012Q3    NaN
 |      2012Q4    NaN
 |      2013Q1    2.0
 |      2013Q2    NaN
 |      2013Q3    NaN
 |      2013Q4    NaN
 |      Freq: Q-DEC, dtype: float64
 |      
 |      Resample quarters by month using 'end' `convention`. Values are
 |      assigned to the last month of the period.
 |      
 |      >>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01',
 |      ...                                                   freq='Q',
 |      ...                                                   periods=4))
 |      >>> q
 |      2018Q1    1
 |      2018Q2    2
 |      2018Q3    3
 |      2018Q4    4
 |      Freq: Q-DEC, dtype: int64
 |      >>> q.resample('M', convention='end').asfreq()
 |      2018-03    1.0
 |      2018-04    NaN
 |      2018-05    NaN
 |      2018-06    2.0
 |      2018-07    NaN
 |      2018-08    NaN
 |      2018-09    3.0
 |      2018-10    NaN
 |      2018-11    NaN
 |      2018-12    4.0
 |      Freq: M, dtype: float64
 |      
 |      For DataFrame objects, the keyword `on` can be used to specify the
 |      column instead of the index for resampling.
 |      
 |      >>> d = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19],
 |      ...           'volume': [50, 60, 40, 100, 50, 100, 40, 50]})
 |      >>> df = pd.DataFrame(d)
 |      >>> df['week_starting'] = pd.date_range('01/01/2018',
 |      ...                                     periods=8,
 |      ...                                     freq='W')
 |      >>> df
 |         price  volume week_starting
 |      0     10      50    2018-01-07
 |      1     11      60    2018-01-14
 |      2      9      40    2018-01-21
 |      3     13     100    2018-01-28
 |      4     14      50    2018-02-04
 |      5     18     100    2018-02-11
 |      6     17      40    2018-02-18
 |      7     19      50    2018-02-25
 |      >>> df.resample('M', on='week_starting').mean()
 |                     price  volume
 |      week_starting
 |      2018-01-31     10.75    62.5
 |      2018-02-28     17.00    60.0
 |      
 |      For a DataFrame with MultiIndex, the keyword `level` can be used to
 |      specify on which level the resampling needs to take place.
 |      
 |      >>> days = pd.date_range('1/1/2000', periods=4, freq='D')
 |      >>> d2 = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19],
 |      ...            'volume': [50, 60, 40, 100, 50, 100, 40, 50]})
 |      >>> df2 = pd.DataFrame(d2,
 |      ...                    index=pd.MultiIndex.from_product([days,
 |      ...                                                     ['morning',
 |      ...                                                      'afternoon']]
 |      ...                                                     ))
 |      >>> df2
 |                            price  volume
 |      2000-01-01 morning       10      50
 |                 afternoon     11      60
 |      2000-01-02 morning        9      40
 |                 afternoon     13     100
 |      2000-01-03 morning       14      50
 |                 afternoon     18     100
 |      2000-01-04 morning       17      40
 |                 afternoon     19      50
 |      >>> df2.resample('D', level=0).sum()
 |                  price  volume
 |      2000-01-01     21     110
 |      2000-01-02     22     140
 |      2000-01-03     32     150
 |      2000-01-04     36      90
 |  
 |  sample(self, n=None, frac=None, replace=False, weights=None, random_state=None, axis=None)
 |      Return a random sample of items from an axis of object.
 |      
 |      You can use `random_state` for reproducibility.
 |      
 |      Parameters
 |      ----------
 |      n : int, optional
 |          Number of items from axis to return. Cannot be used with `frac`.
 |          Default = 1 if `frac` = None.
 |      frac : float, optional
 |          Fraction of axis items to return. Cannot be used with `n`.
 |      replace : bool, default False
 |          Sample with or without replacement.
 |      weights : str or ndarray-like, optional
 |          Default 'None' results in equal probability weighting.
 |          If passed a Series, will align with target object on index. Index
 |          values in weights not found in sampled object will be ignored and
 |          index values in sampled object not in weights will be assigned
 |          weights of zero.
 |          If called on a DataFrame, will accept the name of a column
 |          when axis = 0.
 |          Unless weights are a Series, weights must be same length as axis
 |          being sampled.
 |          If weights do not sum to 1, they will be normalized to sum to 1.
 |          Missing values in the weights column will be treated as zero.
 |          Infinite values not allowed.
 |      random_state : int or numpy.random.RandomState, optional
 |          Seed for the random number generator (if int), or numpy RandomState
 |          object.
 |      axis : int or string, optional
 |          Axis to sample. Accepts axis number or name. Default is stat axis
 |          for given data type (0 for Series and DataFrames, 1 for Panels).
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          A new object of same type as caller containing `n` items randomly
 |          sampled from the caller object.
 |      
 |      See Also
 |      --------
 |      numpy.random.choice: Generates a random sample from a given 1-D numpy
 |          array.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
 |      ...                    'num_wings': [2, 0, 0, 0],
 |      ...                    'num_specimen_seen': [10, 2, 1, 8]},
 |      ...                   index=['falcon', 'dog', 'spider', 'fish'])
 |      >>> df
 |              num_legs  num_wings  num_specimen_seen
 |      falcon         2          2                 10
 |      dog            4          0                  2
 |      spider         8          0                  1
 |      fish           0          0                  8
 |      
 |      Extract 3 random elements from the ``Series`` ``df['num_legs']``:
 |      Note that we use `random_state` to ensure the reproducibility of
 |      the examples.
 |      
 |      >>> df['num_legs'].sample(n=3, random_state=1)
 |      fish      0
 |      spider    8
 |      falcon    2
 |      Name: num_legs, dtype: int64
 |      
 |      A random 50% sample of the ``DataFrame`` with replacement:
 |      
 |      >>> df.sample(frac=0.5, replace=True, random_state=1)
 |            num_legs  num_wings  num_specimen_seen
 |      dog          4          0                  2
 |      fish         0          0                  8
 |      
 |      Using a DataFrame column as weights. Rows with larger value in the
 |      `num_specimen_seen` column are more likely to be sampled.
 |      
 |      >>> df.sample(n=2, weights='num_specimen_seen', random_state=1)
 |              num_legs  num_wings  num_specimen_seen
 |      falcon         2          2                 10
 |      fish           0          0                  8
 |  
 |  select(self, crit, axis=0)
 |      Return data corresponding to axis labels matching criteria.
 |      
 |      .. deprecated:: 0.21.0
 |          Use df.loc[df.index.map(crit)] to select via labels
 |      
 |      Parameters
 |      ----------
 |      crit : function
 |          To be called on each index (label). Should return True or False
 |      axis : int
 |      
 |      Returns
 |      -------
 |      selection : same type as caller
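 |      
 |      Examples
 |      --------
 |      Since ``select`` is deprecated, a minimal sketch of the
 |      recommended replacement instead (hypothetical labels):
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2, 3]}, index=['cat', 'dog', 'cow'])
 |      >>> df.loc[df.index.map(lambda label: label.startswith('c'))]
 |           A
 |      cat  1
 |      cow  3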
 |  
 |  set_axis(self, labels, axis=0, inplace=None)
 |      Assign desired index to given axis.
 |      
 |      Indexes for column or row labels can be changed by assigning
 |      a list-like or Index.
 |      
 |      .. versionchanged:: 0.21.0
 |      
 |         The signature is now `labels` and `axis`, consistent with
 |         the rest of pandas API. Previously, the `axis` and `labels`
 |         arguments were respectively the first and second positional
 |         arguments.
 |      
 |      Parameters
 |      ----------
 |      labels : list-like, Index
 |          The values for the new index.
 |      
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          The axis to update. The value 0 identifies the rows, and 1
 |          identifies the columns.
 |      
 |      inplace : boolean, default None
 |          Whether to return a new Series or DataFrame instance.
 |      
 |          .. warning::
 |      
 |             ``inplace=None`` currently falls back to True, but in a
 |             future version, will default to False. Use inplace=True
 |             explicitly rather than relying on the default.
 |      
 |      Returns
 |      -------
 |      renamed : Series, DataFrame, or None
 |          An object of same type as caller if inplace=False, None otherwise.
 |      
 |      See Also
 |      --------
 |      DataFrame.rename_axis : Alter the name of the index or columns.
 |      
 |      Examples
 |      --------
 |      **Series**
 |      
 |      >>> s = pd.Series([1, 2, 3])
 |      >>> s
 |      0    1
 |      1    2
 |      2    3
 |      dtype: int64
 |      
 |      >>> s.set_axis(['a', 'b', 'c'], axis=0, inplace=False)
 |      a    1
 |      b    2
 |      c    3
 |      dtype: int64
 |      
 |      The original object is not modified.
 |      
 |      >>> s
 |      0    1
 |      1    2
 |      2    3
 |      dtype: int64
 |      
 |      **DataFrame**
 |      
 |      >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
 |      
 |      Change the row labels.
 |      
 |      >>> df.set_axis(['a', 'b', 'c'], axis='index', inplace=False)
 |         A  B
 |      a  1  4
 |      b  2  5
 |      c  3  6
 |      
 |      Change the column labels.
 |      
 |      >>> df.set_axis(['I', 'II'], axis='columns', inplace=False)
 |         I  II
 |      0  1   4
 |      1  2   5
 |      2  3   6
 |      
 |      Now, update the labels inplace.
 |      
 |      >>> df.set_axis(['i', 'ii'], axis='columns', inplace=True)
 |      >>> df
 |         i  ii
 |      0  1   4
 |      1  2   5
 |      2  3   6
 |  
 |  slice_shift(self, periods=1, axis=0)
 |      Equivalent to `shift` without copying data. The shifted data will
 |      not include the dropped periods and the shifted axis will be smaller
 |      than the original.
 |      
 |      Parameters
 |      ----------
 |      periods : int
 |          Number of periods to move, can be positive or negative
 |      
 |      Returns
 |      -------
 |      shifted : same type as caller
 |      
 |      Notes
 |      -----
 |      While `slice_shift` is faster than `shift`, you may pay for it
 |      later during alignment.
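 |      
 |      Examples
 |      --------
 |      A minimal sketch; unlike ``shift``, no NaN rows are introduced,
 |      the dropped period simply disappears:
 |      
 |      >>> s = pd.Series([1, 2, 3, 4])
 |      >>> s.slice_shift(1)
 |      1    1
 |      2    2
 |      3    3
 |      dtype: int64
 |      >>> s.shift(1)
 |      0    NaN
 |      1    1.0
 |      2    2.0
 |      3    3.0
 |      dtype: float64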
 |  
 |  squeeze(self, axis=None)
 |      Squeeze 1 dimensional axis objects into scalars.
 |      
 |      Series or DataFrames with a single element are squeezed to a scalar.
 |      DataFrames with a single column or a single row are squeezed to a
 |      Series. Otherwise the object is unchanged.
 |      
 |      This method is most useful when you don't know if your
 |      object is a Series or DataFrame, but you do know it has just a single
 |      column. In that case you can safely call `squeeze` to ensure you have a
 |      Series.
 |      
 |      Parameters
 |      ----------
 |      axis : {0 or 'index', 1 or 'columns', None}, default None
 |          A specific axis to squeeze. By default, all length-1 axes are
 |          squeezed.
 |      
 |          .. versionadded:: 0.20.0
 |      
 |      Returns
 |      -------
 |      DataFrame, Series, or scalar
 |          The projection after squeezing `axis` or all the axes.
 |      
 |      See Also
 |      --------
 |      Series.iloc : Integer-location based indexing for selecting scalars.
 |      DataFrame.iloc : Integer-location based indexing for selecting Series.
 |      Series.to_frame : Inverse of DataFrame.squeeze for a
 |          single-column DataFrame.
 |      
 |      Examples
 |      --------
 |      >>> primes = pd.Series([2, 3, 5, 7])
 |      
 |      Slicing might produce a Series with a single value:
 |      
 |      >>> even_primes = primes[primes % 2 == 0]
 |      >>> even_primes
 |      0    2
 |      dtype: int64
 |      
 |      >>> even_primes.squeeze()
 |      2
 |      
 |      Squeezing objects with more than one value in every axis does nothing:
 |      
 |      >>> odd_primes = primes[primes % 2 == 1]
 |      >>> odd_primes
 |      1    3
 |      2    5
 |      3    7
 |      dtype: int64
 |      
 |      >>> odd_primes.squeeze()
 |      1    3
 |      2    5
 |      3    7
 |      dtype: int64
 |      
 |      Squeezing is even more effective when used with DataFrames.
 |      
 |      >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
 |      >>> df
 |         a  b
 |      0  1  2
 |      1  3  4
 |      
 |      Slicing a single column will produce a DataFrame with the columns
 |      having only one value:
 |      
 |      >>> df_a = df[['a']]
 |      >>> df_a
 |         a
 |      0  1
 |      1  3
 |      
 |      So the columns can be squeezed down, resulting in a Series:
 |      
 |      >>> df_a.squeeze('columns')
 |      0    1
 |      1    3
 |      Name: a, dtype: int64
 |      
 |      Slicing a single row from a single column will produce a single
 |      scalar DataFrame:
 |      
 |      >>> df_0a = df.loc[df.index < 1, ['a']]
 |      >>> df_0a
 |         a
 |      0  1
 |      
 |      Squeezing the rows produces a single scalar Series:
 |      
 |      >>> df_0a.squeeze('rows')
 |      a    1
 |      Name: 0, dtype: int64
 |      
 |      Squeezing all axes will project directly into a scalar:
 |      
 |      >>> df_0a.squeeze()
 |      1
 |  
 |  swapaxes(self, axis1, axis2, copy=True)
 |      Interchange axes and swap values appropriately.
 |      
 |      Returns
 |      -------
 |      y : same as input
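 |      
 |      Examples
 |      --------
 |      A minimal sketch; for a DataFrame this is equivalent to
 |      transposing:
 |      
 |      >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
 |      >>> df.swapaxes(0, 1)
 |         0  1
 |      A  1  2
 |      B  3  4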
 |  
 |  tail(self, n=5)
 |      Return the last `n` rows.
 |      
 |      This function returns the last `n` rows from the object based on
 |      position. It is useful for quickly verifying data, for example,
 |      after sorting or appending rows.
 |      
 |      Parameters
 |      ----------
 |      n : int, default 5
 |          Number of rows to select.
 |      
 |      Returns
 |      -------
 |      type of caller
 |          The last `n` rows of the caller object.
 |      
 |      See Also
 |      --------
 |      DataFrame.head : The first `n` rows of the caller object.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'animal':['alligator', 'bee', 'falcon', 'lion',
 |      ...                    'monkey', 'parrot', 'shark', 'whale', 'zebra']})
 |      >>> df
 |            animal
 |      0  alligator
 |      1        bee
 |      2     falcon
 |      3       lion
 |      4     monkey
 |      5     parrot
 |      6      shark
 |      7      whale
 |      8      zebra
 |      
 |      Viewing the last 5 lines
 |      
 |      >>> df.tail()
 |         animal
 |      4  monkey
 |      5  parrot
 |      6   shark
 |      7   whale
 |      8   zebra
 |      
 |      Viewing the last `n` lines (three in this case)
 |      
 |      >>> df.tail(3)
 |        animal
 |      6  shark
 |      7  whale
 |      8  zebra
 |  
 |  take(self, indices, axis=0, convert=None, is_copy=True, **kwargs)
 |      Return the elements in the given *positional* indices along an axis.
 |      
 |      This means that we are not indexing according to actual values in
 |      the index attribute of the object. We are indexing according to the
 |      actual position of the element in the object.
 |      
 |      Parameters
 |      ----------
 |      indices : array-like
 |          An array of ints indicating which positions to take.
 |      axis : {0 or 'index', 1 or 'columns', None}, default 0
 |          The axis on which to select elements. ``0`` means that we are
 |          selecting rows, ``1`` means that we are selecting columns.
 |      convert : bool, default True
 |          Whether to convert negative indices into positive ones.
 |          For example, ``-1`` would map to ``len(axis) - 1``.
 |          The conversions are similar to the behavior of indexing a
 |          regular Python list.
 |      
 |          .. deprecated:: 0.21.0
 |             In the future, negative indices will always be converted.
 |      
 |      is_copy : bool, default True
 |          Whether to return a copy of the original object or not.
 |      **kwargs
 |          For compatibility with :meth:`numpy.take`. Has no effect on the
 |          output.
 |      
 |      Returns
 |      -------
 |      taken : same type as caller
 |          An array-like containing the elements taken from the object.
 |      
 |      See Also
 |      --------
 |      DataFrame.loc : Select a subset of a DataFrame by labels.
 |      DataFrame.iloc : Select a subset of a DataFrame by positions.
 |      numpy.take : Take elements from an array along an axis.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([('falcon', 'bird',    389.0),
 |      ...                    ('parrot', 'bird',     24.0),
 |      ...                    ('lion',   'mammal',   80.5),
 |      ...                    ('monkey', 'mammal', np.nan)],
 |      ...                    columns=['name', 'class', 'max_speed'],
 |      ...                    index=[0, 2, 3, 1])
 |      >>> df
 |           name   class  max_speed
 |      0  falcon    bird      389.0
 |      2  parrot    bird       24.0
 |      3    lion  mammal       80.5
 |      1  monkey  mammal        NaN
 |      
 |      Take elements at positions 0 and 3 along the axis 0 (default).
 |      
 |      Note how the actual indices selected (0 and 1) do not correspond to
 |      our selected indices 0 and 3. That's because we are selecting the 0th
 |      and 3rd rows, not rows whose indices equal 0 and 3.
 |      
 |      >>> df.take([0, 3])
 |           name   class  max_speed
 |      0  falcon    bird      389.0
 |      1  monkey  mammal        NaN
 |      
 |      Take elements at indices 1 and 2 along the axis 1 (column selection).
 |      
 |      >>> df.take([1, 2], axis=1)
 |          class  max_speed
 |      0    bird      389.0
 |      2    bird       24.0
 |      3  mammal       80.5
 |      1  mammal        NaN
 |      
 |      We may take elements using negative integers for positive indices,
 |      starting from the end of the object, just like with Python lists.
 |      
 |      >>> df.take([-1, -2])
 |           name   class  max_speed
 |      1  monkey  mammal        NaN
 |      3    lion  mammal       80.5
 |  
 |  to_clipboard(self, excel=True, sep=None, **kwargs)
 |      Copy object to the system clipboard.
 |      
 |      Write a text representation of object to the system clipboard.
 |      This can be pasted into Excel, for example.
 |      
 |      Parameters
 |      ----------
 |      excel : bool, default True
 |          - True, use the provided separator, writing in a csv format
 |            to allow easy pasting into Excel.
 |          - False, write a string representation of the object to the
 |            clipboard.
 |      
 |      sep : str, default ``'\t'``
 |          Field delimiter.
 |      **kwargs
 |          These parameters will be passed to DataFrame.to_csv.
 |      
 |      See Also
 |      --------
 |      DataFrame.to_csv : Write a DataFrame to a comma-separated values
 |          (csv) file.
 |      read_clipboard : Read text from clipboard and pass to read_table.
 |      
 |      Notes
 |      -----
 |      Requirements for your platform.
 |      
 |        - Linux : `xclip`, or `xsel` (with `gtk` or `PyQt4` modules)
 |        - Windows : none
 |        - OS X : none
 |      
 |      Examples
 |      --------
 |      Copy the contents of a DataFrame to the clipboard.
 |      
 |      >>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
 |      >>> df.to_clipboard(sep=',')
 |      ... # Wrote the following to the system clipboard:
 |      ... # ,A,B,C
 |      ... # 0,1,2,3
 |      ... # 1,4,5,6
 |      
 |      We can omit the index by passing the keyword `index` and setting
 |      it to ``False``.
 |      
 |      >>> df.to_clipboard(sep=',', index=False)
 |      ... # Wrote the following to the system clipboard:
 |      ... # A,B,C
 |      ... # 1,2,3
 |      ... # 4,5,6
 |  
 |  to_csv(self, path_or_buf=None, sep=',', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, mode='w', encoding=None, compression='infer', quoting=None, quotechar='"', line_terminator=None, chunksize=None, tupleize_cols=None, date_format=None, doublequote=True, escapechar=None, decimal='.')
 |      Write object to a comma-separated values (csv) file.
 |      
 |      .. versionchanged:: 0.24.0
 |          The order of arguments for Series was changed.
 |      
 |      Parameters
 |      ----------
 |      path_or_buf : str or file handle, default None
 |          File path or object, if None is provided the result is returned as
 |          a string.  If a file object is passed it should be opened with
 |          `newline=''`, disabling universal newlines.
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |             Was previously named "path" for Series.
 |      
 |      sep : str, default ','
 |          String of length 1. Field delimiter for the output file.
 |      na_rep : str, default ''
 |          Missing data representation.
 |      float_format : str, default None
 |          Format string for floating point numbers.
 |      columns : sequence, optional
 |          Columns to write.
 |      header : bool or list of str, default True
 |          Write out the column names. If a list of strings is given it is
 |          assumed to be aliases for the column names.
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |             Previously defaulted to False for Series.
 |      
 |      index : bool, default True
 |          Write row names (index).
 |      index_label : str or sequence, or False, default None
 |          Column label for index column(s) if desired. If None is given, and
 |          `header` and `index` are True, then the index names are used. A
 |          sequence should be given if the object uses MultiIndex. If
 |          False do not print fields for index names. Use index_label=False
 |          for easier importing in R.
 |      mode : str
 |          Python write mode, default 'w'.
 |      encoding : str, optional
 |          A string representing the encoding to use in the output file,
 |          defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
 |      compression : str, default 'infer'
 |          Compression mode among the following possible values: {'infer',
 |          'gzip', 'bz2', 'zip', 'xz', None}. If 'infer' and `path_or_buf`
 |          is path-like, then detect compression from the following
 |          extensions: '.gz', '.bz2', '.zip' or '.xz'. (otherwise no
 |          compression).
 |      
 |          .. versionchanged:: 0.24.0
 |      
 |             'infer' option added and set to default.
 |      
 |      quoting : optional constant from csv module
 |          Defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
 |          then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
 |          will treat them as non-numeric.
 |      quotechar : str, default '\"'
 |          String of length 1. Character used to quote fields.
 |      line_terminator : string, optional
 |          The newline character or character sequence to use in the output
 |          file. Defaults to `os.linesep`, which depends on the OS in which
 |          this method is called (e.g. '\n' for Linux, '\r\n' for Windows).
 |      
 |          .. versionchanged:: 0.24.0
 |      chunksize : int or None
 |          Rows to write at a time.
 |      tupleize_cols : bool, default False
 |          Write MultiIndex columns as a list of tuples (if True) or in
 |          the new, expanded format, where each MultiIndex column is a row
 |          in the CSV (if False).
 |      
 |          .. deprecated:: 0.21.0
 |             This argument will be removed and will always write each row
 |             of the multi-index as a separate row in the CSV file.
 |      date_format : str, default None
 |          Format string for datetime objects.
 |      doublequote : bool, default True
 |          Control quoting of `quotechar` inside a field.
 |      escapechar : str, default None
 |          String of length 1. Character used to escape `sep` and `quotechar`
 |          when appropriate.
 |      decimal : str, default '.'
 |          Character recognized as decimal separator. E.g. use ',' for
 |          European data.
 |      
 |      Returns
 |      -------
 |      None or str
 |          If path_or_buf is None, returns the resulting csv format as a
 |          string. Otherwise returns None.
 |      
 |      See Also
 |      --------
 |      read_csv : Load a CSV file into a DataFrame.
 |      to_excel : Write DataFrame to an Excel file.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
 |      ...                    'mask': ['red', 'purple'],
 |      ...                    'weapon': ['sai', 'bo staff']})
 |      >>> df.to_csv(index=False)
 |      'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n'
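 |      
 |      To write to a file instead, pass a path (a hypothetical file name
 |      here):
 |      
 |      >>> df.to_csv('out.csv', index=False)  # doctest: +SKIP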
 |  
 |  to_dense(self)
 |      Return dense representation of NDFrame (as opposed to sparse).
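 |      
 |      A minimal sketch, assuming the sparse structures of this pandas
 |      era (``to_sparse`` was later deprecated):
 |      
 |      >>> s = pd.Series([1.0, np.nan, 3.0]).to_sparse()
 |      >>> s.to_dense()
 |      0    1.0
 |      1    NaN
 |      2    3.0
 |      dtype: float64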
 |  
 |  to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=None, inf_rep='inf', verbose=True, freeze_panes=None)
 |      Write object to an Excel sheet.
 |      
 |      To write a single object to an Excel .xlsx file it is only necessary to
 |      specify a target file name. To write to multiple sheets it is necessary to
 |      create an `ExcelWriter` object with a target file name, and specify a sheet
 |      in the file to write to.
 |      
 |      Multiple sheets may be written to by specifying unique `sheet_name`.
 |      With all data written to the file it is necessary to save the changes.
 |      Note that creating an `ExcelWriter` object with a file name that already
 |      exists will result in the contents of the existing file being erased.
 |      
 |      Parameters
 |      ----------
 |      excel_writer : str or ExcelWriter object
 |          File path or existing ExcelWriter.
 |      sheet_name : str, default 'Sheet1'
 |          Name of sheet which will contain DataFrame.
 |      na_rep : str, default ''
 |          Missing data representation.
 |      float_format : str, optional
 |          Format string for floating point numbers. For example
 |          ``float_format="%.2f"`` will format 0.1234 to 0.12.
 |      columns : sequence or list of str, optional
 |          Columns to write.
 |      header : bool or list of str, default True
 |          Write out the column names. If a list of strings is given it is
 |          assumed to be aliases for the column names.
 |      index : bool, default True
 |          Write row names (index).
 |      index_label : str or sequence, optional
 |          Column label for index column(s) if desired. If not specified, and
 |          `header` and `index` are True, then the index names are used. A
 |          sequence should be given if the DataFrame uses MultiIndex.
 |      startrow : int, default 0
 |          Upper left cell row to dump data frame.
 |      startcol : int, default 0
 |          Upper left cell column to dump data frame.
 |      engine : str, optional
 |          Write engine to use, 'openpyxl' or 'xlsxwriter'. You can also set this
 |          via the options ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
 |          ``io.excel.xlsm.writer``.
 |      merge_cells : bool, default True
 |          Write MultiIndex and Hierarchical Rows as merged cells.
 |      encoding : str, optional
 |          Encoding of the resulting excel file. Only necessary for xlwt,
 |          other writers support unicode natively.
 |      inf_rep : str, default 'inf'
 |          Representation for infinity (there is no native representation for
 |          infinity in Excel).
 |      verbose : bool, default True
 |          Display more information in the error logs.
 |      freeze_panes : tuple of int (length 2), optional
 |          Specifies the one-based bottommost row and rightmost column that
 |          is to be frozen.
 |      
 |          .. versionadded:: 0.20.0
 |      
 |      See Also
 |      --------
 |      to_csv : Write DataFrame to a comma-separated values (csv) file.
 |      ExcelWriter : Class for writing DataFrame objects into excel sheets.
 |      read_excel : Read an Excel file into a pandas DataFrame.
 |      read_csv : Read a comma-separated values (csv) file into DataFrame.
 |      
 |      Notes
 |      -----
 |      For compatibility with :meth:`~DataFrame.to_csv`,
 |      to_excel serializes lists and dicts to strings before writing.
 |      
 |      Once a workbook has been saved it is not possible to write further
 |      data without rewriting the whole workbook.
 |      
 |      Examples
 |      --------
 |      
 |      Create, write to and save a workbook:
 |      
 |      >>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
 |      ...                    index=['row 1', 'row 2'],
 |      ...                    columns=['col 1', 'col 2'])
 |      >>> df1.to_excel("output.xlsx")  # doctest: +SKIP
 |      
 |      To specify the sheet name:
 |      
 |      >>> df1.to_excel("output.xlsx",
 |      ...              sheet_name='Sheet_name_1')  # doctest: +SKIP
 |      
 |      If you wish to write to more than one sheet in the workbook, it is
 |      necessary to specify an ExcelWriter object:
 |      
 |      >>> df2 = df1.copy()
 |      >>> with pd.ExcelWriter('output.xlsx') as writer:  # doctest: +SKIP
 |      ...     df1.to_excel(writer, sheet_name='Sheet_name_1')
 |      ...     df2.to_excel(writer, sheet_name='Sheet_name_2')
 |      
 |      To set the library that is used to write the Excel file,
 |      you can pass the `engine` keyword (the default engine is
 |      automatically chosen depending on the file extension):
 |      
 |      >>> df1.to_excel('output1.xlsx', engine='xlsxwriter')  # doctest: +SKIP
 |  
 |  to_hdf(self, path_or_buf, key, **kwargs)
 |      Write the contained data to an HDF5 file using HDFStore.
 |      
 |      Hierarchical Data Format (HDF) is self-describing, allowing an
 |      application to interpret the structure and contents of a file with
 |      no outside information. One HDF file can hold a mix of related objects
 |      which can be accessed as a group or as individual objects.
 |      
 |      In order to add another DataFrame or Series to an existing HDF file
 |      please use append mode and a different key.
 |      
 |      For more information see the :ref:`user guide <io.hdf5>`.
 |      
 |      Parameters
 |      ----------
 |      path_or_buf : str or pandas.HDFStore
 |          File path or HDFStore object.
 |      key : str
 |          Identifier for the group in the store.
 |      mode : {'a', 'w', 'r+'}, default 'a'
 |          Mode to open file:
 |      
 |          - 'w': write, a new file is created (an existing file with
 |            the same name would be deleted).
 |          - 'a': append, an existing file is opened for reading and
 |            writing, and if the file does not exist it is created.
 |          - 'r+': similar to 'a', but the file must already exist.
 |      format : {'fixed', 'table'}, default 'fixed'
 |          Possible values:
 |      
 |          - 'fixed': Fixed format. Fast writing/reading. Not-appendable,
 |            nor searchable.
 |          - 'table': Table format. Write as a PyTables Table structure
 |            which may perform worse but allow more flexible operations
 |            like searching / selecting subsets of the data.
 |      append : bool, default False
 |          For Table formats, append the input data to the existing.
 |      data_columns :  list of columns or True, optional
 |          List of columns to create as indexed data columns for on-disk
 |          queries, or True to use all columns. By default only the axes
 |          of the object are indexed. See :ref:`io.hdf5-query-data-columns`.
 |          Applicable only to format='table'.
 |      complevel : {0-9}, optional
 |          Specifies a compression level for data.
 |          A value of 0 disables compression.
 |      complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
 |          Specifies the compression library to be used.
 |          As of v0.20.2 these additional compressors for Blosc are supported
 |          (default if no compressor specified: 'blosc:blosclz'):
 |          {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
 |          'blosc:zlib', 'blosc:zstd'}.
 |          Specifying a compression library which is not available issues
 |          a ValueError.
 |      fletcher32 : bool, default False
 |          If applying compression use the fletcher32 checksum.
 |      dropna : bool, default False
 |          If True, rows that are entirely NaN are not written to the store.
 |      errors : str, default 'strict'
 |          Specifies how encoding and decoding errors are to be handled.
 |          See the errors argument for :func:`open` for a full list
 |          of options.
 |      
 |      See Also
 |      --------
 |      read_hdf : Read from HDF file.
 |      DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
 |      DataFrame.to_sql : Write to a sql table.
 |      DataFrame.to_feather : Write out feather-format for DataFrames.
 |      DataFrame.to_csv : Write out to a csv file.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
 |      ...                   index=['a', 'b', 'c'])
 |      >>> df.to_hdf('data.h5', key='df', mode='w')
 |      
 |      We can add another object to the same file:
 |      
 |      >>> s = pd.Series([1, 2, 3, 4])
 |      >>> s.to_hdf('data.h5', key='s')
 |      
 |      Reading from HDF file:
 |      
 |      >>> pd.read_hdf('data.h5', 'df')
 |         A  B
 |      a  1  4
 |      b  2  5
 |      c  3  6
 |      >>> pd.read_hdf('data.h5', 's')
 |      0    1
 |      1    2
 |      2    3
 |      3    4
 |      dtype: int64
 |      
 |      Deleting file with data:
 |      
 |      >>> import os
 |      >>> os.remove('data.h5')
 |  
 |  to_json(self, path_or_buf=None, orient=None, date_format=None, double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False, compression='infer', index=True)
 |      Convert the object to a JSON string.
 |      
 |      Note that NaNs and None will be converted to null and datetime objects
 |      will be converted to UNIX timestamps.
 |      
 |      Parameters
 |      ----------
 |      path_or_buf : string or file handle, optional
 |          File path or object. If not specified, the result is returned as
 |          a string.
 |      orient : string
 |          Indication of expected JSON string format.
 |      
 |          * Series
 |      
 |            - default is 'index'
 |            - allowed values are: {'split','records','index','table'}
 |      
 |          * DataFrame
 |      
 |            - default is 'columns'
 |            - allowed values are:
 |              {'split','records','index','columns','values','table'}
 |      
 |          * The format of the JSON string
 |      
 |            - 'split' : dict like {'index' -> [index],
 |              'columns' -> [columns], 'data' -> [values]}
 |            - 'records' : list like
 |              [{column -> value}, ... , {column -> value}]
 |            - 'index' : dict like {index -> {column -> value}}
 |            - 'columns' : dict like {column -> {index -> value}}
 |            - 'values' : just the values array
 |            - 'table' : dict like {'schema': {schema}, 'data': {data}}
 |              describing the data, and the data component is
 |              like ``orient='records'``.
 |      
 |              .. versionchanged:: 0.20.0
 |      
 |      date_format : {None, 'epoch', 'iso'}
 |          Type of date conversion. 'epoch' = epoch milliseconds,
 |          'iso' = ISO8601. The default depends on the `orient`. For
 |          ``orient='table'``, the default is 'iso'. For all other orients,
 |          the default is 'epoch'.
 |      double_precision : int, default 10
 |          The number of decimal places to use when encoding
 |          floating point values.
 |      force_ascii : bool, default True
 |          Force encoded string to be ASCII.
 |      date_unit : string, default 'ms' (milliseconds)
 |          The time unit to encode to, governs timestamp and ISO8601
 |          precision.  One of 's', 'ms', 'us', 'ns' for second, millisecond,
 |          microsecond, and nanosecond respectively.
 |      default_handler : callable, default None
 |          Handler to call if object cannot otherwise be converted to a
 |          suitable format for JSON. Should receive a single argument which is
 |          the object to convert and return a serialisable object.
 |      lines : bool, default False
 |          If 'orient' is 'records', write out line-delimited json format.
 |          Throws ValueError if 'orient' is anything else, since other
 |          orients are not list-like.
 |      
 |          .. versionadded:: 0.19.0
 |      
 |      compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}
 |      
 |          A string representing the compression to use in the output file,
 |          only used when the first argument is a filename. By default, the
 |          compression is inferred from the filename.
 |      
 |          .. versionadded:: 0.21.0
 |          .. versionchanged:: 0.24.0
 |             'infer' option added and set to default
 |      index : bool, default True
 |          Whether to include the index values in the JSON string. Not
 |          including the index (``index=False``) is only supported when
 |          orient is 'split' or 'table'.
 |      
 |          .. versionadded:: 0.23.0
 |      
 |      See Also
 |      --------
 |      read_json
 |      
 |      Examples
 |      --------
 |      
 |      >>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
 |      ...                   index=['row 1', 'row 2'],
 |      ...                   columns=['col 1', 'col 2'])
 |      >>> df.to_json(orient='split')
 |      '{"columns":["col 1","col 2"],
 |        "index":["row 1","row 2"],
 |        "data":[["a","b"],["c","d"]]}'
 |      
 |      Encoding/decoding a DataFrame using ``'records'`` formatted JSON.
 |      Note that index labels are not preserved with this encoding.
 |      
 |      >>> df.to_json(orient='records')
 |      '[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
 |      
 |      Encoding/decoding a DataFrame using ``'index'`` formatted JSON:
 |      
 |      >>> df.to_json(orient='index')
 |      '{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
 |      
 |      Encoding/decoding a DataFrame using ``'columns'`` formatted JSON:
 |      
 |      >>> df.to_json(orient='columns')
 |      '{"col 1":{"row 1":"a","row 2":"c"},"col 2":{"row 1":"b","row 2":"d"}}'
 |      
 |      Encoding/decoding a DataFrame using ``'values'`` formatted JSON:
 |      
 |      >>> df.to_json(orient='values')
 |      '[["a","b"],["c","d"]]'
 |      
 |      Encoding with Table Schema
 |      
 |      >>> df.to_json(orient='table')
 |      '{"schema": {"fields": [{"name": "index", "type": "string"},
 |                              {"name": "col 1", "type": "string"},
 |                              {"name": "col 2", "type": "string"}],
 |                   "primaryKey": "index",
 |                   "pandas_version": "0.20.0"},
 |        "data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
 |                 {"index": "row 2", "col 1": "c", "col 2": "d"}]}'
 |  
 |  to_latex(self, buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=False, column_format=None, longtable=None, escape=None, encoding=None, decimal='.', multicolumn=None, multicolumn_format=None, multirow=None)
 |      Render an object to a LaTeX tabular environment table.
 |      
 |      Render an object to a tabular environment table. You can splice
 |      this into a LaTeX document. Requires \usepackage{booktabs}.
 |      
 |      .. versionchanged:: 0.20.2
 |         Added to Series
 |      
 |      Parameters
 |      ----------
 |      buf : file descriptor or None
 |          Buffer to write to. If None, the output is returned as a string.
 |      columns : list of label, optional
 |          The subset of columns to write. Writes all columns by default.
 |      col_space : int, optional
 |          The minimum width of each column.
 |      header : bool or list of str, default True
 |          Write out the column names. If a list of strings is given,
 |          it is assumed to be aliases for the column names.
 |      index : bool, default True
 |          Write row names (index).
 |      na_rep : str, default 'NaN'
 |          Missing data representation.
 |      formatters : list of functions or dict of {str: function}, optional
 |          Formatter functions to apply to columns' elements by position or
 |          name. The result of each function must be a unicode string.
 |          List must be of length equal to the number of columns.
 |      float_format : str, optional
 |          Format string for floating point numbers.
 |      sparsify : bool, optional
 |          Set to False for a DataFrame with a hierarchical index to print
 |          every multiindex key at each row. By default, the value will be
 |          read from the config module.
 |      index_names : bool, default True
 |          Prints the names of the indexes.
 |      bold_rows : bool, default False
 |          Make the row labels bold in the output.
 |      column_format : str, optional
 |          The columns format as specified in `LaTeX table format
 |          <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g. 'rcl' for 3
 |          columns. By default, 'l' will be used for all columns except
 |          columns of numbers, which default to 'r'.
 |      longtable : bool, optional
 |          By default, the value will be read from the pandas config
 |          module. Use a longtable environment instead of tabular. Requires
 |          adding a \usepackage{longtable} to your LaTeX preamble.
 |      escape : bool, optional
 |          By default, the value will be read from the pandas config
 |          module. When set to False prevents from escaping latex special
 |          characters in column names.
 |      encoding : str, optional
 |          A string representing the encoding to use in the output file,
 |          defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
 |      decimal : str, default '.'
 |          Character recognized as decimal separator, e.g. ',' in Europe.
 |          .. versionadded:: 0.18.0
 |      multicolumn : bool, default True
 |          Use \multicolumn to enhance MultiIndex columns.
 |          The default will be read from the config module.
 |          .. versionadded:: 0.20.0
 |      multicolumn_format : str, default 'l'
 |          The alignment for multicolumns, similar to `column_format`
 |          The default will be read from the config module.
 |          .. versionadded:: 0.20.0
 |      multirow : bool, default False
 |          Use \multirow to enhance MultiIndex rows. Requires adding a
 |          \usepackage{multirow} to your LaTeX preamble. Will print
 |          centered labels (instead of top-aligned) across the contained
 |          rows, separating groups via clines. The default will be read
 |          from the pandas config module.
 |          .. versionadded:: 0.20.0
 |      
 |      Returns
 |      -------
 |      str or None
 |          If buf is None, returns the resulting LaTeX format as a
 |          string. Otherwise returns None.
 |      
 |      See Also
 |      --------
 |      DataFrame.to_string : Render a DataFrame to a console-friendly
 |          tabular output.
 |      DataFrame.to_html : Render a DataFrame as an HTML table.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
 |      ...                    'mask': ['red', 'purple'],
 |      ...                    'weapon': ['sai', 'bo staff']})
 |      >>> df.to_latex(index=False) # doctest: +NORMALIZE_WHITESPACE
 |      '\\begin{tabular}{lll}\n\\toprule\n      name &    mask &    weapon
 |      \\\\\n\\midrule\n   Raphael &     red &       sai \\\\\n Donatello &
 |       purple &  bo staff \\\\\n\\bottomrule\n\\end{tabular}\n'
 |  
 |  to_msgpack(self, path_or_buf=None, encoding='utf-8', **kwargs)
 |      Serialize object to input file path using msgpack format.
 |      
 |      THIS IS AN EXPERIMENTAL LIBRARY and the storage format
 |      may not be stable until a future release.
 |      
 |      Parameters
 |      ----------
 |      path : string, buffer-like, or None
 |          File path; if None, return the generated string.
 |      append : bool, default False
 |          Whether to append to an existing msgpack.
 |      compress : {'zlib', 'blosc', None}, default None
 |          Type of compressor to use.
 |  
 |  to_pickle(self, path, compression='infer', protocol=4)
 |      Pickle (serialize) object to file.
 |      
 |      Parameters
 |      ----------
 |      path : str
 |          File path where the pickled object will be stored.
 |      compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
 |          A string representing the compression to use in the output file. By
 |          default, infers from the file extension in specified path.
 |      
 |          .. versionadded:: 0.20.0
 |      protocol : int
 |          Int which indicates which protocol should be used by the pickler,
 |          default HIGHEST_PROTOCOL (see [1]_ paragraph 12.1.2). The possible
 |          values for this parameter depend on the version of Python. For
 |          Python 2.x, possible values are 0, 1, 2. For Python>=3.0, 3 is a
 |          valid value. For Python >= 3.4, 4 is a valid value. A negative
 |          value for the protocol parameter is equivalent to setting its value
 |          to HIGHEST_PROTOCOL.
 |      
 |          .. [1] https://docs.python.org/3/library/pickle.html
 |          .. versionadded:: 0.21.0
 |      
 |      See Also
 |      --------
 |      read_pickle : Load pickled pandas object (or any object) from file.
 |      DataFrame.to_hdf : Write DataFrame to an HDF5 file.
 |      DataFrame.to_sql : Write DataFrame to a SQL database.
 |      DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
 |      
 |      Examples
 |      --------
 |      >>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
 |      >>> original_df
 |         foo  bar
 |      0    0    5
 |      1    1    6
 |      2    2    7
 |      3    3    8
 |      4    4    9
 |      >>> original_df.to_pickle("./dummy.pkl")
 |      
 |      >>> unpickled_df = pd.read_pickle("./dummy.pkl")
 |      >>> unpickled_df
 |         foo  bar
 |      0    0    5
 |      1    1    6
 |      2    2    7
 |      3    3    8
 |      4    4    9
 |      
 |      >>> import os
 |      >>> os.remove("./dummy.pkl")
 |  
 |  to_sql(self, name, con, schema=None, if_exists='fail', index=True, index_label=None, chunksize=None, dtype=None, method=None)
 |      Write records stored in a DataFrame to a SQL database.
 |      
 |      Databases supported by SQLAlchemy [1]_ are supported. Tables can be
 |      newly created, appended to, or overwritten.
 |      
 |      Parameters
 |      ----------
 |      name : string
 |          Name of SQL table.
 |      con : sqlalchemy.engine.Engine or sqlite3.Connection
 |          Using SQLAlchemy makes it possible to use any DB supported by that
 |          library. Legacy support is provided for sqlite3.Connection objects.
 |      schema : string, optional
 |          Specify the schema (if database flavor supports this). If None, use
 |          default schema.
 |      if_exists : {'fail', 'replace', 'append'}, default 'fail'
 |          How to behave if the table already exists.
 |      
 |          * fail: Raise a ValueError.
 |          * replace: Drop the table before inserting new values.
 |          * append: Insert new values to the existing table.
 |      
 |      index : bool, default True
 |          Write DataFrame index as a column. Uses `index_label` as the column
 |          name in the table.
 |      index_label : string or sequence, default None
 |          Column label for index column(s). If None is given (default) and
 |          `index` is True, then the index names are used.
 |          A sequence should be given if the DataFrame uses MultiIndex.
 |      chunksize : int, optional
 |          Rows will be written in batches of this size at a time. By default,
 |          all rows will be written at once.
 |      dtype : dict, optional
 |          Specifying the datatype for columns. The keys should be the column
 |          names and the values should be the SQLAlchemy types or strings for
 |          the sqlite3 legacy mode.
 |      method : {None, 'multi', callable}, default None
 |          Controls the SQL insertion clause used:
 |      
 |          * None : Uses standard SQL ``INSERT`` clause (one per row).
 |          * 'multi': Pass multiple values in a single ``INSERT`` clause.
 |          * callable with signature ``(pd_table, conn, keys, data_iter)``.
 |      
 |          Details and a sample callable implementation can be found in the
 |          section :ref:`insert method <io.sql.method>`.
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Raises
 |      ------
 |      ValueError
 |          When the table already exists and `if_exists` is 'fail' (the
 |          default).
 |      
 |      See Also
 |      --------
 |      read_sql : Read a DataFrame from a table.
 |      
 |      Notes
 |      -----
 |      Timezone aware datetime columns will be written as
 |      ``Timestamp with timezone`` type with SQLAlchemy if supported by the
 |      database. Otherwise, the datetimes will be stored as timezone unaware
 |      timestamps local to the original timezone.
 |      
 |      .. versionadded:: 0.24.0
 |      
 |      References
 |      ----------
 |      .. [1] http://docs.sqlalchemy.org
 |      .. [2] https://www.python.org/dev/peps/pep-0249/
 |      
 |      Examples
 |      --------
 |      
 |      Create an in-memory SQLite database.
 |      
 |      >>> from sqlalchemy import create_engine
 |      >>> engine = create_engine('sqlite://', echo=False)
 |      
 |      Create a table from scratch with 3 rows.
 |      
 |      >>> df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})
 |      >>> df
 |           name
 |      0  User 1
 |      1  User 2
 |      2  User 3
 |      
 |      >>> df.to_sql('users', con=engine)
 |      >>> engine.execute("SELECT * FROM users").fetchall()
 |      [(0, 'User 1'), (1, 'User 2'), (2, 'User 3')]
 |      
 |      >>> df1 = pd.DataFrame({'name' : ['User 4', 'User 5']})
 |      >>> df1.to_sql('users', con=engine, if_exists='append')
 |      >>> engine.execute("SELECT * FROM users").fetchall()
 |      [(0, 'User 1'), (1, 'User 2'), (2, 'User 3'),
 |       (0, 'User 4'), (1, 'User 5')]
 |      
 |      Overwrite the table with just ``df1``.
 |      
 |      >>> df1.to_sql('users', con=engine, if_exists='replace',
 |      ...            index_label='id')
 |      >>> engine.execute("SELECT * FROM users").fetchall()
 |      [(0, 'User 4'), (1, 'User 5')]
 |      
 |      Specify the dtype (especially useful for integers with missing values).
 |      Notice that while pandas is forced to store the data as floating point,
 |      the database supports nullable integers. When fetching the data with
 |      Python, we get back integer scalars.
 |      
 |      >>> df = pd.DataFrame({"A": [1, None, 2]})
 |      >>> df
 |           A
 |      0  1.0
 |      1  NaN
 |      2  2.0
 |      
 |      >>> from sqlalchemy.types import Integer
 |      >>> df.to_sql('integers', con=engine, index=False,
 |      ...           dtype={"A": Integer()})
 |      
 |      >>> engine.execute("SELECT * FROM integers").fetchall()
 |      [(1,), (None,), (2,)]
 |  
 |  to_xarray(self)
 |      Return an xarray object from the pandas object.
 |      
 |      Returns
 |      -------
 |      xarray.DataArray or xarray.Dataset
 |          Data in the pandas structure converted to Dataset if the object is
 |          a DataFrame, or a DataArray if the object is a Series.
 |      
 |      See Also
 |      --------
 |      DataFrame.to_hdf : Write DataFrame to an HDF5 file.
 |      DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
 |      
 |      Notes
 |      -----
 |      See the `xarray docs <http://xarray.pydata.org/en/stable/>`__
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([('falcon', 'bird',  389.0, 2),
 |      ...                    ('parrot', 'bird', 24.0, 2),
 |      ...                    ('lion',   'mammal', 80.5, 4),
 |      ...                    ('monkey', 'mammal', np.nan, 4)],
 |      ...                    columns=['name', 'class', 'max_speed',
 |      ...                             'num_legs'])
 |      >>> df
 |           name   class  max_speed  num_legs
 |      0  falcon    bird      389.0         2
 |      1  parrot    bird       24.0         2
 |      2    lion  mammal       80.5         4
 |      3  monkey  mammal        NaN         4
 |      
 |      >>> df.to_xarray()
 |      <xarray.Dataset>
 |      Dimensions:    (index: 4)
 |      Coordinates:
 |        * index      (index) int64 0 1 2 3
 |      Data variables:
 |          name       (index) object 'falcon' 'parrot' 'lion' 'monkey'
 |          class      (index) object 'bird' 'bird' 'mammal' 'mammal'
 |          max_speed  (index) float64 389.0 24.0 80.5 nan
 |          num_legs   (index) int64 2 2 4 4
 |      
 |      >>> df['max_speed'].to_xarray()
 |      <xarray.DataArray 'max_speed' (index: 4)>
 |      array([389. ,  24. ,  80.5,   nan])
 |      Coordinates:
 |        * index    (index) int64 0 1 2 3
 |      
 |      >>> dates = pd.to_datetime(['2018-01-01', '2018-01-01',
 |      ...                         '2018-01-02', '2018-01-02'])
 |      >>> df_multiindex = pd.DataFrame({'date': dates,
 |      ...                    'animal': ['falcon', 'parrot', 'falcon',
 |      ...                               'parrot'],
 |      ...                    'speed': [350, 18, 361, 15]}).set_index(['date',
 |      ...                                                    'animal'])
 |      >>> df_multiindex
 |                         speed
 |      date       animal
 |      2018-01-01 falcon    350
 |                 parrot     18
 |      2018-01-02 falcon    361
 |                 parrot     15
 |      
 |      >>> df_multiindex.to_xarray()
 |      <xarray.Dataset>
 |      Dimensions:  (animal: 2, date: 2)
 |      Coordinates:
 |        * date     (date) datetime64[ns] 2018-01-01 2018-01-02
 |        * animal   (animal) object 'falcon' 'parrot'
 |      Data variables:
 |          speed    (date, animal) int64 350 18 361 15
 |  
 |  truncate(self, before=None, after=None, axis=None, copy=True)
 |      Truncate a Series or DataFrame before and after some index value.
 |      
 |      This is a useful shorthand for boolean indexing based on index
 |      values above or below certain thresholds.
 |      
 |      Parameters
 |      ----------
 |      before : date, string, int
 |          Truncate all rows before this index value.
 |      after : date, string, int
 |          Truncate all rows after this index value.
 |      axis : {0 or 'index', 1 or 'columns'}, optional
 |          Axis to truncate. Truncates the index (rows) by default.
 |      copy : bool, default True
 |          Return a copy of the truncated section.
 |      
 |      Returns
 |      -------
 |      type of caller
 |          The truncated Series or DataFrame.
 |      
 |      See Also
 |      --------
 |      DataFrame.loc : Select a subset of a DataFrame by label.
 |      DataFrame.iloc : Select a subset of a DataFrame by position.
 |      
 |      Notes
 |      -----
 |      If the index being truncated contains only datetime values,
 |      `before` and `after` may be specified as strings instead of
 |      Timestamps.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
 |      ...                    'B': ['f', 'g', 'h', 'i', 'j'],
 |      ...                    'C': ['k', 'l', 'm', 'n', 'o']},
 |      ...                    index=[1, 2, 3, 4, 5])
 |      >>> df
 |         A  B  C
 |      1  a  f  k
 |      2  b  g  l
 |      3  c  h  m
 |      4  d  i  n
 |      5  e  j  o
 |      
 |      >>> df.truncate(before=2, after=4)
 |         A  B  C
 |      2  b  g  l
 |      3  c  h  m
 |      4  d  i  n
 |      
 |      The columns of a DataFrame can be truncated.
 |      
 |      >>> df.truncate(before="A", after="B", axis="columns")
 |         A  B
 |      1  a  f
 |      2  b  g
 |      3  c  h
 |      4  d  i
 |      5  e  j
 |      
 |      For Series, only rows can be truncated.
 |      
 |      >>> df['A'].truncate(before=2, after=4)
 |      2    b
 |      3    c
 |      4    d
 |      Name: A, dtype: object
 |      
 |      The index values in ``truncate`` can be datetimes or string
 |      dates.
 |      
 |      >>> dates = pd.date_range('2016-01-01', '2016-02-01', freq='s')
 |      >>> df = pd.DataFrame(index=dates, data={'A': 1})
 |      >>> df.tail()
 |                           A
 |      2016-01-31 23:59:56  1
 |      2016-01-31 23:59:57  1
 |      2016-01-31 23:59:58  1
 |      2016-01-31 23:59:59  1
 |      2016-02-01 00:00:00  1
 |      
 |      >>> df.truncate(before=pd.Timestamp('2016-01-05'),
 |      ...             after=pd.Timestamp('2016-01-10')).tail()
 |                           A
 |      2016-01-09 23:59:56  1
 |      2016-01-09 23:59:57  1
 |      2016-01-09 23:59:58  1
 |      2016-01-09 23:59:59  1
 |      2016-01-10 00:00:00  1
 |      
 |      Because the index is a DatetimeIndex containing only dates, we can
 |      specify `before` and `after` as strings. They will be coerced to
 |      Timestamps before truncation.
 |      
 |      >>> df.truncate('2016-01-05', '2016-01-10').tail()
 |                           A
 |      2016-01-09 23:59:56  1
 |      2016-01-09 23:59:57  1
 |      2016-01-09 23:59:58  1
 |      2016-01-09 23:59:59  1
 |      2016-01-10 00:00:00  1
 |      
 |      Note that ``truncate`` assumes a 0 value for any unspecified time
 |      component (midnight). This differs from partial string slicing, which
 |      returns any partially matching dates.
 |      
 |      >>> df.loc['2016-01-05':'2016-01-10', :].tail()
 |                           A
 |      2016-01-10 23:59:55  1
 |      2016-01-10 23:59:56  1
 |      2016-01-10 23:59:57  1
 |      2016-01-10 23:59:58  1
 |      2016-01-10 23:59:59  1
 |  
 |  tshift(self, periods=1, freq=None, axis=0)
 |      Shift the time index, using the index's frequency if available.
 |      
 |      Parameters
 |      ----------
 |      periods : int
 |          Number of periods to move, can be positive or negative
 |      freq : DateOffset, timedelta, or time rule string, default None
 |          Increment to use from the tseries module or time rule (e.g. 'EOM')
 |      axis : int or basestring
 |          Corresponds to the axis that contains the Index
 |      
 |      Returns
 |      -------
 |      shifted : NDFrame
 |      
 |      Notes
 |      -----
 |      If freq is not specified then tries to use the freq or inferred_freq
 |      attributes of the index. If neither of those attributes exist, a
 |      ValueError is thrown
 |  
 |  tz_convert(self, tz, axis=0, level=None, copy=True)
 |      Convert tz-aware axis to target time zone.
 |      
 |      Parameters
 |      ----------
 |      tz : string or pytz.timezone object
 |      axis : the axis to convert
 |      level : int, str, default None
 |          If axis is a MultiIndex, convert a specific level. Otherwise
 |          must be None.
 |      copy : boolean, default True
 |          Also make a copy of the underlying data
 |      
 |      Returns
 |      -------
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the axis is tz-naive.
 |  
 |  tz_localize(self, tz, axis=0, level=None, copy=True, ambiguous='raise', nonexistent='raise')
 |      Localize tz-naive index of a Series or DataFrame to target time zone.
 |      
 |      This operation localizes the Index. To localize the values in a
 |      timezone-naive Series, use :meth:`Series.dt.tz_localize`.
 |      
 |      Parameters
 |      ----------
 |      tz : string or pytz.timezone object
 |      axis : the axis to localize
 |      level : int, str, default None
 |          If axis is a MultiIndex, localize a specific level. Otherwise
 |          must be None.
 |      copy : boolean, default True
 |          Also make a copy of the underlying data
 |      ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
 |          When clocks moved backward due to DST, ambiguous times may arise.
 |          For example in Central European Time (UTC+01), when going from
 |          03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at
 |          00:30:00 UTC and at 01:30:00 UTC. In such a situation, the
 |          `ambiguous` parameter dictates how ambiguous times should be
 |          handled.
 |      
 |          - 'infer' will attempt to infer fall dst-transition hours based on
 |            order
 |          - bool-ndarray where True signifies a DST time, False designates
 |            a non-DST time (note that this flag is only applicable for
 |            ambiguous times)
 |          - 'NaT' will return NaT where there are ambiguous times
 |          - 'raise' will raise an AmbiguousTimeError if there are ambiguous
 |            times
 |      nonexistent : str, default 'raise'
 |          A nonexistent time does not exist in a particular timezone
 |          where clocks moved forward due to DST. Valid values are:
 |      
 |          - 'shift_forward' will shift the nonexistent time forward to the
 |            closest existing time
 |          - 'shift_backward' will shift the nonexistent time backward to the
 |            closest existing time
 |          - 'NaT' will return NaT where there are nonexistent times
 |          - timedelta objects will shift nonexistent times by the timedelta
 |          - 'raise' will raise an NonExistentTimeError if there are
 |            nonexistent times
 |      
 |          .. versionadded:: 0.24.0
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Same type as the input.
 |      
 |      Raises
 |      ------
 |      TypeError
 |          If the TimeSeries is tz-aware and tz is not None.
 |      
 |      Examples
 |      --------
 |      
 |      Localize local times:
 |      
 |      >>> s = pd.Series([1],
 |      ... index=pd.DatetimeIndex(['2018-09-15 01:30:00']))
 |      >>> s.tz_localize('CET')
 |      2018-09-15 01:30:00+02:00    1
 |      dtype: int64
 |      
 |      Be careful with DST changes. When there is sequential data, pandas
 |      can infer the DST time:
 |      
 |      >>> s = pd.Series(range(7), index=pd.DatetimeIndex([
 |      ... '2018-10-28 01:30:00',
 |      ... '2018-10-28 02:00:00',
 |      ... '2018-10-28 02:30:00',
 |      ... '2018-10-28 02:00:00',
 |      ... '2018-10-28 02:30:00',
 |      ... '2018-10-28 03:00:00',
 |      ... '2018-10-28 03:30:00']))
 |      >>> s.tz_localize('CET', ambiguous='infer')
 |      2018-10-28 01:30:00+02:00    0
 |      2018-10-28 02:00:00+02:00    1
 |      2018-10-28 02:30:00+02:00    2
 |      2018-10-28 02:00:00+01:00    3
 |      2018-10-28 02:30:00+01:00    4
 |      2018-10-28 03:00:00+01:00    5
 |      2018-10-28 03:30:00+01:00    6
 |      dtype: int64
 |      
 |      In some cases, inferring the DST is impossible. In such cases, you can
 |      pass an ndarray to the ambiguous parameter to set the DST explicitly
 |      
 |      >>> s = pd.Series(range(3), index=pd.DatetimeIndex([
 |      ... '2018-10-28 01:20:00',
 |      ... '2018-10-28 02:36:00',
 |      ... '2018-10-28 03:46:00']))
 |      >>> s.tz_localize('CET', ambiguous=np.array([True, True, False]))
 |      2018-10-28 01:20:00+02:00    0
 |      2018-10-28 02:36:00+02:00    1
 |      2018-10-28 03:46:00+01:00    2
 |      dtype: int64
 |      
 |      If the DST transition causes nonexistent times, you can shift these
 |      dates forward or backward with a timedelta object or `'shift_forward'`
 |      or `'shift_backward'`.
 |      
 |      >>> s = pd.Series(range(2), index=pd.DatetimeIndex([
 |      ... '2015-03-29 02:30:00',
 |      ... '2015-03-29 03:30:00']))
 |      >>> s.tz_localize('Europe/Warsaw', nonexistent='shift_forward')
 |      2015-03-29 03:00:00+02:00    0
 |      2015-03-29 03:30:00+02:00    1
 |      dtype: int64
 |      >>> s.tz_localize('Europe/Warsaw', nonexistent='shift_backward')
 |      2015-03-29 01:59:59.999999999+01:00    0
 |      2015-03-29 03:30:00+02:00              1
 |      dtype: int64
 |      >>> s.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H'))
 |      2015-03-29 03:30:00+02:00    0
 |      2015-03-29 03:30:00+02:00    1
 |      dtype: int64
 |  
 |  where(self, cond, other=nan, inplace=False, axis=None, level=None, errors='raise', try_cast=False, raise_on_error=None)
 |      Replace values where the condition is False.
 |      
 |      Parameters
 |      ----------
 |      cond : boolean NDFrame, array-like, or callable
 |          Where `cond` is True, keep the original value. Where
 |          False, replace with corresponding value from `other`.
 |          If `cond` is callable, it is computed on the NDFrame and
 |          should return boolean NDFrame or array. The callable must
 |          not change input NDFrame (though pandas doesn't check it).
 |      
 |          .. versionadded:: 0.18.1
 |              A callable can be used as cond.
 |      
 |      other : scalar, NDFrame, or callable
 |          Entries where `cond` is False are replaced with
 |          corresponding value from `other`.
 |          If other is callable, it is computed on the NDFrame and
 |          should return scalar or NDFrame. The callable must not
 |          change input NDFrame (though pandas doesn't check it).
 |      
 |          .. versionadded:: 0.18.1
 |              A callable can be used as other.
 |      
 |      inplace : boolean, default False
 |          Whether to perform the operation in place on the data.
 |      axis : int, default None
 |          Alignment axis if needed.
 |      level : int, default None
 |          Alignment level if needed.
 |      errors : str, {'raise', 'ignore'}, default `raise`
 |          Note that currently this parameter won't affect
 |          the results and will always coerce to a suitable dtype.
 |      
 |          - `raise` : allow exceptions to be raised.
 |          - `ignore` : suppress exceptions. On error return original object.
 |      
 |      try_cast : boolean, default False
 |          Try to cast the result back to the input type (if possible).
 |      raise_on_error : boolean, default True
 |          Whether to raise on invalid data types (e.g. trying to where on
 |          strings).
 |      
 |          .. deprecated:: 0.21.0
 |      
 |             Use `errors`.
 |      
 |      Returns
 |      -------
 |      wh : same type as caller
 |      
 |      See Also
 |      --------
 |      :func:`DataFrame.mask` : Return an object of same shape as
 |          self.
 |      
 |      Notes
 |      -----
 |      The where method is an application of the if-then idiom. For each
 |      element in the calling DataFrame, if ``cond`` is ``True`` the
 |      element is used; otherwise the corresponding element from the DataFrame
 |      ``other`` is used.
 |      
 |      The signature for :func:`DataFrame.where` differs from
 |      :func:`numpy.where`. Roughly ``df1.where(m, df2)`` is equivalent to
 |      ``np.where(m, df1, df2)``.
 |      
 |      For further details and examples see the ``where`` documentation in
 |      :ref:`indexing <indexing.where_mask>`.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series(range(5))
 |      >>> s.where(s > 0)
 |      0    NaN
 |      1    1.0
 |      2    2.0
 |      3    3.0
 |      4    4.0
 |      dtype: float64
 |      
 |      >>> s.mask(s > 0)
 |      0    0.0
 |      1    NaN
 |      2    NaN
 |      3    NaN
 |      4    NaN
 |      dtype: float64
 |      
 |      >>> s.where(s > 1, 10)
 |      0    10
 |      1    10
 |      2    2
 |      3    3
 |      4    4
 |      dtype: int64
 |      
 |      >>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
 |      >>> m = df % 3 == 0
 |      >>> df.where(m, -df)
 |         A  B
 |      0  0 -1
 |      1 -2  3
 |      2 -4 -5
 |      3  6 -7
 |      4 -8  9
 |      >>> df.where(m, -df) == np.where(m, df, -df)
 |            A     B
 |      0  True  True
 |      1  True  True
 |      2  True  True
 |      3  True  True
 |      4  True  True
 |      >>> df.where(m, -df) == df.mask(~m, -df)
 |            A     B
 |      0  True  True
 |      1  True  True
 |      2  True  True
 |      3  True  True
 |      4  True  True
 |  
 |  xs(self, key, axis=0, level=None, drop_level=True)
 |      Return cross-section from the Series/DataFrame.
 |      
 |      This method takes a `key` argument to select data at a particular
 |      level of a MultiIndex.
 |      
 |      Parameters
 |      ----------
 |      key : label or tuple of label
 |          Label contained in the index, or partially in a MultiIndex.
 |      axis : {0 or 'index', 1 or 'columns'}, default 0
 |          Axis to retrieve cross-section on.
 |      level : object, defaults to first n levels (n=1 or len(key))
 |          In case of a key partially contained in a MultiIndex, indicate
 |          which levels are used. Levels can be referred by label or position.
 |      drop_level : bool, default True
 |          If False, returns object with same levels as self.
 |      
 |      Returns
 |      -------
 |      Series or DataFrame
 |          Cross-section from the original Series or DataFrame
 |          corresponding to the selected index levels.
 |      
 |      See Also
 |      --------
 |      DataFrame.loc : Access a group of rows and columns
 |          by label(s) or a boolean array.
 |      DataFrame.iloc : Purely integer-location based indexing
 |          for selection by position.
 |      
 |      Notes
 |      -----
 |      `xs` can not be used to set values.
 |      
 |      MultiIndex Slicers is a generic way to get/set values on
 |      any level or levels.
 |      It is a superset of `xs` functionality, see
 |      :ref:`MultiIndex Slicers <advanced.mi_slicers>`.
 |      
 |      Examples
 |      --------
 |      >>> d = {'num_legs': [4, 4, 2, 2],
 |      ...      'num_wings': [0, 0, 2, 2],
 |      ...      'class': ['mammal', 'mammal', 'mammal', 'bird'],
 |      ...      'animal': ['cat', 'dog', 'bat', 'penguin'],
 |      ...      'locomotion': ['walks', 'walks', 'flies', 'walks']}
 |      >>> df = pd.DataFrame(data=d)
 |      >>> df = df.set_index(['class', 'animal', 'locomotion'])
 |      >>> df
 |                                 num_legs  num_wings
 |      class  animal  locomotion
 |      mammal cat     walks              4          0
 |             dog     walks              4          0
 |             bat     flies              2          2
 |      bird   penguin walks              2          2
 |      
 |      Get values at specified index
 |      
 |      >>> df.xs('mammal')
 |                         num_legs  num_wings
 |      animal locomotion
 |      cat    walks              4          0
 |      dog    walks              4          0
 |      bat    flies              2          2
 |      
 |      Get values at several indexes
 |      
 |      >>> df.xs(('mammal', 'dog'))
 |                  num_legs  num_wings
 |      locomotion
 |      walks              4          0
 |      
 |      Get values at specified index and level
 |      
 |      >>> df.xs('cat', level=1)
 |                         num_legs  num_wings
 |      class  locomotion
 |      mammal walks              4          0
 |      
 |      Get values at several indexes and levels
 |      
 |      >>> df.xs(('bird', 'walks'),
 |      ...       level=[0, 'locomotion'])
 |               num_legs  num_wings
 |      animal
 |      penguin         2          2
 |      
 |      Get values at specified column and axis
 |      
 |      >>> df.xs('num_wings', axis=1)
 |      class   animal   locomotion
 |      mammal  cat      walks         0
 |              dog      walks         0
 |              bat      flies         2
 |      bird    penguin  walks         2
 |      Name: num_wings, dtype: int64
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from pandas.core.generic.NDFrame:
 |  
 |  at
 |      Access a single value for a row/column label pair.
 |      
 |      Similar to ``loc``, in that both provide label-based lookups. Use
 |      ``at`` if you only need to get or set a single value in a DataFrame
 |      or Series.
 |      
 |      Raises
 |      ------
 |      KeyError
 |          When label does not exist in DataFrame
 |      
 |      See Also
 |      --------
 |      DataFrame.iat : Access a single value for a row/column pair by integer
 |          position.
 |      DataFrame.loc : Access a group of rows and columns by label(s).
 |      Series.at : Access a single value using a label.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
 |      ...                   index=[4, 5, 6], columns=['A', 'B', 'C'])
 |      >>> df
 |          A   B   C
 |      4   0   2   3
 |      5   0   4   1
 |      6  10  20  30
 |      
 |      Get value at specified row/column pair
 |      
 |      >>> df.at[4, 'B']
 |      2
 |      
 |      Set value at specified row/column pair
 |      
 |      >>> df.at[4, 'B'] = 10
 |      >>> df.at[4, 'B']
 |      10
 |      
 |      Get value within a Series
 |      
 |      >>> df.loc[5].at['B']
 |      4
 |  
 |  blocks
 |      Internal property, property synonym for as_blocks().
 |      
 |      .. deprecated:: 0.21.0
 |  
 |  dtypes
 |      Return the dtypes in the DataFrame.
 |      
 |      This returns a Series with the data type of each column.
 |      The result's index is the original DataFrame's columns. Columns
 |      with mixed types are stored with the ``object`` dtype. See
 |      :ref:`the User Guide <basics.dtypes>` for more.
 |      
 |      Returns
 |      -------
 |      pandas.Series
 |          The data type of each column.
 |      
 |      See Also
 |      --------
 |      pandas.DataFrame.ftypes : Dtype and sparsity information.
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame({'float': [1.0],
 |      ...                    'int': [1],
 |      ...                    'datetime': [pd.Timestamp('20180310')],
 |      ...                    'string': ['foo']})
 |      >>> df.dtypes
 |      float              float64
 |      int                  int64
 |      datetime    datetime64[ns]
 |      string              object
 |      dtype: object
 |  
 |  empty
 |      Indicator whether DataFrame is empty.
 |      
 |      True if DataFrame is entirely empty (no items), meaning any of the
 |      axes are of length 0.
 |      
 |      Returns
 |      -------
 |      bool
 |          If DataFrame is empty, return True, if not return False.
 |      
 |      See Also
 |      --------
 |      pandas.Series.dropna
 |      pandas.DataFrame.dropna
 |      
 |      Notes
 |      -----
 |      If DataFrame contains only NaNs, it is still not considered empty. See
 |      the example below.
 |      
 |      Examples
 |      --------
 |      An example of an actual empty DataFrame. Notice the index is empty:
 |      
 |      >>> df_empty = pd.DataFrame({'A' : []})
 |      >>> df_empty
 |      Empty DataFrame
 |      Columns: [A]
 |      Index: []
 |      >>> df_empty.empty
 |      True
 |      
 |      If we only have NaNs in our DataFrame, it is not considered empty! We
 |      will need to drop the NaNs to make the DataFrame empty:
 |      
 |      >>> df = pd.DataFrame({'A' : [np.nan]})
 |      >>> df
 |          A
 |      0 NaN
 |      >>> df.empty
 |      False
 |      >>> df.dropna().empty
 |      True
 |  
 |  ftypes
 |      Return the ftypes (indication of sparse/dense and dtype) in DataFrame.
 |      
 |      This returns a Series with the data type of each column.
 |      The result's index is the original DataFrame's columns. Columns
 |      with mixed types are stored with the ``object`` dtype.  See
 |      :ref:`the User Guide <basics.dtypes>` for more.
 |      
 |      Returns
 |      -------
 |      pandas.Series
 |          The data type and indication of sparse/dense of each column.
 |      
 |      See Also
 |      --------
 |      pandas.DataFrame.dtypes: Series with just dtype information.
 |      pandas.SparseDataFrame : Container for sparse tabular data.
 |      
 |      Notes
 |      -----
 |      Sparse data should have the same dtypes as its dense representation.
 |      
 |      Examples
 |      --------
 |      >>> arr = np.random.RandomState(0).randn(100, 4)
 |      >>> arr[arr < .8] = np.nan
 |      >>> pd.DataFrame(arr).ftypes
 |      0    float64:dense
 |      1    float64:dense
 |      2    float64:dense
 |      3    float64:dense
 |      dtype: object
 |      
 |      >>> pd.SparseDataFrame(arr).ftypes
 |      0    float64:sparse
 |      1    float64:sparse
 |      2    float64:sparse
 |      3    float64:sparse
 |      dtype: object
 |  
 |  iat
 |      Access a single value for a row/column pair by integer position.
 |      
 |      Similar to ``iloc``, in that both provide integer-based lookups. Use
 |      ``iat`` if you only need to get or set a single value in a DataFrame
 |      or Series.
 |      
 |      Raises
 |      ------
 |      IndexError
 |          When integer position is out of bounds
 |      
 |      See Also
 |      --------
 |      DataFrame.at : Access a single value for a row/column label pair.
 |      DataFrame.loc : Access a group of rows and columns by label(s).
 |      DataFrame.iloc : Access a group of rows and columns by integer position(s).
 |      
 |      Examples
 |      --------
 |      >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
 |      ...                   columns=['A', 'B', 'C'])
 |      >>> df
 |          A   B   C
 |      0   0   2   3
 |      1   0   4   1
 |      2  10  20  30
 |      
 |      Get value at specified row/column pair
 |      
 |      >>> df.iat[1, 2]
 |      1
 |      
 |      Set value at specified row/column pair
 |      
 |      >>> df.iat[1, 2] = 10
 |      >>> df.iat[1, 2]
 |      10
 |      
 |      Get value within a series
 |      
 |      >>> df.loc[0].iat[1]
 |      2
 |  
 |  iloc
 |      Purely integer-location based indexing for selection by position.
 |      
 |      ``.iloc[]`` is primarily integer position based (from ``0`` to
 |      ``length-1`` of the axis), but may also be used with a boolean
 |      array.
 |      
 |      Allowed inputs are:
 |      
 |      - An integer, e.g. ``5``.
 |      - A list or array of integers, e.g. ``[4, 3, 0]``.
 |      - A slice object with ints, e.g. ``1:7``.
 |      - A boolean array.
 |      - A ``callable`` function with one argument (the calling Series, DataFrame
 |        or Panel) and that returns valid output for indexing (one of the above).
 |        This is useful in method chains, when you don't have a reference to the
 |        calling object, but would like to base your selection on some value.
 |      
 |      ``.iloc`` will raise ``IndexError`` if a requested indexer is
 |      out-of-bounds, except *slice* indexers which allow out-of-bounds
 |      indexing (this conforms with python/numpy *slice* semantics).
 |      
 |      See more at :ref:`Selection by Position <indexing.integer>`.
 |      
 |      See Also
 |      --------
 |      DataFrame.iat : Fast integer location scalar accessor.
 |      DataFrame.loc : Purely label-location based indexer for selection by label.
 |      Series.iloc : Purely integer-location based indexing for
 |                     selection by position.
 |      
 |      Examples
 |      --------
 |      
 |      >>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
 |      ...           {'a': 100, 'b': 200, 'c': 300, 'd': 400},
 |      ...           {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
 |      >>> df = pd.DataFrame(mydict)
 |      >>> df
 |            a     b     c     d
 |      0     1     2     3     4
 |      1   100   200   300   400
 |      2  1000  2000  3000  4000
 |      
 |      **Indexing just the rows**
 |      
 |      With a scalar integer.
 |      
 |      >>> type(df.iloc[0])
 |      <class 'pandas.core.series.Series'>
 |      >>> df.iloc[0]
 |      a    1
 |      b    2
 |      c    3
 |      d    4
 |      Name: 0, dtype: int64
 |      
 |      With a list of integers.
 |      
 |      >>> df.iloc[[0]]
 |         a  b  c  d
 |      0  1  2  3  4
 |      >>> type(df.iloc[[0]])
 |      <class 'pandas.core.frame.DataFrame'>
 |      
 |      >>> df.iloc[[0, 1]]
 |           a    b    c    d
 |      0    1    2    3    4
 |      1  100  200  300  400
 |      
 |      With a `slice` object.
 |      
 |      >>> df.iloc[:3]
 |            a     b     c     d
 |      0     1     2     3     4
 |      1   100   200   300   400
 |      2  1000  2000  3000  4000
 |      
 |      With a boolean mask the same length as the index.
 |      
 |      >>> df.iloc[[True, False, True]]
 |            a     b     c     d
 |      0     1     2     3     4
 |      2  1000  2000  3000  4000
 |      
 |      With a callable, useful in method chains. The `x` passed
 |      to the ``lambda`` is the DataFrame being sliced. This selects
 |      the rows whose index label is even.
 |      
 |      >>> df.iloc[lambda x: x.index % 2 == 0]
 |            a     b     c     d
 |      0     1     2     3     4
 |      2  1000  2000  3000  4000
 |      
 |      **Indexing both axes**
 |      
 |      You can mix the indexer types for the index and columns. Use ``:`` to
 |      select the entire axis.
 |      
 |      With scalar integers.
 |      
 |      >>> df.iloc[0, 1]
 |      2
 |      
 |      With lists of integers.
 |      
 |      >>> df.iloc[[0, 2], [1, 3]]
 |            b     d
 |      0     2     4
 |      2  2000  4000
 |      
 |      With `slice` objects.
 |      
 |      >>> df.iloc[1:3, 0:3]
 |            a     b     c
 |      1   100   200   300
 |      2  1000  2000  3000
 |      
 |      With a boolean array whose length matches the columns.
 |      
 |      >>> df.iloc[:, [True, False, True, False]]
 |            a     c
 |      0     1     3
 |      1   100   300
 |      2  1000  3000
 |      
 |      With a callable function that expects the Series or DataFrame.
 |      
 |      >>> df.iloc[:, lambda df: [0, 2]]
 |            a     c
 |      0     1     3
 |      1   100   300
 |      2  1000  3000
 |  
 |  is_copy
 |      Return the copy.
 |  
 |  ix
 |      A primarily label-location based indexer, with integer position
 |      fallback.
 |      
 |      Warning: Starting in 0.20.0, the .ix indexer is deprecated, in
 |      favor of the more strict .iloc and .loc indexers.
 |      
 |      ``.ix[]`` supports mixed integer and label based access. It is
 |      primarily label based, but will fall back to integer positional
 |      access unless the corresponding axis is of integer type.
 |      
 |      ``.ix`` is the most general indexer and will support any of the
 |      inputs in ``.loc`` and ``.iloc``. ``.ix`` also supports floating
 |      point label schemes. ``.ix`` is exceptionally useful when dealing
 |      with mixed positional and label based hierarchical indexes.
 |      
 |      However, when an axis is integer based, ONLY label based access
 |      and not positional access is supported. Thus, in such cases, it's
 |      usually better to be explicit and use ``.iloc`` or ``.loc``.
 |      
 |      See more at :ref:`Advanced Indexing <advanced>`.
 |  
 |  loc
 |      Access a group of rows and columns by label(s) or a boolean array.
 |      
 |      ``.loc[]`` is primarily label based, but may also be used with a
 |      boolean array.
 |      
 |      Allowed inputs are:
 |      
 |      - A single label, e.g. ``5`` or ``'a'``, (note that ``5`` is
 |        interpreted as a *label* of the index, and **never** as an
 |        integer position along the index).
 |      - A list or array of labels, e.g. ``['a', 'b', 'c']``.
 |      - A slice object with labels, e.g. ``'a':'f'``.
 |      
 |        .. warning:: Note that contrary to usual python slices, **both** the
 |            start and the stop are included
 |      
 |      - A boolean array of the same length as the axis being sliced,
 |        e.g. ``[True, False, True]``.
 |      - A ``callable`` function with one argument (the calling Series, DataFrame
 |        or Panel) and that returns valid output for indexing (one of the above)
 |      
 |      See more at :ref:`Selection by Label <indexing.label>`
 |      
 |      Raises
 |      ------
 |      KeyError:
 |          when any items are not found
 |      
 |      See Also
 |      --------
 |      DataFrame.at : Access a single value for a row/column label pair.
 |      DataFrame.iloc : Access group of rows and columns by integer position(s).
 |      DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the
 |          Series/DataFrame.
 |      Series.loc : Access group of values using labels.
 |      
 |      Examples
 |      --------
 |      **Getting values**
 |      
 |      >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
 |      ...      index=['cobra', 'viper', 'sidewinder'],
 |      ...      columns=['max_speed', 'shield'])
 |      >>> df
 |                  max_speed  shield
 |      cobra               1       2
 |      viper               4       5
 |      sidewinder          7       8
 |      
 |      Single label. Note this returns the row as a Series.
 |      
 |      >>> df.loc['viper']
 |      max_speed    4
 |      shield       5
 |      Name: viper, dtype: int64
 |      
 |      List of labels. Note using ``[[]]`` returns a DataFrame.
 |      
 |      >>> df.loc[['viper', 'sidewinder']]
 |                  max_speed  shield
 |      viper               4       5
 |      sidewinder          7       8
 |      
 |      Single label for row and column
 |      
 |      >>> df.loc['cobra', 'shield']
 |      2
 |      
 |      Slice with labels for row and single label for column. As mentioned
 |      above, note that both the start and stop of the slice are included.
 |      
 |      >>> df.loc['cobra':'viper', 'max_speed']
 |      cobra    1
 |      viper    4
 |      Name: max_speed, dtype: int64
 |      
 |      Boolean list with the same length as the row axis
 |      
 |      >>> df.loc[[False, False, True]]
 |                  max_speed  shield
 |      sidewinder          7       8
 |      
 |      Conditional that returns a boolean Series
 |      
 |      >>> df.loc[df['shield'] > 6]
 |                  max_speed  shield
 |      sidewinder          7       8
 |      
 |      Conditional that returns a boolean Series with column labels specified
 |      
 |      >>> df.loc[df['shield'] > 6, ['max_speed']]
 |                  max_speed
 |      sidewinder          7
 |      
 |      Callable that returns a boolean Series
 |      
 |      >>> df.loc[lambda df: df['shield'] == 8]
 |                  max_speed  shield
 |      sidewinder          7       8
 |      
 |      **Setting values**
 |      
 |      Set value for all items matching the list of labels
 |      
 |      >>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
 |      >>> df
 |                  max_speed  shield
 |      cobra               1       2
 |      viper               4      50
 |      sidewinder          7      50
 |      
 |      Set value for an entire row
 |      
 |      >>> df.loc['cobra'] = 10
 |      >>> df
 |                  max_speed  shield
 |      cobra              10      10
 |      viper               4      50
 |      sidewinder          7      50
 |      
 |      Set value for an entire column
 |      
 |      >>> df.loc[:, 'max_speed'] = 30
 |      >>> df
 |                  max_speed  shield
 |      cobra              30      10
 |      viper              30      50
 |      sidewinder         30      50
 |      
 |      Set value for rows matching callable condition
 |      
 |      >>> df.loc[df['shield'] > 35] = 0
 |      >>> df
 |                  max_speed  shield
 |      cobra              30      10
 |      viper               0       0
 |      sidewinder          0       0
 |      
 |      **Getting values on a DataFrame with an index that has integer labels**
 |      
 |      Another example using integers for the index
 |      
 |      >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
 |      ...      index=[7, 8, 9], columns=['max_speed', 'shield'])
 |      >>> df
 |         max_speed  shield
 |      7          1       2
 |      8          4       5
 |      9          7       8
 |      
 |      Slice with integer labels for rows. As mentioned above, note that both
 |      the start and stop of the slice are included.
 |      
 |      >>> df.loc[7:9]
 |         max_speed  shield
 |      7          1       2
 |      8          4       5
 |      9          7       8
 |      
 |      **Getting values with a MultiIndex**
 |      
 |      A number of examples using a DataFrame with a MultiIndex
 |      
 |      >>> tuples = [
 |      ...    ('cobra', 'mark i'), ('cobra', 'mark ii'),
 |      ...    ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
 |      ...    ('viper', 'mark ii'), ('viper', 'mark iii')
 |      ... ]
 |      >>> index = pd.MultiIndex.from_tuples(tuples)
 |      >>> values = [[12, 2], [0, 4], [10, 20],
 |      ...         [1, 4], [7, 1], [16, 36]]
 |      >>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
 |      >>> df
 |                           max_speed  shield
 |      cobra      mark i           12       2
 |                 mark ii           0       4
 |      sidewinder mark i           10      20
 |                 mark ii           1       4
 |      viper      mark ii           7       1
 |                 mark iii         16      36
 |      
 |      Single label. Note this returns a DataFrame with a single index.
 |      
 |      >>> df.loc['cobra']
 |               max_speed  shield
 |      mark i          12       2
 |      mark ii          0       4
 |      
 |      Single index tuple. Note this returns a Series.
 |      
 |      >>> df.loc[('cobra', 'mark ii')]
 |      max_speed    0
 |      shield       4
 |      Name: (cobra, mark ii), dtype: int64
 |      
 |      Single label for row and column. Similar to passing in a tuple, this
 |      returns a Series.
 |      
 |      >>> df.loc['cobra', 'mark i']
 |      max_speed    12
 |      shield        2
 |      Name: (cobra, mark i), dtype: int64
 |      
 |      Single tuple. Note using ``[[]]`` returns a DataFrame.
 |      
 |      >>> df.loc[[('cobra', 'mark ii')]]
 |                     max_speed  shield
 |      cobra mark ii          0       4
 |      
 |      Single tuple for the index with a single label for the column
 |      
 |      >>> df.loc[('cobra', 'mark i'), 'shield']
 |      2
 |      
 |      Slice from index tuple to single label
 |      
 |      >>> df.loc[('cobra', 'mark i'):'viper']
 |                           max_speed  shield
 |      cobra      mark i           12       2
 |                 mark ii           0       4
 |      sidewinder mark i           10      20
 |                 mark ii           1       4
 |      viper      mark ii           7       1
 |                 mark iii         16      36
 |      
 |      Slice from index tuple to index tuple
 |      
 |      >>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
 |                          max_speed  shield
 |      cobra      mark i          12       2
 |                 mark ii          0       4
 |      sidewinder mark i          10      20
 |                 mark ii          1       4
 |      viper      mark ii          7       1
 |  
 |  ndim
 |      Return an int representing the number of axes / array dimensions.
 |      
 |      Return 1 if Series. Otherwise return 2 if DataFrame.
 |      
 |      See Also
 |      --------
 |      ndarray.ndim : Number of array dimensions.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
 |      >>> s.ndim
 |      1
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.ndim
 |      2
 |  
 |  size
 |      Return an int representing the number of elements in this object.
 |      
 |      Return the number of rows if Series. Otherwise return the number of
 |      rows times number of columns if DataFrame.
 |      
 |      See Also
 |      --------
 |      ndarray.size : Number of elements in the array.
 |      
 |      Examples
 |      --------
 |      >>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
 |      >>> s.size
 |      3
 |      
 |      >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
 |      >>> df.size
 |      4
 |  
 |  values
 |      Return a Numpy representation of the DataFrame.
 |      
 |      .. warning::
 |      
 |         We recommend using :meth:`DataFrame.to_numpy` instead.
 |      
 |      Only the values in the DataFrame will be returned, the axes labels
 |      will be removed.
 |      
 |      Returns
 |      -------
 |      numpy.ndarray
 |          The values of the DataFrame.
 |      
 |      See Also
 |      --------
 |      DataFrame.to_numpy : Recommended alternative to this method.
 |      pandas.DataFrame.index : Retrieve the index labels.
 |      pandas.DataFrame.columns : Retrieving the column names.
 |      
 |      Notes
 |      -----
 |      The dtype will be a lower-common-denominator dtype (implicit
 |      upcasting); that is to say if the dtypes (even of numeric types)
 |      are mixed, the one that accommodates all will be chosen. Use this
 |      with care if you are not dealing with the blocks.
 |      
 |      e.g. If the dtypes are float16 and float32, dtype will be upcast to
 |      float32.  If dtypes are int32 and uint8, dtype will be upcast to
 |      int32. By :func:`numpy.find_common_type` convention, mixing int64
 |      and uint64 will result in a float64 dtype.
 |      
 |      Examples
 |      --------
 |      A DataFrame where all columns are the same type (e.g., int64) results
 |      in an array of the same type.
 |      
 |      >>> df = pd.DataFrame({'age':    [ 3,  29],
 |      ...                    'height': [94, 170],
 |      ...                    'weight': [31, 115]})
 |      >>> df
 |         age  height  weight
 |      0    3      94      31
 |      1   29     170     115
 |      >>> df.dtypes
 |      age       int64
 |      height    int64
 |      weight    int64
 |      dtype: object
 |      >>> df.values
 |      array([[  3,  94,  31],
 |             [ 29, 170, 115]], dtype=int64)
 |      
 |      A DataFrame with mixed type columns (e.g., str/object, int64, float32)
 |      results in an ndarray of the broadest type that accommodates these
 |      mixed types (e.g., object).
 |      
 |      >>> df2 = pd.DataFrame([('parrot',   24.0, 'second'),
 |      ...                     ('lion',     80.5, 1),
 |      ...                     ('monkey', np.nan, None)],
 |      ...                   columns=('name', 'max_speed', 'rank'))
 |      >>> df2.dtypes
 |      name          object
 |      max_speed    float64
 |      rank          object
 |      dtype: object
 |      >>> df2.values
 |      array([['parrot', 24.0, 'second'],
 |             ['lion', 80.5, 1],
 |             ['monkey', nan, None]], dtype=object)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes inherited from pandas.core.generic.NDFrame:
 |  
 |  __array_priority__ = 1000
 |  
 |  timetuple = None
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from pandas.core.base.PandasObject:
 |  
 |  __sizeof__(self)
 |      Generates the total memory usage for an object that returns
 |      either a value or Series of values
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from pandas.core.base.StringMixin:
 |  
 |  __bytes__(self)
 |      Return a string representation for a particular object.
 |      
 |      Invoked by bytes(obj) in py3 only.
 |      Yields a bytestring in both py2/py3.
 |  
 |  __repr__(self)
 |      Return a string representation for a particular object.
 |      
 |      Yields Bytestring in Py2, Unicode String in py3.
 |  
 |  __str__(self)
 |      Return a string representation for a particular Object
 |      
 |      Invoked by str(df) in both py2/py3.
 |      Yields Bytestring in Py2, Unicode String in py3.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from pandas.core.base.StringMixin:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from pandas.core.accessor.DirNamesMixin:
 |  
 |  __dir__(self)
 |      Provide method name lookup and completion
 |      Only provide 'public' methods

Tons of information, but the essential point is that we can use a Numpy array as input. So let's just try that and see what comes out. Our list of coordinate arrays only contains x and y positions but no time, so we first have to add a time column to each array. Let's test this on the first array:

In [34]:
first_array = centroids_time[0].copy()
#first_array

We now append a column to this array that contains the time of this frame:

In [35]:
time = 0
first_array = np.c_[first_array, time * np.ones(first_array.shape[0])]
#first_array

Let's do the same thing for all time points, simply using a list comprehension:

In [33]:
centroids_time2 = [np.c_[x, ind * np.ones(x.shape[0])] for ind, x in enumerate(centroids_time)]
#centroids_time2[6]

Now we can concatenate this list of arrays into one large array, which we will then turn into a dataframe:

In [17]:
centroids_time2 = np.concatenate(centroids_time2)
centroids_time2
Out[17]:
array([[ 44.60991736, 617.96859504,   0.        ],
       [ 66.87583893, 525.50503356,   0.        ],
       [ 69.8377193 , 214.86403509,   0.        ],
       ...,
       [392.24482109, 507.03578154,   9.        ],
       [397.68828452, 456.37656904,   9.        ],
       [401.73901099, 294.92582418,   9.        ]])

Let's simply pass that array to Pandas:

In [18]:
pd.DataFrame(centroids_time2)
Out[18]:
0 1 2
0 44.609917 617.968595 0.0
1 66.875839 525.505034 0.0
2 69.837719 214.864035 0.0
3 84.217116 344.353407 0.0
4 87.518409 610.238586 0.0
5 92.680292 443.620438 0.0
6 102.700752 536.621053 0.0
7 111.597923 308.824926 0.0
8 110.965699 656.401055 0.0
9 111.904153 96.333866 0.0
10 124.475000 385.454167 0.0
11 126.619847 177.270229 0.0
12 125.789174 243.280627 0.0
13 133.640000 499.158182 0.0
14 135.221003 587.832288 0.0
15 140.683748 445.540264 0.0
16 155.810651 652.556213 0.0
17 163.572843 113.851485 0.0
18 161.836915 332.108723 0.0
19 162.773829 552.245557 0.0
20 166.139059 20.282209 0.0
21 177.107994 404.063114 0.0
22 189.304945 463.741758 0.0
23 189.364353 511.083596 0.0
24 193.846939 272.607143 0.0
25 192.450355 627.601064 0.0
26 203.456770 201.928222 0.0
27 210.922010 555.934142 0.0
28 215.804094 59.897661 0.0
29 218.667190 328.299843 0.0
... ... ... ...
591 261.488584 18.415525 9.0
592 256.252083 521.881250 9.0
593 277.380328 38.986885 9.0
594 264.311734 404.861646 9.0
595 269.465693 116.259854 9.0
596 268.803468 351.578035 9.0
597 270.057569 651.000000 9.0
598 268.963362 607.433190 9.0
599 283.126338 549.415418 9.0
600 289.171131 451.400298 9.0
601 304.688552 489.774411 9.0
602 308.226389 247.875000 9.0
603 308.165963 322.538678 9.0
604 311.628705 173.538222 9.0
605 311.965986 591.654762 9.0
606 316.619968 399.328502 9.0
607 321.930368 637.094778 9.0
608 329.679577 75.926056 9.0
609 336.223762 558.419802 9.0
610 343.482549 506.983308 9.0
611 358.255682 433.789773 9.0
612 361.546603 217.857820 9.0
613 360.375706 303.473164 9.0
614 365.469630 637.958519 9.0
615 372.372781 138.245562 9.0
616 378.394495 368.862385 9.0
617 377.549098 575.364729 9.0
618 392.244821 507.035782 9.0
619 397.688285 456.376569 9.0
620 401.739011 294.925824 9.0

621 rows × 3 columns

Not too bad. The x, y and time columns of our arrays are now integrated into a dataframe.

We'd now like to change the headers of our dataframe. In the help we saw that there is an optional argument called columns. We can give the appropriate names there:

In [19]:
coords_dataframe = pd.DataFrame(centroids_time2, columns=('x','y','frame'))
coords_dataframe
Out[19]:
x y frame
0 44.609917 617.968595 0.0
1 66.875839 525.505034 0.0
2 69.837719 214.864035 0.0
3 84.217116 344.353407 0.0
4 87.518409 610.238586 0.0
5 92.680292 443.620438 0.0
6 102.700752 536.621053 0.0
7 111.597923 308.824926 0.0
8 110.965699 656.401055 0.0
9 111.904153 96.333866 0.0
10 124.475000 385.454167 0.0
11 126.619847 177.270229 0.0
12 125.789174 243.280627 0.0
13 133.640000 499.158182 0.0
14 135.221003 587.832288 0.0
15 140.683748 445.540264 0.0
16 155.810651 652.556213 0.0
17 163.572843 113.851485 0.0
18 161.836915 332.108723 0.0
19 162.773829 552.245557 0.0
20 166.139059 20.282209 0.0
21 177.107994 404.063114 0.0
22 189.304945 463.741758 0.0
23 189.364353 511.083596 0.0
24 193.846939 272.607143 0.0
25 192.450355 627.601064 0.0
26 203.456770 201.928222 0.0
27 210.922010 555.934142 0.0
28 215.804094 59.897661 0.0
29 218.667190 328.299843 0.0
... ... ... ...
591 261.488584 18.415525 9.0
592 256.252083 521.881250 9.0
593 277.380328 38.986885 9.0
594 264.311734 404.861646 9.0
595 269.465693 116.259854 9.0
596 268.803468 351.578035 9.0
597 270.057569 651.000000 9.0
598 268.963362 607.433190 9.0
599 283.126338 549.415418 9.0
600 289.171131 451.400298 9.0
601 304.688552 489.774411 9.0
602 308.226389 247.875000 9.0
603 308.165963 322.538678 9.0
604 311.628705 173.538222 9.0
605 311.965986 591.654762 9.0
606 316.619968 399.328502 9.0
607 321.930368 637.094778 9.0
608 329.679577 75.926056 9.0
609 336.223762 558.419802 9.0
610 343.482549 506.983308 9.0
611 358.255682 433.789773 9.0
612 361.546603 217.857820 9.0
613 360.375706 303.473164 9.0
614 365.469630 637.958519 9.0
615 372.372781 138.245562 9.0
616 378.394495 368.862385 9.0
617 377.549098 575.364729 9.0
618 392.244821 507.035782 9.0
619 397.688285 456.376569 9.0
620 401.739011 294.925824 9.0

621 rows × 3 columns

That's it! We now have an appropriately formatted dataframe to pass to our linking function, which requires x, y and frame columns. Information can be retrieved from dataframes in similar ways as from Numpy arrays or Python dictionaries. For example, one can select a column (the head() method limits the output):

In [20]:
coords_dataframe['x'].head()
Out[20]:
0    44.609917
1    66.875839
2    69.837719
3    84.217116
4    87.518409
Name: x, dtype: float64

One can access a specific row using its index:

In [21]:
coords_dataframe.loc[0]
Out[21]:
x         44.609917
y        617.968595
frame      0.000000
Name: 0, dtype: float64

And one can use logical indexing. For example, one can find all the rows corresponding to a given time frame and extract them:

In [22]:
coords_dataframe[coords_dataframe['frame']==0].head()
Out[22]:
x y frame
0 44.609917 617.968595 0.0
1 66.875839 525.505034 0.0
2 69.837719 214.864035 0.0
3 84.217116 344.353407 0.0
4 87.518409 610.238586 0.0

A dataframe and its contents also have a series of methods attached to them. For example, we can get the maximum value of a given column like this:

In [23]:
coords_dataframe['x'].max()
Out[23]:
409.8050595238095

Pandas and Numpy are very close, so of course we could also have used the Numpy function:

In [24]:
np.max(coords_dataframe['x'])
Out[24]:
409.8050595238095

Using the Pandas package would be a course in itself, as it is a very powerful tool to handle tabular data. We just showed some very basic features here so that what follows makes sense. Note that this situation occurs often: you only need a few features of a package within a larger project and have to figure out its basics. However, if you work with large tabular data, learning Pandas is highly recommended.
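
Just to hint at what else Pandas offers: aggregations like counting or averaging per group take a single line. A small sketch using the dataframe from above (not needed for the rest of the notebook):

#number of detected nuclei in each frame
coords_dataframe.groupby('frame').size()

#average x position per frame
coords_dataframe.groupby('frame')['x'].mean()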

11.3.2 Tracking

There are multiple options in the tracking function, e.g. for how many frames a signal is allowed to disappear, how distances between objects are calculated, etc. We are only going to set the field search_range, which specifies in what neighborhood the tracking looks for a given object in the next frame.
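
For example, the optional memory argument of trackpy.link_df sets for how many frames a particle may disappear and still be linked to the same track. A sketch of such a call (not used below):

#allow a particle to vanish for up to 2 frames and still be linked
tracks_with_memory = trackpy.link_df(coords_dataframe, search_range=20, memory=2)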

In [25]:
tracks = trackpy.link_df(coords_dataframe, search_range=20)
Frame 9: 63 trajectories present.

The output is a new dataframe. It contains the position (x, y, frame) of each particle and the track (particle) it belongs to:

In [26]:
tracks.head()
Out[26]:
x y frame particle
0 44.609917 617.968595 0 0
33 248.584356 137.056748 0 1
34 255.506154 227.063077 0 2
35 260.481848 524.721122 0 3
36 268.189189 384.758347 0 4

We have seen before that we can use indexing. So let's do that to recover all the points forming, for example, the trajectory of particle 10:

In [27]:
tracks[tracks['particle']==10]
Out[27]:
x y frame particle
42 292.320814 437.802817 0 10
103 290.185759 437.803406 1 10
163 288.868012 438.596273 2 10
225 288.651537 439.784773 3 10
288 288.668721 439.288136 4 10
350 289.728213 440.728213 5 10
413 288.701534 443.525802 6 10
476 288.875000 445.761765 7 10
538 289.774924 448.592145 8 10
600 289.171131 451.400298 9 10

We see that in this particular case we have one point per frame and the successive points are close together, so the tracking seems to have worked properly. We can recover all such trajectories and plot them in a single xy plot:

In [28]:
plt.figure(figsize=(10,10))
for particle_id in range(tracks['particle'].max()+1): #+1 so that the last particle id is included
    plt.plot(tracks[tracks.particle==particle_id].y,tracks[tracks.particle==particle_id].x,'o-')
plt.show()

11.4 Analysing the data

Now that we have those tracks, we can finally quantify the process. For example, we can measure how far each nucleus traveled, either via the mean squared displacement computed by trackpy, or simply as the distance between the first and last point of each track.

In [29]:
#mean squared displacement per particle (pixel size mpp and frame rate fps both set to 1)
msd = trackpy.imsd(tracks, mpp=1, fps=1)
In [30]:
msd.loc[9].hist()
Out[30]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f011e0bf7f0>
In [31]:
distances = []
for particle_id in range(tracks['particle'].max()+1):
    #recover current track
    current_track = tracks[tracks.particle==particle_id]
    
    #find beginning and end of track
    min_time = np.min(current_track['frame'])
    max_time = np.max(current_track['frame'])
    
    #get positions at begin and end and measure distance
    x1 = current_track[current_track['frame']==min_time].iloc[0].x
    y1 = current_track[current_track['frame']==min_time].iloc[0].y   
    x2 = current_track[current_track['frame']==max_time].iloc[0].x
    y2 = current_track[current_track['frame']==max_time].iloc[0].y

    distances.append(np.sqrt((x2-x1)**2+(y2-y1)**2))
    
In [32]:
plt.hist(distances)
plt.show()

As we could have guessed from looking at the displacement plot, we have two categories of nuclei: those on the left of the image, which move, and those on the right, which don't.
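
We can check this with a quick, hypothetical sanity test: split the tracks at an assumed midline (y = 330, roughly the middle of the field of view) and compare the mean displacement on each side. This sketch re-uses the tracks and distances variables from above:

#mean y position of each track
mean_y = [tracks[tracks.particle == p].y.mean() for p in range(tracks['particle'].max()+1)]

#compare displacements left and right of the assumed y = 330 midline
left = [d for d, y in zip(distances, mean_y) if y < 330]
right = [d for d, y in zip(distances, mean_y) if y >= 330]
print(np.mean(left), np.mean(right))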

13-Pixel_classification

13. Pixel classification

We have for the moment mostly seen methods that rely on pixel intensity and object shape to segment features. When dealing with natural images (typical RGB images), one can however also exploit the fact that the channels taken together give information on the image structure. To illustrate this we are going to use a classical clustering method (Kmeans) found in the package scikit-learn. That package is the reference for anyone who wants to apply machine learning methods to their data. It is a nice companion to scikit-image, as it also has a simple syntax, good documentation and many examples.

In [1]:
import numpy as np
import matplotlib.pyplot as plt
plt.gray() #set the default colormap to gray
import sklearn.cluster
import skimage.io

We are going to work again with a geographic satellite image, which can be loaded like this:

In [2]:
image = skimage.io.imread('Data/geography/naip/m_3910505_nw_13_1_20150919/crop/m_3910505_nw_13_1_20150919_crop.tif')
/usr/local/lib/python3.5/dist-packages/skimage/external/tifffile/tifffile.py:2617: RuntimeWarning: py_decodelzw encountered unexpected end of stream
  strip = decompress(strip)
/usr/local/lib/python3.5/dist-packages/skimage/external/tifffile/tifffile.py:2552: UserWarning: unpack: buffer size must be a multiple of element size
  warnings.warn("unpack: %s" % e)

Let's just keep the first three (RGB) channels; in this type of aerial imagery the fourth band is typically near-infrared, which we don't need here:

In [3]:
image = image[:,:,0:3]
In [4]:
plt.figure(figsize=(20,10))
plt.imshow(image);

The image is quite large, so let's focus on a smaller region first, to reduce computational time:

In [5]:
subim = image[0:1000,0:1000,:]
In [6]:
plt.figure(figsize=(20,10))
plt.imshow(subim)
plt.show()

If we want to use a clustering approach, i.e. group pixels with similar features, we have to reshape our image into an actual dataset where each pixel is a datapoint with three "features", in this case its R, G and B values.

In [7]:
X = np.reshape(subim,(subim.shape[0]*subim.shape[1],3))

Let's have a look at what this dataset looks like by plotting the first against the second "feature". We reduce the number of data points and make them transparent so that we don't saturate the plot:

In [8]:
plt.plot(X[::100,0],X[::100,1],'o',alpha = 0.01)
plt.show()

We see by eye that we have at least two categories, with two levels of red/green. Let's do some clustering on just these two components to better understand what happens in the image.

We are going to feed the Kmeans algorithm the dataset containing the red and green features and ask for two categories. The algorithm iteratively assigns each pixel to one of the categories and is guaranteed to converge. Of course there are other clustering methods that you can use in sklearn.
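
As a hypothetical alternative, sklearn's MiniBatchKMeans implements the same idea but fits the data in small random batches, which is much faster on large pixel sets (a sketch, not used below):

#same two-cluster task as below, fitted batch-wise for speed
mbk = sklearn.cluster.MiniBatchKMeans(n_clusters=2, random_state=0).fit(X[:,0:2])
mbk.labels_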

In [9]:
kmeans = sklearn.cluster.KMeans(n_clusters=2, random_state=0).fit(X[:,0:2])

The labels of the elements are stored here:

In [10]:
kmeans.labels_
Out[10]:
array([0, 0, 0, ..., 1, 1, 1], dtype=int32)

Let's plot them by selecting them by label:

In [11]:
plt.plot(X[kmeans.labels_ == 0,0],X[kmeans.labels_ == 0,1],'ro',alpha = 0.01)
plt.plot(X[kmeans.labels_ == 1,0],X[kmeans.labels_ == 1,1],'bo',alpha = 0.01)
plt.show()

We see that the algorithm split the sample more or less at the expected position. Let's now use all three components and classify our pixels:

In [12]:
kmeans = sklearn.cluster.KMeans(n_clusters=2, random_state=0).fit(X)
In [13]:
labels_im = np.reshape(kmeans.labels_,(1000,1000))
In [14]:
fig,ax = plt.subplots(1,2, figsize = (20,10))
ax[0].imshow(subim)
ax[1].imshow(labels_im,cmap = 'gray');

We see that we managed to split the data really well into forest and other types (roads, earth). Of course we could use more categories. Maybe with four categories we can separate roads, light forest, dark forest and earth. Let's do that and superimpose each category on the original image.

In [15]:
kmeans = sklearn.cluster.KMeans(n_clusters=4, random_state=0).fit(X)
In [16]:
labels_im = np.reshape(kmeans.labels_,(1000,1000))
In [17]:
fig,ax = plt.subplots(1,4, figsize = (20,10))
for i in range(4):
    ax[i].imshow(subim)
    ax[i].imshow(labels_im==i,cmap = 'Reds', alpha = 0.4);

Of course this is a very crude approach, but we still managed to nicely recover different features of that image in only a few lines. The dataset for the entire image is huge, and Kmeans clustering on it would be very time consuming. However, we can simply re-use the model trained on the smaller image to classify all the pixels of the full image:

In [18]:
X_large = np.reshape(image,(image.shape[0]*image.shape[1],3))
In [19]:
labels_large = kmeans.predict(X_large)
In [20]:
labels_im = np.reshape(labels_large,(image.shape[0],image.shape[1]))
In [21]:
plt.figure(figsize=(20,10))
plt.imshow(image)
plt.imshow(labels_im==3,cmap = 'Reds', alpha = 0.4);

15-DeepLearning

15. Deep learning

Deep learning methods are used more and more frequently for complex segmentation tasks. The basic idea of the approach is to let a system learn by itself, from training examples, what the important features of the objects to segment are.

Of course you will not learn all the details about deep learning in this single notebook. The goal here is simply to give a very brief overview of the steps involved, and in particular to show that if you are provided with a trained network, e.g. by a collaborator, using it to segment your data is very straightforward.

The example here uses Tensorflow and Keras. Tensorflow is Google's widely used deep learning library. Keras is a layer that sits on top of tools like Tensorflow and simplifies the prototyping of a deep learning pipeline. It can also transparently be used with other "backends" such as Theano or CNTK.

In [2]:
import numpy as np
import matplotlib.pyplot as plt
from skimage.external.tifffile import TiffFile
from skimage.measure import label, regionprops
from skimage.segmentation import watershed

#import your function
import sys, os
from course_functions import detect_nuclei

if not os.path.isdir('MyData/DL'):
    os.makedirs('MyData/DL')

15.1 Creating the training set

As a simple example, we are going to use the zebrafish embryo nuclei that we have tried to segment before. Usually one would create a training set by manually segmenting data, or at least manually correcting an automatic segmentation. Here we cheat and use our previous segmentation pipeline to create the learning dataset.

First we have to decide how large our training images are going to be. This is constrained by the computing resources used, in particular the available memory.

In [3]:
imsize = 64
image_rows = 64
image_cols = 64
channels = 1
In [4]:
#load the image to process
data = TiffFile('Data/30567/30567.tif')
image = data.pages[0].asarray()
per_image = np.floor(np.array(image.shape)/imsize)

To create our training set, we are going to segment 5 images using our previous pipeline. Then we cut each original image and its mask into 64x64 pieces. We exclude pieces that contain no nuclei, as they carry no interesting information.

In [5]:
all_images = []
all_masks = []
for t in (3,13,23,33,43):
    image = data.pages[t].asarray()
    im_float = image.astype(np.float32)
    #create your mask
    nuclei = detect_nuclei(image)
    nuclei = nuclei.astype(np.uint8)
    
    for i in range(int(per_image[0])):
        for j in range(int(per_image[1])):
            if np.sum(nuclei[i*imsize:(i+1)*imsize,j*imsize:(j+1)*imsize])>1:
                all_images.append(im_float[i*imsize:(i+1)*imsize,j*imsize:(j+1)*imsize])
                all_masks.append(nuclei[i*imsize:(i+1)*imsize,j*imsize:(j+1)*imsize])

plt.imshow(nuclei, cmap = 'gray')
/usr/local/lib/python3.5/dist-packages/skimage/filters/rank/generic.py:102: UserWarning: Bitdepth of 14 may result in bad rank filter performance due to large number of bins.
  "performance due to large number of bins." % bitdepth)
Out[5]:
<matplotlib.image.AxesImage at 0x7fbdbc1573c8>

Here we split our dataset into a training and a testing set. Since we have enough other data to test on later, we keep almost all examples (99%) for training.

In [6]:
num_images = 5
total = len(all_masks)

num_train = int(0.99*total)
num_test = total-num_train
print(total)
print(num_train)
print(num_test)
283
280
3

Now we create empty arrays that are going to contain all our data. Note that this only works if the data are not too large or if your computer has a lot of RAM. The alternative is a slightly more complex approach using Python generators, which serve the images batch by batch.
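
A minimal sketch of what such a generator could look like (hypothetical, not used in this course): it yields the data batch by batch, so only one batch needs to be in memory at a time, and Keras can consume it through fit_generator. In a true out-of-core setting, each batch would be loaded from disk inside the loop:

def batch_generator(images, masks, batch_size=10):
    #loop forever, as Keras' fit_generator expects
    while True:
        for i in range(0, len(images), batch_size):
            #in an out-of-core setting, load the batch from disk here instead
            batch_x = np.asarray(images[i:i + batch_size], dtype=np.float32)[..., np.newaxis]
            batch_y = np.asarray(masks[i:i + batch_size], dtype=np.uint8)
            yield batch_x, batch_y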

In [7]:
imgs = np.ndarray((num_train, image_rows, image_cols,channels), dtype=np.float64)
imgs_mask = np.ndarray((num_train, image_rows, image_cols), dtype=np.uint8)
imgs_test = np.ndarray((num_test, image_rows, image_cols,channels), dtype=np.float64)
imgs_id = np.ndarray((num_test, ), dtype=np.int32)
imgs_weight = np.ndarray((num_train, image_rows, image_cols), dtype=np.uint8)
imgs_weight[:]=1

Now we fill up our containers. Note that they have to be in a specific shape to be fed correctly to the network. Also, in addition to our images and masks, we have so-called weights: an image that assigns more importance to certain regions. This matters when one category of pixels is much rarer than the other, as with nuclei vs. background in our case.

Note also that we normalize all images to avoid extreme values.

In [8]:
for counter in range(total):
    if counter<num_train:
        imgs[counter] = all_images[counter][..., np.newaxis]
        imgs_mask[counter] = all_masks[counter]
        imgs_weight[counter] = 10*all_masks[counter]+1
    else:
        imgs_test[counter-num_train] = all_images[counter][..., np.newaxis]
        imgs_id[counter-num_train] = counter-num_train

mean_val = np.mean(imgs)
imgs = imgs - mean_val
std_val = np.std(imgs)
imgs = imgs/std_val

np.save('MyData/DL/'+'imgs_train.npy', imgs)
np.save('MyData/DL/'+'imgs_mask_train.npy', imgs_mask.reshape((num_train,image_rows*image_cols)))
np.save('MyData/DL/'+'imgs_test.npy', imgs_test)
np.save('MyData/DL/'+'imgs_id_test.npy', imgs_id)
np.save('MyData/DL/'+'imgs_weight_train.npy', imgs_weight.reshape((num_train,image_rows*image_cols)))

15.2 Training the network

Now we can import our small deep learning module.

In [9]:
import deeplearning
Using TensorFlow backend.

And we can run the training of our network.

In [ ]:
image_rows = 64
image_cols = 64

deeplearning.nuclei_train('MyData/DL/', image_rows,image_cols, dims=1, batch_size = 10, epochs = 100, weights = None)
WARNING: Logging before flag parsing goes to stderr.
W0123 11:17:27.983967 140454236452608 deprecation_wrapper.py:119] From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

W0123 11:17:29.209715 140454236452608 deprecation.py:323] From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0123 11:17:33.899000 140454236452608 deprecation_wrapper.py:119] From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Train on 224 samples, validate on 56 samples
Epoch 1/100
224/224 [==============================] - 37s 164ms/step - loss: 0.8377 - dice_coef: 0.2907 - val_loss: 0.4347 - val_dice_coef: 0.4183
Epoch 2/100
224/224 [==============================] - 27s 118ms/step - loss: 0.2871 - dice_coef: 0.6174 - val_loss: 0.1904 - val_dice_coef: 0.7066
Epoch 3/100
224/224 [==============================] - 30s 134ms/step - loss: 0.1857 - dice_coef: 0.7411 - val_loss: 0.1603 - val_dice_coef: 0.7344
Epoch 4/100
224/224 [==============================] - 28s 127ms/step - loss: 0.1449 - dice_coef: 0.7921 - val_loss: 0.1272 - val_dice_coef: 0.8245
Epoch 5/100
 40/224 [====>.........................] - ETA: 27s - loss: 0.1383 - dice_coef: 0.8084

15.3 Using the trained network

Let's load an image that we did not use for training and select a 512x512 region.

In [ ]:
image = data.pages[143].asarray()[0:512,0:512]
im_float = image.astype(float)

Now we load the network again, specifying what the input size will be. Then, most importantly, we load the weights that we just trained.

In [ ]:
model = deeplearning.get_unet(1,512,512)
model.load_weights('MyData/DL/weights.h5')

We now correct this single picture with the same factors used for the training set, so that it is in the same state.

In [ ]:
imgs_test = im_float.astype('float32')
imgs_test = imgs_test - mean_val
imgs_test = imgs_test/std_val
plt.imshow(imgs_test)
plt.show()

Finally we reshape it to match the network's expected input and use the predict() function to generate, for each pixel, a prediction of whether it is foreground or background.

In [ ]:
imgs_test = imgs_test[np.newaxis,...,np.newaxis]
imgs_mask_test = model.predict(imgs_test, verbose=1)
imgs_mask_test = np.reshape(imgs_mask_test,imgs_test.shape)

We can then plot the resulting image, which has values between 0 and 1.

In [ ]:
plt.imshow(imgs_mask_test[0,:,:,0], vmin = 0, vmax = 1, cmap= 'gray')
plt.show()

We can now set a threshold on what should be considered foreground to generate a mask, and compare it to the previous segmentation.

In [ ]:
nuclei = detect_nuclei(image)

plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(imgs_mask_test[0,:,:,0]>0.9, cmap = 'gray')
plt.subplot(1,2,2)
plt.imshow(nuclei[0:512,0:512], cmap = 'gray')
plt.show()

16-Image_classification

16. Image classification using deep learning

In the previous notebooks we have mostly focused on the segmentation task, i.e. isolating structures in images. Another major image processing task is to classify entire images. For example, when screening for skin cancer, one is not necessarily interested in segmenting a tumor, but rather in saying whether a tumor is absent or present in an image.

Deep learning methods have been shown in the past years to be very efficient at this task, and many different networks have been designed. A lot of models can be found online, for example on Github. In addition, Keras, a very popular high-level package for machine learning, offers ready-to-use implementations of many popular networks. These networks have already been trained on specific datasets, but of course one can re-train them to solve other classification tasks. Here we are going to see how to use these Keras implementations.
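
To give an idea of what such re-training (transfer learning) typically looks like with Keras, here is a hedged sketch (not executed in this notebook; the layer sizes and the number of categories, 5, are arbitrary assumptions): one loads the convolutional part without the classification head, freezes it, and adds a new head for one's own categories.

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

#load the convolutional base only and keep its ImageNet weights fixed
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

#add a new classification head for, e.g., 5 custom categories (arbitrary choice)
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')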

16.1 Importing the model

It is straightforward to import the needed model (documentation can be found here). Here we are using the VGG16 model, which has been trained on the ImageNet dataset and classifies objects into 1000 categories.

In [1]:
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.applications.vgg16 import decode_predictions

#from keras.applications.xception import Xception
#from keras.applications.xception import preprocess_input
#from keras.applications.xception import decode_predictions

import numpy as np
import skimage
import skimage.io
import skimage.transform
import matplotlib.pyplot as plt
Using TensorFlow backend.

Now we load the model, specifying the weights to be used. Those weights define all the filters used in the convolution steps, as well as the actual weights that combine the outputs of the different filters.

In [2]:
model = VGG16(weights='imagenet', include_top=True)
#model = Xception(weights='imagenet', include_top=True)
WARNING: Logging before flag parsing goes to stderr.
W0123 11:15:52.456051 140581987952384 deprecation_wrapper.py:119] From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5
553467904/553467096 [==============================] - 91s 0us/step

We can have a look at the structure of the network:

In [3]:
model.summary()
Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________

16.2 Choosing and adjusting an image

Let's test the network on a simple image of an elephant:

In [4]:
image = skimage.io.imread('https://upload.wikimedia.org/wikipedia/commons/1/19/Afrikanische_Elefant%2C_Miami2.jpg')
In [5]:
plt.imshow(image)
plt.show()

Models always expect images of a certain size and with intensities around given values. This is taken care of here:

In [6]:
#adjust image size and dimensions
image_resize = skimage.transform.resize(image,(224,224),preserve_range=True)
x = np.expand_dims(image_resize, axis=0)

#adjust image intensities
x = preprocess_input(x)
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "

16.3 Prediction

Finally, we can pass that modified image to the network to give a prediction:

In [7]:
features = model.predict(x)
W0123 11:17:29.866466 140581987952384 deprecation_wrapper.py:119] From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

When we look at the dimensions of the output, we see that we have a vector of 1000 dimensions. Each dimension corresponds to a category, and its value represents the probability that the image contains that category. If we plot the vector, we see that the image clearly belongs to one category:

In [8]:
features.shape
Out[8]:
(1, 1000)
In [9]:
plt.plot(features.T)
plt.show()
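
Programmatically, the index of that category is simply the position of the maximum probability (a one-liner sketch):

best = np.argmax(features[0])
print(best, features[0, best])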

We can use the decode_predictions function to find out what this category index corresponds to:

In [10]:
decode_predictions(features, top=1000)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json
40960/35363 [==================================] - 0s 1us/step
Out[10]:
[[('n02504458', 'African_elephant', 0.97247916),
  ('n01871265', 'tusker', 0.02319269),
  ('n02504013', 'Indian_elephant', 0.004200729),
  ('n02437312', 'Arabian_camel', 9.9511075e-05),
  ('n02100583', 'vizsla', 6.808777e-06),
  ('n02099849', 'Chesapeake_Bay_retriever', 2.5692e-06),
  ('n03124170', 'cowboy_hat', 1.1540138e-06),
  ('n01704323', 'triceratops', 1.0709498e-06),
  ('n02389026', 'sorrel', 1.0571564e-06),
  ('n02422106', 'hartebeest', 8.660311e-07),
  ...
  ('n02704792', 'amphibian', 3.3069564e-10),
  ('n04310018', 'steam_locomotive', 3.3040498e-10),
  ('n02105855', 'Shetland_sheepdog', 3.3018824e-10),
  ('n01983481', 'American_lobster', 3.26808e-10),
  ('n02098413', 'Lhasa', 3.2562641e-10),
  ('n03347037', 'fire_screen', 3.2195985e-10),
  ('n03773504', 'missile', 3.2151493e-10),
  ('n04286575', 'spotlight', 3.1893804e-10),
  ('n02229544', 'cricket', 3.1520309e-10),
  ('n07768694', 'pomegranate', 3.1334124e-10),
  ('n04346328', 'stupa', 3.1329045e-10),
  ('n02971356', 'carton', 3.1298705e-10),
  ('n01728920', 'ringneck_snake', 3.0962397e-10),
  ('n02165456', 'ladybug', 3.052362e-10),
  ('n02133161', 'American_black_bear', 3.0335284e-10),
  ('n07714571', 'head_cabbage', 3.0175779e-10),
  ('n02172182', 'dung_beetle', 3.0001085e-10),
  ('n04275548', 'spider_web', 2.9836678e-10),
  ('n03721384', 'marimba', 2.96024e-10),
  ('n01833805', 'hummingbird', 2.9384808e-10),
  ('n02948072', 'candle', 2.907439e-10),
  ('n04141327', 'scabbard', 2.903543e-10),
  ('n02105056', 'groenendael', 2.8723443e-10),
  ('n02325366', 'wood_rabbit', 2.8659908e-10),
  ('n12620546', 'hip', 2.8282335e-10),
  ('n01728572', 'thunder_snake', 2.8160144e-10),
  ('n04120489', 'running_shoe', 2.815832e-10),
  ('n03016953', 'chiffonier', 2.7901967e-10),
  ('n04553703', 'washbasin', 2.7703637e-10),
  ('n02808440', 'bathtub', 2.758016e-10),
  ('n01537544', 'indigo_bunting', 2.749633e-10),
  ('n04557648', 'water_bottle', 2.745189e-10),
  ('n04376876', 'syringe', 2.7446237e-10),
  ('n04443257', 'tobacco_shop', 2.7226052e-10),
  ('n07753275', 'pineapple', 2.721027e-10),
  ('n01872401', 'echidna', 2.6951905e-10),
  ('n03692522', 'loupe', 2.6833877e-10),
  ('n01797886', 'ruffed_grouse', 2.6689118e-10),
  ('n04330267', 'stove', 2.6407326e-10),
  ('n04429376', 'throne', 2.635791e-10),
  ('n02085620', 'Chihuahua', 2.612881e-10),
  ('n12057211', "yellow_lady's_slipper", 2.5918404e-10),
  ('n03290653', 'entertainment_center', 2.5837654e-10),
  ('n01734418', 'king_snake', 2.5834745e-10),
  ('n04162706', 'seat_belt', 2.5826866e-10),
  ('n01914609', 'sea_anemone', 2.5653116e-10),
  ('n04344873', 'studio_couch', 2.5651112e-10),
  ('n02093256', 'Staffordshire_bullterrier', 2.5140487e-10),
  ('n02669723', 'academic_gown', 2.507005e-10),
  ('n01806567', 'quail', 2.5017172e-10),
  ('n04008634', 'projectile', 2.482348e-10),
  ('n03775546', 'mixing_bowl', 2.444287e-10),
  ('n02777292', 'balance_beam', 2.4431684e-10),
  ('n04209239', 'shower_curtain', 2.4416963e-10),
  ('n04040759', 'radiator', 2.440453e-10),
  ('n09288635', 'geyser', 2.4352684e-10),
  ('n04263257', 'soup_bowl', 2.43362e-10),
  ('n02692877', 'airship', 2.4266722e-10),
  ('n03787032', 'mortarboard', 2.4092575e-10),
  ('n07831146', 'carbonara', 2.391804e-10),
  ('n01560419', 'bulbul', 2.3743998e-10),
  ('n02892767', 'brassiere', 2.3692198e-10),
  ('n02457408', 'three-toed_sloth', 2.365197e-10),
  ('n02492660', 'howler_monkey', 2.3259444e-10),
  ('n03794056', 'mousetrap', 2.3167969e-10),
  ('n03832673', 'notebook', 2.3112445e-10),
  ('n04317175', 'stethoscope', 2.3000218e-10),
  ('n02974003', 'car_wheel', 2.2875485e-10),
  ('n01944390', 'snail', 2.2802693e-10),
  ('n02086646', 'Blenheim_spaniel', 2.2687209e-10),
  ('n04525305', 'vending_machine', 2.2487526e-10),
  ('n03871628', 'packet', 2.239571e-10),
  ('n02950826', 'cannon', 2.2058719e-10),
  ('n02442845', 'mink', 2.1870405e-10),
  ('n02509815', 'lesser_panda', 2.175093e-10),
  ('n04336792', 'stretcher', 2.1647337e-10),
  ('n01820546', 'lorikeet', 2.1626041e-10),
  ('n01667114', 'mud_turtle', 2.1559692e-10),
  ('n02098286', 'West_Highland_white_terrier', 2.1558007e-10),
  ('n07711569', 'mashed_potato', 2.1355957e-10),
  ('n02951585', 'can_opener', 2.1290436e-10),
  ('n01968897', 'chambered_nautilus', 2.0997251e-10),
  ('n03314780', 'face_powder', 2.0963117e-10),
  ('n03187595', 'dial_telephone', 2.0510499e-10),
  ('n01692333', 'Gila_monster', 2.0336025e-10),
  ('n01729977', 'green_snake', 2.0249787e-10),
  ('n02666196', 'abacus', 2.015073e-10),
  ('n03662601', 'lifeboat', 1.9968495e-10),
  ('n02112706', 'Brabancon_griffon', 1.9842182e-10),
  ('n04131690', 'saltshaker', 1.970629e-10),
  ('n04423845', 'thimble', 1.9537463e-10),
  ('n03733805', 'measuring_cup', 1.9517947e-10),
  ('n03814906', 'necklace', 1.8858755e-10),
  ('n02281787', 'lycaenid', 1.877667e-10),
  ('n03995372', 'power_drill', 1.864829e-10),
  ('n07718747', 'artichoke', 1.8622343e-10),
  ('n02328150', 'Angora', 1.8275116e-10),
  ('n03782006', 'monitor', 1.7878012e-10),
  ('n03838899', 'oboe', 1.75675e-10),
  ('n02096437', 'Dandie_Dinmont', 1.7494439e-10),
  ('n04069434', 'reflex_camera', 1.7321321e-10),
  ('n03874293', 'paddlewheel', 1.7056519e-10),
  ('n04149813', 'scoreboard', 1.6995856e-10),
  ('n02443114', 'polecat', 1.697713e-10),
  ('n04238763', 'slide_rule', 1.6710737e-10),
  ('n02790996', 'barbell', 1.6674026e-10),
  ('n01580077', 'jay', 1.6395994e-10),
  ('n04154565', 'screwdriver', 1.6329144e-10),
  ('n03179701', 'desk', 1.6230588e-10),
  ('n02113186', 'Cardigan', 1.5688616e-10),
  ('n02978881', 'cassette', 1.5678504e-10),
  ('n04552348', 'warplane', 1.5496021e-10),
  ('n03788195', 'mosque', 1.5373465e-10),
  ('n07730033', 'cardoon', 1.5235103e-10),
  ('n07892512', 'red_wine', 1.5179355e-10),
  ('n01491361', 'tiger_shark', 1.5123264e-10),
  ('n03791053', 'motor_scooter', 1.5023414e-10),
  ('n01978455', 'rock_crab', 1.5016939e-10),
  ('n04532106', 'vestment', 1.4944876e-10),
  ('n03272562', 'electric_locomotive', 1.4630791e-10),
  ('n02112018', 'Pomeranian', 1.4607314e-10),
  ('n01582220', 'magpie', 1.4575807e-10),
  ('n02871525', 'bookshop', 1.4437077e-10),
  ('n07697537', 'hotdog', 1.4405775e-10),
  ('n02966687', "carpenter's_kit", 1.4290288e-10),
  ('n02804414', 'bassinet', 1.4140702e-10),
  ('n02259212', 'leafhopper', 1.4044393e-10),
  ('n09193705', 'alp', 1.400061e-10),
  ('n02066245', 'grey_whale', 1.3865398e-10),
  ('n02174001', 'rhinoceros_beetle', 1.3602482e-10),
  ('n02233338', 'cockroach', 1.3585577e-10),
  ('n02097298', 'Scotch_terrier', 1.3385262e-10),
  ('n01530575', 'brambling', 1.3286569e-10),
  ('n03345487', 'fire_engine', 1.3187046e-10),
  ('n03207941', 'dishwasher', 1.3174174e-10),
  ('n07871810', 'meat_loaf', 1.3152808e-10),
  ('n02490219', 'marmoset', 1.3034655e-10),
  ('n01592084', 'chickadee', 1.298151e-10),
  ('n04152593', 'screen', 1.2976906e-10),
  ('n03796401', 'moving_van', 1.2974531e-10),
  ('n04548280', 'wall_clock', 1.2922517e-10),
  ('n03291819', 'envelope', 1.2896957e-10),
  ('n01773549', 'barn_spider', 1.2877195e-10),
  ('n02165105', 'tiger_beetle', 1.2784444e-10),
  ('n02017213', 'European_gallinule', 1.2774792e-10),
  ('n03770679', 'minivan', 1.2649921e-10),
  ('n01601694', 'water_ouzel', 1.257254e-10),
  ('n02127052', 'lynx', 1.2316508e-10),
  ('n04086273', 'revolver', 1.2301624e-10),
  ('n03085013', 'computer_keyboard', 1.1942465e-10),
  ('n04041544', 'radio', 1.15881776e-10),
  ('n03992509', "potter's_wheel", 1.1562324e-10),
  ('n01981276', 'king_crab', 1.15550715e-10),
  ('n03459775', 'grille', 1.13997756e-10),
  ('n03942813', 'ping-pong_ball', 1.136116e-10),
  ('n02500267', 'indri', 1.13219614e-10),
  ('n03544143', 'hourglass', 1.1212597e-10),
  ('n04398044', 'teapot', 1.11991104e-10),
  ('n07932039', 'eggnog', 1.1144752e-10),
  ('n02097047', 'miniature_schnauzer', 1.1136592e-10),
  ('n03788365', 'mosquito_net', 1.0974794e-10),
  ('n02110185', 'Siberian_husky', 1.0715371e-10),
  ('n02124075', 'Egyptian_cat', 1.05006635e-10),
  ('n02268853', 'damselfly', 1.04507715e-10),
  ('n04179913', 'sewing_machine', 1.03029654e-10),
  ('n01843383', 'toucan', 1.02989964e-10),
  ('n02027492', 'red-backed_sandpiper', 1.02042215e-10),
  ('n03400231', 'frying_pan', 1.0067638e-10),
  ('n02123045', 'tabby', 1.0006415e-10),
  ('n02794156', 'barometer', 9.9345115e-11),
  ('n03443371', 'goblet', 9.8317014e-11),
  ('n02110341', 'dalmatian', 9.7829723e-11),
  ('n02111889', 'Samoyed', 9.762934e-11),
  ('n02280649', 'cabbage_butterfly', 9.753516e-11),
  ('n03388549', 'four-poster', 9.7380506e-11),
  ('n02219486', 'ant', 9.726356e-11),
  ('n01855032', 'red-breasted_merganser', 9.7178446e-11),
  ('n02701002', 'ambulance', 9.652432e-11),
  ('n03976467', 'Polaroid_camera', 9.556879e-11),
  ('n03476991', 'hair_spray', 9.510038e-11),
  ('n03109150', 'corkscrew', 9.4093594e-11),
  ('n02102040', 'English_springer', 9.398059e-11),
  ('n02097658', 'silky_terrier', 9.27041e-11),
  ('n02091467', 'Norwegian_elkhound', 9.2478566e-11),
  ('n04254120', 'soap_dispenser', 9.130846e-11),
  ('n02488702', 'colobus', 8.9681186e-11),
  ('n03658185', 'letter_opener', 8.965861e-11),
  ('n03095699', 'container_ship', 8.934127e-11),
  ('n02096177', 'cairn', 8.911152e-11),
  ('n01644900', 'tailed_frog', 8.903914e-11),
  ('n02497673', 'Madagascar_cat', 8.858583e-11),
  ('n04483307', 'trimaran', 8.8285754e-11),
  ('n03218198', 'dogsled', 8.799844e-11),
  ('n02177972', 'weevil', 8.788053e-11),
  ('n07697313', 'cheeseburger', 8.7678614e-11),
  ('n01873310', 'platypus', 8.705356e-11),
  ('n04335435', 'streetcar', 8.506633e-11),
  ('n01980166', 'fiddler_crab', 8.1508446e-11),
  ('n02493509', 'titi', 8.118665e-11),
  ('n01534433', 'junco', 8.0875376e-11),
  ('n02128925', 'jaguar', 8.08581e-11),
  ('n02484975', 'guenon', 7.9276474e-11),
  ('n04392985', 'tape_player', 7.874585e-11),
  ('n02123394', 'Persian_cat', 7.8639435e-11),
  ('n01773797', 'garden_spider', 7.848165e-11),
  ('n03843555', 'oil_filter', 7.761586e-11),
  ('n02783161', 'ballpoint', 7.7613485e-11),
  ('n02823750', 'beer_glass', 7.6652074e-11),
  ('n02815834', 'beaker', 7.593598e-11),
  ('n02444819', 'otter', 7.589109e-11),
  ('n02791270', 'barbershop', 7.527351e-11),
  ('n03026506', 'Christmas_stocking', 7.4976754e-11),
  ('n04591713', 'wine_bottle', 7.481091e-11),
  ('n04273569', 'speedboat', 7.380956e-11),
  ('n01669191', 'box_turtle', 7.359083e-11),
  ('n04592741', 'wing', 7.2345935e-11),
  ('n02443484', 'black-footed_ferret', 7.1546734e-11),
  ('n02981792', 'catamaran', 7.128656e-11),
  ('n03920288', 'Petri_dish', 6.983659e-11),
  ('n02814533', 'beach_wagon', 6.8749804e-11),
  ('n01847000', 'drake', 6.8425765e-11),
  ('n02037110', 'oystercatcher', 6.833629e-11),
  ('n01770081', 'harvestman', 6.689221e-11),
  ('n01770393', 'scorpion', 6.643941e-11),
  ('n07565083', 'menu', 6.624441e-11),
  ('n02979186', 'cassette_player', 6.5997735e-11),
  ('n03197337', 'digital_watch', 6.590892e-11),
  ('n07920052', 'espresso', 6.565485e-11),
  ('n02966193', 'carousel', 6.481276e-11),
  ('n03447447', 'gondola', 6.4762834e-11),
  ('n03478589', 'half_track', 6.469271e-11),
  ('n02110063', 'malamute', 6.4345924e-11),
  ('n02364673', 'guinea_pig', 6.395987e-11),
  ('n03444034', 'go-kart', 6.339753e-11),
  ('n02116738', 'African_hunting_dog', 6.3055644e-11),
  ('n07745940', 'strawberry', 6.254621e-11),
  ('n04252077', 'snowmobile', 6.236895e-11),
  ('n07753113', 'fig', 6.203024e-11),
  ('n01774750', 'tarantula', 6.202989e-11),
  ('n03133878', 'Crock_Pot', 6.067068e-11),
  ('n07716358', 'zucchini', 6.0629377e-11),
  ('n02071294', 'killer_whale', 5.9111036e-11),
  ('n04037443', 'racer', 5.880841e-11),
  ('n03983396', 'pop_bottle', 5.821325e-11),
  ('n02110958', 'pug', 5.7583466e-11),
  ('n02086079', 'Pekinese', 5.6792123e-11),
  ('n02690373', 'airliner', 5.655787e-11),
  ('n04461696', 'tow_truck', 5.6397473e-11),
  ('n02086240', 'Shih-Tzu', 5.6362737e-11),
  ('n03344393', 'fireboat', 5.569834e-11),
  ('n03657121', 'lens_cap', 5.510532e-11),
  ('n02510455', 'giant_panda', 5.472281e-11),
  ('n04019541', 'puck', 5.4291568e-11),
  ('n07584110', 'consomme', 5.4228746e-11),
  ('n04200800', 'shoe_shop', 5.4213547e-11),
  ('n03417042', 'garbage_truck', 5.418222e-11),
  ('n03180011', 'desktop_computer', 5.3984064e-11),
  ('n02096294', 'Australian_terrier', 5.394536e-11),
  ('n02085936', 'Maltese_dog', 5.38478e-11),
  ('n03014705', 'chest', 5.297153e-11),
  ('n02109961', 'Eskimo_dog', 5.194112e-11),
  ('n04372370', 'switch', 5.1888147e-11),
  ('n04328186', 'stopwatch', 5.1460294e-11),
  ('n01775062', 'wolf_spider', 5.0395518e-11),
  ('n02276258', 'admiral', 5.0093256e-11),
  ('n01484850', 'great_white_shark', 5.0052285e-11),
  ('n02169497', 'leaf_beetle', 4.9670355e-11),
  ('n02321529', 'sea_cucumber', 4.9445954e-11),
  ('n03998194', 'prayer_rug', 4.8860572e-11),
  ('n03908618', 'pencil_box', 4.858503e-11),
  ('n01629819', 'European_fire_salamander', 4.7499407e-11),
  ('n03673027', 'liner', 4.6042656e-11),
  ('n03018349', 'china_cabinet', 4.5900523e-11),
  ('n04265275', 'space_heater', 4.449128e-11),
  ('n04009552', 'projector', 4.4461164e-11),
  ('n02840245', 'binder', 4.298947e-11),
  ('n02018207', 'American_coot', 4.266778e-11),
  ('n01950731', 'sea_slug', 4.234551e-11),
  ('n07590611', 'hot_pot', 4.2181144e-11),
  ('n03961711', 'plate_rack', 4.215396e-11),
  ('n04612504', 'yawl', 4.1374362e-11),
  ('n02168699', 'long-horned_beetle', 4.1119306e-11),
  ('n01641577', 'bullfrog', 4.0968295e-11),
  ('n03297495', 'espresso_maker', 4.075848e-11),
  ('n04579145', 'whiskey_jug', 3.991407e-11),
  ('n03717622', 'manhole_cover', 3.9603605e-11),
  ('n02917067', 'bullet_train', 3.9217685e-11),
  ('n07614500', 'ice_cream', 3.847808e-11),
  ('n01632458', 'spotted_salamander', 3.7284984e-11),
  ('n04505470', 'typewriter_keyboard', 3.7087527e-11),
  ('n04125021', 'safe', 3.6562645e-11),
  ('n02087046', 'toy_terrier', 3.6455543e-11),
  ('n01632777', 'axolotl', 3.631383e-11),
  ('n03706229', 'magnetic_compass', 3.5095347e-11),
  ('n02104365', 'schipperke', 3.4450498e-11),
  ('n02094433', 'Yorkshire_terrier', 3.376677e-11),
  ('n03690938', 'lotion', 3.3092636e-11),
  ('n03666591', 'lighter', 3.307964e-11),
  ('n02110627', 'affenpinscher', 3.229037e-11),
  ('n02791124', 'barber_chair', 3.209866e-11),
  ('n01531178', 'goldfinch', 3.1669802e-11),
  ('n03670208', 'limousine', 3.13913e-11),
  ('n07720875', 'bell_pepper', 3.106217e-11),
  ('n02206856', 'bee', 3.1015874e-11),
  ('n03825788', 'nipple', 3.0897136e-11),
  ('n01924916', 'flatworm', 3.0341053e-11),
  ('n03937543', 'pill_bottle', 3.0088747e-11),
  ('n04243546', 'slot', 2.9773167e-11),
  ('n03075370', 'combination_lock', 2.8551264e-11),
  ('n03742115', 'medicine_chest', 2.8415714e-11),
  ('n04285008', 'sports_car', 2.7910839e-11),
  ('n03854065', 'organ', 2.78077e-11),
  ('n03954731', 'plane', 2.7697598e-11),
  ('n02108915', 'French_bulldog', 2.7634908e-11),
  ('n07873807', 'pizza', 2.7514938e-11),
  ('n02256656', 'cicada', 2.7293737e-11),
  ('n02120079', 'Arctic_fox', 2.7246767e-11),
  ('n03100240', 'convertible', 2.7216692e-11),
  ('n02930766', 'cab', 2.6895609e-11),
  ('n03841143', 'odometer', 2.6122618e-11),
  ('n02860847', 'bobsled', 2.6030901e-11),
  ('n02096585', 'Boston_bull', 2.5804348e-11),
  ('n03982430', 'pool_table', 2.5729694e-11),
  ('n04487081', 'trolleybus', 2.490903e-11),
  ('n01796340', 'ptarmigan', 2.4398514e-11),
  ('n01735189', 'garter_snake', 2.4079197e-11),
  ('n03388183', 'fountain_pen', 2.2041254e-11),
  ('n03642806', 'laptop', 2.1664096e-11),
  ('n02123597', 'Siamese_cat', 2.1392148e-11),
  ('n03208938', 'disk_brake', 2.1373143e-11),
  ('n02086910', 'papillon', 2.1370981e-11),
  ('n02167151', 'ground_beetle', 2.051299e-11),
  ('n04264628', 'space_bar', 2.0249274e-11),
  ('n04542943', 'waffle_iron', 1.9986053e-11),
  ('n03916031', 'perfume', 1.9739798e-11),
  ('n01774384', 'black_widow', 1.884954e-11),
  ('n07836838', 'chocolate_sauce', 1.7904279e-11),
  ('n03793489', 'mouse', 1.7676695e-11),
  ('n03062245', 'cocktail_shaker', 1.7644929e-11),
  ('n02823428', 'beer_bottle', 1.750805e-11),
  ('n03485407', 'hand-held_computer', 1.7413999e-11),
  ('n03584829', 'iron', 1.6825801e-11),
  ('n03777754', 'modem', 1.6526188e-11),
  ('n02342885', 'hamster', 1.5631468e-11),
  ('n03977966', 'police_van', 1.473041e-11),
  ('n02447366', 'badger', 1.4215548e-11),
  ('n02112350', 'keeshond', 1.3434671e-11),
  ('n01644373', 'tree_frog', 1.2981826e-11),
  ('n03602883', 'joystick', 1.2753725e-11),
  ('n07717410', 'acorn_squash', 1.1627049e-11),
  ('n01773157', 'black_and_gold_garden_spider', 1.0486868e-11),
  ('n04074963', 'remote_control', 9.9966615e-12),
  ('n03676483', 'lipstick', 9.848382e-12),
  ('n04252225', 'snowplow', 9.555944e-12),
  ('n02445715', 'skunk', 8.092101e-12),
  ('n04004767', 'printer', 7.635087e-12),
  ('n02687172', 'aircraft_carrier', 7.623983e-12),
  ('n02085782', 'Japanese_spaniel', 6.395545e-12),
  ('n02988304', 'CD_player', 5.974052e-12),
  ('n03492542', 'hard_disc', 5.6362276e-12),
  ('n03857828', 'oscilloscope', 5.5326663e-12),
  ('n02128757', 'snow_leopard', 4.8289922e-12),
  ('n07613480', 'trifle', 4.0962025e-12),
  ('n02025239', 'ruddy_turnstone', 2.030949e-12),
  ('n02877765', 'bottlecap', 1.8086249e-12)]]

The three best matches are all different elephant categories, and the top one is indeed the African elephant.
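
Rather than scanning the full 1000-entry list, you can ask `decode_predictions` for only the top few matches. A minimal sketch, assuming `features` still holds the output of `model.predict` from the cell above:

for wordnet_id, label, prob in decode_predictions(features, top=3)[0]:
    # decode_predictions returns one list of (id, label, probability) tuples per image
    print('{:25s} {:.4f}'.format(label, prob))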

16.4 Images with multiple objects

What happens if an image contains multiple objects, like a dog and a cat, or, as here, a banana and strawberries?

In [11]:
# alternative test image with a dog and a cat:
#image = skimage.io.imread('https://upload.wikimedia.org/wikipedia/commons/0/07/Chien-lit_%26_Chat-en-lit.jpg')
image = skimage.io.imread('https://live.staticflickr.com/3652/3295428010_9284075e7b_b.jpg')
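
Note that `skimage.io.imread` can read directly from a URL and returns a plain NumPy array, so you can quickly check what was downloaded:

# expected: (height, width, 3) and dtype uint8 for a standard RGB photograph
print(image.shape, image.dtype)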
In [12]:
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.show()

As before, we resize the image to the 224x224 input size expected by VGG16, preprocess it, and run the prediction:

In [13]:
model = VGG16(weights='imagenet', include_top=True)
#model = Xception(weights='imagenet', include_top=True)

# resize to the 224x224 input size of VGG16; preserve_range keeps the original
# 0-255 intensity scale instead of rescaling to [0, 1]
image_resize = skimage.transform.resize(image,(224,224),preserve_range=True)
x = np.expand_dims(image_resize, axis=0)  # add the batch dimension: (1, 224, 224, 3)
x = preprocess_input(x)                   # network-specific normalization (see below)

features = model.predict(x)
In [14]:
decode_predictions(features, top=1000)
Out[14]:
[[('n07753592', 'banana', 0.5761249),
  ('n07745940', 'strawberry', 0.19425765),
  ('n07753275', 'pineapple', 0.05217132),
  ('n07614500', 'ice_cream', 0.030278875),
  ('n07749582', 'lemon', 0.015831826),
  ('n07760859', 'custard_apple', 0.012895143),
  ('n07753113', 'fig', 0.011580526),
  ('n07747607', 'orange', 0.009989414),
  ('n04476259', 'tray', 0.009962709),
  ('n07579787', 'plate', 0.0068863747),
  ... (the remaining 990 categories, all with negligible probabilities, are omitted) ...
  ('n02002724', 'black_stork', 3.5519483e-07),
  ('n04458633', 'totem_pole', 3.5485527e-07),
  ('n04606251', 'wreck', 3.5295508e-07),
  ('n01930112', 'nematode', 3.5103457e-07),
  ('n02138441', 'meerkat', 3.492058e-07),
  ('n01537544', 'indigo_bunting', 3.490563e-07),
  ('n04562935', 'water_tower', 3.4862816e-07),
  ('n03980874', 'poncho', 3.4444167e-07),
  ('n04335435', 'streetcar', 3.4165825e-07),
  ('n02119789', 'kit_fox', 3.3942806e-07),
  ('n03126707', 'crane', 3.3680973e-07),
  ('n02111889', 'Samoyed', 3.347274e-07),
  ('n02423022', 'gazelle', 3.334097e-07),
  ('n03776460', 'mobile_home', 3.3180436e-07),
  ('n02895154', 'breastplate', 3.2841035e-07),
  ('n02108551', 'Tibetan_mastiff', 3.2366867e-07),
  ('n02981792', 'catamaran', 3.207232e-07),
  ('n04252225', 'snowplow', 3.1913635e-07),
  ('n01795545', 'black_grouse', 3.1679085e-07),
  ('n02088094', 'Afghan_hound', 3.1401186e-07),
  ('n01824575', 'coucal', 3.0895288e-07),
  ('n02018207', 'American_coot', 3.0811518e-07),
  ('n02328150', 'Angora', 3.0591715e-07),
  ('n02814860', 'beacon', 3.0567688e-07),
  ('n01873310', 'platypus', 3.0511418e-07),
  ('n02493793', 'spider_monkey', 3.043217e-07),
  ('n02437616', 'llama', 3.011784e-07),
  ('n02100877', 'Irish_setter', 2.958955e-07),
  ('n03781244', 'monastery', 2.9141253e-07),
  ('n03673027', 'liner', 2.8903585e-07),
  ('n02794156', 'barometer', 2.8863946e-07),
  ('n01592084', 'chickadee', 2.8575457e-07),
  ('n03888257', 'parachute', 2.8293954e-07),
  ('n03393912', 'freight_car', 2.824831e-07),
  ('n09399592', 'promontory', 2.8110392e-07),
  ('n02483708', 'siamang', 2.7946936e-07),
  ('n02917067', 'bullet_train', 2.7769332e-07),
  ('n01494475', 'hammerhead', 2.7539255e-07),
  ('n02489166', 'proboscis_monkey', 2.7483475e-07),
  ('n02749479', 'assault_rifle', 2.720292e-07),
  ('n09468604', 'valley', 2.6796533e-07),
  ('n04005630', 'prison', 2.6744937e-07),
  ('n03594734', 'jean', 2.5947259e-07),
  ('n01871265', 'tusker', 2.4810953e-07),
  ('n02090721', 'Irish_wolfhound', 2.4775676e-07),
  ('n02112350', 'keeshond', 2.4367267e-07),
  ('n01484850', 'great_white_shark', 2.4118341e-07),
  ('n02412080', 'ram', 2.367995e-07),
  ('n02111129', 'Leonberg', 2.3133344e-07),
  ('n02492660', 'howler_monkey', 2.309109e-07),
  ('n02033041', 'dowitcher', 2.2919744e-07),
  ('n02690373', 'airliner', 2.2846983e-07),
  ('n02669723', 'academic_gown', 2.2366271e-07),
  ('n02120505', 'grey_fox', 2.2299396e-07),
  ('n02356798', 'fox_squirrel', 2.2188152e-07),
  ('n01770081', 'harvestman', 2.2009186e-07),
  ('n02125311', 'cougar', 2.170097e-07),
  ('n01828970', 'bee_eater', 2.1663668e-07),
  ('n03662601', 'lifeboat', 2.1337588e-07),
  ('n02793495', 'barn', 2.1235518e-07),
  ('n03743016', 'megalith', 2.1187817e-07),
  ('n04613696', 'yurt', 2.1019164e-07),
  ('n02095889', 'Sealyham_terrier', 2.0787701e-07),
  ('n03218198', 'dogsled', 2.0605856e-07),
  ('n04612504', 'yawl', 2.0055212e-07),
  ('n03888605', 'parallel_bars', 2.001862e-07),
  ('n02114367', 'timber_wolf', 1.9979748e-07),
  ('n02091635', 'otterhound', 1.9902006e-07),
  ('n02640242', 'sturgeon', 1.9773454e-07),
  ('n01534433', 'junco', 1.9044201e-07),
  ('n02009229', 'little_blue_heron', 1.8505617e-07),
  ('n02437312', 'Arabian_camel', 1.8130214e-07),
  ('n04592741', 'wing', 1.7944384e-07),
  ('n02114712', 'red_wolf', 1.7553512e-07),
  ('n02486410', 'baboon', 1.740439e-07),
  ('n02500267', 'indri', 1.7338704e-07),
  ('n02486261', 'patas', 1.6444879e-07),
  ('n03838899', 'oboe', 1.6423267e-07),
  ('n01601694', 'water_ouzel', 1.5955592e-07),
  ('n02859443', 'boathouse', 1.5935186e-07),
  ('n02422699', 'impala', 1.5628878e-07),
  ('n02090622', 'borzoi', 1.5393738e-07),
  ('n04487081', 'trolleybus', 1.4914886e-07),
  ('n02105505', 'komondor', 1.4366643e-07),
  ('n02488291', 'langur', 1.4224963e-07),
  ('n02408429', 'water_buffalo', 1.4205239e-07),
  ('n02504458', 'African_elephant', 1.4153893e-07),
  ('n02114855', 'coyote', 1.3954431e-07),
  ('n02002556', 'white_stork', 1.3758712e-07),
  ('n02133161', 'American_black_bear', 1.3329375e-07),
  ('n04540053', 'volleyball', 1.3171935e-07),
  ('n03042490', 'cliff_dwelling', 1.3026153e-07),
  ('n02481823', 'chimpanzee', 1.2420873e-07),
  ('n01910747', 'jellyfish', 1.2354691e-07),
  ('n02132136', 'brown_bear', 1.2257927e-07),
  ('n04552348', 'warplane', 1.2160598e-07),
  ('n04252077', 'snowmobile', 1.2154626e-07),
  ('n02415577', 'bighorn', 1.2015119e-07),
  ('n02363005', 'beaver', 1.1937075e-07),
  ('n09246464', 'cliff', 1.1439359e-07),
  ('n01608432', 'kite', 1.14079725e-07),
  ('n04266014', 'space_shuttle', 1.1255663e-07),
  ('n01622779', 'great_grey_owl', 1.11373325e-07),
  ('n01614925', 'bald_eagle', 1.1111192e-07),
  ('n02009912', 'American_egret', 1.0589369e-07),
  ('n02484975', 'guenon', 1.05126944e-07),
  ('n03146219', 'cuirass', 1.0502113e-07),
  ('n02129165', 'lion', 1.0445438e-07),
  ('n02687172', 'aircraft_carrier', 1.0314829e-07),
  ('n02119022', 'red_fox', 1.0100304e-07),
  ('n02389026', 'sorrel', 1.0047971e-07),
  ('n02115913', 'dhole', 9.666792e-08),
  ('n02692877', 'airship', 9.317205e-08),
  ('n02361337', 'marmot', 9.2581615e-08),
  ('n04044716', 'radio_telescope', 9.149724e-08),
  ('n02916936', 'bulletproof_vest', 9.0021956e-08),
  ('n04258138', 'solar_dish', 8.630675e-08),
  ('n02112137', 'chow', 8.574706e-08),
  ('n02410509', 'bison', 8.536952e-08),
  ('n02396427', 'wild_boar', 8.316313e-08),
  ('n02483362', 'gibbon', 8.11186e-08),
  ('n02480855', 'gorilla', 8.1035324e-08),
  ('n02114548', 'white_wolf', 7.675322e-08),
  ('n02134084', 'ice_bear', 7.589809e-08),
  ('n02488702', 'colobus', 7.512281e-08),
  ('n02018795', 'bustard', 7.43634e-08),
  ('n04347754', 'submarine', 7.342488e-08),
  ('n04417672', 'thatch', 7.2710634e-08),
  ('n02058221', 'albatross', 7.1849755e-08),
  ('n09472597', 'volcano', 6.475818e-08),
  ('n03160309', 'dam', 5.7200587e-08),
  ('n02074367', 'dugong', 4.8488506e-08),
  ('n03240683', 'drilling_platform', 4.3388532e-08),
  ('n02480495', 'orangutan', 4.3289997e-08),
  ('n02417914', 'ibex', 4.3238586e-08),
  ('n02120079', 'Arctic_fox', 4.2070614e-08),
  ('n03272562', 'electric_locomotive', 4.1462076e-08),
  ('n02422106', 'hartebeest', 4.0936378e-08),
  ('n02397096', 'warthog', 3.2859568e-08),
  ('n09288635', 'geyser', 2.4705871e-08),
  ('n02134418', 'sloth_bear', 2.0422515e-08),
  ('n04310018', 'steam_locomotive', 1.21683055e-08)]]

We end up with the probabilities split among multiple categories of a similar "style", like the many dog breeds. One way to try to improve on this is to use the classifier to do an approximate segmentation by splitting the image into subregions.
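
Before doing so, we can check that the "style" is indeed recognized by summing the probabilities of all related classes. A minimal sketch, assuming the full-image prediction from the cell above is stored in a variable such as `features` and that `decode_predictions` is imported as before:

# Sketch: aggregate the spread-out probabilities by keyword.
# `features` is assumed to hold the full-image prediction computed above.
preds = decode_predictions(features, top=1000)[0]

def total_probability(preds, keyword):
    # each prediction is a (wordnet_id, class_name, probability) tuple
    return sum(p for _, name, p in preds if keyword in name)

# total probability mass landing on, e.g., terrier breeds
print(total_probability(preds, 'terrier'))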

We create overlapping patches and do the prediction on those:

In [15]:
patch = 400   # patch size in pixels
step = 100    # stride between patches
all_features = []
for i in np.arange(0, image.shape[0] - patch - 1, step):
    print(i)
    for j in np.arange(0, image.shape[1] - patch - 1, step):
        # extract the patch and resize it to the network's input size
        subimage = image[i:i+patch, j:j+patch, :]
        image_resize = skimage.transform.resize(subimage, (224, 224), preserve_range=True)
        x = np.expand_dims(image_resize, axis=0)
        x = preprocess_input(x)

        # classify the patch
        features = model.predict(x)
        all_features.append(features)
0
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
/usr/local/lib/python3.5/dist-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
100
200
In [16]:
top_preds = [decode_predictions(x, top=1000)[0][0] for x in all_features]
[p for p in top_preds if p[2] > 0.3]
Out[16]:
[('n07753592', 'banana', 0.78769964),
 ('n07753592', 'banana', 0.47260636),
 ('n07753592', 'banana', 0.5114516),
 ('n07745940', 'strawberry', 0.68358153),
 ('n07745940', 'strawberry', 0.81029093),
 ('n07745940', 'strawberry', 0.92421764),
 ('n07753592', 'banana', 0.8903582),
 ('n07753592', 'banana', 0.7702213),
 ('n07753592', 'banana', 0.8909377),
 ('n07745940', 'strawberry', 0.9785351),
 ('n07745940', 'strawberry', 0.9900946),
 ('n07745940', 'strawberry', 0.98152024),
 ('n07745940', 'strawberry', 0.7820029),
 ('n07753592', 'banana', 0.98062783),
 ('n07753592', 'banana', 0.80547625),
 ('n07745940', 'strawberry', 0.69026893),
 ('n07745940', 'strawberry', 0.99909794),
 ('n07745940', 'strawberry', 0.99729496),
 ('n07745940', 'strawberry', 0.9918213),
 ('n07745940', 'strawberry', 0.4741534)]

We can now superimpose the patch classifications on the original image:

In [17]:
import matplotlib.colors

# random colormap so that neighbouring class indices get distinct colors
cmap = matplotlib.colors.ListedColormap(np.random.rand(256, 3))

# most probable class index of each patch, arranged on the patch grid
reshaped = np.reshape([np.argmax(x) for x in all_features],
                      (len(np.arange(0, image.shape[0] - patch - 1, step)),
                       len(np.arange(0, image.shape[1] - patch - 1, step))))

plt.figure(figsize=(10, 10))
plt.imshow(image)
# order=0 (nearest neighbour) keeps the class indices intact when upscaling
plt.imshow(skimage.transform.resize(reshaped, (image.shape[0], image.shape[1]),
                                    order=0, preserve_range=True),
           cmap=cmap, alpha=0.8)
plt.show()

Let's create an array with the class names and plot them on top of the image:

In [18]:
names = np.reshape([decode_predictions(x, top=1000)[0][0][1] for x in all_features],
                   reshaped.shape)
In [19]:
plt.figure(figsize=(10, 10))
plt.imshow(image)
plt.imshow(skimage.transform.resize(reshaped, (image.shape[0], image.shape[1]),
                                    order=0, preserve_range=True),
           cmap=cmap, alpha=0.8)
# write each patch's predicted class name at its position in the image
for x in range(names.shape[0]):
    for y in range(names.shape[1]):
        plt.text(x=y * image.shape[1] / reshaped.shape[1],
                 y=(x + 0.5) * image.shape[0] / reshaped.shape[0],
                 s=names[x, y])
plt.show()

17. Semantic segmentation: GitHub resources

Whenever you want to try out an advanced technique that is not yet available as a nicely packaged tool like scikit-image, the best solution is to first search for open-source code that approximates what you want to do. One of the main repositories of such code is GitHub. As an example, we will here do semantic segmentation, i.e. assigning each pixel of an image to an object class.

In [1]:
import sys
import numpy as np
import skimage
import skimage.io
import skimage.transform
from matplotlib import pyplot as plt

17.1 Finding and exploring a repository

Let's have a look at the keras-deeplab-v3-plus repository (https://github.com/bonlime/keras-deeplab-v3-plus), a Keras implementation of the DeepLab v3+ segmentation model.

17.2 Installing

We follow the instructions as given. We first check what version of tensorflow we have:

In [2]:
import tensorflow
In [3]:
tensorflow.__version__
Out[3]:
'1.14.0'

So we have to follow the second set of instructions. These are Unix-type commands that we would normally type in a terminal. As Jupyter supports bash commands, we can also run them right here:

In [4]:
%%bash
git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20
fatal: destination path 'keras-deeplab-v3-plus' already exists and is not an empty directory.
HEAD is now at 714a6b7... Merge branch 'master' of https://github.com/bonlime/keras-deeplab-v3-plus

17.3 Making the package accessible

Since we only want to try out the package, we will simply add its path to our current Python path. If we try out multiple packages, this avoids over-crowding the conda environment with unused code. If we want to use the package "in production" we can always install it properly later.

In [5]:
sys.path.append('keras-deeplab-v3-plus')

Now we can finally import the package:

In [6]:
from model import Deeplabv3
Using TensorFlow backend.

17.4 Using the network

We simply follow the instructions given in the repository to run the code. We only modify the image loading step, as we use a different package (skimage). As always, there are some parameters set for pre-processing:

In [7]:
trained_image_width = 512
mean_subtraction_value = 127.5

Then we can pick the image of our choice:

In [8]:
image = skimage.io.imread('https://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/Cow_female_black_white.jpg/1920px-Cow_female_black_white.jpg')
#image = skimage.io.imread('https://upload.wikimedia.org/wikipedia/commons/3/33/Chat-affut.JPG')
#image = skimage.io.imread('https://upload.wikimedia.org/wikipedia/commons/1/18/TrailKitty.jpg')
image = image.astype('float')

And run the rest of the proposed code:

In [9]:
# resize to max dimension of images from training dataset
w, h, _ = image.shape
ratio = float(trained_image_width) / np.max([w, h])
resized_image = skimage.transform.resize(image,(int(ratio * w),int(ratio * h)))
#resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image,0))
labels = np.argmax(res.squeeze(), -1)
WARNING: Logging before flag parsing goes to stderr.
W0123 11:16:21.968451 139872335447808 deprecation_wrapper.py:119] From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:4074: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-a13c11e8142d> in <module>()
     14 
     15 # make prediction
---> 16 deeplab_model = Deeplabv3()
     17 res = deeplab_model.predict(np.expand_dims(resized_image,0))
     18 labels = np.argmax(res.squeeze(), -1)

~/Documents/CAS_data_science/CAS_21.01.2020_Python_Image_Processing/PyImageCourse-master/keras-deeplab-v3-plus/model.py in Deeplabv3(weights, input_tensor, input_shape, classes, backbone, OS, alpha)
    441     b4 = BatchNormalization(name='image_pooling_BN', epsilon=1e-5)(b4)
    442     b4 = Activation('relu')(b4)
--> 443     b4 = BilinearUpsampling((int(np.ceil(input_shape[0] / OS)), int(np.ceil(input_shape[1] / OS))))(b4)
    444 
    445     # simple 1x1

/usr/local/lib/python3.5/dist-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
    487             # Actually call the layer,
    488             # collecting output(s), mask(s), and shape(s).
--> 489             output = self.call(inputs, **kwargs)
    490             output_mask = self.compute_mask(inputs, previous_mask)
    491 

~/Documents/CAS_data_science/CAS_21.01.2020_Python_Image_Processing/PyImageCourse-master/keras-deeplab-v3-plus/model.py in call(self, inputs)
     91     def call(self, inputs):
     92         if self.upsampling:
---> 93             return K.tf.image.resize_bilinear(inputs, (inputs.shape[1] * self.upsampling[0],
     94                                                        inputs.shape[2] * self.upsampling[1]),
     95                                               align_corners=True)

AttributeError: module 'keras.backend' has no attribute 'tf'
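
The traceback comes from a version mismatch: the pinned commit accesses TensorFlow through keras.backend.tf, an attribute that more recent Keras releases no longer expose. A possible workaround is sketched below; it assumes that your TensorFlow version still provides tf.image.resize_bilinear (as 1.14 does) and simply re-attaches the missing attribute, after which the cell above can be re-executed:

import tensorflow as tf
import keras.backend

# The repository's model.py calls K.tf.image.resize_bilinear, but newer
# Keras versions removed the `tf` attribute from keras.backend.
# Re-attaching it lets the unmodified repository code run again.
if not hasattr(keras.backend, 'tf'):
    keras.backend.tf = tf

Alternatively, one could edit the BilinearUpsampling layer in keras-deeplab-v3-plus/model.py to import and call TensorFlow directly.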

Since we padded and resized the image in the pre-processing step, we now have to correct the size of the output labels:

In [ ]:
if pad_x > 0:
    labels = labels[:-pad_x,:]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = skimage.transform.resize(labels,(w, h),preserve_range=True, order=0)

17.5 Checking the output

In [ ]:
plt.imshow(labels)
plt.show()
plt.imshow(image[:,:,0])
plt.show()
In [ ]:
class_names = np.array(['background','aeroplane', 'bicycle', 'bird', 'boat',
                      'bottle', 'bus', 'car', 'cat', 'chair',
                      'cow', 'diningtable', 'dog', 'horse',
                      'motorbike', 'person', 'pottedplant',
                      'sheep', 'sofa', 'train', 'tvmonitor'])
In [ ]:
class_names[np.unique(labels).astype(int)]
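
To isolate the mask of a single class, e.g. the cow, a minimal sketch (assuming the prediction above ran through so that labels is defined):

# index of the 'cow' class in the PASCAL VOC class list above
cow_index = np.where(class_names == 'cow')[0][0]
plt.imshow(labels == cow_index)
plt.show()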

18. Application: DICOM

DICOM (Digital Imaging and Communications in Medicine) is the international standard to transmit, store, retrieve, print, process, and display medical imaging information. It is in particular widely used to store volumetric data from methods such as CT, MR, Ultrasound, etc.

Such a specific image format is typically not supported by general packages like scikit-image. However, in most cases a dedicated package exists. A simple Google search leads us to the pydicom package.

In [1]:
import os
import matplotlib.pyplot as plt
plt.gray()
import pydicom
import numpy as np
import skimage
import ipyvolume as ipv

We will use an MRI dataset of a head available on the data-sharing platform Zenodo. In this course, most data have been made directly available; to show the full procedure, we here include the download step as well.

Install the package in case it is missing:

In [2]:
!pip install --user pydicom
Requirement already satisfied: pydicom in /usr/local/lib/python3.5/dist-packages (1.4.1)
In [3]:
import pydicom

18.1. Download

The download address on Zenodo is:

In [4]:
data_address= 'https://zenodo.org/record/16956/files/DICOM.zip?download=1'

Create a folder in which to put the data:

In [5]:
# exist_ok=True avoids an error if the folder already exists
os.makedirs('MyData', exist_ok=True)

We can use the native urllib package to do the download, which provides us with a zip file:

In [6]:
import urllib

urllib.request.urlretrieve(data_address, 'MyData/mri.zip')
Out[6]:
('MyData/mri.zip', <http.client.HTTPMessage at 0x7fb257782400>)

To automate the whole process, we also unzip the file programmatically using the zipfile module:

In [7]:
import zipfile
In [8]:
with zipfile.ZipFile('MyData/mri.zip', 'r') as zip_ref:
    zip_ref.extractall('MyData/mri/')

18.2. Importing one slice

We define the general path to the folder containing slices:

In [9]:
path = 'MyData/mri/DICOM/ST000000/SE000002/'

Now we use the pydicom package to import a single slice using the dcmread() function:

In [10]:
single_slice = pydicom.dcmread(path+'MR000000')

A DICOM file does not just contain image data but also a very extensive set of metadata. You can see these metadata by simply displaying the variable (the output, suppressed here with a semicolon, is very long):

In [11]:
single_slice;

All that information is also available as attributes of the variable. For example, you can get the patient's name:

In [12]:
single_slice.PatientName
Out[12]:
'LIONHEART^WILLIAM'

But also numerical values such as the pixel spacing or the position of the slice in the stack:

In [13]:
single_slice.PixelSpacing
Out[13]:
[0.8984375, 0.8984375]
In [14]:
single_slice.SliceLocation
Out[14]:
"0.0"

18.3. Loading the complete stack

As we have done previously, we first have to parse the folder content to gather the files belonging to the stack. Here we simply list the folder content:

In [15]:
file_list = os.listdir(path)
In [16]:
#file_list

We can now load each slice using a list comprehension. From the file ordering, we already see that we'll have to reorder the slices later.

In [17]:
slices = [pydicom.dcmread(path + x) for x in file_list]

In principle we could reorder the files by name, but that depends on the file name formatting. A more general solution is to reorder based on the location of each slice in the stack. Let's recover that position:

In [18]:
positions = [int(x.SliceLocation) for x in slices]
In [19]:
#positions

We then use the np.argsort() function to get the indices that sort the list:

In [20]:
index_ordered = np.argsort(positions)
In [21]:
index_ordered
Out[21]:
array([21,  2,  1, 20,  3, 11, 13,  9, 29, 28, 22, 26, 18,  5, 23, 16, 31,
       15, 12, 10,  0, 19,  6,  4, 24, 14, 17,  8, 30,  7, 27, 25])

And finally use that ordered list to reorder the slices themselves:

In [22]:
slices_ordered = [slices[x] for x in index_ordered]
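
Equivalently, we could skip the intermediate index array and sort the slices directly:

# same result in one step: sort the slices by their SliceLocation attribute
slices_ordered = sorted(slices, key=lambda s: float(s.SliceLocation))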

18.4. Visualization

Finally we can visualize our volume. First let's create an actual volume by stacking the planes:

In [23]:
volume = np.stack([x.pixel_array for x in slices_ordered])
In [24]:
volume.shape
Out[24]:
(32, 256, 256)

For the rendering, we'll look at two different solutions here. The first one is ipyvolume, a light-weight volume viewer purely based on browser technology. Its syntax is very similar to matplotlib's.

In [25]:
#import ipyvolume as ipv
In [26]:
ipv.figure()
ipv.volshow(volume)
ipv.show()
/usr/local/lib/python3.5/dist-packages/ipyvolume/serialize.py:81: RuntimeWarning: invalid value encountered in true_divide
  gradient = gradient / np.sqrt(gradient[0]**2 + gradient[1]**2 + gradient[2]**2)

As ipyvolume is fully browser-based, it's very easy to save a figure as a web page. For example, we can just type:

In [27]:
ipv.save('interactive_view.html')

This saves a fully interactive version of the figure above, which can be very useful for demonstration purposes, e.g. to embed an interactive figure in a web page.

Note that customizing the appearance of the view requires some work, and that this package is not as mature as others.

An alternative solution is ITK (the Insight Toolkit), a very popular image processing suite in medical imaging (an interesting but more challenging alternative to scikit-image). ITK in particular offers a volume viewer compatible with Python and Jupyter:

In [29]:
import itkwidgets as itkw
import itk

We can just call the view() function:

In [30]:
itkw.view(volume)

We see that the head looks compressed because the acquisition is anisotropic (the spacing along the depth dimension is much larger than along width/height). Above, we simply passed a Numpy array to the viewer. However, we can also create a native ITK object to adjust such parameters more easily:

In [31]:
image_from_array = itk.image_from_array(volume)

This object now has several new attributes and methods, such as:

In [32]:
image_from_array.GetSpacing()
Out[32]:
itkVectorD3 ([1, 1, 1])

We can try to guess and adjust the spacing:

In [33]:
image_from_array.SetSpacing((1,1,10))

Or we can use the itk package to read the native spacing:

In [34]:
itk_slice = itk.imread(path+'MR000001')
spacing = itk_slice.GetSpacing()
spacing
Out[34]:
itkVectorD3 ([0.898438, 0.898438, 6])
In [35]:
image_from_array.SetSpacing(spacing)
In [36]:
itkw.view(image_from_array)

18.5. Image processing

Finally, we can apply the same image processing operations as before, just in 3D. For example, a thresholding:

In [37]:
import skimage.filters
In [38]:
vol_thresh = volume > 200
In [39]:
itkw.view(vol_thresh.astype(np.uint8))
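
Instead of hand-picking the value 200, we could let skimage.filters (imported above) determine a threshold automatically, for instance with Otsu's method. A short sketch:

# compute a global threshold from the intensity histogram of the whole volume
th = skimage.filters.threshold_otsu(volume)
itkw.view((volume > th).astype(np.uint8))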